Recent Advances in Information Systems and Technologies, Volume 2
ISBN: 9783319565385, 9783319565378, 3319565370, 3319565389

This book presents a selection of papers from the 2017 World Conference on Information Systems and Technologies (WorldCIST'17).


Language: English · Pages: 1054 [1072] · Year: 2017


Advances in Intelligent Systems and Computing 570

Álvaro Rocha · Ana Maria Correia · Hojjat Adeli · Luís Paulo Reis · Sandra Costanzo, Editors

Recent Advances in Information Systems and Technologies Volume 2

Advances in Intelligent Systems and Computing Volume 570

Series editor Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland e-mail: [email protected]

About this Series The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, life science are covered. The list of topics spans all the areas of modern intelligent systems and computing. The publications within “Advances in Intelligent Systems and Computing” are primarily textbooks and proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results.

Advisory Board

Chairman
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India, e-mail: [email protected]

Members
Rafael Bello Perez, Universidad Central “Marta Abreu” de Las Villas, Santa Clara, Cuba, e-mail: [email protected]
Emilio S. Corchado, University of Salamanca, Salamanca, Spain, e-mail: [email protected]
Hani Hagras, University of Essex, Colchester, UK, e-mail: [email protected]
László T. Kóczy, Széchenyi István University, Győr, Hungary, e-mail: [email protected]
Vladik Kreinovich, University of Texas at El Paso, El Paso, USA, e-mail: [email protected]
Chin-Teng Lin, National Chiao Tung University, Hsinchu, Taiwan, e-mail: [email protected]
Jie Lu, University of Technology, Sydney, Australia, e-mail: [email protected]
Patricia Melin, Tijuana Institute of Technology, Tijuana, Mexico, e-mail: [email protected]
Nadia Nedjah, State University of Rio de Janeiro, Rio de Janeiro, Brazil, e-mail: [email protected]
Ngoc Thanh Nguyen, Wroclaw University of Technology, Wroclaw, Poland, e-mail: [email protected]
Jun Wang, The Chinese University of Hong Kong, Shatin, Hong Kong, e-mail: [email protected]

More information about this series at http://www.springer.com/series/11156

Álvaro Rocha · Ana Maria Correia · Hojjat Adeli · Luís Paulo Reis · Sandra Costanzo



Editors

Recent Advances in Information Systems and Technologies Volume 2


Editors Álvaro Rocha DEI/FCT Universidade de Coimbra Coimbra, Baixo Mondego Portugal

Luís Paulo Reis DSI/EEUM Universidade do Minho Guimarães Portugal

Ana Maria Correia Nova IMS Universidade Nova de Lisboa Lisboa Portugal

Sandra Costanzo DIMES Università della Calabria Arcavacata di Rende Italy

Hojjat Adeli College of Engineering The Ohio State University Columbus, OH USA

ISSN 2194-5357    ISSN 2194-5365 (electronic)
Advances in Intelligent Systems and Computing
ISBN 978-3-319-56537-8    ISBN 978-3-319-56538-5 (eBook)
DOI 10.1007/978-3-319-56538-5
Library of Congress Control Number: 2017935844

© Springer International Publishing AG 2017

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Printed on acid-free paper

This Springer imprint is published by Springer Nature
The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

This book contains a selection of papers accepted for presentation and discussion at the 2017 World Conference on Information Systems and Technologies (WorldCIST’17). This conference had the support of the IEEE Systems, Man, and Cybernetics Society, AISTI (Iberian Association for Information Systems and Technologies/Associação Ibérica de Sistemas e Tecnologias de Informação), ISCAP (School of Accounting and Administration of Porto/Instituto Superior de Contabilidade e Administração do Porto), and GIIM (Global Institute for IT Management). It took place at Porto Santo Island, Madeira, Portugal, during April 11–13, 2017. The World Conference on Information Systems and Technologies (WorldCIST) is a global forum for researchers and practitioners to present and discuss recent results and innovations, current trends, professional experiences, and challenges of modern information systems and technologies research, technological development, and applications. One of its main aims is to strengthen the drive towards a holistic symbiosis among academia, society, and industry. WorldCIST’17 built on the successes of WorldCIST’13, held at Olhão, Algarve, Portugal; WorldCIST’14, held at Funchal, Madeira, Portugal; WorldCIST’15, held at São Miguel, Azores, Portugal; and WorldCIST’16, which took place at Recife, Pernambuco, Brazil. The Program Committee of WorldCIST’17 comprised a multidisciplinary group of experts and others who are intimately concerned with information systems and technologies. They had the responsibility of evaluating, in a ‘blind review’ process, the papers received for each of the main themes proposed for the conference: (A) Information and Knowledge Management; (B) Organizational Models and Information Systems; (C) Software and Systems Modeling; (D) Software Systems, Architectures, Applications and Tools; (E) Multimedia Systems and Applications; (F) Computer Networks, Mobility and Pervasive Systems; (G) Intelligent and Decision Support Systems; (H) Big Data Analytics and Applications; (I) Human-Computer Interaction; (J) Ethics, Computers & Security; (K) Health Informatics; (L) Information Technologies in Education; (M) Information Technologies in Radiocommunications.


WorldCIST’17 also included workshop sessions taking place in parallel with the conference sessions. The workshops covered themes such as: (i) Managing Audiovisual Mass Media (Governance, Funding, and Innovation) and Mobile Journalism; (ii) Intelligent and Collaborative Decision Support Systems for Improving Manufacturing Processes; (iii) Educational and Serious Games; (iv) Emerging Trends and Challenges in Business Process Management; (v) Social Media World Sensors; (vi) Information Systems and Technologies Adoption; (vii) Technologies in the Workplace - Use and Impact on Workers; (viii) Healthcare Information Systems Interoperability, Security and Efficiency; (ix) New Pedagogical Approaches with Technologies; (x) ICT Solutions with Unmanned Aircraft Vehicles; (xi) Internet of Things for Health; and (xii) Pervasive Information Systems. WorldCIST’17 received about 400 contributions from 51 countries around the world. The papers accepted for presentation and discussion at the conference are published by Springer (this book) and by AISTI (one issue of the Journal of Information Systems Engineering & Management) and will be submitted for indexing by ISI, EI-Compendex, Scopus, DBLP and/or Google Scholar, among others. Extended versions of selected best papers will be published in relevant journals, mainly SCI/SSCI and Scopus indexed journals. We acknowledge all who contributed to the staging of WorldCIST’17 (authors, committees, workshop organizers, and sponsors). We deeply appreciate their involvement and support, which were crucial to the success of WorldCIST’17.

Porto Santo Island
April 2017

Álvaro Rocha
Ana Maria Correia
Hojjat Adeli
Luís Paulo Reis
Sandra Costanzo

Organization

Conference General Chair
Álvaro Rocha, University of Coimbra, Portugal

Co-chairs
Ana Maria Correia, University of Sheffield, UK
Hojjat Adeli, The Ohio State University, USA
Luis Paulo Reis, University of Minho, Portugal
Sandra Costanzo, University of Calabria, Italy

Advisory Committee
Chris Kimble, KEDGE Business School & MRM, UM2, Montpellier, France
Cihan Cobanoglu, University of South Florida Sarasota-Manatee, USA
Enes Sukic, UIKTEN, Serbia
Eugene Spafford, Purdue University, USA
Eva Onaindia, Universidad Politecnica de Valencia, Spain
Frank Schweitzer, ETH Zurich, Chair of Systems Design, Switzerland
Geoffrey Fox, Indiana University, USA
Guy Pujolle, Université Pierre et Marie Curie, France
Janusz Kacprzyk, Polish Academy of Sciences, Poland
Jean-Claude Thill, University of North Carolina at Charlotte, USA
Jeroen van den Hoven, Delft University of Technology, the Netherlands
João Tavares, University of Porto, Portugal
Jon Hall, The Open University, UK
Karl Stroetmann, Empirica Communication & Technology Research, Germany
Ladislav Hluchy, Slovak Academy of Sciences, Slovakia
Marcelo Mendonça Teixeira, Federal Rural University of Pernambuco, Brazil
Nitish Thakor, Johns Hopkins University, USA
Péter Kacsuk, University of Westminster, UK
Robert Kauffman, Singapore Management University, Singapore
Roger Owen, Swansea University, UK
Sajal Das, Missouri University of Science and Technology, USA
Salim Hariri, The University of Arizona, USA
Wim Van Grembergen, University of Antwerp, Belgium
Witold Pedrycz, University of Alberta, Canada
Xindong Wu, University of Vermont, USA
Zahir Irani, Brunel University London, UK

Program Committee Abdulla Al-Kaff Adrian Florea Adriana Fernandes Agostinho de Sousa Pinto Aguilar Alonso Ahmed El Oualkadi Alberto Freitas Alessio Ferrari Alan Ramirez-Noriega Alexandre Varão Alexandru Vulpe Almiz Souza e Silva Neto Alvaro Arenas Anabela Tereso Anacleto Correia André Marcos Silva Ankit Patel Antonio Jiménez-Martín Antonio Pereira Armando Mendes Arsénio Reis Babak Darvish Rouhani Bernard Grabot

Carlos III University of Madrid, Spain ‘Lucian Blaga’ University of Sibiu, Romania ISCTE-IUL, Portugal ISCAP/IPP, Portugal Universidad Politecnica de Madrid, Spain Abdelmalek Essaadi University, Morocco University of Porto, Portugal CNR ISTI, Italy Universidad Autonoma de Baja California, Mexico University New Atlântica, Portugal University Politehnica of Bucharest, Romania IFPB, Brazil IE Business School, Spain Universidade do Minho, Portugal CINAV, Portugal Centro Universitário Adventista de São Paulo, Brazil University of Jeddah, Saudi Arabia Universidad Politécnica de Madrid, Spain Polythechnic of Leiria, Portugal University of the Azores, Portugal University of Trás-os-Montes e Alto Douro, Portugal Payame Noor University, Iran LGP-ENIT, France


Benedita Malheiro Borja Bordel Carla Pinto Carlos Costa Catherine Garbay Cédriz Gaspoz Cengiz Acarturk Christophe Strobbe Christos Bouras Ciro Martins Cláudio Sapateiro Cristian García Bauza Cristian Mateos Dalila Durães Daniel Castro Silva David Cortés-Polo Dorgival Netto Edita Butrimė Edna Dias Canedo Eduardo Santos Eduardo Zurek Egil Ginters Elionai Moura Cordeiro Emiliano Reynares Fabio Galatioto Farhan Siddiqui Fatima Ouzayd Fernando Bobillo Fernando Moreira Fernando Reinaldo Ribeiro Filipe Portela Filipe Sá Fionn Murtagh Floriano Scioscia Francesco Bianconi Frederico Branco George Suciu Gilvandenys Sales Gonçalo Paiva Dias Goreti Marreiros Habiba Drias Hartwig Hochmair


Polytechnic of Porto, Portugal Technical University of Madrid, Spain Polytechnic of Porto, Portugal ISCTE-IUL, Portugal Laboratoire d’Informatique de Grenoble, France University of Applied Sciences Western Switzerland, Switzerland Orta Dogu Teknik Universitesi, Turkey Hochschule der Medien, Germany University of Patras, Greece Universidade de Aveiro, Portugal Polytechnic of Setúbal, Portugal PLADEMA-UNICEN-CONICET, Argentina ISISTAN-CONICET, Argentina Polythechnic of Porto, Portugal University of Porto, Portugal COMPUTAEX Foundation, Spain Universidade Federal de Pernambuco, Brazil Lithuanian University of Health Sciences, Lithuania University of Brasilia, Brazil Pontifical Catholic University of Paraná, Brazil Universidad del Norte, Colombia Riga Technical University, Latvia Universidade Federal do Rio Grande do Norte, Brazil CONICET, Argentina Transport Systems Catapult, UK Canada ENSIAS, Morocco University of Zaragoza, Spain Universidade Portucalense, Portugal Polytechnic of Castelo Branco, Portugal University of Minho, Portugal Polythechnic of Coimbra, Portugal University of Derby, UK Politecnico di Bari, Italy Università degli Studi di Perugia, Italy University of Trás-os-Montes e Alto Douro, Portugal University Politehnica of Bucharest, Romania Instituto Federal de Educação, Ciência e Tecnologia do Ceará, Brazil Universidade de Aveiro, Portugal ISEP/GECAD, Portugal USTHB, Algeria University of Florida, USA


Hatem Ben Sta Hector Fernando Gomez Alvarado Hélia Guerra Henrique da Mota Silveira Hing Kai Chan Hugo Paredes Ina Schiering Isabel Lopes Isabel Pedrosa Ivan Lukovic J. Joao Almeida James Njenga Jason Ding Jean Robert Kala Kamdjoug Jezreel Mejia Jie Zeng João Carlos Silva João Manuel R.S. Tavares Jorge Esparteiro Garcia Jorge Gomes Jorge Oliveira e Sá Jongpil Jeong José Luís Herrero Agustín José Luís Reis José M. Parente de Oliveira José Martins Jose Vasconcelos José Luís Pereira Jukai Li Julie Dugdale Justin Dauwels Kashif Saleem Kevin Ho Khalid Benali Korhan Günel Krzysztof Wolk


University of Tunis at El Manar, Tunisia Universidad Técnica Particular de Loja, Ecuador University of the Azores, Portugal University of Campinas, Brazil University of Nottingham Ningbo, China Universidade de Trás-os-Montes e Alto Douro, Portugal Ostfalia University of Applied Sciences, Germany Politécnico de Bragança, Portugal Coimbra Business School, Portugal University of Novi Sad, Serbia Universidade do Minho, Portugal University of the Western Cape, South Africa Hewlett Packard Enterprise, USA Catholic University of Central Africa, Cameroon Centro de Investigación en Matemáticas (CIMAT), Mexico Tsinghua University, China Polytechnic of Cávado and Ave, Portugal Universidade do Porto, Portugal Polytechnic of Viana do Castelo, Portugal Universidade de Lisboa, Portugal University of Minho, Portugal Sungkyunkwan University, South Korea University of Extremadura, Spain Instituto Universitário da Maia, Portugal Aeronautics Instittue of Technology, Brazil University of Trás-os-Montes e Alto Douro, Portugal University New Atlântica, Portugal Universidade do Minho, Portugal The College of New Jersey, USA University Grenoble Alps, France NTU, Singapore King Saud University, Saudi Arabia University of Guam, Guam Université de Lorraine, France Adnan Menderes University, Turkey Polish-Japanese Academy of Information Technology, Poland


Kuan Yew Wong Laurentiu Boicescu Lea Skorin-Kapov Leonardo Botega Libo Li Lina Rao Lorenz Diener Lorenzo Capra Luis Gomes Luis Mendes Gomes Luís Silva Rodrigues Mahesh Raisinghani Manuel Mazzara Manuel Perez-Cota Manuel Silva Marcelo Mendonça Teixeira Maria José Sousa Marijana Despotović-Zrakić Mário Antunes Marius Vochin Maristela Holanda Martin Henkel Martín López-Nores Martin Zelm Mawloud Mosbah Michele Ruta Miguel Antonio Sovierzoski Mijalche Santa Michal Kvet Mikael Snaprud Mircea Georgescu Mirna Muñoz Miroslav Bures Mohammed Serrhini Mokhtar Amami Munir Majdalawieh Mu-Song Chen Natalia Miloslavskaya Nelson Rocha Nicolai Prokopyev


Universiti Teknologi Malaysia, Malaysia E.T.T.I. U.P.B., Romania University of Zagreb, Croatia UNIVEM, Brazil IESEG School of Management, France University of the West Indies, Jamaica University of Bremen, Germany Università Degli Studi Di Milano, Italy Universidade Nova Lisboa, Portugal University of the Azores, Portugal Polythechnic of Porto, Portugal Texas Woman University, USA Innopolis University, Russia University of Vigo, Spain ISEP, Portugal UFRPE, Brazil Universidade Europeia, Portugal University of Belgrade, Serbia Polytechnic of Leiria, Portugal E.T.T.I. U.P.B., Romania University of Brasilia, Brazil Stockolm University, Sweden University of Vigo, Spain InterOP-VLab, Belgium University 20 Août 1955 of Skikda, Algeria Politecnico di Bari, Italy Federal University of Technology, Brazil Ss Cyril and Methodius University, Macedonia University of Zilina, Slovakia UiA, Norway Cuza University of Iasi, Romania Centro de Investigación en Matemáticas (CIMAT), Mexico Czech Technical University in Prague, Czech Republic University Mohammed First Oujda, Morocco Royal Military College of Canada, Canada Zayed University, United Arab Emirates Da-Yeh University, Taiwan National Research Nuclear University MEPhI, Russia University of Aveiro, Portugal Kazan Federal University, Russia


Noemi Emanuela Cazzaniga Nuno Melão Nuno Octávio Fernandes Patricia Zachman Paula Alexandra Rego Paula Viana Paulo Maio Paulo Novais Paweł Karczmarek Pedro Henriques Abreu Pedro Sousa Radu-Emil Precup Rahim Rahmani Ramayah T. Ramiro Gonçalves Ramon Alcarria Rasha Abou Samra Reyes Juárez-Ramírez Roger Bons Roman Popp Rui Jose Rui Pitarma Rui Silva Moreira Rustam Burnashev Salama Mostafa Sami Habib Samuel Ekundayo Samuel Fosso Wamba Sergio Albiol-Pérez Silviu Vert Slawomir Zolkiewski Sorin Zoican Stanisław Drożdż Stefan Pickl Stephane Roche Stuart So Tatiana Antipova Thomas Rist Thomas Weber Tzung-Pei Hong Vandana Shreenivas Bhat Victor Alves


Politecnico di Milano, Italy Polytechnic of Viseu, Portugal Polythechnic of Castelo Branco, Portugal Universidad Nacional del Chaco Austral, Argentina Politécnico de Viana do Castelo, Portugal Polytechnic of Porto, Portugal Polytechnic of Porto, Portugal Universidade do Minho, Portugal The John Paul II Catholic University of Lublin, Poland University of Coimbra, Portugal University of Minho, Portugal Politehnica University of Timisoara, Romania Stockholm University, Sweden Universiti Sains Malaysia, Malaysia University of Trás-os-Montes e Alto Douro, Portugal Technical University of Madrid, Spain Higher Colleges of Technology, United Arab Emirates Universidad Autonoma de Baja California, Mexico FOM University of Applied Sciences, Germany TU Wien, Austria University of Minho, Portugal Polytechnic Institute of Guarda, Portugal University Fernando Pessoa, Portugal Kazan Federal University, Russia Universiti Tenaga Nasional, Malaysia Kuwait University, Kuwait Eastern Institute of Technology, New Zealand Toulouse Business School, France University of Zaragoza, Spain Politehnica University of Timisoara, Romania Silesian University of Technology, Poland Politehnica University of Bucharest, Romania Cracow University of Technology, Poland UBw München, Germany Université Laval, Canada The University of Melbourne, Australia Perm State University, Russia University of Applied Sciences Augsburg, Germany EPFL, Switzerland National University of Kaohsiung, Taiwan SDMCET, India University of Minho, Portugal


Victor Georgiev Vida Melninkaite Vilma Villarouco Vitalyi Igorevich Talanin Vittoria Cozza Wolf Zimmermann Yair Wiseman Yuhua Li Yuwei Lin Yves Rybarczyk Zdzislaw Kowalczuk Zorica Bogdanović


Kazan Federal University, Russia Vytautas Magnus University, Lithuania Federal University of Pernambuco, Brazil Zaporozhye Institute of Economics & Information Technologies, Ukraine Polytechnic University of Bari, Italy Martin Luther University Halle-Wittenberg, Germany CBar-Ilan University, Israel University of Salford, UK University for the Creative Arts, UK Universidade Nova de Lisboa, Portugal Gdansk University of Technology, Poland University of Belgrade, Servia

Secretariat Committee
Anabela Sarmento, ISCAP, Portugal
Ana Paula Afonso, ISCAP, Portugal
António Abreu, ISCAP, Portugal
João Vidal de Carvalho, ISCAP, Portugal

Workshops

Managing Audiovisual Mass Media (Governance, Funding and Innovation) and Mobile Journalism

Organizing Committee
Francisco Campos Freire, University of Santiago de Compostela, Spain
Sabela Direito Rebollal, University of Santiago de Compostela, Spain
Diana Lago Vázquez, University of Santiago de Compostela, Spain
Iván Puentes Rivera, University of Vigo, Spain
Andrea Valencia Bermúdez, University of Santiago de Compostela, Spain

Program Committee
Francisco Campos Freire, Novos Medios Research Group, University of Santiago de Compostela, Spain
Sabela Direito Rebollal, Novos Medios Research Group, University of Santiago de Compostela, Spain
Diana Lago Vázquez, Novos Medios Research Group, University of Santiago de Compostela, Spain
Andrea Valencia Bermúdez, Novos Medios Research Group, University of Santiago de Compostela, Spain
Xosé López García, Novos Medios Research Group, University of Santiago de Compostela, Spain
Xosé Rúas Araújo, Neurocommunication, Advertising and Politics Research Group (NECOM), University of Vigo, Spain
Iván Puentes Rivera, Neurocommunication, Advertising and Politics Research Group (NECOM), University of Vigo, Spain
Valentín Alejandro Martínez Fernández, Applied Marketing Research Group (iMarka), University of A Coruña, Spain
Montse Vázquez Gestal, Persuasive Communication (CP2), University of Vigo, Spain
Ana Luna Alonso, Panorama and development of translation in Galicia (TI3), University of Vigo, Spain

Scientific Committee
Abel Suing, Technical Particular University of Loja, Ecuador
Alba Silva Rodríguez, University of Santiago de Compostela, Spain
Ana Belén Fernández Souto, Universidade de Vigo, Spain
Ana Fernandez Souto, Universidade de Vigo, Spain
Ana Isabel Rodríguez Vázquez, University of Santiago de Compostela, Spain
Ana María López Cepeda, University of Castilla-La Mancha, Spain
Beatriz Legerén Lago, Universidade de Vigo, Spain
Carlos Pío del Oro Sáez, University of Santiago de Compostela, Spain
Carlos Toural Bran, University of Santiago de Compostela, Spain
Catalina Mier Sanmartín, Technical Particular University of Loja, Ecuador
Clide Rodríguez Vázquez, University of A Coruña, Spain
Diana Rivera Rogel, Technical Particular University of Loja, Ecuador
Eva Sánchez Amboage, Technical Particular University of Loja, Ecuador
Jenny Yaguache Quichimbo, Technical Particular University of Loja, Ecuador
José Rúas Araújo, University of Vigo, Spain
Julinda Molares Cardoso, Universidade de Vigo, Spain
Luis Eduardo Vila Lladosa, University of Valencia, Spain
María Magdalena Rodríguez Fernández, University of A Coruña, Spain
Miguel Túñez López, University of Santiago de Compostela, Spain
Moisés Limia Fernández, Universidade do Minho, Portugal
Mónica López Golán, Pontifical Catholic University of Ecuador - Ibarra, Ecuador
Mónica Valderrama Santomé, Universidad de Vigo, Spain
Nancy Ulloa Erazo, Pontifical Catholic University of Ecuador - Ibarra, Ecuador
Natalia Quintas Froufe, Universidade da Coruña, Spain
Olga Blasco Blasco, University of Valencia, Spain
Óscar Juanatey Boga, University of A Coruña, Spain
Paulo Carlos López, Pontifical Catholic University of Ecuador - Ibarra, Ecuador
Pedro José Pérez Vázquez, University of Valencia, Spain
Rosario de Mateo Pérez, Autonomous University of Barcelona, Spain
Tania Fernández Lombao, Universidade de Santiago de Compostela, Spain

Educational and Serious Games Organizing Committee Brígida Mónica Faria António Pedro Costa Luca Longo Luis Paulo Reis

Polytechnic Institute of Porto (ESTSP-IPP), Portugal University of Aveiro, Portugal Dublin Institute of Technology, Ireland University of Minho, Portugal

Program Committee António Pedro Costa Brígida Mónica Faria Francisle Nery Sousa Henrique Lopes Cardoso Joaquim Gonçalves Luca Longo Luis Paulo Reis Paula Rego Pedro Miguel Moreira

University of Aveiro, Portugal Polytechnic Institute of Porto (ESTSP-IPP), Portugal University of Aveiro University of Porto, Portugal Polytechnic Institute of Cavado e Ave, Portugal Dublin Institute of Technology, Ireland University of Minho, Portugal Polytechnic Institute of Viana do Castelo, Portugal Polytechnic Institute of Viana do Castelo, Portugal


Emerging Trends and Challenges in Business Process Management Organizing Committee Rui Dinis Sousa José Luis Pereira Pascal Ravesteijn

University of Minho, Portugal University of Minho, Portugal HU University, the Netherlands

Program Committee Ana Almeida Armin Stein Barry Derksen Daniel Chen Daniel Pacheco Lacerda Fernando Belfo Frederico Branco João Varajão Jorge Oliveira Sá José Camacho José Martins Luis Miguel Ferreira Marie-Claude (Maric) Boudreau Manoel Veras Marcello La Rosa Pedro Malta Renato Flórido Cameira Sílvia Inês Dallavalle de Pádua Thilini Ariyachandra Vinícius Carvalho Cardoso Vitor Santos

School of Engineering - Polytechnic of Porto, Portugal University of Muenster, Germany NOVI University of Applied Sciences, the Netherlands Texas Christian University, USA UNISINOS University, Brazil ISCAC Coimbra Business School, Portugal UTAD, Portugal University of Minho, Portugal University of Minho, Portugal NOVA Information Management School, Portugal UTAD, Portugal University of Aveiro, Portugal University of Georgia, USA Federal University of Rio Grande do Norte, Brazil Queensland University of Technology, Australia Lusófona University, Portugal Federal University of Rio de Janeiro, Brazil University of São Paulo, Brazil Xavier University, USA Federal University of Rio de Janeiro, Brazil NOVA Information Management School, Portugal


Social Media World Sensors Organizing Committee Mario Cataldi Luigi Di Caro Claudio Schifanella

Université Paris 8, France Department of Computer Science – University of Turin, Italy RAI – Centre for Research and Technological Innovation, Italy

Program Committee Andrea Ballatore Claudio Schifanella Huiping Cao Luca Aiello Luca Vignaroli Luigi Di Caro Mario Cataldi Rosaria Rossini Rossano Schifanella Simon Harper Yves Vanrompay

Santa Barbara University RAI – Centre for Research and Technological Innovation, Italy New Mexico State University, USA Yahoo! Research, USA Centre for Research and Technological Innovation, Italy Department of Computer Science – University of Turin, Italy Université Paris 8, France ISMB, Italy University of Turin, Italy University of Manchester, UK Ecole Centrale Paris, France

Information Systems and Technologies Adoption Organizing Committee Ramiro Gonçalves José Martins Frederico Branco

Universidade de Trás-os-Montes e Alto Douro; INESC TEC and UTAD Universidade de Trás-os-Montes e Alto Douro; INESC TEC and UTAD Universidade de Trás-os-Montes e Alto Douro; INESC TEC and UTAD

Program Committee Ana Almeida Ana Paula Afonso

ISEP – Instituto Politécnico do Porto, Portugal ISCAP – Instituto Politécnico do Porto, Portugal


Ana Raquel Faria António Pereira Arminda Guerra Lopes Catarina Reis Elisabete Morais Fernando Moreira Fernando Reinaldo Ribeiro Frederico Branco Gonçalo Paiva Henrique Mamede Isabel Lopes Jezreel Mejia Miranda João Paulo Pereira José Martins José Luís Mota Pereira Leonel Morgado Leonilde Reis Luis Barbosa Manuel Au-yong Oliveira Manuel Pérez Cota Maria José Angélico Miguel Neto Nuno Melão Paulo Tomé Rui Quaresma Tiago Oliveira Vitor Santos

ISEP – Instituto Politécnico do Porto, Portugal ESTG – Instituto Politécnico de Leiria, Portugal Instituto Politécnico de Castelo Branco, Portugal ESTG – Instituto Politécnico de Leiria, Portugal Instituto Politécnico de Bragança, Portugal Universidade Portucalense Instituto Politécnico de Castelo Branco, Portugal Universidade de Trás-os-Montes e Alto Douro, Portugal Universidade de Aveiro, Portugal Universidade Aberta, Portugal Instituto Politécnico de Bragança, Portugal CIMAT, México Instituto Politécnico de Bragança, Portugal Universidade de Trás-os-Montes e Alto Douro, Portugal Universidade do Minho, Portugal Universidade Aberta, Portugal IPS – Instituto Politécnico de Setúbal, Portugal Universidade de Trás-os-Montes e Alto Douro, Portugal Universidade de Aveiro, Portugal Universidade de Vigo, Vigo, Spain ISCAP – Instituto Politécnico do Porto, Portugal NOVA IMS, Universidade Nova de Lisboa, Portugal Universidade Católica, Portugal Instituto Politécnico de Viseu, Portugal Universidade de Évora, Portugal NOVA IMS – Universidade NOVA de Lisboa, Portugal NOVA IMS, Universidade Nova de Lisboa, Portugal

Technologies in the Workplace - Use and Impact on Workers Organizing Committee Catarina Brandão Ana Veloso

Faculdade de Psicologia e de Ciências da Educação, Universidade do Porto – Portugal Escola de Psicologia, Universidade do Minho – Portugal


Program Committee Ana Cristina Pinto de Sá Ana Teresa Ferreira Esther Garcia Guy Enosh Hatem Ocel Joana Santos Karin Sanders Mary Sandra Carlotto Shay Tzafrir Snezhana Ilieva Vicente Tur

Portugal Telecom – Portugal Universidade Portucalense Infante D. Henrique – Portugal Universidad de Valencia – Spain School of Social Work, Faculty of Welfare and Health Sciences, University of Haifa – Israel Karabuk University, Faculty of Art, Psychology Department – Turkey Universidade do Algarve – Portugal UNSW Australia Business School, Australia PUCRS, Brazil Faculty of Management, University of Haifa – Israel Sofia University St. Kliment Ohridski – Bulgaria University of Valencia, Faculty of Psychology, Spain

Healthcare Information Systems: Interoperability, Security and Efficiency Organizing Committee José Machado António Abelha Anastasius Mooumtzoglou

University of Minho, Portugal University of Minho, Portugal European Society for Quality in Healthcare, Greece

Program Committee Alberto Freitas Ana Azevedo Brígida Mónica Faria Carlos Filipe Portela Costin Badica Daniel Castro Silva Elena Kornyshova Goreti Marreiros Hasmik Osipyan Helia Guerra Henrique Vicente Hugo Peixoto

University of Oporto, Portugal ISCAP/IPP, Portugal Polytechnic Institute of Porto - ESTSP/IPP, Portugal University of Minho/IPP, Portugal University of Craiova, Romania FEUP-DEI/LIACC, Portugal CNAM, France ISEP/IPP, Portugal State Engineering University of Armenia & Geneva University, Switzerland University of Azores, Portugal University of Évora, Portugal University of Minho, Portugal


Joaquim Gonçalves José Neves Juliana Pereira de Souza-Zinader Júlio Duarte Luis Mendes Gomes Manuel Filipe Santos Mas Sahidayana Mohktar Mauricio Almeida Pedro Gonçalves Renata Baracho Renato Rocha Souza Victor Alves Wilfred Bonney

IPCA, Portugal University of Minho, Portugal INF-UFG, Brazil IPCA, Portugal University of Azores, Portugal University of Minho, Portugal University of Malaysia, Malaysia UFMG, Brazil University of Minho, Portugal Universidade Federal de Minas Gerais, Brazil Fundação Getulio Vargas, Brazil University of Minho, Portugal University of Dundee, Scotland

New Pedagogical Approaches with Technologies Organizing Committee Anabela Mesquita Paula Peres Fernando Moreira

CICE- ISCAP/IPP and Algoritmi Centre, Portugal CICE- ISCAP/e-IPP, Politécnico do Porto, Portugal IJP - Universidade Portucalense and IEETA – UAveiro, Portugal

Program Committee Alexandra Gonçalves Ana R. Luís César Collazos David Fonseca Ernest Redondo Francesc Valls Frederico Branco Gabriel Mauricio Ramirez Villegas Joana Cunha Joaquim Arnaldo Martins José Martins Lino Oliveira Patrica Paderewski Ramiro Gonçalves

Secretaria de Educação, Portugal Universidade de Coimbra, Portugal Universidad del Cauca, Colombia GTM –La Salle, Universitat Ramon Llull, Spain Universitat Politècnica de Catalunya, Spain Universitat Politècnica de Catalunya, Spain UTAD, Portugal Universidad del Cauca Instituto Federal de Educação, Ciência e Tecnologia do Ceará, Brazil Universidade de Aveiro, Portugal UTAD, Portugal ESEIG/IPP, Portugal Universidad de Granada, Spain UTAD, Vila Real, Portugal


Vanessa Agredo Delgado Willey Braz

Unicauca Instituto Federal de Educação, Ciência e Tecnologia do Ceará, Brazil

Pervasive Information Systems Organizing Committee Carlos Filipe Portela Manuel Filipe Santos Kostas Kolomvatsos

Department of Information Systems, University of Minho, Portugal Department of Information Systems, University of Minho, Portugal National and Kapodistrian University of Athens, Greece

Program Committee Alexandre Santos António Abelha Arminda Guerra e Lopes Christos Anagnostopoulos Christos Tjortjis Cristina Alcaraz Daniele Riboni Dimitrios Pezaros Fabio A. Schreiber Filipe Mota Pinto Gabriel Pedraza Ferreira Jarosław Jankowski Jesus Ibanez José Machado Karolina Baras Lina Maria Pestana Leão de Brito Nuno Marques Panagiota Papadopoulou Paulo Cortez Ricardo Queiroz Sergio Ilarri Somnuk Phon-Amnuaisuk

University of Minho, Portugal University of Minho, Portugal Instituto Politécnico de Castelo Branco, Portugal University of Glasgow, UK Int’l Hellenic University, Greece University of Malaga, Spain University of Milano, Italy University of Glasgow, UK Politecnico Milano, Italy Polytechnic Institute of Leiria, Portugal Universidad Industrial de Santander, Colombia West Pomeranian University of Technology Szczecin, Poland Madeira Interactive Technologies Institute, Portugal University of Minho, Portugal University of Madeira, Portugal University of Madeira, Portugal New University of Lisboa, Portugal University of Athens, Greece University of Minho, Portugal ESMAD- P.Porto & CRACS - INESC TEC, Portugal University of Zaragoza, Spain Institut Teknologi Brunei, Brunei


Spyros Panagiotakis, Technological Educational Institution of Crete, Greece
Teh Phoey Lee, Sunway University, Malaysia
Teresa Guarda, ESPE, Portugal
Vassilis Papataxiarhis, University of Athens, Greece

Intelligent and Collaborative Decision Support Systems for Improving Manufacturing Processes Organizing Committee Leonilde Varela Justyna Trojanowska José Machado

University of Minho, Portugal Poznan University of Technology, Poland Department of Mechanical Engineering, University of Minho, Portugal

Program Committee Agnieszka Kujawińska Boris Delibašić Dariusz Sędziak Fatima Dargam Goran Putnik Jason Papathanasiou Jorge Hernández José Machado Krzysztof Żywicki Magdalena Diering Pascale Zaraté Rita Ribeiro Sachin Waigaonkar Shaofeng Liu Varinder Singh Vijaya Kumar Zlatan Car

Poznan University of Technology, Poland University of Belgrade, Serbia Poznan University of Technology, Poland SimTech Simulation Technology, Austria University of Minho, Portugal University of Macedonia, Greece University of Liverpool Management School, UK University of Minho, Portugal Poznan University of Technology, Poland Poznan University of Technology, Poland Toulouse 1 University – IRIT, France New University of Lisbon, Portugal Birla Institute of Technology & Science, India Plymouth University, UK BITS Pilani KK Birla Goa Campus, India VIT University, India University of Rijeka, Croatia


Internet of Things for Health Organizing Committee Joaquim Gonçalves Nuno Sousa

Instituto Politécnico do Cávado e do Ave, LIACC, Portugal Universidade do Minho, Instituto das Ciências da Vida e da Saúde, Portugal

Program Committee Brígida Mónica Faria Hélder Pinheiro Joaquim Gonçalves Luís Paulo Reis Nuno Sousa

Polytechnic Institute of Porto - ESTSP/IPP, Portugal ISEP/IPP, Portugal Instituto Politécnico do Cávado e do Ave, LIACC, Portugal University of Minho, Portugal Universidade do Minho, Instituto das Ciências da Vida e da Saúde, Portugal

ICT Solutions with Unmanned Aircraft Vehicles Organizing Committee António Pereira Álvaro Rocha Fernando José Mateus da Silva

Escola Superior de Tecnologia e Gestão de Leiria, Portugal Universidade de Coimbra, Faculdade de Ciências e Tecnologia, Portugal Escola Superior de Tecnologia e Gestão de Leiria, Portugal

Program Committee Bruno Guerreiro Diego Marcillo Enrique V. Carrera Fernando Caballero Fernando José Silva Isabel Marcelino João Pereira João Valente

Instituto Superior Técnico, Universidade de Lisboa, Portugal Universidad de las Fuerzas Armadas ESPE, Ecuador Universidad de las Fuerzas Armadas ESPE, Ecuador University of Seville, Spain Polytechnic Institute of Leiria, Leiria, Portugal Polytechnic Institute of Leiria, Leiria, Portugal Escola Superior de Tecnologia e Gestão de Leiria, Portugal University Carlos III of Madrid, Spain


Jose Carlos Castillo Montoya José Ribeiro Luis Merino Mário Jorge F. Rodrigues Nuno Costa Roman Lara Rosalía Laza Silvana G. Meire Vitor Fernandes


University Carlos III of Madrid, Spain Escola Superior de Tecnologia e Gestão de Leiria, Portugal School of Engineering, Pablo de Olavide University, Spain School of Technology and Management of Águeda, Univ. Aveiro, Portugal School of Technology and Management, Polytechnic Institute of Leiria, Portugal Universidad de las Fuerzas Armadas ESPE, Ecuador Higher Technical School of Computer Engineering, University of Vigo, Spain Higher Technical School of Computer Engineering, University of Vigo, Spain Polytechnic Institute of Leiria, Portugal

Contents

Software Systems, Architectures, Applications and Tools

Monitoring Energy Consumption System to Improve Energy Efficiency . . . 3
Gonçalo Marques and Rui Pitarma

IT Management of Building Materials’ Planning and Control Processes Using Web-Based Technologies . . . 12
Adedeji Afolabi, Olabosipo Fagbenle, and Timothy Mosaku

Characteristics of a Web-Based Integrated Material Planning and Control System for Construction Project Delivery . . . 20
Adedeji Afolabi, Olabosipo Fagbenle, and Timothy Mosaku

Integration Between EVM and Risk Management: Proposal of an Automated Framework . . . 31
Anabela Tereso, Pedro Ribeiro, and Manuel Cardoso

Renegotiation of Electronic Brokerage Contracts . . . 41
Rúben Cunha, Bruno Veloso, and Benedita Malheiro

Improving Project Management Practices in Architecture & Design Offices . . . 51
Cátia Sousa, Anabela Tereso, and Gabriela Fernandes

TourismShare . . . 62
Nuno Areias and Benedita Malheiro

Bee Swarm Optimization for Community Detection in Complex Network . . . 73
Youcef Belkhiri, Nadjet Kamel, Habiba Drias, and Sofiane Yahiaoui

Developing a Web Scientific Journal Management Platform . . . 86
Artur Côrte-Real and Álvaro Rocha


A Robust Implementation of a Chaotic Cryptosystem for Streaming Communications in Wireless Sensor Networks . . . 95
Pilar Mareca and Borja Bordel

Building a Unified Middleware Architecture for Security in IoT . . . . . . 105 Alexandru Vulpe, Ştefan-Ciprian Arseni, Ioana Marcu, Carmen Voicu, and Octavian Fratu Impact of Transmission Communication Protocol on a Self-adaptive Architecture for Dynamic Network Environments . . . . . . . . . . . . . . . . . . 115 Gabriel Guerrero-Contreras, José Luis Garrido, María José Rodríguez Fórtiz, Gregory M.P. O’Hare, and Sara Balderas-Díaz A Survey on Anti-honeypot and Anti-introspection Methods . . . . . . . . . 125 Joni Uitto, Sampsa Rauti, Samuel Laurén, and Ville Leppänen Intelligent Displaying and Alerting System Based on an Integrated Communications Infrastructure and Low-Power Technology . . . . . . . . . 135 Marius Vochin, Alexandru Vulpe, George Suciu, and Laurentiu Boicescu Intelligent System for Vehicle Navigation Assistance . . . . . . . . . . . . . . . . 142 Marius Vochin, Sorin Zoican, and Eugen Borcoci Making Software Accessible, but not Assistive: A Proposal for a First Insight for Students . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149 João de Sousa e Silva, Ramiro Gonçalves, José Martins, and António Pereira Hand Posture Recognition with Standard Webcam for Natural Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157 César Osimani, Jose A. Piedra-Fernandez, Juan Jesus Ojeda-Castelo, and Luis Iribarne Assessment of Microsoft Kinect in the Monitoring and Rehabilitation of Stroke Patients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167 João Abreu, Sérgio Rebelo, Hugo Paredes, João Barroso, Paulo Martins, Arsénio Reis, Eurico Vasco Amorim, and Vítor Filipe A Big Data Analytics Architecture for Industry 4.0 . . . . . . . . . . . . . . . . . 175 Maribel Yasmina Santos, Jorge Oliveira e Sá, Carlos Costa, João Galvão, Carina Andrade, Bruno Martinho, Francisca Vale Lima, and Eduarda Costa Articulating Gamification and Visual Analytics as a Paradigm for Flexible Skills Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185 José Araújo and Gabriel Pestana Proposal for a Federation of Hybrid Clouds Infrastructure in Higher Education Institutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197 Pedro Lopes and Francisco Pereira


Radio Access Network Slicing in 5G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207 Jinjin Gong, Lu Ge, Xin Su, and Jie Zeng Remote Sensing for Forest Environment Preservation . . . . . . . . . . . . . . . 211 George Suciu, Ramona Ciuciuc, Adrian Pasat, and Andrei Scheianu VBII-UAV: Vision-Based Infrastructure Inspection-UAV . . . . . . . . . . . . 221 Abdulla Al-Kaff, Francisco Miguel Moreno, Luis Javier San José, Fernando García, David Martín, Arturo de la Escalera, Alberto Nieva, and José Luis Meana Garcéa Gaming in Dyscalculia: A Review on disMAT . . . . . . . . . . . . . . . . . . . . . 232 Filipa Ferraz, António Costa, Victor Alves, Henrique Vicente, João Neves, and José Neves Multimedia Systems and Applications Matching Measures in the Context of CBIR: A Comparative Study in Terms of Effectiveness and Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . 245 Mawloud Mosbah and Bachir Boucheham The Evolution of Azuma’s Augmented Reality– An Overview of 20 Years of Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259 Mafalda Teles Roxo and Pedro Quelhas Brito System-on-Chip Evaluation for the Implementation of Video Processing Servers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267 Ghofrane El Haj Ahmed, Felipe Gil-Castiñeira, Enrique Costa-Montenegro, and Pablo Couñago-Soto A Review Between Consumer and Medical-Grade Biofeedback Devices for Quality of Life Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275 Pedro Nogueira, Joana Urbano, Luís Paulo Reis, Henrique Lopes Cardoso, Daniel Silva, and Ana Paula Rocha Computer Networks, Mobility and Pervasive Systems Sensor-Based Global Mobility Management Scheme with Multicasting Support for Building IoT Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . 289 Hana Jang, Byunghoon Song, Yoonchae Cheong, and Jongpil Jeong Analytical Approach of Cost-Reduced Location and Service Management Scheme for LTE Networks. . . . . . . . . . . . . . . . . . . . . . . . . . 300 Hana Jang, Haksang Lee, Taehyun Lee, and Jongpil Jeong Design and Security Analysis of Improved Identity Management Protocol for 5G/IoT Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311 Byunghoon Song, Yoonchae Cheong, Taehyun Lee, and Jongpil Jeong


Cognitive Multi-Radio Prototype for Industrial Environment . . . . . . . . . 321 Michele Ligios, Maria Teresa Delgado, Rosaria Rossini, Davide Conzon, Francesco Sottile, and Claudio Pastrone Cluster Based VDTN Routing Algorithm with Multi-attribute Decision Making . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332 Songjie Wei, Qianrong Luo, Hao Cheng, and Erik Joseph Seidel Reputation Analysis of Sensors’ Trust Within Tabu Search . . . . . . . . . . 343 Sami J. Habib and Paulvanna N. Marimuthu Blind Guide: Anytime, Anywhere Solution for Guiding Blind People . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353 Daniel Vera, Diego Marcillo, and Antonio Pereira SOC- and SIC-Based Information Security Monitoring . . . . . . . . . . . . . . 364 Natalia Miloslavskaya Halcyon – Assistive Technology for Alzheimer’s Patients . . . . . . . . . . . . 375 Brad Zellefrow, Saman Shanaei, Rida Salam, Adnan El Nasan, and Hicham Elzabadani A New Generation of Wireless Sensors Networks: Wireless Body Area Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384 Bahae Abidi, Abdelillah Jilbab, and Mohamed El Haziti Intelligent and Decision Support Systems Load-Based POLCA: An Assessment of the Load Accounting Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397 Nuno O. Fernandes, Matthias Thürer, Mark Stevenson, and S. Carmo Silva Bottleneck-Oriented Order Release: An Assessment by Simulation . . . . 406 Sílvio Carmo-Silva and Nuno O. Fernandes Including Credibility and Expertise in Group Decision-Making Process: An Approach Designed for UbiGDSS . . . . . . . . . . . . . . . . . . . . . 416 João Carneiro, Diogo Martinho, Goreti Marreiros, and Paulo Novais A Process Mining Approach for Discovering ETL Black Points . . . . . . . 426 Orlando Belo, Nuno Dias, Carlos Ferreira, and Filipe Pinto Algorithms for People Recognition in Digital Images: A Systematic Review and Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436 Monserrate Intriago-Pazmiño, Vanessa Vargas-Sandoval, Jorge Moreno-Díaz, Elizabeth Salazar-Jácome, and Mayra Salazar-Grandes Systematic Assignments in Merging Organizations . . . . . . . . . . . . . . . . . 447 Sylvia Encheva


A Conceptual Model for the Professional Profile of a Data Scientist . . . . 453 Carlos Costa and Maribel Yasmina Santos Detecting Evidence of Fraud in the Brazilian Government Using Graph Databases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464 Gustavo C.G. van Erven, Maristela Holanda, and Rommel N. Carvalho Identifying Opportunities and Challenges for Adding Value to Decision-Making in Higher Education Through Academic Analytics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474 James K. Njenga, Ildeberto A. Rodello, Karin Hartl, and Olaf Jacob Feedback Mechanisms for Decision Support Systems: A Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481 Josef Frysak Big Data Analytics and Applications Prediction and Analysis of Hotel Ratings from Crowd-Sourced Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493 Fátima Leal, Benedita Malheiro, and Juan Carlos Burguillo Implementation of Infrastructure for Streaming Outlier Detection in Big Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503 Zirije Hasani Big Data Analytics and Customs Throughput: The Case of Jamaica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512 Maurice McNaughton, Lila Rao, David Parker, and Daniel Lewis Discovering Knowledge Nuggets in Financial Data: The Case of a Financial Services Institution . . . . . . . . . . . . . . . . . . . . . . . 518 Gunjan Mansingh, Lila Rao, and Maurice McNaughton Insider Attacks in a Non-secure Hadoop Environment . . . . . . . . . . . . . . 528 Pedro Camacho, Bruno Cabral, and Jorge Bernardino Leverage Web Analytics for Real Time Website Browsing Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538 Claudio Sapateiro and João Gomes Human-Computer Interaction Human-Computer Interaction in the Public Sector Performance Evaluation Analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551 Tatiana Antipova


European Portuguese Validation of Usefulness, Satisfaction and Ease of Use Questionnaire (USE) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561 Carina Dantas, Ana Luísa Jegundo, João Quintas, Ana Isabel Martins, Alexandra Queirós, and Nelson Pacheco Rocha Accessibility in the Virtual Learning Environment Moodle Identification of Problems’ Class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571 Lane Primo, Vânia Ulbricht, and Luciane Maria Fadel AppVox: An Application to Assist People with Speech Impairments in Their Speech Therapy Sessions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581 Cirano Gonçalves, Tânia Rocha, Arsénio Reis, and João Barroso Picture-Based Task Definition and Parameterization Support System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592 Vítor Machado, Nuno Lopes, J.C. Silva, and José Luís Silva Assistive Platforms for the Visual Impaired: Bridging the Gap with the General Public . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602 Tânia Rocha, Hugo Fernandes, Arsénio Reis, Hugo Paredes, and João Barroso ePHoRt Project: A Web-Based Platform for Home Motor Rehabilitation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609 Yves Rybarczyk, Jan Kleine Deters, Arián Aladro Gonzalvo, Mario Gonzalez, Santiago Villarreal, and Danilo Esparza A Virtual Fine Rehabilitation System for Children with Cerebral Palsy: Assesment of the Usability of a Low-Cost System . . . . . . . . . . . . . 619 Sergio Albiol-Pérez, Jose-Antonio Gil Gómez, Elena Olmo, and Alejandro Menal Soler An Empirical Study on Usability Operations for Autistic Children . . . . 628 Angeles Quezada, Reyes Juárez-Ramírez, Samantha Jiménez, Alan Ramírez-Noriega, and Sergio Inzunza Implementation of a Multipoint Virtual Goniometer (MVG) Trough Kinect-2 for Evaluation of the Upper Limbs. . . . . . . . . . . . . . . . . . . . . . . 639 Edwin Pruna, William López V., Ivón Escobar, Eddie D. Galarza, Paulina Zumbana, Sergio Albiol-Pérez, Galo Ávila, and José Bucheli 3D Virtual System Using a Haptic Device for Fine Motor Rehabilitation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648 Edwin Pruna, Andrés Acurio S., Ivón Escobar, Sergio Albiol Pérez, Paulina Zumbana, Amparo Meythaler, and Fabian A. Álvarez


VRAndroid System Based on Cognitive Therapeutic Exercises for Stroke Patients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657 Edwin Pruna, Ivón Escobar, Javier Montaluisa, Marco Pilatásig, Luis Mena, Paulina Zumbana, Accel Guamán, and Eddie D. Galarza TriPOD: A Prototypal System for the Recognition of Capacitive Widget on Touchscreen Addressed for Montessori-Like Educational Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664 Raffaele Di Fuccio, Giovanni Siano, and Antonio De Marco The Activity Board 1.0: RFID-NFC WI-FI Multitags Desktop Reader for Education and Rehabilitation Applications . . . . . . . . . . . . . . . . . . . . . 677 Raffaele Di Fuccio, Giovanni Siano, and Antonio De Marco Analyses of the Flipped Classroom Application in Discussion Forum on LMS Moodle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 690 Fabrícia Farias, Gilvandenys Sales, Alexandra Gonçalves, Adriano Machado, and Eliana Leite Affectivity Level for Intelligent Tutoring System Based on Student Stereotype . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701 Samantha Jiménez, Reyes Juárez-Ramírez, Víctor H. Castillo, Alan Ramírez-Noriega, and Sergio Inzunza Quantifying the Effects of Learning Styles on Attention . . . . . . . . . . . . . 711 Dalila Durães, César Analide, Javier Bajo, and Paulo Novais Ethics, Computers and Security Analysis of Research on Specific Insider Information Security Threats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725 Anton Zaytsev, Anatoly Malyuk, and Natalia Miloslavskaya Sab - íomha: An Automated Image Forgery Detection Technique Using Alpha Channel Steganography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 736 Muhammad Shahid Bhatti, Syed Asad Hussain, Abdul Qayyum, Imran Latif, Muhammad Hasnain, and Sajid Ibrahim Hashmi A Mechanism to Authenticate Caller ID . . . . . . . . . . . . . . . . . . . . . . . . . . 745 Jikai Li, Fernando Faria, Jinsong Chen, and Daan Liang Verification Methodology of Ethical Compliance for Users, Researchers and Developers of Personal Care Robots . . . . . . . . . . . . . . . 754 Carina Dantas, Pedro Balhau, Ana Jegundo, Luís Santos, Christophoros Christophorou, Cindy Wings, João Quintas, and Eleni Christodoulou


Privacy Protection on Social Networks: A Scale for Measuring Users’ Attitudes in France and the USA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763 Jean-Éric Pelet and Basma Taieb Information Security in Virtual Social Networks: A Survey in Higher Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 774 Isabel Maria Lopes and João Paulo Pereira Health Informatics Monitoring Health Factors in Indoor Living Environments Using Internet of Things . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785 Gonçalo Marques and Rui Pitarma Technologies for Ageing in Place to Support the Empowerment of Patients with Chronic Diseases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 795 Alexandra Queirós, Luís Pereira, Milton Santos, and Nelson Pacheco Rocha Characterization of the Stakeholders of Medical Imaging Based on an Image Repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805 Milton Santos, Augusto Silva, and Nelson Rocha Integration and Medication in Hospitals . . . . . . . . . . . . . . . . . . . . . . . . . . 815 Camilla Bjørnstad and Gunnar Ellingsen Detection of Adverse Events Through Hospital Administrative Data . . . . 825 Bernardo Marques, Bernardo Sousa-Pinto, Tiago Silva-Costa, Fernando Lopes, and Alberto Freitas IT Process Improvement in Telehomecare Services for Diabetic/ Hypertensive Patients in a Developing Country: Design and Evaluation into an Ecuadorian Company . . . . . . . . . . . . . . . 835 Hugo Roldán, Andrés Larco, and Carlos Montenegro Semantic Interoperability: A Systematic Research Towards Its Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 847 Pedro F.O. Gomes, Eduardo F.R. Loures, and Eduardo A.P. Santos Information Technologies in Education Assessing User Experience for Serious Games in Auditory-Verbal Therapy for Children with Cochlear Implant . . . . . . . . . . . . . . . . . . . . . . 861 Sandra Cano, César A. Collazos, Leandro Flórez Aristizábal, Carina S. Gonzalez, and Fernando Moreira A Reference Framework for Enterprise Computing Curriculum . . . . . . 872 Munir Majdalawieh and Adam Marks


Mobile Learning in Portuguese Universities: Are Professors Ready? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 887 Fernando Moreira, Carla Santos Pereira, Natércia Durão, and Maria João Ferreira A Systematic Mapping Review of All-Learning Model of Integration of Educational Methodologies in the ICT . . . . . . . . . . . . . . . . . . . . . . . . . 897 Gabriel M. Ramirez, Cesar A. Collazos, and Fernando Moreira Restless Millennials in Higher Education - A New Perspective on Knowledge Management and Its Dissemination Using IT in Academia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 908 Manuel Au-Yong-Oliveira and Ramiro Gonçalves The Use of Educational Applications for Textual Production in High School: A Systematic Review of the Literature . . . . . . . . . . . . . . 921 Fabiana Santos Fernandes, Patricia Jantsch Fiuza, and Robson Rodrigues Lemos Game Based Learning Contexts for Soft Skills Development . . . . . . . . . 931 Maria José Sousa and Álvaro Rocha Electronic Individual Student Process — A Preliminary Analysis . . . . . 941 António Abreu, Ana Paula Afonso, João Vidal Carvalho, and Álvaro Rocha A Feature Selection Application Using Particle Swarm Optimization for Learning Concept Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 952 Korhan Günel, Kazım Erdoğdu, Refet Polat, and Yasin Özarslan Ordering and Visualization of Recommendations . . . . . . . . . . . . . . . . . . . 963 Sylvia Encheva Attribute Dependencies and Complete Orderings . . . . . . . . . . . . . . . . . . . 969 Sylvia Encheva Social Quizzes with Scuiz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 975 Massimo Di Pierro and Peter Hastings Developing ICT Students’ Practical Skills Through Work in Virtual Learning Environments: Teachers’ Expectations and Actual Experience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 983 Edita Butrime, Rita Marciulyniene, Vida Melninkaite, and Rita Valteryte Information Technologies in Radiocommunications Pilot Assignment Scheme Based on Location in Large-Scale MIMO System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 999 Chao Zhang

xxxiv

Contents

Fractal Microwave Absorbers for Multipath Reduction in UHF-RFID Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1008 Francesca Venneri and Sandra Costanzo A Sum-Rate Maximization Scheme for Coordinated User Scheduling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1014 Jinru Li, Jie Zeng, Xin Su, and Chiyang Xiao An Approach of Cell Load-Aware Based CoMP in Ultra Dense Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1018 Jingjing Wu, Jie Zeng, Xin Su, and Liping Rong Doppler Elaboration for Vibrations Detection Using Software Defined Radar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1022 Antonio Raffo and Sandra Costanzo Application Scenarios of Novel Multiple Access (NMA) Technologies for 5G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1029 Shuliang Hao, Jie Zeng, Xin Su, and Liping Rong A Unified Framework of New Multiple Access for 5G Systems . . . . . . . . 1034 Bin Fan, Xin Su, Jie Zeng, and Bei Liu Research on Handover Procedures of LTE System with the No Stack Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1039 Lu Zhang, Lu Ge, Xin Su, Jie Zeng, and Liping Rong Dual Band Patch Antenna for 5G Applications with EBG Structure in the Ground Plane and Substrate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1044 Almir Souza e Silva Neto, Artur Luiz Torres de Oliveira, Sérgio de Brito Espinola, João Ricardo Freire de Melo, José Lucas da Silva, and Humberto César Chaves Fernandes Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1051

Software Systems, Architectures, Applications and Tools

Monitoring Energy Consumption System to Improve Energy Efficiency

Gonçalo Marques and Rui Pitarma(✉)

Polytechnic Institute of Guarda – Unit for Inland Development, Av. Dr. Francisco Sá Carneiro, n° 50, 6300-559 Guarda, Portugal
[email protected], [email protected]

Abstract. The National Action Plan for Energy Efficiency sets targets for the years 2016 to 2020, making energy efficiency a European priority covering all Member States. Energy efficiency has grown in importance over the years, and national and international laws have been introduced to reinforce the concept. The Internet of Things (IoT) is a concept created to describe the technological revolution of connecting everyday devices to the global Internet. Associating the need for energy consumption monitoring with the IoT paradigm, we designed and built an automatic system, named iPlug, that allows monitoring and control of electrical devices connected to the Internet via Wi-Fi. The system is a smart plug that fits a generic socket and provides information on the power consumption of the device(s) connected to it by calculating the current and the line voltage; this information is stored in a database. The iPlug also lets the user know whether the outlet is on or off, change its status, and access consumption data through two applications, a Web portal and an iOS application.

Keywords: Energy efficiency · Sustainability · Monitoring · IoT (Internet of Things) · Smart cities · iOS · Web services

1 Introduction

Energy efficiency as a policy objective refers to commercial and industrial competitiveness and energy security benefits, as well as, increasingly, to environmental benefits such as reducing CO2 emissions [1].

Energy efficiency is a concern with two main motivations. On the one hand, there is the environmental motivation, related to the reduction of waste and aimed at reducing CO2 emissions. On the other hand, there is the economic motivation, which stems from the reduction of the costs sustained by operators to keep the network up and running at the desired service level, and from their need to counterbalance the ever-increasing cost of energy [2]. [3] concludes that energy efficiency investment offers a high payoff in induced jobs, is generally the least-cost and often the most readily implementable approach, and that greater energy efficiency can diminish the need for both additional fossil fuel plants and new renewable energy sources.


A study presented by [4] concludes that conventional energy efficiency technologies can be used to decrease energy use in new commercial buildings by 20–30% on average, and by over 40% for some building types and locations.

An application of an enterprise energy monitoring system, based on an IoT (Internet of Things) architecture and incorporating several technologies such as digital instrumentation, communications networks, centralized management, decentralized control and remote monitoring, is proposed by [5]. A study by [6] concludes that even the most advanced performance monitoring tool needs motivated end-users who, supported by proper energy awareness services, will make energy-efficient decisions and change established operational procedures and routines. Another important application of IoT-based energy monitoring is a supervising system for solar photovoltaic power generation that can greatly enhance the performance, monitoring and maintenance of the plant, presented by [7]. A real-time energy consumption monitoring system for large public buildings, which is beneficial for energy saving and helps to establish building energy consumption performance evaluation, is proposed by [8]. A real-time, low-cost, wireless AC monitoring system based on the ATmega 328P-PU microcontroller, capable of acquiring and processing the data measured by a smart meter, is proposed by [9].

This paper describes the iPlug system, developed by the authors, which aims to ensure, autonomously, accurately and simultaneously, the energy consumption monitoring of electrical devices. The system consists of a low-cost, IoT-based energy monitoring system, developed using an Arduino, an ESP8266 module for Wi-Fi connection to the Internet and micro sensors, with storage and availability of the monitoring data on a Web portal in real time. The system is capable of monitoring the power consumption of the devices connected to it, considering the current consumed by the equipment and the mains voltage. The equipment's power consumption data can be accessed via a Web portal (iPlug Web) or a mobile application (iPlug Mobile), and it is also possible to connect and disconnect the electrical equipment by sending ON/OFF commands remotely. Monitoring identifies energy waste points or the excessive consumption of damaged devices. After monitoring consumption, the user can also analyse the data in order to change behaviours and improve energy efficiency.

2 Technical Solution

The iPlug system is an automatic energy monitoring system that allows the user, such as a building manager, to know in real time the energy consumption of the electrical device connected to the system. The data collected is stored in a SQL Server database using Web services developed in ASP.NET C#, and the end user can access the data from the iPlug Web portal, built with the same technologies. After login, the end user can access the iPlug Web and obtain all the energy monitoring information; it is also possible to turn the device connected to the system ON or OFF.


The monitoring data is shown as numeric values or in chart form, and the portal also allows the user to keep a history of the parameters. By providing a history of changes, the system helps the user to analyse precisely and in detail the use and consumption of electrical devices. This can give important data for deciding on possible interventions to improve energy efficiency in the building. The iPlug Web is also equipped with a powerful alerts manager that advises the user when a specific electrical device shows abnormal energy consumption; for this, the user has to configure the electrical parameters of the electrical device that will be monitored.

The wireless communication is implemented using the ESP8266 module, which implements the IEEE 802.11 b/g/n networking protocol, a family of specifications developed by the IEEE for WLANs. The IEEE 802.11 standard supports radio transmission within the 2.4 GHz band [10]. Figure 1 schematically illustrates the system architecture used in the iPlug solution.

Fig. 1. iPlug system architecture.
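
The paper does not specify the rule applied by the alerts manager described above. Purely as an illustration of the kind of threshold check it implies, flagging readings that fall outside the consumption band configured for a device, a sketch could look as follows; the actual portal is implemented in ASP.NET C#, and the field names and tolerance scheme here are assumptions:

```python
def check_consumption_alert(device, measured_power_w):
    """Flag a reading whose power draw falls outside the band configured
    for the monitored device (illustrative logic, not the iPlug Web rule)."""
    low = device["expected_power_w"] * (1 - device["tolerance"])
    high = device["expected_power_w"] * (1 + device["tolerance"])
    if not (low <= measured_power_w <= high):
        return (f"Alert: {device['name']} drew {measured_power_w:.0f} W, "
                f"outside the expected range {low:.0f}-{high:.0f} W")
    return None

# Example: a device configured as a 60 W lamp with a 25% tolerance.
lamp = {"name": "office lamp", "expected_power_w": 60.0, "tolerance": 0.25}
print(check_consumption_alert(lamp, 140.0))
```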

The iPlug hardware is built around the embedded Arduino UNO microcontroller board, an open-source platform that incorporates an Atmel AVR microcontroller [11]. In order to measure the energy consumption, iPlug is equipped with an ACS712 sensor, a Hall-effect-based linear current sensor IC that provides a cheap and accurate solution for AC or DC current sensing in electrical systems [12]. In order to improve system accuracy, an electric circuit is designed to measure the supply voltage, which will be explained in detail later. To control the electrical device remotely, the iPlug uses an electromechanical relay that allows devices of up to 10 A and 250 V to be controlled [13].


Figure 2 shows the connection diagram of the iPlug system, demonstrating the connections between the microcontroller, sensors and actuators.

Fig. 2. iPlug connection diagram

Figure 3 shows the iPlug prototype; a brief description of the components used is presented below.

Fig. 3. iPlug hardware

• ACS712 – a fully integrated, Hall-effect-based linear current sensor IC with 2.1 kVrms isolation and a low-resistance current conductor. It has a 5 μs output rise time in response to a step input current, 80 kHz bandwidth, a total output error of 1.5% at TA = 25 °C, a small-footprint, low-profile SOIC8 package, a stable output offset voltage and nearly zero magnetic hysteresis [14].


• ESP8266 – a Wi-Fi chip with integrated antenna switches, RF balun, power amplifier, low-noise receive amplifier, filters and power management modules. It supports the 802.11 b/g/n protocols and 2.4 GHz Wi-Fi with WPA/WPA2, has an integrated low-power 32-bit MCU and an integrated 10-bit ADC, has a standby power consumption below 1.0 mW (DTIM3) and can operate in the temperature range −40 °C to 125 °C [15].
• Songle SRD-05VDC-SL-C Relay – with a relay UL/CUL rating of 10 A @ 125 V AC, 28 V DC and a relay CCC/TUV rating of 10 A @ 250 V AC, 30 V DC. It is capable of controlling high-power devices of up to 10 A with a simple high/low signal and provides isolation between the microcontroller and the device being controlled. Its voltage requirements are 5 V DC (relay power) and 3.3 V to 5 V DC (input signal), and its current requirement is ~85 mA (relay power) [16].

Fig. 4. Voltage sensor circuit

Fig. 5. Voltage sensor


• Voltage Sensor – this sensor incorporates an INDEL TSZZ 0,6/005MP AC-AC 9 V adaptor voltage transformer; the rated input is 230–240 V AC at 50 Hz and the output is 9 V. To convert the AC current into DC current, a rectifier bridge is also incorporated in the voltage sensor. In order to convert the transformer output voltage into a range compatible with the Arduino analogue ports, a voltage divider is used, and a low-pass filter is used to stabilize the output signal. Figure 4 shows the voltage sensor circuit and Fig. 5 shows the voltage sensor.

The firmware of the iPlug was implemented using the Arduino platform language in the Arduino IDE; it belongs to the C family of programming languages. The iPlug Web was developed in ASP.NET C# with a SQL Server database, and the Web services that allow data collection are also built in ASP.NET. The iPlug Mobile app is written in the Swift programming language in the Xcode IDE and is compatible with iOS 7 and above.
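
The conversion formulas used by the firmware are not given in the paper. As a rough illustration of how such readings are commonly turned into RMS current, mains voltage and power, the sketch below uses Python purely as notation (the actual firmware is written in the Arduino C-family language); the ACS712 sensitivity and the voltage-chain gain are placeholder values that depend on the sensor variant and on the divider resistors actually used.

```python
import math

ADC_REF_V = 5.0              # Arduino UNO analogue reference
ADC_COUNTS = 1024            # 10-bit ADC
ACS712_SENS_V_PER_A = 0.185  # placeholder: sensitivity of the 5 A ACS712 variant
VOLTAGE_CHAIN_GAIN = 0.012   # placeholder: mains RMS volts -> DC volts after
                             # transformer, rectifier, divider and low-pass filter

def to_volts(raw):
    """Convert a raw 10-bit ADC count to volts at the pin."""
    return raw * ADC_REF_V / ADC_COUNTS

def current_rms(samples):
    """RMS current from one mains cycle of ACS712 readings.

    The DC offset (the sensor idles at mid-supply) is removed before
    computing the RMS of the AC component.
    """
    volts = [to_volts(s) for s in samples]
    offset = sum(volts) / len(volts)
    ac = [v - offset for v in volts]
    return math.sqrt(sum(v * v for v in ac) / len(ac)) / ACS712_SENS_V_PER_A

def mains_voltage(raw_dc):
    """Mains RMS voltage from the smoothed voltage-sensor reading."""
    return to_volts(raw_dc) / VOLTAGE_CHAIN_GAIN

if __name__ == "__main__":
    # Synthetic cycle: 40 samples of a small AC swing around the ADC midpoint,
    # plus one smoothed reading from the voltage channel.
    cycle = [512 + 30 * math.sin(2 * math.pi * k / 40) for k in range(40)]
    i_rms = current_rms(cycle)
    v_rms = mains_voltage(565)
    print(f"I = {i_rms:.2f} A, V = {v_rms:.0f} V, P = {i_rms * v_rms:.0f} W")
```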

3 Results and Discussion

The iPlug Web allows viewing the data as numeric values or in chart form, and the user can also filter by date the timespan they want to analyse. The results are collected using the iPlug. Figure 6 represents the daily energy consumption of the electrical device. This feature enables users to have a better perspective of their daily consumption in order to analyse the data and plan the interventions in the building necessary to improve energy efficiency.

Fig. 6. Daily energy consumption (kWh)

Using the daily charts, the user can also verify the consumption of an electrical apparatus and its evolution per hour (Fig. 7). As with the daily data, the user can also view the hourly consumption rates. Based on these results, it is possible to react accordingly in real time in order to improve energy efficiency. In the iPlug Mobile app, after authentication, the user can visualize the last 10 monitoring records; this data can be visualized in numerical or chart form, as shown in Fig. 8.


Fig. 7. Hourly energy consumption (kWh)

Fig. 8. iPlug mobile application functionalities
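
The daily and hourly figures shown in Figs. 6 and 7 are aggregations of the stored power samples. The paper does not spell out the calculation, but a straightforward version, summing power multiplied by the sampling interval per hour or per day, is sketched below; the sampling interval and timestamps are illustrative.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def energy_kwh_by_bucket(samples, bucket="hour"):
    """Aggregate (timestamp, power_in_watts) samples into kWh per hour or day.

    Each sample is assumed to represent the power drawn until the next
    sample arrives (simple rectangle rule).
    """
    fmt = "%Y-%m-%d %H:00" if bucket == "hour" else "%Y-%m-%d"
    totals = defaultdict(float)
    for (t0, watts), (t1, _) in zip(samples, samples[1:]):
        hours = (t1 - t0).total_seconds() / 3600.0
        totals[t0.strftime(fmt)] += watts * hours / 1000.0
    return dict(totals)

if __name__ == "__main__":
    start = datetime(2017, 3, 1, 9, 0)
    # About two hours of a 120 W load, sampled every 10 minutes.
    samples = [(start + timedelta(minutes=10 * k), 120.0) for k in range(13)]
    print(energy_kwh_by_bucket(samples, bucket="hour"))
```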

The graphic display of the energy monitoring data allows a greater perception of the behaviour of the monitored parameter than the numerical display format. On the other hand, the Internet portal also allows the user to access the historical data, which enables a more precise analysis of the detailed temporal evolution of energy consumption. Thus, the system is a powerful tool for the analysis of energy consumption and to support decision making on possible interventions to improve energy efficiency in the building. The iPlug Mobile allows quick, simple, intuitive and real-time access to the monitored data, but also provides the user with a way to connect and disconnect electrical devices remotely.

As future work, the main goal is to make technical improvements, including the creation of a scheduling system to turn electrical devices on and off automatically; this feature is particularly important, for example, for public lighting control. Compared to other systems, the iPlug system has several advantages, namely its modularity, small size, low construction cost and ease of installation. Improvements to the system hardware and software are planned to make it much more appropriate for specific purposes such as public illumination control and solar power energy monitoring.

4 Conclusion

This paper presents an energy monitoring system that aims to improve energy efficiency. The system is developed with open-source technologies. The hardware is built with an Arduino microcontroller as the processing unit and an ESP8266 as the communication unit. The sensing unit is composed of an ACS712 current sensor, a relay and a voltage sensor. The results obtained are very promising, representing a significant contribution to energy monitoring systems based on the Internet of Things.

In addition to this validation study, improvements to the physical system, the Web portal and the mobile app have been planned with a view to adapting the system to specific cases. In spite of all the advantages of using an IoT architecture, there still exist many open issues, such as scalability, quality of service, and security and privacy; future work on the iPlug system should study ways to respond to these problems.

Compared to existing systems, it has great importance due to the use of low-cost and open-source technologies. Note that the system has advantages in ease of installation and configuration due to the use of wireless technology for communications, but also because iPlug is developed to be compatible with all domestic houses and not only with smart or high-tech houses. This system is extremely useful for monitoring energy consumption inside buildings, to better understand the current energy consumption of a specific electrical device as well as to study the energy consumption behaviour of the user. Thus, the system can be used to help the building manager with proper operation and maintenance to improve the energy efficiency of the building.

References

1. Patterson, M.G.: What is energy efficiency? Energy Policy 24(5), 377–390 (1996)
2. Bolla, R., Bruschi, R., Davoli, F., Cucchietti, F.: Energy efficiency in the future internet: a survey of existing approaches and trends in energy-aware fixed network infrastructures. IEEE Commun. Surv. Tutor. 13(2), 223–244 (2011)
3. Wei, M., Patadia, S., Kammen, D.M.: Putting renewables and energy efficiency to work: how many jobs can the clean energy industry generate in the US? Energy Policy 38(2), 919–931 (2010)
4. Kneifel, J.: Life-cycle carbon and cost analysis of energy efficiency measures in new commercial buildings. Energy Build. 42(3), 333–340 (2010)
5. Luan, H., Leng, J.: Design of energy monitoring system based on IOT. In: Chinese Control and Decision Conference (CCDC), pp. 6785–6788 (2016)


6. Sučić, B., Anđelković, A.S., Tomšić, Ž.: The concept of an integrated performance monitoring system for promotion of energy awareness in buildings. Renew. Energy Sources Healthy Build. 98, 82–91 (2015)
7. Adhya, S., Saha, D., Das, A., Jana, J., Saha, H.: An IoT based smart solar photovoltaic remote monitoring and control unit, pp. 432–436 (2016)
8. Zhao, L., Zhang, J., Liang, R.: Development of an energy monitoring system for large public buildings. Energy Build. 66, 41–48 (2013)
9. Caruso, M., et al.: Design and experimental characterization of a low-cost, real-time, wireless AC monitoring system based on ATmega 328P-PU microcontroller, pp. 1–6 (2015)
10. Bhoyar, R., Ghonge, M., Gupta, S.: Comparative study on IEEE standard of wireless LAN/Wi-Fi 802.11 a/b/g/n. Int. J. Adv. Res. Electron. Commun. Eng. (IJARECE) 2(7), 687–691 (2013)
11. D’Ausilio, A.: Arduino: a low-cost multipurpose lab equipment. Behav. Res. Methods 44(2), 305–313 (2012)
12. Baig, F., Mahmood, A., Javaid, N., Razzaq, S., Khan, N., Saleem, Z.: Smart home energy management system for monitoring and scheduling of home appliances using zigbee. J. Basic Appl. Sci. Res. 3(5), 880–891 (2013)
13. Songle Relay. https://www.ghielectronics.com/downloads/man/20084141716341001RelayX1.pdf
14. Allegro Microsystems: Fully Integrated, Hall Effect-Based Linear Current Sensor IC with 2.1 kVRMS Isolation and a Low-Resistance Current Conductor (2013). http://www.allegromicro.com/~/media/files/datasheets/acs712-datasheet.ashx
15. Espressif Systems: ESP8266EX Datasheet (2015). http://download.arduino.org/products/UNOWIFI/0A-ESP8266-Datasheet-EN-v4.3.pdf
16. Shi, Y.: Remote control system of classroom based on embedded web server. In: Du, W. (ed.) Informatics and Management Science VI. Lecture Notes in Electrical Engineering, vol. 209, pp. 165–171. Springer, Heidelberg (2013)

IT Management of Building Materials’ Planning and Control Processes Using Web-Based Technologies

Adedeji Afolabi(✉), Olabosipo Fagbenle, and Timothy Mosaku

Department of Building Technology, Covenant University, Ota, Ogun State, Nigeria
{adedeji.afolabi,olabosipo.fagbenle,timothy.mosaku}@covenantuniversity.edu.ng

Abstract. Mismanagement of building materials has constantly plagued the construction industry, resulting in issues of cost overrun, delay, high levels of construction waste, wastefulness, project abandonment, climate change, etc. The purpose of this research is to examine the IT management of building materials’ planning and control processes using web-based technologies. The study made use of a desktop review of the literature and a case diagram to illustrate the various interactions involved in the use of such an IT system. A framework of the drivers and barriers that affect the use of web-based technologies in the planning and control of building materials was developed, so that IT management can be achieved by construction managers and the construction firm’s head office. In conclusion, the study developed a framework for a web-based material planning and control system for construction project delivery that engenders openness, transparency and accountability in the management of building materials on construction sites.

Keywords: Building materials · e-Corporate governance · ICT · Planning and control systems · Web-based technologies

1 Introduction

The introduction of the internet and the World Wide Web (WWW) has moved the world to a new phase of advancement, driven by the extreme demands of the business world and the social enthusiasm of the public. For the last three decades, these two entities have become indispensable, changing the way activities are conducted. The internet and the World Wide Web (WWW) are rapidly changing environments driven by technological advancements and the perceived needs of organizations that are poised to gain competitive advantage and unprecedented opportunities by engendering a web presence. As at 2001, Joshim Aref, Ghaffor and Spafford projected that web-based applications in the e-commerce market would exceed one trillion dollars ($1 trillion) over the next several years, due to the large number of web applications being developed. This is not far-fetched: individuals and organizations such as Facebook (social media), Amazon (e-commerce), Google (search engine), Alibaba (e-commerce), Netflix (web portal), and Microsoft and Apple (software and hardware), to mention a few, gross billions of dollars in income annually as a result of their activities via the internet and the World Wide Web (WWW). The unmatched prospects and benefits of the internet and the web, translating into web-based applications, mean they are here to stay and should therefore be explored to solve everyday issues such as planning, which is an important aspect of life.

According to [1], information and communication technology (ICT) tools are a masterpiece in increasing organizational output. The introduction of information and communication technology (ICT) in the construction industry over 40 years ago has drastically transformed the traditional landscape of the industry into one with faster and more sophisticated processes. The rationale for the use of ICT in the construction industry can be understood from the unique nature of the industry. The industry is perceived as one of the largest employers of labour, making it information-intensive in terms of paperwork, processes and communication [2]. These require close coordination, which ICT offers. [3] added that the internet is a major driver increasing the use of ICT due to its ability to connect various project participants in diverse locations in order to readily exchange information. The close coordination is required due to the heavy exchange of data and information that takes place among the project participants on a daily basis [4]. However, [5] stated that the adoption of ICT by construction firms has been very slow. [6] noted that the majority of construction process information is still heavily based on traditional means of communication, such as huge paperwork and face-to-face meetings. As a result, [7], as cited in [3], argued that the industry has suffered from difficult-to-access, out-of-date and incomplete information, whereas [8] stated that the construction industry needs to increase the efficiency of its information management by ensuring that massive volumes of accurate information are exchanged at high speed and at relatively low cost. The poor oversight of the information generated and used in the construction industry has resulted in cost, time and quality related issues being encountered in the industry.

Specific to construction projects and construction firms, the subject of mismanagement of resources cannot be overemphasized. A major resource which is largely mismanaged on construction sites and affects the project, client, contractor, the firm, the environment and the nation’s economy is building materials. Building materials make up as much as 60–65% of the total working capital of any construction project [9]. The critical nature of mismanaging this entity affects project cost, time and quality [10]. Matters of unavailability of materials, over-ordering, under-ordering, lack of storage space, bad stock control and inappropriate material delivery [10, 11], which are attributes of ineffective planning and control of building materials, have been traced to problems of time overrun, cost overrun, dissatisfaction of the client, high levels of waste, high reports of theft, low quality, abandonment of projects, construction delay and lack of construction data [12, 13]. Moreover, some construction firms have tried to solve these issues by stockpiling building materials; this has led to tying down limited capital, increased waste and construction material theft [11].

According to [3, 6, 10], the traditional construction methods apply more paper-based work in their data and document management during the construction process, whereas the emergence of ICT systems could transform conventional into modern methods of managing construction activities. With over 60% of construction professionals connected to the internet, most construction professionals’ use of the internet is mainly for sending e-mails, which is a far cry from the capacity and benefits which the internet and web-based technologies can contribute to the construction industry [3]. This study aims to examine the IT management of building materials’ planning and control processes using web-based technologies. With this understanding, the study intends to proffer answers to the following research questions:

• What are the drivers and barriers to the effective use of web-based technologies in planning and control of building materials?
• How can IT systems be used in managing building materials’ planning and control processes using web-based technologies?

1.1 IT Management

The Nigerian construction industry is a large employer of labour, bringing people together to work. The large number of specialized but independent organizations and individuals requires close coordination in order to achieve the cost, time and quality objectives of a construction project. The close coordination is required due to the heavy exchange of data and information that takes place among the project participants on a daily basis [4]. [6] noted that the majority of construction process information is heavily based on traditional means of communication, such as huge paperwork and face-to-face meetings. As a result, [7] argued that the industry has suffered from difficult-to-access, out-of-date and incomplete information. This called for the use of information and communication technology (ICT). [3] opined that ICT is a potent tool for accelerating socio-economic development and narrowing the gap between developing and developed countries. According to [14], information technologies can assist project and construction managers to standardize routine tasks so that available organizational resources are utilized both effectively and efficiently.

According to [15], computers opened the door to inventory systems in material management, helping to keep up-to-date records on the status of every item in stock. This brought a better understanding of production operations and new ways of managing production. Computer-based or IT-based material management techniques have been developed over the years. Examples are material requirements planning (MRP I & II), a computer-based information system designed to control manufacturing activities [15–17]; electronic mail (e-mail) and electronic commerce, including electronic invoicing, payments and receipt of materials [18]; the Construction Materials Planning System (CMPS) [19]; the Material Handling Equipment Selection Advisor (MHESA) [20]; the Construction Materials Exchange (COME) [21]; bar-code systems for material storage applications [22]; and, most commonly used, Microsoft Excel and Lotus 1-2-3 [23]. Most of these computer-based applications were made for the manufacturing sector and are not real-time or internet-based.

2 Methodology

The study made use of a desktop review of the literature and a case diagram to illustrate the various interactions involved in the use of an IT system for planning and control of building materials on a construction site through web-based technologies. The IT system is set up such that it can only be accessed via an online platform through web browsers on desktop systems. It adopts the framework of [30], where the database system is designed using MySQL connected to the HTML web interface through a PHP script processing the data back and forth. From the literature, a framework of the drivers and barriers to the implementation of web-based technologies for planning and control of building materials was developed. The study also presents the architectural design of the web application.
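
The paper does not publish the database schema behind this architecture. Purely to illustrate the kind of record keeping the MySQL back end would hold (material deliveries and usage per project), the following sketch uses Python with SQLite standing in for the authors' MySQL/PHP stack; the table and column names are assumptions, not the actual design.

```python
import sqlite3

# In-memory SQLite stands in for the MySQL database used by the web application.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE material_log (
    project   TEXT,
    material  TEXT,      -- e.g. cement, sharp sand, granite
    quantity  REAL,      -- positive = delivered to site, negative = used
    logged_at TEXT
)""")

def log_movement(project, material, quantity, logged_at):
    """Record a delivery (positive quantity) or usage (negative quantity)."""
    db.execute("INSERT INTO material_log VALUES (?, ?, ?, ?)",
               (project, material, quantity, logged_at))

def on_site_balance(project, material):
    """Current quantity of a material remaining on site for a project."""
    row = db.execute(
        "SELECT COALESCE(SUM(quantity), 0) FROM material_log "
        "WHERE project = ? AND material = ?", (project, material)).fetchone()
    return row[0]

log_movement("Site A", "cement (bags)", 200, "2017-01-10")
log_movement("Site A", "cement (bags)", -75, "2017-01-14")
print(on_site_balance("Site A", "cement (bags)"))   # -> 125.0
```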

3 Drivers of Web-Based Technologies

[3] asserted that the internet (the World Wide Web) is a major driver increasing the use of ICT due to its ability to connect various project participants in diverse locations in order to readily exchange information. In the study by [3], factors influencing the adoption of ICT include the level of competition, changing trends in technology, client/customer demand and construction industry demands. In the manufacturing sector, [24] asserted that today’s global business competition imposes special requirements on manufacturing enterprises, such as rapid response to changing requirements and reduction in both the time and cost of the product realization process. In the Unified Theory of Acceptance and Use of Technology (UTAUT) model, [25] stated that four drivers play a significant role in the acceptance and usage of ICT, namely performance expectancy, effort expectancy, social influence and facilitating conditions. In addition, successful technology adoption by expected users in construction firms requires implementation support and encouragement from senior managers if individuals are to adopt and utilize the technology [26]. Figure 1 shows the drivers of the use of web-based technologies in the construction industry, together with the barriers, indicating what would lead to the usability and acceptability of web-based technologies in the construction industry in order to meet the construction project delivery objectives of time, cost, quality and customer satisfaction. Similarly, [27] found that effective upper management support was one of the strongest enablers of innovation implementation in construction firms. [28] added that, to reinforce this commitment, all old and informal systems must be eliminated. [29] identified other drivers, which include age, gender, education and computer experience.

4 Barriers to Using Web-Based Technologies

It is continually acknowledged that the construction industry has great potential for the uptake of ICT and e-business [30], but it appears that some fundamental problems still exist. Some of these issues include organizational factors (people and process); the enabling environment and supportive infrastructure; and the actual technology itself [31]. On the organizational factor, [26] opined that poor user acceptance can occur when transitioning from an existing system, such as a paper-based system, to a new system, such as a fully electronic environment. [32] explained that when organizations implement a new technology, employees are commonly not ready to adopt that technology and resist its introduction. According to [3], the factors impeding the use of ICT in the construction industry include insufficient/erratic power supply, job sizes and fees not large enough for ICT, high cost of hardware/software, fear of virus attacks, high rate of obsolescence of hardware/software, inadequate ICT content in construction education, scarcity of professional software, high cost of engaging computer staff, lack of management desire and appreciation of ICT, security, low return on investment, personnel abuse and fear of ICT making professionals redundant. Similarly, [33] identified the main barriers to ICT transformation in materials management as: the industry’s belief that hard copy is substantial, limited technical life cycle, dearth of an innovative culture, lack of flexibility of new technologies, lack of reliability, difficulty of integration into existing processes, risk of technical malfunction, absence of trained staff, resistance from employees, time taken for training, lack of market information, uncertain economic situation, uncertain returns on investment, cost of training staff, high cost of specialist software and maintenance cost. [13, 17] also believed that the lack of user-friendly construction software packages is a major barrier to the development of ICT in the construction industry. Figure 1 also shows the barriers to the use of web-based technologies in the construction industry.

Fig. 1. Drivers and barriers to web-based technologies in the construction industry. Source: Authors’ design


Fig. 2. Architectural design of the WB-MPC. Source: Authors’ design

5 System Design and Implementation

According to [34], the development of a successful web application (website) involves many different kinds of design, including functional design, software architecture, business process or workflow design, user interface design, and database design. The purpose of the web-based material planning and control (WB-MPC) model is to provide an interactive web-based interface which allows construction professionals to estimate and store building material quantities while planning and controlling the usage of building materials over time. The system is designed in such a way that a shortfall in the quantities of selected building materials can be brought to the notice of the construction professional by periodic short message service (SMS) or e-mail notifications. Figure 2 shows the architectural design of the web-based material planning and control (WB-MPC) system, an IT system that can be used for managing these processes. The administrator or project manager for the construction site and the back-end users can access the platform through a login page. The Director, Head Office or any other official permitted through the people-and-permission settings can access critical information on the state of the building materials to be used and on site, leading to transparency, openness and accountability. In addition, there is a messaging platform which back-end users can use to make and send clarifications to the project manager on what was noticed about the ongoing project. The messaging platform also stores the previous messages that have been sent to the project manager about shortfalls in the building materials on site.
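
The rule behind these shortfall notifications is not given in the paper. The sketch below shows one plausible check, comparing on-site stock against the quantities estimated for upcoming activities, with the actual SMS/e-mail dispatch (performed by the real system's PHP back end) reduced to a stub; all names and quantities are illustrative.

```python
def shortfall_alerts(stock, required):
    """Return one alert message per material whose on-site stock is below
    the quantity estimated for upcoming activities (illustrative rule)."""
    alerts = []
    for material, needed in required.items():
        on_hand = stock.get(material, 0)
        if on_hand < needed:
            alerts.append(f"Shortfall: {material} - on site {on_hand}, "
                          f"required {needed} (short by {needed - on_hand})")
    return alerts

def notify(messages):
    """Stand-in for the periodic SMS/e-mail dispatch to the project manager."""
    for m in messages:
        print("[notify project manager]", m)

stock = {"cement (bags)": 40, "granite (tonnes)": 12, "sharp sand (tonnes)": 9}
required = {"cement (bags)": 120, "granite (tonnes)": 10, "sharp sand (tonnes)": 15}
notify(shortfall_alerts(stock, required))
```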

6 Conclusion and Recommendation

The study developed a driver and barrier framework for the effective use of web-based technologies in the planning and control of building materials. The study presented the architectural design of an IT system for managing building materials’ planning and control processes using web-based technologies. The study recommended the use of web-based technologies in the construction industry in order to engender administrative efficiency, transparency, openness and accountability. In addition, there is a need to increase investment in ICT and in the ICT training of construction professionals in the construction industry.

Acknowledgement. The authors would like to thank Covenant University for their financial support towards the publication of this article.

References

1. Nweke, F.H., Ugwu, E.G., Ikegwu, C.A.: Design of project activities tracking system for enhanced project management in Nigeria. Int. J. Res. (IJR) 2(4), 925–937 (2015)
2. Afolabi, A., Emeghe, I., Oyeyipo, O., Ojelabi, R.: Professionals’ preference for migrant craftsmen in Lagos State. Mediterr. J. Soc. Sci. 7(1), 501–508 (2016)
3. Oladapo, A.A.: The impact of ICT on professional practice in the Nigerian construction industry. Electron. J. Inf. Syst. Dev. Countries 24(2), 1–19 (2006)
4. Maqsood, T., Walker, D.H.T., Finegan, A.D.: An investigation of ICT diffusion in an Australian construction contractor company using SSM. In: Proceedings of the Joint CIB–W107 and CIB–TG23 Symposium on Globalisation and Construction, Bangkok, Thailand, 17–19 November, pp. 485–495 (2004)
5. Mole, K.F., Ghobadian, A., O’Regan, N., Liu, J.: The use and deployment of soft process technologies within UK manufacturing SMEs: an empirical assessment using logit models. J. Small Bus. Manage. 42(3), 303–324 (2004)
6. Mohamed, S., Stewart, R.A.: An empirical investigation of users’ perceptions of web-based communication on a construction project. Autom. Constr. 12, 43–53 (2003)
7. Shoesmith, D.R.: Using internet as a dissemination channel for construction research. Constr. Info. Technol. 3(2), 65–75 (1995)
8. Deng, Z.M., Li, H., Tam, C.M., Shen, Q.P., Love, P.E.D.: An application of internet-based project management system. Autom. Constr. 10, 239–246 (2001)
9. Formoso, C.T., Revelo, V.H.: Improving the materials supply system in small-sized building firms. Autom. Constr. 8, 663–670 (1999)
10. Kasim, N.B., Anumba, C.J., Dainty, A.R.J.: Improving materials management practices on fast-track construction projects. In: 21st Annual ARCOM Conference, SOAS, University of London, vol. 2, pp. 793–802 (2005)
11. Equere, E., Tang, L.C.M.: Dearth of automation: the consequences in Nigeria construction industry. Autom. Constr. 14(4), 500–511 (2010)


12. Hussin, J.M., Rahman, I.A., Memon, A.H.: The way forward in sustainable construction: issues and challenges. Int. J. Adv. Appl. Sci. 2(1), 15–24 (2013)
13. Mehr, S.Y., Omran, A.: Examining the challenges affecting the effectiveness of materials management in the Malaysian construction industry. Int. J. Acad. Res. 5(2), 56–63 (2013)
14. Adam, F., Carton, F., Sammon, D.: Project management: a case study of a successful ERP implementation. Int. J. Manage. Projects Bus. 1, 106–124 (2007)
15. Islam, M.S., Rahman, M.M., Saha, R.K., Saifuddoha, A.M.: Development of Material Requirements Planning (MRP) software with C language. Global J. Comput. Sci. Technol. Softw. Data Eng. 13(3), 12–22 (2013)
16. Duzcukoghu, H.: Development a software for material requirements planning and a case study for real Huğlu. J. Technol. 5(3/4), 47–53 (2002)
17. Oladokun, V.O., Olaitan, O.A.: Development of a Materials Requirements Planning (MRP) software. Pac. J. Sci. Technol. 13, 351–357 (2012)
18. Harris, F., MacCaffer, R.: Modern Construction Management. Blackwell Science, London (2001)
19. Wong, E.T.T., Norman, G.: Economic evaluation of materials planning systems for construction. Constr. Manage. Econ. 15, 39–47 (1997)
20. Chan, F.T.S.: Design of material handling equipment selection system: an integration of expert system with analytic hierarchy process approach. Integr. Manuf. Syst. 13, 58–68 (2002)
21. Kong, S.C.W., Li, H.: An e-commerce system for construction material procurement. Constr. Innov. 1, 43–54 (2001)
22. Chen, Z., Li, H., Wong, C.T.C.: An application of bar-code system for reducing construction wastes. Autom. Constr. 2, 521–533 (2002)
23. Sun, M., Howard, R.: Understanding IT in Construction. Spoon Press, London (2004)
24. De Wolf, C.: Material Quantities in Building Structures and their Environmental Impact. Unpublished MSc. thesis, Department of Architecture, Massachusetts Institute of Technology (MIT), USA (2014)
25. Qiang, L., Khong, T.C., San, W.Y., Jianguo, W., Choy, C.: A web-based material requirement planning integrated application. In: EDOC 2001: Proceedings of the 5th IEEE International Conference on Enterprise Distributed Object Computing, Washington, DC, USA, p. 14 (2001)
26. Venkatesh, V., Morris, M.G., Davis, G.B., Davis, F.D.: User acceptance of information technology: toward a unified view. MIS Q. 27(3), 425–478 (2003)
27. Peansupap, V., Walker, D.H.T.: Factors affecting ICT diffusion: a case study of three large Australian construction contractors. Eng. Constr. Architectural Manage. 12(1), 21–37 (2005)
28. Gambatese, J.A., Hallowell, M.: Enabling and measuring innovation in the construction industry. Constr. Manage. Econ. 29(6), 553–567 (2011)
29. Umble, E.J., Haft, R.R., Umble, M.M.: Enterprise resource planning: implementation procedures and critical success factors. Eur. J. Oper. Res. 146, 241–257 (2003)
30. Sargent, K., Hyland, P., Sawang, S.: Factors influencing the adoption of information technology in a construction business. Australas. J. Constr. Econ. Build. 12(2), 72–86 (2012)
31. Anumba, C., Ruikar, K.: Electronic commerce in construction – trends and prospects. Autom. Constr. 11, 265–275 (2002)
32. Goulding, J.S., Lou, E.C.W.: E-readiness in construction: an incongruous paradigm of variables. Architectural Eng. Des. Manage. 9, 265–280 (2013)
33. Kasim, N.B., Ern, P.A.S.: The awareness of ICT implementation for materials management in construction projects. Int. J. Comput. Commun. Technol. 2(1), 1–10 (2013)
34. Wasserman, A.I.: Principles for the Design of Web Applications. Center for Open Source Investigation (COSI) (2006). http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.81.875&rep=rep1&type=pdf. Accessed 25 Sept 2015

Characteristics of a Web-Based Integrated Material Planning and Control System for Construction Project Delivery

Adedeji Afolabi(✉), Olabosipo Fagbenle, and Timothy Mosaku

Department of Building Technology, Covenant University, Ota, Ogun State, Nigeria
{adedeji.afolabi,olabosipo.fagbenle,timothy.mosaku}@covenantuniversity.edu.ng

Abstract. In the over 40 years that information and communication technology has been integrated into the construction industry, the adoption and use of the internet has been relatively low and slow. In the Nigerian construction industry, the internet has been used mainly for communication rather than for the many other things the platform has to offer. The purpose of this study is to examine the characteristics of a web-based integrated material planning and control system for construction project delivery. The study used case diagrams and an MVC model in designing the platform for the web-based system. The study revealed that a web-based system can provide good inventory keeping, a good retrieval system, a notification and prompting system, and third-party viewing for good oversight of construction projects. The study recommended more innovative use of the internet for solving the many challenges confronting the construction industry.

Keywords: Characteristics · Internet · Material planning and control · Project delivery · Web-based

1 Introduction

A global concern among many construction businesses has been how to preserve and ensure their competitiveness. This means that construction firms have to avoid extra costs, which can lead to huge losses for the firm [1]. In meeting these goals, construction firms must look at maximizing the use of two critical entities: construction materials and information and communication technology (ICT).

Materials are vital to the activities of any industry, since unavailability of materials can impede production. It is worth noting that unavailability of materials is not the only issue that can cause problems; over-stocking excessive quantities of materials can also create serious problems for managers, in that storage of materials raises the costs of production and the overall cost of any project. [2] opined that prolonged stockpiling of materials ties down capital that would otherwise have been better invested and requires extensive storage facilities and space. Therefore, construction firms should focus on construction materials by lowering total costs in the supply chain,
shortening throughput times, drastically reducing inventories, expanding product choice, providing more reliable delivery dates and better customer service, improving quality, and efficiently coordinating global demand, supply and production [3]. In order to attain this competitive edge, [4] suggested that construction professionals should adopt the use of information and communication technology (ICT). According to [5], computers opened the door to inventory systems in material management, helping to keep up-to-date records on the status of every item in stock. However, [4, 6, 7] noted that the traditional construction methods used in the Nigerian construction industry apply more paper-based work in their data and document management during the construction process, whereas the emergence of ICT systems could transform conventional into modern methods of managing construction activities. In this dynamic and changing environment, [7–9] encouraged the use of more computer-based systems to improve material management on construction sites.

Traditional material processes done on a paper-based system have many associated drawbacks: low accuracy, time consumption, labour consumption, loss of data, aiding of corruption, high theft and high uncertainty [9]. These are some of the ugly sides of the construction industry which this research intends to address. With over 60% of construction professionals connected to the internet [4], Nigeria is yet to massively deploy computer-based production and inventory systems compared with the developed economies of the highly industrialized nations [10]. This study intends to examine the characteristics of a web-based system used for planning and control of construction materials for achieving construction project delivery in terms of time, quality and cost.

1.1 Material Planning and Control

In order to plan and control a successful construction project, the three parameters of time, cost and quality should be considered [11]. [1] argued that in the analysis of the construction process an equilibrium must be established among the three primary concerns of time, cost and quality. [12] stated that these factors are like three points of a triangle and that neglecting one factor will have a corresponding detrimental effect upon the other two. [13] noted that there has been universal criticism of the failure of the construction industry to deliver projects in a timely way. According to [11], clients have been increasingly concerned with the overall profitability of projects and the accountability of projects generally. Cost overruns, in association with project delays, are frequently identified as one of the principal factors leading to the high cost of construction [14]. When the three components of time, quality and cost are successfully integrated, the project will begin to realize significant, measurable and observable improvements in the attainment of the clients’ objectives [11]. This integration can be made possible through an effective and efficient planning and control system made available in the crucial activities of the construction project.

An important aspect of material management that takes place on building construction sites is material planning. Planning is the formalization of what is intended to happen at some time in the future. A plan does not, however, guarantee that an event will actually happen, hence the need for controls to help cope with the changes that may occur. Materials planning in a construction process involves
quantifying, ordering and scheduling materials. [15] added that the material planning process is incomplete until a proper record is set up and maintained while determining target inventory levels and delivery frequency. Fundamentally, a critical purpose of materials planning is to procure the materials for the dates when they are needed. [5, 16] stated that two crucial things lacking in material planning on construction sites are that construction professionals hardly keep proper records and that most construction sites experience material delays. [17] argued that material planning is all about achieving the objectives of the organization, such as quality (what is needed), time (when it will be available), cost (how much) and location (where it is needed). These activities must be fulfilled in order to ensure that the material planning process is comprehensive.

An effective material planning system cannot be achieved unless controls are put in place. Material control makes the adjustments which allow the construction operation to achieve the objectives that the plan has set. Control operates on a short-term basis, addressing the resource constraints that may occur at a particular time. With the cost of materials alone in a building construction project put at 55 to 65% [18], [19] suggested that optimum material control on site should be adopted in order to reduce the cost of construction projects. The construction sector is reported to be generating unacceptable levels of material and manpower waste, basically due to the lack of effective material control systems on construction sites. All estimators allow wastage factors in pricing a bill of quantities, but experience has shown that, unless site management control of materials is tight, wastage can frequently exceed, often by a large margin, the figure allowed in the tender document. Planning and control is concerned with managing the ongoing activities of the operation so as to satisfy customer demand.

1.2 Web-Based Technologies

Many sectors have enjoyed the use of ICT and web-based systems in the management of their resources, which has helped to keep up-to-date records on the status of every item in stock [5]. Previous work on computer- or web-based material planning and control systems has focused on the manufacturing sector [5, 20, 21]. Some ICT-integrated material management systems include electronic mail (e-mail) and electronic commerce, including electronic invoicing, payments and receipt of materials [22]; the Construction Materials Planning System (CMPS) [23]; the Material Handling Equipment Selection Advisor (MHESA) [24]; the Construction Materials Exchange (COME) [25]; barcode systems for material storage applications [26]; and, most commonly used, Microsoft Excel and Lotus 1-2-3 [27]. According to [28], ICT-integrated material management would strongly ease unnecessary loss of materials; increase efficiency and productivity; deliver higher customer service levels, better space utilization, employee satisfaction and an integrated supply chain; ensure the availability and quality of materials; ensure the right time and place of material delivery; reduce errors and rework; and reduce time overrun and material waste.

A web-based material planning system is defined as a computer-based information system connected to the internet, designed to keep inventory, control, estimate and keep track of the availability of materials to be used on a construction site at any particular period
of time. A construction research firm, Daratech, estimated that anywhere from 5 to 10% of a construction project’s cost can be saved by using web-based technologies [2]. In order to improve productivity in ordering and quotation activities, contractors and suppliers could change their activities from conventional to more sophisticated or innovative tools and techniques. A large-scale use of the internet integrated into the organizational system is the introduction of Enterprise Resource Planning (ERP) [3, 29]. For the manufacturing sector, [20] designed a web-based material requirement planning integrated application that exploits distributed object technology to develop an enterprise application, integrating material requirement planning (MRP) with a job shop simulator. [30] developed a web-based and automated method to assign and track project progress, provide periodic SMS alerts to project managers and generate periodic reports on project implementation. Every aspect of the construction process has the possibility of being integrated on a platform with vast potential such as the internet. Its peculiarity, as identified by [31], is that the internet is a form of central medium that allows information to be stored and exchanged in a single place accessible to all parties involved in a project.

2 Methodology

The system developed has focused only on the planning and control of selected structural construction materials such as cement, granite, sharp sand, blocks and steel reinforcement. This is because these materials are crucial to the delivery of construction projects: construction managers tend to stockpile most of these materials, and the unavailability of one or all of them has the tendency to lead to project failure. The study made use of case diagrams to illustrate the relationships between variables in the proposed system. The material planning and control system for construction project delivery was developed using the PHP (PHP Hypertext Preprocessor) language. PHP is very diverse, with a number of frameworks; for this particular application, the MVC (Model View Controller) PHP framework Laravel 5.3 was used. Before the project development began, a rough sketch of what was expected was drafted based on the already prescribed project requirements, enabling the proper division of the project segments into the MVC model. The model and view were the first parts of the project to be worked on, while the controller was completed last. The application uses an ORM (Object Relational Mapper) called Eloquent to ensure cohesion of data between the model, view and controller.

The model structure was developed using MySQL, a real-time, open-source transactional database system. It is important to note that the data in the database is dynamically inputted by the users of the system; hence, a framework of empty tables, records and fields was first created to accommodate any data to be stored or moved through the database. During the project development, HTML5, CSS3 and JavaScript 1.8.5 were the languages used to design the product interface. This interface (the view) was designed to ensure easy navigation for its users and simplicity in delivering the project functions.


In order to have the database filled with data produced by the user through the view, a server-side scripting language was introduced, in this case PHP, making the database a dynamic database management system (DBMS). PHP was used to link each of the functions presented in the view to their storage points in the database. PHP has also been used to create sessions and link user accounts to the database, assigning access levels to different types of users (the system administrators and the ordinary users). As mentioned earlier, the PHP framework used was Laravel 5.3. In order to ensure the integrity of the data entered into certain fields, AJAX (asynchronous JavaScript and XML) was introduced. This helped ensure that the data entering the database was the right kind of data. JavaScript was also used to carry out all calculations that may be required, as data may not remain constant and certain variables may need to be constantly recalculated (e.g. items in the inventory), and the results of these calculations are automatically updated in the database by the controller. The project was developed using Adobe Dreamweaver and the NetBeans IDE.
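
The concrete validation rules are not listed in the paper, and the real checks run in PHP and AJAX. As an illustration only, a server-side validation of the 'add new activity' form described in Sect. 3 might look like the following Python sketch; the field names follow that description, and everything else is assumed.

```python
from datetime import date

def validate_activity(form):
    """Validate one 'add new activity' submission before it is stored.

    Checks are illustrative: the real WB-MPCS enforces its own rules in
    PHP/AJAX; the form fields (name, dates, lead time, quantity) follow
    the description of the Projects interface.
    """
    errors = []
    if not form.get("name", "").strip():
        errors.append("activity name is required")
    try:
        start = date.fromisoformat(form["start_date"])
        finish = date.fromisoformat(form["finish_date"])
        lead = date.fromisoformat(form["lead_time"])
        if finish < start:
            errors.append("finish date precedes start date")
        if lead > start:
            errors.append("lead time must fall before the start date")
    except (KeyError, ValueError):
        errors.append("start, finish and lead-time must be valid dates")
    try:
        if float(form.get("quantity", "")) <= 0:
            errors.append("quantity must be a positive number")
    except ValueError:
        errors.append("quantity must be numeric")
    return errors

print(validate_activity({"name": "Cast first-floor slab", "start_date": "2017-05-02",
                         "finish_date": "2017-05-04", "lead_time": "2017-04-28",
                         "quantity": "12.5"}))   # -> []
```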

3 Characteristics of a Web-Based Integrated MPCS

The characteristics of the web-based integrated material planning and control system for construction project delivery describe the features and benefits engendered on the platform of the WB-MPCS. The following are discussed as the characteristics of the web-based system.

Login page. Visiting the application at the required URL takes you to a welcome page. As the application is auth-based, you must log in to access the information and functionalities that the system provides. Clicking the login button on the welcome page takes you to the login page, and submitting a valid email and password takes you to the home page of the application. If you do not have access to the system, you can contact the administrator. Making the system internet-based ensures that it can be accessed from any location, that it is not susceptible to virus attack and that it can be monitored by the several parties involved in the project. Figure 1 shows a screenshot of the login page.

Fig. 1. Login page of the web-based material planning and control system.

Home. The home page provides an overview of the whole application as well as a navigation point to key areas in the system. You are provided with a summary of the latest projects, a list of upcoming activities, sent/received messages, to-dos, etc. It is a dashboard from which the other interfaces can be accessed (Fig. 2).

Fig. 2. Home page of the web-based material planning and control system.

Fig. 3. Breakdown parameters for estimating quantities of materials in the system

Projects. From the home page, you can view available projects by clicking the ‘all projects’ link, and click on any project to view it. You can create a new project by clicking the ‘new project’ link and filling in the corresponding form, as shown in Fig. 4. These projects consist of activities. After creating a project, you can visit it from the ‘all projects’ page. The ‘add new activity’ form allows you to create a new activity on each project by inputting the activity name, type, start date, finish date and lead time. This system is unique in that the lead time is a date set as a result of several factors that may affect the selected construction material; the project manager therefore estimates a workable time, prior to the start date of the activity, by which the selected construction material should be on the construction site. The ‘input keys’ and ‘input values’ fields allow you to specify, for each kind of activity, the parameters required; not inputting the right parameters might lead to errors. The information entered on the project interface is crucial to the successful use of the web-based material planning system because of the number of activities performed on this interface. It includes the activity list and quantities required, which are extracted from a bill of quantities; the timeline of activities (start date, finish date and lead time), which can be extracted from the programme of work or inputted at the discretion of the project manager; and a breakdown estimate of the materials needed to complete each task. This breakdown requires input values and constants that were programmed into the application. The materials considered include concrete with mix ratios of 1:1:2, 1:2:2, 1:1.5:3, 1:1.67:3.33, 1:2:3, 1:2:3.5, 1:2:4, 1:2.5:3.5, 1:2.5:4, 1:3:4, 1:2.5:5, 1:3:5, 1:3:6 and 1:4:8, which represent cement (bags), sharp sand (tonnes) and granite (tonnes). The steel reinforcement had sizes of 6, 8, 10, 12, 14, 16, 18, 20, 25, 32 and 40, while the hollow sandcrete blocks had three specifications of 100 mm, 150 mm and 225 mm. In order to capture other uses of cement and sand on construction projects, mortar for the laying of blocks, plastering and screeding of floors, which is a mixture of cement and sharp sand, was also considered, in the variants 1:1, 1:1.5, 1:2, 1:2.5, 1:3, 1:4, 1:6 and 1:8. Figure 3 shows the use case diagram of the relationship between the type and the input keys of the selected construction materials. From this illustration, the required quantities of each selected material are calculated and planned for.

Fig. 4. Project page of the web-based system
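To make the breakdown estimation concrete, the sketch below illustrates how quantities of cement, sharp sand and granite could be derived from a concrete mix ratio and an activity volume. It is an illustration only, not the authors' code: the dry-volume factor and the per-unit constants are assumptions for demonstration, whereas the web application embeds its own programmed constants.

# Illustrative sketch: material quantities for a concrete activity from its mix ratio.
# The dry-volume factor and unit constants below are assumed values, not the
# constants programmed into the described application.
MIX_RATIOS = {              # cement : sharp sand : granite (by volume)
    "1:2:4": (1, 2, 4),
    "1:3:6": (1, 3, 6),
    "1:1.5:3": (1, 1.5, 3),
}

def estimate_concrete_materials(mix, volume_m3,
                                dry_factor=1.54,       # assumed bulking factor
                                cement_bag_m3=0.035,   # assumed m3 per 50 kg bag
                                sand_t_per_m3=1.6,     # assumed tonnes per m3
                                granite_t_per_m3=1.5): # assumed tonnes per m3
    """Return estimated cement (bags), sharp sand (t) and granite (t)."""
    c, s, g = MIX_RATIOS[mix]
    total = c + s + g
    dry_volume = volume_m3 * dry_factor
    return {
        "cement_bags": round((dry_volume * c / total) / cement_bag_m3, 1),
        "sharp_sand_t": round((dry_volume * s / total) * sand_t_per_m3, 2),
        "granite_t": round((dry_volume * g / total) * granite_t_per_m3, 2),
    }

print(estimate_concrete_materials("1:2:4", volume_m3=20.0))

A similar lookup, with its own constants, would apply to the mortar variants and reinforcement sizes listed above.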

Inventory. The inventory interface has three main components: the on-hand inventory or stock system, the inventory status (i.e. on-hand inventory minus estimated quantities) and the on-order inventory (quantities expected and receiving dates). The construction materials inventory is kept in terms of cement (bags), sharp sand (tonnes), granite (tonnes), hollow sandcrete blocks (pieces) and steel reinforcement (tonnes or pieces). The interface also provides a filing system for storing scanned receipts, waybills and invoices of materials. As stated earlier, many construction sites use a paper-based method of filing construction materials' documents, which is subject to several drawbacks: documents cannot be retrieved easily when needed, which inhibits the auditing of the material quantities used on the construction project. Report Generator. The report generator produces various reports which give a cohesive summary of a project and help in decision making; reports can be printed out with the ‘print’ button. The report generator shows charts or graphical depictions of planned estimated materials versus actual material on site (stock), material usage in terms of quantity and time, the list of inventory (on-hand and on-order), a project activities summary, total materials used to date and other analyses as required. The system also provides a standard calculator for basic arithmetic operations. Messages. The application allows you to send messages to and receive messages from other users of the system, followed by a corresponding email to/from the user. Figure 5 shows the messaging platform on the web-based material planning and control system. The interface shows a list of messages that have been sent out as notifications to prompt the project manager, via the registered email, about the materials needed for the next activity on the timeline, based on the lead time assigned to the task.

Fig. 5. Messaging platform on the web-based material planning and control system
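The lead-time prompt described above can be summarised with a small sketch. The field names, dates and message wording below are invented for illustration; the actual system issues the notification through its messaging platform and registered email.

# Minimal sketch of the lead-time prompt: when today reaches an activity's
# order-by date (start date minus lead time), a notification text is produced.
from datetime import date, timedelta

def due_material_prompts(activities, today=None):
    """activities: dicts with 'name', 'material', 'start_date' (date) and
    'lead_time_days' (int); returns the notification texts due today."""
    today = today or date.today()
    prompts = []
    for act in activities:
        order_by = act["start_date"] - timedelta(days=act["lead_time_days"])
        if order_by <= today < act["start_date"]:
            prompts.append(f"Order {act['material']} for activity '{act['name']}' "
                           f"(starts {act['start_date']}, lead time {act['lead_time_days']} days)")
    return prompts

example = [{"name": "Ground floor slab", "material": "cement",
            "start_date": date(2017, 5, 20), "lead_time_days": 7}]
print(due_material_prompts(example, today=date(2017, 5, 14)))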

To-dos. Users of the system can create to-do lists of things they are required to do with regard to the projects they are working on; to-dos are private to each user. The note-taking platform helps project managers to record issues relating to the project, acting as an activity planner, a list of contacts for material supply and delivery, and a way to set reminders. The interface helps to keep track, with dates, of other materials not indicated in the project activities and of other activities that need to be performed in relation to the planning and control of construction materials. In addition, project managers can indicate the need to reach out to construction materials' suppliers. The web app has a calendar integrated into the platform, on which project activities are displayed, as shown in Fig. 6.


Fig. 6. Construction project management calendar

People and Permission. This section enables the administrator to give access to back-end login users, such as the client and head office personnel, and to set their level of accessibility in terms of what data can be seen. The administrator registers the back-end users with their emails and names so that they can access the project they have been assigned to through the login page in a web browser, as shown in Fig. 7. This is an advantage of making the system web-based: the web app can be accessed from any location by approved individuals. This platform ensures proper monitoring of the plans and control of construction materials by the project manager.

Fig. 7. People and permission interface of the web-based system

4 Conclusion and Recommendation

The characteristics of a web-based integrated system indicate that it can be accessed anywhere, giving a bird's-eye view of the planning process via the internet. The system involves real-time estimation whereby construction materials' details from different projects can be tracked, stored, retrieved and archived for future purposes. The system has an up-to-date inventory system and a report generator to aid decision making and comparison. The messaging platform integrated with the user's email ensures that the user is adequately prompted about shortfalls and clarifications from the head office and other approved users. Programming and the internet are taking over human activities, including the way of life. A web-based material planning and control tool would help to integrate the use of the internet with a material planning and control technique, ensuring that decision making is done more quickly in a fast-paced world. Making a material planning and control system web-based ensures that information regarding building materials can be accessed anywhere in the world. It also helps protect the data, ensuring that it is not lost when the hardware is damaged or due to virus attack. The construction industry must adapt to this changing trend of using web-based applications to solve critical construction-related activities. Acknowledgement. The authors would like to thank Covenant University for its financial support towards the publication of this article.

References 1. Aina, O.O., Wahab, A.B.: An assessment of build-ability problems in the Nigerian construction industry. Global J. Res. Eng. 11(2), 42–52 (2011) 2. Equere, E., Tang, L.C.M.: Dearth of automation: the consequences in Nigeria construction industry. Autom. Constr. 14(4), 500–511 (2010) 3. Shankarnarayanan, S.: ERP systems—using IT to gain a competitive advantage (2003). http:// www.expressindia.com/newads/bsl/advant 4. Oladapo, A.A.: The impact of ICT on professional practice in the Nigerian construction industry. Electron. J. Inf. Syst. Dev. Countries 24(2), 1–19 (2006) 5. Islam, M.S., Rahman, M.M., Saha, R.K., Saifuddoha, A.M.: Development of Material Requirements Planning (MRP) software with C language. Global J. Comput. Sci. Technol. Softw. Data Eng. 13(3), 12–22 (2013) 6. Mohamed, S., Stewart, R.A.: An empirical investigation of users’ perceptions of web–based communication on a construction project. Autom. Constr. 12, 43–53 (2003) 7. Kasim, N.B., Anumba, C.J., Dainty, A.R.J.: Improving materials management practices on fasttrack construction projects. In: 21st Annual ARCOM Conference, SOAS, University of London, vol. 2, pp. 793–802 (2005) 8. Faniran, O.O., Caban, G.: Minimizing waste on construction project sites. Eng. Constr. Architectural Manage. 5(2), 182–188 (1998) 9. Hadikusumo, B.H., Petchpong, S., Charoenngam, C.: Construction material procurement using internet-based agent system. Autom. Constr. 14(6), 736–749 (2005) 10. Oladokun, V.O., Eneyo, E.S., Charles-Owaba, O.E.: Solving the machine set-up problem: a case with a university production workshop. Global J. Eng. Technol. 2(2), 237–242 (2009) 11. Bowen, P.A., Hall, K.A., Edwards, P.J., Pearl, R.G., Cattell, K.S.: Perceptions of time, cost and quality management on building projects. Aust. J. Constr. Econ. Build. 2(2), 48–56 (2000) 12. Hughes, T., Williams, T.: Quality Assurance. BSP Professional Books, Oxford (1991) 13. Newcombe, R., Langford, D., Fellows, R.: Construction Management, 2nd edn. Mitchell Publishers, London (1990) 14. Charles, T.J., Andrew, M.A.: Predictors of cost-overrun rates. J. Constr. Eng. Manage. ASCE 116, 548–552 (1990)


15. Payne, A.C., Chelsom, J.V., Reavill, L.R.P.: Management for Engineers. Wiley, Chichester (1996) 16. Mehr, S.Y., Omran, A.: Examining the challenges affecting the effectiveness of materials management in the Malaysian construction industry. Int. J. Acad. Res. 5(2), 56–63 (2013) 17. Ogbadu, E.E.: Profitability through effective management of materials. J. Econ. Int. Finance 1(4), 99–105 (2009) 18. Skoyes, R.F.: Material control to avoid waste. Build. Res. Establishment Dig. UK 12(259), 1–8 (2000) 19. Wahab, A.B., Lawal, A.F.: An evaluation of waste control measures in construction industry in Nigeria. Afr. J. Environ. Sci. Technol. 5, 246–254 (2011) 20. Qiang, L., Khong, T.C., San, W.Y., Jianguo, W., Choy, C.: A web–based material requirement planning integrated application. In: EDOC 2001: Proceedings of the 5th IEEE International Conference on Enterprise Distributed Object Computing, Washington, DC, USA, p. 14 (2001) 21. Hasan, A.: A study on material requirement planning system for small scale industries. Mech. Confab 2(3), 96–104 (2013) 22. Harris, F., MacCaffer, R.: Modern Construction Management. Blackwell Science, London (2001) 23. Wong, E.T.T., Norman, G.: Economic evaluation of materials planning systems for construction. Constr. Manage. Econ. 15, 39–47 (1997) 24. Chan, F.T.S.: Design of material handling equipment selection system: an integration of expert system with analytic hierarchy process approach. Integr. Manuf. Syst. 13, 58–68 (2002) 25. Kong, S.C.W., Li, H.: An e-commerce system for construction material procurement. Constr. Innovation 1, 43–54 (2001) 26. Chen, Z., Li, H., Wong, C.T.C.: An application of bar-code system for reducing construction wastes. Autom. Constr. 2, 521–533 (2002) 27. Sun, M., Howard, R.: Understanding IT in Construction. Spoon Press, London (2004) 28. Kasim, N.B., Ern, P.A.S.: The awareness of ICT implementation for materials management. in construction projects. Int. J. Comput. Commun. Technol. 2(1), 1–10 (2013) 29. Ptak, C.A., Schragenheim, E.: ERP: Tools, Techniques, and Applications for Integrating the Supply Chain. St. Lucie Press, Boca Raton (2000) 30. Nweke, F.H., Ugwu, E.G., Ikegwu, C.A.: Design of project activities tracking system for enhanced project management in Nigeria. Int. J. Res. (IJR) 2(4), 925–937 (2015) 31. Olalusi, O.C., Jesuloluwa, O.: The impact of information technology on Nigerian construction industry. Int. J. Eng. Innovative Technol. 2(9), 1–5 (2013)

Integration Between EVM and Risk Management: Proposal of an Automated Framework

Anabela Tereso, Pedro Ribeiro, and Manuel Cardoso

Centre ALGORITMI, University of Minho, Campus de Azurém, 4804-533 Guimarães, Portugal
[email protected], [email protected], [email protected]

Abstract. The integration between Earned Value Management (EVM) and Project Risk Management immediately raises the question of the relationship between the two methods, even though EVM and project risk management share the same function in project management: to promote project success. At first glance there seems to be no further connection between these two methodologies, but the fundamentals of EVM are fully influenced by both risk management and risk analyses. This paper seeks to clarify the mechanisms of this function, and in which points they meet or affect each other, with reference to PMBoK®. The several techniques and methods are somewhat dispersed and not always coherent. With the several assumptions resulting from the interconnection between the two methods, a framework of integration between them is proposed, as well as its practical implementation in Excel, resulting in a set of tools that can be useful in monitoring and controlling a project. Keywords: Project management · Project control · Earned Value Management · Project risk management

1 Introduction

The use of project management is relatively recent and has been motivated by the need for rapid and effective responses to the changing business environment [1]. One of the components of project management with the greatest need for development, due to increasing global competition and rapid technological growth, is project monitoring and control. Project monitoring and control allows one to verify the state of the project during its execution and thus to decide on its continuity within the parameters considered adequate. In case the parameters are outside the appropriate range, corrective measures can still be taken, which aim to improve the state of the project and, finally, to conclude it successfully, that is, within the objectives. Another way to promote a project's success is risk management, in which case the risk of not achieving the project objectives is minimized by creating early responses to the risks that may arise. One of the methods used to monitor and control project status is EVM (Earned Value Management). In this method, the values estimated in the planning phase are compared


with the project execution values, to determine the project's state and in this way make any adjustments that lead to project success. Given the increasing need for project monitoring and control techniques, one of the issues to be addressed is the possibility of using these two methodologies together to obtain additional monitoring and control information. This brings the need to investigate in detail the links between the two methodologies. The two methodologies, having very different approaches, lead to the construction of a framework to summarize the links between EVM and Risk Management. Once the link between the two methodologies is clarified, the idea of transforming this information into something of practical utility in the monitoring and control of projects arises immediately. Based on the above, the goals of this research were:
• Investigate the linkage between EVM and Risk Management;
• Build a framework for integrating the two methodologies;
• Use the framework to create project monitoring and control tools.
The integration between EVM and Risk Management is an area with several lines of research, mainly concerning the introduction of uncertainty in EVM [2, 3]. The approach taken in this paper is similar to that proposed by the Association for Project Management (APM) [4], but based on the Project Management Body of Knowledge (PMBoK®) from the Project Management Institute (PMI) [5]. This type of approach is based on risk stratification, the different responses to each form of risk, and the introduction of the various risk components into the EVM methodology. It aims to include all forms of risk and risk responses and not just uncertainty.

2 Literature Review

Due to the increase in global competition and rapid technological development, many companies have begun to pay more attention to improving project control. This change took place both at the level of internal and external projects, which generated a growing interest in project monitoring and control techniques [6]. The unique nature of the projects creates uncertainty. This uncertainty generates several scenarios in which one or more project objectives may be affected by uncertain events or conditions, leading to the need to make risk assessment during planning. In PMBoK® this area of expertise is called project risk management [5]. On the other hand, when the project is already in execution, in the monitoring and control phase, it is necessary to have tools that allow the evaluation of the state of the project so that, in case of cost or time slippage, one can act and thus avoid or limit the problem. One of the methods recommended in PMBoK® is known as Earned Value Management (EVM) [5]. Traditional project cost management is done by simply analyzing costs over the project time, without an accurate measurement of the work that has actually been performed. We may be within budgeted cost but not have the expected work done. In the EVM methodology, the project performance is measured at each moment in relation


to a cost baseline, which is constructed based on the costs of each task. As these are implemented, we have metrics that indicate whether the project is ahead of or behind schedule and whether it is spending more or less than planned, and we have the possibility to predict the total final cost of the project [7]. The method is based on three values that allow obtaining control indices for the project and predictions of costs and deadlines at the end of the project, namely the Planned Value (PV), Actual Cost (AC) and Earned Value (EV). PV is the authorized budget assigned to scheduled work. AC is the realized cost incurred for the work performed on an activity during a specific time period. EV is the measure of work performed expressed in terms of the budget authorized for that work [5]. With these values we can evaluate the Cost Variance (CV = EV − AC) and Schedule Variance (SV = EV − PV), or the Cost Performance Index (CPI = EV/AC) and Schedule Performance Index (SPI = EV/PV). If CV = 0 and CPI = 1, the cost is as planned. If CV > 0 and CPI > 1, the cost is below what was planned. If CV < 0 and CPI < 1, the cost is above what was planned. The same reasoning can be applied to the schedule, which may be ahead of schedule (SV > 0 and SPI > 1), on schedule (SV = 0 and SPI = 1) or behind schedule (SV < 0 and SPI < 1) [5]. Both PV and EV are based on the Performance Measurement Baseline (PMB), which is built from the Work Breakdown Structure (WBS) and its schedule in the planning phase. Therefore, in order for EVM to be used as a realistic monitoring and control technique for project execution, the first step is to understand and organize the project work during the planning phase. At this planning stage it is necessary to elaborate the scope of the project using the WBS. Through the WBS the work can be organized into more manageable and executable elements, called work packages (or another element of the division of work considered appropriate for the project in question). Then the work has to be scheduled logically until all project work is included. Once the work is scheduled and the resources allocated, the scope, schedule and costs have to be integrated and recorded in a time-phased cost table called the PMB, like the one created during this research project (see Fig. 1), adapted from PMI [8].

Fig. 1. PMB - Performance Measurement Baseline. Adapted from PMI [8]
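The variances and indices defined above translate directly into a few lines of code. The following sketch is only an illustration of those formulas; the PV, EV and AC figures are made-up values, not data from the research project.

# Sketch of the EVM indicators (CV, SV, CPI, SPI) for one reporting date.
def evm_indices(pv, ev, ac):
    cv = ev - ac                              # Cost Variance
    sv = ev - pv                              # Schedule Variance
    cpi = ev / ac if ac else float("nan")     # Cost Performance Index
    spi = ev / pv if pv else float("nan")     # Schedule Performance Index
    return {"CV": cv, "SV": sv, "CPI": cpi, "SPI": spi}

status = evm_indices(pv=120_000, ev=110_000, ac=125_000)
# CV = -15000 and CPI < 1: over cost; SV = -10000 and SPI < 1: behind schedule.
print(status)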

This plan, with the budget based on the work package per month, can be used to measure project performance. It is also necessary to have a technique that matches the physical progress of the work, in order to calculate the earned value of each work package, like the one created during this research project (see Fig. 2), adapted from Anbari [9].


Fig. 2. Physical progress of work by the end of July. Adapted from Anbari [9]

3 EVM and Risk Management Integration Model Developed

One of the objectives of this work was the creation of a model that seeks to integrate EVM with risk management, which is presented in this section. In the first part the risk is stratified in order to create different responses to each type of risk, and it is then shown how each response can influence EVM or how EVM can be used in conjunction with the results of risk management. Practical methods for estimating management and contingency reserves are also described. The second part presents a proposal for a high-level framework, which is not intended to be exhaustive, but rather an example that can be further explored. As risk responses influence the different phases of the EVM methodology, the framework is also a sequence that allows each operation to be placed in its temporal position.

3.1 Exposing Risk in the EVM Methodology

Based on the approach to EVM and risk management presented in the PMBoK® [5], the Practice Standard for Earned Value Management [10] and the Practice Standard for Project Risk Management [11], the different types of risks were summarized in Fig. 3.

Fig. 3. Summary of project risks.

The steps to deal with the known and proactively manageable risks are:
• Risk identification, characterization and documentation.
• Qualitative analysis, including the determination of probabilities and their impact on project objectives, leading to the prioritization of risks.
• Depending on the results of the impact on the project objectives, responses will be developed that include avoiding, transferring, mitigating and accepting the risk. Avoiding, transferring and mitigating cause cost and time changes to the project.


Project changes brought about by the qualitative analysis translate into a more realistic cost baseline and therefore better EVM indices. As for the known and proactively unmanageable risks, there is no way to treat them, either because it is impossible to eliminate the risk or because the risk has been accepted. This type of risk is covered by a contingency reserve. Since this reserve will only be used if the risk actually occurs, the EVM indexes are corrected whenever the risk occurs; the contingency value is only considered in the Performance Measurement Baseline (PMB) if the risk materializes. Unknown risks correspond to a part of the overall project risks and represent unspecified project uncertainty. These risks exist in all projects and, since no measure can be taken, a management reserve is usually allocated to address this uncertainty.

3.2 EVM and Risk Management Integration Framework

In this section we propose a high-level framework where risk management is integrated into the development of EVM (Fig. 4). During the planning phase, the steps proposed are: develop the PMB based on the WBS and the schedule; analyze known risks; make the necessary modifications to the project; update the PMB due to project modifications; from the global uncertainty, estimate the management reserve; include risk components that are known and proactively unmanageable; create the contingency reserves in the control accounts; approve the cost baseline; and obtain an estimate for the project budget where the risk is explicit and documented. Then, during the monitoring and control phase, the steps proposed are: control the management reserve through EVM; and control and check the EVM indexes.

Fig. 4. EVM and risk management integration framework proposal.

4 Prototype for Project Monitoring and Control Support

A prototype was created in Excel to demonstrate the application of the proposed integration between EVM and risk management. For the construction of these spreadsheets, MS Excel version 2013 was used with some subroutines written in Visual Basic for Applications. As the PMB is the basis of EVM, it is the mechanism used to summarize the cost and time of the project. To create a PMB one needs to organize the work, schedule the work and establish the budget. The PMB is a time-phased cost table based on the Work Breakdown Structure (WBS) and on the project schedule. See in Fig. 5 an example of a PMB with the cumulative chart of planned costs, or cumulative Planned Value (PV).

Fig. 5. Example of a PMB with cumulative PV

In the figure, PV 0 would be an initial version before the risk analysis, in which the values were memorized for comparison with future versions. Approved changes to the project would result in PV 1. That is, after the risk analysis, responding to identifiable and treatable risks could result in a change to the PMB similar to the example shown. To estimate the final value of the project taking the uncertainty into account, we can construct a table in which the cost values of the various work elements are not fixed but are instead probabilistic distributions. Although the uncertainty is present both in cost and in time, in this model we only considered the cost uncertainty. Using Monte Carlo simulation [12] with the cost model, we can obtain a probabilistic distribution of the possible cost results for the project. For this purpose we used @Risk, an add-in for Microsoft Excel from Palisade Corporation. The three-point estimation technique of PERT (Program Evaluation and Review Technique) was also used to improve the accuracy of the estimates for the costs of the activities [13]. The management reserve is the amount added to the total project budget to respond to the overall project risk and should therefore depend on the uncertainty.
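The idea can be sketched outside Excel as well. The code below is only an illustration, not the prototype (which uses the @Risk add-in): each work package's cost is sampled from a Beta-PERT distribution built from its three-point estimate, the totals are simulated, and the management reserve is taken as the gap between a chosen confidence level and the deterministic budget. The 80% confidence level and the sample estimates are assumptions made here for demonstration.

# Sketch of a PERT three-point / Monte Carlo estimate of the management reserve.
import random

def pert_sample(low, mode, high, lam=4.0):
    """One Beta-PERT draw from a three-point (min, most likely, max) estimate."""
    alpha = 1 + lam * (mode - low) / (high - low)
    beta = 1 + lam * (high - mode) / (high - low)
    return low + random.betavariate(alpha, beta) * (high - low)

def management_reserve(packages, confidence=0.80, runs=10_000):
    totals = sorted(sum(pert_sample(*p) for p in packages) for _ in range(runs))
    budget = sum(mode for _, mode, _ in packages)       # deterministic PMB total
    at_confidence = totals[int(confidence * runs) - 1]  # simulated cost at the chosen level
    return max(0.0, at_confidence - budget)

work_packages = [(9_000, 10_000, 14_000), (18_000, 20_000, 27_000), (4_500, 5_000, 7_000)]
print(f"Management reserve = {management_reserve(work_packages):,.0f}")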


The following is a practical example (Fig. 6) where the management reserve is estimated from the three-point technique in conjunction with the Monte Carlo simulation.

Fig. 6. Management reserve and PMB

In this example the columns (Min), (Budget), and (Max) represent the pessimistic, most likely, and optimistic budget values respectively. The (Result) column represents the budget evaluated by Monte Carlo simulation for the work package. The distribution obtained using Monte Carlo simulation is shown in Fig. 7.

Fig. 7. Distribution obtained with the Monte Carlo simulation

The risks identified and not treated proactively are covered by a contingency reserve. In the example presented, the table shown in Fig. 8 serves to relate each element of the work to all risks identified and not treated proactively. Adding up the exposure values of these risks gives the total risk exposure for the work package, and the total column shows the value of the contingency reserve for each work package. In Fig. 9, the contingency value appears as a total added to the PMB. This figure shows the total value of the contingency reserve for the project, taking into account the risks registered. As the contingency reserve is only allocated if the risk materializes, the amounts are only added to the PMB during the execution of the project. In practice, the figure allows the total value of the reserve for the identified risks to be controlled.

Fig. 8. Risk exposure table

Fig. 9. Risks included on the PMB
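A short sketch of the exposure table logic may help. It assumes the usual definition of risk exposure as probability times cost impact; the work packages, probabilities and impacts below are invented for illustration and do not come from the prototype.

# Sketch of the per-work-package contingency reserve behind the risk exposure table:
# each registered, untreated risk contributes probability x cost impact.
def contingency_reserves(risk_register):
    """risk_register: {work_package: [(probability, cost_impact), ...]}"""
    return {wp: sum(p * impact for p, impact in risks)
            for wp, risks in risk_register.items()}

register = {
    "WP1 Foundations": [(0.30, 8_000), (0.10, 15_000)],   # exposure 3,900
    "WP2 Structure":   [(0.20, 12_000)],                  # exposure 2,400
}
reserves = contingency_reserves(register)
print(reserves, "total:", sum(reserves.values()))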

The sum of the total value of the PMB with the total contingencies and with the management reserve results in a total for the project with the risks included.

5 Conclusions and Future Research

The objective of this work was the proposal of a framework to support the monitoring and control of projects, based on the integration of EVM with risk management.


Risk management and EVM share the same role in project management: to promote project success. However, each uses different means to achieve this result. Risk management seeks to anticipate possible responses to non-compliance with the objectives, and EVM is intended to monitor the status of these objectives. At first glance there seems to be no further connection between the two methodologies, but the fundamentals of EVM are fully influenced by both risk management and risk analysis. In the PMBoK®, the integration is implicit and is not even referred to as “integration between EVM and risk management”, which makes it necessary to clarify each risk component within the EVM methodology [5]. The framework presented in this paper integrates EVM and risk management. It was inspired by the PMBoK®, with influences from the APM [4] and the integration proposed by Hillson [14]. The proposed integration is based on the EVM methodology, risk stratification and the risk management methodology. Because EVM is based on the comparison of project execution values with planned values, by introducing the components of risk, or the changes in planning produced by risk management, we get an EVM with more monitoring and control capabilities. On the other hand, risk is now recorded as one of the variables on which EVM depends. Finally, we state that the main characteristic of this model is the correction of the EVM performance indexes introduced by the risk management, and therefore the better quality of the information they provide. Because EVM depends on other components of project management, such as the WBS, the timeline and cost accounting methods, a more advanced framework would need to include these components, making the process more complete. The same can be said of risk management, where, for example, the analysis of known risks requires a well-developed qualitative treatment. In the prototype presented, the costs and schedule are manually placed on the PMB page. One possible extension would be the interconnection of the page with commercial software, such as MS Project. Finally, in the current model only the influence of risk on cost was considered, but the time or schedule is also influenced. A natural extension would be the inclusion of time in the framework. Acknowledgments. This work has been supported by COMPETE: POCI-01-0145-FEDER-007043 and FCT – Fundação para a Ciência e Tecnologia within the Project Scope: UID/CEC/00319/2013.

References 1. Kerzner, H.: Project Management: A Systems Approach to Planning, Scheduling, and Controlling. Wiley, Hoboken (2009) 2. Acebes, F., et al.: A new approach for project control under uncertainty. Going back to the basics. Int. J. Project Manag. 32(3), 423–434 (2014) 3. Pajares, J., Paredes, A.: An extension of the EVM analysis for project monitoring: the cost control index and the schedule control index. Int. J. Project Manag. 29(5), 615–621 (2011)


4. APM: Interfacing Risk and Earned Value Management. Association for Project Management, Princes Risborough (2008) 5. PMI: A Guide to the Project Management Body of Knowledge (PMBOK guide), vol. xxi, 589 p. Project Management Institute, Inc., Newtown Square (2013) 6. Kim, E., Wells, W., Duffey, M.: A model for effective implementation of earned value management methodology. Int. J. Project Manag. 21(5), 375–382 (2003) 7. Fleming, Q., Koppelman, J.: Earned Value Project Management, vol. viii, 231 p. Project Management Institute, Newtown Square (2010) 8. PMI: Practice Standard for Earned Value Management, vol. vii, 51 p. Project Management Institute, Inc., Newtown Square (2005) 9. Anbari, F.: Earned value project management method and extensions. Project Manag. J. 34(4), 12 (2003) 10. PMI: Practice Standard for Earned Value Management, vol. xiv, 153 p. Project Management Institute, Inc., Newtown Square (2011) 11. PMI: Practice Standard for Project Risk Management, vol. xi, 116 p. Project Management Institute, Inc., Newtown Square (2009) 12. Mooney, C.Z.: Monte Carlo Simulation, vol. 116. Sage Publications, New York (1997) 13. Moder, J.J., Phillips, C.R.: Project Management with CPM and PERT (1964) 14. Hillson, D.: Earned Value Management and Risk Management: a practical synergy. In: PMI Global Congress 2004 - North America. PMI, Anaheim, California, USA (2004)

Renegotiation of Electronic Brokerage Contracts

Rúben Cunha, Bruno Veloso, and Benedita Malheiro

ISEP/IPP – School of Engineering, Polytechnic Institute of Porto, Porto, Portugal
{1100478,mbm}@isep.ipp.pt
INESC TEC, Porto, Portugal
[email protected]

Abstract. CloudAnchor is a multiagent e-commerce platform which offers brokerage and resource trading services to Infrastructure as a Service (IaaS) providers and consumers. The access to these services requires the prior negotiation of Service Level Agreements (SLA) between the parties. In particular, the brokerage SLA (bSLA), which is mandatory for a business to have access to the platform, specifies the brokerage fee the business will pay every time it successfully trades a resource within the platform. However, while the negotiation of the resource SLA (rSLA) includes the uptime of the service, the brokerage SLA was negotiated for an unspecified time span. Since the commercial relationship – defined through the bSLA – between a business and the platform can be long lasting, it is essential for businesses to be able to renegotiate the bSLA terms, i.e., renegotiate the brokerage fee. To address this issue, we designed a bSLA renegotiation mechanism, which takes into account the duration of the bSLA as well as the past behaviour (trust) and success (transactions) of the business in the CloudAnchor platform. The results show that the implemented bSLA renegotiation mechanism privileges, first, the most reliable businesses, and, then, those with higher volume of transactions, ensuring that the most reliable businesses get the best brokerage fees and resource prices. The proposed renegotiation mechanism promotes the fulfilment of SLA by all parties and increases the satisfaction of the trustworthy businesses in the CloudAnchor platform. Keywords: Brokerage · Service Level Agreements · Negotiation · Trust

1 Introduction

CloudAnchor is a multiagent e-commerce platform which offers brokerage and resource trading services to Infrastructure as a Service (IaaS) providers and consumers [9]. The access to these services implies the automated negotiation of contracts – Service Level Agreements (SLA) – between the parties, specifically, brokerage SLA (bSLA) and resource SLA (rSLA) [7]. SLA negotiation relies on a decentralised distributed trust model built from historical interaction data between parties, supporting the implemented Trust-based partner Invitation/acceptance & Negotiation (TIN) services [8].


First, any provider or consumer business must establish a bSLA with the platform, which specifies the brokerage fee the business will pay every time it successfully trades a resource. The rSLA is celebrated between consumer and provider and specifies the terms of the resource provision, i.e., uptime, price, etc. While rSLA are typically time bound – resources are leased for well-defined periods – this is not the case for bSLA. Since the relationship between a business and the platform can be long lasting, it is essential for businesses to be able to renegotiate their bSLA terms, i.e., renegotiate the brokerage fee. To address this issue, we designed a bSLA renegotiation mechanism which takes into account the duration of the bSLA as well as the past behaviour (trust) and success (transactions) of the business in the CloudAnchor platform. Several tests were performed involving multiple consumers and providers with different levels of trustworthiness. The results show that the implemented bSLA renegotiation mechanism privileges, first, the most reliable businesses and, then, those with a higher volume of transactions, ensuring that the most reliable businesses get the lowest brokerage fees and resource prices. The renegotiation mechanism is fair for both the businesses and the platform, promotes the fulfilment of SLA and increases the satisfaction of the trustworthy businesses in the CloudAnchor platform. This paper is organised in five sections. Section 2 provides a survey of related work; Sect. 3 presents the designed bSLA renegotiation mechanism; Sect. 4 describes the experiments and results; and, finally, Sect. 5 draws the conclusions and refers to future developments.

2 SLA Renegotiation

The renegotiation of SLA based on past partner behaviour, i.e., taking into account the trading history between partners, has been addressed by other authors, e.g., [1–3, 5, 6]. Di Modica et al. [1, 2] address the renegotiation of Service Level Objectives (SLO) at run-time. They propose a WS-Agreement Negotiation protocol extension together with a more detailed description of the SLO Guarantee Terms, including the specification of the scope of the changes which can be performed at run time. Similarly, Sharaf and Djemame (2015) [6] present another extension of the WS-Agreement specification to support SLA renegotiation. Consumers and providers must define the renegotiable terms or SLO of the SLA that could change during the SLA lifetime, resulting in new SLA negotiation states. Our approach allows the renegotiation of active brokerage agreements based on the trust and success of the business in the platform, i.e., free from any pre-defined change boundaries. Frankova et al. [3] propose an extension to the Open Grid Forum (OGF) WS-Agreement protocol to support run-time renegotiations of SLA. The authors propose the use of semantics to generate warnings related to agreement violations, which can be used to trigger the renegotiation of the Quality of Service (QoS) terms of active agreements. Our negotiation protocol, which is also an extension of WS-Agreement, can be triggered automatically or on demand by any business registered in the platform.


Hani et al. [4] present a review on SLA management systems for cloud-based systems. They identify the following management stages: service negotiation, resource provisioning & allocation, service monitoring & assessment and service termination & penalty. Our platform provides brokerage and negotiation services, including the renegotiation of bSLA, allowing businesses to renegotiate their brokerage fee according to their trading behaviour reactively, i.e., wait till the end of the current agreement to renegotiate, or pro-actively, decide to renegotiate the current agreement at any other time. Parkin et al. [5] propose a re-negotiation protocol which is a WS-Agreement specification extension for the renegotiation of resource agreements between providers and consumers. This renegotiation protocol has a fault tolerance mechanism to deal with lost and duplicated messages. Our trust and success based SLA renegotiation proposal, although it has been applied to brokerage agreements, is a general mechanism applicable to any type of SLA renegotiation.

3 Renegotiation of Brokerage Contracts

In order to have access to the platform services a business must, first, negotiate and establish a bSLA with the platform, specifying the value of the brokerage fee to be paid to the platform every time a resource is successfully traded. This brokerage agreement remained unchanged regardless of the success and behaviour of the business within the platform. The newly designed (re)negotiation mechanism, which is based on the success and behaviour of the business so far, allows businesses to (re)negotiate the terms of the brokerage agreement, namely, the duration and brokerage fee. The SLA agent of each business (provider or consumer), which is the renegotiation initiator, calculates and proposes the new brokerage fee to the platform SLA agent. The proposed brokerage fee depends on: (i) the duration of the bSLA contract; (ii) the number of successfully negotiated resources; (iii) the negotiation success rate; and (iv) the business trustworthiness. Equation 1 determines the partial discount based on the bSLA duration, where id is the business identifier and Δt is the duration of the contract in days. This exponential function is adjusted so that the discount ranges from 0.0% to 25.0% for periods from less than one day to one year.

Disc(id, t) = 1 / (0.277 √(log10(10 + Δt))) + 0.134    (1)

Equation (2) calculates the partial discount based on the number of resources obtained or provided, where id is the business identifier and #rSLA_N is the total number of resources negotiated by the business.

Disc(id, #rSLA) =
  0.00, if 0 < #rSLA_N ≤ 100
  0.20, if 100 < #rSLA_N ≤ 500
  0.25, if 500 < #rSLA_N ≤ 1000
  0.27, if 1000 < #rSLA_N ≤ 2500
  0.30, if #rSLA_N > 2500    (2)


Each business builds partner and self trust models regarding the partner invitation/acceptance (I), rSLA negotiation (N) and rSLA enforcement (E) stages [8]. Equation 3 represents the general self trust formula applied to each interaction stage S, where id is the business identifier, n is the number of interactions of type S of the business in the platform and Out_{S,i} is the boolean outcome (success or failure) of the interaction i of type S.

T_S(id) = (Σ_{i=1..n} Out_{S,i}) / n    (3)

The success rate of a business is provided by Eq. 4, where id represents the business identifier, T_E(id) the business enforcement self trust given by Eq. 3, s the number of established rSLA, Δt_i the duration of each established rSLA, r the number of resources offered/required by the business and Δt_j the time span of the offer/request of each resource j.

Suc(id) = T_E(id) · (Σ_{i=1..s} Δt_i) / (Σ_{j=1..r} Δt_j)    (4)

Equation 5 represents the credibility of a business, where id is the business identifier, T_E(id) is the enforcement self trust and Suc(id) is the success rate of the business in the platform.

Cred(id) = (T_E(id) + Suc(id)) / 2    (5)

Equation 6 calculates the brokerage fee discount, where id is the business identifier, #rSLA is the total number of established rSLA, Cred(id) is the business credibility in the platform, Disc(id, t) is the partial discount based on the duration of the bSLA and Disc(id, #rSLA) is the partial discount based on the number of established rSLA.

Disc(id, t, #rSLA) = Cred(id) · (Disc(id, t) + Disc(id, #rSLA)) / 2    (6)

Finally, Eq. 7 calculates the brokerage fee to be proposed to the platform, where bFee_default is the default brokerage fee (2.0%) and Disc(id, t, #rSLA) is the fee discount provided by Eq. 6.

bFee(id, t, #rSLA) = bFee_default · (1 − Disc(id, t, #rSLA))    (7)

The business proposes the brokerage fee given Eq. 7 to the platform SLA agent, which, in turn, applies the conditions presented in Table 1 to decide whether to make a counter offer or to accept it.
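The proposal calculation can be strung together in a few lines. The sketch below is an illustration of Eqs. 2 to 7 only: the duration discount of Eq. 1 is passed in as a value within the 0 to 0.25 range stated in the text, outcomes are encoded as 1 for success and 0 for failure, and all sample inputs are invented.

# Illustrative sketch of the brokerage fee proposal (Eqs. 2-7 above).
DEFAULT_BFEE = 0.02   # default brokerage fee (2.0%)

def disc_resources(n_rsla):                           # Eq. 2
    for limit, disc in ((100, 0.00), (500, 0.20), (1000, 0.25), (2500, 0.27)):
        if n_rsla <= limit:
            return disc
    return 0.30

def self_trust(outcomes):                             # Eq. 3: share of successful interactions
    return sum(outcomes) / len(outcomes)

def success_rate(t_enforcement, established_durations, offered_durations):   # Eq. 4
    return t_enforcement * sum(established_durations) / sum(offered_durations)

def credibility(t_enforcement, suc):                  # Eq. 5
    return (t_enforcement + suc) / 2

def proposed_bfee(disc_duration, n_rsla, t_enforcement, suc):                # Eqs. 6-7
    disc = credibility(t_enforcement, suc) * (disc_duration + disc_resources(n_rsla)) / 2
    return DEFAULT_BFEE * (1 - disc)

t_e = self_trust([1, 1, 1, 0, 1])                     # 0.8 enforcement self trust
suc = success_rate(t_e, [30, 30, 60], [30, 30, 60, 30])
print(f"proposed bFee: {proposed_bfee(0.20, 1200, t_e, suc):.3%}")

With these assumed inputs the proposal is about 1.66%, which the platform then assesses against its own conditions before accepting or countering.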

4 Tests and Results

We performed two sets of experiments in a single resource request/provision scenario. The scenario encompasses five groups of ten consumers and five groups


Table 1. Platform conditions for the assessment of brokerage fee proposals (one set of conditions for providers and one for consumers; columns: Cond., Trust, # Resources, bFee (%), with assessed fees between 1.10% and 3.50%).

of ten providers, each group with different trustworthiness. Each provider holds 1000 standard virtual machines (VM) and each consumer makes 1000 single VM requests, using the Trust-based Invitation/acceptance & Negotiation (TIN) services. If all businesses were 100% trustworthy, the offer would meet the demand. However, since the average trustworthiness of the providers or consumers is 60%, it corresponds in fact to an under supply market scenario with an anticipated average failure rate of 40%. Table 2 presents the groups used in both experiments. At start up, the brokerage fee (bFee) is 2.0% for all businesses (consumers or providers), corresponding to the default brokerage fee applied to any newly registered business. Without renegotiation, the brokerage fee (bFee) remains

Table 2. Groups of consumers and providers used in the experiments

ID      | Trust (%) | #VM/Consumer | ID      | Trust (%) | #VM/Provider
GC_020  | 20        | 1000         | GP_020  | 20        | 1000
GC_040  | 40        | 1000         | GP_040  | 40        | 1000
GC_060  | 60        | 1000         | GP_060  | 60        | 1000
GC_080  | 80        | 1000         | GP_080  | 80        | 1000
GC_100  | 100       | 1000         | GP_100  | 100       | 1000


unchanged regardless of the success and behaviour of the businesses in the platform. The tests include five complete bSLA negotiation cycles. In the first test, the rSLA are negotiated for the duration of one cycle (short term), while, in the second test, rSLA are negotiated till the end of the five cycles (long term).

4.1 Short Term Resource Agreements

Table 3 displays the results of the first test, including both bSLA and rSLA related data. In this experiment the duration of the rSLA corresponds to a single cycle. As a result, when a new cycle starts, the providers regain their 1000 VM and the consumers request again 1000 VM sequentially.

Table 3. Platform results with bSLA renegotiation – single cycle rSLA

In terms of rSLA, Table 3 displays the average number of established rSLA, the number of failed rSLA, the rSLA self failure rate (the percentage of self-violated rSLA) per business, the average consumer resource expenditure (AE) or average provider revenue (AR), the average brokerage cost (AC) of the successful rSLA and the average resource loss (the penalties for the self-violated rSLA) per resource and business. In the case of established rSLA, when a business (consumer or provider) fails to fulfil the terms of an established rSLA, it must reimburse its partner for the resource price and bFee values. In terms of the bSLA, the table lists the brokerage fee proposed by the business and the platform and highlights the resulting bFee. In the five cycles there were, on average, 5142 successfully traded resources out of 5853 negotiated rSLA per business (a failure rate of 12.1%). This means that, on average, the five groups of consumers and providers were able to successfully trade 20.6% of the 25 000 resources required/offered per business in the five cycles. Only GC_100, GP_100, GC_080 and GP_080 traded a significant number of resources. As expected, the most trusted group of providers (GP_100) leased the majority of its resources at a medium average price (it is providing quantity-based discounts to trusted consumers), while the most trusted group of consumers (GC_100) obtained 95.8% of its resources at a low average price. Table 4 summarises the impact of the bSLA renegotiation in the case of short term resource agreements. The overall losses, i.e., the amount paid in penalties by all businesses, have a negligible increase of 0.1%, and the overall costs, i.e., the amount paid in brokerage fees by all businesses, decrease by 21.1%. The total revenues of the providers as well as the total expenditure of the consumers remain unchanged, since the bSLA renegotiation does not have a direct influence on the negotiation of rSLA. The impact of the bSLA renegotiation on the average amount spent by consumers per resource, including costs and losses, is −0.4%, and on the average amount received by providers per resource, deducting the costs and losses, is +0.5%, i.e., it is advantageous for both parties.

Table 4. Impact of the bSLA renegotiation

Δ            | Consumers (%) | Providers (%) | Overall (%)
Total losses | +0.3          | −0.1          | +0.1
Total costs  | −22.1         | −20.0         | −21.1

These results clearly show the advantages of the bFee renegotiation in terms of the average price/revenue, costs and losses per resource successfully traded.

4.2 Long Term Resource Agreements

In the second test, each consumer requests just 1000 VM, instead of 1000 VM per cycle, and the rSLA are established till the end of the five negotiation cycles, i.e., providers do not regain their resources at the end of each negotiation cycle. Once a business has established 1000 rSLA, it stops trading. In such a scenario, the most trusted businesses will stop trading earlier, leaving the less trusted to continue trading. Table 5 presents the results.

Table 5. Platform results with bSLA renegotiation – multi-cycle rSLA

The consumers were able, on average, to negotiate 2567 and successfully trade 1753 resources per business in the five cycles (a 31.7% failure rate). In terms of the average number of resources required/offered per business, the businesses were able to successfully trade 35.1% of the 5000 required/offered. The rSLA were established in descending order of the trustworthiness of consumers and providers, and the average price of the traded resources was inversely proportional to the trustworthiness of the group. In this adverse scenario, where, cycle after cycle, businesses with lower trustworthiness are forced to trade among themselves, the overall losses and costs with brokerage renegotiation increase by 1.5% and 0.3%, respectively, when compared to those without bSLA renegotiation. The impact of the bSLA renegotiation is +0.2% on the average amount spent per resource by the consumers, including costs and losses, and +0.1% on the average amount received per resource by the providers, deducting costs and losses.

5 Conclusion

The CloudAnchor platform was enriched with a brokerage renegotiation mechanism to allow registered businesses to negotiate the brokerage fee with the platform at will or periodically. Our contribution is a brokerage fee renegotiation algorithm contemplating both the business and the platform sides. On the business side, the brokerage fee is determined using the duration of the agreement, the cumulative number of traded resources and the success and trustworthiness of the business in the platform. On the platform side, the brokerage fee is assessed based on the cumulative number of traded resources and the trustworthiness of the business. The algorithm was tested with short and long term rSLA in a demanding under-supply resource scenario caused by a pre-defined average rSLA failure rate of 40.0%. For the short term rSLA, the implemented bSLA renegotiation mechanism, when compared with the prior open-ended bSLA, allows businesses to reduce brokerage costs by 21.1%, while trading resources at the same price and experiencing identical losses. The revenues of providers and the expenditure of consumers are identical because they depend exclusively on the rSLA negotiation mechanism. The losses, which originate from SLA violations, result in the application of the agreed penalties: the culprit reimburses the victim for the resource price agreed during the rSLA negotiation and the bFee agreed during the bSLA negotiation. Since the resource price is predominant when compared with the bFee, the overall losses are identical. The costs, corresponding to the brokerage service fee, depend exclusively on the fee specified during the bSLA renegotiation and, consequently, benefit from the newly implemented mechanism. In the case of the long term rSLA, the bSLA renegotiation introduces a small increase in the overall losses and costs of +1.5% and +0.3%, respectively. The resources are not only negotiated in descending order of business trustworthiness but, once negotiated, become unavailable till the end of the experiment, leaving, cycle after cycle, businesses with lower trustworthiness to negotiate together at higher prices. As a trust-based mechanism, it promotes the fulfilment of agreements: on the one hand, it increases the satisfaction of compliant businesses and, on the other hand, it penalises faulty businesses, eventually leading to their de-registration from the platform. Finally, it privileges, first, the most reliable businesses and, among these, those who traded more resources, to the detriment of the less reliable ones, ensuring that the former trade resources at the best prices. The renegotiation of bSLA can be refined by providing businesses with the ability to identify when to renegotiate in order to obtain the most favourable brokerage fees. Acknowledgements. This work was partially financed by the European Regional Development Fund (ERDF) through the Operational Programme for Competitiveness and Internationalisation (COMPETE Programme), within project «FCOMP-01-0202-FEDER-023151» and project «POCI-01-0145-FEDER-006961», and by national funds through the Fundação para a Ciência e Tecnologia (FCT) - Portuguese Foundation for Science and Technology - as part of project UID/EEA/50014/2013.

References 1. Di Modica, G., Regalbuto, V., Tomarchio, O., Vita, L.: Enabling re-negotiations of SLA by extending the WS-agreement specification. In: IEEE International Conference on Services Computing (SCC 2007), pp. 248–251. IEEE (2007) 2. Di Modica, G., Tomarchio, O., Vita, L.: Dynamic SLAS management in service oriented environments. J. Syst. Softw. 82(5), 759–771 (2009) 3. Frankova, G., Malfatti, D., Aiello, M.: Semantics and extensions of WS-agreement. J. Soft. 1(1), 23–31 (2006) 4. Hani, A.F.M., Paputungan, I.V., Hassan, M.F.: Renegotiation in service level agreement management for a cloud-based system. ACM Comput. Surv. (CSUR) 47(3), 51 (2015) 5. Parkin, M., Hasselmeyer, P., Koller, B..: An SLA re-negotiation protocol. In: Proceedings of the 2nd Non Functional Properties and Service Level Agreements in Service Oriented Computing Workshop (NFPSLA-SOC08), CEUR Workshop Proceedings, vol. 411. Citeseer. ISSN 1613-0073 (2008) 6. Sharaf, S., Djemame, K.: Enabling service-level agreement renegotiation through extending WS-agreement specification. SOCA 9(2), 177–191 (2015) 7. Veloso, B., Malheiro, B., Burguillo, J.C.: Media brokerage: agent-based SLA negotiation. In: Rocha, A., Correia, A.M., Costanzo, S., Reis, L.P. (eds.) New Contributions in Information Systems and Technologies. AISC, vol. 353, pp. 575–584. Springer, Cham (2015). doi:10.1007/978-3-319-16486-1_56 8. Veloso, B., Malheiro, B., Burguillo, J.C.: CloudAnchor: agent-based brokerage of federated cloud resources. In: Demazeau, Y., Ito, T., Bajo, J., Escalona, M.J. (eds.) PAAMS 2016. LNCS (LNAI), vol. 9662, pp. 207–218. Springer, Cham (2016). doi:10. 1007/978-3-319-39324-7_18 9. Veloso, B., Meireles, F., Malheiro, B., Burguillo, J.C.: Federated iaas resource brokerage. In: Kecskemeti, G., Kertesz, A., Nemeth, Z. (eds.) Interoperable and Federated Cloud Architecture, pp. 252–280. IGI Global, Hershey (2016). Chapter 9

Improving Project Management Practices in Architecture & Design Offices

Cátia Sousa, Anabela Tereso, and Gabriela Fernandes

Production and Systems Department, Centre ALGORITMI, University of Minho, Campus de Azurém, 4804-533 Guimarães, Portugal
[email protected], {anabelat,g.fernandes}@dps.uminho.pt

Abstract. This paper describes a study on improving project management (PM) practices in architecture & design offices, conducted through semi-structured interviews and a focus group with professionals from seven different offices. Taking into account the best PM practices described in the literature and the most used practices and problems identified in this particular organizational context, a set of key PM practices is proposed. The results show that there are common practices already used by the architecture & design offices, such as: project charter, kick-off meeting, budgeting document and progress meetings. The problems found are mainly related to communication, collecting requirements, schedule control and portfolio management. The set of key PM practices proposed is composed of well-known practices: kick-off meeting, budgeting document, project charter, milestone planning, work packages and deadlines document, communication plan, change request, progress meeting, progress report, meeting minutes, client acceptance form and project closure documentation.

Keywords: Project management · Practices · Tools and techniques · Architecture & design offices

1 Introduction

The market is becoming more and more competitive, which creates tremendous pressure on organizations, demanding that their projects comply with the budget, deadlines and requirements [1, 2]. Therefore, the implementation of good project management (PM) practices to give an effective and rapid response to competitiveness is very important [3]. Although organizations are increasingly oriented to use PM practices to manage their projects, this has not been enough, since there is still a low level of PM maturity within organizations [4]. According to the International PM Association (IPMA) [5], in the last ten years PM has not undergone sufficient change to ensure success, as projects continue to fail, costing 98.5% more than the initial budget, going over the deadline by 115% and with only 61% complying with the original project requirements. PM practices are understood in this study as the use of PM tools and techniques, which are closer to the day-to-day practice, closer to the things people do, closer to their tacit knowledge [6].



PM is context dependent [6, 7], i.e. we cannot expect the same results from the application of the same PM practices in all project contexts. However, PM bodies of knowledge/standards, such as the PMBoK [8], contain the best PM practices to use but do not differentiate the most appropriate ones for each organizational context. The understanding of how to manage architecture and design projects and their particular issues appears to be limited in the literature. Therefore, this study aims to identify the key PM tools and techniques for a particular organizational context - the architecture & design offices. Based on a set of semi-structured interviews and a focus group with elements that make up the office teams, the most used PM tools and techniques in these organizations were identified, as well as their main difficulties in PM, in order to come up with a set of PM practices most useful for this type of organization.

There are architecture & design offices more focused on interior design and decoration, while others are more focused on architecture projects. Typically, interior design offices act as the intermediary between the customer and several suppliers that provide them with products/services, such as furniture, carpets and curtains, among others. Particularly in organizations of this typology it is difficult to meet projects' requirements, deadlines and costs, because most of the products/services are subcontracted. The main work of these offices is to monitor the suppliers to ensure that the product arrives at the customer's home as expected and that the installation team performs the work as agreed. These organizations typically have few employees, but they work with various subcontracted teams.

Since the first author of this paper has professional experience in architecture & design offices, where a very low level of PM maturity was found, and given the lack of literature identifying the best practices to use in managing projects in this particular sector, this study aimed to address the following three research questions: 1. What are the PM practices commonly used in architecture & design offices? 2. What are the major difficulties in PM in architecture & design offices? 3. What are the key PM practices for architecture & design offices?

The paper follows a standard structure. After the introduction, Sect. 2 presents a brief literature review on PM practice. Section 3 describes the research methodology. Section 4 presents the results regarding the most used PM practices and the main problems found in architecture & design offices and discusses the key PM practices proposed. Finally, the main conclusions and future work are discussed.

2 Literature Review

The PM Body of Knowledge (PMBoK) [8] is one of the most influential publications on what constitutes the basis of the PM profession. As argued by Morris et al. [9], the more tightly defined scope of the PMBoK might seem more accessible for practitioners than the wider range of definitions of other bodies of knowledge, such as the APMBoK, from the Association for PM [10], or PRINCE2, which has become a standard widely used by the British government and widely recognized by the private sector, being the most used PM standard in the UK [11].


According to the PMBoK, PM is carried out by 47 processes organized into 5 process groups: initiation, planning, execution, monitoring and controlling, and closing. A project consists of a value creation undertaking based on a set of objectives, which is completed in a given or agreed timeframe and under constraints, including resources and external circumstances [12]. The most common project success criteria include time, cost and quality [13]. The ability to understand and meet customer needs also defines the quality and success of a project [14]. Customer needs can be based on a quality product and appealing design, but at an acceptable cost, i.e. customers want quality at competitive prices [15]. Having technical knowledge of the product, offices may usually give the customer the desired final effect with different alternative costs, varying the material or the form of building, while keeping the product functionality. In architecture & design offices, like in other organizations, customer requirements have to be fully interpreted to achieve positive results by the end of the project. In many situations, customers do not have technical knowledge of the product; they only have the needs they want to see fulfilled at the end of the project. Therefore, the project manager aims to plan in order to achieve the desired levels of quality and functionality within the limited cost imposed by the customer [16].

This research is based on a study conducted by Fernandes, Ward and Araújo [17] and a former study by Besner and Hobbs [18]. In 2013, Fernandes, Ward and Araújo, based on 793 questionnaire responses, identified a list of the 20 most useful PM tools and techniques for improving project performance. Besner and Hobbs surveyed the usage of 70 tools and techniques, with 753 respondents. The tools and techniques usage levels varied considerably, from 1.4 to 4.1, on a scale ranging from 1 (not used) to 5 (very extensive use). The findings of Fernandes et al. [17] were consistent with the results of Besner and Hobbs [18]. As discussed earlier, PM is context dependent [7, 19]. Each activity sector has its own characteristics, demanding a study of its peculiarities, and architecture & design offices require PM practices tailored to their needs.

3 Research Methodology

The research methodology chosen for this study was a survey based on the semi-structured interview research method, as the method deemed most appropriate to perceive the real problems in this type of office [20]. Due to time constraints and privileged personal access, the selection of organizations was restricted to Portuguese organizations. The identification of the potential companies to participate in the study was based on two criteria: their particular area of activity - architecture & design - and the previous work contacts that the researcher had with some organizations. Interviewees were elements linked to the offices' projects, namely graphic designers, designers, architects and budgeters. Organizations were contacted initially by email and later by telephone to confirm the date and time available to carry out the interviews. It was not possible to get a large number of interviews in each organization, because they are small offices, with few elements linked to projects (between five and seven team elements).


18 interviews in 7 different organizations were conducted. After the 16th interview, the interviewer was no longer receiving new information. This means that the interviewer had nearly heard the full range of ideas and reached saturation [20], so the data collection process was stopped. The interviews were conducted between May and July 2016, and were mainly face-to-face interviews at the interviewee's organization headquarters (56%), while the remaining were conducted by telephone (44%). Before the interview, all participants received a document introducing the study and the pre-defined interview questions by email, to prepare the interviewee. Each interview started with an introduction about the respondent's background, an outline of the research objectives, the definition of some terms used in the study, and the importance of the interviewee's contribution. The interviewer asked for authorization to take notes and tape-record and assured the interviewee of confidentiality and that the data obtained would only be used for academic purposes. Only 17% of respondents allowed the interview to be recorded. Each interview took on average 1 h. The interviews were focused on exploring the main difficulties in PM in the architecture & design offices and the PM tools and techniques used by these organizations. Lastly, a set of key PM practices for this particular activity sector was proposed, and a focus group with participants of the interviews was conducted in order to discuss and validate, in a planned way, the set of key PM practices proposed. When compared to interviews and surveys, the focus group has the advantage of easing discussion and participation, since the answers of one participant can be complemented by others, thus enriching the information obtained. This advantage was considered to largely compensate for drawbacks such as the risk of some participants overshadowing the others [21].

4 Results and Discussion

4.1 Participants' Characterization

Respondents had a high level of academic education: 45% were Bachelors/Graduates (3 to 5 years of university studies), 33% were Masters and just 22% had only a high school diploma (see Fig. 1). The participants with a lower level of academic education were linked to the projects' technical parts. Despite their level of academic education, they had several years of professional experience in the area. Nevertheless, none of the interviewees had any training in PM, which shows the low PM maturity level of these organizations. The majority of participants were male (56%). Participants with different roles in the organizations were interviewed. The respondents were linked to: Technical drawing (28%); Architecture (22%); Design (22%); Project management (17%); and Budgeting (11%) (see Fig. 2). In this study, professionals with many years of professional experience in the area were preferred. Therefore, considering experience: 28% had over 20 years; 33% had between 10 and 20 years; 11% had between 5 and 10 years; and only 28% had less than 5 years (see Fig. 3).


Fig. 1. Interviewees by education level

Fig. 2. Interviewees by activity area

Fig. 3. Interviewees by professional experience


Respondents with greater professional experience helped to detect most of the difficulties that have occurred in these offices, as well as the PM tools and techniques used to overcome these difficulties.

4.2 Most Used Project Management Practices in Architecture & Design Offices

It was found that architecture & design offices use some of the PM tools and techniques during the project lifecycle. At the project beginning, there is often a kick-off meeting, where the project manager tries to define with the client the project's requirements and objectives. After all requirements are identified, the architecture & design office prepares the project budget. This document is usually standardized and is delivered to the client, including project scope, time and cost. Often, it is necessary to revise the budgeting document, which includes the revision of the project requirements in order to come up with a lower cost. When the client accepts the budget and places the order with these offices (seen as the project charter), it gives rise to the formal beginning of the project and to the execution of the project tasks. A kick-off meeting is held with the project team members, in order to share all relevant information. In some cases, at the beginning of the project, a kick-off meeting is also held with each of the subcontractor organizations (suppliers) in order to establish all the products' requirements for each subcontractor. At the beginning of and during the project execution, the architecture & design offices conduct several procurement management processes with the subcontracted organizations.

Table 1. Most used project management practices

Ranking  PM practices                        Nº interviewees
1º       Project charter                     18
2º       Kick-off meeting                    15
3º       Budgeting document                  15
4º       Progress meetings                   14
5º       Schedule control of activities      14
6º       Control of subcontracted work       14
7º       Execution control                   13
8º       Closing meeting with team members   6
9º       Preliminary studies                 5

During project execution, several progress meetings are held, namely to approve the project deliverables with the customer and to update the project status. Telephone calls to the customer are also used to update the project status. Communication with the internal team and the external (subcontractor) teams is done through e-mails, telephone calls and in-person visits. Additionally, during project execution, focus is given to controlling the schedule, the subcontracted work, and the execution of the entire project work.


To close the project, only a few participants mentioned having a project closure meeting. Normally the project is considered closed when the product(s)/service(s) is delivered to the customer. Table 1 shows the PM practices most commonly used in the 7 offices, together with the number of interviewees who mentioned each practice during the interviews.

4.3 Main Project Management Difficulties in Architecture & Design Offices

The interview analysis shows that there are many communication difficulties between internal team members and subcontracted organizations. Communication with outsourced organizations is hampered by the necessary supervision of the work to meet the expectations and deadlines established with the client. For architecture & design offices to meet the deadlines, the suppliers need to deliver their product(s)/service(s) on time. The suppliers, most of the time seen as partners, should guarantee the best price and time in order to allow the offices to successfully negotiate the project with the customer.

The greatest difficulty of these offices, in terms of time, is the initial approval of the project by the client. Clients are quite uncertain about their choices, and the more proposals they have, the more hesitant they become. Sometimes the offices end up not proposing several options just to simplify the customer's decision. After all requirements are defined, clients want the project finished as quickly as possible. The approval of an architectural project by the municipalities can take more than 6 months, which also delays the project. In this study it was not possible to define an average project duration, because all interviewees said it depends greatly on the type of project to be carried out. Some of the projects fail to meet the established deadlines due to non-compliance by the client, sometimes because agreed payments are missing or due to the conditions of the house. There are clients who hire these offices to furnish the interior, but the physical spaces are not completed on time. In other cases the customer does not meet the contract regarding payments and the work is suspended until payment is made.

Although architecture & design offices have meetings with the clients, to keep them informed of the project status, interviewees reported the lack of technical elements in these meetings. Frequently there are meetings where a construction solution for a particular product is discussed with the client. This may lead to a definition that, when presented to the technical elements, they realize is not possible to perform. This situation causes the need for a new meeting and discussion with the client. In architecture & design offices there is also difficulty in prioritizing projects, because the offices usually have several projects running at the same time and need to coordinate all of them in order to meet all the projects' deadlines and requirements. One of the most relevant interviewees even recognized the need for a tool that could contain the current status of each project in progress, in order to have an overview of all projects and help in the overall portfolio management. Some interviewees reported that there are works that, after being delivered with all project requirements met, customers complain about, sometimes several months after the


product(s)/service(s) is delivered by the office. The reasons for these complaints are often not the responsibility of the offices, but the misuse of the product by the customer. Table 2 summarizes the difficulties encountered in these particular organizations, with the number of interviewees who reported each difficulty. The most often reported difficulties were communication, collecting requirements and time. The difficulty in communication takes place both with internal PM members and with the subcontractors' team elements. Collecting requirements is sometimes difficult due to the customers' initial indecision, while afterwards they want the project done as soon as possible. The third problem is not being able to deliver projects on time.

Table 2. Project management difficulties

Ranking  Difficulties                                     Nº interviewees
1º       Communication                                    18
2º       Collect requirements                             11
3º       Time                                             9
4º       Portfolio management                             6
5º       Lack of rigor in the implementation of projects  6
6º       Project closure with the client                  5
7º       Project legalization                             5
8º       Changes throughout the project                   4
9º       Lack of information on the drawings              4
10º      Lack of rigor in the drawings                    4
11º      Non-compliance with work drawings                3
12º      Lack of building conditions to start work        3
13º      Disagreement between team members                3
14º      Misreading of the drawings                       2
15º      Managing the huge amount of work                 1

4.4 Key Project Management Practices Proposed for Architecture & Design Offices

After the interview data analysis, we managed to identify a set of best PM practices for architecture & design offices, which was discussed and adjusted during the focus group (Table 3). The study focused on PM practices and excluded portfolio management practices. The project kick-off meeting is a practice already used by these organizations. Initially there should be a meeting with the client to understand what he wants, and then a meeting with the project team members to pass on the information. During these kick-off meetings all points of the project should be discussed. This document accompanies the project from its initiation to its closure.


Table 3. Set of key project management practices proposed

Kick-off meeting                               Change request
Budgeting document                             Progress report
Project charter                                Meeting minutes
Milestone planning                             Client acceptance form
Document with work packages and deadlines      Project closure documentation
Communication plan

In this study, it was noted that the budgeting document sent to the client contains the product requirements to be delivered, the material, quantity, size and delivery dates. This document is essential for confirming with the customer the desired products and requirements. When the customer accepts the budget, awarding the project order, the project can start. It is used as a guarantee that the products will be delivered on the agreed date and with the requirements specified by the client.

The definition of project milestones is essential because there are several important points to fulfil during these projects. Having well-defined dates can help decrease the delays. It was proposed that these milestones be set in the kick-off meeting and discussed among all elements involved in the project. Another suggestion is that these organizations should have a document with work packages and deadlines, with a reference to the cost of each activity, especially for the outsourced work packages. Having values assigned to each work package can help in producing a more accurate budget. For the design elements, it is also very important to define who is responsible for each work package, the work to do and the deadlines to meet.

A communication plan that follows the status of the project and its changes, to facilitate communication between project team members, should also be used. All communication between the organization and subcontracted firms should continue to be held by email, telephone and in-person meetings. All design changes should also be communicated by email, so that they are recorded. Before making changes to the project, the project manager should consult the work breakdown structure, which contains the cost for each work package, and check whether the change, if it brings additional cost, is accepted or not by the client. The register of design changes aims to prevent some PM members from being left without access to information about the changes, and serves as a record to attach to the project and ultimately to reflect on the consequences of adopting the change.

Progress meetings should be held quite often, especially with the customer. As suggested by the respondents, when technical aspects are discussed, the presence of technical elements is essential to support the customer. Therefore, it was proposed to include one of the elements responsible for the technical part of the project in the meetings. These meetings should be accompanied by a report (meeting minutes) to register all the issues discussed and decided between the client and the team members.

In this study some of the interviewees reported that some of the projects, after delivery, receive complaints of damaged material, and several times this is due to misuse by the customer. Organizations end up incurring additional costs. A document signed by


the customer to confirm the correct delivery of the material could avoid these unexpected situations, although this measure may not be well accepted, taking into account that in Portugal giving one's word still has much value. A project closure meeting with the elements involved in the project was also suggested, gathering all the necessary documents for the closure of the project or project phase. Lessons learned should be gathered and stored during the whole project life-cycle and archived during project closure. Lessons learned should not be viewed negatively but as a way to help teams be more effective and efficient in future projects.

5 Conclusions and Future Work

Organizations are always looking to increase sales, reduce costs and increase customer satisfaction. This is possible through better management of their projects, including better planning, use of resources and control. The main contribution of this paper is to PM practice. The paper presents the most used tools and techniques in architecture & design offices, the greatest difficulties in PM experienced by professionals in these offices and, finally, a set of best practices for these organizations. 18 semi-structured interviews in 7 different architecture & design offices were conducted, in order to answer the 3 research questions, and a focus group was conducted with elements from the interviews to validate the proposed set of best PM practices for architecture & design offices. It was observed that these organizations have a very low PM maturity level, using few tools and techniques to overcome their difficulties throughout the project. This study had some limitations and difficulties, as it was dependent on the elements of these organizations, not always willing to cooperate. The practices proposed for these architecture & design offices were validated by the elements of the focus group, but a more comprehensive study can be done in future work. Further in-depth studies of the benefits of using the set of practices proposed are suggested, through action research. The application of these practices in the business environment would allow adjusting the proposed set of best practices. Additionally, portfolio management practices should also be explored in future studies, as portfolio management was identified as one of the main difficulties in this particular sector of activity.

Acknowledgement. This work has been supported by COMPETE: POCI-01-0145-FEDER-007043 and FCT – Fundação para a Ciência e Tecnologia within the Project Scope: UID/CEC/00319/2013.

References

1. Silvius, G.: Social Project Management: Strategic Integration of Social Media into Project Management Practice. IGI Global, Hershey (2016)
2. Morris, P.W.G.: The Management of Projects. Thomas Telford, London (1997)


3. Alhawaria, S., Karadshehb, L., Taletc, A.N., Mansoura, E.: Knowledge-based risk management framework for information technology project. Int. J. Inf. Manag. 32(1), 50–65 (2012)
4. Silva, D., Tereso, A., Fernandes, G., Loureiro, I., Pinto, J.Â.: OPM3® Portugal Project – Information Systems and Technologies Organizations – Outcome Analysis. In: Rocha, A., Correia, A.M., Costanzo, S., Reis, L.P. (eds.) New Contributions in Information Systems and Technologies. AISC, vol. 353, pp. 469–479. Springer, Cham (2015). doi:10.1007/978-3-319-16486-1_46
5. IPMA: IPMA - International Project Management Association (2012). http://ipma.ch/certification/competence/ipma-competence-baseline/
6. Besner, C., Hobbs, B.: Project management practice, generic or contextual: a reality check. Project Manag. J. 39(1), 16–34 (2008)
7. Thomas, J., Mullaly, M.: Researching the Value of Project Management. Project Management Institute, Newtown Square (2008)
8. PMI: A Guide to the Project Management Body of Knowledge (PMBOK® guide), 5th edn. Project Management Institute, Newtown Square (2013)
9. Morris, P.W.G., et al.: Exploring the role of formal bodies of knowledge in defining a profession - the case of project management. Int. J. Project Manag. 24(8), 710–721 (2006)
10. APM, Association for Project Management: APM Body of Knowledge, 6th edn. APM Books, London (2012)
11. Murray, A., et al. (eds.): Managing Successful Projects with PRINCE2, 5th edn. Office of Government Commerce, Norwich (2009)
12. PMAJ: P2M - A Guidebook of Project & Program Management for Enterprise Innovation, vol. 1. Project Management Association of Japan (2005)
13. Muller, R., Turner, R.: The influence of project managers on project success criteria and project success by type of project. Eur. Manag. J. 25(4), 298–309 (2007)
14. Edwards, C.D.: The meaning of quality. In: Quality Progress (1968)
15. Broh, R.A.: Managing Quality for Higher Profits: A Guide for Business Executives and Quality Managers. McGraw-Hill, New York (1982)
16. Dixon, D.: Integrated support for project management. In: Proceedings of the 10th International Conference on Software Engineering. IEEE Computer Society Press (1988)
17. Fernandes, G., Ward, S., Araújo, M.: Identifying useful project management practices: a mixed methodology approach. Int. J. Inf. Syst. Project Manag. 1(4), 5–21 (2013)
18. Besner, C., Hobbs, B.: The perceived value and potential contribution of project management practices to project success. Project Manag. J. 37, 37–48 (2006)
19. Besner, C., Hobbs, B.: Contextualized project management practice: a cluster analysis of practices and best practices. Project Manag. J. 44(1), 17–34 (2013)
20. Saunders, M., Lewis, P., Thornhill, A.: Research Methods for Business Students, 5th edn. Pearson Education Limited, Edinburgh (2009)
21. Langford, J., McDonagh, D.: Focus Groups: Supporting Effective Product Development. Taylor & Francis, New York (2003)

TourismShare

Nuno Areias¹ and Benedita Malheiro¹,²(✉)

¹ ISEP/IPP – School of Engineering, Polytechnic Institute of Porto, Porto, Portugal
{1090430,mbm}@isep.ipp.pt
² INESC TEC, Porto, Portugal

Abstract. TourismShare is a context-aware recommendation platform that allows tourists to share private locations and videos and obtain recommendations regarding potential Points of Interest (POI), including complementary articles and videos. The user experience is enhanced with the addition of audio immersion during video playback and automatic recommendation features. The developed system consists of a distributed application comprising a front-end client module (Android application), which provides the user interface and directly consumes external support services, and the back-end server module, which includes the central database and recommendation service. The communication between the client and server modules is implemented by a dedicated application level protocol. The recommendations, which are based on the user context (user position, date and current time, past ratings and user activity level), are provided on request or automatically, whenever POI of great relevance to the user are found. The recommended POI are presented on a map, showing the timetable together with complementary articles and videos. The audio immersion at video playback time takes into account the weather conditions of the video recording and the user activity level.

Keywords: Context-aware · Recommendation · Immersion · Tourism

1 Introduction

Information and Communication Technologies (ICT) have greatly contributed to the proliferation of data sources and resource sharing systems. This abundance of information increases the importance of filtering tools such as recommendation systems, which are able to make timely customized suggestions based on the user profile and current context. The main objectives of this project are: (i) to allow the user to share POI and videos; (ii) to recommend POI, articles and videos based on the user's context; and (iii) to provide the user with a richer video playback experience, simulating the recording conditions. This system was inspired by two previous location-based sharing and recommendation projects: (i) the POI sharing and recommendation of Bruyneel et al. (2014) [3]; and (ii) the video sharing and article recommendation of Baecke et al. [1]. The developed system provides POI recommendations together with complementary articles, videos and timetable,


based on the user context information (user position, date and time, past ratings and activity level) on request or automatically. The user immersion at video playback time takes into account the time and weather conditions of the video recording. Specifically, TourismShare improves the tourist experience not only by recommending context-aware POI, articles and videos, but also by allowing the user to decide when and which POI and videos to share and by enriching video playback with audio immersion. This paper is organised in four sections. This initial section introduces the problem, the goal and the structure of the paper. The second section provides the background on context-awareness and immersiveness. The third section describes the TourismShare system. Finally, the fourth section draws the conclusions and suggests future developments.

2 Background

Recommendation systems are information filtering systems which suggest items or predict the ratings a user would attribute to items based on his/her profile and context. Recommendation systems are becoming increasingly popular since they filter large quantities of data to find and suggest items of interest to users. Nowadays, it is possible to find recommendation systems in e-commerce, e-health, tourism or technology enhanced learning application domains [3,6,12].

2.1 Context-Awareness

Context includes any information relating to the user which is relevant to the purpose of the application, including demographic, location, time, preference or social data. A context-aware recommender system predicts user preferences by incorporating contextual information into the recommendation process [4], allowing it to generate more relevant recommendations [5]. The rating depends on the user, the item and the context, where the context can be the combination of different types of contextual information such as time, location or demographics. In terms of contextual information, TourismShare uses the current user position, activity level and time. The position is used to filter the POI based on the distance between the user and the POI, excluding all POI outside the maximum distance radius. The maximum distance depends on the current user activity level. The activity level, which is derived from the average user velocity during the last 5 min, is expressed on a 0 to 5 scale corresponding to unknown, inactive, low activity level, medium activity level and high activity level. The time is used to recommend, based on the opening hours, only open POI.
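As an illustration of this contextual filtering, the sketch below derives an activity level from the recent average velocity and maps it to the maximum search radius later used by the spatial filter (Sect. 3.1). The radius values are those reported in Sect. 3.1; the velocity thresholds and the function names are assumptions made only for this example.

```python
# Search radii per activity level, as reported in Sect. 3.1 (in km).
RADIUS_KM = {"unknown": 0.75, "inactive": 0.75,
             "low": 1.50, "medium": 2.00, "high": 3.00}


def activity_level(avg_velocity_kmh, has_recent_fix=True):
    # Classify the user activity from the average velocity of the last 5 min.
    # The velocity thresholds below are illustrative assumptions only.
    if not has_recent_fix:
        return "unknown"
    if avg_velocity_kmh < 0.5:
        return "inactive"
    if avg_velocity_kmh < 4:       # roughly walking pace
        return "low"
    if avg_velocity_kmh < 10:      # brisk walking or jogging
        return "medium"
    return "high"


def max_search_radius_km(avg_velocity_kmh, has_recent_fix=True):
    return RADIUS_KM[activity_level(avg_velocity_kmh, has_recent_fix)]


print(max_search_radius_km(6.0))   # -> 2.0 (medium activity)
```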

2.2 Immersion

Immersion is a subjective experience of total involvement in a given environment, which can be induced by using surround sound sensory experiences, high resolution and augmented reality [10]. The usage of immersion features such as


different media elements highly enriches the user experience by creating emotional reactions. An example of the usage of such features is the video immersion system proposed by Ramalho et al. (2013) [9], which allows users to capture, search, publish, view and synchronize interactive TV items. During the video capturing process the system collects metadata about the user position, velocity, orientation and current weather conditions. During video playback, this information is used to implement different visual, auditory and tactile perception features. We adopt a similar approach to implement audio immersion during video playback based on the collection of weather information – used to reproduce weather sounds – and smartphone sensory data – used to reproduce a soundtrack matching the user activity level. The application selects the soundtrack from a playlist with soundtracks matching the pre-defined levels of user activity, i.e., from classical music (when the user is inactive) to rock/metal (when the user is highly active).

2.3 Related Work

Table 1 shows the comparison of the six tourism location-based context-aware recommender systems which were studied: ErasmusApp [1,3], Cinemappy [7], MoreTourism [11], Rerex [2], SPLIS [13] and CAMR [8].

Table 1. Comparison of related systems, assessing ErasmusApp, Cinemappy, MoreTourism, Rerex, SPLIS and CAMR against the contextual features Position, Time, Company, Weather, Crowdedness, Temperature, Activity, Lighting, User Mood, Noise and Immersion.

This analysis shows the relevance of this project. Although some of these systems implement complex context-aware recommendation algorithms and rely on high volumes of contextual data, none enriches the user experience with immersive features.

3 TourismShare

TourismShare is a distributed application composed of three main modules: front-end, back-end and external services (Fig. 1). The front-end module provides the user interface and includes a local (SQLite) database to store private locations and messages. The back-end module manages the user access to the central database and generates recommendations when requested. The communication between the client and server modules is achieved through a dedicated application level protocol. The external services provide map rendering, video upload and playback, weather data and complementary Wikipedia articles.

Fig. 1. TourismShare architecture

3.1 Back-end

The back-end provides two services: persistent data storage and personalised recommendations. The central database is a MySQL database composed of multiple tables, including User, Publisher, Rating, Location, Subcategory, Category, Schedule, LocationSchedule, Report, ReportCat, Device, UserDevice, Videos, WeatherData, Music and several tables for the smartphone sensor data. The recommendation service is based on the available data: (i) spatial information – geodetic coordinates of the user; (ii) temporal information – date and time of the search; (iii) user information – level of activity; and (iv) device information – brand, model, size, etc. The recommendation algorithm is a cascade of four filters governed by the following rules: (i) a filter is only applied if the number of input items is above five; and (ii) the outcome of a filter is only used if the number of output items is above three. In the latter case, the recommendation service uses the outcome of the previous filter.
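The two cascade rules can be captured in a few lines of code. The sketch below is only an illustration of the stated thresholds (more than five input items to apply a filter, more than three output items to keep its outcome); the function names are assumed and the actual filters are passed in as plain callables.

```python
def apply_cascade(items, filters, min_input=5, min_output=3):
    """Apply a cascade of filters following the two rules above: a filter runs
    only if it receives more than `min_input` items, and its outcome is kept
    only if it returns more than `min_output` items; otherwise the outcome of
    the previous stage is carried forward."""
    current = items
    for f in filters:
        if len(current) <= min_input:
            break                       # too few items left: stop filtering
        filtered = f(current)
        if len(filtered) > min_output:
            current = filtered          # accept this stage's outcome
        # else: keep `current` (the previous stage's outcome) for the next stage
    return current


# Usage with placeholder filters (pre, spatial, temporal and prediction filters):
# recommended = apply_cascade(candidates, [pre_filter, spatial_filter,
#                                          temporal_filter, prediction_filter])
```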


The first filter – the pre-filter – is used for the recommendation of POI and videos. It collects all items which satisfy the following criteria: (i) were published by other users (mandatory); (ii) are located within a radius of 5 km from the search site (mandatory); (iii) match the category or subcategory (optional); and (iv) are new sites (optional). The second filter – the spatial filter – narrows the radius of search based on the user activity level: unknown or inactive – 0.75 km; low activity level – 1.50 km; medium activity level – 2.00 km; and high activity level – 3.00 km. The third filter – the temporal filter – extracts all sites which are open at the moment of the search. The last filter – the prediction filter – is a rating prediction algorithm which returns the top five rated locations for the user. The filter considers three cases: (i) the user has previously rated the site; (ii) the user has never rated the site, but has rated sites of the same category; and (iii) the user has never rated the site nor sites of the same category.

If the user has previously rated the site, the algorithm first applies Eq. (1) to determine the reputation of the site's publisher $Rep(p, l)$, where $p$ is the publisher, $l$ is the site, $l_{pub}$ is the number of sites $p$ published and $l_{rep}$ the number of active reports associated with these sites. Users, as site publishers, have a dynamic reputation ranging from 0% to 100%. Secondly, it uses Eq. (2) to calculate the average rating $\overline{Rating}(l, c)$ of the site $l$ of category $c$, where $n$ is the number of users who rated the site and $Rating(u, l, c)$ is the rating user $u$ assigned to the site. Finally, it applies Eq. (3) to predict the user rating for the site $\widehat{Rating}(u, l, c)$, where $u$ represents the user, $Rating(u, l, c)$ is the previous rating the user assigned to the site, $Rep(p, l)$ is the reputation of the site publisher, $\overline{Rating}(l, c)$ is the average rating of the site and $\beta = 80\%$.

$$Rep(p, l) = \frac{l_{pub} - l_{rep}}{l_{pub}} \tag{1}$$

$$\overline{Rating}(l, c) = \frac{1}{n} \sum_{u=1}^{n} Rating(u, l, c) \tag{2}$$

$$\widehat{Rating}(u, l, c) = \beta\, Rating(u, l, c) + (1 - \beta)\, Rep(p, l) \times \overline{Rating}(l, c) \tag{3}$$

If the user has never rated the site, but has rated sites of the same category, the algorithm first applies Eq. (4) to determine the average rating $\overline{Rating}(u, c)$ user $u$ has given to sites of category $c$, where $n$ is the number of sites of category $c$ the user has rated and $Rating(u, l, c)$ is the rating user $u$ assigned to site $l$ of category $c$. Then, it uses Eq. (5) to predict the site rating for the user $\widehat{Rating}(u, l, c)$, where $\overline{Rating}(u, c)$ is the average rating user $u$ has given to sites of category $c$, $Rep(p, l)$ is the reputation of the site publisher given by Eq. (1), $\overline{Rating}(l, c)$ is the average rating of the site given by Eq. (2) and $\beta = 60\%$.

$$\overline{Rating}(u, c) = \frac{1}{n} \sum_{l=1}^{n} Rating(u, l, c) \tag{4}$$

$$\widehat{Rating}(u, l, c) = \beta\, \overline{Rating}(u, c) + (1 - \beta)\, Rep(p, l) \times \overline{Rating}(l, c) \tag{5}$$

If the user has never rated the site nor sites of the same category, the algorithm applies Eq. (6) to predict the site rating for the user $\widehat{Rating}(u, l, c)$, where $Rep(p, l)$ is the reputation of the site publisher given by Eq. (1), $\overline{Rating}(l, c)$ is the average rating of the site $l$ given by Eq. (2) and $\beta = 40\%$.

$$\widehat{Rating}(u, l, c) = \beta\, \overline{Rating}(l, c) + (1 - \beta)\, Rep(p, l) \times \overline{Rating}(l, c) \tag{6}$$

In this case, the weight of the publisher is higher since the active user does not contribute any input to the rating prediction. In the case of video recommendation, the algorithm applies an additional filter to select popular videos based on the ratio between the number of views and the time elapsed since the date of the video publication. New videos start with a default ratio of 0.3. Finally, the video soundtracks are selected according to the user activity level.
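Putting the three cases of the prediction filter and the video popularity ratio together, a possible transcription of Eqs. (1) to (6) is sketched below. The equations and the β values come from the text, while the helper names, the choice of days as the unit of elapsed time and the treatment of brand-new videos are assumptions made for illustration.

```python
def reputation(l_pub, l_rep):
    # Eq. (1): publisher reputation, between 0 and 1.
    return (l_pub - l_rep) / l_pub


def average(values):
    # Eqs. (2) and (4): plain average of the known ratings.
    return sum(values) / len(values)


def predict_rating(user_rating, user_cat_avg, site_avg, rep):
    # Eqs. (3), (5) and (6): the beta weight and the first term are chosen
    # according to the information available for the active user.
    if user_rating is not None:          # case (i): the user rated this site before
        beta, first = 0.80, user_rating
    elif user_cat_avg is not None:       # case (ii): the user rated sites of this category
        beta, first = 0.60, user_cat_avg
    else:                                # case (iii): no previous information at all
        beta, first = 0.40, site_avg
    return beta * first + (1 - beta) * rep * site_avg


def video_popularity(views, days_since_publication):
    # Extra video filter: ratio of views to elapsed time (days are assumed here);
    # brand-new videos start with the default ratio of 0.3.
    if days_since_publication <= 0:
        return 0.3
    return views / days_since_publication


# Example: the user never rated this site, but rated sites of its category with 4.2 on average.
print(predict_rating(user_rating=None, user_cat_avg=4.2,
                     site_avg=3.8, rep=reputation(l_pub=10, l_rep=1)))
```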

3.2 External Services

The Android mobile application interacts directly with the Google Maps, GeoNames, OpenWeatherMap and YouTube services. The Google Maps service provides a map-based graphical user interface showing the user position and the various private or recommended POI. The interaction with the Google Maps service, including connecting, loading, map displaying and interacting, is supported by the Google Maps Android API and requires authentication. The GeoNames service provides Wikipedia articles related to the area surrounding the user location. The interaction is supported by the GeoNames API. The service requires authentication and provides a maximum of 40 English articles related to POI located in the region centred on the user location and within the specified radius. The OpenWeatherMap service returns the weather of a location. The application accesses the service during video recordings to obtain the meteorological information, e.g., humidity, pressure, wind or temperature. The YouTube service requires authentication and allows video storage and sharing. The application uses the YouTube Direct Lite API to interact with the YouTube service [1].

3.3 Front-end

The front-end comprises the client Android application and the local database, and interacts with the external services and the back-end. The local SQLite database is composed of the Location, Subcategory, Category, Message and MessageCat tables. The client application includes three independent Android services: (i) the MyPVTService, which is responsible for obtaining the position, velocity and time (PVT) of the user; (ii) the RecommendService, which is responsible for the automatic recommendation of sites, videos and articles; and (iii) the MusicService, which is responsible for the audio immersion. These services are launched at start-up and remain active until they are disabled by the user in the application settings (RecommendService and MusicService).


MyPVTService provides the user's PVT. This event-driven service, whenever there is a location-related event, verifies the source and the specified data refresh period. While the network provider has a refresh period of 90 s, the GPS sensor updates every 30 s. The GPS sensor events are granted higher priority because they have greater accuracy. Regardless of the source (GPS sensor or network), if the refresh period has been exceeded (30 s or 90 s), the service collects new information. The velocity is stored in a First In, First Out (FIFO) buffer queue of size 10 used to establish the user's activity level. If the update period is higher than 20 min, the service classifies the activity level as unknown.

RecommendService supplies POI recommendations. Every 5 min, this service calculates the distance between the current and the previously recorded user location, assuming the user is authenticated, has the automatic recommendation mode on and is connected to the Internet. If this distance is greater than 300 m, it invokes the back-end recommendation service, avoiding the generation of redundant recommendations. Whenever new recommendations are generated, the system checks them and only notifies the user of the new items.

MusicService is responsible for the audio immersion. Assuming the user is authenticated and has access to the Internet, the application requests the playlists from the back-end server. If the user's activity level is unknown, the service checks periodically (every 3 min) for changes. Once the level of activity is known, the service starts playing the corresponding playlist. There are four playlists associated with the four pre-defined levels of activity (unknown, low, medium, high). The audio immersion, once started, rechecks at the end of each song the user level of activity and, depending on the outcome, may start playing another playlist.
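A compact way to express the RecommendService behaviour described above is sketched next, assuming hypothetical helpers for authentication, connectivity and the back-end call; the 5-minute period and the 300 m displacement threshold come from the text, while everything else is illustrative.

```python
from math import radians, sin, cos, asin, sqrt

RECHECK_PERIOD_S = 5 * 60          # the service wakes up every 5 minutes
MIN_DISPLACEMENT_M = 300           # minimum movement before new POI are requested


def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two positions.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))


def recommend_tick(state, current_pos, authenticated, auto_mode_on, online,
                   request_recommendations):
    # One 5-minute tick of the RecommendService. `state` keeps the last position
    # for which recommendations were requested; `request_recommendations` stands
    # for the back-end call and is a hypothetical placeholder.
    if not (authenticated and auto_mode_on and online):
        return None
    last = state.get("last_pos")
    if last is not None and haversine_m(*last, *current_pos) <= MIN_DISPLACEMENT_M:
        return None                 # the user barely moved: avoid redundant requests
    state["last_pos"] = current_pos
    return request_recommendations(current_pos)
```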

3.4 Features

TourismShare has two operation modes: stand-alone and connected. In standalone or private mode, the user can only use a subset of the system features since the front-end application is disconnected from the back-end. They include user authentication, user registration, user data update, private POI edition and private POI rating. In connected or shared mode, the front-end interacts with the back-end, requiring successful user authentication. It provides access to the full set of implemented functionalities, including user authentication, user registration, user data update, POI edition, POI rating, POI sharing, POI reporting, video playing, video recording & sharing, on-request POI recommendation and automatic POI recommendation. Automatic Recommendation. The user is notified every time a new POI is recommended (Fig. 2a). This feature can be enabled/disabled in the application settings. On Request Recommendation. The user can request POI recommendations at any time and specify: (i ) the type(s) of POI (places, articles, videos); (ii ) the category and subcategory of the places; (iii) the time (current or another);


Fig. 2. TourismShare features: (a) automatic recommendation; (b) on-request recommendation; (c) show recommended article; (d) show recommended location; (e) video sharing; (f) audio immersion

and (iv) the location (current or another). After submitting the request, the user is redirected to a map of his surroundings showing the recommended POI with different markers depending on the type of POI and, in the case of a place, the category and subcategory (Fig. 2b). When the user clicks on an article marker, a new screen displays a summary of the Wikipedia article together with a hyperlink to the full article (Fig. 2c). If the user clicks on a place marker, a menu appears allowing the user to: (i) report the location; (ii) rate the location; (iii) add the location to the list of favourites; and (iv) show the location schedule (week/weekend) (Fig. 2d).


Video Sharing. This feature uses YouTube and, thus, requires the user to be logged into a Google account. During the recording process, the application collects the weather and sensor data which will be used for the audio immersion during the video playback. After recording, naming and reviewing a video, the user can upload it to YouTube (Fig. 2e).

Audio Immersion. During video playback, the application accesses the weather information gathered at the time of the video recording and the current activity level of the user. It plays a background wind sound and a soundtrack according to the current user activity level and the wind velocity at the time of recording. There are four playlists, one for each user activity level (stopped, low, medium, high). Figure 2f illustrates the audio immersion. The user can enable/disable this feature in the application settings.

3.5 Tests and Results

In order to characterize the execution of the different features, a performance test was made under the following conditions: (i) the front-end device was a Motorola Moto G with a quad-core 1.2 GHz Cortex-A7 Central Processing Unit (CPU), 1 GB of random-access memory (RAM) and embedded accelerometer and proximity sensors; (ii) the back-end platform was a Compaq Presario laptop with an AMD Sempron M120 / 2.1 GHz CPU and 4 GB of RAM; and (iii) an optical fibre Internet connection which, during the data transmission test, presented an average download rate of 19.08 Mb/s, an average upload rate of 1.67 Mb/s and an average latency of 81 ms. Table 2 displays the obtained results.

Table 2. Average elapsed time, upload and download data per feature

Feature                 Time (s)   Upload (B)   Download (B)
User authentication     1.33       856          658
User registration       1.77       1144         660
User data update        1.27       1053         663
Rate place              1.71       774          653
Report place            1.65       783          578
Share place             1.91       1274         664
Edit place              1.88       1235         658
Play video              1.21       795          576
Share video             2.55       3764         576
On-req. POI recomm.     2.01       1124         4367
Autom. POI recomm.      2.32       1174         4567

During the test, the average elapsed time and the data exchanged between the client device and the server (upload) and between the server and the client device (download) for each feature were determined based on ten measurements.

4 Conclusions and Future Work

TourismShare is a tourism context-aware platform for Android smartphone users. In terms of contextual data, it uses: (i) the current location, level of activity, device and time of the user to make recommendations; and (ii) weather data and the current activity level of the user for audio immersion. TourismShare improves the tourist experience by recommending context-aware POI, articles and videos, by allowing the tourist to decide when and which personal data (POI and videos) to share and by enriching video playback with audio immersion. In the future, the recommendation algorithm can be refined by using information it currently collects, e.g., the device characteristics, and by integrating social network data. The video immersion system, which is currently based on weather information, can be extended by using the local sensor information already collected and stored. Finally, the current audio immersion, which is based on the user activity level, may evolve into a music recommendation system based also on the user emotional state and profile.

Acknowledgements. This work was partially financed by the European Regional Development Fund (ERDF) through the Operational Programme for Competitiveness and Internationalisation (COMPETE Programme), within project «FCOMP-01-0202-FEDER-023151» and project «POCI-01-0145-FEDER-006961», and by national funds through the Fundação para a Ciência e Tecnologia (FCT) - Portuguese Foundation for Science and Technology - as part of project UID/EEA/50014/2013.

References

1. Baecke, B.: Context-Aware Video-Sharing Android Application. Master's thesis, Instituto Superior de Engenharia do Porto, Portugal (2014)
2. Baltrunas, L., Ludwig, B., Peer, S., Ricci, F.: Context-aware places of interest recommendations for mobile users. In: CEUR Workshop Proceedings, vol. 740 (2011)
3. Bruyneel, K., Malheiro, B.: ErasmusApp: a location-based collaborative system for erasmus students. In: De Strycker, L. (ed.) ECUMICT 2014. Lecture Notes in Electrical Engineering, vol. 302, pp. 35–47. Springer, Heidelberg (2014)
4. Adomavicius, G., Sankaranarayanan, R., Sen, S., Tuzhilin, A.: Incorporating contextual information in recommender systems using a multidimensional approach. ACM Trans. Inf. Syst. (TOIS) 23(1), 103–145 (2005)
5. Hong, J.-Y., Suh, E.-H., Kim, S.J.: Context-aware systems: a literature review and classification. Expert Syst. Appl. 36(1), 8509–8522 (2009)
6. Melville, P., Sindhwani, V.: Recommender systems. In: Sammut, C., Webb, G.I. (eds.) Encyclopedia of Machine Learning, pp. 829–838. Springer, Boston (2010). doi:10.1007/978-0-387-30164-8_705. ISBN 978-0-387-30164-8
7. Ostuni, V.C., Noia, T.D., Mirizzi, R., Romito, D., Sciascio, E.D.: Cinemappy: a context-aware mobile app for movie recommendations boosted by DBpedia. In: CEUR Workshop Proceedings, vol. 919 (2007)
8. Otebolaku, A.M., Andrade, M.T.: Context-aware media recommendations for smart devices. J. Ambient Intell. Humanized Comput. 6(1), 13–36 (2015)
9. Ramalho, J., Chambel, T.: Immersive 360° mobile video with an emotional perspective. In: Proceedings of ImmersiveMe 2013. ACM (2013)
10. Ramalho, J., Chambel, T.: Windy sight surfers: sensing and awareness of 360° immersive videos on the move. In: Proceedings of ImmersiveMe 2013. ACM (2013)
11. Rey-López, M., Barragáns-Martínez, A.B., Peleteiro, A., Mikic-Fonte, F.A., Burguillo, J.C.: moreTourism: mobile recommendations for tourism. In: Proceedings of ICCE 2011. IEEE (2011)
12. Verbert, K., Manouselis, N., Ochoa, X., Wolpers, M., Drachsler, H., Bosnic, I., Duval, E.: Context-aware recommender systems for learning: a survey and future challenges. IEEE Trans. Learn. Technol. 5(4), 318–335 (2012)
13. Viktoratos, I., Tsadiras, A., Bassiliades, N.: Geosocial SPLIS: a rule-based service for context-aware point of interest exploration. In: Challenge+DC@RuleML (2014)

Bee Swarm Optimization for Community Detection in Complex Network

Youcef Belkhiri(✉), Nadjet Kamel, Habiba Drias, and Sofiane Yahiaoui

University of Science and Technology Houari Boumediene, Bab Ezzouar, Algeria
[email protected], [email protected], [email protected], [email protected]

Abstract. The study of complex network topology has triggered the interest of many scientists in recent years. It has been widely used in different fields such as protein function prediction, web community mining and link prediction. This paper proposes an algorithm based on BSO (bee swarm optimization) for the community detection problem, which we call BSOCD. This algorithm takes modularity Q as the objective function and k bees to create a search area. Additionally, the algorithm uses a new random strategy to generate the reference solution and a taboo list to avoid cycles during the search process. We validate our algorithm by testing it on real networks. Experiments on these networks show that our proposed algorithm obtains better or competitive results compared with some other representative algorithms.

Keywords: Networks · Community detection · Modularity Q · Bee swarm optimization · Evolutionary algorithm

1 Introduction

Researchers have argued that a variety of complex systems can be modelled as complex networks. For instance, the World Wide Web is a network of web pages interconnected by hyperlinks, social networks represent people as nodes and their relationships as edges, and biological networks usually represent biochemical molecules as nodes and the reactions between them as edges. Recently, the majority of the research has placed a major focus on understanding the evolution and the organization of such networks, as well as the effect that network topology exerts on the dynamics and behaviour of the system [1–4]. Finding community structure in networks is another step towards understanding the complex systems they represent. The study of community structure in networks has a long history. It is similar to the well-studied problems of graph partitioning in graph theory and computer science and of hierarchical clustering in sociology [5,6]. Generally, a community is a subgraph of a network whose nodes are similar to each other and dissimilar to nodes outside the subgraph. While it can be argued that communities can overlap, we restrict ourselves to finding disjoint communities. The latter has been the subject of a large body of research in the literature.


Newman and Girvan have proposed a metric for measuring the quality of a detected community structure, called modularity Q [7]. The higher Q is, the better the quality. Modularity has known deficiencies in certain realistic situations (such as the resolution limit [8]), which can be partially overcome by different techniques [9,10] or by alternative measures [11]. Nevertheless, maximizing this quality measure is the most used and widely accepted paradigm for detecting communities in networks. Consequently, we choose modularity to measure the quality of the detected communities. Optimizing modularity is an NP-hard problem, for which heuristic methods have been suggested [12,13]. In this paper, we propose an algorithm based on the bee swarm optimization algorithm introduced by Drias [14]. The outcomes of applying BSO to several other problems, such as satisfiability, association rules and clustering, have been impressive [14,15]. Our algorithm uses modularity as the objective function. The experiments show that BSO is effective in detecting communities in complex networks. The remainder of this paper is organised as follows: Sect. 2 introduces some definitions, including the modularity function. Sect. 3 describes related research on community detection. Sect. 4 gives general information about the BSO algorithm. Sect. 5 presents the motivations and the details of our proposed algorithm. Sect. 6 analyzes the obtained results, while the last section serves as a conclusion.

2 Community Detection

The community detection problem in complex networks is a hot research topic. A network can be represented as a graph in which individuals are modelled by vertices and the interactions between them are represented by edges. The aim of detecting communities is to find a set of p modules that facilitate the interpretation of the community structure and the determination of each individual's role, where the number p is unknown at the beginning. This can be viewed as partitioning the graph into p subgraphs that satisfy a given quality measure of communities, called the objective function. This problem is NP-hard and can be formalized as an optimisation problem. Recently, the community detection problem has been treated as a multi-objective problem in several works, such as [16,17]. However, single-objective optimization for community detection is still worthwhile and widely applied. Hence, we are interested in single-objective optimization. Many objective functions have been proposed in the literature as attempts to evaluate the quality of community structures. In 2004, Newman put forward a quantitative measure called modularity Q. Since then, the optimization of this metric as the objective function has become the mainstream method for detecting community structure in complex networks, used for instance in CDHC [18], BCALS [13], FN [19], and LPAm [20].

Bee Swarm Optimization for Community Detection in Complex Network

75

Let N = (V, E) be an undirected network, where V is a set of vertices assumed to be divided into a number of communities and E is a set of edges (links) connecting pairs of vertices. The function Q is defined as follows [7]:

Q = \frac{1}{2m} \sum_{ij} \left( A_{ij} - \frac{k_i k_j}{2m} \right) s(c_i, c_j)   (1)

where A_{ij} denotes the adjacency matrix of the network N: if nodes i and j are connected, then A_{ij} = 1, otherwise A_{ij} = 0; m = \frac{1}{2} \sum_{ij} A_{ij} is the total number of edges in the network; k_i = \sum_{j} A_{ij} represents the degree of vertex i; the function s(u, v) is equal to 1 if u = v and 0 otherwise; and c_i denotes the community to which vertex i is assigned.
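As a concrete illustration of Eq. (1), the following minimal Python sketch computes Q for a given partition; it assumes an undirected NetworkX graph and a dictionary mapping each node to its community label, and the function and variable names are ours, not taken from the paper.

import networkx as nx

def modularity_q(graph: nx.Graph, communities: dict) -> float:
    # Newman-Girvan modularity Q of Eq. (1) for a node -> community mapping.
    m = graph.number_of_edges()
    if m == 0:
        return 0.0
    degree = dict(graph.degree())
    q = 0.0
    for i in graph.nodes():
        for j in graph.nodes():
            if communities[i] != communities[j]:
                continue  # s(c_i, c_j) = 0, the pair does not contribute
            a_ij = 1.0 if graph.has_edge(i, j) else 0.0
            q += a_ij - degree[i] * degree[j] / (2.0 * m)
    return q / (2.0 * m)

# Example: the karate club graph split into the two factions recorded by Zachary.
g = nx.karate_club_graph()
partition = {node: g.nodes[node]["club"] for node in g.nodes()}
print(round(modularity_q(g, partition), 4))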

3 Related Works

Many approaches, in fields as diverse as physics, statistics and data mining, have been proposed to uncover community structures. A presentation of the main methods can be found in a recent survey [21]. Graph partitioning [22] is one of the earliest proposed methods and is still frequently used. However, this type of method is far from efficient for the community detection problem, since it requires the number of communities, and in some cases even their sizes, to be provided in advance, which is hard to know beforehand. Hierarchical clustering techniques rely on a similarity measure between vertices to form communities. Once the similarity measure is chosen, it is computed between each pair of vertices to identify groups with high similarity. Hierarchical clustering falls into two categories. The first is the agglomerative approach, in which vertices iteratively form communities if their similarity is sufficiently high; the algorithm starts from the vertices as separate clusters and ends with the whole graph as a single cluster. The second is the divisive approach, where vertices are iteratively separated by removing edges between them if their similarity is low; the algorithm takes the opposite direction of the agglomerative one, starting from the graph as one cluster and ending with clusters containing similar vertices, as in [7], where the authors iteratively remove the edges with the highest betweenness to split the network into communities. Moreover, optimization-based methods are considered the main category. Their aim is to maximize a quality function. Modularity, introduced in [7], is the most used and the best-established quality function for optimization-based community detection. In the literature, optimization-based methods can be divided into two categories: single-objective and multi-objective optimization. In both categories, evolutionary algorithms have proved to be efficient and effective. Hafez et al. proposed in [23] an artificial bee colony (ABC) algorithm which employs three types of bees to solve the community detection problem and showed how the algorithm's performance is directly influenced by the choice among popular community quality measures. In [24], Jin et al. present a genetic algorithm that employs


a graph-based representation (LAR). In this approach, a candidate solution is represented as a chromosome with n genes, where n is the number of nodes in the network, and each gene stores a value that represents the identifier of another node. Jin et al. use modularity as the single objective function to detect community structures in complex networks. He et al. propose in [25] an ant colony algorithm to discover communities in large-scale networks, which also takes modularity Q as the objective function. This algorithm differs from other ant algorithms in the way the ants are used. It iteratively applies two phases called single-layer ant colony optimization (SACO) and multi-layer ant colony optimization (MACO). SACO initializes each vertex as a community and randomly distributes some ants on the network; each ant freely crawls and decides whether its current vertex joins the community of its previous vertex, until no more vertices in the network change their label. MACO then takes the results of the first phase and tries to reach a higher level: it considers each community as a vertex and the sum of the weights of the edges between any two communities as the weight between them. This phase thus builds a higher-level network from the previous partition in order to execute SACO again, until the modularity reaches its maximum. In addition, Cai et al. present in [26] a survey on evolutionary algorithms for network community detection which also covers multi-objective optimization. Evolutionary algorithms are more powerful than simple heuristic methods, which is why we chose this category, and in particular the BSO algorithm.

4 Bee Swarm Algorithm

A metaheuristic is a higher-level heuristic designed to find, generate or select a heuristic that may provide a sufficiently good solution to an optimization problem such as community detection. The metaheuristic we use in this paper is Bee Swarm Optimization (BSO), proposed by Drias in [14]. BSO is inspired by the behaviour of bees: it is based on a swarm of artificial bees cooperating to reach a good source of food (a good solution). At the beginning, a bee named BeeInit flies to find a solution with good features, called the reference solution Sref. After that, by applying a flip strategy, other solutions are derived from Sref. The set of these solutions forms the search area. Then, to explore this search area, each bee considers Sref as the starting point of its search to reach another solution. Once a bee finishes producing its solution, it communicates with the other bees through a table named the dance table, in which the best solution is stored. This solution becomes the new Sref for the next iteration. Finally, to avoid cycles, each reference solution is stored in a taboo list. The algorithm stops when either the optimal solution is found or the maximum number of iterations is reached. In other words, the bee swarm algorithm comprises the following three principal stages:


1. BeeInit flies randomly to generate the initial solution;
2. the other bees take the initial solution as a starting point to explore other solutions, forming a set of solutions named the search area;
3. according to the solutions found in the search area and their qualities, the best solution is selected to become the new initial solution.

BSO is a population-based combinatorial optimization algorithm, in which the solutions (the food sources) are modified by artificial bees acting as variation operators. The goal of these bees is to find the food sources with the largest quantity of nectar. The bee swarm algorithm has been applied to several optimization problems and has demonstrated an ability to deal with high-dimensional problems [27,28].

5 Bee Swarm Optimization for Community Detection (BSOCD)

In this section we present our community detection algorithm based on bee swarm optimization, called BSOCD. This method relies on the BSO metaheuristic to discover community structure. Before giving the details of the components of our approach, we show how we adapt the BSO algorithm to the community detection problem.

5.1 Encoded Form

According to the nature of the community detection problem, a solution is a partitioning of the nodes V of a network G. Each partition contains similar nodes and represents a community. A possible solution is defined by the number of communities and by the distribution of the nodes among these communities. At the beginning of the algorithm, a solution is created to represent the BeeInit source of nectar. To represent this solution we choose a string-based scheme, which makes it easy to see how the network is divided into several communities, as shown in Fig. 1(b). The string represents the assignment of each node in the network to a community. The encoding of each solution is based on community indices: the positions are the labels of the nodes and the value at each position indicates to which community the node belongs. To generate this solution, we initially suppose that there is only one community m1; the number of communities increases during the generation of the solution. First, we pick a node randomly and assign it to community m1. After that, we take the remaining nodes one by one, in random order, and assign each of them to a community. In this step we have two possibilities: either we create a new community for the node, increasing the number of communities, or we randomly choose one of the existing communities to which the node will be


Fig. 1. Encoding initial bee strategy for a network

assigned. This process continues until there are no nodes left to assign. Our encoding strategy is illustrated in Fig. 1: Fig. 1(a) shows a network of 10 nodes to be partitioned; (b) is an initial bee source for this network and represents the reference solution, where nodes 0, 1 and 2 belong to community number 3, nodes 3, 4 and 5 are in community number 1, nodes 6 and 9 are assigned to community number 2, and finally nodes 7 and 8 form community number 4, so the network is divided into 4 communities; (c) is the structure of the proposed bee source.
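As a hedged illustration of this construction, the following minimal Python sketch builds such a random reference solution; the 0.5 probability of opening a new community, as well as the function and variable names, are assumptions of ours, since the paper only states that both choices are possible.

import random

def random_initial_solution(num_nodes: int) -> list:
    # String-like encoding: position = node identifier, value = community label.
    solution = [0] * num_nodes
    nodes = list(range(num_nodes))
    random.shuffle(nodes)              # remaining nodes are taken in random order
    num_communities = 1
    solution[nodes[0]] = 1             # the first node starts community m1
    for node in nodes[1:]:
        if random.random() < 0.5:      # either open a new community ...
            num_communities += 1
            solution[node] = num_communities
        else:                          # ... or join a randomly chosen existing one
            solution[node] = random.randint(1, num_communities)
    return solution

print(random_initial_solution(10))     # e.g. [3, 3, 3, 1, 1, 1, 2, 4, 4, 2]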

5.2 The Determination of Search Area

The search area is represented by a set of k solutions (k is the number of bees chosen at the beginning of the process). Each of these solutions is the result of changing N/k community labels of the Sref solution, where N is the total number of nodes in the network. To move toward new possible solutions and explore other regions that may furnish a better solution than Sref, the flip algorithm detailed below in Algorithm 2 is applied. Let us take N = 9 nodes and k = 3 bees, as shown in Fig. 2. The nodes are numbered from 1 to 9 and the label of each node is the identifier of the community it belongs to. So, the number of nodes that must change their label in each solution is 3. The positions of the nodes altered by bee i are i, i + k, i + 2k, and so on (i.e., position i + j·k at jump j), where i is the identifier of the bee, from 1 to k. Bees 1, 2 and 3 change the community labels at positions (1, 4, 7), (2, 5, 8) and (3, 6, 9), respectively, by randomly changing the community to which the corresponding node is currently assigned. The choice of the community


Fig. 2. Search area of the initial Sref in Fig. 1(b)

for a node is made randomly, by picking one community label from the existing communities. It is worth noticing that each bee works individually to produce a good solution; thus, each bee must visit its own nodes to avoid conflicts with the others. By the end of the determination of the current search area, all nodes in the network have been visited and their previous labels changed. After that, all bees communicate through the dance list to select the new Sref according to modularity Q. The solution with the maximum value of modularity becomes the new Sref for the next iteration. This solution is stored in the taboo list, which avoids cycles and keeps track of the best solution visited so far.

5.3 Objective Function

As mentioned above, the problem of community detection in complex networks is NP-hard and requires a quality function in order to evaluate and discover good communities. In our algorithm we use the modularity Q defined in formula (1). This quality function assigns a score to each candidate community structure of the network, and the structures are ranked by this score: the higher the score, the better the community structure.

5.4 BSO Community Detection Algorithm

On the basis of the above discussion, this section gives the BSOCD algorithm and the Flip algorithm.


BSOCD Algorithm
Data: network G = (V, E), MaxIter, k   // k is the number of bees
Result: C   // network community structure
i = 0; j = 0;
taboo_list = new List();
Generate randomly the reference solution Sref;
while i < MaxIter do
    taboo_list = taboo_list + Sref;
    while j < k do
        Apply the Flip algorithm;   // local search using Flip to determine the search area
    end
    Sref* = best solution in the search area;   // select the new Sref solution
    Sref = Sref*;
end
C = best solution in the taboo list;
Algorithm 1. BSOCD Algorithm

Flip Algorithm
Data: Sref, Cn, k   // Cn is the set of currently existing communities, k is the number of bees
Result: S*   // set of new neighbour solutions that forms the search area
i = 1;
while i <= k do
    newSolution = Sref;
    j = 1;
    position = i;
    while position < T do   // T is the length of Sref, i.e. the total number of nodes
        Ci = a community label picked randomly from the existing communities Cn;
        newSolution[position] = Ci;   // change the current community label of the node at this position
        position = i + j * k;   // next position to jump to (positions i, i+k, i+2k, ...)
        j = j + 1;
    end
    S* = S* + newSolution;
end
Algorithm 2. Flip Algorithm
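For concreteness, the two algorithms can be sketched in Python as follows; this is a minimal reading of the pseudocode, not the authors' implementation, and it reuses the modularity_q and random_initial_solution helpers sketched earlier.

import random
import networkx as nx

def flip(sref: list, k: int) -> list:
    # Algorithm 2: bee i re-labels the nodes at positions i, i+k, i+2k, ...
    labels = sorted(set(sref))
    search_area = []
    for i in range(k):
        new_solution = list(sref)
        for position in range(i, len(sref), k):
            new_solution[position] = random.choice(labels)
        search_area.append(new_solution)
    return search_area

def bsocd(graph: nx.Graph, max_iter: int = 200, k: int = 15) -> list:
    # Algorithm 1: iterated bee search guided by modularity Q.
    sref = random_initial_solution(graph.number_of_nodes())
    taboo = [sref]
    for _ in range(max_iter):
        candidates = flip(sref, k)
        # dance table: keep the candidate with the highest modularity
        sref = max(candidates, key=lambda s: modularity_q(graph, dict(enumerate(s))))
        if sref not in taboo:
            taboo.append(sref)
    return max(taboo, key=lambda s: modularity_q(graph, dict(enumerate(s))))

g = nx.karate_club_graph()
best = bsocd(g, max_iter=50, k=10)
print(round(modularity_q(g, dict(enumerate(best))), 4))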

6 Experiments

In this section, we evaluate the performance of our proposed algorithm. We start by presenting the benchmark set used for our experiments. Then, we run an experiment with a fixed number of iterations while adjusting the number of bees. Finally, we choose four community detection algorithms, namely CDHC, FN, LPA and Finding and Extracting Communities (FEC) [29], and compare them with our proposed algorithm. CDHC and FN are optimization-based methods, whereas LPA and FEC are heuristic-based methods.

6.1 Data Sets

In order to show the efficiency of our proposed algorithm, we run it on three real networks widely used in the literature: the Zachary Karate Club [30], the Dolphin Network [31] and the Football Network [32].

Karate Club Benchmark. This is a social network of friendships between 34 members of a karate club at a US university in the 1970s. Due to a conflict between the club president and the karate instructor, the network split into two organizations of nearly equal size.

Dolphin Network. This network is based on observations of the behaviour of 62 dolphins living in Doubtful Sound, New Zealand, over a period of seven years. The 62 dolphins form two groups.

Football Network. This is a network of games between American college football teams during the regular season of fall 2000. It contains 115 teams divided into 12 conferences, each with around 8 to 12 teams.

Table 1 summarizes the basic information of the three datasets, including the number of natural communities (NC), the number of nodes, and the number of edges.

Table 1. Real network information
Data sets   NC   Nodes   Edges
Zachary     2    34      78
Dolphin     2    62      159
Football    12   115     618

Fig. 3. Q value according to the number of bees

Figure 3 illustrates how the value of modularity Q evolves on the Zachary club network as the empirical parameters of the bee swarm algorithm are varied. The number of iterations is fixed to 200 and the number of bees varies from 5 to 30. We can notice that the global function Q increases monotonically with the number of bees, provided that the number of bees does not exceed |V|/2 and the number of iterations is small. The total number of nodes |V| in this dataset is 34. Thus, the best solution reached a modularity value of 0.357 with 15 bees. On the contrary, modularity Q decreases when the number of bees is 20, 25 or 30, because with those quantities of bees in a network of 34 nodes the algorithm needs a greater number of iterations to achieve a greater value of modularity. In addition, if the number of bees is too large, the swarm moves away from the region containing Sref, with the risk of losing good solutions. In other words, if the reference solution divides the network into many communities at the beginning while the optimal solution requires the network to be divided into just a few communities, then each bee makes only one jump to a node i and resets the community of that node only; obtaining such a partitioning is difficult with those settings of the number of bees and the number of iterations.

We tested the bee swarm algorithm for community detection on the three real networks mentioned above, adjusting the empirical parameters, namely the number of iterations and the number of bees. The results of our algorithm and of the other methods are presented in Table 2.

Table 2. Clustering quality comparison of BSOCD and other algorithms on real networks
Data sets   CDHC    FN      LPA     FEC     BSOCD
Zachary     0.373   0.3807  0.3646  0.3744  0.4197
Dolphin     0.477   0.5104  0.4802  0.4976  0.514
Football    0.602   0.5497  0.5865  0.5697  0.604

Table 2 presents a numerical quality comparison of our proposed algorithm BSOCD with the four other algorithms. For example, the Zachary club naturally contains two communities of nearly equal size. The community structure detected by BSOCD is shown


Fig. 4. BSOCD community structure of Zachary Karate Club

in Fig. 4, where the network is divided into four communities. This decomposition gives the highest modularity value, Q = 0.4197, as shown in Table 2. In addition, for the dolphin network and the football network our algorithm outperforms the other algorithms. This also shows that BSOCD is very effective on real networks.

7 Conclusion

In this paper we proposed an algorithm based on bee swarm optimization for the community detection problem. The algorithm uses modularity Q as the objective function. Starting from an initial solution called the reference solution, k bees are used to reach the optimum by maximizing the global function Q, where each bee works individually and tries to improve the reference solution in its own region, through its own jumps, to find a new community structure. Our algorithm was tested on three real-world networks, and the experimental results confirm the validity and efficiency of the method. Future work will address the multi-objective community detection problem to further improve the quality of the results.

References 1. Albert, R., Barabási, A.-L.: Statistical mechanics of complex networks. Rev. Mod. Phys. 74(1), 47 (2002) 2. Albert, R., Jeong, H., Barabási, A.-L.: Internet: diameter of the world-wide web. Nature 401(6749), 130–131 (1999) 3. Barabási, A.-L., Albert, R.: Emergence of scaling in random networks. Science 286(5439), 509–512 (1999) 4. Newman, M.E.J.: The structure and function of complex networks. SIAM Rev. 45(2), 167–256 (2003)


5. Johnson, D.S., Garey, M.R.: Computers and Intractability: A Guide to the Theory of NP-Completeness. Wiley Computer Publishing, San Francisco (1979) 6. Scott, J., Carrington, P.J.: The SAGE Handbook of Social Network Analysis. SAGE publications, Thousand Oaks (2011) 7. Newman, M.E.J., Girvan, M.: Finding and evaluating community structure in networks. Phys. Rev. E 69(2), 026113 (2004) 8. Fortunato, S., Barthelemy, M.: Resolution limit in community detection. Proc. Nat. Acad. Sci. 104(1), 36–41 (2007) 9. Berry, J.W., Hendrickson, B., LaViolette, R.A., Phillips, C.A.: Tolerating the community detection resolution limit with edge weighting. Phys. Rev. E 83(5), 056119 (2011) 10. Lambiotte, R.: Multi-scale modularity in complex networks. In: 2010 Proceedings of the 8th International Symposium on Modeling and Optimization in Mobile, Ad hoc and Wireless Networks (WiOpt), pp. 546–553. IEEE (2010) 11. Yang, J., Leskovec, J.: Defining and evaluating network communities based on ground-truth. Knowl. Inf. Syst. 42(1), 181–213 (2015) 12. Blondel, V.D., Guillaume, J.-L., Lambiotte, R., Lefebvre, E.: Fast unfolding of communities in large networks. J. Stat. Mech. Theor. Exp. 2008(10), P10008 (2008) 13. Belkhiri, Y., Kamel, N., Drias, H.: A new betweenness centrality algorithm with local search for community detection in complex network. In: Nguyen, N.T., Trawiński, B., Fujita, H., Hong, T.-P. (eds.) ACIIDS 2016. LNCS (LNAI), vol. 9622, pp. 268–276. Springer, Heidelberg (2016). doi:10.1007/978-3-662-49390-8_26 14. Drias, H., Sadeg, S., Yahi, S.: Cooperative bees swarm for solving the maximum weighted satisfiability problem. In: Cabestany, J., Prieto, A., Sandoval, F. (eds.) IWANN 2005. LNCS, vol. 3512, pp. 318–325. Springer, Heidelberg (2005). doi:10.1007/11494669_39 15. Karaboga, D., Ozturk, C.: A novel clustering approach: artificial bee colony (ABC) algorithm. Appl. Soft Comput. 11(1), 652–657 (2011) 16. Shi, C., Yan, Z., Cai, Y., Bin, W.: Multi-objective community detection in complex networks. Appl. Soft Comput. 12(2), 850–859 (2012) 17. Zhou, Y., et al.: Multiobjective local search for community detection in networks. Soft Comput. 20(8), 3273–3282 (2016) 18. Yin, C., Zhu, S., Chen, H., Zhang, B., David, B.: A method for community detection of complex networks based on hierarchical clustering. Int. J. Distrib. Sens. Netw. 2015, 137 (2015) 19. Newman, M.E.J.: Fast algorithm for detecting community structure in networks. Phys. Rev. E 69(6), 066133 (2004) 20. Barber, M.J., Clark, J.W.: Detecting network communities by propagating labels under constraints. Phys. Rev. E 80(2), 026129 (2009) 21. Fortunato, S.: Community detection in graphs. Phys. Rep. 486(3), 75–174 (2010) 22. Kernighan, B.W., Lin, S.: An efficient heuristic procedure for partitioning graphs. Bell Syst. Tech. J. 49(2), 291–307 (1970) 23. Hafez, A.I., Zawbaa, H.M., Hassanien, A.E., Fahmy, A.A.: Networks community detection using artificial bee colony swarm optimization. In: Kömer, P., Abraham, A., Snášel, V. (eds.) Proceedings of the Fifth International Conference on Innovations in Bio-Inspired Computing and Applications IBICA 2014. Advances in Intelligent Systems and Computing, vol. 303, pp. 229–239. Springer, Heidelberg (2014) 24. Jin, D., He, D., Liu, D., Baquero, C.: Genetic algorithm with local search for community mining in complex networks. In: 2010 22nd IEEE International Conference on Tools with Artificial Intelligence, vol. 1, pp. 105–112. IEEE (2010)


25. He, D., Liu, J., Liu, D., Jin, D., Jia, Z.: Ant colony optimization for community detection in large-scale complex networks. In: 2011 Seventh International Conference on Natural Computation (ICNC), vol. 2, pp. 1151–1155. IEEE (2011) 26. Cai, Q., Ma, L., Gong, M., Tian, D.: A survey on network community detection based on evolutionary computation. Int. J. Bio-Inspired Comput. 8(2), 84–98 (2016) 27. Karaboga, D.: An idea based on honey bee swarm for numerical optimization. Technical report, Technical report-tr06, Erciyes University, Engineering Faculty, Computer Engineering Department (2005) 28. Karaboga, D., Basturk, B.: A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J. Glob. Optim. 39(3), 459–471 (2007) 29. Yang, B., Cheung, W., Liu, J.: Community mining from signed social networks. IEEE Trans. Knowl. Data Eng. 19(10), 1333–1348 (2007) 30. Zachary, W.W.: An information flow model for conflict and fission in small groups. J. Anthropol. Res. 33, 452–473 (1977) 31. Lusseau, D.: The emergent properties of a dolphin social network. Proc. R. Soc. Lond. B Biol. Sci. 270(Suppl 2), S186–S188 (2003) 32. Girvan, M., Newman, M.E.J.: Community structure in social and biological networks. Proc. Nat. Acad. Sci. 99(12), 7821–7826 (2002)

Developing a Web Scientific Journal Management Platform

Artur Côrte-Real and Álvaro Rocha

Departamento de Engenharia Informática, Universidade de Coimbra, Coimbra, Portugal
[email protected], [email protected]

Abstract. The technological evolution experienced in recent years demanded the adjustment of traditional scientific journals to electronic media, in order to offer greater comfort and accessibility to the reader. The online availability of scientific journals has led to the emergence of several management platforms, which entail a set of strict steps. However, the existing options are far from perfect. In the present article, we analyse different scientific journal management platforms and introduce a new platform that is currently under development. It is intended as a market solution that fills the gaps identified in the existing platforms. The main solutions offered by this platform are based on features concerning the submission, review and online publication of a scientific article. Additionally, it offers enhanced usability and innovative features.

Keywords: Software engineering · Information systems · Journal management · Web applications · Editorial workflow

1 Introduction

In the realm of academic publications, a scientific journal is a periodic publication that promotes science. Currently, many of these journals are specialized by nature, focusing on a specific theme. There are, however, a number of exceptions that encompass a large number of scientific fields. In order to control and manage the quality of these electronic publications, a wide range of management systems emerged in recent years, as well as electronic libraries that offer a large collection of scientific articles and other works [1]. Some of these libraries are known and used on a global scale, including SciELO, Emerald, IEEE Xplore, ScienceDirect and SpringerLink. Universities, researchers and scientific societies are increasingly inclined to grant free access to academic investigation and scientific research works, having concluded that providing this access electronically through the Internet is the simplest, most economical and most powerful way to do it [2, 7, 8]. As such, we need updated management methods to achieve this goal in the current technological environment. Nowadays, practically all scientific journals resort to internet-based systems for the submission of articles and peer reviews, saving editors a lot of time when compared to paper-based submissions.



Owing to the relevance that scientific journals and their management platforms have assumed over the last few years, we felt the need to create a new management application for scientific journals that stands out from other platforms available in the market. The purpose of the present article is to explain, in full detail, the process of publishing a scientific article, from submission to publication; to share a comparative study involving different scientific journal management platforms, listing a number of perceived advantages and disadvantages of each; and, finally, to propose a new management platform for scientific journals, which is currently under development.

2 Editorial Workflow

Any scientific journal management platform requires a high number of information exchanges between the different participants in the publication process, including authors, reviewers and editors. Each participant plays a different role, collaborating with different actors within the system. For a successful online publication of any article, a number of steps must be followed: what is known as the editorial workflow. The literature on scientific journal management reveals multiple proposals for editorial workflows [3–5]. Bearing in mind that each workflow is unique, Fig. 1 briefly illustrates the general outline of the editorial workflow of any scientific journal management platform. We thus present the sequence of steps required to automate processes in scientific journal management systems, that is, from the submission of a scientific article by the author until the moment it becomes available online. This sequence follows a set of defined rules which allows the process to flow from one person to the next. The process of publishing a scientific article involves, essentially, four types of roles: Author, Editor, Reviewer and Administrator. The role of an Author is to send articles to the journal by submitting a file; the Editor monitors the entire process of reviewing, editing and publishing, and can request changes to, accept or reject the article submitted by the Author; the role of a Reviewer is to review and provide feedback on articles submitted by the Authors; and, finally, the Administrator, who has access to the administrative component of the platform, carries out all the tasks involving all types of users. The process of publishing, from a simplified point of view, covers three stages: submission, review and publication. An editorial workflow starts when an article is submitted by an Author. This submission is evaluated by an Editor, who may directly reject the article, request missing metadata from the Author, or assign one or more Reviewers to a review process. In their turn, Reviewers have two options: accept or decline the review request. When accepted, the article is reviewed by the Reviewer, who provides feedback to the Editor and the Author on the article in question. Each review has an expiration date assigned by the Editor, which ensures control over the organization of the process. After this step, the Editor has to make a new decision: give the Author a new opportunity, starting a resubmission process for the now revised article and repeating the previously described steps; reject the article; or accept it in the platform. Once approved, the article undergoes an editing process (copyediting, layout, proofreading), which settles the final details, and is assigned a volume and number within the journal.


Fig. 1. Editorial workflow

In general terms, this workflow illustrates the majority of the management solutions adopted by scientific journals. However, different systems can shape their own workflows based on the aforementioned flow, where new actors, processes and relationships can be added or even removed. It is important to note that some platforms let the user configure his workflow.
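As a hedged illustration only, the workflow just described can be captured as a small state machine; the state names and transitions below are our own reading of Fig. 1 and the text above, not part of any existing platform.

# Allowed transitions in a simplified editorial workflow (illustrative naming).
WORKFLOW = {
    "submitted": {"rejected", "metadata_requested", "in_review"},
    "metadata_requested": {"submitted"},
    "in_review": {"rejected", "resubmission_requested", "accepted"},
    "resubmission_requested": {"in_review"},
    "accepted": {"in_editing"},
    "in_editing": {"published"},
    "rejected": set(),
    "published": set(),
}

def advance(state: str, new_state: str) -> str:
    # Move an article to a new state, enforcing the workflow rules.
    if new_state not in WORKFLOW.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

state = "submitted"
for step in ("in_review", "resubmission_requested", "in_review",
             "accepted", "in_editing", "published"):
    state = advance(state, step)
print(state)  # published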

3 Comparison Between Similar Systems

Nowadays, there are several software products available in the market, commercial, free or open-source, each with its own features and benefits. In this section, we briefly analyse different scientific journal management systems equivalent to the product developed in this work. We present a comparative table with the functionalities of each system, followed by a concluding analysis. Before choosing the systems featured in our comparison, we defined a number of selection criteria in order to achieve a fair and rigorous analysis. We were thus able to exclude some of the options found on the Internet, ensuring above all a minimum quality level. The criteria were:


• Reputation: platforms had to have a relationship with a recognized entity (university, technological corporation), which attached merit to the system.
• Current Functionality: platforms need to be functional. Applications that were no longer functional were automatically discarded.
• Documentation: the existence of minimum official documentation is essential for a user to understand the installation process and take advantage of the available features. Any system that involves a high number of actions and procedures without any supporting documentation is hampered in terms of usability.
• Experimental Access: since one of the goals of this investigation was to test different platforms, direct contact was essential. By downloading and using free demos, we were able to analyse the features of each application.
• Critical Reviews: from critical reviews written by different users we were able to discern which systems were more reliable.

These criteria led to the selection and analysis of the following systems: Open Journal Systems, HyperJournal, Digital Publishing System, GNU EPrints, Editorial Manager and Scholastica. This research and analysis process took place between February 16 and April 4, 2016. First, we compiled a list of strong points from the different platforms, taking into account functionalities that, in our opinion, could be useful to a common user. Next, from this set of traits, we analysed each system individually to ascertain whether the platform in question indeed offered these highlighted functionalities. The result of this comparison is presented in Fig. 2.

Fig. 2. Comparison between systems


From this table, we can identify four open-source platforms: Open Journal Systems, HyperJournal, DPubS and EPrints. Editorial Manager is the only system that requires a payment plan; all the others are completely free. Even though each system has its own functionalities and advantages, some platforms are becoming less influential and, consequently, less used. After contacting the Library of Cornell University, the entity supporting DPubS, we concluded that many of these systems, including DPubS, have migrated to OJS, as it is the most used, as shown by its large and active community of users. However, DPubS is still available online, and its access and use is still possible. Both DPubS and OJS support extensibility: DPubS through the creation of a special directory and the configuration of its architecture via Perl programming; OJS through an API plugin for PHP developers. This feature is an advantage for both platforms when compared to all the others, as it allows the user to add new functionalities to the platform, shaping its architecture as needed. The ease of installation of a new system is, oftentimes, a key indicator of its potential and its ease of use by technical and non-technical users. But this correlation is not mandatory: some programs may require a difficult installation process and be intuitive and fluid in terms of use and handling. This, however, was not consistent with the experiments we carried out. Typically, the role of developers is to provide detailed, complete, elegant and intuitive installation procedures, as well as systems that reflect these same virtues and values [6]. Open Journal Systems (OJS) is one of the most widely used scientific journal management platforms in the world, but it is far from perfect. Although it offers a great number of features, its weak usability can significantly limit the activities carried out by the user. It is not a user-friendly platform, especially when the user is not familiar with the system. Access to features is not intuitive, which means the user faces many obstacles to find what he wants. The documentation available for each application is also essential for the success of the system. Many of the programs under analysis, such as DPubS, HyperJournal and EPrints, which require technical skills from the user, should provide specific and detailed documentation. This factor, together with a not very active community, is a drawback for the user, both when installing and when using the features of the program itself. In the course of this work, and in view of the strengths and weaknesses of the platforms under analysis, we identified an opportunity to develop a unique scientific journal management application. This system will be housed in a web environment, not requiring any installation process. As was previously mentioned, one of the main problems of these applications is their complexity and the absence of an intuitive usability process. This will be the main focus of our final internship product: a product that is easy to use and offers all the features required from a scientific journal management platform. We will also include extra features, such as a social network between different Authors, a task management system between Editors and an integration with LinkedIn.

4 webJournal Platform

The platform under development has the code name webJournal. The purpose of this work is the development of a free web platform for the management of online scientific journals, taking advantage of technologies such as PHP, CSS, HTML and JavaScript. The main features of our platform include: creation of scientific journals; registration of editorial and scientific council members; article submission; article evaluation; communication with authors, editorial and scientific boards; creation of online editions. Additionally, new features are currently being developed and integrated to improve the experience of the user. Figure 3 exemplifies an Author using the webJournal application, more specifically the list of articles submitted by the author in question.

Fig. 3. Screenshot of an author using the webJournal application

4.1 Editorial Workflow This platform follows an Editorial Workflow similar to that shown in Fig. 1. We will now present the different user Roles and, subsequently, explain the different steps followed since an article is submitted until it is published online. Roles. This platform encompasses 4 types of roles: Author, Editor, Reviewer and Administrator. The features available in the platform change according to user type. Below, we detail the different types of features that, to date, each Actor can enjoy when using the webJournal. Authors: Article Submission, Article Visualization (Pending, Accepted, Rejected); Social Network Use; Message Centre. Editors: Article Visualization/Editing (Unassigned, In Review, In Decision, Accepted, Rejected); Issue Creation/Assignment; ToDo Dashboard; Message Centre. Reviewers: ArticleVisualization/Review; Message Centre.

92

A. Côrte-Real and Á. Rocha

Administrator: User Management; Article Management; Message Centre.

After the user is authenticated in the platform, he is shown his assigned list of scientific journals. The user selects a journal where he can access all the available journal features, according to his assigned role type. We will now detail the different editorial workflow stages of our platform.

Article Submission (Author). The editorial workflow starts when the Author submits an article. To do so, the Author is asked to enter all the information (metadata) concerning the article, as well as the submission file. The Author has the option of submitting an extra file, which will serve as a supplement to the original one. The platform supports all formats and does not impose any format limits. Afterwards, the Author receives an e-mail confirming his submission to the platform.

Decision (Editor). After the Author submits an article, an email notifies all Editors of the scientific journal in question. In this stage, the Editor has the following options: ask the Author to correct metadata; immediately reject the article, bringing the editorial workflow to an end; or assign one or more Reviewers to the correction of the scientific article. To ensure quality control when choosing Reviewers, the platform may suggest Reviewers to the Editor by carrying out a keyword match between the article and the Reviewer (a minimal illustration of such a match is sketched at the end of this subsection). After the article has been reviewed, the Editor is asked to make a new decision: he can reject the article, bringing the editorial workflow to an end, or he can give the Author a new opportunity and initiate a resubmission process for the corrected version of the article, based on the insight and feedback offered by the Reviewer. This step can be repeated several times, with a view to a successful submission. Finally, the Editor can accept the article, and the editorial workflow moves to the next stage.

Review (Reviewer). The Reviewer can accept or decline an invitation to review an article. If declined, the process returns to the Editor and a new Reviewer is assigned. If accepted, the Reviewer can initiate the review process, with a deadline assigned by the Editor. The review is carried out through a form that is subsequently reviewed by the Editor and, if resubmitted, by the Author.

Final Correction (Editor/Author). In this stage, the article has been accepted and undergoes a rectification process to eliminate any flaws prior to its publication. With this in mind, a member of the editorial secretariat will again review the document, to detect anomalies in its structure, element arrangement or even content. Next, the Author will answer a questionnaire to clarify any doubts on the part of the Editor concerning the document. Finally, the editorial secretariat member prepares the corrected document and initiates a new submission.

Article Publication. Once corrected and edited, the article is ready to be published. The Editor can now easily assign the article a volume and a number in the journal in question. The article becomes automatically available.
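As a hedged illustration of the reviewer-suggestion step mentioned above, the keyword match can be sketched as follows; the scoring rule, the data layout and all names are assumptions of ours and do not describe the webJournal implementation.

def suggest_reviewers(article_keywords, reviewers, top_n=3):
    # Rank reviewers by the number of keywords they share with the article.
    article = {kw.lower() for kw in article_keywords}
    scored = []
    for name, interests in reviewers.items():
        overlap = article & {kw.lower() for kw in interests}
        if overlap:
            scored.append((len(overlap), name))
    scored.sort(reverse=True)
    return [name for _, name in scored[:top_n]]

reviewers = {
    "Reviewer A": ["recommender systems", "context-aware", "mobile"],
    "Reviewer B": ["community detection", "complex networks"],
    "Reviewer C": ["mobile", "usability"],
}
print(suggest_reviewers(["Mobile", "Usability", "Context-aware"], reviewers))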


4.2 Distinctive Features

LinkedIn Integration. The first innovative feature is the integration of webJournal with LinkedIn, the largest professional network in the world. Any user will be able to register using his LinkedIn credentials. This will render the registration process easier for the user, as much of his information will be retrieved from his social network profile.

Social Network. webJournal will provide a simplified social network for the different Authors in the same journal. This will promote the communication and exchange of ideas between Authors. Each social network post will assume a specific character: Self-Promotion, Help or Off-Topic. The Self-Promotion tag is intended to promote/publicize an article developed by a given Author in order to obtain feedback and insight from other Authors in the same journal. To help Authors become familiar with the platform we developed the Help tag, for posts involving any doubts concerning platform usability or features. Lastly, the Off-Topic tag identifies posts of a more relaxed nature, including posts unrelated to academic publishing in scientific journals. This tag promotes interactions between different Authors in the same scientific journal, encouraging interconnections.

To Do's. Working in a scientific journal editorial team is not a simple job, as the responsibilities of each Editor and the tasks that need completion are hard to keep up with. Each Editor has several responsibilities, and this is why we developed a ToDo system. This feature enables the assignment of tasks and deadlines to different Editors, ensuring a minimum quality of the editorial process, from the moment an article is submitted until it is published in the journal.

5 Conclusions

Even though the webJournal is still under development, a number of interesting features in the editorial publishing field are already available. Communication between users is viewed as essential to the editorial process of any scientific journal, and this is why webJournal developed features that enable the interaction between different users, a social network for Authors and a To Do system shared between Editors. webJournal contends that the existence of a social environment within the platform enhances the quality of a scientific journal editorial process. Usability is another characteristic valued by webJournal. This platform provides a simple and user-friendly design, where any user can complete his actions with fluidity and ease, from article submission to its publication. All the elements available in the platform are visible and, as the user navigates the system, several visual cues are provided to prevent him from being lost or blocked. Another advantage is that this platform is free. Any user can enjoy all the webJournal features without being bound to any payment plan.


The existence of structured and explicit documentation is extremely important and useful for any user. This is why webJournal provides a good source of documentation, to eliminate any barriers to its use. In the future, we intend to enrich this platform with the development of new features. Its first working version will be released in February 2017 and will be available to users all over the world.

References 1. Shotton, D.: Semantic publishing - the coming revolution in scientific journal publishing. Learn. Publ. 22(2), 85–94 (2009) 2. Guanaes, P., Guimarães, M.C.: Modelos de Gestão de Revistas Científicas - Uma Discussão Necessária. Perspectivas em Ciência da Informação 17(1), 56–73 (2012) 3. Bogunović, H., Pek, E., Lončarić, S., Mornar, V.: An electronic journal management system. In: Proceedings of the 25th International Conference on Information Technology Interfaces, pp. 231–236 (2003) 4. Valdas, D., Miroslav, S., Vidas, D., Valentinas, K.: EJMS - Electronic Journal Management System. In: Proceedings of the 30th International Conference on Machine Learning, Atlanta, Georgia, USA, vol. 28. JMLR: W&CP (2013) 5. OJS Documentation: OJS in an Hour. https://pkp.sfu.ca/files/OJSinanHour.pdf 6. Cyzyk, M., Choudhury, S.: A survey and evaluation of open-source electronic publishing systems. White Paper (2008) 7. Rocha, Á.: Framework for a global quality evaluation of a website. Online Inf. Rev. 36(3), 374–382 (2012) 8. Leite, P., Gonçalves, J., Teixeira, P., Rocha, Á.: A model for the evaluation of data quality in health unit websites. Health Inform. J. 22(3), 479–495 (2015)

A Robust Implementation of a Chaotic Cryptosystem for Streaming Communications in Wireless Sensor Networks

Pilar Mareca and Borja Bordel

Universidad Politécnica de Madrid, Madrid, Spain
[email protected], [email protected]

Abstract. Wireless sensor networks consist of tiny sensor nodes, which act as both data generators and network relays. These sensor nodes present limited processing capabilities, so hardware support is required for many tasks. Security, a key issue in sensor networks, is one of those tasks. However, current security solutions are always supported by complex software algorithms. Therefore, in this paper we propose a chaotic cryptosystem based on Chua's circuit, especially designed to encrypt streaming communications among sensor nodes. The proposed solution presents a robust design, which enables its implementation using hardware technologies. Moreover, an experimental validation is presented, showing that the maximum encryption error never exceeds 12%.

Keywords: Cryptography · Chaos · Chaotic masking · Chua's circuit · Wireless sensor networks · Cyber security

1 Introduction

Wireless sensor networks (WSN) consist of spatially distributed autonomous tiny sensor nodes that monitor and record physical or environmental conditions, acting as both data generators and network relays [1]. In general, deployments of WSN may present very different characteristics relative to each other. Furthermore, the sensor nodes making up a single WSN are often a heterogeneous collection of devices with very different functionalities and capabilities. Despite this, various aspects are common to every sensor node or WSN [2]: they are self-configurable, delay-tolerant, decentralized, etc. Among all of them, one of the most important aspects is their reduced size and their limited processing (and communication) capabilities [3]. As a consequence, traditional algorithms and network solutions are not directly applicable to WSN, since telecommunication networks (such as the Internet or Frame Relay) incorporate additional infrastructures (such as the power supply) and devices with high capacities which cannot be assumed in WSN. One of the disciplines most affected by these limitations is security [4]. Firewalls, secure routing protocols and encryption technologies demand too many resources to be applied to WSN. As a solution, hardware-supported techniques are implemented [5]. Nevertheless, most existing proposals are focused on message transmission. There exist,


however, applications in which a WSN must establish streaming communications among its nodes (such as border control or the monitoring of vital signs) that also require secure transmissions. Therefore, the objective of this paper is to describe a hardware-based cryptosystem for streaming communications in WSN. The proposed system employs chaotic cryptography (specifically chaotic masking) in order to cipher the information signals before they are transmitted through the wireless communications interface. The proposed system, moreover, presents a reduced size and a very low power consumption, as it is based on Chua's circuit. Loading effects typical of this circuit are removed by means of a robust implementation of the electronic circuit. The rest of the paper is organized as follows: Sect. 2 describes the state of the art on security solutions for WSN and chaotic cryptosystems; Sect. 3 includes the mathematical formalization of the proposed solution and its implementation as an electronic circuit; Sect. 4 presents an experimental validation in order to test the performance of the proposed solution; Sect. 5 contains the experimental results and Sect. 6 concludes the paper.

2 State of the Art

Security is one of the key problems in WSN [4]. Most works on this topic are focused on the development of secure routing protocols that try to avoid cyberattacks such as the sinkhole attack or the Sybil attack [6]. Works on generic security protocols (in order to support, for example, authentication) [7] and surveys about security issues and cyberattacks in WSN may also be found [8]. However, works on information encryption in WSN are much less common, and only a few works about hardware support can be found [5]. The main cause of this scarcity is that applying any encryption scheme requires extra bits, extra memory, extra battery power, etc., so encryption could increase delay, jitter and packet loss in WSN [9] (especially if streaming communications are considered). Therefore, novel technologies should be applied to WSN in order to, for example, guarantee secure access at the physical layer (among other possibilities). One of these technologies may be chaotic cryptography. Very complex schemes of chaotic cryptography have been defined [10, 11]. Discrete dynamics have been employed as pseudo-random codes [12], one-dimensional maps have been integrated into spread spectrum techniques [13], and other solutions based on external keys have been described [14]. Additionally, both digital and analog systems have been described [15]. However, all these proposals are based on complicated software algorithms. Thus, for WSN, simpler hardware-supported solutions are required. In this sense, Cuomo and Oppenheim [16] propose a pair of synchronized chaotic circuits as a cryptosystem (based on the Lorenz dynamics), capable of hiding the transmitted information. Moreover, Kokarev [17] has demonstrated the viability of chaotic masking solutions for other dynamics, such as Chua's circuit. All the previously cited proposals, however, are implemented using numerical programming and simulation environments. Thus, practical deployment problems


(such as loading effects) are not addressed. Our proposal covers this gap by describing a robust hardware implementation that is suitable for sensor nodes.

3 A Robust Chaotic Cryptosystem

Various works have demonstrated that cryptographic techniques for protecting the transmitted information in WSN cannot be based on traditional digital schemes (which employ keys and complex algorithms) [9]. Instead, analog hardware-supported techniques are required. In particular, steganography seems to be one of the better alternatives. Steganography aims at hiding the existence of the data flows among the nodes by embedding information into other signals, so transmissions are not perceptible and the medium looks as usual. In this context, chaotic masking can act as a steganography solution for streaming communications in WSN.

3.1 Mathematical Formalization

The basic scenario is shown in Fig. 1. Two sensor nodes communicate through a wireless interface, and both nodes are provided with a chaotic circuit. Many authors [16, 17] have proved that both circuits can get synchronized if one of them (the transmitter) sends one of the generated chaotic signals to the other (the receptor). Thus, the chaotic signal creates a perturbation in the spectrum which may hide the information streaming, so intruders cannot capture the communication (as in steganography solutions). However, as both nodes can get synchronized, the receptor may recover the information using a subtractor.

Fig. 1. Basic scenario

Almost any chaotic dynamic may be employed in masking systems. Nevertheless, considering the reduced size of the sensor nodes in a WSN, it is important to select a dynamic with a robust electronic implementation. Among the paradigmatic systems, Chua's dynamic (1) is the one which best meets those requirements. Many synchronization schemes for Chua's dynamic are available.

\dot{x} = \alpha \left( y - x - \left( m_1 x + \tfrac{1}{2}(m_0 - m_1)(|x + 1| - |x - 1|) \right) \right)
\dot{y} = x - y + z
\dot{z} = -\beta y   (1)


Adaptive control techniques [18], passive-active decompositions [19] and other synchronization techniques [20] have been used as the basis for chaotic masking systems. However, the simplest and most robust synchronization scheme was proposed by Pecora and Carroll [21]. Basically, it consists of two identical chaotic systems, one acting as transmitter and the other as receptor. At least one chaotic signal (called the synchronization signal) is extracted from the transmitter and injected into the receptor, from which the corresponding equations or subsystems (those generating the injected signal) are removed. Specifically (see Fig. 2(a)), in Chua's circuit the x variable is employed as the synchronization signal. With this selection the conditional Lyapunov exponents are always negative [22], and Vaidya's demonstration [23] proves that complete synchronization may occur.

Fig. 2. Proposed schemes for Chua’s circuit (a) synchronization (b) masking

In order to create a chaotic masking system using this synchronization scheme (sometimes called a unidirectional transmitter–receiver decomposition), it is enough to include the removed equations or subsystems in the receiver, but isolated from the synchronization signal (see Fig. 2(b)). Thus, in the transmitter, the information streaming is added to the synchronization signal, and the masked information is recovered by the receiver using a subtractor. Mathematically, the masking scheme can be expressed considering two coupled Chua dynamics (2).

$$
\begin{aligned}
\dot{x}_t &= \alpha\big(y_t - x_t - f(x_t)\big) & \dot{x}_r &= \alpha\big(y_r - x_r - f(x_r)\big)\\
\dot{y}_t &= x_t - y_t + z_t & \dot{y}_r &= x_s - y_r + z_r\\
\dot{z}_t &= -\beta y_t & \dot{z}_r &= -\beta y_r\\
x_s &= x_t + s \;\text{(masked information)} & \hat{s} &= x_s - x_r \;\text{(recovered information)}
\end{aligned}
\qquad (2)
$$

where

$$
f(x) = m_1 x + \tfrac{1}{2}(m_0 - m_1)\big(\lvert x+1\rvert - \lvert x-1\rvert\big)
$$

Using numerical programming it is possible to evaluate the performance of the proposed scheme. Figure 3(a) shows the spectrum of the synchronization chaotic signal which must hide the information streaming. As can be seen, signals with bandwidths above 25 kHz cannot be protected with this scheme. However, sensor nodes usually transmit at low data rates, so this value is enough. Additionally, Fig. 3(b) and (c) show a comparison between an analog and a digital information flow and the data flows recovered in the receiver using the proposed scheme. Of course, a radio-frequency chain may shift the spectrum in frequency so that it can be transmitted over wireless communications.

Fig. 3. Results of a numerical implementation of the chaotic masking scheme (a) original information and recovered information (b) spectrum of the masked signal
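To make this numerical evaluation easier to reproduce, the sketch below integrates the coupled dynamics (2) with a simple forward-Euler scheme, masks a small sinusoidal signal and recovers it by subtraction. The parameter values (the classic double-scroll set α = 15.6, β = 28, m0 = −8/7, m1 = −5/7), the step size and the test signal are illustrative assumptions, not the values used by the authors.

```c
/* Illustrative numerical sketch of the masking scheme of Eq. (2).
 * Parameters, step size and test signal are assumptions, not the
 * authors' values.                                                 */
#include <stdio.h>
#include <math.h>

static const double ALPHA = 15.6, BETA = 28.0;       /* classic double-scroll set */
static const double M0 = -8.0 / 7.0, M1 = -5.0 / 7.0;
static const double PI = 3.14159265358979323846;

static double f(double x) {                           /* Chua piecewise-linear diode */
    return M1 * x + 0.5 * (M0 - M1) * (fabs(x + 1.0) - fabs(x - 1.0));
}

int main(void) {
    double xt = 0.1, yt = 0.0, zt = 0.0;              /* transmitter state */
    double xr = -0.2, yr = 0.1, zr = 0.0;             /* receiver state    */
    const double dt = 5e-4;                           /* Euler step        */

    for (long n = 0; n < 200000; n++) {
        double t = n * dt;
        double s = 0.01 * sin(2.0 * PI * 0.05 * t);   /* small hidden signal       */
        double xs = xt + s;                           /* masked signal "on the air" */

        /* transmitter dynamics */
        double dxt = ALPHA * (yt - xt - f(xt)), dyt = xt - yt + zt, dzt = -BETA * yt;
        /* receiver dynamics, driven by the masked signal xs */
        double dxr = ALPHA * (yr - xr - f(xr)), dyr = xs - yr + zr, dzr = -BETA * yr;

        xt += dt * dxt; yt += dt * dyt; zt += dt * dzt;
        xr += dt * dxr; yr += dt * dyr; zr += dt * dzr;

        if (n % 200 == 0)                             /* s_hat = xs - xr: recovered */
            printf("%g %g %g\n", t, s, xs - xr);
    }
    return 0;
}
```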

3.2 Electronic Implementation

Originally, Chua's circuit was designed as a real electronic circuit (see Fig. 4(a)), so Chua's dynamics may be expressed as the evolution laws of this circuit (3). Thus, the proposed masking scheme could be directly implemented using standard electronic techniques (see Fig. 4(b)). The numerical solutions and the synchronization of Chua circuits show good behaviour. However, the electronic implementation of Chua's circuit is very sensitive to the accuracy of its components [24]. Therefore, it is necessary to devote special attention to their implementation in simulated and real circuits [25, 26]. In particular, three problems have so far impeded the implementation of an electronic chaotic masking system based on Chua's circuit: loading effects, the required inductances, and the high-frequency chaotic noise which tends to appear mixed with the recovered information. In this paper we propose a robust solution which addresses these problems.


$$
\begin{aligned}
\frac{dv_1}{dt} &= \frac{1}{C_1}\left(\frac{1}{R}(v_2 - v_1) - f(v_1)\right)\\
\frac{dv_2}{dt} &= \frac{1}{C_2}\left(\frac{1}{R}(v_1 - v_2) + i_3\right)\\
\frac{di_3}{dt} &= -\frac{1}{L}\,v_2
\end{aligned}
\qquad (3)
$$
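For reference, the circuit form (3) maps onto the dimensionless form (1) through the usual rescaling of Chua's circuit (x = v1/Bp, y = v2/Bp, z = R·i3/Bp, τ = t/(R·C2)), which gives α = C2/C1 and β = R²·C2/L. The component values in the small check below are purely hypothetical; the paper does not state the ones it uses.

```c
/* Hypothetical component values; they are NOT taken from the paper. */
#include <stdio.h>

int main(void) {
    double C1 = 10e-9;     /* F                                    */
    double C2 = 100e-9;    /* F                                    */
    double R  = 1800.0;    /* ohm, e.g. the control resistor Rc    */
    double L  = 18e-3;     /* H                                    */

    double alpha = C2 / C1;          /* alpha = C2 / C1            */
    double beta  = R * R * C2 / L;   /* beta  = R^2 * C2 / L       */

    printf("alpha = %.2f, beta = %.2f\n", alpha, beta);
    return 0;
}
```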

First, in order to avoid loading effects, voltage followers are included to extract and inject signals into or from the Chua circuits. These new elements isolate some parts of the circuit from the others, so that loading effects are minimized.

Fig. 4. Electronic implementation of (a) Chua’s circuit (b) traditional masking system based on Chua’s circuit

Second, the need to include various inductances in the system makes it impossible to implement the circuit using high-integration techniques. In particular, inductances require more space than other components, so it is recommendable to employ alternative elements such as capacitors. In order to do that, the inductance in the traditional Chua circuit (see Fig. 4(a)) is substituted by an immittance converter [25]. Finally, in order to remove the high-frequency chaotic noise, a second-order Sallen–Key low-pass filter is included. Figure 5 shows the resulting robust implementation.
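As an illustration of how the Sallen–Key stage can be dimensioned, the snippet below evaluates the cutoff frequency f_c = 1/(2π√(R1·R2·C1·C2)) of a unity-gain second-order Sallen–Key low-pass filter. The component values are hypothetical; they merely place the cutoff above the information band (a few kHz) and below the high-frequency chaotic noise.

```c
/* Cutoff of a unity-gain second-order Sallen-Key low-pass filter.
 * Component values are hypothetical, not taken from the paper.     */
#include <stdio.h>
#include <math.h>

int main(void) {
    double R1 = 10e3, R2 = 10e3;       /* ohm */
    double C1 = 2.2e-9, C2 = 2.2e-9;   /* F   */
    double fc = 1.0 / (2.0 * 3.14159265358979 * sqrt(R1 * R2 * C1 * C2));
    printf("Sallen-Key cutoff: %.0f Hz\n", fc);   /* ~7.2 kHz with these values */
    return 0;
}
```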

Fig. 5. Robust electronic implementation of the masking system based on Chua’s circuit


Various modules are distinguished in the circuit:
• Module A: the Chua circuit acting as transmitter. It generates the chaotic signal to mask the secured information. It includes the immittance converter.
• Modules B and C: voltage followers that isolate some parts of the circuit from the others and prevent loading effects.
• Module D: an operational amplifier acting as a voltage adder, in order to incorporate the secure information into the chaotic signal.
• Module E: represents the transmission medium.
• Module F: includes the subsystem of the Chua circuit which receives the synchronization signal. It includes the immittance converter.
• Module G: includes the subsystem of the Chua circuit which is not part of the Pecora and Carroll synchronization scheme.
• Module H: two operational amplifiers acting as subtractor and inverting amplifier in order to recover the secure information.
• Module I: a second-order Sallen–Key low-pass filter to remove the chaotic noise.

4 Experimental Validation

In order to validate the proposed cryptosystem, the electronic circuit was implemented using two different techniques. First, it was implemented in the PSPICE electronic circuit simulator, and, second, it was implemented using discrete electronic components (see Fig. 6(a)).

Fig. 6. (a) Electronic implementation of the masking system (b) synchronization curve

Nevertheless, chaotic cryptosystems are difficult to implement using generic commercial components, as they require very precise elements. In particular, some works have shown that electronic components with tolerances below 3% are required in order to obtain good-quality circuits. Thus, although early promising results were obtained with the electronic implementation (see Fig. 6(b)), in this first work we focus on a validation based on circuit simulation.


Considering an electronic circuit simulation of the proposed system, two different types of secure information were employed: a sinusoidal signal and a TTL signal. Detailed information is included in Table 1. Moreover, ten different values of the control parameter (in this case we employed the resistor Rc as control parameter, see Fig. 5) were considered. Thus, in total, twenty different simulations were performed.

Table 1. Simulation details for the experimental validation

Parameter    Sinusoidal signal   TTL signal
Amplitude    75 mV               75 mV
Frequency    2 kHz               2 kHz
Duty cycle   –                   50%
Offset       0 V                 0 V

Each simulation computed the first three seconds of operation of the cryptosystem. Then, the mean value of the recovery error (4) was also calculated for each case.

$$
\left[\varepsilon(t)\right] = \frac{1}{N_{\max}} \sum_{n=0}^{N_{\max}} \left|\, s[n] - \hat{s}[n] \,\right| \qquad (4)
$$
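A direct implementation of the error measure (4) over sampled original and recovered signals could look as follows; the normalization used to express the result as a percentage (as in Table 2) is not specified in the paper and is therefore left out.

```c
/* Mean absolute recovery error, Eq. (4). */
#include <math.h>
#include <stddef.h>

double recovery_error(const double *s, const double *s_hat, size_t n_max) {
    double acc = 0.0;
    for (size_t n = 0; n < n_max; n++)
        acc += fabs(s[n] - s_hat[n]);
    return acc / (double)n_max;
}
```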

5 Results

Figure 7 shows the comparison between the recovered signal and the original secure information for Rc = 1800 Ω, for both a TTL signal and a sinusoidal signal.

Fig. 7. Results obtained from the proposed solution

As seen above, the recovered signal presents good quality, although a high-frequency parasitic component is mixed with the recovered information. Using an appropriate filter, this effect could be removed.


Table 2 shows the results for the mean recovery error. As can be seen, the sinusoidal signal can be recovered with better quality than the TTL signal. However, the TTL signal recovery can be greatly improved by considering specialized digital circuits such as the Schmitt trigger (employed to obtain pure TTL signals from TTL-like signals).

Table 2. Recovery error

Experiment          Recovery error (%)
Sinusoidal signal   2.7%
TTL signal          11.5%
Total               7.2%

6 Conclusions and Future Works

In this article we propose a chaotic electronic circuit, implemented by means of a Chua chaotic system, to encrypt streaming communications among sensor nodes in WSN. The first result has been a good chaotic synchronization between the emitter and the receiver systems. We have also reduced the loading effects by introducing several voltage followers in the receiver system, thus obtaining a robust implementation of the electronic circuit. In addition, the circuit is characterized by a reduced size and a very low power consumption. We have implemented the cipher system through an electronic simulation in PSPICE, using sinusoidal and TTL information signals. The recovery error was 2.7% for the sinusoidal signal and 11.5% for the TTL one. This work addresses the protection of private communications, which is essential in current devices using sensor networks. Future work will address the masking of speech and sound signals with chaos by means of robust chaotic electronic circuits, to protect private communications among sensor nodes.

Acknowledgments. One of us, Borja Bordel, has received funding from the Ministry of Education through the FPU program (grant number FPU15/03977) and from the Ministry of Economy and Competitiveness through the SEMOLA project (TEC2015-68284-R). We are grateful for discussions with Professor Vicente Alcober.

References 1. Akyildiz, I.F., Vuran, M.C.: Wireless Sensor Networks, vol. 4. Wiley, Hoboken (2010) 2. Yick, J., Mukherjee, B., Ghosal, D.: Wireless sensor network survey. Comput. Netw. 52(12), 2292–2330 (2008) 3. Vieira, M.A.M., Coelho, C.N., da Silva, D.C., da Mata, J.M.: Survey on wireless sensor network devices. In: IEEE Conference Emerging Technologies and Factory Automation, vol. 1, pp. 537–544. IEEE (2003) 4. Perrig, A., Stankovic, J., Wagner, D.: Security in wireless sensor networks. Commun. ACM 47(6), 53–57 (2004) 5. Portilla, J., Otero, A., de la Torre, E., Riesgo, T., Stecklina, O., Peter, S., Langendörfer, P.: Adaptable security in wireless sensor networks by using reconfigurable ECC hardware coprocessors. Int. J. Distrib. Sensor Netw. 6(1) (2010). doi:10.1155/2010/740823


6. Karlof, C., Wagner, D.: Secure routing in wireless sensor networks: attacks and countermeasures. Ad Hoc Netw. 1(2), 293–315 (2003) 7. Perrig, A., Szewczyk, R., Tygar, J.D., Wen, V., Culler, D.E.: SPINS: security protocols for sensor networks. Wireless Netw. 8(5), 521–534 (2002) 8. Wang, Y., Attebury, G., Ramamurthy, B.: A survey of security issues in wireless sensor networks. IEEE Commun. Surv. Tutorials 8(2) (2006). http://digitalcommons.unl.edu/cgi/ viewcontent.cgi?article=1087&context=csearticles 9. Pathan, A.S.K., Lee, H.W., Hong, C.S.: Security in wireless sensor networks: issues and challenges. In: 8th International Conference Advanced Communication Technology, vol. 2. IEEE (2006) 10. Vaidya, P.G., Angadi, S.: Decoding chaotic cryptography without access to the superkey. Chaos, Solitons Fractals 17(2), 379–386 (2003) 11. Wong, K.W., Ho, S.W., Yung, C.K.: A chaotic cryptography scheme for generating short ciphertext. Phys. Lett. A 310(1), 67–73 (2003) 12. Li, S., Li, Q., Li, W., Mou, X., Cai, Y.: Statistical properties of digital piecewise linear chaotic maps and their roles in cryptography and pseudo-random coding. In: Honary, B. (ed.) Cryptography and Coding 2001. LNCS, vol. 2260, pp. 205–221. Springer, Heidelberg (2001). doi:10.1007/3-540-45325-3_19 13. Pareek, N.K., Patidar, V., Sud, K.K.: Cryptography using multiple one-dimensional chaotic maps. Commun. Nonlinear Sci. Numer. Simul. 10(7), 715–723 (2005) 14. Pareek, N.K., Patidar, V., Sud, K.K.: Discrete chaotic cryptography using external key. Phys. Lett. A 309(1), 75–82 (2003) 15. Amigó, J.M., Kocarev, L., Szczepanski, J.: Theory and practice of chaotic cryptography. Phys. Lett. A 366(3), 211–216 (2007) 16. Cuomo, K.M., Oppenheim, A.V., Strogatz, S.H.: Synchronization of Lorenz-based chaotic circuits with applications to communications. IEEE Trans. Circuits Syst. II: Analog Digit. Signal Process. 40(10), 626–633 (1993) 17. Kocarev, L., Halle, K., Eckert, K., Chua, L.: Experimental demonstration of secure communications via chaotic synchronization. Int. J. Bifurcat. Chaos 2, 709–713 (1992) 18. Liao, T.L., Tsai, S.H.: Adaptive synchronization of chaotic systems and its application to secure communications. Chaos, Solitons Fractals 11(9), 1387–1396 (2000) 19. Boccaletti, S., Kurths, J., Osipov, G., Valladares, D.L., Zhou, C.S.: The synchronization of chaotic systems. Phys. Rep. 366(1), 1–101 (2002) 20. Bai, E.W., Lonngren, K.E.: Synchronization of two Lorenz systems using active control. Chaos, Solitons Fractals 8(1), 51–58 (1997) 21. Carroll, T.L., Pecora, L.M.: Synchronizing chaotic circuits. IEEE Trans. Circuits Syst. 38(4), 453–456 (1991) 22. Pecora, L.M., Carroll, T.L.: Synchronization in chaotic systems. Phys. Rev. Lett. 64(8), 821 (1990) 23. He, R., Vaidya, P.G.: Analysis and synthesis of synchronous periodic and chaotic systems. Phys. Rev. A 46(12), 7387 (1992) 24. Kyprianidis, I.M., Haralabidis, P., Stouboulos, I.N.: Dynamics and synchronization of a second-order nonlinear and nonautonomous electric circuit. In: 3rd World Multiconference on Circuits, Systems, Communications and Computers, CSCC 1999, pp. 3241–3247 (1999) 25. Alcober, V., Mareca, P., González,Y.G.: Una Optimización en la Sincronización y Enmascaramiento con el circuito de Chua. XXVIII Reunión Bienal de la Real Sociedad Española de Física. Simposio de Dinámica no-lineal. Sevilla, Spain (2001) 26. Murali, K., Lakshamanan, M., Chua, L.O.: Synchronizing Chaos in driven Chua’s circuit. Int. J. Bifurcat. Chaos 05(2), 563 (1995)

Building a Unified Middleware Architecture for Security in IoT

Alexandru Vulpe, Ştefan-Ciprian Arseni, Ioana Marcu, Carmen Voicu, and Octavian Fratu

Telecommunications Department, University Politehnica of Bucharest, Iuliu Maniu 1-3, 061071 Bucharest, Romania
{alex.vulpe,stefan.arseni,carmen.voicu}@radio.pub.ro

Abstract. During the past few years, the Internet of Things (IoT) concept has seen a rapid evolution from the point of view of the products developed for interconnection. The trend is moving towards a more secure and reliable environment where, even if people are surrounded by sensors, actuators and intelligent devices, their need for privacy and safety remains the first priority. Yet, given that recent years have also revealed major security breaches in the way networks are designed, the need to implement security in every aspect of our daily lives is growing. The present paper introduces a middleware architecture with the purpose of empowering IoT applications by enabling a better understanding, through testing, of the implementation and execution of proposed security mechanisms. The middleware is part of a larger platform based on an IoT gateway node that has three different hardware architectures (MCU, SDSoC, traditional CPU) integrated into a single testbed.

Keywords: Lightweight encryption · Security · Internet-of-Things · Middleware · Architecture

1 Introduction

The Internet of Things (IoT) has become, in recent years, one of the biggest topics among research communities due to its fast expansion and the challenges involved in its development. The devices involved in IoT technology (smart mobile devices, RFID tags, wireless equipment, sensor nodes, etc.) bring together numerous risks related to security, energy and power consumption, computational capabilities or available memory. Therefore, the use of lightweight cryptography can provide a viable solution, and its main purpose is to facilitate a wide range of applications such as, e.g., smart meters, vehicle security systems, wireless patient monitoring systems, Intelligent Transport Systems (ITS) and the Internet of Things (IoT) [1]. The most popular cryptographic algorithms are implemented on software platforms (e.g., AVR Atmel microcontrollers) and hardware platforms (such as FPGAs) to appraise their area requirements, memory efficiency and energy consumption in performance analysis. The degree of confusion and diffusion is evaluated in security analysis, and any effort to optimize the performance of a parameter on one side will have a negative


impact on the performance of other parameters on the other side, meaning that the trade-off law is in place [2]. The performance of lightweight algorithms, as well as optimization processes for FPGA-based applications, has been intensively studied in [3–13]. In [3] the authors propose block-cipher-independent optimization techniques for Xilinx Spartan-3 FPGAs and apply these optimizations to the HIGHT and PRESENT lightweight algorithms. This way HIGHT consumes less than 100 slices, encrypts data at 65 Mbps and has a better throughput-over-area ratio than the previously published lightweight implementation of AES [4]. The proposed optimization techniques for lightweight algorithms can also be applied to other algorithms. In an attempt to demonstrate that asymmetric code-based cryptography can be successfully used for various applications, in [5] it has been illustrated that McEliece encryption using QC-MDPC codes on Xilinx FPGAs can be implemented with a significantly smaller resource footprint, still achieving reasonable performance sufficient for many applications (such as challenge-response protocols or hybrid firmware encryption). For low-cost smart devices like RFID tags, smart cards and wireless sensor nodes, a specialized lightweight algorithm named Hummingbird is used to protect against attacks like linear or differential attacks [6]. Its performance over an FPGA platform is tested in [7], and the results show that this block cipher algorithm works with high accuracy, and that the system implemented on a Spartan-2 FPGA becomes reconfigurable and has a well-balanced architecture and low complexity. In 2013, the U.S. National Security Agency (NSA) developed the Simon and Speck families of lightweight block ciphers as an aid for securing applications in very constrained environments where AES may not be suitable, including IoT [8]. The Simon32/64 algorithm is implemented in [9] using Xilinx ISE development tools on an FPGA model Virtex-5 XC5VFX200T, and the results show that, by selecting small data and key lengths and using few resources, this Simon algorithm is very suitable for lightweight applications and embedded systems. Still, depending on the security level that needs to be achieved, the key length can be increased, and hence the level of security grows. For providing privacy and security among different smart devices in constrained environments, the FeW block cipher was proposed in 2014. FeW uses a mix of Feistel and generalized Feistel structures to enhance the security against basic cryptanalytic attacks such as differential, linear, impossible differential and zero-correlation attacks [10]. This lightweight block cipher implemented on FPGA confirms, in [11], that it is suitable for low-resource applications and that the algorithm can be implemented with very small area requirements. Not only block or stream ciphers have been taken into consideration when evaluating cryptographic algorithms. Cryptographic hash functions have many information-security applications, notably in digital signatures, message authentication codes (MACs), and other forms of authentication [12]. In [13], the authors performed a comparison between KECCAK (200 & 400), PHOTON (similar to AES) and SPONGENT (similar to PRESENT) for FPGAs and selected, as performance parameter, the ratio between throughput and slices. It has been demonstrated that the complexity


of round functions leads to throughputs varying from 600 Kbit/s to 119 Kbit/s. Besides, KECCAK achieves high throughputs but has the most area-consuming round function, PHOTON has better scalability in area but the lowest throughput in the field, and SPONGENT is the most efficient in terms of throughput per area and can be the smallest or the fastest in the field, depending on the parameters. The present paper is organized as follows. Section 2 presents a preliminary architecture of a testbed for evaluating cryptographic algorithms and IoT gateway stability over a heterogeneous environment, while Sect. 3 presents the middleware specification for such a testbed. Some preliminary evaluation results are given in Sect. 4, while Sect. 5 draws the conclusions.

2 Preliminary Architecture

To emphasize the differences between hardware and software implementations of a cryptographic algorithm, we made several tests. In one of them, using a Xilinx ML507 board that includes a Xilinx Virtex-5 XC5VFX70T FPGA, we compared the execution time of a fixed number of encryptions using the AES cryptographic algorithm with a 256-bit key, implemented both as a software application for the PowerPC440 running at 400 MHz and as a hardware module. After running both implementations for 300 consecutive rounds, the results showed that the software application took an average of 298,500 clock cycles, with a deviation of around 450 clock cycles, while the hardware implementation took only an average of 297 clock cycles, with a significantly smaller deviation of around 12 clock cycles. Mapping these values into time units, the obtained results become 746 μs for the software implementation and 742 ns for the hardware one, resulting in an approximately 1,000 times faster execution. Figures 1 and 2 show the variation of the number of clock cycles for the hardware and software implementations, respectively.
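The time figures above follow directly from the cycle counts and the clock frequency; the short check below reproduces the arithmetic, assuming (as the reported 742 ns implies) that the hardware module is also clocked at 400 MHz.

```c
#include <stdio.h>

int main(void) {
    double f_clk = 400e6;          /* 400 MHz clock            */
    double sw_cycles = 298500.0;   /* software AES-256 run     */
    double hw_cycles = 297.0;      /* hardware AES-256 module  */

    printf("software: %.1f us\n", sw_cycles / f_clk * 1e6);   /* ~746 us  */
    printf("hardware: %.1f ns\n", hw_cycles / f_clk * 1e9);   /* ~742 ns  */
    printf("speed-up: ~%.0fx\n", sw_cycles / hw_cycles);      /* ~1000x   */
    return 0;
}
```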

Fig. 1. Variation of number of clock cycles – hardware implementation

Fig. 2. Variation of number of clock cycles – software implementation

Therefore, hardware implementations can bring an important step-up in ensuring faster and more reliable security in IoT and, through our proposed architecture, we allow the gradual transformation of cryptographic algorithm implementations from software, for a traditional processor, to hardware in FPGAs or SDSoCs, by allowing the benchmarking of the algorithms at each point of conversion. In this way, the user can have an overview of how his cryptographic algorithm behaves on each type of platform, so that he can take the best decision for the final implementation, based also on the purpose and environment it was designed for. Our proposed testbed architecture, therefore, is made up of three different types of hardware components that enable the evaluation of cryptographic algorithms and IoT gateway stability over a heterogeneous environment, thus covering a broader spectrum of sensors and applications. More precisely, as presented in Fig. 3, the proposed architecture is constructed from two primary components:
• A hardware layer that consists of the three types of processing units: a traditional 64-bit processor, a microcontroller (MCU) and a Software Defined System on a Chip (SDSoC).
• A software layer that is composed of two sub-layers: one that groups the drivers and APIs targeted for each one of the hardware architectures, and another that includes the APIs and software applications that will be developed specifically for this testbed.
The three types of hardware components were chosen not only for encompassing the most used architectures in developing IoT systems, but also for identifying algorithms that can perform better directly on hardware than through auxiliary software, and implementing these algorithms. The architecture has already been presented in [14], and this paper builds on our previous work by specifying the middleware API architecture.

3 Middleware Specification

Fig. 3. IoT high-level testbed architecture overview

The main challenge with respect to middleware development is ensuring the predictability, controllability and adaptability of the operating characteristics of applications on the underlying systems. All these issues vary to different degrees in large-scale systems, because of the dynamics of many interconnected systems, which often may be constructed from smaller systems. Therefore, the Software component of the testbed is composed of two sub-layers, as mentioned in [14]. The first sub-layer represents the link between the hardware platforms and the upper layer. For each of the hardware architectures, there are different drivers that enable the connection between the supervising machine and the testbed. The second software layer provides "all-purpose" APIs so that developers can create single applications for multiple targets. This component will integrate the basic functionalities required for ensuring data exchange and management with the hardware platforms. Besides the testing and monitoring of the software implementations, this layer also brings the possibility to monitor and integrate data coming from the hardware elements, allowing developers to precisely know the trends of resource and power consumption on the targeted platforms. Considering that, for ease of use, the final solution needs to be a single component that users can easily include in their projects, the entire middleware is constructed as a single interface between users and hardware components. Figure 4 presents, in general terms, how the middleware will react when integrated into a testing application. The diagram is composed of two sections, each one dedicated to a use case. First, considering the use case of a targeted application, we consider a simple message exchange between the calling application and the HW modules, managed by the middleware acting as a controller. Each request is analysed for compliance with the targeted module, and then it is passed on to the workflow. In the second use case, we exemplify how the middleware reacts when an application is implemented to target all HW components. Similar to the first use case, the middleware checks for compliance, and then it forks the calls to the underlying HW platforms. Any received result is put on a waiting list until all the other calls are completed, and then the results are grouped and sent to the upper layer, namely the user application.


Fig. 4. Message exchange between middleware, user application and hardware components
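To illustrate the two interaction patterns of Fig. 4, the sketch below routes a compliant request either to a single hardware target or forks it to all three platforms and returns the grouped results. Every type and function name here is invented for illustration; none of them belongs to the actual middleware.

```c
#include <stddef.h>
#include <stdbool.h>

typedef enum { TARGET_CPU, TARGET_MCU, TARGET_SDSOC, TARGET_ALL } target_t;

typedef struct {
    target_t    target;
    const char *algorithm;   /* e.g. "PRESENT-80" (hypothetical identifier) */
    const void *payload;
    size_t      len;
} request_t;

typedef struct { target_t from; double exec_time_us; int status; } result_t;

/* Hypothetical per-platform drivers (lower software sub-layer). */
static result_t run_on_cpu(const request_t *r)   { (void)r; return (result_t){TARGET_CPU,   0.0, 0}; }
static result_t run_on_mcu(const request_t *r)   { (void)r; return (result_t){TARGET_MCU,   0.0, 0}; }
static result_t run_on_sdsoc(const request_t *r) { (void)r; return (result_t){TARGET_SDSOC, 0.0, 0}; }

static bool compliant(const request_t *r) {
    /* e.g. check that the payload and algorithm are supported by the target */
    return r->payload != NULL && r->len > 0 && r->algorithm != NULL;
}

/* Use case 1: a single target.  Use case 2: TARGET_ALL forks the call to
 * every platform and the results are grouped before returning.          */
int dispatch(const request_t *r, result_t out[3]) {
    if (!compliant(r)) return -1;
    int n = 0;
    if (r->target == TARGET_CPU   || r->target == TARGET_ALL) out[n++] = run_on_cpu(r);
    if (r->target == TARGET_MCU   || r->target == TARGET_ALL) out[n++] = run_on_mcu(r);
    if (r->target == TARGET_SDSOC || r->target == TARGET_ALL) out[n++] = run_on_sdsoc(r);
    return n;
}
```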

From the programmability point of view, it was of interest which programming language to use for developing the proposed middleware. It should be easy to integrate with the programming languages used by the IDEs for developing on the envisioned hardware boards.

Table 1. IDEs and programming languages used for development boards.

Development board   IDE                       Programming language
SeedEye             Contiki OS                C
                    Packet sniffer 15.4       C
                    Duck-lab                  Mainly written in C++ but integrated with other languages such as Python and R
TM4C129x            TivaWare™ for C Series    C
Freakduino          Arduino IDE               C (other options available: C#, Python)
MicroZed            Xilinx Vivado Design      C


Table 1 shows a comparative look at the IDEs and programming languages used for developing algorithms on the chosen boards. Although some of the boards do allow other options, it is easy to see that the majority of the boards use the C language. Therefore, the decision is to use the C/C++ programming language for developing the middleware.

4 Evaluation Results

This section presents evaluation results from testing a software library that implements lightweight block ciphers with different optimizations for the x86 platform [15]. Three algorithms have been implemented: PRESENT, LED and Piccolo, and three techniques were explored: table-based implementations, vperm (vector permutation) and bitslice implementations. Figure 5 presents the values obtained by the vperm technique for the algorithms regarding block cipher runs, using different key lengths. It can be noticed that the best results are obtained for the Piccolo algorithm with an 80-bit key.

Fig. 5. Values for vperm techniques
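For orientation, measurements of this kind reduce to timing many block-cipher runs and normalizing the result. The harness below shows the general shape; `cipher_encrypt` is a placeholder for whichever implementation (table-based, vperm or bitslice) from the library in [15] is under test — the name and signature are not the library's real API.

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

/* Placeholder for the implementation under test (table, vperm or bitslice). */
static void cipher_encrypt(uint8_t *block, const uint8_t *key) {
    (void)key;
    for (int i = 0; i < 8; i++) block[i] ^= 0xA5;   /* dummy work only */
}

int main(void) {
    uint8_t block[8] = {0}, key[10] = {0};          /* 64-bit block, 80-bit key */
    const long runs = 1000000;

    clock_t t0 = clock();
    for (long i = 0; i < runs; i++)
        cipher_encrypt(block, key);
    clock_t t1 = clock();

    double total_s = (double)(t1 - t0) / CLOCKS_PER_SEC;
    printf("%.1f ns per block, %.2f Mbit/s\n",
           total_s / runs * 1e9,
           (runs * 64.0) / total_s / 1e6);
    return 0;
}
```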

When the vperm technique is replaced with table-based implementations or bitslice implementations, the values achieved improve for all algorithms. Still, the best results are again obtained for Piccolo, with the bitslice technique (Fig. 6).

Fig. 6. Values for table-based implementations and bitslice techniques with 16 blocks

Another possibility to compare the algorithms is to compare the percentage of the duration of the encryption part out of the duration of the entire block cipher run. Figures 7 and 8 show the percentages obtained by the LED and Piccolo algorithms when we use different key lengths and techniques. In the case of Piccolo, a percentage higher than 90% is obtained only for table-based implementations, whereas for the LED algorithm a value under 90% occurs only for bitslice with 32 blocks.

Fig. 7. Percentage of the encryption part duration from the entire block cipher duration for LED with different key lengths and different implementations


Fig. 8. Percentage of the encryption part duration from the entire block cipher duration for Piccolo with different key lengths and different implementations

5 Conclusions and Future Work

As presented throughout the article, the main objective of the proposed architecture is to provide means of measuring the performance of different types of IoT applications, mainly different implementations of lightweight cryptographic algorithms. Starting from the integration specifications, a first batch of performance tests was made using x86 implementations of some lightweight cryptography algorithms. We chose this platform as it is also a module of the final testing system solution that we propose. Therefore, based on the results obtained from our preliminary analysis of the hardware modules that compose the testbed and their innovation in specific areas, we can state that our proposed architecture will enable its users to create efficient and secure embedded systems. As future work we envision the implementation of the proposed middleware in a software framework, as well as the implementation of a selected set of algorithms on the SDSoC and MCU boards to validate the functioning of the proposed middleware. The mentioned implementations on the SDSoC and MCU boards will consist not only of simple constructions of the selected algorithms, but will also serve demonstration purposes, by integrating the functionalities of the middleware for testing their performance. Therefore, these selected algorithms will provide a form of guidance for users for the future integration of their proposed or preferred cryptographic algorithms.


Acknowledgments. This work has been funded by University Politehnica of Bucharest, through the “Excellence Research Grants” Program, UPB – GEX. Identifier: UPB–EXCELENTA–2016 project “Platform for Studying Security in IoT”, contract number 96/2016 (PaSS-IoT).

References 1. Biryukov, A., Perrin, L.: Lightweight cryptography lounge (2015). http://cryptolux.org/ index.php/Lightweight_Cryptography 2. Beaulieu, R., Shors, D., Smith, J., Treatman-Clark, S., Weeks, B., Wingers, L.: The Simon and speck of lightweight block ciphers, National Security Agency 9800 Savage Road, Fort Meade, MD 20755, USA, June 2013 3. Yalla, P., Kaps, J-P.: Lightweight cryptography for FPGAs. In: International Conference on Reconfigurable Computing and FPGAs, pp. 225–230 (2009). doi:10.1109/ReConFig.2009.54 4. Chodowiec, P., Gaj, K.: Very compact FPGA implementation of the AES algorithm. In: Walter, C.D., Koç, Ç.K., Paar, C. (eds.) CHES 2003. LNCS, vol. 2779, pp. 319–333. Springer, Heidelberg (2003). doi:10.1007/978-3-540-45238-6_26 5. von Maurich, I., Güneysu, T.: Lightweight code-based cryptography: QC-MDPC McEliece encryption on reconfigurable devices. In: Design, Automation and Test in Europe Conference and Exhibition (DATE), pp. 1–6 (2014) 6. Fan, X., Gong, G., Lauffenburger, K., Hicks, T.: FPGA implementations of the hummingbird cryptographic algorithm. In: IEEE International Symposium on HardwareOriented Security and Trust (HOST), pp. 48–51, June 2010 7. Saha, S., Islam, MR., Rahman, H., Hassan, M., Hossain, A.A.: Design and implementation of block cipher in hummingbird algorithm over FPGA. In: Fifth International Conference on Computing, Communications and Networking Technologies (ICCCNT), pp. 1–5 (2014). doi: 10.1109/ICCCNT.2014.6963084 8. Beaulieu, R., Shors, D., Smith, J.: SIMON and SPECK: Block ciphers for the internet of things, Cryptology ePrint, Archive, Report 2015/585 (2015). http://eprint.iacr.org/2015/585 9. Feizi, S., Ahmadi, A., Nemati, A.: A hardware implementation of simon cryptography algorithm. In: 4th International Conference on Computer and Knowledge Engineering (ICCKE), pp. 245–250 (2014). doi:10.1109/ICCKE.2014.6993386 10. Kumar, M., Pal, S.K., Panigrahi, A.: FeW: A Lightweight Block Cipher, Scientific Analysis Group, DRDO, Delhi, India, Department of Mathematics, University of Delhi, India (2014) 11. Nemati, A., Feizi, S., Ahmadi, A., Haghiri, S., Ahmadi, M., Alirezaee, S.: An efficient hardware implementation of FeW lightweight block cipher. In: The International Symposium on Artificial Intelligence and Signal Processing (AISP), pp. 273–278 (2015). doi:10.1109/ AISP.2015.7123493 12. Khan, D.: The Most In-Depth Hacker’s Guide, p. 73. Lulu.com, Raleigh (2015). ISBN 1329727681 13. Jungk, B., Lima, L.R., Hiller, M.: A systematic study of lightweight hash functions on FPGAs. In: International Conference on ReConFigurable Computing and FPGAs (ReConFig14), pp. 1–6 (2014). doi:10.1109/ReConFig.2014.7032493 14. Arseni, S., Miţoi, M., Vulpe, A.: PASS-IoT: a platform for studying security, privacy and trust in IoT. In: 11th International Conference on Communications (COMM 2016), Bucharest, Romania, 9–11 June 2016. ISBN:978-1-4673-8196-3 15. Some lightweight cryptography algorithms optimized for x86. https://github.com/rb-anssi/ lightweight-crypto-lib. Accessed Sep 2016

Impact of Transmission Communication Protocol on a Self-adaptive Architecture for Dynamic Network Environments

Gabriel Guerrero-Contreras¹, José Luis Garrido¹, María José Rodríguez Fórtiz¹, Gregory M.P. O'Hare², and Sara Balderas-Díaz¹

¹ Software Engineering Department, E.T.S.I.I.T., University of Granada, C/Periodista Daniel Saucedo Aranda s/n, Granada, Spain
{gjguerrero,jgarrido,mjfortiz,sarabd}@ugr.es
² School of Computer Science and Earth Institute, University College Dublin, Belfield, Dublin 4, Ireland
[email protected]

Abstract. The quality attributes of services deployed on a distributed system are critically conditioned by their placement within the distributed system. In this regard, the host election process is one of the main elements in the self-adaptive replication and deployment of services, which is one of the possible approaches to address the changing computational conditions of dynamic network environments in order to ensure the quality attributes of the system. In this paper, a study and an analysis of the behaviour of a host election algorithm under reliable and non-reliable transmission protocols (TCP and UDP) is presented. The algorithm has been proposed as a basis for a self-adaptive architecture in previous work. The results demonstrate that the reliability of TCP results in a better efficiency of the system, despite its higher latency and higher consumption of bandwidth, in comparison to UDP.

Keywords: Software architecture · Autonomic computing · Election algorithm · Service availability · Transmission Control Protocol (TCP) · User Datagram Protocol (UDP)

1 Introduction

The Service Oriented Architecture (SOA) proposes a modular distribution of the functionalities of a system through services. However, SOA itself is not sufficient to be able to operate in dynamic network environments [4], such as Mobile Cloud Computing. These environments, which are usually based on ad-hoc networks (e.g., MANETs), present new features, such as dynamic network topologies, which, if not correctly addressed, may have a significant impact on the quality attributes of the services [3]. This represents a challenge for software architects and developers, who must make suitable design decisions to address


them correctly. As a result, self-adaptive architectures have been gaining importance in the research community [14], since they can reduce the impact of context changes on the quality attributes of the system. In this context, and in order to enhance the availability of services in dynamic environments, a self-adaptive software architecture, together with a host election algorithm, has been proposed in [7]. This architecture has been designed to provide a common basis for mobile collaborative systems. The performance of the host election algorithm is essential for the proper functioning of the self-adaptive architecture, as it may be affected by the dynamic conditions of the mobile network. Under these conditions, a reliable transmission protocol, such as TCP, may be necessary. However, the low latency, low operating-system overhead, and reduced data transmission of non-reliable transmission protocols, such as UDP, are important features for dynamic and resource-constrained systems. This paper presents a comparative study of the proposed host election algorithm under TCP and UDP, in order to know whether the reliability of TCP results in a better operation of the proposed system despite its higher latency and higher bandwidth consumption. To this end, the software architecture has been implemented and simulated on Network Simulator 3 (ns-3, https://www.nsnam.org/).

2 Related Work

In the process of the dynamic replication and deployment of services in mobile networks, two main questions must be addressed: when to replicate and where to deploy a service. Several approaches that address these issues can be found in the literature. For instance, Chandrakala et al. [2] propose a prediction algorithm based on the position of the nodes, their travel speed and range. When a partition is predicted, the node in which the service replica will be deployed is chosen by the distance from the source node, its battery and storage capacity. In [6], the TORA routing protocol is used in combination with an estimation of the residual lifetime of wireless links. When a node predicts a partition, this node will host a replica of the service, regardless of its characteristics. In other works, node clustering methods are applied in order to turn a distributed network into a set of interconnected local clusters that can be handled individually. In this way the management of the network is simplified. Psaier et al. [12] create node clusters on the basis of the distance between mobile nodes. However, in [1, 13] it is shown that the speed of nodes is a better measure to create clusters of mobile nodes. In [1], service replicas are created when too many requests are made to a service from an external group. In both systems [1, 12], the node that will host the new replica is chosen by considering its computational capabilities (battery, CPU, memory, etc.), without taking into consideration either its current workload or the network topology. Consequently, the resources of the host node can be quickly depleted. Finally, Hamdy et al. [9] propose a replication protocol based on the interest of nodes in the use of the service. When an application needs to access a service and there is not a replica of that service in its neighbourhood,



a replica of the service will be created and deployed in the same node in which the application is hosted. Generally, these ad-hoc solutions have been developed for specific scenarios, and they are based on an implicit, and often restricted, context model. The definition and use of an explicit context model can facilitate a wider adoption and applicability of the proposal. The possibility of extending the model according to particular requirements for each system would provide a more efficient solution in terms of energy consumption to several scenarios. With regard to communications, the protocol used at transport layer plays an important role in the QoS of the system. Some works have discussed the benefits and advantages of UDP and TCP protocols in MANETs [11]. While UDP provides high flexibility and minimum network requirements, TCP provides a reliable end-to-end packet delivery. However, in accordance with these works, TCP shows a low performance in MANETs owing to wireless channel noise and route changes.

3 A Self-adaptive Software Architecture

The software architecture introduced in [7] has two main objectives: (1) to provide a reusable and adaptable basis for collaborative support systems, and (2) to enhance the availability of services in dynamic environments through an adaptive replication and deployment approach. It consists of the following five main services (Fig. 1):
– The Monitoring Services encompass a set of monitoring services, which sense the context of the device in order to detect potential events that could affect the availability of a service. In the proposed architecture, device capabilities and network topology are monitored. The information provided by the routing protocol is used to estimate the network topology.
– The Context Manager Service is responsible for processing and storing the information received from the monitors of the device. This information will be used by the Replica Manager Service in order to adjust the deployment of the services according to the changes produced in the execution context.
– The Communications Service allows the entities of the architecture to communicate with each other and with other nodes of the network, under two different communication paradigms: (1) a Publish-Subscribe paradigm; and (2) a Request-Response paradigm, following a SOA 2.0 approach [10], in which services are not just passive entities, but are also able to receive and generate events proactively.
– The Replica Manager Service encapsulates the adaptation logic regarding the replication and deployment of the service replicas. In order to provide a fully distributed solution, each device has a Replica Manager Service replica, and this set of replicas is responsible for coming to an agreement, through the election algorithm, to establish which will be the active service replica.
– The Service itself (i.e., the task application), which will be a passive or active replica according to the decision made by the Replica Manager Service.


Fig. 1. Services and software components of the self-adaptive software architecture.
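To make the interplay of these services more tangible, the fragment below mimics the publish–subscribe path from a monitor event to the Replica Manager: a monitor publishes a context change, the Context Manager stores it, and the Replica Manager is notified so it can re-evaluate the deployment. All identifiers are illustrative and do not correspond to the actual implementation.

```c
#include <stdio.h>

typedef struct { const char *key; double value; } context_event_t;  /* e.g. {"battery", 0.42} */

typedef void (*subscriber_fn)(const context_event_t *);

/* Context Manager: stores the latest value and forwards it to subscribers. */
static subscriber_fn replica_manager_cb;
static void context_manager_publish(const context_event_t *ev) {
    /* ... store ev in the context model ... */
    if (replica_manager_cb) replica_manager_cb(ev);
}

/* Replica Manager: reacts to context changes by re-evaluating the deployment. */
static void replica_manager_on_event(const context_event_t *ev) {
    printf("context change %s=%.2f -> re-run host election\n", ev->key, ev->value);
}

int main(void) {
    replica_manager_cb = replica_manager_on_event;     /* subscribe             */
    context_event_t ev = { "battery", 0.42 };          /* from a Monitoring Service */
    context_manager_publish(&ev);                      /* publish               */
    return 0;
}
```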

3.1 Host Election Algorithm

There are three possible states for each node: "Local Mode", "Client Mode" or "Server Mode". A node is in the "Local Mode" state when there is no reachable node in its neighbourhood. When a node starts the election, the first step is to calculate its score and broadcast it to its group. Then, the node goes into a passive mode where it waits to receive the scores of its neighbours. When the node has received all the scores of its neighbours, or the timer expires, the node calculates which is the most suitable node (within its group) to act as server. In the calculation of the best node, the node will select the node of the group with the best score; this node could be itself. The score of the nodes is replicated information, that is, all the nodes within the group manage the same list of scores. Thus, all the nodes will take the same decision about which node will act as server. If the best node is the node itself, it will go into the "Server Mode" directly, assuming that the rest of the nodes will take the same decision. If another node of the group is the best, the node will send a message requesting service to that node. At this point, one of the following situations can happen:
– The requested node is in the "Server Mode" state. Therefore, the requesting node will receive an affirmative answer and it will go to "Client Mode".
– The requested node is in the "Client Mode" state, which means that the requesting node has not sent its request to the best node of the group. In this case, the requested node will respond with a SERVER REJECTION message. This message will include information about the server of the requested node and its score, in order to complete the information of the requesting node.
– The requested node is still in transition. At this point, the requested node will return a SERVER REJECTION message, without additional information.
– The request message or the acceptance/rejection message is lost. In this case, once the waiting timer has expired, the request will be sent again.
– Finally, in the case of the "Server Mode" state, when the node receives a request for service (SERVER REQUEST), it accepts the request through the message SERVER ACCEPTANCE.
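A compressed sketch of this decision logic is shown below: every node ranks the replicated score list in the same way and either becomes the server or requests service from the best-ranked node. Message handling, timers and retransmissions are omitted, and the score function itself is a placeholder, since it is not detailed here.

```c
typedef enum { MODE_LOCAL, MODE_CLIENT, MODE_SERVER } node_mode_t;

typedef struct {
    int    id;
    double score;       /* replicated: every node holds the same list          */
    int    reachable;   /* 1 if the node is in the current partition/group     */
} node_info_t;

/* Every node runs the same selection over the same replicated data, so all
 * members of a group reach the same conclusion without a coordinator.       */
static int best_node(const node_info_t *nodes, int n) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!nodes[i].reachable) continue;
        if (best < 0 || nodes[i].score > nodes[best].score) best = i;
    }
    return best;   /* -1 means no reachable neighbour: Local Mode */
}

node_mode_t decide_mode(int self_id, const node_info_t *nodes, int n) {
    int best = best_node(nodes, n);
    if (best < 0)                  return MODE_LOCAL;
    if (nodes[best].id == self_id) return MODE_SERVER;
    /* otherwise: send SERVER_REQUEST to nodes[best] and wait for
     * SERVER_ACCEPTANCE / SERVER_REJECTION before entering Client Mode */
    return MODE_CLIENT;
}
```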

4 Evaluation

The proposed architecture has been simulated and evaluated using the ns-3 network simulator. The simulated scenario aims to approximate the situation of a work team, where different users are moving around a wide area, sharing information through a collaborative service [8]. It consists of a set of mobile nodes with a random walk mobility model. The speed of the nodes varies between 0.5 and 2 m/s, and they have pauses with a duration that varies between 60 and 300 s. The nodes have an IEEE 802.11 wireless connection, with a range of 250 m and a bandwidth of 1 Mbps. The mobility area is 1000 m², and the nodes are introduced at random initial positions. The simulated execution time is one hour. Each of the nodes has a Replica Manager Service and a replica of the service. In this implementation, the OLSR ("Optimized Link State Routing") routing protocol has been used [5] to make multi-hop communication between the nodes possible. Under these conditions, two different versions of the architecture have been implemented: (1) based on TCP communication (reliable), and (2) based on UDP communication (non-reliable). The number of nodes has ranged from 4 to 16. The simulation has been performed 100 times with 100 different random seeds for each configuration, in order to eliminate the influence of any random factor. From the results obtained, two main aspects can be evaluated to determine whether the reliability of TCP results in a better operation of the host election algorithm compared with UDP: (1) the use of bandwidth, and (2) how the protocols influence the service availability.

Fig. 3. Efficiency in the message delivery of TCP and UDP protocols.

4.1 Use of Bandwidth

Fig. 2. KB sent and received by the nodes of the network, owing to the execution of the host election algorithm, under the TCP protocol (left) and the UDP protocol (right).

The chart of Fig. 2 shows the KB sent and received by the nodes of the network, owing to the execution of the host election algorithm, under the TCP and UDP protocols. TCP has shown a slight, but not relevant, increase in the traffic generated in comparison to UDP, as shown in Table 1. However, UDP shows a problem with message loss, losing more than 1 KB of information in a network of 16 nodes (see more detailed information in Table 1). In this regard, TCP shows an efficiency in message delivery near to 1 (Fig. 3), whereas UDP shows a constant efficiency decrease, losing 9% of the information sent in a network of 15 nodes (Fig. 4).

Table 1. KB sent and lost in the node communication for the execution of the host election algorithm, under TCP and UDP protocols.

Nodes   TCP KB sent   TCP KB loss   UDP KB sent   UDP KB loss
4       0.4973        0.0016        0.4392        0.0089
5       0.7698        0.0035        0.6874        0.0182
6       1.4283        0.0079        1.3532        0.0333
7       1.8298        0.0119        1.7521        0.0566
8       2.4987        0.0209        2.4527        0.0954
9       3.5122        0.0303        3.2260        0.1472
10      4.3226        0.0415        4.3982        0.2156
11      5.4847        0.0643        5.4242        0.3264
12      6.5302        0.0805        6.7014        0.4265
13      8.1340        0.1065        7.9555        0.5908
14      9.4053        0.1395        9.2631        0.7728
15      11.0173       0.1661        10.7583       0.9608
16      12.7699       0.3507        12.5292       1.3157


Fig. 4. Percentage of KB loss in the node communications of TCP and UDP protocols.

Table 2. Service availability provided by the self-adaptive software architecture, under TCP and UDP communication protocols.

Nodes   TCP       UDP
4       99.81%    99.84%
5       99.73%    99.77%
6       99.46%    99.45%
7       99.37%    99.39%
8       99.21%    99.20%
9       98.88%    98.99%
10      98.77%    98.66%
11      98.53%    98.43%
12      98.42%    98.22%
13      98.00%    98.03%
14      97.89%    97.91%
15      97.55%    97.69%
16      97.33%    97.51%

4.2 Service Availability

The availability of the service is affected by the time that the host election algorithm takes to choose a host to act as server. Thus, this decision, which involves all nodes of a network partition, is influenced by the latency in the communication. The host election algorithm has provided a similar service availability with both communication protocols, as shown in Table 2. Specifically, except in the case of networks of 10, 11 and 12 nodes, a slightly lower service availability is

Fig. 5. Service availability provided by the self-adaptive software architecture, under the TCP and UDP communication protocols.


provided under TCP (Fig. 5). Note that the service availability is the time in which, when a node has connection with others, it can access a service replica (i.e., it is acting as client or server). Therefore, the availability of the service is inversely proportional to the time required for the execution of the election algorithm.
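Read literally, this definition can be computed per node as the fraction of connected time spent in Client or Server Mode, as in the small helper below (variable names are illustrative).

```c
/* Availability = time with access to a replica (Client or Server Mode)
 * divided by the total time connected to at least one other node.      */
double service_availability(double client_s, double server_s, double connected_s) {
    if (connected_s <= 0.0) return 0.0;
    return 100.0 * (client_s + server_s) / connected_s;   /* percent */
}
```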

5 Discussion

The evaluation performed has highlighted that, although TCP presents higher bandwidth requirements than UDP owing to acknowledgement packets, it has only shown, in the simulations performed, a slight increase in the traffic generated (of 0.0969 KB on average), whereas the average efficiency in message delivery of TCP is 0.9898, against the 0.9458 presented by UDP. However, the service availability provided by the architecture is lower under TCP than under UDP: on average, under UDP the architecture provides a service availability 0.0107% higher. This is caused by the lower latency of UDP in comparison to TCP, which allows the host election algorithm to provide a better response time. Therefore, in this case, the TCP bandwidth requirements are not much higher than UDP's, whereas TCP's reliability and UDP's lower latency are relevant features to take into consideration in the design of the self-adaptive architecture. In this respect, in the currently proposed host election algorithm the nodes of a group take the same decision about the host election independently. This is possible since the score of the nodes is propagated among the nodes themselves, that is, all the nodes within the group manage the same list of scores. Hence, all the nodes will take the same decision about which node will act as server. This approach avoids the server having to communicate its election to the rest of the nodes. However, it requires a reliable communication protocol, since no score message can be lost. This makes TCP necessary, despite its higher latency, to guarantee message delivery; alternatively, in the case of a non-reliable transport protocol such as UDP, it is necessary to implement acknowledgement messages and message retransmission to guarantee the delivery of score messages. However, the latter is more inefficient than in TCP, among other reasons because this task must be implemented and managed in the application layer, instead of in the transport layer, as in TCP. For this reason, although the host election algorithm provides a better service availability under UDP, the server efficiency (i.e., the average time in which a node is acting as server relative to the total service availability provided by the architecture) is lower than under TCP. Alternatively, the current approach to host election could be modified to take advantage of the low latency of UDP. A voting approach, where each node votes, from its point of view, for the best host node, and the most voted node is elected as host, would remove the need for all score messages to be delivered to all nodes of the group. This approach would have a better response time under UDP, and it would be more robust against message loss.

6 Conclusions

In this paper, we have presented and analysed the results obtained from the study of the behaviour of a self-adaptive architecture [7] under different transmission protocols: TCP and UDP, a reliable and a non-reliable protocol, respectively. From this study, it can be concluded that TCP presents, as expected, a better message delivery efficiency. In a network of 16 nodes, TCP provides a message delivery efficiency of 0.9811, whereas UDP presents 0.9029. Furthermore, contrary to what was initially expected, the architecture does not notably increase the use of bandwidth under TCP: it has only shown a slight increase in the traffic generated (of 0.0969 KB on average). Finally, under UDP, the host election algorithm presents a slight improvement in the response time, improving the service availability provided by the architecture, because of the low latency of this transport protocol. The results obtained in this study allow us to define the next steps towards the improvement of this work, such as the study of a voting approach for the host election algorithm, in order to remove the need for all score messages to be delivered to all nodes of the group, providing a system that is more robust against message loss and taking advantage of the low latency of UDP to provide a better response time of the election algorithm. However, it would be necessary to extend this study to wider mobile ad-hoc networks and other mobility models (e.g., the reference point group mobility model) in order to know whether they affect the conclusions obtained in the current study.

Acknowledgements. This research work is funded by the Spanish Ministry of Economy and Competitiveness through the R&D Project Ref. TIN2016-79484-R, and the Scholarship Program FPU ref. FPU13/05520 granted by the Spanish Ministry of Education, Culture and Sports.

References 1. Ahmed, A., Yasumoto, K., Ito, M., Shibata, N., Kitani, T.: HDAR: highly distributed adaptive service replication for MANETs. IEICE Trans. Inf. Syst. E94–D, 91–103 (2011) 2. Chandrakala, C.B., Prema, K.V., Hareesha, K.S.: Improved data availability and fault tolerance in MANET by replication. In: 3rd IEEE International Advance Computing Conference (IACC 2013), pp. 324–329. IEEE, February 2013 3. Chlamtac, I., Conti, M., Liu, J.J.N.: Mobile ad hoc networking: imperatives and challenges. Ad Hoc Netw. 1(1), 13–64 (2003) 4. Choudhury, P., Sarkar, A., Debnath, N.C.: Deployment of service oriented architecture in MANET: a research roadmap. In: 2011 9th IEEE International Conference on Industrial Informatics, pp. 666–670. IEEE, July 2011 5. Clausen, T., Jacquet, P., Adjih, C., Laouiti, A., Minet, P., Muhlethaler, P., Qayyum, A., Viennot, L.: Optimized link state routing protocol. Network Working Group, pp. 1–76 (2003)


6. Derhab, A., Badache, N.: A pull-based service replication protocol in mobile ad hoc networks. Eur. Trans. Telecommun. 18(1), 1–11 (2007) 7. Guerrero-Contreras, G., Garrido, J.L., Balderas-Diaz, S., Rodriguez-Dominguez, C.: A context-aware architecture supporting service availability in mobile cloud computing. IEEE Trans. Serv. Comput. PP(99), 1 (2016) 8. Guerrero-Contreras, G., Rodr´ıguez-Dom´ınguez, C., Balderas-D´ıaz, S., Garrido, J.L.: Dynamic replication and deployment of services in mobile environments. In: Rocha, A., Correia, A.M., Costanzo, S., Reis, L.P. (eds.) New Contributions in Information Systems and Technologies. AISC, vol. 353, pp. 855–864. Springer, Cham (2015). doi:10.1007/978-3-319-16486-1 85 9. Hamdy, M., Derhab, A., K¨ onig-Ries, B.: A comparison on MANETs’ service repli¨ cation schemes: interest versus topology prediction. In: Ozcan, A., Chaki, N., Nagamalai, D. (eds.) WiMo 2010. CCIS, vol. 84, pp. 202–216. Springer, Heidelberg (2010). doi:10.1007/978-3-642-14171-3 17 10. Krill, P.: Make way for SOA 2.0. InfoWorld (2006). http://www.infoworld.com/t/ architecture/make-way-soa-20-420 11. Mayhew, G.L.: Quality of service in mission orientated ad-hoc networks. In: 2007 IEEE Aerospace Conference, pp. 1–9, March 2007 12. Psaier, H., Juszczyk, L., Skopik, F., Schall, D., Dustdar, S.: Runtime behavior monitoring and self-adaptation in service-oriented systems, pp. 164–173 (2010) 13. Wang, K., Li, B.: Efficient and guaranteed service coverage in partitionable mobile ad-hoc networks. In: Proceedings of Twenty-First Annual Joint Conference of the IEEE Computer and Communications Societies, vol. 2, pp. 1089–1098. IEEE (2002) 14. Weyns, D., Ahmad, T.: Claims and evidence for architecture-based self-adaptation: a systematic literature review. In: Drira, K. (ed.) ECSA 2013. LNCS, vol. 7957, pp. 249–265. Springer, Heidelberg (2013). doi:10.1007/978-3-642-39031-9 22

A Survey on Anti-honeypot and Anti-introspection Methods

Joni Uitto, Sampsa Rauti, Samuel Laurén, and Ville Leppänen

University of Turku, 20014 Turku, Finland
{jjuitt,sjprau,smrlau,ville.leppanen}@utu.fi

Abstract. Modern virtual machines, debuggers, and sandboxing solutions lend themselves to more and more inconspicuous ways of running honeypots and of observing and analyzing malware and other malicious activity. This analysis yields valuable data for threat assessment, malware identification and prevention. However, the use of such introspection methods has caused malware authors to create malicious programs with the ability to detect and evade such environments. This paper presents an overview of existing research on anti-honeypot and anti-introspection methods. We also propose our own taxonomy of detection vectors used by malware.

1 Introduction

Honeypots can be thought of as a means of capturing and analyzing malicious behaviour and traffic on networked computer systems. Depending on the level of involvement, honeypots can be roughly divided into three categories: low-interaction honeypots, medium-interaction honeypots and high-interaction honeypots [19]. Low-interaction honeypots (LIHPs) are fairly simple. Their main functionality is to provide an out-facing interface that masquerades as a legitimate service provider and detects irregular activities. As the usual use case for LIHPs is deployment to a corporate network (or similar), almost all traffic directed at the honeypot is illegitimate. Once an intrusion is detected, system administrators are alerted and the honeypot shuts down. Medium-interaction honeypots (MIHPs) are a little more involved than LIHPs. Instead of simply presenting a selection of out-facing interfaces, MIHPs usually have a specialized task or target. For instance, an MIHP might be configured to serve as an SSH honeypot. Casting aside all other functionality, the honeypot aims to provide as convincing an interaction with the selected interface domain as possible. Having a strict restriction upon which to build the system, it is possible to deceive malware with highly convincing interaction. High-interaction honeypots (HIHPs) are even more involved; their purpose is to let malware infiltrate the system in order to gain a better understanding of the methods, technologies and identities of the malicious adversaries. While it is clear that HIHPs offer a much better view on the activities of these malicious characters and better opportunities to collect valuable data, they are also more vulnerable. Having access on the inside means that the malicious program has a much wider attack surface at its disposal and the honeypot might be compromised.

In this paper, we survey the anti-honeypot and anti-introspection methods used by malware based on existing literature. Based on the existing taxonomies of honeypot detection vectors, we propose our own taxonomy that we believe to be a good fit for the current threat landscape. We also categorize the existing research on anti-honeypot techniques and anti-introspection methods according to the presented taxonomy.

The authors gratefully acknowledge Tekes – the Finnish Funding Agency for Innovation, DIMECC Oy and the Cyber Trust research program for their support.

2 Anti-honeypot Methods Against Low- and Medium-Interaction Honeypots

Low- and medium-interaction honeypots both differ from high-interaction honeypots by not letting the attacker infect the system. LIHPs and MIHPs simply 'converse' with their mark to the extent they are allowed, trying to gather as much data as possible. Thus, the common way to detect these two types of honeypots is by developing and applying fingerprinting methods.

2.1 Network Level Fingerprinting

Low-interaction honeypots are usually deployed using some form of virtualization or simulation. It is not uncommon to have multiple LIHPs running on the same hardware. This in turn has a clear performance impact that reflects on the network response times. As discussed in [18], these latencies can be used to profile network traffic and remotely detect honeypots. While the results depend heavily on the network topology, technologies and profiling methods in use, ideal conditions may yield a detection rate of up to 95% simply by using ICMP ECHO requests, which are fairly low-priority network messages and therefore exhibit large variance in response times. Similar work on network traffic profiling is done in [8]. Fu et al. measure network link latencies and build a classifier based on Neyman-Pearson decision theory. This classifier is used to detect honeynets built on the Honeyd [21] low-interaction honeypot implementation. They achieve a detection rate similar to [18]. A large portion of this result is explained by the large difference between software and hardware timing resolution: hardware-based timers used by real network devices operate on a microsecond scale, while software-based timers operate on a millisecond scale. In their paper, Fu et al. [8] suggest and implement patches to the Linux kernel and the Honeyd software in order to increase the timing resolution. According to their experiments, this counteracts the fingerprinting quite well.
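To make the timing vector concrete, the following illustrative Python sketch (not taken from [8] or [18], and simplified to use TCP connection setup times instead of ICMP so that no raw-socket privileges are needed) measures round-trip times to a target service and reports their mean and spread; unusually large or jittery latencies would then be compared against a baseline for real hardware. The target address 192.0.2.10 is a documentation-range placeholder, and the port and sample count are arbitrary choices.

import socket
import statistics
import time

def connect_rtts(host, port, samples=20, timeout=2.0):
    # Measure TCP connect round-trip times (in milliseconds) to host:port.
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                pass
        except OSError:
            continue  # unreachable or filtered samples are simply skipped
        rtts.append((time.perf_counter() - start) * 1000.0)
    return rtts

rtts = connect_rtts("192.0.2.10", 22)  # placeholder target, hypothetical
if len(rtts) >= 2:
    print("mean %.3f ms, stdev %.3f ms" % (statistics.mean(rtts), statistics.stdev(rtts)))

A classifier in the spirit of [8] would replace this simple mean/standard-deviation report with a Neyman-Pearson decision computed over many such measurements.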


2.2 Application and Service Level Fingerprinting

Low-interaction and medium-interaction honeypots are usually built in a compact manner. They aim to simulate a very specific kind of system, usually targeted at detecting specific types of attacks. The simplest low-interaction honeypot is a simple logging program listening on port 22 (SSH protocol) and recording all received data to a log file. This makes the honeypots easier to develop, deploy and maintain. However, it also makes them easier to recognize. Attackers build complex fingerprints based on the notion of related services: if a network host A offers services X and Y, it is common that host A should also offer services W, E, and R. A real network host would then have the whole set of services (or some large subset), while the honeypot might be offering only X and Y. This leads to easy detection by attackers [18].
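As an illustration of this related-services idea (a hypothetical sketch, not a reproduction of any fingerprinting tool from the surveyed papers), the following Python snippet checks whether a host that answers on a 'trigger' port also answers on the ports one would expect from a comparable real server; the particular port sets are invented for the example.

import socket

# Hypothetical profile: a real host exposing SSH would, in this example,
# also be expected to expose most of these related services.
EXPECTED_RELATED = {22: [80, 443, 25, 110, 143]}

def port_open(host, port, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def looks_sparse(host):
    # Flag hosts that answer on a trigger port but on few of its related ports.
    for trigger, related in EXPECTED_RELATED.items():
        if port_open(host, trigger):
            hits = sum(1 for p in related if port_open(host, p))
            if hits < len(related) // 2:
                return True
    return False

A sparse service set is of course only a weak indicator on its own; real fingerprints combine several such observations.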

3 Anti-honeypot Methods Against High-Interaction Honeypots

The aim of high-interaction honeypots is to get infected by malicious software and act like a real system as convincingly as possible. This allows researchers or other security experts to monitor and gather data on the malicious entity. Currently, the most common and interesting catch is a botnet worm, as these entities are highly evasive and persistent. Being part of a botnet, combined with deep introspection and data analysis tools, can yield invaluable operational and technological information. Today's malware is very introspection-aware, and in most cases it is only a matter of time before the honeypot is recognized and the infected host is cast aside. This is especially true for botnets, where the bot-herder has greater control over the infected hosts. On the positive side, gaining an understanding of how honeypots are discovered is valuable information in its own right [15]. HIHPs are more or less real systems, which renders them less secure than LIHPs and MIHPs, as the potential attack surface is much larger and the honeypot might end up under the attacker's control.

3.1 System Level Fingerprinting

High-interaction honeypots must resemble real systems as closely as possible in order to fool a dedicated black hat hacker. This requires convincing system components and software. To achieve this, HIHPs are often implemented as real operating systems with real software, run either on a virtualization platform or on real hardware. In the virtualized case, hooks in the virtualization layer are used to monitor the guest operating system; when running on real hardware, the introspection and data collection tools are bundled in with the honeypot itself. In either case, these monitoring systems leave traces in the system. Examples of such traces are irregular hardware components, uncharacteristic behavior (e.g. large system call latencies) and hints of monitoring software [6,13–15].



The attackers develop capabilities for detecting these anomalies (or do it manually, if need be) and build methods and fingerprints for detecting monitored systems that are potentially honeypots. Research in the early 2000s was highly concerned with virtualization being an instant kill-switch for the attackers. However, fifteen years of network infrastructure development have seen the rise of huge numbers of virtualized services, which means those concerns are no longer as valid as they used to be. Although we do not cover this issue in detail in this study, the techniques malware uses to detect virtual machines and hypervisors still have some significance [4].

3.2 Operational Analysis

A harder problem for honeypots that wish to remain undetected is the law. Infected hosts are not allowed to continue to infect other systems if the administrator is aware of the infection; to do otherwise is negligent. This enables botnet masters to simply use infection as a means of verifying the nature of the current host. The infected host tries to infect a collection of potential targets. Within those targets, a control node is hidden. If the control node receives the infection, it can then verify the host as a valid infection target; otherwise the host is dropped [25]. Other operational analysis consists of monitoring the system behavior, trying different functions and contrasting the results against expected values. Port scanning and fetching resources from the Internet are also common, as honeypots often have limitations related to this functionality.

4 Detection Vector Taxonomies

4.1 Detection Vectors

A detection vector describes the means the malware can use to detect an unwanted execution environment. Such environments can consist of, but are not limited to, operating systems with sub-optimal update compositions, virtualized execution environments, sandboxing solutions or debuggers. Modern malware is more and more environment-aware. Chen et al. have captured and run 6900 malware samples in different environments [2]. They found out that when run in a virtual machine, 95.3% of the malware continued to exhibit the same behaviour as in a normal operating system. However, when run with a debugger attached, only 58.5% continued to function as before. Nearly half of the tested malware had some capability to identify debuggers and, upon detection, to reduce or stop malicious behaviour. Since these experiments, malware has probably become even more sophisticated.

4.2 Existing Taxonomies

In the next subsection, we will present a proposed taxonomy of detection vectors. Our taxonomy is based upon two existing taxonomies, the first one presented by Chen et al. in [2] and the second by Gajrani et al. in [9], with some extensions to accommodate detection vectors available to botnets. This subsection briefly outlines the two earlier taxonomies, presenting the differences between the two. We then proceed to tie the two taxonomies together.

Chen et al. present four abstract classifications, each with two subcategories. These categories are fairly general. They also describe the access level the attacker needs to have in order to leverage these detection vectors, how accurate the methods are, how complex they are to employ and how easily these detection vectors can be evaded. The categories identified by Chen et al. are presented below. Concrete examples for most of these categories can be found in [16].

– Hardware. System anomalies that are hardware-detectable, like hardware breakpoints.
  • Device. Virtual environments tend to display devices that either differ from original models or are completely VM dependent.
  • Driver. Many virtual machines and debuggers tend to create drivers that are characteristic to those environments.
– Environment. This category covers notable differences in the actual execution context.
  • Memory. Instrumentable environments tend to have memory traces that identify those environments. Open pipes or channels, OS flags, altered memory layout, etc.
  • System. There are often execution environment specific idiosyncrasies which are results of bugs, implementation details and other factors. Examples include CPU instruction bugs in VMs and call-stack modifications by debuggers.
– Application. This category deals with the surrounding software ecosystem and its inherent fingerprints.
  • Installation. The tools used to instrument malicious software have well-known components installed in well-known locations. While this information is relatively easy to mask, it is also a very potent detection vector.
  • Execution. This category deals with running processes and is similar to installation. Processes are easy to detect but also easy to hide.
– Behavioral. Measurements on how the system performs operations and responds to requests.
  • Timing. Introspection environments tend to display some latency in instructions and network messages. The latencies are hard to hide and provide a fairly reliable detection vector.

The taxonomy presented by Gajrani et al. is of much finer granularity: the paper outlines a total of 12 different categories. Some of these are phone and Android dependent, as the paper discusses sandboxing and detection of sandboxes for Android applications. These categories are included for completeness. The categories identified by Gajrani et al. are:

– Background Process. Processes specific to an emulator/VM.
– Performance. CPU instruction latency, graphics performance, etc.







– Behavior. SMS operation, in- and out-bound phone calls.
– Software Components. Google Play services etc.
– API. Binder APIs like isTetheringSupported().
– Initial System Design. Contact list, battery status, network status, etc.
– Hypervisor. QEMU scheduling, instruction pointer updates (QEMU only updates upon non-linear execution points), cache behavior, etc.
– File. Emulator specific files, or device-specific files.
– Network. Android emulators tend to sit behind a default gateway and DNS.
– Sensors. Emulators tend to not simulate sensors (or only simulate a small subset).
– Device Build. Collection of build values can be used to fingerprint and identify the execution platform. For example, Build.BOARD = unknown could be a sign of an emulator.
– Phone ID. Smart phones have unique IMEI (International Mobile Equipment Identity) and IMSI (International Mobile Subscriber Identity) numbers, phone numbers, etc. These are typically set to default values on emulation platforms.

5 Our Detection Vector Taxonomy

The taxonomy presented by Chen et al. is quite abstract and broad, whereas Gajrani et al. present a more concrete taxonomy, where each category is tied to fairly specific system aspects. The taxonomy by Chen et al. seems to be usable for the general case. However, based on the current threat landscape and information security literature, we believe it would benefit from increased granularity to match current attack scenarios. A more fine-grained approach also allows us to better separate different categories between LIHPs and HIHPs. We therefore suggest a modified and expanded 2-tier taxonomy:

– Temporal. As timing is a fairly prevalent detection vector, it has its own category.
  • Network. Ways of detecting latencies in the network environment.
  • Local. Detection of latency in the actual system, on API and instruction level.
– Operational. This category covers vectors related to how the machine operates and which operations are possibly allowed/disallowed.
  • Propagation. In most honeypots, further malicious propagation is denied.
  • Communication. Does the device communicate as usual and what kind of communication is allowed?
  • Operation. This covers other operations such as port scanning which may be limited in certain environments.
  • Idiosyncrasies. As described by Gajrani, some execution environments display clear operational differences, such as the lazy program counter in QEMU.
– Hardware. This matches the categories presented by Chen et al.
  • Device.
  • Driver.



– Environment. Details about the operation environment serve as detection vectors. These are the simplest to access and leverage.
  • Data. What data resides on the machine; files, installed programs, etc.
  • Execution. What else is executing on the machine. Identifiable monitor processes, certain auxiliary services, etc.
  • Identity. Values associated with device/operating system identity, mostly present on mobile platforms.
  • Memory. Identifiable memory traces such as operating system flags, open pipes, altered memory layout, etc.

Table 1 divides the research in the field of anti-honeypot techniques into the categories we proposed in the previous section. Some categories have the marking "HIHP-only". This means that these techniques are not necessarily available for LIHPs, as they tend to simulate smaller, incomplete systems. While the collection of papers is by no means exhaustive, we believe it conveys a picture of the current research field and subjects. While the details of attack vectors are usually quite implementation specific [24], they also have common characteristics that make categorization worthwhile.

Table 1. Distribution of anti-honeypot related research within our taxonomy.


Category      Subcategory     Papers                   Honeypot
Temporal      Network         [8, 18]                  LIHP & HIHP
              Local           [15]                     Mostly HIHP
Operational   Propagation     [3, 26]                  HIHP-only
              Communication   [17, 26]                 Mostly HIHP
              Operation       [6, 12, 16, 26]          LIHP & HIHP
              Idiosyncrasies  [13, 16, 23]             LIHP & HIHP
Hardware      Device          [12, 15]                 HIHP-only
              Driver          [9]                      HIHP-only
Environment   Data            [6, 7, 12, 15, 16]       HIHP-only
              Execution       [6, 12, 15, 16, 18]      LIHP & HIHP
              Identity        [9, 13]                  LIHP & HIHP
              Memory          [6, 7, 13, 15, 16]       HIHP-only

6 Conclusions and Future Work

In this paper, we have surveyed the anti-honeypot and anti-introspection methods malware uses against low-, medium- and high-interaction honeypots. We also discussed different ways to categorize these approaches and presented our own improved taxonomy that better corresponds to the present situation. According to current research, botnets are the most notable threat in the wild. Botnets are versatile, controllable, and offer various business opportunities for the enterprising black hat hacker. Honeypots are an invaluable tool for detecting and monitoring botnets, but botnets are also the most honeypot-resistant malicious entities [3,25]. New ways to categorize detection vectors and new kinds of honeypot solutions are therefore needed.

Modern malware can leverage multiple detection vectors to determine whether or not it is operating in a monitored system or under direct introspection. These methods rely on fingerprinting different aspects of the operation environment and interfaces. The most effective strategy against such fingerprinting is to deploy novel monitoring solutions, rendering existing fingerprints void. However, this is prohibitively expensive, and better solutions are needed. In [2], Chen et al. suggested that normal operating systems could try to imitate surveilling machines, rendering surveillance detection less effective. Also, more robust tools are required. In [1], Bahram et al. devise a method for totally circumventing introspection based on virtual machines by modifying kernel data structures used by the guest operating system.

As future work, we aim to develop a honeypot system which provides several dual interfaces [X]. One group of interfaces consists of the old interfaces and the other contains new, diversified interfaces. The aim is that legitimate software and users interact with the new diversified interfaces. The old, unmodified interfaces work as fakes. Any interaction with the fake interfaces immediately raises an alarm in the system. To gather meaningful data, the fake interface should engage the malware in a meaningful way. This setup has the typical requirements of a high-interaction honeypot: the (assumed) malicious entity needs to be provided with an illusion of proper request-response chains, and the introspection system must remain undetected. Another interesting avenue of research could be some form of self-aware, adaptive introspection environment. This system needs to be able to detect that it has been detected. After detection, it would initiate a replication phase, launching multiple clones of the previous setup with a few modifications, much in a genetic fashion. These tests would be iterated until a candidate system that evades detection emerges.

Finally, the increased use of virtualization today may have rendered many of the old anti-honeypot and anti-introspection methods useless, but malware authors keep coming up with new ways to detect introspection. In light of this development, we need to consider good ways to keep monitoring malware without being noticed. One topic of research we consider interesting is advanced deception in the context of HIHPs. Some steps have already been taken in this direction. Methods that can be used in HIHPs to communicate with malware and keep deceiving it as long as possible are addressed in [5,11,22]. A game theoretic approach to the same problem is taken in [10,20]. These kinds of advanced approaches greatly help us in convincing malware it is indeed operating in an ordinary system.



References

1. Bahram, S., Jiang, X., Wang, Z., Grace, M., Li, J., Srinivasan, D., Rhee, J., Xu, D.: DKSM: subverting virtual machine introspection for fun and profit. In: 2010 29th IEEE Symposium on Reliable Distributed Systems, pp. 82–91. IEEE (2010)
2. Chen, X., Andersen, J., Mao, Z.M., Bailey, M., Nazario, J.: Towards an understanding of anti-virtualization and anti-debugging behavior in modern malware. In: IEEE International Conference on Dependable Systems and Networks with FTCS and DCC, DSN 2008, pp. 177–186. IEEE (2008)
3. Costarella, C., Chung, S., Endicott-Popovsky, B., Dittrich, D.: Hardening honeynets against honeypot-aware botnet attacks. University of Washington, US (2013)
4. Credo, T.: Hyper-V how to: detect if you are inside a VM (2009). https://blogs.technet.microsoft.com/tonyso/2009/08/20/hyper-v-how-to-detect-if-you-areinside-a-vm/
5. Cui, W., Paxson, V., Weaver, N., Katz, R.H.: Protocol-independent adaptive replay of application dialog. In: Proceedings of the 13th Annual Network and Distributed System Security Symposium (2006)
6. Dornseif, M., Holz, T., Klein, C.N.: NoSEBrEaK - attacking honeynets. arXiv preprint cs/0406052 (2004)
7. Ferrand, O.: How to detect the cuckoo sandbox and to strengthen it? J. Comput. Virol. Hacking Tech. 11(1), 51–58 (2015)
8. Fu, X., Yu, W., Cheng, D., Tan, X., Streff, K., Graham, S.: On recognizing virtual honeypots and countermeasures. In: 2nd IEEE International Symposium on Dependable, Autonomic and Secure Computing, pp. 211–218. IEEE (2006)
9. Gajrani, J., Sarswat, J., Tripathi, M., Laxmi, V., Gaur, M.S., Conti, M.: A robust dynamic analysis system preventing sandbox detection by Android malware. In: Proceedings of the 8th International Conference on Security of Information and Networks, pp. 290–295. ACM (2015)
10. Hayatle, O., Otrok, H., Youssef, A.: A game theoretic investigation for high interaction honeypots. In: IEEE International Conference on Communications (ICC). IEEE (2012)
11. Hayatle, O., Otrok, H., Youssef, A.: A Markov decision process model for high interaction honeypots. Inf. Secur. J. Glob. Perspective 22(4), 159–170 (2013)
12. Hayatle, O., Youssef, A., Otrok, H.: Dempster-Shafer evidence combining for (anti)honeypot technologies. Inf. Secur. J. Glob. Perspective 21(6), 306–316 (2012)
13. Holz, T., Raynal, F.: Defeating honeypots: system issues, part 1 (2005). http://www.symantec.com/connect/articles/defeating-honeypots-system-issues-part-1
14. Holz, T., Raynal, F.: Defeating honeypots: system issues, part 2 (2005). http://www.symantec.com/connect/articles/defeating-honeypots-system-issues-part-2
15. Holz, T., Raynal, F.: Detecting honeypots and other suspicious environments. In: Proceedings from the Sixth Annual IEEE SMC Information Assurance Workshop, IAW 2005, pp. 29–36. IEEE (2005)
16. Issa, A.: Anti-virtual machines and emulations. J. Comput. Virol. 8(4), 141–149 (2012)
17. Krawetz, N.: Anti-honeypot technology. IEEE Secur. Priv. 2(1), 76–79 (2004)
18. Mukkamala, S., Yendrapalli, K., Basnet, R., Shankarapani, M.K., Sung, A.H.: Detection of virtual environments and low interaction honeypots. In: Information Assurance and Security Workshop, IAW 2007, pp. 92–98. IEEE SMC (2007)



19. Nawrocki, M., Wahlisch, M., Schmidt, T.C., Keil, C., Schonfelder, J.: A survey on honeypot software and data analysis. arXiv preprint (2016)
20. Pawlick, J., Zhu, Q.: Deception by design: evidence-based signaling games for network defense. In: Workshop on the Economics of Information Security (WEIS) (2015)
21. Provos, N.: Honeyd Virtual Honeypot. http://www.honeyd.org/
22. Rauti, S., Leppänen, V.: A survey on fake entities as a method to detect and monitor malicious activity, 8 p. (Submitted to a conference)
23. Spitzner, L.: Problems and challenges with honeypots (2004). http://www.symantec.com/connect/articles/problems-and-challenges-honeypots
24. Sysman, D., Itamar, S., Gadi, E.: Breaking honeypots for fun and profit. Black Hat, USA (2015). http://winehat.net/wp-content/uploads/2015/10/Dean-Sysman-BreakingHoneypots.pdf
25. Wang, P., Wu, L., Cunningham, R., Zou, C.: Honeypot detection in advanced botnet attacks. Int. J. Inf. Comput. Secur. 4(1), 30–51 (2010)
26. Zou, C., Cunningham, R.: Honeypot-aware advanced botnet construction and maintenance. In: International Conference on Dependable Systems and Networks, DSN 2006, pp. 199–208. IEEE (2006)

Intelligent Displaying and Alerting System Based on an Integrated Communications Infrastructure and Low-Power Technology

Marius Vochin, Alexandru Vulpe, George Suciu, and Laurentiu Boicescu

University POLITEHNICA of Bucharest - UPB, Bucharest, Romania
{marius.vochin,laurentiu.boicescu}@elcom.pub.ro, {alex.vulpe,george.suciu}@radio.pub.ro

Abstract. The paper proposes an intelligent displaying and alerting system, based on a scalable integrated communication infrastructure. The system is envisioned to offer dynamic display capabilities using the ePaper technology, as well as to enable indoor location-based services such as visitor guidance and alerting using iBeacon-compatible mobile devices. The system will include a central display management console, as well as automated procedures for automatically displaying different types of notifications. The system is designed primarily for educational and research institutions, allowing remote authentication through eduroam-type technology performed by the user's distant institution of affiliation. As such, secure access based on locally-defined policies will be implemented, as well as multiple levels of access, from guests to system administrators.

Keywords: ePaper · iBeacon · Alerting system · Indoor positioning · Low power display

1 Introduction

With the ever-increasing use of technology in all aspects of life, a sustainable and easily managed system for digital and up-to-date room signage for offices, meeting rooms, and conferences has become the next challenge for modern office buildings. The emergence of the Internet of Things (IoT) and digital interactions using electronic paper (ePaper) technology has marked a new phase of development in this direction. The new technology relies on ambient light reflection instead of a backlight, as well as a screen that only consumes a significant amount of energy during the update phase. Such digital displays offer good visibility of information in all light conditions, with the benefit of a low power consumption. The same can be said about iBeacon, which relies on the Bluetooth Low Energy (BLE) standard to create stationary constellations of low-power beacons to determine the indoor position of mobile terminals.

However, because these technologies are still relatively new, their use requires extensive computer programming skills to access and manage displayed information. As it stands, the current level of technology relies on the user either micro-managing individual displays, or writing complex scripts for the dissemination of multiple information flows and dynamic update of these displays.

There are several companies that offer display and notification solutions based on wireless ePaper. All these solutions are, generally, based on three elements:

• Display, which can be either ePaper or LCD
• Communication infrastructure, which can be based on WiFi, 3G or Bluetooth/ZigBee
• Content management and publishing application.

Most solutions are focused on static content display and less on dynamic content. There are also digital signage applications which focus on complex content, both static and dynamic, but in this case a more complex infrastructure is required, and a low power consumption or flexible infrastructure is no longer the target.

The paper proposes an intelligent displaying and alerting system (SICIAD) that relies on wireless ePaper and iBeacon technologies [1] to create custom displays for both static and dynamic information, as well as to ease the indoor orientation of guests. Although the system is primarily designed for public institutions like universities or government buildings, some of its applications may include public transport, exposition and commercial centres, museums and both indoor and outdoor amusement parks. Any organization may benefit from an indoor positioning and orientation system, as well as a centrally-managed display and alerting system.

The paper is organized as follows. Section 2 presents some background on ePaper and iBeacon technologies, while Sect. 3 details the envisioned architecture and use cases of the SICIAD project. Some preliminary results are given in Sect. 4, while Sect. 5 draws the conclusions.

2 Wireless ePaper and iBeacon

2.1 iBeacon

The iBeacon standard [2] is a communication protocol developed by Apple based on Bluetooth Smart technology. It represents a technology that can facilitate the development of location-based applications. A device that uses iBeacon sends radio signals to alert smartphones of its presence. When a mobile device detects a signal from the beacon, it uses the signal to estimate the proximity to the beacon and also the accuracy of the proximity estimation. This process of measuring the proximity to a device is known as "ranging", and it is based on common usage scenarios that rely on the accuracy of the assumption and the measured distance. The estimation is indicated by one of four proximity states: immediate, near, far and unknown. To broadcast signals, iBeacon devices use Bluetooth Low Energy (BLE), which operates in the 2.4 GHz frequency band. BLE is designed for low energy consumption and uses wireless personal area network (PAN) technology to transmit data over a short distance. The difference between iBeacon and other location-based technologies is that the beacon is only a one-way transmitter for the receiving device and requires a specific application to be installed on the device so that the user can manage beacon reception.
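The mapping from a distance estimate to the four proximity states can be sketched as follows (an illustrative Python snippet; the thresholds are assumptions chosen for this example and are not taken from the iBeacon specification):

def proximity_state(distance_m):
    # Map an estimated distance (metres) to an iBeacon-style proximity state.
    # Thresholds below are illustrative assumptions, not specified values.
    if distance_m is None or distance_m < 0:
        return "unknown"
    if distance_m < 0.5:
        return "immediate"
    if distance_m < 3.0:
        return "near"
    return "far"

The distance estimate itself would typically be derived from the received signal strength and the calibrated power broadcast by the beacon, as sketched later alongside the measurements in Sect. 4.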



The deployment of an iBeacon relies on the fact that this device can transmit its own unique identification number to the local area. Receiving devices can also connect to the iBeacon and retrieve data from the iBeacon's service. Location-based services using beacons address three types of audience: application developers, people who deploy devices using the iBeacon technology, and people who create devices using the iBeacon technology.

2.2 Wireless ePaper

Electronic paper can be considered as a portable storage and display medium which can be electronically written and refreshed multiple times in order to display new content. Such devices can display content which is downloaded from various sources or created with a mechanical or electrostatic tool such as an electronic pencil (stylus). Therefore, the concept of ePaper can be defined as a display technology that simulates the appearance of text written on traditional physical paper [3]. To make the content more comfortable to read, electronic paper provides a wider viewing angle than light-emitting displays and perfect readability in ambient light. Applications of electronic visual displays include electronic shelf labels, digital signage, time schedules for public transportation, billboards, portable signs, electronic newspapers and e-readers.

The wireless ePaper solution is based on an innovative radio technology that lowers the power consumption, since it only requires power when the displayed content is changed. The solution provides a wide variety of functionalities and options for displaying information that allow users to set up their own customized use case, for example for radio-controlled signage at universities [4]. One key functionality is that users can remotely update the displayed content in real time. To allow highly flexible use, the wireless ePaper displays eliminate the need for an external power supply or a physical network connection, since the devices are battery powered and radio controlled. Furthermore, the data transmission process can be protected by a 128-bit key, allowing secure encryption and authentication standards for eduroam [5, 6].

3 SICIAD Architecture

The SICIAD project was proposed in order to capitalize on existing advanced technology available at a company's premises, as described in Sect. 2. It targets the development of an intelligent system that can dynamically display information and provide notification of certain events. For this, it proposes the development of an integrated management application for the infrastructure and wireless ePaper displays, along with an interface for connecting to an Internet calendar, several access levels and e-mail message programming. The proposed high-level architecture is depicted in Fig. 1.



Fig. 1. High-level SICIAD architecture

The implemented management console will enable the dynamic display of information, either on ePaper devices connected to the infrastructure and without wired power supplies, or on the users' cell phones, using beacons based on the iBeacon technology. An Internet calendar interface with email entry will be created, allowing the display of event schedules, as well as an electronic notice board for announcements, commercials, etc. An application for the operation and monitoring of the system's state parameters will be implemented, offering the ability to send generated alerts through technologies like e-mail, GSM SMS, ePaper displays in areas of interest, or iBeacon messages. The intelligent system will analyse data from IoT sensors (temperature, CO2, smoke, gas) to identify threats, send alerts through its available means, or even automatically initiate emergency evacuation, facilitating the avoidance of problem areas and providing guidance towards safe exits (including aid for the hearing impaired).

The infrastructure will be designed and implemented using the economic operator's existing LANCOM ePaper and iBeacon technologies, as well as generic open-source technologies available to the university. The economic operator will be able to implement the system in domains like universities, schools, conference halls, hospitals, etc.

3.1 Use Cases

Several use cases are envisioned for the development of the SICIAD architecture, based on the project's objectives. Here, wireless ePaper devices are used to display static or dynamic information, while the iBeacon technology can enable smartphone apps to provide guidance, custom advertisements or even location-based alerts. The following use cases are proposed:



1. Dynamic or static announcements and notifications in public transport stations. In this scenario, the ePaper will display the schedule of public transport associated with a stop, along with a short map of the surrounding areas and the transport network, and other useful information. There will also be dynamic information such as the remaining time until a public transport vehicle arrives or temporary changes of the schedule.
2. Guidance for customers in large shopping areas/malls. Here, ePaper is used along with beacons. ePaper can ensure the display of price tags, and can also dynamically modify them. For instance, based on information from beacons, it can enlarge the font if there is a senior person or a person with eyesight problems nearby. Also, a loyal customer may get a lower price and notifications regarding the existence of such a price.
3. Guidance and notifications for museum visitors. In this scenario ePaper is used for tagging exposition halls and exhibition pieces, as well as marking the visitation route. Also, beacons will enable, via a smartphone app, an interactive electronic guide of the expo.
4. Tourist information and guidance in national and adventure parks. Here, beacons are used, via a smartphone app, for providing information on tourist landmarks in the area, as well as for providing an interactive audio guide.
5. Dynamic display of information in educational and research institutes and visitor guidance. Wireless ePaper displays can be used to replace traditional notice boards. As such, a hierarchical architecture could allow each room's administrator to present information regarding schedule, special events, contests and projects. Furthermore, additional information could be managed by the upper hierarchy of the institute. For large organizations, visitors could be guided using an iBeacon-based indoor positioning system.
6. Dynamic display of alerts and emergency evacuation. An intelligent monitoring system can determine safety threats and automatically enact predefined evacuation plans. Here, ePaper displays can be used to show the best route for evacuation, while iBeacons can be used to broadcast alerts or guide visitors towards exits through smartphone applications.

Based on the above use cases, as well as the details presented in Sect. 2, it can be concluded that ePaper displays controlled over a wireless network can be used to fulfil SICIAD's main objectives. More importantly, the use of wireless technology simplifies the deployment of the system, since additional wiring or power sources will not be necessary. However, several special cases result from the use-case analysis.

First, the ePaper displays (including the Lancom ePaper displays proposed for use in the system's architecture) are designed to work at a low level of power consumption. This means a slow refresh rate of the displayed information when changes occur. For static information, this is not an issue: all the displays can be updated in the off hours, when no one is using them. However, an issue may arise when trying to display urgent dynamic notifications, like the emergency evacuation alert from the sixth use case. For such cases, further study is needed regarding the wireless ePaper response time. The specifications of Lancom ePaper displays indicate a battery lifetime between 5 and 7 years if the displayed information is changed four times a day [7]. For more interactive applications (like the first use case), or for improving response times in emergency situations, additional power sources may be needed, which may come in the form of solar panels in outdoor deployments (where battery replacement may become an issue).

Secondly, the Lancom WiFi Access Points integrate iBeacon technology. This offers a simple means of determining whether a mobile smart device is in close range of the access point, but a larger iBeacon network is necessary for determining an exact indoor position. As such, further work may relate to the integration of stand-alone iBeacon devices in the SICIAD architecture.

4 Preliminary Evaluation of Wireless ePaper and iBeacon

Several tests have been performed in order to determine the delays that occur when updating the information on the ePaper displays from the wireless ePaper server. To that end, the L-151E access point was used together with a WDG-1 7.4″ ePaper display, controlled through an ePaper Server installed on a Windows Server 2013. In the first experimental test scenario the delay of changing ePaper content was determined for simple operations, such as image delete, change, rotate and show ID. The results are summarized in Table 1.

Table 1. Common operations and corresponding delays.

Operation   Details                          Delay [s]
Delete      Delete image                     330–473
Change      Change image 480 × 800 (7.4″)    27
Rotate      Image rotation                   2
Show ID     Show label display ID            14

In the second test scenario, measurements of three different iBeacon power levels have been made on the same access point and on commodity hardware with a BLE receiver, an SM-G361F smartphone. The Lancom iBeacon has been calibrated to provide three power levels at a distance of 1 m: –52, –58 and –75 dBm, broadcast with the beacon message in order to allow an approximation of the distance between the BLE receiver and the beacon. The measurements at reception indicate –40, –46 and –62 dBm, at a distance of several cm from the antenna.
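A common way to turn such calibrated values into a distance estimate is the log-distance path-loss model; the Python sketch below illustrates that general model (an assumption of this text, not a method evaluated in the paper), with the path-loss exponent as a free parameter that would have to be fitted to the actual indoor environment.

def estimate_distance(rssi_dbm, measured_power_dbm, path_loss_exponent=2.0):
    # Log-distance path-loss model: d = 10 ** ((measuredPower - RSSI) / (10 * n)).
    # measured_power_dbm is the calibrated RSSI at 1 m carried in the beacon frame;
    # path_loss_exponent is ~2 in free space and typically higher indoors (assumed).
    return 10 ** ((measured_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

# Using the first calibration/measurement pair above (-52 dBm at 1 m, -40 dBm received):
print(estimate_distance(-40, -52))  # roughly 0.25 m with n = 2

The result is in the right range but noticeably larger than the reported few centimetres, which is why a fitted exponent and receiver-specific corrections would be needed in practice.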

5 Conclusion and Future Work

The paper presents a displaying and alerting system based on an integrated communication infrastructure. The system offers dynamic display capabilities using the ePaper technology, and enables indoor location-based services such as visitor guidance and alerting using iBeacon-compatible mobile devices. Measurements taken in the evaluation phase show that the ePaper display's response times are relatively short, making it suitable for the proposed use cases. Tests show that the largest delays are obtained in the case of deleting images. However, further study will be needed to guarantee rapid response times in case of emergencies.



The batteries provided by the manufacturer for the ePaper displays are sufficient for most of the use cases, whilst being easy to replace for indoor applications. For outdoor applications, ePaper systems can be recharged via solar cells and, due to their low power consumption, may function for entire seasons without sunlight, offering a long-term solution for displaying information in remote areas. Being based on the BLE standard, iBeacon technology can potentially operate with almost all smart mobile terminals, providing a cost-effective solution for an indoor positioning system. In combination with a smartphone application and a wireless communication system, BLE can enable the distribution of location-based content.

Future work on the project will include the development of the system's management console, along with the further investigation of ePaper response times and iBeacon functional range, as well as the proposed architecture's scalability, performance and security.

Acknowledgments. This work has been funded by UEFISCDI Romania under grant no. 60BG/2016 "Intelligent communications system based on integrated infrastructure, with dynamic display and alerting - SICIAD" and partially funded by University Politehnica of Bucharest, through the "Excellence Research Grants" Program, UPB – GEX. Identifier: UPB–EXCELENȚĂ–2016 Research project Intelligent Navigation Assistance System, Contract number 101/26.09.2016 (acronym: SIAN).

References

1. Suciu, G., Vochin, M., Diaconu, C., Suciu, V., Butca, C.: Convergence of software defined radio: WiFi, ibeacon and epaper. In: IEEE 15th RoEduNet Conference: Networking in Education and Research, pp. 1–5 (2016)
2. Sykes, E.R., Pentland, S., Nardi, S.: Context-aware mobile apps using iBeacons: towards smarter interactions. In: Proceedings of the 25th Annual International Conference on Computer Science and Software Engineering, pp. 120–129 (2015)
3. The technology behind LANCOM wireless ePaper displays. https://www.lancomsystems.com/fileadmin/download/reference_story/PDF/Wireless_ePaper_Solution_at_a_private_college,_Germany__EN.pdf
4. Wireless ePaper solution at a private college, Germany. https://www.lancom.de/fileadmin/download/reference_story/PDF/Wireless_ePaper_Solution_at_a_private_college,_Germany__EN.pdf
5. University of Belgrade implements eduroam. https://www.lancom.de/fileadmin/download/reference_story/PDF/University_of_Belgrade_EN.pdf
6. Winter, S., Wolniewicz, T., Thomson, I.: Deliverable DJ3.1.1: RadSec standardisation and definition of eduroam extensions, GN3 JRA3, GEANT3 (2009)
7. Lancom wireless ePaper displays specification sheet. https://www.lancom-systems.com/fileadmin/produkte/lc_wireless_epaper_displays/LANCOM-Wireless-ePaper-DisplaysEN.pdf

Intelligent System for Vehicle Navigation Assistance

Marius Vochin, Sorin Zoican, and Eugen Borcoci

University Politehnica of Bucharest - UPB, Bucharest, Romania
{marius.vochin,sorin,eugen.borcoci}@elcom.pub.ro

Abstract. Navigation assistance systems, aiming to improve safety and optimize traffic, are becoming more and more popular in vehicular technology. The reason is the significant increase in traffic and congestion events in large cities, the complexity of road infrastructure, and the unexpected or hazardous conditions that can be found on roads. This paper proposes an improvement to navigation systems by intelligently gathering traffic data provided by integrated car sensors and/or security systems and using these data to warn other participants. System concepts, architecture, preliminary design and performance evaluation aspects are presented, and the task organization for a cost-effective embedded implementation using a real-time kernel is also illustrated.

Keywords: Navigation systems · Vehicular communication · Embedded systems · Real time kernel · Tasks · Social navigation · On-board diagnostics · Controller area network

1 Introduction

Past decades have seen a boom in the number of cars circulating on roads, thus creating difficulties for transportation infrastructure developers, authorities and users alike. Driving a vehicle to a destination has become more challenging, given the increase in car speed and in the number and variety of road conditions. Although active and passive car safety technologies have evolved, accidents still happen very often, with large material damages and even human victims. The next step in road safety and efficient transportation is to create an intelligent driver navigation assistant that can assist in best-path selection decisions and warn the driver, but also other traffic participants in proximity, about potentially hazardous conditions. This would also be a requirement to be met by future autonomous driving vehicles [1].

2 Related Work

Navigation systems [2] have been developed to assist drivers in selecting the best route to a destination, while considering off-line static metrics like distance and restrictions like speed limits. Evolved navigation systems like Google Maps offer online information such as traffic congestion on road segments and unexpected road blocking, making rerouting to avoid such conditions easier. Some of the newest and most popular navigation solutions like Waze [3] provide more, namely social interaction, making it possible for drivers to report road hazards, accidents, blocking events and other relevant traffic aspects. One important limitation of these new features is that they rely on driver intervention, are subject to subjectivity, and imply new risks such as driver distraction or potential security problems [4]. Another major technical limitation of online navigation systems is that they actively rely on cellular technology only (GSM, 3G or LTE) to communicate data and alerts between vehicles. This approach introduces significant communication delays and suffers from poor or missing network coverage. Modern vehicles integrate various electronic sensors and security systems that intercommunicate through protocols such as the Controller Area Network (CAN bus) [5] and provide a special On-Board Diagnosis interface (OBD). Our solution assumes the existence in the vehicle of such an OBD interface, capable of delivering information on road conditions.

3 System Specification and Design

The system proposed in this paper will be able to offer drivers two major services. First, assistance to vehicle drivers in choosing the best route to a destination, providing dynamic traffic parameters to the user with rerouting capabilities, by using an online navigation solution which also offers offline navigation capabilities. Second, the system can also monitor the occurrence of specific hazardous events (detected by the user car's sensors or safety systems), and then send warning messages to the central routing server or directly broadcast messages towards vehicles (users) currently located in the proximity of the sender.

Hardware requirements for the full-featured system will be a car with an OBD interface (newer than 1996), a diagnosis OBD Bluetooth dongle, and a smartphone with a current Android version, Bluetooth, Wi-Fi connectivity, and a mobile data plan enabled. The limited embedded version of the system will require a microcontroller-based device with Wi-Fi connectivity, presented in Sect. 3.2.

The main innovations of this project are the monitoring of OBD-detected events like ice sensing [6], rain, engaging of anti-lock braking [7], stability control, traction control and other potentially hazardous road conditions, and the ability to alert other vehicles by using an online navigation server or by broadcasting alerts through vehicle-to-vehicle communication, especially when cellular network coverage is weak. The intelligent navigation system proposed in this study should meet the following basic requirements: (a) monitoring for traffic events, (b) processing the events, (c) deciding, based on the processing results, whether to broadcast information about a specific event, and (d) sending warning messages to the online navigation server or to neighboring vehicles.

3.1 Preliminary System Basic Architecture

The core of this innovative navigation system is the "Navigation Terminal" in Fig. 1, which will be implemented on two platforms: a full-featured Android-based one, and a low-end, cost-effective microcontroller-based embedded platform, with limited functionality such as OBD data collection and a short-range warning facility only. The latter was implemented with limited capabilities in order to eliminate the need for a smartphone and an active cellular data plan, and to provide more accessibility.

Fig. 1. High level view of the system architecture

The Android system implementation will gather relevant OBD data and post it to a private map defined in the OsmAnd [8] online routing navigation system, by using the Internet access connection provided by the mobile operator's data carrier. In areas where no Internet access is available, alerts will be broadcast to neighboring vehicles by using vehicular ad-hoc network communication (VANET) in Android Wi-Fi AP/client mode [9].

3.2 Embedded System Implementation Functional Aspects

This solution, based on a microcontroller such as the Analog Devices Blackfin or the Infineon XMC4500 ARM microcontroller [10], has several tasks which are scheduled by a real-time kernel (Visual DSP kernel or Micrium OS III). The tasks implement the necessary functionalities of the system: ad-hoc network connection mode, data acquisition from the OBD interface, hazardous event detection and communication with neighboring vehicles. The system's high-level functioning is described in Fig. 2. The following assumptions are considered valid:

– there are several vehicles (in this case four) currently present in a specific area (network coverage area);
– on each vehicle two processes continuously run in parallel:
  • monitoring of the OBD interface to detect possible events;
  • listening for and receiving possible messages broadcast by another sender vehicle (any vehicle of the group can be a sender).

The input data from the OBD interface are processed to decide whether a relevant event has occurred. In that case, an associated sender is started to communicate the event to all receivers in the coverage area. The alerted receiver, in its turn, will display specific alerts to the driver.


Fig. 2. Short range alerting (message broadcast)

Fig. 3. Embedded system tasks organization




For example, in Fig. 2 vehicle 4 observes event A and later vehicle 2 observes event B. The occurrence of these events will be communicated by vehicle 4 to vehicles 1, 2 and 3, and later by vehicle 2 to vehicles 1, 3 and 4, by sending event alert messages. Figure 3 presents the task organization, which can be used for both the embedded and the Android implementation.

Fig. 4. The tasks flowcharts.



The following functional modules are considered for implementation. First, a V2V network interface initialization task is started, then a receiver task is created and set to running. This task waits for alerts created by a sender task in other cars which have a specific event to broadcast. When a message is received, the receiver task creates a receiver worker which processes the incoming event. In parallel with the listening task, an information acquisition task periodically monitors the OBD interface and stores the information to be further processed by an OBD processing task. An input-output (IO) task monitors the output of the processing task to detect whether a specific event has occurred (engine malfunctioning, brake system issues, etc.). The receiver, information acquisition, IO and processing tasks are scheduled using operating system periodic semaphores, similar to [11]. When an event exists, the IO task creates a sender task, which broadcasts a warning message to the neighboring cars in ad-hoc network connection mode. The task flowcharts are shown in Fig. 4.

The network initialization task waits for a link (the vehicle will be connected in an ad-hoc network); after the link is established, this task exits and the receiver thread continuously listens for connections from neighboring vehicles. These tasks use the periodic semaphores Network_semaphore and Receiver_semaphore for scheduling. For each accepted connection, the receiver task creates and starts a receiver worker task that receives and analyses the information from a neighboring vehicle. The receiver worker is terminated after it finishes its job. Meanwhile, the acquisition task loads a memory buffer with the information read from the OBD interface using a direct memory access (DMA) port. When a new buffer is filled, a DMA interrupt occurs and the associated interrupt service routine sets a flag that signals to the system that new information is ready to be processed by the OBD processing task, in order to decide whether a specific event has occurred. If such an event exists, the IO task creates and starts a sender task. The OBD processing and IO tasks are scheduled using the periodic semaphores OBD_semaphore and IO_semaphore. The sender task tries to establish connections with all the neighbors' receivers to communicate the event occurrence. If a connection is not established, the sender task retries to establish it, using the flag sender_failure. The sender task is terminated after it broadcasts the appropriate message, using the flag sender_stop.

Assuming that the amount of data transferred with an alert event is on the order of kilobytes, that the vehicle speed is about 100 km/h (30 m/s), and that the data processing (e.g. generating warnings to the driver) is negligible compared with the data transfer, one can conclude that the distance traveled by the warned vehicles is small (a few meters), so the driver has enough time to take a proper action on the alerted event.
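To make the message flow tangible, the following Python sketch mimics the sender and receiver roles over a UDP broadcast socket. It is only an illustration of the alert exchange described above, not the embedded implementation (which uses RTOS tasks, semaphores and DMA); the port number and the JSON payload format are assumptions introduced for the example.

import json
import socket

ALERT_PORT = 50000  # hypothetical port for the ad-hoc alert channel

def send_alert(event_code, position=None):
    # Sender role: broadcast one OBD-derived event to all vehicles in radio range.
    message = json.dumps({"event": event_code, "pos": position}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(message, ("255.255.255.255", ALERT_PORT))

def receive_alerts():
    # Receiver role: listen for alerts broadcast by neighboring vehicles.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", ALERT_PORT))
        while True:
            data, sender = sock.recvfrom(1024)
            alert = json.loads(data.decode())
            print("alert %s received from %s" % (alert["event"], sender[0]))

The back-of-envelope timing above can also be checked numerically: with an assumed 10 KB alert payload and a conservative 1 Mbit/s effective ad-hoc link rate (both hypothetical figures), the transfer takes roughly 80 ms, during which a vehicle travelling at 30 m/s advances about 2.4 m, consistent with the "few meters" estimate.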

4 Conclusions

The system presented in this paper brings an innovative approach to the road navigation problem, improving the detection of and warning about hazardous traffic conditions. The preliminary system, based on low-cost commodity hardware and requiring no expensive dedicated equipment, has been proposed in a full-featured online navigation version and also implemented in a low-cost embedded one.


The performance of the low-cost embedded solution has been measured, and it achieved transfer rates of up to tens of kilobytes per second in tests (using the Blackfin BF537 microcontroller and the VDK real-time operating system kernel), making it a feasible platform for broadcasting and receiving road alerts. Future work is to develop and optimize the system implementation, to address vehicle position in V2V alerts as well as scalability and performance aspects, and also to investigate security aspects such as authentication, encryption and privacy concerns.

Acknowledgments. This work has been funded by University Politehnica of Bucharest, through the "Excellence Research Grants" Program, UPB – GEX. Identifier: UPB–EXCELENȚĂ–2016, Research project Intelligent Navigation Assistance System, Contract number 101/26.09.2016 (acronym: SIAN).

References

1. Baras, J.S., Tan, X., Hovareshti, P.: Decentralized control of autonomous vehicles. In: Proceedings of 42nd IEEE Conference on Decision and Control, pp. 1532–1537 (2003)
2. Gorog, D.: GPS navigation apps. Aust. MacWorld 162, 68 (2011). http://search.ebscohost.com/login.aspx?direct=true&db=iih&AN=65071437&site=ehost-live
3. Jeske, T.: Floating car data from Smartphones: what Google and Waze know about you and how hackers can control traffic, 12 (2012). https://media.blackhat.com/eu-13/briefings/Jeske/bh-eu-13-floating-car-data-jeske-wp.pdf
4. Sinai, M.B., Partush, N., Yadid, S., Yahav, E.: Exploiting social navigation. arXiv preprint arXiv:1410.0151 (2014). http://arxiv.org/abs/1410.0151
5. Li, R., Liu, C., Luo, F.: A design for automotive CAN bus monitoring system. In: 2008 IEEE Vehicle Power and Propulsion Conference, VPPC 2008 (2008). https://doi.org/10.1109/VPPC.2008.4677544
6. Fleming, W.J.: New automotive sensors—a review. IEEE Sens. J. 8(11), 1900–1921 (2008). https://doi.org/10.1109/JSEN.2008.2006452
7. Gustafsson, F.: Automotive safety systems. IEEE Signal Process. Mag. 26(4), 32–47 (2009). https://doi.org/10.1109/MSP.2009.932618
8. Rakjit, C., Liu, W., Gutierrez, J.A.: EESManager: making greener cloud apps. In: 2013 22nd ITC Specialist Seminar on Energy Efficient and Green Networking, SSEEGN 2013, pp. 31–36 (2013). https://doi.org/10.1109/SSEEGN.2013.6705399
9. Su, K.C., Wu, H.M., Chang, W.L., Chou, Y.H.: Vehicle-to-vehicle communication system through Wi-Fi network using android smartphone. In: Proceedings - 2012 International Conference on Connected Vehicles and Expo, ICCVE 2012, pp. 191–196 (2012). https://doi.org/10.1109/ICCVE.2012.42
10. Zoican, S.: Networking applications for embedded systems. In: Babamir, S.M. (ed.) Real Time Systems, Architecture, Scheduling and Application, Intech 2012, ISBN 978-953-51-0510-7, pp. 1–20
11. Zoican, S., Vochin, M., Zoican, R., Galațchi, D.: Lane departure warning system implementation using the Blackfin microcomputer. In: Proceedings of 12th IEEE International Symposium on Electronics and Telecommunications (ISETC), Timisoara, Romania, October 2016, ISBN 978-1-5090-3748-3. IEEE (2016)

Making Software Accessible, but not Assistive: A Proposal for a First Insight for Students

João de Sousa e Silva1, Ramiro Gonçalves1, José Martins1, and António Pereira2

1 INESC TEC, Universidade de Trás-os-Montes e Alto Douro, Apartado 1013, 5001-801 Vila Real, Portugal
[email protected], {ramiro,jmartins}@utad.pt
2 Departamento de Informática, Instituto Politécnico de Leiria, Leiria, Portugal
[email protected]

Abstract. The academy can and should have a major role in the promotion of software accessibility. To try to clarify a number of empirical arguments and certainties regarding the usage of accessible, but not assistive, software, the answers to a survey given to 15 blind or low-vision people are depicted. To demonstrate how under-addressed this topic is by the academy, an experiment was made and its results are portrayed. The novel contribution that this paper offers is the relation of relevant accessibility documentation to its appropriate type of user interface, which is intended to encourage the introduction of the topic of software accessibility implementation. Also, a proposal for a first slide regarding accessibility implementation in software, meant to be shown to software engineering students who should produce accessible software in their future, is presented. As a conclusion, some insights are given and new possible research avenues are depicted.

Keywords: Software engineering · Software accessibility · Web accessibility · Digital accessibility · Software · Graphical user interface

1 Introduction

Accessibility in software is increasing in importance just as software prevalence is becoming denser every day. From desktop computers, to laptops, to smartphones, tablets, and smart watches, we are already, literally, wearing software platforms on our bodies, and the future has, for sure, much more reserved. Also, life expectancy is increasing, which means that the number of potential users who require accessible software is rising [1]. Accessible software does not simply mean that a person with some impairment will be able to use it; it means much more. With accessible software, people who were not able to do simple everyday things, which make up the daily life of the majority of the population, are now able to accomplish them [2, 3]. Nowadays accessible software is, probably, the biggest agent to promote an inclusive society [4]. Since making accessible software is perfectly possible, it is simply ignoble not to do so. However, this is still pretty much the case [1]!



Accessible software benefits not only people with disabilities [1]. Although this fact should be enough to make every developer want to produce accessible software, and every stakeholder request it, the actual situation is very far from this [5, 6]. The world seems not to be going in that direction.

There are several laws, directives, and guidelines from numerous governments intended to promote or enforce accessibility in software [7–9], but this is, however, not working. Stakeholders seem not to be aware of the ultimate difficulties of their elderly futures – elderly people are another section of the population that benefits from accessibility in software [10]. Developers, maybe as a reaction to the stakeholders' desires, see as their first priority the simple deployment of their software, instead of the production of a good product. In addition to the fact that it is hard for authorities, which should enforce the implementation of accessibility in software, to measure accessibility levels, it is not obvious to stakeholders what extra revenue and savings they can achieve just by demanding accessible software.

Thinking about it in terms of money dictatorship, demonstrating to stakeholders the possible increment in revenue which accessible software could produce [11, 12] could be a privileged way to give accessibility the relevance that it should have. Unfortunately, stakeholders are typically much more interested in short-term results, and this invalidates revenue rise as an indicator. Also, the fact is that there are not that many developers able to produce accessible software. This is not because producing accessible software is a hard task, but rather due to the lack of knowledge in this field [12]. Companies like Google or Microsoft, producers of software used around the world, with advanced software development processes and financially healthy, fail in this area. The mentioned companies even have accessibility departments, but, as stated, they often fail! A possible explanation is that accessibility knowledge essentially exists solely within that particular department; rather, it should be spread through all departments. Thinking about this, the existence of an accessibility department could be interpreted as a bad sign for the general accessibility issue. Instead, all of the engineers within a company should know about accessibility. This should exist in the development process by default, being as natural as the implementation of any other feature. This idea will be possible when the accessibility concept, its relevance, and the related knowledge become present in the developers' and stakeholders' minds.

This leads the subject to another agent: the academy. The academy can and should have a major role in software accessibility. Software accessibility concepts and techniques should be taught in academic environments, which is also where developers and researchers may have the chance to acquire these abilities without labour pressures. What cannot happen, surely, is to have software accessibility treated as a niche feature, as seems to be the case nowadays.

2 Surveys and Considerations

There are several assessments done on digital accessibility. For example, the Study on Assessing and Promoting E-Accessibility, endorsed by the European Union and


published in November 2013 [13]. This study tested e-accessibility in 27 European countries and, not surprisingly, indicated that much work needs to be done in this area.

In July 2015, WebAIM conducted a survey of screen reader users from all over the world [14], in which a total of 2515 responses were validated. This survey covered both screen reader software – e.g. which screen reader is being used, which operating system is used – and opinions about the evolution of Web accessibility. The survey also enquired about some more specific e-accessibility problems – e.g. accessibility in PDF files. However, this survey seems to assume, as its baseline, a number of unclear arguments and certainties which are in fact ignored by many stakeholders. Concretely, the survey asked about Web accessibility in news Websites. This may lead to the conclusion that those who work with disabled people know, empirically, that this section of the population consumes this type of content. Although this is a strong belief, some background work seems to be missing in order to validate it, since e-accessibility is clearly not well addressed.

To try to create a better understanding that could sustain a stronger approach to e-accessibility, a survey was initiated on the 5th of April 2016. Blind and partially sighted people who were members of the Portuguese association of blind and partially sighted people (ACAPO) were asked to answer a few questions. The surveys were sent by e-mail, or taken at the ACAPO centre in the city of Leiria with the help of its office assistant. The survey is still taking place, but at the present moment it was already possible to collect the answers of 15 people. Therefore, some preliminary information can already be depicted.

First, participants were asked to place their age in a predefined range: 2 were between 18 and 24 years old, 1 between 24 and 29, 4 between 30 and 39, 3 between 40 and 49, 3 between 50 and 59, and 2 were 65 years old or more. All respondents were computer users, 7 were smartphone users, and 2 were tablet users; 14 were Internet users and use their devices to read. With this it is already possible to conjecture that electronic devices are probably taking a major role in the life of blind individuals, and this seems to hold across ages. The reduced number of braille contents, and their difficult access and reproduction, make digital content fantastic, since it is easily accessible and reproducible once you have an Internet connection.

When asked what were the 3 most usual tasks performed with their devices, 9 said sending and receiving e-mails, 10 said writing and reading, and 10 said Web searches; 6 said that they use a native application to do one or more of these activities. These were the most frequent answers. Here it is noticeable that e-mail is a relevant means of communication in this group. We can speculate that e-mail is even more important to persons with disabilities, due to its easy access in comparison with regular mail, or with the filling in and delivery of official forms and/or letters. As for reading and writing, the idea is again reinforced that these devices are a fantastic way to access written content, and also to create it. The Web searches are just a confirmation of the relevance of the Web for these individuals, especially if we think that, probably, this section of the population would not have any other way to access information in, for example, an encyclopaedia.
When the scope of the previous question was reduced to solely online tasks, the answers which became more relevant, besides sending and receiving e-mails and Web


searches, were chatting, with 4 answers, and social networking, with 5; 2 said that they use a native application to do one or more of these activities. Here it is possible to see that these devices are, not surprisingly, also important in socializing.

Another question was whether the devices were used to read news, and 13 people said yes; 3 said that they use a native application to do so. This number is important in order to realize that for many disabled people their devices are an important way of reading the news, maybe even the only alternative to TV and radio. When asked if they use the Web to search for information regarding a product, in order to decide whether to buy it or not, only 5 said yes. Here we can speculate that the lack of Web accessibility is probably having a negative influence, and a solution should be worked on in order to increase this number. It is easy to understand that for disabled people, especially a person who is visually impaired, it is easier to check a product's features through their own device than in a physical shop.

Maybe the most relevant number was obtained when participants were asked if they had used their device in order to access State services: only 2 said yes. The potential of using State services through these devices is tremendous, and, at least, the Portuguese state seems not to be giving this benefit to its citizens with disabilities. We can speculate that, again, this is due to poor Web accessibility. Here it is important to say that accessibility is harder to implement in services that require interaction; in other words, it is easier to make content accessible when it is just meant to be readable. Finally, the low usage of native applications in this group is remarkable, when we think about the massive number of such applications and the number of their downloads in the official application stores, such as the App Store and Play Store.

3 Experiments

In order to introduce e-accessibility into the lexicon of developers, the academy can and should do its job. To demonstrate how under-addressed the topic of digital accessibility is by the academy, a small experiment was made. A good indicator of the relevance given to a topic by the academy is the number of papers related to that topic available in paper databases. Therefore 2 databases were chosen – i.e. Web of Science and Science Direct – and 2 searches were performed in each of these with two expressions – i.e. "web accessibility" and "software accessibility". The results are depicted in Table 1.

Table 1. Occurrences on databases of two expressions.

Expression               Occurrences at Web of Science   Occurrences at Science Direct   Date of search
Web accessibility        724                             434                             6 of April, 2016
Software accessibility   23                              49                              6 of April, 2016

The searches were made across the entire databases. The Web of Science universe comprised 132,894,950 papers; this information was not available for Science Direct.


Since the outcome was so reduced, it was easy to see that, out of the results obtained from Web of Science with the expression "software accessibility", only 15 papers were effectively related to software accessibility. With Science Direct the outlook is identical; the percentage of results related to software accessibility was simply even lower: from the 49, only 11 are about the topic. Software Engineering by Ian Sommerville [15] and SWEBOK v3.0 [16] were also checked in order to find the number of occurrences of the term "accessibility". The result was 1 occurrence in each book. Unfortunately, even these single occurrences are not related in any way to e-accessibility.

Looking at these indicators, it is clear that e-accessibility is not being appropriately considered by the academy. These facts can have a major negative influence on the spread of digital accessibility research and routines. Software engineering manuals do not talk about the topic; as a result, the topic is not appropriately taught in academic environments. Also, since there are so few papers about it, any actual research becomes harder due to the lack of references, discouraging researchers from developing work on this topic.

4 A First Slide Proposal

In order to spread e-accessibility among developers, the topic should be taught in the academic environment. As shown, this topic – e-accessibility – is extremely under-addressed. The topic is so poorly taught that it requires a baseline to even start the learning process. Due to this lack of discussion, in combination with the vast amount of documents regarding e-accessibility, a good starting point would be to relate each type of document to its appropriate type of user interface. We believe that this would save a lot of research and useless reading. Therefore, a table is presented below as a proposal for an e-accessibility slide 0 (Table 2).

Table 2. Relationship between each type of document and its appropriate type of user interface.

Type of user interface                                                      Documents to consult
Web UI                                                                      Web Content Accessibility Guidelines (WCAG) 2.0 – W3C
Native application with standard controls from the host operating system   Human Interface Guidelines of the host operating system; accessibility programming guide of the host operating system
Native application with UI controls made from scratch                      Host operating system accessibility APIs
UI for a big software system, such as an operating system                  ISO 9241-171:2008 – Ergonomics of human-system interaction – Part 171

As for Web user interfaces, the Web Content Accessibility Guidelines 2.0 from the World Wide Web Consortium (W3C) [17] are the accessibility standards recommended by many organizations – including governmental organizations – for the establishment of an accessible Web for persons with disabilities. They comprise guidelines and


checkpoints to ensure a certain level of accessibility, addressing specific disability-related problems.

For a native application with standard controls, the human interface guidelines of the host operating system are mandatory in case developers want to create a GUI akin to the operating system style. It is relevant to say that when a developer keeps the same graphic style as the host operating system in his application, he is already increasing the level of accessibility, since the interaction will be similar to the rest of the system; therefore, there is probably no need for a specific learning curve. Using standard graphical components, the developer would not have to make them particularly accessible, since they are already built with the accessibility features provided by the accessibility APIs. Consequently, using the standard components, developers would just have to consult the accessibility programming guide of the host operating system in order to use those components accurately (a minimal illustration is sketched at the end of this section).

When developing a native application with UI components made from scratch, developers may follow the same approach as in the previous paragraph and, in addition, study the documentation regarding the host operating system accessibility APIs in order to implement them in the right way in their new and personalized graphical components. Only after this will the UI be accessible.

For the creation of a bigger UI for a larger software system, such as an operating system, the recommendation is to use ISO 9241-171:2008 - Ergonomics of human-system interaction – Part 171. Prepared by Technical Committee ISO/TC 159, Ergonomics, Subcommittee SC 4, Ergonomics of human-system interaction, this ISO "provides ergonomics guidance and specifications for the design of accessible software for use at work, in the home, in education and in public places" [18], as stated in its abstract. This should be the guidance for a new big UI built from scratch.

In addition to this guidance, there are some specific governmental rules, such as Section 508 from the United States of America [19], which may be consulted. However, the above recommendations overlap with these governmental guidelines. Actually, these national recommendations, such as the Brazilian eMAG - Modelo de Acessibilidade em Governo Eletrônico [20], are in line with the international recommendations mentioned above.
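
As an illustration of the "standard controls" case discussed above, the hypothetical Qt/C++ fragment below shows that, with standard widgets, the developer mainly fills in the accessibility metadata that the platform accessibility API exposes to assistive technologies; the widget, strings and behaviour are invented for this example and are not taken from any of the cited guidelines.

```cpp
// Hedged sketch: a standard Qt widget only needs its accessible metadata set,
// because the widget itself already integrates with the OS accessibility API.
#include <QApplication>
#include <QPushButton>

int main(int argc, char** argv) {
    QApplication app(argc, argv);
    QPushButton submit("OK");
    // These strings are what a screen reader will announce for this control.
    submit.setAccessibleName("Submit form");
    submit.setAccessibleDescription("Sends the filled-in form to the server");
    submit.show();
    return app.exec();
}
```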

5 Conclusion

E-accessibility is a very under-implemented feature. The agents with decision capacity are likely to not actually care, or even know enough, about it. Since e-accessibility is not being fulfilled, and its power is undervalued, a possible and reasonable solution would be to address it in the academic environment. Since the topic is under-addressed in the academic environment, and there seems to be a lack of information regarding it, the information presented in the previous section was designed to be an easy and informative first approach to its implementation. It was designed to relate the right documentation to each type of UI, therefore removing a complexity barrier in its implementation by filtering the enormous amount of documentation regarding the topic.


An important step is to set accessibility steps, explicitly, in software development methods; this work is already in progress. To motivate developers to implement accessibility in software, promotional work to make evident the different benefits of accessibility is envisioned for the future. These benefits could range from automated software testing [21, 22] to what is, to the best of our knowledge, a totally new study area: system integration through accessibility.

Acknowledgments. This work is financed by the ERDF – European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 Programme within project «POCI-01-0145-FEDER-006961», and by National Funds through the Portuguese funding agency, FCT - Fundação para a Ciência e a Tecnologia, as part of project «UID/EEA/50014/2013». Our thanks to the Portuguese association of blind and partially sighted people (ACAPO), especially to Fernando Ferreira, the desk assistant and mobility trainer in Leiria. Also, we would like to thank Fundação para a Ciência e a Tecnologia (FCT), which supports this research through the Operacional Capital Humano Program (POCH), financed by the European Union.

References

1. Díaz-Bossini, J.M., Moreno, L.: Accessibility to mobile interfaces for older people. Procedia Comput. Sci. 27, 57–66 (2014)
2. Sánchez-Gordón, M., Moreno, L.: Toward an integration of web accessibility into testing processes. Procedia Comput. Sci. 27, 281–291 (2014)
3. Gonçalves, R., Martins, J., Branco, F., Pereira, J., Rocha, T., Peixoto, C.: AccessWeb Barometer: a web accessibility evaluation and analysis platform. In: Proceedings of the INTERNET 2015 - The Seventh International Conference on Evolving Internet, Malta (2015)
4. Passerino, L.M., Montardo, S.P.: Inclusão Social via Acessibilidade Digital: Proposta de Inclusão Digital para Pessoas com Necessidades Especiais. Revista da Associação Nacional dos Programas de Pós-Graduação em Comunicação 8, 2–18 (2007)
5. Gonçalves, R., Martins, J., Pereira, J., Oliveira, M., Ferreira, J.: Accessibility levels of portuguese enterprise websites: equal opportunities for all? Behav. Inform. Technol. 31, 659–677 (2012)
6. Gonçalves, R., Martins, J., Pereira, J., Oliveira, M., Ferreira, J.: Enterprise web accessibility levels amongst the forbes 250: where art thou o virtuous leader? J. Bus. Ethics 113, 363–375 (2013)
7. W3C. https://www.w3.org/standards/webdesign/accessibility
8. Jaeger, P.T.: Beyond section 508: the spectrum of legal requirements for accessible E-government web sites in the United States. J. Gov. Inform. 30, 518–533 (2004)
9. Sloan, D., Horton, S.: Global considerations in creating an organizational web accessibility policy. In: Proceedings of the 11th Web for All Conference. ACM, New York (2014). Article No. 16
10. Barros, A.C., Leitão, R., Ribeiro, J.: Design and evaluation of a mobile user interface for older adults: navigation, interaction and visual design recommendations. Procedia Comput. Sci. 27, 369–378 (2014)
11. Web Accessibility in Mind: Web accessibility and SEO. http://webaim.org/blog/webaccessibility-and-seo/
12. Horton, S., Sloan, D.: Accessibility for business and pleasure. ACM Interact. 23, 80–84 (2015)
13. Kubitschke, L., Cullen, K., Dolphi, C., Laurin, S., Cederbom, A.: Study on Assessing and Promoting E-Accessibility. European Commission (2013)
14. Web Accessibility in Mind: Survey of Preferences of Screen Readers Users. http://webaim.org/projects/screenreadersurvey/
15. Sommerville, I.: Software Engineering. Addison-Wesley, Boston (2011)
16. Bourque, P., Fairley, R.E.: Guide to the Software Engineering Body of Knowledge, Version 3.0. IEEE Computer Society (2014). www.swebok.org
17. World Wide Web Consortium: Web Content Accessibility Guidelines 2.0. http://www.w3.org/TR/200X/REC-WCAG20-20081211/, http://www.w3.org/TR/WCAG20/
18. DIN EN ISO 9241-171: Ergonomics of Human-System Interaction – Part 171: Guidance on Software Accessibility (2008)
19. Section 508 of the Rehabilitation Act. 29 U.S.C. § 794d
20. eMAG: Modelo de Acessibilidade em Governo Eletrônico. Version 3.1 (2014)
21. Microsoft: Using UI Automation for Automated Testing (2016). https://msdn.microsoft.com/en-us/library/aa348551(v=vs.110).aspx
22. Gonçalves, R., Martins, J., Branco, F.: A review on the Portuguese enterprises web accessibility levels – a website accessibility high level improvement proposal. Procedia Comput. Sci. 27, 176–185 (2014)

Hand Posture Recognition with Standard Webcam for Natural Interaction

César Osimani1, Jose A. Piedra-Fernandez2, Juan Jesus Ojeda-Castelo2, and Luis Iribarne2

1 Applied Research and Development Center on IT (CIADE-IT), Universidad Blas Pascal, Córdoba, Argentina
[email protected]
2 Department of Informatics, Applied Computing Group (ACG), University of Almeria, Almeria, Spain
[email protected], [email protected], [email protected]

Abstract. This paper presents an experimental prototype designed for natural human-computer interaction in an environmental intelligence system. Using computer vision resources, it analyzes the images captured by a webcam to recognize a person's hand movements. There is now a strong trend towards interpreting hand and body movements with computer vision, which is a very attractive field of research. In this study, a mechanism for natural interaction was implemented by analyzing images captured by a webcam, based on hand geometry and posture, in order to map hand movements into our model. A camera is installed in such a manner that the movements a person makes can be discriminated using Background Subtraction. Then the hands are searched for, assisted by segmentation based on skin color detection and a series of classifiers. Finally, the geometric characteristics of the hands are extracted to distinguish defined control-action postures.

Keywords: Natural interaction · Image processing · Hand gesture recognition

1 Introduction

Among the basic needs in intelligent environments is the supply of personalized information to users through embedded systems in which these users can interact naturally with devices. Therefore, Hand Posture Recognition (HPR) techniques are of interest to facilitate daily life. HPR applications are on the way to being used to control home appliances, for interaction with computer games or for sign language translation. HPR is another input communicating with ubiquitous systems for achieving intuitive and natural interaction. The purpose of this study was detection of hand movements in real time. This is not easy due to the number of variants in forms and viewpoints hands can appear in, showing the palm or fist, partially hidden and with a wide variety of finger positions.


The real-time detector proposed is based on segmentation by Background Subtraction and face and skin-color detection, supported by edge detection and analysis of geometric shapes. The main selling point of hand gesture recognition is that you do not need to touch any input device. In human-computer interaction, there are examples of control by several types of hand movements. This paper presents a definition of basic hand movements and gestures for interaction in a user interface. This system is intended to support user demands in real time. Identifying and following hands requires a robust system that is able to recognize the complex structure of the hand, follow it and interpret it. Some studies on real-life applications are described in [14, 23].

The application domains of hand gesture recognition [17] are: desktop applications, sign language, robotics, virtual reality, home automation, smart TV, medical environments, etc. In sign language recognition [4], hand segmentation is a key task in the gesture recognition process, which is why some authors use the HSV [5] or YCbCr [1] color models. Currently, there are techniques related to Machine Learning, such as Hidden Markov Models [21] or Neural Networks [9], which are used in the recognition process. In fact, some projects are integrating devices such as Microsoft Kinect [12], Intel RealSense [8] or Leap Motion [15] in order to identify the different signs of this language. In home automation there is a project called WiSee [16], a system that detects when a person performs a gesture anywhere at home, because the system works through WiFi. In games, it should be pointed out that educational serious games recognize gestures to improve learning and physical skills in preschool children [7]. Gestures in the field of medicine are useful to interact with medical instruments and to control the resource management of a hospital, and they give people with special needs an alternative way to interact with the computer. In addition, in [20] the aim was to develop an intelligent operating room, composed of four subsystems, one of which is hand gesture recognition; the user is able to move X-ray images, select the history of a patient from the database or write down a comment on the image.

The rest of the paper is organized as follows: Sect. 2 describes the methods which have been implemented in order to develop a functional prototype. Section 3 illustrates the prototype developed and the process used to evaluate it. Section 4 summarizes the conclusions and discusses the future work.

2 The HPR Proposal for Natural Interaction

The methods used are described below: background subtraction, a classifier cascade for the face, skin color detection, shape identification and tracking characteristics. The prototype was developed in C++ along with the Qt libraries (for the graphical user interface and event management), OpenGL (for 3D effects in presenting processed images), the standard C++ library and the OpenCV library,


Fig. 1. The schema of the workflow system

which provides a large number of implementations of the algorithms most widely used in image analysis and processing. Figure 1 shows an overview of our workflow, and the subsections below discuss each technique used in detail.

2.1 The Recognition Process

The gesture recognition process is organized in three main parts: Background Subtraction, Calibration and Hand Pose Recognition. The Background Subtraction process separates the background of the image obtained from the camera, combined with the skin color detection method; in this way, the user image is obtained in order to detect the hands and the background is ignored. The calibration process is responsible for identifying the position of the hands. First of all, the face location is detected to make the process faster and more efficient, since the hands will be close to the face and thus the process will not search in unnecessary regions. Once the face has been identified, it is removed as background and the hands' positions are obtained. Finally, in the HPR process, the Lucas-Kanade algorithm is used to track the hands and identify their position at all times. Then, the geometric features extraction method identifies the contour of the hand, and a few significant points on the fingers are placed to identify whether the hand is closed or open. The following sections explain the main methods of the process in more detail.

2.2 Background Subtraction

Background subtraction is a method used in computer vision to separate foreground objects from the background in images captured by a stationary video camera by calculating the differences between frames. This technique saves samples of previous images in memory and generates a background model based on statistical properties of those samples. From there, a binary image is constructed that acts as a mask for segmentation or separation of background and


foreground objects. In addition, a Gaussian blur filter is used to remove abrupt changes in the images. Figure 1 shows the result of background subtraction.
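
A minimal OpenCV sketch of this step is shown below; the MOG2 background model, the 5×5 Gaussian kernel and the camera index are illustrative assumptions rather than the exact parameters used in the prototype.

```cpp
// Sketch: statistical background model + Gaussian blur + foreground masking.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cam(0);
    auto mog2 = cv::createBackgroundSubtractorMOG2(500 /*history*/, 16.0, false);
    cv::Mat frame, blurred, fgMask, foreground;
    while (cam.read(frame)) {
        cv::GaussianBlur(frame, blurred, cv::Size(5, 5), 0);  // soften abrupt changes
        mog2->apply(blurred, fgMask);                         // binary foreground mask
        foreground = cv::Mat::zeros(frame.size(), frame.type());
        frame.copyTo(foreground, fgMask);                     // keep only foreground pixels
        cv::imshow("foreground", foreground);
        if (cv::waitKey(30) == 27) break;                     // Esc to quit
    }
    return 0;
}
```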

2.3 Cascade Classifiers

OpenCV implements face detection by a statistical method based on training samples (images with faces and images without faces), from which information is extracted that distinguishes a face from a non-face. One of the most widely used methods, developed by Viola and Jones [18], which is trained to determine the characteristics of a particular object (face, eyes, hands, etc.), comes from this idea. In this study, the CascadeClassifier class (available in OpenCV) is used in face detection for two purposes: to detect the starting position of the hands with respect to the face at the time of system calibration, and to identify image regions where it is unnecessary to search for hands. Starting conditions for correct use of this prototype are defined, and one of them is the starting location of the person for beginning the calibration stage.
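
The following sketch shows how the face detector can be used in the sense described above; the cascade file name and the detection parameters (scale factor, minimum neighbours, minimum size) are assumptions for illustration, not the prototype's actual settings.

```cpp
// Sketch: Haar-cascade face detection used to seed hand positions and to mask
// out the face region during hand search.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Rect> detectFaces(const cv::Mat& frame) {
    static cv::CascadeClassifier faceCascade("haarcascade_frontalface_default.xml");
    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);                 // improve contrast before detection
    std::vector<cv::Rect> faces;
    // scaleFactor 1.1, minNeighbors 3, minimum face size 80x80 pixels (assumed)
    faceCascade.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(80, 80));
    return faces;
}
```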

2.4 Skin Color Detection

Segmentation is complemented with skin-color detection to reduce the hand detection region. The Lab color space is used because it translates a change in color into a change of closely matching visual importance. The three parameters represent luminosity (L = 0 is black and L = 100 is white), the position between red and green (a, where green is negative and red is positive) and the position between yellow and blue (b, negative for blue and positive for yellow). Good results are obtained by selecting 109 to 133 for the a component.
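
A possible implementation of this step is sketched below. It assumes that the 109–133 band reported above maps directly onto OpenCV's 8-bit encoding of the a channel; the median filter is an added assumption used only to clean the resulting mask.

```cpp
// Sketch: skin segmentation by thresholding the 'a' channel of the Lab space.
#include <opencv2/opencv.hpp>

cv::Mat skinMask(const cv::Mat& bgr) {
    cv::Mat lab, channels[3], mask;
    cv::cvtColor(bgr, lab, cv::COLOR_BGR2Lab);
    cv::split(lab, channels);                                    // channels[1] is 'a'
    cv::inRange(channels[1], cv::Scalar(109), cv::Scalar(133), mask);  // skin-like band
    cv::medianBlur(mask, mask, 5);                               // remove isolated noise
    return mask;                                                 // binary skin mask
}
```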

2.5 Geometric Features Extraction

Hand geometry extraction begins by using the OpenCV findContours function which returns a set of points that form the contours in the binary input image [6]. Some contours may be contained within others. However, only the outermost contours are retained and they are filled in with a solid color. This way, a new binary image is created which works like a mask to obtain a delineated image of the hands. The points on the convex hull of this image are likely to be fingers. However there will also be other convex points because part of the arm may appear in the image. One way to identify the convex points corresponding to finger tips is to make use of convexity defects. The points that make up the convex contour are found by making use of the convexHull function, and the points in the convexity defects are calculated using the convexityDefects function. The convex hull includes the contour of the hand and by selecting the points in each segment of it that are separated the most from this hull, the points forming the bottom of the spaces between the fingers are found. These points are the defects in convexity and enable the number of outstretched fingers to be found very simply. An example is shown in Fig. 2.


Fig. 2. The convex hull and convexity defects

One advantage of this finger detector is that the algorithm is very simple; its disadvantage is its low precision in determining the number of extended fingers. However, in this study it was only necessary to differentiate a closed hand from an open one, and this method is good enough for that. The threshold for distinguishing between a closed hand and an open one is three fingers.
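
The finger-counting logic of this section can be condensed into a small function such as the sketch below; the 20-pixel defect-depth threshold and the largest-contour heuristic are illustrative assumptions, while the three-finger open/closed threshold comes from the text.

```cpp
// Sketch: contour -> convex hull -> convexity defects -> open/closed decision.
#include <algorithm>
#include <opencv2/opencv.hpp>
#include <vector>

bool handIsOpen(const cv::Mat& binaryHand) {
    cv::Mat work = binaryHand.clone();                 // keep the input mask intact
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(work, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return false;

    // Keep the largest outer contour, assumed to be the hand.
    auto& hand = *std::max_element(contours.begin(), contours.end(),
        [](const auto& a, const auto& b) { return cv::contourArea(a) < cv::contourArea(b); });
    if (hand.size() < 5) return false;

    std::vector<int> hullIdx;
    cv::convexHull(hand, hullIdx, false, false);       // hull as indices into 'hand'
    std::vector<cv::Vec4i> defects;
    cv::convexityDefects(hand, hullIdx, defects);

    int fingerGaps = 0;
    for (const auto& d : defects)                      // d[3] is depth * 256
        if (d[3] / 256.0 > 20.0) ++fingerGaps;         // deep gaps between fingers only
    int fingers = fingerGaps + 1;                      // N gaps imply N+1 fingertips
    return fingers >= 3;                               // three-finger threshold (text)
}
```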

2.6 Tracking Hands

The last stage consists of tracking the hands by implementing the Lucas-Kanade method, based on patterns of apparent motion or optical flow. This method, developed by Bruce D. Lucas and Takeo Kanade, is a widely used differential method for estimating optical flow [2]. It is assumed that the flow is constant in a local zone around the pixel under consideration, and the basic optical flow equations are solved for all the pixels in that neighborhood using the least squares criterion. It is assumed that movement of the hand in the image between two consecutive frames is slight and approximately constant within the vicinity of the point considered; then it may be assumed that the optical flow equation holds for all the pixels within the vicinity centered on that point.

Among the OpenCV tools is the goodFeaturesToTrack function, which enables a group of pixels good for tracking to be found. As would be expected, a group of pixels good for tracking is one which has texture and edges. This is a problem in images of hands because they have a uniform visual texture. After the groups of points (good features to track) are found, the calcOpticalFlowPyrLK function is used to find the corresponding characteristics or group of pixels in the next frame. Since the result could deliver false positives, the Forward-Backward Error method is used (see Fig. 3). The same method is used in the second frame to estimate the

Fig. 3. Forward-Backward Error method


characteristics in the first frame. With this, characteristics common to both are acquired which are the correct tracking characteristics in the second frame.
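
A compact sketch of the tracking step, including the forward-backward consistency check, is given below; the feature count, quality parameters and the 1-pixel forward-backward error threshold are illustrative assumptions, and the input images are assumed to be 8-bit grayscale frames.

```cpp
// Sketch: good features + pyramidal Lucas-Kanade flow + forward-backward check.
#include <cmath>
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Point2f> trackHand(const cv::Mat& prevGray, const cv::Mat& currGray,
                                   const cv::Rect& handRoi) {
    std::vector<cv::Point2f> p0, p1, p0back;
    cv::Mat mask = cv::Mat::zeros(prevGray.size(), CV_8U);
    mask(handRoi).setTo(255);                                  // search only inside the hand
    cv::goodFeaturesToTrack(prevGray, p0, 50, 0.01, 5, mask);
    if (p0.empty()) return {};

    std::vector<uchar> stF, stB;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, currGray, p0, p1, stF, err);      // forward pass
    cv::calcOpticalFlowPyrLK(currGray, prevGray, p1, p0back, stB, err);  // backward pass

    std::vector<cv::Point2f> kept;
    for (size_t i = 0; i < p0.size(); ++i) {
        double fbError = std::hypot(p0[i].x - p0back[i].x, p0[i].y - p0back[i].y);
        if (stF[i] && stB[i] && fbError < 1.0)                 // keep consistent points only
            kept.push_back(p1[i]);
    }
    return kept;                                               // tracked hand points
}
```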

3 Practice and Experience

This section presents the prototype developed to test the hand posture recognition techniques described in the previous section, and the real interface where they have been applied. Finally, the results of the experiments, which were carried out under different conditions, are shown.

3.1 Prototype Interface

The experimental prototype requires an initial adjustment before starting, consisting of two steps: capture of images showing the background, and exposure of both hands for their calibration. Figure 4 shows the graphical user interface for this first adjustment. When the user presses the Calibration button, a message on the screen tells him to move away from the scene so that background images can be taken. Then the user is told to position his body and hands for calibration.

Fig. 4. Graphical user interface of our application

3.2 Application Domain

The mashup concept [22] in web development refers to Web applications which obtain data from other APIs or sources, thus creating a new service which is more useful for the user. The ENIA project [19, 24] is based on mashup interfaces (see Fig. 5). This kind of interface is the application domain of the gesture recognition flow. The main feature of this project is the development of a dynamic user interface which adapts to the user's habits. The user interface contains COTS interface components, like widgets, which are managed by an intelligent agent. The user can perform different actions in the interface, such as opening or closing the menu, creating a component or moving a component. The gestures for opening


Fig. 5. The ENIA interface

and closing the hand have been included in the prototype of the ENIA project and have been associated with the opening and closing menu actions. The gesture recognition has been designed to recognize these gestures regardless of which hand performs them. The aim of this is that the interface can be managed by people with hemiplegia, who can move only one side of their body, for instance the right hand or the left one; thus the system has been designed to recognize gestures with either hand. In addition, this feature is really useful for both left-handed and right-handed people.

3.3 Experiments

The experimental prototype was evaluated in different environments on an HP ENVY 2.2 GHz Intel Core i7 computer with 8 GB DDR3 SDRAM, at a rate of 15 fps, with an RGB webcam with a resolution of 640 × 480 pixels. There are no problems in capturing background images as long as the camera remains immobile and does not cause changes in that background. The clearest examples of events that generate problems for background subtraction are changes in scene lighting and shadows a person could cast while using the prototype. Techniques such as those mentioned in [6] can be used to minimize these problems. During the initial calibration, optimum results are obtained in detecting the hand by its geometry and color. Problems that arise during tracking, with a decrease in accuracy rates, are due to the uniform visual texture of the hand, which impedes detection of good points for tracking. However, the results shown in Table 1 are promising.

The test videos included in the test sessions were recorded with a non-uniform background, and three people performed different movements with one of their hands at the same time. The videos show opening and closing hand motions, and vertical and horizontal movements, with the purpose of controlling the interface like a joystick: closing the hand, the users control the joystick; if they open the hand, the joystick control is released. The videos for the test have these features:

– 10 opening and closing hand movements (5 videos with each hand). The posture recognition was tested at two distances: 60 cm and 1 m. The subjects performed 200 opening and closing movements and 20 videos were recorded in total.

Table 1. Accuracy rate in hand posture recognition.

Motions                                    60 cm             1 m               Two distances      Accuracy
Opening and closing hand motions           192/200 (96%)     187/200 (93.5%)   379/400 (94.75%)
Horizontal motions with the closed hand    183/200 (91.5%)   177/200 (88.5%)   360/400 (90%)      91% (overall)
Vertical motions with the closed hand      181/200 (90.5%)   173/200 (86.5%)   354/400 (88.5%)

– 10 vertical motions with the closed hand (5 videos with each hand). The posture recognition was tested at two distances: 60 cm and 1 m. The subjects performed 200 movements and 20 videos were recorded in total.
– 10 horizontal motions with the closed hand (5 videos with each hand). The posture recognition was tested at two distances: 60 cm and 1 m. The subjects performed 200 movements and 20 videos were recorded in total.

This process was tested with three different people, each using a distinct background. The video dataset has 180 videos with 1800 movements in total. The information in Table 1 shows a hit rate of 91%. However, it is necessary to note that the users who took part in the test knew how the interface operates; therefore, the hit rate was not reduced by inexperienced users.

4 Conclusions and Future Work

This paper has shown evidence that the background subtraction technique, when the camera is kept stationary, is an excellent option for segmenting images. It should also be emphasized that the accuracy rate is increased by combining different techniques to detect the same object, without each technique separately having to have a high accuracy rate. Customization of hand movements, and of their associated effect on the controlled device, is very important in natural interaction systems. This is logical, since in daily life different people may perform the same task with different movements; the most common examples are people who are right-handed or left-handed and perform their tasks with different hands.

Future work is to recognize more gestures and attach them to other actions of the ENIA project, such as moving a component or zooming in and out. Apart from hand gesture recognition, we want to add face recognition, intended to include this system in adaptive Web user interfaces as an interaction mechanism for selecting and managing components. For environmental intelligence, this prototype could also facilitate work in interaction and in the modeling of actions and behaviors in an intelligent building.

Acknowledgments. This work was funded by the EU ERDF and the Spanish Ministry of Economy and Competitiveness (MINECO) under Project TIN2013-41576-R.


This work also received funding from the CEiA3 and CEIMAR consortiums. We thank our colleagues from CIESOL and Solar Energy Resources and Climatology research group (TEP165), who provided data and expertise that greatly assisted the research.

References

1. Adithya, V., Vinod, P.R., Gopalakrishnan, U.: Artificial neural network based method for Indian sign language recognition. In: 2013 IEEE Conference on Information & Communication Technologies (ICT), pp. 1080–1085. IEEE (2013)
2. Bouguet, J.: Pyramidal implementation of the affine Lucas Kanade feature tracker: description of the algorithm. Intel Corp. 5, 1–10 (2001)
3. Dey, S., Anand, S.: Algorithm for multi-hand finger counting: an easy approach. arXiv preprint arXiv:1404.2742 (2014)
4. Ghotkar, A., Kharate, G.: Study of vision based hand gesture recognition using Indian sign language. Computer 55, 56 (2014)
5. Ghotkar, A., Khatal, R., Khupase, S., Asati, S., Hadap, M.: Hand gesture recognition for Indian sign language. In: 2012 International Conference on Computer Communication and Informatics (ICCCI), pp. 1–4. IEEE (2012)
6. Hasan, M., Mishra, K.: Novel algorithm for multi hand detection and geometric features extraction and recognition. Intl. J. Sci. Eng. Res. 3, 1–12 (2012)
7. Hsiao, H., Chen, J.: Using a gesture interactive game-based learning approach to improve preschool children's learning performance and motor skills. Comput. Educ. 95, 151–162 (2016)
8. Huang, J., Zhou, W., Li, H., Li, W.: Sign language recognition using real-sense. In: 2015 IEEE China Summit and International Conference on Signal and Information Processing (ChinaSIP), pp. 166–170. IEEE (2015)
9. Ibraheem, N., Khan, R.: Vision based gesture recognition using neural networks approaches: a review. Intl. J. Hum. Comput. Interact. (IJHCI) 3, 1–14 (2012)
10. Intachak, T., Kaewapichai, W.: Real-time illumination feedback system for adaptive background subtraction working in traffic video monitoring. In: 2011 International Symposium on Intelligent Signal Processing and Communications Systems (ISPACS), pp. 1–5. IEEE (2011)
11. Kolsch, M., Turk, M.: Analysis of rotational robustness of hand detection with a Viola-Jones detector. In: Proceedings of the 17th International Conference on Pattern Recognition, ICPR 2004, pp. 107–110. IEEE (2004)
12. Lang, S., Block, M., Rojas, R.: Sign language recognition using Kinect. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) ICAISC 2012. LNCS (LNAI), vol. 7267, pp. 394–402. Springer, Heidelberg (2012). doi:10.1007/978-3-642-29347-4_46
13. Liu, L., Xing, J., Ai, H., Ruan, X.: Hand posture recognition using finger geometric feature. In: 2012 21st International Conference on Pattern Recognition (ICPR), pp. 565–568. IEEE (2012)
14. Pang, Y., Ismail, N., Gilbert, P.: A real time vision-based hand gesture interaction. In: Fourth Asia International Conference on Mathematical/Analytical Modelling and Computer Simulation, pp. 237–242. IEEE (2010)
15. Potter, L., Arauillo, J., Carter, L.: The Leap Motion controller: a view on sign language. In: Proceedings of the 25th Australian Computer-Human Interaction Conference: Augmentation, Application, Innovation, Collaboration, pp. 175–178. ACM (2013)
16. Pu, Q., Sidhant, G., Shyamnath, G., Shwetak, P.: Whole-home gesture recognition using wireless signals. In: Proceedings of the 19th Annual International Conference on Mobile Computing & Networking, pp. 27–38. ACM (2013)
17. Rautaray, S., Agrawal, A.: Vision based hand gesture recognition for human computer interaction: a survey. Artif. Intell. Rev. 43, 1–54 (2015)
18. Rautaray, S., Agrawal, A.: A novel human computer interface based on hand gesture recognition using computer vision techniques. In: Proceedings of the First International Conference on Intelligent Interactive Technologies and Multimedia, pp. 292–296. ACM (2010)
19. Vallecillos, J., Criado, J., Iribarne, L., Padilla, N.: Dynamic mashup interfaces for information systems using widgets-as-a-service. In: Meersman, R., et al. (eds.) OTM 2014. LNCS, vol. 8842, pp. 438–447. Springer, Heidelberg (2014). doi:10.1007/978-3-662-45550-0_44
20. Wachs, J.: Gaze, posture and gesture recognition to minimize focus shifts for intelligent operating rooms in a collaborative support system. Intl. J. Comput. Commun. Control V, 106–124 (2010)
21. Yang, W., Tao, J., Xi, C., Ye, Z.: Sign language recognition system based on weighted hidden Markov model. In: 2015 8th International Symposium on Computational Intelligence and Design (ISCID), pp. 449–452. IEEE (2015)
22. Yu, J., Benatallah, B., Casati, F.: Understanding mashup development. IEEE Internet Comput. 12, 44–52 (2008)
23. Zhu, S., Guo, Z., Ma, L.: Shadow removal with background difference method based on shadow position and edges attributes. EURASIP J. Image Video Process. 2012, 1–15 (2012)
24. The ENIA Project: http://acg.ual.es/projects/enia/

Assessment of Microsoft Kinect in the Monitoring and Rehabilitation of Stroke Patients

João Abreu1, Sérgio Rebelo1, Hugo Paredes1,2, João Barroso1,2, Paulo Martins1,2, Arsénio Reis1,2, Eurico Vasco Amorim1,2, and Vítor Filipe1,2

1 University of Trás-os-Montes e Alto Douro, 5000–801 Vila Real, Portugal
[email protected], [email protected], {hparedes,jbarroso,pmartins,ars,eamorim,vfilipe}@utad.pt
2 INESC TEC, Rua Dr. Roberto Frias, 4200–465 Porto, Portugal

Abstract. Telerehabilitation is an alternative way for physical therapy of stroke patients. The monitoring and correction of exercises can be done through the analysis of body movements recorded by an optical motion capture system. This paper presents a first study to assess the use of Microsoft Kinect in the monitoring and rehabilitation of patients who have suffered a stroke. A comparative study was carried out to assess the accuracy of joint angle measurement with the Microsoft Kinect (for Windows and for Xbox One) and Optitrack™. The results obtained in the first experiment showed a good agreement in the measurements between the three systems, in almost all movements. These results suggest that Microsoft Kinect, a low cost and markerless motion capture system, can be considered as an alternative to complex and high cost motion capture devices for the monitoring and rehabilitation of stroke patients.

Keywords: Monitoring · Rehabilitation · Stroke · Joint angle · Motion capture · Microsoft Kinect · Optitrack™










1 Introduction

The stroke rehabilitation process requires that patients perform intensive physical therapy with the help of physiotherapists, which may become an exhausting task. The long therapy sessions often lead to a lack of motivation in performing exercises due to the high number of repetitions [1]. This brings negative effects to the rehabilitation process and may delay the clinical recovery. One possible approach to overcome this issue is the introduction of "serious games" as a stimulus to practice the exercises and to encourage the patient towards the therapy; moreover, this allows the therapy to be carried out at home. When playing games, the patients are stimulated to accomplish specific movements, which enables the development of the motor skills of the affected limbs (arms and legs) that are important to their recovery process, while making the therapy sessions less tedious and more fun. To make the rehabilitation effective it is



necessary to monitor the correct execution of the exercises. A possible approach is the analysis of body movements recorded by an optical motion capture system. The objective of this study is to validate the accuracy of Microsoft Kinect in measuring the movement of the upper limbs performed by stroke patients in therapy exercises. The joint angular variation data of five movements, obtained with the two Kinect versions, were compared with the data captured with the Optitrack™ Flex 3 with 6 cameras. The Optitrack™ is used as a reference system due to its high accuracy.

2 Background

Patients who survive strokes suffer from cognitive, motor or visual losses that depend on the extent and location of the damaged brain tissue [6]. Virtual reality games are very appealing to a wide variety of individuals, as this type of technology is very immersive and motivating. By developing games as rehabilitation tools, we can harness the motivation associated with games: in these environments, users are focused on the game and not so much on the exercise they are doing [7]. Several projects have explored the use of new technologies to motivate patients and allow the exercises to be done at home [8–10]. These telerehabilitation systems offer users and therapists new possibilities for treatment using the Microsoft Kinect. These systems make use of an avatar that replays the exercises performed by the patient, based on data captured by the Kinect [11]. Using the Kinect, numerous motion detection games were created, which have proved very appealing not only for the sole purpose of entertainment, but also to promote health and rehabilitation.

The Microsoft Kinect device offers a flexible, low-cost solution for rehabilitation, not requiring any kind of markers. This allows its installation at the patients' homes at an affordable cost. The major disadvantage of this device is its limited precision; thus, it is only possible to use it for cases where a high accuracy is not required. However, previous studies showed that clinical rehabilitation monitoring and correction can be performed without extreme precision [4]. On the other hand, optical systems with reflective markers are commonly used for motion capture due to their accuracy, but they have some drawbacks: they require the placement of markers on the body to perform high-quality data acquisition; they comprise several cameras; they require a large space in order to capture the volume needed for data collection [2, 3]; and they have a very high cost, making installation at patients' homes unaffordable.

3 Methods The Microsoft Kinect has a huge potential for motion capture usage on rehabilitation. We have performed a study involving one subject performing well known rehabilitation movements and gathering the motion data with three different motion capture devices to identify the Microsoft Kinect limitations.

Assessment of Microsoft Kinect in the Monitoring and Rehabilitation

3.1 Participants and Materials

A male participant without any pathologies (20 years old, 170 cm tall, 65 kg weight) volunteered to participate in this experiment. For the analysis of the movements, the participant was equipped with a special suit fitted with 34 reflective markers. Three systems were used for motion capture: the Optitrack™ Flex 3 with 6 cameras and the two Microsoft Kinect devices (for Windows and for Xbox One). The Optitrack™ (Fig. 1B) records the movement at 100 Hz and requires the use of reflective markers mounted on a special suit (in our case we used 34 markers) placed on the major joints (Fig. 1A and C), so a relatively large space is required to capture the high volume, to ensure the fluidity of movements and to be able to capture not one but several bodies. The Optitrack™ was considered the reference system in this study because of its high precision.

Fig. 1. (A) Skeleton that shows the body positions where the markers should be placed; (B) One of the six cameras used by the Optitrack™ system; (C) 3D Markers positions.

The Microsoft Kinect is a motion sensor developed for Windows/Xbox 360 (Kinect I) and for Xbox One (Kinect II). These sensors operate at 30 Hz and have four key features: an RGB (Red, Green, Blue) camera that allows body recognition; an infrared (IR) sensor, which allows the recognition of the surrounding environment in three dimensions; an IR emitter, which emits light pulses whose reflections are detected by the IR sensor, allowing the Kinect software to determine depth in three dimensions; and its own processor and software [12]. The major differences between the two versions of the sensor are presented in Table 1 [13, 14].

3.2 Procedure

For the initial position, the participant was asked to stand upright in his natural posture and to look straight ahead with both hands down at his sides. In addition, the participant was asked to wear a special suit fitted with spherical markers [15, 16]. During the tasks, kinematic data were recorded simultaneously by the three motion capture systems. Both Microsoft Kinect devices were placed in front of the subject at a distance of 2 m. The Optitrack™ cameras covered the capture volume with multiple views. No particular instructions were given about the speed or amplitude to reach.

Table 1. Technical specifications of Kinect I and Kinect II.

                            Kinect I                                         Kinect II
Field of view (H × V)       57.5° × 43.5°                                    70° × 60°
Camera resolution (H × V)   640 × 480 @ 30 fps                               1920 × 1080 @ 30 fps (15 fps with low luminance)
Depth resolution (H × V)    320 × 240                                        512 × 424
Maximum depth range         6 m                                              4.5 m
Minimum depth range         40 cm                                            50 cm
Depth technology            Triangulation between near-infrared camera and   Indirect time of flight
                            near-infrared laser source (structured light)
Tilt motor                  Yes                                              No
USB standard                2.0                                              3.0
Supported OS                Win 7, Win 8                                     Win 8

Five upper-body movements commonly used in the physical therapy of stroke patients were performed by the participant, with ten repetitions of each one [17]. The following movements were chosen (Fig. 2): elbow flexion (flexion of the elbow keeping the forearm in the anatomical position), shoulder abduction (raising the arm in the coronal plane), frontal arm elevation (raising the arm in the sagittal plane), horizontal arm abduction (moving the arm in the transverse plane) and neck flexion (flexion of the neck in the sagittal plane). These movements were selected because they are among the most frequently referenced in rehabilitation therapies [5].

Fig. 2. (A) Elbow flexion movement; (B) Shoulder abduction movement; (C) Frontal arm elevation movement; (D) Horizontal arm abduction movement; (E) Neck flexion movement.

3.3 Data Analysis

The sampling frequency of the Optitrack™ system was set to 100 Hz, while the Kinect operates at 30 Hz. In order to compare the data captured by motion capture systems running at different sampling rates, custom MATLAB code was developed to down-sample the 100 Hz OptiTrack™ data to a 30 Hz data set. The marker positions were expressed in a coordinate system whose positive directions point to the right side of the individual (X axis), upward (Y axis) and backward (Z axis) [18]. The marker positions estimated by the systems contained some noise, so a 4th-order Butterworth low-pass filter with a cut-off frequency of 8 Hz was used to eliminate the high-frequency noise. After filtering the data, joint angles were calculated for all three systems at the neck, elbow and shoulder, in the three motion planes. Before the analysis, all unsuccessfully tracked trials and outliers were removed from the data set, leaving five trials for each movement. The time-series data were normalized to 100 data points (in steps of 1% of the movement duration) to construct ensemble averages for each movement. For each movement, the joint angles derived from the Kinect I, the Kinect II and the Optitrack™ were compared using several statistical metrics, namely maximum amplitude, minimum amplitude, range of motion and root mean square error. All these results are presented in Table 2.

Table 2. Mean results expressed in degrees (MAX = maximum amplitude, MIN = minimum amplitude, ROM = range of motion, RMSE = root mean square error).

Movement                   MAX (°)                          MIN (°)                          ROM (°)                          RMSE
                           Optitrack  Kinect 1  Kinect 2    Optitrack  Kinect 1  Kinect 2    Optitrack  Kinect 1  Kinect 2    Opti. vs K1  Opti. vs K2
Elbow flexion              132.38     127.36    141.31      12.73      12.70     19.27       119.65     114.67    122.04      18.75        18.22
Shoulder abduction         157.66     159.07    163.62      42.09      26.40     23.05       115.57     132.67    140.57       9.57        11.01
Neck flexion                50.81      43.74     47.59      23.29       5.58      5.05        27.52      38.16     42.54       8.18         7.20
Frontal arm elevation      154.53     157.00    163.77      37.34      37.49     26.01       117.19     119.51    137.76       7.85         8.65
Horizontal arm abduction   119.83     118.85    115.69      81.36      49.30     41.82        38.47      69.56     73.87      29.15        34.75
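The pre-processing described above was implemented in custom MATLAB code, which is not reproduced here. The short Python sketch below merely illustrates the same chain of steps (zero-phase 4th-order Butterworth low-pass filtering at 8 Hz, down-sampling the 100 Hz OptiTrack™ data to 30 Hz, joint-angle computation from 3D positions, time normalization to 100 points and RMSE computation); the function names and the commented usage are assumptions for illustration only, not the original analysis code.

import numpy as np
from scipy.signal import butter, filtfilt, resample_poly

def lowpass(positions, fs, cutoff=8.0, order=4):
    # Zero-phase 4th-order Butterworth low-pass filter (8 Hz cut-off) along the time axis.
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, positions, axis=0)

def joint_angle(proximal, joint, distal):
    # Angle (degrees) at `joint`, given (N x 3) arrays with the 3D positions of three points.
    u, v = proximal - joint, distal - joint
    cos_a = np.sum(u * v, axis=1) / (np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def time_normalize(curve, n_points=100):
    # Resample one trial to 100 points (steps of 1%) so trials can be ensemble-averaged.
    return np.interp(np.linspace(0, 1, n_points), np.linspace(0, 1, len(curve)), curve)

def rmse(reference, estimate):
    # Root mean square error between two time-normalized angle curves.
    return float(np.sqrt(np.mean((np.asarray(reference) - np.asarray(estimate)) ** 2)))

# Hypothetical usage for one elbow-flexion trial, with (N x 3) position arrays per joint:
# shoulder, elbow, wrist = ...                                   # OptiTrack markers at 100 Hz
# filtered = [lowpass(p, fs=100) for p in (shoulder, elbow, wrist)]
# downsampled = [resample_poly(p, 30, 100) for p in filtered]    # 100 Hz -> 30 Hz
# optitrack_curve = time_normalize(joint_angle(*downsampled))
# kinect_curve = time_normalize(joint_angle(*kinect_joints))     # Kinect skeleton, already 30 Hz
# error = rmse(optitrack_curve, kinect_curve)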

Illustrative examples of movement traces are given in Fig. 3. Each plot contains the average joint angle across the five trials obtained with the three capture systems.

4 Results

In this study, we argue that the precision obtained in measuring joint angles with the Microsoft Kinect is sufficient for most of the exercises prescribed for post-stroke rehabilitation. This type of pathology particularly affects the upper limbs and the head; therefore, only movements of these parts of the body were evaluated. The five selected movements were recorded simultaneously by both Kinect devices and by the reference system, the Optitrack™.


Fig. 3. Average joint angle for the movements: (A) Frontal arm elevation. (B) Shoulder abduction. (C) Elbow flexion.

Thus, it was possible to compute statistical measures such as the RMSE and to obtain the ROM. Moreover, it was possible to verify that the maximum, minimum and ROM values are similar between the two Kinect devices, although the Kinect I data are closer to the reference system. By analysing the data, the movements that are measured more accurately are the frontal arm elevation, shoulder abduction and neck flexion, which show low RMSE values (9.57 and 11.01 for shoulder abduction, 7.85 and 8.65 for frontal arm elevation, 8.18 and 7.20 for neck flexion). Also, for these three movements, the reference system and the Microsoft Kinect present relatively similar values with respect to the maximum, minimum and ROM, especially for the maximum amplitude. The elbow flexion and horizontal arm abduction movements, in turn, present higher RMSE values (18.75 and 18.22 for elbow flexion, 29.15 and 34.75 for horizontal arm abduction); they present similar maximum values, but the same does not hold for the minimum values and for the ROM. The elbow flexion movement nevertheless presents very similar maximum, minimum and ROM values.

5 Final Remarks

This study evaluated the accuracy of the Kinect motion capture device, which proved able to measure and evaluate movements with the precision necessary for post-stroke rehabilitation therapies. The results showed that the precision of the Kinect is lower than that of the reference system; however, for the analysed movements it proved to be suitable, as observed in previous studies [19–21]. The Kinect has a set of advantages that make it very attractive for telerehabilitation systems: low price, portability and markerless operation. Thus, in future work, there is the possibility of using this technology in the creation of rehabilitation systems with many advantages over traditional methods, as previously mentioned. This kind of system still has some weaknesses, and further research is necessary to improve the capture of movements with lower amplitude and magnitude. This information can be used as a reference for future work related to motion capture using the Microsoft Kinect.


Acknowledgments. This work was supported by Project “NanoSTIMA: Macro-to-Nano Human Sensing: Towards Integrated Multimodal Health Monitoring and Analytics/NORTE-01-0145-FEDER-000016”, financed by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, and through the European Regional Development Fund (ERDF).

References

1. Saini, S., et al.: A low-cost game framework for a home-based stroke rehabilitation system, Universiti Teknologi Petronas, pp. 55–56. IEEE (2012)
2. Cappozzo, A., et al.: Position and orientation in space of bones during movement: anatomical frame definition and determination. Clin. Biomech. 10(4), 171–178 (1995)
3. Frigo, C., et al.: Functionally oriented and clinically feasible quantitative gait analysis method. Med. Biol. Eng. Comput. 36(2), 179–185 (1998)
4. Fernández-Baena, A., et al.: Biomechanical validation of upper-body and lower-body joint movements of Kinect motion capture data for rehabilitation treatments, p. 656. IEEE (2012)
5. Thame, A.C.F., et al.: The upper limb functional rehabilitation of spastic patients post stroke, Universidade de Sorocaba, pp. 179–181. IEEE (2010)
6. Alankus, G., Lazar, A., May, M., Kelleher, C.: Towards customizable games for stroke rehabilitation. In: CHI 2010, Atlanta, Georgia, USA, pp. 2113–2122 (2010)
7. Burke, J.W., McNeill, M.D., Charles, D.K., Morrow, P.J., Crosbie, J.H., McDonough, S.M.: Optimising engagement for stroke rehabilitation using serious games. Vis. Comput. 25(12), 1085–1099 (2009)
8. Antón, D., et al.: KiReS: a Kinect-based telerehabilitation system, pp. 457–458. IEEE (2013)
9. Roy, A.K., Soni, Y., Dubey, S.: Enhancing effectiveness of motor rehabilitation using Kinect motion sensing technology, pp. 298–301 (2013)
10. Fernández-Baena, A., et al.: Biomechanical validation of upper-body and lower-body joint movements of Kinect motion capture data for rehabilitation treatments, pp. 4–6 (2012)
11. http://www.xbox.com/en-in/kinect
12. Gonzalez-Jorge, H., et al.: Metrological comparison between Kinect I and Kinect II sensors. Measurement 70, 22–23 (2015)
13. Kinect II Tech Specs. http://123kinect.com/everything-kinect-2-one-place/43136/
14. Matthew, S.: How does the Kinect 2 compare to the Kinect 1? 6 December 2014. http://zugara.com/how-does-the-kinect-2-compare-to-the-kinect-1/
15. Wu, G., Siegler, S., Allard, P., Kirtley, C., Leardini, A., Rosenbaum, D., et al., Standardization and Terminology Committee of the International Society of Biomechanics: ISB recommendation on definitions of joint coordinate system of various joints for the reporting of human joint motion – Part I: ankle, hip, and spine. J. Biomech. 35(4), 543–548 (2002)
16. Wu, G., van der Helm, F.C., Veeger, H.E., Makhsous, M., Van Roy, P., Anglin, C., et al., International Society of Biomechanics: ISB recommendation on definitions of joint coordinate systems of various joints for the reporting of human joint motion – Part II: shoulder, elbow, wrist and hand. J. Biomech. 38(5), 981–992 (2005)
17. Bonnechère, B., et al.: Validity and reliability of the Kinect within functional assessment activities: comparison with standard stereophotogrammetry. Gait Posture 39, 594–596 (2014)
18. Clark, R.A., et al.: Validity of the Microsoft Kinect for assessment of postural control. Gait Posture 36, 373–375 (2012)


19. Fernández-Baena, A., et al.: Biomechanical validation of upper-body and lower-body joint movements of Kinect motion capture data for rehabilitation treatments (2012)
20. Duan, Z., et al.: Evaluation of Kinect as an analysis tool for kinematic variables of shoulder and spine motions. J. Med. Res. Dev. 4(4), 35–41 (2015)
21. Choppin, S., Lane, B., Wheat, J.: The accuracy of the Microsoft Kinect in joint angle measurement. Sports Technol. 7(1–2), 98–105 (2014)

A Big Data Analytics Architecture for Industry 4.0

Maribel Yasmina Santos, Jorge Oliveira e Sá, Carlos Costa, João Galvão, Carina Andrade, Bruno Martinho, Francisca Vale Lima, and Eduarda Costa

ALGORITMI Research Center, University of Minho, Guimarães, Portugal {maribel,jos,carlos.costa,joao.galvao,carina.andrade, bruno_martinho,franciscavlima,eduardacosta}@dsi.uminho.pt

Abstract. In an era in which people, devices, infrastructures and sensors can constantly communicate, exchanging data and also generating new data that traces many of these exchanges, vast volumes of data are generated, giving the context for the emergence of the Big Data concept. In particular, recent developments in Information and Communications Technology (ICT) are pushing the fourth industrial revolution, Industry 4.0, in which data is generated by several sources such as machine controllers, sensors and manufacturing systems, among others. Joining the volume and variety of data, arriving at high velocity, with Industry 4.0 creates the opportunity to enhance sustainable innovation in the factories of the future. In this context, the collection, integration, storage, processing and analysis of data is a key challenge, and Big Data systems are needed to link all the entities and data needs of the factory. This paper therefore proposes a Big Data Analytics architecture that includes layers dedicated to dealing with all data needs, from collection to analysis and distribution.

Keywords: Industry 4.0 · Big Data · Big Data Architecture · Big Data Analytics

1 Introduction

Nowadays, data is generated at unprecedented rates, mainly resulting from the advancements in cloud computing, the Internet, mobile devices and embedded sensors [1, 2]. The way people interact with organizations and the rate at which transactions occur may create unprecedented challenges in data collection, storage, processing and analysis. If organizations find a way to extract business value from this data, they will most likely gain significant competitive advantages [2]. Big Data is often seen as a catchword for smarter and more insightful data analysis, but it is more than that: it is about new, challenging data sources helping to understand the business at a more granular level, creating new products or services, and responding to business changes as they occur [3]. As we live in a world that constantly produces and consumes data, it is a priority to understand the value that can be extracted from it.
In this context, Big Data will have a significant impact on value creation and competitive advantage for organizations, such as new ways of interacting with customers or of developing products, services and strategies, raising profitability. Another area where the concept of Big Data is of major relevance is the Internet of Things (IoT), seen as a network of sensors embedded into several devices (e.g., appliances, smartphones, cars), which is a significant source of Big Data and can bring many business environments, like factories, into the era of Big Data [4]. In this context of the factories of the future, Industry 4.0 is seen as the fourth industrial revolution, where technological innovations enhance production processes through the integration of more automation, control and information technologies.
To support data needs in these factories of the future, this paper proposes a Big Data Analytics architecture integrating components for the collection, storage, processing, analysis and distribution of the data to those that need it, making available an integrated environment that supports decision-making at the several levels of the managerial process.
This paper is organized as follows. Section 2 summarizes Industry 4.0 and the factories of the future, pointing out the role of Big Data in this fourth industrial revolution. Section 3 describes the evolution of the Business Intelligence and Big Data Analytics area, giving the context for the emergence of the Big Data concept. Section 4 presents the Big Data Analytics architecture, describing its several layers and how they complement each other. Section 5 concludes with some remarks and guidelines for future work.

2 Industry 4.0 and the Factories of the Future

Industry 4.0 is a recent concept that was used for the first time in 2011 at the Hannover Fair in Germany, and it involves the main technological innovations applied to production processes in the fields of automation, control and information technologies [5]. The basic foundation of Industry 4.0 implies that, through the connection of machines, systems and assets, organizations can create smart grids all along the value chain, controlling the production processes autonomously. Within the Industry 4.0 framework, organizations will have the capacity and autonomy to schedule maintenance, predict failures and adapt themselves to new requirements and unplanned changes in the production processes [6].
In the context of major industrial revolutions, Industry 4.0 is seen as the fourth industrial revolution. The first industrial revolution, around 1780, essentially consisted in the appearance of the steam engine and the mechanical loom. The second industrial revolution, around 1870, included the use of electric motors and petroleum fuel. The third industrial revolution, around 1970, is associated with the use of computerized systems and robots in industrial production. Finally, the fourth industrial revolution, occurring now, is the revolution where computers and automation come together in an integrated way, i.e., robotics connecting computerized systems equipped with machine learning algorithms, in which the production systems are able to learn from data with very few inputs from human operators, enabling the increase of efficiency and autonomy of the production processes and also making them more customizable [5–7].
For the development and deployment of Industry 4.0, six principles are identified that guide the evolution of intelligent production systems for the coming years [5, 8], namely:
1. Interoperability - systems, people and information transparently intercommunicate in the cyber-physical systems (a fusion of the physical and virtual worlds). This allows exchanging information between machines and processes, interfaces and people;
2. Real-time operation capability - instantaneous data acquisition and processing, enabling real-time decision making;
3. Virtualization - creating smart factories, allowing the remote traceability and monitoring of all processes through the several sensors spread throughout the shop floor;
4. Decentralization - the cyber-physical systems are spread according to the needs of the production, providing real-time decision-making capabilities. In addition, machines will not only receive commands, but will be able to provide information about their work cycle. Therefore, the smart manufacturing modules will work in a decentralized way to improve the production processes;
5. Service Orientation - use of service-oriented software architectures coupled with the Internet of Things concept;
6. Modularity - production processes organized according to demand, with coupling and decoupling of modules in production, giving flexibility to change machine tasks easily.

Based on the principles described above, Industry 4.0 became possible due to the technological advances of the last decade in the areas of information technology and engineering. The most relevant are:
• Internet of Things (IoT) – it consists of networking physical objects, environments, vehicles and machines by means of embedded electronic devices, allowing the collection and exchange of data. Systems that operate on the IoT are endowed with sensors and actuators, the cyber-physical systems, and are the basis of Industry 4.0 [5, 6, 8, 9];
• Security – one of the major challenges to the success of Industry 4.0 lies in the security and robustness of Information Systems. Problems such as transmission failures in machine-to-machine communication, or even eventual "gagging" of the system, can cause production disruption. With all this connectivity, systems will also need to protect the organization's know-how embedded in the processing control files [10, 11];
• Cloud – cloud-based manufacturing can be described as a networked manufacturing model with reconfigurable cyber-physical production lines, enhancing efficiency, reducing production costs and allowing optimal resource allocation in response to variable customer demand [6, 9, 11];
• Mobile and Augmented Reality – mobile devices with reliable and inexpensive positioning systems allow the representation of real-time positioning on 3D maps, enabling the use of augmented reality scenarios. These are expected to bring tangible gains in areas such as the identification and localization of materials or containers, or in maintenance-related activities [9];
• Big Data – in Industry 4.0 contexts, data is generated by several sources like machine controllers, sensors, manufacturing systems, among others. All this volume of data, arriving at high velocity and in different formats, is called "Big Data". The processing of Big Data in order to identify useful insights, patterns or models is the key to sustainable innovation within an Industry 4.0 factory [12].
In this context of Industry 4.0, people need to adapt their skills to the needs of the Factories of the Future. Manual labor will be replaced by specialized labor, raising new opportunities for very well trained professionals, in an environment of huge technological variety and challenges [5].
Summarizing, when implementing an Industry 4.0 scenario, the focus is not on new technologies, but on how to combine them in a new way, considering three levels of integration: the cyber-physical objects level, the (Big) data infrastructure and models of the mentioned cyber-physical objects, and the services based on the available (Big) data [7]. For this, and since Big Data is the central topic of this paper, an architecture is proposed that provides the necessary layers and components for the collection, storage, processing, analysis and visualization of vast volumes of data. Central in this architecture is the Big Data Warehouse, an analytical repository that integrates and consolidates data for decision support.

3 Business Intelligence and Big Data Analytics Evolution

Over the last years, the interest in Big Data has increased considerably [13], particularly after 2012, as can be seen in Fig. 1. It is important now to look back and see the evolution of data analytics in Business Intelligence (BI) systems and, after that, how we arrived at the Big Data era.

Fig. 1. Increased interest in Big Data. Retrieved from [13].

Looking back to 1958, Hans Peter Luhn, a researcher from IBM, defined BI as "the ability to apprehend the interrelationships of presented facts in such a way as to guide action towards a desired goal" [14], proposing an automatic system for the dissemination of information to the several players of any industrial, scientific or government organization. The system was based on the use of data-processing machines for providing useful information to those who need it. The processing capabilities were based on statistical procedures and complemented with proper communication facilities and input-output equipment, providing a comprehensive system that accommodates all the information needs of an organization.
The key point in Luhn's proposal was to optimize business using data, a concern that is maintained in more recent definitions of the BI area. Looking at the Gartner glossary (http://www.gartner.com/it-glossary/business-intelligence-bi/, accessed November 2016), BI is nowadays defined as "an umbrella term that includes the applications, infrastructure and tools, and best practices that enable access to and analysis of information to improve and optimize decisions and performance". Although a broader definition, the focus is maintained on the data processing capabilities to provide useful information and insights for improving the business.
Looking into the same glossary for the definition of Big Data, it is defined as "high-volume, high-velocity and/or high-variety information assets that demand cost-effective, innovative forms of information processing that enable enhanced insight, decision making, and process automation". Putting aside Big Data characteristics like volume, velocity and variety, the key asset is still information, and processing information for supporting the decision-making process. Given this context, an evolution can be seen from BI to Big Data in terms of the supporting technologies and development frameworks, although the organizational role is the same: processing capabilities to give useful insights on the business to decision-makers.
The evolution from BI, or from Business Intelligence and Analytics (BI&A), to Big Data is addressed in the work of [15], which makes a retrospective characterization of BI&A itself and shows what changes with Big Data. For these authors, the term business analytics was introduced to refer to the key analytical components of BI, whereas Big Data is used to describe datasets that are so large and complex that they require advanced and unique data storage, management, analysis and visualization technologies. In this context, Big Data Analytics offers new research directions for BI&A. Retrospectively, [15] propose a framework that characterizes BI&A into three eras, BI&A 1.0, BI&A 2.0 and BI&A 3.0, examining their evolution, application areas and emerging research areas.
In BI&A 1.0, data is mostly structured, distributed over several data sources that include legacy systems, and often stored in Relational Database Management Systems (RDBMS). Data Warehouses (DW) are a foundation of this era and DW schemas are essential for integrating and consolidating enterprise data, supported by Extraction, Transformation and Loading (ETL) mechanisms. Online Analytical Processing (OLAP) and reporting tools based on intuitive graphics are used to explore the data, providing interactive environments for ad-hoc query processing, complemented by statistical methods and data mining algorithms for advanced data analytics.
BI&A 2.0 started to emerge when the Internet and the Web offered new ways of data collection and analytical research. In these contexts, detailed and IP-specific user search and interaction logs are collected through cookies and server logs, allowing the exploration of customers' needs and potentiating the identification of new business opportunities. This era is centered on text and web analytics over unstructured data, using data analytics techniques such as web intelligence, web analytics, text mining, web mining, social network analysis or spatial-temporal data analysis [15].
BI&A 3.0 emerges with the new role of mobile devices and their increasing use in our modern society. Mobile phones, tablets, sensor-based Internet-enabled devices, barcodes and radio tags, communicating together in the Internet of Things (IoT), support mobile, location-aware, person-centered and context-relevant operations [15]. In the context of a vast amount of web-based, mobile and sensor-generated data arriving at ever increasing rates, such Big Data will drive the identification of new insights that can be obtained from highly detailed data.

4 Big Data Analytics Architecture for Industry 4.0

As the contribution of this paper is the proposal of a Big Data Analytics architecture for Industry 4.0, Fig. 2 shows the proposed architecture and its main layers and components. This proposal benefits from state-of-the-art work, both in the identification of its main components and in the identification of the Big Data technologies to be adopted [16].

Fig. 2. Big Data Architecture for Industry 4.0.

The proposed architecture is divided into seven layers, each layer including components, and each component can be associated with some technological tools. In Fig. 2, each layer is represented by a rectangle in dashed lines, while the other boxes are used to specify the components. Each of these boxes is divided in two areas, white and grey: the white area is used to indicate the component itself, while the grey area includes the specification of the technology to be used. The full arrows represent the data flows between components.
The first layer to be explained in this architecture is the Entities/Applications layer, which represents all Big Data producers and consumers, for instance customers, suppliers and managers (at several managerial levels), among others. These entities are usually consumers of raw data, data indicators or metrics, like Key Performance Indicators (KPI), which, in this architecture, are available from the Big Data Warehouse (BDW).
The second layer represents the different sources of data, here named the Data Sources layer, usually including components such as: Databases (operational/transactional databases), Files, ERPs, E-Mail, Sensors, among others. These components can generate data with low velocity and concurrency (for instance, data from periodical readings from databases), or data with a high degree of velocity and concurrency (for instance, data streams from electronic devices such as smart meters and other sensors).
All this data feeds the ETL/ELT layer (extraction, transformation and loading of the data), the third one, corresponding to the process of extracting data from the data sources and bringing it into the BDW (included in the Data Storage layer). There are several technologies that can be used to implement the ETL/ELT process, integrating data from multiple data sources, the following being highlighted here:
• Talend – a data integration platform that has several elements used to perform data extraction, transformation and loading. This technological tool contains connectors for the Hadoop file system, NoSQL databases, among others [17];
• Kafka – used for building real-time data pipelines and streaming applications, offering horizontal scalability and fault tolerance [18];
• Spark Streaming – through this component, data streams can be obtained, transformed and later loaded into the Data Storage layer components. Spark Streaming interacts with the streaming broker to obtain the data made available by streaming producers [19] (a minimal ingestion sketch combining these components is given after the description of the Data Storage layer below);
• Spark – an in-memory processing tool that achieves better performance by using RAM. It offers a simple programming model to develop data extraction, transformation and loading, and can also be used for building data mining models [19];
• Apache Sqoop – a component that allows the efficient migration of large volumes of data from relational databases to Hadoop, in a process of data extraction and loading [20].
The fourth layer, the Data Storage layer, is divided into two sub-layers with different components that will be used in different contexts:
1. NoSQL data storage sub-layer – data streams will be stored in a real-time fashion into a NoSQL database. There are several NoSQL technologies available, such as those that are column-based, document-based, graph-based, key-value or multi-model. Among the several NoSQL databases are HBase, Cassandra, MongoDB, CouchDB, DynamoDB, Riak, Redis and Neo4J, among many others. To choose the most adequate NoSQL database for a real-time environment, the work presented in [21] was used, pointing to Cassandra and HBase as the most adequate options.
2. Hadoop BDW sub-layer – the data will be stored from an historical perspective and this sub-layer has two components. In the first, the Staging Area, the data is loaded for further use; the data stays here for a delimited time period and can be stored in HDFS, a distributed file system for storing large volumes of data in Big Data contexts. In the second, the Big Data Warehouse Area, the data previously loaded into the staging area is extracted, transformed and loaded into the BDW. Once the data is available in the BDW, it can be used for data analytics through the SQL query engine. Hive, in a Big Data context, is the BDW infrastructure with a role similar to a traditional DW; however, it is built on HDFS, which enables distributed storage and processing for storing and aggregating large volumes of data.
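To make the interplay between the ETL/ELT and Data Storage layers more concrete, the sketch below shows how a stream of shop-floor sensor readings could be consumed from Kafka and landed in the HDFS staging area. It is only an illustrative example under assumed names: the topic, broker address, JSON schema and HDFS paths are not part of the proposed architecture, and Spark Structured Streaming is used here as one possible way of realizing the Spark Streaming component (the job also requires the Spark-Kafka integration package on the cluster).

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("Industry40SensorIngestion").getOrCreate()

# Assumed schema of the JSON messages published by machine controllers and sensors.
schema = StructType([
    StructField("machine_id", StringType()),
    StructField("sensor", StringType()),
    StructField("value", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Consume the stream from the Kafka broker (topic and address are illustrative).
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "shopfloor-sensors")
       .load())

# Kafka delivers the payload as binary; cast it to text and parse the JSON into typed columns.
readings = (raw.selectExpr("CAST(value AS STRING) AS json")
            .select(from_json(col("json"), schema).alias("r"))
            .select("r.*"))

# Land the parsed events in the HDFS staging area of the Big Data Warehouse.
(readings.writeStream
 .format("parquet")
 .option("path", "hdfs:///bdw/staging/sensor_readings")
 .option("checkpointLocation", "hdfs:///bdw/checkpoints/sensor_readings")
 .outputMode("append")
 .start()
 .awaitTermination())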


The fifth layer, the Raw Data Publisher, enables downloading data through Web Services from the data stored in the Data Storage layer. This interface is included to avoid direct accesses by other entities or applications to the Hadoop cluster, where the data is stored. The proposal is to use REST (Representational State Transfer) Web Services which, due to their architectural style based on Hypertext Transfer Protocol (HTTP) primitives, facilitate the integration with different applications and different connection drivers according to the chosen storage technologies. At this point, it is important to mention that the Entities/Applications layer can directly use the Raw Data Publisher layer, as this architecture integrates, stores and distributes data to the several parties in the industrial scenario of a specific organization.
The sixth layer, Big Data Analytics, includes components that facilitate the analysis of vast amounts of data, making available different data analysis techniques, namely:
• Data Visualization – a component used for the exploration/analysis of data through intuitive and simple graphs;
• Data Mining – or Knowledge Discovery, the component responsible for identifying new patterns and insights in data;
• Ad-hoc Querying – a component that allows the interactive definition of queries on the data, attending to the users' analytical needs. Queries are defined on-the-fly, mostly depending on the results of previous analyses of the data. This component must ensure an easy-to-use and intuitive querying environment;
• Reporting – the component that organizes data into informational summaries in order to monitor how the different areas of a business are performing; and
• SQL Query Engine – the component that provides an interface between the other components in this layer and the Data Storage layer (an illustrative query through this engine is sketched below).
In this sixth layer, different technologies are already available and can be used, for example R, Weka and Spark, and commercial tools like Tableau, SAS, PowerPivot, QlikView, SPSS, among others. The technologies used in the SQL Query Engine component must be compatible with the NoSQL database in the Data Storage component; in this case Presto was chosen, because it has connectors to the NoSQL database Cassandra, but it is worth mentioning that many other technologies could be used apart from Presto, such as Impala, HAWQ, IBM Big SQL and Drill, among others.
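As an illustration of the ad-hoc querying path through the SQL Query Engine, the following sketch submits an analytical query to a Presto coordinator sitting on top of the Hive-based BDW. The host name, schema and the sensor_readings table with its columns are assumptions made for the example, and the PyHive client is only one of several drivers that could be used.

from pyhive import presto  # assumed client library; any Presto driver would do

# Connect to the Presto coordinator that fronts the Hive-based Big Data Warehouse
# (host, catalog, schema and table/column names are illustrative assumptions).
connection = presto.connect(host="presto-coordinator", port=8080,
                            catalog="hive", schema="bdw")
cursor = connection.cursor()

# Ad-hoc KPI: average sensor value and number of readings per machine over the last day.
cursor.execute("""
    SELECT machine_id,
           AVG(value) AS avg_value,
           COUNT(*)   AS n_readings
    FROM sensor_readings
    WHERE event_time >= current_timestamp - INTERVAL '1' DAY
    GROUP BY machine_id
    ORDER BY avg_value DESC
""")

for machine_id, avg_value, n_readings in cursor.fetchall():
    print(machine_id, round(avg_value, 2), n_readings)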


Finally, the last layer is associated with Security, Administration and Monitoring, including components that provide base functionalities needed by the other layers and that ensure the proper functioning of the whole infrastructure. In this layer, the components needed are:
• Cluster Tuning and Monitoring – detects bottlenecks and improves performance by adjusting some parameters of the technologies adopted for the components included in the architecture;
• Metadata Management – the needed metadata can be divided into three categories:
  – Business – describes the data ownership information and business definitions;
  – Technical – includes database systems' names, table definitions, and data characterization such as columns' names, sizes, data types and allowed values;
  – Operational – description of the data status (active, archived, or purged), history of the migrated data and transformations applied to it;
• Authorization and Auditing – user authorizations, data access policy management and the tracking of users' operations are represented in this component;
• Data Protection – associated with policies for data storage, allowing data to be stored encrypted or not, attending to how critical or sensitive the data is;
• Authentication – representing the authentication of the users in the Big Data infrastructure, here shortly named the Big Data cluster.

5 Conclusions

This paper presented a Big Data Analytics architecture for Industry 4.0, describing its main layers and components, which provide support for the collection, integration, storage, processing, analysis and distribution of data. This Big Data Analytics architecture has been designed considering the volume, variety and velocity of data that can be generated, the several processing needs, and the different end-users, or the roles they may have, in the decision-making process. The architecture is now under implementation and, as future work, it may be enhanced as a result of this implementation process, helping in its validation and adoption.

Acknowledgments. This work has been supported by COMPETE: POCI-01-0145-FEDER-007043 and FCT (Fundação para a Ciência e Tecnologia) within the Project Scope: UID/CEC/00319/2013, and by the Portugal Incentive System for Research and Technological Development, Project in co-promotion no. 002814/2015 (iFACTORY 2015-2018). Some of the figures in this paper use icons made by Freepik, from www.flaticon.com.

References

1. Dumbill, E.: Making sense of big data. Big Data 1(1), 1–2 (2013). doi:10.1089/big.2012.1503
2. Villars, R.L., Olofson, C.W., Eastwood, M.: Big data: what it is and why you should care. IDC (2011). http://www.tracemyflows.com/uploads/big_data/idc_amd_big_data_whitepaper.pdf
3. Davenport, T.H., Barth, P., Bean, R.: How big data is different. MIT Sloan Manage. Rev. 54(1), 43–46 (2012)


4. Chen, M., Mao, S., Liu, Y.: Big data: a survey. Mob. Netw. Appl. 19(2), 171–209 (2014). doi:10.1007/s11036-013-0489-0
5. Hermann, M., Pentek, T., Otto, B.: Design principles for Industrie 4.0 scenarios. In: 49th Hawaii International Conference on System Sciences (HICSS), pp. 3928–3937 (2016)
6. Jazdi, N.: Cyber physical systems in the context of Industry 4.0. In: 2014 IEEE International Conference on Automation, Quality and Testing, Robotics, pp. 1–4 (2014)
7. Drath, R., Horch, A.: Industrie 4.0: hit or hype? [industry forum]. IEEE Ind. Electron. Mag. 8(2), 56–58 (2014)
8. Kagermann, H.: Change through digitization - value creation in the age of Industry 4.0. In: Albach, H., Meffert, H., Pinkwart, A., Reichwald, R. (eds.) Management of Permanent Change, pp. 23–45. Springer, Wiesbaden (2015)
9. Almada-Lobo, F.: The Industry 4.0 revolution and the future of manufacturing execution systems (MES). J. Innov. Manage. 3(4), 16–21 (2016)
10. Sommer, L.: Industrial revolution - Industry 4.0: are German manufacturing SMEs the first victims of this revolution? J. Ind. Eng. Manage. 8(5), 1512–1532 (2015)
11. Thames, L., Schaefer, D.: Software-defined cloud manufacturing for Industry 4.0. In: The Sixth International Conference on Changeable, Agile, Reconfigurable and Virtual Production, vol. 52, pp. 12–17 (2016)
12. Lee, J., Kao, H., Yang, S.: Service innovation and smart analytics for Industry 4.0 and Big Data environment. In: Proceedings of the 6th Conference on Industrial Product-Service Systems and Value Creation, vol. 16, pp. 3–8 (2014)
13. Google Trends: Interest in Big Data over time (2016). https://www.google.pt/trends/explore#q=big%20data. Accessed 15 Nov 2016
14. Luhn, H.P.: A business intelligence system. IBM J. Res. Dev. 2(4), 314–319 (1958)
15. Chen, H., Chiang, R.H., Storey, V.C.: Business intelligence and analytics: from big data to big impact. MIS Q. 36(4), 1165–1188 (2012)
16. Costa, C., Santos, M.Y.: BASIS: a big data architecture for smart cities. In: SAI Computing Conference (SAI), pp. 1247–1256 (2016)
17. Talend: Talend Big Data Integration (2016). https://www.talend.com/products/big-data. Accessed 17 Nov 2016
18. Apache Kafka (2016). https://kafka.apache.org/. Accessed 17 Nov 2016
19. Apache Spark, November 2016. http://spark.apache.org/. Accessed 17 Nov 2016
20. Apache Sqoop: HDFS Architecture Guide (2016). http://sqoop.apache.org/. Accessed 17 Nov 2016
21. Costa, C., Santos, M.Y.: Reinventing the energy bill in smart cities with NoSQL technologies. In: Ao, S., Yang, G.-C., Gelman, L. (eds.) Transactions on Engineering Technologies, pp. 383–396. Springer, Singapore (2016)

Articulating Gamification and Visual Analytics as a Paradigm for Flexible Skills Management

José Araújo1 and Gabriel Pestana2

1 Universidade Europeia, Laureate International Universities, SAP, Lisbon, Portugal
[email protected]
2 INOV – Inesc Inovação, R. Alves Redol Nº 9, 1000-029 Lisbon, Portugal
[email protected]

Abstract. Employees are the human capital which contributes to the success of high-performance and sustainable organizations. Rapidly changing work environments need innovative, paradigm-changing solutions for monitoring and following up on each employee's professional progress, reducing the deviation between the competencies held by each employee and the core competencies required by the organization. Such gaps can become a critical risk factor and compromise the accomplishment of strategic and operational objectives. In this paper, we present a survey of the three predominant scientific areas that support this research work: Visual Analytics, Gamification and Talent Management. Visual Analytics contributes with interactive dashboards, providing mechanisms to assist employees' self-awareness and presenting recommendations regarding the improvement of competencies, while Gamification techniques are articulated with Talent Management approaches to improve employees' auto-regulation, fostering the acquisition of, and the motivation for, new skills and competencies. The paper also presents a domain model to dynamically provide information regarding skills development and to trigger events.

Keywords: Visual Analytics · Skills management · Self-awareness · Semantic context · Monitoring events · Gamification

1 Introduction

Employees are the human capital which contributes to the success and development of high-performance and sustainable organizations. Rapidly changing work environments need innovative solutions to monitor and follow up on each employee's professional progress, creating instruments that help align the set of competencies held by each employee with those required by the organization, since a misalignment can become a critical risk factor. It is foreseen that skills management recommendations should endorse the training actions which best contribute to improve the knowledge and expertise of each employee. The literature [1] also recommends the adoption of gamification techniques to challenge employees towards healthy competitiveness, either by acquiring new skills/competencies or by investing in career progression, in order to meet business demands.


The Skills Development Module (SDM) presented in this paper addresses these challenges, strengthened by the opportunity to collaborate in the Active@Work project (http://www.activeatwork.eu/), a European Union (EU) funded R&D project. The requirements and the design of the SDM domain model were accomplished within the scope of the Active@Work (A@W) project; see Sect. 4 for more detailed information. Besides addressing aspects in the field of Competence Management, the module also includes a Training Catalogue to promote training offers that are core for the organization. This means that the catalogue intends to improve employees' Self-Awareness and Auto-Regulation regarding which skills/competences are relevant for the organization over time.
The SDM contributes to the functionalities provided by the Virtual Assistant tool (VAT), one of the core components within the software architecture of the A@W project. This integration aims to create a highly motivating and rewarding environment, helping employees to manage and develop their skills, promoting engagement and motivation to fit the organization's needs and expectations, and addressing the following research areas: Visual Analytics, Gamification and Talent Management.
• Visual Analytics: by adopting information visualization techniques, complemented with a set of metadata characterizing the user profile, it is possible for the employee to have access, in real time, to relevant information through interactive dashboards. The dashboard provides employees with a clear perception of their strengths and weaknesses, using a set of indicators (e.g., CV Evaluation, ranking and CV composition) and events (alerts, warnings and recommendations) which are dynamically triggered by the Intelligent Agent (IA).
• Gamification: by adopting data-driven elements and techniques that game designers use to engage, reward and differentiate individual efforts in the working environment. Such an approach endorses the adoption of healthy competitiveness practices, adding value to the business and promoting loyalty. In such an environment, developing self-awareness helps the employees to "play" with alerts and recommendations, and gives them the opportunity to "score" by adopting changes in their behaviour, which may be extremely important in case the employee's CV (i.e., perceived skills or expertise) starts to lose relevance.
• Talent Management: by adopting processes designed to motivate and retain highly productive employees, it is possible to combine Skills Management capabilities with information visualization techniques, helping organizations to maximize the potential use of Talent Management technology. Through a training catalogue with specific skills offers, employees can search for and manifest interest in training offers or look for career options.
In this paper, we briefly describe the methodology used and the research areas explored, as well as the major motivations and challenges addressed by the SDM. In Sect. 2, a literature review is presented concerning the methodology and research areas. Section 3 describes the SDM Dashboards, the Skills Metadata Management and the domain model. Section 4 presents a case study within the scope of the A@W project. Finally, Sect. 5 summarizes the main achievements together with future work.

2 Literature Review

The Design Science concept, when applied to skills management, can be defined as knowledge in the form of constructs, techniques and methods for creating artefacts that satisfy a set of functional requirements [2]. Design Science Research (DSR) is a research methodology that provides a framework which starts with a survey of the project scope to achieve a preliminary awareness of the challenges related to the problem domain, identifying hypotheses to be tested and evaluated using information artifacts. It provides a number of iterative cycles over which the proposed model is incrementally refined, ultimately resulting in a successful implementation and an improved human-machine interface. The execution generates outcomes which might contribute findings that feed the knowledge about the research problem being studied, in a continuous cognitive improvement process. The analysis of the results might provide additional inputs or contribute to redesigning some services, which is also considered a relevant achievement for better understanding the problem domain, as well as for identifying stakeholders' concerns and interests in the system. An active participation of the end-user is therefore required in validating the identified artifacts at each DSR step. Many models of DSR have been developed over time [3–7]. In this paper we follow a typical DSR workflow with a user-centric approach, combining the approaches from [8–10], which focus on the DSR process model, with the approach from [11], which is focused on cognitive processes. The DSR methodology was also adopted because of its user-centric approach, based on requirements gathering (partner field work) and mockup techniques to facilitate the communication between all intervening parties.
In this paper, Visual Analytics (VA) has a determinant role. VA is an emerging research discipline that explores the combination of typical methods of Business Intelligence with the visual perception and analysis capabilities of the human user. VA is characterized by the interaction between visualization techniques, models about the data, and the users' profile in order to discover knowledge [12]. The VA process aims at tightly coupling automated analysis methods and interactive visual representations. The SDM makes use of VA concepts such as Data Analysis (via the metadata info-structure supporting the data characterization/business model) and Human Interaction and Cognition (via Self-Awareness and Auto-Regulation) to present information using interactive Dashboards. Information is displayed according to different perspectives and levels of granularity, allowing users to identify, explore and communicate their preferences, in order to achieve one or several organizational goals.
Gamification, when applied to skills management, is foreseen as an emerging concept. Since 2010 it has been extensively mentioned in new domains, so a consensual definition has not yet been agreed on. One of the definitions that found widespread acceptance defines Gamification as the use of game-thinking and game mechanics in non-game contexts in order to engage users to address identified problems [1]. In simple terms, it is a way of using "game" elements and techniques to engage employees, reward and recognize individuals and keep them motivated to achieve ambitious value propositions for the business, promoting loyalty.


In this paper, motivation and healthy competitiveness between employees are seen as key elements for the construction of a successful gamification strategy. To increase competitive advantage, the organization should adopt tools that enable employees to benefit from rewards and recognition, as well as to monitor existing skills and compare each competency with the ones required by the organization or held by the main competitors. The key is to create competition in a scalable and automated way that can be used to drive repeatable results in a sustainable manner. The main goal is to promote knowledge-sharing, explore paradigms based on game theory, and take advantage of technological advances to endorse empathy (with origins in Interaction Design and DSR), introducing concepts built on the observation of the experiences, needs and preferences of the users.
Talent Management (TM), from a skills management perspective, is the science of using strategic human resource planning to improve business value and to make it possible for organizations to reach their goals. Everything is designed to recruit, retain, develop, reward and engage people, and to make them more performant and aware of a TM strategy linked to the business strategy.
The SDM was designed to motivate and retain highly productive employees; it intends to help organizations maximize the potential use of TM technology by helping to improve and strengthen employee skills, in order to meet their strategic and operational objectives. In order to provide adaptive and personalized learning plans and recommendations, it will be necessary to combine learning capabilities so as to obtain highly personalized learning experiences. This is particularly relevant for key learning indicators and for indicators measuring the employee's skill level. Machine learning techniques and algorithms can help improve the quality of the diagnostic information presented by TM, in order to provide recommendations better adjusted to the profile/role of each user.

3 Skills Management Awareness Model

3.1 Workforce Skills Analysis

In the scope of the SDM, the VA framework features a combination of automated analysis techniques with interactive information visualization techniques. The objective is to enhance the comprehension of large data sets (i.e., Big Data) through interactive visual interfaces (i.e., Dashboards), structured into sections to streamline the reading of the information at a glance. The articulation of Gamification techniques with TM approaches intends to improve employees' self-awareness and auto-regulation. This articulation is accomplished by providing specific skills trainings and career options in order to increase employees' skills and promote job progression within the organization. The two main concepts are presented as follows:
• Self-Awareness: in the proposed model, interactive Dashboard interfaces are presented as a monitoring instrument, combined with key elements like motivation, challenges and healthy competitiveness. The incorporation of Gamification enables the user to explore paradigms based on game theory and to take advantage of technological advances to promote a better engagement of employees. In the SDM, the Personal Skills Dashboard illustrated in Fig. 1 makes use of concepts like healthy competitiveness (e.g., assessments) and reputation (e.g., ranking and scoring) through data analysis techniques expressed in a set of indicators generated dynamically based on the qualifications reported by the employee. Such an approach provides relevant information to the employee regarding a global classification of their skills and triggers events (alerts and recommendations), simultaneously letting them know how they can progress (in the short/medium term) in their training, which will always be aligned with their profile/role. The SDM enables the business user to configure the most relevant indicators, according to each organization's strategic and operational goals and objectives, stimulating workforce competition and the motivation to improve their skills, while staying aligned.

Fig. 1. Mockup of the personal skills and training catalogue indicators layout.

• Auto-Regulation: the employee can access this type of information through alert mechanisms and indicators. The interactive Dashboard provides a way to promote behavioural changes and allows the user to stay committed to the organization's requirements regarding specific expertise. Such awareness can be extremely important if the employee's CV decreases in relevancy (e.g., acknowledged competencies versus the skills required for the current position). Apart from the classification structure of the employees' competencies (Personal Skills), there is also a Dashboard with a list of training offers; the Training Catalogue interface is also presented in Fig. 1. The idea is to enable the employee to access and enrol in the advised training offers, which represents a mechanism to help the employee keep his/her CV updated to changing business needs and simultaneously increase the corresponding overall CV ranking or evaluation for a specific expertise.
The specification of the Dashboards required the definition of a metadata info-structure capable of integrating all the knowledge about User Models and the respective archetypes (i.e., an original model or type after which other similar things are patterned; a prototype). Data analysis assumes a strategic dimension in the generation of multidimensional information in order to study the user, taking into consideration the role or functions within the organization. The value of metadata lies in classifying more efficiently and better organizing the information, as well as in yielding deeper insight into the actions taking place across the organization, providing more intelligence and higher quality information to fuel data sharing and support interactive environments where team work is required, with each participant having a specific role/contribution within each project.

3.2 Metadata Info-Structures

Metadata is "data that provides information about other data" (cf. http://www.merriam-webster.com/dictionary/metadata, accessed on 07/05/2016). While the application of metadata is manifold, covering a large variety of fields, there are specialised and well-accepted models to specify types of metadata. These can be distinguished between two main classes: structural/control metadata and guide metadata, as described in [13]. Structural metadata is used to characterize the structure of database objects such as tables, columns, keywords (from a business point of view) and indexes. Guide metadata is used to help humans find specific items and is usually expressed as a set of keywords in a natural language.
The SDM is aligned with a solution based on structural metadata, which allows a business user to dynamically characterize the data structure of the objects used by the software components of the proposed solution (e.g., Personal Skills, Training Catalogue and the corresponding Dashboards). This class is composed of a set of indicators providing a rich set of assessments, each one with a specific threshold to notify the user when the values being reported/analysed fall below the value defined for that threshold. This approach provides full flexibility, enabling the business user to maintain the structure of attributes that characterize each of the subcategories available in the system, configuring the layout according to their information needs (without any intervention from the IT department).

3.2.1 Skills Metadata Management

The maintenance of the structure of attributes that characterize each of the subcategories available in the system needs to be kept updated, and an appropriate approach for managing metadata needs to be set up.

¹ Cfr: Metadata (http://www.merriam-webster.com/dictionary/metadata), accessed on 07/05/2016.


system, either independently or within a repository, and the associated supporting processes (often to enable the management of content).
Within this paper the approach was to complement, whenever feasible, the description of the metadata info-structure with the presentation of a preliminary mockup representing the visual schema of the system user interface (UI). The mockup technique is used as a visual information blueprint, facilitating the communication between all team members, and is a powerful way to keep the design of the solution (i.e., the graphical specification of requirements) within the project boundaries - scope and objectives. In this way, it is possible to graphically express the metadata info-structure at an early stage of the project. As presented in Sect. 2, DSR was adopted because it follows a user-centric approach that encourages the use of the mockup technique for an active interaction with the end-user in the validation of functional and, in particular, non-functional requirements. This approach streamlines communication and promotes a good relationship with the end-user, encouraging him/her to participate actively at an early stage of the project specification. The purpose here is to learn from the feedback and, if necessary, iterate back to prior stages to improve the mockups. Voting within the team, or ideally with the end-users, can select the best drafts, which shall be taken as a basis for the final solution development.
The SDM intends to provide a Skills Management Metadata Framework able to assist business users in dynamically managing the data structures for Personal Skills, the Training Catalogue and the corresponding Dashboards. The configuration interface is composed of four main components: Lists, Parameters, Metrics and Indicators, displayed through a tab menu.
• Lists, which defines the set of metadata that represents the data structure of a list of values defined by the user, enabling business users to dynamically assign a list of values to each attribute of the Skills Parameter created for "Personal Skills/Training Catalogue", if required.
• Parameters, which defines the set of metadata that represents the data structure of a Skills Parameter, enabling business users to define the attributes to be presented as headlines in the "Personal Skills/Training Catalogue" workspace areas.
• Metrics, which defines the set of metadata that represents the data structure of a Metric defined by the user, enabling business users to dynamically assign a metric to each attribute created, if required. For each Metric, the business user can create one or several rules (i.e., expressions) and the corresponding values.
• Indicators, which defines the set of metadata that represents the data structure of an Indicator defined by the user, enabling business users to manage the list of indicators to be presented in the dashboard interface for both the "Personal Skills" and the "Training Catalogue". For each Indicator, the business user should define the type (e.g., Cost, Benefit or On-Target) and the corresponding rules (i.e., expressions). Depending on the type of Indicator, the business user should define the values for the corresponding fields of the "Thresholds" and "Messages" sections, as well as set the "Period" of validity for each Indicator.
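
To make the Indicators component more concrete, the following minimal Python sketch (an illustration only, not part of the A@W implementation; all class and attribute names are assumptions) shows how an indicator with a type, a rule, thresholds, messages and a validity period could be represented and evaluated.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Dict, Optional


class IndicatorType(Enum):
    COST = "Cost"            # lower values are better
    BENEFIT = "Benefit"      # higher values are better
    ON_TARGET = "On-Target"  # values should stay inside an interval


@dataclass
class Indicator:
    """Illustrative metadata record for one dashboard indicator."""
    name: str
    type: IndicatorType
    rule: str                     # expression evaluated against the employee's data
    thresholds: Dict[str, float]  # e.g. {"warning": 0.6}
    messages: Dict[str, str]      # message shown when a threshold is crossed
    valid_from: date
    valid_until: date

    def evaluate(self, value: float, today: date) -> Optional[str]:
        """Return the alert message to display, or None if no threshold is crossed."""
        if not (self.valid_from <= today <= self.valid_until):
            return None
        if self.type is IndicatorType.BENEFIT and value < self.thresholds["warning"]:
            return self.messages["warning"]
        if self.type is IndicatorType.COST and value > self.thresholds["warning"]:
            return self.messages["warning"]
        return None


# A hypothetical "Curriculum Evaluation" indicator for the Personal Skills dashboard.
ce = Indicator(
    name="Curriculum Evaluation",
    type=IndicatorType.BENEFIT,
    rule="avg(skill.relevance)",
    thresholds={"warning": 0.6},
    messages={"warning": "CV relevance is dropping; check the Training Catalogue."},
    valid_from=date(2016, 1, 1),
    valid_until=date(2016, 12, 31),
)
print(ce.evaluate(0.55, date(2016, 5, 7)))
```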


The use of conceptual domain models is an important part of this process, as it defines the Universe of Discourse (UoD) and facilitates the proper semantic integration of the information within a domain (i.e., business context), as well as the relationships between the concepts. The domain model is a high-level logical view of the system behaviour and structure. The conceptual approach provides a high-level view of what is expected; this means that it is not concerned with the way in which data or information are physically held or processed. It may even include concepts and data which are not currently implemented or stored exactly as modelled. As such, it represents information that will be shared across the research project and which the development team needs to implement, assuring in this way proper support to monitor performance and operational decisions. The Skills Management domain model was designed using UML notation (see Fig. 2), where the Mxxx code is used to identify classes that store and manage metadata. The Mxxx type classes represent the list of components described in Sect. 3.2.1 (i.e., Lists, Parameters, Metrics and Indicators) and are divided into two subtypes: MxxxH, which is used to identify header-related metadata, and MxxxL, which is used to identify lines-related metadata. When the business user needs to create a new subcategory (e.g., Education) for the category "Hard Skills" under the employees' competencies (Personal Skills), he/she should start by creating a "List". Whenever a new "List" is created, one new entry is created in the "MListH" class (with the header information, e.g., List Name) and as many entries in the "MListL" class as the number of values defined. The list of values created should be assigned to an attribute whenever the business user would like to restrict the input to this domain of values.

Fig. 2. Skills Management domain model using the UML notation (with Mxxx and Vxxx type classes).


The next step is to create the skills parameter that represents the new subcategory (e.g., Education) by using the "Parameters" component. Whenever a new "Parameter" is created, a new entry is created in the "MParameterH" class (with the header information – for instance a Tooltip) and as many entries in the "MParameterL" class as the number of attributes defined (e.g., Education is composed of three attributes); in addition, a new class (Vxxx type, e.g., VSS_Education) is automatically generated with the attributes defined. The Vxxx code is used to identify classes that store and manage Skills-Management related data and is divided into two subtypes: VSSxxx, used to identify Staff Skills (SS)-related data, and VTCxxx, used to identify Training Catalogue (TC)-related data. Each Vxxx type class has two attributes, Status and Action, which are automatically created and have pre-defined values. These two attributes are mandatory in order to evaluate the status of the employees' skills and to trigger an Action whenever a specific skill becomes invalid (e.g., an expired certification). The business user can dynamically adjust the metadata characterization of the elements (e.g., add new attributes) of the generated Vxxx type class, as required, and accordingly in the "MParameterL" class. In case the business user wants to dynamically assign a metric to each attribute of the new subcategory created (i.e., Education), he/she should use the "Metric" component. Whenever a new "Metric" is created, one new entry is created in the "MMetricH" class (with the header information – for instance a Metric Name) and as many entries in the "MMetricL" class as the number of combinations of "Expressions" and "Values" defined (e.g., a list of AQs with 4 selected values means 4 entries in the MMetricL class). The last step is to create indicators for the new subcategory created (i.e., Education) in the "Personal Skills" Dashboard. Whenever a new "Indicator" is created (e.g., Curriculum Evaluation – CE), one new entry is created in the "MIndicator" class (with the Indicator information – for instance an Indicator Name) and in the "Thresholds" class (with the Thresholds information, depending on the Indicator type selected, e.g., Cost/Benefit/On-Target). The domain model addresses the following goals: provide a conceptual framework of the things in the problem space; capture the most important concepts (business objects) in the context of the business; serve as a foundation for use case/workflow modelling; and help to focus on the semantic context, providing a noun-based glossary of terms. The domain model is used to identify concepts pertinent to the characterization of the information artifacts to be modelled in the software, which may include business rules and data relevant for a specific business context/model. A domain model uses the vocabulary of the business domain so that it can be used to communicate with all stakeholders in describing requirements, processes and rules.
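
The following toy Python sketch (an assumption-laden illustration, not the actual SDM code) mirrors the header/lines pattern described above: creating an "Education" Parameter produces an MParameterH entry, MParameterL entries for its attributes, and a generated VSS_Education class carrying the mandatory Status and Action attributes.

```python
from dataclasses import dataclass, field, make_dataclass
from typing import List, Optional


@dataclass
class MParameterH:
    """Header-related metadata: one entry per Skills Parameter."""
    name: str                        # e.g. "Education"
    tooltip: str = ""


@dataclass
class MParameterL:
    """Lines-related metadata: one entry per attribute of the Parameter."""
    parameter: str                   # header the line belongs to
    attribute: str                   # e.g. "Degree", "Institution", "Year"
    list_name: Optional[str] = None  # optional restriction to a List of values


def create_parameter(name: str, tooltip: str, attributes: List[str]):
    """Create the header entry, one line per attribute, and the generated Vxxx class.

    Mirrors the behaviour described in the text: defining the "Education" parameter
    produces MParameterH/MParameterL entries plus a VSS_Education class that carries
    the mandatory Status and Action attributes.
    """
    header = MParameterH(name=name, tooltip=tooltip)
    lines = [MParameterL(parameter=name, attribute=a) for a in attributes]
    vclass = make_dataclass(
        "VSS_" + name,
        [(a.lower(), str, field(default="")) for a in attributes]
        + [("status", str, field(default="valid")),
           ("action", str, field(default="none"))],
    )
    return header, lines, vclass


header, lines, VSS_Education = create_parameter(
    "Education", "Employee education records", ["Degree", "Institution", "Year"]
)
record = VSS_Education(degree="MSc", institution="Example University", year="2010")
```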

4 Case Study – Active@Work

Active@Work (A@W) is an EU project, funded by the AAL Programme (http://www.aal-europe.eu/our-projects/call-6/) and co-funded by the EU, with the main goal


of supporting senior employees in performing their job efficiently without risking their health condition. The A@W focus is an integrated approach to managing the negative impacts of aging, both physiological and psychological, while taking advantage of senior employees' valuable experience. The main scientific challenges for A@W can be summarized in three main aspects: (1) the management and extraction of useful information from vast amounts of environmental and physiological data, (2) the development of a customized system to influence behavioral change, and (3) the development of a solution flexible enough to be useful in differing working environments. In order to meet these challenges, A@W had to investigate: (i) how best to provide dynamic, accurate measurement and data transfer of useful information about the end-user, (ii) how best to use physiological and environmental data to improve senior employees' well-being and influence end-users to modify their behavior, (iii) how to arrive at the best business model to convert a promising technology into a useful and cost-effective product, and (iv) how to demonstrate and validate the new methodologies in two case studies in Spain and Belgium. From a technological perspective, the guiding principle of the A@W project is to follow a next-generation design architecture, creating a modular architecture which is key to supporting new kinds of business strategies. Figure 3 presents a high-level view of the project's modular architecture. For simplicity, only the data flow with the SDM is represented; for more detailed information we refer the reader to the A@W web site.

Fig. 3. Virtual Assistant Tool (VAT) architecture, source: http://www.activeatwork.eu

The SDM addressed in this paper contributes to one of the core components of the modular architecture, creating a user-friendly environment that makes it easy for employees to manage/develop their skills or invest in career options, promoting engagement and motivation to fit the organization's needs and expectations, where senior employees' knowledge and expertise can be an important asset.

5 Conclusions and Future Work

This paper was undertaken against the landscape of Skills Management, which reveals existing problems regarding the need to promote employees' self-awareness and auto-regulation, increasing their motivation and challenging them to acquire new skills/competencies or to invest in new career options. The opportunity to collaborate in the A@W project and the interest in Gamification, together with the VA and TM research areas, were relevant research challenges for the specification of the metadata info-structure that enables business users to dynamically manage the characterization of parameters. Additionally, the design of an innovative solution for the SDM, capable of demonstrating the existing articulation between these emerging research areas, was also very demanding.
Data Analysis (via the metadata info-structure supporting the data characterization/business model) and Human Interaction and Cognition (via Self-Awareness and Auto-Regulation) contributed to the specification of interactive dashboard interfaces with alert mechanisms and indicators to assist employees' awareness of their skills and performance, simultaneously promoting behavioral change and the adoption of good practices. The combination of Gamification techniques with TM approaches also contributed to fostering a competitive working environment. In this context, motivation, challenges and healthy competitiveness are seen as key elements for increasing employees' self-awareness and auto-regulation, helping to gradually create a motivated community committed to success factors relevant to both the employee and the organization. The achieved model provides baseline guidance for developing similar solutions. A future area of research, based on the outcome of this work, is deeply related to data science and to the implementation of an Intelligent Agent (IA), a software component using predictive modelling or machine learning techniques to improve the quality of the system's personalized recommendations. Based on automated learning, the IA can use algorithms to train the system and improve predictions based on the knowledge generated from the available data (an aspect to be explored within the field of data science), together with the metadata info-structure defined for the SDM. Through algorithms that can learn and make predictions based on the knowledge generated from the available data, the IA solution may improve employees' awareness of the risk associated with CV evaluation by: supporting automatic employee notifications; monitoring the achievement/assessment of the employee in each training action; pre-validating the reported information (list of training actions selected by the user); and alerting for inconsistencies (constraints due to precedence between some of the selected training actions).

References 1. Huotari, K., Hamari, J.: Defining gamification: a service marketing perspective. In: Proceedings of the 16th International Academic MindTrek Conference, pp. 17–22 (2012) 2. Vaishnavi, V.K., Kuechler Jr., W.: Design Science Research Methods and Patterns: Innovating Information and Communication Technology, p. 248. Auerbach Publications, Boca Raton (2007) 3. Hevner, A.: A three cycle view of design science research. Scand. J. Inf. Syst. 19(2) (2007)


4. Kuechler, B., Vaishnavi, V.: On theory development in design science research: anatomy of a research project. Eur. J. Inf. Syst. 17(5), 489–504 (2008) 5. Kasanen, E., Lukka, K., Siitonen, A.: The constructive approach in management accounting research. J. Manage. Account. Res. 5, 243–264 (1993) 6. March, S.T., Smith, G.F.: Design and natural science research on information technology. Decis. Support Syst. 15(4), 251–266 (1995) 7. Peffers, K., Tuunanen, T., Rothenberger, M.A., Chatterjee, S.: A design science research methodology for information systems research. J. Manage. Inf. Syst. 24(3), 45–77 (2007) 8. Dasgupta, S.: Technology and Creativity, 1st edn. Oxford University Press, New York (1996) 9. Purao, S.: Design research in the technology of information systems: truth or dare (2002) 10. Vaishnavi, V., Kuechler, W.: Design science research in information systems. Assoc. Inf. Syst. 1–12 (2004) 11. Takeda, H., Veerkamp, P., Tomiyama, T., Yoshikawa, H.: Modeling design processes. AI Mag. 11(4), 37–48 (1990) 12. Keim, D.A., Kohlhammer, J., Ellis, G., Mansmann, F.: Mastering the Information Age Solving Problems with Visual Analytics. Eurographics Association, Goslar (2010) 13. Bretherton, P.T., Singley, F.P.: Metadata: a user’s view. In: International Conference on Very Large Data Bases (VLDB), pp. 1091–1094 (1994)

Proposal for a Federation of Hybrid Clouds Infrastructure in Higher Education Institutions

Pedro Lopes1(✉) and Francisco Pereira2

1 ESTGL-IPV, Av. Visconde Guedes Teixeira, 5100-074 Lamego, Portugal
[email protected]
2 UTAD, Quinta de Prados, 5001-801 Vila Real, Portugal
[email protected]

Abstract. The Cloud Computing paradigm can provide Higher Education Institutions with a tool to reduce costs and boost the efficiency of current and future resource configurations within the institutions. This means more integration and interaction between datacenters, following a growing demand for computational power. To meet these goals, we propose in this article a set of guidelines and a supporting model developed to address the referred issues. The work was set out to define a "cookbook" of best practices for the conception, design and implementation of an academic hybrid cloud federation between Higher Education Institutions. The model relies on the integration of several private clouds within Higher Education Institutions, disclosing the whole as a single cloud with more aggregated features/resources than the individual clouds, keeping the essence and intrinsic autonomy of each institution, supporting platform heterogeneity, and providing the different level of relationship and integration that each institution wishes to undertake.
Keywords: Cloud computing · Cloud federation · Cloud infrastructure · Cloud interoperability · Datacenter · Academic cloud federation

1 Introduction

As in the business area, entities merge and share resources in order to obtain higher efficacy at lower cost. The reality of higher education institutions is somewhat different, but nevertheless these institutions are pressured to obtain better results and efficiency due to reductions in funding and more oversight from funding entities. Merging, as in the business case, with reductions in size, workforce or assets, is not desirable due to the scientific and social nature of these institutions. A middle ground between owning all the needed resources and a complete integration, from the computing point of view, should be achieved. To address the growing demand for computational power within the institutions and to maximize efficiency, and redundancy when possible, the Cloud Computing paradigm can be useful. A practical example of this is the recent implementation of a private cloud based on OpenStack by the Conseil Européen pour la Recherche Nucléaire (CERN), as a proof of concept [1].



CERN needs’ for computational power and to provide it to research teams in different areas resulted in a continuous engagement and contribution to the supported platforms [2]. CERN is currently one of the major contributors to the OpenStack community with the release of platform modules source code blocks [3]. Since Cloud Computing is a major research area at several international organizations, the definition for best prac‐ tices guides is important, following the recommendations of GÉANT that listed a set of standards for the creation of a federation of hybrid clouds at level of Higher Education Institutions [4]. Cloud Federation is a concept that points to the integration of different cloud providers services in a single pool, supporting fundamental characteristics of interoperability associated with migrating resources, redundancy features, and integra‐ tion of features and services complementary [5]. Cloud developers and researchers propose or implement a wide range of significant contributions within federation architectures for addressing and solving some of the issues in the implementation of the paradigm [5–12]. Armstrong et al. that describe the Cloud Scheduler, uses HTCondor to construct a batch system that implements distancing through the status of the Condor job. Buyya et al. propose InterCloud, an architecture oriented to the application scheduling service. One problem in the current cloud ecosystem is that most cloud providers or products offer their own APIs. Some are becoming “de facto” standards [13], but the heterogeneity between the platforms or the organizations’ approach view make it difficult to achieve interoperability and portability across clouds. To achieve this interoperability and portability it is important to establish a new standard, surrounding cloud infrastructures. Open standards such as OCCI, an protocol and API, provide all of management tasks with a strong focus on integration, portability, interoperability and innovation, presently in refinement [14]. In our view and also supported by the literature [15], there is a need for the stand‐ ardization of the various constituent layers in a cloud federation. Approaches like OCCI contribute to the evolution in design and development for the supported platforms or in the middleware integration between platforms.

2 Conception

The proposed model differs from other approaches from an organizational point of view, but also at the technological level, namely by employing open source platforms that are versatile, robust and enterprise-ready, providing and supporting standard APIs and protocols. The proposed model is intended to be deployed within the Portuguese Higher Education Institutions' panorama. We believe that it will create an organizational change, both at an organic and at a technological level. The model's scope can be extended to public services and include external partners outside of the Higher Education domain. Institutions can use the defined Cloud Federation without the investment associated with the implementation and maintenance of a datacenter. The availability and sharing of resources between institutions and their partners requires the existence of a mechanism for accounting resource usage that will allow the control and billing of the provided services. The core of the Academic Federated Cloud model relies on the integration of several Higher Education Institutions' private clouds in order to disclose the whole as a


single cloud with more features/resources than the individual ones. As a prerequisite to safeguard the intrinsic autonomy of each institution, the model supports different levels of commitment, taking into account the degree of relationship and integration that each institution wants to undertake. The Academic Federated Cloud based on the proposed model could be the genesis for the creation of a national distributed datacenter, physically housed in multiple geographic locations, using IaaS and PaaS as service models and allowing the migration of resources between clouds. A political/organizational decision regarding the adoption of this model, or a similar one, could provide IT cost savings to the country, allowing for example a reduction in the initial investment and maintenance costs of institutional datacenters. The model definition, from a design analysis, independently of the approach chosen by each higher education institution, should follow these guidelines:
1. Independence from the technical platform that implements the model, requiring that the chosen platform be open source, so that the necessary adjustments at configuration level and the integration of platforms can be implemented;
2. Allow integration and adaptation at the physical infrastructure level in each institution, i.e., the development and adoption of the model and respective platform should be as flexible as possible, allowing integration into institutions' datacenters and adaptation to their specifications and standard operations;
3. Allow the creation of a distributed management structure, i.e., the possibility of multiple levels of access and respective attributions of responsibilities, considerably mitigating the central management overhead in staff and services within each institution;
4. Allow the creation of users and groups, encouraging better organization and management of resources and of the respective access, adaptable to heterogeneous realities within the different institutions;
5. Group management for better organization in the distribution of resources by Cloud Zone (the clouds of each Higher Education Institution) and external to the federation, as well as within the federation;
6. Allow the use of multiple authentication sources (LDAP, X509, SSH, user and password, or token);
7. Allow the registry of events and the analysis of resource usage related to each user or group, in order to manage the resources more rationally and equitably;
8. Allow the creation of access control mechanisms, accounting and quotas for resources, complementary to the two previous points, facilitating administration in its different degrees of access;
9. Allow the classification and grouping of multiple homogeneous physical resources, i.e., allow the organization of clouds with heterogeneous resources into groups of more homogeneous equipment with the integration of the previous mechanisms of management, organization, accounting and access control;
10. Preserve the intrinsic autonomy of each institution, i.e., allow an institution to have access to the federation's resources without the need to integrate the federation and thus mandatorily share the institution's own resources.


Within the referred guidelines, it is important to consider the possibility of an administrative overhead, related to the management of the new features, that may burden the staff of the computer services of each institution. The implementation of the model should be attractive and simple by including distributed management, considering the proximity and the decomposition of responsibilities. This may help the resolution of problems that can arise locally. The model is based on the integration of several private clouds in order to form, or better, to disclose itself as, a single cloud of aggregated resources. In Table 1 we can observe the defined classification for each of the elements that compose the proposed model.

Table 1. Elements classification of the model
Clouds Zone – Clouds of each institution of higher education
External Clouds – Clouds external to institutions, public or private partners
Connections type A – Connecting links between Cloud Zones and external Clouds
Connections type B – Connecting links between the various Cloud Zones

Figure 1a shows how each element is integrated into the model and how they are interconnected to obtain the desirable Academic Federated Cloud. A Cloud Zone is composed by the resources and networking allocated by each individual institution for the defined zone.

Fig. 1. a. Model proposed for the Federated Cloud. b. Users and administration for the proposed model.

On the connections between each of the institutional clouds or with external clouds, we always assume the Internet connection of each institution. Figure 1b details the four (plus one) management levels of the Federation up to the end users. Each level consists of a group of users with an assigned set of permissions, based on its hierarchy. The federation is the aggregation of the various clouds, but is nonetheless managed and administered as a standard institutional cloud.


Federation Administrators have access to all of the Cloud Federation administration. Cloud Administrators are responsible for the administration of each institutional cloud, and Virtual Datacenter (VDC) Administrators are responsible for sets of virtual resources in the federation, either local or distributed through more than one institutional cloud. Finally, we have the end users that effectively interact with the allocated resources. For the end users, resource usage should be transparent, without the perception of one or more private clouds and independent of their physical location, but always with knowledge of the level of resource usage and attribution. For each of the other administrators (federation or cloud) there is an awareness of where the resources are physically located and of the level of usage. The allocation of resources via quotas is made available by the administrator (federation or cloud) of each institution to its cloud's VDC administrators, and these are then provided to the end users. The physical management structure is composed of three well-defined levels in terms of access and permissions: Federation Administrators, Cloud Administrators and VDC Administrators. This approach allows the management at each level to be specialized in its tasks. Conceptual Administrators are the only non-mandatory elements of the proposed model, having the responsibility to administer, in conceptual terms, all aspects of the integration of the Federation, namely defining what is shared and in which circumstances, what volume of resources is available for sharing, the definition of the management strategies, and finally the development of the Academic Federated Cloud. Federation Administrators are responsible for the management and integration of all the Cloud Zones, with the technical responsibility for the full implementation of the Academic Federated Cloud. Each Cloud Zone has at least one administrator that interacts regularly with the Cloud Administrators. Cumulatively, Cloud Administrators can accumulate the responsibilities inherent to Conceptual Administrators. Cloud Administrators are responsible for maintaining the physical infrastructure of the Cloud Zone(s) in which they are inserted, participating in the integration between the various clouds in strict partnership with the Federation Administrators, as well as providing support and technical clarification for the VDC Administrators. VDC Administrators are responsible for the management of virtual resources, creating users, applying quotas, and managing and supporting end-users. End-users use the resources made available under the terms set by their VDC Administrators. Given some limitations in smaller institutions, or institutions with a different organic organization, some staff may accumulate more than one position within the proposed model. At this stage, the cost/benefit of using a multilevel model of administration in such cases was not studied. With the cloud being transparent from a technological point of view, users may perceive a large range of available resources, but in any institutional cloud resources are finite and limited. The allocation of quotas for accessing the various available resources must be implemented from the base of the pyramid, with the Cloud Administrators allocating them to the VDC Administrators, which in turn provide them to the end users. It is important to articulate this administrative process with a user administration module, supporting features such as control and management of access to the various resources in a cloud, be it external or federated. It should be noted that the use of various authentication sources and the creation of group management are necessary for a better organization of the different users, with a full integration between the system


quotas and the access control sub-systems. The adoption of the proposed federated model can allow the creation of a solid technological base to support the needs of Higher Education Institutions, even in the deployment phase of the cloud, transmitting innovation, support, knowledge and confidence. Networking with other institutions emerges as a guarantee of support at critical moments, when faced with a lack of resources within the individual private cloud. This guarantee is important so that the institutions can make strategic decisions without being hostages to their own internal capacity limitations, making them more versatile.
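
The quota pyramid described above can be illustrated with the following toy Python sketch; it is a conceptual model of the delegation chain (Cloud Administrator to VDC Administrator to end user), not the quota subsystem of any particular cloud platform, and all names and values are illustrative.

```python
class QuotaPool:
    """A pool of resources (e.g., vCPUs, RAM) whose owner can delegate sub-quotas."""

    def __init__(self, owner, capacity):
        self.owner = owner
        self.capacity = dict(capacity)   # e.g. {"vcpu": 200, "ram_gb": 512}
        self.delegated = []

    def remaining(self):
        used = {k: sum(c.capacity.get(k, 0) for c in self.delegated) for k in self.capacity}
        return {k: self.capacity[k] - used[k] for k in self.capacity}

    def delegate(self, owner, request):
        """Carve out a sub-quota, refusing requests that exceed what remains."""
        remaining = self.remaining()
        if any(request[k] > remaining.get(k, 0) for k in request):
            raise ValueError(f"{self.owner} cannot grant {request}; only {remaining} left")
        child = QuotaPool(owner, request)
        self.delegated.append(child)
        return child


# A Cloud Administrator of one Cloud Zone delegates a quota to a VDC Administrator,
# who in turn provides a smaller quota to an end user.
zone = QuotaPool("Cloud Administrator (one Cloud Zone)", {"vcpu": 200, "ram_gb": 512})
vdc = zone.delegate("VDC Administrator", {"vcpu": 80, "ram_gb": 192})
user = vdc.delegate("end-user", {"vcpu": 4, "ram_gb": 8})
print(zone.remaining(), vdc.remaining())
```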

3 Implementation and Results

The platform choice for an implementation of this nature should not be attached to the specific characteristics of a given software package; nevertheless, we should evaluate the main features of each platform considered, based on the goals initially defined for the cloud architecture. Several open source platforms were evaluated to support the choice of the platform with the required features for the development of a prototype of the proposed model. Within the studied platforms [16–18], it was observed that all were based on the same set of fundamental architectural components, namely features such as access control, networking, security, instances and images, only differing in their implementation and architectural approaches, mainly depending on the view of each organization responsible for the development of each individual platform [19]. In a first step, as a proof of concept and in an academic context, the implementation and testing of a prototype was defined. This way it was possible to assess various services and aspects inherent to a cloud federation. For this purpose, the OpenNebula platform was chosen, based on its simplicity, flexibility, openness and also on knowledge of the platform's reliability. The platform has the support of an active and large community, with a growing deployment of instances in the business and academic worlds. OpenNebula was tested in the implemented prototype and it was possible to observe the previously referred features. OpenNebula also permits a gradual upgrade adapted to the growing needs of each institution, without requiring large investments in specialized expertise or in hardware resources [20]. The platform's administrative features are in accordance with guidelines 1, 2, 4, 5, 6, 7, 8 and 9. To test the feasibility of the model, it was necessary to configure and deploy at least two instances of private clouds, located in geographically separated and independent institutions. In this case, an institutional cloud was deployed at the School of Technology and Management of Lamego (ESTGL), an organic unit of the Polytechnic Institute of Viseu (IPV), and another institutional cloud was deployed at the University of Trás-os-Montes and Alto Douro (UTAD). In an early stage of this work, we analysed the state of the art concerning the different forms of sharing and the different levels of communication needed between cloud platforms regarding interoperability and portability issues [21]. The necessary interoperability and portability, including the utilization and unification of various technological standards, was provided by the OCCI API [14]. This will allow multiple platforms to communicate and


interact more transparently and in a simplified way. Such standards are not available at the platforms' core, presenting some vagueness about their path and, at the same time, different philosophies of development. Some constraints remain at the interoperability level between the platforms tested (OpenNebula, OpenStack and Eucalyptus). The implementation of OCCI follows guidelines 1 and 10. In terms of interoperability within the proposed model, we suggest that each cloud should be set as a hybrid cloud and simultaneously as a public cloud. In this approach, each institutional cloud is independent and can expand its resources using resources from other clouds, the other cloud being an institutional or public cloud. Simultaneously, the institutional cloud can allow access to its resources from other institutional clouds, being presented as a public cloud or, more correctly, as a semi-public cloud. The private cloud can provide its resources to other institutions whenever it considers appropriate. This latter approach emerged to address two important aspects: first, the interoperability at the federation level in terms of different CMPs, and secondly, to reduce the resistance of institutions to the integration and implementation of an Academic Federated Cloud with other Higher Education Institutions. In this context, the use of standards such as the OCCI API has become vital. It is not feasible to create a new API with all the compatibility constraints and interoperability between platforms, leading us down a path of development and differentiation that is not intended. We advocate for the standardization of an API that will allow independence from the CMP. Taking into account the analysis done during the implementation process, this federative approach brings a greater interdependency between institutions as they share their physical infrastructure and administration. In this way, analysing the federated implementation with the selected cloud platform, we can highlight the existence of two distinct access levels: administration of the supporting physical equipment and cloud platform management. In this federated approach, the administration is shared at the Academic Federated Cloud level. The implementation of the Academic Federated Cloud incorporates guideline 3 (distributed management). It is important to note that at the Federation level the authentication and authorization is based on the core module of the OpenNebula platform, even though a more sophisticated add-on can be used. Shibboleth, an open source federated identity solution, allows integration with other platforms such as OpenStack or CloudStack [22, 23]. At a later stage of consolidation and maturity of the model, this add-on can be incorporated into the prototype, providing ground for a wider set of authentication methods, in accordance with guideline 6. The prototype model was implemented with resources from three Cloud Zones. ESTGL housed two Cloud Zones in its datacenter and the UTAD datacenter housed one Cloud Zone, so it would be possible to observe the feasibility of the model with respect to deployment across geographically distributed institutions. At present, the datastore with the virtual machines is based on the Network File System (NFS). Each Cloud Zone has access to its own NFS datastore and all Cloud Zones have access to one shared NFS datastore, enabling VM migration in the shared NFS between all Cloud Zones. At the same time, the Elastic Compute Cloud (EC2) control and management module of OpenNebula was set up, which allows the control and management of virtual machines through an interface similar to Amazon EC2. In this way each Cloud Zone can be managed as a public cloud, allowing resources to be made available to external partners.
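
The sketch below illustrates how a client could target such an EC2-compatible endpoint exposed by a Cloud Zone; the endpoint URL, credentials and image/instance-type identifiers are placeholders, and whether a given deployment of OpenNebula's EC2-compatible service accepts the client's authentication and signature scheme has to be verified against the actual installation.

```python
import boto3

# Placeholder endpoint and credentials for an EC2-compatible service
# (e.g., the EC2 interface exposed by a Cloud Zone); adapt to the real deployment.
ec2 = boto3.client(
    "ec2",
    endpoint_url="https://cloud.example.edu:4567",
    aws_access_key_id="<username>",
    aws_secret_access_key="<password>",
    region_name="default",
)

# Launch a VM from a registered image and list the running instances.
ec2.run_instances(ImageId="ami-00000001", MinCount=1, MaxCount=1, InstanceType="m1.small")
for reservation in ec2.describe_instances().get("Reservations", []):
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])
```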


The prototype, currently in use in an academic pre-production environment, did not require regular maintenance and tracking. We can conclude that the Cloud Zone services implemented transmit a significant feeling of reliability for a technology of this kind. Concerning redundancy, two aspects were considered: the first at the equipment level of a Cloud Zone with virtualization, and the second at the VM level. The implementation of hooks allowed the triggering of scripts associated with the changing states of resources, whether physical virtualization equipment or VMs used in the Cloud Zone. With the implementation of automation procedures, it is possible to ensure higher reliability and system availability. When a piece of equipment goes into a fault mode, the VMs supported by it are migrated or recreated on other available physical equipment. A time of about 30 s was required for the platform scheduler to detect the equipment fault and activate the hook associated with the fault. This time can depend on the number of VMs being processed at the time. In the prototype, the scheduler was set to execute every 30 s and then run the hook associated with the fault. We observed interruptions of 2 to 3 min in the access to a VM disk over a 1 Gb connection when the virtualization equipment was disconnected from the infrastructure. The accounting records for the services provided to users in a Cloud Zone are independent from those of other Cloud Zones. Even when a Cloud Zone is set as a public cloud, the accounting module works seamlessly, allowing each Cloud Zone to define the cost values to be charged for each of its available resources. This feature supports guidelines 8 and 10. For the migration of VM images within the Cloud Federation, we resorted to exporting and importing VM images between datastores. In the deployed scenario, it was possible to migrate a 4 GB VM in 58 min on average, but there were cases where the observed values extended to 1 h 30 m. These results can arise from two aspects. First, the prototype is based on the existing infrastructure in the two institutions, and the speed of the ESTGL public network is 100 Mbps full duplex; the traffic generated by the institution's services and users competes with the traffic generated by the Academic Federated Cloud. On the other hand, in this phase we did not include containers in the federation to mitigate the transfer of VMs between the different Cloud Zones in the Academic Federated Cloud, which could provide a more efficient transfer method [24, 25].
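
A back-of-the-envelope calculation (added here as an illustration, not part of the original measurements) makes the network bottleneck explicit: even ignoring protocol overhead and competing traffic, a 4 GB image needs several minutes on a 100 Mbps link, so the 58 min average points to heavy contention on the shared institutional connection.

```python
# Rough lower bound for moving a 4 GB VM image over the 100 Mbps institutional link,
# ignoring protocol overhead and competing traffic.
image_gb = 4
link_mbps = 100

ideal_seconds = image_gb * 8 * 1000 / link_mbps      # ~320 s, i.e. about 5.3 minutes
observed_minutes = 58

print(f"ideal transfer time : {ideal_seconds / 60:.1f} min")
print(f"observed average    : {observed_minutes} min "
      f"(~{observed_minutes * 60 / ideal_seconds:.0f}x the ideal bound)")
```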

4 Conclusions

At this stage, we can say that the proposed concept for a federated computational structure within Higher Education Institutions is a valid option with promising results, but with the awareness that more research is needed to mitigate some aspects and with improvements in mind. The model emphasizes the creation of a strong technological base that supports most of the computational and respective organizational needs of Higher Education Institutions, in a broader scope, even within the implementation phase of the Academic Federated Cloud. This can provide innovation, support, knowledge and confidence [26]. Collaboration with other institutions guarantees the required support at critical moments when faced with a lack of resources within the private cloud. This confidence is important for the institutions in order to support their strategic decisions without being held back or limited by their own internal capacity, making the Academic Federated Cloud more versatile and flexible.


We noticed the existence of a set of related research areas that can take advantage of a distributed infrastructure, as suggested in the proposed model, making the Academic Federated Cloud a fertile ground for development when a wider range of resources is available to researchers. Another aspect is the research around the infrastructure itself, in search of higher quality and service availability, covering a range of research areas in computer science. Considering the results obtained with the migration of VMs between Cloud Zones, we reinforce the need to have a dedicated network interconnection for the Cloud Zones. This can mitigate delays and improve interaction at the level of the interconnection between the Cloud Zones, such as the two defined in the prototype. An implementation based on a high-throughput dedicated network is correlated with the need to provide a higher quality of service. As future work, in studying a full implementation of an Academic Federated Cloud, given its distributed technological nature, it is vital to design and implement mechanisms that can support the required high availability. Within our review of the implementation carried out, we identified a set of priorities that stressed the urgent need to draw up a notebook of good management practices and, in parallel, data extraction mechanisms over the logs of operations made by all stakeholders in the system, to ascertain metrics in different areas. Also being investigated, as a paradigm based on its genesis as a geographically distributed system, is the study and implementation of cloud recovery and redundancy mechanisms, with the resources of the federation's constituent clouds based on slave replica mechanisms scattered through the various Cloud Zones of the Academic Federated Cloud [27]. The normal backup operations would be partially replaced by "on the fly" replication of all the multiple components inherent to a full implementation of an Academic Federated Cloud.

References 1. Bell, T.: CERN Uses OpenStack. https://www.openstack.org/user-stories/cern/ 2. Sverdlik, Y.: CERN’s OpenStack Cloud to Reach 150,000 Cores (2015). http:// www.datacenterknowledge.com/archives/2014/11/07/cerns-openstack-cloud-to-reach-150000cores-by-2015/ 3. Verge, J.: CERN Contributes Identity Federation Code to OpenStack. http:// www.datacenterknowledge.com/archives/2014/06/30/rackspace-cloud-user-cern-contributescode-to-openstack/ 4. GÉANT: Campus Best Practice. http://services.geant.net/cbp/Pages/Home.aspx 5. Kurze, T., Klems, M., Bermbach, D., Lenk, A., Tai, S., Kunze, M.: Cloud federation. In: CLOUD COMPUTING 2011: The Second International Conference on Cloud Computing, GRIDs and Virtualization, pp. 32–38. IARIA, Rome (2011) 6. Armstrong, P., Agarwal, A., Bishop, A., Charbonneau, A., Desmarais, R., Fransham, K., Hill, N., Gable, I., Gaudet, S., Goliath, S., Impey, R., Leavett-Brown, C., Ouellete, J., Paterson, M., Pritchet, C., Penfold-Brown, D., Podaima, W., Schade, D., Sobie, R.J.: Cloud Scheduler: a resource manager for distributed compute clouds. p. 10 (2010) 7. Moreno-Vozmediano, R., Montero, R.S., Llorente, I.M.: IaaS cloud architecture: from virtualized datacenters to federated cloud infrastructures. J. Grid Comput. 11, 253–260 (2012)


8. Wu, H., Ren, S., Garzoglio, G., Timm, S., Bernabeu, G., Kimy, H.W., Chadwick, K., Jang, H., Noh, S.Y.: Automatic cloud bursting under FermiCloud. In: Proceedings of International Conference on Parallel Distributed Systems – ICPADS, pp. 681–686 (2013) 9. Celesti, A., Tusa, F., Villari, M., Puliafito, A.: How to enhance cloud architectures to enable cross-federation. In: 2010 IEEE 3rd International Conference on Cloud Computing, pp. 337– 345 (2010) 10. Sotomayor, B., Montero, R.S., Llorente, I.M., Foster, I.: An open source solution for virtual infrastructure management in private and hybrid clouds. In: Internet Computing, vol. 13, pp. 14–22. IEEE (2009) 11. Buyya, R., Ranjan, R., Calheiros, R.N.: InterCloud: utility-oriented federation of cloud computing environments for scaling of application services. In: Hsu, C.-H., Yang, L.T., Park, J.H., Yeo, S.-S. (eds.) ICA3PP 2010. LNCS, vol. 6081, pp. 13–31. Springer, Heidelberg (2010). doi:10.1007/978-3-642-13119-6_2 12. Ferrer, A.J., Hernández, F., Tordsson, J., Elmroth, E., Ali-Eldin, A., Zsigri, C., Sirvent, R., Guitart, J., Badia, R.M., Djemame, K., Ziegler, W., Dimitrakos, T., Nair, S.K., Kousiouris, G., Konstanteli, K., Varvarigou, T., Hudzia, B., Kipp, A., Wesner, S., Corrales, M., Forgó, N., Sharif, T., Sheridan, C.: OPTIMIS: a holistic approach to cloud service provisioning. Future Gener. Comput. Syst. 28, 66–77 (2012) 13. Amazone: Amazone EC2 API. http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ Welcome.html 14. The Open Cloud Computing Interface. http://occi-wg.org/ 15. García, Á.L., del Castillo, E.F., Fernández, P.O.: Standards for enabling heterogeneous IaaS cloud federations. Comput. Stand. Interfaces 47, 19–23 (2016) 16. OpenStack Foundation: OpenStack Open Source Cloud Computing Software. http:// www.openstack.org/ 17. Eucalyptus Systems: Eucalyptus. https://www.eucalyptus.com/ 18. OpenNebula Systems: OpenNebula. http://opennebula.org/ 19. Sempolinski, P., Thain, D.: A comparison and critique of eucalyptus, opennebula and nimbus. In: 2010 IEEE Second International Conference on Cloud Computing Technology Science, pp. 417–426 (2010) 20. RedIRIS: vcTECNIRIS-36. http://www.rediris.es/tecniris/archie/vcTECNIRIS-36.html 21. Lopes, P., Costa, A., Pereira, F.: Hybrid clouds infrastructures in higher education institutions – a proof of concept. In: EUNIS 2016 Proceedings, pp. 30–33, Thessaloniki, Greece (2016) 22. Shibboleth plugin for Opennebula. http://community.opennebula.org/ecosystem:studicloud 23. Shibboleth authentication plugin for Openstack. https://github.com/burgosz/openstackhorizon-shibboleth 24. Wood, T., Shenoy, P., van der Merwe, J., Ramakrishnan, K.: CloudNet: dynamic pooling of cloud resources by live WAN migration of virtual machines. SIGPLAN Not. 46, 121–132 (2011) 25. Suen, C.-H., Kirchberg, M., Lee, B.S.: Efficient migration of virtual machines between public and private cloud. In: 2011 IEEE Third International Conference on Cloud Computing Technology Science, pp. 549–553 (2011) 26. IA Forrester Research: How Organizations Are Improving Business Resiliency With Continuous IT Availability. pp. 1–8 (2013) 27. Swanson, M., Wohl, A., Pope, L., Grance, T., Hash, J., Thomas, R.: Contingency Planning Guide for Information Technology Systems. NIST Special Publications, p. 800 (2002)

Radio Access Network Slicing in 5G

Jinjin Gong1, Lu Ge2, Xin Su2, and Jie Zeng2(✉)

1 Broadband Wireless Access Laboratory, Chongqing University of Posts and Telecommunications, Chongqing, China
2 Tsinghua National Laboratory for Information Science and Technology, Research Institute of Information Technology, Tsinghua University, Beijing, China
[email protected]

Abstract. Network slicing is one of the key technologies for the 5th Generation (5G). It enables operators to construct network slices with similar qualities to dedicated stand-alone networks, but to realize them on a common physical platform. Meanwhile, the isolation among logical network slices can keep them from the negative impacts of other network slices. In this paper, we give an overview of radio access network (RAN) slicing, and present some viewpoints on three categories of network slices. Moreover, an approach which dynamically creates RAN slices to meet specific business and service requirements is proposed. It not only optimizes performance and maximizes resource utilization, but also shortens the creation time of slices.
Keywords: 5G · Network slicing · Radio Access Network · Core network · Virtualization

1 Introduction

In future networks, abundant application scenarios whose requirements in capacity, bandwidth, latency, reliability, etc., are distinct from each other will emerge [1]. In 5G, it is difficult for a single network to satisfy all of these requirements, or the costs of construction and operation become unacceptable for operators. Network slicing provides a solution to these problems. It constructs different logical networks on a unified physical infrastructure via virtualization technologies [2]. A logical network is a network slice that contains a number of network functions, a network topology, communication links, etc., and slices are logically independent of each other. The 3rd Generation Partnership Project (3GPP) has clearly stated that the next generation system shall support network slicing [3]. At the moment, 3GPP focuses on core network (CN) slicing, whereas RAN slicing is still under discussion. Specific impacts and requirements on the RAN in 5G urge slicing the RAN like the CN, and RAN slicing is expected to create new business opportunities. In the RAN slicing solution, the slices run on a wireless platform

J. Gong—China's 863 Project (No. 2015AA01A706), the National S&T Major Project (No. 2015ZX03002004), Science and Technology Program of Beijing (No. D151100000115003).


which contains the radio hardware and a baseband resource pool [4]. It allows operators to construct a logical network with similar qualities to a dedicated stand-alone network, but realizes it by using shared resources such as spectrum, sites and transport. Allowing efficient resource sharing between RAN slices can maximize utilization and achieve a high level of energy and cost efficiency. Virtualization is the key technology of RAN slicing. References [5, 6] introduce long term evolution (LTE) virtualization, with a hypervisor taking responsibility for virtualizing an eNodeB into several virtual eNodeBs. The approach in [7] is able to dynamically allocate network resources to different slices in order to maximize the satisfaction of users. The approaches above share a unified physical infrastructure, but no common functions are shared between virtual eNodeBs. The remainder of this paper is organized as follows: Sect. 2 gives an overview of RAN slicing, Sect. 3 describes an approach to implement RAN slicing, and Sect. 4 concludes the paper.

2 RAN Slicing Overview

Three categories of network slices are envisioned to be supported by 3GPP: massive machine type communications (MMTC), ultra-reliable and low latency communications (URLLC), and enhanced mobile broadband (EMBB) [8]. MMTC (e.g., slice 1 in Fig. 1) represents a kind of scenario with a high requirement in connection density, like a smart city. In this scheme, direct communication between terminals can avoid long-distance transmission between base station and terminal, and effectively reduce energy consumption. In addition, new multiple access technologies can exponentially increase the system's device connection capability via overlaid information transmission for multiple users.


Fig. 1. A framework of RAN slices.

URLLC (e.g., slice 2 in Fig. 1), covering for example autonomous vehicles and remote control, requires ultra-low latency and ultra-reliability. In this scenario, separation between the control


plane and data plane is optional. The control plane optimizes the data transmission path, and the data plane is responsible for forwarding data at high speed. Meanwhile, a shorter frame structure can be applied and the signaling procedure optimized. In order to improve transmission reliability, advanced modulation and coding and retransmission mechanisms can be utilized as well. In contrast, EMBB scenarios (e.g., slice 3 in Fig. 1) focus on high broadband, such as high-resolution video, virtual reality, etc., with high traffic density and high speed for the user experience. Decoupling the control plane from the data plane is required in this scenario, since the data plane is concentrated on forwarding data at high speed. Moreover, the ultra-dense network (UDN), which reuses frequency resources effectively and greatly enhances the efficiency of frequency reuse per unit area, is suitable for this slice. In addition, multiple low-density parity check codes, new bit mapping technology and ultra-Nyquist modulation can be used. According to the application scenarios' requirements, network slicing customizes the required network functions and flexible networking to optimize the business process and data routing. The network has the ability to dynamically allocate resources to improve the utilization of network resources.

3 An Approach to Implement RAN Slicing

A RAN slice contains access coverage and signal processors, which can be shared with another RAN slice. Access coverage includes the different access forms, such as macro cell, small cell, micro cell, etc. The coverage of different slices can be established through a combination of those. The signal processors, as a sub-slice, are the critical part of a RAN slice. Each RAN slice has a different signal processors sub-slice, according to the type of application scenario.


Fig. 2. RAN slicing implementation via virtualization.

In fact, the requirements of different application scenarios in resources utilization, isolation and latency are different. Each base station contains several protocols which can implement different functions. As showed in Fig. 2, we divide functions into control


plane functions and data plane functions. Moreover, the control plane functions are divided into slice-specific functions and common functions which can be shared among sub-slices. The hypervisor is responsible for virtualizing physical resources into virtual computing resources, virtual storage resources and virtual network resources, and for allocating physical/virtual resources to sub-slices. The functions of the sub-slices are distinct from each other. A RAN slice is a logical network that contains all the functions needed, and can provide the corresponding network services. As mentioned above, each RAN slice has a different sub-slice. If the application scenarios of sub-slice 1 and sub-slice 2 are not strictly isolated and require high speed, then they can share some common functions, and the control plane is split from the data plane. This helps to shorten the time of sub-slice creation and improves resource utilization. However, the control plane and data plane of sub-slice 3 can be tightly coupled. The sub-slices of RAN slices run on common physical hardware, and the network functions of sub-slices are customizable, scalable, sustainable, etc. Which functions and technologies should be deployed in each slice was already introduced in Sect. 2.
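
The function-sharing idea can be sketched with the following toy Python model (an illustration of the concept, not an implementation of any 3GPP-defined mechanism): non-isolated sub-slices reuse the common control-plane functions, while an isolated sub-slice keeps a fully dedicated, tightly coupled control and data plane.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Common control-plane functions instantiated once and shared by sub-slices that
# do not require strict isolation.
COMMON_CONTROL_FUNCTIONS = {"f1": "common function 1", "f2": "common function 2",
                            "fn": "common function n"}


@dataclass
class SubSlice:
    name: str
    isolated: bool                                   # strict isolation => nothing shared
    specific_control: List[str] = field(default_factory=list)
    data_plane: List[str] = field(default_factory=list)

    def control_plane(self) -> Dict[str, str]:
        """Slice-specific functions plus, for non-isolated sub-slices, the shared ones."""
        functions = {f: "dedicated" for f in self.specific_control}
        if not self.isolated:
            functions.update({f: "shared" for f in COMMON_CONTROL_FUNCTIONS})
        return functions


# Sub-slices 1 and 2 share the common control plane and split control from data;
# sub-slice 3 keeps a tightly coupled, fully dedicated control and data plane.
s1 = SubSlice("sub-slice 1", isolated=False, specific_control=["f3"], data_plane=["forwarding"])
s2 = SubSlice("sub-slice 2", isolated=False, data_plane=["forwarding"])
s3 = SubSlice("sub-slice 3", isolated=True,
              specific_control=["f1", "f2", "fn"], data_plane=["forwarding"])

for s in (s1, s2, s3):
    print(s.name, "->", s.control_plane())
```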

4 Conclusion

In this paper, we introduce the concept of network slicing. Network slicing is the foundation for the 5G network to support diverse application scenarios. According to the characteristics of the three categories of network slices, we offer some design perspectives. Meanwhile, we propose an approach to implement RAN slicing. The signal processors, as a sub-slice, are the critical part of a RAN slice; each sub-slice's control plane can be split from its data plane and can share common functions. This is beneficial to shorten the time of slice creation, optimize performance and maximize resource utilization. RAN slices can be customized and scaled according to the requirements.


Remote Sensing for Forest Environment Preservation

George Suciu1(✉), Ramona Ciuciuc1, Adrian Pasat2, and Andrei Scheianu2

1 Telecommunication Department, University Politehnica of Bucharest, Bd. Iuliu Maniu 1-3, 061071 Bucharest, Romania
{George,Ramona.Ciuciuc}@beia.ro
2 Research Department, Beia Consult International, Str. Peroni 16, 041386 Bucharest, Romania
{Adrian.Pasat,Andrei.Scheianu}@beia.ro

Abstract. In the past few years, wireless sensors and wireless sensor networks (WSN) have become very popular in the scientific community, which is why increasingly small devices are available for use in numerous applications. However, the use of these sensors and the capability of building a wireless sensor network have not only raised many questions, but have also led to numerous proposed methods to solve certain problems. In this paper, we evaluate remote sensing solutions and technologies with the aim of proposing a conceptual architecture for an intelligent forest monitoring system. The proposed solution is designed to implement and integrate different types of sensors and WSN for monitoring at the acoustic level and warning in case of events with potentially destructive effects on the forest environment.

Keywords: Remote sensing · Wireless Sensor Networks · WSN · Forest environment

1 Introduction

Remote sensing solutions are usually based on a solid architecture which contains a wireless sensor network of ground sensors, a central server, radio and wired communications, and an intelligent information system layer. In this paper, we propose three innovative components for a remote sensing system: energy efficiency in forest environments for increasing the lifetime of the sensor network, a statistical model of forestry risk factors and threats for the prediction and confirmation of an event, and collaborative automation of system resources and intervention services in case of certain events. We address these characteristics through the analysis of related work in the state-of-the-art and verify the best choice. System reliability and security are based on appropriate measures of physical protection of the components exposed to corrosion. Also, security aspects are surveyed with regard to the structure of the communication layers. We examine these features in Sect. 2. Section 3 documents a survey of Environmental Sound Recognition solutions. Further, in Sect. 4 we conduct an analysis of network topology, the adaptation of sensors and radio propagation, and mitigate the constraints of a forest environment using the proposed software workbench for threat early warning, while Sect. 5 draws the conclusions.

© Springer International Publishing AG 2017
Á. Rocha et al. (eds.), Recent Advances in Information Systems and Technologies, Advances in Intelligent Systems and Computing 570, DOI 10.1007/978-3-319-56538-5_23

2 Related Work

In [1] the authors present a secure and reliable communication standard, WirelessHART, a self-healing and self-organizing protocol which can find its neighbors and establish paths with those neighbors through channel hopping, synchronization information and signal strength measurement. WirelessHART uses the 2.4 GHz frequency band, a free unlicensed portion of the spectrum. Channel hopping can be used to avoid interference among frequency bands, which enhances reliability. Security in the WirelessHART standard can be divided into three levels: End-to-End, Per-hop, and Peer-to-Peer. Security in WirelessHART is enforced in the Network Layer and the Data Link Layer. The Data Link Layer provides hop-to-hop security between two devices using the Network key, while the Network Layer enforces end-to-end security between source and destination using a session key(s) and/or join key; symmetric key encryption is used for secure communication. Figure 1 represents a generic WirelessHART network architecture. The network supports two topologies: a direct connection between device and gateway (star topology) and connections over multiple hops (mesh topology).

Fig. 1. WirelessHART network architecture
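As a rough illustration of the two-layer keying just described (and not an implementation of the actual WirelessHART protocol), the following sketch, assuming Python and the cryptography package, protects a payload end-to-end with a session key and then wraps the resulting frame per hop with a network key; key sizes, nonce handling and labels are simplifications.

```python
# Illustrative two-layer symmetric encryption; AES-CCM is used here only as an example cipher.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

network_key = AESCCM.generate_key(bit_length=128)   # shared by neighbouring devices (per-hop)
session_key = AESCCM.generate_key(bit_length=128)   # shared by source and destination (end-to-end)

def end_to_end_encrypt(payload: bytes):
    nonce = os.urandom(13)
    return nonce, AESCCM(session_key).encrypt(nonce, payload, b"net-layer")

def per_hop_encrypt(frame: bytes):
    nonce = os.urandom(13)
    return nonce, AESCCM(network_key).encrypt(nonce, frame, b"link-layer")

# Source: protect the measurement end-to-end, then wrap it for the first hop.
n1, net_pdu = end_to_end_encrypt(b"temperature=23.5")
n2, link_frame = per_hop_encrypt(net_pdu)

# An intermediate hop can unwrap and re-wrap the link layer, but only the
# destination holding the session key can read the payload.
assert AESCCM(session_key).decrypt(
    n1, AESCCM(network_key).decrypt(n2, link_frame, b"link-layer"), b"net-layer"
) == b"temperature=23.5"
```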

In [2], a forest fire detection system based on a WSN is presented, its scheme being implemented with MICA motes with GPS attached. The system is designed to collect different types of environmental measurements, like humidity, temperature and pressure, during a fire. The data gathered is sent to a base station, where a database server stores all the information. This particular system integrates the Crossbow MICA2 mote and TinyOS programmed in the nesC language, software specifically developed for embedded devices. To test the system, 10 motes were used in two burns in California. The output was satisfactory since the motes were capable of collecting and sending accurate information before they were damaged by the fire.

In [3], a protection solution for sensor nodes (motes) of wireless sensor networks, called Firesensorsock, is proposed. The system was designed to avoid the risk of losing data when the nodes are handling forest fire measurements. Firesensorsock is a personalized protection system, which thermally insulates the sensor but still allows it to sense the correct temperature. The authors aim to build a wireless sensor network that can withstand the action of fire. After testing Firesensorsock, the results proved that the presence of a fire can be detected inside the protection. Also, the experiment showed that the system is efficient in the open air, hence it is possible to detect a fire and track its evolution.

The objective of [4] is to build a system using a wireless sensor network randomly placed over a forest domain. The sensors are supposed to detect a fire within an approximate interval of 10–15 min and then send an alert to a central server. Since the sensors are provided with small wireless-range transmitters, the information is passed from one sensor to another until the data reaches the sink node. When the measurements are received by the sink node, routing processing is performed to check if the fire represents a danger. If the result is affirmative, the location of the fire is determined and an alarm is sent to the fire department. The alarm signal contains exact information regarding the location, temperature and fire spread speed, so the firefighters are able to analyze the gravity of the situation. A decision-making process is described which consists of tracking the propagation of the fire and analyzing the behavior and logic behind it. The system was tested using 50 nodes in 4 different scenarios. The advantages provided by the system are: simple temperature sensors can be used in an efficient and inexpensive way instead of complicated or specialized devices; it helps in distinguishing between different types of situations; there is a low possibility of false alerts; and it provides useful information for the firefighters to intervene efficiently.

The authors in [5] propose a wildfire early detection system based on a radio-acoustic sounding system (RASS). The system performs remote temperature measurements in a forest, having the ability to collect air temperature information, including temperature variation due to fire. The system is based on the idea that variation of the air temperature above the trees can be detected if a map with thermal information is created. The two main device types are the acoustic sources, which are in charge of generating sound waves in certain conditions, and the radars with watchtowers, which, unlike the acoustic sources, are continuously scanning for acoustic waves. The results obtained after testing the system in different scenarios showed that it can be efficient when used above the trees. Also, the authors highlighted the fact that the system can be used with other existing projects to achieve better results.

In [6] the authors present a real-time forest fire detection approach based on data gathering and processing in a WSN. In this proposed system, the sensors placed in the forest collect information and then send it to their respective cluster nodes, hence forming a neural network. The neural network processes the received data and then provides a weather index, which measures the probability of a fire being caused by the weather. The index is sent further to a manager node which analyzes the received indexes and decides whether there is a threat or not.
After performing simulations of the system, the authors concluded that their approach is efficient and that it can be applied in many other sensor network applications.


In [7], a framework for designing the WSN of a forest monitoring system is described. The proposed framework is intended to be an efficient way of rapidly detecting fires with small energy usage. In normal conditions, the system sends various measured data, but in the case of a fire it switches to an alert mode and acts much faster, transmitting the information at higher speed. Therefore, in normal conditions the sensors consume a small quantity of energy. Also, the approach is flexible, the system being able to adapt to different scenarios (season, terrain, etc.). The authors aim to achieve an efficient forest fire detection system considering the following: energy efficiency, early detection and accurate localization, forecast capability, and adaptation to harsh environments. After the proposed system was tested in terms of effectiveness and energy consumption, it was observed that accurate results can be obtained without drastic effects from the small amount of energy consumed.

A low-cost and low-power-consumption forest monitoring method is approached in [8]. The proposed system integrates a Microchip PIC18F4685 MCU and a Silicon Laboratories Si4432 ISM band transceiver. When designing the system, the authors considered various characteristics such as the parameters to be measured, time and location, and the architecture of the WSN. The system includes three types of nodes: sensor nodes (SN), relay nodes (RN) and base nodes (BN). The RN receive status alerts from the SN at certain intervals; otherwise, the RN send an alert message to the alert center via GSM. The sensors used in the system are smoke detection and temperature sensors. To keep the computation and power low, the authors created the network in a star topology. The performed tests revealed that the WSN can detect and monitor forest fires while using power- and cost-efficient hardware.

3 Methods for Environmental Sound Recognition

As mentioned before, our research is aimed at implementing a unique object monitoring system (such as vehicles, electrical machinery or motors) by identifying them with analysis of sound and/or vibration measurement data. The expected results are maps which will contain the location and the frequency of events, databases, schemes of geospatial response to events, applications for storing, processing and merging data in a functional model of the system. We documented this task by analyzing the method described in [9]. The authors conducted an in-depth survey on recent developments in the Environmental Sound Recognition (ESR) field. They have divided the ESR methods into two types: stationary and non-stationary techniques. The stationary ESR techniques are dominated by spectral features. Although these features are easy to compute, there are limitations in the modeling of non-stationary sounds. The non-stationary ESR techniques obtain features derived from the wavelet transform, the sparse representation, and the spectrogram. An experimental comparison of multiple methods has been performed, with the purpose of gaining more insights. We can review the content of the ESD (Environmental Sound Database) in Table 1, and selected methods for comparison are presented in Table 2.


Table 1. Environmental Sound Database (ESD)

(C1) AirplaneFlyBy 660       (C14) DogsBarking 577         (C27) Rubbing 500
(C2) AirplaneInterior 662    (C15) Fans/Vents 585          (C28) Snoring 459
(C3) ApplauseCheer 424       (C16) FireCrackle 697         (C29) Streams 1194
(C4) BabyCryFuss 842         (C17) Footsteps 786           (C30) Thunder 412
(C5) Bees/Insects 514        (C18) GasJetting 269          (C31) TrainInterior 980
(C6) Bells 456               (C19) GlassBreakCrash 715     (C32) Vacuum 524
(C7) Birds 1189              (C20) HelicopterFlyBy 916     (C33) Waterfall 792
(C8) BoosOhsAngry 621        (C21) MachineGuns 526         (C34) WhalesDolphins 510
(C9) CatsMeowing 392         (C22) MetalCollision 1000     (C35) Whistle 300
(C10) CeramicCollision 800   (C23) Ocean 322               (C36) Winds 956
(C11) Clapping 829           (C24) PaperTearCrumble 351    (C37) WoodCollision 1187
(C12) Coins 616              (C25) PlasticCollision 550
(C13) Crickets 550           (C26) Rain 694

Table 2. Selected methods for comparison

Label | Method                      | Feature                          | Classifier | Dimensionality reduction/feature selection | Stationary (S)/Non-Stationary (NS)
M1    | N/A                         | MFCC                             | SVM        | No                 | S
M2    | Karbasi et al. [4]          | SDF                              | K-NN       | DCT                | S
M3    | Valero and Alias [5]        | NB-ACF                           | SVM        | No                 | S
M4    | Han and Hwang [6]           | Gammatone Wavelet                | SVM        | No                 | NS
M5    | Umapathy et al. [7]         | WPT                              | SVM        | LDA                | NS
M6    | Chu et al. [8]              | MP-Gabor                         | SVM        | No                 | NS
M7    | Sivasankaran and Prabhu [9] | Modified MP-Gabor                | SVM        | No                 | NS
M8    | Khunarsal et al. [10]       | Spectrogram                      | FFNN       | No                 | NS
M9    | Souli and Lachiri [11]      | Log-Gabor filtered Spectrogram   | SVM        | Mutual information | NS
M10   | N/A                         | MP-Gabor of Mel-filtered signals | SVM        | No                 | NS
M11   | Wang et al. [12]            | Non-Uniform Freq. Map (NUMAP)    | SVM        | PCA + LDA          | NS

Some of the conclusions drawn from the experiments are:
– The stationary MFCC feature gives the best performance, followed by the proposed non-stationary features in M10;
– The wavelet-based method gave results comparable to stationary methods such as SDF and NB-ACF;
– Spectrogram-based methods performed poorly.

Also, two directions for future research and development have been pointed out:
1. Database Expansion and Performance Benchmarking. The authors emphasized the need to increase the database by including more sound events. It has also been noticed that there are numerous kinds of environmental sounds for which there is no standard taxonomy. Related work is mentioned in [10], which describes efforts to classify sounds into various categories from the application perspective.
2. Ensemble-based ESR. Since there are numerous types of environmental sounds, it is difficult to design a proper set of features. Another problem is the fact that different features require different processing architectures. A proposed idea would be that, instead of learning/training a classifier for a single set of features, we may use multiple classifiers (experts) targeting different aspects of the signal characteristics with a set of complementary features. Unfortunately, there is no best way to design an ensemble framework, and a considerable amount of effort is still needed in this area.
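As an illustration of the stationary MFCC baseline (method M1 in Table 2) that performed best in the comparison, the following is a minimal sketch assuming Python with the librosa and scikit-learn libraries; file names and labels are placeholders, since the ESD corpus itself is not distributed with this paper.

```python
# Sketch of an MFCC + SVM environmental sound classifier (placeholders, not the authors' pipeline).
import librosa
import numpy as np
from sklearn.svm import SVC

def mfcc_features(path: str, n_mfcc: int = 13) -> np.ndarray:
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Collapse the frame-level MFCCs into a fixed-length "stationary" descriptor
    # by taking their mean and standard deviation over time.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder file list: (wav path, class label) pairs.
training_clips = [("birds_001.wav", "Birds"), ("rain_001.wav", "Rain"),
                  ("birds_002.wav", "Birds"), ("rain_002.wav", "Rain")]

X = np.vstack([mfcc_features(path) for path, _ in training_clips])
y = np.array([label for _, label in training_clips])

classifier = SVC(kernel="rbf").fit(X, y)   # no dimensionality reduction, as in M1
print(classifier.predict([mfcc_features("unknown_clip.wav")]))
```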

4 WSN Constraints in Forest Environment and Proposed Solution

One of the biggest challenges encountered when developing a sensor network is the efficient usage of the available resources, such as scarce bandwidth and limited energy supply. These two characteristics should be improved at all layers of the communication protocol stack. When a sensor generates a high volume of data, a significant amount of energy is consumed; hence, it is important to address the mentioned issues at the routing layer. In the following subsections, we present the WSN architecture and solutions that try either to optimize the working cycle on a single sensor or to maximize the lifetime of the network.

4.1 WSN Architecture

In [11], a tested solution for a Forest-Fires Surveillance System (FFSS) is presented. The system has been developed for monitoring mountains in South Korea. The architecture of the system is composed of WSNs, a transceiver, middleware and a Web application. The nodes contained in this network collect different types of data, such as temperature, humidity and illumination, from the environment. The information is gathered in one main node, also called a sink node. The transceiver (gateway), which is connected to the Internet, receives the data from the sink node. Then, using a formula from the Forestry Office, the forest-fire risk level is determined by a middleware program. An alarm is activated when a fire is detected, which eases the extinguishing process. For this work, the operating system used is TinyOS. Moreover, the WSNs use minimum cost path forwarding (MCF) to send their data to a sink node. In Fig. 2 we present the WSN structure.

Fig. 2. WSNs Structure

Fig. 3. FFSS Network Protocol Diagram

In the forest-fire scenarios, the routing protocol, like other flat routing protocols, uses Minimum Cost Path Forwarding (MCF). Figure 3 illustrates a diagram of the Configuration Step of the FFSS Network Protocol. MCF finds the shortest paths from all the sensor nodes to the base station and requires no explicit routing tables to be maintained at each node. Since energy might be drained from upstream nodes when routing all the data along the shortest path, the authors proposed a method where they lower this effect by restricting the amount of energy each node can spend in a round. The MCF design has been driven by the following three goals: Optimization, Simplicity, and Scalability. Furthermore, IoT and cloud convergence resulted in a new architecture called fog computing [12], which is the basis of our proposed solution.
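The sketch below (an assumption written for illustration, not the FFSS code) shows the MCF idea in Python: each node only needs the cost of its cheapest route to the sink, computed Dijkstra-style from the sink outwards, so no per-destination routing tables are required; link costs could mix hop count with remaining energy to implement the per-round energy restriction.

```python
# Minimum cost path forwarding sketch: every node learns its cheapest cost to the sink.
import heapq

def min_cost_to_sink(links: dict, sink: str) -> dict:
    """links[u] = {v: link_cost, ...}; returns the cost of the cheapest path to the sink."""
    cost = {sink: 0.0}
    heap = [(0.0, sink)]
    while heap:
        c, u = heapq.heappop(heap)
        if c > cost.get(u, float("inf")):
            continue
        for v, w in links.get(u, {}).items():
            if c + w < cost.get(v, float("inf")):
                cost[v] = c + w
                heapq.heappush(heap, (c + w, v))
    return cost

# Toy topology (node -> {neighbour: cost}); each node forwards to the neighbour with lower cost.
links = {
    "sink": {"a": 1.0, "b": 2.0},
    "a": {"sink": 1.0, "b": 1.0, "c": 2.0},
    "b": {"sink": 2.0, "a": 1.0, "c": 1.0},
    "c": {"a": 2.0, "b": 1.0},
}
print(min_cost_to_sink(links, "sink"))
```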

4.2 WSN Energy Efficiency Solutions in Forest Environments

Energy efficiency is one of the main constraints. We analyzed the survey of the authors in [13], which explored the most recent studies that address the critical issues in WRSNs (Wireless Rechargeable Sensor Networks) and identified the open challenges that need to be tackled. The issues debated include scalable real-time energy information gathering, optimal recharge scheduling and the integration of wireless charging with typical sensing applications. They have pointed out possible future research directions based on the most recent discoveries and results from physics and power electronics. These are: (1) an improvement in network scalability requires an extension of the wireless charging range and ultra-fast battery technology; (2) a hybrid and green WRSN which combines renewable environmental energy sources with wireless energy to supply the nodes, while the SenCar can provide an autonomous, eco-friendly and perpetual sensor network in the future [14]. In Fig. 4 a diagram of the core network components of a WRSN is presented, more exactly the architecture of the wireless energy transfer techniques and recent developments to be applied in various sensing applications.

Fig. 4. Basic network components of a WRSN

4.3 Collaborative Automation of System Resources and Intervention Services in Case of Environmental Threat Events

In [15] we proposed a viable approach for this task, which refers to the development of a collaborative automation of system resources and intervention services in case of environmental threat events. The workbench provides tools for developing, deploying and controlling the execution of time-critical applications, supporting every stage of the application lifecycle.

Fig. 5. Basic scenario of disaster early warning and the critical time constraints


Also, it features an application infrastructure co-programming and control model that relates application logic, QoS constraints, and developments in programmable infrastructure. One of the use cases is an early warning system, which often collects data from real-time sensors, processes the information using tools such as predictive simulation, and provides warning services or interactive facilities for public information. In Fig. 5 we present a basic scenario of disaster early warning and the critical time constraints. As early warning for natural disasters is of paramount importance, the remote sensing system needs to adapt to the time-critical constraints, especially regarding delay and jitter.

5 Conclusions

This paper reviewed current technologies used for environment monitoring, and also extracted relevant information and challenges for further development. As mentioned in the introduction part, making use of technology can certainly sustain and ease the ways of natural ecosystem protection. Furthermore, we analyzed the constraints and proposed a software workbench to implement and integrate different types of sensors and WSN for monitoring at acoustic level and warning in case of events with potential destructive effects on the forest environment. As future work, we envision the extension of the system with additional sensors. Acknowledgments. The work has been supported by UEFISCDI Romania under grants CarbaDetect, MobiWay, EV-BAT, SeaForest, ESTABLISH and SoMeDi projects, and funded in part by ASUA project, grant no. 337E/2014 “Accelerate” project and European Union’s Horizon 2020 research and innovation program under grant agreement No. 643963 (SWITCH project).

References
1. Paul, P.P., Ram, N.S., Usha, M.: Symmetric key encryption for secure communication using WirelessHART in wireless sensor networks (WSN). Aust. J. Basic Appl. Sci. 10(1), 625–630 (2016)
2. Doolin, D.M., Sitar, N.: Wireless sensors for wildfire monitoring. In: Smart Structures and Materials 2005: Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems, San Diego, CA, USA, 7 May 2005
3. Antoine-Santoni, T., Santucci, J.-F., Gentili, E., De Silvani, X., Morandini, F.: Performance of a protected wireless sensor network in a fire. Analysis of fire spread and data transmission. Sensors 9, 5878–5893 (2009)
4. Alkhatib, A.A.A.: Wireless sensor network for forest fire detection and decision making. Int. J. Adv. Eng. Sci. Technol. 2(3), 299–309 (2013)
5. Sahin, Y.G., Ince, T.: Early forest fire detection using radio-acoustic sounding system. Sensors 9(3), 1485–1498 (2009)
6. Yu, L., Wang, N., Meng, X.: Real-time forest fire detection with wireless sensor networks. In: Proceedings of 2005 International Conference on Wireless Communications, Networking and Mobile Computing, vol. 2. IEEE (2005)
7. Aslan, Y.E., Korpeoglu, I., Ulusoy, Ö.: A framework for use of wireless sensor networks in forest fire detection and monitoring. Comput. Environ. Urban Syst. 36(6), 614–625 (2012)
8. Kovács, Z.G., Marosy, G.E., Horváth, G.: Case study of a simple, low power WSN implementation for forest monitoring. In: 2010 12th Biennial Baltic Electronics Conference. IEEE (2010)
9. Chachada, S., Kuo, C.C.J.: Environmental sound recognition: a survey. In: APSIPA Transactions on Signal and Information Processing, vol. 3, pp. 14–20 (2014)
10. Potamitis, I., Ganchev, T.: Generalized recognition of sound events: approaches and applications. In: Tsihrintzis, G.A., Jain, L.C. (eds.) Multimedia Services in Intelligent Environments. Studies in Computational Intelligence, vol. 120, pp. 41–79. Springer, Heidelberg (2008)
11. Son, B., Her, Y.S., Kim, J.G.: A design and implementation of forest-fires surveillance system based on wireless sensor networks for South Korea mountains. Int. J. Comput. Sci. Netw. Secur. (IJCSNS) 6(9), 124–130 (2006)
12. Suciu, G., Halunga, S., Vulpe, A., Suciu, V.: Generic platform for IoT and cloud computing interoperability study. In: 2013 International Symposium on Signals, Circuits and Systems (ISSCS), pp. 1–4 (2013)
13. Yang, Y., Wang, C., Li, J.: Wireless rechargeable sensor networks—current status and future trends. J. Commun. 10(9), 696–706 (2015)
14. Ochian, A., Suciu, G., Fratu, O., Suciu, V.: Big data search for environmental telemetry. In: 2014 IEEE International Black Sea Conference on Communications and Networking (BlackSeaCom), pp. 182–184 (2014)
15. Zhao, Z., et al.: A software workbench for interactive, time critical and highly self-adaptive cloud applications (SWITCH). In: 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), pp. 1181–1184 (2015)

VBII-UAV: Vision-Based Infrastructure Inspection-UAV

Abdulla Al-Kaff1(B), Francisco Miguel Moreno1, Luis Javier San José1, Fernando García1, David Martín1, Arturo de la Escalera1, Alberto Nieva2, and José Luis Meana Garcéa2

1 Intelligent Systems Lab, Universidad Carlos III de Madrid, Avenida de la Universidad, 30, 28911 Leganés, Madrid, Spain
{akaff,franmore,lsanjose,fgarcia,dmgomez,escalera}@ing.uc3m.es
2 COPISA Integrated Services, Barcelona, Spain
{alberto.nieva,joseluis.meana}@copisaindustrial.com
http://www.uc3m.es/islab, http://www.grupocopisa.com

Abstract. Unmanned Aerial Vehicles (UAV) have the capability to undertake tasks in remote, dangerous and dull situations. One of these situations is infrastructure inspection, in which using a UAV decreases the risk and the operation time of the task compared to a human inspector. Therefore, this paper presents a small vision-based UAV capable of performing inspection tasks on civil and industrial infrastructure. The presented system is divided into three main algorithms: Depth-Color image correlation, Plane segmentation and distance estimation, and Visual servoing. The system has been validated with real flight tests, and the obtained results show the accuracy of the system in both the inspection measurements and the control of the UAV maneuvers.

Keywords: Inspection · UAV · Computer vision · Visual servoing

1 Introduction

The field of Unmanned Aerial Vehicles (UAVs) has typically been limited to and supported by the defense and military industries; this is due to the cost and the complexity of designing, building and operating these vehicles. Recently, with the developments in microelectronics and the increase in computing efficiency, small and micro unmanned aerial vehicles (SUAVs and MAVs) have attracted significant attention within the robotics research community. Moreover, because of their ability to operate in remote, dangerous and dull situations, helicopters and especially Vertical Take-Off and Landing (VTOL) rotor-craft systems are increasingly used in several civilian and scientific applications, such as surveying and mapping, rescue operations in disasters [9,14], spatial information acquisition, inspection [3,7,10,21], animal protection [26], agricultural crop monitoring [2], or manipulation and transportation [18]. These capabilities offer the advantage of substituting human operators in risky and hazardous environments.

© Springer International Publishing AG 2017
Á. Rocha et al. (eds.), Recent Advances in Information Systems and Technologies, Advances in Intelligent Systems and Computing 570, DOI 10.1007/978-3-319-56538-5_24


Aerial imagery or aerial filming is considered one of the basic and most demanded applications, such as filming sports games [4], events [19], or even weddings [11]. With the advances in computer vision algorithms and sensors, aerial images are no longer used just for photography and filming, but are widely used in more complex applications, such as thematic and topographic terrain mapping [1,16,25]; exploration of unreachable areas such as islands [27], rivers [22], forests [28] or oceans [24]; and surveillance purposes [12,15]. This article presents a vision-based system, implemented on a UAV for infrastructure inspection purposes, as shown in Fig. 1. The proposed system consists of two stages. The first is Reactive Control, in which on-board, real-time processing is applied in order to provide flight stability and maintain the safety distances to the building. The second stage is Inspection, where the RGB-D visual information is gathered to be processed at the ground station in order to analyze the defects or changes in the building under inspection.

Fig. 1. Building inspection: (a) inspection data; (b) inspection drone

The article is structured as follows: Sect. 2 presents the state-of-the-art work related to the techniques used with UAVs for inspections. In Sect. 3, the proposed vision-based algorithms are explained. Section 4 discusses the experimental results. Finally, conclusions are summarized in Sect. 5.

2 Related Works

Aerial inspection is one of the most recent and in-demand applications that takes advantage of UAVs (especially rotor-crafts). Along with increased safety and decreased human risk, UAVs have the advantage of reducing the operational costs and time of inspection tasks. However, it is important to keep the image stable against any kind of maneuver [6]. UAVs can perform inspection tasks on different structures and in different situations, such as buildings, bridges [17], wind turbines, power plant boilers [5], power lines [8], and even tunnels.

An integrated visual-inertial SLAM sensor has been proposed in [20], in order to be used with UAVs for industrial facility inspection. This system consists of a stereo camera, MEMS gyroscopes and accelerometers. The UAV performs autonomous flights following predefined trajectories. The motion of the UAV is mainly estimated from the inertial measurements; then it is refined using the visual information. The experiments have shown that the system suffers from a delay between the inertial sensors and the stereo camera; thus, a calibration process is required. In addition, the results showed a drift error of 10 cm in the displacement over time. Another visual-inertial sensor has been introduced in [21], in which a visual-inertial stereo camera is used to estimate the UAV pose and build a 3D map of the industrial infrastructure during inspection.

In [3], two visual servoing approaches were presented for power line inspection. Both approaches dealt with the problem of keeping the UAV at a determined distance from the power line during the inspection. In the first one, a visual servoing formulation was combined with the Linear Quadratic Servo (LQS) to improve the control of the UAV, while in the second approach, the control problem was solved using the Partial Posed Based Visual Servoing (PPBVS) model. As shown by their experiments, the PPBVS is more efficient and more robust than the IBVS; however, it is very sensitive to calibration errors.

Autonomous UAVs for wind turbine inspection have been presented in [13,23]. First, the Global Navigation Satellite System (GNSS) and an altimeter are used to position the UAV at a determined distance from the tower; then the UAV is rotated to face the hub based on visual information. These works used the Hough Transform to detect the tower, the hub, and the blades. The only difference is in the tracking phase: in [23] the Kalman filter is used to track the center of the hub, while in [13] optical flow algorithms are used, from which the motion direction, velocity and distance of the hub and the blades can be estimated. Finally, the UAV flies along a preprogrammed path in order to perform the inspection task.

The novelty of the proposed system is the use of the RGB-D information obtained from the Kinect system in order to detect the defects and to provide accurate millimetric measurements of the cracks in the surfaces under inspection, and, in addition, to provide the feedback information of the control system to maintain the autonomous flights.

3 Proposed Algorithms

The presented system is divided into three main algorithms, as shown in Fig. 2. First, Depth-Color image correlation performs a correlation process between the depth and the color images, in order to estimate the real measurements from the image pixels. Second, Plane segmentation and distance estimation segments and filters the wall plane from the constructed dense point cloud, in order to accurately obtain the safety distance to the wall during the flights. Finally, Visual servoing takes the obtained safety distance as an input to a PID reactive control in order to maintain the wall distance.

Fig. 2. System overview

3.1 Depth-Color Image Correlation

The technical inspection is conducted using a Graphical User Interface (GUI). This GUI provides various tools for analysis and measurement purposes. Although all the processes in the GUI are performed on the color images, the system works internally with both color and depth information, obtaining the real measurements between points even when the camera is not perpendicular to the wall. For this purpose, the color and depth images are correlated; hence, all the depth information associated with each pixel in the color image is obtained. Due to the different frequencies at which the color and depth images are received, a synchronization step is required. This synchronization is achieved by matching each depth frame to the closest color frame, using their timestamps.

The next step is to relate the 2D color pixels to their corresponding 3D information. To deal with this problem, let a point $Q = (x, y, z)^T$ in 3D space be projected onto the camera plane to a 2D point $q = (\upsilon, \nu)^T$, by intersecting the ray that connects the 3D point and the camera center $C$ with the image plane, which is situated at a focal length $f$. By looking at similar triangles, $Q$ can be transformed into $q$ as follows:

$$q = \begin{pmatrix} \upsilon \\ \nu \\ 1 \end{pmatrix} = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} r_{00} & r_{01} & r_{02} & t_x \\ r_{10} & r_{11} & r_{12} & t_y \\ r_{20} & r_{21} & r_{22} & t_z \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = PQ \qquad (1)$$

where $P$ is the perspective projection matrix, formulated as the product of the camera intrinsic and extrinsic matrices. To find the point in the depth camera plane that corresponds to a pixel in the color image, it is required to find the epipolar line of these points. Suppose points $q_r$ and $q_l$ lie in the depth and color images, respectively. The relation between $q_r$ and $q_l$ is given by:

$$q_r^T F q_l = 0 \qquad (2)$$

$$F = (P_r^{-1})^T E P_l^{-1} \qquad (3)$$

where $F$ and $E$ are the Fundamental and Essential matrices. Then the epipolar line in both images is calculated as follows:

$$l_r = F q_l, \quad \text{and} \quad l_l = F^T q_r \qquad (4)$$


The relation between the two points in the different planes is:

$$Q_r = R Q_l + T \qquad (5)$$

where $q_r = P_r Q_r$, $q_l = P_l Q_l$, $R$ is the rotation matrix and $T$ is the translation vector. Then Eq. (5) becomes:

$$P_l^{-1} q_l = R P_r^{-1} q_r + T \qquad (6)$$

or,

$$q_r = \acute{R} q_l - \acute{T} \qquad (7)$$

where $\acute{R} = P_r R^{-1} P_l^{-1}$ and $\acute{T} = P_r R^{-1} T$. Equation (7) allows correlating a point in the color plane to its corresponding point in the depth plane; however, it does not provide the real depth. Finally, the real value $Z_l$ of the color coordinates is obtained using one of the following equations:

$$Z_l = x_l^{-1} \left[ \acute{r}_{00} Z_r \upsilon + \acute{r}_{01} Z_r \nu + \acute{r}_{02} Z_r + \acute{t}_x \right]$$
$$Z_l = y_l^{-1} \left[ \acute{r}_{10} Z_r \upsilon + \acute{r}_{11} Z_r \nu + \acute{r}_{12} Z_r + \acute{t}_y \right] \qquad (8)$$
$$Z_l = \acute{r}_{20} Z_r \upsilon + \acute{r}_{21} Z_r \nu + \acute{r}_{22} Z_r + \acute{t}_z$$

where $Z_r$ is the depth obtained from the disparity map of the depth image. However, because the system works with real data, $Z_l$ is calculated by taking the mean of the values calculated from the three equations.
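To make Eq. (1) and the depth-to-color mapping more concrete, the following is a minimal numerical sketch in Python/NumPy. The intrinsic and extrinsic values are made-up placeholders (not the Kinect v2 calibration used by the authors), and for simplicity the same intrinsic matrix is used for both cameras.

```python
# Pinhole projection and depth-pixel-to-color-pixel sketch with placeholder calibration.
import numpy as np

K = np.array([[525.0, 0.0, 319.5],     # fx, 0,  cx
              [0.0, 525.0, 239.5],     # 0,  fy, cy
              [0.0,   0.0,   1.0]])
R = np.eye(3)                          # rotation between the two camera frames
t = np.array([0.052, 0.0, 0.0])        # translation (e.g. a depth-to-color baseline)

def project(Q: np.ndarray) -> np.ndarray:
    """Map a 3D point (x, y, z) to pixel coordinates (u, v), as in Eq. (1)."""
    q = K @ (R @ Q + t)                # P * Q with P = K [R | t]
    return q[:2] / q[2]                # perspective division

def depth_pixel_to_color_pixel(u: float, v: float, z: float) -> np.ndarray:
    """Back-project a depth pixel with depth z metres, then re-project it into the color frame."""
    Q_depth = z * np.linalg.inv(K) @ np.array([u, v, 1.0])
    return project(Q_depth)

print(project(np.array([0.1, -0.2, 2.0])))
print(depth_pixel_to_color_pixel(256.0, 212.0, 2.0))
```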

3.2 Plane Segmentation and Distance Estimation

During the flight, the 3D information is processed to segment the wall plane and measure the safety distance to the UAV. This method relies on the whole wall area, so it is more robust against outliers and works with angled planes. Since working with dense point clouds is a computationally expensive task, most small embedded computers are not capable of real-time processing. Therefore, a two-stage filtering process has been applied. The first stage consists of a cropping process that keeps the center of the cloud, while the second stage is a voxel grid filter that reduces the number of points in the cloud. An example of the segmentation algorithm is shown in Fig. 3. After the point cloud has been filtered, a segmentation algorithm is executed to detect the wall, in which the RANSAC algorithm is applied to compute the coefficients of the mathematical model of the plane. The fitted model is defined as a plane perpendicular to the camera; then the empirical angle threshold of ±15° and distance threshold of ±15 cm are set to estimate the inliers. Although RANSAC has the advantages of simplicity and robustness in eliminating outliers, its computational time is expensive, especially when working with dense point clouds. For this reason, the point cloud is filtered before the segmentation.
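A minimal sketch of the two-stage filtering plus RANSAC plane fit follows, assuming the Open3D library as a stand-in (the authors' own implementation and exact crop bounds are not specified here); the ±15 cm distance threshold follows the text.

```python
# Crop -> voxel filter -> RANSAC plane fit -> distance from camera to the wall plane.
import numpy as np
import open3d as o3d

def wall_distance(pcd: o3d.geometry.PointCloud) -> float:
    # Stage 1: crop the centre of the cloud (bounds are illustrative).
    box = o3d.geometry.AxisAlignedBoundingBox([-1.0, -1.0, 0.3], [1.0, 1.0, 6.0])
    cropped = pcd.crop(box)
    # Stage 2: voxel-grid filter to thin the cloud before segmentation.
    down = cropped.voxel_down_sample(voxel_size=0.05)
    # RANSAC plane fit: ax + by + cz + d = 0, inliers within 15 cm of the plane.
    (a, b, c, d), inliers = down.segment_plane(distance_threshold=0.15,
                                               ransac_n=3,
                                               num_iterations=1000)
    # Distance from the camera origin (0, 0, 0) to the fitted plane.
    return abs(d) / np.linalg.norm([a, b, c])
```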


Fig. 3. Wall segmentation: (a) color image; (b) wall segmented; (c) segmentation after filters

3.3 Visual Servoing

Based on the data obtained from the vision-based algorithms, a higher-level reactive control algorithm has been implemented to control the flight maneuvers during the inspection. As shown in Fig. 4, this algorithm takes the distance to the wall and the RC elevator command as inputs to a PID controller. Based on the flight mode, the PID overrides the RC commands before they are sent to the flight controller. This algorithm aims to maintain the UAV at a safe distance from the wall. Furthermore, keeping the distance to the wall constant allows the sequences to be recorded more homogeneously.

Fig. 4. Control system

The UAV has three flight modes: manual, semi-autonomous and autonomous. For safety reasons, the autonomous modes can be disabled by a safety switch on the RC controller. The manual mode applies at distances greater than 3 m, at which the pilot has total control of the UAV. The semi-autonomous mode is activated when the UAV flies at distances of 2–3 m to the wall. In this mode the forward maneuver command is regulated by the PID to ensure a smooth approach to the wall. Once the UAV approaches to 2 m from the wall, the autonomous mode is activated, disabling all forward maneuver commands and maintaining the distance using the PID controller. This mode aims to keep the UAV at 2 m from the wall, and gives the pilot the independence to maneuver in the other directions without being concerned about the safety issues.
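The following is a minimal sketch, under the assumption of a simple textbook PID with purely illustrative gains and sign conventions (not the authors' tuned flight code), of how the distance-keeping override could look: above 3 m the pilot's pitch command passes through, between 2 and 3 m the approach is regulated, and at 2 m the PID alone holds the safety distance.

```python
# Illustrative distance-keeping override; positive commands are assumed to mean "move forward".
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, error: float, dt: float) -> float:
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

SAFETY_DISTANCE = 2.0                  # metres, from the paper
pid = PID(kp=0.8, ki=0.05, kd=0.2)     # gains are illustrative, not the tuned values

def pitch_command(wall_distance: float, rc_pitch: float, dt: float) -> float:
    if wall_distance > 3.0:                 # manual mode: pilot in full control
        return rc_pitch
    error = wall_distance - SAFETY_DISTANCE
    correction = pid.update(error, dt)
    if wall_distance > SAFETY_DISTANCE:     # semi-autonomous: regulate the approach
        return min(rc_pitch, correction)
    return correction                       # autonomous: forward RC input is ignored
```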

4 Experimental Results

In order to evaluate the performance and robustness of the algorithms proposed in the previous sections, different experiments have been carried out with real flights in outdoor environments, taking into consideration different conditions, such as illumination, textures, building surfaces, and flight altitudes. In the experiments, a carbon fiber quadcopter with a total weight of 2.4 kg, based on the Pixhawk control system, was used. The quadcopter is equipped with GPS, a magnetometer and an IMU (accelerometers, gyroscopes, and barometer). In addition, a Kinect v2, which provides 1920 × 1080 RGB images and a 512 × 424 infrared pattern, mounted on a Walkera G-2D gimbal, is used as the main sensor. The on-board processing is performed by the ODROID-XU4 embedded computer, which has a Samsung Exynos 5422 Cortex-A15 CPU at 2.0 GHz, 2 GB LPDDR3 RAM, and eMMC 5.0 HS400 flash storage. The software is integrated with ROS, under the Ubuntu operating system.

4.1 Results (Safety Distance Estimation)

The performance of the wall segmentation algorithm was measured by performing experiments at different distances to the wall (2, 3, and 4 m). In order to estimate the accuracy of the distance measurements, the error has been computed as the difference between the wall distance obtained by the visual system and the real distance (ground truth). For this purpose, the UAV was placed in a fixed position; thus, the actual distance to the wall is known. Table 1 shows the wall distances obtained by the system compared to the ground truth. From the table, it is observed that the error is less than 7 cm, with an average accuracy of 97.7%. There are two reasons for the distance error. The first is that the wall is not planar and contains different structures that affect the measurement (making it less than the real distance). The second is that, although the RANSAC algorithm is robust against outliers, a distance threshold is set to avoid excluding noisy points of the wall from the plane. This threshold makes the outliers near the wall be included in the plane, so that the plane is shifted forward and yields a smaller distance.

4.2 Results (Reactive Control)

In order to verify the reactive control algorithm, all the information about the flights, UAV attitudes and velocities, and the RC commands is recorded.

Table 1. Plane segmentation algorithm performance

Real distance (m) | Measured distance (m) | Error (m) | Accuracy (%)
2.0               | 1.94                  | 0.06      | 97.0
3.0               | 2.93                  | 0.07      | 97.7
4.0               | 3.93                  | 0.07      | 98.3
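As a quick sanity check of Table 1, the accuracy column is consistent with computing accuracy = 100·(1 − error/real distance), as the short snippet below shows.

```python
# Recomputing the Table 1 accuracies from the real and measured distances.
for real, measured in [(2.0, 1.94), (3.0, 2.93), (4.0, 3.93)]:
    error = abs(measured - real)
    accuracy = 100.0 * (1.0 - error / real)
    print(f"real={real:.1f} m  error={error:.2f} m  accuracy={accuracy:.2f}%")
# Prints approximately 97.00%, 97.67% and 98.25%, consistent with the values in Table 1.
```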

Fig. 5. UAV control; a: Distance, b: RC commands, and c: Actual pitch angle

Figure 5 shows the results obtained during the flight. As illustrated in Fig. 5a, at seconds 68 and 148 the UAV was located at a 2 m distance, where the autonomous mode is activated. In those moments the pilot performs a forward maneuver; however, the control algorithm ignores that command and sends a backward command in order to maintain the safety distance, as shown in Fig. 5b, resulting in a positive pitch angle, as shown in Fig. 5c. On the other hand, the semi-autonomous mode is shown at seconds 48 and 64, where the UAV approaches smoothly, even though the forward command is to go fast.

4.3 Results (Measurements)

To evaluate the measuring algorithm, all the experiments have been carried out at a distance of 2 m to the wall. Figure 6 shows the results obtained when measuring various elements with different dimensions. All the results were compared to a ground truth (real measurements). The total accuracy of the measurements at a 2 m distance is 99.1%. Furthermore, to study the robustness of the algorithm, the same measurements have been taken at distances of 3 m and 4 m, where the results provide an accuracy of 98.5% at both distances. The main reason for the decrease in accuracy is the sensor limitations; this is because of the decrease in pixel density in the depth and color images, a problem that is more significant when measuring small elements.

Fig. 6. Inspection measurements

5 Conclusions

In this paper, vision-based algorithms have been presented as a framework to cope with cutting-edge UAV technology for infrastructure inspection in GPS-denied environments. The proposed algorithms take advantage of on-board processing and the use of the Kinect v2 to accomplish complex tasks, that is, maintaining a safe distance and obtaining inspection measurements. The robustness and performance of the presented solutions are illustrated through real results under demanding circumstances, such as different building surfaces, illumination, textures, and flight altitudes. The results proved the accuracy of the vision-based algorithms, with an overall accuracy of 97% in detecting the wall plane and estimating the distances to the UAV, and 99.1% in the measurements. In addition, the implemented reactive control system with the three flight modes improves the stability of the UAV and increases the safety of the autonomous inspection. Future aspects of this research include fusing information from different sensors with the visual data in order to improve the robustness of the system.


Acknowledgments. Research supported by the Centre for the Development of Industrial Technology (CDTI) and European Regional Development Fund (FEDER) through the Programa Operativo Pluri-regional de Crecimiento Inteligente (IDI 20150860).

References
1. Ahmad, A., Tahar, K.N., Udin, W.S., Hashim, K.A., Darwin, N., Hafis, M., Room, M., Hamid, N.F.A., Azhar, N.A.M., Azmi, S.M.: Digital aerial imagery of unmanned aerial vehicle for various applications. In: 2013 IEEE International Conference on Control System, Computing and Engineering (ICCSCE), pp. 535–540. IEEE (2013)
2. Anthony, D., Elbaum, S., Lorenz, A., Detweiler, C.: On crop height estimation with UAVs. In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), pp. 4805–4812. IEEE (2014)
3. Araar, O., Aouf, N.: Visual servoing of a Quadrotor UAV for autonomous power lines inspection. In: 2014 22nd Mediterranean Conference of Control and Automation (MED), pp. 1418–1424. IEEE (2014)
4. Atlantic: The future of sports photography (February 2014). http://www.theatlantic.com/technology/archive/2014/02/the-future-of-sports-photographydrones/283896/
5. Burri, M., Nikolic, J., Hurzeler, C., Caprari, G., Siegwart, R.: Aerial service robots for visual inspection of thermal power plant boiler systems. In: 2012 2nd International Conference on Applied Robotics for the Power Industry (CARPI), pp. 70–75. IEEE (2012)
6. Cho, O.H., Ban, K.J., Kim, E.K.: Stabilized UAV flight system design for structure safety inspection. In: 2014 16th International Conference on Advanced Communication Technology (ICACT), pp. 1312–1316. Citeseer (2014)
7. Choi, S., Kim, E.: Image acquisition system for construction inspection based on small unmanned aerial vehicle. In: Park, J.J.J.H., Chao, H.-C., Arabnia, H., Yen, N.Y. (eds.) Advanced Multimedia and Ubiquitous Engineering. LNEE, vol. 352, pp. 273–280. Springer, Heidelberg (2015). doi:10.1007/978-3-662-47487-7_40
8. Du, S., Tu, C.: Power line inspection using segment measurement based on HT butterfly. In: 2011 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), pp. 1–4. IEEE (2011)
9. Erdos, D., Erdos, A., Watkins, S.: An experimental UAV system for search and rescue challenge. IEEE Aerosp. Electron. Syst. Mag. 28(5), 32–37 (2013)
10. Eschmann, C., Kuo, C.M., Kuo, C.H., Boller, C.: Unmanned aircraft systems for remote building inspection and monitoring. In: 6th European Workshop on Structural Health Monitoring (2012)
11. Explora: Using drones in wedding photography and videos (March 2016). http://www.bhphotovideo.com/explora/video/tips-and-solutions/camera-sky-usingdrones-wedding-photography-and-videos
12. Geng, L., Zhang, Y.F., Wang, P.F., Wang, J.J., Fuh, J.Y., Teo, S.H.: UAV surveillance mission planning with gimbaled sensors. In: 11th IEEE International Conference on Control & Automation (ICCA), pp. 320–325. IEEE (2014)
13. Høglund, S.: Autonomous inspection of wind turbines and buildings using an UAV. Ph.D. thesis (2014)
14. Kruijff, G.J.M., Tretyakov, V., Linder, T., Pirri, F., Gianni, M., Papadakis, P., Pizzoli, M., Sinha, A., Pianese, E., Corrao, S., et al.: Rescue robots at earthquake-hit Mirandola, Italy: a field report. In: 2012 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pp. 1–8. IEEE (2012)
15. Lilien, L.T., benn Othmane, L., Angin, P., Bhargava, B., Salih, R.M., DeCarlo, A.: Impact of initial target position on performance of UAV surveillance using opportunistic resource utilization networks, pp. 57–61. IEEE, September 2015
16. Ma, L., Li, M., Tong, L., Wang, Y., Cheng, L.: Using unmanned aerial vehicle for remote sensing application. In: 2013 21st International Conference on Geoinformatics (GEOINFORMATICS), pp. 1–5. IEEE (2013)
17. Metni, N., Hamel, T.: A UAV for bridge inspection: visual servoing control law with orientation limits. Autom. Constr. 17(1), 3–10 (2007)
18. Michael, N., Fink, J., Kumar, V.: Cooperative manipulation and transportation with aerial robots. Auton. Robot. 30(1), 73–86 (2011)
19. sUAS News: Airborne camera makes concert scene (August 2014). http://www.suasnews.com/2014/08/airborne-camera-makes-concert-scene/
20. Nikolic, J., Burri, M., Rehder, J., Leutenegger, S., Huerzeler, C., Siegwart, R.: A UAV system for inspection of industrial facilities. In: 2013 IEEE Aerospace Conference, pp. 1–8. IEEE (2013)
21. Omari, S., Gohl, P., Burri, M., Achtelik, M., Siegwart, R.: Visual industrial inspection using aerial robots. In: 2014 3rd International Conference on Applied Robotics for the Power Industry (CARPI), pp. 1–5. IEEE (2014)
22. Rathinam, S., Almeida, P., Kim, Z., Jackson, S., Tinka, A., Grossman, W., Sengupta, R.: Autonomous searching and tracking of a river using an UAV. In: American Control Conference, 2007, ACC 2007, pp. 359–364. IEEE (2007)
23. Stokkeland, M., Klausen, K., Johansen, T.A.: Autonomous visual navigation of unmanned aerial vehicle for wind turbine inspection. In: 2015 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 998–1007. IEEE (2015)
24. Sujit, P.B., Sousa, J., Pereira, F.L.: Coordination strategies between UAV and AUVs for ocean exploration. In: 2009 European Control Conference (ECC), pp. 115–120. IEEE (2009)
25. Tampubolon, W., Reinhardt, W.: UAV data processing for large scale topographical mapping. In: ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XL-5, pp. 565–572, June 2014
26. Xu, J., Solmaz, G., Rahmatizadeh, R., Turgut, D., Boloni, L.: Animal monitoring with unmanned aerial vehicle-aided wireless sensor networks. In: 2015 IEEE 40th Conference on Local Computer Networks (LCN), pp. 125–132. IEEE (2015)
27. Ying-cheng, L., Dong-mei, Y., Xiao-bo, D., Chang-sheng, T., Guang-hui, W., Tuan-hao, L.: UAV aerial photography technology in island topographic mapping. In: 2011 International Symposium on Image and Data Fusion (ISIDF), pp. 1–4. IEEE (2011)
28. Yuan, C., Liu, Z., Zhang, Y.: UAV-based forest fire detection and tracking using image processing techniques. In: 2015 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 639–643. IEEE (2015)

Gaming in Dyscalculia: A Review on disMAT

Filipa Ferraz1, António Costa1, Victor Alves1, Henrique Vicente1,2, João Neves3, and José Neves1(✉)

1 Centro Algoritmi, Universidade do Minho, Braga, Portugal
[email protected], {costa,valves,jneves}@di.uminho.pt, [email protected]
2 Departamento de Química, Escola de Ciências e Tecnologia, Universidade de Évora, Évora, Portugal
3 Mediclinic Arabian Ranches, PO Box 282602 Dubai, United Arab Emirates
[email protected]

Abstract. Dyscalculia is a particular learning disability that affects around 6% of the world population. However, dyscalculics are not brainless; they struggle to learn mathematics, notwithstanding an acceptable educational environment at home and at school. Indeed, dyscalculic children fall behind early in primary school, and may develop anxiety or a strong dislike of mathematics. When they reach adult life, they are still paid less than ordinary people and have difficulties handling their everyday finances. Therefore, this work is about a game, disMAT, an app whose purpose is to appeal to children to train their mathematical skills. disMAT involves planning, by choosing strategies for change as kids move through the game. Unlike a whole-class mathematics activity, a game may support an individual child's needs. Undeniably, it must be challenging, have rules and structure, include a clear ending point, and focus on specific abilities.

Keywords: Dyscalculia · Gaming therapeutics · Learning disability

1 Introduction

Learning Disabilities (LD) interfere with the individual's daily life, in matters of social interaction, personal confidence and professional opportunities. Fortunately, nowadays it is possible to remediate some of these LD, or even attenuate their severity, which helps to increase quality of life. Developmental Dyscalculia (DD), or just dyscalculia, is a specific LD belonging to the Mathematical Learning Disabilities (MLD) group. DD's name is due to the fact that the LD exists since the conception of the individual. On the contrary, acalculia is what is called acquired dyscalculia, due to a brain injury or other accident [1]. For a proper understanding of the concept (dyscalculia), it is wise to highlight its definition, i.e., dyscalculia is an "inability to perform mathematical operations", which can be seen "as development impairment in (…) mathematics". According to the Diagnostic and Statistical Manual-IV (DSM-IV), the screening criteria describe the mathematical ability of the individuals as "substantially below that expected, given the person's chronological age, measured intelligence, and age-appropriate education",

© Springer International Publishing AG 2017
Á. Rocha et al. (eds.), Recent Advances in Information Systems and Technologies, Advances in Intelligent Systems and Computing 570, DOI 10.1007/978-3-319-56538-5_25


which "significantly interferes with academic achievement or activities of daily living that require mathematical ability" [2]. As noted in the various official definitions of dyscalculia, it may be subdivided into categories according to the type of affected fields or according to the brain's immaturity, which reflects the severity of the disorder [3]. Regarding the type, dyscalculia may be understood as:

• Lexical, when one has difficulties in reading mathematical symbols;
• Verbal, when one finds it hard to name numbers, symbols or even quantities;
• Graphical, when one cannot write mathematical symbols;
• Operational, when one faces problems when carrying out mathematical operations and calculations;
• Practognostic, when one has trouble enumerating, manipulating, comparing and relating objects and figures; and
• Ideagnostic, when one finds it complicated to make mental calculations and operations, as well as to understand mathematical concepts.

It is relevant to note that an individual with lexical dyscalculia may have verbal dyscalculia and any other [4]; it depends on the degree of dyscalculia that he/she presents. Now, regarding this degree or severity, it may be distinguished according to three stages of immaturity [5]: • A former one, when the individual with dyscalculia presents improvements after therapeutic interventions; • An intermediate one, when the individual has dyscalculia along with other learning disabilities (e.g., dyslexia); and • A last one, when the individual presents an intellectual deficit as consequence of a brain injury1. Classification, the process in which ideas and objects are recognized, differentiated and understood, in this case with respect to dyscalculia’s types (indeed with respect to its severity), requiring multiple tests and exams, as well as experts from different fields (e.g., psychology, paediatric, education) to perform the assessment. In order to accom‐ plish this goal, the screening of this disorder does not fit on a standard test, i.e., it involves a set of tests, namely psychological ones (e.g., dyslexia standard test and detection of warning signs by age range), and medical exams (e.g., EEG, FMRI, stress tests). Following the diagnosis or signs of dyscalculia, it is paramount to attenuate its conse‐ quences, namely due to the fact that dyscalculia stands for an irreversible disorder. Indeed, there are some therapeutics that may be applied to these cases, like re-educating the individual with brain training [6, 7]; using the LearningRX program, a piece of software that meets a variety of o cognitive needs [8]; adapting the teaching system (Special Needs in Education); or adopting didactic games [9]. But, to accomplish better results, it is wise to start therapeutics in the early ages of the children’s lives [10]. Later, it will have impact on their adult lives and on the way it relates with every aspect regarding mathematical abilities. 1

¹ Acalculia can be included in this stage.



Those aspects concern dealing with procedures, spatial and temporal memory, counting, calculations and number representation, either in a qualitative or quantitative form [11]. Some of these consequences may be seen as mistaking the left-hand side for the right one; the inability to represent objects in different forms (e.g., with numbers, dots, words); the incapacity to point out the bigger of two given figures; and other simple tasks that can disturb a regular daily routine. In terms of causes for this disorder there exist several approaches, viz. [12, 13]:

• Geneticists believe in a hereditary basis, especially when the parents present the same LD or genetic diseases (e.g., Turner's, Williams', and Fragile X syndromes);
• Linguists suggest that there is a misapprehension of the language and its formalities, affecting the understanding of mathematical concepts;
• Neurologists think that the brain areas responsible for the number sense have malformations or are not entirely developed;
• Pedagogues interpret DD as a consequence of an inefficient teaching-learning system;
• Paediatricians consider medical conditions as causes, like poor intrauterine growth or exposure to harmful substances or high levels of lead during pregnancy; and
• Psychologists state that environmental factors can contribute to DD, like a poor education or traumatic episodes.

Unfortunately, DD is usually associated with other disorders such as attention deficit, hyperactivity and dyslexia, which confuses the diagnosis and therapeutics. Although DD has an incidence rate of 6 to 7%, awareness of it remains a concern [14]. Additionally, the lack of innovative learning support systems to compete with the existing ones – computer software and outdated paper guides – constitutes the motivation to develop a specific and didactic mobile application for individuals with dyscalculia [15, 16]. The next section is about therapeutic gaming, presenting the developed app, disMAT, along with its architecture, implementation and assessment procedures. A case study is also presented. In the last section, the conclusions are presented and future work is outlined.

2

Therapeutics Gaming

The therapeutics in these cases are recommended to start as early as possible, namely in primary and junior schools. At these levels, children are between 5 and 10 years old, and it is at these ages that they develop the concept of number, i.e., differentiate between symbolic and non-symbolic representations of quantities. Therefore, the maturity of the conceptualization achieved in those years will affect their adult lives. Moreover, different types of exercises, whether mental, written or oral, must be designed and endorsed towards this purpose, i.e., helping the child to develop and improve his/her numeric sense [9]. Nevertheless, these drills have to be appealing, to attract the child, and well planned, in order to work on the significant areas of the brain.



Therefore, and taking into consideration the technological era we are going through, where children play with smartphones and tablets all the time and everywhere, the creation of an app came up as a means to reinforce the therapeutics of dyscalculia and other math learning disabilities. Although using games as a means to attract children to learn different math concepts is not a novelty in itself, the difference between now and before lies in the type of games they prefer. In the past, children used to assemble puzzles, play board games and even dominoes; now they prefer them in a digital form, as well as games with increasing difficulty, where they have to go through different levels that are not easily completed. Hence, the opportunity to design a game directed at helping children with dyscalculia emerged together with the need to raise mathematics results in the Portuguese system in particular, and worldwide in general.

3

disMAT

In today's technological era, where gadgets are the trend, an electronic and mobile application is an attraction to kids. Indeed, this app has the purpose of assisting children with mathematical learning disabilities, and in particular with dyscalculia, in an attractive and fun way. The developed application was named disMAT, in which dis comes from discalculia and mat from mathematics [1, 15]. It is an app whose purpose is to appeal to children to train their mathematical abilities in a three-level game, where the difficulty is distributed across a set of tasks (ten being used at each level) that intend to stimulate the brain areas affected by dyscalculia. The last task is not counted towards the final score, since its complexity is mostly relative when compared to the other tasks, i.e., it constitutes a bonus for the children in terms of the way they see this challenge. Additionally, two language versions were created, in English and Portuguese. Each screen view with written words has an associated button that reads out all the content of that view, making this app valuable for children who can read, who cannot read, or who have difficulties in reading but want to test their knowledge in mathematics. Last but not least, it is important to mention that this app runs under the Android Operating System (AOS), since it is expected to be the most common OS among the target population. 3.1 Architecture, Implementation and Assessment The IDE Android Studio was chosen to develop this app, using languages such as Java, C++ and XML, as well as other packages, like image editing and audio-visual ones, to manipulate figures and to record voice and associate it with the audio buttons. The final product consists of an Android 4.2 app, with the vertical orientation set, the audio-visual buttons, free Internet access and no user restrictions. As referred above, this app goes through nine plus one tasks approaching the areas of laterality, direction, size, memory, measures, time, orientation and quantities, and it

236

F. Ferraz et al.

is organized by difficulty. This organization is depicted as activities² in Fig. 1, which presents the game's flow: after a preliminary page, a menu with the options about, help and levels is presented. In about the player learns the app's purpose and who developed it; in help the user learns how to play the game; and in levels the user changes to another view where he/she may consult the score attained at each level and the level where he/she may resume the game. Each level presents one task per view. After the last task, the user returns to the levels view.

Fig. 1. The app’s activities scheme.

To properly understand the usability of the app, the disMAT architecture diagram is presented in Fig. 2. As may be observed, several users play the app either on smartphones or on tablets. Once a user concludes a level, the displayed behaviour can be sent to a cloud (if connected to the Internet) or the results can be stored in a local database. Whenever Internet access is available, the developer may access the session results and handle them, i.e., understand both the reasons for common misbehaviours and the actions that should be taken.

Fig. 2. The disMAT architecture.
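The upload-or-store decision just described can be illustrated with a minimal sketch. The Python code below is only an illustration of the idea, not the app's actual Java/Android implementation; the endpoint URL, the database file and the payload layout are hypothetical.

import json
import sqlite3
import urllib.error
import urllib.request

# Hypothetical endpoint; the real disMAT back end is not documented in the paper.
CLOUD_URL = "https://example.org/dismat/sessions"

def store_locally(session, db_path="dismat_results.db"):
    """Keep the session result in a local SQLite database until it can be uploaded."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS sessions (payload TEXT)")
    con.execute("INSERT INTO sessions (payload) VALUES (?)", (json.dumps(session),))
    con.commit()
    con.close()

def submit_session(session, db_path="dismat_results.db"):
    """Try to push a finished level to the cloud; fall back to local storage when offline."""
    request = urllib.request.Request(
        CLOUD_URL,
        data=json.dumps(session).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(request, timeout=5):
            return "sent to cloud"
    except (urllib.error.URLError, OSError):
        store_locally(session, db_path)
        return "stored locally"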

After this pre-processing and analysis, the information is available for educators, i.e., it is an easy-to-use reference guide to help them define and correct student behaviours. Presented in outline form, this guide:

² An activity is an application component that provides a screen with which users can interact in order to do something.



• Identifies the prime cause of every student conduct;
• Tells how each conduct affects teachers, other students, and the learning environment;
• Suggests methods for handling every student conduct;
• Reveals the common mistakes teachers often make when trying to correct conducts; and
• Cross-references other related conducts.

The child assessment is based on the score per task, the answer time in each task, the total task time, the score per level and the total level time. An overall analysis gives us the weaknesses and the strengths of the child, especially when the results can be compared with classmates. Additionally, the children answer a form where they identify their main obstacles, give their opinion on the app and suggest changes to it.

3.2 Case Study

This app was tested with two classes, of the 3rd and 4th grades, with 19 and 26 students, respectively, where 47.83% of the population is female and 52.17% is male, with an average age of 9.18 years. Table 1 presents the main statistics about the two classes, including the average age (already referred above), the gender relation (22 female to 23 male students) and the scores and times per level, in which the following is highlighted:

• For Level 1, an average score of 89.56% and an average response time of 01:36 min;
• For Level 2, an average score of 64.67% and an average response time of 02:14 min; and
• For Level 3, an average score of 72.44% and an average response time of 04:16 min.

Table 1. Comparison between 3rd and 4th Grades, and Both Classes, by Average Age, Gender Relation, and Levels 1, 2 and 3 Average Scores and Response Times.

             | Average Age | Gender   | Level 1 Avg    | Level 1 Avg     | Level 2 Avg    | Level 2 Avg     | Level 3 Avg    | Level 3 Avg
             | (in years)  | Relation | Score (in 100) | Resp. Time (min)| Score (in 100) | Resp. Time (min)| Score (in 100) | Resp. Time (min)
3rd Grade    | 8.68        | 7F/12M   | 83.68          | 01:42           | 63.68          | 02:09           | 70.00          | 04:29
4th Grade    | 9.54        | 15F/11M  | 93.85          | 01:31           | 65.38          | 02:19           | 74.23          | 04:06
Both Classes | 9.18        | 22F/23M  | 89.56          | 01:36           | 64.67          | 02:14           | 72.44          | 04:16
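To make the assessment procedure concrete, the sketch below shows how per-level averages like those reported in Table 1 could be derived from raw task records (score per task and answer time per task). It is a minimal illustration under assumed field names, not the authors' code.

from statistics import mean

# Each record is one answered task: (student, level, task, score_0_to_100, seconds).
# The field layout is illustrative; it is not the app's actual data model.
task_log = [
    ("s01", 1, 1, 100, 12.0),
    ("s01", 1, 2,  80,  9.5),
    ("s02", 1, 1,  60, 20.0),
]

def level_summary(records, level):
    """Average score and average answer time (in seconds) for one level."""
    rows = [r for r in records if r[1] == level]
    return mean(r[3] for r in rows), mean(r[4] for r in rows)

avg_score, avg_seconds = level_summary(task_log, 1)
print(f"Level 1: average score {avg_score:.2f}, average task time {avg_seconds:.1f} s")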

Table 2 presents the difficulties experienced by the group under study, comparing the two classes in terms of number of students and respective percentage, by field of difficulty. It should be noticed that there is an average of 18 students with difficulties, the most affected fields being those that deal with measures in terms of height (63.04%), telling the hours (82.61%), measures in terms of weight (60.87%) and positioning/orientation (58.70%).



Table 2. Comparison between Fields of Difficulty in the 3rd and 4th Grades, and in Both Classes, by Number of Students and Percentage of Students.

                        |        No. Students          |          % Students
Field of Difficulties   | 3rd Grade | 4th Grade | Both | 3rd Grade | 4th Grade | Both
Size                    | 5         | 3         | 8    | 26.32     | 11.54     | 17.39
Laterality              | 11        | 3         | 14   | 57.89     | 11.54     | 30.43
Memory (Pairs)          | 4         | 3         | 7    | 21.05     | 11.54     | 15.22
Measures (Height)       | 15        | 14        | 29   | 78.95     | 53.85     | 63.04
Time                    | 18        | 20        | 38   | 94.74     | 76.92     | 82.61
Memory (Logo)           | 5         | 2         | 7    | 26.32     | 7.69      | 15.22
Measures (Weight)       | 17        | 11        | 28   | 89.47     | 42.31     | 60.87
Positioning/Orientation | 12        | 15        | 27   | 63.16     | 57.69     | 58.70
Money                   | 14        | 8         | 22   | 73.68     | 30.77     | 47.83
Memory (Puzzle)         | 0         | 1         | 1    | 0         | 3.85      | 2.17
Average                 | 10        | 8         | 18   | 53.16     | 30.77     | 39.35

Following these results, it is noted that dealing with measures, hours and orientation are the weak points of these students, maybe because the teaching-learning system is not appropriate or efficient, or because the tasks presented in the app were not clear enough to them. In any case, there is a need to run more tests and different types of assessments in both classes to detect where the system fails these children. To conclude, each student of the group of 45 filled in a form: 10 students answered that they had difficulties, 15 answered that they did not know whether they had difficulties and 20 answered that they had no difficulties. Of the 25 that had difficulties or were in doubt about it, 14 answered that they had trouble understanding what was asked and 11 that they had trouble understanding the game. In a general dyscalculia assessment per student, as presented in Table 3, it was observed that some students fall between classification limits, leading to an inconclusive analysis, like student number 4, who has a level of dyscalculia somewhere between high and very high severity.

Table 3. Dyscalculia Assessment per Student.

Student                | 1         | 2    | 3    | 4              | 5    | … | 41   | 42       | 43   | 44  | 45
Dyscalculia Assessment | Very High | High | Some | High/Very High | High | … | High | Low/Some | Some | Low | High

3.3 SWOT Analysis for disMAT The SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis is a structured planning method that evaluates elements of a project or product. It involves specifying



the objective of the project and identifying the internal and external factors that are favourable and unfavourable to achieving that objective [17]. The core components of this analysis are:

• Strengths, i.e., features of the object under scrutiny that give it an advantage over others;
• Weaknesses, i.e., characteristics that place the object under study at a disadvantage relative to others;
• Opportunities, i.e., elements that the object under analysis could exploit to its advantage; and
• Threats, i.e., elements in the environment that may cause trouble for the object under revision.

Indeed, the SWOT analysis provides the grounding as well as the problem-solving methodology in use with this app. The analysis is detailed in Table 4, divided into quadrants that stand for beneficial and harmful influences of internal or external origin. The first quadrant holds the Strengths, the second the Weaknesses, the third the Opportunities and the last the Threats.

Table 4. SWOT analysis for the app disMAT.

Strengths (internal origin, helpful):
• Support tool for children with dyscalculia and other math LD
• Support tool updated to children's trends and needs
• Possibility to extend and complete the app features at any time
• Big opening in the Portuguese market
• Provides demonstrative results and information to the educator for better school accompaniment

Weaknesses (internal origin, harmful):
• The need for Internet access to send the player performance
• Modest design
• Few tasks and levels of difficulty
• Low fluidity of the views

Opportunities (external origin, helpful):
• Embedding an intelligent system where the tasks are selected automatically and presented according to the user's difficulties
• The search by the government for new technologies to implement in the education field
• The lack of learning support systems to screen and accompany children with LD

Threats (external origin, harmful):
• The use of the tool in classrooms may be a problem if the educators do not accept it
• The arising of tools of this type, since there are more and more companies investing in the mobile applications field

4 Conclusions

This math game may help not only to improve the math results obtained by children in their early school years, but also to provide the elements that may potentiate their progress in this area, since they can play the game anywhere and at any time, as long as they have a gadget with the app installed. Indeed, disMAT came up in an era where technologies are becoming part of kids' daily basis, reaching the individual's weaker areas without him/her noticing it. But the



real requirement lies in the need for a learning support tool to help these kids evolve outside classes, or even as a complement to them. As an early conclusion, this tool has been well accepted, and has been showing results in the direction of distinguishing evidence of dyscalculia, despite its low rate of accuracy. The next step concerns the development of an improved and wider app, serving tasks based on recent screening tests used by mathematicians and psychologists, without departing from the essence of disMAT. The acceptance of this app by educators and kids as a complementary tool should be worked on and achieved. In fact, the children who benefited most from the instruction were those who had the highest error rates at the beginning of the study. The results suggest that this improvement is in number sense access, rather than in number sense per se. On the other hand, on the research side, it allows us to make inferences about which minimal factors are important in contributing to number sense development, i.e., in practice, it may offer individualized instruction on core cognitive components for children who are lagging behind. As such it may be a useful curricular supplement for educators and parents.

Acknowledgments. This work has been supported by COMPETE: POCI-01-0145-FEDER-007043 and FCT – Fundação para a Ciência e Tecnologia within the Project Scope: UID/CEC/00319/2013.

References 1. Ferraz, F.T.: Sistema de Apoio à Aprendizagem na Área da Discalculia em Menores. http:// hdl.handle.net/1822/40865 2. Binder, M.D., Hirokawa, N., Windhorst, U. (eds.): Encyclopedia of Neuroscience, pp. 929– 1027. Springer, Berlin (2009) 3. Kosc, L.: Developmental Dyscalculia. J. Learn. Disabil. 7, 164–177 (1974) 4. Kuhl, D.E.: Voices Count: Employing A Critical Narrative Research Bricolage For Insights Into Dyscalculia. http://ir.lib.uwo.ca/cgi/viewcontent.cgi?article=3612&context=etd 5. Romagnoli, G.: Dyscalculia: A Challenge in Mathematics. CRDA, São Paulo (2008) 6. Ansari, D.: The neural roots of mathematical expertise. Proc. Nat. Acad. Sci. USA 113, 4887– 4889 (2016) 7. Bartés-Serrallonga, M., Serra-Grabulosa, J.M., Adan, A., Falcón, C., Bargalló, N., SoléCasals J.: Smoothing FMRI data using an adaptive Wiener filter. In: Madani, K., Correia, A.D., Rosa, A., Filipe, J. (eds.) Computational Intelligence. Studies in Computational Intelligence, vol. 577, pp. 321–332. Springer, Cham (2015) 8. Carpenter, D.M., Ledbetter, C., Moore, A.L.: LearningRx cognitive training effects in children ages 8–14: a randomized controlled trial. Appl. Cogn. Psychol. 30, 815–826 (2016) 9. Rubio, G., Navarro, E., Montero, F.: APADYT: a multimedia application for SEN learners. Multimedia Tools Appl. 71, 1771–1802 (2014) 10. Merkley, R., Ansari, D.: Why numerical symbols count in the development of mathematical skills: evidence from brain and behavior. Curr. Opin. Behav. Sci. 10, 14–20 (2016) 11. Johnson, D.J., Myklebust, H.R.: Learning Disabilities: Educational Principles and Practices. Pro-Ed, Austin (1967) 12. Ansari, D., Lyons, I.M.: Cognitive neuroscience and mathematics learning: how far have we come? Where do we need to go? ZDM Math. Educ. 48, 379–383 (2016)



13. Howard-Jones, P.A., Varma, S., Ansari, D., Butterworth, B., De Smedt, B., Goswami, U., Laurillard, D., Thomas, M.S.: The principles and practices of educational neuroscience: commentary on bowers. Psychol. Rev. 123, 620–627 (2016) 14. Berch, D., Mazzocco, M.: Why Is Math So Hard for Some Children? The Nature and Origins of Mathematical Learning Difficulties and Disabilities. Paul H. Brookes Publishing Co., Baltimore (2007) 15. Ferraz, F., Neves, J.: A brief look into dyscalculia and supportive tools. In: Proceedings of the 5th IEEE International Conference on E-Health and Bioengineering (EHB 2015), pp. 1– 4 (2015). IEEE Edition 16. Rubinsten, O., Henik, A.: Developmental dyscalculia: heterogeneity might not mean different mechanisms. Trends Cogn. Sci. 13, 92–99 (2008) 17. Hay, G.J., Castilla, G.: Object-based image analysis: Strengths, Weaknesses, Opportunities and Threats (SWOT). In: Proceedings of 1st International Conference on Object-Based Image Analysis (OBIA 2006), p. 3 (2006)

Multimedia Systems and Applications

Matching Measures in the Context of CBIR: A Comparative Study in Terms of Effectiveness and Efficiency

Mawloud Mosbah and Bachir Boucheham

Department of Informatics, University 20 Août 1955, Skikda, Algeria
[email protected], [email protected]

Abstract. In this paper, we compare many matching measures (distances, quasi-distances, similarities and divergences), in the context of CBIR, in terms of effectiveness and efficiency. The major effort within the area of CBIR has so far usually been devoted to the indexing stage. This work therefore highlights the importance of the matching process, as an important component within the CBIR system, by putting a large number of matching measures under experimentation. The experiments, conducted on the Wang database (COREL-1 K) and using two signatures, histograms and color moments, reveal that the Euclidean distance, usually used in the context of CBIR, gives lower performance compared to the Ruzicka similarity, the Manhattan distance and Neyman-X2.

Keywords: CBIR · Matching measure · Similarity · Distance · Quasi-distance · Divergence · Effectiveness · Efficiency

1 Introduction

An information retrieval system aims to extract, from a large collection of information, a subset satisfying a user requirement expressed as a query. An image retrieval system is an information retrieval system with the specificity that the information to retrieve is an image. Practically, there are two broad directions for building an image retrieval system: textual retrieval based on the textual annotation associated with images, referred to as TBIR, and retrieval based on the visual content of the images, known as CBIR. An image retrieval system in general, and a CBIR system in particular, has the same architecture as a common information retrieval system, which contains the following three processes: the offline indexing stage, which aims at encoding the images in a compact and relevant format; the interrogation process, designating the interface and the utilization protocol; and finally the matching process, comparing the encoded query with the index base. A review of the literature shows that many signatures exist. For color, for instance, they range from the simple histogram [1, 2] (global and local) and color moments [3] to the correlogram and the color coherence vector (CCV) [4]. Similarly, a large spectrum of matching measures is available, ranging from distances and similarities to quasi-distances and divergences. To the best of our knowledge, the part of the CBIR system related to the matching measure receives less interest compared to the other parts dealing with signatures



and the interrogation protocol. Indeed, researchers usually prefer to focus on signatures. In this paper, we carry out a comparative study of many existing matching measures, helping developers to choose the adequate measure when building a CBIR system. Our work is thus close to the work done in [5], which evaluates different similarity measurements in terms of effectiveness and efficiency using shape features and standard shape datasets. This work is arranged as follows: in Sect. 2, we look at some works related to the matching process in many application fields. In Sect. 3, we present the matching process in a CBIR system and we give formal definitions of the considered matching measures. Section 4 shows the experiments performed, with their materials and settings. We close the paper with a conclusion and some perspectives.

2 Related Works

A review of the literature reveals that few works address matching measures in the context of CBIR, in contrast with the other components, namely the indexing and interrogation processes. Our work can thus be seen as an extended version of the work done in [5], where only some measures were considered, without any categorization. The literature also shows that some works are devoted to matching measures in other pattern recognition and data mining fields. In [7], for instance, the authors built an edifice of distance/similarity measures by enumerating and categorizing a large variety of them. They also pointed out that the importance of finding suitable distance/similarity measures cannot be overemphasized. Isabelle Bloch proposed, in [8], a classification of fuzzy distances with respect to the requirements needed for applications in image processing under imprecision. She distinguished, on the one hand, distances that basically compare only the membership functions representing the concerned fuzzy objects and, on the other hand, distances that combine the spatial distance between objects and the membership functions. Other works looking at matching measures can be found in [9–15].

3 Matching Process

In this section, we explain the CBIR component addressed by our paper, namely the matching process. The image-query matching process measures the relevance of an image to the submitted query. For that, a CBIR system encodes the collection being queried and the query itself using the same formalism or signature. As shown in Fig. 1, the output of this matching process is a score that represents the relevance probability of an image to the supplied query. Based on these relevance scores, the CBIR system ranks the images when presenting them to the user as an answer to the query. Having a good matching measure for comparing signatures is therefore a key task for building an effective CBIR system.
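As a minimal illustration of this scoring-and-ranking loop (ours, not the authors' implementation), consider the following Python sketch:

def rank_images(query_signature, index, measure, smaller_is_better=True):
    """Rank the indexed images by their matching score against the query.

    `index` maps an image identifier to its signature (e.g. a 16-bin histogram
    or a color-moment vector), and `measure` is any matching function such as
    those defined in the next section.  For distances, quasi-distances and
    divergences a smaller score means a better match; for similarities the
    flag should be set to False.
    """
    scores = {image_id: measure(query_signature, signature)
              for image_id, signature in index.items()}
    return sorted(scores, key=scores.get, reverse=not smaller_is_better)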



Fig. 1. Some images representing the 10 classes of the Wang database

There exist many matching measures, ranging from distances and similarities to quasi-distances and divergences. In this section, we present the formal definition of each one [6].

3.1 Distance

Let X be a set. A function $d: X \times X \rightarrow \mathbb{R}$ is called a distance (or dissimilarity) on X if, for all $x, y \in X$, there holds [6]:

1. $d(x, y) \ge 0$ (non-negativity).
2. $d(x, y) = d(y, x)$ (symmetry).
3. $d(x, x) = 0$ (reflexivity).

We consider, in this work, the following distance measures:

Euclidean Distance. This distance has been employed in [16].
$$\sqrt{\sum_i (a_i - b_i)^2} \qquad (1)$$

Manhattan Distance. This distance has been exploited in [1, 17].
$$\sum_i |a_i - b_i| \qquad (2)$$

Intersection Distance.
$$1 - \frac{\sum_i \min\{a_i, b_i\}}{\min\{\sum_i a_i, \sum_i b_i\}} \qquad (3)$$

Sorensen Distance.
$$\frac{\sum_i |a_i - b_i|}{\sum_i (a_i + b_i)} \qquad (4)$$

Kulczunsky Distance.
$$\frac{\sum_i |a_i - b_i|}{\sum_i \min\{a_i, b_i\}} \qquad (5)$$

Soergel Distance.
$$\frac{\sum_i |a_i - b_i|}{\sum_i \max\{a_i, b_i\}} \qquad (6)$$

Chebyshev Distance.
$$\max_i \{|x_i - y_i|\} \qquad (7)$$

Squared Distance.
$$D_Q = \sqrt{(f_1 - f_2)^T A (f_1 - f_2)} \qquad (8)$$
Where $A = (a_{ij})$ and $a_{ij} = 1 - \frac{d_{ij}}{\max(d_{ij})}$; $f_1$ and $f_2$ are the feature vectors indexing the two images being compared.

Mahalanobis Distance. This distance has been exploited in [18, 19].
$$D_m = \sqrt{(f_1 - f_2)^T C^{-1} (f_1 - f_2)} \qquad (9)$$
Where C is the covariance matrix.

Canberra Distance.
$$\sum_i \frac{|a_i - b_i|}{|a_i| + |b_i|} \qquad (10)$$
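For illustration only, three of the distances above can be written in a few lines of Python; the handling of zero denominators in the Canberra distance is our choice, not specified in the paper.

import math

def euclidean(a, b):    # Eq. (1)
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def manhattan(a, b):    # Eq. (2)
    return sum(abs(ai - bi) for ai, bi in zip(a, b))

def canberra(a, b):     # Eq. (10); terms with a zero denominator are skipped
    return sum(abs(ai - bi) / (abs(ai) + abs(bi))
               for ai, bi in zip(a, b) if abs(ai) + abs(bi) > 0)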

3.2 Similarity

Let X be a set. A function $s: X \times X \rightarrow \mathbb{R}$ is called a similarity on X if s is non-negative, symmetric, and if $s(x, y) \le s(x, x)$ holds for all $x, y \in X$, with equality if and only if $x = y$ [6]. We employ, in this work, the following similarities:

Ruzicka Similarity.
$$\frac{\sum_i \min\{x_i, y_i\}}{\sum_i \max\{x_i, y_i\}} \qquad (11)$$

Roberts Similarity.
$$\frac{\sum_i (x_i + y_i)\,\frac{\min\{x_i, y_i\}}{\max\{x_i, y_i\}}}{\sum_i (x_i + y_i)} \qquad (12)$$

Motyka Similarity.
$$\frac{\sum_i \min\{x_i, y_i\}}{\sum_i (x_i + y_i)} \qquad (13)$$

Cosine Similarity. This similarity has been used in [18].
$$\cos(T, S) = \frac{\vec{T} \cdot \vec{S}}{\|\vec{T}\|\,\|\vec{S}\|} \qquad (14)$$

3.3 Quasi-Distance

Let X be a set. A function $d: X \times X \rightarrow \mathbb{R}$ is called a quasi-distance on X if d is non-negative and $d(x, x) = 0$ holds for all $x \in X$ [6]. We use, in this work, the following quasi-distance measures:

X2 Quasi-Distance. This quasi-distance has been used in [20].
$$\sum_x \frac{(p_1(x) - p_2(x))^2}{p_2(x)} \qquad (15)$$

Neyman-X2 Quasi-Distance.
$$\sum_x \frac{(p_1(x) - p_2(x))^2}{p_1(x)} \qquad (16)$$

Separation Quasi-Distance.
$$\max_x \left(1 - \frac{p_1(x)}{p_2(x)}\right) \qquad (17)$$

3.4 Divergence

Let X be a set. A function $d: X \times X \rightarrow \mathbb{R}$ is called a divergence (or a semi-metric) on X [6] if d is non-negative, symmetric, $d(x, x) = 0$ for all $x \in X$, and $d(x, y) \le d(x, z) + d(z, y)$.

Jeffrey Divergence.
$$\sum_x (p_1(x) - p_2(x)) \ln \frac{p_1(x)}{p_2(x)} \qquad (18)$$
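To illustrate one representative of each of the last three families, a similarity (Ruzicka), a quasi-distance (Neyman-X2) and a divergence (Jeffrey), the following Python sketch may help; the small epsilon added to avoid division by zero on empty histogram bins is our own choice and is not part of the original definitions.

import math

EPS = 1e-12  # smoothing for empty histogram bins (our choice, not from the paper)

def ruzicka(x, y):        # Eq. (11): a similarity, higher means more alike
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / \
           (sum(max(xi, yi) for xi, yi in zip(x, y)) + EPS)

def neyman_chi2(p, q):    # Eq. (16): a quasi-distance
    return sum((pi - qi) ** 2 / (pi + EPS) for pi, qi in zip(p, q))

def jeffrey(p, q):        # Eq. (18): a divergence
    return sum((pi - qi) * math.log((pi + EPS) / (qi + EPS)) for pi, qi in zip(p, q))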

3.5 Experimental Results

The first aim of this paper is to compare similarity, distance, quasi-distance and divergence measures. For this purpose we use the heterogeneous Wang database [21], composed of 1000 images; the first three color moments [3] and the histogram as signatures; the precision/recall measures [22] and the utility value [24] as evaluation metrics; and a computer with an i3 processor and 2 GB of memory. All these materials are described as follows:

• Histogram. The histogram is a statistical vector whose elements hold the pixel count for each color in the image. The histogram used here is a global histogram quantized with a fixed palette of 16 bins, as done in [1, 23].

• The First Color Moment
$$m = \frac{1}{N}\sum_{j=1}^{N} f_{ij} \qquad (19)$$
Where N is the number of pixels in the image and $f_{ij}$ is the value of the pixel in the i-th row and j-th column.

• The Second Color Moment
$$v = \sqrt{\frac{1}{N}\sum_{j=1}^{N} \left(f_{ij} - m\right)^2} \qquad (20)$$

• The Third Color Moment
$$s = \sqrt[3]{\frac{1}{N}\sum_{j=1}^{N} \left(f_{ij} - m\right)^3} \qquad (21)$$

• The Precision
$$Precision = \frac{NRIR}{TNIR} \qquad (22)$$

• The Recall
$$Recall = \frac{NRIR}{TNRI} \qquad (23)$$



Where NRIR is the number of relevant images retrieved, TNIR is the total number of images retrieved, TNRI is the total number of relevant images in the database, V is the utility value, P is the precision value, and s belongs to the range [0, 1].

• The Utility Value. The utility concept is inspired by the work done in [24], which consists in assigning higher scores to relevant images in descending order of their rank within the returned results. The value assigned to each image is given by the following formula:
$$v = 1 - \frac{N - R}{N} \qquad (24)$$
Where N is the number of returned images and R is the rank of the image. The value V then belongs to the range 0 to 1. The utility value allows shifting Precision/Recall curves to values that simplify manual comparison and make automatic comparison possible (Figs. 2, 3, 4, 5, 6, 7, 8, 9 and 10).

• The Wang Database (COREL-1 K)
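The signatures and evaluation metrics above are straightforward to compute; the sketch below is a minimal Python illustration (not the authors' code), where the signed cube root used for the third moment is our reading of Eq. (21).

import math

def color_moments(channel):
    """First three color moments of one image channel, Eqs. (19)-(21)."""
    n = len(channel)
    m = sum(channel) / n
    v = math.sqrt(sum((f - m) ** 2 for f in channel) / n)
    third = sum((f - m) ** 3 for f in channel) / n
    s = math.copysign(abs(third) ** (1.0 / 3.0), third)  # signed cube root
    return m, v, s

def precision_recall(retrieved, relevant):
    """Precision (22) and recall (23) for one query."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)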

Fig. 2. Comparison between the effectiveness of similarities based on Utility value in the case of color moments.

Fig. 3. Comparison between the effectiveness of similarities based on Utility value in the case of histogram.



Fig. 4. Comparison between the effectiveness of distances based on Utility value in the case of color moments.

Fig. 5. Comparison between the effectiveness of distances based on Utility value in the case of histogram.

Fig. 6. Comparison between the effectiveness of quasi-distances based on Utility value in the case of color moments.



Fig. 7. Comparison between the effectiveness of quasi-distances based on Utility value in the case of histogram.

Fig. 8. The average precision/recall of the Jeffrey Divergence over histogram and color moments signatures.

Fig. 9. The comparison of the better measures in terms of utility value in the case of color moments.



Fig. 10. The comparison of the better measures in terms of utility value in the case of histogram.

4 Discussion

From the effectiveness perspective, we can clearly see the following points:

– For the color moments signature (Table 3):
  • Ruzicka is the best similarity, followed by the Roberts similarity; both largely outperform the Motyka similarity and, even more so, the Cosine similarity.
  • Manhattan and Canberra outperform all the distance measures. The Intersection and Squared distances are the worst.
  • Neyman-X2 is the best quasi-distance, outperforming the X2 quasi-distance and, by a much larger margin, Separation.
  • The Ruzicka similarity is the best measure overall when considering color moments as the signature, followed by the Manhattan and Canberra distances and the Neyman-X2 quasi-distance.

– For the histogram signature (Table 2):
  • Ruzicka and Motyka are the best similarities and largely outperform the other considered similarities.
  • Sorensen is the best distance, closely followed by the Manhattan distance.
  • Neyman-X2 is the best quasi-distance, followed by the Separation and X2 quasi-distances.
  • Neyman-X2 is the best measure overall when considering the histogram as the signature.

We can also say, according to Table 1, which ranks the measures in terms of their utility value, that the histogram signature is globally better than color moments. According to Table 1, we can also deduce that it is advisable to use the Neyman-X2 quasi-distance when employing the histogram signature and the Ruzicka similarity when employing color moments as the signature.

– Regarding retrieval efficiency, we can clearly say:
  • The histogram signature requires a lot of time compared to color moments.
  • The Squared distance requires the most time, whatever the signature considered. It is followed by the Euclidean distance and perhaps the Sorensen distance and the Jeffrey divergence. The other measures, especially the similarities, require only little time.



Table 1. The ranking of the considered matching measures over color moments and histogram in terms of Utility Value.

Matching Measure           | Utility value
Neyman_X2_HIS              | 388,658
Sorensen_HIS               | 385,828
Manhattan_HIS              | 378,022
Ruzicka_HIS                | 377,785
Motyka_HIS                 | 377,742
Jeffrey_HIS                | 374,286
Intersection_HIS           | 372,516
Mahalanobis_HIS            | 371,728
Cosine_HIS                 | 354,878
X2_HIS                     | 352,606
Roberts_HIS                | 349,123
Soergel_HIS                | 349,059
Ruzicka_Color_Moments      | 348,238
Neyman_X2_Color_Moments    | 347,683
Manhattan_Color_Moments    | 345,577
Canberra_Color_Moments     | 344,18
Roberts_Color_Moments      | 344,081
Jeffrey_Color_Moments      | 339,013
Chebyshev_HIS              | 333,899
X2_Color_Moments           | 333,323
Chebyshev_Color_Moments    | 327,358
Kulczunsky_Color_Moments   | 320,935
Soergel_Color_Moments      | 319,407
Sorensen_Color_Moments     | 318,903
Kulczunsky_HIS             | 318,481
Mahalanobis_Color_Moments  | 310,626
Motyka_Color_Moments       | 308,94
Separation_Color_Moments   | 308,525
Euclidean_Color_Moments    | 304,107
Euclidean_HIS              | 301,278
Cosine_Color_Moments       | 294,642
Canberra_HIS               | 284,872
Separation_HIS             | 269,877
Squared_Color_Moments      | 269,504
Squared_HIS                | 250,77
Intersection_Color_Moments | 214,962



Table 2. The ranking of the considered matching measures in terms of consumed time in the case of the histogram signature.

Matching measure | Consumed time
Ruzicka          | 15,6
Cosine           | 15,6
Intersection     | 15,6
Kulczunsky       | 15,6
Canberra         | 15,6
Separation       | 15,6
Roberts          | 31,2
Motyka           | 31,2
Soergel          | 31,2
Chebyshev        | 31,2
X2               | 31,2
Neyman-X2        | 31,2
Euclidean        | 46,8
Sorensen         | 46,8
Manhattan        | 46,8
Mahalanobis      | 46,8
Jeffrey          | 46,8
Squared          | 62,4

Table 3. The ranking of the considered matching measures in terms of consumed time in the case of the color moments signature.

Matching measure | Consumed time
Ruzicka          | 15,6
Roberts          | 15,6
Motyka           | 15,6
Cosine           | 15,6
Intersection     | 15,6
Sorensen         | 15,6
Kulczunsky       | 15,6
Soergel          | 15,6
Chebyshev        | 15,6
Manhattan        | 15,6
Mahalanobis      | 15,6
Canberra         | 15,6
X2               | 15,6
Neyman-X2        | 15,6
Separation       | 15,6
Jeffrey          | 15,6
Euclidean        | 31,2
Squared          | 46,8



5 Conclusions

In this paper, we have carried out a comparative study of a large number of matching measures, in terms of effectiveness and efficiency, in the context of CBIR. The results, obtained on the Wang database (COREL-1 K), reveal that, from the performance viewpoint, the Ruzicka similarity is the best measure for the color moments signature, while the Neyman-X2 quasi-distance is the best for the histogram signature. From the efficiency viewpoint, the results show that all similarities require little processing time, and even Neyman-X2 needs relatively little processing time. The achieved results lead to addressing the selection issue in the context of matching measures, either for a given submitted query or for a given utilized signature.

References 1. Swain, M.J., Ballard, D.H.: Color indexing. Int. J. Comput. Vis. 7(1), 11–32 (1991) 2. Gong, Y., Chuan, C.H., Xiaoyi, G.: Image indexing and and retrieval using color histograms. Multimedia Tools Appl. 2, 133–156 (1996) 3. Stricker, M., Orengo, M.: Similarity of color images. In: Storage and Retrieval for Image and Video Database III (1995) 4. Pass, G., Zabith, R.: Histogramme refinement for content based image retrieval. In: IEEE Workshop on Applications of Computer Vision, pp. 96–102 (1996) 5. Zhang, D., Lu, G.: Evaluation of similarity measurement for image retrieval. In: Proceedings of the International Conference on Neural Networks and Signal Processing, vol. 2, pp. 928– 931 (2003) 6. Deza, M.M., Deza E.: Encyclopedia of Distance. Springer, Heidelberg (2009) 7. Chan, S.-H.: Comprehensive survey on distance/similarity measures between probability density functions. Int. J. Math. Models Methods Appl. Sci. 1(4), 300–307 (2007) 8. Bloch, I.: On Fuzzy distances and their use in image processing under imprecision. Pattern Recogn. J. 32(11), 1873–1895 (1999) 9. Strehl, A., Ghosh, J., Mooney, R.: Impact of similarity measures on web-page clustering. American Association for Artificial Intelligence Technical Report (2000) 10. Owsinski, J.W.: Machine-part grouping and cluster analysis: similarities, distances, and grouping criteria. Bull. Pol. Acad. Sci. 57(3), 217–228 (2009) 11. Collins, J., Okada, K.: A comparative study of similarity measures for content-based medical image retrieval. In: CLEF (Online Working Notes/Labs/Workshop) (2012) 12. Perlibakas, V.: Distance measures for PCA-based face recognition. Pattern Recogn. Lett. 25(6), 711–724 (2004) 13. Rui, H., Ruger, S., Song, D., Liu, H., Huang, Z.: Dissimilarity measures for content-based image retrieval. In: 2008 IEEE International Conference on Multimedia and Expo., pp. 1365–1368. IEEE (2008) 14. Liu, H., Song, D., Rüger, S., Hu, R., Uren, V.: Comparing dissimilarity measures for content-based image retrieval. In: Li, H., Liu, T., Ma, W.-Y., Sakai, T., Wong, K.-F., Zhou, G. (eds.) AIRS 2008. LNCS, vol. 4993, pp. 44–50. Springer, Heidelberg (2008). doi:10. 1007/978-3-540-68636-1_5 15. Mosbah, M., Boucheham, B.: Distance selection based on relevance feedback in the context of CBIR using the SFS meta-heuristic with one round. Egypt. Inf. J. (2016). Elsevier



16. Voorhees, H., Poggio, T.: Computing texture boundaries from images. Nature 333, 364–367 (1988) 17. Stricker, M., Orengo, M.: Similarity of color images. In: Proceedings of SPIE: Storage and Retrieval for Image and Video Databases, vol. 2420, pp. 381–392 (1995) 18. Smith, J.R.: Integrated spatial and feature image system: retrieval, analysis and compression. Ph.D. thesis. Columbia University (1997) 19. Van Trees, H.L.: Detection, Estimation, and Modulation Theory. Wiley, New York (1971) 20. Rubner, Y.: Perceptual metrics for image database navigation. Ph.D. thesis. Stanford University (1999) 21. http://Wang.ist.psu.edu/docs/related.shtml 22. Babu, G.P., Mehre, B.M., Kankanhalli, M.S.: Color indexing for efficient image retrieval. Multimedia Tools Appl. 1, 327–348 (1995) 23. Kavitha, C., Babu Rao, M., Prabhakara Rao, B., Govardhan, A.: Image retrieval based on local histogram and texture features. (IJCSIT) Int. J. Comput. Sci. Inf. Technol. 2(2), 741– 746 (2011) 24. Fishburn, P.: Non-linear Preference and Utility Theory. Johns Hopkins University Press, Baltimore (1998)

The Evolution of Azuma's Augmented Reality – An Overview of 20 Years of Research

Mafalda Teles Roxo and Pedro Quelhas Brito

LIAAD-INESC TEC, Faculdade de Economia, Universidade do Porto, Rua Dr. Roberto Frias, 4200-464 Porto, Portugal
[email protected]

Abstract. Augmented Reality (AR) is no longer just a gimmick. 50 years after the development of the first head-mounted display, and approaching the 20th anniversary of the first conference dedicated to AR, it is time for a new review on the theme. As such, we present a bibliometric analysis of scientific literature since 1997, using as database the Web of Science. This allowed identifying the most relevant authors, their distribution by subjects, the evolution of publishing by year and the most frequent publications. Keywords: Augmented reality · Bibliometric analysis · Marketing

1 Introduction

The idea suggested by Azuma in 1997 of a technology that allows overprinting 3-D elements from a virtual world onto the physical world, with real-time interaction, is now reaching two decades of existence [2]. What started around 1960 as a see-through head-mounted display - "The Ultimate Display" [38] - has evolved into a medium that is now established among the scientific community [41] and present in our everyday lives, in many fields such as gaming, retail, architecture, health, and many others [3, 8]. Bibliometric analysis (BA) is "the application of mathematics and statistical methods to books and other media of communication" [32]. Therefore, among other aspects, this is a suitable method to assess the quantity and quality of the work developed in specific disciplines over the years, and their most influential authors [12]. Hence, BA offers researchers the means to assess the state of the art in a particular discipline, as well as a more efficient identification of new areas of research [35]. This article has the following structure: Sect. 2 presents the link between the literature related to augmented reality and marketing; Sect. 3 presents the methodology; Sect. 4 shows the results and discussion; and finally Sect. 5 presents the study limitations and suggestions for future research.


2 Augmented Reality, a Marketing-Oriented Perspective

Augmented reality (AR) is a highly versatile technological application. Consequently, we have seen its application in fields such as surgery [40], marketing [19], journalism [29], tourism [6], and the treatment of phobias [34], to name a few. This technology is contributing to improving how we interact and represent our knowledge. It promotes productivity, since it creates more intelligent, context-aware and transparent immersive experiences for people, business and things [15]. AR is also impacting the economy, as it is expected that the revenues earned by the AR apps market will grow tenfold, from $515 million in 2016 to $5.7 billion by 2021 (in line with the forecast from 2015) [20, 21]. These forecasts are also fueled by changes in the marketing dynamics we are witnessing in some technological brands. Mergers and acquisitions illustrate this, such as the integration of Metaio¹ (an AR developer) into Apple and the rumors of Snap acquiring the Israeli AR startup Cimagine Media². At the same time, we are witnessing an increasing use of AR as a medium with application in various areas. In marketing, it emerges as a means of communication between brands and consumers. Consumers interacting with AR are increasingly sophisticated and tech-savvy, demanding that brands and companies provide them with experiences, not limiting their satisfaction to the utilitarian role of the offered products/services [4, 30, 31]. In addition, we observe a more complex and comprehensive consumer-brand interaction, using a multiple touch points approach [28]. Accordingly, AR emerges as a suitable medium allowing brands to interact and communicate efficiently (in due time, in the right context and with the adequate information) with their stakeholders, offering new experiences [5, 8, 10, 14, 27]. This explains the inevitability of the connection between AR and marketing, since this technological application fills the need for experience demanded by consumers. Because of the aforementioned 'phenomenon', major brands are currently using augmented reality solutions in their marketing strategies (e.g. IKEA's catalogue³, Ralph Lauren's presentation of the Polo for Women Spring 2015 collection⁴, among others) [17, 37]. In addition, there is also the influence of AR on academic research, with special emphasis on how AR affects consumer behavior as a means of communication [18, 19].

¹ "Apple buys German augmented-reality software maker Metaio": http://www.reuters.com/article/us-apple-metaio-idUSKBN0OE1RO20150529. Accessed on January 3, 2017.
² "Snap reportedly acquired augmented reality startup Cimagine Media for up to $40 million": http://venturebeat.com/2016/12/24/snap-reportedly-acquired-augmented-reality-startupcimagine-media-for-up-to-40-million/. Accessed on January 3, 2017.
³ IKEA Catalogue: https://www.youtube.com/watch?v=dwt-mgxq_ao&nohtml5=False. Retrieved April 5, 2016.
⁴ The Official Ralph Lauren 4D Holographic Water Projection: https://www.youtube.com/watch?v=ugBbTiBmZ2g. Retrieved April 5, 2016.

3 Methodology

Bibliometric analysis (BA) has the ultimate goal of providing a quantitative assessment of the state of the art of the scientific literature produced in a given field of knowledge, being a method applicable to any area of research [1, 11]. This analysis allows a quantitative evaluation of the performance of scientific publications, such as the number of publications per author and citation analysis, and also the detection of patterns in the literature [13, 23, 33]. The main questions motivating our research are: (1) What is the state of the art after 20 years of research in AR connected to marketing? (2) Which authors contributed the most towards this development? (3) Where did this knowledge emerge from? And (4) What are the main streams for further research? Thus, the main objective for the use of bibliometric analysis in the context of AR and marketing is to quantify the data concerning the evolution of the number of documents published per year; the type of document (article published in a journal or conference paper – including Springer's Lecture Notes); the main authors and their productivity; and the research areas [9, 36]. Our line of research was developed based on the work by de Bakker et al., Kim et al., and Schmitz et al. [9, 22, 36], from which we built our research strategy. Stages 1 and 2 of the process correspond to document collection and the production of a database for analysis; stage 3 comprises that analysis (see Fig. 1).

Fig. 1. Research strategy

3.1 Sampling The bibliographic database used was the Web of Science (WoS), where we searched for the following keywords: "augmented reality", "marketing", "consumer behavior",



"consumer psychology" and "business". The rationale that motivated this choice of keywords is described above, and it is related to the emergence of the use of AR applications in business, economics and management [15, 21], and to the so-called research on new media [25, 28]. The search was further refined according to the following restrictions: the documents should be written in English, the type of document should be proceedings papers and articles, and the time interval considered was from 1997 until November 1, 2016, yielding a total of 156 documents. In line with established practice in BA [7, 16], two researchers carried out a review of titles, abstracts and keywords in order to eliminate documents whose subject diverged from the goals of the present research, which reduced the database to 134 documents. To perform the statistical analysis of the data we used Microsoft® Excel for Mac [26].
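The counting performed in stage 3 (documents per year and per document type) can also be reproduced with a few lines of code. The sketch below is a hypothetical illustration assuming a tab-delimited Web of Science export with the 'PY' (publication year) and 'DT' (document type) field tags; the authors themselves used Microsoft Excel [26], so this is only an alternative way of obtaining the same counts.

import csv
from collections import Counter

def counts_from_wos_export(path):
    """Count documents per publication year and per document type.

    Assumes a tab-delimited Web of Science export with 'PY' and 'DT' columns;
    adjust the field names to the actual file if they differ.
    """
    per_year, per_type = Counter(), Counter()
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle, delimiter="\t"):
            per_year[row.get("PY", "unknown")] += 1
            per_type[row.get("DT", "unknown")] += 1
    return per_year, per_type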

4 Results and Discussion

After analysing the data resulting from 20 years of research on augmented reality in a marketing context, we verified that there was a time span of three years between the publication of the survey by Azuma [2] and the emergence of the first article on the subject. In addition, the large increases in interest in the subject took place between 2010 and 2011, and between 2013 and 2015, which were very fruitful years (concerning 2016 it is still too early for conclusions) (see Fig. 2). This may be because, since 2010, the Marketing Science Institute (MSI) has emphasized the importance of understanding consumer manifestations in the different media [24, 25].

Fig. 2. Evolution of documents published per year

With regard to the type of documents, we found that most of the scientific knowledge produced in this area is published in the form of conference papers (see Fig. 3). This result seems to indicate that this is an area essentially dependent on technological developments and, as such, its life cycle does not fit the timings journals require to publish articles. Nevertheless, analyzing the 10 most cited articles in the database,



we identified that 8 in 10 were published in top journals (journals rated Q1 by Scimago Journal & Country Rank in 2015), one was published in a Q2 journal and only one was a conference proceeding.

Fig. 3. Percentage of articles vs. conference proceedings

Concerning the authors, this is an area of knowledge where, on average, three authors write each article, although there are articles with 26 and with 12 authors. About 36% of the articles have more than three authors, and another 36% have one or two authors. The most productive authors are Anna Javornik (with 2 articles and 1 proceedings paper) and Bernard Kress (with 3 published proceedings papers). It should be noted that only the second author was cited (one of his proceedings papers was cited 8 times), while the first was not. This derives from the dates of the documents: the ones from Kress are from before 2014, while Javornik published her articles in journals (Q1 and Q2) more recently, in 2016. Analyzing the distribution of publications by topic, there is a predominance of publications in marketing and business, whereas consumer psychology (CP) and consumer behavior (CB) achieve only one publication each (Fig. 4). This is because the first two topics are broad-ranging, compared to CP and CB, and because the articles classified as CP and CB also belong to the topic "marketing".

Fig. 4. Distribution of documents per research keyword

It is also worth mentioning that the major breakthroughs found in the existing literature are concerned with technical aspects of augmented reality. The research



area relating AR to the Business, Economics and Management categories is still in an embryonic state; therefore, it is important to emphasize what is known and to organize such findings in a more structured way.

5 Limitations and Future Research

As this study pioneers the use of bibliometric analysis addressing both augmented reality and marketing, there is still a long path to follow; consequently, the goal of this study is specifically to assess the state of the art, rather than to conduct an exhaustive study of the field of augmented reality. This study used only one single database, the Web of Science, which in itself, according to [39], has a limited scope when compared, for instance, with Scopus (whose results are wider and focus more on specific technical aspects). This suggests that further research should employ other databases, such as ABI/INFORM and Google Scholar. Another aspect that may bring new insights is the use and establishment of the ResearchGate platform as a source of dissemination of scientific knowledge. Concerning the methods employed in a bibliometric analysis, we underscore that it allows citation and co-citation analysis, as well as textual analysis, both in the form of content analysis and in the form of text mining, through data mining algorithms.

Acknowledgements. Project "TEC4Growth - Pervasive Intelligence, Enhancers and Proofs of Concept with Industrial Impact/NORTE-01-0145-FEDER-000020" is financed by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, and through the European Regional Development Fund (ERDF).

References 1. Andrés, A.: Measuring Academic Research: How to Undertake a Bibliometric Study. Chandos Publishing, Oxford (2009) 2. Azuma, R.: A survey of augmented reality. Presence Teleoperators Virtual Environ. 6(4), 355–385 (1997) 3. Berryman, D.R.: Augmented reality: a review. Med. Ref. Serv. Q. 31(2), 212–218 (2012) 4. Boswijk, A., Thijssen, T., Peelen, E.: The Experience Economy: A New Perspective. Pearson Education Benelux, Amsterdam (2007) 5. Carmigniani, J., Furht, B., Anisetti, M., Ceravolo, P., Damiani, E., Ivkovic, M.: Augmented reality technologies, systems and applications. Multimedia Tools Appl. 51(1), 341–377 (2011) 6. Chung, N., Han, H., Joun, Y.: Tourists’ intention to visit a destination: the role of augmented reality (AR) application for a heritage site. Comput. Hum. Behav. 50, 588–599 (2015) 7. Costa, E., Soares, A.L., de Sousa, J.P.: Information, knowledge and collaboration management in the internationalisation of SMEs: a systematic literature review. Int. J. Inf. Manag. 36(4), 557–569 (2016) 8. Craig, A.B.: Understanding Augmented Reality: Concepts and Applications. Elsevier, Waltham (2013)



9. de Bakker, F.G.A., Groenewegen, P., Den Hond, F.: A bibliometric analysis of 30 years of research and theory on corporate social responsibility and corporate social performance. Bus. Soc. 44(3), 283–317 (2005) 10. Deloitte. Virtual reality: A billion dollar niche (2016) 11. Diodato, V.P.: Dictionary of Bibliometrics. Routledge, London (1994) 12. Ferreira, M.P., Reis, N.R., de Almeida, M.I.R., Serra, F.R.: International business research: understanding past paths to design future research directions. In: Devinney, T.M., Pedersen, T., Tihanyi, L. (eds.) Philosophy of Science and Meta-Knowledge in International Business and Management (Advances in International Management), vol. 26, pp. 200–330. Emerald Group Publishing Limited (2013) 13. Fetscherin, M., Toncar, M.: The effects of the country of brand and the country of manufacturing of automobiles. Int. Mark. Rev. 27(2), 164–178 (2010) 14. Furht, B.: Handbook of Augmented Reality. Springer, Florida (2011) 15. Gartner. Hype Cycle for Emerging Technologies (2016) 16. Guilera, G., Barrios, M., Gómez-Benito, J.: Meta-analysis in psychology: a bibliometric study. Scientometrics 94(3), 943–954 (2013) 17. Javornik, A.: Classifications of augmented reality uses in marketing. In: Proceeding of the 2014 ISMAR - IEEE International Symposium on Mixed and Augmented Reality - Media, Arts, Social Science, Humanities and Design 2014, pp. 67–68 (2014) 18. Javornik, A.: “It’s an illusion, but it looks real!” Consumer affective, cognitive and behavioral responses to augmented reality applications. J. Mark. Manag. 32(9–10), 987–1011 (2016) 19. Javornik, A., Rogers, Y., Moutinho, A.M., Freeman, R.: Revealing the shopper experience of using a “magic mirror” augmented reality make - up application. In: Proceedings of the 2016 ACM Conference on Designing Interactive Systems, pp. 871–882 (2016) 20. Juniper. Augmented Reality: Consumer, Enterprise & Vehicles 2015–2019 (2015) 21. Juniper. Augmented Reality: Developer & Vendor Strategies 2016–2021 (2016) 22. Kim, K., Hayes, J.L., Avant, J.A., Reid, L.N.: Trends in advertising research: a longitudinal analysis of leading advertising, marketing, and communication journals, 1980 to 2010. J. Advertising 43(3), 296–316 (2014) 23. Leefmann, J., Levallois, C., Hildt, E.: Neuroethics 1995–2012. a bibliometric analysis of the guiding themes of an emerging research field. Front. Hum. Neurosci. 10, 336 (2016) 24. Marketing Science Institute. Research Priorities 2010–2012. Marketing Science Institute, Cambridge (2010) 25. Marketing Science Institute. Research Priorities 2014–2016. Marketing Science Institute, Cambridge (2014) 26. Microsoft. Microsoft® Excel for Mac. (2016) 27. Olsson, T., Lagerstam, E., Kärkkäinen, T., Väänänen-Vainio-Mattila, K.: Expected user experience of mobile augmented reality services: a user study in the context of shopping centres. Pers. Ubiquit. Comput. 17(2), 287–304 (2013) 28. Parise, S.S., Guinan, P.J., Kafka, R.: Solving the crisis of immediacy: how digital technology can transform the customer experience. Bus. Horiz. 59(4), 411–420 (2016) 29. Pavlik, J.V., Bridges, F.: The emergence of Augmented Reality (AR) as a storytelling medium in journalism. Journalism Commun. Monogr. 15(1), 4–59 (2013) 30. Pine II, B.J., Gilmore, J.H.: Welcome to the experience economy. Harvard Bus. Rev. 76(4), 97–105 (1998) 31. Pine II, B.J., Gilmore, J.H.: The Experience Economy. Harvard Business Review Press, Boston (2011). Updated Edition 32. Pritchard, A.: Statistical bibliography or bibliometrics. J. 
Documentation 25, 348–369 (1969) 33. Reuters, T.: Whitepaper Using Bibliometrics, vol. 12. Thomson Reuters, Philadelphia (2008)

266

M.T. Roxo and P.Q. Brito

34. Riva, G., Baños, R.M., Botella, C., Mantovani, F., Gaggioli, A.: Transforming experience: the potential of augmented reality and virtual reality for enhancing personal and clinical change. Front. Psychiatry 7, 1–14 (2016) 35. Sainaghi, R., Phillips, P., Zavarrone, E.: Performance measurement in tourism firms: a content analytical meta-approach. Tourism Manag. 59, 36–56 (2017) 36. Schmitz, A., Urbano, D., Dandolini, G.A., de Souza, J.A., Guerrero, M.: Innovation and entrepreneurship in the academic setting: a systematic literature review. Int. Entrepreneurship Manag. J. (2016). doi:10.1007/s11365-016-0401-z 37. Scholz, J., Smith, A.N.: Augmented reality: designing immersive experiences that maximize consumer engagement. Bus. Horiz. 59(1), 149–161 (2016) 38. Sutherland, I.E.: The ultimate display. Multimedia: from Wagner to virtual reality. In: Proceedings of IFIP Congress, pp. 506–508 (1965) 39. van Raan, A.F.J.: Advances in bibliometric analysis: research performance assessment and science mapping. In: Bibliometrics: Use and Abuse in the Review of Research Performance, pp. 17–28. Portland Press, London (2014) 40. Weitman, E., Saleh, M., Marescaux, J., Martin, T.R., Ballantyne, G.H.: Robotic colorectal surgery: evolution and future. Semin. Colon Rectal Surg. 27(3), 121–129 (2016) 41. Zhou, F., Dun, H.B.L., Billinghurst, M.: Trends in augmented reality tracking, interaction and display: a review of ten years of ISMAR. In: Proceedings of the 7th IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2008, pp. 193–202 (2008)

System-on-Chip Evaluation for the Implementation of Video Processing Servers

Ghofrane El Haj Ahmed1(B), Felipe Gil-Castiñeira1, Enrique Costa-Montenegro1, and Pablo Couñago-Soto1,2

1 AtlantTIC Research Center for Information and Communication Technologies, University of Vigo, Vigo, Spain
[email protected], {xil,kike,pablo}@gti.uvigo.es
2 Gradiant, Vigo, Spain
[email protected]

Abstract. Nowadays, users are demanding an increasing amount of multimedia content (watching videos, streaming contents in real time, using video-conferences for communication, etc.), but it is usually necessary to perform advanced operations on those video streams to adapt them to the size of the user's terminal screen or the state of the network, or to provide different video processing services. In the world of telephony, Media Servers (MSs) perform this kind of operations for large amounts of users. To improve their performance, hardware acceleration devices, such as Graphics Processing Units (GPUs), are typically used. Nevertheless, current Systems-on-Chip (SoC) devices include high performance embedded GPUs that can be used to implement a Media Server, but it is necessary to evaluate if such devices are able to perform the typical video operations in real time.

Keywords: Graphics Processing Unit · Video processing · System-on-Chip

1 Introduction

Nowadays, video processing technologies are essential for different industrial areas. Nevertheless, the requirements in terms of volume of information, speed or latency make it difficult to implement these operations only in software, so specialized processors have to be used to accelerate the different tasks. Discrete GPUs are being used to perform heavy video processing tasks, but they can be expensive and "power hungry". In contrast, SoCs are usually designed for smartphones or other battery powered devices, and therefore energy efficiency is a key requirement. Several researchers have studied power consumption and modeled the power of embedded GPUs and other hardware accelerated blocks [1–3]. Furthermore, SoC devices integrate most of the essential elements that are required to operate, making it possible to design small boards


that need less power and space. Thereby, it is possible to use a SoC to accelerate different operations that are typically performed by MSs, such as video encoding, decoding, scaling, etc. MSs are usually installed at telecom operators' facilities, and if we could reduce their size and power consumption, it would be possible to reduce their operational costs and to move those MSs and other video services to the "edge" of the network [4], making it possible to minimize the latency in the transmission or reception of video, even for high quality video (high resolution, high framerate, etc.) for the "tactile" Internet [5] (applications that require extremely low latencies, for example, telepresence or the remote control of a robot). Nevertheless, in order to design and implement a new architecture of MSs based on SoCs, it is necessary to evaluate different SoC alternatives to know if they are able to handle, in real time, the different video processing operations that a MS usually performs. In this paper, we evaluate different SoCs with their embedded GPUs and other hardware accelerated blocks. In order to perform the evaluation, we use three different frameworks that are typically used to process video: the GStreamer multimedia framework [6], MPlayer [7], and FFmpeg [8]. This paper is organized as follows: Sect. 2 shows a review of research work related to the study of the performance of GPUs for video processing. Section 3 presents the boards and the frameworks used, the proposed experiments, the results obtained and a comparison of the performance of the different boards. Finally, Sect. 4 presents the conclusions and the future work.

2 Related Work

Several researchers have studied how GPUs can be used to accelerate different video operations, such as encoding or face detection, in different hardware platforms. Some researchers used desktop devices to perform such operations. For instance, Shen et al. [9] demonstrated the importance of using GPUs in order to help the CPU to accelerate video decoding and they presented an architecture wherein the CPU and the GPU are pipelined. Katsigiannis et al. [10] studied and examined the performance benefits of using the GPU over the CPU for an experimental video compressing algorithm. In their tests, they employed a desktop CPU and GPU to compare between their performance and they proved that the GPU approach can be 21.303 times faster than the CPU for the decoding process, while for encoding the speedup ratio reached 11.048. Lin [11] presented a multi-pass algorithm to accelerate the motion estimation on video encoding, demonstrating that the GPU can accelerate the motion estimation twice as fast as only the CPU. Moriyoshi et al. [12] proposed a relaxation approach of interMB dependencies and a frame-pipelining to fully utilize many processor cores on GPU and their results proved that their GPU accelerated H.264 encoder ran more than 10 times faster than the CPU. Other researchers have also studied the performance of different embedded devices to perform video operations. Darling et al. [13] tested the performance of hardware acceleration with GStreamer on OMAP35x, but only using one type of hardware acceleration with only one video codec (MPEG2). Cheng et al. [14] performed a comparison between the execution time and the power consumption


of a mobile GPU versus a mobile CPU on a face recognition system, proving that a Tegra GPU can be 4.25 times faster than the CPU implementation. They also did a comparison with a desktop GPU (NVIDIA GPU GeForce 8800 + Intel i7). Rister et al. [15] introduced an implementation of the Scale-Invariant Feature Transform (SIFT) on mobile devices in order to evaluate their GPUs. It was tested using four mobile devices against an optimized CPU version running in a single thread. Their results showed that a heterogeneous system (CPU+GPU) can be from 4.7 to 7.0 times faster than the CPU alone. The results presented in these papers that studied different embedded devices do not provide enough information to conclude if the proposed SoCs are suitable to implement a MS or to be used by any other video processing application (e.g. they do not analyse the typical video operations performed by a MS). Therefore, it is necessary to study if the processing capabilities of such architectures are enough to implement a MS or any other video processing service for a telecommunication network.

3 Experiments and Results

In this section, we evaluate four embedded boards with different SoCs (and their embedded GPUs and hardware acceleration blocks): MIPS Creator Ci20, UDOO QUAD, Raspberry Pi 2 model B+, and NVIDIA Jetson TK1. We also evaluated a high end GPU server in order to compare it with the boards. Table 1 summarizes the main features of each board and the GPU server. For the different tests, we used GStreamer and FFmpeg to implement different video processing operations. GStreamer is not supported by the Creator Ci20 board, so we had to use MPlayer to evaluate its video processing capabilities. Unfortunately, MPlayer only supports video decoding. GStreamer and FFmpeg already supported the hardware acceleration capabilities of the tested boards. To evaluate the different SoCs and determine whether they are suitable for implementing a MS or a similar service that requires video processing capabilities, we have measured the performance (in terms of frames per second or FPS) of a group of typical video operations: decoding, encoding, transcoding and scaling. The results of the evaluation are presented in Table 2, which includes the FPS using only the CPU, the FPS using the CPU and the GPU, and the speedup ratio with the use of the GPU. The results that reached real-time (24 frames per second) are in bold. For the tests we used the same video sequence (Big Buck Bunny video sequences [16]) with three different resolutions (480p, 720p and 1080p). The videos were usually encoded or decoded to H.264 (as it is the dominant codec with 72% of all encoding activity [17]). The frame rate of all videos is 24 FPS and their duration is 596.4 s. Note that if a board does not support a particular video codec or resolution, we do not consider it for the results. All the tests were repeated five times, so we present the average value of the measurements.
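As a purely illustrative aid (not part of the original test setup), the following sketch shows the arithmetic behind the reported figures: each test is repeated five times, the mean FPS is reported, and the speedup is the ratio between the accelerated and CPU-only means.

```python
# Illustrative arithmetic only; the measured values come from Table 2.
REAL_TIME_FPS = 24  # the sequences run at 24 FPS, so >= 24 FPS means real time

def mean_fps(samples: list[float]) -> float:
    """Average of the five repetitions of one test."""
    return sum(samples) / len(samples)

def speedup(cpu_runs: list[float], gpu_runs: list[float]) -> float:
    """Ratio between the hardware-accelerated mean and the CPU-only mean."""
    return mean_fps(gpu_runs) / mean_fps(cpu_runs)

# For example, with the Jetson TK1 1080p decoding figures from Table 2
# (133 FPS on the CPU, 188 FPS with the GPU), speedup([133]*5, [188]*5)
# returns roughly 1.41, and 188 >= REAL_TIME_FPS.
```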


Table 1. Main features of the boards

| Features | MIPS Creator Ci20 | Raspberry Pi 2 model B+ | UDOO QUAD | NVIDIA Jetson TK1 | GPU server |
|---|---|---|---|---|---|
| SoC | Ingenic JZ4780 | Broadcom BCM2835 | Freescale i.MX 6 | Tegra K1 SOC | - |
| CPU | Dual 1,2 GHz XBurst MIPS32 little endian | 700 MHz ARMv6k | ARM Cortex-A9 Quad core 1 GHz | NVIDIA 4-Plus-1 Quad-Core ARM Cortex-A15 CPU | Intel Xeon Processor E3-2620 v3 |
| GPU | SGX540 | Broadcom VideoCore IV | Vivante GC 2000 + Vivante GC 355 + Vivante GC 320 | NVIDIA Kepler GPU with 192 CUDA Cores | NVIDIA Geforce GTX 980 |
| RAM | 1 GB | 1 GB | 1 GB | 2 GB | 32 GB |
| Size | 90 ∗ 95 mm | 85.60 ∗ 56.5 mm | 110 ∗ 85 mm | 133 ∗ 133 mm | - |
| Price | $65 | $35 | $135 | $192 | $3022 |
| Power supply | 5 W | 10 W | 24 W | 60 W | 1600 W |

3.1 Decoding Test

This subsection presents the results of the video decoding tests with the different boards. First, we decoded the H.264 video on each board with GStreamer and measured the speed without enabling the GPU. Then we enabled the hardware acceleration features and performed the measurements again. We calculated the average frames per second in each test. The tests show that NVIDIA Jetson TK1 is the embedded board that achieves the highest framerate among the tested, and both the CPU and the GPU exceed real time decoding requirements, i.e. 24 FPS. Besides, its GPU performs 2.32 times better than the CPU when using the resolution 480p, 1.67 in the case of 720p and 1.41 for 1080p. The UDOO QUAD can also decode the video at real time with the three resolutions. Using hardware acceleration the decoding speed is better than using only its CPU, between 1.46 and 1.61 times faster. The CPU of the Raspberry Pi 2 reaches real time only for 480p and 720p videos, but the GPU achieves real time also for 1080p. The Creator Ci20 board requires the use of the GPU to decode the 480p and 720p videos in real time, but it is not able to decode the 1080p resolution video in real time. The NVIDIA Jetson TK1 achieves the best decoding speed for the different video resolutions. If we compare the embedded boards with the GPU server the results are clearly inferior. Nevertheless, almost all the studied boards can decode high quality video in real time.
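For illustration, a decoding run of this kind could be driven as sketched below. The software decoder element (avdec_h264) is a standard GStreamer element, but the hardware decoder name differs per board (omxh264dec is used here only as an assumed example), and the exact pipelines used in the tests are not stated in this paper.

```python
# A minimal sketch, assuming gst-launch-1.0 is installed; decoder element
# names are board-specific assumptions, not values taken from the paper.
import shlex
import subprocess
import time

TOTAL_FRAMES = int(24 * 596.4)  # 24 FPS over the 596.4 s test sequence

def decode_fps(video: str, decoder: str) -> float:
    """Decode the whole file as fast as possible, discarding the frames."""
    pipeline = (
        f"gst-launch-1.0 filesrc location={video} ! qtdemux ! h264parse "
        f"! {decoder} ! fakesink sync=false"
    )
    start = time.monotonic()
    subprocess.run(shlex.split(pipeline), check=True, capture_output=True)
    return TOTAL_FRAMES / (time.monotonic() - start)

# cpu_fps = decode_fps("bbb_480p.mp4", "avdec_h264")   # software decoding
# gpu_fps = decode_fps("bbb_480p.mp4", "omxh264dec")   # assumed hardware element
```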


Table 2. Results of the tests (frames per second)

| Test | | MIPS Creator Ci20 | Raspberry Pi 2 model B+ | UDOO QUAD | NVIDIA Jetson TK1 | GPU server |
|---|---|---|---|---|---|---|
| Decode 480p | CPU | 15 | 69 | 153 | 351 | 2079 |
| | CPU+GPU | 37 | 219 | 247 | 813 | 4011 |
| | Speedup | 2.47 | 3.17 | 1.61 | 2.32 | 1.93 |
| Decode 720p | CPU | 10 | 33 | 85 | 242 | 1101 |
| | CPU+GPU | 26 | 106 | 127 | 403 | 1554 |
| | Speedup | 2.6 | 3.21 | 1.49 | 1.67 | 1.41 |
| Decode 1080p | CPU | 5 | 16 | 41 | 133 | 530 |
| | CPU+GPU | 13 | 49 | 60 | 188 | 922 |
| | Speedup | 2.6 | 3.06 | 1.46 | 1.41 | 1.74 |
| Encode 480p | CPU | - | 12 | 16 | 21 | 227 |
| | CPU+GPU | - | 147 | 183 | 310 | 541 |
| | Speedup | - | 12.25 | 11.44 | 14.76 | 2.38 |
| Encode 720p | CPU | - | 3 | 4 | 9 | 148 |
| | CPU+GPU | - | 14 | 15 | 92 | 315 |
| | Speedup | - | 4.67 | 3.75 | 10.22 | 2.13 |
| Encode 1080p | CPU | - | 1 | 3 | 4 | 85 |
| | CPU+GPU | - | 6 | 8 | 29 | 205 |
| | Speedup | - | 6 | 2.67 | 7.25 | 2.41 |
| Transcode MPEG-4 to H.264 720p | CPU | - | 4 | 7 | 14 | 155 |
| | CPU+GPU | - | 43 | 33 | 155 | 617 |
| | Speedup | - | 10.75 | 4.71 | 11.07 | 3.98 |
| Transcode MPEG-4 to H.264 1080p | CPU | - | 1 | 3 | 8 | 80 |
| | CPU+GPU | - | 19 | 16 | 72 | 285 |
| | Speedup | - | 19 | 5.33 | 9 | 3.56 |
| Transcode VP8 to H.264 720p | CPU | - | 4 | 6 | 5 | 157 |
| | CPU+GPU | - | 24 | 30 | 74 | 601 |
| | Speedup | - | 6 | 5 | 14.80 | 3.83 |
| Transcode VP8 to H.264 1080p | CPU | - | 1 | 3 | 3 | 81 |
| | CPU+GPU | - | 10 | 15 | 85 | 268 |
| | Speedup | - | 10 | 5 | 28.33 | 3.31 |
| Scale 720p to 480p | CPU | - | 7 | 12 | 29 | 328 |
| | CPU+GPU | - | 34 | 43 | 291 | 639 |
| | Speedup | - | 4.86 | 3.58 | 10.03 | 1.95 |
| Scale 1080p to 480p | CPU | - | 6 | 10 | 27 | 254 |
| | CPU+GPU | - | 24 | 25 | 178 | 298 |
| | Speedup | - | 4 | 2.5 | 6.59 | 1.17 |

3.2 Encoding Test

An uncompressed 24 FPS YUV raw video was encoded to H.264 and we measured FPS achieved for three different resolutions (480p, 720p and 1080p). All the boards can encode to 480p in real time if the hardware acceleration features are enabled, where the performance is increased 11.44 times for the Raspberry Pi 2, 12.25 times for the UDOO QUAD and 14.76 times for the NVIDIA Jetson TK1. Nevertheless, Raspberry Pi 2 and UDOO QUAD do not


reach real time for 720p and 1080p. Only the NVIDIA Jetson TK1 can encode those resolutions in real time when using hardware acceleration, where encoding is 10.22 and 7.25 times faster. In the case of the GPU server, it encodes the 1080p video at 205 FPS, which is 7 times faster than the NVIDIA Jetson TK1 board. So we can see that hardware acceleration is not necessary to decode videos on the different embedded boards, but it is essential for encoding in real time.
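As a sketch of how such an encoding measurement can be scripted with FFmpeg (the paper does not give its exact command lines), the raw 24 FPS YUV sequence is read and encoded to H.264 once with the software x264 encoder and once with a hardware encoder; the hardware encoder name below (h264_omx) is an assumption for illustration only.

```python
# Hedged sketch: encoder names and file names are assumptions, not taken
# from the paper.
import subprocess
import time

RAW_INPUT = ["-f", "rawvideo", "-pix_fmt", "yuv420p", "-s", "1280x720", "-r", "24"]
TOTAL_FRAMES = int(24 * 596.4)

def encode_fps(raw_file: str, encoder: str) -> float:
    cmd = ["ffmpeg", "-y", *RAW_INPUT, "-i", raw_file,
           "-c:v", encoder, "-f", "null", "-"]
    start = time.monotonic()
    subprocess.run(cmd, check=True, capture_output=True)
    return TOTAL_FRAMES / (time.monotonic() - start)

# software_fps = encode_fps("bbb_720p.yuv", "libx264")
# hardware_fps = encode_fps("bbb_720p.yuv", "h264_omx")  # assumed encoder name
```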

3.3 Transcoding Test

This operation consists in converting a video from an original format to a new format to adapt it to the particular characteristics of the network or the display. In this test, we transcoded MPEG-4 and VP8 files to H.264. It is only possible to reach real time on the UDOO QUAD board for 720p videos using hardware acceleration. The Raspberry Pi 2 board can transcode 720p MPEG-4 videos to H.264 in real time only if the hardware acceleration is enabled. The experimental results for the NVIDIA Jetson TK1 board show that it can perform transcoding operations in real time for the different resolutions if we enable the hardware acceleration. Regarding the GPU server, it can transcode in real time in all tests even if the GPU is not enabled. Therefore, in the case of the embedded boards it is necessary to enable the hardware acceleration in order to transcode the videos in real time. The results are similar when using the VP8 codec instead of MPEG-4.
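A transcoding run combines a decode and an encode in a single invocation; the sketch below shows the FFmpeg form of such a conversion. Whether the decode and encode stages are hardware-offloaded depends on the board, and the hardware encoder name is again only an assumption.

```python
# Minimal transcoding sketch; file names and the hardware encoder are assumptions.
import subprocess

def transcode(src: str, dst: str, encoder: str = "libx264") -> None:
    """Decode src (MPEG-4 or VP8) and re-encode it to H.264 in dst."""
    subprocess.run(["ffmpeg", "-y", "-i", src, "-c:v", encoder, "-an", dst],
                   check=True)

# transcode("bbb_720p_mpeg4.avi", "bbb_720p_h264.mp4")             # CPU only
# transcode("bbb_720p_vp8.webm", "bbb_720p_h264.mp4", "h264_omx")  # assumed HW encoder
```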

3.4 Video Scaling Test

Video scaling consists in resizing a video into a new resolution. In this experiment we downscaled an H.264 compressed video, from 1080p and 720p to 480p. The UDOO QUAD board can scale from both 1080p and 720p to 480p in real time using hardware acceleration. Nevertheless, with the CPU the frame rate is lower than 24 frames per second, so it does not achieve real time. The Raspberry Pi 2 can scale H.264 videos from 720p to 480p in real time using the GPU. It also just reaches real time in the 1080p to 480p scaling test. However, the Jetson TK1 achieves real time even using only the CPU. The GPU server achieves 639 FPS in the test that scales from 720p and 298 FPS from 1080p.
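The downscaling operation can be expressed with FFmpeg's scale filter, as sketched below; the paper only states the source and target resolutions, so the exact filter arguments are an assumption (the -2 keeps the aspect ratio and an even width).

```python
# Minimal downscaling sketch; file names are placeholders.
import subprocess

def downscale_to_480p(src: str, dst: str) -> None:
    subprocess.run(["ffmpeg", "-y", "-i", src,
                    "-vf", "scale=-2:480", "-c:v", "libx264", dst],
                   check=True)

# downscale_to_480p("bbb_1080p.mp4", "bbb_480p.mp4")
```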

4 Conclusions

Our experimental results show that it is possible to perform complex real time video operations (such as encoding, decoding, scaling, or transcoding) with embedded boards. Nevertheless, hardware acceleration is almost always necessary, and only some boards are able to reach real time. Therefore, we can say that it is possible to use a System-on-Chip to implement a Media Server for telecommunication networks. Even though the SoCs tested in this research


presented worse results than the GPU server, their lower price and power consumption make them a feasible solution for implementing a Media Server that can be deployed at different points of the network (for example, at the edge in order to reduce latencies). As future work, it will be necessary to analyse how many devices are needed to support the typical number of users that can connect to a Media Server in different scenarios, and to create a metric that can be used to decide when the SoC based solution is better than a traditional approach in terms of cost and power efficiency.

Acknowledgment. This work was funded by the European Commission under the Erasmus Mundus E-GOV-TN project (E-GOV-TN Erasmus Mundus Action 2 – Lot 6 – Project no. 2013-2434).

References

1. Huang, C.W., Chung, Y.A., Huang, P.S., Tsao, S.L.: High-level energy consumption model of embedded graphic processors. In: IEEE International Conference on Digital Signal Processing (DSP), pp. 105–109 (2015)
2. Vatjus-Anttila, J.M., Koskela, T., Hickey, S.: Power consumption model of a mobile GPU based on rendering complexity. In: IEEE Seventh International Conference on Next Generation Mobile Apps, Services and Technologies (NGMAST), pp. 210–215 (2013)
3. Mochocki, B., Lahiri, K., Cadambi, S.: Power analysis of mobile 3D graphics. In: Sixth Conference on Design, Automation and Test in Europe (DATE), pp. 502–507 (2006)
4. Beck, M.T., Feld, S., Fichtner, A., Linnhoff-Popien, C., Schimper, T.: ME-VoLTE: network functions for energy-efficient video transcoding at the mobile edge. In: 18th International Conference on Intelligence in Next Generation Networks (ICIN), pp. 38–44 (2015)
5. Fettweis, G.P.: The tactile Internet: applications and challenges. IEEE Veh. Technol. Mag. 9(1), 64–70 (2014)
6. GStreamer (2016). https://gstreamer.freedesktop.org/
7. MPlayer (2016). http://www.mplayerhq.hu/
8. FFmpeg: A complete, cross-platform solution to record, convert and stream audio and video (2016). http://ffmpeg.org/
9. Shen, G., Gao, G.P., Li, S., Shum, H.Y., Zhang, Y.Q.: Accelerate video decoding with generic GPU. IEEE Trans. Circuits Syst. Video Technol. 15(5), 685–693 (2005)
10. Katsigiannis, S., Dimitsas, V., Maroulis, D.: A GPU vs. CPU performance evaluation of an experimental video compression algorithm. In: Seventh International Workshop on Quality of Multimedia Experience (QoMEX), pp. 1–6 (2015)
11. Lin, Y.C., Li, P.L., Chang, C.H., Wu, C.L., Tsao, Y.M., Chien, S.Y.: Multi-pass algorithm of motion estimation in video encoding for generic GPU. In: IEEE International Symposium on Circuits and Systems (2006)
12. Moriyoshi, T., Takano, F., Nakamura, Y.: GPU acceleration of H.264/MPEG-4 AVC software video encoder (2011)
13. Darling, C.D., Singh, B.: GStreamer on Texas Instruments OMAP35X processors. In: Proceedings of the Ottawa Linux Symposium, pp. 69–78 (2009)
14. Cheng, K.T., Wang, Y.C.: Using mobile GPU for general-purpose computing: a case study of face recognition on smartphones. In: IEEE International Symposium on VLSI Design, Automation and Test (VLSI-DAT), pp. 1–4 (2011)
15. Rister, B., Wang, G., Wu, M., Cavallaro, J.R.: A fast and efficient SIFT detector using the mobile GPU. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2674–2678 (2013)
16. Big Buck Bunny video sequences (2016). https://peach.blender.org/
17. 2016 Global Media Format Report (2016). http://www.encoding.com/files/2016-Global-Media-Formats-Report.pdf

A Review Between Consumer and Medical-Grade Biofeedback Devices for Quality of Life Studies

Pedro Nogueira1,2, Joana Urbano1,2, Luís Paulo Reis1,3, Henrique Lopes Cardoso1,2, Daniel Silva1,2, and Ana Paula Rocha1,2

1 Artificial Intelligence and Computer Science Laboratory (LIACC), Porto, Portugal
2 Faculty of Engineering of the University of Porto (FEUP), Porto, Portugal
{pedro.alves.nogueira,jurbano,hlc,dcs,rocha.apaula}@fe.up.pt
3 School of Engineering of the University of Minho (EEUM), Braga, Portugal
[email protected]

Abstract. With the rise in wearable technology and "health culture", we are seeing a rising interest and affordances in studying how to not only prolong life expectancy but also in how to improve individuals' quality of life. On one hand, this attempts to give meaning to the increasing life expectancy, as living above a certain threshold of pain and lack of autonomy or mobility is both degrading and unfair. On the other hand, it lowers the cost of continuous care, as individuals with high quality of life indexes tend to have lower hospital readmissions or secondary complications, not to mention higher physical and mental health. In this paper, we evaluate the current state of the art in physiological therapy (biofeedback) along with the existing medical grade and consumer grade hardware for physiological research. We provide a comparative analysis between these two device grades and also discuss the finer details of each consumer grade device in terms of functionality and adaptability for controlled (laboratory) and uncontrolled (field) studies.

Keywords: Psychophysiology · Quality of life · Biofeedback · Fitness tracking

1 Introduction

Originally starting with fitness watches and more recently with the advent of smartphone-powered fitness applications, recent years have seen an increase in the prevalence and complexity of wearable technology. This technological advancement has been made possible by the miniaturization of sensor technology and increased battery/circuit efficiency, which has been driven by an exponentially growing healthcare and "health culture". One of the main reasons for the emergence of this "health culture" is the result of an increasing life expectancy of patients, which often isn't accompanied by a corresponding quality of life (QoL), thus leaving patients with great pain, restricted mobility and considerable adverse effects to their daily life and future health prospects. In other words, beyond prolonging life, it is essential to also increase patients' quality of life.


Quality of life is now considered an important aspect in clinical practice for patients with chronic illnesses, as someone with poor QoL indexes will likely suffer from a lack of mobility (due to, for example, joint pain), which in turn leads to low exercise and/or low mental health levels. These, in turn, slowly chip away at their health condition, aggravating it in a feedback loop and increasing the overall risk of developing secondary health conditions. Despite its overarching importance, the current methods to assess quality of life, automatic or semi-automatic, and their use in clinical decision support systems are still underexplored, and there are virtually no applications on the market for this. As a first step in developing a structured approach towards building such systems, in this paper we evaluate the existing state of the art in biometric analysis systems – both medical and consumer grade – and present a comparative analysis.

2 State of the Art

The most objective way to evaluate quality of life is to monitor patients' physiological state over a relatively long period of time (ranging from weeks to, ideally, months). The most common way of doing this is through biofeedback techniques. Biofeedback itself was originally developed for medicinal purposes in the 1970s as a training procedure to overcome medical conditions, such as attention deficit hyperactivity disorder (ADHD) [1]. However, in the last decade, it has re-emerged as a viable technology for use in ludic applications and some researchers have leveraged this by integrating biofeedback into interactive training or rehabilitation applications. This "sugar coating" makes the process more natural and less strenuous or boring for patients and can thus increase both the amount of data collected, as well as the efficiency of the program itself. Despite this growing popularity, biofeedback apparatus is often medical-grade, and thus expensive (ranging from $6,000 to $15,000+ per device), meaning it is not readily available to the public or to large studies. Several hardware manufacturers are attempting to provide inexpensive physiological input solutions that use brain signals (e.g., Emotiv Epoc) and other physiological measures, such as skin conductance, electromyography, respiration rates, and electrocardiography (e.g., BITalino). Throughout this section, we provide a review of the most current technology and research with regards to biofeedback for medical purposes.

2.1 A Primer of User Physiological Metrics

Electrodermal Activity. Electrodermal activity (EDA), usually measured in the form of skin conductance (SC), is a common measure of skin conductivity. EDA arises as a direct consequence of the activity of eccrine (sweat) glands. Some of these glands situated at specific locations (e.g., palms of the hands and feet soles) respond to psychological changes and thus EDA/SC measured at these sites reflects emotional changes as well as cognitive activity [2]. SC has been linearly correlated with arousal [1, 3] and extensively used as a stress indicator [3], in emotion recognition [1, 3–5] and to explore


correlations between gameplay dimensions [6]. It is usually measured using two Ag/ AgCL surface sensors snapped to two Velcro straps placed around the middle and index fingers of the non-dominant hand [2]. Cardiovascular Measures. The cardiovascular system is composed by the set of organs that regulate the body’s blood flow. Various metrics for its activity currently exist, among which some of the most popular ones are: blood pressure (BP), blood volume pulse (BVP) and heart rate (HR). Deriving from the HR, various secondary measures can be extracted, such as inter-beat interval (IBI) and heart rate variability (HRV). HR is usually correlated with arousal or physical exercise and can be easily differentiated using a combination of other sensors like, for example, skin conductivity [2]. HR, along with its derivate and HRV has also been suggested to distinguish between positive and negative emotional states (valence) [3]. Heart rate is a very common measure for most sports watches and fitness bracelets and is usually measured via a photoplethysmogram (the volumetric measurement of an organ) using a pulse oximeter that illuminates the skin and measures changes in light absorption. For medical grade devices, the preferred method is to usually infer this from a raw ECG data stream (more precise) or the participants’ BVP readings using a finger sensor (less precise). Respiration. A Respiration sensor (RESP) measures the volume of air contained in an individual’s lungs, as well as their breathing patterns. It is usually measured using a sensitive girth sensor stretched across the individual’s chest [2]. It can be inferred indi‐ rectly through other methods such as an accelerometer or gyroscope but results are dependent on the uses and easily muddied by high physical movement so it is not advised on high precision scenarios. 2.2 Medical Biofeedback Applications As we have previously mentioned, originally biofeedback was designed to aid in medical therapy by helping patients to overcome medical conditions or to perform patient moni‐ toring/assessment [9, 10]. For example, Dong presents a music therapy approach where the users’ negative emotional states are counterbalanced through music [11]. In a similar approach, in [18] the authors presented a system to aid body balance rehabilitation by using simple audio frequencies to indicate correct posture. In related work, Huang et al. developed a neural motor rehabilitation biofeedback system for use in a virtual 3D world [12]. Due to biofeedback’s easy integration with interactive and multimedia applications, various serious games have been designed to aid in the treatment of medical conditions. For example, a game was presented which targets the treatment of swallowing dysfunc‐ tions [13]. Riva et al. proposed a General Anxiety Disorder treatment that triggers changes in the game world based on the patient’s heart rate and skin conductance [14]. A very similar biofeedback game (“Nevermind”) for fear management based on players’ heart rate readings was also designed [15]. Several approaches geared more towards self-improvement have also been proposed. For example, “Brainball” [17] and Bersak’s proposed racing game [18] are relax-to-win


indirect biofeedback games that introduce a competitive player-versus-player environment where the most relaxed player has a competitive advantage. See Table 1 for a comparative analysis of clinical biofeedback applications.

Table 1. Review of 10 medical and therapeutic applications of biofeedback techniques.

| Reference | SS | BF type | Adaptations | Treatment | Sensors |
|---|---|---|---|---|---|
| Blanchard [9] | 42 | Monitoring | Thermal feedback | Elevated BP | BP |
| Bryant [10] | 1 | Monitoring | Muscle exercise regimen feedback | Swallowing dysfunctions | EMG |
| Dong [11] | 4 | IBF | Musical excerpts | Music therapy | EEG |
| Rocchi [16] | 8 | IBF | Audio frequencies | Balance control | ACC |
| Huang [12] | 2 | IBF | Musical and visual stimuli | Motor rehabilitation | ACC, PST |
| Stepp [13] | 6 | DBF | Control virtual fish | Swallowing dysfunctions | EMG |
| Riva [14] | 24 | IBF | Virtual object placement and properties | General anxiety disorder | HR, SC |
| Reynolds [15] | NA | IBF | Audiovisual stimuli (game events) | Fear/anxiety disorders | HR |
| Hjelm [17] | NA | IBF | Ball movement/orientation | Relax to win | EEG |
| Bersak [18] | NA | IBF | Car acceleration | Relax to win | SC |

3 Biofeedback Devices

As we have seen in the previous section, biofeedback was originally developed for medical research applications, having only gained widespread traction in recent years. Thus, most of the existing state of the art research has been developed using medical grade devices, which essentially make no compromises in terms of accuracy but are somewhat lacking in practical terms. Conversely, consumer grade biofeedback sensors, such as the ones present in modern fitness or lifestyle trackers, focus heavily on being "everyday usable" and trade sensor accuracy for other conveniences such as unobtrusiveness and battery life. In this section, we start by succinctly describing the capabilities of the best-of-breed medical grade biofeedback hardware. We do so in order to contextualize the existing standard for sensor accuracy and hardware features so that we can establish a baseline for comparing consumer grade devices. As our target study essentially requires research-level quality readings but also the amenities of everyday usability and autonomy, this is a necessary comparison.


3.1 Medical Grade Devices Medical grade devices are designed with the goal of offering versatile but mostly highly advanced systems for physiological research. Most of them share the same technical specifications – as is required by strict medical guidelines – and hardware format – for compliance and competition purposes. The most popular solution on the market are the devices Nexus-10 series manufactured and sold by Mind Media BV. The Nexus-10 offers 8 input channels in total, with their configuration being custom‐ izable by the users for a particular set of sensors, depending on the study to be conducted. Mind Media offers a wide range of modalities that can be simultaneously measured. These include: electroencephalography (EEG), slow cortical potentials (SCP), electro‐ myography (EMG), electrooculography (EOG), electrocardiography (ECG), blood flow via blood volume pulse (BVP), oximetry (O2), skin conductance (SC), respiration patterns (RSP), body temperature (TMP), accelerometer (ACC) and force sensors (FS). From these, a wide range of secondary or processed variables can be extracted (e.g. heart rate, heart rate variability, breathing patterns, brainwave features, etc.) using the included software suite which is able to capture, collect and present the sensor data in real-time. It also allows users to configure custom dashboards and apply real-time filters to the data prior to logging them in several custom, text-based formats – including raw data outputs via Bluetooth to external applications. On the practical side, the Nexus-10 (and all of its competitors) presents a simple but heavy data acquisition solution, roughly the size of a human hand (120 × 140 × 45 mm) and weighing around 500 g. While not too bulky, it is cumbersome and noticeable in every usage, especially if the patient is moving around or doing physical activity. Recent versions include a lithium-ion battery pack (8000 mAh lithium polymer) that reduces weight and should last for over 24 h. Regarding sensor accuracy and sampling rates, the Nexus offers high quality medical grade connectors that isolate noise or artifacts when touching or pulling on them. Sampling rates are also the highest in the business with dual channel inputs (ECG, EMG, EEG) recording data at 2048 Hz and single channel inputs (SC, BVP, TMP) and derived readings (e.g. HR) recording at 32 Hz. In terms of sheer performance and autonomy, as we will see in the following section, the Nexus has essentially the edge over consumer grade devices. Where it loses ground is mainly in its inability to offer a more compact package (most consumer grade devices are less than half of its weight and a fraction of the size) and the cost. A single unit complete with the necessary cables and pre-gelled electrodes can easily cost over 8,000€, which is enough to buy, on average, over 40 consumer grade devices and makes it highly inadequate for large or unsupervised studies. 3.2 Consumer Grade Devices Unlike medical grade devices, designed for high-end research, consumer biofeedback is a recent trend that has been focused on ludic activities (e.g. biofeedback videogames), fitness tracking and lifestyle monitoring. Most of the available devices on the market have appeared in the last 4–5 years, having been made possible by (1) the miniaturization


and mass production of sensor technology, mostly due to the advent of smartphones, and (2) by the increasing prevalence of "health cultures" and popular awareness of the importance of physical and mental well-being. In this section we evaluate eight consumer grade physiological recording devices, each from different manufacturers. In the following section, we present a comparative analysis between each of the consumer grade devices as well as how they stack up to medical grade ones. Regarding variables, we will analyze the following ones: (1) Price, (2) Included sensors, (3) Derived variables, (4) Existing API, (5) Software Suite, (6) Operating System Compatibility, (7) Communication protocols and (8) Battery Life.

The first device on our list is the Feel Wristband from Feel. It retails for $199 and it is designed to log emotion patterns throughout the day. How it achieves this is not described, as the algorithm is understandably proprietary, but there is an evident lack of scientific research backing this claim, which raises doubts as to its accuracy. From the available information on their website, it is evident they use skin conductivity, which has been shown to directly correlate with arousal [2] – one of the main emotional dimensions in Russell's circumplex model. They also use a 3D accelerometer to track physical activity, and in all likelihood use it to correct improper SC activations due to exercise; it is also likely they are computing the subject's heart rate for this. The biggest issue with this emotional detection process is that there is no way of identifying valence – the second emotional dimension in Russell's circumplex model – and without it, it is impossible to distinguish the emotion's positive or negative charge, only its intensity. The issue is not necessarily crippling to the usage of the device but leaves room for doubt, and thus it is not suitable for academic research.

The second device on our list is the Zenta Wristband by Vinaya. It retails for $50 less than the Feel Wristband ($149) and offers the same general functionality with a significant amount of extra sensors. Overall, it presents SC, TMP, and O2 sensors. It also comes with an accelerometer and a microphone. From the included sensors it derives an impressive number of variables: HR, HRV, RSP, pulse transit time and pulse wave velocity, as well as a few higher-level ones such as discrete emotions, calorie tracking and activity tracking. Both emotion prediction and calorie tracking require user input (overall mood and daily caloric intake, respectively).

Our third entry is the Microsoft Band 2, which is, by far, the most complete package on the market at the time of writing. It retails for the same price as the Feel Wristband ($199) but offers the following biometric sensors: SC, O2, and TMP. It also includes GPS, an ambient light sensor, a gyroscope in addition to the standard accelerometer, a UV sensor, a barometer, and a microphone. It can compute the user's HR from the O2 sensor but it is not clear as to why RSP readings are not described on the technical sheet. It presents itself as the most research-focused solution on the market and does not offer discrete emotion processing. It does, however, act as a fitness tracker, so it measures calorie intake/expenditure and sleep patterns and offers sleep quality analysis tools. It is also the only device on the market that offers a dedicated visualization and data processing suite (Microsoft Health), which is free of charge.

1. http://www.myfeel.co/
2. http://www.vinaya.com/
3. https://www.microsoft.com/microsoft-band/en-us


The fourth and fifth entries are the Basis Peak and Jawbone UP3. They retail for $199 and $129 but, oddly, the Basis Peak offers fewer sensors, as it comes only with an O2 sensor from which it derives BVP and HR, plus a gyroscope and accelerometer. The Jawbone, on the other hand, offers the same O2 sensor in conjunction with SC and TMP sensors from which it derives HR and RSP measures. It only includes an accelerometer though and not a gyroscope. In terms of fitness tracking, both track caloric intake and expenditure, as well as sleep patterns, their quality and physical activity. None of them offer any data visualization platform or APIs to read the data in real time.

The last three entries in our list are not necessarily wearable or dedicated biometric tracking platforms but aim to tackle the market through sheer volume or low cost and personalization. The first of these is the Apple Watch 2, which retails starting at $269 and offers HR and RSP (see footnote 7) measurements through photoplethysmographic measurement and GPS tracking. The second platform is the Android counterpart to the Apple Watch, Android Wear 2. Contrary to the Apple Watch, Android Wear is free and is composed of the SDK to develop wearable apps, so it doesn't offer a dedicated hardware platform. As such, it is not possible to assess which sensors it offers. The biggest advantage to both of these platforms is that, while they don't do most of the work for the user, they enable tech-savvy users and researchers to build their own applications from scratch and completely customize them, while also allowing them to access data in real time and integrate with other existing apps on the marketplace. In short, they have the biggest potential. The main issue is the limited range of sensors on these wearable platforms, which can hinder data collection.

The final entry on our list, BITalino, addresses the main disadvantages seen in the consumer grade segment by offering a hybrid solution between them and medical grade devices. BITalino is an Arduino-based biometric solution designed for researchers and electronics/engineering hobbyists that want to build their own physiological recording devices. It retails from $149 to $200 for the most complete pack, which includes all the necessary hardware, as well as cables and sensors for measuring ECG, EMG, EEG, SC and acceleration. At the moment it seems it doesn't have a temperature sensor available, but these are readily available online and can be easily integrated into the platform. It offers a free and complete software suite (Open Signals) to visualize and process data offline or in real time and can perform relatively complex signal and statistical analysis. It is not as practical as most of the devices on the market and is not waterproof or water resistant like the devices discussed so far, but overall, it is the most versatile and cost-efficient solution.

4. http://www.mybasis.com/technical-specifications/
5. https://jawbone.com/fitness-tracker/up3
6. https://www.apple.com/watch/
7. Can be derived by users via the provided SDK only.
8. https://www.android.com/wear/
9. http://www.bitalino.com/
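As a device-independent illustration of how the derived variables mentioned above (heart rate and HRV) are obtained from raw readings, the sketch below computes them from a list of inter-beat intervals; it is a generic example and is not tied to any of the devices' proprietary APIs.

```python
# Generic sketch: heart rate and two common HRV statistics from inter-beat
# intervals (IBI) given in milliseconds.
import math

def heart_rate_bpm(ibi_ms: list[float]) -> float:
    """Mean heart rate derived from the average inter-beat interval."""
    return 60_000.0 / (sum(ibi_ms) / len(ibi_ms))

def sdnn(ibi_ms: list[float]) -> float:
    """Standard deviation of the intervals, a basic HRV measure."""
    mean = sum(ibi_ms) / len(ibi_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in ibi_ms) / len(ibi_ms))

def rmssd(ibi_ms: list[float]) -> float:
    """Root mean square of successive differences, another HRV measure."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))

# heart_rate_bpm([820, 810, 835, 790]) -> ~73.7 bpm
```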

4 Discussion

In this section we analyze each of the devices discussed on the two previous sections and compare them in terms of: (1) Features: “How complete is the device and how much data can it produce?” (2) Signal Processing: “How much signal processing (e.g., filtering, noise reduction) does the device allow or require to extract meaningful infor‐ mation?” (3) Precision: “How accurately can the signal be interpreted for both raw and derived measures and how much noise interference is present?” (4) Sensor Relia‐ bility: “How likely is the sensor to fail and how much calibration does it require?” (5) Intrusiveness: “Would the required apparatus interfere with the daily lives of the candidates, potentially impairing the study or biasing it in any significant way?” (6) Practicality: “How long does the device’s battery last and at what sampling rates?” and (7) Cost: “Based on the current retail prices, what would be the necessary budget to perform a medium-sized study (50-100 individuals)?” Table 2 presents a breakdown of the described devices in detail. It summarizes our previous discussion and allows a quick reference for the purposes of this discussion. Regarding the pure volume of data, most devices generate roughly the same outputs: HR, SC and some form of activity tracking via accelerometers or gyroscopes. The clear winners here are the Microsoft Band 2 and BITalino platforms, which not only include all these outputs but also include a few not present on the competing devices. In terms of precision, it is difficult to evaluate how these devices will fare without a large, controlled field study with all of them but, in general, all devices gather data in the same way and feature similar sensors and sensor placements so, if properly used, results should not differ significantly. In terms of how they compare with medical-grade devices, it should be expected that more movement or electromagnetic interference be created given that these are lower grade devices. This can be alleviated with proper filtering but the greatest drawback is the fact that given that the sensors are placed in less-than-ideal body locations (e.g. SC should be measured in the index and middle fingers and most devices measure it on the wrist [2]). This makes the sensors prone to data collection failures, which in turn make the data stream incomplete and unusable, especially on longer-term studies. This leads us into our fifth point, intrusiveness. While most devices, being worn on the wrist and being lightweight, are generally not very intrusive, none of them are waterproof and have been reported to be somewhat frail in terms of construction due to the sensor miniaturization. There have also been cases of user complaints regarding some minor discomfort when using them for prolonged times due to the blood flow restrictions or sensors digging into the wrist area. This is also tied to practicality and another point of concern is the lack of substantial autonomy – all devices use a recharge‐ able LiPo battery that lasts anywhere between 24 and 96 h – and this implies either reducing the sampling rates to a bare minimum or creating data collection pauses for charging the devices. Whether this is a pain point depends heavily on the focus of the study but should be carefully considered as it can put in question the validity of the study itself.
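To make the practicality trade-off above concrete, the following back-of-the-envelope sketch estimates the raw data volume a wearable produces per day at a given sampling rate and how many recharges a multi-week study would require; the 16-bit sample size is an assumption for illustration only.

```python
# Rough arithmetic only; sample size and channel counts are assumptions.
def bytes_per_day(channels: int, rate_hz: float, bytes_per_sample: int = 2) -> float:
    return channels * rate_hz * bytes_per_sample * 86_400

def charges_needed(study_days: int, battery_hours: int) -> int:
    return -(-(study_days * 24) // battery_hours)  # ceiling division

# Two channels (e.g. SC + HR) at 32 Hz produce about 11 MB/day:
#   bytes_per_day(2, 32) / 1e6  -> ~11.1
# A 4-week field study on a 48 h battery needs charges_needed(28, 48) -> 14 recharges.
```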


Table 2. Breakdown of currently existing consumer-grade physiological devices.

| Device name | Price | Sensors/Raw variables | Derived variables |
|---|---|---|---|
| Feel Wristband | $199 | SC, HR, Acc | Emotions |
| Zenta Wristband | $149 | SC, Tmp, HR, HRV, RSP, Acc, Noise | Emotions, calories, pulse wave/patterns |
| Microsoft Band 2 | $199 | SC, HR, Light, UV, Tmp, Gyro, Barometer, GPS, Noise | Calories, sleep tracking |
| Basis Peak | $199 | BVP, HR, Gyro | Calories, sleep tracking |
| Jawbone UP3 | $129 | SC, BVP, HR, Tmp, Acc | Calories, sleep tracking |
| Apple Watch 2 | $300 | HR, GPS | Activity tracking |
| Android Wear 2 | $150+ | NA | NA |
| BITalino | $149+ | ECG, EMG, EEG, SC, Acc | HRV, Signal filtering |

| Device name | API | Suite | OS | Comm | Battery life |
|---|---|---|---|---|---|
| Feel Wristband | Log | App | iOS/Android | Bluetooth | 48 h, LiPo |
| Zenta Wristband | Log | App | iOS/Android | | 48 h, LiPo |
| Microsoft Band 2 | Log | MS Health | iOS/Android | | 48 h, LiPo |
| Basis Peak | Log | App | iOS/Android | | 96 h, LiPo |
| Jawbone UP3 | Log | App | iOS/Android | | 48 h, LiPo |
| Apple Watch 2 | SDK | 3rd Party Apps | iOS | | 24 h, LiPo |
| Android Wear 2 | SDK | 3rd Party Apps | iOS/Android | | NA |
| BITalino | SDK | Open Signals | Desktop | | NA |

Our final analysis point concerns price and here all devices are generally within the same price point so there is not much discussion to be had. Clearly, the most attractively priced ones are the Jawbone UP3 and BITalino due to the sheer sensor/price ratio and the least attractive one is the Apple Watch 2. However, the differences are basically nil when comparing between consumer and medical grade devices so the main question should be whether medical grade is needed and, if not, which devices are available for shipping on the market. When considering all points, it seems clear that the most complete and versatile packages are the Microsoft Band 2 (MB2) and BITalino platforms. Where they differ is the fact that the MB2 does not require any significant assembly or coding to start collecting data and is much less intrusive than the BITalino. On the other hand, the BITalino offers a very complete package that, bar the higher signal precision, rivals medical grade devices. All in all, it is not possible to define the “best” device as this will depend on the myriad of experimental conditions such as, sampling frequency, raw data output or processing capabilities, acceptable intrusiveness, budget, etc. that are imposed by the study and need to be factored into the equation. However, for most purposes, our opinion is that if there is an option to do the study in a controlled or laboratory environment, if there is a need to collect a wide range of physiological signals, if signal quality should be medical grade but budget is a depriving


factor and/or if practicality can be somewhat compromised on, the BITalino platform should be the best option. For all other cases where practicality is a concern and there is no need to collect more complex physiological data such as EMG or EEG, the MB2 seems to be the best available option among consumer devices.

Acknowledgments. This article is a result of the project QVida+: Estimação Contínua de Qualidade de Vida para Auxílio Eficaz à Decisão Clínica, NORTE‐01‐0247‐FEDER‐003446, supported by Norte Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, through the European Regional Development Fund (ERDF). The authors also acknowledge the strategic project LIACC (PEst-UID/CEC/00027/2013).

References

1. Chanel, G., Kronegg, J., Grandjean, D., Pun, T.: Emotion assessment: arousal evaluation using EEG's and peripheral physiological signals. In: Proceedings of International Workshop on Multimedia Content Representation, Classification and Security, pp. 530–537 (2006)
2. Stern, R.M., Ray, W.J., Quigley, K.S.: Psychophysiological Recording, 2nd edn. Oxford University Press, New York (2001)
3. Mandryk, R., Atkins, M.: A fuzzy physiological approach for continuously modeling emotion during interaction with play technologies. Int. J. Hum Comput Stud. 65(4), 329–347 (2007)
4. Leon, E., Clarke, G., Callaghan, V., Sepulveda, F.: A user-independent real-time emotion recognition system for software agents in domestic environments. Eng. Appl. Artif. Intell. 20(3), 337–345 (2007)
5. Hazlett, R.: Measuring emotional valence during interactive experiences: boys at video game play. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1023–1026 (2006)
6. Pedersen, C., Togelius, J., Yannakakis, G.N.: Modeling player experience for content creation. Comput. Intell. AI Games 2(1), 121–133 (2009)
7. Nasoz, F., Lisetti, C.L., Alvarez, K., Finkelstein, N.: Emotion recognition from physiological signals for user modeling of affect. In: 3rd Workshop on Affective and Attitude User Modelling (2003)
8. Figueiredo, R., Paiva, A.: 'I want to slay that dragon' - influencing choice in interactive storytelling. In: Digital Interactive Storytelling (2010)
9. Blanchard, E.B., Eisele, G., Vollmer, A., Payne, A., Gordon, M., Cornish, P., Gilmore, L.: Controlled evaluation of thermal biofeedback in treatment of elevated blood pressure in unmedicated mild hypertension. Biofeedback Self Regul. 21(2), 167–190 (1996)
10. Bryant, M.A.M.: Biofeedback in the treatment of a selected dysphagic patient. Dysphagia 6(2), 140–144 (1991)
11. Dong, Q., Li, Y., Hu, B., Liu, Q., Li, X., Liu, L.: A solution on ubiquitous EEG-based biofeedback music therapy. In: IEEE 5th International Conference on Pervasive Computing and Applications (ICPCA), pp. 32–37 (2010)
12. Huang, H., Ingalls, T., Olson, L., Ganley, K., Rikakis, T., He, J.: Interactive multimodal biofeedback for task-oriented neural rehabilitation. In: 27th Annual International Conference of the Engineering in Medicine and Biology Society (IEEE-EMBS), pp. 2547–2550 (2005)
13. Stepp, C.E., Britton, D., Chang, C., Merati, A.L., Matsuoka, Y.: Feasibility of game-based electromyographic biofeedback for dysphagia rehabilitation. In: 5th International IEEE/EMBS Conference on Neural Engineering (NER), pp. 233–236 (2011)
14. Riva, G., Gaggioli, A., Pallavicini, F., Algeri, D., Gorini, A., Repetto, C.: Ubiquitous health for the treatment of generalized anxiety disorders. In: UbiComp 2010, Copenhagen, Denmark (2010)
15. Reynolds, E.: Nevermind (2013). www.nevermindgame.com
16. Rocchi, L., Benocci, M., Farella, E., Benini, L., Chiari, L.: Validation of a wireless portable biofeedback system for balance control: preliminary results. In: IEEE Second International Conference on Pervasive Computing Technologies for Healthcare, PervasiveHealth, pp. 254–257 (2008)
17. Hjelm, S.I., Browall, C.: Brainball – using brain activity for cool competition. In: NordiCHI, pp. 177–188 (2000)
18. Bersak, D., McDarby, G., Augenblick, N., McDarby, P., McDonnell, D., McDonald, B., Karkun, R.: Intelligent biofeedback using an immersive competitive environment (2001)

Computer Networks, Mobility and Pervasive Systems

Sensor-Based Global Mobility Management Scheme with Multicasting Support for Building IoT Applications

Hana Jang1,2, Byunghoon Song1,2, Yoonchae Cheong1,2, and Jongpil Jeong1,2(B)

1 College of Information and Communication Engineering, Sungkyunkwan University, Suwon, Gyeonggi-do 16419, Republic of Korea
[email protected], {ycheong,jpjeong}@skku.edu
2 IoT Convergence Research Center, Korea Electronics Technology Institute (KETI), Seongnam 13509, Republic of Korea
[email protected]

Abstract. For various IoT applications, sensor mobility is a fundamental issue, particularly regarding energy efficiency. To this end, we propose a network-based mobility-supported IP-WSN protocol which is referred to as the Sensor-based Global Mobility Management Scheme with Multicasting Support (SGM). Our analysis is carried out by calculating the signaling cost, total signaling cost and mobility cost, and we analyze the change in the signaling cost, total signaling cost, and mobility cost as the number of IP-WSN nodes and the number of hops increases. Our analytical results show that our proposed SGM is more cost-effective than existing schemes, including Sensor Proxy Mobility IPv6 (SPMIPv6).

Keywords: Wireless Sensor Networks · SGM · IP-WSN · 6LoWPAN · SPMIPv6

1 Introduction

The rapid expansion of mobile wireless communications over the last few years has spawned many different wireless communication networks. These networks will be interconnected and interworked with each other to offer access to Internet services for mobile users anytime anywhere. The low power wireless personal area network (6LoWPAN) standard of the Internet Engineering Task Force (IETF) working group defines the manner in which IPv6 communications are carried out over an IEEE 802.15.4 interface [1,2]. Although 6LoWPAN helps realize an implementation of IP-WSNs by making end-to-end communications to the external world feasible, excessive tunneling through the air results in increased signaling costs for SNs, making it difficult to establish reliable communications. Nevertheless, excessive signaling costs can be reduced by applying IP-WSNs [3,4]. Moreover, most modern communication protocols are host-based, and individual nodes need to participate in mobility related signaling, which is virtually impossible for an IP-WSN [5]. In this sense, PMIPv6 [6]


is a network-based protocol that provides mobility support for any IPv6 host within a restricted and topologically-localized portion of the network, without requiring the host to participate in any mobility-related signaling. 6LoWPAN-based IP-WSNs may use sensor network compatible PMIPv6 to introduce and improve the mobility scenario in a localized domain. IP-WSNs can be used in various application areas, including industrial control, structural monitoring, healthcare, vehicle telematics, and agricultural monitoring [7]. Node-to-node communications are very important since these are collaborative functions. In these cases, IP-WSNs based on a mesh approach can improve communication, while individual SNs can act as routers or as fully functional devices. The routing issue over 6LoWPAN is discussed, and moreover, the SGM-based IP-WSN facilitates node-to-node seamless communication. Thus, the IP-WSN architecture is proposed to provide energy efficient mobility for individual SNs or for a group of SNs. This paper presents an energy efficient sensor network-based localized mobility management protocol in an IP-WSN domain. We propose the SGM operational architecture and provide a sequence diagram and network architecture. We then evaluate the performance of the proposed protocol architecture. A mathematical analysis is conducted to demonstrate the effectiveness of the proposed scheme, and our analytical results indicate that the proposed scheme reduces the signaling costs and the mobility costs in terms of the number of IP-WSNs. The remainder of this paper is organized as follows: Sect. 2 describes the PMIPv6, 6LoWPAN, and multicasting-based mobility management for related research. The proposed SGM protocol architecture and the sequence diagram are presented in Sect. 3. Section 4 presents the results of the performance analysis. Finally, the conclusion is provided in Sect. 5.

2 Related Work

The 6LoWPAN working group has defined an adaptation layer to send IPv6 packets over IEEE 802.15.4. The goal of 6LoWPAN is to reduce the size of IPv6 packets so that they fit into 127-byte IEEE 802.15.4 frames. The 6LoWPAN proposal consists of a header compression scheme, a fragmentation scheme, and a method to form the IPv6 link-local address in IEEE 802.15.4 networks [4,7]. The proposal also specifies improved scalability and mobility for sensor networks. The challenge for 6LoWPAN lies in the sizable differences between an IPv6 network and an IEEE 802.15.4 network [8]: an IPv6 network requires a maximum transmission unit of 1280 bytes, whereas the IEEE 802.15.4 frame size is only 127 octets. Therefore, the adaptation layer between the IP layer and the MAC layer must transport IPv6 packets over IEEE 802.15.4 links. The adaptation layer is responsible for fragmentation, reassembly, header compression and decompression, mesh routing, and addressing for packet delivery under a mesh topology. The 6LoWPAN protocol supports a scheme to compress the IPv6 header from 40 bytes to 2 bytes [9,10].


Nevertheless, the mobility of 6LoWPAN can give rise to new and exciting applications. Several existing communication technologies have been considered as candidates to provide the internal and external communication infrastructure for a wireless body sensor network (WBSN). However, most protocols have certain limitations for use in WBSNs. Similarly, some communication technologies, such as Ultra-Wideband (UWB), require complex protocols and hardware, which may not be feasible for WBSNs. Previous reports have noted that 6LoWPAN could be one of the most suitable technologies for WBSNs since it is based on the IEEE 802.15.4 specification [11]. The IEEE 802.15.4 standard provides a low data rate, low power, and less complex protocols and hardware for an SN. Its interaction with IPv6 implies that the SN can easily interoperate with all other IP networks, including the Internet. This feature, in turn, means that the sensor data can be accessed from anywhere in the world. A design for micro-mobility support in SNs has been proposed with roaming across several Access Points (APs) in a Bluetooth SN [12]. The MN does not require specific features, which reflects the concept of network-based mobility management [13]. The Multicast Listener Discovery (MLD) proxy is a function supporting the MLD Membership Report sent from the MN, and it is necessary for delivering multicast communication to the SN. The specified multicast router function is also necessary because it is a multicast forwarding state management platform for MNs and for multicast data sent from MNs. However, the standardization of PMIPv6 MLS (Multicast Listener Support) in the IETF MultiMob working group still needs to be improved, and we do not consider the optimization of its specific performance. Our focus is on developing the basic deployment specifications for mobility-unaware MNs. First, disconnected services are not considered in the optimization to improve the performance of the multicast handover. Accordingly, during the handover, the MN's multicast communication is lost, so a continuous handover cannot be provided. Second, the transmission contains unnecessary multicast communication: when the handover is executed, multicast traffic for the MN is still sent to the previously connected network, which is unnecessary. These unnecessary multicast transmissions continue until the update of the multicast forwarding state for the MN is completed.

3 Sensor-Based Global Mobility Management Scheme Based on Multicasting Support

Our SGM is a localized mobility management protocol based on PMIPv6, and it consists of a multicast-based sensor LMA (mSLMA), a multicast-based sensor MAG (mSMAG), and SNs. The SGM domain consists of numerous SNs with IPv6 addresses and is considered a federated IP sensor domain. Each SN has a tiny TCP/IP protocol stack with an adaptation layer and an IEEE 802.15.4 interface. This type of node can forward information to other nodes of a similar type and can sense information from the environment; in fact, it acts as a mini sensor router.


The other type of SN has a protocol stack and an environmental sensing capability, but it forwards its sensing information to a nearby mini sensor router node. Figure 1 shows the operational architecture of the SGM, which includes the mSLMA, mSMAG, and SN, and how these entities communicate with each other through different types of interfaces.

Fig. 1. Operational procedures of the SGM.

The mSMAG requires two or more interfaces to communicate with different access networks, such as the PMIPv6 network. It includes functionality for the network, adaptation, and physical layers. The network layer provides addressing, routing, and neighbor discovery mechanisms, in addition to the data structure that holds the SN information. The most important layer is the adaptation layer, which ensures that mesh routing, compression and decompression, and fragmentation and reassembly are performed correctly. The physical layer provides access to different physical interfaces. The mSLMA holds network-related information such as the binding cache entry (BCE) and performs encapsulation and decapsulation. The mSMAG, SN, and corresponding node (CN) interact to make multicast routing possible, because the mSLMA includes the multicasting core BCE and provides a data structure that holds additional information such as new flags, link-local addresses for each interface, home prefixes, bi-directional tunnel interface identifiers, access technology, and time stamps. All of the SNs use IPv6 addresses for local and global communications. The message flow within the sensor network is integrated with the SGM as follows:


1. The L2 trigger is passed to the mSMAG(P). The SN is ready to move from the mSMAG(P) to the mSMAG(N).
2. The mSMAG(P) sends the HI (Handover Initiative) and MSO (Multicast Support Option) messages to the mSMAG(N), after which the mSMAG(N) sends the Pre-BPU and MSO Authentication Query to the mSLMA2. In response, the mSLMA2 sends the Pre-BPU and MSO Authentication Reply message to the mSMAG(N), and the mSMAG(N) sends the mSMAG(P) a HAck (Handover Acknowledgement) message that includes either an acknowledgement (Ack) or a negative acknowledgement (NAck).
3. The mSMAG(P) sends the DeReg.PBU message to the mSLMA1, and the mSLMA1 sends the DeReg.PBA message to the mSMAG(P). Then, the mSMAG(N) is informed that the L2 is connected to the SN.
4. Once the SN sends the RS (Router Solicitation) message to the mSMAG(N), the mSMAG(N) sends the buffered multicast traffic and an RA (Router Advertisement) to the SN.
5. Finally, the SN communicates with the Corresponding Node (CN) configured according to the SGM. Thus, data can be transmitted from the SN to the CN and vice versa, as sketched below.
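To make the handover flow easier to follow, the following minimal Python sketch replays steps 1–5 as a list of (sender, receiver, message) events. The entity and message names mirror the list above; the code is only an illustrative trace, not an implementation of the protocol.

```python
# Illustrative replay of the SGM handover signaling in Sect. 3 (steps 1-5).
SGM_HANDOVER_FLOW = [
    ("SN",       "mSMAG(P)", "L2 trigger"),
    ("mSMAG(P)", "mSMAG(N)", "HI + MSO"),
    ("mSMAG(N)", "mSLMA2",   "Pre-BPU + MSO Authentication Query"),
    ("mSLMA2",   "mSMAG(N)", "Pre-BPU + MSO Authentication Reply"),
    ("mSMAG(N)", "mSMAG(P)", "HAck (Ack/NAck)"),
    ("mSMAG(P)", "mSLMA1",   "DeReg.PBU"),
    ("mSLMA1",   "mSMAG(P)", "DeReg.PBA"),
    ("SN",       "mSMAG(N)", "L2 attach + RS"),
    ("mSMAG(N)", "SN",       "buffered multicast traffic + RA"),
    ("SN",       "CN",       "data (via SGM domain)"),
]

def print_flow(flow):
    """Print the message sequence, one signaling step per line."""
    for step, (src, dst, msg) in enumerate(flow, start=1):
        print(f"{step:2d}. {src:9s} -> {dst:9s}: {msg}")

if __name__ == "__main__":
    print_flow(SGM_HANDOVER_FLOW)
```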

4 Performance Analysis

This section evaluates the performance in terms of the network mobility model, cost analysis, and numerical results [14,15].

4.1 Network Mobility Model

The mobility of the SNs is a major advantage of using IP-WSNs over conventional static wireless sensor networks. Mobility is a key concern for the design and performance of IP-WSNs. The mobility model plays a key role in evaluating different mobility management strategies, including registration, handoff, and authentication. A mobility model with minimal assumptions that is simple to analyze is very useful for an IP-WSN. Most studies on wireless network performance assume that the coverage areas are configured using a hexagonal or a square shape, and in this paper, we assume that IP-WSN networks are configured with a hexagonal topology. Each SN in the IP-WSN area is assumed to have identical movement patterns within and across the IP-WSN. A 2D hexagonal random walk mobility model can then be used to study the movement patterns of the mobile SNs. We use a network model that modifies the six-layer personal area network model with n = 6. In our network model, the IP-WSN consists of a cluster of hexagonal sensor nodes [16].

4.2 Cost Analysis

The properties of the regular Markov chains can be exploited to analyze the behavior of the proposed model [9].


Let $P$ be the regular transition probability matrix. Then, the steady-state probability vector $\pi$ can be solved using the following equations:

$$\pi P = \pi, \qquad \sum_{i=1}^{m} \pi_i = 1 \qquad (1)$$

where $m$ is the number of states. The fundamental matrix $Z$ of the regular Markov chain is then

$$Z = [Z_{ij}] = (I - P + A)^{-1} \qquad (2)$$

$A$ is a limiting matrix determined by $P$: the powers $P^n$ approach the probability matrix $A$, each row of which consists of the same probability vector $\pi = (\pi_1, \pi_2, \ldots, \pi_n)$, and $I$ is the identity matrix. The matrix $Z$ can be used to study the behavior of the regular Markov chain, for example to compute the expected number of visits to a state. Let $Y_j(k)$ be the number of times that the process is in state $S_j$ in the first $k$ steps, and let $M_i[y_j^{(k)}]$ be the mean number of times that the process is in state $S_j$ within $k$ steps, starting from state $S_i$. Then

$$M_i[y_j^{(k)}] \rightarrow (Z_{ij} - \pi_j) + k\pi_j \qquad (3)$$

The total number of boundary updates in $k$ steps, starting from state $S_i$, can be computed from the total number of times that the process is in the boundary states after the initial state. The average number of location updates $U_{bu}$ is then given by the following analytical model:

$$U_{bu} = M_i[y_1^{(k)}] + M_i[y_2^{(k)}] + M_i[y_3^{(k)}] + M_i[y_4^{(k)}] = \sum_{n=1}^{4} M_i[y_n^{(k)}] \qquad (4)$$

We can use the above equation to determine the number of binding update messages. Since a binding update message must be sent whenever the sensor node moves between IP-WSNs, a binding update message is generated each time a node enters a boundary state. Therefore, we need to determine the expected number of times that the process enters a boundary state within $K$ steps. Thus, the SN needs to send $U_{bu}$ binding update messages, given that the SN experiences a total of $K$ transitions between the mSMAGs. Accordingly, the ratio of intra-IP-WSN mobility, denoted $M_{intra}$, is expressed as

$$M_{intra} = \frac{K - U_{bu}}{K} \qquad (5)$$

Likewise, the ratio of inter-IP-WSN mobility, denoted $M_{inter}$, is expressed as

$$M_{inter} = \frac{U_{bu}}{K} \qquad (6)$$
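To make the mobility model concrete, the following is a minimal Python sketch of Eqs. (1)–(6): it computes the steady-state vector π of a regular transition matrix P, the fundamental matrix Z, the expected number of visits to the boundary states within K steps, and the resulting intra/inter-IP-WSN mobility ratios. The 3-state transition matrix and the choice of which states count as boundary states are illustrative assumptions, not values from the paper.

```python
import numpy as np

def steady_state(P):
    """Solve pi P = pi, sum(pi) = 1 for a regular transition matrix P (Eq. 1)."""
    m = P.shape[0]
    A = np.vstack([P.T - np.eye(m), np.ones(m)])
    b = np.zeros(m + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def fundamental_matrix(P, pi):
    """Z = (I - P + A)^{-1}, where every row of A is pi (Eq. 2)."""
    m = P.shape[0]
    A = np.tile(pi, (m, 1))
    return np.linalg.inv(np.eye(m) - P + A)

def expected_visits(P, i, j, k):
    """Mean number of visits to state j in k steps, starting from i (Eq. 3)."""
    pi = steady_state(P)
    Z = fundamental_matrix(P, pi)
    return (Z[i, j] - pi[j]) + k * pi[j]

def mobility_ratios(P, boundary_states, i, K):
    """U_bu, M_intra and M_inter per Eqs. (4)-(6)."""
    U_bu = sum(expected_visits(P, i, j, K) for j in boundary_states)
    return U_bu, (K - U_bu) / K, U_bu / K

if __name__ == "__main__":
    # Illustrative 3-state walk; a full IP-WSN model would use the
    # hexagonal random-walk states of Sect. 4.1.
    P = np.array([[0.2, 0.5, 0.3],
                  [0.4, 0.2, 0.4],
                  [0.3, 0.3, 0.4]])
    U_bu, M_intra, M_inter = mobility_ratios(P, boundary_states=[2], i=0, K=30)
    print(f"U_bu={U_bu:.2f}  M_intra={M_intra:.2f}  M_inter={M_inter:.2f}")
```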


We evaluated our proposed model based on the signaling costs, mobility cost, and energy consumption, with the help of the different parameters listed in Table 1 [6,15,17].

Table 1. System parameters.

Parameter        Description                                   Value
PBU              Proxy Binding Update Message                  40/64
PBA              Proxy Binding Acknowledgement Message         512/1024
D_mSMAG-mSLMA    Distance between mSMAG and mSLMA              2
D_SN-mSMAG       Distance between SN and mSMAG                 1
α                Unit transmission cost in a wireless link     10
β                Unit transmission cost in a wired link        1
RS               Router Solicitation Message                   8/16/24
RA               Router Advertisement Message                  7660/52
C_sd             Sensor Mobility Cost                          600/760/1000
C_bu             Binding Update Cost                           0/1104/2176
ε                Redirecting Packets to MN                     0.5/0.8
δ                Discarding Packets                            0.2
K                Location Step                                 30

The mobility cost is evaluated based on the signaling cost. To evaluate the total signaling costs, we compare the results of our analytical model with those of MIPv6 and PMIPv6. Figure 2 depicts the analytical model used to analyze the performance of the proposed scheme. It consists of two different SGM domains that are connected over a PMIPv6-based inter-network. The distance between the mSMAG and the SN is denoted by $D_{SN-SMAG}$ and the distance between the mSMAG and the mSLMA is denoted by $D_{SMAG-SLMA}$. In this analytical model, different distances are used to calculate the signaling costs incurred by the transmission of data and control signals; this cost varies with differences in the signal transmissions. The total signaling cost $TC_{SPMIPv6}$ for the scheme based on SPMIPv6 can be calculated by summing the signaling cost $SC_{SPMIPv6}$ and the packet delivery cost $PD_{SPMIPv6}$, such that

$$TC_{SPMIPv6} = SC_{SPMIPv6} + PD_{SPMIPv6} \qquad (7)$$

The signaling cost for SPMIPv6 is calculated as

$$SC_{SPMIPv6} = M_{intra} C_{sd}^{SPMIPv6} + M_{inter} (C_{sd}^{SPMIPv6} + C_{bu}^{SPMIPv6}) \qquad (8)$$


Fig. 2. Network architecture for the performance analysis.

$$PD_{SPMIPv6} = \lambda_p \cdot t_{L2} \cdot \eta (C_{CN,SLMA} + C_{SLMA,SMAG} + C_{SMAG,SN} + PC_{SLMA}) \cdot \varepsilon \qquad (9)$$

where $C_{sd}$ and $C_{bu}$ are calculated for SPMIPv6 as

$$C_{sd}^{SPMIPv6} = \alpha \cdot (RS_{SPMIPv6} + RA_{SPMIPv6}) D_{SN-SMAG} \qquad (10)$$

$$C_{bu}^{SPMIPv6} = \beta \cdot (PBU_{SPMIPv6} + PBA_{SPMIPv6}) D_{SMAG-SLMA} \qquad (11)$$

Also, the SMR (Session Mobility Ratio) is used to obtain various results from the analysis, and we calculate the total signaling cost as the SMR increases:

$$C_{SMR}^{SPMIPv6} = \frac{TC_{SPMIPv6}}{SMR} \qquad (12)$$

Then, the total signaling cost $TC_{SGM}$ of the proposed scheme based on the SGM can be calculated by summing the signaling cost $SC_{SGM}$ and the packet delivery cost $PD_{SGM}$:

$$TC_{SGM} = SC_{SGM} + PD_{SGM} \qquad (13)$$

Likewise, for the SGM,

$$SC_{SGM} = M_{intra} C_{sd}^{SGM} + M_{inter} (C_{sd}^{SGM} + C_{bu}^{SGM}) \qquad (14)$$

$$PD_{SGM} = \lambda_p \cdot t_{L2} \cdot \eta (C_{CN,mSLMA} + C_{mSLMA,mSMAG(p)} + C_{mSMAG(p),mSMAG(n)} + C_{mSMAG(n),SN} + PC_{mSLMA} + 2PC_{mSMAG}) \cdot \varepsilon \qquad (15)$$

where $C_{sd}$ and $C_{bu}$ are calculated for the SGM as

$$C_{sd}^{SGM} = \alpha \cdot (RS_{SGM} + RA_{SGM}) D_{SN-mSMAG} \qquad (16)$$

$$C_{bu}^{SGM} = \beta \cdot (PBU_{SGM} + PBA_{SGM}) D_{mSMAG-mSLMA} \qquad (17)$$


Note that $C_{bu}^{SGM}$ denotes the binding update cost of the proposed multicasting-based network architecture; owing to the multicast support, this binding update does not occur, so the cost is not incurred. Also, the SMR is used to obtain various analysis results, and we calculate the total signaling cost as the SMR increases:

$$C_{SMR}^{SGM} = \frac{TC_{SGM}}{SMR} \qquad (18)$$
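As an illustration of how the signaling-cost side of Eqs. (8), (10), (11), (14), (16)–(18) combines the Table 1 parameters, the following is a minimal Python sketch. The message sizes and the mobility split used in `main` are example picks from the value ranges in Table 1 and are assumptions for illustration; the packet delivery costs $PD$ (Eqs. (9) and (15)) are omitted because their parameters are not tabulated here.

```python
def c_sd(alpha, rs, ra, d_sn_mag):
    """Sensor mobility cost, Eqs. (10)/(16): wireless RS/RA exchange."""
    return alpha * (rs + ra) * d_sn_mag

def c_bu(beta, pbu, pba, d_mag_lma):
    """Binding update cost, Eqs. (11)/(17): wired PBU/PBA exchange."""
    return beta * (pbu + pba) * d_mag_lma

def signaling_cost(m_intra, m_inter, csd, cbu):
    """Eqs. (8)/(14): intra moves cost csd, inter moves cost csd + cbu."""
    return m_intra * csd + m_inter * (csd + cbu)

def cost_per_smr(total_cost, smr):
    """Eqs. (12)/(18): total signaling cost normalized by the SMR."""
    return total_cost / smr

if __name__ == "__main__":
    alpha, beta = 10, 1                       # Table 1 unit link costs
    rs, ra, pbu, pba = 16, 52, 64, 512        # example sizes from Table 1 ranges
    m_intra, m_inter = 0.8, 0.2               # assumed mobility split
    # SPMIPv6: binding updates are sent towards the SLMA.
    sc_spmipv6 = signaling_cost(m_intra, m_inter,
                                c_sd(alpha, rs, ra, d_sn_mag=1),
                                c_bu(beta, pbu, pba, d_mag_lma=2))
    # SGM: the binding update cost is not incurred (Sect. 4.2).
    sc_sgm = signaling_cost(m_intra, m_inter,
                            c_sd(alpha, rs, ra, d_sn_mag=1),
                            cbu=0)
    for smr in (0.1, 0.5, 1.0):
        print(f"SMR={smr}: SPMIPv6={cost_per_smr(sc_spmipv6, smr):8.1f}"
              f"  SGM={cost_per_smr(sc_sgm, smr):8.1f}")
```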

4.3 Complexity Analysis

Our proposed scheme performs mobility management by applying Network Mobility (NEMO) to the communication between nodes. Finally, our performance analysis shows that the handoff signaling cost is significantly reduced.

Fig. 3. Mobility costs: (a) number of IP-WSN nodes; (b) number of hops.

Figure 3 (left) shows the mobility cost versus the number of IP-WSN nodes. The change in the mobility cost is evaluated for PMIPv6, SPMIPv6, and SGM as the number of IP-WSN nodes increases. Here, the number of IP-WSN nodes is set to a maximum of 100, and the number of nodes increases in steps of 10. Figure 3 (right) shows the mobility cost versus the number of hops. The change in the mobility cost is evaluated for PMIPv6, SPMIPv6, and SGM as the number of hops increases. Here, the number of hops is set to a maximum of 20, and the number of hops increases in steps of 1. Figure 4 shows the total signaling cost versus the SMR. The change in the total signaling cost is evaluated for PMIPv6, SPMIPv6, and SGM as the SMR increases. Here, the scope of the SMR is set to 0.1 ≤ SMR ≤ 1. The results of the analysis indicate that the total signaling costs for each scheme decrease as the SMR increases. However, the SGM scheme has less overhead than PMIPv6


Fig. 4. Total Costs vs. SMR.

and SPMIPv6, and the binding update cost is not incurred. Its total signaling cost is much lower than that of the others and converges towards zero; therefore, our proposed scheme is more cost-effective than the others.

5 Conclusion

Mobility in IP-WSN environments is an important issue that is also related to the energy efficiency. In this paper, we have proposed a multicast-based fast mobility management scheme (SGM) to support IP-WSNs by reducing the signaling costs and mobility costs. We conducted cost analysis and performance evaluation, to compare the signaling cost, mobility cost and total signaling cost through the use of various parameters as the number of IP-WSN nodes and number of hops increase. The results indicate that our proposed scheme does not incur the binding update cost and is more cost-effective than the others. We also analyzed the change in the total signaling costs for PMIPv6, SPMIPv6 and SGM as the SMR increases. In conclusion, the SGM scheme has a much lower total signaling cost than the others, and the total signaling cost converges towards zero. Acknowledgments. This research was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education (NRF-2016R1D1A1B03933828). This work was supported R&D Program by the Ministry of Trade, Industry & Energy (MOTIE) and the Korea Evaluation Institute of Industrial Technology (KEIT) (10065737, Development of Modular Factory Automation and Processing Devices toward Smart Factory in Parts Manufacturing Industry). This work was supported by the Technology Innovation Program (10054486, Development of Open Industry IoT (IIoT) Smart Factory Platform and Factory-Thing Hardware Technology) funded By the Ministry of Trade, industry & Energy(MI, Korea). This work was supported by Institute for Information & communications Technology Promotion(IITP) grant funded by the Korea government(MSIP) (No. R-20160530003936, Development of Quality Inspection System based on IIoT Platform for improving Quality in Small and Medium manufacturers).


References

1. Montenegro, G., Kushalnagar, N., Hui, J., Culler, D.: Transmission of IPv6 Packets over IEEE 802.15.4 Networks. IETF RFC 4944, pp. 1–29 (2007)
2. Ni, X., Shi, W., Zheng, S.: Design of micro mobility support in bluetooth sensor networks. In: IEEE International Conference on Industrial Informatics, pp. 150–154 (2006)
3. Istepanian, R.S., Jovanov, E., Zhang, Y.T.: Guest editorial introduction to the special section on M-Health: beyond seamless mobility and global wireless healthcare connectivity. IEEE Trans. Inf. Technol. Biomed. 8(4), 405–414 (2004)
4. Singh, D., Lee, H.J., Chung, W.Y.: An energy consumption technique for global healthcare monitoring applications. In: Proceedings of the 2nd International Conference on Information Sciences, Information Technology, Culture and Human, pp. 539–542 (2009)
5. Shelby, Z., Bormann, C.: 6LoWPAN: The Wireless Embedded Internet, vol. 43. Wiley, Hoboken (2011)
6. Heinzelman, W.R., Chandrakasan, A., Balakrishnan, H.: Energy-efficient communication protocol for wireless micro sensor networks. In: Proceedings of the 33rd Annual Hawaii International Conference on System Sciences, vol. 8, pp. 8020–8030 (2000)
7. Kim, J.H., Hong, C.S., Shon, T.: A lightweight NEMO protocol to support 6LoWPAN. ETRI J. 30(5), 685–695 (2008)
8. Yoo, S.W., Jeong, J.P.: Analytical approach of fast inter-domain handover scheme in proxy mobile IPv6 networks with multicasting support. KIPS Trans. Part C 19(2), 153–166 (2012)
9. Gundavelli, S., Leung, K., Devarapalli, V., Chowdhury, K., Patil, B.: Proxy Mobile IPv6. IETF RFC 5213 (2008)
10. Kushalnagar, N., Montenegro, G., Schumacher, C.: IPv6 over Low-Power Wireless Personal Area Networks (6LoWPANs). IETF RFC 4919 (2007)
11. Kim, E., Kaspar, D., Chevrollier, N., Vasseur, J.P.: Design and Application Spaces for 6LoWPANs. IETF Internet-Draft (2009)
12. Pathan, A.S.K., Hong, C.S.: SERP: secure energy-efficient routing protocol for densely deployed wireless sensor networks. Ann. Telecommun. 63(9–10), 529–541 (2008)
13. Zhang, R., Chu, F., Yuan, Q., Dai, W.: A study on an energy conservation and inter-connection scheme between WSN and internet based on the 6LoWPAN. Mobile Information Systems 2015, 1–11 (2015)
14. Chiang, K.H., Shenoy, N.: A 2D random walk mobility model for location management studies in wireless networks. IEEE Trans. Veh. Technol. 53(2), 413–424 (2004)
15. Islam, M.M., Abdullah-Al-Wadud, M., Huh, E.N.: Energy efficient multilayer routing protocol for SPMIPv6-based IP-WSN. Int. J. Sens. Netw. 18(3–4), 114–129 (2015)
16. Chalmers, R.C., Almeroth, K.C.: A mobility gateway for small-device networks. In: Proceedings of the Second IEEE Annual Conference on Pervasive Computing and Communications, pp. 209–218 (2004)
17. Islam, M.M., Huh, E.N.: Sensor proxy mobile IPv6 (SPMIPv6) – a novel scheme for mobility supported IP-WSNs. Sensors 11(2), 1865–1887 (2011)

Analytical Approach of Cost-Reduced Location and Service Management Scheme for LTE Networks

Hana Jang, Haksang Lee, Taehyun Lee, and Jongpil Jeong(B)

College of Information and Communication Engineering, Sungkyunkwan University, Suwon, Gyeonggi-do 16419, Republic of Korea [email protected], {belkin,unicom,jpjeong}@skku.edu

Abstract. In this paper, we propose a cost-reduced location and service management scheme for LTE (Long Term Evolution) networks, in which a per-user service proxy is created to serve as a gateway between the mobile user and all client-server applications engaged by the mobile user. Our results indicate that the centralized scheme performs the best when the mobile user's SMR (service to mobility ratio) is low and ν (session to mobility ratio) is high, while the fully distributed scheme performs the best when both SMR and ν are high. Through analytical results, we demonstrate that different users with vastly different mobility and service patterns should adopt different integrated location and service management methods to optimize system performance.

Keywords: Location management · Service management · LTE networks · SMR

1 Introduction

LTE (Long Term Evolution) networks [1] provide a wide range of information services. This involves instances in which a mobile user (MU) sends requests to a server and the server sends replies to the mobile user. To reply to a MU, the server must know the MU's location, which may have changed after the requests were sent. For this reason, it has been suggested that a per-user service proxy be created for each mobile user to tackle the problem of personal mobility. The service proxy performs numerous tasks, such as tracking the location of the MU, maintaining service context information for the services engaged, accepting service requests from the MU, transforming requests into proper formats, and forwarding server replies to the MU. As the personal proxy explicitly tracks the MU location, it eliminates the overhead for the server application [2] of first checking with the underlying location management system for the current MU location before data delivery.

In this paper, we investigate and analyze integrated location and service management schemes in order to explore this cost tradeoff, with the goal of identifying conditions under which a particular scheme should be adopted by a MU based on the MU's own mobility and service characteristics for the minimization of network cost. These schemes derive from the basic MME-Cell and LA schemes for location management and the personal service proxy scheme for service management in the LTE network. The amount of cost saving is relative to the speed of the LTE network and is proportional to the number of users, so the benefit is especially pronounced for slow and congested networks with a large number of mobile users. Here, we note that the use of smart terminals capable of reporting their locations may necessitate new location and service management schemes (e.g., paging and letting smart terminals inform ongoing services of their location changes) rather than schemes based on the MME-Cell structure of the LTE network as considered in this paper.

The rest of the paper is organized as follows. Section 2 provides a description of the related work. Section 3 describes in detail the cost-reduced location and service management scheme. In Sect. 4, the performance analysis is described. Finally, Sect. 5 summarizes the paper.

2 Related Work

The seamless management of user mobility is an issue that involves every OSI network layer [3,4], from layers 1 and 2 (handover between cells), through layer 3 (routing updates in the network core), up to the application layer (persistence of transport connections and user state, delay-tolerant operation). The Internet Protocol suite did not originally include any support for end-point mobility. Over the years, a whole family of Mobile IP (MIP) procedures were introduced in an attempt to provide mobility support in a backward-compatible way. On the other hand, current cellular standards, such as GSM [5], EvDO [6], and LTE [7,8], have all been designed with mobility in mind and integrate the appropriate support in the core network. The cellular control plane includes elements that store and maintain the state of the terminal while its association to the network persists (as provided by the MIP Home Agent) and oversees the creation of appropriate bearers to seamlessly provide applications with the illusion of a constant connection between the mobile terminal and the network. The Mobility Management Entity (MME) [9] for the next-generation LTE cellular network supports the most relevant control plane functions related to mobility: it authenticates the User Equipment (UE) as it accesses the system, manages the UE state while the users are idle, supervises handovers between different base stations (evolved Node B, eNB), establishes bearers as required for voice and Internet (packet data network, PDN) connectivity in a mobile context, generates billing information, implements so-called lawful interception policies, and oversees a large number of features defined in its extensive 3GPP specifications. The authors of [10] discussed the notion of distributed servers, each covering a service area, such that a service handoff occurs when the MU crosses a service area boundary.


They assumed the existence of service handoffs and analyzed cache retrieval schemes, employed during a transaction execution, to improve the cache hit ratio by selecting the best server from which the MU will retrieve cached items upon a service handoff [10]. In another related work, the impact of mobility on mobile transaction management was investigated [11]. In particular, a service handoff scheme was analyzed to move the transaction management from one service area to another as the MU crosses a service area in the PCS network. These studies assumed that replicated servers exist in the service areas. Also, no integrated location and service management was considered to reduce the overall cost due to location and service management operations.

3 Cost-Reduced Location and Service Management Scheme for LTE-A Networks

We first describe an LTE system model for location management services. Then we describe an extended system model for integrated location and service management. We consider the LTE network architecture [9], in which the LTE service areas are divided into registration areas (RAs). We assume that a particular MU will remain in a Cell before moving to another. For simplicity, the residence time is assumed to be exponentially distributed with an average rate of s. Such a parameter can be estimated on a per-user basis using the approach described in [10]. We also assume that the inter-arrival time between two consecutive calls to a particular MU, regardless of the MU's current location, is exponentially distributed with an average rate of λ. Under the basic MME/Cell scheme, a mobile user is permanently registered under the MME. When the mobile user enters a new Cell area, it reports to the new Cell, which in turn informs the MME by means of a location update operation. When a call is placed, the system first searches for the MU's current location through the MME and then the call is delivered. For notational convenience, let the average round-trip communication cost between a Cell and the MME be T, representing the cost of a location update operation, as well as of a search operation under the basic MME/Cell scheme. We then describe the operational procedures used to handle location update, call delivery, and service requests; for the anchor-based schemes they are as follows:

Location Update:
  If (this is an anchor boundary crossing movement)
    A location update message is sent to the MME through the new Cell
    The service context is moved to the new Cell, which now serves as the new anchor
    A location update message is sent to all application servers
  Else
    The new Cell sends a location update message to the anchor

Call Delivery:
  A location request message is sent to the MME to learn the anchor of the called user
  If (the local anchor is the current serving Cell)
    The anchor sends a response to the MME that the MU is found
  Else
    The local anchor forwards the request to the current serving Cell
    The current Cell sends a location response to the MME
    The MME updates its record such that the current Cell becomes the new anchor
    The service context is moved to the current Cell (which is the new anchor)
    A location update message is sent to all application servers

Service Request:
  A request is sent from the MU to its current Cell
  If (the current Cell is the local anchor)
    The request is sent to the server and then a response is sent back to the MU
  Else
    The current Cell forwards the request to the anchor
    The anchor forwards the service request/response to the server/MU

We assume that the best integrated scheme is selected on a per-user basis with respect to the minimization of network cost, without being affected by other users in the system.
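As a sketch of how these procedures could be expressed in code, the following Python fragment models the anchor bookkeeping described above, following the dynamic-anchor variant (the anchor follows the MU after a call delivery). The class and method names are illustrative assumptions; message sending is reduced to appending to a trace list.

```python
class DynamicAnchorMU:
    """Illustrative bookkeeping for the anchor-based procedures above."""

    def __init__(self, cell, app_servers):
        self.anchor = cell          # cell hosting local anchor + service proxy
        self.current_cell = cell    # cell currently serving the MU
        self.app_servers = app_servers
        self.trace = []             # recorded signaling messages

    def location_update(self, new_cell, anchor_boundary_crossed):
        self.current_cell = new_cell
        if anchor_boundary_crossed:
            self.trace.append(f"{new_cell}->MME: location update")
            self.anchor = new_cell  # service context moves with the anchor
            self.trace += [f"anchor->{s}: address update" for s in self.app_servers]
        else:
            self.trace.append(f"{new_cell}->anchor({self.anchor}): location update")

    def call_delivery(self):
        self.trace.append("caller->MME: location request")
        if self.anchor != self.current_cell:
            self.trace.append(f"anchor({self.anchor})->cell({self.current_cell}): forward request")
            self.trace.append(f"cell({self.current_cell})->MME: location response")
            self.anchor = self.current_cell  # dynamic anchor follows the call
            self.trace += [f"anchor->{s}: address update" for s in self.app_servers]
        else:
            self.trace.append("anchor->MME: MU found")

    def service_request(self):
        if self.current_cell == self.anchor:
            self.trace.append("cell->server: request / server->MU: response")
        else:
            self.trace.append(f"cell({self.current_cell})->anchor({self.anchor})->server: request")

if __name__ == "__main__":
    mu = DynamicAnchorMU(cell="A", app_servers=["srv1"])
    mu.location_update("B", anchor_boundary_crossed=False)
    mu.call_delivery()
    mu.service_request()
    print("\n".join(mu.trace))
```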

Fig. 1. Dynamic anchor schema (left) and static anchor schema (right).


Under the dynamic anchor scheme, as shown in Fig. 1 (left), a location anchor is used for location management such that the anchor changes whenever the MU crosses an anchor boundary. In addition, the anchor may also change its location within an anchor area when a call delivery operation is serviced. The service proxy dynamically moves with the anchor and is always collocated with the anchor. Below we provide the algorithmic description of the dynamic anchor scheme for processing the location update, call delivery, and service requests. Under the static anchor scheme, the service proxy is again co-located with the anchor. However, the anchor will remain at a fixed location as long as the MU remains in the same anchor area. The only condition under which the anchor would move (along with the service context transferred) is when the MU moves across an anchor boundary. The procedures for processing the location update, call delivery, and service requests are the same as in the dynamic anchor scheme except that upon a successful call delivery, the anchor’s location remains unchanged. Thus, there is no need to migrate the service proxy to the current serving Cell (if they are not the same) after serving a call delivery operation. We illustrate a static anchor in Fig. 1 (right). When the MU moves within anchor area 1 from Cell A to Cell B and then to Cell C, the local anchor in Cell A is updated to point to the current Cell without updating the MME. An incoming call will invoke a search operation at the MME database to first find the anchor and then locate the current Cell. The location of the anchor (where the service proxy is co-located) remains unchanged after a call is serviced. The anchor moves only when the MU moves out of the current anchor area (from Cell C to Cell D in this case). For each service request issued from MU, it is serviced by the service proxy co-located with the anchor. As in dynamic anchor, there is no additional cost for the service proxy to find the MU, since the service proxy is co-located with the anchor.

4 Performance Analysis

4.1 Network Modeling

We first define the communication cost analysis model for the two states in the LTE system. Then, we demonstrate how the performance metric can be assessed for the various schemes. For the analysis, the two-dimensional hexagonal random walk model [11–14] has been adopted. The LTE system can be assumed to be configured as a hexagonal network, with each cell being the radio coverage of an eNB. The UE moves from one cell to another, and its movement is modeled based on the two-dimensional hexagonal random walk model. In this model, a hexagonal cell structure is modeled and the cells are classified into a 6-layer cluster. We assume that a UE resides in a cell for a specified time period and then moves to any of the neighboring cells with equal probability. Using this, a one-step transition matrix of this random walk can be derived by letting P((x, y), (x′, y′)) be the one-step transition probability from state (x, y) to (x′, y′). The one-step transition matrix for this random walk is the matrix P = (P((x, y), (x′, y′))).
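The following Python sketch illustrates the equal-probability hexagonal random walk used here: each cell, addressed by axial coordinates (x, y), moves to one of its six neighbours with probability 1/6. The coordinate convention and the unbounded grid are simplifying assumptions; the paper's model additionally folds cells into a 6-layer cluster.

```python
import random

# Axial-coordinate offsets of the six neighbours of a hexagonal cell.
HEX_NEIGHBOURS = [(+1, 0), (-1, 0), (0, +1), (0, -1), (+1, -1), (-1, +1)]

def one_step_probability(src, dst):
    """P((x, y), (x', y')): 1/6 for each of the six neighbours, else 0."""
    dx, dy = dst[0] - src[0], dst[1] - src[1]
    return 1.0 / 6.0 if (dx, dy) in HEX_NEIGHBOURS else 0.0

def random_walk(start, steps, rng=random):
    """Simulate the UE movement as an equal-probability hexagonal walk."""
    x, y = start
    path = [(x, y)]
    for _ in range(steps):
        dx, dy = rng.choice(HEX_NEIGHBOURS)
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

if __name__ == "__main__":
    print(one_step_probability((0, 0), (1, 0)))   # 0.166...
    print(random_walk((0, 0), steps=5))
```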


From the Markov chain model, $i$ is defined as the number of cell crossings since the last packet call ended, and $\alpha_{i,j}$ is the state transition probability from state $i$ to state $j$, denoted by

$$\alpha_{i,j} = \begin{cases} 1 - \rho, & \text{for } j = i + 1 \\ \rho, & \text{for } j = 0 \\ 0, & \text{otherwise} \end{cases} \qquad (1)$$

The probability $\rho$ that one or more packet calls arrive between two cell crossings can be calculated as

$$\rho = \int_{0}^{\infty} \left(1 - e^{-\lambda_s t}\right) f_m(t)\,dt = 1 - f_m^{*}(\lambda_s) = 1 - \frac{\gamma\lambda_m}{\lambda_s + \gamma\lambda_m} \;\overset{\gamma=1}{=}\; \frac{\lambda_s}{\lambda_s + \lambda_m} \qquad (2)$$

From the probability above, we can model the signaling cost of the two mobility states in an LTE system: LTE-ACTIVE, in which the network directs the UE to the serving cell and the UE is ready to perform uplink/downlink transport with very limited access delay, and LTE-IDLE, in which the UE is in a low-power-consumption state, can be tracked in the tracking area, and can transition to LTE-ACTIVE in approximately 100 ms. For the LTE-ACTIVE mode, the signaling procedure costs are defined as follows. The total handover cost $C_h$ is the handover cost incurred when an on-going application session exists while the UE moves across cells. We define $H$ as the handover cost of one handover process. The handover cost while the UE stays in state $i$ of the Markov chain is

$$C_h(i) = \theta(n, i) \times H \qquad (3)$$

The average handover cost per state transition is

$$\overline{C_h} = \sum_{i=0}^{7} p_i\, C_h(i) = \rho \sum_{i=0}^{7} (1 - \rho)^i\, C_h(i) \qquad (4)$$

The average handover cost per unit of time is

$$C_h = \lambda_m \times \overline{C_h} \qquad (5)$$

The total session activation cost $C_a$ is the cost to set up the transport tunnel for a newly arrived session. The cost of one session activation is denoted by $A$. Assuming $\nu$ is the number of packet call arrivals between two cell crossings, then $\nu = \lambda_s / \lambda_m$. The session activation cost while the UE stays in state $i$ of the Markov chain is

$$C_a(i) = \nu \times N_{unit} \times A \qquad (6)$$

The average session activation cost per state transition is

$$\overline{C_a} = \sum_{i=0}^{7} p_i\, C_a(i) = \rho \sum_{i=0}^{7} (1 - \rho)^i\, C_a(i) \qquad (7)$$


The average session activation cost per unit of time is

$$C_a = \lambda_m \times \overline{C_a} \qquad (8)$$

The service request cost $C_s$ is the average cost to communicate with the application server through the proxy. For the centralized scheme, each operation incurs a communication cost between the user's current cell and the MME co-located with the centralized service proxy. Thus, the total cost in the LTE-ACTIVE mode is

$$C_{T.active} = C_h + C_a + C_s \qquad (9)$$

In the LTE-IDLE mode, the signaling cost includes the total location update cost, incurred when the core network updates the UE location as the UE moves across tracking areas (TAs), and the total paging cost, incurred when the core network pages all the cells in a TA to track the position of the UE. Using the same equations as above, but with the location update cost $C_l$ and the paging cost $C_p$ replacing $C_h$ and $C_a$, the total signaling cost for the LTE-IDLE mode is

$$C_{T.idle} = C_l + C_p \qquad (10)$$
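A minimal Python sketch of Eqs. (2)–(10) is given below. It computes ρ from λs and λm (with γ = 1), the per-transition averages using the state probabilities p_i = ρ(1 − ρ)^i implied by Eq. (4), and the per-unit-time totals. The per-state handover count θ(n, i) is not specified in this excerpt, so the sketch takes a caller-supplied function as an assumption; the numeric values in `main` are illustrative only.

```python
def rho(lambda_s, lambda_m, gamma=1.0):
    """Eq. (2): probability of >=1 packet-call arrival between cell crossings."""
    return 1.0 - (gamma * lambda_m) / (lambda_s + gamma * lambda_m)

def state_probs(p, n_states=8):
    """State probabilities p_i = rho * (1 - rho)^i for i = 0..7 (Eq. 4)."""
    return [p * (1.0 - p) ** i for i in range(n_states)]

def avg_per_transition(per_state_cost, p):
    """Eqs. (4)/(7): per-state cost weighted by the state probabilities."""
    return sum(pi * per_state_cost(i) for i, pi in enumerate(state_probs(p)))

def total_active_cost(lambda_s, lambda_m, H, A, n_unit, theta, C_s):
    """Eq. (9): C_T.active = C_h + C_a + C_s (per unit of time)."""
    p = rho(lambda_s, lambda_m)
    nu = lambda_s / lambda_m                                            # packet calls per crossing
    C_h = lambda_m * avg_per_transition(lambda i: theta(i) * H, p)      # Eqs. (3)-(5)
    C_a = lambda_m * avg_per_transition(lambda i: nu * n_unit * A, p)   # Eqs. (6)-(8)
    return C_h + C_a + C_s

def total_idle_cost(C_l, C_p):
    """Eq. (10): C_T.idle = C_l + C_p."""
    return C_l + C_p

if __name__ == "__main__":
    # theta(i): assumed per-state handover count (not given in the excerpt).
    cost = total_active_cost(lambda_s=2.0, lambda_m=1.0, H=4.0, A=2.0,
                             n_unit=1, theta=lambda i: i, C_s=1.0)
    print(f"C_T.active = {cost:.2f},  C_T.idle = {total_idle_cost(3.0, 5.0):.2f}")
```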

In the fully distributed scheme, each time the UE moves across a cell boundary, three costs occur: a cost of $H$ to update the MME database to keep track of the UE, a cost of $M_{cs} \times \tau_3$ to transfer the service context to the new cell in order to provide continuous services, where $\tau_3$ stands for the communication cost between two neighboring cells, and finally a cost of $N_s T$ to inform the $N_s$ application servers of the address change of the service proxy. Each time a call is placed to the mobile user, the MME consults the current cell to acquire the location information, with communication cost $T$. For each service request, since the service proxy is always co-located with the current cell of the UE, the only communication cost is from the proxy to the server. Summarizing the above,

$$C_h^{distributed} = \lambda_m \times (\overline{C_h} + M_{cs} \times \tau_3 + N_s T) \qquad (11)$$

Thus, $C_T$ is defined as

$$C_{T.active}^{distributed} = C_h^{distributed} + C_a + C_s \qquad (12)$$

To calculate $C_T$ for the dynamic anchor scheme, we introduce additional cost parameters in Table 1 for ease of presentation. These cost parameters can be calculated as follows. Suppose $N$ states exist in the underlying Markov model. Let $p_i$ be the steady-state probability that the system is found in state $i$. The average cost to serve the location update, call delivery, and service requests can be obtained by assigning cost values to these $N$ system states. Specifically, let $C_{i,call}^{da}$ be the search cost assigned to state $i$, given that a search operation is being serviced in state $i$ under the dynamic anchor scheme.


Table 1. System parameters for the performance analysis.

Parameter        Meaning
C_ServInM        The average cost of performing an intra-anchor location update operation when the UE changes its cell within the same anchor area
C_ServOutM       The average cost of performing an inter-anchor location update operation when the UE moves out of the current anchor area
C_ServCvdC       The cost to handle a call delivery operation when the current cell is the same as the anchor cell
C_ServNonCvdC    The cost to handle a call delivery operation when the current cell is different from the anchor cell
C_ServCvdS       The cost to handle a service request when the anchor resides in the current serving cell
C_ServNonCvdS    The cost to handle a service request when the anchor is different from the current serving cell
C_ServC          The cost to handle a call delivery
C_ServS          The cost to handle a service request

Then, the average search cost under the dynamic anchor scheme, $C_a^{da}$, can be calculated as the expected value of $C_{i,call}^{da}$ weighted by the state probability, i.e.,

$$C_a^{da} = \lambda_m \times \sum_{i=1}^{N} p_i \times C_{i,call}^{da} \qquad (13)$$

$C_{i,call}^{da}$ is $C_{ServNonCvdC}$ if, in state $i$, the current cell is different from the anchor; otherwise $C_{i,call}^{da}$ is assigned $C_{ServCvdC}$ to account for the fact that the current cell is the same as the anchor in state $i$. Similarly, let $C_{i,h}^{da}$ and $C_{i,s}^{da}$ be the costs for serving the location update and the service request in state $i$:

$$C_h^{da} = \lambda_m \times \sum_{i=1}^{N} p_i \times C_{i,h}^{da}, \qquad C_s^{da} = \lambda_m \times \sum_{i=1}^{N} p_i \times C_{i,s}^{da} \qquad (14)$$

If, in state $i$, the MU has just made an intra-anchor movement, the location update cost is $C_{ServInM}$; if the MU has just made an inter-anchor movement, the location update cost is $C_{ServOutM}$. If, in state $i$, the MU has not yet made a move, then the location update cost in state $i$ is the average cost weighted by the probability of whether the user's next move is inter- or intra-anchor, i.e., $P_{InA} \times C_{ServInM} + P_{OutA} \times C_{ServOutM}$, where $P_{InA}$ is the probability of an intra-anchor movement in state $i$ and $P_{OutA}$ is the probability of an inter-anchor movement in state $i$. In the second equation above, $C_{i,s}^{da}$ depends on whether the current cell is different from the anchor cell in state $i$: if the anchor is not the current cell, $C_{i,s}^{da}$ is $C_{ServNonCvdS}$; otherwise $C_{i,s}^{da}$ is $C_{ServCvdS}$. The total cost per time unit incurred to the LTE network in LTE-ACTIVE mode under the dynamic anchor scheme can then be calculated as

$$C_{T.active}^{da} = C_h^{da} + C_a^{da} + C_s^{da} \qquad (15)$$

In the static anchor scheme, the local anchor and the service proxy remain static in one Cell as long as the UE resides in an anchor area.


The major difference between the static anchor model and the dynamic anchor model is that there is no flag indicating whether the anchor Cell is located in the current serving cell, because, unlike in the dynamic anchor scheme, the anchor is at a fixed location upon entry to a new anchor area and remains there until the UE departs the anchor area. Therefore, we only need to consider the average cost of accessing the anchor from any cell in the anchor area, without having to track whether the current cell is the same as the anchor cell. Let $\tau_1$ be this average communication cost between the anchor and a cell in the anchor area. By following an approach similar to that used for the dynamic anchor scheme, the costs incurred to the LTE system per time unit under the static anchor scheme for handover, call delivery, and service requests can be calculated, respectively, as:

$$C_a^{sa} = \lambda_m \times \sum_{i=1}^{N} p_i \times C_{i,a}^{sa} = \lambda_m \times \sum_{i=1}^{N} p_i \times C_{ServC} \qquad (16)$$

$$C_h^{sa} = \lambda_m \times \sum_{i=1}^{N} p_i \times C_{i,h}^{sa}, \qquad C_s^{sa} = \gamma \times \sum_{i=1}^{N} p_i \times C_{i,s}^{sa} = \gamma \times \sum_{i=1}^{N} p_i \times C_{ServS} \qquad (17)$$

Therefore, the total cost per time unit incurred to the LTE network under the static anchor scheme, $C_{T.active}^{sa}$, is calculated as

$$C_{T.active}^{sa} = C_h^{sa} + C_a^{sa} + C_s^{sa} \qquad (18)$$

We present numerical data obtained based on our analysis for a LTE network consisting of a 2-layer cell, TA/TAL and MME modeled by the hexagonal network coverage model. Performances of the centralized, fully distributed, dynamic anchor, and static anchor schemes in the LTE network in terms of the communication cost incurred to the network per time unit as a function of CMR and SMR under identical network signaling-cost conditions, whereby all costs are normalized with respect to the cost of transmitting a message between a cell and its MME, i.e., Cta = 0.5 such that Cmme = 1 and Clte = 6. In Fig. 2, as the SMR increases, the cost rate under all four schemes increase because when the mobility rate γ is fixed, increasing SMR increases the service request rate, which in turn incurs more service-related costs for all four schemes. Figure 3 demonstrates the impact on total cost by call request rate and service request rate. As the call request rate increases, the cost for all four kinds of schemes will increase. Scheme using an anchor performs better. Service request rate increases, dynamic anchor performs the best. If service request rate is low, crossing cells incurs proxy’s change. Thus, the full distributed scheme performs poorly.

Cost-Reduced Location and Service Management Scheme

(a) γ= 0.1

309

(b) γ= 10

Fig. 2. Total cost under different service to mobility ratio (SMR) values

(a) λm = 1.0

(b) λm = 10

Fig. 3. Total cost under different context session and service request rate

5

Conclusion

In this paper, to reduce the overall communication cost for servicing mobilityrelated and service-related operations by the integrated LTE networks, we investigated and analyzed several possible cost-reduced location and service management schemes and identified conditions under which one scheme may perform better than others. The analysis results are useful for identifying the best scheme to be adopted in order to provide personalized services to individual users based on their user profiles. Our analysis results show that the dynamic anchor scheme performs the best in most conditions except when the context transfer cost is high (when the server is heavy). These results suggest that different users with vastly different mobility patterns should adopt different cost-reduced location and service management methods to optimize system performance. Acknowledgments. This research was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education (NRF-2016R1D1A1B03933828).

310

H. Jang et al.

References 1. Kitagawa, K., Komine, T., Yamamoto, T., Konishi, S.: A handover optimization algorithm with mobility robustness for LTE systems. In: Personal Indoor and Mobile Radio Communications (PIMRC), pp. 1647–1651, September 2011 2. Ro, S., Choe, J.: A study on inter-domain support in proxy mobile IPv6. J. KIIT 10(2), 199–206 (2012) 3. Bora, G., Bora, S., Singh, S., Arsalan, S.M.: OSI reference model: an overview. Int. J. Comput. Trends Technol. (IJCTT) 7(4), 214–218 (2014) 4. Roussopoulos, M., Maniatis, P., Swierk, E., Lai, K., Appenzeller, G., Baker, M.: Person-level routing in themobile people architecture. In: Proceedings of the USENIX Symposium on Internet Technologies and Systems, vol. 2, pp. 165–176. USENIX Association, October 1999 5. Hossain, K.M., Kirtaniaa, P.C., Arefin, A.: An automated load system using GSM network, Technical report of Northern University, Bangladesh, Feburary 2015 6. Chin, T., Shi, G., Lee, K.C.: Method and apparatus for the multimode terminal to monitor paging messages in CDMA EVDO and frame synchronous TD-SCDMA networks, U.S. Patent No. 8,996,041 (2015) 7. Chen, X., Suh, Y.H., Kim, S.W., Youn, H.Y.: Reducing connection failure in mobility management for LTE HetNet using MCDM algorithm. In: Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), pp. 1–6, June 2015 8. Gupta, A.K., Jha, R.K.: A survey of 5G network: architecture and emerging technologies. IEEE Access 3, 1206–1232 (2015) 9. Fortes, S., Aguilar-Garca, A., Barco, R., Barba, F., Fernandez-luque, J., FernndezDurn, A.: Management architecture for location-aware self-organizing LTE/LTE-a small cell networks. IEEE Commun. Mag. 53(1), 294–302 (2015) 10. Jermyn, J., Jover, R.P., Murynets, I., Istomin, M., Stolfo, S.: Scalability of Machine to Machine systems and the Internet of Things on LTE mobile networks. In: World of Wireless, Mobile and Multimedia Networks (WoWMoM), pp. 1-9, June 2015 11. Xie, J., Akyildiz, I.F.: A novel distributed dynamic location management scheme for minimizing signaling costs in Mobile IP. IEEE Trans. Mob. Comput. 1(3), 163–175 (2002) 12. Yang, S.R., Lin, Y.B.: Performance evaluation of location management in UMTS. IEEE Trans. Veh. Technol. 52(6), 1603–1615 (2003) 13. Dunham, M.H., Kumar, V.: Impact of mobility on transaction management. In: ACM Data Engineering for wireless and Mobile Access, pp. 14–21, August 1999 14. Akyildiz, I.F., Lin, Y.B., Lai, W.R., Chen, R.J.: A new random walk model for PCS networks. IEEE J. Sel. Areas Commun. 18(7), 1254–1260 (2000)

Design and Security Analysis of Improved Identity Management Protocol for 5G/IoT Networks Byunghoon Song1 , Yoonchae Cheong2 , Taehyun Lee2 , and Jongpil Jeong2(B) 1

IoT Convergence Research Center, Korea Electronics Technology Institute (KETI), Seongnam 13509, Republic of Korea [email protected] 2 College of Information and Communication Engineering, Sungkyunkwan University, Suwon, Gyeonggi-do 16419, Republic of Korea {ycheong,unicom,jpjeong}@skku.edu

Abstract. The Internet of Things (IoT) has become a powerful element of next generation networking technologies. In an IoT-enabled environment, things or physical objects no longer stay unresponsive. Instead they are connected to the Internet and embedded with processing and communication capabilities. A great rise of mobile network users is causing identity management problems on mobile service provider through mobile networks. This paper proposes improved I2DM to solve user ID management and security problems on mobile Internet application services over 5G/IoT networks. IIDM protocol breakup loads which made by existing I2DM protocols mutual authentication via mobile operator, via sending some parts to Internet application service provider, enhancing mobile and ID management of service provider and network load and process load from information handling and numbers of transmitting packets, to propose more optimized protocol against further demanding of 5G/IoT mobile networks. Keywords: Identity management networks · Security analysis

1

·

IIDM

·

IDM3G

·

I2DM

·

Mobile

Introduction

Mobile users seek diverse services on smart phones. The wide range of mobile services has increased demand for faster wireless networks. Telecom providers provide LTE (Long Term Evolution) servers to subscribers. The main advantage of LTE compared to 3G is that it is high-throughput, low latency and has plug and play, available FDD (Frequency Division Duplex) and TDD (Time Division Duplex) on the same plat-forms, improved user performance, simple architecture and lower operating costs than alternate networks. EPS (Evolved Packet System) is an AllIP based system which supports a variety of access networks such as LTE and HSDPA/HSDPA+ (High Speed Downlink Packet Access). LTE in EPS technology is based on release 8 of 3GPP (3rd Generation Partnership Project) [1]. c Springer International Publishing AG 2017  ´ Rocha et al. (eds.), Recent Advances in Information Systems and Technologies, A. Advances in Intelligent Systems and Computing 570, DOI 10.1007/978-3-319-56538-5 32

312

B. Song et al.

Authentication in the LTE network involves high network costs to request or communicate authentication information between external visited networks and HSS due to different network methods and location problems between visited and home networks. Therefore, in 3GPP, HSS is able to send multiple authentication vectors at one time for use in the external network. The number of authentication vectors in 3G for an authentication information map of TS 29.002 in 3GPP is fixed at 5 [2]. This number is not suitable for a variety of authentication event environments in LTE networks. In an LTE network, subscribers may request the number of authentication vectors for the HSS. In this paper, subscribers may request the number of authentication vectors for the HSS or may request the optimized number of authentication vectors for use within a specific external network to minimize network traffic between HSS and the external network. We analyze authentication patterns in various environments using mathematical modeling. We also calculate the optimal number of authentication vectors based on authentication patterns for minimum cost authentication signaling. IIDM proposed in this paper performs task similar to that of previous I2DM [3], IDM3G [4,5], SSO (Single Sign-On) [6], which operates cross-certificate not just an individual communication between service provider and MO on Internet, satisfying users need [7]. Yet it focuses on minimizing degradation in MO and maintenance expanse. It is done by constructing trusted-base with cross-certificate between service provider and MO, and implying principles of PGP (Pretty Good Privacy) [8,9] based on PKI (Public Key Infrastructure) [10,11] enables mutual dependencebase communication, and also able ID management to service providers. Under this base, IIDM is can shorten mobile ID authentication steps (cutting packets for mobile user authentication), improve mobile network bandwidth, and availability. The rest part of the paper is organized as follows. In Sect. 2, related works is presented. Our proposed IIDM protocol is proposed in Sect. 3. In Sect. 4, performance analysis and security analysis are described, respectively. Finally, this paper is concluded in Sect. 5.

2 2.1

Related Work IoT/LTE Network Systems

In LTE networks, a variety of factors evolved from 3G technology. In Fig. 1, a diagram of a subscriber accessing a new external network shows the routing of user data for the LTE network system and security technologies [1,2,12]. In LTE networks, the principal parties are the HPLMN, the VPLMN and the user/subscriber. The user is represented by the user equipment (UE), which consists of the mobile equipment (ME) and the subscriber module (UICC/USIM). In the LTE network, the certificate (Authentication Credentials) is called an authentication vector (EPS-AV). Unlike in UMTS, a session key exists to replace the KASME master key [3]. Figure 2, the EPS-AKA protocol shows the EPS-AV delivery.

Design and Security Analysis of Improved Identity Management Protocol

313

Fig. 1. LTE network system.

As shown in Fig. 2, the EPS-AKA authentication procedure for LTE is described. In LTE, authentication supports mutual authentication, including authentication of the UE by the network and authentication of the network by the UE. In LTE authentication, the authenticating parties are HSS in the home network and the user service identity module (USIM) in UE. Two major authentication procedures are described in this paper. – Distribution of authentication vector: This procedure distributes authentication vectors (AVs) from the HSS to the MME. MME can include multiple EPS-AV at once to HSS requests based on authentication vector size K of EPS-AV. – Authentication and Key Establishment: In LTE, mutual authentication is achieved by sharing a secret key between the USIM and the HSS [13]. MME from the HSS receives EPS-AV and will be authenticated by the mobile using security information.

Fig. 2. LTE certification process.

314

2.2

B. Song et al.

Mobile Network Authentication

In authentication and key agreements for 3GPP, the authentication server generates multiple authentication vectors and sends them to the visited network. This mechanism reduces the amount of signaling traffic between the visited network and the authentication server. However, the visited external network has additional storage costs for storing multiple authentication vectors [13,14]. Numerous analytical models have been proposed to investigate the impact of the number of authentication vectors [15–17]. The study showed that their proposed preauthentication scheme decreases the authentication delay with minor increased signaling overhead. In order to reduce authentication signaling traffic, access to HSS through a proper authentication vector of appropriate array size may reduce transmission costs. In these studies, an appropriate authentication signaling scheme (EPS-AKA) is proposed. UMTS (Universal Mobile Telecommunications System)-AKA (Authentication and Key Agreement) is one of the 3G mobile network technique, protocol designed for safe networks on wireless networks such as cellular phone, wireless phone, wireless user networks, wireless LAN. UMTS-AKA [18] is network access authentication mechanism defined by 3GPP (3rd Generation Partnership Project) [19]. UMTS-AKA bases on Challenge/Response protocol concept, which is designed to not share pairing-key between communicating individual but still one individual authenticated as other. This UTMS-AKA authentication and Key authenticating mechanism is standard for 3GPP for 3G security, guaranteeing user ID management, confidentiality, and integrity [20]. I2DM [3] and IDM3G [4,5], based on UTMS-AKA protocol in using internet on the 3G mobile network, focusing on cross-certificate and administration between service provider and users, has been proposed to implement aspect of SSO, to avoid providing repetitive ID [21]. Also, as an independent individual, it can be applied not only to UTMS mobile but also in interaction of 3G mobile and WLAN. IDM3G protocol can reduce most of the authentication-exchanging step between SP authenticated in advance and new SP, that previous service providers in IP network for implementing SSO (mainly Microsoft.Net Passport) had to take, by transmitting TicketSP IP, RND, IMSI, TMSI, generated by many exchanging process between MO and MO to SP. And it also considers characteristics of mobile network (Serving network), handover between MOs. IDM3G consists of 4 separate individuals, user (U), USIM connected to users equipment (USIM/UE), MO (MO every component related to infrastructure) and service provider (SP). According to Liberty Alliance protocol, subject is a combination of user and USIM/UE, ID provider as a MO, and service provider is defined as a SP. IDM3G USIM/UE makes ticket with TMSI and UMTS-AKA factors that MO shares, and sends it to SP, on MOs side its waiting for authentication request of Ticket made by USIM/UE information it holds, and comparing information asked by SAML from SP.

Design and Security Analysis of Improved Identity Management Protocol

3

315

Our Proposed IIDM Protocol

Based on I2DM [3] and IDM3G [4,5] protocol, IIDM proposes improvements in both performance aspects and security aspects. Especially, I2DM are expected to load a lot to MO itself because timer operates while receiving USIM/UE information from SP. Figure 3 simply describes principles of priori trusted-base for authenticating encrypted Ticket. Protocols mechanism bases on pre-registering public key of MO and SP, and establishing trust/managing relationship.

Fig. 3. IIDM Protocol.

Registering the SP's public key using the PGP algorithm, with the MO and the SP registered as a pair, establishes trust in the SP and allows the SP to be managed effectively as a service provider. Authentication of the public key can also rely on a third-party authentication organization supplied by the MO. In the United States, for example, the PGP public-key infrastructure maintains an LDAP/HTTP key server with about 300,000 registered public keys, and this server is mirrored to other sites around the world. Such a registered key can provide an effect similar to a digital signature.

The IIDM protocol makes the following assumptions. First, the MO and the SP have a firm business agreement and a trusted relationship (including a secure channel). The business agreement and trust are established through the agreement between the MO and its users, and the user identity is known only to the MO. The identity of the terminal user may differ from that of the contracting party, but the contracting parties remain responsible, and the actual user can be identified by biometrics. Second, before the MO starts providing services to users, the SP must accept in advance the key exchange according to the PGP algorithm and the ID-ticket comparison algorithm. This agreement is shared between the identity provider (MO) and the service provider (SP). The MO manages and distributes the registered public keys of the SPs, and through this process the MO and the SP acquire a trusted relationship. Third, the user authenticates the USIM by entering a PIN according to the 3GPP specification, and the UMTS-AKA mechanism explained above is used for the mutual authentication of the USIM and the MO. In this step, CK (the cipher key) and IK (the integrity key) are computed by both the USIM and the MO, and the TMSI is encrypted by the MO and transmitted to the USIM.

Fig. 4. IIDM Message Delivery Procedure.

As shown in Fig. 4, the information of an SP that is already part of the trusted relationship passes through the MO, and when authentication of the SP is required (i.e. a specific identity is held), the IIDM protocol is initiated when the user tries to access the service. Figure 4 describes the IIDM protocol structure together with the messages transmitted between the communicating parties.
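A minimal sketch of the ticket-comparison idea behind Figs. 3 and 4 is given below. It only illustrates how a TMSI-based ticket can be built by the USIM/UE and re-computed by the MO when the SP asks for verification; the HMAC construction, the field layout and the class names are assumptions made for this example, while IIDM itself relies on PGP-registered keys between the MO and the SP.

```python
# Illustrative sketch only: ticket binding and MO-side verification, not the IIDM wire format.
import hmac, hashlib, os

def make_ticket(ik: bytes, tmsi: bytes, rnd: bytes, sp_id: bytes) -> bytes:
    """USIM/UE side: bind the temporary identity to this SP without exposing the IMSI."""
    return hmac.new(ik, tmsi + rnd + sp_id, hashlib.sha256).digest()

class MobileOperator:
    def __init__(self):
        self.sessions = {}                         # TMSI -> IK shared via UMTS-AKA

    def register_session(self, tmsi: bytes, ik: bytes):
        self.sessions[tmsi] = ik

    def verify(self, tmsi: bytes, rnd: bytes, sp_id: bytes, ticket: bytes) -> bool:
        """Answer the SP's (SAML-style) verification request by recomputing the ticket."""
        ik = self.sessions.get(tmsi)
        expected = make_ticket(ik, tmsi, rnd, sp_id) if ik else b""
        return hmac.compare_digest(expected, ticket)

mo = MobileOperator()
tmsi, ik, rnd, sp_id = os.urandom(4), os.urandom(32), os.urandom(16), b"sp.example.com"
mo.register_session(tmsi, ik)                      # established during UMTS-AKA
ticket = make_ticket(ik, tmsi, rnd, sp_id)         # built by the USIM/UE, sent to the SP
assert mo.verify(tmsi, rnd, sp_id, ticket)         # the SP delegates the check to the MO
```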

4 Performance Analysis

An important parameter in the operation of the protocol is that the mobile user's equipment has limited performance for network communication. To measure IIDM's performance, this USIM/UE equipment environment must be taken into account. The number of messages exchanged between the USIM/UE and the other independent entities in the different protocols is also included in the evaluation. IIDM uses sets of pre-calculated values (obtained during the mutual authentication for network access based on the UMTS-AKA mechanism), and further value sets are calculated during operation. This design decreases the MO's processing load by having the random number generation in step 4, the SP IP encryption, and the ticket comparison in step 10 happen at the SP. These calculations, including the random number generation, can be handled by USIM products compatible with the 3GPP specification.

For the number of messages exchanged among the related communication entities, IIDM is compared with the .NET Passport protocol and the Liberty Alliance identity management protocol profile [22–24]. The total number of messages exchanged between the user client and the other communication entities throughout the protocol, as well as the total per protocol, is calculated. The same conditions are considered for every protocol, including the precondition that the service provider and the identity provider have already authenticated each other, so the messages related to that step are not counted. The Liberty Alliance protocol, unlike the .NET Passport protocol, assumes that the user is already authenticated by the identity provider and includes these authentication messages; it should also be noted that the SP and the MO are mutually authenticated through the registration process. To generalize the result, at least two more messages need to be added to the five main messages of the Liberty Alliance protocol (authentication request and authentication response). IIDM does not require these messages, since it unifies the MO's role and offloads work to the SP, acting as identity provider in a 5G-style mobile architecture.

Table 1. Comparison of the number of messages for communication entities of IIDM, IDM3G and I2DM.

Analysis of message number for communication entities | IIDM | IDM3G | I2DM
The total number of messages between UE/SP | 5 | 5 | 5
The total number of messages between USIM/MO/SP | 4 | 7 | 6
Number of messages between MO/SP or MO/UE | 1 | 2 | 1
Number of entities of message delivery for total procedures | 1 | 18 | 16

Table 1 shows the comparison of the number of messages per communication entity for the protocols. IIDM involves a considerably lower number of messages than the Liberty Alliance and .NET Passport protocols, and its total number of exchanged messages is also the lowest. This is the result of considering the performance differences between communication entities, unifying the protocol functions of the 5G infrastructure, and adapting the mutual dependence mechanism. IIDM is simple but, being based on existing standards, it guarantees easy implementation, technical functionality, and compatibility with user terminal equipment. The first important security feature of the proposed protocol is the guarantee of integrity and authenticity of the message transactions. This follows from the fact that every transmitted message contains either a digital signature or a hash in which identity-related information is included and can be verified by the intended receiver. The security evaluation against the ten IETF recommendation articles is as follows:

– Privacy: the IIDM protocol can prevent eavesdropping and wiretapping and also protects the user's identity. The IMSI is not transmitted to the SP, and the TMSI is encrypted one-way according to the PGP algorithm before transmission. The message transmitted to the MO contains only a temporary identifier recognized by the MO and the SP. The information transmitted to the SP is the authority to access the service and related information, decrypted with the private key corresponding to the self-issued public key.


– Cross-certification: mutual authentication is established between the independent entities of the protocol. Using the UMTS-AKA mechanism, the UE/USIM and the MO authenticate each other. Mutual authentication between the MO and the SP is a precondition of the protocol, secured over a safe channel according to the business agreement between the two parties, and it is validated through the PGP key management mechanism of the MO and the SP, either directly or via a third party. Authentication between the USIM and the user is implemented through a PIN according to the 3GPP specification [23]. Moreover, a well-designed biometric component can provide an even stronger authentication mechanism.
– Confidentiality: confidentiality is provided by symmetric encryption using CK. The strength of the encryption mechanism is inherited from the UMTS-AKA of the 3GPP specification.
– Integrity: integrity protection is implemented by a message authentication code (MAC) based on the UMTS-AKA algorithm.
– Replay attack: protection against replay attacks consists of four components: first, the authentication-vector comparison (specified by 3GPP) that refreshes CK and IK in the full authentication steps of UMTS-AKA; second, the TMSI comparison (specified in the UMTS-AKA mechanism); third, the RND calculation that follows the MAC integrity protection. The RND identifies a specific ticket and is fixed for a specific time frame.
– MITM (man-in-the-middle) attack: this is a session hijacking attack with the attacker placed in the middle of the communication. Protection is achieved through the previously mentioned integrity, confidentiality and replay-attack protections, all tied to the mutual authentication. In addition, IIDM is designed on the basis of mutual authentication, which makes it robust against MITM attacks, and the use of the PGP mechanism to compare the SP's public key makes it robust against social engineering attacks such as phishing or pharming.
– Brute-force/dictionary attack: UMTS-AKA is not a password protocol, so it is inherently robust against dictionary and brute-force attacks.
– Key derivation protection: the encryption keys CK and IK inherit the security level of the UMTS protocol, based on the UMTS-AKA specification. The asymmetric keys between the MO and the SP are based on the PGP algorithm and its key exchange structure.
– Random number generation: the RND used for the security procedures is generated according to the IETF randomness recommendations [25].
– DoS attack protection: various DoS attacks are possible. One attack relevant to identity management is sending error messages to the related communication entities; the current description of the protocol does not yet cover error cases and the corresponding error messages. This is one of the research issues to be addressed in future work.

5 Conclusion

Performance improvements in mobile terminals have brought a sudden increase in mobile network bandwidth usage, and as network operators focus on this new market, quantitative and qualitative expansion is in progress.


However, security aspects such as monitoring and personal information privacy have not received as much consideration. IIDM cuts down the network cost by maximizing load balancing toward the SP, solving the weakest point of the existing I2DM and IDM3G protocols, namely the concentration of processing at the MO; it strengthens the basis for coping with social engineering attacks, eases identity management, and provides the transparency and confidentiality in the mobile network inherited from the strong points of IDM3G. Furthermore, by moving part of the authentication from the MO to the SP, IIDM secures the USIM/UE information at the SP with a 256-bit key and an RSA-based PGP mechanism, which the existing I2DM and IDM3G did not provide. As future work on this security protocol, studies on implementing real IoT services, on the cost reduction caused by restricting the SP's degree of freedom, and a practical performance evaluation remain to be carried out.

Acknowledgments. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2016R1D1A1B03933828). This work was supported by the R&D Program of the Ministry of Trade, Industry & Energy (MOTIE) and the Korea Evaluation Institute of Industrial Technology (KEIT) (10065737, Development of Modular Factory Automation and Processing Devices toward Smart Factory in Parts Manufacturing Industry). This work was supported by the Technology Innovation Program (10054486, Development of Open Industry IoT (IIoT) Smart Factory Platform and Factory-Thing Hardware Technology) funded by the Ministry of Trade, Industry & Energy (MI, Korea). This work was supported by an Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. R-20160530-003936, Development of Quality Inspection System based on IIoT Platform for improving Quality in Small and Medium manufacturers).

References

1. Third generation partnership project: Technical specification group services and system aspects. General Packet Radio Service (GPRS) enhancements for E-UTRAN (release 10), 3GPP, TS 23.401 version 10.7.0, pp. 56–59, March 2012
2. Third generation partnership project: Technical specification group services and system aspects. Mobile Application Part (MAP) specification (release 10), 3GPP, TS 29.002 version 10.6.0, pp. 139–143, March 2012
3. Park, I.S., Lee, Y.D., Jeong, J.P.: Improved identity management protocols for secure mobile cloud computing. In: HICSS-46, pp. 4958–4965, January 2013
4. Dimitriadis, C.K., Polemi, D.: An identity management protocol for Internet applications over 3G mobile networks. Comput. Secur. 25(1), 45–51 (2006)
5. Hadole, P.A., Rohankar, J., Katara, A.: Development of secure mobile cloud computing using improved identity management protocol. Int. J. Recent Innov. Trends Comput. Commun. 10(2), 645–650 (2014)
6. He, W.: Single sign on. Networks 33, 51–58 (2000)
7. Park, J.H., Yang, L.T., Hussain, S., Xiao, Y.: Security for multimedia and ubiquitous applications. Telecommun. Syst. 44(3), 179–180 (2010)
8. Harihareswara, S.: User Experience is a social justice issue. Code4Lib J. 1(5), 1 (2015)


9. Kormann, D.P., Rubin, A.D.: Risks of the Passport single signon protocol. Comput. Netw. 33(1), 51–58 (2000)
10. Djellali, B., Chouarfia, A., Belarbi, K., Lorenz, P.: Design of authentication model preserving intimacy and trust in intelligent environments. Netw. Protoc. Algorithms 7(1), 64–83 (2015)
11. Tanimoto, S., Moriya, T., Sato, H., Kanai, A.: Improvement of multiple CP/CPS based on level of assurance for campus PKI deployment. In: 2015 16th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), pp. 1–5. IEEE, June 2015
12. Koien, G.M.: Mutual entity authentication for LTE. In: IEEE Transaction on Computer, pp. 689–694, July 2011
13. Huang, C.-M., Li, J.-W.: Authentication and key agreement protocol for UMTS with low bandwidth consumption. In: Proceedings of the 19th International Conference on Advanced Information Networking and Applications (AINA), vol. 1, pp. 392–397, March 2005
14. Al-Saraireh, J., Yousef, S.: A new authentication protocol for UMTS mobile networks. EURASIP J. Wirel. Commun. Netw. 2006(2), 19–19 (2006)
15. Lin, Y.-B., Chen, Y.-K.: Reducing authentication signaling traffic in third-generation mobile network. IEEE Trans. Wirel. Commun. 2(3), 493–501 (2003)
16. Al-Saraireh, J., Yousef, S.: Analytical model for authentication transmission overhead between entities in mobile networks. Comput. Commun. 30(8), 1713–1720 (2007)
17. Zhang, Y., Fujise, M.: An improvement for authentication protocol in third-generation wireless networks. IEEE Trans. Wirel. Commun. 5(9), 2348–2352 (2006)
18. Choudhary, A., Bhandari, R.: Analysis of UMTS (3G) Authentication and Key Agreement Protocol (AKA) for LTE (4G) network. Int. J. Recent Innov. Trends Comput. Commun. 3(4), 2146–2149 (2015)
19. 3GPP. http://www.3gpp.org
20. 3GPP: TS 33.102., Technical specification group services and system aspects, 3G security, security architecture V6.0.0 (2003)
21. Skianis, C.: Special issue of telecommunications systems on security, privacy and trust for beyond-3G networks. Telecommun. Syst. 35(3), 87–88 (2007)
22. Aarts, R., Kavsan, B., Wason, T.: Liberty ID-FF bindings and profiles specification. Liberty Alliance Project, Version 1, 1–61 (2003)
23. Microsoft Corporation: Microsoft .NET Passport review guide. Technical report (2003). www.microsoft.com
24. Kormann, D.P., Rubin, A.D.: Risks of the Passport single signon protocol. Comput. Netw. 33(1), 51–58 (2000)
25. Eastlake 3rd, D., Crocker, S., Schiller, J.: Randomness recommendations for security (no. RFC 1750) (1994)

Cognitive Multi-Radio Prototype for Industrial Environment

Michele Ligios, Maria Teresa Delgado, Rosaria Rossini, Davide Conzon(B), Francesco Sottile, and Claudio Pastrone

Istituto Superiore Mario Boella (ISMB), Pervasive Technologies (PerT) Research Area, Via P.C. Boggio 61, 10138 Torino, Italy
{ligios,delgado,rossini,conzon,sottile,pastrone}@ismb.it
http://www.ismb.it/en

Abstract. Today, a large number of cost-saving and energy-efficient applications are enabled by Wireless Sensor and Actuator Networks (WSANs). Usually, these solutions have serious connectivity problems in scenarios where other wireless technologies are co-located and share the frequency spectrum (e.g. industrial shop-floors). To cope with this issue, the concept of Multi-Radio (MR) has been introduced, which promotes the simultaneous use of multiple radio communication interfaces, leveraging their different characteristics, to improve the overall system performance and reliability. The proposed approach, based on a cognitive algorithm, considers two wireless technologies operating in the 2.4 GHz frequency band, namely Wi-Fi and 6LoWPAN, and provides a concrete implementation of the system in a real industrial test-bed scenario. The solution provides a reliable communication infrastructure for manufacturing processes, firstly by combining the properties of several physical layer standards and secondly by providing the ability to recover from temporary network failures by switching from one communication channel to another.

Keywords: Multi-Radio · WSAN · Connectivity · 6LoWPAN · Wi-Fi · Cognitive algorithm

1 Introduction

Leveraging IEEE 802.15.4 compliant radio interfaces, a large number of cost-saving and energy-efficient applications are enabled by WSANs. However, one of the main issues of WSANs is the serious connectivity problems arising in scenarios where other wireless technologies are co-located and share the spectrum. One of the main research directions studied to address this problem is the MR concept. This type of solution promotes communication through several network interfaces, leveraging their different characteristics (e.g. in terms of data rate, power consumption and coverage), to improve the overall system performance and reliability [5]. The solution proposed in this paper considers wireless technologies operating at the frequency of 2.4 GHz and provides a concrete implementation of a reliable communication infrastructure for the industrial scenario. This result is obtained thanks to the combination of the properties of several radio communication interfaces, which provide the ability to recover from temporary network failures (by switching from one communication channel to another). The system is formed by nodes equipped with 6LoWPAN and Wi-Fi interfaces, which deliver their application messages through the available network interfaces. For this purpose, the Cognitive Multi-Radio Nodes (CMRNs), described in Sect. 4, implement a cognitive algorithm to choose the best interface to use, based on context information and application requirements. As a result, the system is able to dynamically choose the radio interface for communication, indicate when it is time to switch to another interface, and understand whether the interfaces have to be used individually or simultaneously. The approach followed in the design of the system aims at minimizing the network down-time, packet losses and delays, while also taking scalability aspects into consideration. The paper is organized as follows: Sect. 2 presents a summary of related works in the field; Sect. 3 presents the approach followed to build the proposed solution, while describing its architecture. Sections 4 and 5 describe the scenario and the details of the MR system implementation, in terms of hardware and software. Finally, in Sect. 6 the authors draw their conclusions and describe some possible future upgrades of the system.

2 Multi-Radio State of the Art

MR systems bring many advantages, such as robustness to interference, increased bandwidth and ease of deployment. For this reason, in recent years a number of these systems have been proposed [8]. Originally, the allocation of roles to radios was static, assigning one technology to communication and one to control. For example, in [6] the authors combine the 802.11 protocol with the CC2420 chip (http://inst.eecs.berkeley.edu/∼cs150/Documents/CC2420.pdf). Even if such solutions are still valid, more innovative techniques exploit multiple radio interfaces to increase bandwidth and tolerate disconnections on mobile wireless devices. In [7] the Mobile Access Router uses different types of radio interfaces, such as GPRS, 3G and/or WLANs, to aggregate bandwidth and avoid stalled transfers. Such mechanisms are aggressive in using multiple interfaces and do not take energy into account when choosing an interface. Instead, one dynamic, energy-aware system is CoolSpots [8], which chooses Bluetooth transmission when available and Wi-Fi when Bluetooth is insufficient to meet the bandwidth requirements. However, this system uses only network-layer feedback to choose when to use a radio and does not leverage the benefits of a fine-grained, link-layer approach. In [9,10] another approach is proposed, which combines 802.11 radios with low-power radios (802.15.4) and chooses the appropriate interface based on data size; energy efficiency is achieved by batching packet transmissions. Such an approach lacks the ability to react to variations in context conditions. Recent works on wireless mesh networks have designed solutions with multiple radios per node to make channel assignment more effective [11]. Although these approaches present interesting features, they do not consider energy efficiency and they do not use an algorithm to dynamically choose the best interface to use. Finally, more details about previous works that have inspired the one proposed in this paper can be found in [14,15].

3 Proposed Cognitive Multi-Radio (CMR) Approach

This section introduces the proposed approach, describing its architecture and the cognitive algorithm developed.

3.1 CMR System Architecture

The proposed architecture has been designed to guarantee robustness in sensor data communication, exploiting the concept of MR to provide reliable communication in challenging, harsh environments such as the industrial shop-floor, where collecting data from sensors can be a critical task, e.g. when the collected data are used to guarantee the workers' safety in critical situations. The CMR system relies on two main components: the CMRN and the Cognitive Multi-Radio GW (CMR GW). Figure 1 presents, at a high level, the architecture of the system and its two components.

Fig. 1. Architecture of the CMR system.

A CMRN is formed by one or more sensors connected to a "smart device" that has the ability to process the data collected from the sensors and to select the most appropriate radio technology, among the available ones, to communicate with the CMR GW. The CMR GW is a special type of CMRN, able to collect and aggregate data from other CMRNs, regarding both channel occupancy and sensor data, either over 6LoWPAN or Wi-Fi. It also hosts the Spectrum Sensing Node (SSN), which senses the channel occupancies and forwards this information to the Spectrum Manager (SM). Using a mechanism described in [3], this latter component analyzes the information and sets the network operation channel to the channel with the lowest occupancy, also in case any interference is detected. Finally, the CMR GW can force the migration of all the connected nodes to a specific interface if it detects a problem on the current one. In other words, the nodes connected to the CMR GW receive a broadcast message that specifies the interface to use, e.g. "all nodes will use 6LoWPAN now", as sketched below.
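The following minimal sketch illustrates this forced-migration broadcast. The JSON payload, port-less delivery and handler names are assumptions made for the example; the actual message format of the CMR prototype is not specified here.

```python
# Sketch of the forced-migration broadcast: the CMR GW tells every connected CMRN
# which interface to use. Message format and names are illustrative assumptions.
import json

AVAILABLE = {"wifi", "6lowpan"}

def build_migration_message(interface: str) -> bytes:
    assert interface in AVAILABLE
    return json.dumps({"type": "migrate", "interface": interface}).encode()

class CMRNode:
    def __init__(self):
        self.active_interface = "wifi"

    def on_broadcast(self, payload: bytes):
        msg = json.loads(payload)
        if msg.get("type") == "migrate" and msg.get("interface") in AVAILABLE:
            self.active_interface = msg["interface"]   # e.g. "all nodes use 6LoWPAN now"

node = CMRNode()
node.on_broadcast(build_migration_message("6lowpan"))
print(node.active_interface)   # -> 6lowpan
```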

3.2 CMR Intelligent Functions

CMR’s architecture has been developed considering a cognitive module, described in the next subsection, that exploits intelligence both at node and at network level in order to guarantee a reliable communication link between CMRN and CMR GW. This module allows the system to efficiently use the different available wireless technologies, taking advantage of their complementary characteristics in terms of data rate, latency, robustness and energy consumption among others. In fact, CMR supports robust and reliable communication leveraging a cognitive approach, particularly, enabling self-configuration, management and healing capabilities. The aim of CMR is to select the best radio technology available at any given time. To do so, it provides the following three functions: – Network Resource Management: CMRN is responsible for the identification of the available radio interfaces and their corresponding status (available/notavailable), collecting information about the interfaces properties and their performance, which are used by the next function; – Interface Monitoring: It periodically checks the reliability of the available radio interfaces, by computing and analysing the Received Signal Strength Indication (RSSI), transmission delay and wireless channel occupancy among other indicators; – MR Management: Management of CMRNs includes various tasks, such as performing network discovery and creating association between the CMR GW and CMRNs after the nodes are powered on. Acting on these characteristics, it is possible to define a communication system with the necessary intelligence to respond to application and environmental requirements, while limiting the impact in terms of energy consumption and electromagnetic emissions. Cognitive Algorithm. The cognitive algorithm adopted in CMR is a Multiple Attribute Decision Making (MADM) mechanism [17], with a feedback loop to consider the Quality of Service (QoS), Furthermore, it is important to note that a fuzzy logic approach [18] has been used, because some of the attributes cannot be precisely defined. Figure 2 shows schematically the sequence of operations that compose the algorithm. The parameters of the communication channel, selected as attributes of the MADM are: bandwidth, cost, security, energy consumption, and communication traffic. For each node, there is a list of preferred performance attributes,


Fig. 2. Cognitive algorithm.

which are stored in the so-called "requirement vector", with their corresponding weights (the "weight vector"). In the same way, for each radio interface a "performance vector" is kept, containing the performance of its attributes, which is updated continuously (stored in the "network candidate vectors"). Among the methods available for fuzzy MADM [19], an "all fuzzy" approach has been chosen, since it is a good trade-off between performance and ease of use. This approach requires transforming crisp data into fuzzy data, even if they are crisp in nature. In this case, both the node preferences (the "requirement vector") and the attribute performance of each candidate network (the "network candidate vectors") need to be converted to fuzzy numbers. After this, the next step is to determine how to compare the two sets of vectors: the one of the preferences (with associated weights) and the ones of the candidates' performance. The similarity model used is the one proposed in [1]. Once the values are compared and the similarity is evaluated, it is possible to fill the decision matrix to select the network interface to use. Based on the previous evaluation, the MADM algorithm processes these values to determine a ranking of the existing network candidates and is able to select the best possible network interface available. The ranking is made by implementing a SAW scoring, through the expression $A^{*}_{SAW} = \arg\max_{i=1,\dots,M} \sum_{j=1}^{N} w_j v_{ij}$, where $w_j$ is the weight of the j-th attribute and $v_{ij}$ is the adjusted value of the j-th attribute of the i-th network. The adjusted values are calculated in two ways:

– for positive criteria (where more is better), like bandwidth and security, $v^{+}_{ij} = a_{ij} / a^{max}_{ij}$;
– for negative criteria (where less is better), like cost, energy consumption and packet losses, $v^{-}_{ij} = a^{min}_{ij} / a_{ij}$.
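The sketch below illustrates the SAW ranking step under the simplifying assumption that the fuzzification and similarity evaluation have already produced crisp attribute values; the attribute names, numbers and weights are invented for the example and are not taken from the measured prototype.

```python
# SAW ranking sketch: benefit criteria are normalized as a_ij / max, cost criteria as
# min / a_ij, then combined with the node's weight vector. Values are illustrative.
BENEFIT = {"bandwidth", "security"}          # "more is better"
COST = {"cost", "energy", "traffic"}         # "less is better"

def saw_rank(candidates: dict, weights: dict) -> str:
    """Return the name of the best network interface according to SAW scoring."""
    best, best_score = None, float("-inf")
    for name, perf in candidates.items():
        score = 0.0
        for attr, w in weights.items():
            column = [c[attr] for c in candidates.values()]
            v = perf[attr] / max(column) if attr in BENEFIT else min(column) / perf[attr]
            score += w * v
        if score > best_score:
            best, best_score = name, score
    return best

candidates = {
    "wifi":    {"bandwidth": 54.0, "security": 0.8, "cost": 0.6, "energy": 0.9, "traffic": 0.5},
    "6lowpan": {"bandwidth": 0.25, "security": 0.7, "cost": 0.2, "energy": 0.1, "traffic": 0.3},
}
weights = {"bandwidth": 0.2, "security": 0.2, "cost": 0.1, "energy": 0.4, "traffic": 0.1}
print(saw_rank(candidates, weights))   # interface with the highest weighted score
```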

4 CMR Implementation

Several types of node platforms have been identified, evaluated and selected to implement the approach proposed in Sect. 3 and to build the CMRN and the CMR GW. In particular, the developed nodes rely on two radio technologies: 6LoWPAN (IEEE 802.15.4) and Wi-Fi (2.4 GHz), based on the IEEE 802.11 family of standards. The next subsections contain the implementation details, in terms of hardware and software, of the two components.

4.1 CMRN

In the developed prototype (Fig. 4), the CMRN consists of a Raspberry Pi 2 with an STM32W Rev. C board mounted for the communication with a temperature sensor used to collect temperature data. As indicated in Sect. 3, the CMRN has two interfaces: Wi-Fi, which uses a Wi-Fi USB stick, and 6LoWPAN, which uses the Telos rev. B platform. The Telos is equipped with the Chipcon CC2420 radio chip (an IEEE 802.15.4 compliant transceiver). In terms of software, the modules that compose the CMRN architecture can be logically divided into three layers, as depicted in Fig. 3, namely the lower (orange modules), middle (green modules) and upper (blue modules) layers. The lower layer of the software (namely "data collection") contains the modules that directly communicate with the sensors (e.g. the "Serial Client", which communicates with serial devices). The "Serial Client" communicates directly with two other software components that manage the communication in uplink

Fig. 3. Software modules of the CMRN and CMR GW.

Component references: Raspberry Pi 2: https://www.raspberrypi.org/products/raspberry-pi-2-model-b/; STM32W board: http://www.promelec.ru/pdf/STM32W-RFCKIT.pdf; Telos rev. B: http://www.memsic.com/userfiles/files/Datasheets/WSN/telosb datasheet.pdf


Fig. 4. CMRN prototype.

and downlink, namely the "Uplink Manager" and the "Downlink Manager", respectively. The "Uplink Manager" is used to upload the collected data to the CMR GW, which controls the CMRNs, while the "Downlink Manager" is used to receive all the commands that need to be sent to the devices through the CMR GW. These two modules are directly connected to the active interface. The "Uplink Manager" waits for the data from the sensors and, as soon as they are available, forwards them to the active interface. On the other side, the sensors receive commands from the "Downlink Manager" when some action is required. The "Downlink Manager" also forwards messages to the "Interface Selector" when the CMR GW indicates the need to migrate from one interface to another. The middle layer contains two modules: the "Interface Monitor" and the "Interface Selector". The "Interface Monitor" continuously collects environmental data from the available interfaces, to evaluate in real time their status and the quality of the link. The data collected by the "Interface Monitor" are then sent to the "Interface Selector", which analyzes them using the cognitive algorithm described in Sect. 3, allowing the node to select between Wi-Fi and 6LoWPAN for data transmission. In the upper layer, there are modules used to control the communication technologies available to the CMRN (currently Wi-Fi and 6LoWPAN). The two interfaces have many parts in common and, for this reason, a "Generic Interface" has been developed that handles all these parts, exposing APIs which are used by the other software components of the lower layers to interact with the communication interfaces. In this way, the specific components for 6LoWPAN and Wi-Fi contain only the low-level code used to send and receive data over those technologies.

4.2 CMR GW

The CMR GW, shown in Fig. 5, is also implemented using a Raspberry Pi 2, with the same interfaces as the CMRN. For the 6LoWPAN communication, the CMR GW implements the Border Router (BR), in charge of forwarding the traffic from the CMRNs to the CMR GW. The BR is integrated in the CMR GW via the Serial Line IP (SLIP) de facto standard [2] and communicates with the CMRNs via the 6LoWPAN protocol (through a 6LoWPAN interface connected via a serial connection). On Wi-Fi, the gateway acts as an access point for the CMRNs connected through this interface. The software architecture of the CMR GW (see Fig. 3) contains a specific module, called "Lighthouse", which is used to keep the list of the CMRNs present in the network up to date. The "Lighthouse" broadcasts a beacon, at regular intervals, with all the relevant network information. The CMRNs listen for this beacon and, as soon as they receive it, answer with an authentication request. In this way, the "Lighthouse" module receives the request and is able to register the presence of the nodes in the network, together with the type of radio interface used. The CMR GW, like the CMRN, has two modules to manage the communication, the "Uplink Manager" and the "Downlink Manager". The latter is used to send commands to the CMRNs using its active interface; the "Uplink Manager", instead, is used to forward the received sensor data to the LinkSmart middleware. Finally, the specific 6LoWPAN and Wi-Fi drivers are used to read and write data using the chosen technology. As in the CMRN, the common communication parts are exposed through a "Generic Interface" to the other components. A sketch of the "Lighthouse" discovery exchange is given after Fig. 5.

Fig. 5. CMR GW prototype.

LinkSmart middleware: https://www.linksmart.eu/redmine
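A minimal sketch of the "Lighthouse" exchange follows, under the assumption of a JSON payload and invented field names; the concrete wire format of the prototype is not given here.

```python
# Sketch of the Lighthouse discovery: the gateway broadcasts a periodic beacon and
# registers the nodes that answer with an authentication request. Illustrative only.
import json, time

class Lighthouse:
    def __init__(self):
        self.registry = {}                        # node_id -> (interface, last_seen)

    def beacon(self) -> bytes:
        return json.dumps({"type": "beacon", "gw": "cmr-gw-01", "ts": time.time()}).encode()

    def on_auth_request(self, payload: bytes):
        req = json.loads(payload)
        if req.get("type") == "auth":
            self.registry[req["node_id"]] = (req["interface"], time.time())

def node_answer(node_id: str, interface: str, beacon: bytes) -> bytes:
    """A CMRN that hears the beacon replies with an authentication request."""
    assert json.loads(beacon)["type"] == "beacon"
    return json.dumps({"type": "auth", "node_id": node_id, "interface": interface}).encode()

gw = Lighthouse()
b = gw.beacon()
gw.on_auth_request(node_answer("cmrn-7", "6lowpan", b))
print(gw.registry)   # the gateway now knows the node and the radio it is using
```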

5 CMR Integration in the Factory Environment

The WSAN technology provides interesting features for monitoring tasks in industrial environments, thanks to its local processing and wireless communications, as well as its small form factor and low cost [12]. Usually, industrial environments are challenging for wireless solutions, because of the presence of many physical barriers, such as metallic objects, as well as various machine types and coexisting radio networks, which generate radio interference. This challenging environment is mainly a problem for resource-constrained devices, such as WSANs, which have low radio transmission power and data rates, and it may lead to data loss and network failures [13]. For these reasons, an industrial context (see Fig. 6) has been selected to test CMR, which has been integrated in a platform developed within the context of the SatisFactory European project [16]. Although the developed architecture enables the operation of a complex industrial platform, for the scope of this work the authors discuss exclusively the components necessary to integrate the CMR approach. In particular, the CMR system is connected with the "Device Manager" and is used to build a robust and reliable communication network that collects data from the WSANs and forwards them to the higher levels of the architecture. The "Device Manager" and the "Event Manager" are responsible for integrating and homogenizing the data acquired by the diverse set of sensors into the project framework. The collected data are processed by the "Semantic Context Manager" and transformed into semantically enriched events (e.g., localization, multiple media, gestures and content) that are used as a tool to monitor and control the manufacturing processes.

Fig. 6. CMR integration in the industrial scenario.

6 Conclusions and Future Works

This paper has presented a novel MR solution, describing the approach followed for its design, the implementation details and its use in the smart-factory scenario provided by the SatisFactory European project. The presented prototype provides several features, introduced in Sect. 4, which exploit the available physical network interfaces to exchange packets, leveraging a cognitive algorithm to choose the best network interface to use in real time. It leverages the node status information, on the one hand, to create and continuously update the interface priority table (used to choose the best interface) and, on the other hand, to switch to another installed interface when one network is down. Furthermore, integrating results from previous works [3,4], the CMR GW has a component able to monitor the IEEE 802.15.4 spectrum and to dynamically allocate the 6LoWPAN interface to the best available channel when interference or attacks are detected on the network operating channel. In the future, the system could be enhanced by integrating new radio interfaces. The CMR system, together with the other components of the SatisFactory platform, will be deployed in a real smart-factory scenario, in order to collect data and test the validity of the proposed approach in a concrete environment. Finally, it is important to highlight that, even if this is a good proof-of-concept that fuses together and shows the advantages of both the cognitive and the MR approach, it is not the final step needed to obtain a fully effective solution to the MR problem. Indeed, the next upgrade will be a refactoring that allows such an approach to be integrated at kernel level in the Operating System.

Acknowledgment. This paper is an output of the SatisFactory project – A collaborative and augmented-enabled ecosystem for increasing SATISfaction and working experience in smart FACTORY environments – funded from the Horizon 2020 Framework Programme of the European Union under grant agreement no 636302.

References

1. Chen, S.-M.: New methods for subjective mental workload assessment and fuzzy risk analysis. Cybern. Syst. 27(5), 449–472 (1996)
2. Romkey, J.A.: Nonstandard for transmission of IP datagrams over serial lines: slip. In: Arpanet Working Group Requests for Comment, DDN Network Information Center. SRI International, Menlo Park (1988)
3. Tomasi, R., et al.: Frequency agility in IPv6-based wireless personal area networks (6LoWPAN). In: International Conference on Wired/Wireless Internet Communications, WWIC 2010, June 2010
4. Kasinathan, P., et al.: DEMO: an IDS framework for internet of things empowered by 6LoWPAN. In: Proceedings of the 2013 ACM SIGSAC Conference on Computer and Communications Security, CCS 2013 (2013)
5. Makaya, C., et al.: An interworking architecture for heterogeneous IP wireless networks. In: Third International Conference on Wireless and Mobile Communications, ICWMC 2007. IEEE (2007)
6. Mishra, N., et al.: Wake-on-WLAN. In: Proceedings of International Conference on the World Wide Web (WWW), pp. 761–769 (2006)
7. Rodriguez, P., et al.: A commuter router infrastructure for the mobile internet. In: Proceedings of the Second International Conference on Mobile Systems, Applications, and Services, Boston, MA, June 2004
8. Pering, T., et al.: CoolSpots: reducing the power consumption of wireless mobile devices with multiple radio interfaces. In: Proceedings, ACM MobiSys, pp. 220–232, June 2006
9. Sengul, C., et al.: Improving energy conservation using bulk transmission over high-power radios in sensor networks. In: Proceedings of ICDCS, Beijing, China, June 2008
10. Lymberopoulos, D., et al.: Towards energy efficient design of multi-radio platforms for wireless sensor networks. In: Proceedings of IPSN, St. Louis, MO, April 2008
11. Draves, R., et al.: Routing in multi-radio, multi-hop wireless mesh networks. In: Proceedings of ACM MobiCom, Philadelphia, PA, pp. 114–128, September 2004
12. Gungor, V.C., et al.: Industrial wireless sensor networks: challenges, design principles, and technical approaches. IEEE Trans. Indus. Electron. 56(10), 4258–4265 (2009)
13. Caro, D.: Wireless Networks for Industrial Automation. ISA (2008)
14. Delgado, M.T., et al.: Underlying connectivity mechanisms for multi-radio wireless sensor and actuator networks. In: The 9th IEEE International Conference on Wireless and Mobile Computing, Networking and Communications, WiMob 2013, Lyon, France, October 2013
15. Khaleel, H., et al.: Multi-access interface selection based on data mining algorithm. In: 2013 IEEE-APS Topical Conference on Antennas and Propagation in Wireless Communications (APWC), pp. 1457–1460, 9–13 September 2013
16. SatisFactory project. http://www.satisfactory-project.eu/satisfactory/
17. Lahby, S., et al.: Survey and comparison of MADM methods for network selection access in heterogeneous networks. In: 2015 7th International Conference on New Technologies Mobility and Security (NTMS), pp. 1–6 (2015)
18. Cignoli, R., et al.: Basic fuzzy logic is the logic of continuous t-norms and their residua. Soft Comput. 4(2), 106–112 (2000)
19. Yao, Y.Y.: Combination of rough and fuzzy sets based on α-level sets. In: Lin, T.Y., Cercone, N. (eds.) Rough Sets and Data Mining: Analysis for Imprecise Data, pp. 301–321. Kluwer Academic Publishers, Boston (1997)

Cluster Based VDTN Routing Algorithm with Multi-attribute Decision Making

Songjie Wei, Qianrong Luo (✉), Hao Cheng, and Erik Joseph Seidel

School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei, Nanjing 210094, China
{swei,15205170353,hcheng,seidel}@njust.edu.cn

Abstract. Multi-copy routing may cause excessive consumption of network resources in a Vehicular Delay Tolerant Network (VDTN). Most existing routing algorithms only consider controlling the redundant messages in the global network to improve performance, but local road redundancy still exists. In this paper, we propose a cluster based VDTN routing algorithm with multi-attribute decision-making (CR-MA) for urban areas. CR-MA makes nodes aware of the local message distribution. We consider both the message distribution, as coverage, and the node and network attributes to establish a multi-attribute decision-making model which achieves a trade-off between message redundancy and delivery performance. CR-MA is benchmarked in simulation against Prophet, Epidemic and SAW. Experiment results prove its superiority in message routing effectiveness, performance, and overhead.

Keywords: VANET · DTN routing · Multi-attribute decision making

1 Introduction

Vehicular Ad-hoc Network (VANET) is a typical application scenario of the Delay Tolerant Network (DTN) [1] for inter-vehicle communication using wireless communication technology. It is difficult to ensure a stable communication path between source and destination due to the high mobility of nodes, the intermittent inter-node connectivity and the frequent changes of the network topology. We study the Vehicular DTN (VDTN), which extends the features of DTN to VANET. The key issue in a VDTN routing algorithm is how to choose a suitable relay node for each message based on the store-carry-and-forward (SCF) paradigm.

Existing VDTN routing protocols can be classified as either single-copy or multiple-copy routing protocols, based on the number of message copies disseminated through the network. Direct Delivery [2] and First Contact [3] are typical single-copy routing protocols. A single-copy routing protocol's network overhead is trivial, but it suffers from long delivery delay and low delivery ratio. A message should be replicated to enough nodes to increase the chance of being exposed to its destination node. Epidemic [4] is a multiple-copy routing protocol implementing flooding to minimize the delivery delay and maximize the delivery ratio when nodes have enough buffer space. Spray and Wait [5], Binary Spray and Wait [5] and Spray and Focus [6] are three typical controlled flooding routing protocols that set the maximum number of replications. Prophet [7] and MaxProp [8] are information-based routing protocols that improve routing performance by using the history of node encounters. The information currently used can be divided into three types: the history of node encounters, prior information about the network, and location information. However, none of those considers both the message distribution and the node's attributes to tune message replication. Social-based routing protocols are an evolution for DTN that explores the social behaviors and properties of nodes to group them into different clusters or communities. Contact frequency and duration [9], the movement area of the node [10] and the trajectory of the node [11] are the three bases for most clustering methods.

Currently, most routing algorithms only consider controlling the redundant messages in the global network to improve performance, but the local network congestion problem still exists. The heterogeneity of the message coverage on each road makes the volume of messages exchanged highly diversified upon node encounters at different geographic locations and times. Therefore, the local message coverage should be carefully considered as an impact factor in the routing algorithm in order to control local network congestion. Existing clustering methods mainly distribute nodes into two-dimensional clusters. We believe the structure of urban roads limits the nodes' direction and scope, and propose a one-dimensional linear clustering method based on the road map.

In this paper, we propose CR-MA, a cluster-based multi-attribute algorithm for message forwarding in VDTN. First, one-dimensional linear clusters are formed based on the road structures. Then nodes try to understand and predict the message coverage in each cluster. We consider both the message distribution and node attributes when deciding when and where to duplicate messages.

2 System Model

The application scenario of a VDTN routing algorithm involves the configuration, wireless communication capability, and movement of each individual vehicle node, as well as the organization of all the nodes into a communication network. This section introduces how these essential elements are modeled and represented in the proposed message distribution and routing algorithm. Vehicle nodes are grouped into clusters based on their locations and moving status. We present a road-based node clustering mechanism and explain how messages are transmitted intra- and inter-cluster.

2.1 Vehicular Node

A vehicular node moving on a road can initiate, buffer, transmit and deliver messages. A message is generated with a fixed size by the initiator when needed by an application running on the node. Messages whose destinations differ from the current node are buffered on the node and forwarded to the next intermediate hop when appropriate. There is a constraint on the message buffer size, which thus restricts the number of live messages (messages not yet at their destination) buffered on each node. Neighbor nodes within each other's wireless communication range may reliably exchange information about each other's moving status and message buffer digest, and forward messages when necessary. Each node makes routing and forwarding decisions independently by considering two factors: the node's intrinsic state, such as power and storage resources, moving history and status; and the external message and network state, including message coverage and neighbor connections. A vehicular node is also aware of its current road location, i.e. which road it is currently running on, either by road id or name. While an onboard GPS plus an electronic map is sufficient to provide such information, the road-awareness requirement can also be met with less precise methods such as cellular base station sensing or road nameplate recognition.

The node movement model used in this paper is the Shortest Path Map Based movement model (SPMB). Initially, all vehicular nodes are uniformly distributed on the urban roads. Each node randomly selects a point on the road map as its current destination. The roads from the current location along the shortest path to the destination are calculated by the Dijkstra algorithm to make a route. Once the destination is reached, the node randomly selects the next destination and repeats the route navigation.

2.2 Vehicular Network

All the vehicular nodes equipped with wireless communication and ad hoc networking capability form a message-passing network on the roads of an urban area. The network is infrastructure-less and wireless-connected. The urban road connections in a city are fixed and thus constrain the network topology and its variations and extensions. Since a vehicle's wireless sensing and transmission range is restricted by its transmitter device and power, and vehicles are free to move independently, inter-vehicle links are dynamic, being frequently set up and disconnected, which makes the network a delay tolerant network (DTN) based on message buffering and forwarding for communication. The fundamental routing decision in such a network is, for a node buffering messages, to decide for each message, when encountering a neighbor node, whether or not to duplicate the message to the other node.

2.3 Node Clusters and Message Coverage

As shown in Fig. 1, an urban road in the network can be interpreted as a long tube, so the nodes moving on the same road are naturally grouped as a cluster. A node follows its route, and its trajectory on each road is predictable over a short period of time. We propose a very straightforward node clustering method based on roads. Instead of using complex calculations for cluster formation and maintenance, or electing a cluster head to coordinate, each node is automatically aware of which cluster it belongs to simply by determining which road it is currently running on. Nodes are truly independent and equal within a cluster. Avoiding cluster heads also removes the hidden risk of routing bottlenecks and paralysis due to head-node malfunction.


Fig. 1. Road-based node clustering in VANET

For each message being transmitted in the network, there are some vehicular nodes carrying (buffering) it on each road. If we denote each such node's wireless communication range as a circle, as in Fig. 1, then every road is partially covered by such circles, which represent the spread of a specific message in the network, or what we call the message coverage on each road and in the network. Obviously, as soon as the destination node of a given message is under the message's coverage, the message can be successfully delivered. The best coverage is achieved when every node buffers a duplicate of the message, which brings the fastest message delivery, as in Epidemic routing [4], but also has to tolerate extreme message transmission overhead and consumption of the limited buffer space. The routing algorithm proposed in the next section is an attempt to reach an equilibrium among these conflicting goals.

3 Cluster-Based Routing with Multi-attribute Consideration

In this section, we present a Cluster-based Routing algorithm with Multiple Attributes considered (CR-MA). A message carrier node considers both the local message coverage on the road and the node's current state to decide whether to delegate the message to an encountered neighbor node. CR-MA optimizes message coverage and overhead when making the forwarding decision by predicting the message coverage in each cluster and leveraging four attributes of the node: node velocity, buffer status, adhesiveness to the cluster, and the cluster message coverage.

3.1 Message Coverage Prediction

As defined in Sect. 2.3, we use message coverage to measure the spread of a specific message on a road, i.e. in a cluster. Given a message M and a road R, the coverage of M on R is computed as $coverage^{M}_{R}$ in formula (1). Each message duplicate covers a segment of the road whose length is defined as the diameter of the wireless communication range of the node carrying the message. $N_M$ is the predicted number of nodes on road R buffering message M.


$coverage^{M}_{R} = \min\left(1, \frac{N_M \cdot 2r}{D_R}\right)$   (1)

r is the wireless communication radius of a node, and $D_R$ denotes the length of road R. On the road, every node tracks and predicts the message spread by exchanging message buffering information with encountered nodes. The specific procedures by which nodes compute a message M's coverage on a road are as follows.

Procedure 1. When node i enters a new road R, it initializes two information tables, one for tracking the prediction of the message coverage, abbreviated as MC, and the other for remembering the message information of encountered nodes, abbreviated as NM. Each entry in MC is formatted as $\langle M, coverage^{M}_{R} \rangle$, where $M \in M_i$ ($M_i$ is the collection of messages buffered by node i). The initial value of $coverage^{M}_{R}$ is $2r/D_R$ when only the current node is known to carry message M. An entry in NM is formatted as $\langle NID_j, MID_j, v_j, t_j \rangle$, where $NID_j$ indicates the ID of the encountered node, $MID_j$ the collection of message IDs carried by node j, $v_j$ the speed of node j (no more than the speed limit $v_{limit}$ on road R), and $t_j$ the timestamp of the latest encounter. The predicted $N_M$ on node i, used in formula (1), is computed as

$N^{i}_{M} = \sum_{j \in NM^{i}_{M}} aging^{\,t - t_j} \cdot \frac{v_j}{v_{limit}}$   (2)

$NM^{i}_{M}$ is the subset of the NM table on node i containing all encountered nodes that carry message M. aging takes values between 0 and 1 and indicates the decay with elapsed time: although the current node is aware that node j was carrying M when last encountered, this awareness becomes less reliable as time elapses and when the node's moving velocity is high. t is the current time.

Procedure 2. A node needs to update its NM in two cases. Case 1: when node i encounters node j, they exchange information about each other's ID, velocity and the IDs of the messages buffered, and then synchronize their NM tables. If a node finds a record of a new node in the encountered node's NM, it replicates the new record into its own NM. They also update the encounter time in each other's NM records. Case 2: node i updates its record about node j in NM when i relays a message to j.

Procedure 3. A node adds a new record to MC in two cases. Case 1: when node i initiates a new message, it adds a new record with the initial coverage value to MC. Case 2: when node i receives a new message, it calculates the new message's coverage and adds the record to MC.
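The bookkeeping of Sect. 3.1 can be sketched as follows. The data structures, parameter values and timestamps below are illustrative assumptions; only the coverage and carrier-prediction expressions follow formulas (1) and (2).

```python
# Per-road NM table and coverage prediction, following formulas (1) and (2).
import time

class CoverageTracker:
    def __init__(self, road_length, radio_range, v_limit, aging=0.98):
        self.d_r, self.r, self.v_limit, self.aging = road_length, radio_range, v_limit, aging
        self.nm = {}            # node_id -> (carried_message_ids, speed, last_seen)

    def record_encounter(self, node_id, message_ids, speed):
        """Procedure 2: refresh the NM entry when a neighbor is met or synced."""
        self.nm[node_id] = (set(message_ids), min(speed, self.v_limit), time.time())

    def predicted_carriers(self, msg_id, now=None):
        """Formula (2): aged, speed-weighted count of known carriers of msg_id."""
        now = now or time.time()
        return sum(self.aging ** (now - t) * (v / self.v_limit)
                   for ids, v, t in self.nm.values() if msg_id in ids)

    def coverage(self, msg_id, carried_locally=True):
        """Formula (1): fraction of the road covered by copies of msg_id."""
        n_m = self.predicted_carriers(msg_id) + (1 if carried_locally else 0)
        return min(1.0, n_m * 2 * self.r / self.d_r)

tracker = CoverageTracker(road_length=800.0, radio_range=20.0, v_limit=13.0)
tracker.record_encounter("car-42", {"msg-1"}, speed=9.5)
print(tracker.coverage("msg-1"))
```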


3.2 Multi-attribute Decision-Making

We use multiple attributes to aid each node's decision in selecting the appropriate relay nodes for messages. Table 1 lists the four aspects of node attributes considered. The forwarding decision for message M from node i to node j is computed as a probability, which is a combination of sub-probabilities over the four considered aspects of node and network attributes. The following discussion assumes that node i comes across node j.

Table 1. Subset probabilities for message forwarding

Pvelocity | The forwarding probability based on node speed
Pbuffer | The forwarding probability based on buffer availability
Pvitality | The forwarding probability based on node vitality
Pcoverage | The forwarding probability based on message coverage

The forwarding probability based on node speed. A node tends to spread messages faster to others when moving at a higher speed. We compute the probability of message forwarding based on the relative moving speeds of the two nodes as follows:

$P_{velocity} = \begin{cases} \dfrac{|v_i - v_j|}{v_{limit}}, & \text{if } i \text{ and } j \text{ move in the same direction} \\ \dfrac{v_i + v_j}{2 \cdot v_{limit}}, & \text{if } i \text{ and } j \text{ move in opposite directions} \end{cases}$   (3)

where $v_i$ indicates the speed of node i and $v_j$ indicates the speed of node j.

The forwarding probability based on node buffer availability. To balance the message buffer usage among nodes, the message forwarding decision prefers to store messages on nodes with more free buffer space. The probability is computed as

$P_{buffer} = e^{-\beta_i / \beta_j}$   (4)

where $\beta_i$ and $\beta_j$ are the ratios of free buffer space on node i and node j, respectively. $P_{buffer} = 0$ if the message buffer on node j is full ($\beta_j = 0$).

The forwarding probability based on the node's vitality. The vitality of a node on a road is the remaining lifetime of this node on the road. Without human input and GPS-based route planning, it is impossible to predict when a vehicular node will leave the current road. Instead, we use the duration of the node's presence on this road and the node's speed as clues to estimate its remaining vitality on the current road. The assumption is that the longer a node has been running on a road, and the faster its running speed, the more likely the node is to exit. The forwarding probability based on node vitality is calculated as follows:


$P_{vitality} = 1 - e^{-velocity} \cdot e^{-duration}$   (5)

where node i has been moving at speed velocity for a time period duration.

The forwarding probability based on message coverage on the road. We assume that node i is moving on road R with the message M. We interpret a road as a long tube, and the direction and range of a node are limited to the two ends of the road, so M can cover the whole road as long as enough nodes carry copies of M. More duplicates of the message provide better coverage on the road, which brings a higher chance of covering the message destination node. However, more redundant message copies also consume node buffer space and lead to worse routing cost-effectiveness. We use the message coverage calculated in formula (1) on the current road as guidance when deciding whether to delegate a duplicate of each message to the encountered node:

$P_{coverage} = e^{-coverage^{M}_{R}}$   (6)

Multi-attribute decision-making model. To combine all the attributes together into a probability $P^{M}_{i,j}$ of node i forwarding the message M to node j, we combine the above four probabilities as

$P^{M}_{i,j} = w_1 P_{velocity} + w_2 P_{buffer} + w_3 P_{vitality} + w_4 P_{coverage}$   (7)

where $\sum_{i=1}^{4} w_i = 1$ and $0 < w_i < 1$; $w_1$, $w_2$, $w_3$ and $w_4$ are the weights of node velocity, node buffer availability, node vitality and message coverage in the multi-attribute decision-making model. We choose different weights for different network scenarios by using the Analytic Hierarchy Process (AHP) [12]. AHP is a pragmatic structured technique for organizing and analyzing complex decisions, based on a small amount of quantitative data. We determine the weights with different judgment metrics in various network scenarios using AHP. For example, when the node buffer size is small and the cache capacity of the nodes is the key factor restricting the network performance, AHP assigns a higher weight to the node's remaining buffer space.
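The computation of formulas (3)–(7) can be sketched as below. The weights and the attribute values used in the example are placeholders; in the paper the weights are obtained via AHP.

```python
# Sketch of the multi-attribute forwarding probability, formulas (3)-(7).
import math

def p_velocity(v_i, v_j, v_limit, same_direction):
    return abs(v_i - v_j) / v_limit if same_direction else (v_i + v_j) / (2 * v_limit)

def p_buffer(free_ratio_i, free_ratio_j):
    return 0.0 if free_ratio_j == 0 else math.exp(-free_ratio_i / free_ratio_j)

def p_vitality(velocity, duration):
    return 1 - math.exp(-velocity) * math.exp(-duration)

def p_coverage(coverage_on_road):
    return math.exp(-coverage_on_road)

def forwarding_probability(attrs, weights=(0.25, 0.25, 0.25, 0.25)):
    w1, w2, w3, w4 = weights
    return (w1 * p_velocity(attrs["v_i"], attrs["v_j"], attrs["v_limit"], attrs["same_dir"])
            + w2 * p_buffer(attrs["beta_i"], attrs["beta_j"])
            + w3 * p_vitality(attrs["v_i"], attrs["duration"])
            + w4 * p_coverage(attrs["coverage"]))

attrs = {"v_i": 8.0, "v_j": 11.0, "v_limit": 13.0, "same_dir": True,
         "beta_i": 0.4, "beta_j": 0.7, "duration": 120.0, "coverage": 0.35}
p = forwarding_probability(attrs)
print(p)   # replicate the message only if p exceeds the threshold gamma
```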

3.3 CR-MA Routing Process
As described in Sect. 2.3, every node autonomously belongs to a cluster based on its current road location. When two nodes encounter each other, there are only two cases for their clusters: either the same one (thus on the same road) or different ones (in the area of a road intersection). Nodes within each other's sensing and communication ranges exchange information about the messages they buffer, and then decide on delegating messages to the other node based on multiple node and message states and attributes.
(1) Intra-Cluster Forwarding
If there are messages whose destination is the neighbor node, the owner node forwards these messages preferentially. Otherwise the owner node calculates the forwarding
probability P_{i,j}^M of each message with respect to the neighbor node based on the multi-attribute decision-making model. The calculated probability is benchmarked against a configurable threshold γ, and a higher-than-threshold P_{i,j}^M triggers the message replication from node i to j.
(2) Inter-Cluster Forwarding
Intuitively, encounters between nodes running on different roads happen only occasionally in intersection areas, and the connection duration is momentary when both are moving in crossing directions. We can use the same multi-attribute-based probability P_{i,j}^M for the decision on inter-cluster message transmission, except that the message coverage prediction on the destination road is simply set to 0, since the owner node has no clue about the message distribution on a different road. Correspondingly, the threshold γ for inter-cluster forwarding is set smaller than in the intra-cluster case. The choice of the threshold value γ should be configured based on the resources available on vehicular nodes, the scale of the road map, and the affordable network routing overhead. While a higher threshold reduces the message routing cost, it may also degrade the message delivery ratio and timeliness.
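A minimal sketch of the decision step just described, assuming a simplified message representation (dictionaries with 'id' and 'dest' fields) and illustrative threshold values; prob_fn stands for the multi-attribute score of formula (7), with its coverage term set to 0 by the caller in the inter-cluster case.

```python
def select_messages_to_replicate(owner_msgs, neighbor_id, prob_fn, same_road,
                                 gamma_intra=0.6, gamma_inter=0.4):
    """Return the ids of buffered messages that node i replicates to node j."""
    # Inter-cluster encounters use a smaller threshold, as described in the text.
    threshold = gamma_intra if same_road else gamma_inter
    chosen = []
    for msg in owner_msgs:
        if msg['dest'] == neighbor_id:
            chosen.append(msg['id'])          # destination reached: forward first
        elif prob_fn(msg) > threshold:
            chosen.append(msg['id'])          # Eq. (7) score exceeds the threshold
    return chosen
```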

4 Experiments and Evaluation

We apply the ONE simulator as the experimental platform and implement the proposed CR-MA routing decision-making algorithm as a new routing module in it.

4.1 Simulation Configuration
Table 2 shows a summary of the ONE simulation configuration. Based on the original Helsinki road map, we simplify the road map by eliminating and combining trivial roads. The derived road network is composed of 60 major roads. The total length of the roads is 128.4 km, with individual road lengths between 250 m and 4325 m, as in Fig. 2.

Table 2. Simulation parameters.
Parameter: Value
Simulation time/h: 6
Movement model: Shortest Path Map Based movement
Number of nodes: 100, 150, 200, 250, 300
Speed/(m/s): 2.7–13
Buffer size/MB: 10, 15, 20, 25, 30
Transmission rate/(kbit/s): 500
Transmission range/m: 20
Message sending interval/s: 3
Message's TTL/min: 300
Message size/kB: 500


Fig. 2. The original (left) and modified (right) road network in Helsinki, Finland

To benchmark the proposed CR-MA for performance, we compare it with other well-known, high-quality DTN routing algorithms including Epidemic, Prophet and SAW. The values of Prophet's parameters are set as p_init = 0.75, β = 0.25, γ = 0.98. The maximum number of message copies to spray for SAW is 6. The elimination mechanism of redundant messages [13] is applied to each routing algorithm in order to eliminate message copies after successful delivery.

4.2 Simulation Results and Discussion
The performance of the routing algorithms under various node populations. In order to test the scalability of the proposed algorithm, we conduct simulations with the node population varying from 100, 150, 200, 250 to 300. The node buffer is fixed at 40 messages.

Fig. 3. Routing performance with various numbers of nodes

Figure 3(a) shows that the delivery ratios of all algorithms rise with the growth of the number of nodes. CR-MA and SAW achieve consistently high delivery ratios in small or large populations, with the former always performing the best. This is because CR-MA uses a multi-attribute decision-making model to achieve nearly optimal message forwarding across nodes. Figure 3(b) shows that while the message overhead of SAW is the lowest because of its limit on the number of message copies, the message overhead of CR-MA is lower than that of Epidemic and Prophet. CR-MA avoids unnecessary message flooding by considering the message coverage on each road (cluster). Figure 3(c) shows that the delivery delay of all algorithms decreases when the number of nodes grows. This is because messages are forwarded to more nodes, and the multiple paths of a message can reduce the message arrival delay. Compared with the others, the delivery delay of CR-MA is the lowest. Messages reach their destinations faster as a result of CR-MA tending to forward messages to faster-moving nodes.
The performance of the routing algorithms under various buffer sizes. Buffer availability is another important attribute in CR-MA affecting forwarding decision making. We also experiment with various message buffer sizes, with the node number fixed at 150.

Fig. 4. Routing performance with various buffer sizes

Figure 4(a) shows that the delivery ratio of CR-MA is superior to Epidemic, Prophet and SAW. The delivery ratios of all algorithms rise as the buffer size of nodes increases, because a larger buffer space allows more and longer carrying of messages, which increases the chance of messages being transmitted to their destinations. Figure 4(b) shows that the message overhead of CR-MA is consistently lower than that of Epidemic and Prophet, since buffer availability is another key attribute considered when forwarding messages. Figure 4(c) shows that the delivery delay of all algorithms increases when the buffer size of nodes increases, because messages are cached longer. Compared with the competitors, CR-MA achieves the shortest delivery delay.

5 Conclusion

This paper proposes a clustering-based VDTN routing algorithm with multi-attribute decision-making (CR-MA). CR-MA exploits the layout and organization of roads as a restriction on node movement and message spread. Nodes on the same road are organized in a cluster to collaborate on message spreading on this road. We define the concept of message coverage on a road to predict and increase the probability of a message reaching its destination. The coverage value, together with other node and network attributes including node velocity, vitality, and buffer availability, helps make heuristic decisions on when and where to replicate messages. The proposed routing algorithm is verified in simulations and benchmarked against other major algorithms. CR-MA demonstrates improvement and superiority in message delivery ratio, delay and cost. As future work, we will explore an automatic configuration of parameter choices so that the proposed CR-MA can be self-adaptive in different application scenarios. We will also try to adapt the current road-based clustering and message coverage mechanism to realistic applications such as collaborative road traffic awareness and forecast.
Acknowledgments. This material is based upon work supported by the China NSF grant No. 61472189, the CERNET Next-Generation Innovation Project under contract No. NGII20160601, and the CASC fund No. F2016020013. Opinions and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.

References 1. Abdelkader, T., Naik, K., Nayak, A., Goel, N.: A performance comparison of delay-tolerant network routing protocols. IEEE Netw. 30(2), 46–53 (2016) 2. Spyropoulos, T., Psounis, K., Raghavendra, C.S.: Single-copy routing in intermittently connected mobile networks. In: Proceedings of IEEE SECON, pp. 235–244 (2004) 3. Spaho, E., Barolli, L., Kolici, V., Lala, A.: Performance evaluation of different routing protocols in a vehicular delay tolerant network. In: International Conference on Broadband and Wireless Computing, Communication and Applications, pp. 157–162 (2015) 4. Jin, Z., Wang, J., Zhang, S., Shu, Y.: Epidemic-based controlled flooding and adaptive multicast for delay tolerant networks. In: Proceedings of the 2010 Symposia and Workshops on Ubiquitous, Autonomic and Trusted Computing, pp. 191–194 (2010) 5. Kumar, S., Ahmed, S.H., Qasim, U., Khan, Z.A., Amjad, N., Azeem, M.Q., et al.: Analyzing link and path availability of routing protocols in vehcular ad-hoc networks. J. Basic Appl. Sci. Res. 4(2), 189–206 (2014) 6. Benamar, N., Singh, K.D., Benamar, M., Ouadghiri, D.E., Bonnin, J.M.: Routing protocols in vehicular delay tolerant networks: a comprehensive survey. Comput. Commun. 48(8), 141–158 (2014) 7. Ouadrhiri, A., Rahmouni, I., Kamili, M., Berrada, I.: Controlling messages for probabilistic routing protocols in delay-tolerant networks. In: Proceedings of International Symposium on Computers and Communications, pp. 1–6 (2014) 8. Burgess, J., Gallagher, B., Jensen, D., Levine, B.N.: Maxprop: routing for vehicle-based disruption-tolerant networks. In: Proceedings of IEEE INFOCOM, vol. 6, pp. 1–11 (2006) 9. Li, Z., Li, Q., Zhang, H., Liu, F.: Closely social circuit based routing in social delay tolerant networks. J. Comput. Res. Dev. 49(6), 1185–1195 (2012) 10. Han, J., Shi, J., Ren, Y.: DTN routing algorithm based on region segmentation. Comput. Sci. 42(10), 113–116 (2015) 11. Wang, E., Yang, Y.J., Li, L.: A clustering routing method based on semi-markov process and path-finding strategy in DTN. Chin. J. Comput. 38(3), 483–499 (2015) 12. Fan, G., Zhong, D., Yan, F., Yue, P.: A hybrid fuzzy evaluation method for curtain grouting efficiency assessment based on an AHP method extended by D numbers. Expert Syst. Appl. 44, 289–303 (2016) 13. Small, T., Haas, Z.J.: Resource and performance tradeoffs in delay-tolerant wireless networks. In: ACM Workshop on Delay Tolerant Networking, pp. 260–267 (2005)

Reputation Analysis of Sensors' Trust Within Tabu Search
Sami J. Habib (✉) and Paulvanna N. Marimuthu
Computer Engineering Department, Kuwait University, P.O. Box 5969, 13060 Safat, Kuwait
[email protected]

Abstract. The reputation of sensors is highly essential in authenticating the sensing data, especially when sensors are deployed in a hostile environment. We refer to the term data-trust as the degree of confidence, which can be represented as a quantitative score, based on the reputation of the sensor, where the reputation is comprehended with spatial and temporal redundancy. In this paper, we analyzed the vulnerability of the sensors subject to radical environmental conditions, and we have derived a first-order differential equation utilizing a linear combination of trust factors to quantify the trust value. The trust value is expressed as a weighted combination of two trust factors: coherent data (spatial redundancy) and periodic behavior (temporal redundancy) of the sensors. The selection of weights is automated based on a cost value suited for the operating environment, and it is treated as a combinatorial optimization problem, with an objective function to maximize the confidence of the sensor. We employed Tabu Search to find a better combination of weights to be associated with the trust factors, in order to find the positive subspace reflecting the domain of trusted sensor operations. We carried out many experiments with varying proportions of the selected trust factors, and the experimental outcomes were analyzed to draw the boundaries of the trusted domain (trust space). Our experimental results with varying malfunctioning sensor readings showed that Tabu Search reduced the search space by 22% in comparison to the local search utilizing Simulated Annealing.
Keywords: Analytical modeling · Data-trust · Reputation · Tabu Search · Simulated Annealing

1 Introduction

The present focus on smart home and smart city developments lead to the deployment of sensors in various monitoring applications. The growth of information and commu‐ nication technology made the sensors a cost-effective platform; however, the quality and durability of the sensor are found to be decreasing within a short span of time. Hence, in addition to the security issues in data transmission, the trustworthiness of a sensor has also to be accounted to ensure the reliability of the sensing node on its intended services. Reliability is computed based on the sensor’s reputation, which is the view of other sensing nodes over the target node. Thus, trust is employed as a complementary mech‐ anism, and it is a subjective phenomenon, and it is derived from the reputation of the entity [1].


In general, trust management schemes are classified as centralized and distributed; in centralized scheme, a pre-assigned individual sensor evaluates the reputation of other sensors, whereas, each sensor computes the reliability of its neighbors utilizing distrib‐ uted trust management [2]. A centralized trust scheme saves a sensor’s resources by reducing the processing time and it is considered to be efficient. The trust computation utilizes two types of trust factors: direct and indirect. A direct trust factor is based on personal characteristics, such as available energy, servicing frequency, lifespan availability and the number of successful transmissions, whereas an indirect trust is based on the recommendations and notifications from others in the group [3]. An efficient trust value computation is achieved by selecting a combination of direct and indirect trust parameters suitable to the operating environment. In this paper, we have considered a clustered, single-hop wireless sensor network (WSN) to support efficient resource and security management, where grouping the sensors into few clusters based on their geographical locations, reduce the energy drainage of sensors located within the sink node’s proximity. Further, managing the trust seems to be easier within the subset of sensor nodes, as the amount of computations are less. In this clustered WSN, a sensor node with high energy is selected as a gateway for each cluster, and we have utilized a centralized-trust-management scheme implemented on the gateway to assess the trust of any sensor node. We have developed a formal trust model to estimate the trust value of the sensor node, which is a first-order differential equation describing a linear relationship between the two selected trust factors: (i) the percentage of deviation of malfunctioning sensor data with its neighbors and (ii) the percentage of deviation of malfunctioning sensor data with its previous sensing records over the defined period. We are aware that most of the trust modeling researches are based on reputation computed from its past behavior, which may not be able to catch a suddenly spoiled or misbehaved node. Hence, we have proposed a trust management algorithm, which estimates the trust values from two trust factors: the neighboring sensors data at the same instance of misbehaving as spatial redundancy and its coherent data over a defined timespan as a temporal redundancy. The trust management algorithm tries to find out the range of weights assigned to the proposed trust factors in generating the subspace of the polynomial with high trust values. Thus, we have formulated the generation of subspace of the first-order polynomial as an optimization problem with an objective function is to maximize the trust value. We have enhanced our previous work on the trust management [4] to include Tabu Search as the search algorithm, which improved the search process (22%) by avoiding the revisit of infeasible solutions.

2 Background

The trust models for sensor nodes are classified as centralized or distributed. In central‐ ized trust model, a gateway or sink node computes the trust values of the associated sensors, whereas the sensors evaluate the trust value of their neighbors by themselves in distributed models. The distributed trust model was first introduced by [5], where each sensor estimated the past behavior of other nodes while computing the trust values. The authors used probability statistic method to assess the trust without considering the


recommendations. In another work, Probst and Kasera [6] derived a confidence interval to define a range of trust values, which was computed by using the mean and variance values of the collected information about other nodes by taking into account the context and their experience records. In a recent work by Jiang et al. [7], an efficient distributed trust model was proposed, where they utilized communication, energy and data-trust as direct trust and reliability and recommendation as indirect trust. Krasniewski et al. [8] proposed a centralized trust management for cluster-head election, but, the increased communication payload slowed down the system. A central‐ ized trust system in a data forwarding network was proposed by Zhan et al. [9], where the intermediate nodes recorded the information about the packets being forwarded and the next-hop trust was estimated from their packet delivery ratio provided by the sink node. Few researchers utilized hybrid trust model, where they employed both centralized and distributed trust mechanisms. In paper [10], a distributed trust management was used among the clusters and a centralized trust management was used by the clusterheads. Shaikh et al. [11] proposed a group based trust management scheme to secure WSN, where the whole group of sensors were assigned with a single trust value with reduced energy consumption. Bio-inspired approaches are sparingly utilized in trust management. Marmol and Perez [12] utilized trust as a factor in path selection while routing the data packets, where they employed ant-colony algorithm in choosing the trusted path. In this paper, we have considered a central-trust mechanism and we have included a combination of temporal and spatial redundant trust factors in generating the subspace comprised of high trusted values. We have employed Tabu Search as an iterative search algorithm in selecting the better proportion of trust factors during the trust space gener‐ ation. To our knowledge, we are the first one to utilize Tabu Search in generating the space of trusted computations within WSN.

3 Analytical Modeling

We consider a WSN as a tuple comprising sensors (S), gateways (G) and central servers (CS), as illustrated in Fig. 1. The set of N sensors, S = {s1, s2, s3, …, sN}, and M gateways, G = {g1, g2, g3, …, gM}, are deployed randomly within a two-dimensional area, and we assume that the number of sensors is greater than the number of gateways (N >> M). Groups of sensors form clusters with the nearby gateway, where each sensor transmits the data in a single-hop fashion. The lifespan of each sensor depends on the energy consumed by its own operations, such as sensing the environmental phenomenon and transmitting the data to the gateway. The gateway consumes energy in aggregating and forwarding the sensors' data to the central server. The WSN is divided uniformly into a set of G clusters, C = {c1, c2, c3, …, cG}, to increase the lifespan of the WSN. The gateways act as sink nodes to consolidate the 'sensing' data from the associated sensors.


Fig. 1. Sensor network model.

We defined two trust factors, which are derived from spatial and temporal parameters. The deviation of the malfunctioning sensor data from its neighbors is selected as the spatial redundancy parameter, and the deviation of the malfunctioning sensor data from its previous sensing records over a defined period is selected as the temporal parameter. The first trust factor β1 is defined as in Eq. (1), which represents the deviation of the current sensor data (d_{s_i}^t) from the average of l data samples transmitted in prior time instances before time t. Equation (1) is valid as long as there is a sufficient number of prior samples to compare, and the maximum length of historical data (l) would be defined by the user based on the sensitivity of data-trust. Since we seek a ratio in our analytical model, we assume the actual value of D_N* is known mathematically.

β1^t = ( d_{s_i}^t − (1/l)·Σ_{p=1}^{l} d_{s_i}^{t−p} ) / D_N*,  where t > p, p = 1, …, l    (1)

The possible values taken by D_N* are shown in Eq. (2), where the numerator in Eq. (1) is the deciding factor. The difference between the terms in the numerator of Eq. (1) takes on three values: (i) negative, when the historical data is greater than the current sensed data, (ii) zero, when both the historical and current sensed data are equal, and (iii) positive, when the current sensed data is higher than its historical data.

D_N* = { d_{s_i}^t                                     (Positive)
       { d_{s_i}^t or (1/l)·Σ_{p=1}^{l} d_{s_i}^{t−p}  (Zero)
       { (1/l)·Σ_{p=1}^{l} d_{s_i}^{t−p}               (Negative)    (2)


The second trust factor β2 is defined as in Eq. (3), and it represents the deviation of the current sensor data (d_{s_i}^t) from the average of the aggregated data at a gateway g_i that are sent from its neighboring (m − 1) sensors at the same instance of time t. We have added constraints to ensure the presence of a sufficient number of sensors (m) within a cluster at any time instance t, where the term m is bounded by a lower value x, so as to compare the untrusted sensor s_i with its x neighbors, as illustrated in Eq. (3). The value of x is determined during the experiment. The term m may have a maximum of n/2 sensors, as the sensor network should have a minimum of two clusters (with evenly distributed sensors) to belong to the category of clustered networks.

β2^t = ( d_{s_i}^t − (1/(m−1))·Σ_{j=1, j≠i}^{m} d_{s_j}^t ) / D_N*,  where x ≤ m ≤ n/2    (3)

Here, D_N* takes on different values, as explained in Eq. (4), with the three possible numerator values: negative, zero, and positive.

D_N* = { d_{s_i}^t                                            (Positive)
       { d_{s_i}^t or (1/(m−1))·Σ_{j=1, j≠i}^{m} d_{s_j}^t    (Zero)
       { (1/(m−1))·Σ_{j=1, j≠i}^{m} d_{s_j}^t                 (Negative)    (4)

We defined the trust value α_{s_i}^t of a sensor s_i within a cluster c_j as a linear combination of the two defined trust factors, as in Eq. (5). The weight factors W1 and W2 are added to vary the proportion of the two trust factors towards the trust value computation. In our attempt to model the trust, we have represented the trust values as a continuous variable over the range (0, 100). Thus, constraint (6) is added to ensure that the sum of the weights is always less than or equal to one. The trust factors β1 and β2 are dimensionless quantities. We added a multiplication factor of 100 in Eq. (5) to represent the trust values as percentages within the range of 0 to 100.

α_{s_i}^t = W1·(1 − β1^t)·100 + W2·(1 − β2^t)·100,  where 0 ≤ (W1, W2) ≤ 1    (5)

W1 + W2 ≤ 1    (6)
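A small Python sketch of Eqs. (1)-(6), assuming that the normalization term D_N* is taken as the larger of the current reading and the corresponding average (the positive/negative cases of Eqs. (2) and (4)); the sensor readings in the example are illustrative.

```python
def beta1(current, history):
    """Eq. (1): deviation of the current reading from the mean of l past samples."""
    hist_mean = sum(history) / len(history)
    d_star = max(current, hist_mean)             # Eqs. (2): normalize by the larger term
    return (current - hist_mean) / d_star

def beta2(current, neighbors):
    """Eq. (3): deviation of the current reading from the neighbors' mean at time t."""
    neigh_mean = sum(neighbors) / len(neighbors)
    d_star = max(current, neigh_mean)            # Eqs. (4)
    return (current - neigh_mean) / d_star

def trust_value(current, history, neighbors, w1=0.5, w2=0.5):
    """Eq. (5): trust score in [0, 100] as a weighted sum of the two trust factors."""
    assert w1 + w2 <= 1.0                        # constraint (6)
    return (w1 * (1 - beta1(current, history)) * 100
            + w2 * (1 - beta2(current, neighbors)) * 100)

# Example: a 62 degC reading against a ~35 degC history and ~36 degC neighborhood
print(round(trust_value(62.0, [34.0, 35.0, 36.0], [36.0, 35.0, 37.0]), 1))
```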

4 Trust Space Generation Within Tabu Search

The proposed trust management algorithm is illustrated in Fig. 2, which is designed in order to provide a provisional level of trustworthiness by examining all possible ranges. The algorithm identified the odd sensor(s) reporting erroneous data by comparing its data with the remaining sensors’ data within the cluster at the gateway.

Fig. 2. Proposed trust management algorithm.

The proposed algorithm starts by assessing each sensor, and if the sensor's collected data is different from the rest of the sensors reporting a similar event, then it is flagged as malfunctioning. On finding a malfunction, the algorithm generates the subspace comprising trust values by varying the weights of the trust factors within Tabu Search (TS). Further analysis on the generated data, based on the threshold of acceptance, will lead to the acceptance or rejection of the data. Tabu Search [13] is a single-solution metaheuristic, which enhances the performance of local search algorithms by avoiding the revisit of infeasible solutions. In our previous work on trust management, we employed Simulated Annealing (SA) as a search tool in selecting the better proportion of evidences (trust factors) while computing the trust values. Hereby, we have enhanced the search procedures by employing Tabu Search, where we added a short-term memory to SA to record the infeasible solutions, which deviate from the currently accepted solution. Thus, the redundant computations during the transformation of the evidence space into the trust space are greatly reduced.
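The following sketch illustrates the idea of attaching a short-term (tabu) memory to a local search over the weight pair (W1, W2); it is a generic illustration under assumed move and memory sizes, not the authors' exact implementation, and the toy trust function at the end is purely for demonstration.

```python
def tabu_weight_search(trust_fn, steps=200, tabu_size=20, step=0.05):
    """Search weights (w1, w2) maximizing trust_fn(w1, w2) subject to w1 + w2 <= 1."""
    current = (0.5, 0.5)
    best, best_val = current, trust_fn(*current)
    tabu = []                                     # short-term memory of visited weight pairs
    for _ in range(steps):
        candidates = []
        for dw1 in (-step, 0.0, step):
            for dw2 in (-step, 0.0, step):
                w1 = round(current[0] + dw1, 2)
                w2 = round(current[1] + dw2, 2)
                feasible = 0.0 <= w1 <= 1.0 and 0.0 <= w2 <= 1.0 and w1 + w2 <= 1.0
                if feasible and (w1, w2) not in tabu:
                    candidates.append((w1, w2))
        if not candidates:
            break                                 # neighborhood exhausted by the tabu list
        current = max(candidates, key=lambda w: trust_fn(*w))
        tabu.append(current)
        tabu = tabu[-tabu_size:]                  # keep only the most recent moves
        val = trust_fn(*current)
        if val > best_val:
            best, best_val = current, val
    return best, best_val

# Toy trust surface peaking at (0.3, 0.6), used only to exercise the search
toy_trust = lambda w1, w2: 100 - 80 * ((w1 - 0.3) ** 2 + (w2 - 0.6) ** 2)
print(tabu_weight_search(toy_trust))
```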

5 Results and Discussion

We have considered a WSN of 100 sensors, which are partitioned into 10 clusters. We defined a minimum of three sensors within a cluster at any time instance t, so as to compare the untrusted sensor with at least two neighbors. We artificially imposed a misbehaving sensor within the WSN by randomly altering the sensing values of a sensor. We coded the trust management algorithm on the Java platform, which generated a set of sensing values and a sensing value deviating from the other sensor nodes. We modelled the trust value as a continuous variable in the solution space bounded between (0, 100), as presented in Eq. (5). The weights w1 and w2 are selected randomly, satisfying the constraint in Eq. (6). We carried out a number of experiments to analyze the behavior of our proposed algorithm in modeling the data-trust of a sensor. The testing scenario comprises a set of sensors deployed in an open environment to monitor temperature, where the secured operating temperature range of the sensor is 30 °C to 50 °C. The temperature may be elevated near 60 °C due to the hostile environment, thus leading to erroneous behavior of sensors. The trust management algorithm finds the trust space of the malfunctioning sensor and employs Tabu Search as the search tool. Hereby, we selected Simulated Annealing as the local search algorithm, and a short-term memory is added within Simulated Annealing in order to store the list of infeasible solutions. The parameters associated with Simulated Annealing are listed in Table 1.

Table 1. Simulated Annealing parameters.
Parameter: Value
Initial temperature t: 1000 °C
Cooling rate constant α: 0.4
β, the constant to increase M: 0.6
M, the number of modifications to the solution: 10
MaxTime: 10000

The range of sensing temperatures reported as inappropriate, the boundary condi‐ tions for generation of sensing temperatures reported by the neighboring sensors and the range of past history of sensing data from the malfunctioning sensor, in generating the experiment scenario, are listed in Table 2. In Table 2, the term Tc (si ) represents the current sensing data at time ti, which is showing deviation, the term Tnei represents the neighboring sensor readings at time ti, and the term Ti is used to represent the observed sensor readings at prior time instances. SA tries to maximize the trust value by varying the possible combinations of weighted trust factors.

Table 2. Experimental data.
Sensing data: Range values
Malfunctioning sensor si: Tc(si) > 60 °C and Tc(si) < 40 °C
'm' neighboring sensors: 40 °C < Σ Tnei
Data samples from the past l instances: 40 °C >

K) single-antenna UE in a coherence block. Assume that the duration of the coherence block is smaller than the channel coherence time. UEs send pilot sequences to the BS for uplink channel estimation at the BS side; the BS can then obtain the downlink CSI by using channel reciprocity. UEs in the same cell use mutually orthogonal pilots, and different cells use the same set of mutually orthogonal pilots [11]. The locations of the kth UE in the jth cell and of the ith BS are denoted by (x_jk^u, y_jk^u) and (x_i^b, y_i^b), respectively. Assume that the UEs which use the same pilot sequence in different cells have the same UE serial number. The pilot sequence used by the kth UE is denoted by

s_k = [s_{k1}, s_{k2}, ⋯, s_{kτ}]^T    (1)

h_jki denotes the channel vector from the kth UE in the jth cell to the ith BS, and τ is the pilot length. The BS's antenna is a Uniform Linear Array (ULA) with supercritical antenna spacing D, i.e. less than or equal to half a wavelength. Hence we have the following multipath model

h_jki = (1/√P)·Σ_{p=1}^{P} a(θ_jki^{(p)})·α_jki^{(p)}    (2)

𝛼jki ∼  (0, 𝛽jki ) is the channel coefficient, where 𝛽jki is the large scale fading coefficient (p) which consist of path-loss and shadow fading. 𝜃jki ∈ [0, 𝜋] is the i.i.d. random AoA of pth path. Note that we can limit angles to [0, 𝜋] because any 𝜃 ∈ [−𝜋, 0] can be replaced (p) ) is the steering vector, by −𝜃 giving the same steering vector. And 𝐚(𝜃jki

1 ⎡ ⎤ D ⎢ ⎥ −j2𝜋 (p) ⎢ ⎥ 𝜆 cos(𝜃jki e ) (p) ▵ ⎢ ⎥ 𝐚(𝜃jki ) = ⋮ ⎢ ⎥ ⎢ ⎥ (M − 1)D ⎢ −j2𝜋 (p) ⎥ 𝜆 ) cos(𝜃 e ⎣ jki ⎦

(3)

𝜆 is the wavelength of signal. The received M × 𝜏 pilot signal which transmitted by the kth UE observed at ith BS is written as 𝐘k = 𝐡iki 𝐬Tk +

L ∑

𝐡jki 𝐬Tk + 𝐍

(4)

j=1,j≠i

where 𝐍 ∈ ℂM×𝜏 is the Additive White Gaussian Noise (AWGN), whose elements are i.i.d. and drawn from  (0, 𝜎 2 ).

1002

3

C. Zhang

Chanel Estimation

According to [9], the MMSE estimate of the desired channel 𝐡iki by ith BS is ̂𝐡iki = 𝐑iki (𝜎 2 IM + 𝜏

L ∑

𝐑jki )−1 𝐒̄ H vec[𝐘k ]

(5)

j=1

where 𝐒̄ = 𝐬k ⊗ 𝐈M, 𝐑jki ∈ ℂM×M is the covariance matrices of 𝐡jki, given by 𝐑jki =

P 𝛽jki ∑

P

𝔼{𝐚(𝜃jki )𝐚H (𝜃jki )} = 𝛽jki 𝔼{𝐚(𝜃jki )𝐚H (𝜃jki )}

(6)

p=1

We can get channel estimate of the desired channel in the case of no interference from other cells by setting the interference terms to zero in (5), thus to obtain the estimate ̂𝐡int−free = 𝐑iki (𝜎 2 IM + 𝜏𝐑iki )−1 𝐒̄ H vec[(𝐡iki 𝐬T + 𝐍)] iki

(7)

As we can see from (5) to (7), the channel estimation critically relies on the knowl‐ L ∑ edge of the covariance matrices. In the expression of ̂𝐡iki, 𝐑jki is the interference j=1,j≠i

terms. (p) min , 𝜃 max ], ∀p and another ∈ [𝜃iki Assume that the AoA of kth UE in ith cell to ith cell 𝜃iki iki AoA of k UE in different cell to i cell 𝜃 (p) ∉ [𝜃 min , 𝜃 max ], ∀p. According to the th

th

jki

iki

iki

conclusion of [9] we can get that as the number of antennas M → ∞, (p) 𝐚(𝜃jki ) null(𝐑iki ) ⊃ span{ √ } M

(8)

It means that when the number of antennas go to infinite and the AoA of desired channel and interfering channels are non-overlapping, the subspace of the steering vector of interfering channels fall into the null space of desired channel covariance matrices. So as the AoA of desired channel and interfering channels are non-overlapping, we can come to the conclusion int−free lim ̂𝐡iki = ̂𝐡iki

M→∞

(9)

Proof: From the channel model we can get that the channel covariance matrices 𝐑iki can be decomposed into 𝐑iki = 𝐔iki 𝚺iki 𝐔Hiki

(10)

For simplicity, the expression will be written as 𝐑i = 𝐔i 𝚺i 𝐔Hi . Where 𝐔i is the signal eigenvector matrix of M × mi and 𝚺i is an eigenvalue matrix of mi × mi. When the AoA

Pilot Assignment Scheme Based on Location

1003

of desired channel and interfering channels are non-overlapping, we can get 𝐔Hi 𝐔j = 0, ∀i ≠ j, M → ∞ from (8). L ∑ In the same way 𝜏 𝐑j can be decomposed into j=1,j≠i

𝜏

L ∑

𝐑j = 𝐖𝚺𝐖H

(11)

j=1,j≠i

where 𝐖 is the signal eigenvector matrix and 𝐖𝐖H = I. According to (8) we know that the span of 𝐖 is included in the orthogonal complement of span of 𝐔i. Now denote 𝐕 is the unitary matrix corresponding to the orthogonal complement of both span{𝐖} and span{𝐔i }, so that the M × M identity matrix IM can be decomposed into

𝐈M = 𝐔i 𝐔Hi + 𝐖𝐖H + 𝐕𝐕H

(12)

Then the estimation of desired channel ̂𝐡i can be written as

̂𝐡iki = 𝐔i 𝚺i 𝐔H (𝜎 2 𝐔i 𝐔H + 𝜎 2 𝐖𝐖H + 𝜎 2 𝐕𝐕H + 𝜏𝐔i 𝚺i 𝐔H + 𝐖𝚺𝐖H ) i i i ⋅ (𝜏

L ∑

(13)

𝐡jki + 𝐒̄ H vec[𝐍])

j=1

As M → ∞, we can use the asymptotic orthogonality among 𝐔i,𝐖 and 𝐕 to get the following equation: lim ̂𝐡iki = 𝐔i 𝚺i (𝜎 2 𝐔i + 𝜏𝐔i 𝚺i )−1 (𝜏

M→∞

L ∑

𝐡jki + 𝐒̄ H vec[𝐍])

j=1

= 𝐔i 𝚺i (𝜎 2 𝐈mi + 𝜏𝚺i )−1 𝐔Hi (𝜏

L ∑

𝐡jki + 𝐒̄ H vec[𝐍])

(14)

j=1 −1

= 𝐔i 𝚺i (𝜎 𝐈mi + 𝜏𝚺i ) 2

(𝜏𝐔Hi 𝐡iki

+𝜏

L ∑

𝐔Hi 𝐡jki + 𝐒̄ H vec[𝐍])

j=1,j≠i (p) (p) min , 𝜃 max ], ∀p}, 𝐡i ⊂ span{𝐚(𝜃iki ), 𝜃iki ∈ [𝜃iki Because of we iki ‖𝐔H 𝐡 ‖ ‖ i iki ‖ lim = 0 from (8). So the expression of (14) can be written as ‖ M→∞ ‖ H i≠j ‖𝐔i 𝐡jki ‖ ‖ ‖

lim ̂𝐡i = 𝐔i 𝚺i (𝜎 2 𝐈mi + 𝜏𝚺i )−1 (𝜏𝐔Hi 𝐡iki + 𝜏 𝐒̄ H 𝐍)

M→∞

can

get

(15)

It is equal to (7) if we apply the EVD decomposition of 𝐑i in (7), and (9) has been proved.

1004

4

C. Zhang

Location-Based Pilot Assignment

From the analysis results of the previous sections, when the number of antennas tends to infinite and the AoA of desired channel and interfering channels is non-overlapping, the estimation of desired channel is identical to the interference free case. So in this section, we will describe a novel scheme to decrease the influence of pilot contamination. The signal AoA distribution is governed by the physical propagation environment which is dominated by the scatterers around the UE from [12]. As show in Fig. 1, we consider a One-Ring model with radius r comprising many scatterers around the UE. (p) min , 𝜃 max ], ∀p ∈ [𝜃jki Thus we can obtain the range of AoA as 𝜃jki jki

cell j

r r

θikimax θ ikimin

min θ jki

max θ jki

cell i

Fig. 1. The One-Ring model with radius r of desired and interfering users at the target BS. The min , 𝜃 max ], [𝜃 min , 𝜃 max ] respectively. AoA range of the desired and interfering user are [𝜃iki jki iki jki

min = arctan( 𝜃jki max = arctan( 𝜃jki

xjku − xib yujk − ybi xjku − xib yujk − ybi

) − arctan( √

r (xjku )2

+ (yujk )2

)

(16) ) + arctan( √

r (xjku )2 + (yujk )2

)

We consider L arbitrary UE cells, and regard cell 1 as the target cell. Our object is to find the appropriate L − 1 UEs in the surrounding cells and assign them the same pilot sequence for each UE in the target cell. It’s worth noting that not all UEs in the target cell can satisfy the non-overlapping condition. So we take the distance factor into consideration and establish the object function F=

L K ( ∑ ∑ j=2 i=1

▵𝜃jki (D𝜂jki )t ⋅ 𝜉ij

)
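A direct transcription of Eq. (16), with atan2 used in place of arctan of a quotient for numerical robustness; the coordinates and scatterer radius in the example are illustrative.

```python
import math

def aoa_range(x_ue, y_ue, x_bs, y_bs, r):
    """AoA interval [theta_min, theta_max] per Eq. (16) for a UE ringed by scatterers."""
    center = math.atan2(x_ue - x_bs, y_ue - y_bs)     # arctan((x_u - x_b) / (y_u - y_b))
    spread = math.atan(r / math.hypot(x_ue, y_ue))    # arctan(r / sqrt(x_u^2 + y_u^2))
    return center - spread, center + spread

print(aoa_range(300.0, 200.0, 0.0, 0.0, 80.0))
```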

(17)

Pilot Assignment Scheme Based on Location

1005

min , 𝜃 max ] and [𝜃 min , 𝜃 max ], D where ▵𝜃jki means the “Distance” between [𝜃iki jki iki jki jki denotes the distance from kth UE in jth cell to the BS of ith cell and 𝜂 is the path-loss min , 𝜃 max ] and [𝜃 min , 𝜃 max ] has overlapping, we define that ▵𝜃 < 0, exponent. If [𝜃iki jki jki iki jki { 1, ▵𝜃jki > 0 . The procedure of pilot assignment can be described as t= −1, ▵𝜃jki < 0 maximize F =

L K ∑ ∑

(▵𝜃jki ⋅ (D𝜂jki )t ⋅ 𝜉ij )

j=2 i=1

∑ K

subject to

(18)

𝜉ij = 1, ∀j

i=1

𝜉ij ∈ {0, 1}.

To maximize the object function can make sure that the AoA of desired channel and interfering channels is non-overlapping, even though this condition cannot be satisfied, the pilot contamination can also be decreased via the long distance from the undesired UE to the BS of target cell. The subject condition can guarantee that only one UE is selected from each cell for each UE of the target cell.

5

Simulation Results and Analysis

In this section, we present numerical results on the channel estimation error performance of the proposed and traditional method. We consider a 7-cell network and take the center cell 1 as the target cell. Each cell serves 10 single antenna UEs. The length of pilot sequences 𝜏 is 10. The radius of the cell and scatterer is 500 m and 80 m respectively. The number of paths has been set to 50. The path-loss exponent 𝜂 is 3.8 and the antenna spacing D = 𝜆∕ 2. We use the normalized channel estimation error to evaluate the proposed scheme

err = 10log10 (

K ∑ ‖̂ ‖2 ‖𝐡1j1 − 𝐡1j1 ‖ ‖F j=1 ‖ K ∑ ‖ ‖2 ‖𝐡1j1 ‖ ‖F j=1 ‖

)

(19)

where ̂𝐡1j1 is the channel estimation of UE in the target cell, and 𝐡1j1 is the actual channel information of these UEs. In the simulation, the AoA of signal has uniform min , 𝜃 max ]. distributions among [𝜃jki jki In Fig. 2, the normalized average channel estimation error versus the number of antennas is given. The black line represents the conventional LS channel estimation method and the rest of the three lines are obtained by MMSE channel estimation for random, AoA-only and the proposed pilot allocation respectively. As we can see, the

1006

C. Zhang

performance of conventional LS channel estimation method is unchanged with the increasing of antenna number. By exploiting the location information of UEs and BSs, the AoA based scheme has a remarkable gain than the random method. With AoA-only method, only the AoA of signal has been taken into consideration. However, the condi‐ tion of non-overlapping cannot always be satisfied. When the number of UEs is small, we don’t have many options to choose the eligible UEs to assign the same pilot sequence. And the performance of the proposed method has an extra gain compared to the AoAonly MMSE method.

Fig. 2. Normalized average channel estimation error versus the number of antennas.

Figure 3 shows the relationship between the normalized average channel estimation error and radius of scatterers r. As the picture shows, the performance is decreasing with the radius of scatterers increasing. This is because that the AoA of desired and interfering channel is easier to be overlapped with a wide range of angel.

Fig. 3. Normalized average channel estimation error versus the radius of scatterer with 40 antennas.

6

Conclusions

In this paper, we have studied the MMSE channel estimation method and derived the relationship between the signal AoA and the channel estimation error. Through the

Pilot Assignment Scheme Based on Location

1007

derived expressions, we have proposed a pilot assignment scheme based on the location to maximize the AoA gap of desired channel and interfering channel. The AoA based scheme can decrease the influence of pilot contamination to a great degree. Furthermore, the AoA and the distance from UEs to the target cell have been considered in this proposed method. The simulation results show that the proposed method has a better performance than the AoA-only way. Furthermore, the smaller the radius of scatterer is, the better the performance will be. Acknowledgments. This work was supported by the China’s 863 Project (No. 2015AA01A706), the National S&T Major Project (No. 2016ZX03001017), Science and Technology Program of Beijing (No. D161100001016002), and by State Key Laboratory of Wireless Mobile Communications, China Academy of Telecommunications Technology (CATT).

References 1. Marzetta, T.L.: Noncooperative cellular wireless with unlimited numbers of base station antennas. IEEE Trans. Wirel. Commun. 9(11), 3590–3600 (2010) 2. Ngo, H.Q., Larsson, E.G., Marzetta, T.L.: Energy and spectral efficiency of very large multiuser MIMO systems. IEEE Trans. Commun. 61(4), 1436–1449 (2013) 3. Larsson, E.G., Edfors, O., Tufvesson, F., Marzetta, T.L.: Massive MIMO for next generation wireless systems. IEEE Commun. Mag. 52(2), 186–195 (2014) 4. Rusek, F., et al.: Scaling up MIMO: opportunities and challenges with very large arrays. IEEE Sig. Process. Mag. 30(1), 40–60 (2013) 5. Adhikary, A., Nam, J., Ahn, J.Y., Caire, G.: Joint spatial division and multiplexing—the largescale array regime. IEEE Trans. Inf. Theor. 59(10), 6441–6463 (2013) 6. Zhu, X., Dai, L., Wang, Z.: Graph coloring based pilot allocation to mitigate pilot contamination for multi-cell massive MIMO systems. IEEE Commun. Lett. 19(10), 1842– 1845 (2015) 7. Appaiah, K., Ashikhmin, A., Marzetta, T.L.: Pilot contamination reduction in multi-user TDD systems. In: 2010 IEEE International Conference on Communications (ICC), pp. 1–5 (2010) 8. Mochaourab, R., Björnson, E., Bengtsson, M.: Pilot clustering in asymmetric massive MIMO networks. In: 2015 IEEE 16th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), pp. 231–235 (2015) 9. Yin, H., Gesbert, D., Filippou, M., Liu, Y.: A coordinated approach to channel estimation in large-scale multiple-antenna systems. IEEE J. Sel. Areas Commun. 31(2), 264–273 (2013) 10. He, Q., Zhang, X., Xiao, L., Liu, X., Zhou, S.: A low-complexity pilot reuse scheme to increase the throughput of massive MIMO. In: Vehicular Technology Conference (VTC Fall), pp. 1– 5 (2015) 11. Saxena, V., Fodor, G., Karipidis, E.: Mitigating pilot contamination by pilot reuse and power control schemes for massive MIMO systems. In: 2015 IEEE 81st Vehicular Technology Conference (VTC Spring), pp. 1–6 (2015) 12. Shiu, D.-S., Faschini, G.J., Gans, M.J., Kahn, J.M.: Fading correlation and its effect on the capacity of multi-element antenna systems. In: IEEE 1998 International Conference on Universal Personal Communications, ICUPC 1998, Florence, vol.1, pp. 429–433 (1998)

Fractal Microwave Absorbers for Multipath Reduction in UHF-RFID Systems Francesca Venneri and Sandra Costanzo ✉ (

)

DIMES, University of Calabria, 87036 Rende, CS, Italy {venneri,costanzo}@dimes.unical.it

Abstract. A novel fractal microwave absorber is proposed to operate within the UHF-RFID band for the reduction of multipath adverse effects. Good miniaturi‐ zation capabilities are demonstrated for a 868 MHz absorber unit cell with an absorptivity more than 99% and a very thin substrate thickness (≤λ0/100 at the operating frequency). Thanks to its compactness and effectiveness in achieving perfect absorption, the proposed configuration is appealing for those applications operating in confined indoor environments. Keywords: Fractals · Microwave absorbers · UHF RFID

1

Introduction

Ultra-high frequency (UHF) passive RFID systems [1] are gaining increasing interest in various commercial and industrial applications operating in restricted indoor loca‐ tions, where RF signals are typically affected by multipath interferences. For this reason, practical techniques are urgently needed to prevent incorrect readings of UHF-RFID tags due to multipath phenomena. To this end, software-based approaches are proposed in literature [2]. As an alternative, the use of microwave absorbers can represent an effective hardware solution to reduce multiplex reflection interferences. However, the size of traditional microwave absorbing materials is too large. Furthermore, in the UHFband, conventional λ/4-thick Salisbury screens become very bulk, whereas ferrite-based absorbers are very heavy and relatively expensive. In order to overcome the above limitations, metamaterial absorbers (MAs) are recently proposed as small volume and lightweight microwave absorbing structures, useful to improve the reliability of UHF-RFID systems: an optically transparent absorber operating at 920 MHz is proposed in [3], that prevents blind areas for surveillance cameras installed near UHF-RFID systems; a low-cost MA absorber is proposed in [4, 5] for the European 865 ÷ 868 MHz band and a miniaturized MA unit cell integrating four lumped resistors, suitable for very small indoor locations, is discussed in [6]. MAs, introduced by Landy et al. in 2008 [7], consist of a periodic resonant metallic structure printed on a low-loss and thin grounded dielectric slab that can be designed to achieve a perfect absorption around a given frequency and/or a frequency band. Several MAs configurations have been developed for practical appli‐ cations over the entire electromagnetic spectrum [8, 9]. In this paper, a fractal based © Springer International Publishing AG 2017 Á. Rocha et al. (eds.), Recent Advances in Information Systems and Technologies, Advances in Intelligent Systems and Computing 570, DOI 10.1007/978-3-319-56538-5_100

Fractal Microwave Absorbers for Multipath Reduction

1009

MA absorber is proposed for multipath reduction in UHF-RFID systems. In partic‐ ular the Minkowski fractal geometry, already adopted by the authors for the design of reflectarray antennas [10, 11], is investigated to achieve a good compromise in the design of ultra-thin, low-cost and miniaturized MA unit cells, which can be fruitfully adopted for multiple reflection mitigation in confined indoor environments. With respect to the absorbing structures proposed in literature [3–6], the configuration adopted in this work allows to achieve very small unit cells (about 40–65% smaller than those designed in [3–5]), without using any lumped resistors [6] and adopting a thinner substrate, that is about λ0/100 at the operating frequency against the λ0/15 thickness of the structure in [6]. Detailed simulations of UHF fractal-based absorbers are presented and discussed, highlighting their inherent miniaturization capabilities. Future developments of proposed MA configurations are outlined in the conclusions.

2

Fractal MA Unit Cell

The structure proposed in this work is depicted in Fig. 1. It consists of a fractal shaped periodic frequency selective surface (FSS) (Fig. 1(b)) printed on a thin grounded dielec‐ tric substrate (Fig. 1(a)). The structure is designed to perform perfect absorption around a prescribed frequency f0. To this end, the proposed Minkowski patch (Fig. 1(b)) is properly synthesized to match the unit cell input impedance (Fig. 1(c)) with that of the free space, namely ζ0. Both degrees of freedom inherent to the adopted FSS shape, namely the patch length L and the inset size SL (Fig. 1(b)), are properly exploited to satisfy the matching condition. As already demonstrated by the authors for the reflec‐ tarrays design [10, 11], the above degrees of freedom allow to reduce the size of the metallic shape offering the capabilities to work with smaller inter-element spacings (i.e. unit cell size D), so avoiding grating lobes occurrences in reflectarrays [10], or practical for reducing bulk and mass of MAs in the UHF-band.

Fig. 1. Proposed microwave absorber: (a) side view; (b) unit-cell top view; (c) equivalent circuit model of the unit-cell.

1010

F. Venneri and S. Costanzo

In order to give a physical interpretation of the mechanism that allows to match fractal unit cell and free space impedances, an equivalent transmission line (TL) model [12] is adopted (Fig. 1(c)). The absorber unit cell is modeled with an impedance Zcell, which is equal to the parallel connection of a series RLC-circuit, representing the conducting patch (i.e. Zpatch), and the input impedance Zh of the grounded dielectric slab (Fig. 1(c)). The above R, L, and C parameters take into account, respectively: ohmic and dielectric losses of the cell; the magnetic flux between the patch and the ground plane; the parasitic capacitance between the edges of adjacent patches and (in the case of very thin substrates (h < 0.3D) [12]), the capacitance between the patches and the ground plane, due to the evanescent Floquet modes. As the FSS shape is synthesized to perform perfect absorption at the resonance, the unit cell reflection coefficient (Eq. (1)) will reach a minimum in the correspondence of frequency f0. 𝛤 =

res Re{Zcell } − 𝜁0 res Re{Zcell } + 𝜁0

(1)

In order to describe the effect on the matching condition given by the geometrical/ electrical parameters relevant to the MA, the following approximated formula [12] for the real part of the input impedance Zcell is adopted: 𝜁02 [ res Re{Zcell }=

𝜀′r

( √ )] tg2 k0 h 𝜀′r

−2𝜀′′r D2 1 + AFSS 𝛿𝜎 𝜔0 C(𝜀′ + 1)2

(2)

where ω0 is the first resonance frequency, k0 is the free space propagation constant, AFSS, δ and σ are, respectively, the area, the skin depth and the conductivity of the metalized patch, while 𝜀′r + j𝜀′′ is the relative permittivity of the substrate.

3

Design and Simulation of a Fractal Absorber for UHF-RFID Band

A fractal MA absorber based on the use of a commercial FR4 substrate (εr = 4.4, tanδ = 0.02, h = 3.2 mm) is designed to operate in the UHF-RFID band around f0 = 868 MHz. In order to achieve a perfect absorption at the operating frequency, the Minkowski FSS shape (Fig. 1(b)) is properly tuned by changing both the length L as well as the scaling factor S. As first case, a repetition period D equal to λ0/4 is fixed. A commercial full-wave code, based on the infinite array approach, is adopted as design tool, assuming a normally incident plane-wave as source. In order to derive the design rules for the proposed absorber configuration, the input impedance and the reflection coefficient of some unit cell samples are reported in Fig. 2. The above input parameters are computed versus frequency, by varying the patch lengths

Fractal Microwave Absorbers for Multipath Reduction

1011

L from 67 mm up to 75 mm, and tuning the scaling factor S within a range comprising the value giving the resonance at 868 MHz. It is evident from Fig. 2(a) that by increasing the size L of the Minkowski element, the input impedance of the unit cell also increases. As a matter of the fact, the patches result to be more coupled to each other, so the unit cell capacitance C grows up and, as expected from Eq. (2), the real part of Zcell increases. Furthermore, for a fixed patch length L, the scaling factor S affects both the cell resonant frequency as well as the input impedance magnitude. As a matter of the fact, Fig. 2 shows that by increasing S, the unit cell impedance is lowered, as the distance between two adjacent elements becomes greater in correspondence of the inset SL (Fig. 1(b)), so causing a smaller capacitive coupling (Eq. (2)). Thus, taking into account the above considerations, an adequate tuning of both geometrical parameters, L and S (Fig. 1(b)), allows to achieve a perfect absorption at the prescribed frequency.

Fig. 2. Simulated Minkowski unit cells having D = λ/4 and L equal 67 mm and 75 mm: input impedance (a) and reflection coefficient (b) vs. frequency for different scaling factor S.

In the case of the analyzed cells, the best absorption is achieved at 868 MHz when L and S are respectively equal to 67 mm and 0.21 (Fig. 2). In this last case, a very good matching is achieved between Zcell and the free space impedance (Fig. 2(a)) that corre‐ sponds to a reflection coefficient value of about –36 dB (Fig. 2(b)). This last value gives an absorptivity A(ω0) greater than 99.9%, where A(ω) is defined as: A(ω) = 1 – R(ω) = 1 – |S11|2, with R(ω) the power reflection coefficient and ||S11 || (i.e. |Γ| in Fig. 2(b)) the input reflection coefficient of the cell.

1012

4

F. Venneri and S. Costanzo

Miniaturization of Fractal Absorber Unit Cell

As demonstrated in [10] the proposed Minkowski fractal geometry allows to reduce the patches size, leaving unchanged the substrate features. The intrinsic miniaturization capabilities of fractal shapes are exploited in this work to design a miniaturized absorber unit cell for the UHF-RFID band, useful to reduce multipath effects in restricted indoor operating areas. By following the useful design rules outlined in Sect. 2, a set of 868 MHz absorber unit cells is designed by varying D from 0.3λ down to 0.15λ. Both simulated input impedances as well as reflection coefficients of each designed cell are depicted in Fig. 3. It is possible to observe how a proper tuning of both patch geometrical parameters assures a perfect absorption in all considered cases. As a matter of the fact, the inherent higher capacitive coupling due to reduced unit cell sizes is properly limited through the use of a greater scaling factor S. In this way, the resonant input impedance can remain well matched to the free space (Fig. 3(a)). A very good absorptivity (A(ω0) > 99%) is derived from simulated reflection coefficients of each resonating cell (Fig. 3(b)), that confirms the effectiveness of the proposed configuration in designing miniaturized UHF absorber panels having very small thickness (i.e. h = 3.2 mm ≅ λ0/100).

Fig. 3. Simulated Minkowski unit cells resonating at 868 MHz: Input impedance (a) and reflection coefficient (b) vs. frequency for different unit cell size D.

In conclusion, the proposed configuration allows to effectively work with a unit cell that is 40-65% smaller with respect to UHF absorber configurations proposed in litera‐ ture [3–5].

5

Conclusions

A fractal Minkowski FSS shape is proposed to design a thin and compact metamaterial absorber for multipath reduction in UHF-RFID applications. The adopted configuration has been extensively analyzed through the use of a commercial full-wave code and adopting a simplified equivalent circuit model of the unit cell. Useful design rules have been retrieved from the analysis stage. Furthermore, good miniaturization capabilities and very high absorption percentage have been demonstrated, making the proposed

Fractal Microwave Absorbers for Multipath Reduction

1013

configuration appealing for those applications operating in confined indoor locations. As future developments, the miniaturization capabilities of the proposed configuration can be exploited to design multiband microwave absorbers simply by embedding two or more miniaturized patches in the same cell.

References 1. Finkenzeller, K.: RFID Handbook: Fundamentals and Applications in Contactless Smart Cards and Identification. Wiley, Hoboken (2003) 2. Wiseman, Y.: Compression scheme for RFID equipment. In: IEEE International Conference on Electro Information Technology (EIT 2016), North Dakota, USA, pp. 387–392 (2016) 3. Okano, Y., Ogino, S., Ishikawa, K.: Development of optically transparent ultrathin microwave absorber for ultrahigh-frequency RF identification system. IEEE Trans. Microw. Theor. Tech. 60(8), 2456–2464 (2012) 4. Costa, F., Genovesi, S., Monorchio, A., Manara, G.: Low-cost metamaterial absorbers for sub-GHz wireless systems. IEEE Antennas Wirel. Propag. Lett. 13, 27–30 (2014) 5. Costa, F., Genovesi, S., Monorchio A., Manara, G.: Perfect metamaterial absorbers in the ultra-high frequency range. In: International Symposium on Electromagnetic Theory, Hiroshima (2013) 6. Zuo, W., Yang, Y., He, X., Zhan, D., Zhang, Q.: A miniaturized metamaterial absorber for ultrahigh-frequency RFID system. IEEE Antennas Wirel. Propag. Lett. doi:10.1109/LAWP. 2016.2574885 7. Landy, N.I., Sajuyigbe, S., Mock, J.J., Smith, D.R., Padilla, W.J.: Perfect metamaterial absorber. Phys. Rev. Lett. 100(20), 207402 (2008) 8. Maier, T., Brückl, H.: Wavelength-tunable microbolometers with metamaterial absorbers. Opt. Lett. 34(19), 3012–3014 (2009) 9. Chen, H.T., Padilla, W.J., Cich, M.J., Azad, A.K., Averitt, R.D., Taylor, A.J.: A metamaterial solid state terahertz phase modulator. Nat. Photonics 3, 148–151 (2009) 10. Costanzo, S., Venneri, F.: Miniaturized fractal reflectarray element using fixed-size patch. IEEE Antennas Wirel. Propag. Lett. 13, 1437–1440 (2014) 11. Costanzo, S., Venneri, F., Di Massa, G., Borgia, A., Costanzo, A., Raffo, A.: Fractal reflectarray antennas: state of art and new opportunities. Int. J. Antennas Propag. 2016, 17 (2016). doi:10.1155/2016/7165143. Article ID:7165143 12. Costa, F., Genovesi, S., Monorchio, A., Manara, G.: A circuit-based model for the interpretation of perfect metamaterial absorbers. IEEE Trans. Antennas Propag. 61(3), 1201– 1209 (2013)

A Sum-Rate Maximization Scheme for Coordinated User Scheduling Jinru Li1, Jie Zeng2 ✉ , Xin Su2, and Chiyang Xiao2 (

)

1

Broadband Wireless Access Laboratory, Chongqing University of Posts and Telecommunications, Chongqing, China 2 Tsinghua National Laboratory for Information Science and Technology, Research Institute of Information Technology, Tsinghua University, Beijing, China [email protected]

Abstract. With the densification of cellular networks (This work was supported by China’s 863 Project (No. 2015AA01A706), the National S&T Major Project (No. 2016ZX03001017), and by Science and Technology Program of Beijing (No. D161100001016002)), co-channel interference affects the system throughput signif‐ icantly, while coordinated resource allocation among multiple access points (APs) can improve the network performance. In this paper, we propose an efficient scheme of coordinated user association and subcarrier allocation to obtain the maximum sum rate in a cell cluster. Simulation results show that our scheduling scheme improves the sum rate compared with the maximum received signal strength based user scheduling. Keywords: Network densification · Coordination · User association and subcarrier allocation

1

Introduction

Because of high spectral reuse factor, network densification is identified as one main method to meet the exponentially increasing demand on wireless traffic volume incre‐ ment. In dense network, various APs with different transmission power are randomly deployed. With more APs which use the same frequency bands deployed, the co-channel inter‐ ference becomes one of the main factor affecting network performance. Coordinated resource allocation among multiple APs is a promising way to improve the performance and resource utilization [1]. When implementing resource allocation, both the channel condition and load balance should take into consideration. In these case, [2] acquired the global optimal solution and demonstrated that the optimal user scheduling signifi‐ cantly enhances network performance levels. [3] explored global network coordinated for AP selection and power allocation strategies to obtain higher throughput. But the joint optimization for user association, subcarrier and power allocation always turn out to be a nonconvex combinatorial optimization problem [4, 5]. In this work, we consider the downlink communication of a two-tier heterogeneous cellular network with random topology. The APs are connected to macro BSs (MBS) with backhaul links, MBS are only responsible for scheduling policy [6]. In a multi-APs © Springer International Publishing AG 2017 Á. Rocha et al. (eds.), Recent Advances in Information Systems and Technologies, Advances in Intelligent Systems and Computing 570, DOI 10.1007/978-3-319-56538-5_101

A Sum-Rate Maximization Scheme for Coordinated User Scheduling

1015

and multi-users cell cluster, we seek for optimal user association and subcarrier alloca‐ tion to obtain the maximum sum rate. The rest of this paper is structured as follows. In Sect. 2, system modeling and problem formulation are presented. Section 3 describes the proposed coordinated user association and subcarrier allocation, while in Sect. 4 simulation results are given. Finally, conclusions are provided in Sect. 5.

2 System Model and Problem Formulation

In a multi-AP and multi-user cell cluster, all APs are denoted by the set $\mathbf{J} = \{1, 2, \ldots, J\}$, all users are denoted by the set $\mathbf{I} = \{1, 2, \ldots, I\}$, and the whole spectrum band $B$ shared by the APs is equally divided into $K$ subcarriers denoted by the set $\mathbf{K} = \{1, 2, \ldots, K\}$. $h_{i,j,k}$ denotes the channel fading between AP $j$ and user $i$ on subcarrier $k$, and it is assumed to follow Rayleigh fading. This paper focuses on AP selection and subcarrier allocation only, so we assume the transmit power on each subcarrier is $p$. The achievable rate on subcarrier $k$ between AP $j$ and user $i$ can then be written as

$$r_{i,j,k} = \frac{B}{K} \log_2\left(1 + \frac{p\, h_{i,j,k}}{\sigma^2 + \Omega_{i,j,k}}\right) \qquad (1)$$

where $\Omega_{i,j,k} = \sum_{j' \in \mathbf{J}, j' \neq j} p\, h_{i,j',k}$ is the interference caused to UE $i$ by the other APs $j'$ using subcarrier $k$, and $\sigma^2$ is the power of the Gaussian white noise. The sum rate in a cell cluster can then be calculated as

$$\sum_{i=1}^{I} \sum_{j=1}^{J} \sum_{k=1}^{K} \frac{B}{K}\, x_{i,j,k}\, r_{i,j,k} \qquad (2)$$

$$\text{s.t.} \quad \sum_{i=1}^{I} x_{i,j,k} \leq 1 \quad \forall k \in \mathbf{K},\ j \in \mathbf{J}$$

$$\sum_{i=1}^{I} \sum_{k=1}^{K} x_{i,j,k} \leq K \quad \forall j \in \mathbf{J}$$

$$\sum_{j=1}^{J} x_{i,j,k} = 1 \quad \forall i \in \mathbf{I}$$

where $x_{i,j,k} = 1$ indicates that subcarrier $k$ on AP $j$ is allocated to user $i$, and $x_{i,j,k} = 0$ otherwise. The first constraint guarantees that each subcarrier of each AP can be assigned to at most one user in a transmission interval. The second constraint ensures that the total number of subcarriers allocated to the associated users does not exceed the maximum number of subcarriers. We also assume that each user can only connect to one AP, which is expressed by the third constraint.

3 User Association and Subcarrier Allocation Algorithm

We now need to find the optimal $x_{i,j,k}$ that yields the maximum sum rate. First, we use a set $\mathbf{V} = \{v_1, v_2, \ldots, v_n\}$ to denote all possible combinations of user, AP and subcarrier, in which an element $v_n = ijk$ represents that subcarrier $k$ on AP $j$ is allocated to user $i$, so $|\mathbf{V}| = I \times J \times K$. Then, we define a set $\mathbf{s}_m$, which denotes one feasible allocation scheme; obviously, $\mathbf{s}_m$ is a subset of $\mathbf{V}$. We further assume that all subcarriers are allocated to users, so $\mathbf{s}_m$ has $J \times K$ elements. The set of all feasible allocation schemes is denoted by $\mathbf{S}$. Any two elements of $\mathbf{s}_m$, which can be expressed as $ijk$ and $i'j'k'$, must satisfy the following two constraints. First, if $i = i'$, then $j = j'$; this condition indicates that the same user cannot connect to multiple APs. Second, if $j = j'$, then $k \neq k'$; this condition ensures that the same subcarrier cannot be allocated to multiple users. After acquiring the whole set of feasible allocation schemes, i.e. $\mathbf{S}$, we can obtain the optimal scheduling $\mathbf{s}_m^*$ by addressing the following problem:

$$\mathbf{s}_m^* = \arg\max_{\mathbf{s}_m \in \mathbf{S}} \sum_{i=1}^{I} \sum_{j=1}^{J} \sum_{k=1}^{K} \frac{B}{K}\, x_{i,j,k}\, r_{i,j,k} \qquad (3)$$

The main remaining problem is therefore to find all the feasible allocation sets, i.e. $\mathbf{S}$. We use an effective search method to find each $\mathbf{s}_m$, as described in Table 1; an illustrative sketch of such a search is given below.

Table 1. An effective search method to find all the feasible allocation schemes
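Because Table 1 is not reproduced in this extraction, the following Python sketch shows one straightforward way to enumerate feasible allocation schemes satisfying the two constraints above and to select the sum-rate-maximizing one as in Eq. (3). It is our own illustration under stated assumptions, not the authors' exact procedure, and its exhaustive enumeration is only practical for very small I, J, K.

```python
import itertools
import numpy as np

def best_allocation(r):
    """Exhaustive search over feasible schemes (illustrative, exponential cost).

    r: I x J x K array of per-link rates from Eq. (1).
    Returns the best binary allocation x and its sum rate.
    """
    I, J, K = r.shape
    slots = [(j, k) for j in range(J) for k in range(K)]   # every AP/subcarrier pair is used
    best_x, best_rate = None, -np.inf
    # Assign one user to every (AP, subcarrier) slot, then keep only feasible schemes
    for users in itertools.product(range(I), repeat=len(slots)):
        ap_of_user, feasible = {}, True
        for (j, k), i in zip(slots, users):
            if ap_of_user.setdefault(i, j) != j:           # a user may connect to one AP only
                feasible = False
                break
        if not feasible:
            continue
        x = np.zeros_like(r)
        for (j, k), i in zip(slots, users):
            x[i, j, k] = 1                                 # one user per subcarrier of each AP
        rate = np.sum(x * r)                               # objective of Eq. (3), B/K folded into r
        if rate > best_rate:
            best_x, best_rate = x, rate
    return best_x, best_rate

# e.g. best_allocation(achievable_rates(h, p=1.0, sigma2=0.1, B=1.0, K=3)) for the toy setup above
```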

4 Simulation Results and Analysis

In this section, we evaluate the performance of the proposed coordinated user scheduling algorithm using the Monte Carlo method. We consider a circular area with unit radius, within which users and APs are uniformly and independently distributed. The path-loss exponent between any user and AP is set to 4. The number of subcarriers is 3, and the bandwidth of each subcarrier, i.e. B/K, is normalized. For simplicity, the number of users is fixed at 10.


As shown in Fig. 1, the sum rate increases as the number of APs in a cluster varies from 3 to 9. Compared to maximum received signal strength (max-RSS) based user association, our coordinated user association and subcarrier allocation scheme obtains a greater sum rate. This is because, when the max-RSS based user association scheme is used, many users may connect to the same AP, so the spectrum resources on other APs remain unused; in addition, the interference between access points is severe.


Fig. 1. Sum rate versus number of APs by adopting different user association and subcarrier allocation schemes

5 Conclusions

This paper presents a coordinated user association and subcarrier allocation scheme aiming to improve the sum rate of all users in a cell cluster. We first built a coordinated scheduling model. Then, we found all feasible user scheduling schemes and obtained the maximum sum rate. Simulations show that the coordinated scheduling greatly improves the sum rate in a cell cluster.

References 1. Jin, Y., Cao, F., Dziyauddin, R.A.: Inter-cell interference mitigation with coordinated resource allocation and adaptive power control. In: Wireless Communications and Networking Conference (WCNC), pp. 1558–2612 (2014) 2. Gotsis, A.G., Stefanatos, S., Alexiou, A.: Optimal user association for massive MIMO empowered ultra-dense wireless networks. In: IEEE International Conference on Communication Workshop (ICCW), pp. 2164–7038 (2015) 3. Gotsis, A.G., Alexiou, A.: Global network coordination in densified wireless access networks through integer linear programming. In: IEEE PIMRC 2013, pp. 2166–9570 (2013) 4. Parsaeefard, S., Dawadi, R., Derakhshani, M., Le-Ngoc, T.: Joint user-association and resource-allocation in virtualized wireless networks. IEEE Access 4, 2738–2750 (2016) 5. Chen, S., Xing, C., Fei, Z.: Distributed resource allocation in ultra-dense networks via belief propagation. China. Communications 12, 1–13 (2015) 6. Sun, S., Adachi, K., Tan, P.H.: Heterogeneous network: an evolutionary path to 5G. In: Asia-Pacific Conference on Communications (2015)

An Approach of Cell Load-Aware Based CoMP in Ultra Dense Networks

Jingjing Wu¹, Jie Zeng²(✉), Xin Su², and Liping Rong²

1 Broadband Wireless Access Laboratory, Chongqing University of Posts and Telecommunications, Chongqing, China
[email protected]
2 Tsinghua National Laboratory for Information Science and Technology, Research Institute of Information Technology, Tsinghua University, Beijing, China
[email protected]

Abstract. Ultra dense networks (UDN) can be considered one of the key technologies of 5G, since they can improve capacity and spectral efficiency. With the densification of cell networks, the signals received by cell-edge users may come from more than one cell. In this situation, coordinated multipoint (CoMP) technology can be used to improve the system throughput. Considering the constraints on resource allocation posed by CoMP, a cell load-aware based CoMP (CLA-CoMP) is proposed. It balances the cell load by adjusting the number of users served by the base stations. The simulation results show that CLA-CoMP can improve system throughput compared with traditional CoMP and non-CoMP.

Keywords: UDN · CoMP · Cell load information

1 Introduction

UDN is considered one of the key technologies for improving capacity and spectral efficiency in future fifth-generation (5G) mobile networks [1], and one of the best ways to meet user (UE) expectations and support future wireless network deployment [2]. In UDN, more base stations (BSs) are deployed and cell radii are smaller. This brings technical challenges, including performance degradation of cell-edge UEs due to serious inter-cell interference (ICI). In order to alleviate the interference suffered by cell-edge UEs, [3] proposed a channel-state and interference-aware power allocation scheme (PAG) that can improve the performance of cell-edge UEs. That method adjusts the transmit power to avoid causing too much disturbance to other cell-edge UEs, but it does not consider the collaboration between BSs. (This work was supported by China's 863 Project (No. 2015AA01A706), the National S&T Major Project (No. 2016ZX03001017), and the Science and Technology Program of Beijing (No. D161100001016002).)
Cell-edge UEs can use the CoMP technique to improve the UE rate and the total system throughput. It can be used to transform interference signals into useful signals to mitigate ICI [4]. In [5], a distributed scheduling scheme is proposed for downlink coherent joint transmission (CJT); CJT has been demonstrated to extract the largest CoMP gain in both cell-average and cell-edge performance.
In UDN, the deployment of cells is very dense and cell edges can overlap. This situation is considered in this paper: the signals received by cell-edge UEs may come from more than one base station.
The rest of the paper is organized as follows. Section 2 describes the system model. Section 3 defines a CoMP scheme based on cell load awareness. The simulation results and analysis are in Sect. 4. Section 5 concludes this paper.

2 System Model

All BSs are divided into clusters according to the coupling loss between the small cells [6]. In this paper, one cluster is considered. In a cluster, a cell edge may overlap with other cells. At the cell edge, CoMP is utilized to improve the edge UE rate and the system throughput. First, each BS chooses its served UEs according to the channel state information (CSI), and then the number of UEs served by the BS is adjusted according to the cell load information (CLI).
In this section, downlink small cell networks are considered, assuming a system based on orthogonal frequency division multiplexing (OFDM) with a frequency reuse factor of one. In a small cell, the available bandwidth is divided into S orthogonal sub-channels. BSs are indexed from 1 to K and UEs from 1 to I; let $\mathbf{K} = \{1, 2, \ldots, K\}$ and $\mathbf{I} = \{1, 2, \ldots, I\}$ denote the sets of BSs and UEs, respectively. The signal to interference plus noise ratio (SINR) of the $i$th UE can then be defined as

$$\mathrm{SINR}_i = \frac{\sum_{k \in K_i} p_i^k H_i^k}{\sum_{t \in \mathbf{K} - K_i} p_t H_i^t + N_0} \qquad (1)$$

where $K_i$ represents the serving BSs of UE $i$, and $p_i^k$ and $p_t$ are the coordinated transmit power and the interfering transmit power, respectively. In order to reduce the complexity of the problem, we consider equal power allocation [7]. $H_i^k$ and $H_i^t$ are the channel gains, and $N_0$ represents the background noise power; in this paper, both large-scale and small-scale fading are considered in $H_i^k$ and $H_i^t$. Under the above assumptions, the data rate $r_i$ of each UE $i$ and the overall system throughput $R_c$ of cluster $c$ can be defined as in Eqs. (2) and (3), where $B$ is the sub-channel bandwidth:

$$r_i = B \log_2\left(1 + \mathrm{SINR}_i\right) \qquad (2)$$

$$R_c = \sum_{i=1}^{I} r_i \qquad (3)$$

3 Cell Load-Aware Based CoMP

According to the system model described above, BSs choose their served UEs based on the CSI and CLI. In the cluster, the CSI between the BSs and their UEs is assumed to be known, and the BSs determine the served UEs from the CSI. Due to the limitation of CoMP, when a sub-channel is occupied by a CoMP UE it cannot be reused by other UEs in the small cell; therefore, the number of available sub-channels in the small cell is reduced. When the cell load is large enough, the small cell performance will be degraded if the BS serves every CoMP UE. The cell load-aware based CoMP is therefore proposed to balance the load of the cells and thereby improve the system throughput in the cluster. The CLI, associated with the number of UEs, is assumed to be known at the BSs. Each BS adjusts the number of UEs it serves based on the CLI: when the cell load is heavy enough and higher than the number of available channels, the BS stops serving the UE with the minimum channel gain. In other words, the BS no longer provides service to the UE with minimal performance gain. The detailed steps of the cell load-aware based CoMP are described in Algorithm 1; an illustrative sketch is given below.
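Algorithm 1 is not reproduced in this extraction; the following Python sketch is our own hedged reading of the load-aware adjustment described above (the data layout and names are assumptions): each overloaded BS repeatedly drops the associated UE with the smallest channel gain until its load no longer exceeds the number of available sub-channels.

```python
def load_aware_adjust(assoc, H, n_subchannels):
    """Hedged sketch of the CLA-CoMP load-balancing step.

    assoc: dict mapping BS index -> set of UE indices currently served
           (CoMP UEs may appear under several BSs),
    H: I x K channel-gain array, n_subchannels: sub-channels per small cell.
    """
    for k, ues in assoc.items():
        # While the cell load exceeds the available sub-channels,
        # stop serving the UE with the minimum channel gain towards this BS.
        while len(ues) > n_subchannels:
            worst = min(ues, key=lambda i: H[i, k])
            ues.remove(worst)
    return assoc
```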

4 Simulation Results and Analysis

In the cluster, BSs and UEs are randomly deployed. Without loss of generality, an area with unit radius is considered. In our simulation model, the number of BSs is 20 and the maximum number of UEs is 20. Without loss of generality, we normalize the transmission power and the bandwidth to 1. Large-scale and small-scale fading are considered to define the channels between BSs and UEs, and the path-loss exponent is −4. In the simulation, 6 sub-channels are available.
In Fig. 1, we compare the system throughput of CLA-CoMP with traditional CoMP that does not consider cell load awareness (NCLA-CoMP) and with the non-CoMP method. The simulation results show that, compared with non-CoMP, CoMP technology brings a significant performance gain. Moreover, compared with NCLA-CoMP, the proposed CLA-CoMP achieves a higher system throughput gain.


Fig. 1. System throughput & the number of UEs

5 Conclusions

In this paper, a CLA-CoMP scheme is proposed, which offloads excessive UEs from an overloaded BS. This avoids a decrease in resource utilization and improves the system throughput. The results show that the performance gains of CLA-CoMP and NCLA-CoMP are higher than those of the non-CoMP method, and that the proposed CLA-CoMP improves the system throughput more than the NCLA-CoMP method.

References 1. Liu, Y., Li, X., Ji, H., Wang, K., Zhang, H.: Joint APs selection and resource allocation for self-healing in ultra dense network. In: IEEE Computer, Information and Telecommunication Systems (CITS) (2016) 2. Yu, W., Xu, H., Zhang, H., Griffith, D., Golmie, N.: Ultra-dense networks: survey of state of the art and future directions. In: IEEE Computer Communication and Networks (ICCCN), 15 September 2016 3. Gao, Y., Cheng, L., Zhang, X., Zhu, Y., Zhang, Y.: Enhanced power allocation scheme in ultradense small cell network. In: IEEE China Communications, February 2016 4. Liu, L., Garcia, V., Tia, L., Pa, Z., Shi, J.: Joint clustering and inter-cell resource allocation for CoMP in ultra dense cellular networks. In: IEEE Communications (ICC) (2015) 5. Sun, H., Yang, T.: Performance evaluation of distributed scheduling for downlink coherent joint transmission. In: IEEE Vehicular Technology Conference (VTC Fall), September 2015 6. Cheng, L., Gao, Y., Li, Y., Yang, D., Liu, X.: A cooperative resource allocation scheme based on self-organized network in ultra-dense small cell deployment. In: IEEE Vehicular Technology Conference (VTC Spring) 2015 7. Xu, M., Guo, D., Honig, M.L.: Two-cell downlink noncoherent cooperation without transmitter phase alignment. In: Proceedings of IEEE GLOBECOM, pp. 1–5 (2010)

Doppler Elaboration for Vibrations Detection Using Software Defined Radar

Antonio Raffo and Sandra Costanzo(✉)

DIMES, University of Calabria, 87036 Rende, CS, Italy
[email protected]

Abstract. A Doppler elaboration based on a Software Defined Radar (SDRadar) system is proposed in this work as an alternative to standard hardware architectures for vibrations detection. An SDRadar prototype, fully realized via software, is implemented to satisfy various frequency detection requirements, even in the presence of slow and small oscillations, by simply changing the useful parameters (e.g. bandwidth and acquisition time) in real time. Experimental validations with a device able to produce a harmonic motion are discussed to prove the proper detection capabilities of the proposed architecture.

Keywords: SDRadar · Vibrations detection · Doppler radar

1 Introduction

Current challenges in the framework of radar research are addressed to the development of flexible radars with multipurpose features. Remote sensing techniques based on radar architectures have been proposed in many monitoring fields [1], also exploiting the increasing availability of flexible radiating elements [2–4], which provide a very interesting alternative to classical mechanically moved reflectors and lead to many benefits, such as simpler architectures, increased efficiency, instantaneous radar beam positioning [5, 6], and the total absence of mechanical vibrations.
The use of Doppler radar techniques is increasingly popular for monitoring vibrations, especially in the industrial and civil contexts, but also in biomedical engineering and security, as an alternative to punctual position sensors. Different methods exist to analyze this kind of phenomenon, but each of them is basically appropriate for the study of a certain type of occurrence. The most commonly used technologies adopt piezoelectric sensors or micromechanical systems (MEMS), which are placed in direct contact with the activity source to be monitored. They achieve good results [7–9]; however, the sensors are directly subject to mechanical stresses, thus being exposed to progressive degradation during their operation time. Methods implemented with optical fibers placed at a small distance from the monitored items obtain higher reliability [10–12], but, similarly to piezoelectric sensors, they generally offer an operating bandwidth in the order of a few kHz [13, 14], and they require the use of sophisticated signal analyzers to obtain good resolution in the results.



In this context, the use of Doppler radar is a valid alternative to punctual sensors for vibration monitoring in various contexts, such as remote monitoring of the dynamic characteristics of buildings [15, 16]. The adoption of Doppler radar in the biomedical field, for the development of physiological sensors able to monitor breathing and heart rate, has been indicated since the 1970s [17], showing the effectiveness of such non-invasive measurements, compared to contact methods, in preserving the physiological integrity of the human subject [18]. Even if many contributions exist in the literature, all of them are based on laboratory test equipment or custom hardware on printed/integrated circuits, which makes the systems rather bulky and expensive [19–22]. A new approach to Doppler radar implementation is proposed in this work. This flexible and low-cost solution, referred to as Software Defined Radar (SDRadar), can be obtained using an SDR transceiver [23], which leads to a multi-function radar composed of RF hardware modules fully reconfigurable via software.
SDRadar systems [24, 25] offer an interesting opportunity in the framework of radar technology, through their ability to realize most of the basic operations (e.g. modulation, demodulation, filtering and mixing) by the simple use of programmable software modules instead of specific hardware components. This ability provides strong versatility in terms of signal generation and processing, leading to faster and cheaper development and manufacture compared to conventional custom radars. Many researchers are focusing their attention on SDRadar systems; in particular, in [26, 27] a Software Defined FMCW radar architecture is proposed to provide a novel solution for target detection in landslide monitoring scenarios.
Starting from the outlined literature scenario, a Doppler elaboration for vibrations detection using SDRadar is developed in this work. This choice, instead of a hardware platform, is made to overcome the limits imposed by electronic circuitries, in particular in terms of detectable frequency. As a matter of fact, SDRadar is strongly flexible, and the system is suitable for the detection of vibrations originating from different phenomena, such as those generated by industrial plants, or in security and emergency applications based on body motion detection.

2 Vibration Detection in Continuous Wave Radar

Some types of radar are able to exploit the Doppler effect to detect the speed of targets intercepted by their beam. As a matter of fact, when a microwave field is reflected from a moving object, the wave is subject to a frequency shift proportional to the object speed; this phenomenon is known as the Doppler effect, and it can be successfully used to study the target motion. A Doppler radar motion sensing system typically transmits a continuous-wave (CW) signal, which is reflected off a target and then demodulated in the receiver. In this case, let us assume the transmitted signal has a single frequency component $f_0$:

$$t_x(t) = \sin\left(2\pi f_0 t\right) \qquad (1)$$


When the signal $t_x$ is reflected back by a target at a nominal distance $d_0$ with a time-varying motion given by $x(t)$ (Fig. 1), the received signal can be approximated as described in [28]:

$$r_x(t) \approx A_r \sin\left(2\pi f_0 t - \frac{4\pi d_0}{\lambda} - \frac{4\pi x(t)}{\lambda}\right) \qquad (2)$$

Fig. 1. Continuous wave Doppler radar for vibrations detection.

The received signal is related to the transmitted signal, but it has a time delay given by the nominal distance $d_0$ of the target and a phase modulated by the periodic motion of the target. The motion information can be demodulated by a mixing operation followed by a low-pass filter. Under the small-angle approximation, these operations act as a phase demodulator, so the resulting signal $B_{LPF}(t)$ is approximately proportional to the displacement $x(t)$ due to motion:

$$B_{LPF}(t) = -\frac{A_r}{2}\,\frac{4\pi}{\lambda}\left(d_0 + x(t)\right) \qquad (3)$$

After performing the analog-to-digital conversion of the filtered signal, a Fast Fourier Transform algorithm (FFT) can be used to study the motion.
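A minimal Python sketch of this processing chain, built directly from the equations above, is given below. The simulation values (sampling rate, wavelength, displacement amplitude and oscillation frequency) are our own illustrative assumptions, not the experimental settings of Sect. 3.

```python
import numpy as np

# Assumed simulation parameters (illustrative only)
fs = 3000.0          # baseband sampling rate [Hz]
T = 1.0              # acquisition time [s]
lam = 0.125          # wavelength lambda [m]
f_osc = 100.0        # target oscillation frequency [Hz]
amp = 1e-4           # displacement amplitude [m] (~0.1 mm)
d0 = 0.05            # nominal target distance [m]

t = np.arange(0, T, 1.0 / fs)
x = amp * np.sin(2 * np.pi * f_osc * t)            # target displacement x(t)

# Baseband signal after mixing + low-pass filtering, Eq. (3): proportional to d0 + x(t)
b_lpf = -(4 * np.pi / lam) * (d0 + x) / 2.0

# FFT of the demodulated signal; the spectral peak (DC removed) reveals the oscillation frequency
spectrum = np.abs(np.fft.rfft(b_lpf - b_lpf.mean()))
freqs = np.fft.rfftfreq(len(b_lpf), d=1.0 / fs)
print("detected oscillation frequency: %.1f Hz" % freqs[spectrum.argmax()])
```

With a 1 s acquisition the frequency resolution is 1 Hz, consistent with the parameters later reported in Table 1.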

3 Experimental Results

In order to demonstrate the detection capabilities of the proposed solution, a configuration based on the use of SDRadar is fully implemented in LabVIEW software. The radar architecture is essentially composed of the SDR transceiver NI USRP 2920, directly interfaced to a PC for data acquisition and processing. Two standard antennas, an omnidirectional dipole in transmission and a strongly near-field Impinji A0303 in reception, are adopted. The displacement produced by the membrane of a standard speaker is considered. Through audio signal generation software, by connecting the speaker to the output of a PC sound card, it is possible to produce a controlled variable-frequency membrane displacement, in the order of 0.1 mm. The antennas and the target are placed at a mutual distance equal to 5 cm (Fig. 2).

Fig. 2. Vibrating membrane in the presence of full SDRadar Doppler configuration.

The full experimental setup diagram is shown in Fig. 3.

Fig. 3. Full experimental setup diagram.

The SDRadar Doppler platform is validated through several experiments, by producing audio signals and hence vibrations with different frequencies. A first test is performed considering the presence of a single target with a fixed oscillation frequency. The demodulated displacement data x(t), related to a captured scene (at a given time), are shown in the time domain (Fig. 4).


Fig. 4. Captured scene in the presence of an oscillating target in time domain.

Additional tests are performed for different oscillation frequencies, whose experimental results are summarized in Table 1 and compared in Fig. 5. They all show an accurate reconstruction of the oscillation frequency fixed in the experiments, whose values are reported in the last column of Table 1 and highlighted by the peaks visible in the amplitude spectra of Fig. 5.

Table 1. Parameters and results of the experimental validation test.
B       fmax      T0    Δf     fosc
3 kHz   1.5 kHz   1 s   1 Hz   20 Hz
3 kHz   1.5 kHz   1 s   1 Hz   100 Hz
3 kHz   1.5 kHz   1 s   1 Hz   200 Hz
3 kHz   1.5 kHz   1 s   1 Hz   300 Hz

Fig. 5. Parameters and results of the experimental validation test.

4 Conclusions

An innovative SDRadar Doppler platform has been proposed and implemented as an alternative multipurpose solution to the use of punctual sensors for vibration monitoring in various contexts. Experimental validations have been performed, through the use of a standard speaker, to produce small vibrations characterized by different oscillation frequencies in the range 20–300 Hz. These preliminary measurement results prove the accurate detection capability of the proposed system. Applications in the biomedical and civil contexts will be considered in future studies.

References 1. Costanzo, S., Di Massa, G., Costanzo, A., Borgia, A., Papa, C., Alberti, G., Salzillo, G., Palmese, G., Califano, D., Ciofanello, L., Daniele, M., Facchinetti, C., Longo, F., Formaro, R.: Multimode/multifrequency low frequency airborne radar design. J. Electr. Comput. Eng. 2013, 1–9 (2013) 2. Costanzo, S., Venneri, F., Raffo, A., Di Massa, G., Corsonello, P.: Active reflectarray element with large reconfigurability frequency range. In: 9th European Conference on Antennas and Propagation, EuCAP 2015, Lisbon (2015) 3. Venneri, F., Costanzo, S., Di Massa, G., Borgia, A., Raffo, A.: Frequency agile radial-shaped varactor-loaded reflectarray cell. Radioengineering 25, 253–257 (2016) 4. Costanzo, S., Venneri, F., Raffo, A., Di Massa, G., Corsonello, P.: Radial-shaped single varactor-tuned phasing line for active reflectarrays. IEEE Trans. Antennas Propag. 64, 3254– 3259 (2016) 5. Hum, S., Perruisseau-Carrier, J.: Reconfigurable reflectarrays and array lenses for dynamic antenna beam control: a review. IEEE Trans. Antennas Propag. 62, 183–198 (2014) 6. Venneri, F., Costanzo, S., Di Massa, G., Borgia, A., Corsonello, P., Salzano, M.: Design of a reconfigurable reflectarray based on a varactor tuned element. In: 6th European Conference on Antennas and Propagation, EuCAP 2012, Prague, pp. 2628–2631 (2012) 7. Jung, I., Roh, Y.: Design and fabrication of piezoceramic bimorph vibration sensors. Sens. Actuators A: Phys. 69, 259–266 (1998) 8. Sumali, H., Meissner, K., Cudney, H.: A piezoelectric array for sensing vibration modal coordinates. Sens. Actuators A: Phys. 93, 123–131 (2001) 9. Vogl, A., Wang, D., Storås, P., Bakke, T., Taklo, M., Thomson, A., Balgård, L.: Design process and characterisation of a high-performance vibration sensor for wireless condition monitoring. Sens. Actuators A: Phys. 153, 155–161 (2009) 10. Peiner, E., Scholz, D., Schlachetzki, A., Hauptmann, P.: A micromachined vibration sensor based on the control of power transmitted between optical fibres. Sens. Actuators A: Phys. 65, 23–29 (1998) 11. Conforti, G., Brenci, M., Mencaglia, A., Mignani, A.: Fiber optic vibration sensor for remote monitoring in high power electric machines. Appl. Opt. 28, 5158 (1989) 12. Zook, J., Herb, W., Bassett, C., Stark, T., Schoess, J., Wilson, M.: Fiber-optic vibration sensor based on frequency modulation of light-excited oscillators. Sens. Actuators A: Phys. 83, 270– 276 (2000)

1028

A. Raffo and S. Costanzo

13. Jelic, M., Stupar, D., Dakic, B., Bajic, J., Slankamenac, M., Zivanov, M.: An intensiometric contactless vibration sensor with bundle optical fiber for real time vibration monitoring. In: IEEE 10th Jubilee International Symposium on Intelligent Systems and Informatics, pp. 395– 399 (2012) 14. Palmetshofer, W.: Contactless vibration measurement for condition monitoring. Asset Manag. Maint. J. 27, 45–47 (2016) 15. Pieraccini, M., Fratini, M., Parrini, F., Macaluso, G., Atzeni, C.: High-speed CW stepfrequency coherent radar for dynamic monitoring of civil engineering structures. Electron. Lett. 40, 907 (2004) 16. Grazzini, G., Pieraccini, M., Dei, D., Atzeni, C.: Simple microwave sensor for remote detection of structural vibration. Electron. Lett. 45, 567 (2009) 17. Lin, J.: Non-invasive microwave measurement of respiration. Proc. IEEE. 63 (1975) 18. Lin, J.: Microwave sensing of physiological movement and volume change: a review. Bioelectromagnetics. 13, 557–565 (1992) 19. Papi, F., Donati, N., Pieraccini, M.: Handy microwave sensor for remote detection of structural vibration. In: 7th European Workshop on Structural Health Monitoring, Nantes, pp. 451–456 (2014) 20. Gu, C., Inoue, T., Li, C.: Analysis and experiment on the modulation sensitivity of doppler radar vibration measurement. IEEE Microw. Wirel. Compon. Lett. 23, 566–568 (2013) 21. Lohman, B., Boric-Lubecke, O., Lubecke, V., Ong, P., Sondhi, M.: A digital signal processor for Doppler radar sensing of vital signs. IEEE Eng. Med. Biol. Mag. 21, 161–164 (2002) 22. Hafner, N., Lubecke, V.: Performance assessment techniques for Doppler radar physiological sensors. In: 31st Annual International conference of the IEEE EMBS, Minneapolis, pp. 4848– 4851 (2009) 23. Costanzo, S., Spadafora, F., Borgia, A., Moreno, H., Costanzo, A., Di Massa, G.: High resolution software defined radar system for target detection. J. Electr. Comput. Eng. 2013, 1–7 (2013) 24. Costanzo, S., Spadafora, F., Moreno, O., Scarcella, F., Di Massa, G.: Multiband software defined radar for soil discontinuities detection. J. Electr. Comput. Eng. 2013, 1–6 (2013) 25. Zhang, H., Li, L., Wu, K.: 24 GHz software-defined radars system for automotive applications. In: European Conference on Wireless Technologies, Munich, pp. 138–141 (2007) 26. Costanzo, S., Massa, G., Costanzo, A., Borgia, A., Raffo, A., Viggiani, G., Versace, P.: Software-defined radar system for landslides monitoring. In: Rocha, Á., Correia, A.M., Adeli, H., Reis, L.P., Teixeira, M.M. (eds.) New Advances in Information Systems and Technologies. AISC, vol. 445, pp. 325–331. Springer, Cham (2016). doi: 10.1007/978-3-319-31307-8_34 27. Costanzo, S., et al.: Low-cost radars integrated into a landslide early warning system. In: Rocha, A., Correia, A.M., Costanzo, S., Reis, L.P. (eds.) New Contributions in Information Systems and Technologies. AISC, vol. 354, pp. 11–19. Springer, Cham (2015). doi: 10.1007/978-3-319-16528-8_2 28. Xiao, Y., Lin, J., Boric-Lubecke, O., Lubecke, M.: Frequency-tuning technique for remote detection of heartbeat and respiration using low-power double-sideband transmission in the Ka-band. IEEE Trans. Microw. Theory Tech. 54, 2023–2032 (2006)

Application Scenarios of Novel Multiple Access (NMA) Technologies for 5G

Shuliang Hao¹(✉), Jie Zeng², Xin Su², and Liping Rong²

1 Broadband Wireless Access Laboratory, Chongqing University of Posts and Telecommunications, Chongqing, China
[email protected]
2 Tsinghua National Laboratory for Information Science and Technology, Research Institute of Information Technology, Tsinghua University, Beijing, China
[email protected]

Abstract. Three typical application scenarios of novel multiple access (NMA) technologies in 5G, namely eMBB (enhanced mobile broadband), URLLC (ultra-reliable and low-latency communications) and mMTC (massive machine type communications), are presented in detail in this paper. The target requirements in the different application scenarios are also briefly described with explicit examples. It can be seen from the evaluation results that the implementation of NMA technologies has a beneficial effect on satisfying the various requirements in the mentioned scenarios.

Keywords: eMBB · mMTC · URLLC · Novel multiple access

1 Introduction

As defined in international mobile telecommunications (IMT) for 2020 and beyond [1], 5G radio technology should focus on supporting three categories of typical scenarios: enhanced mobile broadband (eMBB), ultra-reliable and low-latency communications (URLLC), and massive machine type communications (mMTC). The requirements of these typical scenarios are different, and it is extremely challenging to satisfy them by using orthogonal multiple access (OMA) schemes. In order to further meet these requirements, novel multiple access (NMA) has emerged as a promising candidate technology. This paper aims to show that NMA schemes are more suitable for meeting the requirements of these diverse scenarios. (This work was supported by China's 863 Project (No. 2015AA01A709), the National S&T Major Project (No. 2016ZX03001017), and by the Science and Technology Program of Beijing (No. D161100001016002).)
NMA schemes include non-orthogonal multiple access (NOMA) [2], sparse code multiple access (SCMA) [3], pattern division multiple access (PDMA) [4], etc. NMA is preferable for application in these typical 5G scenarios. All these NMA schemes can provide more system capacity, more total throughput and higher spectrum efficiency (SE). Further performance gain can be achieved by grant-free based NMA schemes. Moreover, some enhanced schemes can be applied in these typical scenarios. All the aforementioned points contribute to meeting the different requirements of eMBB, URLLC and mMTC.
The rest of this paper is organized as follows: in Sect. 2, the application scenarios of NMA schemes are described in detail; Section 3 shows the performance evaluation results in the different scenarios; finally, in Sect. 4, concluding remarks are given.

2 Application Scenarios of NMA Schemes

For eMBB, the expected peak data rate is 20 Gbps for the downlink (DL) and 10 Gbps for the uplink (UL), and the target peak SE should be 30 bps/Hz for DL and 15 bps/Hz for UL. mMTC requires a connection density of $10^6$ devices/km² in urban environments, large coverage with 164 dB maximum coupling loss, and a 15-year battery life for low-cost devices. In the case of URLLC, the tighter requirements are 0.5 ms user-plane latency for UL and DL and 99.999% reliability [5].
Different NMA schemes share the same principle: mobile users using the same physical time-frequency resource are differentiated by different signatures at the transmitter side and combined through the wireless channel; the receiver can then separate the superposed signal by using advanced multi-user detection algorithms. NMA can bring several potential benefits: (1) higher reliability due to a lower collision probability and/or better receiver detection performance; (2) higher capacity; (3) more total throughput; (4) higher SE [6]. Further performance gain can be achieved by grant-free based NMA. Grant-free transmission can [7]: (1) save signaling overhead; (2) reduce latency by avoiding the grant request and grant transmission; (3) save energy. Grant-free transmission is beneficial for large numbers of infrequent small packets and for extremely low latency in UL mMTC and URLLC. Grant-free based NMA can provide collision resolution or robustness, which brings merits for URLLC.
eMBB focuses on providing mobile users with high data rates that can be used, for example, for live video streaming from the user or for virtual reality gaming and augmented reality, as shown in Fig. 1(a). Large packets would be carried in the DL, and sporadic small packets carrying interaction data from the user equipment to control the video streaming would appear in the UL. With the higher capacity and throughput provided by NMA, customers can enjoy high definition (HD) video. Furthermore, eMBB should stabilize the connections of users and guarantee a minimal data rate everywhere [1].
mMTC is a service for a massive number of internet of things (IoT) devices. Each device is only sporadically active and very often has a tiny amount of data to send. There are also use cases, shown in Fig. 1(b), such as smart buildings, asset tracking in logistics, smart meters, smart agriculture and capillary networks. NMA can support enormous numbers of devices accessing the network. Besides, grant-free based NMA enables a device to remain in discontinuous reception (DRX) longer, saving energy.
URLLC refers to low-latency transmissions of small payloads among a relatively small set of communication nodes, used in mission-critical applications such as industrial automation and remote interaction with critical infrastructure. The challenge is how to guarantee ultra-reliability and low latency.


Fig. 1. Classes of application scenarios

One use case presented in Fig. 1(c) for URLLC is the autonomous car. In order to ensure the safety of customers, high reliability and low latency of data transmission must be guaranteed. Grant-free based NMA can reduce the collision probability and achieve more robustness to improve reliability; furthermore, grant-free transmission can reduce the signaling overhead to realize low latency.
Additionally, some enhanced schemes applied in these typical scenarios can bring further performance gains: adaptive switching between grant-based and grant-free transmission can provide flexibility for handling urgent events or reconfiguring resources for the user equipment (UE) [8]; a hybrid of NMA and OMA can provide flexibility for transmitting packets of different sizes [9]; and cooperation with multiple-antenna technology can bring diversity gain and thus improve reliability [10].
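To make the superposition/multi-user detection principle described above concrete, the following minimal Python sketch simulates a two-user downlink power-domain NOMA transmission with successive interference cancellation (SIC) at the near user. The power split, channel gains and BPSK modulation are our own illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
p_far, p_near = 0.8, 0.2        # more power allocated to the far (weak-channel) user
h_near = 1.0                    # assumed channel gain of the near user
noise_std = 0.05

# BPSK symbols for the two users, superposed on the same time-frequency resource
s_far = rng.choice([-1.0, 1.0], n)
s_near = rng.choice([-1.0, 1.0], n)
x = np.sqrt(p_far) * s_far + np.sqrt(p_near) * s_near

# Near user's receiver: decode the far user's (stronger) signal first ...
y_near = h_near * x + noise_std * rng.standard_normal(n)
s_far_hat = np.sign(y_near)                                   # near-user signal treated as noise
# ... then cancel it (SIC) and decode the near user's own signal
residual = y_near - h_near * np.sqrt(p_far) * s_far_hat
s_near_hat = np.sign(residual)

print("far-user symbol error rate (seen at near UE):", np.mean(s_far_hat != s_far))
print("near-user symbol error rate after SIC:       ", np.mean(s_near_hat != s_near))
```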

3 Results and Analysis

The NOMA scheme, as one of the NMA schemes, can be applied in these typical 5G scenarios, including eMBB to improve SE, and mMTC and URLLC to improve connection density. The performance of the NOMA scheme was evaluated and compared with OMA in [11]. The main simulation assumptions are shown in Table 1; see [11] for more details.

Table 1. Simulation assumptions
Carrier: 2 GHz
Bandwidth: UL: 5 MHz; DL: 10 MHz
Channel model: ITU UMa
Antenna configuration: UL: 1Tx, 2Rx; DL: 2Tx, 2Rx
Receiver: Minimum mean square error (MMSE) for OMA; belief propagation based iterative detection-decoding (BP-IDD) for NOMA
Scheduler: UL: grant-free; DL: proportional fair
Channel estimation: Perfect
Traffic model: UL: burst traffic with small packets; DL: full buffer


Figure 2 shows these use cases and the related gains of NOMA over OMA in the 5G new air interface. Compared with OMA, the NOMA scheme can improve SE and support a larger number of users accessing the network. NOMA can improve SE by about 30% for DL and 100% for UL in eMBB. NOMA can also be used for mMTC and URLLC applications to increase the number of user connections, by 5 times for mMTC and by 9 times for URLLC [12].

Fig. 2. Use cases and gain of NOMA over OMA in 5G

Besides NOMA, other NMA schemes, such as SCMA and PDMA, can also provide performance gains in these scenarios, so it is unnecessary to go into details here. It is easy to conclude that NMA schemes are better suited to these 5G application scenarios.

4 Conclusion

In this paper, the requirements of diverse 5G scenarios were presented. Compared with OMA, NMA is more appropriate for these typical application scenarios in 5G, and grant-free based transmission brings further advantages. The performance evaluations in these scenarios demonstrate that NMA schemes can be applied well in eMBB, URLLC, and mMTC. NMA technologies can contribute much to the development of 5G wireless communication.

References 1. Report ITU-R M.2083-0: Framework and overall objectives of the future development of IMT for 2020 and beyond, September 2015. http://www.itu.int/ITU-R/ 2. Benjebbour, A., Saito, Y., Kishiyama, Y., Li, A., Harada, A., Nakamura, T.: Concept and practical considerations of non-orthogonal multiple access (NOMA) for future radio access. In: 2013 International Symposium on Intelligent Signal Processing and Communications Systems (ISPACS), Naha, pp. 770–774 (2013) 3. Taherzadeh, M., Nikopour, H., Bayesteh, A., Baligh, H.: SCMA codebook design. In: 2014 IEEE 80th Vehicular Technology Conference, Vancouver, BC, pp. 1–5 (2014)


4. Chen, S., Ren, B., Gao, Q., Kang, S., Sun, S., Niu, K.: Pattern Division Multiple Access (PDMA) - a novel non-orthogonal multiple access for 5G radio networks. IEEE Trans. Veh. Technol. 5(99), 1–16 (2016) 5. 3GPP TR 38.913 V0.3.0: Study on Scenarios and Requirements for Next Generation Access Technologies (Release 14), March 2016 6. 3GPP R1-164876: Design Target on Multiple Access Schemes for NR. Panasonic, Nanjing, 23–27 May 2016 7. 3GPP R1-1609398: Uplink Grant-Free Access for 5G mMTC. Lenovo, Lisbon, 10–15 October 2016 8. 3GPP R1-1609228: General Procedures for Grant-Free/Grant-Based MA. LG Electronics, Lisbon, 10–14 October 2016 9. 3GPP R1-164178: Uplink Non-orthogonal Multiple Access for NR Technology. Intel Corporation, Nanjing, 23–27 May 2016 10. 3GPP R1-1609399: Discussion on Grant-Free Based UL Transmission. Lenovo, Lisbon, 10– 14 October 2016 11. 3GPP R1-162306: Candidate Solution for New Multiple Access. CATT, Busan, 11–15 April 2016 12. 3GPP R1-162305: Multiple Access for 5G New Radio Interface. CATT, Busan, 11–15 April 2016

A Unified Framework of New Multiple Access for 5G Systems

Bin Fan², Xin Su¹, Jie Zeng¹(✉), and Bei Liu²

1 Tsinghua National Laboratory for Information Science and Technology, Research Institute of Information Technology, Tsinghua University, Beijing, China
{suxin,zengjie}@tsinghua.edu.cn, [email protected]
2 Broadband Wireless Access Laboratory, Chongqing University of Posts and Telecommunications, Chongqing, China
[email protected]

Abstract. In order to meet the 5G requirements on spectral efficiency and the number of connections, new multiple access (NMA) is becoming an important technology in 5G. Different from orthogonal multiple access (OMA), NMA technology is tolerant of symbol collisions in orthogonal channels, thereby significantly increasing the number of served users. In this paper, a unified framework of NMA is proposed for the next generation radio access networks. The evaluation results of several NMA schemes, such as sparse code multiple access (SCMA), pattern division multiple access (PDMA) and interleaver-grid multiple access (IGMA), are given, and the results show that NMA can obtain a significant block error rate (BLER) performance gain compared with orthogonal frequency division multiple access (OFDMA).

Keywords: 5G · New multiple access · Unified framework

1 Introduction

From the perspective of information theory, the wireless channel is a classical multiple access channel [1]. NMA technology not only enhances the spectrum efficiency, but also approaches the multi-user channel capacity and supports overloaded transmission. Furthermore, NMA enables reliable and low-latency grant-free transmission and flexible service multiplexing [2]. At present, the industry has put forward many candidate NMA technologies [3], such as SCMA [4], PDMA [5], IGMA [6], multi-user shared access (MUSA) [7], power domain non-orthogonal multiple access (NOMA) [8], resource spread multiple access (RSMA) and so on [9]. The main contribution of this paper is proposing a unified framework of NMA technology, including the principle, application analysis, and performance evaluation. (This work was supported by China's 863 Project (No. 2015AA01A709), the National S&T Major Project (No. 2016ZX03001017), the Science and Technology Program of Beijing (No. D161100001016002), and by Beijing Samsung Telecom R&D Center.)

2 Unified Framework

2.1 Principle of the Framework
All proposed NMA technologies for UL transmission share common features to suppress inter-user interference and provide overloading capability [10]. To support this operational requirement, in this section we propose a compatible multiple access uplink framework, as depicted in Fig. 1. This framework can reuse related modules, improve resource utilization and reduce overhead. With this unified uplink framework, we can flexibly configure different multiple access schemes according to the various 5G scenarios while minimizing the hardware functional modules.

Fig. 1. Unified framework

As depicted in Fig. 1, the differences among the various multiple access technologies lie in the different realizations of the interleaver, constellation optimization, factor graph and multiplexing domain.

2.2 Application of the Framework
Based on the above framework, a wide range of NMA technologies can be merged into this unified structure. SCMA is a code-domain NMA technology based on multi-dimensional modulation design and sparse spreading. PDMA can achieve multiplexing and diversity gain by designing the multi-user pattern matrix. MUSA is an NMA technology operating in the complex code domain. The IGMA scheme distinguishes different users based on different combinations of bit-level interleavers and grid mapping patterns. The elements of the sparse matrix may differ among the various technologies, e.g. they can only be "0" or "1" in SCMA and PDMA, while they can also be "−1" in MUSA. Considering the relatively independent design of the interleaver and multiplexing-domain modules with respect to the constellation and factor graph, and the possibility of hybrid multiple access schemes, we put more emphasis on the design of the NMA technologies. For example, in the case of 3 users sharing 2 resources, the common sparse codebook set can be written as the following matrix:


$$\mathbf{C}_{[2,3]} = \begin{bmatrix} \sqrt{\alpha}\,e^{-j\theta_{1,1}} & \sqrt{1-\alpha}\,e^{-j\theta_{2,1}} & 0 \\ \sqrt{\alpha}\,e^{-j\theta_{1,2}} & 0 & \sqrt{1-\alpha}\,e^{-j\theta_{3,2}} \end{bmatrix} \qquad (1)$$

where $\alpha$ is the power scaling factor and $\theta$ is the phase shifting factor. The optimal values of the power scaling and phase shifting depend on the number of users and the shape of the input constellation. The channel coding process in the unified framework can either use simple repetition or directly use a low-coding-rate forward error correction (FEC) code. For the interleaver, the coded bits of each user are mapped to symbols, without considering the bits/symbols of the other scheduled users.
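The following Python sketch builds the 2-resource, 3-user sparse codebook of Eq. (1) for given scaling and phase values. The numerical values of α and θ used here are purely illustrative assumptions; as noted above, the optimal values depend on the constellation and the number of users.

```python
import numpy as np

def sparse_codebook(alpha, theta):
    """Builds the 2-resource / 3-user sparse codebook C[2,3] of Eq. (1).

    alpha: power scaling factor in (0, 1).
    theta: dict of phase shifts keyed as (user, resource), 1-indexed as in Eq. (1).
    """
    a, b = np.sqrt(alpha), np.sqrt(1.0 - alpha)
    return np.array([
        [a * np.exp(-1j * theta[(1, 1)]), b * np.exp(-1j * theta[(2, 1)]), 0.0],
        [a * np.exp(-1j * theta[(1, 2)]), 0.0, b * np.exp(-1j * theta[(3, 2)])],
    ])

# Illustrative values only
theta = {(1, 1): 0.0, (2, 1): np.pi / 4, (1, 2): np.pi / 3, (3, 2): np.pi / 6}
C = sparse_codebook(alpha=0.5, theta=theta)
print(np.round(C, 3))   # column j is the spreading signature of user j over the 2 resources
```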

3 Performance Evaluation

From all the new multiple access schemes observed, we can see that some schemes have common features based on OFDMA. At the 3GPP RAN WG1 Meeting #86bis, companies submitted simulation analyses of a variety of NMA technologies [11]. Based on these simulation results, this section compares the non-orthogonal SCMA, PDMA and IGMA schemes with the OFDMA multiple access technology. The main evaluation parameters of the uplink LLS are listed in Table 1.

Table 1. Evaluation parameters
Carrier frequency: 2 GHz
Waveform: OFDM
System bandwidth: 10 MHz
Transmission bandwidth: 4 RB
Antenna configuration: 1T2R
Propagation channel: TDL-C 300 ns, 3 km/h
Channel coding: LTE turbo
Channel estimation: Ideal

Huawei has shown that SCMA is robust to codebook collisions under random codebook allocation [12]. PDMA enables robust grant-free transmission under different overloading factors [13]. IGMA can be beneficial in combating inter-channel interference (ICI) in the multi-cell case [14]. The BLER performance comparisons between PDMA and OFDMA, and between IGMA and OFDMA, are shown in Fig. 2. From the simulation comparison results, we observe that code-domain spreading NMA has a significant gain over OFDMA.


Fig. 2. Performance comparison

4 Conclusion

Facing the huge data traffic of the future, and in order to meet the requirements of the 5G system, the problem of increasing system capacity and improving spectral efficiency should be considered and solved. In this paper, we proposed an uplink unified framework of NMA and analyzed the feasibility of the framework for the 5G system. According to the comparison of link-level simulation results, NMA technologies such as SCMA, IGMA and PDMA can obtain better performance than OFDMA. Future work of interest is to apply NMA to software defined radio (SDR) [15], together with the approach of convergence for three radio solutions: Wi-Fi, iBeacon and ePaper.

References 1. Whitepaper: Alternative multiple access v1, Future (2015) 2. 3GPP TR 36.913 V0.2.1, Study on scenarios and requirements for next generation access technologies (2015) 3. 3GPP TSG RAN WG1 Meeting #86 R1-167445: Classification of candidate UL nonorthogonal MA schemes Gothenburg, Sweden, China Telecom (2016) 4. Lei, L., Yan, C., Wenting, G., Huilian, Y., Yiqun, W., Shuangshuang, X.: Prototype for 5G New Air Interface Technology SCMA and Performance Evaluation. Huawei Technologies Co, Ltd., Shanghai (2015) 5. Chen, S., Ren, B., Gao, Q., Kang, S., Sun, S., Niu, K.: Pattern Division Multiple Access (PDMA)-A Novel Non-orthogonal Multiple Access for 5G Radio Networks (2016) 6. 3GPP TSG-RAN WG1 Meeting #85 R1-163992: Non-orthogonal multiple access candidate for NR, Nanjing, China, Samsung (2016) 7. 3GPP TSG RAN WG1 Meeting #84bis R1-162226: Discussion on multiple access for new radio interface, ZTE Busan, Korea (2016) 8. Xu, P., Ding, Z., Dai, X., Vincent Poor, H.: NOMA: An Information Theoretic Perspective (2015) 9. 3GPP TSG-RAN WG1 #85 R1-164688: RSMA, Qualcomm Incorporated Nanjing, China (2016) 10. White Paper, “5G SIG”, FuTURE Forum (2015)


11. 3GPP TSG RAN WG1 #86bis Lisbon: Portugal, Summary of updated assumptions and results for NR MA, Agenda item: 8.1.1.2 (2016) 12. 3GPP TSG RAN WG1 Meeting #86 R1-166094: LLS Results for UL MA schemes, Huawei, HiSilicon Gothenburg, Sweden (2016) 13. 3GPP TSG RAN WG1 Meeting #86 R1-167870: LLS results of PDMA, CATT Gothenburg, Sweden (2016) 14. 3GPP TSG RAN WG1 Meeting #86 R1-166750: Link level performance evaluation for IGMA, Samsung Gothenburg, Sweden (2016) 15. Suciu, G., Vochin, M., Diaconu, C., Suciu, V., Butca, C.: Convergence of software defined radio: WiFi, ibeacon and epaper. In: RoEduNet Conference: Networking in Education and Research, 2016 15th, pp. 1–5. IEEE (2016)

Research on Handover Procedures of LTE System with the No Stack Architecture

Lu Zhang¹(✉), Lu Ge², Xin Su², Jie Zeng², and Liping Rong²

1 Broadband Wireless Access Laboratory, Chongqing University of Posts and Telecommunications, Chongqing, China
[email protected]
2 Tsinghua National Laboratory for Information Science and Technology, Research Institute of Information Technology, Tsinghua University, Beijing, China

Abstract. In this paper, we propose a new architecture for LTE (Long-Term Evolution) cellular networks with No Stack (short for "Not Only Stack"). The layer-by-layer protocol stack is pushed down in No Stack. Three key points are introduced, including the C/U/M plane separation, the GNV (Global Network View) and the GC (Global Controller). We focus on the procedures for the S1 handover in the LTE network with the No Stack architecture, and compare the handover procedure of the proposed architecture with that of traditional LTE. The numerical evaluation results show that the proposed architecture can reduce the signaling overheads and the delay of the handover procedure. Meanwhile, this architecture can increase the success ratio of the handover and optimize the usage of mobile network resources.

Keywords: No stack · Handover signaling · Network architecture

1 Introduction

In recent years, enormous growth of mobile networks has been observed. Such networks provide world-wide coverage, and users move so fast that many handovers can be triggered, which have to be handled in a very short time. As a result, the handover mechanism has to be fast and scalable; this is an important goal of ongoing work on 5G mobile networks. The traditional LTE network architecture shows the shortcomings of complexity, low efficiency and closure. When the user equipment (UE) needs to hand over, much signaling is exchanged between the UE and the network, so the handover delay is long and short-term signal blocking is easily caused. Software Defined Networking (SDN) is a new network innovation architecture, proposed by the Clean Slate research group at Stanford, in which the control and data forwarding planes are separated and a centralized controller software platform realizes programmable control of the underlying hardware, leading to flexible, on-demand deployment of resources [1]. (This work was supported by China's 863 Project (No. 2015AA01A706), the National S&T Major Project (No. 2015ZX03002004), and the Science and Technology Program of Beijing (No. D161100001016002).) Although this approach to virtualization can make mobile users' handover more efficient, the network overhead has not been reduced [2]. Therefore we use the Not Only Stack (No Stack) architecture: by changing the EPC part to a new network architecture, we introduce the global controller and global network view technologies into the EPC network [3]. This means that all elements are controlled by the global controller, and all network element information can be stored in the global network view. These characteristics provide favorable conditions for resolving the problem of network mobility management in terms of signaling overhead.

2 Handover Management with No Stack Architecture

In the traditional network architecture, the network elements and the protocol stack are closed and vertical. Managing its bearers segment by segment makes it inefficient, and the bottom-up, layer-by-layer structure cannot meet the need to flexibly reconfigure the network for service QoE; therefore, the vertical and closed boundaries of the network elements need to be broken and decoupled. The No Stack framework is thus designed to provide an achievable way to flexibly reconstruct the network [4]. According to the specific scenarios and needs, the management functions, control functions and user data processing functions contained in the protocol stack are decoupled, so that the C-plane, U-plane and M-plane functions are completely independent. The U-plane protocol is flat, the C-plane forms a logically centralized controller, and the M-plane, through the northbound interface of the controller, forms an open management and operational plane. This EPC network consists of five network elements, namely UE, e-NB, x-GW, GC and GNV. The control functions contained in each layer of the protocol stack are virtualized into a global controller (GC), and the GC is composed of the MME, the S-GW control plane, the P-GW control plane and so on. The GC can be directly connected to the other network elements and is logically linked to the UE. At the same time, the data part of the protocol stack is stored as a distributed database; this distributed database, located in multiple network elements, forms a global network view (GNV). The GNV consists of the report information of the M-plane, the context and session information of the C-plane, and the buffer of the U-plane. The GNV and GC are internally connected, so the GNV can be understood as a database stored in the GC. The GNV allows the GC to operate almost state-independently and enables convenient function customization and reconstruction.
According to the S1 handover process of LTE, the new mechanism shown in Fig. 1 is described as follows. The GNV stores all the parameter information of the handover preparation. The GC includes a handover command sequencing module and a handover command execution module. The interfaces between the GC and the GNV, the MME, and the x-GW control plane can be connected directly. The GC can read the corresponding module parameter information from the GNV and send the parameters to the corresponding C/U/M network elements. The GC and the U-plane exchange information through the OpenFlow protocol. The M-plane includes a handover decision module. There is no direct physical communication link between the UE and the GC, so the UE and the GC have a logical connection, interconnected through a software-configured interface. The final transfer of information is done through the U-plane data flow; the flow follows the flow table produced by the decision module, and the flow table contains three items (flow ID, destination, action), which are configured by the global controller.


Fig. 1. S1 handover mechanism based on No Stack architecture

3

S1 Handover Processes Based on No Stack Architecture

Before the source e-NB initiates the handover decision, the cell information about the coverage of the base stations is obtained by the global controller, and this information, including the cell handover threshold, carrier frequency, reserved channel resources and so on, is stored in the global network view. When handover is required, a handover execution command is sent directly from the global controller to the target e-NB, without first sending a handover request to the target e-NB.

Fig. 2. S1 handover process with No Stack architecture

The new S1 handover procedure, built on the global controller, the global network view, and the C/U/M-split No Stack frame, is shown in Fig. 2. The prerequisite is that the GNV has stored the measurement report from the UE and the status information reported by the source eNB. The global controller makes the S1 handover decision according to this measurement report; the steps are as follows (a sketch of this sequence is given after the list):

(1) The message contains parameters such as MME UE S1AP ID, E-RABs to be setup list, and source to target transparent container.
(2) The message contains parameters such as MME UE S1AP ID, eNB UE S1AP ID, E-RABs admitted list, target to source transparent container and so on.
(3) (a) The mobility control info in the reconfiguration message contains the rach-config dedicated parameter. (b) The GC reads the S-eNB status information in the GNV and then transmits the S-eNB status transfer information to the T-eNB.
(4) The UE uses the contention-based random access procedure for uplink synchronization to the target eNB and then sends the message to the GC.
(5) (a) The message includes UE S1AP IDs, cause and other parameters. (b) The message includes MME UE S1AP ID, eNB UE S1AP ID, E-RAB to be modified list and other parameters.
(6) (a) The S-eNB sends a UE context release complete message to the GC. (b) The x-GW sends a modify bearer request message to the GC, containing parameters such as MME UE S1AP ID, eNB UE S1AP ID, E-RAB modify list and so on.
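The sketch below walks through the six steps above as a single GC-driven function, assuming the GNV is a plain dictionary that already holds the measurement report and the S-eNB status. The message names and payload keys echo the parameters listed in the steps, but the function and its data layout are illustrative assumptions rather than the paper's implementation or the S1AP message format of 3GPP TS 36.413 [6].

def s1_handover(gnv):
    """Minimal sketch of the GC-driven S1 handover, assuming the GNV already
    holds the UE measurement report and the S-eNB status information."""
    report = gnv["measurement_report"]   # reported by the UE (prerequisite)
    status = gnv["s_enb_status"]         # reported by the source eNB (prerequisite)

    messages = []
    # (1)-(2) handover preparation towards the target eNB
    messages.append(("GC->T-eNB", {"MME UE S1AP ID": report["mme_ue_id"],
                                   "E-RABs to be setup list": report["erabs"]}))
    messages.append(("T-eNB->GC", {"eNB UE S1AP ID": "assigned by T-eNB",
                                   "E-RABs admitted list": report["erabs"]}))
    # (3) reconfiguration towards the UE and status transfer towards the T-eNB
    messages.append(("GC->UE (logical)", {"mobilityControlInfo": "rach-config dedicated"}))
    messages.append(("GC->T-eNB", {"S-eNB status transfer": status}))
    # (4) the UE performs random access to the T-eNB, then confirms to the GC
    messages.append(("UE->GC (logical)", {"handover confirm": True}))
    # (5)-(6) context release at the S-eNB and bearer modification at the x-GW
    messages.append(("GC->S-eNB", {"UE context release": report["mme_ue_id"]}))
    messages.append(("x-GW->GC", {"modify bearer request": report["erabs"]}))
    return messages

# Example run with placeholder GNV contents.
example_gnv = {"measurement_report": {"mme_ue_id": 7, "erabs": ["erab-1"]},
               "s_enb_status": {"pdcp_sn": 120}}
for hop, payload in s1_handover(example_gnv):
    print(hop, payload)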

4

Signaling Analysis

When user terminals move quickly, they may hand over frequently, which can lead to a lot of unnecessary overhead. The longer the forwarding chain, the greater the data-forwarding delay and the longer the handover delay. A serial forwarding path is not optimal, so a short, parallel forwarding path is preferable.

(1) Handover signaling. In terms of the number of signaling messages, the No Stack mechanism reduces the number of handover signaling messages from thirteen in the traditional LTE architecture to six, which greatly reduces the signaling overhead.

(2) Handover delay. The handover delay can be analyzed as delay_HO = T(HO_Preparation) + T(HO_Execution) + T(HO_Completion) (Table 1). With the No Stack architecture, the interaction between S-GW and P-GW becomes an internal procedure of the combined x-GW, and since the GC includes the MME and the x-GW control functions, the interaction between x-GW and MME also becomes an internal procedure. Comparing the message transmissions between network elements in the handover process, the transmission delay of these messages decreases, which in turn reduces the handover delay.


(3) Handover failure rate. The handover preparation time decreases because of the internal procedures, so the handover command message is sent to the UE more rapidly. The low-latency interface between cells eliminates the risk of the UE losing its connection with the serving cell; therefore, the handover failure rate decreases accordingly. The signaling of the SN status transfer and the data forwarding can affect the handover interruption time. The handover path is shorter and the frequent communication between the core network and the base stations is reduced, so a low end-to-end handover delay is obtained.

Table 1. Comparison of the handover link lengths for the two mechanisms

Traditional LTE | No Stack architecture
① UE→S-eNB→MME→S-GW→MME→T-eNB→MME | UE→GC→T-eNB→GC (3 length)
② S-eNB→MME→T-eNB (2 length) | GC→T-eNB (1 length)
③ MME→S-GW→MME→S-eNB→MME (4 length) | GC→S-eNB / GC→x-GW (1 length)
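A back-of-the-envelope reading of Table 1 is sketched below: the hop counts are derived from the paths in the table, while the per-hop transmission delay of 2 ms is a made-up placeholder, so the printed totals only illustrate why the shorter No Stack chains reduce the handover delay rather than reproduce the analysis above.

# Illustrative comparison of the forwarding-path lengths from Table 1.
PATHS = {
    "Traditional LTE": [
        "UE->S-eNB->MME->S-GW->MME->T-eNB->MME",
        "S-eNB->MME->T-eNB",
        "MME->S-GW->MME->S-eNB->MME",
    ],
    "No Stack": [
        "UE->GC->T-eNB->GC",
        "GC->T-eNB",
        "GC->S-eNB",   # GC->x-GW runs in parallel, so it adds no extra length
    ],
}

PER_HOP_MS = 2.0  # hypothetical one-hop transmission delay, for illustration only

for arch, paths in PATHS.items():
    hops = [p.count("->") for p in paths]           # link length of each path
    total = sum(hops) * PER_HOP_MS
    print(f"{arch}: link lengths {hops}, total forwarding delay ~{total:.0f} ms")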

5

Conclusion

This paper presents the details of the traditional LTE S1 handover procedure and the S1 handover process based on the No Stack architecture. The handover overhead, the handover delay and the handover failure rate are analyzed. It is shown that the S1 handover performance is better with the No Stack architecture. In future work, we plan to validate the proposed mechanism with a specific handover algorithm and to verify the S1 handover performance through ns-3 simulation scenarios.

References

1. Anderson, T.: OpenFlow: enabling innovation in campus networks. ACM SIGCOMM Comput. Commun. Rev. 38(2), 69–74 (2014)
2. Slawomir, K., Yuhong, L.: Handover management in SDN-based mobile networks. In: Globecom Workshops, The 6th IEEE International Workshop on Management of Emerging Networks and Services, pp. 194–200 (2015)
3. Mi, X., Tian, Z., Xu, X., Zhao, M., Wang, J.: NO Stack: a SDN-based framework for future cellular networks. In: 2015 17th International Symposium on Wireless Personal Multimedia Communications (WPMC), pp. 497–502 (2015)
4. Wang, Q., Zhao, S.: UE assisted mobility management based on SDN. In: International Conference on Computer Science & Education (ICCSE) (2016)
5. Van-Giang, N., Younghan, K.: Signaling load analysis in OpenFlow-enabled LTE/EPC architecture. In: International Conference on Information & Communication Technology Convergence (ICTC), pp. 734–735 (2014)
6. 3GPP TS 36.413: Evolved Universal Terrestrial Radio Access Network (E-UTRA); S1 Application Protocol (S1AP), May 2016 (Release 13)

Dual Band Patch Antenna for 5G Applications with EBG Structure in the Ground Plane and Substrate

Almir Souza e Silva Neto (1), Artur Luiz Torres de Oliveira (1), Sérgio de Brito Espinola (1), João Ricardo Freire de Melo (1), José Lucas da Silva (2), and Humberto César Chaves Fernandes (2)

(1) Federal Institute of Education, Science and Technology of Paraíba, IFPB, Picuí, Paraíba, Brazil, [email protected]
(2) Department of Electrical Engineering, Federal University of Rio Grande do Norte, UFRN, Natal, Brazil, [email protected]

Abstract. This paper presents a dual band antenna for 5G applications using an Electromagnetic Band Gap (EBG) structure in the ground plane and substrate in order to obtain an increase in bandwidth. The proposed antenna operates in the Ka-band, at 28 GHz, and in the U-band, at 60 GHz. The EBG structures use multiple cylinders drilled in the substrate and circles etched in the ground plane, with a period of 1.65 mm and a radius of 0.2 mm. The simulated results at 28 GHz show a bandwidth (S11 < −10 dB) of 1.69 GHz (6.03%), from 27.66 GHz to 29.35 GHz, and an average gain of 7.72 dBi; at 60 GHz they show a bandwidth of 5.63 GHz (9.38%), from 57.49 GHz to 63.12 GHz, with a maximum gain of 7.4 dBi. The proposed antenna is a good candidate for applications in 5G wireless technology.

Keywords: Dual band · 5G applications · EBG · Patch antenna

1 Introduction

The new mobile network technologies are characterized by new frequencies and larger bandwidths: first-generation (1G) wireless technology used bandwidths of up to 30 kHz, 2G up to 200 kHz, 3G up to 20 MHz and 4G up to 100 MHz. According to studies, data traffic has exceeded voice traffic, increasing the need for a faster, higher-quality Internet. 5G technology arose from the need to improve the Internet, both in relation to its cost and mainly to its performance; commercialization of 5G technology is expected for 2020. The new technology is distinguished by the largest number of subscribers connected at the same time, better spectral efficiency, low battery consumption, greater connectivity, flexibility, use of IPv6, better coverage, low latency, low cost of infrastructure deployment, high reliability and versatility. Analyses show that the power consumption of wireless devices improves as the bandwidth increases, resulting in great interest in the millimeter-wave frequencies that are widely available. From the need to increase bandwidth, the use of millimeter waves (mm-waves) has become one of the solutions to obtain high data rates, and the smaller wavelengths allow the use of multiple antennas in a given area; however, the components used in these systems consume considerable energy and suffer from propagation loss, atmospheric absorption, rain attenuation and absorption by oxygen. The 28 GHz frequency band is available with more than 1 GHz of bandwidth. The 60 GHz unlicensed spectrum has become attractive for wireless personal area networks (WPANs), wireless local area networks (WLANs), IEEE 802.15.3c (TG3c), IEEE 802.11ad, WiGig, WirelessHD and ECMA 387 [1, 2].

2 Antenna Design

The substrate used was RT/Duroid® 5880, with dimensions 10.5 × 7.9 × 0.5 mm, dielectric constant εr = 2.2, thickness h = 0.5 mm and loss tangent tan δ = 0.009. The simulation and design of the proposed antenna were carried out with the ANSOFT HFSS® (High Frequency Structure Simulator) software. The EBG structures used were multiple cylinders drilled in the substrate and circles etched in the ground plane. The substrate is drilled to obtain the desired periodic pattern, according to the thickness of the substrate and the effective dielectric constant (εeff). The technique of etching in the ground plane gives good results and is easier to produce. The circles are arranged in a 3 × 4 grid with a radius of 0.2 mm and a period of 1.65 mm. An approximate expression for the structure is given by Eq. (1) [3, 4]:

fc = c / (2 · a · √εeff)    (1)

where fc is the resonant frequency, c is the speed of light in vacuum, a is the period and εeff is the effective dielectric constant.
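As a quick check of Eq. (1) for the period used here, the snippet below evaluates fc with a = 1.65 mm; since the numerical value of εeff is not given, the substrate's εr = 2.2 is used as a stand-in (εeff is somewhat lower in practice), which is an assumption made purely for illustration.

import math

C = 3.0e8          # speed of light in vacuum (m/s)
a = 1.65e-3        # EBG period from the text (m)
eps_eff = 2.2      # assumption: substrate eps_r used as a stand-in for eps_eff

fc = C / (2.0 * a * math.sqrt(eps_eff))     # Eq. (1)
print(f"fc ≈ {fc / 1e9:.1f} GHz")           # ≈ 61 GHz, near the 60 GHz design band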

Fig. 1. Geometry of proposed antenna.


Figure 1 shows the dimensions of the patch: W = 4.4 mm, L = 3.3 mm, y = 1.06 mm, s = 0.385 mm, w = 0.77 mm and b = 3.264 mm [5].
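To relate these dimensions to the 28 GHz target, the snippet below estimates the fundamental resonance of the W = 4.4 mm, L = 3.3 mm patch on the 0.5 mm, εr = 2.2 substrate using the standard Hammerstad transmission-line approximations; these textbook formulas are an assumption made for illustration and are not the design procedure of [5].

import math

C = 3.0e8                                        # speed of light (m/s)
W, L, h, eps_r = 4.4e-3, 3.3e-3, 0.5e-3, 2.2     # patch and substrate values from the text

# Effective permittivity and fringing-length extension (Hammerstad approximations)
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / ((eps_eff - 0.258) * (W / h + 0.8))

f_res = C / (2 * (L + 2 * dL) * math.sqrt(eps_eff))
print(f"estimated fundamental resonance ≈ {f_res / 1e9:.1f} GHz")   # ≈ 27.9 GHz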


Fig. 2. Antenna Design: (a) Without EBG; (b) With the EBG in the ground plane; (c) With EBG in the substrate; (d) With the EBG in the ground plane and substrate.

Fig. 3. S11 comparison between the proposed antennas: without EBG, with EBG in the substrate, with EBG in the ground plane, and with EBG in the ground plane and substrate.


3 Results and Discussions

Figure 2 shows the structures of the proposed antennas: (a) without EBG, (b) with the EBG in the ground plane, (c) with the EBG in the substrate and (d) with the EBG in the ground plane and substrate.

Fig. 4. Simulated results of radiation patterns in 2D at: (a) 28 GHz; (b) 60 GHz

The comparison of the return loss (S11) between the proposed antennas is shown in Fig. 3. The simulated 2D radiation patterns on the E-plane (φ = 0°) and H-plane (φ = 90°) at 28 GHz and 60 GHz are shown in Fig. 4.


Fig. 5. Simulated results of radiation patterns in 3D at: (a) 28 GHz; (b) 60 GHz

Table 1. Return loss, bandwidth and gain of simulated antennas for 28 GHz.

Antenna | Resonant frequency (GHz) | Return loss (dB) | Bandwidth (S11 < −10 dB) (GHz) | Gain (dBi)
Without EBG | 27.93 | −16.62 | 1.43 (27.23–28.66) | 7.62
With EBG in the substrate | 28.29 | −17.86 | 1.63 (27.64–29.27) | 7.73
With EBG in the ground plane | 28.22 | −16.8 | 1.62 (27.32–28.95) | 7.63
With EBG in the ground plane and substrate | 28.29 | −18.13 | 1.69 (27.66–29.35) | 7.72


Table 2. Return loss, bandwidth and gain of simulated antennas for 60 GHz.

Antenna | Resonant frequency (GHz) | Return loss (dB) | Bandwidth (S11 < −10 dB) (GHz) | Gain (dBi)
Without EBG | 59.59 | −12.84 | 5.11 (56.78–61.90) | 7.71
With EBG in the substrate | 59.94 | −14.30 | 5.66 (57.17–62.83) | 7.40
With EBG in the ground plane | 59.17 | −18.35 | 6.47 (55.79–62.27) | 7.69
With EBG in the ground plane and substrate | 60.3 | −13.53 | 5.63 (57.49–63.12) | 7.45

The simulated 3D radiation patterns at 28 GHz and 60 GHz are shown in Fig. 5. The comparison between the proposed antennas at 28 GHz is shown in Table 1, and the comparison at 60 GHz is shown in Table 2.
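As a small worked example of reading Tables 1 and 2, the snippet below stores the simulated bandwidths and picks, for each band, the configuration with the widest S11 < −10 dB bandwidth; the dictionary layout and the helper function are illustrative assumptions, while the numbers are taken from the tables.

# Bandwidths (GHz) from Tables 1 and 2, keyed by EBG configuration.
BANDWIDTH = {
    "28 GHz": {"without EBG": 1.43, "EBG in substrate": 1.63,
               "EBG in ground plane": 1.62, "EBG in ground plane and substrate": 1.69},
    "60 GHz": {"without EBG": 5.11, "EBG in substrate": 5.66,
               "EBG in ground plane": 6.47, "EBG in ground plane and substrate": 5.63},
}

def widest(band):
    """Return the configuration with the largest simulated bandwidth in the band."""
    cfg = max(BANDWIDTH[band], key=BANDWIDTH[band].get)
    return cfg, BANDWIDTH[band][cfg]

for band in BANDWIDTH:
    cfg, bw = widest(band)
    print(f"{band}: {cfg} ({bw} GHz)")
# -> 28 GHz: EBG in ground plane and substrate (1.69 GHz)
# -> 60 GHz: EBG in ground plane (6.47 GHz)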

4 Conclusion

A dual band antenna for 5G applications operating at 28 GHz and 60 GHz, using EBG structures in the substrate and in the ground plane, is proposed. According to Table 1, the antenna with EBG in the ground plane and substrate presented the best impedance matching, the best bandwidth (1.69 GHz) and a gain of 7.72 dBi. According to Table 2, the antenna with EBG in the ground plane showed the best impedance matching, the best bandwidth (6.47 GHz) and a gain of 7.69 dBi. The four designed antennas have resonant frequencies close to the design values (28 GHz and 60 GHz) and very similar gains. The proposed antenna is a good candidate for 5G technology.

References

1. Rappaport, T.S., Sun, S., Mayzus, R., et al.: Millimeter wave mobile communications for 5G cellular: it will work! Proc. IEEE 1(10), 335–349 (2013)
2. Rappaport, T.S., Murdock, J.N., Gutierrez, F.: State of the art in 60 GHz integrated circuits & systems for wireless communications. Proc. IEEE 99(8), 1390–1436 (2011)
3. Horii, Y., Tsutsumi, M.: Harmonic control by photonic bandgap on microstrip patch antenna. IEEE Microw. Guided Wave Lett. 9(1), 13–15 (1999)
4. Elsheakh, D.M.N., Elsadek, H.A., Abdallah, E.A.: Antenna Designs with Electromagnetic Band Gap Structures, 2nd edn., Chap. 16. Tech Publication
5. Neto, A.S.S., Fernandes, H.C.C., Dantas, M.L.M., Silva, J.S.: Antenna for fifth generation (5G) using a EBG structure. New Contributions Inf. Syst. Technol. 2, 33–38 (2015). doi:10.1007/978-3-319-16528-8_4

Author Index

A Abidi, Bahae, 384 Abreu, António, 941 Abreu, João, 167 Acurio S., Andrés, 648 Afolabi, Adedeji, 12, 20 Afonso, Ana Paula, 941 Albiol-Pérez, Sergio, 619, 639 Al-Kaff, Abdulla, 221 Álvarez, Fabian A., 648 Alves, Victor, 232 Amorim, Eurico Vasco, 167 Analide, César, 711 Andrade, Carina, 175 Antipova, Tatiana, 551 Araújo, José, 185 Areias, Nuno, 62 Aristizábal, Leandro Flórez, 861 Arseni, Ştefan-Ciprian, 105 Au-Yong-Oliveira, Manuel, 908 Ávila, Galo, 639 B Bajo, Javier, 711 Balderas-Díaz, Sara, 115 Balhau, Pedro, 754 Barroso, João, 167, 581, 602 Belkhiri, Youcef, 73 Belo, Orlando, 426 Bernardino, Jorge, 528 Bhatti, Muhammad Shahid, 736 Bjørnstad, Camilla, 815 Boicescu, Laurentiu, 135 Borcoci, Eugen, 142 Bordel, Borja, 95 Boucheham, Bachir, 245 Brito, Pedro Quelhas, 259 Bucheli, José, 639

Burguillo, Juan Carlos, 493 Butrime, Edita, 983 C Cabral, Bruno, 528 Camacho, Pedro, 528 Cano, Sandra, 861 Cardoso, Henrique Lopes, 275 Cardoso, Manuel, 31 Carmo-Silva, Sílvio, 397, 406 Carneiro, João, 416 Carvalho, João Vidal, 941 Carvalho, Rommel N., 464 Castillo, Víctor H., 701 Chaves Fernandes, Humberto César, 1044 Chen, Jinsong, 745 Cheng, Hao, 332 Cheong, Yoonchae, 289, 311 Christodoulou, Eleni, 754 Christophorou, Christophoros, 754 Ciuciuc, Ramona, 211 Collazos, César A., 861 Collazos, Cesar A., 897 Conzon, Davide, 321 Côrte-Real, Artur, 86 Costa, António, 232 Costa, Carlos, 175, 453 Costa, Eduarda, 175 Costa-Montenegro, Enrique, 267 Costanzo, Sandra, 1008, 1022 Couñago-Soto, Pablo, 267 Cunha, Rúben, 41 D Dantas, Carina, 561, 754 de Brito Espinola, Sérgio, 1044 de la Escalera, Arturo, 221 De Marco, Antonio, 664, 677


1052 de Sousa e Silva, João, 149 Deters, Jan Kleine, 609 Di Fuccio, Raffaele, 664, 677 Di Pierro, Massimo, 975 Dias, Nuno, 426 Drias, Habiba, 73 Durães, Dalila, 711 Durão, Natércia, 887 E El Haj Ahmed, Ghofrane, 267 El Haziti, Mohamed, 384 Ellingsen, Gunnar, 815 Elzabadani, Hicham, 375 Encheva, Sylvia, 447, 963, 969 Erdoğdu, Kazım, 952 Escobar, Ivón, 639, 648, 657 Esparza, Danilo, 609 F Fadel, Luciane Maria, 571 Fagbenle, Olabosipo, 12, 20 Fan, Bin, 1034 Faria, Fernando, 745 Farias, Fabrícia, 690 Fernandes, Fabiana Santos, 921 Fernandes, Gabriela, 51 Fernandes, Hugo, 602 Fernandes, Nuno O., 397, 406 Ferraz, Filipa, 232 Ferreira, Carlos, 426 Ferreira, Maria João, 887 Filipe, Vítor, 167 Fiuza, Patricia Jantsch, 921 Fórtiz, María José Rodríguez, 115 Fratu, Octavian, 105 Freire de Melo, João Ricardo, 1044 Freitas, Alberto, 825 Frysak, Josef, 481 G Galarza, Eddie D., 639, 657 Galvão, João, 175 Garcéa, José Luis Meana, 221 García, Fernando, 221 Garrido, José Luis, 115 Ge, Lu, 207, 1039 Gil-Castiñeira, Felipe, 267 Gomes, João, 538 Gomes, Pedro F.O., 847 Gómez, Jose-Antonio Gil, 619 Gonçalves, Alexandra, 690 Gonçalves, Cirano, 581 Gonçalves, Ramiro, 149, 908

Author Index Gong, Jinjin, 207 Gonzalez, Carina S., 861 Gonzalez, Mario, 609 Gonzalvo, Arián Aladro, 609 Guamán, Accel, 657 Guerrero-Contreras, Gabriel, 115 Günel, Korhan, 952 H Habib, Sami J., 343 Hao, Shuliang, 1029 Hartl, Karin, 474 Hasani, Zirije, 503 Hashmi, Sajid Ibrahim, 736 Hasnain, Muhammad, 736 Hastings, Peter, 975 Holanda, Maristela, 464 Hussain, Syed Asad, 736 I Intriago-Pazmiño, Monserrate, 436 Inzunza, Sergio, 628, 701 Iribarne, Luis, 157 J Jacob, Olaf, 474 Jang, Hana, 289, 300 Jegundo, Ana Luísa, 561, 754 Jeong, Jongpil, 289, 300, 311 Jilbab, Abdelillah, 384 Jiménez, Samantha, 628, 701 José, Luis Javier San, 221 Juárez-Ramírez, Reyes, 628, 701 K Kamel, Nadjet, 73 L Larco, Andrés, 835 Latif, Imran, 736 Laurén, Samuel, 125 Leal, Fátima, 493 Lee, Haksang, 300 Lee, Taehyun, 300, 311 Leite, Eliana, 690 Lemos, Robson Rodrigues, 921 Leppänen, Ville, 125 Lewis, Daniel, 512 Li, Jikai, 745 Li, Jinru, 1014 Liang, Daan, 745 Ligios, Michele, 321 Lima, Francisca Vale, 175 Liu, Bei, 1034

Author Index Lopes, Fernando, 825 Lopes, Isabel Maria, 774 Lopes, Nuno, 592 Lopes, Pedro, 197 López V., William, 639 Loures, Eduardo F.R., 847 Lucas da Silva, José, 1044 Luís Silva, José, 592 Luo, Qianrong, 332 M Machado, Adriano, 690 Machado, Vítor, 592 Majdalawieh, Munir, 872 Malheiro, Benedita, 41, 62, 493 Malyuk, Anatoly, 725 Mansingh, Gunjan, 518 Marcillo, Diego, 353 Marciulyniene, Rita, 983 Marcu, Ioana, 105 Mareca, Pilar, 95 Marimuthu, Paulvanna N., 343 Marks, Adam, 872 Marques, Bernardo, 825 Marques, Gonçalo, 3, 785 Marreiros, Goreti, 416 Martín, David, 221 Martinho, Bruno, 175 Martinho, Diogo, 416 Martins, Ana Isabel, 561 Martins, José, 149 Martins, Paulo, 167 McNaughton, Maurice, 512, 518 Melninkaite, Vida, 983 Mena, Luis, 657 Meythaler, Amparo, 648 Miloslavskaya, Natalia, 364, 725 Montaluisa, Javier, 657 Montenegro, Carlos, 835 Moreira, Fernando, 861, 887, 897 Moreno, Francisco Miguel, 221 Moreno-Díaz, Jorge, 436 Mosaku, Timothy, 12, 20 Mosbah, Mawloud, 245 N Nasan, Adnan El, 375 Neves, João, 232 Neves, José, 232 Nieva, Alberto, 221 Njenga, James K., 474 Nogueira, Pedro, 275 Novais, Paulo, 416, 711

1053 O O’Hare, Gregory M.P., 115 Ojeda-Castelo, Juan Jesus, 157 Oliveira e Sá, Jorge, 175 Olmo, Elena, 619 Osimani, César, 157 Özarslan, Yasin, 952 P Paredes, Hugo, 167, 602 Parker, David, 512 Pasat, Adrian, 211 Pastrone, Claudio, 321 Pelet, Jean-Éric, 763 Pereira, António, 149 Pereira, Antonio, 353 Pereira, Carla Santos, 887 Pereira, Francisco, 197 Pereira, João Paulo, 774 Pereira, Luís, 795 Pérez, Sergio Albiol, 648 Pestana, Gabriel, 185 Piedra-Fernandez, Jose A., 157 Pilatásig, Marco, 657 Pinto, Filipe, 426 Pitarma, Rui, 3, 785 Polat, Refet, 952 Primo, Lane, 571 Pruna, Edwin, 639, 648, 657 Q Qayyum, Abdul, 736 Queirós, Alexandra, 561, 795 Quezada, Angeles, 628 Quintas, João, 561, 754 R Raffo, Antonio, 1022 Ramirez, Gabriel M., 897 Ramírez-Noriega, Alan, 628, 701 Rao, Lila, 512, 518 Rauti, Sampsa, 125 Rebelo, Sérgio, 167 Reis, Arsénio, 167, 581, 602 Reis, Luís Paulo, 275 Ribeiro, Pedro, 31 Rocha, Álvaro, 86, 931, 941 Rocha, Ana Paula, 275 Rocha, Nelson Pacheco, 561, 795 Rocha, Nelson, 805 Rocha, Tânia, 581, 602 Rodello, Ildeberto A., 474

1054 Roldán, Hugo, 835 Rong, Liping, 1018, 1029, 1039 Rossini, Rosaria, 321 Roxo, Mafalda Teles, 259 Rybarczyk, Yves, 609 S Salam, Rida, 375 Salazar-Grandes, Mayra, 436 Salazar-Jácome, Elizabeth, 436 Sales, Gilvandenys, 690 Santos, Eduardo A.P., 847 Santos, Luís, 754 Santos, Maribel Yasmina, 175, 453 Santos, Milton, 795, 805 Sapateiro, Claudio, 538 Scheianu, Andrei, 211 Seidel, Erik Joseph, 332 Shanaei, Saman, 375 Siano, Giovanni, 664, 677 Silva, Augusto, 805 Silva, Daniel, 275 Silva, J.C., 592 Silva-Costa, Tiago, 825 Soler, Alejandro Menal, 619 Song, Byunghoon, 289, 311 Sottile, Francesco, 321 Sousa, Cátia, 51 Sousa, Maria José, 931 Sousa-Pinto, Bernardo, 825 Souza e Silva Neto, Almir, 1044 Stevenson, Mark, 397 Su, Xin, 207, 1014, 1018, 1029, 1034, 1039 Suciu, George, 135, 211 T Taieb, Basma, 763 Teresa Delgado, Maria, 321 Tereso, Anabela, 31, 51

Author Index Thürer, Matthias, 397 Torres de Oliveira, Artur Luiz, 1044 U Uitto, Joni, 125 Ulbricht, Vânia, 571 Urbano, Joana, 275 V Valteryte, Rita, 983 van Erven, Gustavo C.G., 464 Vargas-Sandoval, Vanessa, 436 Veloso, Bruno, 41 Venneri, Francesca, 1008 Vera, Daniel, 353 Vicente, Henrique, 232 Villarreal, Santiago, 609 Vochin, Marius, 135, 142 Voicu, Carmen, 105 Vulpe, Alexandru, 105, 135 W Wei, Songjie, 332 Wings, Cindy, 754 Wu, Jingjing, 1018 X Xiao, Chiyang, 1014 Y Yahiaoui, Sofiane, 73 Z Zaytsev, Anton, 725 Zellefrow, Brad, 375 Zeng, Jie, 207, 1014, 1018, 1029, 1034, 1039 Zhang, Chao, 999 Zhang, Lu, 1039 Zoican, Sorin, 142 Zumbana, Paulina, 639, 648, 657