Advances in Intelligent Information Hiding and Multimedia Signal Processing: Proceedings of the 15th International Conference on IIH-MSP in conjunction with the 12th International Conference on FITAT, July 18-20, Jilin, China, Volume 1 [1st ed.] 978-981-13-9713-4;978-981-13-9714-1

The book presents selected papers from the Fifteenth International Conference on Intelligent Information Hiding and Multimedia Signal Processing.


English · Pages XX, 424 [415] · Year 2020


Table of contents:
Front Matter ....Pages i-xx
Front Matter ....Pages 1-1
Designing Intelligent Wearable Product for Elderly Care: A Second User Study (Wen Qi, Liang Zhou)....Pages 3-9
Application Research of BIM Technology in Analysis of Green Building HVAC System (Long Jiangzhu, Ge Yijun, Huang Xiaolin, Li Yingjie)....Pages 11-19
Research on the Development of Fashion Industry in the “Internet+” Era (Hongkun Peng)....Pages 21-29
Design of Mini Pets Feeding Intelligent Home System Based on IoT (Renbiao Wang)....Pages 31-40
Study on IoT and Big Data Analysis of Furnace Process Exhaust Gas Leakage (Yu-Wen Zhou, Kuo-Chi Chang, Jeng-Shyang Pan, Kai-Chun Chu, Der-Juinn Horng, Yuh-Chung Lin et al.)....Pages 41-49
Front Matter ....Pages 51-51
A Data Hiding Approach Based on Reference-Affected Matrix (Trong-The Nguyen, Jeng-Shyang Pan, Truong-Giang Ngo, Thi-Kien Dao)....Pages 53-64
A Survey of Data Hiding Based on Vector Quantization (Chin-Feng Lee, Chin-Chen Chang, Chia-Shuo Shih, Somya Agrawal)....Pages 65-72
A Survey of Authentication Protocols in Logistics System (Chin-Ling Chen, Dong-Peng Lin, Chin-Feng Lee, Yong-Yuan Deng, Somya Agrawal)....Pages 73-78
Enhanced Secret Hiding Mechanism Based on Genetic Algorithm (Cai-Jie Weng, Shi-Jian Liu, Jeng-Shyang Pan, Lyuchao Liao, Trong-The Nguyen, Wei-Dong Zeng et al.)....Pages 79-86
An Adversarial Attack Method in Gray-Box Setting Oriented to Defenses Based on Image Preprocessing (Yuxin Gong, Shen Wang, Xunzhi Jiang, Dechen Zhan)....Pages 87-96
A Collusion Attack on Identity-Based Public Auditing Scheme via Blockchain (Xing Zou, Xiaoting Deng, Tsu-Yang Wu, Chien-Ming Chen)....Pages 97-105
Research on a Color Image Encryption Algorithm Based on 2D-Logistic (Xin Huang, Qun Ding)....Pages 107-115
UVM-Based CAN IP Verification (Jian Han, Ping Fu, Jiaqing Qiao)....Pages 117-124
Cryptanalysis of a Pairing-Based Anonymous Key Agreement Scheme for Smart Grid (Xiao-Cong Liang, Tsu-Yang Wu, Yu-Qi Lee, Chien-Ming Chen, Jyh-Haw Yeh)....Pages 125-131
Digital Audio Watermarking by Quantization Embedding System (Ching-Ju Chen, Ming Zhao, Shuo-Tsung Chen, Meng-Ju Lin)....Pages 133-141
Digital Audio Watermarking by Amplitude Embedding System (Meng-Ju Lin, Ming Zhao, Shuo-Tsung Chen, Ching-Ju Chen)....Pages 143-150
Front Matter ....Pages 151-151
MSAE: A Multitask Learning Approach for Traffic Flow Prediction Using Deep Neural Network (Di Yang, Hua-Min Yang, Peng Wang, Song-Jiang Li)....Pages 153-161
Power Plant Fan Fault Warning Based on Bidirectional Feature Compression and State Estimation (Nan Li, Lian Meng, Bin Geng, Ziyang Jing)....Pages 163-171
Refined Tensor Subspace Analysis (ChaoXia Wu, Wei Wang, ZeNan Chu)....Pages 173-180
FPKC: An Efficient Algorithm for Improving Short-Term Load Forecasting (Hongxiang Dong, Henan Yang)....Pages 181-187
A Novel Approach to Identify Intersection Information via Trajectory Big Data Analysis in Urban Environments (Weidong Fang, Hanlin Chen, Rong Hu)....Pages 189-199
Time Series Prediction of Transformer Oil Chromatography Based on Hybrid LSTM Model (Shun-miao Zhang, Xin Su, Xin-hua Jiang, Xingsi Xue)....Pages 201-209
Parameter Estimation of Redundant System (Chao-Fan Xie, Lin Xu, Fuquan Zhang, Lu-Xiong Xu)....Pages 211-219
Document Image Retrieval Based on Convolutional Neural Network (Jie Zhou, Baolong Guo, Yan Zheng)....Pages 221-229
Deep Residual Network Based on Deep Layer Aggregation for JPEG Images Steganalysis (Xinyue Lan, Rongrong Ni, Yao Zhao)....Pages 231-239
An Efficient Association Rule Mining Method to Predict Diabetes Mellitus: KNHANES 2013–2015 (Huilin Zheng, Hyun Woo Park, Khen Ho Ryu)....Pages 241-249
A Hybrid Credit Scoring Model Using Neural Networks and Logistic Regression (Lkhagvadorj Munkhdalai, Jong Yun Lee, Keun Ho Ryu)....Pages 251-258
The Early Prediction Acute Myocardial Infarction in Real-Time Data Using an Ensemble Machine Learning Model (Bilguun Jargalsaikhan, Muhammad Saqlain, Sherazi Syed Waseem Abbas, Moon Hyun Jae, In Uk Kang, Sikandar Ali et al.)....Pages 259-264
A Collaborative Filtering Recommendation System for Rating Prediction (Khishigsuren Davagdorj, Kwang Ho Park, Keun Ho Ryu)....Pages 265-271
Comparison of the Framingham Risk Score and Deep Neural Network-Based Coronary Heart Disease Risk Prediction (Tsatsral Amarbayasgalan, Pham Van Huy, Keun Ho Ryu)....Pages 273-280
Mining High Quality Medical Phrase from Biomedical Literatures Over Academic Search Engine (Ling Wang, Xue Gao, Tie Hua Zhou, Wen Qiang Liu, Cong Hui Sun)....Pages 281-288
Current State of E-Commerce in Mongolia: Payment and Delivery (Oyungerel Delger, Munkhtuya Tseveenbayar, Erdenetuya Namsrai, Ganbat Tsendsuren)....Pages 289-297
The Emerging Trend of Accurate Advertising Communication in the Era of Big Data—The Case of Programmatic, Targeted Advertising (Sida Chen)....Pages 299-308
Study on Automatic Generation of Teaching Video Subtitles Based on Cloud Computing (Xiangkai Qiu)....Pages 309-314
Attention-Based Multi-fusion Method for Citation Prediction (Juefei Wang, Fuquan Zhang, Yinan Li, Donglei Liu)....Pages 315-322
Front Matter ....Pages 323-323
Using Five Principles of Object-Oriented Design in the Transmission Network Management Information (B. Gantulga, N. Munkhtsetseg, D. Garmaa, S. Batbayar)....Pages 325-333
Modbus Protocol Based on the Characteristics of the Transmission of Industrial Data Packet Forgery Tampering and Industrial Security Products Testing (Qiang Ma, Wenting Wang, Ti Guan, Yong Liu, Lin Lin)....Pages 335-344
Analysis of Time Characteristics of MMS Protocol Transmission in Intelligent Substation (Wenting Wang, Qiang Ma, Yong Liu, Lin Lin, Ti Guan)....Pages 345-353
Reliability Evaluation Model of Power Communication Network Considering the Importance of Transmission Service (Wang Tingjun, Ma Shangdi, Liu Xuebing, Li Shanshan, Zhang Shuo)....Pages 355-364
Optimal Safety Link Configuration Method for Power Communication Network Considering Global Risk Balance (Liu Xiaoqing, Ma Qingfeng, Wang Tingjun, Ma Shangdi, Liu Xuebing, Li Shanshan)....Pages 365-373
Design of Data Acquisition Software for Steam Turbine Based on Qt/Embedded (Han Zhang, Hongtao Yin, Ping Fu)....Pages 375-384
A Reliable Data Transmission Protocol Based on Network Coding for WSNs (Ning Sun, Hailong Wei, Jie Zhang, Xingjie Wang)....Pages 385-392
Design of Virtual Cloud Desktop System Based on OpenStack (Yongxia Jin, Jinxiu Zhu, Hongxi Bai, Huiping Chen, Ning Sun)....Pages 393-401
Characteristics of Content in Online Interactive Video and Design Strategy (Bai He)....Pages 403-411
Implementation of Asynchronous Cache Memory (Jigjidsuren Battogtokh)....Pages 413-421
Back Matter ....Pages 423-424


Smart Innovation, Systems and Technologies 156

Jeng-Shyang Pan · Jianpo Li · Pei-Wei Tsai · Lakhmi C. Jain, Editors

Advances in Intelligent Information Hiding and Multimedia Signal Processing Proceedings of the 15th International Conference on IIH-MSP in conjunction with the 12th International Conference on FITAT, July 18–20, Jilin, China, Volume 1

Smart Innovation, Systems and Technologies Volume 156

Series Editors
Robert J. Howlett, Bournemouth University and KES International, Shoreham-by-Sea, UK
Lakhmi C. Jain, Faculty of Engineering and Information Technology, Centre for Artificial Intelligence, University of Technology Sydney, Sydney, NSW, Australia

The Smart Innovation, Systems and Technologies book series encompasses the topics of knowledge, intelligence, innovation and sustainability. The aim of the series is to make available a platform for the publication of books on all aspects of single and multi-disciplinary research on these themes in order to make the latest results available in a readily-accessible form. Volumes on interdisciplinary research combining two or more of these areas are particularly sought. The series covers systems and paradigms that employ knowledge and intelligence in a broad sense. Its scope is systems having embedded knowledge and intelligence, which may be applied to the solution of world problems in industry, the environment and the community. It also focusses on the knowledge-transfer methodologies and innovation strategies employed to make this happen effectively. The combination of intelligent systems tools and a broad range of applications introduces a need for a synergy of disciplines from science, technology, business and the humanities. The series will include conference proceedings, edited collections, monographs, handbooks, reference books, and other relevant types of book in areas of science and technology where smart systems and technologies can offer innovative solutions. High quality content is an essential feature for all book proposals accepted for the series. It is expected that editors of all accepted volumes will ensure that contributions are subjected to an appropriate level of reviewing process and adhere to KES quality principles.

** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex, SCOPUS, Google Scholar and SpringerLink **

More information about this series at http://www.springer.com/series/8767


Editors

Jeng-Shyang Pan, College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao Shi, Shandong, China

Jianpo Li, Northeast Electric Power University, Chuanying Qu, Jilin, China

Pei-Wei Tsai, Swinburne University of Technology, Hawthorn, Melbourne, Australia

Lakhmi C. Jain, Centre for Artificial Intelligence, University of Technology Sydney, Sydney, NSW, Australia; Liverpool Hope University, Liverpool, UK; University of Canberra, Canberra, Australia; KES International, UK

ISSN 2190-3018  ISSN 2190-3026 (electronic)
Smart Innovation, Systems and Technologies
ISBN 978-981-13-9713-4  ISBN 978-981-13-9714-1 (eBook)
https://doi.org/10.1007/978-981-13-9714-1

© Springer Nature Singapore Pte Ltd. 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Conference Organization

Conference Founders
Jeng-Shyang Pan, Fujian University of Technology
Lakhmi C. Jain, University of Technology Sydney, Australia; University of Canberra, Australia; Liverpool Hope University, UK; and KES International, UK
Keun Ho Ryu, Chungbuk National University
Oyun-Erdene Namsrai, National University of Mongolia

Honorary Chairs
Lakhmi C. Jain, University of Technology Sydney, Australia; University of Canberra, Australia; Liverpool Hope University, UK; and KES International, UK
Guowei Cai, Northeast Electric Power University
Chin-Chen Chang, Feng Chia University
Goutam Chakraborty, Iwate Prefectural University

Advisory Committees
Yôiti Suzuki, Tohoku University
Ioannis Pitas, Aristotle University of Thessaloniki
Yao Zhao, Beijing Jiaotong University
Kebin Jia, Beijing University of Technology
Li-Hua Li, Chaoyang University of Technology
Yanjun Peng, Shandong University of Science and Technology
Jong Yun Lee, Chungbuk National University
Vu Thi Hong Nhan, Vietnam National University
Uyanga Sambuu, National University of Mongolia
Yanja Dajsuren, TU/E

General Chairs
Jianguo Wang, Northeast Electric Power University
Jeng-Shyang Pan, Fujian University of Technology
Chin-Feng Lee, Chaoyang University of Technology
Kwang-Woo Nam, Kunsan National University
Oyun-Erdene Namsrai, National University of Mongolia
Ling Wang, Northeast Electric Power University

Program Chairs
Renjie Song, Northeast Electric Power University
Ching-Yu Yang, National Penghu University of Science and Technology
Ling Wang, Northeast Electric Power University
Ganbat Baasantseren, National University of Mongolia

Publication Chairs
Pei-Wei Tsai, Swinburne University of Technology
Ho Sun Shon, Chungbuk National University
Erdenetuya Namsrai, Mongolian University of Science and Technology
Youngjun Piao, Nankai University

Invited Session Chairs
Chih-Yu Hsu, Chaoyang University of Technology
Keun Ho Ryu, Chungbuk National University
Oyun-Erdene Namsrai, National University of Mongolia
Erdenebileg Batbaatar, Chungbuk National University
Jianpo Li, Northeast Electric Power University
Xingsi Xue, Fujian University of Technology
Chien-Ming Chen, Harbin Institute of Technology
Shuo-Tsung Chen, National Yunlin University of Science and Technology

Electronic Media Chairs
Jieming Yang, Northeast Electric Power University
Aziz Nasridinov, Chungbuk National University
Ganbat Baasantseren, National University of Mongolia

Finance Chairs
Yang Sun, Northeast Electric Power University
Juncheng Wang, Northeast Electric Power University

Local Organization Chairs
Jianpo Li, Northeast Electric Power University
Tiehua Zhou, Northeast Electric Power University
Meijing Li, Shanghai Maritime University

Program Committees
Aziz Nasridinov, Chungbuk National University
Anwar F. A. Dafa-alla, Garden City College
Basabi Chakraborty, Iwate Prefectural University
Bayarpurev Mongolyn, National University of Mongolia
Bold Zagd, National University of Mongolia
Bu Hyun Hwang, Chungbuk National University
Bum Ju Lee, Korea Institute of Oriental Medicine
Byungchul Kim, Baekseok University
Dong Ryu Lee, University of Tokyo
Erwin Bonsma, Philips
Garmaa Dangaasuren, National University of Mongolia
Goce Naumoski, Bizzsphere
Gouchol Pok, Pai Chai University
Herman Hartmann, University of Groningen
Hoang Do Thanh Tung, Vietnam Institute of Information Technology of Vietnamese Academy of Science and Technology
Incheon Park, The University of Aizu
Jeong Hee Chi, Konkuk University
Jeong Hee Hwang, Namseoul University
Jong-Yun Lee, Chungbuk National University
Jung Hoon Shin, Chungbuk National University
Kwang Su Jung, Chungbuk National University
Mohamed Ezzeldin A. Bashir, Medical Sciences and Technology University
Moon Sun Shin, Konkuk University
Mei-Jing Li, Shanghai Maritime University
Purev Jaimai, National University of Mongolia
Razvan Dinu, Philips
Seon-Phil Jeong, United International College
Supatra Sahaphong, Ramkhamhaeng University
Suvdaa Batsuuri, National University of Mongolia
Shin Eun Young, Chungbuk National University
Sanghyuk Lee, Xi’an Jiaotong-Liverpool University
Tom Arbuckle, University of Limerick
TieHua Zhou, Northeast Electric Power University
Tsendsuren Munkhdalai, Microsoft Research
WeiFeng Su, BNU-HKBU United International College
Yongjun Piao, Nankai University
Yoon Ae Ahn, Health and Medical Information Engineering, College of Life
Yang-Mi Kim, Chungbuk National University
Kyung-Ah Kim, Chungbuk National University
Khuyagbaatar Batsuren, University of Trento
Enkhtuul Bukhsuren, National University of Mongolia
Nan Ding, Dalian University of Technology
Ran Ma, Shanghai University
Gang Liu, Xidian University
Wanchang Jiang, Northeast Electric Power University
Jingdong Wang, Northeast Electric Power University
Xinxin Zhou, Northeast Electric Power University

Committee Secretaries
Hyun Woo Park, Chungbuk National University
Erdenebileg Batbaatar, Chungbuk National University
Tsatsral Amarbayasgalan, Chungbuk National University
Batnyam Battulga, National University of Mongolia
Erdenetuya Namsrai, Mongolian University of Science and Technology
Meilin Li, Northeast Electric Power University

Preface

Welcome to the 15th International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP 2019) and the 12th International Conference on Frontiers of Information Technology, Applications and Tools (FITAT 2019), held in Jilin, China, on July 18–20, 2019. IIH-MSP 2019 and FITAT 2019 are technically co-sponsored by Northeast Electric Power University, Chaoyang University of Technology, Chungbuk National University, National University of Mongolia in Mongolia, Shandong University of Science and Technology, Fujian Provincial Key Lab of Big Data Mining and Applications, and National Demonstration Center for Experimental Electronic Information and Electrical Technology Education (Fujian University of Technology). Both conferences aim to bring together researchers, engineers, and policymakers to discuss related techniques, to exchange research ideas, and to make friends.

We received a total of 276 submissions, of which 95 papers were accepted after the review process. The keynote speeches were kindly provided by Prof. James C. N. Yang (Dong Hwa University) on “Relationship between Polynomial-Based and Code-Based Secret Image Sharing and Their Pros and Cons,” Prof. Keun Ho Ryu (Chungbuk National University) on “Spectrum on Interdisciplinary Related to Databases and Bioinformatics Researches,” and Prof. Yuping Wang (Xidian University) on “A New Framework for Large Scale Global Optimization.”

We would like to thank the authors for their tremendous contributions. We would also like to express our sincere appreciation to the reviewers, Program Committee members, and Local Committee members for making both conferences successful. In particular, our special thanks go to Prof. Keun Ho Ryu for his efforts and contributions in making IIH-MSP 2019 and FITAT 2019 possible.
Finally, we would like to express special thanks to Northeast Electric Power University, Chaoyang University of Technology, Chungbuk National University, National University of Mongolia in Mongolia, Shandong University of Science and Technology, Fujian Provincial Key Lab of Big Data Mining and Applications, and National Demonstration Center for Experimental Electronic Information and Electrical Technology Education (Fujian University of Technology) for their generous support in making IIH-MSP 2019 and FITAT 2019 possible.

Acknowledgements The IIH-MSP 2019 and FITAT 2019 Organizing Committees wish to express their appreciation to Prof. Keun Ho Ryu from Chungbuk National University for his contribution to organizing the conference.

Qingdao Shi, China
Chuanying Qu, China
Hawthorn, Australia
Sydney, Australia
July 2019

Jeng-Shyang Pan
Jianpo Li
Pei-Wei Tsai
Lakhmi C. Jain

Contents

Part I Internet of Things and Its Application

1 Designing Intelligent Wearable Product for Elderly Care: A Second User Study (Wen Qi and Liang Zhou) 3
2 Application Research of BIM Technology in Analysis of Green Building HVAC System (Long Jiangzhu, Ge Yijun, Huang Xiaolin and Li Yingjie) 11
3 Research on the Development of Fashion Industry in the “Internet+” Era (Hongkun Peng) 21
4 Design of Mini Pets Feeding Intelligent Home System Based on IoT (Renbiao Wang) 31
5 Study on IoT and Big Data Analysis of Furnace Process Exhaust Gas Leakage (Yu-Wen Zhou, Kuo-Chi Chang, Jeng-Shyang Pan, Kai-Chun Chu, Der-Juinn Horng, Yuh-Chung Lin and Huang Jing) 41

Part II Information Security and Hiding

6 A Data Hiding Approach Based on Reference-Affected Matrix (Trong-The Nguyen, Jeng-Shyang Pan, Truong-Giang Ngo and Thi-Kien Dao) 53
7 A Survey of Data Hiding Based on Vector Quantization (Chin-Feng Lee, Chin-Chen Chang, Chia-Shuo Shih and Somya Agrawal) 65
8 A Survey of Authentication Protocols in Logistics System (Chin-Ling Chen, Dong-Peng Lin, Chin-Feng Lee, Yong-Yuan Deng and Somya Agrawal) 73
9 Enhanced Secret Hiding Mechanism Based on Genetic Algorithm (Cai-Jie Weng, Shi-Jian Liu, Jeng-Shyang Pan, Lyuchao Liao, Trong-The Nguyen, Wei-Dong Zeng, Ping Zhang and Lei Huang) 79
10 An Adversarial Attack Method in Gray-Box Setting Oriented to Defenses Based on Image Preprocessing (Yuxin Gong, Shen Wang, Xunzhi Jiang and Dechen Zhan) 87
11 A Collusion Attack on Identity-Based Public Auditing Scheme via Blockchain (Xing Zou, Xiaoting Deng, Tsu-Yang Wu and Chien-Ming Chen) 97
12 Research on a Color Image Encryption Algorithm Based on 2D-Logistic (Xin Huang and Qun Ding) 107
13 UVM-Based CAN IP Verification (Jian Han, Ping Fu and Jiaqing Qiao) 117
14 Cryptanalysis of a Pairing-Based Anonymous Key Agreement Scheme for Smart Grid (Xiao-Cong Liang, Tsu-Yang Wu, Yu-Qi Lee, Chien-Ming Chen and Jyh-Haw Yeh) 125
15 Digital Audio Watermarking by Quantization Embedding System (Ching-Ju Chen, Ming Zhao, Shuo-Tsung Chen and Meng-Ju Lin) 133
16 Digital Audio Watermarking by Amplitude Embedding System (Meng-Ju Lin, Ming Zhao, Shuo-Tsung Chen and Ching-Ju Chen) 143

Part III Machine Learning and Its Applications

17 MSAE: A Multitask Learning Approach for Traffic Flow Prediction Using Deep Neural Network (Di Yang, Hua-Min Yang, Peng Wang and Song-Jiang Li) 153
18 Power Plant Fan Fault Warning Based on Bidirectional Feature Compression and State Estimation (Nan Li, Lian Meng, Bin Geng and Ziyang Jing) 163
19 Refined Tensor Subspace Analysis (ChaoXia Wu, Wei Wang and ZeNan Chu) 173
20 FPKC: An Efficient Algorithm for Improving Short-Term Load Forecasting (Hongxiang Dong and Henan Yang) 181
21 A Novel Approach to Identify Intersection Information via Trajectory Big Data Analysis in Urban Environments (Weidong Fang, Hanlin Chen and Rong Hu) 189
22 Time Series Prediction of Transformer Oil Chromatography Based on Hybrid LSTM Model (Shun-miao Zhang, Xin Su, Xin-hua Jiang and Xingsi Xue) 201
23 Parameter Estimation of Redundant System (Chao-Fan Xie, Lin Xu, Fuquan Zhang and Lu-Xiong Xu) 211
24 Document Image Retrieval Based on Convolutional Neural Network (Jie Zhou, Baolong Guo and Yan Zheng) 221
25 Deep Residual Network Based on Deep Layer Aggregation for JPEG Images Steganalysis (Xinyue Lan, Rongrong Ni and Yao Zhao) 231
26 An Efficient Association Rule Mining Method to Predict Diabetes Mellitus: KNHANES 2013–2015 (Huilin Zheng, Hyun Woo Park and Khen Ho Ryu) 241
27 A Hybrid Credit Scoring Model Using Neural Networks and Logistic Regression (Lkhagvadorj Munkhdalai, Jong Yun Lee and Keun Ho Ryu) 251
28 The Early Prediction Acute Myocardial Infarction in Real-Time Data Using an Ensemble Machine Learning Model (Bilguun Jargalsaikhan, Muhammad Saqlain, Sherazi Syed Waseem Abbas, Moon Hyun Jae, In Uk Kang, Sikandar Ali and Jong Yun Lee) 259
29 A Collaborative Filtering Recommendation System for Rating Prediction (Khishigsuren Davagdorj, Kwang Ho Park and Keun Ho Ryu) 265
30 Comparison of the Framingham Risk Score and Deep Neural Network-Based Coronary Heart Disease Risk Prediction (Tsatsral Amarbayasgalan, Pham Van Huy and Keun Ho Ryu) 273
31 Mining High Quality Medical Phrase from Biomedical Literatures Over Academic Search Engine (Ling Wang, Xue Gao, Tie Hua Zhou, Wen Qiang Liu and Cong Hui Sun) 281
32 Current State of E-Commerce in Mongolia: Payment and Delivery (Oyungerel Delger, Munkhtuya Tseveenbayar, Erdenetuya Namsrai and Ganbat Tsendsuren) 289
33 The Emerging Trend of Accurate Advertising Communication in the Era of Big Data—The Case of Programmatic, Targeted Advertising (Sida Chen) 299
34 Study on Automatic Generation of Teaching Video Subtitles Based on Cloud Computing (Xiangkai Qiu) 309
35 Attention-Based Multi-fusion Method for Citation Prediction (Juefei Wang, Fuquan Zhang, Yinan Li and Donglei Liu) 315

Part IV Network Systems and Analysis

36 Using Five Principles of Object-Oriented Design in the Transmission Network Management Information (B. Gantulga, N. Munkhtsetseg, D. Garmaa and S. Batbayar) 325
37 Modbus Protocol Based on the Characteristics of the Transmission of Industrial Data Packet Forgery Tampering and Industrial Security Products Testing (Qiang Ma, Wenting Wang, Ti Guan, Yong Liu and Lin Lin) 335
38 Analysis of Time Characteristics of MMS Protocol Transmission in Intelligent Substation (Wenting Wang, Qiang Ma, Yong Liu, Lin Lin and Ti Guan) 345
39 Reliability Evaluation Model of Power Communication Network Considering the Importance of Transmission Service (Wang Tingjun, Ma Shangdi, Liu Xuebing, Li Shanshan and Zhang Shuo) 355
40 Optimal Safety Link Configuration Method for Power Communication Network Considering Global Risk Balance (Liu Xiaoqing, Ma Qingfeng, Wang Tingjun, Ma Shangdi, Liu Xuebing and Li Shanshan) 365
41 Design of Data Acquisition Software for Steam Turbine Based on Qt/Embedded (Han Zhang, Hongtao Yin and Ping Fu) 375
42 A Reliable Data Transmission Protocol Based on Network Coding for WSNs (Ning Sun, Hailong Wei, Jie Zhang and Xingjie Wang) 385
43 Design of Virtual Cloud Desktop System Based on OpenStack (Yongxia Jin, Jinxiu Zhu, Hongxi Bai, Huiping Chen and Ning Sun) 393
44 Characteristics of Content in Online Interactive Video and Design Strategy (Bai He) 403
45 Implementation of Asynchronous Cache Memory (Jigjidsuren Battogtokh) 413

Author Index 423

About the Editors

Jeng-Shyang Pan received his B.S. in Electronic Engineering from National Taiwan University of Science and Technology in 1986, M.S. in Communication Engineering from National Chiao Tung University, Taiwan, in 1988, and Ph.D. in Electrical Engineering from University of Edinburgh, UK, in 1996. He is a Professor at College of Computer Science and Engineering, Shandong University of Science and Technology and Fujian University of Technology, and an Adjunct Professor at Flinders University, Australia. He joined the editorial board of the International Journal of Innovative Computing, Information and Control, LNCS Transactions on Data Hiding and Multimedia Security, Journal of Information Assurance and Security, Journal of Computers, International Journal of Digital Crime and Forensics, and the Chinese Journal of Electronics. His research interests include soft computing, information security, and big data mining. He has published more than 300 journal and 400 conference papers, 35 book chapters, and 22 books.

Jianpo Li is a Professor at School of Computer Science, Northeast Electric Power University, China. He completed his Ph.D. in Communication and Information System from Jilin University, Changchun, China, in 2008, and has more than 10 years’ teaching/research experience. He has published more than 25 papers in international journals and conferences and has 12 patents.

Pei-Wei Tsai received his Ph.D. in Electronic Engineering in Taiwan in 2012. He is a lecturer and the deputy course convenor for Master of Data Science at the Department of Computer Science and Software Engineering at Swinburne University of Technology in Australia. His research interests include swarm intelligence, optimization, big data analysis, wireless sensor network, and machine learning.

Lakhmi C. Jain, Ph.D., M.E., B.E. (Hons), Fellow (Engineers Australia), serves at University of Technology Sydney, Australia; University of Canberra, Australia; Liverpool Hope University, UK; and KES International, UK. He founded KES International to provide the professional community with the opportunities for publication, knowledge exchange, cooperation, and teaming. Involving around 5000 researchers drawn from universities and companies worldwide, KES facilitates international cooperation and generates synergy in teaching and research. His interests focus on artificial intelligence paradigms and applications in complex systems, security, e-education, e-healthcare, unmanned air vehicles, and intelligent agents.

Part I

Internet of Things and Its Application

Chapter 1

Designing Intelligent Wearable Product for Elderly Care: A Second User Study

Wen Qi and Liang Zhou

Abstract With the rapid development of China's economy, the aging population is growing. As science and technology advance, wearable devices have become ubiquitous thanks to characteristics such as portability and ease of use. These characteristics make wearable devices feasible tools for serving the elderly, assisting their daily life and addressing their nursing problems. In this paper, the authors summarize their further investigation into how to design wearable products for home-based elderly care. Three aspects of wearable products are studied: product form, functionality, and interaction style. The goal of this study is to collect user requirements for wearable devices and to provide design methods and guidelines for designers. Keywords Design exploration · Wearable device redesign · Questionnaire · User study · Home-based elderly care

1.1 Introduction

Elderly care has become a major social issue for modern society as people's life expectancy hits record highs. During the last 20 years, for example, Chinese people have been living longer as death rates fall. Consequently, the aging population in China continues to increase and has become a major concern of the whole society. In a previous study, the state of the art of community nursing services and the potential of using wearable products for elderly nursing were investigated. In this paper, an extensive questionnaire-based user study is conducted to investigate the detailed design issues of wearable products for elderly care. The goal of this paper is to collect detailed user requirements of wearable products for elderly people and to provide design methods and guidelines for product designers and interaction designers.

W. Qi (B) · L. Zhou Donghua University, Shanghai 200051, China e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_1


1.2 Related Work

1.2.1 Aging Society and Elderly Care

As the most populous developing country in the world, China has seen its aging population increase dramatically [1]. According to the latest UN data, by 2018 China's population aged over 60 will reach 248 million, accounting for 16.7% of the total population. The traditional approaches to supporting the elderly have faced great challenges ever since [2]. The idea of "raising children to be secure in old age" has been deeply rooted in the minds of Chinese people for thousands of years; nowadays, this concept has been shaken and challenged. Therefore, society is constantly seeking solutions to the problems of aging and elderly care. The Chinese government has proposed the "9073" elderly care policy, which suggests that 90% of the elderly population will live in their own homes (aging in place), 7% will be cared for by the community, and 3% will be cared for at nursing agencies [3]. Community-based elderly care refers to the approach in which elderly people mainly live at home while extra nursing services are provided by the nearby community. Its goal is to provide various extra services for the elders in the community so that they can live in their own homes as much as possible. Compared with other forms of elderly care, community care has its own advantages: it allows the elders to enjoy services in a familiar environment; it offers low service cost and a full range of services; it compensates for a lack of family-member care; it eases the pressure of home care; and it reduces the government's financial burden.

1.2.2 Wearable Device

A wearable product is a device worn on the human body. This type of device has become a more common part of daily life as companies have started to offer various devices small enough to wear. Current wearable devices include powerful sensors that can collect, deliver, and process information about their surroundings [4]. The first wearable device was introduced in the 1960s, and devices with modern wearable forms appeared only recently. In the early days, wearable devices were just research prototypes developed in laboratories and hardly used in real life. The most representative breakthrough was the introduction of Google Glass in 2012, which became a symbol of the new generation of wearable products. In 2014, Apple launched the Apple Watch, which has been widely accepted by consumers [5]. The research and development of wearable devices in China started relatively late; currently, XiaoMi and HuaWei are the two major domestic players. At present, most wearable products on the market are designed mainly for sports, fitness, or health applications. However, several issues remain around wearable devices, including privacy, the extent to which they change social interactions, look and feel, and various usability problems.

1.3 Experimental Design

The authors reviewed a large amount of related literature and found that there is limited work on wearable devices for community elderly care [6, 7]. The authors conducted a questionnaire-based user study to investigate the current status of community elderly care services and found that, while current awareness of wearable devices among elderly people is relatively low, the willingness to use wearable devices to assist their daily life is high. In total, 160 subjects were interviewed, drawn from eight different regions, Shanghai, Hangzhou, Sichuan, Hubei, Guangdong, Shanxi, Shandong, and Liaoning Province, with 20 subjects from each region. The interviews were mainly conducted with paper questionnaires, supplemented by online queries. The complete user research lasted 2 months. After preliminary screening, 45.52% of the participants were male and 54.48% female. The subjects were divided into three groups by age for data analysis: 47.56% of participants were 60 to 70 years old, 39.39% were 71 to 80 years old, and 10.35% were above 81 years old. The education levels of the participants were quite diverse: 43.63% finished junior school, 49.06% finished high school, and only 7.31% reached university level or above. This distribution of education level is normal for people aged 60–85 and requires a designer to take into account that users of the wearable product with a low level of education are in the majority. Regarding occupation, 43.07% of participants were government staff, 32.41% were company employees, 15.28% were self-employed, and 9.25% were unemployed.
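The sample arithmetic above can be cross-checked quickly. The snippet below is illustrative only, not the authors' analysis code; it verifies the even split of 160 subjects across the eight regions and totals each reported percentage breakdown. Note that the age-group shares as printed sum to 97.30%, not 100%.

```python
# Illustrative cross-check of the sample figures reported above; all
# numbers are copied from the text, not recomputed from raw data.

regions = ["Shanghai", "Hangzhou", "Sichuan", "Hubei",
           "Guangdong", "Shanxi", "Shandong", "Liaoning"]
total_subjects = 160
per_region = total_subjects // len(regions)
print(per_region)  # 20 subjects per region

breakdowns = {
    "gender": [45.52, 54.48],
    "age group": [47.56, 39.39, 10.35],
    "education": [43.63, 49.06, 7.31],
    "occupation": [43.07, 32.41, 15.28, 9.25],
}
for name, shares in breakdowns.items():
    # each breakdown should total roughly 100% up to rounding
    print(f"{name}: {sum(shares):.2f}%")
```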

1.4 Results

The questions asked in the user study fall into three categories: the form design of the wearable product, its desired functions, and aspects of interaction design.


1.4.1 Form Design

The participants were presented with five different product forms (see Fig. 1.1) and asked to state their preferences. 39.58% of participants preferred option (a), the round shape; 29.29% preferred option (b); 15.05% liked option (c); and only 7.92% and 8.16% chose options (d) and (e), respectively. For most of the elders, the perception of a wearable product is still quite traditional: most believe its form should be equivalent to the shape of a watch, which is why even the second and third most popular options are still close to a circle.

Fig. 1.1 The options of product forms

Regarding product thickness, thicknesses of 12 and 10 mm were more popular than the other options (Fig. 1.2). This shows that elderly users expect the wearable product to be as thin and light as possible to reduce the burden on the wrist.

Fig. 1.2 The result of preferred product thickness

Regarding product weight, 41.13% of participants chose 30 g as the most acceptable weight and 29.95% chose 40 g (see Fig. 1.3). Elderly users prefer the equipment to be as light as possible; surprisingly, however, an ultra-lightweight product is not the most acceptable one for elderly people.

Fig. 1.3 The result of preferred product weight

Wearable products are currently made of different materials, the most common being rubber, metal, and leather. From the responses, similar numbers of participants preferred leather and rubber (66.75% vs. 67.69%), while metal was the least acceptable material for elderly users. A wearable product is attached to the human body and therefore requires a certain degree of skin-friendliness. Leather and rubber provide more softness and comfort than metal; because of its hard and icy nature, a metal wearable product gives wearers a sense of repulsion.

Regarding color, participants preferred dark colors to bright colors: 65.33% preferred dark colors, while only 18.63% were willing to accept a wearable product in a bright color (see Fig. 1.4). Aging changes occur in all the structures of the eye, with varied effects: color vision tends to fade with age, and the threshold of dark adaptation increases. Therefore, most elderly users prefer products in dark colors such as gray. However, some elderly users are still keen on bright colors, reflecting personalities as full of energy and enthusiasm as young people's.

Fig. 1.4 The result of preferred product color

1.5 Conclusion

In this paper, the user requirements of wearable devices among elderly people were investigated. Wearable devices have many advantages and can be valuable tools for assisting the daily life of the elderly. However, the wearable device is still a relatively new thing for the elders, which poses a big challenge to product designers of wearable devices [8–12]. This study summarizes the detailed requirements of wearable product design from three aspects: appearance, function, and interaction. Future work will be to design an experimental prototype based on the results of this study.

Acknowledgments The author would like to thank the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning (No. TP2015029) for financial support. The study is also supported by "the Fundamental Research Funds for the Central Universities".

References

1. Zhang, L., Zhang, L.: Social cooperative mechanism of community home-based care. Chin. Matern. Child. Health 336–337 (2017)
2. Liu, Y., Peng, D., Gong, J., Deng, J., Luo, J.: Research on community health hospitality service model. Contemp. Med. 23(24), 80–82 (2017)
3. Zhang, L.: A comparison of medical service mode between the UK and the US and its enlightenment to China. Contemp. Econ. 18, 10–11 (2017)
4. Xiong, S.: Wearable device modeling and user interface design research. Dissertation, Southeast University (2016)
5. Chen, H., Li, R.: Design of wearable intelligent products based on user experience. Mod. Decor. 7, 78 (2016)
6. Dong, W., Lei, J.: Review of application and problems of wearable devices in medical and health field. Chin. Digital Med. 12(08), 26–28 (2017)
7. Xu, J.: Wearable device design strategy in home care service. Packag. Eng. 37(12), 125–128 (2016)
8. Huang, C.: Research on application design based on user experience. Dissertation, Shanxi University of Science and Technology (2012)
9. Liu, Y.: User interface based application design research: take the example of female physiology app. Art Technology. http://kns.cnki.net/kcms//33.1166.TN.20171027.1415.186.html (2017)
10. Jing, H.: User experience design elements and their application in product design. J. Chi. Feng Univ. 33(18), 44–45 (2017)
11. Godman, H.: Two questions can reveal mobility problems in seniors. Harvard Health Blog, Harvard Health Publications. https://www.health.harvard.edu/ (2013). Accessed 04 April 2016
12. Salthouse, T.A.: When does age-related cognitive decline begin? Neurobiol. Aging 30(4), 507–514 (2009). https://doi.org/10.1016/j.neurobiolaging.2008.09.023. PMC 2683339. PMID 19231028

Chapter 2

Application Research of BIM Technology in Analysis of Green Building HVAC System Long Jiangzhu, Ge Yijun, Huang Xiaolin and Li Yingjie

Abstract In order to efficiently and accurately realize the analysis and optimization of the green building HVAC system, a method of applying BIM technology was proposed in this paper by analyzing the shortcomings of the traditional HVAC system and the advantages of the new BIM technology. Then, this method was used to analyze the HVAC system of an office building in Fuzhou Yango University, from which the relevant parameters of the building and a corresponding optimization plan were obtained. The empirical results show that the method is convenient, fast and objective, making the analysis of the green building HVAC system more accurate. Based on this optimization, the HVAC system design of the building can be rationalized, thus reducing energy consumption. Keywords BIM · Green building · HVAC analysis · Energy saving

2.1 Introduction

With the continuous improvement of people's living conditions, what people demand is no longer decorative luxury but a good living environment and a high-quality building configuration. Among all kinds of energy consumption in China, building HVAC accounts for a large part, and the HVAC energy consumption of high-rise buildings has become an important component of building energy consumption [1]. Therefore, an energy-saving, high-comfort HVAC system has become a goal people pursue, which places higher requirements on the analysis and optimization of building HVAC systems. Here, a method that uses the new BIM technology to analyze and optimize the green building HVAC system is proposed.

L. Jiangzhu (B) · G. Yijun · H. Xiaolin · L. Yingjie Yango University, Fuzhou, Fujian 350015, China e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_2


2.2 Shortcomings of Traditional HVAC System Analysis

2.2.1 Generic Designs Lacking Personalization

HVAC engineering is a comprehensive and complex undertaking. Designers need to consider many factors to produce a scientific, reasonable, and applicable HVAC system. In the design process, however, the workload is heavy because the number of designers is limited. To speed up and complete the design work, most designers cannot visit the construction site to inspect it. Thus, the HVAC systems designed are mostly general-purpose and not necessarily the most suitable [2]. At present, with the development of the national economy and the requirements for green, energy-saving buildings, various types of buildings are constantly emerging, so the requirements for HVAC system design inevitably differ. Clearly, universal designs will be less and less able to meet the needs of green building development.

2.2.2 Many Influencing Factors, Complicated Calculation

At present, many factors affect the design of HVAC systems, and these factors are often dynamic [3]. Traditional HVAC calculations are cumbersome: once an operating parameter or a dimension changes, everything must be recalculated. The calculation workload is huge and not conducive to comparing data, which not only wastes manpower, material, and financial resources but also greatly reduces designers' working efficiency.

2.2.3 Complex Form, Poor Visualization

Existing HVAC systems in China take many forms. When choosing a specific form, it is necessary to weigh a variety of factors, especially the energy-saving factor, which is the most basic and most important one in system design [3]. However, most current HVAC design drawings are two-dimensional CAD drawings, which make it difficult for viewers to imagine the three-dimensional result; given the complexity of HVAC system forms, it is inevitably hard for owners or non-professionals to understand them.


2.3 Advantages of BIM Technology

2.3.1 Personalized Simulation

BIM can manage building information through modeling and simulate it digitally. The information includes three-dimensional models of buildings, materials, mechanics, structures, equipment, various physical properties, data statistics [4], and so on. Using BIM technology, individualized three-dimensional effects can be simulated for various buildings; construction and installation design can then be carried out, and on this basis the analysis and optimization of the green building can be conducted.

2.3.2 Convenient and Efficient Calculation

BIM technology achieves comprehensive synergy, and the effect of its parameterized calculation is obvious: if any operating parameter is modified, the output automatically changes in every dimension at the same time. It breaks with the traditional paper-based calculation method and realizes dynamic adjustment of building data. It can also extract data quickly, support better data comparison, save calculation time, and improve work efficiency.

2.3.3 Obvious Visual Effect

BIM transforms the conventional two-dimensional expression into a three-dimensional visual model whose information is visual and intuitive. It not only helps non-professionals understand the building and installation ideas through clear three-dimensional models but also enables self-checking: finding shortcomings in the model design, verifying whether it meets the energy-saving requirements, and so on. The three-dimensional effect can be presented throughout the process, facilitating timely and efficient coordination and communication management.

2.4 Application of BIM Technology in HVAC System Analysis

The indoor environment is the objective condition that green building residents directly experience, and it is also a direct factor by which building users judge the quality of the project. The main influencing factors of the indoor environment of a green building are ventilation conditions, lighting conditions, and the control of air-conditioning systems [5]. That is, the quality of a building's HVAC system is directly related to the comfort of the indoor environment. For these main influencing factors, BIM technology can be used for 3D modeling to simulate the actual construction situation and to comprehensively analyze objective factors such as climate, wind direction, and lighting in the area where the building is located, obtaining the corresponding operating parameters. With BIM, designers can accurately obtain the heat transfer coefficient and ventilation during the use of green buildings and optimize the parts of the building that do not meet the green requirements according to the obtained data.

2.4.1 The Basic Idea

To perform a BIM HVAC system analysis, a BIM building model that the software could recognize was needed first. Note that the BIM architectural model concerned here was an architectural frame composed of walls, doors, windows, etc.; the room objects and architectural outlines derived from the frame were also included in this model. It was therefore necessary to check the BIM model and distinguish between inside and outside rooms. Next, the parameters of the engineering structure and the enclosure structure were set via the object characteristic table, according to the existing heating load design and the construction of the building; then the thermal and cold load analysis was performed. Roofs and slabs were processed automatically by the system without modeling. The BIM technician set the basic conditions through the thermal parameters of each room object and calculated and analyzed the room's cold load by setting its fresh air, characteristics, lighting, equipment, and so on. After calculation and analysis, the results were compared with the normative indicators. Wherever the design did not comply with the specifications, the number, size, and position of the windows in the green design scheme were adjusted. After such repeated calculation, analysis, and optimization, a design scheme for the building HVAC system with good air quality and ventilation conditions was finally obtained, as shown in Fig. 2.1.
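The repeated calculate, compare, and adjust cycle described above can be sketched as a simple loop. Everything below is a toy stand-in: the load formula, the adjustment rule, and the 150 W/m2 limit are assumptions for illustration, not a real BIM software API.

```python
# Toy sketch of the calculate-compare-adjust loop described above.
# The load model, the limit, and the adjustment rule are illustrative
# assumptions only.

CODE_LIMIT = 150.0  # assumed normative load index, W/m2 (illustrative)

def load_index(room):
    """Toy load model: base load plus a contribution per m2 of window."""
    return room["base_load"] + 40.0 * room["window_area"]

def shrink_windows(room):
    """Toy adjustment rule: shrink the window area by 10% per round."""
    adjusted = dict(room)
    adjusted["window_area"] *= 0.9
    return adjusted

def optimize(rooms, max_rounds=50):
    """Repeat load analysis and window adjustment until every room
    meets the limit, mirroring the iterative workflow of Fig. 2.1."""
    for _ in range(max_rounds):
        offending = [n for n, r in rooms.items() if load_index(r) > CODE_LIMIT]
        if not offending:
            break
        for name in offending:
            rooms[name] = shrink_windows(rooms[name])
    return rooms

rooms = optimize({"1011": {"base_load": 100.0, "window_area": 2.0}})
print(round(load_index(rooms["1011"]), 1))  # at or below the 150.0 limit
```

The design choice mirrors the text: loads are recomputed after every adjustment, and the loop stops as soon as all rooms comply with the norm.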

2.4.2 Specific Application

Project Overview

An office building of Yango University was taken as an example, and BIM technology was applied to analyze its HVAC system. The project is located at Yango University in Mawei District, Fuzhou City, Fujian Province. It is a multi-storey office building with a building height of 19.2 m, and the surrounding area is dominated by teaching buildings.


Fig. 2.1 BIM HVAC system analysis flowchart

BIM HVAC System Analysis Preparation

(1) The designed building plan (Fig. 2.2) was imported into the BIM HVAC analysis software.
(2) The grid, doors, and windows were identified, and any mismatched data were corrected.
(3) The graphics were checked to verify that the building plan was compatible with the BIM HVAC software's data requirements, and unsuitable areas were corrected. This step was the software's cognitive matching setting.
(4) A combined floor box was created to search the rooms. The room names were then displayed in the room generation option, with numbering starting from 1001.
(5) For the load calculation, the type of each room was set and the room parameters were redefined, such as ordinary office, bathroom, ward, and living room, according to the designed heat setting, personnel density, lamp heat dissipation, design room temperature, design humidity, fresh air volume, and other parameters.
(6) Before the load calculation, the project name, project information, and geographical location of the project were set according to the design.

Once these tasks were completed, the BIM HVAC system analysis could be started.


Fig. 2.2 First floor plan of an office building of Yango University

Analysis of BIM HVAC System

Thermal Load Analysis

First, the air-conditioning heat load was selected and the fresh air was calculated. Then heating that considers the inner enclosure structure was chosen, together with the design temperature method. Next, the BIM HVAC system analysis and calculation were carried out, and the result was saved. Finally, the room thermal load summary table (Table 2.1) was obtained; the following figures take the first floor of the building as an example.

Table 2.1 Room thermal load summary table

Floor | Room no. | Room name | Area (m2) | Volume (m3) | Indoor calculated temperature (°C) | Enclosure structure (W) | Thermal load (W) | Local indicator (W/m2)
1 | 1007 | Senior officer | 78.2 | 281.52 | 20 | 10270.12 | 10270.1 | 131.33
1 | 1009 | Clinic | 60.2 | 216.72 | 16 | 2361.86 | 2361.86 | 39.23
1 | 1010 | Living | 6 | 21.6 | 20 | 1621.54 | 1621.54 | 270.26
1 | 1011 | Ordinary office | 58.65 | 211.14 | 20 | 8659.3 | 8659.3 | 147.64
1 | 1012 | Pharmacy | 59.34 | 213.62 | 16 | 3294.33 | 3294.33 | 55.52
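The local indicator in the last column of Table 2.1 is the room's thermal load divided by its floor area. A quick check of the published rows, using only the numbers from the table:

```python
# Cross-check of Table 2.1: local indicator (W/m2) = thermal load / area.
# All numbers are copied from the table, not recomputed from the model.

rows = {
    # room no.: (area m2, thermal load W, reported indicator W/m2)
    "1007": (78.20, 10270.12, 131.33),
    "1009": (60.20, 2361.86, 39.23),
    "1010": (6.00, 1621.54, 270.26),
    "1011": (58.65, 8659.30, 147.64),
    "1012": (59.34, 3294.33, 55.52),
}
for room, (area, load, reported) in rows.items():
    computed = load / area
    # every published indicator matches load/area to within rounding
    assert abs(computed - reported) < 0.05, room
    print(f"{room}: {computed:.2f} W/m2")
```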


From Table 2.1, it is clear to see the indoor calculated temperature, thermal load, and load index of each room on the first floor. For example, the indoor calculated temperature of some rooms is 16 °C, while the national standard "Public Building Energy Efficiency Design Standard" GB50189-2005, Article 3.0.1, specifies the indoor design temperatures for heating and air conditioning of typical civil buildings: the winter heating temperature of offices, residences, and similar buildings should not be higher than 20 °C, and the summer air-conditioning temperature of public buildings should not be lower than 25 °C. Therefore, according to the specification, those rooms needed to be optimized.

Cold Load Analysis

The calculation method, calculating the fresh air and accurately calculating the solar radiation heat and the heat condition of the sloped roof, was chosen, and the BIM HVAC system analysis and calculation were performed. The result was saved, and the room cold load detail table (Table 2.2) was obtained.

Table 2.2 Room cold load detail table

No. | Height (m) | Area (m2) | Indoor design temperature (°C) | Latent heat load (W) | Wet load (kg/h) | Cold load, maximum (W) | Time of maximum (h)
1011 | 3.6 | 58.56 | 26 | 3863 | 5.44 | 9569.67 | 15

As can be seen from Table 2.2, the design temperature of the ordinary office is 26 °C, and it should not be lower than 25 °C according to the specifications. Practice has proven that the human body feels comfortable at this summer indoor temperature, and pursuing too low an indoor temperature results in excessive power consumption.

Optimization Adjustment

Based on the thermal and cold load analysis results of the BIM HVAC system, the building was optimized according to the corresponding specifications as follows.

(1) The window-to-wall area ratio was regulated. The "Code for Thermal Design of Civil Buildings" GB 50176-93 specifies that the window-to-wall area ratio of civil buildings should be no more than 0.2 in the north direction. After comparing the design specifications with the window-to-wall area ratio table obtained from the HVAC system analysis above (Table 2.3), it was obvious that the north window-to-wall area ratio exceeded the requirement. Therefore, the size and height of the windows were optimized to meet the design specifications; it was then calculated that the heat consumption of the outer windows accounted for 35–45% of the total heat consumption of the building, as shown in Table 2.4. In this way, the results of the analysis are in line with the design specifications.

Table 2.3 Window-to-wall area ratio of each orientation before optimization

Orientation | Window area (m2) | Wall area, including holes (m2) | Window-to-wall area ratio
East | 122.190 | 558.662 | 0.219
South | 109.510 | 594.540 | 0.184
West | 97.950 | 545.540 | 0.180
North | 185.460 | 594.540 | 0.312
Roof | 0.000 | 613.492 | 0.000
Average | 515.110 | 2292.917 | 0.225

Table 2.4 Window-to-wall area ratio of each orientation after optimization

Orientation | Window area (m2) | Wall area, including holes (m2) | Window-to-wall area ratio
East | 99.370 | 558.662 | 0.178
South | 85.830 | 594.540 | 0.144
West | 80.230 | 545.540 | 0.147
North | 114.480 | 594.540 | 0.193
Roof | 0.000 | 613.492 | 0.000
Average | 379.910 | 2292.917 | 0.166

(2) Energy-saving windows with good airtightness were used, and doors and windows were designed with new insulating, energy-saving materials. According to the data, as the number of air changes in a room decreases, the cooling energy per unit area of the building can be reduced by about 8%. Therefore, doors and windows with good airtightness should be used in the design, or sealing strips added and suitable insulation materials selected, to improve the airtightness of the doors and windows, reduce the heating load of the room, and achieve the energy-saving effect of insulation. For example, the ordinary insulating glass was replaced by 6 + 12A + 6 high-transmittance low-emissivity glass; as the heat transfer coefficient decreased, the heat load and load index of the room also decreased. In addition, a separate sealing structure could be used: the sliding window could adopt a double-strip, double-flange, four-seal structure, and the casement window could utilize the equal-pressure principle. The heat load of the room can also be reduced by adding a sealing strip and a reasonable selection of the heat-insulating material to achieve the heat preservation effect.
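The ratios in Tables 2.3 and 2.4 are simply window area divided by wall area (including holes). The check below, using only the published north-facade figures, confirms that the optimization brings the north ratio under the 0.2 limit cited from GB 50176-93:

```python
# Window-to-wall ratio = window area / wall area (including holes).
# North-facade figures copied from Tables 2.3 and 2.4.

LIMIT_NORTH = 0.2

before = 185.460 / 594.540  # Table 2.3, before optimization
after = 114.480 / 594.540   # Table 2.4, after optimization

print(f"north before: {before:.3f}")  # 0.312, exceeds the limit
print(f"north after:  {after:.3f}")   # 0.193, within the limit
assert before > LIMIT_NORTH
assert after <= LIMIT_NORTH
```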
In this way, the BIM HVAC analysis software can accommodate the new optimized design scheme by changing the material's heat transfer coefficient, the orientation correction, the wind attachment, and other parameters, so as to carry out repeated optimization calculation and analysis and reach a reasonable energy-saving HVAC scheme. In addition, to improve the indoor livable environment, it is also possible to adopt fresh-air regulation technology and to place green potted plants indoors. In the BIM HVAC load software, these schemes can be accommodated in the new optimization scheme by changing the fresh air volume, performing the calculation and analysis, and reporting the heating and cooling load data.

2.5 Conclusion

The empirical evidence shows that using BIM technology to analyze the HVAC system is convenient, fast, and objective, making the analysis of the green building HVAC system more accurate. Nowadays, BIM technology and green buildings are major trends in the development of China's construction industry. Energy conservation, emission reduction, and sustainable development require the promotion, development, and improvement of the green building system, and applying HVAC analysis to green buildings reduces building energy consumption while meeting people's residence requirements. Hence, the application of BIM technology to HVAC systems in green buildings has emerged. This study demonstrated the feasibility and superiority of applying BIM technology to the analysis of green building HVAC systems, which can be further improved with the promotion of BIM technology and the continuous exploration of its value in the future.

References

1. Liang, H.: Talking about the current situation and development trend of HVAC automatic control system. Industrial (09), 300 (2016)
2. Wu, W.: Discussion on energy saving in HVAC engineering system. Juvenile 6, 143–144 (2016)
3. Zhao, L.: Energy-saving design of building HVAC. China Building Materials Technology 25, 33–43 (2016)
4. Xiang, Y.: Application of BIM technology in building energy-saving design. Build. Mater. Decor. 05, 123–124 (2016)
5. Liu, Q.: Analysis of the application of BIM in green buildings. Sichuan Cement (1), 122 (2016)
6. Baidu Library: Energy-saving calculation method and energy-saving effect analysis for improving indoor air-conditioning temperature. https://wenku.baidu.com/view/7ba782a15901020207409ced.html?qq-pf-to=pcqq.c2c. Accessed 18 March 2019

Chapter 3

Research on the Development of Fashion Industry in the “Internet+” Era Hongkun Peng

Abstract The Internet and the fashion industry have become important engines of China's economic and social development under the new normal economic situation. In the future, China's economic development will largely be driven by the e-commerce economy, and the Internet platform is pushing it to a new climax. "Internet+" and the "fashion industry" are not simply a combination of the two; rather, information and communication technology and the Internet platform are used to deepen the integration of the Internet with the fashion industry and create a new development ecology. "Internet+" emphasizes the large-scale production of highly personalized products. Under the condition of flexible, high-speed allocation of production factors, applying the idea of "Internet+" to the design of fashion industry development brings a comprehensive innovation in technology, products, and services, changing not only people's way of life but also their traditional industries. Through information interconnection technology, the brand industry's dependence on labor can be greatly reduced, users' individualized needs can be met, and circulation costs and the product production cycle can be reduced to a low point. "Internet+" has spawned new forms of fashion industry brands, such as pre-sale, Internet customization, and O2O clothing customization, and enterprises have set up Internet design platforms to attract consumers. Keywords "Internet+" · Fashion industry · Innovative design

H. Peng (B)
Department of Environmental Design and Visual Communication, School of Art, Northeast Electric Power University, Jilin 132011, China
e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2020
J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_3



3.1 Research Background of Fashion Industry in the Era of “Internet+”

At present, with the deep integration of industrial technology and information technology, and the interweaving of the Internet, electronic information technology, the Internet of things, software, and automation, a new value model for manufacturing, intelligent manufacturing (“Internet+ manufacturing”), has emerged. In the era of “Internet+”, the transformation of Chinese manufacturing is driven by the shortage of labor supply, the rise of labor costs, and the declining willingness of the new generation of workers to enter manufacturing [1]. According to the forecast of the Internet Data Center, Internet-based manufacturing will penetrate the value-chain links of R&D, production, sales, logistics, and after-sales; in 2019, this trend will extend further into products themselves. In the modern fashion-industry market, differences in lifestyle and consumption concepts mean that standardized humanized and personalized design products struggle to meet consumers’ individual needs for the fashion industry; these are the problems that fashion-brand development must face. “Internet plus design” emphasizes “intelligent design.” Under the premise of mass-producing highly personalized products with highly flexible production factors, applying the idea of “Internet+” to fashion-industry design will bring a full range of innovative new technologies, new products, and excellent services, changing not only the way people live; information technology has also changed the production mode of traditional manufacturing.
Through information-interconnection technology, the manufacturing industry’s dependence on labor can be greatly reduced, users’ individualized needs can be met, and circulation costs and product production cycles can be brought down. This means that, in the future, consumers will be able to enjoy exclusive personalized fashion products and services at prices lower than individual custom tailoring, while avoiding long waiting times. The “Internet+” era has become part of the national development strategy, and the fashion industry must keep pace to develop at its best.

3.2 The Development of Fashion Industry in the Era of “Internet+”

“Internet+” has become a high-frequency word across industries in recent years, especially in the fashion industry as a representative of traditional industries. According to the China Fashion Brand Research Center, China has more than 10,000 fashion-industry brands, and emerging fashion brands continue to appear. The Internet has extended from the simple sales end to the whole fashion-industry chain, including the design, market, sales, and service links. The core aspirations of major enterprises and merchants emerge in Internet form from all directions and perspectives. With the advent of economic and commercial globalization, fast-fashion brands have gained a foothold in China. The aesthetic structure and consumption patterns of domestic consumers have changed, and the consumption structure has been further segmented, creating development opportunities on the “Internet+” platform. Years ago, the speed of fast-fashion brands in design, sales, logistics, and other links was surprising; today, with the arrival of the era of personalized demand, “fast” has become the keyword of the upgrading and transformation of the domestic fashion industry. Fast fashion, fast design, and fast channels are the three pillars of the fashion-design industry. Fast fashion refers to the ability to predict market trends quickly and accurately, shorten the product-planning cycle, and reduce initial investment. Fast design means shortening the design cycle using a fast network platform; for example, some fashion brands control the fashion cycle to about 10–13 days, which greatly improves the speed of fit with the market. On the basis of fast fashion and fast design, the efficiency of brand production, distribution, and circulation can be guaranteed to the greatest extent. According to the latest research released by a market research company, fashion-industry sales account for the largest proportion of online retail. At the same time, surveys on the development of e-commerce show that fashion-industry products are the goods most purchased by buyers on the Internet. According to Internet Data Center figures, the sales of Chinese fashion-brand e-commerce increased by 22% in 2018. The brand-consumption mode of the fashion industry has gradually been replaced by online shopping.
Traditional brand enterprises are becoming more and more transparent in people’s lives [2] (Fig. 3.1).

Fig. 3.1 “Internet+” is applied to the integration of fashion industry process


3.3 Research on the Development of Fashion Industry in the Era of “Internet+”

The vigorous development of the fashion industry relies to some extent on the “Internet+” information platform, which has had a huge impact on China’s traditional design industry. In the information age, fashion-industry brands should pay attention to humanized and personalized design and creativity. As people’s shopping and consumption demands take “Internet” form, China’s fashion-design industry is showing a new design appearance and trend.

3.3.1 “Internet+” Application

“Internet+”, as an innovative concept, has begun to integrate with various industries, such as industry, agriculture, and finance. With the help of information and communication technology and the Internet platform, tremendous changes have taken place in the fashion industry. E-commerce has given birth to new forms of brand design in the fashion industry, such as pre-sale, network customization, and O2O customization [3]. Enterprises attract consumers by establishing Internet design platforms. The application of “Internet+” to the fashion-brand industry is mainly reflected in the design of network design platforms. For example, the cloud-customization service provided by a clothing brand aims to realize a series of services such as intelligent production, intelligent typesetting, and intelligent design. After the buyer chooses, the system automatically generates templates according to the buyer’s requirements and transmits them to the factory for processing and production [4] (Fig. 3.2).

Fig. 3.2 Profit value created by platform


3.3.2 Fashion Industry Development

With the continuous progress of society, the phenomenon of fashion attracts people’s attention. The earliest research on fashion can be traced back to the second half of the nineteenth century, and fashion has been studied from different disciplines, such as sociology, psychology, cultural communication, aesthetics, and history. Fashion runs through all kinds of groups, occupations, fields, classes, and ages. In this era of rapid economic development, information explosion, and changing concepts of life, fashion is everywhere: media, advertising, film and television, cosmetics, clothing, automobiles, diet, home furnishing, tourism, science and technology, sports, telecommunications, commerce, culture, art, music, and even environmental protection all set fashion trends. It is all-pervasive, spreading almost instantaneously and drawing in countless people at its margins. Beyond the clothing industry, the fashion industry has also flourished in product design, architectural design, graphic design, and other fields. Many well-known fashion brands in China have given the fashion industry a new name, the creative industry, which raises its technical content and spiritual strength [5]. The narrow concept of the fashion industry mostly centers on the fashion field; the broad concept covers the garment and accessories industry, automobiles, electronic products, and other manufacturing industries, as well as travel, sports, and fitness services [6]. The fashion industry is a comprehensive industry, and the scope it covers will continue to expand with people’s understanding of fashion, economic development, and the growth of human material and cultural needs. This characteristic makes it difficult to define the boundaries of the fashion industry accurately and clearly, and so far there is no unified criterion for its division.
In the development of the fashion industry, spillovers from information and communication technology act on its design and marketing links to improve the capital utilization rate of the industry. With the advent of the “Internet+” era, how to grasp development opportunities, form a corresponding brand strategy and supply-chain system, promote the transformation and upgrading of the brand industry, and create new profit space has become an important topic for the whole fashion industry to consider.

3.3.3 The Development Mode of Fashion Industry in the Era of “Internet+”

Improvement of Innovation Ability. Mr. Li Wudao wrote: “As the two engines of economic development, technological innovation and cultural creativity have both connections and differences in functions.” It is generally recognized that technological innovation focuses on promoting hard power, while cultural creativity focuses on building soft power. Leading development with soft power while providing basic support with hard power has proved a successful experience in practice [7]. The best choice for Chinese fashion-industry brands is the construction of creative soft power. Soft power is practiced all over the world, and its value can be unlocked through the construction of fashion-brand value, as in countries such as France, Austria, Germany, Italy, and the Netherlands, which are famous for their cultural and musical creativity. Many fashion-design brands in China face a large-scale “closing tide” because of a lack of creative design and weak brand culture, and many are actively transforming from production-oriented to design-oriented. Innovative ideas have thus become a major breakthrough point for domestic apparel-design enterprises [8]. Today, in the Internet era, consumers’ aesthetics have gradually settled amid the storm of Internet media information, and they have begun to pursue a simple way of life. While the rapid development of Internet media brings convenience, the popularization of intelligent electronic terminals creates new demands for the network visual display of fashion brands, which indicates that the design of fashion-brand image needs to further improve the user’s visual interaction experience [9].

Pay Attention to the Transformation of Fashion Industry Chain. Since 2015, many traditional enterprises have been hit by the “Internet” wave and have collectively gone online. With the traditional brand industry affected by the Internet revolution, transforming the production terminal into the design terminal is imperative. With the help of e-commerce platforms, leading fashion-brand enterprises can upgrade the industry chain by optimizing enterprise structure, upgrading the fashion industry vertically while expanding horizontally, and seeking new development space. The vertical promotion of brand enterprises refers to establishing terminal docking with different brands.
The key to the transformation and upgrading of brand enterprises is to improve the customer-to-factory model and to relieve inventory pressure through Internet channels. Inventory always affects the operating efficiency, and even the success or failure, of enterprises, and reducing it is a perennial problem. On the Internet platform, horizontal expansion of market space mainly uses big data to detect the consumption background and thus position the market accurately. The management system of a fast-fashion brand can establish related platforms on the design, production, and sales sides and feed back in real time the latest inventory balance and the best-selling styles and series of retail stores or affiliated stores. According to this feedback, the design cycle is shortened, and customers and factories are connected directly.

Focus on the Construction of Fashion Brand Power. The construction of fashion brand power is the core of the development of domestic enterprises in the context of “Internet+”. According to extensive market surveys, the design terminals of many Chinese fashion-brand enterprises are imperfect; problems of copying, imitation, and piracy are common, and the road to distinctive branding is difficult. The fundamental reason is that the biggest obstacle hindering development lies in the core value of the brand. Li Kailuo, a well-known expert on fashion-industry economics and Dean of the Guangdong Institute of Fashion Industry Economics, has pointed out that the shortcoming of enterprise development is the extensive, quantity-based brand-management mode, and that the value re-engineering of brands has become the key to building unique brand strength. The connotation of brand culture is the core of brand-value re-engineering; only enterprises that forge a firm brand image and brand values rooted in a deep cultural foundation can develop. In the Internet age, fashion brands of every kind are proliferating, and choosing a representative brand is a big problem for consumers [10]. Therefore, the fashion industry needs to build unique brand power in the network era, in expression, marketing, promotion, design, service, and other respects; strong brand construction is conducive to enhancing market influence and expanding market share.

3.4 The Development Trend of Fashion Industry in the Era of “Internet+”

Nowadays, life in the Internet era is more colorful and has shortened the distance to the world’s leading fashion industries. Consumers can learn about different fashion styles through the Internet and therefore place higher demands on the speed of fashion-product updating. In many fast-fashion brands’ visual designs, redundant decoration has been abandoned to cater to the preferences of users in the new era of Internet commerce.

3.4.1 Development Trend of “Online + Offline”

Many enterprises choose the online business model, while some are moving from “online” to “offline”: consumers want experiential consumption, and the number of offline physical stores has increased sharply. “Online + offline” is not simply two coexisting models; the two business models are integrated. For example, Dangdang combining physical bookstores with the Internet, Taobao combining traditional shopping with the Internet, and Ctrip combining the traditional travel agency with the Internet all tell us that the combination of the fashion industry with the Internet is imperative.


3.4.2 Terminal Communication Power Based on “Internet+”

“Internet+” is like a wave sweeping the major fashion industries at home and abroad, especially the innovation-oriented garment-design industry. Advanced enterprises face mode innovation and industrial upgrading; their target terminal is always a huge consumer group, which is why so many enterprises pursue the Internet trend. It has changed the traditional communication mode and marketing channels between enterprises and customers, from passive acceptance to active outreach. The fashion industry needs to reposition itself in the market, seek a favorable brand core value, think about consumers’ real needs, and change the design terminal, letting Internet data replace enterprise presupposition and truly implementing the spirit of enterprise in the era of “big data”. An enterprise’s business model can change from a design terminal determined by designers and business managers to one determined by consumer data. Many fashion brands focus on team building, maximize the reliability of purchased market data, and shift their design terminals toward consumers’ needs.

3.5 Conclusion

In the face of new challenges, especially the advent of the “Internet+” era, enterprises are determined to achieve cooperation and collaborative innovation in the supply chain through information and data-driven decision-making. In the process of integrating with traditional industries, “Internet+” also brings innovation in production modes and production technology to the fashion industry. Introducing the design concept of “Internet+” into fashion-industry design and transforming design with Internet information technology will change the aesthetics, function, and concept of fashion brands. Fashion designers’ work efficiency will be greatly improved, customer satisfaction will be enhanced, and consumers’ individual needs will be met to the greatest extent. The change from passive acceptance of design to creative design reflects the people-oriented design concept and corresponds to the “Internet+” era, in which designers should shift from dominant design to user assistance, give consumers full play, provide professional support for fashion-industry design platforms, and help consumers meet their needs. Overall, a complete system platform provides theoretical guidance for the fashion industry, and product design can meet the high requirements of consumers. The Internet platform gives consumers deep participation in the design process and in stating design requirements, helping to form product designs that use information-technology tools to complete users’ personalized products efficiently and with high quality. Publicizing fashion-industry information through the Internet enhances the popularity of fashion brands and forms a good atmosphere for the transformation and development of the traditional industrial-design industry into the modern fashion industry (Fig. 3.3).

Fig. 3.3 The correlation between the number of users and user value

Acknowledgements In this paper, the research was sponsored by the 13th Five-year Social Science Research Project Funded by the Education Department of Jilin Province (Project No. JJKH20190721SK).

References

1. Wang, A.: Intelligent technology selection of construction machinery products. Construct. Mach. Maint. 4, 8–10 (2015)
2. Li, L., Xie, H., Cao, R.: Research on customer loyalty of network customized clothing under O2O mode. J. Silk 52(1), 42–43 (2015)
3. Sun, B.: On 10 issues that equipment manufacturing enterprises should pay attention to in the internet era. Yihu Instrument 8, 34–35 (2015)
4. Zheng, Y.: On fashion. Zhejiang Soc. Sci. 2, 141–148 (2006)
5. Feng, J.: How to improve the economic benefit potential of enterprises? Integrating ICT products with the three major lifecycle issues of the enterprise. Econ. Trade Pract. 4, 56–58 (2015)
6. Li, W.: Innovation Changes China, 1st edn. Xinhua Press, Beijing (2009)
7. Zhang, J.: Measurement and effect of Shanghai fashion industry agglomeration. Commer. Trade 2, 55–56 (2014)
8. Wang, Q.: Research on the development of the garment industry under the situation of “Internet+”. Fash. Design 3, 79–80 (2016)
9. Zhao, L.: Economic analysis of the fashion industry. Yunnan Soc. Sci. 3, 33–36 (2011)
10. Zhao, H.: Fashion brand cross-border cooperation from the perspective of rational consumption. Acad. Explor. 8, 52–53 (2014)

Chapter 4

Design of Mini Pets Feeding Intelligent Home System Based on IoT

Renbiao Wang

Abstract Mini-pet owners often cannot take care of their pets remotely, and the professional pet cases and feeding devices on the market cannot meet the requirements of remote control and real-time monitoring. To address this, an intelligent home system for mini-pet feeding was designed. The system is based on a three-layer Internet of Things architecture comprising the sensing layer, network layer, and application layer. A CC2530 works as the coordinator of the lower computer and completes data acquisition over the LAN through the ZigBee wireless transmission protocol. An STM32 microcontroller is used as the core controller of the lower computer to drive the interface circuits of each execution component. An ESP8266 Wi-Fi communication module is mounted on the STM32 to form an embedded gateway, which completes data transmission between the lower computer and the upper computer and the mobile intelligent terminal. The system proposes a proportional on–off temperature adjustment method, which realizes automatic temperature regulation without human intervention. Prototype aging tests demonstrate that the system operates well and has stable performance. The solution has promotion significance and application value for the scientific feeding of unattended mini pets.

Keywords IoT · Intelligent home · ZigBee · Proportional on–off

4.1 Introduction Keeping mini pets is becoming a fashion for young people; however, due to the busy work, many pets cannot be taken care of when the owner goes out [1]. There are few intelligent feeding systems for pets that can achieve automatic feeding, heating, and environmental monitoring on the market. Reference [2] proposed a multifunctional intelligent pet nest system based on 51 single-chip microcomputers, which can realize temperature and humidity, dangerous gas, and other information monitoring and R. Wang (B) Zhonghuan Information College Tianjin University of Technology, Tianjin 300380, China e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_4


control of feeding devices. Nonetheless, the system lacked some modules such as gateways and servers and cannot realize remote monitoring and control. Reference [3] proposed an intelligent remote control system based on pet feeding, which can realize remote monitoring of environmental information and remote control of executing components. However, the system adopted GPRS packet communication technology to complete remote data transmission, whose communication rate can only reach 115 kbps. In addition, the communication is not stable enough to meet the requirements of real-time transmission. This paper designs a set of intelligent home system for mini pet feeding, which adopts a three-layer Internet of architecture (IOT) with sensing layer and network layer. The system realize real-time collection of local and remote information such as temperature, humidity, pet heart rate, body temperature, and so on, and remote control of executing components such as feeding, heating, and so on. The system uses a proportional on–off temperature adjustment method to adjust the temperature without human intervention automatically and has the characteristics of reliable performance and stable operation.

4.2 System Design

4.2.1 Overall System Design

The system consists of a cloud platform, an upper computer, a lower computer, and a mobile intelligent terminal. The lower computer consists of the CC2530 coordinator, terminal nodes, and the sensors deployed on each terminal node. The ZigBee wireless transmission protocol is used not only to complete the LAN networking [4] but also to transmit parameters such as temperature, humidity, and pet body temperature. The embedded gateway is composed of an STM32 single-chip microcomputer carrying an ESP8266 Wi-Fi module and is responsible for communication between the lower computer and the upper computer and the mobile intelligent terminal. The upper-computer software can monitor the parameter information in real time and control the operation of execution components such as the pet feeder and the refrigerating device. The mobile terminal realizes remote information monitoring and execution-component control. The system structure diagram is shown in Fig. 4.1.

Fig. 4.1 System overall structure diagram

4.2.2 Embedded Gateway Design

Hardware Circuit Design of Embedded Gateway. The hardware circuit of the embedded gateway is composed of the lower computer’s core controller, the Wi-Fi communication module, and the peripheral interface circuits. The core controller uses the STM32F103ZET6 chip based on the Cortex-M3 core architecture [5] and is responsible for uploading the data collected by the terminal nodes of the local network to the cloud service platform through the Wi-Fi module, as well as to the upper-computer monitoring interface through the serial port. In addition, as the core controller of the lower computer, it drives the system’s execution-component interface circuits according to commands from the upper computer. The Wi-Fi communication module adopts Ai-Thinker’s module based on the Espressif ESP8266 chip [6]. The pin-configuration electrical schematic of the core controller is shown in Fig. 4.2. The external interface circuit is composed of a serial-port circuit, relays, and driving circuits. Pin PB11 controls the fan for system dehumidification, and pin PB1 is responsible for system humidification. Pin PB2 is connected to the TEC1_12706 semiconductor cooling chip for refrigeration, and pin PB10 is connected to the X9-J4040 semiconductor heating chip for system heating. Pins PA4, PA5, PA6, and PA7, connected to IN1, IN2, IN3, and IN4 of the ULN2003 driving module, respectively, are used to control the stepper motor.
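The ULN2003 wiring on PA4–PA7 implies a four-phase drive sequence for the stepper motor. As a minimal sketch (the step sequence and function name are illustrative assumptions, not taken from the chapter), the full-step IN1–IN4 pattern can be generated as:

```c
#include <stdint.h>

/* Illustrative full-step drive table for a unipolar stepper driven through
 * a ULN2003, with IN1..IN4 mapped to bits 0..3 (PA4..PA7 in the text).
 * The standard 4-phase full-step sequence energizes one winding at a time. */
static const uint8_t FULL_STEP[4] = { 0x1, 0x2, 0x4, 0x8 };

/* Return the IN1..IN4 bit pattern for step n; the pattern repeats every
 * 4 steps, and walking the table backwards reverses the motor. */
uint8_t stepper_pattern(unsigned int n)
{
    return FULL_STEP[n % 4];
}
```

On the real board, the four bits of the returned pattern would be written to PA4–PA7 with a short delay between successive steps.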


Fig. 4.2 Pin-configuration electrical schematic of the core controller

Cloud Platform Deployment of Embedded Gateway. The communication between the embedded gateway and the mobile intelligent terminal is based on the data-forwarding service provided by the Witty-Cloud platform. The system network structure based on Witty-Cloud is shown in Fig. 4.3. The deployment process of the Witty-Cloud platform includes burning the official GAgent firmware, creating products, adding new data points, installing the debug app, starting virtual devices, porting the gateway control code, and binding mobile devices with the app. Adding a new data node requires configuring the node information according to the actual device type of the lower computer. The data node configuration information is shown in Table 4.1.

Fig. 4.3 System network structure based on Witty-Cloud diagram

Table 4.1 Data node configuration information

Distinguished name   Read–write type   Data type       Data scope   Node function
Watering_onoff       Writable          Boolean value   ×            Pump control
Feed_onff            Writable          Boolean value   ×            Feeder control
Led_onoff            Writable          Boolean value   ×            Led control
Temperature          Read-only         Numeric type    −10 to 50    Temperature acquisition
Humidity             Read-only         Numeric type    0–100        Humidity acquisition
PM_25                Read-only         Numeric type    0–100        PM_25 value acquisition
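The data-point table can be mirrored on the gateway side so that sampled values are range-checked against their declared data scope before being reported. The following sketch is an illustration only; the struct layout and function names are assumptions, and the authoritative definitions live in the Witty-Cloud product configuration:

```c
#include <stdbool.h>

typedef enum { RW_WRITABLE, RW_READ_ONLY } rw_t;
typedef enum { T_BOOL, T_NUMERIC } dtype_t;

/* One row of Table 4.1: name, access mode, data type, and numeric scope. */
typedef struct {
    const char *name;
    rw_t rw;
    dtype_t type;
    int min, max;          /* meaningful only when type == T_NUMERIC */
} data_point_t;

static const data_point_t DATA_POINTS[] = {
    { "Watering_onoff", RW_WRITABLE,  T_BOOL,    0,   0   },
    { "Feed_onff",      RW_WRITABLE,  T_BOOL,    0,   0   },
    { "Led_onoff",      RW_WRITABLE,  T_BOOL,    0,   0   },
    { "Temperature",    RW_READ_ONLY, T_NUMERIC, -10, 50  },
    { "Humidity",       RW_READ_ONLY, T_NUMERIC, 0,   100 },
    { "PM_25",          RW_READ_ONLY, T_NUMERIC, 0,   100 },
};

/* Check a sampled numeric value against the declared data scope before
 * forwarding it to the cloud platform; Boolean points are not ranged. */
bool in_scope(const data_point_t *dp, int value)
{
    return dp->type == T_NUMERIC && value >= dp->min && value <= dp->max;
}
```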

4.2.3 ZigBee LAN Design

ZigBee LAN Hardware Circuit Design. The ZigBee LAN includes the coordinator, routing nodes, and terminal nodes. The coordinator uses TI’s CC2530 controller [7]. A routing node acts as a relay device for extending the wireless communication network. The CC2530 terminal nodes are fixed around the mini-pet case; when the network is initialized, each terminal node establishes a connection with the coordinator. Sensors are deployed on each terminal node and collect environmental information such as temperature and humidity as well as pet body parameters, then transmit them to the coordinator. Temperature and humidity data are obtained by a DHT11 sensor, and the PM2.5 value by a GP2Y1014AU photoelectric sensor. An MQ-2 smoke sensor is responsible for hazardous-gas detection, pet heart-rate data are obtained by a Pulse Sensor optical heart-rate sensor, and the MLX90615 module measures the pet’s body temperature in a non-contact manner.

ZigBee LAN Control Program Design. The ZigBee LAN control program mainly includes the program by which the coordinator establishes the ZigBee network and the program by which a terminal node sends the data collected by its sensors to the coordinator. The establishment of the ZigBee local area network is initiated by the coordinator; the CC2530 networking procedure is illustrated in Fig. 4.4. After the coordinator has formed the network, its neighboring nodes try to establish a connection with it. If the coordinator replies, the terminal node joins the network, and the sensors on the terminal node start to collect environmental or pet health data and send them to the CC2530 coordinator. The terminal node workflow is shown in Fig. 4.5.

Fig. 4.4 CC2530 coordinator networking flowchart: device initialization → check whether the node can act as coordinator (otherwise redeploy the device) → energy detection scan → channel detection scan → if a channel is available, allocate the PAN ID → start the network → wait for other nodes to join

4.2.4 Proportional On–Off Temperature Adjustment Method In order to provide a constant temperature environment for mini pet, the system uses proportional on–off control method to adjust the temperature inside the pet case. Proportional on–off control schematic diagram is shown in Fig. 4.6. The whole system adopts closed-loop control mode, and the temperature sensor collects temperature data and sends it to the core controller of the lower computer. The controller outputs the PWM pulse signal according to the deviation between the current sample value and the expected value set by the user [8]. The system

Fig. 4.5 Terminal node work flowchart (connect to the coordinator; if the coordinator agrees to join the network, the sensors acquire data, the terminal node reads it and sends it to the coordinator, retrying on failure until the specified number of transmissions is reached, after which the terminal node raises an alarm)

Fig. 4.6 Proportional on–off control schematic diagram (the set temperature and the current temperature are compared; the MCU's proportional on–off controller switches the heating and refrigeration units on and off through driving circuits, and the temperature sensor reports the pet nest temperature back through the CC2530 coordinator)

sends out the control signals and drives the heating or refrigeration components through the amplifying drive circuit to complete the temperature adjustment. The deviation value e(t_j) during the jth time period is obtained from formula (4.1):

e(t_j) = SV − PV(t_j)    (4.1)

SV is the expected temperature value set by the user, and PV(t_j) is the mean value of the temperature samples collected by the sensor during the jth time period. In order to reduce noise interference and make the sampled value closer to the true value, the system uses a rate-limiting filtering algorithm to filter noise out of each sample [9]. The definite value of the kth temperature sample is obtained from formula (4.2):

X_k = { y_k,                 if |y_k − y_{k−1}| ≤ Δy
        y_{k−1},             if |y_k − y_{k−1}| > Δy
        y_{k+1},             if |y_{k+1} − y_k| ≤ Δy
        (y_{k+1} + y_k)/2,   if |y_{k+1} − y_k| > Δy },   k ≥ 1    (4.2)

In the formula, y_k is the kth temperature sample value of the system, and Δy is the maximum allowed deviation between two adjacent sampling signals. If the system samples N times within the jth time period, sorts the N sample definite values from large to small, and removes the maximum and minimum values, PV(t_j) is obtained from formula (4.3):

PV(t_j) = (1/(N − 2)) Σ_{i=2}^{N−1} X_{ji}    (4.3)

The PWM signal output by the core controller is determined by e(t_j), T_{n−r}, and T_{f−r}, where T_{n−r} is the mean value of the original temperature samples before regulation and T_{f−r} is the mean value of the temperature samples after full-power output regulation. Taking the heating unit as an example, the equivalent voltage of the PWM output within a time period t is obtained from formula (4.4):

P_OUT_average = { K_P · e(t) / (T_{f−r} − T_{n−r}) · V_OUT,   e(t) > 0
                  0,                                          e(t) < 0 },
with K_P · e(t) / (T_{f−r} − T_{n−r}) ≤ 1    (4.4)

T_{n−r} and T_{f−r} can be obtained according to formula (4.3), K_P is the proportional gain amplification factor, and V_OUT is the voltage output by the driving circuit. Therefore, in a working cycle T, the effective pulse width of the core controller output is obtained by formula (4.5):

t_e_pulse = K_P · e(t) / (T_{f−r} − T_{n−r}) · T    (4.5)
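The regulation chain of formulas (4.1) to (4.5) can be sketched end to end. This is a minimal illustration, not the authors' firmware: the filter below is a simplified causal variant of formula (4.2) that compares each sample only with the last accepted value, and it treats e(t) = 0 as "heater off".

```python
# Sketch of the proportional on-off temperature regulation of Eqs. (4.1)-(4.5):
# rate-limit the raw samples, take the trimmed-mean PV, then compute the
# effective heating pulse width within one working cycle T.

def rate_limit_filter(samples, dy):
    """Simplified variant of Eq. (4.2): reject jumps larger than dy."""
    out = [samples[0]]
    for k in range(1, len(samples)):
        if abs(samples[k] - out[-1]) <= dy:
            out.append(samples[k])   # accept the new sample
        else:
            out.append(out[-1])      # jump too large: keep the last value
    return out

def pv(samples, dy):
    """Eq. (4.3): mean of the filtered samples with max and min removed."""
    x = sorted(rate_limit_filter(samples, dy))
    return sum(x[1:-1]) / (len(x) - 2)

def pulse_width(sv, pv_tj, t_nr, t_fr, kp, cycle):
    """Eqs. (4.1) and (4.5): effective heating pulse width in one cycle."""
    e = sv - pv_tj                            # Eq. (4.1): deviation
    if e <= 0:
        return 0.0                            # at/above setpoint: heater off
    duty = min(1.0, kp * e / (t_fr - t_nr))   # duty ratio clamped to <= 1
    return duty * cycle

samples = [24.9, 25.1, 31.0, 25.0, 24.8]      # 31.0 is a noise spike
p = pv(samples, dy=0.5)                       # spike filtered out, PV = 25.0
print(round(pulse_width(sv=28.0, pv_tj=p, t_nr=20.0, t_fr=40.0,
                        kp=2.0, cycle=1.0), 3))  # → 0.3
```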

4.3 Performance Test The entire intelligent home system for mini pets was built and its functions were tested. The test work mainly includes the ZigBee networking test, the terminal node data acquisition test, the CC2530 coordinator function test, the communication tests between the embedded gateway and the coordinator, between the upper computer and the embedded gateway, and between the mobile intelligent terminal and the embedded gateway, the cloud service platform communication test, the upper computer software function test, the mobile intelligent terminal software function test, the temperature adjustment test, and so on. After a 24-h non-stop test, the results demonstrate that the system operates well and has stable performance. The test process is shown in Fig. 4.7.

Fig. 4.7 Overall prototype test diagram

4.4 Conclusions This paper designs a mini pet feeding intelligent home system based on IoT, adopting a three-layer IoT architecture of sensing layer, network layer, and application layer. It realizes real-time local and remote monitoring of the environment and pet health condition data inside the pet case, as well as remote control of the executing components. Performance test results demonstrate that the system operates well and has stable performance. It has successfully passed the acceptance of the Tianjin University Students' Innovation and Entrepreneurship Training Project. The solution has promotion significance and application value for the scientific feeding of mini pets under unattended care.

References

1. Own, C.M., Shin, H.Y., Teng, C.Y.: The study and application of the IoT in pet systems. Adv. Internet Things 3(1), 1–8 (2013)
2. Kuang, C., Huang, X.W., Zhou, P.: Multifunctional intelligent pet nest based on single chip microcomputer. Electron. World 23, 187–189 (2018)
3. Ma, C.H.: Pet Feeding Intelligent Remote Control System Design and Development. Master's thesis, Shandong University of Technology (2015)
4. Liao, J.M., He, X.Q.: The research and design of ZigBee wireless networking based on CC2530. In: International Computer Conference on Wavelet Active Media Technology and Information Processing, pp. 263–266. IEEE Press, Chengdu (2013)
5. Liu, W.H., Dai, J.X.: Design of attitude sensor acquisition system based on STM32. In: 2015 Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control (IMCCC), pp. 1850–1853. IEEE Press, Qinhuangdao (2015)
6. Olubiyi, O., Akintade, T.K., Yesufu, L.: Development of power consumption models for ESP8266-enabled low-cost IoT monitoring nodes. Adv. Internet Things 9(1), 1–14 (2019)
7. Zhao, J., Lian, X.Q., Wu, Y.L., Zhang, X.L., Wang, S.: Design of wireless temperature and humidity data collection system based on MSP430 and CC2530. In: 2012 3rd International Conference on System Science, Engineering Design and Manufacturing Informatization, pp. 193–195. IEEE Press, Chengdu (2012)
8. Peng, L.J., Zhang, L.P., Huang, B., Tan, L.Z., Tong, H.W.: Design and implementation of vehicle intelligent fan system based on STM32. Transducer Microsyst. Technol. 37(12), 76–78 (2018)
9. Chen, Q., OuYang, S.W., Ma, X.Y., Xie, Q.: Digital filtering algorithm optimization for ECG signal acquisition. Modern Electron. Tech. 42(04), 45–48 (2019)

Chapter 5

Study on IoT and Big Data Analysis of Furnace Process Exhaust Gas Leakage Yu-Wen Zhou, Kuo-Chi Chang, Jeng-Shyang Pan, Kai-Chun Chu, Der-Juinn Horng, Yuh-Chung Lin and Huang Jing

Abstract Modern FABs use a large number of high-energy processes such as plasma, CVD, and ion implantation, and the furnace is one of the important tools of semiconductor manufacturing. Based on production management requirements, the FAB installed an FTIR system on the 12” furnace tools. This study used an open-type FTIR with an integrated IoT mechanism connected to the cloud, which is suitable for a variety of gaseous pollutants. Two measuring points on furnace process tools were set up in a 12” factory in Hsinchu Science Park, Taiwan. The FTIR measurements obtained were stored in a cloud database for big data analysis and decision-making, with thresholds set according to OSHA regulations: the upper limits of TEOS, C2 H4 , and CO are 0.6 ppm, 2.0 ppm, and 1.7 ppm, and the lower limits are 0.4 ppm, 1.5 ppm, and 1 ppm, respectively. The application architecture of this study can be extended to other semiconductor processes, so that IoT integration and big data operations can be performed for all processes; this is an important step in promoting FAB intelligent production and an important contribution of this study. Keywords IoT · Big data · Furnace · Exhaust gas · Gas leakage

Y.-W. Zhou · K.-C. Chang (B) · J.-S. Pan · Y.-C. Lin Fujian Provincial Key Laboratory of Big Data Mining and Applications, Fujian University of Technology, Fuzhou, China e-mail: [email protected] K.-C. Chu · D.-J. Horng Department of Business Administration Group of Strategic Management, National Central University, Taoyuan, Taiwan H. Jing College of Information Science and Engineering, Fujian University of Technology, Fuzhou, China © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_5


5.1 Introduction The semiconductor process is an important technology for realizing artificial intelligence and algorithms. A semiconductor factory investment exceeds USD 3 billion and the daily basic operating cost exceeds USD 6 million, so the process is very important, especially at commercial critical dimensions of 12 nm or less (Fig. 5.1) [1]. In order to achieve high precision and yield, modern FABs use a large number of high-energy processes such as plasma, CVD, and ion implantation; the furnace is one of the important tools of semiconductor manufacturing (Fig. 5.2).

Fig. 5.1 Semiconductor critical dimension trend in 2020

Fig. 5.2 Actual status of the furnace machine in a 12” FAB: (a) 12” advanced furnace process tool; (b) FAB layout

The physicochemical changes in the respective reactors due to high energy are quite complicated, and it is often impossible to confirm the type and concentration of the by-products produced. These by-products often affect FABs as follows: (1) incompatibility between by-products may increase the toxicity or explosiveness of the gas in the pipeline; (2) by-products may cause erosion or embrittlement of the exhaust pipe material; (3) if the type and concentration of by-products cannot be confirmed, suitable exhaust gas treatment equipment cannot be selected; and (4) by-products may damage the processing equipment currently in use, which in turn reduces processing efficiency [2, 3]. The FAB therefore installed an FTIR system on the 12” furnace tools to meet the following production management requirements: (1) confirmation of the characteristics of the hazardous process exhaust gas; (2) evaluation of the treatment efficiency of various types of processing equipment for process exhaust; (3) investigation of hazard exposure during maintenance and repair of the machine; (4) confirmation of the concentration and source of hazardous gases and particulates in the clean-room operating environment; and (5) identification of hazard species in the duct. Under this premise, in order to make the aforementioned software and hardware systems intelligent, IoT modules are added to the original modules, so that the various process parameters and information required by the FAB can be obtained continuously from the 12” furnace process tools [4–6].
Under 24-hour continuous processing across thousands of process machines, a large amount of data is obtained to confirm the above production requirements, allowing the FAB to master its process characteristics, improve production efficiency, improve product yield, and build a safe and healthy production line and working environment for staff; this is an important contribution of this research [7].

5.2 Methodology and Study Procedure The instrument used in this study is an extractive (pumping) FTIR, which uses a pump to introduce the gas to be tested into the FTIR detection chamber for immediate analysis. The measurement method is shown in Fig. 5.3.

Fig. 5.3 Schematic diagram of the instrument configuration of the extractive FTIR spectrometer

The main components of


the pumped FTIR include the IR source, interferometer, beam splitter, fixed mirror, moving mirror, gas cell, detectors, and electronic modules; in addition, a sampling tube, pump, and other devices are needed to introduce gas samples into the closed cavity for analysis, together with a computer and appropriate software for data acquisition and analysis. This study adds an IoT module to the existing FTIR, allowing the FTIR to transmit data to and compute with the cloud; the instrument configuration of the pumped FTIR is shown in Fig. 5.4 [8, 9]. The basic design of the infrared spectrometer is to emit a beam of light into the measurement area and measure the change in intensity after the beam passes through the gas to be tested. Since each gas molecule has its specific infrared absorption coefficient, when a light beam passes through the measurement region, a specific gas molecule absorbs light of a specific wavelength, weakening the intensity of the beam in that wavelength band; the ratio of light intensity before and after absorption is directly related to the concentration of the gas. The absorption bands and intensities of the gas sample can therefore be measured to determine the composition and concentration of the gas. For a maximum path difference d, adjacent wavelengths λ1 and λ2 will have n and (n + 1) cycles, respectively, in the interferogram; the corresponding wavenumbers are ν1 and ν2 [10, 11].

Fig. 5.4 The instrument configuration of the pumped FTIR

d = nλ1 and d = (n + 1)λ2
λ1 = d/n and λ2 = d/(n + 1)
ν1 = 1/λ1 and ν2 = 1/λ2
ν1 = n/d and ν2 = (n + 1)/d, hence ν2 − ν1 = 1/d
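The relation ν2 − ν1 = 1/d says the interferometer's wavenumber resolution is the reciprocal of the maximum optical path difference. A one-line sketch (illustrative helper name, not from the study):

```python
# FTIR interferometer resolution: two wavenumbers are just distinguishable
# when their interferograms differ by one full cycle over the maximum
# optical path difference d, i.e. delta_nu = nu2 - nu1 = 1/d.

def ftir_resolution_cm(d_cm: float) -> float:
    """Wavenumber resolution (cm^-1) for maximum path difference d (cm)."""
    return 1.0 / d_cm

# e.g. a 2 cm maximum path difference resolves features 0.5 cm^-1 apart
print(ftir_resolution_cm(2.0))  # → 0.5
```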

Inlet flow rate (Qi) estimated from the TEOS injection = 89 LPM. Initial outlet flow rate (Qo) estimated from the TEOS injection = 292 LPM. Therefore, dilution ratio = Qo/Qi = 292/89 = 3.3.

5.3 FTIR Sensing System of IoT In this study, open-path FTIR was used to monitor the air quality of the clean room to ensure the air quality of the working environment and the health of employees. The measurement principle is the same as that of extractive FTIR, but the closed cavity is replaced by an open path, and an IoT mechanism is integrated to connect to the cloud, which is suitable for monitoring a variety of gaseous pollutants (both organic and inorganic) in the atmosphere (Fig. 5.5). Figure 5.6 shows the on-site installation on the exhaust line of the furnace process. Two measuring points were set up in the 12” factory in Hsinchu Science Park, Taiwan, as shown in Fig. 5.7. Table 5.1 shows the process parameters of the on-site process tools during the experiment, and Table 5.2 shows the processing parameters of the on-site machine's exhaust gas treatment equipment.

Fig. 5.5 The FTIR field setting architecture


Fig. 5.6 This study was set on the site to set the exhaust line of the furnace control process

Fig. 5.7 The furnace process tools area measurement point distribution in this study

Table 5.1 The process parameters of the on-site process tools during our experiment

Process tools    | BPSG of A point | BPSG of B point
TMB flow         | 30 sccm         | 30 sccm
TEOS flow        | 300 sccm        | 300 sccm
PH3 flow         | 0.77 slm        | 0.8 slm
Chamber pressure | 1.1 torr        | 0.8 torr

Table 5.2 The processing parameters of the on-site machine exhaust gas treatment equipment

Compound | Inlet max (ppm) | Inlet average (ppm) | Outlet max (ppm) | Outlet average (ppm) | Efficiency (%)
TEOS     | 937             | 850                 | 47               | 42                   | 86
*TMB     | 1430X           | 870X                | 11X              | 0.3                  | >99
C2 H4    | 2108            | 2068                | 225              | 220                  | 69
CH3 OH   | 1335            | 1194                | N.D.             | N.D.                 | >99

Front and rear gas flow details of the exhaust gas treatment equipment: N2-pump = 98 L/min; CDO-air = 89 L/min; CDO-N2 = 47 L/min; CDO outlet air = 57 L/min. *TMB does not have an FTIR standard spectrum.

5.4 FTIR IoT Result and Big Data Analysis From the measurement data of this study (Fig. 5.8), it can be found that the main reactant of the BPSG thin-film process is TEOS, so almost all reactions are carried out in the reaction chamber or form composites that do not reach the main exhaust gas pipeline; at the beginning of the reaction, the concentration is only between 0.08 and 0.1 ppm. C2 H4 is mainly used for cleaning the reaction chamber, with a concentration between 0.15 and 0.25 ppm; while the main process is carried out, the high-concentration input is used for cleaning, so a high concentration can be seen in the main exhaust pipe. On the other hand, CO is consumed while the process is in production, so its concentration is difficult to detect in the main exhaust pipe.

Fig. 5.8 Main exhaust pipe concentration trend (A point)

Figure 5.9 shows the secondary main exhaust pipe concentration trend. Because this point is close to the process chamber, the concentrations are clearly detected: CO is the most obvious, with a concentration between 1 and 1.7 ppm, and TEOS, which is liquid at normal temperature, shows a concentration between 0.4 and 0.6 ppm. Although the concentration near the reaction chamber is high, condensation occurs on entering the low-temperature zone, so the concentration measured in that section of the pipeline and in the main exhaust pipe is not obvious. The C2 H4 concentration is between 1.5 and 2.0 ppm.

Fig. 5.9 Secondary main exhaust pipe concentration trend (B point)

According to the OSHA regulations, thresholds are set in the cloud database for big data analysis and decision-making: the upper limits of TEOS, C2 H4 , and CO are 0.6 ppm, 2.0 ppm, and 1.7 ppm, and the lower limits are 0.4 ppm, 1.5 ppm, and 1 ppm, respectively. The application architecture of this study can be extended to other semiconductor processes, so that IoT integration and big data operations can be performed for all processes; this is an important step in promoting FAB intelligent production and an important contribution of this study.
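The cloud-side decision rule described above can be sketched as a band check against the study's limits. This is an illustrative sketch, not the study's actual cloud code; the function and label names are assumptions, only the ppm limits come from the text.

```python
# Sketch of the cloud decision rule: flag a gas when its FTIR reading leaves
# the [lower, upper] band derived from the OSHA-based limits in Sect. 5.4.

LIMITS_PPM = {            # gas: (lower limit, upper limit) in ppm
    "TEOS": (0.4, 0.6),
    "C2H4": (1.5, 2.0),
    "CO":   (1.0, 1.7),
}

def check_reading(gas: str, ppm: float) -> str:
    lo, hi = LIMITS_PPM[gas]
    if ppm > hi:
        return "alarm-high"   # possible leakage or abatement failure
    if ppm < lo:
        return "alarm-low"    # possible sensor or process anomaly
    return "normal"

print(check_reading("CO", 1.9))  # → alarm-high
```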

5.5 Conclusions In order to achieve high precision and yield, modern FABs use a large number of high-energy processes such as plasma, CVD, and ion implantation, and the furnace is one of the important tools of semiconductor manufacturing. Based on production management requirements, the FAB installed an FTIR system on the 12” furnace tools. The measurement principle is the same as that of extractive FTIR, but the closed cavity is replaced by an open path, and an IoT mechanism is integrated to connect to the cloud, which is suitable for monitoring a variety of gaseous pollutants (both organic and inorganic) in the atmosphere. This study set up two measuring points on furnace process tools in the 12” factory in Hsinchu Science Park, Taiwan, obtained FTIR measurements, and, according to OSHA regulations, stored them in a cloud database for big data analysis and decision-making, with the upper limits of TEOS, C2 H4 , and CO set to 0.6 ppm, 2.0 ppm, and 1.7 ppm and the lower limits set to 0.4 ppm, 1.5 ppm, and 1 ppm. The application architecture of this study can be extended to other semiconductor processes, so that IoT integration and big data operations can be performed for all processes; this is an important step in promoting FAB intelligent production and an important contribution of this study.


References

1. Lu, C.C., Chang, K.C., Chen, C.Y.: Study of high-tech process furnace using inherently safer design strategies (IV). The advanced thin film manufacturing process design and adjustment. J. Loss Prev. Process Ind. 40, 378–395 (2016)
2. Lu, C.C., Chang, K.C., Chen, C.Y.: Study of high-tech process furnace using inherently safer design strategies (III). Advanced thin film process and reduction of power consumption control. J. Loss Prev. Process Ind. 43, 280–291 (2016)
3. Sze, S.M., Ng, K.K.: Physics of Semiconductor Devices, 3rd edn. Wiley (2006). ISBN 978-0-471-14323-9
4. Pan, J.-S., et al.: α-Fraction first strategy for hierarchical model in wireless sensor networks. J. Internet Technol. 19(6) (2018)
5. Wu, H.-T., Hu, W.-C., Chou, T.-Y., Lin, J.-J.: A clownfish farming monitoring system based on the internet of things. J. Netw. Intell. 2(2), 213–230 (2017)
6. Chen, C.Y., Wang, C.J., Chen, E., Wu, C.K., Yang, Y.K., Wang, J.S., Chung, P.C.: Detecting sustained attention during cognitive work using heart rate variability. In: IEEE Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), pp. 372–375 (2010)
7. Mattana, G., Briand, D., Marette, A., Vásquez Quintero, A., de Rooij, N.F.: Polylactic acid as a biodegradable material for all-solution-processed organic electronic devices. Organ. Electron. 17, 77–86 (2015)
8. Wang, X.C., Li, Y.: Infrared suppression of submarine exhaust system. Laser Infrared 39(4), 393–396 (2009)
9. Wang, X.C., Guo, H.X., Pan, L., et al.: Comparisons on flow and temperature fields for water-collection box of diesel exhaust system. In: Proceedings of the 2nd International Conference on Manufacturing Science and Engineering. Trans Tech Publications, Guilin (2011)
10. Nguyen, T.-T., Dao, T.-K., Pan, J.-S., Horng, M.-F., Shieh, C.-S.: An improving data compression capability in sensor node to support SensorML-compatible for Internet-of-Things. J. Netw. Intell. 3(2), 74–90 (2018)
11. Holzer, F., Kopinke, F.-D., Roland, U.: Non-thermal plasma treatment for the elimination of odorous compounds from exhaust air from cooking processes. J. Chem. Eng. 334, 1988–1995 (2018)

Part II

Information Security and Hiding

Chapter 6

A Data Hiding Approach Based on Reference-Affected Matrix Trong-The Nguyen, Jeng-Shyang Pan, Truong-Giang Ngo and Thi-Kien Dao

Abstract Data security has seen many remarkable achievements. However, the issues of lower distortion and higher embedding capacity when embedding secret data in media have not been much considered by scholars. This paper proposes a new data hiding approach that embeds secrets under the guidance of an x-cross-shaped reference-affected matrix to address these issues. Adjacent pixels form large areas with similar values, which can be exploited by manipulating data embedding and extraction on a difference–coordinate plane instead of the traditional pixel–coordinate plane. Three parts, a petal matrix, a calyx matrix, and a stamen matrix, are combined for data embedding using the x-cross-shaped reference matrix. The experimental results, compared with previous methods in the literature, show that the proposed approach brings an outstanding payload with good cover visual quality. Keywords Steganography · Data hiding · Data embedding and extracting

6.1 Introduction Secret messages delivered to a target destination need to be protected from malicious attacks, and the data hiding technique is one of the accessible ways to do so [1]. Data hiding, a significant subject of information security, is widely used to transfer secret messages safely over public channels instead of highly costly and conspicuous private channels [2]. Data hiding focuses on finding a secure way to embed secrets in multimedia. Pictures, as common multimedia, can be a perfect means to

6.1 Introduction Secret messages delivered to target destination need prevent from malicious attacks, so data hiding technique is one of the accessible ways [1]. Data hiding technique, a significant subject of information security, is widely used to transfer secret messages to others safely on public channels instead of highly costly and conspicuous private channels [2]. Data hiding focuses on finding a secure way to embed secrets in multimedia. Pictures, known as common multimedia, can be a perfect means to T.-T. Nguyen (B) · J.-S. Pan · T.-K. Dao Fujian Provincial Key Lab of Big Data Mining and Applications, Fujian University of Technology, Fuzhou, Fujian, China e-mail: [email protected] J.-S. Pan e-mail: [email protected] T.-T. Nguyen · T.-G. Ngo Department of Information Technology, Haiphong Private University, Haiphong, Vietnam e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_6


carry secret messages. At the current stage of steganography, grayscale images are common and convenient carriers. Since the value of every grayscale image pixel ranges from 0 to 255, a pixel is easily represented by 8 bits in a binary system [3, 4]. Two aspects estimate a data hiding method's performance: the distortion of images after embedding secrets, which should be low, and the capacity for carrying secret messages, which should be high. Generally, a higher embedding capacity results in higher distortion of the stego-images, and vice versa; thus, finding a feasible trade-off is a major problem in data hiding. A common steganography method uses the least-significant bits (LSB) of each cover pixel's value in a host image to carry a secret message. LSB data hiding is simple and achievable, with a satisfactory capacity for carrying secret digits while remaining invisible to human eyes. However, it is vulnerable to malicious attacks based on statistical analysis [5]. The modified LSB method (LSB matching revisited) is devoted to controlling the distortion of host images at a lower level for the same payload: the stego-images are generated under the guidance of both the two corresponding original pixels and two secret digits, and it performs apparently better in visual imperceptibility than the traditional LSB method [6]. In the exploiting modification direction (EMD) method [7], each unit composed of n pixels of a host image carries one secret digit in the (2n + 1)-ary notational system during each embedding step, and only one pixel of the unit is modified by 1 each time; it therefore shows a larger payload and better stego-image quality.
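The LSB substitution baseline mentioned above can be sketched in a few lines. This is a generic illustration of the classic technique, not the proposed scheme; the helper names are ours.

```python
# Minimal sketch of classic LSB substitution: each cover pixel's
# least-significant bit carries one secret bit; extraction reads the bits back.

def lsb_embed(pixels, bits):
    """Overwrite the LSB of the first len(bits) pixels with the secret bits."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def lsb_extract(pixels, n):
    """Read back the first n secret bits from the stego-pixels' LSBs."""
    return [p & 1 for p in pixels[:n]]

cover = [154, 37, 200, 91]
stego = lsb_embed(cover, [1, 0, 1, 1])
print(stego, lsb_extract(stego, 4))  # → [155, 36, 201, 91] [1, 0, 1, 1]
```

Each pixel changes by at most 1, which is why LSB stego-images escape casual inspection yet remain detectable by statistical analysis.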
Further, in order to improve the payload, the turtle-shell methods [8, 9] provided an easy way to establish a layout (like a turtle shell) so that every secret digit ranging from 0 to 8 can be embedded into 2 pixels each time. The regular-octagon shape [10], a method similar to the turtle-shell one, further improved the capacity for carrying secret digits. This paper places an x-cross-shaped reference matrix on a pixel-differencing plane to hide the secret; the matrix combines three parts, a petal matrix, a calyx matrix, and a stamen matrix, for secret embedding, and provides a large payload with good visual quality. The remainder of the paper is organized as follows: Sect. 6.2 presents related work, Sect. 6.3 states the methodology, Sect. 6.4 discusses the experimental results, and a conclusion is summarized in Sect. 6.5.

6.2 Related Work The symbols used in this paper are defined as follows:

H: the height of an image.
W: the width of an image.
Cover/host image: the original grayscale image.
Stego-image: the grayscale image after embedding a secret message.
i: the index of pixels.
(p_{i−1}, p_i, p_{i+1}): a triple of consecutive cover pixels.
d1, d2: the difference values of the pixel pairs (p_{i−1}, p_i) and (p_{i+1}, p_i) from a cover image, respectively.
M(d1, d2): the value under the guidance of the x-cross-shaped reference matrix.
mod: a modulo operation.
p′_i: the stego-pixel of p_i after carrying a secret by the least-significant-bit substitution method.
d′_1, d′_2: the difference values of the stego-pixel pairs (p′_{i−1}, p′_i) and (p′_{i+1}, p′_i), respectively.
p′_{i−1}, p′_{i+1}: the stego-pixel values of p_{i−1} and p_{i+1} after embedding the secret, respectively.
l_s: the length of the secret to be hidden.
M(d1, d2): the value of the secret to be embedded, read from the reference matrix.
σ: the standard deviation.
num: the number of statistics in the histogram.

6.2.1 EMD Scheme In the EMD scheme [7, 11], a secret digit in the (2n + 1)-ary notational system can be embedded into a group of n cover pixels from the host image each time, achieving efficient embedding and secrecy with low distortion. EMD's embedding procedure includes the following steps. First, divide a cover image into a series of nonoverlapping groups, each composed of n pixels G = (p1, p2, …, pn). Second, convert the binary secret message into a sequence of secret digits in the (2n + 1)-ary notational system; every secret digit is denoted s_j (j = 1, 2, …, l), where l depends on n. Apply EMD to the group G by Eq. (6.1), where "mod" represents a modulo operation, and Eq. (6.2) determines how to carry a (2n + 1)-ary secret digit s_j:

ρ = f(p1, p2, …, pn) = (Σ_{i=1}^{n} p_i · i) mod (2n + 1)    (6.1)

D = (s_j − ρ) mod (2n + 1)    (6.2)

At most one pixel value p_i of G is changed, by adding or subtracting one:

p′_i = { p_i,                if s_j = ρ
         p_D + 1,            if s_j ≠ ρ and D ≤ n
         p_{(2n+1)−D} − 1,   if s_j ≠ ρ and D > n }    (6.3)

For demonstration, let n = 2. If the cover pixel pair is (p1, p2) = (1, 2), then ρ = 0 according to Eq. (6.1). When the to-be-embedded secret digit is s_j = 2, the stego-pixel pair becomes (p′1, p′2) = (1, 3) according to Eqs. (6.2) and (6.3). To extract the secret, the receiver applies the function in Eq. (6.1) to the stego-pair, recovering the secret digit 2. This method ensures a high data payload (about 1.16 bpp when n = 2) and good image quality (about 52 dB measured by the peak signal-to-noise ratio, PSNR).
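Equations (6.1) to (6.3) translate directly into a short embed/extract pair. The sketch below follows the EMD definitions above (function names are ours); it reproduces the paper's n = 2 example.

```python
# Sketch of the EMD scheme of Eqs. (6.1)-(6.3) for a group of n pixels:
# rho = (sum_i p_i * i) mod (2n+1); at most one pixel changes by +/- 1.

def emd_embed(group, s):
    n = len(group)
    base = 2 * n + 1
    rho = sum(p * i for i, p in enumerate(group, 1)) % base   # Eq. (6.1)
    d = (s - rho) % base                                      # Eq. (6.2)
    out = list(group)
    if d == 0:
        return out                 # s_j = rho: no pixel changes
    if d <= n:
        out[d - 1] += 1            # increase pixel p_D
    else:
        out[base - d - 1] -= 1     # decrease pixel p_{(2n+1)-D}
    return out

def emd_extract(group):
    n = len(group)
    return sum(p * i for i, p in enumerate(group, 1)) % (2 * n + 1)

stego = emd_embed([1, 2], 2)       # the paper's example: n = 2, rho = 0, s = 2
print(stego, emd_extract(stego))   # → [1, 3] 2
```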

6.3 Turtle-Shell-Based Scheme In the turtle-shell scheme [8], every two pixels carry a secret digit ranging from (000)2 to (111)2 each time. A 256 × 256 reference matrix containing as many turtle shells as possible is used to hide the secret data. Each turtle shell is a hexagon that contains eight distinct numbers ranging from 0 to 7, including six edge digits and two back digits. The turtle shells are arranged in the matrix (denoted M) one by one without overlapping. The matrix is built so that the value difference between two adjacent numbers in the same row is 1, and the value difference between two adjacent numbers in the same column alternates between 2 and 3; each row thus continuously cycles through 0 to 7. Every turtle shell contains eight numbers from (000)2 to (111)2, so each cover pixel pair, expressed as (p_i, p_{i+1}), can carry a 3-bit digit s_j. Assume the grayscale cover image I of size H × W is composed of I = {p_i | i = 1, 2, …, (H × W)}. To embed secret digits, the location of each pixel pair (p_i, p_{i+1}) is determined as M(p_i, p_{i+1}) in the reference matrix M, where p_i and p_{i+1} are the column value and row value, respectively.
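The construction rule above (row differences of 1, column differences alternating 2 and 3, values mod 8) admits a compact sketch. This follows one consistent reading of the rule and is not reproduced from [8]; the starting offsets are an assumption.

```python
# Sketch of the 256 x 256 turtle-shell reference matrix M: along a row,
# adjacent values differ by 1; between rows, the difference alternates
# between 2 and 3; all values are taken mod 8.

def turtle_matrix(size=256):
    m = []
    offset = 0
    for row in range(size):
        m.append([(col + offset) % 8 for col in range(size)])
        offset += 2 if row % 2 == 0 else 3   # alternate +2 / +3 per row
    return m

M = turtle_matrix(8)
print(M[0])  # → [0, 1, 2, 3, 4, 5, 6, 7]
print(M[1])  # → [2, 3, 4, 5, 6, 7, 0, 1]
```

Each pixel pair (p_i, p_{i+1}) then indexes one cell M[p_{i+1}][p_i], and the enclosing hexagon supplies the eight candidate values 0..7.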

6.4 Proposed Method Our scheme works on three pixels at a time under the guidance of the x-cross-shaped reference matrix in a difference–coordinate system. The x-cross-shaped reference matrix combines three parts, a petal matrix, a calyx matrix, and a stamen matrix, for secret embedding, which brings a large payload with good cover visual quality.

6.5 Matrix Construction Procedure A coordinate system (d1, d2), where d1 and d2 range from −255 to 255, represents the difference values of the pixel pairs (p_{i−1} − p_i) and (p_{i+1} − p_i), respectively. A large number of difference values are close to 0, because adjacent pixels of an image tend to have similar values. Therefore, where d1 and d2 range from −1 to 1, a 3 × 3 rectangle-shaped matrix called the stamen matrix is arranged, marked in orange in Fig. 6.1. Every pair (d1, d2) in the stamen matrix can carry a secret digit ranging from (000)2 to (111)2:

M(d1, d2) = (d1 + 3d2 + 4) mod 8,  if d1, d2 ∈ {−1, 0, 1}    (6.4)

Then, settle the second part of the big matrix, used when exactly one of d1 and d2 is equal to 0. The calyx matrix is marked in blue in Fig. 6.1 and is described by Eq. (6.5):

M(d1, d2) = { d1 mod 4,  if d1 ∉ {−1, 0, 1} and d2 = 0
              d2 mod 4,  if d1 = 0 and d2 ∉ {−1, 0, 1} }    (6.5)

d2

The positive axis of d 1 ranges from 2 to 255, and the negative axis of d 1 ranges from −2 to −255; meanwhile, d 2 is set 0. The other two calyxes on the positive axis


Fig. 6.1 The designed scheme based on a reference-affected matrix


T.-T. Nguyen et al.

and on the negative axis of d2 can be obtained by transposing these calyxes. Every element M(d1, d2) in the calyxes can carry a secret digit from (00)2 to (11)2. The major arranging area of the matrix, known as the petal matrix, is marked in green in Fig. 6.1. In each of its columns, the difference between adjacent values is set to 1, with values ranging from 0 to 31; in each row, the difference values are set in turn to 5, 6, 6, 6, and 6, again with values ranging from 0 to 31. The whole matrix, called the x-cross-shaped reference matrix, is composed of the petal matrix, the calyx matrix, and the stamen matrix, as shown in Fig. 6.1.

6.6 Payload Calculation

When information hiding must be conducted in an unreliable environment, the total volume of secret messages carried in one transmission is expected to be as large as possible. Each cover image's payload depends on the resolution of the host image. The payload of a secret message is calculated by the following steps:

Step 1 Extract a triple of consecutive cover pixels (pi−1, pi, pi+1), where i = 2, 5, ..., (W × H − (W × H mod 3) − 1). Convert the message S to a bit stream. Extract three bits from the secret string, embed this segment into pi by the LSB substitution method, and update ls = ls + 3, where ls is the length of the secret string to be embedded into the cover image. Relative to the cover pixel pi, the updated pixel pi′ is the camouflaged pixel.
Step 2 Calculate the difference values d1 = pi−1 − pi′ and d2 = pi+1 − pi′, respectively.
Step 3 Determine which part of the x-cross matrix M(d1, d2) belongs to: if it belongs to the calyx area, then ls = ls + 2; if it belongs to the stamen part, then ls = ls + 3; otherwise it belongs to the petal matrix and ls = ls + 5.
Step 4 Repeat Steps 1 to 3 until all pixels in the cover image are completely processed. Return the payload length ls.

Our scheme embeds a 3-bit sub-secret string into the LSBs of pi, and also embeds an ls-bit sub-secret string into the pair of difference values (d1, d2). The binary value s is converted to its corresponding decimal value sd. The length ls of the to-be-embedded secret data s depends on where the pair (d1, d2) is located on the x-cross-shaped reference matrix.
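Steps 1-4 above can be sketched as a loop over pixel triples. Here `lookup_region` is a hypothetical helper (not defined in the paper) that classifies a difference pair as stamen, calyx, or petal on the x-cross-shaped matrix:

```python
def payload_length(pixels, lookup_region):
    """Accumulate the payload length ls over all pixel triples (Steps 1-4).
    `pixels` is the flattened cover image; `lookup_region(d1, d2)` returns
    'stamen', 'calyx', or 'petal' and is assumed to be given."""
    bits_for = {"calyx": 2, "stamen": 3, "petal": 5}
    ls = 0
    for i in range(1, len(pixels) - 1, 3):   # centre pixels p_i at i = 2, 5, ... (1-based)
        ls += 3                              # 3 bits embedded in p_i by LSB substitution
        d1 = pixels[i - 1] - pixels[i]
        d2 = pixels[i + 1] - pixels[i]
        ls += bits_for[lookup_region(d1, d2)]
    return ls
```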

6.7 Embedding Procedure

A secret message in binary form is embedded into a host image. The embedding procedure of our scheme is efficient: the embedding time is less than 25 s while achieving more than 2.6 bits per pixel (bpp).


Step 1 Extract a triple of consecutive cover pixels (pi−1, pi, pi+1), where i = 2, 5, ..., (W × H − (W × H mod 3) − 1).
Step 2 Embed three bits of the secret message into the LSBs of pi to generate the stego-image pixel pi′. Then compute d1 = pi−1 − pi′ and d2 = pi+1 − pi′.
Step 3 If the decimal secret sd is equal to M(d1, d2), keep d1 and d2 unchanged; otherwise, embed sd according to the following rules:
Case 1 M(d1, d2) belongs to the petal matrix, which means this pair (d1, d2) can carry five digits of the secret message, ranging from (00000)2 to (11111)2. When the sub-secret sd is unequal to M(d1, d2), find the pair (d1′, d2′) that has the shortest distance to (d1, d2) and is equal to the sub-secret sd, with the guidance of the x-cross matrix. Then change (d1, d2) to (d1′, d2′), where d1′ = pi−1′ − pi′ and d2′ = pi+1′ − pi′, to generate the stego-pixels pi−1′ and pi+1′.
Case 2 M(d1, d2) belongs to the stamen matrix, which means this pair (d1, d2) can carry three digits of the secret message, ranging from (000)2 to (111)2. When the sub-secret sd is unequal to M(d1, d2), find the pair (d1′, d2′) that is equal to the sub-secret sd with the guidance of the x-cross matrix, and change (d1, d2) to (d1′, d2′) as in Case 1 to generate the stego-pixels pi−1′ and pi+1′.
Case 3 M(d1, d2) belongs to the calyx matrix, which means this pair (d1, d2) can carry two digits of the secret message, ranging from (00)2 to (11)2. When the sub-secret sd is unequal to M(d1, d2), find the pair (d1′, d2′) that is equal to the sub-secret sd with the guidance of the x-cross matrix, and change (d1, d2) to (d1′, d2′) as in Case 1 to generate the stego-pixels pi−1′ and pi+1′.
So far, the triple of consecutive stego-pixels (pi−1′, pi′, pi+1′) has been generated.
Step 4 Repeat Steps 1-3 until the whole secret message is embedded. We finally obtain the stego-image.
For example, when embedding a secret digit "3" by the x-cross-shaped reference matrix and a secret digit "7" by LSB substitution under the triple of cover pixels (pi−1, pi, pi+1) = (79, 74, 82), the LSB substitution procedure changes the central pixel from 74 = (1001010)2 to 79 = (1001111)2, so that (pi−1, pi′, pi+1) = (79, 79, 82). Then compute (d1, d2) = (0, 3), and find that M(0, 4) is 3, so that (d1′, d2′) = (0, 4); finally, the stego-vector becomes (pi−1′, pi′, pi+1′) = (79, 79, 83). If we want to embed a secret digit "2" by the x-cross-shaped reference matrix and a secret digit "5" by LSB substitution under the triple of cover pixels (pi−1, pi, pi+1) = (77, 74, 77), the embedding procedure first obtains (pi−1, pi′, pi+1) = (77, 77, 77), next computes (d1, d2) = (0, 0), and finally obtains (d1′, d2′) = (−1, 1). Therefore, the triple of stego-pixels is (pi−1′, pi′, pi+1′) = (78, 77, 76). If embedding a secret digit "14"


by the x-cross-shaped reference matrix and a secret digit "7" by LSB substitution under the triple of cover pixels (pi−1, pi, pi+1) = (49, 45, 50), apply the LSB procedure to obtain (pi−1, pi′, pi+1) = (49, 47, 50) and compute (d1, d2) = (2, 3), so that (d1′, d2′) = (3, 4). At last, the triple of stego-pixels is (pi−1′, pi′, pi+1′) = (50, 47, 51).
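The LSB-substitution step in these worked examples can be checked with a one-liner (our own verification aid, not part of the original scheme):

```python
def embed_lsb3(pixel, secret):
    """Replace the three least-significant bits of a pixel with a 3-bit secret."""
    return (pixel & ~0b111) | (secret & 0b111)

# the worked examples above: 74 with secret 7 -> 79, 74 with 5 -> 77, 45 with 7 -> 47
```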

6.8 Secret Extraction Procedure

First, select a triple of consecutive stego-pixels (pi−1′, pi′, pi+1′) from the stego-image. One part of the secret data is obtained from the three least-significant bits of pi′. Second, compute d1′ = pi−1′ − pi′ and d2′ = pi+1′ − pi′. Based on the location of the pair (d1′, d2′) on the x-cross-shaped reference matrix, M(d1′, d2′) is the other part of the secret data. The whole secret message is recovered from the stego-image by repeating this extraction procedure. Assume (pi−1′, pi′, pi+1′) = (83, 79, 79). According to the extraction procedure, we extract the secret "7" = (111)2 from pi′ and "3" = (11)2 from (d1′, d2′), respectively. What about the triple (pi−1′, pi′, pi+1′) = (78, 77, 76)? Secret data "5" = (101)2 and "2" = (10)2 can be extracted from pi′ and (d1′, d2′), respectively. Finally, from (pi−1′, pi′, pi+1′) = (51, 47, 50), the extracted secrets are "7" = (111)2 from pi′ = 47 and "14" from (d1′, d2′) = (4, 3).
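The extraction procedure above is symmetric and cheap. In this sketch, M is assumed to be a lookup table mapping a difference pair to its x-cross matrix value:

```python
def extract(triple, M):
    """Recover the LSB secret and the matrix secret from a stego triple.
    M maps (d1', d2') to the corresponding x-cross matrix value."""
    left, centre, right = triple
    s_lsb = centre & 0b111                  # secret hidden by LSB substitution
    s_matrix = M[(left - centre, right - centre)]
    return s_lsb, s_matrix
```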

6.9 Experimental Result

Two measuring parameters are used in the experiments to quantify the performance of the proposed method: the embedding capacity (EC) and the peak signal-to-noise ratio (PSNR). EC is the amount of secret data embedded in a test image, and PSNR is an objective criterion for evaluating image quality (a greater PSNR means better image quality).

PSNR = 10 log10 (255^2 / MSE)    (6.6)

MSE = (1 / (H × W)) Σ(i=1..H) Σ(j=1..W) (p(i,j) − p′(i,j))^2    (6.7)

where H and W represent the height and width of the cover image, respectively, p(i,j) denotes an original cover pixel, and p′(i,j) denotes the corresponding camouflage (stego) pixel. The analysis of embedding payload versus image quality proceeds as follows: first, divide each test image into 4 × 4 nonoverlapping blocks; second, calculate the block standard deviations; third, use a histogram to present the relationship between the standard deviation and the number of blocks.
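Equations (6.6) and (6.7) can be computed directly, for example:

```python
import math

def psnr(cover, stego):
    """PSNR per Eqs. (6.6)-(6.7) for two equal-sized grayscale images
    given as 2-D lists of pixel values."""
    h, w = len(cover), len(cover[0])
    mse = sum((cover[i][j] - stego[i][j]) ** 2
              for i in range(h) for j in range(w)) / (h * w)
    return 10 * math.log10(255 ** 2 / mse)
```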


Table 6.1 shows a comparison of the experimental results of embedding secret information into the two categories of images. Apparently, smooth regions are more suitable for embedding secret information due to the smaller differences between pixel values. Figure 6.2 shows the block standard deviations computed for six 512 × 512 grayscale test images. For the smooth images, the block standard deviations are mostly concentrated around 0; for the complex images, they are spread well away from 0. Table 6.2 compares the experimental results of the proposed scheme with the turtle-shaped scheme [8] and the regular-octagon scheme [10]; the proposed scheme clearly provides outstanding results. Figure 6.3 compares the original bridge image with its camouflaged version, which has a low PSNR of 38.1356 dB. Although the bridge image is classified as a complex image, it is difficult for the human eye to recognize the difference between the original image and the stego-image.

Table 6.1 PSNR and payload of smooth images and complex images

Smooth images    PSNR (dB)   EC (bpp)     Complex images   PSNR (dB)   EC (bpp)
Lena             40.3699     2.6023       Baboon           38.1357     2.6403
Peppers          40.3431     2.6214       Bridge           38.7694     2.6173
Elaine           40.4183     2.6379       Land             38.7092     2.5935

Fig. 6.2 Calculation of the block standard deviations for smooth and complex images


Table 6.2 Comparison of the proposed scheme with the turtle-shaped scheme and the regular-octagon scheme

Name       The proposed scheme      Turtle-shaped scheme [8]   Regular-octagon scheme [10]
           PSNR (dB)   EC (bpp)     PSNR (dB)   EC (bpp)       PSNR (dB)   EC (bpp)
Lena       40.3699     2.6134       49.42       1.5            43.0017     2.5
Peppers    40.3531     2.6211       49.40       1.5            42.9873     2.5
Elaine     40.3879     2.6037       49.40       1.5            43.0069     2.5
Baboon     38.1356     2.6402       49.39       1.5            NA          NA
Bridge     40.3774     2.6022       49.42       1.5            NA          NA
Land       40.4181     2.6379       49.41       1.5            49.25       1.5

Fig. 6.3 Original image and stego-image based on bridge

Figure 6.4 shows the histograms of the six 512 × 512 grayscale test images used to identify the image types. The experimental results on embedding payload and image quality demonstrate that the proposed method is competitive among secret data embedding schemes.

6.10 Conclusion

In this paper, we proposed a new secret-embedding scheme to address the issues of low distortion and high embedding capacity when embedding secret data in media. The guidance of the x-cross-shaped reference-affected matrix was applied to increase the embedding capacity for transferring more secret messages. Adjacent pixels with similar values in a large area of the matrix can be utilized for data


Fig. 6.4 Histograms of six 512 × 512 grayscale test images for identifying image types

embedding and extraction on a difference-coordinate plane instead of the traditional pixel-coordinate plane. We combined three parts, the petal matrix, the calyx matrix, and the stamen matrix, into the x-cross-shaped reference matrix for embedding secret data. The experimental results, compared with previous methods, show that the proposed scheme delivers outstanding payloads with good visual quality.

References

1. Jayaram, P., Ranganatha, H.R., Anupama, H.S.: Information hiding using audio steganography—a survey. Int. J. Multimed. Appl. 3, 86–96 (2011)
2. Li, B., He, J., Huang, J., Qing Shi, Y.: A survey on image steganography and steganalysis. J. Inf. Hiding Multimed. Signal Process. 2, 142–172 (2011)
3. Ngo, T.-G., Nguyen, T.-T., Ngo, Q.-T., Nguyen, D.-D., Chu, S.-C.: Similarity shape based on skeleton graph matching. J. Inf. Hiding Multimed. Signal Process. 7, 1254–1264 (2016)
4. Hu, Y.-C., Tsou, C.-C., Su, B.-H.: Grayscale image hiding based on modulus function and greedy method. Fundam. Inform. 86 (2008)
5. Yang, C.H.: Inverted pattern approach to improve image quality of information hiding by LSB substitution. Pattern Recognit. 41, 2674–2683 (2008)
6. Boopathy, R., Ramakrishnan, M., Victor, S.P.: Modified LSB method using new cryptographic algorithm for steganography. In: Advances in Intelligent Systems and Computing, pp. 591–600 (2014). https://doi.org/10.1007/978-81-322-1602-5_63


7. Zhang, X., Wang, S.: Efficient steganographic embedding by exploiting modification direction. IEEE Commun. Lett. 10, 781–783 (2006)
8. Liu, L., Chang, C.C., Wang, A.: Data hiding based on extended turtle shell matrix construction method. Multimed. Tools Appl. 76, 12233–12250 (2017). https://doi.org/10.1007/s11042-016-3624-7
9. Jin, Q., Li, Z., Chang, C.C., Wang, A., Liu, L.: Minimizing turtle-shell matrix based stego image distortion using particle swarm optimization. Int. J. Netw. Secur. 19, 154–162 (2017). https://doi.org/10.6633/IJNS.201701.19(1).16
10. Kurup, S., Rodrigues, A., Bhise, A.: Data hiding scheme based on octagon shaped shell. In: 2015 International Conference on Advances in Computing, Communications and Informatics, ICACCI 2015, pp. 1982–1986 (2015). https://doi.org/10.1109/ICACCI.2015.7275908
11. Lin, L., Hongbing, J.: Signal feature extraction based on an improved EMD method. Meas. J. Int. Meas. Confed. 42, 796–803 (2009)

Chapter 7

A Survey of Data Hiding Based on Vector Quantization Chin-Feng Lee, Chin-Chen Chang, Chia-Shuo Shih and Somya Agrawal

Abstract With the development of computers and networks, digital data can be transmitted quickly to anywhere in the world. Information security has become the focus of research for several researchers, as it is essential to protect the information being transferred over the Internet. In 1980, Linde et al. proposed vector quantization (VQ), a simple compression technique with good image quality and compression rate. In this paper, we explore VQ for embedding watermarks to achieve the goal of data hiding. Data hiding schemes for the VQ-encoded index table are studied from five papers and analyzed in terms of the characteristics of the different methods. A comparison of image quality and the amount of embedded information is presented and discussed.

Keywords Data hiding · Vector quantization · Search order code

C.-F. Lee · S. Agrawal (B) Department of Information Management, Chaoyang University of Technology, Taichung City, Taiwan e-mail: [email protected] C.-F. Lee e-mail: [email protected] C.-C. Chang · C.-S. Shih Department of Information Engineering and Computer Science, Feng Chia University, Taichung City 40724, Taiwan e-mail: [email protected] C.-S. Shih e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_7

7.1 Introduction

With the rapid development of computers, transferring data over the Internet is a must-have skill for everyone. Due to the rapid growth of Internet usage, information security has become the focus of research for several researchers, as it has become essential


to protect the information being transferred over the Internet. Nowadays, two methods of data protection have become very popular. In the first method, information is encrypted, which prevents others from knowing the true content of a message even though they know that important messages are being delivered. Despite the encryption, some people find ways to crack it and extract the important information; therefore, encrypted information attracts unsolicited attention very easily. The second method, called data hiding, protects information in such a way that people do not notice that important information is being transferred. Data hiding enables the sender to protect information by hiding significant information in a cover medium and then transferring it as a stego medium. The most commonly used carriers these days are images. When a secret message is hidden in an image, the image quality is diminished. However, as long as the image quality is maintained at a certain level, people remain unaware of the difference in the image before and after data hiding, thereby achieving the purpose of protecting data while transmitting important information. The topic of data hiding in the recent scientific literature is very vast, and various methods have been proposed to satisfy the characteristics of data hiding. Data hiding methods can be broadly classified into two domains: the spatial domain and the transform domain. Spatial domain technologies [1–3] execute data hiding by directly changing the media data; using this approach, it is possible to achieve high-capacity information hiding without destroying the visual effect of the original image. The transform domain technique is a hiding method in the frequency domain [4] or in the compression domain. Data hiding increases the size of the files that need to be transferred.
We need to compress the image and make space for the secret message in such a way that the image does not become larger after the data hiding process. This saves transit time and also reduces the risk of hidden secret messages being discovered. A common lossy compression technique is vector quantization (VQ) [5]; its process flow is simple and it has a high image compression ratio. However, it is useful for data hiding only if the correct secret data can be extracted after compression. In the VQ method, we cut the image into n × n nonoverlapping blocks and then, for each block, find the most similar codeword in a pretrained codebook (CB). Each index represents an n × n block of the codebook; these indexes are used instead of the original blocks because they require less space, which completes the compression. In 2016, Qin et al. proposed a reversible data hiding technique in VQ [6], which uses improved search order coding (ISOC) to hide information. In 2018, Pan et al. proposed reversible data hiding in VQ [7] using two-stage VQ compression and search order coding (SOC), in which more space was cleared to embed secret messages. In 2018, Rahmani et al. also proposed reversible data hiding based on VQ [8], in which similar indexes were moved toward the front of the codebook by sorting the CB; using this technique, it is possible to hide secret messages of various lengths at different positions. In 2018, Huang et al. proposed a data hiding technique with VQ [9]. Since VQ is a lossy compression method, the image quality degrades after decompression; however, using a two-level encoding data hiding method, secret messages can be hidden and the image quality can be further improved. In 2019,


Huang et al. proposed an improvement [10] of this method, in which dynamic data hiding is executed by flipping the secret message, improving both the amount of space available for hiding and the image quality. In the second section, the abovementioned methods are presented in more detail. In the third section, a comparative analysis of each method's experimental results is provided. Finally, the fourth section concludes the paper.
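The block-to-nearest-codeword mapping described above is the core of VQ encoding. A minimal sketch, using flattened blocks and squared Euclidean distance (function names ours):

```python
def vq_encode(blocks, codebook):
    """Map each n*n block (flattened) to the index of its nearest codeword
    under squared Euclidean distance - the basic VQ compression step."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda k: dist2(block, codebook[k]))
            for block in blocks]
```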

7.2 Related Work

7.2.1 Data Hiding with VQ and ISOC

In 2016, Qin and Hu [6] proposed the use of improved search order coding (ISOC) to hide secret information; we call this method VQ_ISOC in this paper. ISOC encoding performs lossless compression on the VQ index table, allowing the compressed image to be compressed again. ISOC divides the indexes into three categories: the search order code (SOC), the relative address (RA), and the plain VQ index. The SOC- and RA-type indexes are used to hide the secret information. First, the original image is compressed using the VQ technique and then encoded using ISOC. During the ISOC encoding process, one secret bit can be embedded in an SOC- or RA-type index. Assume the current index X belongs to the SOC type; we then check whether the secret bit is 1 or 0. If the secret bit is 0, X remains SOC type, but if the secret bit is 1, X is changed to RA type. Similarly, if the current index X is of RA type and the secret bit is 1, X remains RA type; when the secret bit is 0, X is changed to SOC type. If the current index X is neither SOC nor RA type, no message is hidden in the VQ index X.
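The VQ_ISOC embedding rule reduces to a small decision table. A sketch (the type-name strings are ours):

```python
def embed_bit(index_type, secret_bit):
    """VQ_ISOC rule: an SOC-type index encodes bit 0, an RA-type index
    encodes bit 1; a plain VQ index carries no secret bit."""
    if index_type == "VQ":
        return "VQ"
    return "RA" if secret_bit == 1 else "SOC"

def extract_bit(index_type):
    """Inverse mapping used by the receiver (None for plain VQ indexes)."""
    return {"SOC": 0, "RA": 1}.get(index_type)
```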

7.2.2 Data Hiding with Two-Stage VQ and SOC

In 2018, Pan and Wang [7] proposed the use of two-stage VQ and search order coding (SOC) to hide secret data; we call this method SMVQ_SOC in this paper. After the two-stage VQ compression, the compressed data is losslessly compressed again using SOC, improving the image quality and compression rate so that the freed-up space can be used to hide secret data. In the first VQ stage, SMVQ_SOC exploits the characteristics of side match vector quantization (SMVQ). SMVQ encodes with a state codebook (SCB): it computes a value from the left and upper blocks of the current target block and then extracts the n indexes of the original CB that are closest to this value. The number of codewords in the SCB should be smaller than in the original CB to achieve the effect


of reducing size. In this SCB, the first index X_S1 is usually similar to the current target block. Therefore, SMVQ_SOC uses this feature and subtracts X_S1 from the current target block to obtain a difference block. In the second VQ stage, the difference blocks obtained in the first stage are used to train a difference codebook (DC), and the DC is used to encode each difference block with the VQ technique, finally yielding a difference index X_D. After completing the two-stage VQ compression of the entire image, SMVQ_SOC compresses X_D with SOC to obtain X_DS. Finally, the SMVQ_SOC method appends the secret message behind each X_DS index to complete the embedding.

7.2.3 Data Hiding with Sorting Codebook

In 2018, Rahmani and Dastghaibyfard [8] proposed a method that changes the index values to execute data hiding; we call this method VQ_SORT in this paper. First, the entire original image is compressed with the VQ technique to obtain its index table (IT). Next, Rahmani et al. use the SMVQ technique to sort the CB using the left and top indexes of the current target index X; the sorted CB places the codewords similar to X near the front of the CB. We call this approach VQ_SMVQ in this paper. Alternatively, the LAS scheme sorts the codewords of the codebook by exploiting the similarity of adjacent blocks: the index used by the previous block is placed in the first position of the CB, and the remaining indexes follow, so when the next target block is encoded, its index is more likely to lie in the first few positions of the CB. We call this method VQ_LAS in this paper. After sorting the codewords by VQ_SMVQ or VQ_LAS, we obtain a new sorted codebook SCB_sorting, from which X receives a new index X_new. From X_new we derive L_S, the length of the secret data to be hidden. When the position of X_new is near the front of the state codebook (SCB) and is less than a threshold T, the hidden index length does not change. Finally, the secret message is embedded according to X_new and L_S (Fig. 7.1).

7.2.4 VQ-Based Data Hiding with Adaptive Pixel Replacements

In 2018, Huang et al. [9] proposed a method that uses the hidden data itself to improve the quality of a VQ-compressed image; we call this method VQ_R in this paper. The original image is compressed with the VQ technique, and the decompressed image is used for data hiding, which can enhance the image quality while embedding the secret message.


Fig. 7.1 Flowchart of VQ_SORT method

First, the original image is compressed with the VQ technique, and the compressed code is then decompressed to obtain a decompressed image Idc. Calculate the difference between each pixel of the current target block in the original image and the corresponding pixel in Idc, obtaining {d1, d2, ..., dn}, where n is the number of pixels in the block. Then find the largest difference value dmax, from which the length LS of the secret message is calculated. In order to extract the correct secret message, the index X must also be hidden along with the secret message. The secret message and X are hidden in the least-significant bits (LSBs) of each pixel in Idc; embedding is done by permanently changing the LSBs of these pixels. Finally, we obtain the embedded image.
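The dmax computation at the heart of VQ_R is one line; the exact mapping from dmax to the hidden length LS is defined in [9] and not reproduced here:

```python
def max_difference(orig_block, dec_block):
    """d_max for one block: the largest per-pixel gap between the original
    block and its VQ-decompressed counterpart. It bounds how many LSBs can
    be replaced without exceeding the distortion VQ already introduced."""
    return max(abs(o - d) for o, d in zip(orig_block, dec_block))
```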


7.2.5 VQ-Based Data Hiding with Dynamic Embedding Strategy

In 2019, Huang et al. [10] extended their initial work of 2018 [9]; we call this method VQ_D in this paper. Like the previous method, VQ_D uses the VQ technique to obtain the decompressed image Idc. VQ_D then frees up the LSBs of each pixel of the target decompressed block X_dc in Idc to obtain a new block X_dc′. VQ_D also clears the LSBs of each pixel of the target block X in the original image I to obtain a new block X′. The size of the to-be-embedded secret data is calculated according to the rule that X_dc′ plus the secret data si cannot be greater than X′ plus a predetermined threshold T. Therefore, the threshold value affects both the amount of information that can be hidden and the quality of the image. The flipping process of VQ_D checks the first bit of the to-be-embedded secret data. If the first bit of the current secret stream is "1," no flipping is needed; VQ_D simply adds the secret bits into the codeword to obtain the stego pixels, and the flag LSB is set to zero to indicate that the secret data is "not flipped." On the contrary, if the first bit of the current secret stream is "0," VQ_D flips the secret stream segment and then sets the flag bit to "1" to indicate "flipped." VQ_D hides the index of the target block, a string of flip flags, and the secret data si in the LSBs of the decompressed block X_dc′, obtaining the block with hidden information. By flipping according to the beginning of the secret message, VQ_D ensures that no errors occur during extraction.
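The flipping rule described above can be sketched as follows (a simplification of the VQ_D step; function name ours):

```python
def flip_for_embedding(segment):
    """VQ_D flipping rule: if the secret segment starts with '1', embed it
    as-is with flag '0'; otherwise embed its bitwise complement with flag '1',
    so the receiver can undo the flip from the flag."""
    if segment[0] == "1":
        return segment, "0"
    return "".join("1" if b == "0" else "0" for b in segment), "1"
```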

7.3 Comparison of Experimental Results Based on VQ Technique

In this section, we present the experimental results of data hiding methods based on several VQ techniques; the characteristics of each method are summarized in Table 7.1. The images used in the experiments were all 512 × 512 grayscale images, and the codebook size was 256. The VQ_ISOC method of Qin and Hu [6] and the SMVQ_SOC method of Pan and Wang [7] hide secret messages in the compressed codes, so their embedding capacity (EC) is limited by the image size and the VQ block size. In addition, VQ_ISOC embeds using only the SOC-type and RA-type indexes, and no message can be embedded in a plain VQ index, whereas SMVQ_SOC can hide in every index. With respect to the embedding capacity, SMVQ_SOC has more capacity than VQ_ISOC: for the Lena image, SMVQ_SOC achieved 16384 bits while VQ_ISOC achieved 12770 bits. The VQ_SORT method proposed by Rahmani and Dastghaibyfard [8] changes the index values and hides secret messages of different lengths; to increase the EC, the compression ratio [11] has to be sacrificed. The compression ratio is the uncompressed image size divided by the compressed image size. For the Lena image, the compression ratio (CR) of the VQ_ISOC method was 2.403 and the CR of the SMVQ_SOC method was 2.512.


Table 7.1 Comparison of different methods based on VQ technique

Methods                              VQ_ISOC [6]   SMVQ_SOC [7]   VQ_LAS [8]   VQ_SMVQ [8]   VQ_R [9]   VQ_D [10]
EC bounded by block size             Yes           Yes            Yes          Yes           No         No
Each index can be taken as carrier   No            Yes            Yes          Yes           Yes        Yes
Additional message                   No            No             Yes          Yes           No         No
Compressed code                      Yes           Yes            Yes          Yes           No         No
Reversible or irreversible (R/I)     R             R              R            R             I          I
Hiding in compressed code            Yes           Yes            No           No            No         No

Measures     Images
PSNR (dB)    Lena        30.972    31.2     31.66    31.66    34.86     34.29
             Airplane    31.450    30.6     31.49    31.49    33.73     34.45
EC (bits)    Lena        12770     16384    78888    66107    454996    524093
             Airplane    12933     16384    74888    70453    429569    511084
CR (bpp)     Lena        2.403     2.512    1.574    1.821    N         N
             Airplane    2.398     2.544    1.526    1.773    N         N

The CR of the VQ_LAS method was 1.574 and that of the VQ_SMVQ method was 1.821. Because of this, the VQ_SORT variants can carry more secret messages than the VQ_ISOC and SMVQ_SOC methods, respectively. VQ_R [9] and VQ_D [10], proposed by Huang et al., use the VQ technique to hide secret information in the decompressed image; the final result is a stego image rather than a compressed image code, so no CR is reported for VQ_R and VQ_D. The EC of the VQ_R method was slightly lower than that of the VQ_D method (by about 70000 bits), but VQ_R is relatively simpler; VQ_D is an improved version of VQ_R.

7.4 Conclusions

This paper analyzes VQ-based data hiding methods, which include hiding secret messages in the index table, hiding secret messages via the codebook, and completing the process of carrying a secret message in an image through the VQ technique. VQ is a simple compression technique with many parts that can be designed and improved. Further studies can consider whether there are better ways to hide data using the VQ technique, or use modified VQ techniques to hide secret messages.

Acknowledgements This research was partially supported by the Ministry of Science and Technology of the Republic of China under the Grants MOST 106-2221-E-324-006-MY2.


References

1. Lee, C.F., Weng, C.Y., Kao, C.Y.: Reversible data hiding using Lagrange interpolation for prediction-error expansion embedding. Soft Comput., 1–13 (2018)
2. Lee, C.F., Li, Y.C., Chu, S.C., Roddick, J.F.: Data hiding scheme based on a flower-shaped reference matrix. J. Netw. Intell. 3(2), 138–151 (2018)
3. Lee, C.F., Weng, C.Y., Chen, K.C.: An efficient reversible data hiding with reduplicated exploiting modification direction using image interpolation and edge detection. Multimed. Tools Appl. 76(7), 9993–10016 (2017)
4. Lee, C.F., Chang, C.C., Xie, X.Z., Mao, K., Shi, R.H.: High robust image watermarking scheme exploiting Arnold transform mapping in the DCT domain of YCbCr color space. Displays 53, 30–39 (2018)
5. Linde, Y., Buzo, A., Gray, R.M.: An algorithm for vector quantizer design. IEEE Trans. Commun. 28, 84–95 (1980)
6. Qin, C., Hu, Y.C.: Reversible data hiding in VQ index table with lossless coding and adaptive switching mechanism. Sig. Process. 129, 48–55 (2016)
7. Pan, Z., Wang, L.: Novel reversible data hiding scheme for two-stage VQ compressed images based on search-order coding. J. Vis. Commun. Image Represent. 50, 186–198 (2018)
8. Rahmani, P., Dastghaibyfard, G.: An efficient histogram-based index mapping mechanism for reversible data hiding in VQ-compressed images. Inf. Sci. 435, 224–239 (2018)
9. Huang, C.T., Tsai, M.Y., Lin, L.C., Wang, W.J., Wang, S.J.: VQ-based data hiding in IoT networks using two-level encoding with adaptive pixel replacements. J. Supercomput. 74, 4295–4314 (2018)
10. Huang, C.T., Lin, L.C., Yang, C.H., Wang, S.J.: Dynamic embedding strategy of VQ-based information hiding approach. J. Vis. Commun. Image Represent. 59, 14–32 (2019)
11. Wikipedia, Data compression ratio. https://en.wikipedia.org/wiki/Data_compression_ratio. Last accessed 13 Mar 1997

Chapter 8

A Survey of Authentication Protocols in Logistics System Chin-Ling Chen, Dong-Peng Lin, Chin-Feng Lee, Yong-Yuan Deng and Somya Agrawal

Abstract E-commerce has developed rapidly in recent years, and many services and applications that integrate IoT technologies are offered. Logistics is a representative application, one that focuses on rapid delivery, the integrity of goods, and the privacy of personal information. This study surveys the logistics environment and lists the security requirements of past schemes. Its goal is to ensure that these security issues are considered in logistics applications and to make readers conversant with the basic environment and needs of logistics systems. Keywords Mutual authentication · Privacy · Logistics system · Security · Integrity

C.-L. Chen: Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taichung City, Taiwan, ROC
C.-L. Chen, Y.-Y. Deng: School of Information Engineering, Changchun Sci-Tech University, Changchun 130600, China
D.-P. Lin, C.-F. Lee, S. Agrawal (B): Department of Information Management, Chaoyang University of Technology, Taichung City, Taiwan, ROC, e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2020. J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_8

8.1 Introduction

In recent years, with the rapid development of e-commerce, online shopping has become a trend, and many shopping and financial transactions, including online orders and online payments, can be completed online [1]. As buyers and sellers interact online, the purchased goods are divided into digital and physical products. If a product is physical, the seller entrusts a logistics provider to deliver the goods to the buyer. As these logistics requirements grow, greater focus is required not only on rapid delivery but also on ensuring the integrity of goods and the privacy of personal information [2, 3]. Unfortunately, the current process of goods delivery in online shopping cannot avoid an immediate physical exchange of goods. There is a risk of counterfeiting and fraud. In addition, goods may be lost due to human error, and this may be compounded by information errors, which make it impossible to determine where the goods were lost [4]. Liu and Wang [5] noted that preventing the loss of goods has become an important issue in this field. Moreover, the transportation process carries the risk of private information being leaked, which may result in improper use or theft of that personal information. Delivery verification also involves the risks of identity impersonation, parcel exchange, and loss of packages. Consider, for example, the situation of switched goods: the buyer pays the seller for high-priced goods A but receives low-cost goods B. Such a situation mostly occurs during the distribution process, when the goods are switched by malicious parties. Because there is no reliable mechanism for buyer and seller to identify each other, it is impossible to know who owns the goods or when the goods were lost [5, 6]. Aijaz et al. [7] classified attacker behaviors into active attackers, internal personnel, and malicious attackers. An active attacker tampers with shopping information, causing loss to the user or profit to himself, and can also forward stolen information to other attackers. Internal personnel are dangerous during the transmission process: with a good understanding of the project and of personal information, they can mount a variety of complex attacks. The main goal of a malicious attacker is to steal or tamper with information and cause property damage. This paper surveys the requirements and common attacks of the logistics system, with the aim of protecting personal privacy and shopping information during transmission and preventing information from being stolen. It will make readers conversant with the basic environment and needs of logistics systems. The remainder of this paper is arranged as follows. Section 8.2 presents the security threats in logistics.
Section 8.3 illustrates the security requirements of logistics. Section 8.4 makes a security comparison. Section 8.5 offers conclusions.

8.2 Security Threats in Logistics

Despite the continuous improvement of logistics systems in recent years, security threats still exist. In a communication protocol, identity authentication is very important. To achieve a good logistics system, the following known attacks must be prevented [8–12]:

(1) Modification attack: The attacker intercepts the information exchanged between the transmitting and receiving parties and modifies the contents of the shopping information, resulting in losses for both parties. The transmitted information must therefore withstand modification attacks.

(2) Impersonation attack: The attacker uses a fake identity to disguise himself as the sender and sends a fake message to the receiver, causing the receiver to accept a false message; the attacker impersonates a user in the communication for his own benefit.


(3) Man-in-the-middle attack: The attacker establishes independent connections with both ends of the communication and relays messages between them, so that both sender and receiver believe they are talking directly to each other over a private connection. In fact, the entire conversation is completely controlled by the attacker, who can intercept messages from both parties and insert new content.

(4) Clone attack: The attacker steals information by copying a label and impersonating a deliverer to deliver non-original information.

(5) Replay attack: During communication, the attacker may capture recent or old messages and resend them to the transmitting and receiving parties, creating the illusion of legitimate communication with either party.

The attacks encountered in logistics are listed above. Each attack has different characteristics, and the prevention methods differ accordingly. In communication, in addition to protecting the information itself from external damage, the identity of the sender of the information should also be verified.
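Of these, the replay attack has the most mechanical defense: bind each message to a timestamp and a one-time nonce that the receiver checks for freshness. The Python sketch below is illustrative only; the function names and the 30-second freshness window are our own assumptions, not taken from any surveyed protocol.

```python
import time

FRESHNESS_WINDOW = 30  # seconds; assumed tolerance for clock skew and delay
seen_nonces = set()    # in practice, expired entries would be evicted

def accept_message(msg_timestamp, msg_nonce, now=None):
    """Reject stale or previously seen messages to defeat replay attacks."""
    now = time.time() if now is None else now
    if abs(now - msg_timestamp) > FRESHNESS_WINDOW:
        return False               # stale: possibly a replayed old message
    if msg_nonce in seen_nonces:
        return False               # duplicate nonce: a replay
    seen_nonces.add(msg_nonce)
    return True
```

A receiver applying this check accepts each (timestamp, nonce) pair at most once, so a captured message cannot simply be resent later.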

8.3 Security Requirements

In order to address the security threats of the logistics system, it is essential to meet system requirements that comply with the normal operation of the network. These requirements are primarily general security requirements. The principal requirements of a logistics system, as proposed in [13–22], are as follows:

(1) Mutual authentication: The basic requirement for good system communication is identity authentication during the transmission process. The message must guarantee the valid identity of both sender and receiver [13–17].

(2) Non-repudiation: If each identity is not authenticated during information transmission, the sender and receiver are vulnerable to impersonation attacks. The non-repudiation of information is therefore crucial to effectively prevent impersonation [13, 14, 17, 18].

(3) Anonymity: User information is easily disclosed in the goods delivery process. The transmitted contents should therefore keep the user's private information secret [13–15, 17–19].

(4) Integrity: In an unprotected environment, information is easily tampered with during transmission. The integrity of the information must therefore be ensured [13–18].

(5) Confidentiality: Confidentiality is a consideration for communication in logistics. The required level of message confidentiality depends on the application area and the particular circumstances. Sensitive messages should be encrypted; nonsensitive messages need not be, to avoid wasting computing resources [13, 17, 20, 21].


(6) Low overhead: Identity verification in the information transmission process must guarantee information integrity while maintaining transmission speed, so it is necessary to reduce the computation cost for a faster system [17, 18, 21, 22].
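Requirements (4) and (6) are commonly met together with a keyed hash: an HMAC tag guarantees message integrity at low computational cost. The following is a generic sketch with hypothetical names, in the spirit of lightweight hash-based schemes such as [17], not the construction of any surveyed paper.

```python
import hashlib
import hmac

def make_tag(shared_key: bytes, message: bytes) -> bytes:
    """Sender attaches an HMAC-SHA256 tag so tampering is detectable."""
    return hmac.new(shared_key, message, hashlib.sha256).digest()

def verify(shared_key: bytes, message: bytes, tag: bytes) -> bool:
    """Receiver recomputes the tag; constant-time compare avoids timing leaks."""
    return hmac.compare_digest(make_tag(shared_key, message), tag)
```

Any change to the message or use of a wrong key makes verification fail, which covers the integrity requirement at a cost far below public-key signatures (though, unlike signatures, a shared-key MAC does not provide non-repudiation).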

8.4 Security Comparison

During the communication phase, information must be verified by the system, and verified messages need to meet the security conditions so that the sender and receiver can communicate with confidence over an open channel [7, 13]. We have surveyed several schemes and analyzed their security requirements, as shown in Table 8.1.

Table 8.1 Security requirements of related works

Requirement / attack       Ding et al. [3]   Wazid et al. [9]   Li and Zhang [13]   Gope et al. [17]
Integrity                  Yes               Yes                Yes                 Yes
Non-repudiation            Yes               NA                 Yes                 Yes
Anonymity                  Yes               Yes                Yes                 Yes
Mutual authentication      No                Yes                Yes                 Yes
Confidentiality            Yes               Yes                Yes                 Yes
Low overhead               No                NA                 NA                  Yes
Modification attack        NA                Yes                Yes                 Yes
Impersonation attack       NA                Yes                Yes                 Yes
Man-in-the-middle attack   Yes               Yes                Yes                 Yes
Clone attack               NA                NA                 NA                  Yes
Replay attack              NA                Yes                Yes                 Yes

From Table 8.1, with respect to security requirements, we can see that the previous solutions meet most of the known requirements. Regarding message integrity, most of the literature indicates that this is the primary task when building a logistics system: if a message is incomplete, subsequent authentication operations will not continue because the message is not trustworthy. Mutual authentication is an important basis for the delivery process of a logistics system. With respect to security threats, as long as the security requirements reach a certain level, the above solutions can in principle prevent most malicious attacks, except for clone attacks. The clone attack is specific to logistics: in a transportation environment, it is difficult to check whether the contents have been changed during the delivery of goods, and such malicious behavior can only be tracked through digital signatures over the goods. Replay attacks are prevented by verifying timestamps; although this filters out attacks to a certain degree, the method may be less effective against more complex attacks. Against the other attacks, these authentication schemes can effectively prevent the communication process from being intercepted by malicious parties. Ding et al. [3] addressed mutual authentication and cost reduction, but the shared keys they used to prevent attacks cannot counter common attacks. The scheme of Wazid et al. [9] is slightly weaker on non-repudiation because the transmitted messages lack strong evidence; this could be remedied by a digital signature mechanism. Li and Zhang [13] proposed an identity-based AAGKAwSNP protocol with outstanding performance in terms of privacy and non-repudiation. Gope et al. [17] used lightweight cryptographic tools, namely hash functions and symmetric-key encryption, to achieve the known security and functional requirements.

8.5 Conclusions

This paper surveys the development trends of logistics systems and their existing problems. We have listed several attacks and security requirements and described them in detail. After investigating and analyzing the existing authentication schemes, we believe that the required security level is becoming higher than past standards, and the analysis also indicates that the computing cost of newly proposed methods needs to be reduced. We expect more attention to be paid to the communication security of various applications, leading to more convenient services in logistics systems. Acknowledgements This research was partially supported by the Ministry of Science and Technology of the Republic of China under the Grants MOST 106-2221-E-324-006-MY2.

References

1. Speranza, M.G.: Trends in transportation and logistics. Eur. J. Oper. Res. 264, 830–836 (2018)
2. Xue, L., Hu, C.: Application of RFID technology in agricultural byproduct logistics and food security supervising. In: 2014 Fifth International Conference on Intelligent Systems Design and Engineering Applications, pp. 226–229. IEEE (2014)
3. Ding, L.-H., Wang, J., Liu, L.: Privacy-preserving temperature query protocol in cold-chain logistics. In: 2015 7th International Conference on Intelligent Human-Machine Systems and Cybernetics, vol. 1, pp. 113–116. IEEE (2015)
4. Geng, Y., Li, J.: A research of electronic commerce logistics model based on cloud logistics and warehousing. In: 2012 International Symposium on Management of Technology (ISMOT), pp. 631–635. IEEE (2012)
5. Liu, S., Wang, J.: A security-enhanced express delivery system based on NFC. In: 2016 13th IEEE International Conference on Solid-State and Integrated Circuit Technology, pp. 1534–1536 (2016)
6. Cui, J., She, D., Ma, J., Wu, Q., Liu, J.: A new logistics distribution scheme based on NFC. In: 2015 International Conference on Network and Information Systems for Computers, pp. 492–495 (2015)
7. Aijaz, A., Bochow, B., Dotzer, F., Festag, A., Gerlach, M., Kroh, R., Leinmuller, T.: Attacks on inter vehicle communication systems—an analysis. In: Proceedings of the WIT 2006, pp. 189–194 (2006)
8. Cho, J.-S., Jeong, Y.-S., Park, S.: Consideration on the brute-force attack cost and retrieval cost: a hash-based radio-frequency identification (RFID) tag mutual authentication protocol. Comput. Math Appl. 69, 58–65 (2015)
9. Wazid, M., Das, A.K., Kumar, N., Odelu, V., Reddy, A.G., Park, K., Park, Y.: Design of lightweight authentication and key agreement protocol for vehicular ad hoc networks. IEEE Access 5, 14966–14980 (2017)
10. Liu, M.-M., Yu-Pu, H.: Equational security of a lattice-based oblivious transfer protocol. J. Netw. Intell. 2(3), 231–249 (2017)
11. Sun, Y., Zheng, W.: An identity-based ring signcryption scheme in ideal lattice. J. Netw. Intell. 3(3), 152–161 (2018)
12. Chen, C.-M., Huang, Y., Wang, E.K., Wu, T.-Y.: Improvement of a mutual authentication protocol with anonymity for roaming service in wireless communications. Data Sci. Pattern Recognit. 2(1), 15–24 (2018)
13. Li, J., Zhang, L.: Sender dynamic, non-repudiable, privacy-preserving and strong secure group communication protocol. Inf. Sci. 414, 187–202 (2017)
14. Zhang, X., Li, H., Yang, Y., Sun, G., Chen, G.: Lipps: logistics information privacy protection system based on encrypted QR code. In: IEEE Trustcom/BigDataSE/ISPA, pp. 996–1000. IEEE (2016)
15. Zhao, S., Aggarwal, A., Frost, R., Bai, X.: A survey of applications of identity-based cryptography in mobile ad-hoc networks. IEEE Commun. Surv. Tutor. 14(2), 380–400 (2012)
16. Das, A.K., Goswami, A.: A robust anonymous biometric-based remote user authentication scheme using smart cards. J. King Saud Univ. Comput. Inf. Sci. 27, 193–210 (2015)
17. Gope, P., Amin, R., Hafizul Islam, S.K., Kumar, N., Bhalla, V.K.: Lightweight and privacy-preserving RFID authentication scheme for distributed IoT infrastructure with secure localization services for smart city environment. Futur. Gener. Comput. Syst. 83, 629–637 (2018)
18. Whitmore, A., Agarwal, A., Da, X.L.: The internet of things: a survey of topics and trends. Inf. Syst. Front. 17(2), 261–274 (2015)
19. Chen, C.-M., Linlin, X., Tsu-Yang, W., Li, C.-R.: On the security of a Chaotic maps-based three-party authenticated key agreement protocol. J. Netw. Intell. 1(2), 61–66 (2016)
20. Chen, C.-M., Wang, K.-H., Wu, T.-Y., Wang, E.K.: On the security of a three-party authenticated key agreement protocol based on Chaotic maps. Data Sci. Pattern Recognit. 1(2), 1–10 (2017)
21. Wu, T.-Y., Chen, C.-M., Wang, K.-H., Pan, J.-S., Zheng, W., Chu, S.-C., Roddick, J.F.: Security analysis of Rhee et al.'s public encryption with keyword search schemes: a review. J. Netw. Intell. 3(1), 16–25 (2018)
22. Sharma, V., Vithalkar, A., Hashmi, M.: Lightweight security protocol for chipless RFID in Internet of Things (IoT) applications. In: 2018 10th International Conference on Communication Systems & Networks (COMSNETS), pp. 468–471 (2018)

Chapter 9

Enhanced Secret Hiding Mechanism Based on Genetic Algorithm Cai-Jie Weng, Shi-Jian Liu, Jeng-Shyang Pan, Lyuchao Liao, Trong-The Nguyen, Wei-Dong Zeng, Ping Zhang and Lei Huang

Abstract Many multimedia materials are generated in daily life by industrial production processes, transactions, and multimedia communications. Digital watermarking technology plays a vital role in the field of multimedia information security, and the field has seen many remarkable achievements. However, the contradiction between the robustness of the watermark and the quality of the cover multimedia has received little attention from scholars. This paper proposes a genetic-algorithm-based method to improve the cover image quality of a QIM-based watermarking algorithm. The experimental results demonstrate that the proposed algorithm is effective. Keywords Quantization index modulation · Watermarking · Genetic algorithm

C.-J. Weng · S.-J. Liu · J.-S. Pan (B) · T.-T. Nguyen Fujian Provincial Key Lab of Big Data Mining and Applications, Fujian University of Technology, Fuzhou, Fujian, China e-mail: [email protected] C.-J. Weng e-mail: [email protected] T.-T. Nguyen e-mail: [email protected] L. Liao Fujian Key Laboratory for Automotive Electronics and Electric Drive, Fujian University of Technology, Fuzhou, China T.-T. Nguyen Department of Information Technology, Haiphong Private University, Haiphong, Vietnam W.-D. Zeng Smart Fuzhou Management Service Center, Fujian, China P. Zhang · L. Huang Fuzhou Investigation and Surveying Institute, Fujian, China © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_9


9.1 Introduction

As beneficiaries of information technology, we now create a great deal of multimedia material, such as images, audio, and video, in daily life and in industrial production processes [1–3], for example, the monitoring video generated during electric power generation, operation, and management [4]. Following this trend is the demand for the security of this multimedia information [5]. The digital watermarking technology proposed by Tirkel et al. [6] was designed to protect the security of multimedia information. The last decades have witnessed many proposed watermarking algorithms for multimedia protection, including LSB, QIM, difference expansion, and histogram modification [4, 7, 8]. According to the embedding domain, digital watermarking algorithms can be divided into three categories: spatial domain, frequency domain, and encryption domain. The frequency domain can be subdivided into transform domains such as DCT, DWT, DFT, and contourlet. The main feature of spatial domain watermarking algorithms is that they are easy to implement; the LSB algorithm in [4] is the most straightforward classical watermarking algorithm in the spatial domain. This method directly replaces the least significant bit of each cover image pixel value with secret information. In [9], the cover image is sub-sampled and multiple secret messages are embedded in its DCT domain using a watermarking algorithm based on quantization modulation, which achieves multipurpose protection for the cover image. Zhang et al. embedded a QR code, as secret information, into the DWT domain of the cover image; this is robust to compression and noise and also improves the security of the secret information [9]. Xiang et al. proposed a reversible data hiding scheme that first encrypts the cover image and then embeds the secret information into the ciphered image [10].
This method has higher security than other algorithms but requires a great deal of time for image encryption and decryption [9]. Aiming at embedding a message into a QR code, we proposed a module expansion (ME)-based watermarking method in [11]. The core idea of ME is to expand a module into its neighbor if the two modules are of different colors. A hybrid DCT and DWT scheme was introduced in [12], which ensures the transparency of the cover image and is highly robust against linear and nonlinear attacks. Digital watermarking is a technique for embedding secret information into multimedia products by exploiting the redundancy of the cover medium, which inevitably affects the quality of the original multimedia. The robustness of the watermark and the quality of the cover multimedia are in contradiction [13]. To trade off this contradiction, this paper proposes an algorithm that improves the cover image quality of a QIM-based watermarking algorithm using a genetic algorithm. Comparisons with other methods in the literature show that the proposed approach outperforms them. The whole paper is organized as follows. Section 9.2 introduces the related works. Section 9.3 presents the proposed method in detail. Experiments and results are given in Sect. 9.4. Conclusions are drawn in Sect. 9.5.
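Of the surveyed spatial-domain schemes, LSB substitution is simple enough to show in full; the sketch below is a generic illustration of the idea, not the exact algorithm of any cited paper.

```python
def lsb_embed(pixels, bits):
    """Replace the least significant bit of each pixel with a secret bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def lsb_extract(pixels, n):
    """Read back the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]
```

Because only the lowest bit changes, each pixel value moves by at most 1, which is why LSB embedding is nearly invisible yet also fragile to any processing that perturbs pixel values.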


9.2 Related Works

9.2.1 Watermarking Technique Based on Quantization Index Modulation

The information hiding method based on the quantization idea is a blind watermarking algorithm. The main idea is to quantize the original cover data into different quantization intervals according to the secret information, and to extract the watermark information according to the associated quantization interval. The watermarking technique based on quantization index modulation divides the coordinate axis into two equal interval sets, marked A and B, where the interval size Δ is the quantization step length. It is stipulated that the A interval set represents 1 and the B interval set represents 0. The method of embedding secret information in a cover image is as follows: according to the watermark bit, 0 or 1, the pixel value of the original cover image is adjusted to the middle value of the nearest interval of the corresponding set. When extracting the watermark, it is only necessary to determine whether the interval in which the pixel value lies belongs to the A set or the B set, from which it is known whether the secret bit is 1 or 0. The principle of the QIM-based watermarking algorithm is shown in Fig. 9.1.
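The embed/extract cycle just described can be sketched as follows. This is an illustrative sketch only: the convention that even-indexed intervals form the A set encoding bit 1, and the mapping of values to interval midpoints, are our assumptions.

```python
import math

def qim_embed(x, bit, delta):
    """Move x to the midpoint of the nearest interval whose index parity
    encodes `bit` (assumed convention: even index = A set = bit 1)."""
    q = math.floor(x / delta)
    want_even = (bit == 1)
    if (q % 2 == 0) == want_even:
        return (q + 0.5) * delta          # x already lies in a matching interval
    # otherwise jump to the nearer adjacent interval with the right parity
    lo, hi = (q - 0.5) * delta, (q + 1.5) * delta
    return lo if abs(x - lo) < abs(x - hi) else hi

def qim_extract(x, delta):
    """Blind extraction: only the interval index parity is needed."""
    return 1 if math.floor(x / delta) % 2 == 0 else 0
```

Note the distortion problem the next section addresses: when the parity does not match, the value may have to jump past an interval boundary, moving by up to 1.5Δ.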

9.2.2 Genetic Algorithm

The genetic algorithm (GA) was proposed by Professor J. Holland and his students in 1975 [14]. GA is a robust and adaptive random search method. It does not require the objective of optimization to be continuous, and the algorithm is simple, efficient, and easy to implement; it has therefore been widely used in different fields since the 1990s. It is inspired by Darwin's theory of evolution and Mendelian genetics, and is an optimization algorithm modeled on the complex adaptation of living things to their environment in nature. The pseudocode describing the principle of the genetic algorithm is shown in Fig. 9.2. The specific workflow consists of the following steps:

• Initialization: Set the iteration counter to 0, set the maximum number of generations of the population, and randomly generate individuals as the initial population.


Fig. 9.1 The schematic diagram of QIM


Fig. 9.2 Genetic algorithm structure

• Evaluate individuals: Calculate the fitness value of each individual in the population.
• Selection: The selection operator is applied to the population; according to the individuals' fitness values, some rule or method is used to select superior individuals to produce the next generation.
• Crossover: The crossover operator is applied to the population; each selected pair of individuals exchanges a part of their chromosomes with a certain probability to generate new individuals.
• Mutation: For selected individuals, change the value of one or more genes to other alleles with a certain probability.
• Termination criterion: Output the optimal result when the convergence condition is met or the maximum number of iterations is reached.
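The steps above can be condensed into a minimal GA sketch. Here the fitness is the toy OneMax objective (the number of 1-genes), and all parameter values are illustrative, not those used in the paper.

```python
import random

random.seed(0)

def ga(fitness, n_genes, pop_size=20, generations=60,
       crossover_rate=0.9, mutation_rate=0.02):
    """Plain genetic algorithm: tournament selection, one-point crossover,
    per-gene bit-flip mutation (a toy sketch of the workflow above)."""
    pop = [[random.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # selection: binary tournament on fitness
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick()[:], pick()[:]
            if random.random() < crossover_rate:   # one-point crossover
                cut = random.randrange(1, n_genes)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            # mutation: flip each gene with small probability
            nxt += [[g ^ (random.random() < mutation_rate) for g in c]
                    for c in (p1, p2)]
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

best = ga(sum, n_genes=16)  # OneMax: fitness = number of 1-genes
```

In the paper's setting, the chromosome would instead encode a candidate P/N distribution over the value space, and the fitness would be the resulting cover image quality (PSNR).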

9.3 Methodologies

The previous QIM-based algorithm divides the coordinate axis into two equal interval sets, marked A and B, where the A interval set can only represent 1 and the B interval set can only represent 0. Suppose the original cover data is very close to the middle value of an A interval and the watermark bit to be embedded is 0. In this case, the original data must be replaced with the middle value of the B interval closest to it, and the change to the original data is larger than half the quantization step length. This situation causes the original QIM-based algorithm to severely distort the cover image after the secret information is embedded. Aiming at enhancing the quality of the cover image, this paper proposes a more flexible QIM-based mechanism for embedding the watermark into a cover image. In our approach, A and B no longer represent fixed information; both of them can represent 1 or 0 according to actual needs. However, A and B are subject to the following constraint: only one of the adjacent A and B intervals can represent 1, and the other must



Fig. 9.3 The schematic diagram of the proposed method

represent 0. As shown in Fig. 9.3, we bind each pair of consecutive adjacent intervals A and B together as a new collection, marking the collection as a positive collection (abbreviated P) when A represents 1 and B represents 0, and otherwise as a negative collection (abbreviated N). The distribution of the P and N sets in the value space is one of the most important factors affecting the image quality of the cover image. This paper uses the genetic algorithm described in the previous section to search for the optimal distribution of P and N in the value space. The basic QIM-based information embedding formulas are as follows. If the watermark bit to embed is 1, the embedding formula is

X′ = ⌊X/Δ⌋ · Δ,        if ⌊X/Δ⌋ mod 2 = 0
X′ = ⌊X/Δ⌋ · Δ + Δ,    if ⌊X/Δ⌋ mod 2 ≠ 0 and X > 0
X′ = ⌊X/Δ⌋ · Δ − Δ,    if ⌊X/Δ⌋ mod 2 ≠ 0 and X < 0                    (9.1)

If the watermark bit to embed is 0, the embedding formula is

X′ = ⌊X/Δ⌋ · Δ,        if ⌊X/Δ⌋ mod 2 ≠ 0
X′ = ⌊X/Δ⌋ · Δ + Δ,    if ⌊X/Δ⌋ mod 2 = 0 and X > 0
X′ = ⌊X/Δ⌋ · Δ − Δ,    if ⌊X/Δ⌋ mod 2 = 0 and X < 0                    (9.2)

In the proposed method, we embed 1 with formula (9.1) and 0 with formula (9.2) in the intervals corresponding to P, and we embed 1 with formula (9.2) and 0 with formula (9.1) in the intervals corresponding to N.
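To illustrate why the flexible P/N labeling can reduce distortion, the sketch below embeds a bit under either polarity. Note that in the actual method the P/N labeling of the value space is fixed globally and searched by the genetic algorithm so that the extractor can share it; the per-sample choice here serves only to show the distortion gain, and the even-index-is-A convention is our assumption.

```python
import math

def flexible_embed(x, bit, delta, polarity):
    """Embed `bit` near x. polarity = +1 models a P collection (A = 1),
    polarity = -1 models an N collection (A = 0); values go to midpoints."""
    want_even = (bit == 1) if polarity > 0 else (bit == 0)
    q = math.floor(x / delta)
    if (q % 2 == 0) == want_even:
        return (q + 0.5) * delta          # current interval already matches
    lo, hi = (q - 0.5) * delta, (q + 1.5) * delta
    return lo if abs(x - lo) < abs(x - hi) else hi

def best_polarity(x, bit, delta):
    """Distortion-minimizing label, the quantity a GA would optimize globally."""
    cands = {s: flexible_embed(x, bit, delta, s) for s in (+1, -1)}
    return min(cands, key=lambda s: abs(cands[s] - x))
```

For a value near an A midpoint carrying bit 0, the P labeling forces a jump to a neighboring interval, while the N labeling leaves the value almost untouched, which is exactly the flexibility the proposed scheme exploits.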

9.4 Results and Analysis In this paper, we adopt the peak signal-to-noise ratio, often abbreviated as PSNR, to evaluate the quality of the cover image after embedding a secret message into it. Also, NHS is used to evaluate the robustness of the proposed algorithm against salt and pepper noise attacks. The NHS between the embedded watermark and the extracted watermark is defined as follows:




NHS = 1 − ( Σ_{i=1}^{m} Σ_{j=1}^{n} w(i, j) ⊕ w′(i, j) ) / (m × n)                    (9.3)

where w denotes the embedded watermark, w′ is the extracted watermark, and m and n are the numbers of rows and columns of the watermark. In our experiment, as shown in Fig. 9.4, nine original pictures of size 512 × 512 with 8 bits per pixel are used as cover images. The QR code in Fig. 9.4 is an example of a watermark image. A binary image of the same size as the cover image is randomly generated as the watermark information. After embedding it into the cover image with the different algorithms, the performance of the proposed algorithm is evaluated by comparing the PSNR values. For the original QIM-based algorithm, the entire value space is segmented either by P sets only or by N sets only. We compare the proposed method with these two QIM-based baselines; the experimental results are shown in Table 9.1, where R1 is the winning rate of the proposed method compared with the original QIM-based algorithm in which the whole value space is segmented by P sets, and R2 is the result of comparing the proposed method with the original QIM-based algorithm in which the value space is segmented by N sets. As can be seen from Table 9.1, the proposed method performs better than the previous algorithm.
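Both evaluation metrics can be computed directly; a small sketch in pure Python, assuming 8-bit grayscale images given as flat lists:

```python
import math

def psnr(cover, stego, peak=255):
    """Peak signal-to-noise ratio in dB between cover and stego images."""
    mse = sum((a - b) ** 2 for a, b in zip(cover, stego)) / len(cover)
    return float('inf') if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def nhs(w, w_ext):
    """The NHS of Eq. (9.3) between embedded and extracted binary watermarks."""
    diff = sum(a ^ b for a, b in zip(w, w_ext))
    return 1 - diff / len(w)
```

NHS is 1 for a perfectly recovered watermark and approaches 0.5 when the extracted bits are essentially random, which matches the values in Table 9.2.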

Fig. 9.4 Experimental cover images (Airplane, Baboon, Barbara, Boat, Elaine, GoldHill, House, Lake, Lena) and a watermark example

Table 9.1 The winning rate of the proposed method compared with the original QIM

Winning rate   Airplane   Baboon   Barbara   Boat   Elaine   GoldHill   House   Lake   Lena
R1             0.93       1.00     0.97      0.67   0.93     0.87       0.97    0.93   1.00
R2             0.77       0.93     0.97      0.83   0.93     0.93       0.87    0.83   0.97


Fig. 9.5 The experiment of salt and pepper noise attacks (attacked cover images and extracted watermarks w′ at noise intensities δ = 0.1, 0.01, and 0.001)

Table 9.2 The NHS of the extracted watermark after salt and pepper noise attacks

Noise intensity   Airplane   Baboon   Barbara   Boat     Elaine   GoldHill   House    Lake     Lena
0.001             0.9865     0.9960   0.9965    0.9979   0.9968   0.9939     0.9910   0.9978   0.9955
0.010             0.6248     0.6607   0.7196    0.7283   0.7331   0.6220     0.6253   0.7084   0.7128
0.100             0.5086     0.4981   0.4787    0.5166   0.5149   0.5006     0.4998   0.4794   0.4803

As shown in Fig. 9.5, salt and pepper noise attack experiments were carried out to validate the robustness of the proposed algorithm, where δ is the intensity of the salt and pepper noise and w′ is the watermark extracted after the noise attack. The results of extracting the watermark under a variety of noise intensities are shown in Table 9.2; at lower noise intensities, the NHS values remain high.

9.5 Conclusion

In this paper, we presented a new scheme to enhance the secret hiding mechanism for watermarking cover multimedia. The watermark is embedded into the cover image by a QIM-based mechanism, and the quality of the watermarked cover image is improved by applying a genetic algorithm to optimize the embedding. The experimental results, compared with the original approach, show that the proposed algorithm is an effective alternative. Acknowledgements This work was supported by the Natural Science Foundation of Fujian Province Grant 2018Y3001, Project of Fujian Education Department Funds Grant JK2017029, and the Scientific Research Project of Fujian University of Technology Grant GY-Z160130.


References

1. Shiu, P.-F., Lin, C.-C., Jan, J.-K., Chang, Y.-F.: A DCT-based robust watermarking scheme surviving JPEG compression with voting strategy. J. Netw. Intell. 3, 259–277 (2018)
2. Wu, T.-Y., Chen, C.-M., Wang, K.-H., Pan, J.-S., Zheng, W., Chu, S.-C., Roddick, J.F.: Security analysis of Rhee et al.'s public encryption with keyword search schemes: a review. J. Netw. Intell. 3, 16–25 (2018)
3. Nguyen, T.-T., Pan, J.-S., Chu, S.-C., Roddick, J.F., Dao, T.-K.: Optimization localization in wireless sensor network based on multi-objective firefly algorithm. J. Netw. Intell. 1, 130–138 (2016)
4. Tian, J.: Reversible data embedding using a difference expansion. IEEE Trans. Circuits Syst. Video Technol. 13, 890–896 (2003)
5. Chen, S.-H., Ko, S.-Y., Chen, S.-H.: Robust music genre classification based on sparse representation and wavelet packet transform with discrete trigonometric transform. J. Netw. Intell. 1, 67–82 (2016)
6. Tirkel, A.Z., Rankin, G.A., Van Schyndel, R.M., Ho, W.J., Mee, N.R.A., Osborne, C.F.: Electronic watermark. Digit. Image Comput. Technol. Appl., 666–673 (1993)
7. Rosales-Roldan, L., Chao, J., Nakano-Miyatake, M., Perez-Meana, H.: Color image ownership protection based on spectral domain watermarking using QR codes and QIM. Multimed. Tools Appl. 77, 16031–16052 (2018)
8. Kuang, F.-J., Zhang, S.-Y.: A novel network intrusion detection based on support vector machine and tent chaos artificial bee colony algorithm. J. Netw. Intell. 2, 195–204 (2017)
9. Lyu, W.-L., Chang, C.-C., Chou, Y.-C., Lin, C.-C.: Hybrid color image steganography method used for copyright protection and content authentication
10. Xiang, S., Luo, X.: Efficient reversible data hiding in encrypted image with public key cryptosystem. EURASIP J. Adv. Signal Process. 2017, 59 (2017)
11. Weng, C.-J., Pan, J.-S., Liu, S.-J., Wang, M.-J.: A watermarking method for printed QR code based on module expansion. In: International Conference on Intelligent Information Hiding and Multimedia Signal Processing, pp. 124–133. Springer (2018)
12. Abdulrahman, A.K., Ozturk, S.: A novel hybrid DCT and DWT based robust watermarking algorithm for color images. Multimed. Tools Appl., 1–23 (2019)
13. Pan, J.-S., Huang, H.-C., Jain, L.C.: Intelligent Watermarking Techniques. World Scientific (2004)
14. Holland, J.H.: Genetic algorithms. Sci. Am. 267, 66–73 (1992)

Chapter 10

An Adversarial Attack Method in Gray-Box Setting Oriented to Defenses Based on Image Preprocessing Yuxin Gong, Shen Wang, Xunzhi Jiang and Dechen Zhan

Abstract Recently, many studies have proposed image preprocessing defenses based on gradient masking to deal with the threat of adversarial examples in deep learning models. These defenses have been broken in white-box threat models, where attackers have full knowledge of target models. However, they have not been proved invalid in gray-box threat models, where attackers have only partial knowledge of target models. In this paper, by integrating stochastic initial perturbations into the momentum iterative attack, we propose SMIM, an efficient adversarial attack method. On this basis, the BPDA attack framework is applied to attacks in the gray-box setting. Experiments show that this method can generate adversarial examples with strong attack ability and transferability against seemingly non-differentiable defensive models, thereby evading defenses with only partial knowledge of target models. Keywords Gradient masking · Adversarial example · Deep learning · Gray-box setting

10.1 Introduction

A recent trend in deep learning is the introduction of pretrained classifiers into systems with high security requirements. Deep Neural Networks (DNNs) can efficiently form high-precision models by learning from large numbers of examples. However, recent studies have shown that attackers can generate adversarial examples by adding carefully selected micro-adversarial perturbations that make deep learning models produce erroneous outputs. Even in black-box settings, where attackers have limited knowledge of target models and certainly do not know their parameters, target models can still be fooled by adversarial examples.

Y. Gong · S. Wang (B) · X. Jiang · D. Zhan Department of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_10


The primary method of generating adversarial examples is to obtain useful gradients through backpropagation. Therefore, in order to make it difficult to generate adversarial examples, many defenders use gradient masking [1, 2] to make the gradients of models nonexistent or incorrect. Image preprocessing and denoising [3, 4] is a kind of adversarial defense based on gradient masking; it can be seen as adding a preprocessing layer in front of the neural network, but it can easily be broken in the white-box setting, where attackers have full knowledge of target models [5]. In the gray-box setting, where attackers do not know the internals of target models but can choose attack strategies based on prior defensive knowledge, these defenses have not been proved invalid.

In this paper, we propose Stochastic-MIM (SMIM) based on the Momentum Iterative Method (MIM) [6] and combine it with Backward Pass Differentiable Approximation (BPDA) to deal with the problem of gradient masking in gray-box settings. SMIM achieves a balance between attack capability and transferability, having a good attack effect on both white-box and black-box models [7]. On this basis, we apply SMIM within the BPDA framework [1] on substitute models. The robustness of this method is evaluated on several common image preprocessing defenses. Experiments on the ImageNet dataset show that adversarial examples generated by this method not only generalize well to target models but also effectively counter image preprocessing defenses based on gradient masking.

The rest of this paper is structured as follows: In Sect. 10.2, we introduce different methods used to generate adversarial examples. Next, in Sect. 10.3, we propose a novel adversarial attack method, as well as the attack framework used to evade image preprocessing defenses in gray-box settings. Then, Sect. 10.4 describes our experiments. Finally, we summarize our work in Sect. 10.5.

10.2 Backgrounds

Szegedy et al. [8] found that a variety of machine learning models are vulnerable to adversarial attacks. An adversarial example X* is generated by adding a small perturbation δ_X to the original input X so that the model misclassifies the input as Y*, chosen randomly or purposefully:

\arg\min_{\delta_X} \|\delta_X\| \quad \text{s.t.} \quad F(X + \delta_X) = Y^* \qquad (10.1)

Basic Iterative Method [9]. Kurakin et al. proposed the Basic Iterative Method (BIM) based on FGSM [10], which applies a number of small perturbation steps and clips the result after each iteration:

X_0^{\mathrm{adv}} = X, \quad X_{N+1}^{\mathrm{adv}} = \mathrm{Clip}_{X,\epsilon}\left\{X_N^{\mathrm{adv}} + \alpha \cdot \mathrm{sign}\left(\nabla_X J(\theta_F, X_N^{\mathrm{adv}}, Y)\right)\right\} \qquad (10.2)


where α is the degree of perturbation and ε is the maximum degree of perturbation. BIM can produce stronger adversarial examples than FGSM.

Momentum Iterative Method [6]. Dong et al. proposed the Momentum Iterative Method (MIM) by adding a momentum term μ to BIM. In this way, the gradient descent algorithm is accelerated by accumulating the velocity vector of the loss function in the direction of the gradients:

X_0^{\mathrm{adv}} = X, \quad G_{N+1} = \mu \cdot G_N + \frac{\nabla_X J(\theta_F, X_N^{\mathrm{adv}}, Y)}{\|\nabla_X J(\theta_F, X_N^{\mathrm{adv}}, Y)\|_1}, \quad X_{N+1}^{\mathrm{adv}} = X_N^{\mathrm{adv}} + \alpha \cdot \mathrm{sign}(G_{N+1}) \qquad (10.3)

where G_N collects the gradients of the previous N iterations with decay factor μ. The adversarial example X_N^{adv} is then perturbed in the direction of the sign of G_{N+1} at each iteration up to the N-th. In each iteration, the current gradient \nabla_X J(\theta_F, X_N^{adv}, Y) is normalized by its own L_1 norm. MIM can stabilize update directions and escape from poor local maxima.

Although basic single-step or multi-step iterative attack methods can effectively perform adversarial attacks in white-box settings, their black-box attack ability is poor. Adversarial examples generated by the momentum iterative method have better transferability, but they are still easily evaded by defenses based on gradient masking.
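As a minimal NumPy sketch of the MIM update of Eq. (10.3) — using a hypothetical toy quadratic loss and a `grad_loss` stand-in for ∇_X J, not the paper's DNN models; the ε-ball clipping from BIM is also kept for illustration:

```python
import numpy as np

def mim_attack(x, grad_loss, alpha=0.1, mu=1.0, eps=0.3, n_iter=10):
    """MIM update of Eq. (10.3): accumulate the L1-normalized gradient
    with decay factor mu and step by alpha * sign(G); the example is also
    kept inside the eps-ball around x (the clipping used by BIM).
    Setting mu = 0 recovers BIM without momentum."""
    x_adv = x.copy()
    g = np.zeros_like(x)
    for _ in range(n_iter):
        grad = grad_loss(x_adv)
        g = mu * g + grad / (np.sum(np.abs(grad)) + 1e-12)  # L1 normalization
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv

# Toy loss J(x) = ||x - t||^2 with gradient 2(x - t); the ascent direction
# pushes x away from t until the eps bound binds.
t = np.zeros(4)
adv = mim_attack(np.full(4, 0.5), lambda z: 2 * (z - t))
```

With the toy gradient always positive, each component rises by α per step and is capped at x + ε, so the example saturates the perturbation budget.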

10.3 Adversarial Attack Against Image Preprocessing in the Gray-Box Setting

10.3.1 Stochastic-MIM

In this paper, we propose SMIM for gray-box settings by randomly initializing the momentum perturbation and projecting the perturbation onto the constraint set so as to maximize the loss of the model. The procedure of SMIM is shown in formula (10.4):

X_0^{\mathrm{adv}} = \mathrm{Rand}_{X,\alpha}\{X + \eta\},\ \eta \in (-\alpha, \alpha); \quad S_{N+1} = \mathit{momentum} \cdot S_N + \frac{\nabla_X J(\theta_F, X_N^{\mathrm{adv}}, Y)}{\|\nabla_X J(\theta_F, X_N^{\mathrm{adv}}, Y)\|_1}; \quad X_{N+1}^{\mathrm{adv}} = \mathrm{Clip}_{X,\epsilon}\{X_N^{\mathrm{adv}} + \alpha \cdot \mathrm{sign}(S_{N+1})\} \qquad (10.4)

where η is the initial perturbation and α is the degree of iterative perturbation for adversarial examples relative to clean examples. The initial perturbation value of the adversarial example begins at a random point of the L ∞ -bounded ball in the range of (−α, α). Then in the iterative procedures, the model accelerates the gradient descent


by accumulating the velocity vector in the gradient direction of the loss function. In addition, SMIM projects the perturbation into the range (−ε, ε) after each iteration, making the loss of the final iteration follow a concentrated distribution without outliers. Here, ε is the maximum perturbation. Similar to the momentum optimization method for training models, SMIM makes the update direction more stable and is able to escape from poor local maxima. The iterations after random initialization maximize the loss of the model, thereby generating more robust adversarial examples. These advantages give SMIM better attack ability in both white-box and black-box settings.
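The three ingredients of Eq. (10.4) — random start in (−α, α), momentum-accumulated L1-normalized gradient, and per-step projection onto the (−ε, ε) ball — can be sketched as follows (again with a hypothetical `grad_loss` oracle in place of the real model gradient):

```python
import numpy as np

def smim_attack(x, grad_loss, alpha=0.015, eps=0.1, momentum=1.0,
                n_iter=10, rng=None):
    """Sketch of Stochastic-MIM, Eq. (10.4): start from a random point in
    (-alpha, alpha) around x, accumulate the L1-normalized gradient with
    a momentum term, and project onto the (-eps, eps) L-infinity ball
    after every step."""
    if rng is None:
        rng = np.random.default_rng(0)
    x_adv = x + rng.uniform(-alpha, alpha, size=x.shape)  # stochastic init eta
    s = np.zeros_like(x)
    for _ in range(n_iter):
        grad = grad_loss(x_adv)
        s = momentum * s + grad / (np.sum(np.abs(grad)) + 1e-12)
        x_adv = np.clip(x_adv + alpha * np.sign(s), x - eps, x + eps)
    return x_adv

# Toy ascent direction: grad_loss(z) = z is positive here, so every step
# moves upward until the projection onto the eps-ball binds.
x0 = np.full(4, 0.5)
adv = smim_attack(x0, lambda z: z)
```

The per-iteration projection is what keeps the final perturbation within ε even though the random start plus ten α-steps would otherwise exceed it.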

10.3.2 Attack Against Gradient Masking in the Gray-Box Setting

Because most adversarial attack methods generate adversarial examples by calculating the gradients of models, an attack fails when attackers cannot calculate gradients correctly, i.e., when following the gradients does not successfully optimize the loss [1]. In order to calculate model gradients efficiently so as to maximize the classification loss, BPDA [1] was introduced by Athalye et al. in white-box settings to overcome the problem of gradient masking caused by defenses that rely on incorrect or nonexistent gradients. Specifically, given a pretrained classifier f and a preprocessor g satisfying g(x) ≈ x, the secured classifier is \hat{f}(x) = f(g(x)). Therefore, the approximate derivative of f(g(x)) at \hat{x} is

\nabla_x f(g(x))\big|_{x=\hat{x}} \approx \nabla_x f(x)\big|_{x=g(\hat{x})} \qquad (10.5)

Denoising defenses can be viewed as the first layer of the neural network, which performs preprocessing of inputs. When this preprocessing is differentiable, standard attacks can be utilized. When it is impossible to compute the gradient through the preprocessing, models perform forward propagation through the neural network as usual, but on the backward pass, attackers can replace the preprocessor with the identity function, because \nabla_x g(x) \approx \nabla_x x = 1.

Although in the white-box setting most image preprocessing defenses based on gradient masking can be skillfully circumvented by the above method, these defenses have not been proven invalid in the gray-box setting. Based on the transferability of adversarial examples [8], we put forward the following hypothesis: by using different adversarial attack methods with BPDA on substitute models and then feeding the generated adversarial examples into target models, we can attack image preprocessing defenses in the gray-box setting. The specific steps of this adversarial attack framework are as follows:

(1) Preprocess the adversarial input x_adv of the substitute model to get pre(x_adv);
(2) Forward-propagate pre(x_adv) through the network and compute its loss;


(3) Get the gradient \nabla_x J_{\theta,y}(\mathrm{pre}(x_{adv})) \approx \nabla_x J_{\theta,y}(x_{adv}) of the loss on the preprocessed image;
(4) Generate adversarial examples x_adv = x + η. Perform one-step or multi-step iterations of steps 1–4 until the stopping conditions are met;
(5) Feed x_adv into the target model.

Based on the above hypothesis, in order to effectively circumvent image preprocessing defenses based on gradient masking, we apply the SMIM proposed in Sect. 10.3.1 to BPDA in the gray-box setting. We compute the approximate derivatives of substitute models by performing the forward pass of SMIM normally and the backward pass through a differentiable approximation of the preprocessing function, thus avoiding gradient masking. The generated adversarial examples are then transferred to target models for the attack. The experimental results in Sect. 10.4 show the effectiveness of this method against image preprocessing defenses.
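The backward-pass substitution of steps 1–4 can be sketched in NumPy, using a hypothetical rounding quantizer as the non-differentiable preprocessor g (a stand-in for the paper's denoising defenses) and a toy gradient oracle in place of the real model:

```python
import numpy as np

def quantize(x, levels=8):
    """A non-differentiable preprocessor g(x) ≈ x whose true gradient is
    zero almost everywhere (a toy stand-in for denoising defenses)."""
    return np.round(x * levels) / levels

def bpda_step(x_adv, x, grad_loss, preprocess, alpha=0.05, eps=0.3):
    """One BPDA iteration: the forward pass goes through g, but on the
    backward pass g is replaced by the identity, i.e. the model gradient
    is simply evaluated at g(x_adv), as in Eq. (10.5)."""
    grad = grad_loss(preprocess(x_adv))   # grad of f at g(x_adv), not of f∘g
    return np.clip(x_adv + alpha * np.sign(grad), x - eps, x + eps)

x = np.full(4, 0.5)
x_adv = x.copy()
for _ in range(10):
    x_adv = bpda_step(x_adv, x, lambda z: 2 * z, quantize)
```

Even though differentiating through `quantize` would yield a zero gradient everywhere, the identity substitution still drives the example to the edge of the ε-ball.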

10.4 Experiment

10.4.1 Experimental Settings

Dataset: The ImageNet dataset published in ILSVRC, containing 1000 classes, is used in our experiments. We use the training set to train basic models and the test set for evaluation.

Models: We use Inception-v3, Inception-v4, InceptionResnet-v2, and Resnet-152 as substitute models and target models, respectively, in our experiments. Adversarial examples with pixel values in [0, 1] are generated from substitute models. After that, we apply defenses separately to the adversarial examples, feed them into the trained target models, and measure the top 1 accuracy.

Defenses: The image preprocessing defenses used in our experiments include the following: the median smoothing filter [3], with kernel size set to 7; the Gaussian smoothing filter [11], with σ = 2 determining the size of the smoothing window; the average smoothing filter [11], with kernel size set to 5; and JPEG [4], with the quality parameter set to 20%.
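Two of these smoothing defenses can be sketched in pure NumPy — simplified illustrations only (the paper uses kernel sizes 7 and 5; the toy below uses 3 on a 5×5 image):

```python
import numpy as np

def median_smooth(img, k=3):
    """Median smoothing with a k x k window (edges handled by padding);
    a simplified stand-in for the kernel-size-7 filter in the paper."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def average_smooth(img, k=3):
    """Average (mean) smoothing with a k x k window."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

# A single-pixel adversarial spike is wiped out by the median filter but
# only attenuated by the average filter.
img = np.zeros((5, 5))
img[2, 2] = 1.0
```

This illustrates why such filters mask gradients: the median is piecewise constant in each pixel, so its gradient carries no useful signal for an attacker.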

10.4.2 Experiments on Transferability of Adversarial Attacks

In order to verify the generalization of adversarial examples, we use Inception-v3, Inception-v4, InceptionResnet-v2, and Resnet-152, respectively, as substitute models and target models to conduct contrastive experiments with BIM, MIM, and SMIM in black-box settings. The experimental results are shown in Fig. 10.1. The labels at the bottom of each subgraph in Fig. 10.1 indicate the substitute models, and the differently colored lines show the attack effect on the remaining models. From Fig. 10.1, we


Fig. 10.1 (a) Attacking Inc-v3; (b) Attacking Inc-v4; (c) Attacking IncRes-v2; (d) Attacking Res-152. Among the Inception-v3, Inception-v4, InceptionResnet-v2, and Resnet-152 models, the transfer attack success rate of BIM (dotted line), MIM (dashed line), and SMIM (solid line) as perturbation increases

can see that SMIM retains a powerful white-box attack capability like MIM, attacking a white-box model with a nearly 100% success rate. In the black-box setting, the transfer attack success rate of each method is enhanced as perturbation rises. By initializing the momentum perturbation randomly, our SMIM performs better than MIM and BIM in most black-box settings. The experimental results further demonstrate that adversarial examples generated by SMIM have better transfer attack capability.


10.4.3 Contrastive Experiments on Adversarial Attacks Against Image Preprocessing Defenses

In the gray-box setting, in order to compare the robustness of each attack method against image preprocessing defenses, we experiment with several attacks, including BIM, MIM, and the SMIM proposed in this paper. The effectiveness of these attack methods, with and without BPDA, is evaluated in both the presence and absence of image preprocessing defenses. The experimental results of using Inception-v4 as the substitute model and InceptionResnet-v2 as the target model are shown in Table 10.1. Our attacks are non-targeted attacks under the L∞ norm bound with 10 iterations; the iterative perturbation and maximum perturbation are 0.015 and 0.1, respectively, and the decay rate of the momentum iteration method is 1.

From the first row of results in Table 10.1, we can see that in the black-box setting the basic attack effect of SMIM is the best, reaching an 80.4% attack success rate, owing to its better transferability. The remaining rows show the attack success rate of each attack method with or without BPDA under each defense method, given in the first and second lines of each defense in Table 10.1. The results indicate that defenses based on image preprocessing can reduce the attack success rate to some extent. Although these defenses are non-differentiable and inherently random, which makes it difficult for an adversary to get around them, the defensive effect is not very pronounced in black-box settings: the attack success rates of MIM and SMIM remain above 50%. Nevertheless, attack methods using BPDA can significantly improve attack capability in gray-box settings. Among them, SMIM combined with BPDA has the strongest attack robustness, achieving an 86.5% attack success rate under the Gaussian smoothing filter defense. This is mainly due to the approximately differentiable nature of BPDA and the stronger transferability of SMIM.

Table 10.1 The top 1 attack success rate of BIM, MIM, and SMIM in different settings when using Inception-v4 as substitute model and InceptionResnet-v2 as target model

Defense      Attack setting     Clean    BIM (%)   MIM (%)   SMIM (%)
No defense   No BPDA            12.5%    62.3      77.1      80.4
JPEG         No BPDA            21.0%    45.1      66.3      68.9
             BPDA(JPEG)         –        65.7      78.7      84.1
Gaussian     No BPDA            33.8%    41.7      57.0      59.8
             BPDA(Gaussian)     –        69.6      83.6      86.5
Median       No BPDA            24.6%    33.6      56.3      63.5
             BPDA(Median)       –        54.7      65.7      72.7
Average      No BPDA            25.1%    39.8      57.1      59.0
             BPDA(Average)      –        64.1      80.0      81.9

Fig. 10.2 (a) BIM; (b) MIM; (c) SMIM. The attack success rates of BIM, MIM, and SMIM with and without BPDA under several defenses as the perturbation degree increases


10.4.4 Experiments with Different Degrees of Perturbations

Adversarial examples generated with different perturbations pose different threats to models. In black-box settings, we use Inception-v4 as the substitute model and InceptionResnet-v2 as the target model, and evaluate attack effects with perturbations ε from 0.02 to 0.11 based on BIM, MIM, and SMIM, for normal attacks and attacks under defenses. In addition, attacks combined with BPDA using the above parameters are also evaluated in the gray-box setting. The evaluations for different perturbations are shown in Fig. 10.2.

Figure 10.2 intuitively shows that the success rate of each attack method increases as the perturbation is strengthened. The attack effects based on momentum iteration are clearly better than those based on basic iteration. In addition, under each defense method, attack methods combined with BPDA have stronger robustness against image preprocessing defenses based on gradient masking, and can even be stronger than the basic attack. This supports the hypothesis proposed in Sect. 10.3.2 that, in the gray-box setting, the adversarial examples generated by different attacks combined with BPDA have strong transferability and attack ability. The methods based on momentum iteration combined with BPDA achieve a transfer attack success rate of more than 70% when the perturbation is 0.07, a level that does not affect normal human recognition.

10.5 Conclusion and Discussions

In this paper, aiming at image preprocessing defenses based on gradient masking, we explore the robustness of iterative adversarial attack methods combined with BPDA under the gray-box threat model. On this basis, we propose SMIM. Experiments show that SMIM achieves a balance between attack capability and transferability, presenting strong robustness in both white-box and black-box attacks, while effectively avoiding the gradient masking problem unintentionally caused by image preprocessing when combined with BPDA.

References
1. Athalye, A., Carlini, N., Wagner, D.: Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples. arXiv:1802.00420 (2018)
2. Akhtar, N., Mian, A.: Threat of adversarial attacks on deep learning in computer vision: a survey. IEEE Access 6, 14410–14430 (2018)
3. Guo, C., Rana, M., Cisse, M., van der Maaten, L.: Countering adversarial images using input transformations. arXiv:1711.00117 (2017)
4. Shaham, U., Garritano, J., Yamada, Y., Weinberger, E., Cloninger, A., Cheng, X., Stanton, K., Kluger, Y.: Defending against adversarial images using basis functions transformations. arXiv:1803.10840 (2018)
5. Chen, C.-M., Wang, K.-H., Wu, T.-Y., Wang, E.K.: On the security of a three-party authenticated key agreement protocol based on chaotic maps. Data Sci. Pattern Recognit. 1(2), 1–10 (2017)
6. Dong, Y., Liao, F., Pang, T., Su, H., Zhu, J., Hu, X., Li, J.: Boosting adversarial attacks with momentum. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9185–9193 (2018)
7. Chen, C.-M., Linlin, X., Tsu-Yang, W., Li, C.-R.: On the security of a chaotic maps-based three-party authenticated key agreement protocol. J. Netw. Intell. 1(2), 61–65 (2016)
8. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks. arXiv:1312.6199 (2013)
9. Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial examples in the physical world. arXiv:1607.02533 (2016)
10. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv:1412.6572 (2014)
11. Xu, W., Evans, D., Qi, Y.: Feature squeezing: detecting adversarial examples in deep neural networks. arXiv:1704.01155 (2017)

Chapter 11

A Collusion Attack on Identity-Based Public Auditing Scheme via Blockchain Xing Zou, Xiaoting Deng, Tsu-Yang Wu and Chien-Ming Chen

Abstract With cloud storage systems, users can access and update outsourced data remotely. Owing to the growing importance of data integrity, a great deal of attention has been focused on public auditing schemes. An identity-based public auditing (IBPA) scheme allows a third-party auditor (TPA) to verify the integrity of the outsourced data on behalf of users. However, malicious TPAs might collude with cloud servers and forge audit data to deceive users. In this paper, we first review the architecture of a traditional IBPA scheme and of a novel IBPA scheme that tries to solve the above problem via blockchain. Then, we analyze two main limitations of this newly proposed public auditing scheme against malicious auditors and illustrate our collusion attack on this IBPA scheme. Finally, we offer some suggestions to overcome the disadvantages and help to create a more trustworthy blockchain-based public auditing scheme. Keywords Cloud storage · Identity-based public auditing · Collusion attack

11.1 Introduction

Cloud storage has attracted extensive attention from both academic and industrial communities for its huge advantages in cost, performance, and management [1–3]. As a result, more and more users choose to migrate their data to cloud storage that is managed and maintained by professional cloud service providers (CSPs) [4, 5]. However, the outsourced data may be corrupted or even lost, because cloud servers may suffer from external attacks and internal hardware or software failures [6].

X. Zou · X. Deng · C.-M. Chen (B) Harbin Institute of Technology (Shenzhen), Shenzhen, China e-mail: [email protected] T.-Y. Wu Fujian Provincial Key Laboratory of Big Data Mining and Applications, Fujian University of Technology, Fuzhou, China and National Demonstration Center for Experimental Electronic Information and Electrical Technology Education, Fujian University of Technology, Fuzhou, China © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_11


In addition, a cloud server is an independent and untrusted administrative entity that may delete data that users have never accessed in order to save storage space, or hide data loss events to maintain its reputation [7]. Unfortunately, most users delete their locally stored backups after uploading their data to a cloud server. Because of these factors, it is important for users to audit the integrity of their outsourced data on a regular basis.

To ensure the integrity of outsourced data, the cloud storage auditing technique, also called cloud data auditing, is widely employed [8–10]. Generally, there are two models for cloud data auditing, i.e., private auditing [11, 12] and public auditing [7, 13]. Public auditing can provide more dependable auditing results and reduce the users' burden [14, 15], so it has been popularly employed for cloud data auditing [16]. So far, many public auditing schemes have been proposed, such as those presented in [17, 18]. These existing auditing protocols are mainly based on a public key infrastructure (PKI), which often suffers from the problem of key management [19–22]. To eliminate the need for certificate management in public auditing schemes, identity-based public auditing (IBPA) schemes [23, 24] have been proposed.

With the popularity of blockchain [25–27], Xue et al. [28] proposed an IBPA scheme against malicious auditors via blockchain. In their scheme, the nonces in a blockchain are employed to construct unpredictable and easily verified challenge messages. However, in this paper, we find that those challenge messages are generated and controlled by TPAs, so the random processes are not transparent to the users. This means that a malicious TPA and the cloud server can still collude to forge auditing results. For this reason, we further describe some suggestions to overcome these disadvantages and help to create a more trustworthy blockchain-based public auditing scheme.

11.2 IBPA Schemes

11.2.1 Traditional IBPA Scheme

In this section, we briefly review the traditional identity-based public auditing scheme. The system model of IBPA involves four entities: a private key generator (PKG), a user, a cloud server (CS), and a TPA (see Fig. 11.1). The PKG is fully trusted; it sets the system parameters and generates private keys for each user. The CS is managed by a cloud service provider and provides users with cloud storage services. The CS often has sufficient storage space and powerful computational capabilities. However, the cloud service provider may be dishonest and may hide data corruption or loss. The user is an entity with large amounts of data and limited communication and computation resources. The user uploads local data to the CS, as permitted by the (paid) cloud storage service. The TPA is delegated by the user to audit the integrity of the outsourced data. The TPA has the expertise and


Fig. 11.1 The system model of a traditional identity-based public auditing scheme

ability to complete the auditing task but may not do so in full accordance with the user's audit requirements. After the user's local data have been uploaded, the user usually deletes the local copy to save storage space. To ensure the integrity of outsourced data, the user delegates the TPA to regularly audit the data and checks the TPA's auditing results over a longer period of time. However, a malicious TPA may perform fewer audits than agreed upon with the user in order to reduce auditing costs. Even worse, the TPA and the CS may collude to forge audit data to deceive the user.

11.2.2 Xue’s IBPA Scheme Xue et al. [28] proposed an IBPA scheme for ensuring data integrity in cloud storage systems against malicious auditors via blockchain (see Fig. 11.2). Based on traditional IBPA scheme, they adopted blockchain technology to select random challenge messages and store log files and auditing results. Since the blockchain is inherently verifiable and resistant to modification, the records can ensure the traceability of the TPA’s auditing services. The concrete process executes as follows: 1. Setup phase. The file F is preprocessed into n blocks, F = m1 m2  · · · mn where m j ∈ Z p , j ∈ [1, n] and p is a large prime. The PKG generates  the system parameters, a master secret key s, and private  keys s Pu,0 , s Pu,1 for users.


Fig. 11.2 The system model of Xue’s identity-based public auditing scheme via blockchain

User U signs the file blocks m_j and stores the data file blocks and the authentication tag set {S_j, T_j}_{j∈[1,n]} in the cloud (Step 1).

2. Audit phase.
(1) Challen. The TPA generates a challenge message. The TPA obtains the nonce in the corresponding block based on the time t that is specified by U. The TPA chooses a random l-element subset J = {a_1, a_2, ..., a_l} of the set [1, n] based on the nonce and the security parameter k. Then it chooses a random v_j ∈ Z_p for each j ∈ J and generates a challenge message D = {(j, v_j)}_{j∈J} (Step 3). The TPA sends the challenge message to the CS (Step 4).
(2) Proofgen. The CS generates proof information.


The CS chooses a random number x ∈ Z_p, computes the proof information C, and sends it to the TPA (Step 5).

\mu = x^{-1}\left(\sum_{j=a_1}^{a_l} m_j v_j + h(y)\right) \in Z_p, \qquad (11.1)

y = x P_{u,1} \in G_1, \qquad (11.2)

(S, T) = \left(\sum_{j=a_1}^{a_l} v_j S_j, \ \sum_{j=a_1}^{a_l} v_j T_j\right), \qquad (11.3)

C = \{S, T, \mu, y\}. \qquad (11.4)
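The Z_p part of Eq. (11.1) can be illustrated with toy numbers — a small prime stands in for the real group order, and h and y are modeled with SHA-256 and an opaque byte string, since the pairing-group elements S, T, y are out of scope for this sketch:

```python
import hashlib

# Toy parameters; in the real scheme p is the order of the pairing groups.
p = 2**31 - 1
blocks = {3: 111, 7: 222, 9: 333}   # challenged file blocks m_j
v = {3: 5, 7: 11, 9: 17}            # challenge values v_j from D

x = 123456                          # CS's random x in Z_p
h_y = int.from_bytes(hashlib.sha256(b"y = x * P_u1").digest(), "big") % p
mu = pow(x, -1, p) * (sum(blocks[j] * v[j] for j in v) + h_y) % p

# Multiplying back by x and removing h(y) recovers the challenged linear
# combination of blocks, which is what the verifier's pairing check ties
# to the tags (S, T).
assert (x * mu - h_y) % p == sum(blocks[j] * v[j] for j in v) % p
```

Note that `pow(x, -1, p)` (Python 3.8+) computes the modular inverse x^{-1} mod p.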

(3) Audit. The TPA audits the integrity of the challenged blocks. The TPA verifies a bilinear pairing equation and stores the auditing results. The TPA stores log files and auditing results in the blockchain, which can be checked by the user (Steps 6 and 7).
(4) Checklog. U checks the validity of the log file that is recorded in the public blockchain.

During the audit phase, the TPA first obtains the nonce in the corresponding block and chooses a random subset J with l elements. Then, for each item j ∈ J, it chooses a corresponding random value v_j ∈ Z_p. The generated challenge message D = {(j, v_j)}_{j∈J} is a random tuple subset. Using those v_j values and the original data file blocks, the CS generates the proof information.
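One illustrative way to derive the subset J and the values v_j deterministically from a block nonce is to use a hash function as a PRF — this is a sketch of the idea only, since the exact derivation in Xue's scheme is not reproduced here:

```python
import hashlib

def gen_challenge(nonce, n, l, p):
    """Illustrative Challen step: derive an l-element subset J of [1, n]
    and values v_j in Z_p pseudorandomly from a block nonce, using
    SHA-256 as a PRF.  (The exact derivation in Xue's scheme may differ;
    this only shows that the choice is deterministic and re-checkable.)"""
    def prf(tag, i):
        digest = hashlib.sha256(f"{nonce}|{tag}|{i}".encode()).digest()
        return int.from_bytes(digest, "big")
    J, i = [], 0
    while len(J) < l:                  # draw l distinct indices in [1, n]
        j = prf("idx", i) % n + 1
        if j not in J:
            J.append(j)
        i += 1
    return [(j, prf("val", j) % p) for j in J]   # challenge D = {(j, v_j)}

# e.g. seeding with the nonce of the Bitcoin genesis block
D = gen_challenge(nonce=2083236893, n=100, l=5, p=2**31 - 1)
```

Because the derivation is a pure function of the public nonce, anyone can recompute D and check that the TPA used the nonce honestly — which is exactly the property the collusion attack in Sect. 11.3 exploits when the derivation is instead left to the TPA's discretion.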

11.3 Limitations Xue’s scheme selects challenge messages based on the nonce of the public blockchain of Bitcoin and maintains good efficiency. However, limitations also exist in this IBPA scheme. The limitations are as follows: Centralization. The TPA is required to compute some basic materials such as challenge messages and proof verifications. Once the TPA is compromised because of hardware or software failures, some incorrect auditing results may be computed and recorded in the blockchain, which may give users incorrect results. In other words, it might present a single point of failure. Furthermore, the TPA may be curious about the auditing histories of the user and do some analysis based on the user’s behaviors. In other words, if the TPA records the users’ auditing requests and other features, the privacy of this IBPA scheme will be compromised. Collusion. In Xue’s scheme, the security of the auditing process is based on the assumption that the TPA generates random parameters for challenge message correctly. To be more specific, the TPA should choose the random subset J based on the nonce and generate corresponding random value v j for each item j ∈ J .


However, this random process is handled and manipulated by the TPA, and is therefore neither transparent nor credible. If the TPA can negotiate with the CS about how the v_j values are selected, the randomness of the challenge messages is compromised. In other words, the CS can compute all m_j v_j values in advance and generate correct proof information without the original data blocks m_j. Therefore, a malicious TPA and the CS can still collude to forge audit data to deceive the user. The concrete collusion attack process executes as follows:

1. Setup phase. After the CS receives all file blocks m_j and the authentication tag set {S_j, T_j}_{j∈[1,n]}, it communicates with the TPA and calculates a random n-element subset φ = {(j, u_j)}_{j∈[1,n]} based on U's identity and the current time t_0. The TPA and the CS then both hold the same n-element subset φ and agree on how to choose a function f based on the subset J. The CS calculates all m_j u_j values and deletes some file blocks m_j.

2. Audit phase.
(1) Challen. The TPA generates a random l-element subset J = {a_1, a_2, ..., a_l} of the set [1, n]. The TPA chooses a random w_j for each j ∈ J and calculates v_j = f(u_j, w_j). The TPA generates the challenge message D = {(j, v_j)}_{j∈J}, where the function f should satisfy the following condition:

m_j v_j = m_j f(u_j, w_j) = f(m_j u_j, w_j) \qquad (11.5)

The TPA sends the challenge message D to the CS.
(2) Proofgen. The CS generates proof information. After the CS receives the challenge message D, it chooses a random number x ∈ Z_p and computes (S, T) and y normally. The CS uses the m_j u_j values, the function f, and the w_j values to calculate the μ value:

\mu = x^{-1}\left(\sum_{j=a_1}^{a_l} m_j v_j + h(y)\right) = x^{-1}\left(\sum_{j=a_1}^{a_l} m_j f(u_j, w_j) + h(y)\right) = x^{-1}\left(\sum_{j=a_1}^{a_l} f(m_j u_j, w_j) + h(y)\right) \qquad (11.6)

In this way, the CS can generate correct proof information without using the original data file blocks m_j. A malicious TPA and the CS can therefore collude to forge auditing results and deceive the user.
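To make the role of condition (11.5) concrete, the attack can be sketched as a toy computation. The choice f(u, w) = u·w mod p, the prime p, and all numeric values below are illustrative assumptions, not part of Xue's actual scheme; the point is only that when f is linear in its first argument, the stored products m_j u_j suffice to reproduce Σ m_j v_j without the blocks m_j.

```python
# Toy sketch of the collusion of Sect. 11.3: with an agreed function
# f(u, w) = u * w (mod p), we have m * f(u, w) = f(m * u, w), so the CS
# can answer challenges from the products m_j * u_j alone.
import random

p = 2**61 - 1                        # illustrative prime modulus
f = lambda u, w: (u * w) % p         # agreed function, linear in u

m = [random.randrange(p) for _ in range(8)]   # original data blocks
u = [random.randrange(p) for _ in range(8)]   # subset phi agreed in setup

stored = [(mj * uj) % p for mj, uj in zip(m, u)]  # CS keeps only m_j * u_j
# ... the CS may now delete the blocks m_j ...

w = [random.randrange(p) for _ in range(8)]   # TPA's per-challenge values
v = [f(uj, wj) for uj, wj in zip(u, w)]       # challenge v_j = f(u_j, w_j)

honest = sum(mj * vj for mj, vj in zip(m, v)) % p        # needs the m_j
forged = sum(f(s, wj) for s, wj in zip(stored, w)) % p   # needs only m_j * u_j
assert honest == forged   # the proof verifies without the original blocks
```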

11 A Collusion Attack on Identity-Based Public Auditing Scheme …


11.4 Discussion

In this section, we propose some solutions to improve the security of Xue's blockchain-based IBPA scheme.

Centralization. In most public auditing schemes, the centralized TPA is assumed to be honest and reliable, which is a strong assumption in reality. In addition, a single point of failure can compromise the availability of the TPA. To solve this problem, the TPA can be replaced by decentralized entities such as the blockchain network.

Collusion. We propose a collusion attack on Xue's IBPA scheme in Sect. 11.3. To prevent this kind of attack, the process of creating random challenge messages should be transparent and formally regulated, which could be implemented with smart contracts [29] on the blockchain in the future.

11.5 Conclusion

Security and privacy have been hot topics in recent years [30–34]. In this paper, we propose a collusion attack on a recently proposed blockchain-based IBPA scheme. Our analysis shows that a malicious auditor and the cloud server can still collude to forge audit results and deceive the user. In addition, we offer some suggestions to improve the security of the scheme. In future work, we will further explore more secure and robust public auditing mechanisms against malicious TPAs based on blockchain technologies.

Acknowledgements The work of Chien-Ming Chen was supported in part by Shenzhen Technical Project under Grant number JCYJ20170307151750788 and in part by Shenzhen Technical Project under Grant number KQJSCX20170327161755.

References

1. Wu, T.Y., Chen, C.M., Sun, X., Lin, C.W.: A countermeasure to SQL injection attack for cloud environment. Wireless Pers. Commun. 96(4), 406–418 (2017)
2. He, B.Z., Chen, C.M., Wu, T.Y., Sun, H.M.: An efficient solution for hierarchical access control problem in cloud environment. Math. Probl. Eng. (2014)
3. Xiong, H., Wang, Y., Li, W., Chen, C.M.: Flexible, efficient, and secure access delegation in cloud computing. ACM Trans. Manag. Inf. Syst. 10(1) (2019)
4. Chen, X., Li, J., Weng, J., Ma, J., Lou, W.: Verifiable computation over large database with incremental updates. IEEE Trans. Comput. 65, 3184–3195 (2016)
5. Liu, C., Yang, C., Zhang, X., Chen, J.: External integrity verification for outsourced big data in cloud and IoT: a big picture. Futur. Gener. Comput. Syst. 49, 58–67 (2015)
6. Ni, J., Yu, Y., Mu, Y., Xia, Q.: On the security of an efficient dynamic auditing protocol in cloud storage. IEEE Trans. Parallel Distrib. Syst. 25, 2760–2761 (2014)


7. Ateniese, G., Burns, R., Curtmola, R., Herring, J., Kissner, L., Peterson, Z., Song, D.: Provable data possession at untrusted stores. In: Proceedings of the 14th ACM Conference on Computer and Communications Security–CCS'07 (2007)
8. Chen, X., Li, J., Huang, X., Ma, J., Lou, W.: New publicly verifiable databases with efficient updates. IEEE Trans. Dependable Secure Comput. 12, 546–556 (2015)
9. Kolhar, M., Abu-Alhaj, M., Abd El-atty, S.: Cloud data auditing techniques with a focus on privacy and security. IEEE Secur. Priv. 15, 42–51 (2017)
10. Wu, T.Y., Lin, Y., Wang, K.H., Chen, C.M., Pan, J.S.: Comments on a privacy preserving public auditing mechanism for shared cloud data. In: Proceedings of the Multidisciplinary International Social Networks Conference (2017)
11. Erway, C., Küpçü, A., Papamanthou, C., Tamassia, R.: Dynamic provable data possession. In: Proceedings of the 16th ACM Conference on Computer and Communications Security–CCS'09 (2009)
12. Sebe, F., Domingo-Ferrer, J., Martinez-Balleste, A., Deswarte, Y., Quisquater, J.: Efficient remote data possession checking in critical information infrastructures. IEEE Trans. Knowl. Data Eng. 20, 1034–1038 (2008)
13. Tian, H., Chen, Z., Chang, C., Huang, Y., Wang, T., Huang, Z., Cai, Y., Chen, Y.: Public audit for operation behavior logs with error locating in cloud storage. Soft Comput. (2018)
14. Tian, H., Chen, Y., Chang, C., Jiang, H., Huang, Y., Chen, Y., Liu, J.: Dynamic-Hash-Table based public auditing for secure cloud storage. IEEE Trans. Serv. Comput. 10, 701–714 (2017)
15. Tian, H., Chen, Z., Chang, C., Kuribayashi, M., Huang, Y., Cai, Y., Chen, Y., Wang, T.: Enabling public auditability for operation behaviors in cloud storage. Soft Comput. 21, 2175–2187 (2017)
16. Kolhar, M., Abu-Alhaj, M., Abd El-atty, S.: Cloud data auditing techniques with a focus on privacy and security. IEEE Secur. Priv. 15, 42–51 (2017)
17. Shen, J., Shen, J., Chen, X., Huang, X., Susilo, W.: An efficient public auditing protocol with novel dynamic structure for cloud data. IEEE Trans. Inf. Forensics Secur. 12, 2402–2415 (2017)
18. Zhang, Y., Xu, C., Liang, X., Li, H., Mu, Y., Zhang, X.: Efficient public verification of data integrity for cloud storage systems from indistinguishability obfuscation. IEEE Trans. Inf. Forensics Secur. 12, 676–688 (2017)
19. Chen, C.M., Xiang, B., Liu, Y., Wang, K.H.: A secure authentication protocol for internet of vehicles. IEEE Access 7(1), 12047–12057 (2019)
20. Wang, K.H., Chen, C.M., Fang, W., Wu, T.Y.: On the security of a new ultra-lightweight authentication protocol in IoT environment for RFID tags. J. Supercomput. 74(1), 65–70 (2018)
21. Chen, C.M., Xiang, B., Wang, K.H., Yeh, K.H., Wu, T.Y.: A robust mutual authentication with a key agreement scheme for session initiation protocol. Appl. Sci. 8(10) (2018)
22. Chen, C.M., Huang, Y., Wang, E.K., Wu, T.Y.: Improvement of a mutual authentication protocol with anonymity for roaming service in wireless communications. Data Sci. Pattern Recognit. 2(1), 15–24 (2018)
23. Wang, Y., Wu, Q., Qin, B., Shi, W., Deng, R., Hu, J.: Identity-based data outsourcing with comprehensive auditing in clouds. IEEE Trans. Inf. Forensics Secur. 12, 940–952 (2017)
24. Wang, H., He, D., Tang, S.: Identity-based proxy-oriented data uploading and remote data integrity checking in public cloud. IEEE Trans. Inf. Forensics Secur. 11, 1165–1176 (2016)
25. Zheng, Z., Xie, S., Dai, H., Chen, X., Wang, H.: An overview of blockchain technology: architecture, consensus, and future trends. In: 2017 IEEE International Congress on Big Data (BigData Congress) (2017)
26. Hsiao, J.H., Tso, R., Chen, C.M., Wu, M.E.: Decentralized E-voting systems based on the blockchain technology. In: Advances in Computer Science and Ubiquitous Computing, CSA (2017)
27. Yeh, K.H., Su, C., Hou, J.L., Chiu, W., Chen, C.M.: A robust mobile payment scheme with smart contract-based transaction repository. IEEE Access 59394–59404 (2018)


28. Xue, J., Xu, C., Zhao, J., Ma, J.: Identity-based public auditing for cloud storage systems against malicious auditors via blockchain. Sci. China Inf. Sci. 62 (2019)
29. Catchlove, P.: Smart contracts: a new era of contract use. SSRN Electron. J. (2017)
30. Chen, C.M., Xu, L., Wu, T.Y., Li, C.R.: On the security of a chaotic maps-based three-party authenticated key agreement protocol. J. Netw. Intell. 1(2) (2016)
31. Li, C.T., Wu, T.Y., Chen, C.L., Lee, C.C., Chen, C.M.: An efficient user authentication and user anonymity scheme with provably security for IoT-based medical care system. Sensors 17 (2017)
32. Wu, T.Y., Chen, C.M., Wang, K.H., Meng, C., Wang, E.K.: A provably secure certificateless public key encryption with keyword search. J. Chin. Inst. Eng. (2019)
33. Chen, C.M., Wang, K.H., Wu, T.Y., Wang, E.K.: On the security of a three-party authenticated key agreement protocol based on chaotic maps. Data Sci. Pattern Recognit. 1(2), 1–10 (2017)
34. Chen, C.M., Xiang, B., Wu, T.Y., Wang, K.H.: An anonymous mutual authenticated key agreement scheme for wearable sensors in wireless body area networks. Appl. Sci. (2018)

Chapter 12

Research on a Color Image Encryption Algorithm Based on 2D-Logistic

Xin Huang and Qun Ding

Abstract With the development of network and communication technology, image encryption has become a necessary topic in information security research. In this paper, a 2D-logistic chaotic-sequence image encryption algorithm is adopted, which encrypts images easily and quickly and secures the encryption result through the excellent properties of chaotic sequences. Before the encryption algorithm is applied, the image must be preprocessed: because a color image is composed of three primary-color matrices, it is divided into R, G, and B color components, and each component is a matrix of pixel values in the range 0 to 255. Common color image encryption algorithms fail to fully consider the internal relations between the RGB color components and have weak resistance to statistical analysis. To further enhance the degree of image scrambling and the encryption security, a new color image encryption algorithm based on chaotic scrambling is proposed.

Keywords Color image · 2D-logistic chaos · Encryption algorithm

12.1 Introduction

With the continuous progress of science and technology, the text transmitted on the Internet is gradually being replaced by intuitive images. However, this also raises information security concerns: image disclosure is not only a personal privacy problem but can also have a negative impact on society [1–10]. Image encryption technology is therefore of positive significance. Digital image encryption was put forward in 1988 by Fridrich [11], who proposed a scrambling algorithm to weaken the correlation between the pixels of the image and a diffusion algorithm to change the pixel values. This mechanism is still the most widely used structure for digital image encryption. Since the encryption and decryption of digital images require a large number of passwords, the requirement is the same as that of one-time pad encryption; one research topic left behind by the one-time pad is how to generate a large number of random numbers with excellent statistical characteristics. In [12, 13], the scrambling algorithm expands the two-dimensional Cat and Baker maps into three-dimensional maps, and the confusion algorithm changes the pixel values with a sequence generated by logistic mapping. Existing work on loosely coupled permutation and confusion encryption policies is studied in [11–15]. This paper proposes a color image encryption algorithm based on a two-dimensional chaotic map and random bit recombination. The algorithm first takes the RGB components as a whole to transform the color image into an extended gray image, and then uses the excellent pseudo-randomness and high complexity of the 2D-logistic map to scramble the pixel positions of the color image.

X. Huang · Q. Ding (B) Heilongjiang University, Xuefu 74, Harbin, China. e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2020. J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_12

12.2 2D-Logistic Chaos System

The logistic equation, also known as the insect-population (pest) model, is the most widely used model of nonlinear discrete chaotic dynamics. Although it is a very simple one-dimensional nonlinear equation, it exhibits essentially all the features of chaos and is therefore often used as a chaotic pseudo-random sequence generator in data and image encryption algorithms. It has the following characteristics: (1) it is very sensitive to initial conditions, (2) it is aperiodic, and (3) it has strange attractors. The first two characteristics, sensitive dependence on initial values and aperiodicity, are exactly the properties required of keys and key streams in cryptography. In this paper, the paired 2D-logistic map is defined as follows:

x_1(n+1) = μ_1 x_1(n)(1 − x_1(n)) + γ_1 x_2²(n)
x_2(n+1) = μ_2 x_2(n)(1 − x_2(n)) + γ_2 (x_1(n) + x_1(n) x_2(n))    (12.1)

This map has two quadratic terms to enhance its complexity. When 2.75 < μ_1 ≤ 3.4, 2.7 < μ_2 ≤ 3.45, 0.15 < γ_1 ≤ 0.21, and 0.13 < γ_2 ≤ 0.15, the map is chaotic, with x_1(n), x_2(n) ∈ (0, 1).
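A minimal iteration of the map in Eq. (12.1) can be sketched as follows; the parameter values are example choices inside the chaotic ranges quoted above, not values from the paper.

```python
# Iterating the 2D-logistic map of Eq. (12.1). Note the simultaneous
# update: both right-hand sides use the previous (x1, x2).
def logistic2d(x1, x2, n, mu1=3.0, mu2=3.0, g1=0.18, g2=0.14):
    seq1, seq2 = [], []
    for _ in range(n):
        x1, x2 = (mu1 * x1 * (1 - x1) + g1 * x2 * x2,
                  mu2 * x2 * (1 - x2) + g2 * (x1 + x1 * x2))
        seq1.append(x1)
        seq2.append(x2)
    return seq1, seq2

s1, s2 = logistic2d(0.3, 0.4, 1000)
assert len(s1) == len(s2) == 1000
assert all(0 < v < 1 for v in s1 + s2)   # orbits stay inside (0, 1)
```

The two returned sequences are the {x_1(i)}, {x_2(i)} key streams used in the encryption steps of Sect. 12.3.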

12.3 Image Encryption and Decryption Scheme

12.3.1 Algorithm Structure

An RGB digital color image can be seen as three layers of color-component matrices R, G, and B, each composed of the pixels of one layer. When a digital color image is encrypted and transmitted, the three-dimensional RGB data are first converted into three two-dimensional gray images R, G, and B, which are then encrypted by the gray-image encryption method, as shown in Fig. 12.1.

Fig. 12.1 Encryption algorithm structure: the plain image is split into R, G, B; transformed into R', G', B'; scrambled and diffused with the 2D-logistic map; and combined into the cipher image

The specific steps of the encryption algorithm are as follows:

1. To reduce the correlation between the RGB components, the image data are gray-transformed with an extended XOR operation and then scrambled with the 2D-logistic chaotic map. XORing the pixels of the RGB components as follows yields components R', G', B' of size M × N, with L = max(M, N):

R' = R ⊕ G
G' = G ⊕ B
B' = R ⊕ G ⊕ B    (12.2)

2. Choose the key parameters x_1, x_2, μ_1, μ_2, γ_1, γ_2 within the parameter ranges of the 2D-logistic map.

3. Construct two real chaotic sequences {x_1(i)}, {x_2(i)}, i = 1, 2, ..., MN, of length M × N according to Eq. (12.1).

4. Let r_1, r_2, r_3, r_4 of the chaotic system be four 8-bit random integers with values in [0, 255]. Four matrices of size M × N are generated using the two sequences, so that

X(i, j) = floor(((r_1+1)/(r_1+r_3+2) · x_1((i−1)N + j) + (r_3+1)/(r_1+r_3+2) · x_2((i−1)N + j)) × 10¹⁴) mod 256    (12.3)

Y(i, j) = floor(((r_2+1)/(r_2+r_4+2) · x_1((i−1)N + j) + (r_4+1)/(r_2+r_4+2) · x_2((i−1)N + j)) × 10¹⁴) mod L    (12.4)

5. The three matrices R', G', B' are diffused according to Eqs. (12.5) and (12.6):

A(1, 1) = (P(1, 1) + X(1, 1) + r_1 + r_2) mod 256    (12.5)
A(i, j) = (P(i, j) + X(i, j) + A(i, j−1)) mod 256    (12.6)
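Steps 1 and 5 above can be sketched in code. This is a simplified illustration only: the key-stream matrix here is a random stand-in for the chaotic matrix X of Eq. (12.3), and the chaining rule used when moving to a new row is an assumption, since the text leaves A(i, j−1) undefined for the first column.

```python
# Sketch of the XOR transform of Eq. (12.2) and the diffusion of
# Eqs. (12.5)-(12.6) on small random component matrices.
import random

M, N = 4, 5
rnd = lambda: [[random.randrange(256) for _ in range(N)] for _ in range(M)]
R, G, B = rnd(), rnd(), rnd()

# Step 1: extended XOR gray transform (Eq. (12.2))
Rp = [[R[i][j] ^ G[i][j] for j in range(N)] for i in range(M)]
Gp = [[G[i][j] ^ B[i][j] for j in range(N)] for i in range(M)]
Bp = [[R[i][j] ^ G[i][j] ^ B[i][j] for j in range(N)] for i in range(M)]

# Step 5: diffusion (Eqs. (12.5)-(12.6)) over one component P
def diffuse(P, X, r1, r2):
    A = [[0] * N for _ in range(M)]
    prev = (P[0][0] + X[0][0] + r1 + r2) % 256   # Eq. (12.5)
    A[0][0] = prev
    for i in range(M):
        for j in range(N):
            if i == 0 and j == 0:
                continue
            prev = (P[i][j] + X[i][j] + prev) % 256   # Eq. (12.6), chained
            A[i][j] = prev
    return A

Xk = rnd()                      # stand-in for the chaotic matrix X
A = diffuse(Rp, Xk, r1=17, r2=200)
assert all(0 <= v < 256 for row in A for v in row)
```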


Here r_1 and r_2 are the constant parameters referenced for encryption, and they are also keys; i = 1, 2, ..., M and j = 1, 2, ..., N. The matrix P(i, j) stands for each of the three matrices R'G'B', and A(i, j) for the corresponding diffused matrix; applying the diffusion operations of Eqs. (12.5) and (12.6) yields three new matrices R''G''B''.

6. Let pixel A(i, j), i = 1, 2, ..., M, j = 1, 2, ..., N, and pixel A(m, n) switch positions, where m and n are computed as

m = sum(A(i, 1 to N) − A(i, j) + Y(i, j)) mod M    (12.7)
n = sum(A(1 to N, j) − A(i, j) + Y(i, j)) mod M    (12.8)

When m = i or n = j, the positions of A(i, j) and A(m, n) remain the same; otherwise, A(i, j) and A(m, n) switch positions. Let A(i, j) do cyclic displacements according to the value of the lower 3 bits of A(m, n), which means A(i, j) = A(i, j)

If fit(i) > pbest(i), then replace pbest(i) with fit(i); if fit(i) > Gbest(i), then replace Gbest(i) with fit(i).

Step 3: The updating equations for the particle velocity and position are

V_id^{t+1} = ω V_id^t + c_1 r_1 (p_id^t − X_id^t) + c_2 r_2 (p_gd^t − X_id^t)
x_id^{t+1} = X_id^t + V_id^{t+1}    (22.11)

where X_id^t and V_id^t are the position and velocity of particle i in dimension d at iteration t, p_id^t is the position coordinate of the individual (personal-best) extremum of particle i, and p_gd^t is the position coordinate of the global extremum of the swarm; r_1 and r_2 are random numbers in (0, 1), c_1 and c_2 are learning factors, and ω is an inertia factor.

Step 4: If the stopping condition is satisfied, stop; otherwise return to Step 2. The effect of tuning the parameters with PSO is shown in Fig. 22.6.
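The velocity and position update of Eq. (22.11) can be sketched as a minimal PSO loop. The toy 1-D objective, swarm size, and coefficient values below are illustrative assumptions, not the paper's actual LSTM hyper-parameter tuning setup.

```python
# Minimal particle swarm optimization implementing Eq. (22.11),
# minimizing a toy quadratic objective.
import random

def pso_min(f, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    xs = [random.uniform(-5, 5) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                          # personal-best positions
    gbest = min(xs, key=f)                 # global-best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vs[i] = (w * vs[i] + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))   # Eq. (22.11), velocity
            xs[i] += vs[i]                           # Eq. (22.11), position
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=f)
    return gbest

random.seed(0)
best = pso_min(lambda x: (x - 2.0) ** 2)   # minimum at x = 2
assert abs(best - 2.0) < 0.1
```

For minimization, "fit(i) > pbest(i)" in the text corresponds to the loss comparison `f(xs[i]) < f(pbest[i])` here.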

22.4 Result Analysis

In this paper, the oil chromatographic monitoring data of a 500 kV power transformer in Fuzhou in 2017 are used as the training set to predict the oil chromatographic monitoring data for the next 7 days. The evaluation functions are the root mean square error, the mean absolute error, and the absolute median error, defined as follows:


S. Zhang et al.

Table 22.1 Performance comparison

Method       RMSE   MAE    MedSE
LSTM         0.16   0.14   0.16
ARIMA        0.32   0.31   0.34
ED-LSTM      0.11   0.09   0.07
EDPSO-LSTM   0.05   0.04   0.04

RMSE (root mean square error):

RMSE(y, ŷ) = √((1/n_sample) Σ_{i=0}^{n_sample−1} (y_i − ŷ_i)²)    (22.12)

MAE (mean absolute error):

MAE(y, ŷ) = (1/n_sample) Σ_{i=0}^{n_sample−1} |y_i − ŷ_i|    (22.13)

MedSE (absolute median error):

MedSE(y, ŷ) = median(|y_1 − ŷ_1|, ..., |y_n − ŷ_n|)    (22.14)

where y and ŷ represent the true values and the predicted values (Table 22.1).
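Assuming the standard definitions above, the three measures can be computed with plain Python; the sample values below are made up for illustration.

```python
# The evaluation measures of Eqs. (22.12)-(22.14).
import math
from statistics import median

def rmse(y, yhat):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mae(y, yhat):
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def medse(y, yhat):
    return median(abs(a - b) for a, b in zip(y, yhat))

y    = [1.0, 2.0, 3.0, 4.0]
yhat = [1.1, 1.9, 3.4, 4.0]
print(round(rmse(y, yhat), 4))    # 0.2121
print(round(mae(y, yhat), 4))     # 0.15
print(round(medse(y, yhat), 4))   # 0.1
```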

22.5 Conclusion

After particle swarm optimization, the LSTM with an encoder–decoder link predicts the chromatographic time series of transformer oil better than the traditional LSTM and ARIMA. The prediction quality is affected by the number of hidden layers, the number of iterations, the window step size, and other parameters; tuning these parameters with particle swarm optimization effectively improves the accuracy of the algorithm.

Acknowledgements This research has been financed by the Program for New Century Excellent Talents in Fujian Province University (No. GY-Z18155).

References

1. Shiling, Z., Qiang, Y.: Time series prediction model of transformer oil chromatography based on WNN-GNN-SVM combination algorithms. Power Autom. Equip. 38(09), 155–161 (2018)
2. Jun, L., Lijin, Z., Liang, H., Huarong, Z., Xun, Z., Hui, P.: Gas prediction method in transformer oil based on improved fuzzy time series. J. Wuhan Univ. (Eng. Ed.) 50(05), 754–759 (2017)

22 Time Series Prediction of Transformer Oil Chromatography …


3. Hang, L., Youyuan, W., Xuanhong, L., League, B., Jiafeng, Q.: Prediction method of dissolved gas volume fraction in transformer oil based on multi-factor. High Voltage Technol. 44(04), 1114–1121 (2018)
4. Huaishuo, X., Qingquan, L., Yalin, S., Tongqiao, Z., Jiwei, Z.: Application of gray theory-variational modal decomposition and NSGA-II optimized support vector machine to gas prediction in transformer oil. Chin. J. Electr. Eng. 37(12), 3643–3653, 3694 (2017)
5. Shi, X., Chen, Z., Wang, H., et al.: Convolutional LSTM network: a machine learning approach for precipitation nowcasting (2015)
6. Salman, A.G., Heryadi, Y., Abdurahman, E., et al.: Weather forecasting using merged long short-term memory model (LSTM) and autoregressive integrated moving average (ARIMA) model. J. Comput. Sci. 14(7), 930–938 (2018)
7. Badrinarayanan, V., Kendall, A., Cipolla, R.: SegNet: a deep convolutional encoder-decoder architecture for scene segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 99, 1 (2017)
8. Miao, Y., Gowayyed, M., Metze, F.: EESEN: end-to-end speech recognition using deep RNN models and WFST-based decoding. Autom. Speech Recognit. Underst. (2016)
9. Masdari, M., Salehi, F., Jalali, M., et al.: A survey of PSO-based scheduling algorithms in cloud computing. J. Netw. Syst. Manag. 25(1), 122–158 (2017)
10. Kennedy, J., Eberhart, R.C.: The particle swarm: social adaptation in information-processing systems. New Ideas Optim. (1999)

Chapter 23

Parameter Estimation of Redundant System

Chao-Fan Xie, Lin Xu, Fuquan Zhang and Lu-Xiong Xu

Abstract Components are usually given redundant backups, and they are often connected into larger systems in series or in parallel. For large systems, formulas for estimating the redundancy order of the different types of components are very important; they are deduced by collecting data on overall system faults. The properties of the estimators are given in the probabilistic sense, namely asymptotic convergence in probability. The variance estimator is then further improved to obtain an asymptotically uniformly minimum-variance estimator, which ensures the practicability of the theoretical formulas. The study further uses computer simulation to estimate series and parallel systems whose components have redundant backups, and gives a concrete handling framework.

Keywords Redundant · Estimation · Computer simulation

C.-F. Xie Department of Electrical Engineering, I-Shou University, Kaohsiung, Taiwan, China e-mail: [email protected] C.-F. Xie · L.-X. Xu Electronic Information and Engineering Institute, Fuqing Branch of Fujian Normal University, Fuqing, Fuzhou, FuJian, China e-mail: [email protected] F. Zhang (B) Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, Minjiang University, Fuzhou 350121, China e-mail: [email protected] School of Software, Beijing Institute of Technology, Beijing 100081, China C.-F. Xie · L. Xu · L.-X. Xu Key Laboratory of Nondestructive Testing, Fuqing Branch of Fujian Normal University, Fuqing, Fuzhou, FuJian, China e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_23


23.1 Introduction

Reliability theory describes the probability of random events affecting a product's specified function. It is an interdisciplinary subject developed in the 1960s, established on the basis of probability theory and initially applied in practice to machine maintenance [1]. At present, reliability research focuses mainly on system reliability indices and on optimizing detection times based on those indices, so as to avoid faults and reduce the losses they cause [2–4]. Most of this research is qualitative analysis or numerical analysis seeking approximate solutions for practical systems [5–7]. Fan J. J., Yung K. et al. proposed numerical simulation algorithms based on intelligent algorithms for solving reliability problems [7–10]; these studies follow the ideas and methods of engineering research. For parallel systems of n elements with exponentially distributed lifetimes, Xie Chaofan and Xu Luxiong analyzed the reliability and its extreme values and obtained theoretical guidance: without economic constraints, system reliability reaches its minimum state when the failure rates of all components are equal, and the same conclusion holds under economic constraints when the failure rate has unit elasticity [11]. An asymptotic estimation formula for the redundancy order was obtained with a spider-web model: the asymptotic estimate of the redundancy order is x̂_{k+1} = x̂_k²/(qc²), and the failure rate is estimated as λ̂ = x̂_k/T_1 [12]. However, that work only gives the formula for a single redundant system; it does not give the properties of the estimator, and it only considers simple redundant components, so the estimation of mixed systems whose components are redundant backups still needs to be studied.

In this paper, computer simulation is used to produce random numbers drawn from the density function; at the same time, properties of the order statistics of the truncated distribution are used to obtain the density function, and finally the Euclidean distance is used to choose the system.
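The simulation idea described above can be sketched as follows, using the fact (stated in Lemma 2.1 below) that an order-n redundant component with independent exponential lifetimes has an Erlang lifetime, a sum of n exponentials. The parameter values are illustrative only.

```python
# Simulating the lifetime of an order-n redundant component as a sum of
# n exponential lifetimes, and checking the empirical mean against the
# Erlang mean n / lam.
import random

def redundant_lifetime(n, lam):
    return sum(random.expovariate(lam) for _ in range(n))

random.seed(1)
n, lam, trials = 3, 0.5, 20000
mean = sum(redundant_lifetime(n, lam) for _ in range(trials)) / trials
# Erlang(n, lam) has mean n / lam = 6.0; the empirical mean should be close.
assert abs(mean - n / lam) < 0.2
```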

23.2 Main Reliability Lemma

The following lemma assumes that the switch is absolutely reliable [12].

Lemma 2.1 Let X_1, X_2, ..., X_n be independent random variables, each exponentially distributed with the same parameter λ, and let X = X_1 + X_2 + ... + X_n be the lifetime of a redundant system of order n. Then the probability density function is

b_n(u) = λe^{−λu}(λu)^{n−1}/(n−1)!  for u ≥ 0,  and  b_n(u) = 0  for u < 0.

…, age > 66, married → yes    0.95
6    HbA1c ≥ 6.25, 54 ≤ age ≤ 66, married → yes    0.94

A high HDL cholesterol level (≥58.33 mg/dL) can help prevent diabetes. Rules 4, 5, and 6 show that a glycated hemoglobin above 6.25%, a waist circumference above 88.35 cm, and an age above 54 years lead to diabetes.

26.4 Conclusion

This research has presented a rule-mining experiment on KNHANES data for discovering risk factors that predict diabetes. We first used the complex-sampling-based feature selection approach to extract significant features. Then we used the correlation feature selection approach to extract relevant features with a high correlation ratio. Finally, we generated 6 rules with high confidence based on the discovered risk factors. According to the National Institute of Diabetes and Digestive and Kidney Diseases, a normal glycated hemoglobin level is 5.6% or below, while people with diabetes have a glycated hemoglobin level of 6.5% or above [11]. The risk factors for diabetes include age (≥45 years old or older), HDL cholesterol level (88 cm) [12]. During our experiments, we found the same risk factors for diabetes, and the rules we discovered are easy and useful for predicting diabetes.

26 An Efficient Association Rule Mining Method to Predict …


The number of diabetes patients is increasing very quickly, and the probability that a diabetes patient also has a complicating disease is very high. In further work, we will analyze diabetes together with other complicating diseases.

Acknowledgements This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (No. 2017R1A2B4010826), and by the KIAT (Korea Institute for Advancement of Technology) grant funded by the Korea Government (MOTIE: Ministry of Trade, Industry and Energy) (No. N0002429).

References

1. International Diabetes Federation. https://www.idf.org/aboutdiabetes/what-is-diabetes/factsfigures.html (2017)
2. Tan, P.N.: Introduction to Data Mining. Pearson Education India (2018)
3. Park, H.W., Batbaatar, E., Li, D., Ryu, K.H.: Risk factors rule mining in hypertension: Korean national health and nutrient examinations survey 2007–2014. In: 2016 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), pp. 1–4. IEEE, Thailand (2016)
4. Ryu, K.S., Park, H.W., Park, S.H., Ishag, I.M., Bae, J.H., Ryu, K.H.: The discovery of prognosis factors using association rule mining in acute myocardial infarction with ST-Segment elevation. In: International Conference on Information Technology in Bio-and Medical Informatics, pp. 49–55. Springer, Spain (2015)
5. Kim, H.S., Shin, A.M., Kim, M.K., Kim, Y.N.: Comorbidity study on type 2 diabetes mellitus using data mining. Korean J. Int. Med. 27(2), 197 (2012)
6. Korea Centers for Disease Control and Prevention: Korea national health & nutrition examination survey, 2013–2015. https://knhanes.cdc.go.kr/knhanes/main.do
7. Jirapech-Umpai, T., Aitken, S.: Feature selection and classification for microarray data analysis: evolutionary methods for identifying predictive genes. BMC Bioinf. 6(1), 148 (2005)
8. Gokulnath, C.B., Shantharajah, S.P.: An optimized feature selection based on genetic approach and support vector machine for heart disease. Clust. Comput. 1–11 (2018)
9. Doshi, M.: Correlation based feature selection (CFS) technique to predict student performance. Int. J. Comput. Netw. Commun. 6(3), 197 (2014)
10. Piatetsky-Shapiro, G.: Discovery, analysis, and presentation of strong rules. Knowl. Discov. Databases, 229–238 (1991)
11. National Institute of Diabetes and Digestive and Kidney Diseases. https://www.niddk.nih.gov/health-information/diabetes/overview/tests-diagnosis/a1c-test
12. National Institute of Diabetes and Digestive and Kidney Diseases. https://www.niddk.nih.gov/health-information/communication-programs/ndep/health-professionals/game-planpreventing-type-2-diabetes/prediabetes-screening-how-why/risk-factors-diabetes

Chapter 27

A Hybrid Credit Scoring Model Using Neural Networks and Logistic Regression

Lkhagvadorj Munkhdalai, Jong Yun Lee and Keun Ho Ryu

Abstract Credit scoring is one of the important issues in banking for controlling losses due to debtors who fail to meet their credit payments. Banks therefore aim to develop credit scoring models that accurately detect bad borrowers. In this study, we propose a hybrid credit scoring model using deep neural networks and logistic regression to improve predictive accuracy. The proposed hybrid model consists of two phases: in the first phase, we train several neural network models, and in the second phase, those models are merged by logistic regression. In the experimental part, our model outperformed the baseline models on three benchmark datasets in terms of H-measure, area under the curve (AUC), and accuracy.

Keywords Deep learning · Logistic regression · Credit scoring

27.1 Introduction

Over the past decades, machine learning-based credit scoring models have come to predict borrowers' credit scores accurately. In particular, neural network models achieve higher predictive accuracy for borrowers' creditworthiness, but there is still a need to improve the performance of credit scoring models. Therefore, this study proposes

L. Munkhdalai Database/Bioinformatics Laboratory, School of Electrical and Computer Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea e-mail: [email protected] J. Y. Lee · K. H. Ryu Department of Computer Science, Chungbuk National University, Cheongju 28644, Republic of Korea e-mail: [email protected] K. H. Ryu (B) Faculty of Information Technology, Ton Duc Thang University, Ho Chi Minh City 700000, Vietnam e-mail: [email protected]; [email protected] © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_27


a hybrid credit scoring model using deep neural networks and logistic regression in order to achieve better performance than the baseline models [1]. Our baseline models are a logistic model constructed by logistic regression and neural network models using the multilayer perceptron (MLP) [2, 3]. The logistic model is the most popular and powerful white-box method commonly used in credit scoring applications. Several properties make logistic regression a benchmark: good predictive accuracy, a high level of interpretability, and a modeling process that is faster, easier, and makes more sense [4]. Neural network approaches are becoming popular because they have dramatically improved the state of the art in visual object recognition, speech recognition, object detection, genomics, and energy consumption, as well as in financial domains [5]. More importantly, MLP is the most widely used neural network architecture in credit scoring applications [6–8]; thus, we select the MLP neural network as one of our baseline models.

Our hybrid credit scoring model consists of two training phases. In the first phase, several MLP deep neural networks are trained; in the second phase, logistic regression is used to merge the neural network models. In the experimental part, we apply the proposed model to three real-world credit scoring datasets and perform an extensive comparison between the neural network and logistic models for both simple and complex neural network architectures. The models' predictive performance on the test set is evaluated against three measures: the area under the curve (AUC), the H-measure, and accuracy [9]. Our proposed hybrid credit scoring model outperformed the baseline models.

This paper is organized as follows. Section 27.2 briefly presents logistic regression and the MLP neural network approach, and introduces the proposed hybrid model. Section 27.3 reports the performance of the neural network, logistic, and hybrid models. Finally, Sect. 27.4 concludes and discusses the general findings of this study.

27.2 Methods

27.2.1 Logistic Regression

Most previous studies compared their proposed methods to logistic regression in order to demonstrate their achievements [4, 7, 10, 11]. As a result, logistic regression can serve as a benchmark in credit scoring problems [4]. This method estimates the conditional probability of a borrower's default, and explains the relationship between clients' creditworthiness and the explanatory variables. Constructing a logistic regression model consists of estimating a linear combination of the explanatory variables X and relating it to the dependent variable Y. The logistic formula is displayed in (27.1).

27 A Hybrid Credit Scoring Model Using Neural Networks …

$$Y \approx P(X) = \frac{1}{1 + e^{-(\beta_0 + \beta X)}} \qquad (27.1)$$

This study used logistic regression to merge the neural network models, and we also used it as a comparison baseline for our proposed model. Advanced machine learning techniques are quickly gaining applications throughout the financial services industry, transforming the treatment of large and complex datasets, but there is a huge gap between the ability to build powerful predictive models and the ability to understand and manage those models [12].
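As a concrete illustration of Eq. (27.1), the sketch below fits a logistic regression on synthetic data whose labels are drawn from the logistic probability itself. This is a hypothetical stand-in, not the chapter's actual datasets or code, and all variable names are made up for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a credit dataset (the chapter's real datasets are
# Australian, Taiwan, and FICO): 500 borrowers, 3 explanatory variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
beta0, beta = -0.5, np.array([1.0, -2.0, 0.5])
# Default labels drawn from the logistic probability of Eq. (27.1).
p = 1.0 / (1.0 + np.exp(-(beta0 + X @ beta)))
y = (rng.uniform(size=500) < p).astype(int)

model = LogisticRegression().fit(X, y)
probs = model.predict_proba(X)[:, 1]   # estimated conditional default probabilities
print(model.intercept_, model.coef_)   # estimates of beta0 and beta
print(model.score(X, y))               # training accuracy
```

The fitted intercept and coefficients approximate beta0 and beta, which is what makes the model a white-box method: each coefficient is directly interpretable as the effect of one explanatory variable on the log-odds of default.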

27.2.2 Multilayer Perceptron

The MLP is a general artificial neural network architecture inspired by the function of the human brain; the basic concept of a single perceptron was introduced by Rosenblatt [3]. An MLP consists of three kinds of layers with distinct roles, called the input, hidden, and output layers. Each layer contains a given number of nodes with an activation function, and nodes in neighboring layers are linked by weights. The MLP obtains the optimal weights by optimizing an objective function with the backpropagation algorithm, constructing a model as

$$\arg\min_{\omega} \frac{1}{T} \sum_{t} l\big(f(\omega x + b);\, y\big) + \lambda \Omega(\omega) \qquad (27.2)$$

where ω denotes the vector of weights, b is the bias, f(·) is the activation function, x is the vector of independent variables, y is the dependent variable, and Ω(·) is a regularizer. Several hyperparameters need to be determined in advance for training, such as the number of hidden layers, the number of nodes per layer, the learning rate, the batch size, and the number of epochs. However, overfitting is still a challenging issue in neural networks: if the network is extremely large, the model becomes too complex and turns into an untrustworthy model. Fortunately, there are several algorithms to prevent overfitting. An early stopping algorithm is used to find the optimal number of epochs given the other hyperparameters, and it also helps to avoid overfitting [13]. Another method is dropout, which was proposed to address overfitting [14]. This method efficiently avoids overfitting by randomly dropping out nodes in the network; dropping nodes generates thinned networks during training, and at test time the results of the different thinned networks are combined using an approximate model-averaging procedure. Furthermore, the choice of optimization algorithm in a neural network has a significant impact on the training dynamics and task performance. Among the many techniques that improve gradient descent optimization, one of the best optimizers is Adam [15]. Adam computes adaptive learning rates for different parameters from estimates of the first and second moments of the

gradients, and realizes the benefits of both the Adaptive Gradient Algorithm (AdaGrad) and Root Mean Square Propagation (RMSProp). Accordingly, Adam is considered one of the best gradient descent optimization algorithms in the field of deep learning because it achieves good results quickly [16].

Fig. 27.1 Our proposed hybrid model for credit scoring
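The ingredients above (an MLP trained with Adam and early stopping) can be sketched as follows. The chapter's experiments used the Keras library from R; this Python sketch instead uses scikit-learn's MLPClassifier as a hypothetical stand-in, which supports the Adam solver and early stopping but has no dropout, and the data and parameter values here are illustrative, not the chapter's.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))
y = (X[:, 0] > 0).astype(int)   # toy binary target

# One hidden layer of 16 nodes; Adam optimizer; early stopping holds out
# 10% of the training data and stops once the validation score plateaus.
mlp = MLPClassifier(hidden_layer_sizes=(16,),
                    solver="adam",
                    learning_rate_init=0.001,
                    batch_size=32,
                    max_iter=1500,
                    early_stopping=True,
                    validation_fraction=0.1,
                    n_iter_no_change=10,
                    random_state=0)
mlp.fit(X, y)
print(mlp.score(X, y))
```

Early stopping chooses the effective number of epochs automatically, which is exactly the role the text assigns to it: one fewer hyperparameter to tune by hand and a guard against overfitting.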

27.2.3 Our Proposed Hybrid Model The overall architectural diagram of our proposed hybrid method for credit scoring is indicated in Fig. 27.1. Our hybrid credit scoring model consists of two main steps. In the first step, we train several neural network models using training set and evaluated by validation set. Then logistic regression is used for merging those neural network models based on validation set. Finally, the two types of models are constructed: the first model is neural network model trained on training set and other is a logit model trained on validation set to merge neural network models.

27.3 Results

27.3.1 Experimental Setup

In this section, our hybrid model is compared with the logistic and neural network models on three real-world credit datasets: two datasets from the UCI repository [17], namely Australian and Taiwan, and one dataset from FICO's explainable machine learning challenge [18], namely FICO. A summary of the three datasets is presented in Table 27.1.

Table 27.1 Summary of the three datasets

Dataset      Instances  Variables  Training  Validation  Test   Good/Bad
Australian         690         15       442         110    138  383/307
Taiwan          30,000         23    19,200       4,800  6,000  23,364/6,636
FICO             9,871         24     6,318       1,579  1,974  5,136/4,735

For the neural networks, we compared three architectures with different numbers of hidden layers, each with dropout. We set the learning rate to 0.001 and the maximum number of training epochs to 1500, and used mini-batches of 4, 32, and 32 instances per iteration for the Australian, Taiwan, and FICO datasets, respectively. All experiments were performed with the R programming language, version 3.4.0, on a PC with a 3.4 GHz Intel Core i7 and 32 GB RAM running Microsoft Windows 10. In particular, this study used the "Keras" library for the analysis [19].

27.3.2 Predictive Performances

Our aim in this empirical evaluation is to show that our proposed hybrid credit scoring model can outperform both the industry-benchmark logistic regression and the neural network models across different evaluation metrics. To validate the hybrid model and draw reliable conclusions, Tables 27.2, 27.3, and 27.4 compare the performance of our model and the baseline classifiers on the three datasets.

For the Australian dataset (see Table 27.2), our hybrid model shows the best performance in terms of AUC and accuracy, achieving 91.1% AUC and 82.6% accuracy, which are 0.4% and 0.7% better than the logistic model, respectively. The AUC and accuracy measure the ability to classify borrowers as good or bad, whereas the H-measure is better at dealing with cost assumptions between the credit classes; on the H-measure, however, the logistic model outperformed our hybrid model. In general, on the Australian dataset our proposed model shows promising predictive performance on the two main evaluation metrics, indicating that it is an appropriate method for small credit scoring datasets.

Table 27.2 Predictive performances for the Australian dataset over the different evaluation metrics

MLP architecture  Method               H-measure  AUC    Accuracy
1 layer           Neural net           0.577      0.896  0.814
                  Our model            0.587      0.911  0.826
3 layers          Neural net           0.551      0.869  0.822
                  Our model            0.573      0.889  0.819
5 layers          Neural net           0.547      0.864  0.814
                  Our model            0.554      0.885  0.826
                  Logistic regression  0.593      0.907  0.819

On the Taiwan dataset (see Table 27.3), our model with one hidden layer and dropout improves over the logistic model by 0.056 H-measure, 5.6% AUC, and 1.8% accuracy. The AUC of our model reaches 77.1%, reflecting its ability to discriminate between good and bad borrowers, and its H-measure of 0.293 shows that it deals better with cost assumptions between the classes. Finally, our model achieves the best accuracy, at 71.7%. The reason all three metrics improve over both the logistic and neural network models more clearly than on the Australian dataset may be that the Taiwan dataset contains more instances.

Table 27.3 Predictive performances for the Taiwan dataset over the different evaluation metrics

MLP architecture  Method               H-measure  AUC    Accuracy
1 layer           Neural net           0.248      0.730  0.672
                  Our model            0.293      0.771  0.717
3 layers          Neural net           0.237      0.709  0.696
                  Our model            0.254      0.738  0.702
5 layers          Neural net           0.234      0.707  0.694
                  Our model            0.244      0.732  0.699
                  Logistic regression  0.237      0.715  0.699

Regarding the FICO dataset (see Table 27.4), our model with 3 hidden layers of 16 nodes each improves the predictive performance over the logistic model only marginally, by about 0.01% AUC and 0.04% accuracy. In terms of the AUC, our model achieves 78.7%, and its H-measure of 0.292 is essentially equal to the performance of logistic regression. Lastly, our model attains the best accuracy, at 72.1%. This again provides evidence that our proposed hybrid model, when constructed on datasets that contain more instances, performs better than both the logistic and neural network models. Overall, on the FICO dataset the logistic regression is a close rival to our model on all evaluation metrics. These results suggest that our proposed hybrid model is an efficient and promising classifier for developing credit scoring models when the dataset contains a large number of instances.

Table 27.4 Predictive performances for the FICO dataset over the different evaluation metrics

MLP architecture  Method               H-measure  AUC    Accuracy
1 layer           Neural net           0.287      0.784  0.707
                  Our model            0.285      0.782  0.724
3 layers          Neural net           0.286      0.784  0.722
                  Our model            0.292      0.787  0.721
5 layers          Neural net           0.285      0.784  0.722
                  Our model            0.289      0.785  0.724
                  Logistic regression  0.292      0.787  0.721
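The AUC and accuracy used in these comparisons can be computed with standard library calls. The sketch below uses made-up probabilities for six hypothetical borrowers, not the chapter's results; the H-measure needs a dedicated implementation (e.g. the R hmeasure package) and is omitted here.

```python
from sklearn.metrics import roc_auc_score, accuracy_score

# Hypothetical scores for six test borrowers (1 = default).
y_true = [0, 0, 1, 1, 0, 1]
p_hat = [0.10, 0.40, 0.35, 0.80, 0.20, 0.90]   # predicted default probabilities

auc = roc_auc_score(y_true, p_hat)
acc = accuracy_score(y_true, [int(p >= 0.5) for p in p_hat])
print(auc, acc)  # AUC = 8/9 ≈ 0.889, accuracy = 5/6 ≈ 0.833
```

Note that the AUC is threshold-free (it ranks the probabilities), while accuracy depends on the cutoff chosen here (0.5), which is one reason the two metrics can disagree about which model is best.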

27.4 Conclusions

One of the main concerns of commercial banks is an efficient credit scoring model with good predictive ability. This study has shown that a hybrid credit scoring model consisting of MLP neural networks and logistic regression can provide better predictive power in credit scoring applications than the benchmark baseline classifiers. In particular, our proposed hybrid credit scoring model performs better on larger datasets. The authors anticipate future work in this area that includes developing ensemble deep learning models for this context.

Acknowledgements This research was supported by the Private Intelligence Information Service Expansion (No. C0511-18-1001) funded by the NIPA (National IT Industry Promotion Agency) and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (No. 2017R1A2B4010826).

References

1. Louzada, F., Ara, A., Fernandes, G.B.: Classification methods applied to credit scoring: systematic review and overall comparison. Surv. Oper. Res. Manag. Sci. 21(2), 117–134 (2016)
2. Cox, D.R.: The regression analysis of binary sequences. J. Royal Statist. Soc. Ser. B (Methodological), 215–242 (1958)
3. Rosenblatt, F.: The perceptron: a probabilistic model for information storage and organization in the brain. Psychol. Rev. 65(6), 386 (1958)
4. Lessmann, S., Baesens, B., Seow, H.-V., Thomas, L.C.: Benchmarking state-of-the-art classification algorithms for credit scoring: an update of research. Eur. J. Oper. Res. 247(1), 124–136 (2015)
5. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436 (2015)
6. West, D.: Neural network credit scoring models. Comput. Oper. Res. 27(11–12), 1131–1152 (2000)
7. Lee, T.-S., Chen, I.-F.: A two-stage hybrid credit scoring model using artificial neural networks and multivariate adaptive regression splines. Expert Syst. Appl. 28(4), 743–752 (2005)
8. Wong, B.K., Selvi, Y.: Neural network applications in finance: a review and analysis of literature (1990–1996). Inf. Manag. 34(3), 129–139 (1998)
9. Hand, D.J., Anagnostopoulos, C.: A better Beta for the H measure of classification performance. Pattern Recogn. Lett. 40, 41–46 (2014)
10. Orgler, Y.E.: A credit scoring model for commercial loans. J. Money Credit Bank. 2(4), 435–445 (1970)


11. Van Gestel, T., et al.: Linear and nonlinear credit scoring by combining logistic regression and support vector machines. J. Credit Risk 1(4) (2005)
12. Vellido, A., Martín-Guerrero, J.D., Lisboa, P.J.G.: Making machine learning models interpretable. In: ESANN (2012)
13. Girosi, F., Jones, M., Poggio, T.: Regularization theory and neural networks architectures. Neural Comput. 7(2), 219–269 (1995)
14. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014)
15. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
16. Ruder, S.: An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747 (2016)
17. Asuncion, A., Newman, D.: UCI machine learning repository (2007)
18. FICO: Explainable machine learning challenge. https://community.fico.com/s/explainable-machine-learning-challenge. Accessed 1 Oct 2018
19. Arnold, T.B.: kerasR: R interface to the Keras deep learning library. J. Open Source Softw. 2 (2017)

Chapter 28

The Early Prediction Acute Myocardial Infarction in Real-Time Data Using an Ensemble Machine Learning Model

Bilguun Jargalsaikhan, Muhammad Saqlain, Sherazi Syed Waseem Abbas, Moon Hyun Jae, In Uk Kang, Sikandar Ali and Jong Yun Lee

Abstract Cardiovascular disease is one of the most dangerous diseases in the world, so the early detection of acute myocardial infarction is critical for patients and doctors. If cardiovascular disease can be detected early, acute myocardial infarction can be prevented. In this paper, we propose a machine learning ensemble approach for the early detection of cardiac events from electronic health records (EHRs). The proposed ensemble approach combines a set of different classification algorithms: Random Forest, Decision Tree, Artificial Neural Network, K-Nearest Neighbors, and Support Vector Machine. The data come from the Korea Acute Myocardial Infarction Registry (KAMIR), a real-life acute myocardial infarction database.

Keywords Acute myocardial infarction · Risk prediction · Ensemble approach · Machine learning

B. Jargalsaikhan · M. Saqlain · S. S. W. Abbas · M. H. Jae · I. U. Kang · S. Ali · J. Y. LEE (B) Department of Computer Science, College of Electrical and Computer Engineering, Chungbuk National University, Cheoungju City 28644, South Korea e-mail: [email protected] B. Jargalsaikhan e-mail: [email protected] M. Saqlain e-mail: [email protected] S. S. W. Abbas e-mail: [email protected] M. H. Jae e-mail: [email protected] I. U. Kang e-mail: [email protected] S. Ali e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_28


28.1 Introduction

Cardiovascular disease (CVD) is the leading cause of mortality worldwide, accounting for the largest share of global deaths according to the World Health Organization (WHO) [1]. CVD includes diseases such as acute myocardial infarction (AMI), coronary heart disease (CHD), heart failure, congenital heart disease, and stroke [2]. AMI, also called a heart attack, is the most dangerous of these: the coronary arteries are blocked from supplying blood to the heart, which kills the heart cells. In 2008, approximately 17.3 million people died of CVD, 30% of all deaths, and by 2030 the total number of such deaths is projected to rise to 23.6 million, with CVD and AMI remaining the main causes [1].

Prediction models fall into two main groups. The first comprises traditional, probability-based models, including the Framingham risk score [3], GRACE risk score [4], QRISK2 [5], and TIMI risk score [6]; these are regression-based prediction models of CVD. The second group consists of early prediction models based on machine learning methods. However, early detection models based on machine learning and deep learning have not yet been widely applied to EHRs in clinical practice.

Earlier work has examined how the human brain works and why we try to replicate it: the human brain is the most powerful learning model on the planet, and the ANN algorithm, modeled on it, works with many inputs and one output. Kim et al. [11] applied an ANN to a coronary heart disease (CHD) dataset to predict the risk of CHD in Korea. The KNN algorithm is used for classification and regression analysis, with nearer neighbors contributing more to the average than more distant ones. Weinberger et al. [12] proposed large margin nearest neighbor classification to overcome the weaknesses of KNN and improve its performance. The RF is one of the ensemble techniques in pattern recognition for high-dimensional and complex problems. Shouman et al. [13] suggested a decision-tree-based classification for predicting heart disease patients. The SVM is used for classification and regression analysis: given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier. Vadicherla and Sonawane [14] applied SVM and ANN to the classification of heart disease and to feature extraction.

This paper proposes an early prediction model of cardiovascular disease that combines several classification algorithms, also called an ensemble approach [7, 8]. The ensemble model uses the K-Nearest Neighbors (KNN), Random Forest (RF), Artificial Neural Network (ANN) [10, 11], Support Vector Machine (SVM), and Decision Tree (DT) machine learning algorithms [9]. An ensemble approach gives better prediction performance than any single classification algorithm: it is a supervised machine learning strategy that combines the best results of each classification algorithm to create a better prediction model. In our experiments, the ensemble achieved higher accuracy than any single classification algorithm.


28.2 Method and Materials

28.2.1 Data and Data Extraction

The present research used the Korea Acute Myocardial Infarction Registry (KAMIR) dataset. Since November 2005, the KAMIR has provided online registration across 52 hospitals supported by the Korean Circulation Society (KCS) with the capability of primary PCI treatment. The experimental dataset covers January 2005 to December 2008 and includes 14,885 ACS patients. After preprocessing, we selected 8,518 subjects aged 20 to 100 years with 1-year follow-up traceability; we excluded 5,632 patients who failed the 1-year clinical follow-up after hospital discharge. The experimental dataset consists of 21 numerical and 28 categorical variables. The KAMIR dataset was preprocessed using the ordinal-encoding and one-hot-encoding utilities of the scikit-learn library for the Python programming language.
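The two scikit-learn encoders mentioned above can be sketched as follows. The category values here are made up for illustration; the actual KAMIR variables are not public.

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

# Toy stand-in for two categorical KAMIR variables (hypothetical values).
records = np.array([["male", "STEMI"],
                    ["female", "NSTEMI"],
                    ["female", "STEMI"]])

ordinal = OrdinalEncoder().fit_transform(records)          # one integer code per column
onehot = OneHotEncoder().fit_transform(records).toarray()  # one indicator column per category

print(ordinal.shape)  # (3, 2)
print(onehot.shape)   # (3, 4): two categories in each of the two columns
```

Ordinal encoding keeps one column per variable but imposes an arbitrary order on the codes; one-hot encoding avoids that order at the cost of one column per category, which is why the two are often mixed depending on whether a variable is genuinely ordered.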

28.2.2 Training, Validation, and Test Datasets

The 8,518 subjects from the KAMIR dataset were subdivided by random sampling into a training dataset of 80% for model learning and a testing dataset of 20% for evaluating the prediction model. The data extraction is shown in Fig. 28.1.

Fig. 28.1 Data extraction (of 14,885 patients with acute coronary syndrome, 5,632 who failed the 1-year follow-up were excluded, leaving 9,253 patients at 1-year follow-up; a further 735 in-hospital deaths were excluded, leaving 8,518 subjects split into 80% training and 20% testing data)
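The 80/20 random split can be reproduced with scikit-learn's train_test_split; the feature matrix and labels below are placeholders, since the registry data are not public.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder feature matrix and labels for the 8,518 selected subjects.
X = np.arange(8518).reshape(-1, 1)
y = np.zeros(8518, dtype=int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42)
print(len(X_train), len(X_test))
```

Fixing random_state makes the split reproducible across runs, which matters when several classifiers are later compared on the same partition.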


Fig. 28.2 Architecture of the proposed ensemble model (the KAMIR training dataset feeds five base classifiers, Decision Tree, Artificial Neural Network, K-Nearest Neighbors, Support Vector Machine, and Random Forest, whose outputs are combined by an ensemble classifier and evaluated on the KAMIR test dataset)

28.2.3 Proposed Method

In this paper, we used the ANN, KNN, RF, DT, and SVM machine learning algorithms; this section describes the proposed ensemble approach. Stacking is probably the best-known meta-learning approach. An ensemble approach can improve machine learning results by combining several models into one prediction model; the main variants are bagging, boosting, and stacking [8]. This research used a stacking ensemble approach for the CVD prediction model on the KAMIR dataset. The architecture of the ensemble method is shown in Fig. 28.2. The proposed method learns which classification algorithms are reliable and which are not: stacking builds a meta-dataset containing a tuple of each classifier's results.
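A stacking ensemble over the five named base learners can be sketched with scikit-learn's StackingClassifier. This is a generic illustration on synthetic data (the KAMIR registry is not public), and the meta-classifier choice here, logistic regression, is an assumption of the sketch, not necessarily the chapter's.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the KAMIR data.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# The five base learners named in the chapter; their cross-validated
# predictions form the meta-dataset on which the final classifier trains.
base_learners = [
    ("rf", RandomForestClassifier(random_state=0)),
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("ann", MLPClassifier(max_iter=500, random_state=0)),
    ("knn", KNeighborsClassifier()),
    ("svm", SVC(probability=True, random_state=0)),
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(),
                           cv=5)
stack.fit(X, y)
print(stack.score(X, y))
```

Using cross-validated base predictions (the cv=5 argument) is what lets the meta-classifier learn which base learners are reliable without being misled by their training-set fit.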

28.3 Results

The performance of each classification algorithm was compared with the proposed ensemble model. For the early prediction of CVD occurrences, we compared the prediction models on accuracy, F-measure, precision, and recall score. The performance of all the classification algorithms and the ensemble model, computed from the confusion matrix, is shown in Table 28.1.

Table 28.1 Experiment results

Classifier                 Accuracy  F-measure  Precision  Recall score
Random forest              0.837     0.713      0.797      0.671
Decision tree              0.882     0.899      0.918      0.882
Artificial neural network  0.822     0.648      0.623      0.738
Support vector machine     0.413     0.478      0.457      0.5
K-nearest neighbor         0.719     0.478      0.457      0.5
Proposed ensemble model    0.956     0.741      0.939      0.676

In terms of accuracy, the decision tree and the proposed ensemble model achieved the highest performance, while the SVM model performed worst. The proposed ensemble model improves on each single algorithm's accuracy: RF [+0.119; +14%], DT [+0.074; +8%], ANN [+0.134; +16%], SVM [+0.543; +123%], and KNN [+0.237; +46%]. The AUC-ROC curve is shown in Fig. 28.3.

Fig. 28.3 AUC-ROC of the experiment
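The four metrics used in Table 28.1 can be computed from predicted and true labels with scikit-learn; the labels below are made up to make the arithmetic easy to check by hand (3 true positives, 1 false positive, 1 false negative, 3 true negatives).

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print(accuracy_score(y_true, y_pred))   # 0.75 = 6 of 8 correct
print(precision_score(y_true, y_pred))  # 0.75 = 3 of 4 predicted positives correct
print(recall_score(y_true, y_pred))     # 0.75 = 3 of 4 actual positives found
print(f1_score(y_true, y_pred))         # 0.75 = harmonic mean of the above two
```

On an imbalanced clinical dataset like KAMIR, reporting all four matters: a classifier can score high accuracy while missing most positive cases, which precision and recall expose.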

28.4 Conclusion

This paper proposed an early prediction model of cardiovascular disease that uses an ensemble method over machine learning algorithms. The recommended method can improve on the recognition performance of the individual algorithms; the proposed ensemble method's accuracy is 95%. In conclusion, an ensemble model's predictions can improve on any single classification algorithm.

Acknowledgements This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2017R1D1A1A02018718).


References

1. World Health Organization: The top 10 causes of death. http://origin.who.int/mediacentre/factsheets/fs310/en/
2. Thygesen, K., Alpert, J.S., White, H.D.: Universal definition of myocardial infarction. J. Am. College Cardiol. 50–22, 2173–2195 (2007)
3. Mahmood, S.S., Levy, D., Vasan, R.S., Wang, T.J.: The Framingham Heart Study and the epidemiology of cardiovascular disease: a historical perspective. Lancet 383, 999–1008 (2014)
4. Fox, K.A.A., Dabbous, O.H., Goldberg, R.J., Pieper, K.S., Eagle, K.A., de Werf, F.V., Avezum, A., Goodman, S.G., Flather, M.D., for the GRACE Investigators: Prediction of risk of death and myocardial infarction in the six months after presentation with acute coronary syndrome: prospective multinational observational study (GRACE). BMJ 333, 7578 (2006)
5. Hippisley-Cox, J., Coupland, C., Vinogradova, Y., Robson, J., Minhas, R., Sheikh, A., Brindle, P.: Predicting cardiovascular risk in England and Wales: prospective derivation and validation of QRISK2. BMJ 327(7426) (2003)
6. Amin, S.T., Morrow, D.A., Braunwald, E., Sloan, S., Contant, C., Murphy, S., Antman, E.M.: Dynamic TIMI risk score for STEMI. J. Am. Heart Assoc. 2(1), e003269
7. van der Laan, M.J., Polley, E.C., Hubbard, A.E.: Super Learner. Stat. Appl. Genet. Mol. Biol. 6(1) (2007)
8. LeDell, E.: Scalable ensemble learning and computationally efficient variance estimation. Doctoral dissertation, University of California, Berkeley, USA (2015)
9. Weng, S.F., Reps, J., Kai, J., Garibaldi, J.M., Qureshi, N.: Can machine-learning improve cardiovascular risk prediction using routine clinical data? PLoS ONE 12(4), e0174944 (2017)
10. Yao, X.: Evolving artificial neural networks. Proc. IEEE 87(9), 1423–1447 (1999)
11. Kim, J.K., Kang, S.: Neural network-based coronary heart disease risk prediction using feature correlation analysis. J. Healthcare Eng., 2780501 (2017)
12. Weinberger, K.Q., Saul, L.K.: Distance metric learning for large margin nearest neighbor classification. J. Mach. Learn. Res., 207–244 (2009)
13. Shouman, M., Turner, T., Stocker, R.: Using decision tree for diagnosing heart disease patients. In: Proceedings of the Ninth Australasian Data Mining Conference, vol. 121, pp. 23–30. Australian Computer Society, Inc. (2011)
14. Vadicherla, D., Sonawane, S.: Classification of heart disease using SVM and ANN. Int. J. Res. Comput. Commun. Technol. 2(9) (2013)

Chapter 29

A Collaborative Filtering Recommendation System for Rating Prediction

Khishigsuren Davagdorj, Kwang Ho Park and Keun Ho Ryu

Abstract A recommendation system is a subclass of information filtering system that helps users find relevant items of interest from a large set of possible selections. Model-based collaborative filtering utilizes the ratings of the user–item matrix to generate predictions, and this type of intelligent system plays an increasingly critical role in e-commerce, social networks, and other popular domains. In this research work, we compare two widely used and efficient techniques, Biased Matrix Factorization and regular Matrix Factorization, both using Stochastic Gradient Descent (SGD). We conducted experiments on two real-world public datasets, Book Crossing and MovieLens 100K, and evaluated them with two metrics, Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). Our experiments demonstrated that Biased Matrix Factorization with the SGD technique yields a substantial increase in recommendation accuracy for rating prediction on both experimental datasets. Compared with the regular Matrix Factorization technique, Biased Matrix Factorization reduced the RMSE by 25.78% and the MAE by 19.69% on the Book Crossing dataset, and the RMSE by 19.69% and the MAE by 14.08% on the MovieLens 100K dataset. As expected when comparing the results across the datasets, Biased Matrix Factorization using SGD produces less prediction error.

Keywords Model-based collaborative filtering · Matrix factorization · Bias · Stochastic gradient descent

K. Davagdorj · K. H. Park · K. H. Ryu (B) Database Bioinformatics Laboratory, College of Electrical and Computer Engineering, Chungbuk National University, Cheongju 28644, Korea e-mail: [email protected]; [email protected] K. Davagdorj e-mail: [email protected] K. H. Park e-mail: [email protected] K. H. Ryu Faculty of Information Technology, Ton Duc Thang University, Ho Chi Minh City 700000, Vietnam © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al.
(eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_29


29.1 Introduction

Recommendation systems currently play a central role as smart mechanisms in e-commerce services and various online applications, such as recommending movies, music, books, and products to purchase from a rapidly growing amount of information [1]. Nowadays, a variety of fields employ collaborative filtering (CF), the most popular and well-known approach in recommendation systems; this technique can efficiently recommend products that are likely to fit customer needs. CF predicts the interest of products for an active customer based on the aggregated rating information of similar customers. Broadly, CF divides into two types: memory-based CF and model-based CF [2]. Memory-based CF uses user rating data to compute the similarity between users and items. In contrast, model-based approaches learn the parameters of a model and store only those parameters. Model-based CF has been studied to predict users' ratings by applying different data mining and machine learning algorithms [3]. Hence, in this research work, we investigate Matrix Factorization, which has been widely and effectively used in model-based recommendation [4].

The study [5] evaluated the accuracy of a new and extremely simple prediction method (RF-Rec) that uses the user's and the item's most frequent rating value to make a rating prediction. They compared their proposed RF-Rec method with Slope One, user-based kNN, and item-based kNN on two datasets from the movie domain and one from the book domain: the 100k-MovieLens rating database (100,000 ratings by 943 users on 1,682 movies, 0.9369 sparsity), a snapshot of the Yahoo! Movies dataset, and the Book Crossing dataset. Their models were evaluated at different training set sizes using MAE values. As the training set size increased, their proposed RF-Rec method built better prediction models; the best results were about 0.742 on the 100k-MovieLens database, about 0.71 on the Yahoo! Movies snapshot, and about 0.58 on Book Crossing.

In study [6], a hybrid collaborative filtering model was proposed that integrates deep representation learning and matrix factorization, bridging an Additional Stacked Denoising Autoencoder with matrix factorization. They conducted experiments on three real-world datasets to evaluate the effectiveness of their proposed hybrid model. The experimental results showed that it outperforms state-of-the-art methods such as Probabilistic Matrix Factorization, Collective Matrix Factorization, and Collaborative Deep Learning in terms of root mean squared error and recall.

In addition, study [7] researched movie recommendation using matrix factorization techniques, most notably those of the Netflix Prize competition. They compared a matrix factorization implementation that uses a pure decomposition model with one that incorporates bias, to see whether better predictions can be achieved. To examine whether movie features can be used to find similar movies, they chose an arbitrary movie and calculated its similarity to the other movies by comparing their features using cosine similarity.

The remainder of this paper is organized as follows: Sect. 29.2 reviews the proposed method and evaluation metrics. Section 29.3 demonstrates the experimental results to show the effectiveness of each model. The conclusion and future research directions are presented in Sect. 29.4.

29.2 Method

29.2.1 Model Description

Matrix Factorization. Matrix Factorization has many benefits for overcoming problems in recommendation systems. The Matrix Factorization technique represents users and items in a lower dimensional latent factor space [4]. User–item interactions can be represented as a matrix with users on one axis and items on the other. Most recommender systems are based on item ratings given by users; however, this rating matrix is typically very sparse in real-world applications, since users usually do not record their satisfaction for every item they have used or every movie they have watched. Matrix Factorization has been shown to make very good predictions even on very sparse matrices. The matrix factorization approach reduces the dimensions of the rating matrix r by factorizing it into a product of two latent factor matrices, p for the users and q for the items [8]:

$$\begin{bmatrix} r_{11} & \cdots & r_{1i} \\ \vdots & \ddots & \vdots \\ r_{u1} & \cdots & r_{ui} \end{bmatrix} = \begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_u \end{bmatrix} \begin{bmatrix} q_1 & q_2 & \cdots & q_i \end{bmatrix}; \quad \{u \times i,\ u \times f,\ f \times i\} \quad (29.1)$$

Each row $p_u$ is a vector of features for a user u and each row $q_i$ is a vector of features for an item i. The product of these vectors creates an estimate of the original rating:

$$r_{ui} = p_u q_i^T \quad (29.2)$$
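To make Eq. (29.2) concrete, the following sketch reconstructs a full rating matrix from two tiny factor matrices; the factor values are invented purely for illustration:

```python
# Eq. (29.2): a predicted rating is the dot product of a user's latent-factor
# vector p_u and an item's latent-factor vector q_i. Values are made up.
P = [[0.8, 0.1],   # user 0 (f = 2 latent factors)
     [0.3, 0.9]]   # user 1
Q = [[0.7, 0.2],   # item 0
     [0.1, 0.8]]   # item 1

def predict(u, i):
    # r_hat(u, i) = p_u . q_i^T
    return sum(pf * qf for pf, qf in zip(P[u], Q[i]))

# Reconstruct the full u x i rating matrix from the two factor matrices.
R_hat = [[predict(u, i) for i in range(len(Q))] for u in range(len(P))]
print(round(R_hat[0][0], 2))  # 0.8*0.7 + 0.1*0.2 = 0.58
```
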

The factorization uses only the observed ratings and tries to minimize the squared error:

$$\min \sum_{u,i} \left(r_{ui} - p_u q_i^T\right)^2 \quad (29.3)$$

268

K. Davagdorj et al.

This can result in overfitting the training data. To prevent overfitting, a regularization term is added to the squared error; the impact of the regularization is controlled by the constant β:

$$\min \sum_{u,i} \left(r_{ui} - p_u q_i^T\right)^2 + \beta\left(\|p_u\|^2 + \|q_i\|^2\right) \quad (29.4)$$

Biased Matrix Factorization. Biased Matrix Factorization improves the Matrix Factorization prediction by adding a bias term for each item and each user before computing the interaction term; accordingly, the minimized squared error of Eq. 29.4 is modified to include the biases. Stochastic Gradient Descent (SGD). The SGD algorithm solves the optimization problem of Eq. 29.4. It loops through each rating in the training data, predicts the rating, and calculates a prediction error that drives the updates.
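The SGD loop described above can be written out as follows. This is a minimal pure-Python illustration of the biased model (global mean plus user and item biases plus the factor product), not the authors' implementation; the toy ratings are invented, and the learning rate (0.01), regularization (0.015), and 30 epochs echo the settings reported later in Sect. 29.3.2.

```python
import random
random.seed(42)

# Toy ratings: (user, item, rating). Real data would be MovieLens/Book Crossing.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 2.0)]
n_users, n_items, n_factors = 3, 3, 2
lr, beta, epochs = 0.01, 0.015, 30

mu = sum(r for _, _, r in ratings) / len(ratings)          # global mean
b_u, b_i = [0.0] * n_users, [0.0] * n_items                # user/item biases
P = [[random.gauss(0, 0.1) for _ in range(n_factors)] for _ in range(n_users)]
Q = [[random.gauss(0, 0.1) for _ in range(n_factors)] for _ in range(n_items)]

def predict(u, i):
    # Biased prediction: global mean + user bias + item bias + factor product
    return mu + b_u[u] + b_i[i] + sum(pf * qf for pf, qf in zip(P[u], Q[i]))

for _ in range(epochs):
    for u, i, r in ratings:
        e = r - predict(u, i)                              # prediction error
        b_u[u] += lr * (e - beta * b_u[u])
        b_i[i] += lr * (e - beta * b_i[i])
        for f in range(n_factors):
            pu, qi = P[u][f], Q[i][f]
            P[u][f] += lr * (e * qi - beta * pu)
            Q[i][f] += lr * (e * pu - beta * qi)

print(round(predict(0, 0), 2))  # prediction for user 0, item 0 after training
```

Dropping the `b_u`/`b_i` updates (and the bias terms in `predict`) recovers plain Matrix Factorization, the other comparison partner in this paper.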

29.2.2 Evaluation Metrics

The experimental results are evaluated with Root Mean Square Error (RMSE) [9] and Mean Absolute Error (MAE) [10]. Root Mean Squared Error (RMSE). RMSE is an objective metric widely used for performance evaluation of recommendation system models, defined as

$$\text{RMSE} = \sqrt{\frac{\sum_{t=1}^{n} (\hat{y}_t - y_t)^2}{n}} \quad (29.5)$$

The RMSE has the same measuring unit as the variable y. Mean Absolute Error (MAE). MAE is the average vertical distance between each point and the identity line. The formula is given by

$$\text{MAE} = \frac{\sum_{t=1}^{n} |\hat{y}_t - y_t|}{n} \quad (29.6)$$

where $\hat{y}_t$ is the estimated value at point t, $y_t$ is the observed value, and n is the sample size.
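Both metrics are straightforward to implement; a small sketch with made-up predictions:

```python
import math

def rmse(y_hat, y):
    # Eq. (29.5): square root of the mean squared prediction error.
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(y_hat, y)) / len(y))

def mae(y_hat, y):
    # Eq. (29.6): mean of the absolute prediction errors.
    return sum(abs(p - a) for p, a in zip(y_hat, y)) / len(y)

predicted = [3.5, 4.0, 2.0]
observed  = [3.0, 5.0, 2.0]
print(round(rmse(predicted, observed), 4))  # sqrt((0.25 + 1.0 + 0.0)/3) = 0.6455
print(mae(predicted, observed))             # (0.5 + 1.0 + 0.0)/3 = 0.5
```
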

29.3 Experimental Results

In this section, we explain the datasets in Sect. 29.3.1 and, in Sect. 29.3.2, the experimental results obtained after running the Matrix Factorization and Biased Matrix Factorization approaches, combined with SGD, on the real-world Book Crossing and MovieLens 100 K datasets.


29.3.1 Dataset

In this research work, we analyzed two real-world datasets from different domains and considered the users, items, and item ratings given by users in each dataset. The Book Crossing dataset [11] was collected in 2004 from the Book Crossing community with kind permission from Ron Hornbaker, CTO of Humankind Systems. This dataset consists of 272,679 interactions (explicit/implicit) from 2,945 users on 17,384 books. Ratings are given on a numeric scale from 1 to 10, where a higher value means greater satisfaction with the product. Simple demographic information for the users comprises age, gender, occupation, and zip code. The MovieLens 100 K dataset [12] was collected by the GroupLens Research Project at the University of Minnesota. This dataset consists of 100,000 ratings (1–5) from 943 users on 1,682 movies, where each user has rated at least 20 movies. The same simple demographic information (age, gender, occupation, zip code) is available for the users.

29.3.2 Performance Evaluation

The retrieved Book Crossing and MovieLens 100 K datasets include three attributes, named "user ID", "item ID", and "item ratings given by user", without missing values. We set "item ratings given by user" as the label and defined "user ID" and "item ID" as attributes. In the data splitting phase, we use 80% of the data as the training set and the remaining 20% as the test set. To evaluate the performance of our method, we consider two comparison partners: Matrix Factorization and Biased Matrix Factorization, both using SGD. The experimental workflow is shown in Fig. 29.1.
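A reproducible 80/20 split of (user, item, rating) triples can be sketched as follows; the triples here are synthetic placeholders for the real datasets:

```python
import random

def split_80_20(rows, seed=0):
    # Shuffle a copy, then take the first 80% for training, the rest for test.
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * 0.8)
    return rows[:cut], rows[cut:]

# 20 synthetic (user ID, item ID, rating) triples standing in for the datasets.
data = [(u, i, r) for u in range(10) for i, r in [(0, 4.0), (1, 3.0)]]
train, test = split_80_20(data)
print(len(train), len(test))  # 16 4
```
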

Fig. 29.1 Workflow of the experiment: Dataset → Set Role (item, user, rating) → Split Data (training, test) → Apply Model (Matrix Factorization using SGD; Biased Matrix Factorization using SGD) → Performance Evaluation (RMSE, MAE)

Table 29.1 Experimental comparison results of Matrix Factorization and Biased Matrix Factorization

Dataset name              Method                        RMSE    MAE
Book Crossing Rating      Biased matrix factorization   0.521   0.367
                          Matrix factorization          0.702   0.457
Movie Lens 100 K Rating   Biased matrix factorization   0.857   0.622
                          Matrix factorization          0.895   0.724

The parameter settings for the minimum rating, rating range, and number of factors depended on the collected information of each experimental dataset. Other parameters were applied identically in both techniques: user and item regularizations of 0.015, 30 iterations, and a learning rate of 0.01. For the Biased Matrix Factorization technique, the bias regularization parameter additionally had to be adjusted; it was tuned to 1.0E-4. Table 29.1 shows the rating prediction results on the Book Crossing and Movie Lens 100 K datasets, where Biased Matrix Factorization outperforms the regular Matrix Factorization technique. Compared with regular Matrix Factorization, Biased Matrix Factorization reduced the RMSE by 25.78% and the MAE by 19.69% on the Book Crossing dataset, and the RMSE by 19.69% and the MAE by 14.08% on the Movie Lens 100 K dataset. Our best experimental result was an RMSE of 0.521 and an MAE of 0.367 on the Book Crossing dataset, and an RMSE of 0.857 and an MAE of 0.622 on the Movie Lens 100 K ratings dataset.

29.4 Conclusion and Future Work

Recommender systems have become increasingly popular and are utilized in a variety of areas to enhance the understanding of customer behavior and thereby support business processes effectively. In recent years, many kinds of recommender systems have been developed; among them, the collaborative filtering model-based approach is the most popular and has been successfully employed in applications, especially for the critical task of predicting ratings. The contribution of this research study is a comparison of two efficient techniques, MF and BMF, on two real-world datasets from different domains, Book Crossing and Movie Lens 100 K, for predicting users' item ratings. In both cases, BMF using the SGD algorithm achieved the better performance with respect to RMSE and MAE. As future work, we will employ further suitable state-of-the-art machine learning algorithms for customer prediction, in order to improve predictive value in recommendation decisions.

Acknowledgements This research was supported by the Private Intelligence Information Service Expansion (No. C0511-18-1001) funded by the NIPA (National IT Industry Promotion Agency) and


supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (No.2017R1A2B4010826).

References

1. Ricci, F., Rokach, L., Shapira, B.: Recommender systems: introduction and challenges. In: Recommender Systems Handbook, pp. 1–34. Springer, Boston, MA (2015)
2. Gong, S., Ye, H., Tan, H.: Combining memory-based and model-based collaborative filtering in recommender system. In: 2009 Pacific-Asia Conference on Circuits, Communications and Systems, pp. 690–693. IEEE (2009)
3. Su, X., Khoshgoftaar, T.M.: A survey of collaborative filtering techniques. Adv. Artif. Intell. (2009)
4. Koren, Y., Bell, R., Volinsky, C.: Matrix factorization techniques for recommender systems. Computer 8, 30–37 (2009)
5. Gedikli, F., Jannach, D.: Recommending based on rating frequencies. In: Proceedings of the Fourth ACM Conference on Recommender Systems, pp. 233–236. ACM (2010)
6. Ma, H., Yang, H., Lyu, M.R., King, I.: SoRec: social recommendation using probabilistic matrix factorization. In: Proceedings of the 17th ACM Conference on Information and Knowledge Management, pp. 931–940. ACM (2008)
7. Ivarsson, J., Lindgren, M.: Movie recommendations using matrix factorization. Degree Project in Computer Engineering (2016)
8. Jamali, M., Ester, M.: A matrix factorization technique with trust propagation for recommendation in social networks. In: Proceedings of the Fourth ACM Conference on Recommender Systems, pp. 135–142. ACM (2010)
9. Hyndman, R.J., Koehler, A.B.: Another look at measures of forecast accuracy. Int. J. Forecast. 22(4), 679–688 (2006)
10. Willmott, C.J., Matsuura, K.: Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Climate Res. 30(1), 79–82 (2005)
11. Ziegler, C.N.: Book-Crossing dataset. http://www2.informatik.uni-freiburg.de/~cziegler/BX/. Accessed 1 June 2004
12. Harper, F.M., Konstan, J.A.: The MovieLens datasets: history and context. ACM Trans. Interact. Intell. Syst. (TiiS) 5(4), 19 (2015). http://dx.doi.org/10.1145/2827872

Chapter 30

Comparison of the Framingham Risk Score and Deep Neural Network-Based Coronary Heart Disease Risk Prediction

Tsatsral Amarbayasgalan, Pham Van Huy

and Keun Ho Ryu

Abstract Coronary heart disease (CHD) is one of the top causes of death globally; patients suffering from CHD require long-term, permanent treatment. Furthermore, the early detection of CHD is not easy; doctors diagnose it based on many kinds of clinical tests. It is therefore effective to reduce the risk of developing CHD by identifying high-risk people in advance. The Framingham Risk Score (FRS) is a gender-specific algorithm used to estimate the 10-year CHD risk of an individual. However, FRS does not estimate risk well in populations other than the US population. In this study, we have proposed a deep neural network (DNN); this approach has been compared with the FRS and data mining-based CHD risk prediction models on the Korean population. In our experiment, the models using data mining achieved higher accuracy than FRS-based prediction. Moreover, the proposed DNN showed the highest accuracy and area under the curve (AUC) score, 82.67% and 82.64%, respectively. Keywords Coronary heart disease · Framingham risk score · Data mining · Deep neural network

T. Amarbayasgalan Database and Bioinformatics Laboratory, School of Electrical and Computer Engineering, Chungbuk National University, Cheongju 28644, Korea e-mail: [email protected] P. Van Huy · K. H. Ryu Faculty of Information Technology, Ton Duc Thang University, Ho Chi Minh City 700000, Vietnam e-mail: [email protected] K. H. Ryu (B) Department of Computer Science, College of Electrical and Computer Engineering, Chungbuk National University, Cheongju 28644, Korea e-mail: [email protected]; [email protected] © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_30


30.1 Introduction

CHD is a type of cardiovascular disease (CVD). According to the report by the World Health Organization (WHO), CVDs are the number one cause of death globally; as of 2017, an estimated 85% of these deaths were due to CHD and stroke [1]. CHD is caused by unhealthy blood cholesterol levels, high blood pressure (HBP), smoking, bad eating habits, lack of physical activity, and obesity [2]. In a patient suffering from CHD, a waxy substance called plaque builds up on the walls of the arteries that supply oxygen and nutrients to the heart muscle. This plaque narrows the arteries and reduces the flow of oxygen-rich blood to the heart [3]. Over time, as the plaque narrows the arteries further, the flow of blood can be blocked, and a heart attack or death can occur because of the blockage [4]. If CHD becomes worse, it requires advanced treatments such as a heart transplant, stent surgery that helps keep coronary arteries open and reduces the chance of a heart attack, or coronary artery bypass grafting that improves blood flow to the heart [5]. In the early stage, it is possible to prevent CHD through a good diet, exercise, and optimal medication. However, making an accurate diagnosis is difficult; a doctor will diagnose CHD based on many clinical tests such as electrocardiogram, echocardiography, chest X-ray, and blood tests. Early detection of CHD is very important because it increases the chances of successful treatment. Therefore, computer-aided approaches have been suggested to detect the risk of CHD in patients. The Framingham Risk Score (FRS) is a gender-specific multivariable statistical model to estimate the 10-year CHD risk of an individual. Features including age, sex, smoking status, blood pressure, cholesterol, high-density lipoprotein (HDL) cholesterol, and diabetes are used in this model [7].
However, FRS does not estimate risk well in populations other than the US population [8, 9], because it was developed based on residents of the city of Framingham, Massachusetts. In recent years, data mining-based CHD risk prediction models have been suggested in many research works [10–12]. Data mining techniques are deployed to scour large databases for novel and useful patterns that might otherwise remain unknown [13]. Using the Korea National Health and Nutrition Examination Survey (KNHANES) dataset, Jaekwon Kim, Jongsik Lee et al. proposed a fuzzy logic and decision tree (DT)-based CHD risk prediction model [11]. Their suggested model was based on age, sex, total cholesterol, low-density lipoprotein (LDL), HDL, systolic blood pressure (SBP), diastolic blood pressure (DBP), smoking status, and diabetes. Their results showed that the proposed method provides higher accuracy (by 0.69%) than other algorithms such as NN, support vector machine (SVM), and C5.0. An NN with feature correlation analysis (NN-FCA) approach was proposed by Jae Kwon Kim and Sanggil Kang. They performed statistical analysis to select features related to CHD from the KNHANES-VI dataset. In total, 9 selected features, namely age, body mass index (BMI), total cholesterol, HDL, SBP, DBP, triglyceride, smoking status, and diabetes, were given as the input of the NN. Compared to the results of the FRS and linear regression on the Korean population, their proposed model showed high accuracy and AUC score, 82.51 and 0.74, respectively [12].


In this research, we have compared our proposed DNN with the FRS and other data mining-based models, namely naïve Bayes (NB), k-nearest neighbors (KNN), SVM, DT, and random forest (RF), on the Korean population.

30.2 Methodology

In this section, we describe the details of the experimental dataset and the proposed DNN.

30.2.1 Dataset

The Korea National Health and Nutrition Examination Survey (KNHANES) is a national surveillance system that has been assessing the health and nutritional status of Koreans since 1998 [14]. We analyzed the KNHANES datasets from 2010 to 2015. The Framingham risk factors from the KNHANES dataset, namely age, sex, total cholesterol, HDL, SBP, DBP, smoking status, and diabetes, were used as the risk factors of the CHD prediction model. Hypertension, dyslipidemia, stroke, myocardial infarction, angina, and hyperlipidemia were used for high-risk or low-risk labeling: if one of these 6 disorders is identified, the individual is considered to be at high risk of CHD. The dataset used in this experiment is detailed in Table 30.1.
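The labeling rule above amounts to a simple disjunction over the six disorders; a sketch (the field names are hypothetical, not the actual KNHANES column codes):

```python
# Hypothetical field names; KNHANES uses its own survey variable codes.
DISORDERS = ["hypertension", "dyslipidemia", "stroke",
             "myocardial_infarction", "angina", "hyperlipidemia"]

def chd_risk_label(record):
    # High-risk if any of the six disorders is present in the survey record.
    return "high" if any(record.get(d, 0) == 1 for d in DISORDERS) else "low"

print(chd_risk_label({"stroke": 1}))                      # high
print(chd_risk_label({"hypertension": 0, "angina": 0}))   # low
```
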

30.2.2 Compared Methods

We have compared the proposed DNN model to the following algorithms. FRS. The Framingham Risk Score (FRS) is a gender-specific multivariable statistical model to estimate the 10-year CHD risk of an individual. We used the Framingham equations for the prediction of CHD risk provided by Wilson et al. [15]. NB. The naïve Bayes classifier estimates the conditional probability for each class label by assuming the attributes are conditionally independent and chooses the class label that has the highest probability. KNN. The k-nearest neighbor algorithm computes distances between a test instance and the training set to determine a list of its nearest neighbors in the training set. The majority class label among the k nearest neighbors is then taken as the predicted class label. SVM. The support vector machine finds a decision boundary, called a hyperplane, that separates the dataset according to the classes. It learns by minimizing the classification error and maximizing the margin.

276

T. Amarbayasgalan et al.

Table 30.1 Description of experimental dataset

Variable name               Low risk (13,075 records)   High risk (12,915 records)
Age (yr)                    41.54 (17.71)               55.96 (15.55)
Sex: Men                    5234                        6083
Sex: Women                  7841                        6832
Total cholesterol (mg/dL)   181.16 (28.55)              194.17 (41.91)
HDL cholesterol (mg/dL)     54.53 (10.33)               46.23 (12.18)
SBP (mmHg)                  113.28 (14.97)              124.19 (17.41)
DBP (mmHg)                  73.01 (10.05)               76.99 (10.76)
Smoking: Yes                1971                        2397
Smoking: No                 11104                       10518
Diabetes: Yes               343                         1748
Diabetes: No                12732                       11167

DT. The decision tree is a flowchart-like structure in which each internal node denotes a test on an attribute, each branch represents an outcome of the test, and each leaf node holds a class label; branches thus represent conjunctions of features that lead to the class labels. A decision tree with few nodes is useful because it solves the problem very efficiently. RF. The random forest is a class of ensemble methods specially designed for decision tree classifiers. It combines the predictions made by multiple decision trees, where each tree is generated from a different training set obtained by randomly choosing N samples with replacement. Proposed DNN. An NN is an interconnected group of nodes; each node represents an artificial neuron, and the output of one artificial neuron is connected to the inputs of neurons in the next hidden layer. Each connection has a weight that is adjusted by the learning process. We used the Adam optimizer, a stochastic gradient-based optimizer, for weight optimization. To find the best structure of the DNN, we trained 5 kinds of NN models with different structures for CHD risk prediction. The selected DNN has 5 hidden layers: hidden layer 1 with 17 neurons, hidden layer 2 with 9, hidden layer 3 with 5, hidden layer 4 with 3, and hidden layer 5 with 2 neurons; each hidden layer uses the ReLU activation function. The output layer has only one neuron and uses the sigmoid activation function. We applied tenfold cross-validation for training the DNN model: the whole dataset is split into 10 samples, the first sample is used for testing and the remaining 9 for training, and the process is repeated 10 times with the test sample rotated each time. Finally, the 10 performance results are aggregated.
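The paper trains the selected network with Keras and the Adam optimizer; the stdlib-only sketch below merely illustrates the layer shapes (8 Framingham risk factors in, hidden layers of 17, 9, 5, 3, and 2 ReLU neurons, one sigmoid output) with untrained random weights.

```python
import math
import random

random.seed(0)

LAYERS = [8, 17, 9, 5, 3, 2, 1]  # 8 risk factors in, 1 risk probability out

def init_weights(layers):
    """One (weights, biases) pair per connection between consecutive layers."""
    return [([[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)],
             [0.0] * n_out)
            for n_in, n_out in zip(layers, layers[1:])]

def forward(x, params):
    """ReLU on the hidden layers, sigmoid on the single output neuron."""
    for i, (w, b) in enumerate(params):
        z = [sum(wi * xi for wi, xi in zip(row, x)) + bi
             for row, bi in zip(w, b)]
        last = i == len(params) - 1
        x = [1.0 / (1.0 + math.exp(-v)) if last else max(0.0, v) for v in z]
    return x[0]

params = init_weights(LAYERS)
risk = forward([0.5] * 8, params)  # one (hypothetical) standardized record
print(0.0 < risk < 1.0)            # True: sigmoid output is a probability
```
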


Fig. 30.1 The general architecture of the experiment

30.3 Experimental Study and Result First, we have prepared the experimental dataset and then we built models to predict CHD risk. Figure 30.1 shows the general architecture of the experimental study.

30.3.1 Data Pre-processing

We analyzed the KNHANES datasets from 2010 to 2015; after deleting rows with missing values, a total of 25,990 records, including 12,915 high-risk people who may suffer from CHD and 13,075 low-risk people, were used in our experiment. Figure 30.2 shows our integrated KNHANES dataset.

30.3.2 Comparison of the CHD Prediction Models

We used the tenfold cross-validation method for testing our model. Precision, recall, specificity, F-measure, accuracy, and AUC, which represents a summary measure of accuracy, are used to evaluate the performance of the models. All compared algorithms are implemented in Python with Keras. First, an experiment was conducted to select the best structure of the DNN. Table 30.2 shows the results of CHD prediction models based on DNNs with different numbers of hidden layers.


Fig. 30.2 Integrated KNHANES dataset (2010–2015)

Table 30.2 Comparison of the DNNs

Algorithm (neurons of hidden layers)   Accuracy   Precision   Recall   Specificity   F-measure   AUC
NN (2)                                 0.7861     0.7676      0.8243   0.7473        0.7949      0.7858
DNN (3, 2)                             0.4704     0.4734      0.4705   0.4702        0.4720      0.4704
DNN (5, 3, 2)                          0.8181     0.7899      0.8699   0.7657        0.8280      0.8178
DNN (8, 5, 3, 2)                       0.8195     0.7912      0.8712   0.7673        0.8293      0.8192
DNN (17, 9, 5, 3, 2)                   0.8267     0.7995      0.8750   0.7779        0.8355      0.8264

We used the NB, KNN, NN, SVM, DT, and RF algorithms to build models for predicting the risk of CHD. These data mining-based models were then compared with the proposed DNN model. Table 30.3 shows the results of the CHD prediction models and the FRS prediction. As a result, the data mining-based prediction models showed higher accuracy than the FRS. Moreover, the proposed DNN gave the highest accuracy and AUC score.
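The precision, recall, specificity, F-measure, and accuracy reported in Tables 30.2 and 30.3 all derive from the four confusion-matrix counts; a minimal sketch on invented label vectors:

```python
def classification_metrics(y_true, y_pred):
    # Confusion-matrix counts for a binary task (1 = high-risk CHD).
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)             # sensitivity
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(y_true)
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f_measure

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(classification_metrics(y_true, y_pred))  # (0.75, 0.75, 0.75, 0.75, 0.75)
```
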

30.4 Conclusions

In this research work, we aimed to find the best structure of a DNN for CHD risk prediction using the Framingham risk factors in the Korean population. Our proposed DNN has 5 hidden layers, with 17, 9, 5, 3, and 2 neurons, respectively. The proposed DNN-based model has then been


Table 30.3 Results of compared algorithms on KNHANES dataset

Algorithm                               Accuracy   Precision   Recall   Specificity   F-measure   AUC
FRS                                     0.5233     0.5188      0.7234   0.3207        0.6042      0.5220
NB                                      0.7306     0.6957      0.8256   0.6345        0.7551      0.7300
KNN (k = 10)                            0.7876     0.7549      0.8556   0.7188        0.7872      0.8021
DT (criterion = entropy)                0.7513     0.7562      0.7461   0.7565        0.7511      0.7533
RF (tree = 100, criterion = entropy)    0.8111     0.7827      0.8646   0.7570        0.8216      0.8108
SVM (kernel = rbf)                      0.8045     0.7960      0.8221   0.7867        0.8088      0.8044
Proposed DNN (17, 9, 5, 3, 2)           0.8267     0.7995      0.8750   0.7779        0.8355      0.8264

compared with the FRS and other prediction models based on data mining methods for CHD risk prediction at an early stage. As a result, the data mining-based models showed higher accuracy and AUC scores than the FRS on the Korean population. Moreover, the proposed DNN gave the highest accuracy (82.67%) and AUC (82.64%).

Acknowledgements This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (No. 2017R1A2B4010826).

References

1. World Health Organization (WHO). https://www.who.int/news-room/fact-sheets/detail/cardiovascular-diseases-(cvds). Accessed 1 Feb 2019
2. American Heart Association. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5408160/pdf/nihms852024.pdf. Accessed 1 Feb 2019
3. National Heart, Lung, and Blood Institute. https://www.nhlbi.nih.gov/health-topics/coronary-heart-disease. Accessed 1 Feb 2019
4. Nucleus Medical Media. http://www.nucleushealth.com/. Accessed 1 Feb 2019
5. Hausmann, H., Topp, H., Siniawski, H., Holz, S., Hetzer, R.: Decision-making in end-stage coronary artery disease: revascularization or heart transplantation. Ann. Thoracic Surg. 64(5), 1296–1302 (1997)
6. Diamond, G.A., Forrester, J.S.: Analysis of probability as an aid in the clinical diagnosis of coronary-artery disease. N. Engl. J. Med. 300(24), 1350–1358 (1979)
7. Greenland, P., LaBree, L., Azen, S.P., Doherty, T.M., Detrano, R.C.: Coronary artery calcium score combined with Framingham score for risk prediction in asymptomatic individuals. JAMA 291(2), 210–215 (2004)
8. Brindle, P., Jonathan, E., Lampe, F., Walker, M., Whincup, P., Fahey, T., Ebrahim, S.: Predictive accuracy of the Framingham coronary risk score in British men: prospective cohort study. BMJ 327(7426), 1267 (2003)


9. Sacco, R.L., Khatri, M., Rundek, T., Xu, Q., Gardener, H., Boden-Albala, B., Di Tullio, M.R., Homma, S., Elkind, M.S., Paik, M.C.: Improving global vascular risk prediction with behavioral and anthropometric factors: the multiethnic NOMAS (Northern Manhattan Cohort Study). J. Am. College Cardiol. 54(24), 2303–2311 (2009)
10. Kim, H., Ishag, M.I.M., Piao, M., Kwon, T., Ryu, K.H.: A data mining approach for cardiovascular disease diagnosis using heart rate variability and images of carotid arteries. Symmetry 8(6), 47 (2016)
11. Kim, J., Lee, J., Lee, Y.: Data-mining-based coronary heart disease risk prediction model using fuzzy logic and decision tree. Healthcare Inf. Res. 21(3), 167–174 (2015)
12. Kim, J.K., Kang, S.: Neural network-based coronary heart disease risk prediction using feature correlation analysis. J. Healthcare Eng. 2017 (2017)
13. Tan, P.N., Steinbach, M., Kumar, V.: Introduction to Data Mining, 1st edn. Pearson Education, Boston (2006)
14. Kweon, S., Kim, Y., Jang, M.J., Kim, Y., Kim, K., Choi, S., Chun, C., Khang, Y.H., Oh, K.: Data resource profile: the Korea national health and nutrition examination survey (KNHANES). Int. J. Epidemiol. 43(1), 69–77 (2014)
15. Wilson, P.W., D'Agostino, R.B., Levy, D., Belanger, A.M., Silbershatz, H., Kannel, W.B.: Prediction of coronary heart disease using risk factor categories. Circulation 97(18), 1837–1847 (1998)

Chapter 31

Mining High Quality Medical Phrase from Biomedical Literatures Over Academic Search Engine

Ling Wang, Xue Gao, Tie Hua Zhou, Wen Qiang Liu and Cong Hui Sun

Abstract Evidence-based medicine (EBM) is an inevitable trend in the development of medicine. It effectively improves the treatment of diseases by combining clinical experience, medical knowledge, and individualized biological information of patients. The biomedical literature, as an important source of medical knowledge for EBM, can help to discover comorbidity or disease progression patterns. However, due to the strong professionalism of the biomedical literature compared with general language, extracted medical phrases suffer from semantic ambiguity. Therefore, we propose the high quality medical phrase mining approach (HQMP), which reduces the overdependence on frequency in multiple phrase evaluation and eliminates the semantic ambiguity introduced by the bilateral expansion of phrase boundaries. We use the proposed approach to analyze the pathogeny, diagnoses, and treatments of ophthalmopathy with central retinal vein occlusion (CRVO) and glaucoma, and demonstrate the diagnostic frequent disease co-occurrence and sequence patterns mined from the medical literature, to improve the credibility of evidence-based medicine for the prevention and treatment of diseases. The experimental results show that HQMP not only improves the quality of medical phrases effectively, but is also fast. Keywords EBM · Biomedical literature · High quality medical phrase · NLP

L. Wang · X. Gao · T. H. Zhou (B) · W. Q. Liu · C. H. Sun Department of Computer Science and Technology, School of Computer Science, Northeast Electric Power University, Jilin, China e-mail: [email protected] L. Wang e-mail: [email protected] X. Gao e-mail: [email protected] W. Q. Liu e-mail: [email protected] C. H. Sun e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_31


31.1 Introduction

With the rapid development of evidence-based medicine (EBM), it has penetrated all aspects of clinical practice and health decision-making all over the world [1]. The accuracy of medical evidence has a direct impact on the treatment and efficacy of disease [2], which has gradually brought medical evidence mining into public focus. Generally, the evidence sources of EBM can be divided into two categories: medical images and medical text. Medical text includes medical records, medical forum data, and the biomedical literature. The flourishing of social networks has turned EBM research toward the online biomedical literature [3, 4]. However, the rapid growth of the biomedical literature has not been paralleled by growth in quality. Therefore, how to mine medical knowledge with high speed and high efficiency becomes an increasingly challenging problem [5, 6]. In this paper, we propose the HQMP approach, which couples dynamic adjustment of phrase segmentation with multiple phrase evaluation and focuses on improving the quality and number of medical phrases. First, potential high quality medical phrases are found based on the multiple phrase evaluation. Then, dynamic phrase segmentation is used to choose the most appropriate phrase segmentation scheme. Finally, semantic ambiguity is eliminated and the length of medical phrases is expanded.

31.2 Related Work

There is rich research on EBM with NLP, which can be traced back to the linguistic string project medical language processor (LSP-MLP) [7]. Recent examples include detecting the symptoms of diseases from medical records to enrich medical dictionaries [8], analyzing health records to adjust eating habits and avoid corresponding harm [9], automated assessment of the quality of care and disease treatment [10, 11], and detection of postoperative complications [12]. The NLP methods applied to EBM are as follows. Rule-based NLP methods were used to parse radiograph reports and obtained valuable clinical findings on pulmonary tuberculosis patients [13]. Evidently, rule-based methods face enormous challenges because of serious portability problems across datasets, which lead to low accuracy and long running times. NLP research has conducted in-depth studies of phrase mining and introduced part-of-speech (POS) tags into shallow parsing methods [14]. POS tags have since been widely used in automated assessment of quality of care to adjust patients' care and treatment and to improve the professionalism of medical workers [15, 16]. More recently, statistics of the data distribution in medical text have been used to further improve the accuracy of medical phrase quality estimation. Statistics-based methods were thus proposed to identify semantic units through indicator features, such as positions and labels, and statistical features, including phrasal frequencies

31 Mining High Quality Medical Phrase from Biomedical Literatures …

283

and dependency features. These methods segment a query into phrases and calculate the mutual information of two consecutive words; if the mutual information exceeds a set threshold, the pair is viewed as a new semantic unit [17].
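The mutual-information rule of [17] can be sketched as follows: estimate the pointwise mutual information of each pair of consecutive words from corpus counts and merge a pair into one semantic unit when it exceeds a threshold. The toy corpus and the threshold value are illustrative only.

```python
import math
from collections import Counter

# Toy corpus; real input would be sentences from biomedical abstracts.
corpus = ("central retinal vein occlusion is a retinal vein disorder "
          "central retinal vein occlusion often accompanies glaucoma").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
n = len(corpus)

def pmi(pair):
    # log p(w1, w2) / (p(w1) p(w2)), estimated from the counts above.
    p_pair = bigrams[pair] / (n - 1)
    p1 = unigrams[pair[0]] / n
    p2 = unigrams[pair[1]] / n
    return math.log(p_pair / (p1 * p2))

THRESHOLD = 1.0  # illustrative; in practice tuned on the target corpus
units = [" ".join(pair) for pair in bigrams if pmi(pair) > THRESHOLD]
print("retinal vein" in units)  # True
```
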

31.3 Motivation

31.3.1 Multiple Evaluation

Part of the phrase evaluation is shown in Fig. 31.1. Some phrases satisfy the characteristics of medical quality phrases and are directly marked as medical quality phrases in the first evaluation. In addition, the medical phrases falling between the double frequency thresholds are considered probable quality phrases, and they undergo a second phrase evaluation. By comparing the applicability and completeness of the selected medical quality phrases and the probable quality phrases, some potential medical phrases can be obtained from the probable quality phrases. For example, assume that the adjacent word sequences "w1 w2 w3 w4 w5" and "w1 w2 w3" are quality phrases (w1 = High, w2 = Quality, w3 = Medical, w4 = Phrase, w5 = Mining, w6 = Algorithm). With the existing methods, if the frequency of "w1 w2" does not satisfy the set threshold, it is directly judged as a non-high quality medical phrase, and the methods fail to mine "w1 w2 w3 w4" and "w1 w2 w3 w4 w5". Note that phrase frequency and phrase length show opposite trends: as phrase length increases, frequency decreases.

Fig. 31.1 Phrase segmentation


31.3.2 Phrase Segmentation

The existing medical phrases are selected as the centers for expanding boundaries; the phrase length is then extended to find new medical phrases and to eliminate the semantic ambiguity of medical phrases. Several possible segmentations of the same word sequence contribute to non-ideal segmentation. We assume that the word sequence may be divided in different ways:

High Quality / Medical Phrase Mining / Algorithm
High Quality / Medical Phrase / Mining Algorithm
High Quality / Medical Phrase / Mining / Algorithm

According to the statistic scores, assume that "w1 w2", "w3 w4", and "w3 w4 w5" all have high statistic scores, above the set threshold. If the statistic scores of "w1 w2" and "w3 w4" are both higher than that of "w2 w3 w4", the segmentation result will be the first situation. However, if the statistic scores of "w3 w4" and "w3 w4 w5" are equal, it is difficult to choose between the first and the second segmentation. Therefore, we propose moving the left and right boundaries of the phrase, evaluating the new phrases formed together with the phrase, and thereby finding more medical quality phrases. As shown in Fig. 31.2, the existing medical quality phrase wl is close to the punctuation, and the split points are set at the two sides of wi: Ls (the left segmentation boundary) and Rs (the right segmentation boundary). In addition, if the word sequence wi located to the left of Ls is not empty (wi1, wi2 … wim), Ts moves and P(wi | wl) is calculated at the same time, as shown in Fig. 31.1.

Fig. 31.2 The precision-recall curves among different algorithms on CRVO dataset

31 Mining High Quality Medical Phrase from Biomedical Literatures …


31.4 High Quality Medical Phrase Mining Approach

In this section, a novel method is introduced that automatically adjusts the divided positions and moves forward to obtain quality phrases. First, we introduce the related definition.

Definition: Given a word sequence C = {w1 w2 … wx} of length x, we want to divide C into S = {s1 s2 … sy}. The subsequences s1, s2, …, sy are mutually independent; that is, adjacent subsequences do not overlap. To express the phrasal segmentation, we use B = {b1', b2', …, bu'} to represent the optimal split positions.

Various segmentations of the same word sequence can contribute to an unideal segmentation, as in the aforementioned example. The first step is to find the original segmentation positions by frequency and to record these positions, which contain the left boundary and right boundary, in an index set. The start index is then used to examine whether the left-adjacent and right-adjacent phrasal segments are frequent. Finally, we generate an indicator of whether bu forms a quality segment.
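A minimal sketch of this last step: starting from split positions found by frequency, each resulting segment is checked against a frequency threshold to produce the quality indicator. The `quality_indicators` helper, the threshold of 5, and the frequency table are invented for illustration and only approximate the procedure described above.

```python
# Hypothetical sketch: given candidate split positions, mark which of the
# resulting segments are frequent enough to count as quality segments.

def quality_indicators(words, boundaries, freq, threshold=5):
    indicators = []
    # Pair up consecutive split positions into (start, end) spans.
    spans = list(zip([0] + boundaries, boundaries + [len(words)]))
    for start, end in spans:
        segment = " ".join(words[start:end])
        indicators.append(freq.get(segment, 0) >= threshold)
    return indicators

words = "high quality medical phrase mining".split()
freq = {"high quality": 12, "medical phrase mining": 8, "medical": 2}
print(quality_indicators(words, [2], freq))
# [True, True]
```

A boundary whose adjacent segments both test frequent would be kept; otherwise the boundary would be a candidate for the left/right adjustment described in Sect. 31.3.2.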

31.5 Experiments

31.5.1 Accuracy

The advantage of HQMP is demonstrated by comparing precision and recall with the baselines. Precision is the ratio of the number of medical quality phrases (mqp) to the number of candidate phrases (cp). Recall is the ratio of the number of medical quality phrases to the total number of medical quality phrases (tmqp). The formulas for precision and recall are as follows:

precision = Num(mqp) / Num(cp)    (31.1)

Recall = Num(mqp) / Num(tmqp)    (31.2)
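Equations (31.1) and (31.2) computed directly; the counts below are invented for illustration and are not the paper's experimental numbers.

```python
# Precision and recall as defined in Eqs. (31.1) and (31.2).

def precision_recall(num_mqp, num_cp, num_tmqp):
    precision = num_mqp / num_cp    # Eq. (31.1): quality phrases / candidate phrases
    recall = num_mqp / num_tmqp     # Eq. (31.2): quality phrases / total quality phrases
    return precision, recall

# Illustrative counts only:
p, r = precision_recall(num_mqp=80, num_cp=100, num_tmqp=200)
print(p, r)  # 0.8 0.4
```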

In the experiments, treatments, drug names, and disease names are regarded as medical quality phrases, because the datasets are related to ophthalmic diseases. If a phrase produced by HQMP is found in Wikipedia, it is regarded as a high quality medical phrase. However, this evaluation carries some bias, since whether a phrase is labeled a quality phrase relies largely on Wikipedia. As seen from Figs. 31.2 and 31.3, the precision-recall curve of HQPM is higher than those of the baselines, with an obvious advantage in the recall range between 0.3 and 0.5 in Fig. 31.2. It is noteworthy that the precision-recall curve of HQPM takes on


Fig. 31.3 The precision-recall curves among different algorithms on Glaucoma dataset

an ascending trend when recall is between 0.3 and 0.4. The reason is that the medical literature in the first dataset shares the same theme, which leads some papers to have many common medical phrases. This in turn causes the number of medical phrases to grow faster than the number of candidate phrases.

31.5.2 Running Time

We selected different numbers of biomedical articles to randomly form datasets of different sizes, and then compared the running times of the four algorithms. As shown in Fig. 31.4, the running time of all four algorithms increases with the size of the dataset; notably, HQMP shows better running-time performance than the other baselines.

31.6 Conclusions

In this paper, we introduced a novel approach to extract medical quality phrases from medical literature. Unlike existing techniques, the proposed method not only evaluates medical quality phrases from different perspectives, but also increases the number of evaluations to avoid misleading quality estimation. The phrase evaluation of HQMP interacts with bilateral expansion of phrase boundaries to eliminate semantic ambiguity and to mine new medical phrases. In addition, experimental results on authoritative medical literature show that our approach performs significantly better than comparable approaches. In the future, we would like to explore how


Fig. 31.4 Running Time comparison among four algorithms

to predict the appearance of diseases according to patients' presenting symptoms and clinical records.

Acknowledgements This work was supported by the National Natural Science Foundation of China (No. 61701104).


Chapter 32

Current State of E-Commerce in Mongolia: Payment and Delivery

Oyungerel Delger, Munkhtuya Tseveenbayar, Erdenetuya Namsrai and Ganbat Tsendsuren

Abstract We have studied the historical development of the e-commerce business, e-commerce companies, online payment options, and goods delivery services and their forms, and selected certain indicators for unification. Two hypotheses were stated within the research: e-commerce companies in Mongolia have successfully implemented online payment methods, and also goods delivery services. The purpose of the research has been to cover all companies operating actively in the e-commerce market that follow the B2C (Business to Customer) model. In total, 70 companies were selected based on information analyzed from official sources. Regarding the results, the companies offer 24 types of tangible and intangible goods, and 20% of them sell e-goods. The researchers found that there are 13 traditional and electronic payment methods in Mongolia as of today; every company has implemented at least 1 online payment method, the maximum is 12 methods, and the average is 5 to 6 methods. Three goods delivery models exist, and 56 of the 70 companies offer tangible goods. 84% of them have already solved the issue of goods delivery. 72% of the companies with delivery service have their own department or employee for it, and the remaining 28% use domestic or international postage services. Half (50%) of the 56 companies use local transportation to deliver goods to rural areas. Therefore, regarding the results, we assume the hypotheses are proved and the relevant conclusions were made.

Keywords E-commerce companies · Payment methods · Goods delivery service · Social media · Fintech (finance technology)

O. Delger (B) · M. Tseveenbayar · E. Namsrai · G. Tsendsuren
School of Business Administration and Humanities, Department of Management Information System, Mongolian University of Science and Technology, Ulaanbaatar, Mongolia
e-mail: [email protected]
M. Tseveenbayar e-mail: [email protected]
E. Namsrai e-mail: [email protected]
G. Tsendsuren e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2020
J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_32


O. Delger et al.

32.1 Introduction

E-commerce activities have been studied internationally in sufficient depth, but in Mongolia the development of e-commerce is at an early stage and has not been studied enough. A textbook on the behavior of Mongolian Internet users and its analysis results from 2008 to 2011 was published by Ganbat Ts. and other researchers. It covers themes such as e-commerce statements, fundamental e-commerce functions, concepts of user behavior, theoretical analysis of statements and behavioral attitudes and their models, analysis of foreign countries' practices, statistical information, behavioral analysis of e-business and services, and some technologies and their results. The book has been taken as the fundamental basis for our research.

The purpose of the study. We aimed to identify e-commerce companies with the B2C model operating actively in Mongolia, and to analyze their payment options and use, as well as goods delivery issues.

Hypothesis. The hypotheses of the study are that e-commerce companies have implemented enough payment method options, and that the goods delivery issues of e-commerce companies have been solved.

Methodology of the study. The survey methodology was adapted from international web analytical systems; the companies were selected based on their social network site visits and merchant contracts with banks or other official finance service providers. The classification of payment methods and goods delivery was processed based on internationally issued research papers and on information provided by service providers. We used descriptive statistical methodology on the collected data.

32.2 Methodology

We have researched the historical development of e-commerce, e-commerce business models, influencing indicators, the research model and hypotheses, and online payment and goods delivery systems.

32.2.1 E-Commerce

Although there are many definitions of e-commerce from different researchers depending on context, we have selected three definitions for our study: E-commerce is a component of e-business that provides selling services for tangible and intangible products using Internet-based technology [6]. E-commerce is the selling of goods and services using computer technology and networks [7]. E-commerce is a complex activity of buying and selling goods and services based on the data exchange principle [15].

32 Current State of E-Commerce in Mongolia: Payment and Delivery


32.2.2 Payment Issues

Nowadays common customer payment methods are cash, checks, debit and credit cards, money transfers, and other modern fintech methods such as QPay, Lend.mn, etc. Online payment is a type of payment executed electronically. The advantages of online payments are reduced transaction costs, lower prices of goods and services, reduced opportunity for corruption and criminal activities, and availability regardless of time and place.

32.2.3 Goods Delivery System

As a result of the rapid development of e-commerce, demand for companies that deliver goods and solve logistical issues is increasing. One of the biggest challenges for entrepreneurs and companies is solving the logistics of delivering goods to end users/customers. Delivery and distribution operations are the main factors influencing the modern supply chain market. Goods delivery services are the organization of logistics services by a supplier or a shop to deliver goods to customers. Currently there are three models of goods delivery: traditional, independent, and global [5] (Fig. 32.1).

The traditional model describes the delivery of goods from producers, via the services of a logistics company, to wholesalers and retailers, and finally to buyers [5]. In the independent model, a company does not use the delivery services of other companies but executes delivery itself based on customer orders; companies that buy goods from foreign markets use this model [5]. Chinese logistics infrastructure has been revolutionized over the last 10 years, and companies that buy their products in China and resell them in Mongolia have implemented the global model.

Fig. 32.1 Goods delivery service (traditional, independent, and global models)


32.3 Current State of E-Commerce in Mongolia

In this part we have examined e-commerce companies, payment methods, and goods delivery issues. The analysis results were processed using basic statistical methods.

32.3.1 Data Collection and Data Analysis

We contacted the State Registration Agency as an official source to obtain official information about the companies operating in the e-commerce market; however, the attempt was unsuccessful, as the requested information about operation types is not registered systematically in any state organization. Therefore, using the Internet, we compiled the list of companies that have merchant contracts with Mongolian commercial banks. We also collected information from reports published by international research and analytical companies. From this collected data, we listed the websites of 105 e-commerce companies as of today according to the indicators below:

1. The e-commerce business model is B2C.
2. The number of website visitors must be high (for detail, see Sect. 32.2.3).
3. Must be active on social network sites (Facebook likes, YouTube subscribers, etc.).
4. Must have a merchant contract for online payment methods (QPay, MostMoney, and other commercial banks).

Of the 105 companies filtered by the above indicators, 35 were excluded from the study after visiting and analyzing all companies' websites, due to lack of permanent active operations, a small number of visitors, or an inactive (blocked) website. Finally, 70 companies that met all criteria and operate actively were selected for further detailed analysis.

Companies. First, we examined whether e-commerce is the main business operation of the 70 researched companies. Afterward, visitor numbers and social network site positioning indicators on international analytical websites were analyzed. Then each of the 70 companies' websites was visited, and the types of goods and services they offer to customers were analyzed in detail.

Figure 32.2 shows the result: only 19 companies (27%) have e-commerce as their main business operation, while 51 (73%) run it as a supporting business to their main operations (Fig. 32.3). We used two international analytical websites (www.alexa.com and www.similarweb.com) to analyze and describe the ranking of the 70 researched companies by their

Fig. 32.2 Main business activity of the companies


(e-commerce: 19 companies, 27%; other business activities: 51, 73%)

Fig. 32.3 Ranking of the researched companies (1–500: 17 companies, 24%; 501–1000: 7, 10%; 1001–2000: 15, 22%; 2001–3000: 9, 13%; 3001–4000: 5, 7%; 4001–5000: 3, 4%; 5001 and more: 14, 20%)

indicators, shown in Fig. 32.3. Each company was analyzed on these analytical websites. As Fig. 32.3 shows, 24 of the 70 companies rank between 1st and 1000th place, another 24 between 1001st and 3000th place, and the remaining 22 rank lower, i.e., beyond 3000th place. We included the 14 companies ranked beyond 5001st place in the study based on their active engagement on their social network sites. It can be summarized that some Mongolian e-commerce companies are operating successfully and making progress in their development. It is assumed that the more visitors a company's social network sites have, the more publicly accepted the company is as a service and product provider. There are 12 companies in Mongolia with more than 200,000 likes and followers and 9 companies with fewer than 10,000 likes and followers. See Fig. 32.4 for more detail.

Fig. 32.4 Amount of visitors of the companies' social network sites

(fewer than 10,000: 9 companies, 13%; 10,001–20,000: 9, 13%; 20,001–40,000: 8, 11%; 40,001–60,000: 8, 11%; 60,001–100,000: 13, 19%; 100,001–200,000: 11, 16%; 200,001 and more: 12, 17%)



Fig. 32.5 Number of companies sorted by type of goods and services they offer

The 70 e-commerce companies deliver, counted with duplication, 24 types of goods and services to their customers. Figure 32.5 shows the number of companies sorted by the type of goods and services they offer.

Payment Methods. We have found that traditional commercial and e-commerce companies operating in Mongolia use the following 13 payment methods:

1. Cash
2. POS /a card from any bank can be used/
3. mPOS /mobile phone, tablet/
4. e-Commerce system for retail business
5. Money transfer /through ATM, SocialPay, Internet banking, etc./
6. Transaction /MoneyGram, Western Union, etc./
7. Lend Wallet
8. Candy /electronic money, 1₮ = 1 Candy/
9. QPay /payment, money receipt, and money transfer using a QR code/
10. Most Money
11. Own electronic wallet /shoppy.mn/
12. Loyalty /GG, Redpoint, or own coupon points/
13. Other payment platforms /PayPal, American Express, etc./

We examined whether the 70 e-commerce companies have implemented these 13 payment methods, with the results described below. In Fig. 32.6 the payment methods are sorted in ascending order from bottom to top, from basic to advanced methods used by e-commerce companies. As a result, we can see that companies are implementing modern new technologies in their payment systems.

Fig. 32.6 Implemented payment methods of e-commerce companies (as of 2019)

Regarding international (cross-country) payment methods, the results show that the "e-commerce payment system" and international money transfers are used relatively rarely. A company offers 5 to 6 payment methods on average. E-commerce websites such as www.shoppy.mn, www.bsb.mn, and www.mmarket.mn offer a sufficient number of payment methods for customers. 20% of those 10 companies do only e-commerce business.

Goods Delivery Services. The goods delivery services of the 70 companies in the research were analyzed and classified by goods delivery model; the result is shown in Fig. 32.7. 56 of the 70 researched companies sell tangible goods, which means they are required to deliver goods to customers. The remaining 14 companies sell electronic goods such as movies, telecommunication services, and entertainment services. From the results, we found that 9 of the 56 companies that require delivery services do not have them.

Fig. 32.7 Number of e-commerce companies with goods delivery service

(with delivery service: 47, 67%; without delivery service: 9, 13%; delivery services not needed: 14, 20%)


32.4 Conclusion

The results of the research are concluded in three parts: e-commerce company research, online payment research, and goods delivery service research.

1. E-commerce company research: There are 105 e-commerce companies using the B2C model in Mongolia as of today; 70 (66%) of them operate actively.
2. Online payment research: According to the results, there are currently 13 payment methods in Mongolia, 8 of which are new advanced tools for online payment.
   • A company offers 5–6 methods on average.
   • Hypothesis 1 is proved.
3. Goods delivery service research: 56 of the 70 companies are assumed to need goods delivery services. 47 (84%) of them already deliver goods to customers, but 9 (16%) have no delivery service.
   • Of the 47 companies with goods delivery service, 34 (72%) have a department/employee for delivery, 10 (22%) cooperate with a delivery company, and the remaining 3 (6%) use the goods delivery services of international courier companies.
   • 19% of the companies use Mongol Post, in which the state figures as a shareholder, and 23% use local transport to carry goods along for delivery to rural areas.
   • Hypothesis 2 is not proved.

We assume it is necessary to enrich the operations of e-commerce companies with evidence-based information and indicators, to reflect changes annually, to research factors influencing development, and to maintain a database of e-commerce companies' information and collected data on companies using the B2C model in their e-commerce business.
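The shares quoted in these conclusions can be checked arithmetically; the counts come from the survey above, and the percentages in the text are rounded (e.g., the 22% for cooperation with a delivery company corresponds to 10/47 ≈ 21.3%).

```python
# Arithmetic check of the delivery-service shares reported in the survey.
tangible_sellers = 56
with_delivery, without_delivery = 47, 9
own_dept, domestic_courier, intl_courier = 34, 10, 3

# Internal consistency of the counts:
assert with_delivery + without_delivery == tangible_sellers
assert own_dept + domestic_courier + intl_courier == with_delivery

def pct(part, whole):
    return part / whole * 100

print(round(pct(with_delivery, tangible_sellers)))  # 84
print(round(pct(own_dept, with_delivery)))          # 72
```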

References

I. In Mongolian

1. Ganbat, T.S.: Study on Behavior Trends of Internet Users in Mongolia. Nom Khur, UB, 198 p. (2013)
2. Oyungerel, D., Ganbat, T.S., Munkhtuya, T.S., Erdenetuya, N.: Study on current state of e-commerce. In: Khureltogoot-2018 Academic Conference Publishing, UB, pp. 14–27. National University of Mongolia Press (2018)
3. Mongol Bank: Payment system of Mongolia
4. Law on National Payment System
5. Standard of Mongolia on e-commerce goods delivery service


II. In English

6. Esmaeilpour, M.: An empirical analysis of the adoption barriers of e-commerce in small and medium sized enterprises (SMEs) with implementation of technology acceptance model. J. Internet Bank. Commer. 21(2) (2006)
7. Khosrow-Pour, M.: E-Commerce for Organizational Development and Competitive Advantage. IGI Global (2013). ISBN 9781466636231
8. Laudon, K.C., Traver, C.G.: E-Commerce 2018, Global Edition (14e). ISBN 9781292251721
9. Gunasekaran, A., Ngai, E.W.T., Cheng, T.C.E.: Developing an e-logistics system: a case study. Int. J. Logist. Res. Appl. 10(4), 333–349 (2007)
10. Logistics and e-commerce: the impact of e-commerce on logistics real estate, May 2013, CBRE
11. Mamta, Tyagi, H., Shukla, A.: The study of electronic payment systems (2016)
12. Kabir, M.A., Ahmi, A.: Adoption of e-payment systems: a review of literature (2015)
13. Barkhordari, M., Nourollah, Z.: Factors influencing adoption of e-payment systems: an empirical study on Iranian customers (2016)

III. Online Sources

14. https://searchcio.techtarget.com/definition/e-commerce
15. www.similarweb.com: the 70 companies were searched on this international analytical website, e.g., information about www.shoppy.mn was found at https://www.similarweb.com/website/shoppy.mn
16. www.alexa.com: companies were searched on this international analytical website, e.g., information about KhanBank.com was found at https://www.alexa.com/siteinfo/khanbank.com
17. Websites of the 70 companies taking part in the research
18. Websites of the 14 commercial banks in Mongolia

Chapter 33

The Emerging Trend of Accurate Advertising Communication in the Era of Big Data—The Case of Programmatic, Targeted Advertising

Sida Chen

Abstract Accurate communication has always been the goal of advertising. Accurate advertising based on big data has maintained rapid growth both at home and abroad. The core essence of precision advertising is to show the right content to the right people at the right time. Programmatic buying is the core of big data precision advertising, and RTB is a purchasing method that realizes programmatic buying with multiple market players. Although technology has become a key element driving the development of the advertising industry, creativity remains the core competitiveness of the advertising industry. At present, attention must be paid to the challenge of balancing accuracy against user privacy.

Keywords Big data · Precision advertising · Programmatic buying

The advertising industry has long circulated a famous saying: "I know that half of my advertising costs are wasted, but unfortunately, I don't know which half." This problem, posed by John Wanamaker (1838–1922), the famous American advertising master and father of department stores, has been called the "Goldbach conjecture" of advertising circles, and it has puzzled advertisers for more than 100 years. The application of big data provides important technical support for precision advertising and opens the era of precision advertising. In a sense, big data precision advertising can be considered the inevitable result of the convergence between the "precise guidance" marketing concept and new technology.

S. Chen (B) School of Advertising, Communication University of China, Beijing 100024, China e-mail: [email protected] New Media Communication Research Center, Min Jiang University, Fuzhou 350121, China © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_33


S. Chen

33.1 Why Should Advertisements Be Accurate

33.1.1 Accuracy Has Always Been the Goal of Advertising

How to convey advertising information accurately and achieve precise targeting has always been the goal of the advertising industry. Market segmentation theory, guided by the constant pursuit of accuracy, is a fruitful achievement of this effort. Market segmentation refers to the process by which enterprises divide customers into several customer groups according to certain standards; it helps enterprises select and determine the target market. From geographic factors to demographic factors, from behavioral factors to consumer psychological factors, every basis of market segmentation aims to target specialized consumers ever more precisely, in order to achieve accurate marketing communication. Finding consumers who are genuinely interested in specific products is the key issue in the pursuit of accurate marketing communication, and it can be greatly facilitated by the data-driven logic of the big data era. In the days of small data, there were insurmountable obstacles to accuracy. With the rapid development of the Internet industry and the formation of ultra-large-scale data platforms, the scientific forces represented by big data technology are increasingly involved in the field of advertising. Precision advertising based on big data is becoming a new factor of productivity; it promotes the rapid development of the advertising industry and brings a revolution in precision advertising.

33.1.2 The Value of Precision Advertising

Academic circles generally agree that precision advertising is a form of accurate marketing that can carry out personalized, one-to-one advertising communication. Throughout the development of precision advertising, relying on different technologies and platforms, the degree of accuracy has continuously improved. Accurate advertising based on big data refers to real-time crawling and analysis of target-consumer data by relying on Internet advertising networks (Ad Network) and ad exchanges (Ad Exchange), applying big data information retrieval, audience targeting, and data mining technologies to promote the dissemination of highly relevant advertising information according to the characteristics and needs of consumers [1]. The core essence of precision advertising is to show the right content to the right people at the right time.

Compared with traditional advertising, precision advertising has very prominent advantages. First, it can accurately lock onto the target audience and accurately portray the user: Internet technology can use users' browsing records, user IDs, and social network data to identify users and obtain data on user attributes, so as to achieve one-to-one communication.

33 The Emerging Trend of Accurate Advertising Communication …


Fig. 33.1 Consumer portrait

Secondly, the integrated analysis of big data can predict the purchasing intention of the target audience relatively accurately, making the pushed content more targeted and enabling personalized customization. Third, the timing of communication is more opportune: precision advertising can release ad messages to specific consumers in terms of both time and place, so that they may suit their needs; to a certain extent, this avoids the aesthetic fatigue and intentional avoidance users show toward traditional mass advertisements. Fourth, the effect of communication is more immediate and easier to evaluate: precision advertisement based on big data greatly improves the individualization and relevance of advertisements and the user experience of advertising. Fifth, it improves the efficiency of the advertising process and reduces costs: the programmatic advertising purchase system replaces traditional manual operation, and in less than one second after a user opens a website, the whole process from issuing bidding instructions to the final display of the ad on the user's screen is completed (Fig. 33.1).

33.2 The Development Status of Big Data Precision Advertising

Big data has had a significant and profound impact on many fields worldwide, and advertising is no exception. Driven by big data technologies, advertising has entered a golden era characterized by "precision" and has embarked on a period of high-speed development. Take RTB advertising as an example: when it started in the United States in 2010, RTB revenue reached 396 million US dollars, rising to 986 million US dollars in 2011, an increase of 149%. In 2012, RTB advertising revenue doubled again to nearly 2 billion US dollars. In 2013, RTB advertising revenue in the United States exceeded 3.3 billion US dollars, with an average annual growth rate of 106.3%, far exceeding the growth rate of overall Internet display advertising in the same period. RTB advertising revenue reached 9 billion US dollars in 2017, accounting for 29% of all Internet advertising revenue [2]. eMarketer, a US Internet advertising research firm, predicts that RTB advertising will continue to grow strongly in double digits. China's big data precision advertising shows the same trend. In 2016, the scale of China's programmatic buying and display advertising market was ¥20.53 billion, an increase of 78.5% over 2015, and it is estimated to reach ¥67.09 billion by 2019. Since 2012, programmatic buying has maintained an annual growth rate of more than 100% [3]. In the long run, China's programmatic buying market is still in an early stage of development, with much room for growth (see Fig. 33.2).

S. Chen

Fig. 33.2 China's programmatic buying and display Ad. market scale in 2012–2019

33.3 The Foundation of and Measures to Achieve Accurate Advertising

33.3.1 Big Data and Precision Advertising

Accurate advertising is advertising driven by big data. The core feature of big data is that "everything can be quantified": language, sound and images, daily consumption, geographical location, and even human communication, relationships, experience, and emotion can be recorded and collected in the form of data. Big data technology can quickly extract valuable information from various types of data, and ad targeting technology plays a key role in realizing precision advertising. It collects information about users' Internet-related behavior over a certain period of time, predicts users' preferences or interests, and then, based on this prediction, places advertisements on specific terminal devices through the Internet.


Big data provides a new communication logic for advertising. Using big data technology to record, store, and analyze massive user data makes it possible to identify individual consumers' demographic attributes, behavioral characteristics, hobbies, and consumption habits. Advertising can thus accurately target consumers, understand their needs more effectively, and seize the critical moment of communication with them. This truly realizes the transformation of advertising to a "consumer-centered" model and also makes the evaluation of the advertising process and its effect more controllable.

33.3.2 Programmatic Buying Is the Core of Big Data Precision Advertising

Media delivery is an important link in advertising communication: whether advertising information reaches the target consumers accurately is a key embodiment of precision. Programmatic Buying is an important way to deliver advertising accurately. It refers to the process of automatically executing media purchasing on behalf of advertisers through a digital platform. Based on data and technology, Programmatic Buying automates the trading and delivery management of advertisements and accurately targets the purchasing audience. Compared with conventional manual purchasing, it greatly improves the efficiency of advertising transactions, expands the scale of advertising sales, and optimizes the effectiveness of advertising messages. Depending on whether the transaction is public, Programmatic Buying can be classified into public and private transactions; public transactions mainly adopt the RTB mode.

33.3.3 RTB

Real-Time Bidding is a technique that uses third-party technology to evaluate and bid on each user's ad display request across millions of websites and mobile applications (apps). It is one way of realizing Programmatic Buying. Compared with traditional advertising, a core concept of RTB is selling audiences rather than ad space: on the same page, different users are exposed to different advertisements, customized by the advertising platform on the basis of measured user characteristics. RTB involves diversified market players, and realizing it requires the cooperation of different participants across the whole industry chain. The representative entities are mainly the following.

33.3.3.1 DSP (Demand-Side Platform)

The demand-side platform serves advertisers, providing a cross-platform, cross-media advertising platform. The key to Programmatic Buying lies in the capability of the DSP, which decides on behalf of advertisers whether to bid for an advertisement and how much to pay. With the help of the advertising trading platform, advertisers obtain exposure opportunities through real-time bidding, improving the efficiency and quality of advertisement purchasing.

33.3.3.2 Ad Exchange

The Internet advertising trading platform plays a role similar to a stock exchange, providing an online advertising trading market for buyers and sellers (DSPs and SSPs). The platform operates on the principle that the highest bidder wins, so that advertising prices are genuinely determined by the market.

33.3.3.3 SSP (Supply-Side Platform)

The SSP serves suppliers of advertising resources. With its help, a supplier aims to obtain the highest display price through bidding for the ad space in its inventory, maximizing the benefit from monetizing the resources and traffic in hand.

33.3.3.4 DMP (Data Management Platform)

The data management platform provides data management, data analysis, and data calling functions, and can be regarded as an important part of a DSP. By drawing on data from different sources, advertisers and advertising agents can make wiser decisions on media purchasing and advertising plan management.

33.3.4 Basic Process of Programmatic Buying

The complete process of delivering an accurate advertisement goes through the following steps. When a user visits an advertising medium, the SSP side sends an access request to the Ad Exchange. The SSP packages and distributes to each DSP the specific information of the ad space, such as the site to which the advertisement belongs, the lowest bid, and the user attributes matched by DMP analysis. Each DSP then bids for the ad display through RTB according to its customers' demands. The highest bidder gets the chance to display its advertisement in this ad space, and the advertisement is eventually displayed to the audience visiting the media [3]. It takes the winner only about 100 ms to complete the bidding process, which can be regarded as real time (see Fig. 33.3).

Fig. 33.3 RTB advertising industry chain
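The auction logic at the heart of this flow can be illustrated with a toy simulation. All DSP names, bid rules, and prices below are illustrative assumptions, not part of any real RTB API:

```python
# Toy simulation of the RTB flow described above: an exchange collects bids
# from several DSPs for one ad impression and awards it to the highest bidder.
# DSP names, pricing rules, and the bid request are purely illustrative.

def run_auction(bid_request, dsps, floor_price):
    # Each DSP returns its bid for this impression, or None if it declines.
    bids = {name: dsp(bid_request) for name, dsp in dsps.items()}
    valid = {n: b for n, b in bids.items() if b is not None and b >= floor_price}
    if not valid:
        return None  # the impression goes unsold
    winner = max(valid, key=valid.get)
    return winner, valid[winner]

# Hypothetical DSPs: each prices the impression from the user profile
# distributed by the SSP (cf. the DMP-matched user attributes above).
dsps = {
    "dsp_a": lambda req: 1.2 if "sports" in req["interests"] else None,
    "dsp_b": lambda req: 0.8,
}
request = {"user_id": "u42", "interests": ["sports", "travel"]}
print(run_auction(request, dsps, floor_price=0.5))  # ('dsp_a', 1.2)
```

In a real exchange this evaluation must finish within the roughly 100 ms budget mentioned above; the sketch only shows the "highest bidder wins" selection rule.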

33.4 Some Thoughts on Precision Advertising

33.4.1 Technology Has Become the Key Factor Driving the Development of the Advertising Industry

Advertising in the big data age is increasingly dependent on technology. Big data, precise targeting technology, and algorithms have changed the methods and processes of advertising operations, making them more efficient, accurate, and intelligent, and will become the core force of advertising industry reform. Although technology does not determine everything, many tasks become passive without the support of advanced technology, and improving the creative level and user experience of advertising requires more technical means. As a result, scientific and technological talents such as IT and artificial intelligence specialists, who were originally outside the traditional advertising industry, will be drawn into it, becoming an important talent type that participates in all aspects of the advertising industry.


33.4.2 Advertising in the Future Will Become More Intelligent

Most emerging technologies will be rapidly applied to advertising and marketing. As artificial intelligence, blockchain, the Internet of Things, and other related technologies mature, advertising will develop further in the direction of intelligence. In an environment of intelligent perception, advertising will become more ubiquitous as scenes change.

33.4.3 The Platform Trend of Social Media Will Have a Positive Impact on Precision Advertising

As more and more people use Facebook, WeChat, Weibo, and other major social media, the media industry has evolved toward platform-based development. The huge user base brings a huge variety of data resources, which is conducive to the development of precision advertising.

33.4.4 Creativity Is Still the Core Competitiveness of the Advertising Industry

It is undeniable that big data has brought about changes in advertising concepts and production methods, profoundly changing the form of advertising, greatly improving its effectiveness, and making it more accurate. But big data will not change the essence of the advertising industry, and the laws of development within the industry have not changed. Data serves creative marketing. Creativity is always the soul of the advertising industry, and its importance will be further strengthened; without creativity, data cannot be used well. In the era of big data, the programming of all links in the advertising industry is a general trend, and creativity itself has begun to show programmatic tendencies and characteristics. Programmatic creativity refers to the process of dynamically generating advertisements based on data [4]. But the advertising industry cannot fall into the trap of technological utopianism, because technology is only a means and a tool; the core goal is still to bring better returns for advertisers and better experiences for consumers. Programmatic advertising helps advertisers pinpoint the target consumers and the timing and location of delivery, solving the problems of "whom to say it to" and "where to say it". But the communication is not over: the key link of "how to say it" must still be solved. Creative content is the last stop connecting consumers, determining their experience, attitude, and purchase behavior. The power of creativity to reach people remains precious [5].

33.4.5 The Contradiction Between Accuracy and User Privacy Should Be Handled Properly

To meet users' needs accurately, it is necessary to understand users' core data, which inevitably involves personal privacy, such as personal preferences, habits, and interpersonal relationships. This makes many people feel that their privacy has been violated, and people do not know how enterprises will use this information. The large-scale use of personal information undoubtedly enlarges the risk of privacy infringement. The focus of protecting users' privacy is to prevent users' information from being abused by lawbreakers, which requires the joint efforts of government, industry, and individuals. First, the government should strengthen supervision, regularly rectify the big data market, and promptly detect and punish irregular behavior in the industry, preventing malicious use of user information as far as possible. Second, the industry should continuously optimize the classification algorithms for user labels and shield key data such as ID card numbers and mobile phone numbers while retaining conventional behavior labels, so as to avoid leaking users' core data without affecting the accuracy of audience targeting. Finally, users should enhance their awareness of privacy protection, avoid divulging personal information arbitrarily, and regularly clean up Internet cookies to avoid their data being exploited by criminals.

Acknowledgements This paper is a staged achievement of the 2017 open project of the New Media Communication Research Center (Minjiang University) of the Humanities and Social Sciences Research Base of Colleges and Universities in Fujian Province. Item Number: FJMJ2017A03.

References

1. Hong Lei, J.: Precision Advertising in the Age of Big Data, vol. 9, p. 25. People's Daily Press, Beijing (2015)
2. The US RTB digital display advertising expenditure accounted for 29.0% of the total. Advertising Trading Network. http://www.admaimai.com/news/ad201312032-ad110015.html (2017). Last accessed 09 Apr 2019
3. Prospects for China's Programmatic Buying Market. https://www.jiemian.com/article/1445791.html (2017). Last accessed 15 Apr 2019
4. Xing, H.Q., Lei, J.H.: New trend of advertising creativity in the era of big data. J. Zhejiang Media College 4 (2016)
5. Li, S., Fenglan, X.: New Media Advertising, vol. 9, p. 217. Zhejiang University Press, Hangzhou (2015)
6. Ping, L.J., Chen, S.: Creativity in Advertising: Principles and Practice, 8. Renmin University of China Press, Beijing (2018)

Chapter 34

Study on Automatic Generation of Teaching Video Subtitles Based on Cloud Computing

Xiangkai Qiu

Abstract Teaching videos are used in increasing numbers in teaching. Subtitles are an important element of teaching videos, but producing them is time-consuming and laborious. Based on the speech recognition interface of the Baidu Cloud Computing Open Platform, this paper designs and implements an automatic subtitle generation system for teaching videos. The system consists of four steps: audio extraction, audio segmentation, speech recognition, and subtitle generation. Finally, it generates teaching video subtitles in the standard SRT format. Taking teaching videos of Minjiang University as an example, experiments show that the system achieves high speech recognition accuracy, meets the requirements of daily subtitle production, and avoids the time-consuming and laborious manual subtitle adding process.

Keywords Teaching video · Automatic subtitle generation · Speech recognition

34.1 Introduction

Teaching videos present the knowledge and skills that teachers impart to students in video form to assist modern multimedia teaching. They help teachers vividly show content that cannot actually be demonstrated in the classroom, and record the teaching content faithfully, so that learners can review it repeatedly, anytime and anywhere. They are indispensable assistant tools in modern teaching [1]. With the continuous development of Internet technology, bandwidth is no longer the bottleneck, and video is used increasingly in teaching. At the same time, many online course formats centered on instructional video have emerged, such as Video Open Classes, Micro-Course Online Video, and MOOCs [2]. As an important element of teaching videos, subtitles are stipulated in Sect. 3, clause 5 of the Technical Standards for Shooting and Making Quality Video Open Courses issued by the Ministry of Education to ensure the quality of video open class shooting in Chinese universities [3].

X. Qiu (B) School of Journalism and Communication, Minjiang University, Fuzhou, Fujian 350108, China
e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_34



For a long time, producing teaching video subtitles has plagued many practitioners: the production process is cumbersome and the time cost is high. First, a text transcript must be dictated from the teaching video, and then SRT external subtitles must be made according to the time sequence. Studies show that making SRT external subtitles takes 4.5–6 times as long as the speech itself [2]. It is therefore of great significance to develop a system that can automatically generate teaching video subtitles. In this paper, the teaching audio separated from the teaching video is first transformed into an audio file that meets the interface format requirements of the Baidu Cloud Computing Open Platform. The speech is then converted into text through the platform's far-field speech recognition interface, and finally the SRT subtitle file is generated.

34.2 System Design

34.2.1 System Introduction

The system fully automatically generates SRT subtitles for teaching videos, which can be edited afterward. To achieve this goal, the following functional modules are implemented: audio extraction, audio segmentation, speech recognition, and subtitle generation. As shown in Fig. 34.1, the system first extracts the audio track from the teaching video and generates the target audio format as required; second, the generated audio file is divided into several speech fragments by a speech endpoint cutting algorithm, and the fragments are saved together with their entry and exit times; third, the speech fragments are recognized through the speech recognition interface of the Baidu Cloud Computing Open Platform; fourth, the standard SRT subtitle file is automatically generated by matching the recognized text with the saved entry and exit times of the speech segments.

Fig. 34.1 System structure: Teaching Video → Teaching Audio → Audio Segmentation → Speech Recognition → Subtitle Generation


34.2.2 Technical Scheme

Audio Extraction. This paper uses FFmpeg, an open-source video and audio solution, to record, transcode, and decode video and audio. Its optimized decoding algorithms make decoding fast with a small memory footprint, and the FFmpeg library supports almost all major audio and video coding formats [4]. The speech recognition interface documentation of the Baidu Cloud Computing Open Platform states that the supported speech formats are uncompressed pcm, uncompressed pcm-coded wav, and compressed amr, with sampling rates of 8 kHz or 16 kHz; 16 kHz mono audio is recommended [5]. FFmpeg provides very powerful conversion functions: simple commands can separate the audio from a video file and convert it into a format that meets the interface requirements. For example, run the following command line:

ffmpeg -i video.mp4 -ac 1 -ar 16000 audio.wav

The -i parameter specifies the input file video.mp4, -ac sets the channel count, and -ar sets the sampling rate, finally producing the required audio.wav file.

Audio Segmentation. First, the audio file is transformed into a waveform audio file that meets the Baidu speech recognition interface standard. Then, speech endpoint detection is carried out with the WEBRTC VAD (Voice Activity Detector) algorithm, which detects sentence boundaries in the audio. Finally, the segmented audio sentences are saved into audio files in sequence. The WEBRTC VAD algorithm models voice and noise with GMMs (Gaussian Mixture Models) and distinguishes them by the corresponding probabilities; its main advantage is that it is unsupervised and does not require strict training [6].

Speech Recognition. This system adopts the speech recognition API of the Baidu Cloud Computing Open Platform. The speech API provides an integrated development interface that is full-featured and easy to operate; developers can conveniently integrate it into applications to realize complete speech capability. Its main development interfaces include speech recognition (including long speech recognition), speech synthesis, and voice wake-up.
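The endpoint-detection step described earlier can be illustrated with a simplified energy-threshold segmenter. This is a stand-in for WEBRTC VAD, which is GMM-based, not the actual algorithm; it is shown in Python 3, and all parameter values are illustrative:

```python
# Simplified energy-based endpoint detection, a stand-in for WEBRTC VAD
# (the real detector models speech/noise with GMMs). Frames whose mean
# energy exceeds a threshold are treated as speech; the entry/exit times
# of each speech run are returned, which the subtitle generator later
# matches with the recognized text.

def detect_segments(samples, sample_rate, frame_ms=20, threshold=0.01):
    frame_len = sample_rate * frame_ms // 1000
    segments, start = [], None
    for i in range(0, len(samples), frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(s * s for s in frame) / max(len(frame), 1)
        t = i / sample_rate
        if energy > threshold and start is None:
            start = t                     # speech onset
        elif energy <= threshold and start is not None:
            segments.append((start, t))   # speech offset
            start = None
    if start is not None:
        segments.append((start, len(samples) / sample_rate))
    return segments

# 1 s of silence, 1 s of "speech", 1 s of silence at 16 kHz
sr = 16000
audio = [0.0] * sr + [0.5] * sr + [0.0] * sr
print(detect_segments(audio, sr))  # [(1.0, 2.0)]
```

A production segmenter would also merge short gaps and pad segment edges so that word onsets are not clipped.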

# Speech recognition via the baidu-aip Python SDK. APP_ID, API_KEY and
# SECRET_KEY are placeholders for the developer's own platform credentials.
from aip import AipSpeech

client = AipSpeech(APP_ID, API_KEY, SECRET_KEY)

def get_file_content(path):
    # Read the audio file as raw bytes
    with open(path, 'rb') as fp:
        return fp.read()

def get_baidu_asr_text(path):
    # Submit 16 kHz wav audio; dev_pid 1536 selects Mandarin recognition
    r = client.asr(get_file_content(path), 'wav', 16000, {
        'dev_pid': '1536',
    })
    return r['result']

Table 34.1 Subtitle format

Line no. | Format                                  | Example
1        | Subtitle header                         | 1
2        | Time --> time                           | 00:00:01,160 --> 00:00:01,720
3        | Subtitle content (can be multiple rows) | Hello everyone
4        | Blank line                              |
5        | Subtitle header                         | 2
6        | Time --> time                           | 00:00:02,100 --> 00:00:04,420
7        | Subtitle content (can be multiple rows) | Today, let's review first
8        | Blank line                              |
9        | Subtitle header                         | 3
10       | Time --> time                           | 00:00:05,220 --> 00:00:07,336
11       | Subtitle content (can be multiple rows) | Homework assigned in the last class
12       | Blank line                              |

The get_file_content method reads the voice file, while get_baidu_asr_text submits it to the Baidu speech recognition interface and returns the recognized text content.

Subtitle Format. There are many kinds of video subtitles; this paper adopts the text-based external SRT subtitle format required in the Ministry of Education document. Because this format is saved as text, the file is very small and suitable for network transmission. Its standard is simple and easy to produce, and any text editing tool can edit it. It is also compatible with most media player software, such as the commonly used Thunder Player, Storm Player, and QQ Player. Its basic format is shown in Table 34.1.
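Generating the SRT file from the recognized segments is then a matter of formatting: each cue is a sequence number, a "start --> end" time line, the text, and a blank line. A minimal sketch (written in Python 3 for brevity; the segment data and file name are illustrative):

```python
# Writing recognized segments out as a standard SRT file. Times are given
# in seconds, as produced by the audio segmentation step.

def srt_time(seconds):
    # SRT uses HH:MM:SS,mmm with a comma before the milliseconds.
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3600000)
    m, rem = divmod(rem, 60000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def write_srt(segments, path):
    # segments: list of (start_seconds, end_seconds, text) tuples
    with open(path, "w", encoding="utf-8") as f:
        for i, (start, end, text) in enumerate(segments, 1):
            f.write(f"{i}\n{srt_time(start)} --> {srt_time(end)}\n{text}\n\n")

write_srt([(1.16, 1.72, "Hello everyone"),
           (2.10, 4.42, "Today, let's review first")], "demo.srt")
```

The output reproduces the cue layout of Table 34.1, so the file can be opened directly by the players listed above.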

34.3 Demonstration

34.3.1 Development Environment

Operating system: Windows 7
Compiler environment: Python 2.7.15
SDK: Baidu Cloud Computing Open Platform speech recognition/synthesis RESTful API Python SDK


Fig. 34.2 Classroom environment (audio processing: noise reduction, echo cancellation, audio mixing)

34.3.2 Recording Environment

The cloud recording and broadcasting system used by Minjiang University is a software system developed by Beijing Power Creator Information Technology Co., Ltd. for recording the whole teaching process. The system integrates dual monitoring of classroom video and screen, remote recording and centralized management of courseware, and real-time live broadcasting of classroom content. It collects and compresses real-time video scenes and computer pictures from teaching points over the network, provides centralized storage and monitoring management, and offers a teaching platform integrating products, technology, information, and applications (Fig. 34.2). The front-end audio acquisition system uses a microphone array composed of several ceiling microphones. The captured audio is input into the acquisition card after noise reduction and echo cancellation by the audio processing unit, and is finally recorded and saved by the courseware recording software.

34.3.3 Experimental Results

The author randomly selected one course, "Web Design", from the teaching videos recorded in the above environment as a sample for speech recognition and SRT subtitle generation. In the final subtitle files, the recognition accuracy is about 90%; inaccuracies mainly occur in English expressions and related professional terms. The subtitles only need to be edited and proofread afterward, which greatly reduces the workload of producing subtitle files.


34.4 Conclusion

The automatic subtitle generation system designed in this paper can, to a certain extent, liberate practitioners from the heavy labor of adding subtitles manually, greatly improving the efficiency of producing teaching video subtitles. It is true that speech recognition accuracy fluctuates with the variety of teaching video materials and the large differences in terminology among disciplines. Therefore, future work will experiment with more types of teaching video materials and continue to optimize and adopt more advanced algorithms to make the automatically generated subtitles more accurate and practical.

References

1. Chen, C.: Research on Video-based Classroom Teaching Strategies. Central China Normal University, Wuhan (2011)
2. Wang, C.: Is it necessary to show explanatory subtitles in online teaching video: on the modification of the boundary conditions of redundancy effect. E-educ. Res. 275 (2016)
3. Technical Standards for Shooting and Making Quality Video Open Courses (2013). http://old.moe.gov.cn/ewebeditor/uploadfile/2013/02/21/20130221093755413.doc. Accessed 19 Apr 2019
4. Liu, L.: Realization of audio and video synchronization based on FFmpeg decoding. Computer Engineering and Design, vol. 34, pp. 1–2 (2013)
5. Baidu AI Open Platform REST API Document. http://ai.baidu.com/docs#/ASR-API/top. Accessed 19 Apr 2019
6. Ristic, D. (ed.): Learning WebRTC. Packt Publishing Limited, Birmingham (2015)

Chapter 35

Attention-Based Multi-fusion Method for Citation Prediction

Juefei Wang, Fuquan Zhang, Yinan Li and Donglei Liu

Abstract As the most common research output in academic exchange, the paper plays a vital role in knowledge communication, academic cooperation, and scientific research education. In traditional bibliometrics, however, the influence of a paper is generally evaluated quantitatively by its number of citations. Citation count is an important indicator for evaluating papers, but it suffers from a serious lag problem. Therefore, predicting the future influence of a paper from the meta-information generated during its publication can make up for this defect. In order to accurately predict the future citations of papers, this paper constructs an Attention Convolution Neural Network model and combines bibliometric and altmetric features to enrich the input vector. Experiments on data sets collected from WOS and ResearchGate show that the model improves accuracy compared with traditional prediction models.

Keywords Attention convolution neural network · Bibliometrics · Altmetrics

J. Wang · Y. Li · D. Liu (B) Computer School, Beijing Information Science and Technology University, Beijing 100101, China
e-mail: [email protected]
J. Wang e-mail: [email protected]
Y. Li e-mail: [email protected]
F. Zhang Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, Minjiang University, Fuzhou 350121, China
e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_35



35.1 Introduction

Throughout the academic exchange process, literature is the most common research output and plays a vital role. First, it serves as the main tool for disseminating academic information, describing scientific research results comprehensively, truthfully, and systematically. It also plays an important role in promoting academic exchange, the achievement of results, and the development of science and technology, as an important basis for exploring academic issues and conducting academic research. However, as the level of research continues to rise, the number of papers published each year increases exponentially, and relevant institutions need a relatively stable and general evaluation method to assess the impact of papers. Since Garfield [1] proposed citation indexes in 1955, which can be used to evaluate the influence of a single paper, evaluating papers by citations has become more and more common. However, a paper accumulates citations gradually, over half a year or more, which makes traditional citation-based indicators lag behind; newer papers cannot be evaluated, or are evaluated unfairly. This creates demand for early prediction of citation frequency, and citation prediction has become an important research topic in the evaluation of academic papers. Usually, citations are predicted from traditional bibliometric indicators, such as journal impact factor, h-index, and citations in previous years. In recent years, with the development of the Internet, academic activities have gradually moved from offline to online, producing new evaluation indicators such as likes, collections, and comments, and thus altmetrics came into being. Compared with traditional bibliometrics, altmetrics has the advantages of diverse evaluation indicators, high timeliness, and comprehensive evaluation results; on the other hand, it also has problems, such as the reliability and coverage of its data. In general, the emergence of altmetrics enriches the means of paper evaluation. In this paper, by combining bibliometric and altmetric indicators, a CNN + Attention model is constructed to predict the citation frequency of papers, and the results improve on traditional methods.

35.2 Related Work

Research on paper citation frequency mainly focuses on two aspects: feature selection [2–6] and prediction methods [7–11]. For the selection of indicators, the study by Mingyang Wang et al. [12] shows that the level of the first author and the quality of the article play a key role in a paper's future citations. Vanclay [13] believes that journal impact factor, article length, publication type, and journal self-citation affect citations.


Jamali et al. [14] studied the relationship between title types and downloads and citations. In altmetrics, Costas et al. [15] confirmed a positive, though relatively weak, correlation between altmetric scores and citations through a study of publications in the social sciences, humanities, medicine, and life sciences. The results of Thelwall et al. [16] show that rankings based on statistics from the academic social website ResearchGate correlate strongly with traditional paper rankings. In terms of prediction methods, Yan et al. [17] compared the prediction performance of linear regression, k-Nearest Neighbor, support vector machines, and classification and regression trees (CART); in their experiments k-Nearest Neighbor performed worst and CART performed best, with an accuracy of around 74%. Mistele et al. [18] use a neural network to predict the h-index, presenting a simple network that predicts the future citation counts of individual researchers from various data on their past output. Abrishami and Aliakbary [19] used an RNN model, and their method achieves better prediction accuracy than most current methods.

35.3 Methods

35.3.1 Feature Structure

Based on a summary of the features affecting citations of papers, this paper constructs four kinds of original features from two aspects: bibliometrics and Altmetrics. The detailed features are listed in Table 35.1.

Domain Hotspots. Research hotspots often affect the citations of papers [20], and the hotspots change from year to year. In a given year, field research hotspots can be characterized by all the topics published in that year. In this article, the topic of an article is constructed from its title, abstract, and keywords. To vectorize the paper text, we used doc2vec to process the title and abstract of each article, and word2vec to process the keywords of each article.

Table 35.1 Detailed features in the data set

Feature                Details
Hotspots               Title; Abstract; Keywords
Citation               3 years of citations
Journal feature        Journal partition; 3-year impact factor
ResearchGate feature   Score_RG; ResearchItem_RG; ReadsNum_RG; CitationsNum_RG; RG_reads; Followers_RG; RelatedResearch_RG


J. Wang et al.

Citation by Year. The citation frequency of a paper in its first years greatly affects its later citation frequency [21]. In this paper, the citations of the first 3 years after publication are selected as the relevant feature.

Journal Feature. The journal in which an article is published usually affects the citation frequency of the paper to a certain extent [13]: papers published in journals with a higher partition and a larger impact factor are more likely to be cited. In this paper, this feature includes the journal partition level and the journal's 3-year impact factor.

Altmetrics Features. Altmetrics features cover a wide range of sources, including Twitter, Facebook, ResearchGate, Mendeley, etc. In this paper, we selected data from the ResearchGate academic forum, whose features have been shown to be positively correlated with paper citations.
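The paper vectorizes the title and abstract with doc2vec and the keywords with word2vec. As a minimal, library-free stand-in for that step, the sketch below maps each text field to a fixed-length, L2-normalised row using the hashing trick; the 200-dimension choice mirrors the paper, while the field contents and everything else are illustrative:

```python
import hashlib
import math

def hash_doc_vector(text, dim=200):
    """Map a text field to a fixed-length vector (hashing-trick bag of words)."""
    v = [0.0] * dim
    for tok in text.lower().split():
        h = int(hashlib.md5(tok.encode("utf-8")).hexdigest(), 16)
        v[h % dim] += 1.0                      # one bucket per hashed token
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v] if norm else v

# Title, abstract, and keywords of one made-up paper, each turned into a row.
fields = [
    "Attention based citation prediction",
    "We predict the citation counts of papers with a CNN and attention.",
    "citation prediction; altmetrics; CNN",
]
text_block = [hash_doc_vector(f) for f in fields]   # 3 rows of length 200
```

In the paper's pipeline, rows like these are stacked into the (10, 200) text-feature input described in the next subsection.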

35.3.2 Model Structure

The model used in this paper is shown in Fig. 35.1. The model first vectorizes the documents; the text-feature input has shape (10, 200). It is then resampled to shape (50, 40), which facilitates merging with the other features of shape (5, 4). A two-layer convolution reduces the (n, 50, 40) text features to shape (5, 4). These are then merged with the author, journal, and other numerical features into a feature matrix of shape (5, 5, 4), which passes through the attention layer; this allows the model to focus on key features. After three more convolution operations, the predicted output of shape (1, 1, 1) is finally obtained, which is the predicted total citation count for the next 3 years.
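The shape flow above can be sketched in numpy. This is only a sketch under the stated shapes: a block mean stands in for the two unpublished convolution layers, and the attention layer is approximated by a plain softmax re-weighting, not the authors' exact layer:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)

text = rng.random((10, 200)).reshape(50, 40)            # vectorised text, resampled to (50, 40)
# Two convolution layers are assumed to reduce each (50, 40) map to (5, 4);
# a block mean stands in for them here.
reduced = text.reshape(5, 10, 4, 10).mean(axis=(1, 3))  # (5, 4)
numeric = rng.random((5, 4))                            # citation/journal numeric features, (5, 4)
merged = np.stack([reduced] * 4 + [numeric])            # merged feature matrix, (5, 5, 4)

# Attention layer: one relevance score per cell, softmax-normalised, then re-weighting.
scores = merged.mean(axis=2)                            # (5, 5)
alpha = softmax(scores.reshape(-1)).reshape(5, 5)       # attention weights, sum to 1
attended = merged * alpha[:, :, None]                   # (5, 5, 4), fed to further convolutions
```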

Fig. 35.1 ACNN module


35.4 Experiments

35.4.1 Dataset

The literature data sets in this paper were collected from publicly available sources on the Internet. The journal and literature features mainly come from the Web of Science database and public search systems such as Google Scholar. The literature Altmetrics features come from the ResearchGate database. The collection currently covers 34 SCI journals in the transportation field over 10 years, with a total of about 40,000 literature records and 24,000 ResearchGate records; the journals are listed in Table 35.2.

35.4.2 Training and Evaluation

Training uses a combination of gradient descent and backpropagation, with the Huber loss as the loss function; it combines the advantages of the MSE and MAE loss functions. When the prediction error lies in the interval [−δ, δ], the loss behaves like MSE, and like MAE on (−∞, −δ) ∪ (δ, +∞), which gives better training behaviour when dealing with outliers. From the original data set, we randomly select 70% of the labeled data to train the model and use the remaining 30% as the test set. The purpose of this experiment is to predict the development of a paper in the next 3 years based on its development in the first 3 years after publication. The specific quantity predicted by the model is the total citations of the literature in the next 3 years. To determine whether the prediction is accurate, we use the coefficient of determination:

R² = SSR/SST = 1 − SSE/SST   (1)

SSR: regression sum of squares, SST: total sum of squares, SSE: error sum of squares.
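The Huber loss and the coefficient of determination from Eq. (1) can be sketched in a few lines (δ = 1 here is an arbitrary choice; the paper does not state its value):

```python
def huber(error, delta=1.0):
    """Huber loss: quadratic (MSE-like) inside [-delta, delta], linear (MAE-like) outside."""
    a = abs(error)
    return 0.5 * a * a if a <= delta else delta * (a - 0.5 * delta)

def r_squared(y_true, y_pred):
    """Coefficient of determination R^2 = 1 - SSE/SST, as in Eq. (1)."""
    mean = sum(y_true) / len(y_true)
    sse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))   # error sum of squares
    sst = sum((t - mean) ** 2 for t in y_true)                # total sum of squares
    return 1.0 - sse / sst
```

For example, an error of 0.5 gives the quadratic branch (0.125) while an error of 2.0 gives the linear branch (1.5), which is exactly the robustness-to-outliers behaviour described above.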

35.5 Results

In this section, we present the evaluation results by comparing different prediction models. We chose LR (logistic regression), CART (classification and regression trees), and the ACNN model shown in Fig. 35.1 for prediction. The results are shown in Table 35.3. It can be seen that the ACNN model achieves a significant improvement in accuracy over the traditional LR and CART.


Table 35.2 The 34 journals in the transportation field

Num  Journal name                                                                                     Partition
1    Computer-Aided Civil and Infrastructure Engineering                                              1
2    Vehicular Communications                                                                         1
3    Transportation Research Part B-Methodological                                                    2
4    Transportation Research Part C-Emerging Technologies                                             2
5    Transportation Science                                                                           2
6    IEEE Vehicular Technology Magazine                                                               2
7    IEEE Transactions on Intelligent Transportation Systems                                          2
8    Transportmetrica B-Transport Dynamics                                                            2
9    IEEE Transactions on Vehicular Technology                                                        2
10   Networks and Spatial Economics                                                                   3
11   Transportation Research Part E-Logistics and Transportation Review                               3
12   Transportation Research Part A-Policy and Practice                                               3
13   Transportation                                                                                   3
14   Transportation Research Part D-Transport and Environment                                         3
15   IEEE Intelligent Transportation Systems Magazine                                                 3
16   International Journal of Engine Research                                                         4
17   Transportmetrica A-Transport Science                                                             4
18   Journal of Advanced Transportation                                                               4
19   Journal of Intelligent Transportation Systems                                                    4
20   Proceedings of The Institution of Mechanical Engineers Part F-Journal of Rail and Rapid Transit  4
21   Proceedings of The Institution of Mechanical Engineers Part D-Journal of Automobile Engineering  4
22   IET Intelligent Transport Systems                                                                4
23   International Journal of Automotive Technology                                                   4
24   Journal of Transportation Engineering                                                            4
25   European Transport Research Review                                                               4
26   Transport                                                                                        4
27   International Journal of Vehicle Design                                                          4
28   Transportation Planning and Technology                                                           4
29   Transportation Research Record                                                                   4
30   Transportation Letters-The International Journal of Transportation Research                      4
31   Promet-Traffic & Transportation                                                                  4
32   Proceedings of The Institution of Civil Engineers-Transport                                      4
33   International Journal of Heavy Vehicle Systems                                                   4
34   ITE Journal-Institute of Transportation Engineers                                                4

Table 35.3 Results from different models

Model   R²
LR      0.726
CART    0.761
ACNN    0.845

Table 35.4 Feature elimination experiment results

Feature        +       −
ALL            0.845
hot research   0.322   0.833
post cited     0.736   0.502
journal        0.368   0.821
RG             0.240   0.822

In addition, we performed elimination experiments to further analyze the impact of each feature on the results. The detailed results are shown in Table 35.4, where "+" indicates that the feature is used alone and "−" indicates that the feature is removed from the full feature set. The results show that the number of citations in past years has the greatest impact on the subsequent prediction results, and that adding research hotspots and Altmetrics features effectively improves the prediction accuracy of the model.

35.6 Conclusion

In this paper, by constructing an attention convolutional neural network model, we predict the citation frequency of papers. Compared with the traditional LR and CART prediction models, the accuracy is improved. Besides, we construct feature vectors from bibliometric and Altmetrics indicators; the experimental results show that adding research hotspots and ResearchGate features helps improve prediction accuracy. In the next step, we will continue to study how to increase the predictive accuracy by improving the model.

References

1. Garfield, E.: Citation indexes for science: a new dimension in documentation through association of ideas. Science 122(3159), 108–111 (1955)
2. Joyce, C.W., Kelly, J.C., Sugrue, C.: A bibliometric analysis of the 100 most influential papers in burns. Burns 40(1), 30–37 (2014)
3. Finardi, U.: Correlation between journal impact factor and citation performance: an experimental study. J. Inf. 7(2), 357–370 (2013)


4. Abramo, G., Cicero, T., D'Angelo, C.A.: Are the authors of highly cited articles also the most productive ones? J. Inf. 8(1), 89–97 (2014)
5. Gazni, A., Didegah, F.: Investigating Different Types of Research Collaboration and Citation Impact: A Case Study of Harvard University's Publications. Springer-Verlag, New York (2011)
6. Bornmann, L., Daniel, H.D.: Citation speed as a measure to predict the attention an article receives: an investigation of the validity of editorial decisions at Angewandte Chemie International Edition. J. Informetr. 4(1), 83–88 (2010)
7. Wang, D., Song, C., Barabasi, A.L.: Quantifying long-term scientific impact. Science 342(6154), 127–132 (2013)
8. Fu, L.D., Aliferis, C.F.: Using content-based and bibliometric features for machine learning models to predict citation counts in the biomedical literature. Scientometrics 85(1), 257–270 (2010)
9. Acuna, D.E., Allesina, S., Kording, K.P.: Future impact: predicting scientific success. Nature 489(7415), 201 (2012)
10. Yu, T., et al.: Citation impact prediction for scientific papers using stepwise regression analysis. Scientometrics 101(2), 1233–1252 (2014)
11. Ibáñez, A., Larrañaga, P., Bielza, C.: Predicting citation count of Bioinformatics papers within four years of publication. Bioinformatics 25(24), 3303–3309 (2009)
12. Wang, M., Yu, G., Yu, D.: Mining typical features for highly cited papers. Scientometrics 87(3), 695–706 (2011)
13. Vanclay, J.K.: Factors affecting citation rates in environmental science. J. Inf. 7(2), 265–271 (2013)
14. Jamali, H.R., Nikzad, M.: Article title type and its relation with the number of downloads and citations. Scientometrics 88(2), 653–661 (2011)
15. Costas, R., Zahedi, Z., Wouters, P.: Do "altmetrics" correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective. J. Assoc. Inf. Sci. Technol. 66(10), 2003–2019 (2015)
16. Thelwall, M., Kousha, K.: ResearchGate vs. Google Scholar: which finds more early citations? Scientometrics 112(1), 1–7 (2017)
17. Yan, R., Tang, J., Liu, X., et al.: Citation count prediction: learning to estimate future citations for literature. In: Proceedings of the 20th ACM International Conference on Information and Knowledge Management, pp. 1247–1252. ACM (2011)
18. Mistele, T., Price, T., Hossenfelder, S.: Predicting citation counts with a neural network. arXiv preprint arXiv:1806.04641 (2018)
19. Abrishami, A., Aliakbary, S.: NNCP: a citation count prediction methodology based on deep neural network learning techniques. arXiv preprint arXiv:1809.04365 (2018)
20. Sohrabi, B., Iraj, H.: The effect of keyword repetition in abstract and keyword frequency per journal in predicting citation counts. Scientometrics 110(1), 243–251 (2017)
21. Yu, T., et al.: Citation impact prediction for scientific papers using stepwise regression analysis. Scientometrics 101(2), 1233–1252 (2014)

Part IV

Network Systems and Analysis

Chapter 36

Using Five Principles of Object-Oriented Design in the Transmission Network Management Information B. Gantulga, N. Munkhtsetseg, D. Garmaa and S. Batbayar

Abstract Using the SOLID principles, the five principles of object-oriented design, it is possible to build software that is more easily understandable, flexible, and more sustainable. This article discusses the "Transmission Network Management Information System" implemented at the National Power Transmission Grid state-owned stock company, as well as the difficulties faced and the solutions found by using the five principles of object-oriented design.

Keywords SOLID · Object-oriented design · Transmission network management information system (TNMIS)

36.1 The Five Principles of Object-Oriented Design

The term SOLID refers to five principles designed to make software more understandable, flexible, and maintainable in object-oriented computer programming. The SOLID principles of object-oriented design include the following five principles:

• S – Single responsibility principle
• O – Open/closed principle
• L – Liskov substitution principle
• I – Interface segregation principle
• D – Dependency inversion principle

The SOLID design principles were promoted by Robert C. Martin and are among the best-known design principles in object-oriented software development. They are a subset of the many principles promoted by the American software engineer and instructor Robert C. Martin [1–3]. Though they apply to any object-oriented

B. Gantulga (B) · N. Munkhtsetseg · D. Garmaa · S. Batbayar School of Engineering and Applied Sciences, NUM, Ulaanbaatar, Mongolia e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_36


B. Gantulga et al.

Table 36.1 The SOLID principles

First letter  Abbr  Concept
S             SRP   Single responsibility principle: A class should have one and only one reason to change, meaning that a class should only have one job
O             OCP   Open/closed principle: Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification
L             LSP   Liskov substitution principle: Functions that use pointers to base classes must be able to use objects of derived classes without knowing it
I             ISP   Interface segregation principle: Clients should not be forced to depend upon interfaces that they do not use
D             DIP   Dependency inversion principle: High-level modules should not depend on low-level modules. Both should depend on abstractions

design, the SOLID principles can also form a core philosophy for methodologies such as agile development or adaptive software development [3]. The theory of the SOLID principles was introduced by Martin in his 2000 paper Design Principles and Design Patterns [2, 4], although the SOLID acronym was introduced later by Michael Feathers. Software is constantly being modified according to the needs of the organization and of its users, and for the sake of software improvement. The SOLID principles, the five principles of object-oriented design, are widely used to make software simple, understandable, flexible, reliable, and easy to evolve (Table 36.1).

36.2 S—SRP—Single Responsibility Principle Implementation in TNMIS

A class should have one and only one reason to change, meaning that a class should only have one job. In the original class:

• When the programming logic changes, the class will change.
• When the reporting changes, the class will change.

In other words, if one is changed, the other might break, because both responsibilities belong to one class, and we cannot control them entirely. Therefore, one change forces us to test twice. SRP (single responsibility principle) assumes there must be only one reason to change each subsection of software:

• Software subsection: class, function, etc.
• Reason for change: responsibility


Fig. 36.1 The class achaalal

To solve the problems shown in Fig. 36.1, the achaalal class must be split into separate classes:

1. achaalalDB—Class that interacts with the database
2. achaalalReport—Class that creates the report

The solution that satisfies SRP is shown in Fig. 36.2.
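A minimal sketch of the split, using hypothetical Python counterparts of the classes in Fig. 36.2 (the figures show the actual TNMIS code, which is not reproduced here; all method names below are illustrative):

```python
class AchaalalDB:
    """Database access only: its single reason to change is the storage logic."""
    def __init__(self):
        self._rows = []          # stands in for a real database table
    def insert(self, row):
        self._rows.append(row)
    def select(self):
        return list(self._rows)

class AchaalalReport:
    """Report creation only: the second responsibility, moved to its own class."""
    def __init__(self, db):
        self._db = db
    def create_report(self):
        rows = self._db.select()
        return f"Load report: {len(rows)} record(s)"

db = AchaalalDB()
db.insert({"load": 42})
report = AchaalalReport(db).create_report()
```

A change to the reporting format now touches only AchaalalReport, so only the report code has to be re-tested.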

36.3 O—OCP—Open/Closed Principle Implementation in TNMIS

A software entity should be open for extension but closed for modification. Suppose two kinds of users rely on the select function of the class AchaalalDB: ordinary staff and dispatcher engineers. The load information that the dispatcher engineers are responsible for must change to meet new requirements.

Fig. 36.2 The class achaalal, achaalalDB, achaalalReport

If the "select" function is modified according to the new requirements, the change also affects the staff's use of it and may introduce errors. Figure 36.3 shows how OCP solves this problem.
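The OCP fix can be sketched as follows: the existing select behaviour stays closed for modification, and the dispatcher requirement is added by extension through a new subclass (class and method names are illustrative, not the TNMIS code):

```python
from abc import ABC, abstractmethod

class AchaalalSelect(ABC):
    """Stable base contract: closed for modification."""
    @abstractmethod
    def select(self, rows): ...

class StaffSelect(AchaalalSelect):
    def select(self, rows):
        return rows                  # existing staff behaviour is left untouched

class DispatcherSelect(AchaalalSelect):
    def select(self, rows):
        # New dispatcher requirement added by extension, not by editing StaffSelect.
        return [r for r in rows if r.get("load", 0) > 100]

rows = [{"load": 50}, {"load": 150}]
staff_view = StaffSelect().select(rows)
dispatcher_view = DispatcherSelect().select(rows)
```

Future requirement changes become new subclasses, so the staff's code path never needs re-testing.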

36.4 I—ISP—Interface Segregation Principle

It is better to have many client-specific interfaces than one general-purpose interface. For example, let us take a look at the typical user, dispatcher, and admin interfaces of the TNMIS system.


Fig. 36.3 The class achaalalDetailDB

1. EmployeeUI—shows information about the users logged into the system.
2. DispatcherUI—shows the reports needed only by dispatcher engineers.
3. AdminUI—shows reports for every staff member (Fig. 36.4).

When a developer types "objBal" in each user interface, the following values are displayed.

This means that methods intended for other clients are exposed through the EmployeeUI interface to the developers working on it, which can be misleading. According to the ISP principle, instead of one general-purpose interface, it is preferable to have a separate interface for each client (Fig. 36.4). Let us see how this code can be changed using the ISP principle (Fig. 36.5).
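A sketch of the segregated interfaces of Fig. 36.5, with Python abstract base classes standing in for C#-style interfaces (the interface names follow the figure caption; the method names are hypothetical):

```python
from abc import ABC, abstractmethod

class IEmployeeReportBAL(ABC):
    @abstractmethod
    def employee_report(self): ...

class IDispatcherReportBAL(ABC):
    @abstractmethod
    def dispatcher_report(self): ...

class ReportBAL(IEmployeeReportBAL, IDispatcherReportBAL):
    """One implementation, but each UI depends only on its own narrow interface."""
    def employee_report(self):
        return "logged-in users"
    def dispatcher_report(self):
        return "load report"

def employee_ui(bal: IEmployeeReportBAL):
    # Typed against the narrow interface: dispatcher methods are not part of the contract.
    return bal.employee_report()

shown = employee_ui(ReportBAL())
```

Each UI now sees only the reporting methods it actually needs, which is exactly the segregation ISP asks for.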


Fig. 36.4 The classes EmployeeUI, DispatcherUI, and AdminUI


Fig. 36.5 Interface IEmployeeReportBAL, IDispatcherReportBAL


Fig. 36.6 Class CustomerBAL, FileLogger

36.5 D—DIP—Dependency Inversion Principle

Code should depend on abstractions, not on concrete implementations. For example, let us look at the code in Fig. 36.6. In that code, the CustomerBAL class depends directly on the FileLogger class, which writes the log to a file. If the administrators later decide to log somewhere other than a file, the FileLogger class has to change, and that change may in turn introduce errors in the CustomerBAL class. The DIP principle states that high-level modules should not depend on low-level modules; both should depend on abstractions. The code above rewritten to follow DIP is shown in Fig. 36.7. In that code, the user depends on the ILogger abstract interface.
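A sketch of the DIP version of Fig. 36.7 in Python (an in-memory list stands in for the real file; the class names follow the figures, the method names are hypothetical):

```python
from abc import ABC, abstractmethod

class ILogger(ABC):
    """The abstraction both the high- and low-level modules depend on."""
    @abstractmethod
    def log(self, message): ...

class FileLogger(ILogger):
    def __init__(self):
        self.lines = []                  # stands in for writing to a real file
    def log(self, message):
        self.lines.append(message)

class CustomerBAL:
    """High-level module: depends on ILogger, not on FileLogger directly."""
    def __init__(self, logger: ILogger):
        self._logger = logger
    def save(self, customer):
        self._logger.log(f"saved {customer}")

logger = FileLogger()
CustomerBAL(logger).save("A1")
```

Swapping in a database logger later means writing one new ILogger subclass; CustomerBAL itself never changes.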

36.6 Conclusion

Within the framework of this study, the five principles of object-oriented design were discussed in the implementation of some modules of the Transmission Network Management Information System. Keeping the software simple, optimal, and easy to extend creates the possibility to develop and improve it further. Thus, it is possible to create a simpler, more reliable, and more advanced management information system.

Fig. 36.7 Interface ILogger

References

1. Marla Sukesh, F.: Object Oriented Design Principles, pp. 1–15 (2013)
2. Steve Smith, F.: Improving the Quality of Existing Software, pp. 1–67. DevReach (2013)
3. Sandi Metz, F.: SOLID Object-Oriented Design (2009)
4. DSUMS. https://dsums.transco.mn/login
5. J. Netw. Intell. http://bit.kuas.edu.tw/~jni/
6. Data Science and Pattern Recognition. http://dspr.ikelab.net/
7. J. Inf. Hiding Multimed. Signal Process. http://www.jihmsp.org/


Chapter 37

Modbus Protocol Based on the Characteristics of the Transmission of Industrial Data Packet Forgery Tampering and Industrial Security Products Testing

Qiang Ma, Wenting Wang, Ti Guan, Yong Liu and Lin Lin

Abstract Since the power plant has few network security protections and more industrial network safety problems are being revealed, we present a solution to verify the weaknesses and reinforce the safety protection. First, an external operator scans the industrial network of the power plant to find the alive master computer based on the communication protocol. By matching the protocol, we find the IP address and type of the device, then use the corresponding master simulator or protocol writing tool to establish a connection with the device and change the value of a specific register. Obviously, the industrial network has an authentication weakness, as it does not verify the IP address of the connection initiator. Here, we deploy a general industrial firewall to filter unknown IP addresses, and the problem above gets fixed. Then, with ARP spoofing, we succeed in hijacking and modifying the packets between the master computer and the device; the firewall deployed before gets bypassed and the industrial device can be controlled. We can now see that the industrial network lacks adequate internal auditing and monitoring and that the general firewall has its limitations and weaknesses; therefore, we suggest developing a customized and suitable security defense product for the power industry.

Keywords PLC · ICS · ARP · Modbus

37.1 Introduction

PLC is widely used in the production control area of power systems. It is a kind of programmable memory used to store programs internally and to execute user-oriented instructions such as sequence control, logical operations, counting and arithmetic operations, and timing; through digital or analog I/O ports it controls various types of


Q. Ma et al.

industrial processes. The main industrial communication protocols between PLCs and industrial equipment are Modbus, S7, and others. In the production control process, controllers communicate with each other and with other industrial devices through the Modbus protocol. The Modbus protocol describes how a controller requests access to other industrial devices, responds to requests from other devices, and detects and logs errors [1]. It specifies both the message domain pattern and the common format of the content. By exploiting the plaintext transmission of the Modbus industrial communication protocol and the weak points of network security in the large production area, this research on the safe penetration and protection of PLC equipment aims to ensure the safe and stable operation of the industrial production process in the large production area of the power system. Industrial protocols mainly consider the functional realization and efficient transmission of production processes [2, 3]; the security of the transmitted messages is often ignored. In addition, there is a lack of identity authentication, access control, and other security measures between industrial devices using the Modbus protocol [4]. In an industrial production environment without any safety protection measures, attackers can directly connect to PLC equipment with Modbus Poll, PLC scan, and similar tools, and hijack the industrial communication data between the PLC and the underlying primary equipment, making the industrial equipment work abnormally. When an industrial firewall is deployed in the production area, the attack described above becomes invalid because the attacker's IP is not in the whitelist [5–7].
Through a man-in-the-middle attack, the Modbus protocol messages between the main control machine and the PLC equipment are hijacked, and the function codes addressed to the primary equipment are forged and tampered with, bypassing the industrial firewall and making the PLC and primary industrial equipment work abnormally [8, 9]. In view of the above problems, the production area of the power system needs an industrial data audit and alarm platform that monitors the working state of industrial equipment in real time [10]. Once the state is abnormal, an alarm and feedback are given immediately, and the industrial control and safety team is responsible for on-site inspection.

37.2 Industrial Control Systems and Security Protection

37.2.1 Industrial Control Systems

Industrial Control Systems (ICS) include various control systems for monitoring, industrial production, transportation, etc. They are mainly composed of supervisory control and data acquisition (SCADA) systems, monitoring systems, energy management systems (EMS), distributed control systems (DCS), Automation Systems


(AS), Safety Instrumented Systems (SIS), and small control system devices such as Programmable Logic Controllers (PLC). An industrial control system refers to the collection of people, software, and hardware that influence the safe, secure, and reliable operation of an industrial process. Industrial control systems are also called industrial automation control systems (IACS). The most basic industrial control systems include on-site industrial equipment (sensors, actuators), controllers, human–machine interfaces, and controlled objects.

37.2.2 The Industrial Audit Alarm Platform

The industrial audit alarm platform is highly customized and matched to its target environment; it is a vertical industrial security protection product. Its main application scenario is the 500 kV intelligent substation in the industrial control area of the power system, where it audits the industrial communication security of the MMS network and the GOOSE network and identifies and alerts on malicious attacks based on industrial protocol hijacking, forgery, and tampering. The MMS and GOOSE/SV industrial protocols in the intelligent substation are transmitted in plaintext, and there are no security measures such as identity authentication and access control between industrial equipment in the production area. Attackers can exploit these plaintext transmission features to hijack, forge, and tamper with the industrial communication data between measurement and control devices, protection devices, intelligent switches, merging units, and other industrial devices, forcing the state of industrial equipment to change and thereby interfering with power system production. To avoid such malicious attacks in the industrial production process, the State Grid Shandong R&D industrial audit alarm platform mainly analyzes the industrial communication messages between equipment at the station control layer, the bay level, and the process layer [11, 12], and establishes a "white environment" model of normal industrial equipment behaviour that combines the changing frequency of the working state of the equipment, the start/stop time points of the equipment, and the normal working threshold range of the equipment [13].
The platform analyzes the messages between industrial equipment in real time and feeds the working status of industrial equipment back to the station-control-layer operating platform. Once industrial communication data are hijacked or tampered with, the white environment automatically triggers the alarm device, accurately locates the fault point, and traces the attack point in time; at the same time, the industrial control fault investigation team conducts on-site analysis immediately and handles the on-site events according to the industrial control incident emergency plan.


37.3 Simulation Test Analysis

The topology of the industrial network in the industrial control area of a thermal power plant in the power system is as follows. As shown in Fig. 37.1, the main control unit of the production area controls the operation of primary equipment such as the steam turbine, the generator, the gas valve in the desulfurization device, and the air pump in the power plant target field via PLC components. Its host computer operation control platform is shown in Fig. 37.2.

(1) Safe penetration in the absence of safety precautions

The tester connected the attack machine to the production control area, configured an IP address and gateway, and scanned the ports in the area with nmap. A suspicious host (Fig. 37.3) was found at 192.168.3.11, suspected to be the main control machine. EvilFoca, an ARP spoofing tool, is used to hijack the traffic between the master computer and the gateway (as shown in Fig. 37.4) to obtain the data messages between the master computer and the PLC devices under its control. Using the packet capture tool Wireshark for traffic analysis (Fig. 37.5), it was found that the host 192.168.3.11 and the two devices 192.168.2.171/172 communicated

Fig. 37.1 Simulation environment of industrial control system of thermal power plant


Fig. 37.2 Thermal power plant upper computer operating platform

Fig. 37.3 Nmap port scan results

Fig. 37.4 Hijacking the traffic between the suspicious master computer and the gateway


Fig. 37.5 Traffic capture and results analysis

with the Modbus protocol. It was determined that the host 192.168.3.11 is the master controller, and 192.168.2.171/172 are the PLC devices that communicate with the host computer. The captured messages were analyzed against the standard Modbus protocol specification. The payload content is 06 01 03 00 00 00 01, where 03 is the function code, meaning a register value is read, and the following bytes 00 00 00 01 indicate that one register is read starting at address 00. Since the Modbus protocol data received on the PLC device side carries no identity authentication, an attacker can tamper with the data in the corresponding register of the PLC device through the Modbus Poll tool.

(2) Deployment of industrial control security protection products to achieve a whitelist mechanism against the above attacks

The PLC device performs no identity authentication on the Modbus side and cannot defend against attacks without security protection. An industrial gateway and an industrial control firewall are deployed in front of the PLC equipment to realize the white environment of the production area (Fig. 37.6). With this in place, the attack described above can no longer be completed, because the IP address of the attack machine is not in the firewall's whitelist.

(3) Using man-in-the-middle attacks to bypass industrial protection equipment and launch attacks
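Assuming the standard Modbus/TCP frame tail layout (length low byte, unit identifier, function code, then start address and register count, which matches the captured bytes above), the request can be decoded with a few lines of Python:

```python
def parse_modbus_tail(payload):
    """Decode the trailing bytes of a Modbus/TCP frame as analysed in the text:
    length (low byte), unit identifier, function code, start address, register count."""
    unit, func = payload[1], payload[2]
    addr = int.from_bytes(payload[3:5], "big")    # starting register address
    count = int.from_bytes(payload[5:7], "big")   # number of registers to read
    return {"unit": unit, "function": func, "address": addr, "count": count}

# The captured read request: 06 01 03 00 00 00 01
req = parse_modbus_tail(bytes.fromhex("06010300000001"))
```

Decoding yields function code 0x03 (Read Holding Registers) for one register at address 0, which is exactly the request analysed in the capture.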

with Modbus protocol, and determined that the host 192.168.3.11 was the master controller.192.168.2.171/172 is the PLC device that communicates with the host computer. It analyzes the captured message based on the standard specifications of the Modbus protocol. The content of the protocol is 06 01 03 00 00 00 01, 03 is the function code, which means reading A register value, 0000 0001 represents the value of the 01 register that reads the start address 00. Since the Modbus protocol data received by the PLC device side has no identity authentication, the attacker can tamper with the data of the corresponding register position of the PLC device through Modbus Poll tool. (2) Deployment of industrial control security protection products to achieve the white list mechanism against the above attacks in the case of security penetration The PLC device performs identityless authentication on the Modbus server and cannot defend against attacks without security protection. The industrial gateway and the industrial control firewall are deployed in front of the PLC equipment to realize the white environment of the production area (Fig. 37.6). Based on the method mentioned above, the attack cannot be completed because the IP address of the attack plane is not in the white list of the firewall. (3) Using man-in-the-middle attacks to bypass industrial protection equipment to launch attacks On the attacker Kali system platform, the ARP spoofing tool arp spoof is used to perform ARP spoofing on the host computer 192.168.3.11 and the gateway 192.168.3.203, and intercept the traffic data between the master controller and the gateway. Through the packet capture tool, you can see the traffic data message sent by the master to the gateway, which contains the Modbus protocol message interacting with the PLC device and the IP address of the PLC device (as shown in Fig. 37.7). 
Through independent code written to master flow interception and the master to the status of the request message to tamper with the PLC equipment, the original

37 Modbus Protocol Based on the Characteristics …

341

Fig. 37.6 Deployment of power industrial control system safety protection equipment

Fig. 37.7 Traffic analysis between the master and the gateway

TCP packet Payload content of 06 03 01 00 00 00 01 is modified to 06 01 06 00 01 00, 01, in one of the original message 03 read for function code, the function of the corresponding position for a revised code of 06, of tampering with the back four bytes accordingly at the same time, to 00 01 00, 01, The value 01 is written to the value 01 of register 01. According to the previous analysis of Wireshark’s packet, the value 01 written indicates that the valve is open and 00 indicates that the valve is closed (as shown in Fig. 37.8). (4) Industrial audit and alarm platform to defend against man-in-the-middle attack By deploying the industrial control audit platform (as shown in Fig. 37.6), ARP attacks in the production environment can be audited. When ARP fraud is found, the attack address can be found and the alarm can be given according to the audit log, and the equipment can be checked on the spot once.
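The read-to-write rewrite can be sketched as follows. The byte positions follow the frame-tail layout analysed earlier; the function itself is an illustration of the tampering logic, not the authors' interception code:

```python
def tamper_read_to_write(payload, register, value):
    """Rewrite a Read Holding Registers request into a Write Single Register request,
    mirroring the man-in-the-middle modification described in the text."""
    b = bytearray(payload)
    if b[2] == 0x03:                              # 0x03 = Read Holding Registers
        b[2] = 0x06                               # 0x06 = Write Single Register
        b[3:5] = register.to_bytes(2, "big")      # target register address
        b[5:7] = value.to_bytes(2, "big")         # value to write (01 = valve open)
    return bytes(b)

# 06 01 03 00 00 00 01  ->  06 01 06 00 01 00 01
forged = tamper_read_to_write(bytes.fromhex("06010300000001"), register=1, value=1)
```

Because the PLC performs no authentication, the forged write is accepted exactly as if the master had sent it.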


Q. Ma et al.

Fig. 37.8 Tampering with industrial communication data

The audit alarm platform is an audit device jointly developed by the Shandong Institute of Electrical Technology and the information red and blue team for real-time monitoring and feedback of the working status of industrial equipment [13, 14]. Its principle is as follows: an alarm program is installed on the main control machine, with alarm thresholds derived from the real-time data of the device. Through the PLC input ports, the switch states detected from the data message interaction are compared with the states reported by the PLC equipment; if the two states are inconsistent, abnormal behavior is assumed and relevant personnel are dispatched for on-site investigation.

37.4 Conclusion In the power system, through an in-depth analysis of the network architecture of the thermal power plant production area and of the Modbus protocol used between industrial devices, the experience is summarized as follows:

The major problems in the production area and their solutions:

Problem: Lack of verification of the identity of the network.
Solution: Strengthen the safety protection measures of industrial control to ensure the white environment of the production control area.

Problem: Imported brands of industrial control equipment account for a large proportion, and most of their vulnerabilities have been disclosed, which are easily exploited by attackers [15].
Solution: Master core industrial equipment technology, independently develop industrial equipment, and improve production efficiency and safety.

Problem: Industrial protocols are transparent and easy for attackers to hijack and falsify.
Solution: Privatize industrial protocols and research verification mechanisms to ensure transmission stability.

Problem: Insufficient staff information security awareness and knowledge; insensitivity to advanced attacks such as abnormal access or identity camouflage.
Solution: Strengthen industrial safety knowledge training and improve the safety awareness and preventive skills of front-line workers.

Problem: Sudden industrial attacks cannot be sensed immediately, nor can the attacker's identity be traced [3].
Solution: Online monitoring and alarming of the working status of industrial equipment.

Problem: Industrial equipment hardware and software lack information security testing and evaluation.
Solution: Research industrial equipment information security testing standards.

Acknowledgements This work was supported by “Research on Lightweight Active Immune Technology for Electric Power Supervisory Control System”, a science and technology project of State Grid Co., Ltd in 2019.

References

1. Meng, X.F., Ci, X.: Big data management: concepts, techniques and challenges. J. Comput. Res. Dev. 50(1), 146–169 (2013)
2. Guo, Q.L., Xin, S.J., Wang, J.H.: Comprehensive security assessment for a cyber physical energy system: a lesson from Ukraine's blackout. Autom. Electr. Power Syst. 40(5), 145–147 (2016)
3. Zhu, X.Y., Fang, Q.: Study on mechanism and strategy of cybersecurity in U.S. electric power industry. Electr. Power 48(5), 81–88 (2015)
4. Sun, H.F., Gong, L.D., Zhang, H.T.: Research on big data analysis platform for smart grid and its application evolution. Mod. Electr. Power 33(6), 64–73 (2016)
5. Peng, X.S., Deng, D.Y., Cheng, S.J.: Key technologies of electric power big data and its application prospects in smart grid. Proc. CSEE 35(3), 503–511 (2015)
6. Zhang, B., Zhuang, C.J., Hu, J.: Ensemble clustering algorithm combined with dimension reduction techniques for power load profiles. Proc. CSEE 35(15), 3741–3749 (2015)
7. Qi, J., Qu, Z.Y., Lou, J.L.: A kind of attribute entity recognition algorithm based on Hadoop for power big data. Power Syst. Prot. Control 44(24), 52–57 (2016)
8. Fang, X., Misra, S., Xue, G.: Smart grid - the new and improved power grid: a survey. IEEE Commun. Surv. Tutor. 14(4), 944–980 (2012)
9. Wang, W., Lu, Z.: Cyber security in the smart grid: survey and challenges. Comput. Netw. 57(5), 1344–1371 (2013)
10. Tan, S., De, D., Song, W.Z.: Survey of security advances in smart grid: a data driven approach. IEEE Commun. Surv. Tutor. (2016)
11. Shvachko, K., Kuang, H., Radia, S.: The Hadoop distributed file system. In: IEEE 26th Symposium on Mass Storage Systems and Technologies (MSST), pp. 1–10. IEEE (2010)
12. Zaharia, M., Chowdhury, M., Das, T.: Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing. In: Proceedings of the 9th USENIX Conference on Networked Systems Design and Implementation, p. 2. USENIX Association (2012)
13. Deeplearning4j Development Team: Deeplearning4j: open-source distributed deep learning for the JVM. Apache Software Foundation License 2.0
14. Fiore, U., Palmieri, F., Castiglione, A.: Network anomaly detection with the restricted Boltzmann machine. Neurocomputing 122(5), 13–23 (2013)
15. Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. In: Advances in Neural Information Processing Systems, pp. 3104–3112 (2014)

Chapter 38

Analysis of Time Characteristics of MMS Protocol Transmission in Intelligent Substation Wenting Wang, Qiang Ma, Yong Liu, Lin Lin and Ti Guan

Abstract With the rapid cross-integration of industrialization and informatization, industrial network security issues are becoming more and more serious. In this paper, the time characteristics of MMS protocol transmission in the intelligent substation are analyzed according to the security scenario of the intelligent substation system, using the methods of Fourier transform and wavelet transform. The analysis yields a method for establishing a baseline of message transmission time characteristics, which can be used to detect anomalies in the intelligent substation network. Keywords Intelligent substation · MMS protocol · Fourier transform · Wavelet transform

38.1 Introduction With the introduction of the concepts of Industry 4.0, the Industrial Internet of Things, and Made in China 2025, industrialization and informatization are rapidly intermingling, and industrial control systems are developing in the direction of intelligence. At the same time, these changes are shifting industrial control networks from closed to open. Industrial networks are no longer isolated and therefore face many security issues [1]. In the past, industrial control system security solutions mainly focused on access control, fieldbus security protocols, and physical security. Owing to the differences between industrial control networks and ordinary IT networks, traditional IT network security solutions cannot fully solve the security problems of industrial control networks [2]. Therefore, anomaly detection in industrial networks has become a new research hotspot [3]. W. Wang (B) State Grid Shandong Electric Power Research Institute, Jinan 250003, China e-mail: [email protected] Q. Ma · Y. Liu · L. Lin · T. Guan State Grid Shandong Electric Power Company, Jinan 250021, China © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_38


Taking a smart substation [4] in a power system as an example: a substation is a power facility that converts voltage, receives and distributes electrical energy, controls the flow of electric power, and adjusts voltage, connecting grids of different voltage levels through its transformers. An intelligent substation uses intelligent devices to digitize the whole station's information, network the communication platform, and standardize information sharing; it automatically completes basic functions such as information collection, measurement, control, protection, metering, and monitoring, and can support advanced grid functions as needed, including real-time automatic control, intelligent adjustment, online analysis and decision making, and collaborative interaction. Unlike traditional substations, smart substations use the IEC 61850 standard, an internationally accepted substation automation standard that regulates the behavior of devices and the naming and definition of data. Intelligent substations use electronic transformers in place of traditional voltage transformers and fiber optic wiring instead of traditional hard-wired signal cables, so the transmitted data becomes digital. According to the IEC 61850 standard, the intelligent substation consists of three layers and two networks [5]. The three layers are the station control layer, the interval (bay) layer, and the process layer. The station control layer mainly includes the monitoring system, remote system, and fault information system; the interval layer mainly includes protection devices and measurement and control devices; the process layer includes the merging units that collect analog quantities and the intelligent units that realize switch input and output. The two networks are the process layer network and the station control layer network.
The process layer network mainly transmits GOOSE (Generic Object Oriented Substation Event) messages and SV (Sampled Value) messages, while the station control layer network mainly transmits MMS (Manufacturing Message Specification) messages. Different from general IT networks, because of the frequent polling, diagnosis, and periodic refresh services in industrial networks, the packets transmitted in them usually follow certain temporal rules. If the substation network is intruded upon and abnormal operations result, the time characteristics of its message transmission usually change. Therefore, analyzing the time characteristics of message transmission in the substation network provides a way to detect whether the network has been invaded. In this paper, the MMS messages in the substation are aggregated, and their transmission time characteristics are analyzed by Fourier transform and wavelet transform [6, 7].

38.2 Analysis of MMS Protocol Type and Time Characteristics

As mentioned above, in smart substations, MMS messages are used for control command delivery, measurement data reporting, and the like. For example, if the host computer needs to issue a trip control command, it sends an MMS message to the merging unit. After receiving the MMS message from the host computer, the merging unit sends a GOOSE message to the smart terminal to execute the trip command. If an abnormal situation occurs in the substation, such as an abnormal trip, a large number of MMS messages will be sent in a short time. When the substation is operating normally and there are no control commands, there is still MMS traffic in the station control layer network, consisting mainly of diagnostic and heartbeat messages.

38.2.1 MMS Protocol Aggregation

In the substation system, each host computer communicates with multiple bay-level devices at the same time, and the communication runs over TCP/IP. The MMS packets can therefore be aggregated according to the IP addresses and port numbers in the packets, that is, aggregated by connection. The results are shown in Table 38.1.

Table 38.1 The example of the connections of MMS message

Source/destination address     Source port   Destination port
(172.16.0.73, 172.16.0.181)    102           60378
(172.16.0.73, 172.16.0.182)    102           60507

Given a connection, such as the one corresponding to the IP address pair (172.16.0.73, 172.16.0.182), a deeper analysis shows that more than one type of MMS message is transmitted between the two IPs. With source IP 172.16.0.182 and destination IP 172.16.0.73, the only MMS message type is Confirmed_Request. The time interval between two consecutive Confirmed_Request messages is counted; the message sequence number is then plotted on the abscissa and the interval between two adjacent MMS messages on the ordinate, as shown in Fig. 38.1.

Fig. 38.1 The time interval of request packages

As seen in Fig. 38.1, the transmission intervals of the request messages are mostly very close, but some intervals are particularly long. The heights of these spikes are irregular, yet the spacing between the spikes is regular; that is, a pause occurs after every certain number of Confirmed_Request messages. With source IP 172.16.0.73 and destination IP 172.16.0.182, the traffic includes response packets and unconfirmed packets. The time characteristic curves of these two message types, drawn by the same method, are shown in Figs. 38.2 and 38.3.

Fig. 38.2 The time interval of response packages

Fig. 38.3 The time interval of unconfirmed packages

As can be seen from Fig. 38.2, the time interval distributions of the response messages and the request messages are basically the same. As shown in Fig. 38.3, the transmission interval of the unconfirmed packets differs significantly from the time distributions of the response and request packets. Moreover, in the actual data, the number of unconfirmed packets is much smaller than the number of request and response packets. Further, request and response messages in the smart substation are matched through the invokeID field, and the packets are aggregated accordingly. The interval between sending a request and receiving the corresponding response can be calculated from the packet timestamps. With the appearance order of the request messages as the abscissa and this interval as the ordinate, the request-response interval curve can be drawn, as shown in Fig. 38.4. For packets with the same invokeID, the interval between sending a request and receiving the response is very short: most are 0.1 s or less, and even the longer ones stay below 0.5 s, much smaller than the interval at which request packets are sent. There is no obvious regularity in the request-response interval. Through the above analysis, it can be found that the transmission of MMS messages in the substation follows certain time rules. By analyzing and learning these rules, the normal behavioral baseline of the substation system can be obtained. The above analysis is mainly qualitative observation based on plotting; a more in-depth analysis of the transmission time characteristics follows, using Fourier analysis and wavelet analysis.

Fig. 38.4 The time interval between request and response
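The connection-based aggregation and interval computation described in Sect. 38.2.1 can be sketched as a grouping over address/port pairs. The packet records below are invented placeholders, not data from the paper's capture.

```python
# Hedged sketch: aggregate MMS-over-TCP packets by connection and compute the
# inter-arrival intervals analyzed in this section. Packet tuples are invented
# placeholders: (timestamp_s, src_ip, src_port, dst_ip, dst_port).
from collections import defaultdict

packets = [
    (0.00, "172.16.0.182", 60507, "172.16.0.73", 102),
    (0.50, "172.16.0.182", 60507, "172.16.0.73", 102),
    (1.00, "172.16.0.182", 60507, "172.16.0.73", 102),
    (0.10, "172.16.0.181", 60378, "172.16.0.73", 102),
]

def connection_key(src_ip, src_port, dst_ip, dst_port):
    # Order the endpoints so both directions map to the same connection.
    a, b = (src_ip, src_port), (dst_ip, dst_port)
    return (a, b) if a <= b else (b, a)

flows = defaultdict(list)
for ts, sip, sport, dip, dport in packets:
    flows[connection_key(sip, sport, dip, dport)].append(ts)

# Inter-arrival intervals per connection (the ordinate of Fig. 38.1).
intervals = {k: [t2 - t1 for t1, t2 in zip(sorted(v), sorted(v)[1:])]
             for k, v in flows.items()}
print(intervals)
```

The interval sequences produced this way are exactly the input to the Fourier and wavelet analyses of the following sections.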


38.3 Frequency Domain Analysis of MMS Protocol Transmission Characteristics

38.3.1 Introduction to the Fourier Transform

The Fourier transform expresses a function satisfying certain conditions as a linear combination (or integral) of sine and cosine functions. For a given sequence (x_0, ..., x_t, ..., x_{N-1}), assuming the sequence is periodic with period N, that is, x_t = x_{t+N}, its Fourier transform is also a discrete sequence of N values (X_0, ..., X_k, ..., X_{N-1}), where X_k is defined as:

X_k = \frac{1}{N} \sum_{t=0}^{N-1} x_t e^{-i 2\pi k t / N},  k = 0, ..., N-1    (38.1)

The Fourier transform is invertible; based on Eq. (38.1), the inverse transform can be expressed as:

x_t = \sum_{k=0}^{N-1} X_k e^{i 2\pi k t / N},  t = 0, ..., N-1    (38.2)

In Eqs. (38.1) and (38.2), X_k corresponds to the kth harmonic, with period T_k = N/k. The magnitude of the kth harmonic is A_k = |X_k|, and the Power Spectral Density (PSD) is calculated as:

P_k = |X_k|^2    (38.3)

If there is periodicity in the original sequence, a peak occurs at the corresponding frequency in the power spectrum sequence.

38.3.2 Fourier Analysis of MMS Protocol Transmission Time Characteristics For MMS messages with a periodic transmission time, the power spectrum is calculated according to Eqs. (38.1) and (38.3); the resulting power spectrum image is shown in Fig. 38.5. As can be seen, for a periodic sequence the power spectrum has distinct peaks. For discrete sequences, the power spectrum is symmetric, so only the left half of the power spectrum image needs to be considered. In Fig. 38.5, the first spike appears at 0.2 Hz and the second at 0.4 Hz.


Fig. 38.5 The energy spectrum of periodic series

Therefore, it can be judged that the source sequence is periodic with a period of 1/0.2 = 5. For the aperiodic signal shown in Fig. 38.4, the power spectrum image is shown in Fig. 38.6: for a non-periodic sequence, there is no significant peak in the power spectrum. The periodicity of the MMS messages can thus be detected by the Fourier transform. If the periodicity of the MMS packets changes, a control command may have been delivered or an abnormal situation may have occurred. Fig. 38.6 The energy spectrum of non-periodic series
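The period detection described by Eqs. (38.1)-(38.3) can be sketched with a discrete Fourier transform. The interval sequence below is synthetic (a period-5 signal invented for illustration), not the paper's measured data.

```python
# Hedged sketch: detect periodicity in a synthetic interval sequence via the
# power spectral density of Eqs. (38.1) and (38.3).
import numpy as np

# Synthetic interval sequence with period 5 samples (e.g., a refresh cycle
# repeating every 5th message); invented for illustration.
N = 100
t = np.arange(N)
x = 1.0 + np.sin(2 * np.pi * t / 5.0)

X = np.fft.fft(x) / N              # Eq. (38.1): normalized DFT coefficients
psd = np.abs(X) ** 2               # Eq. (38.3): P_k = |X_k|^2
psd[0] = 0.0                       # discard the DC component (sequence mean)

k = int(np.argmax(psd[: N // 2]))  # strongest harmonic in the left half (spectrum is symmetric)
freq = k / N                       # position of the spike, cycles per sample
period = N / k                     # T_k = N / k
print(freq, period)                # a spike at 0.2 gives period 5.0
```

As in the text, a spike at 0.2 in the left half of the spectrum corresponds to a period of 1/0.2 = 5 samples; for an aperiodic interval sequence no such dominant spike appears.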


38.4 Wavelet Analysis of MMS Protocol Transmission Characteristics As mentioned earlier, if the MMS message transmission in the substation undergoes only a very small change, the Fourier transform cannot accurately detect it; and since the change is tiny, it is also difficult to detect directly in the time domain. Using the wavelet transform to detect such small changes gives better results. Figure 38.7 shows the result obtained after wavelet transforming the sequence. In Fig. 38.7b, the time domain signal undergoes a slight change that is neither easy to detect directly nor detectable by Fourier analysis. The Haar wavelet transform, however, can detect it: as shown in the middle graph of Fig. 38.7b, a small change in the time domain sequence causes a large change in the wavelet transform result. By detecting this change, the change of the original time domain sequence, and hence an abnormality in message transmission, can be detected.

Fig. 38.7 The interval of the MMS packages and its wavelet transform
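A minimal sketch of the single-level Haar analysis described above, using only NumPy; the 128-sample signal and the 0.01 shift are invented for illustration and stand in for the interval sequence of Fig. 38.7.

```python
# Hedged sketch: a single-level Haar wavelet transform highlighting a small
# step change that is hard to see in the raw sequence.
import numpy as np

# Synthetic interval sequence: constant 0.5 s with a tiny shift from sample 65 on.
x = np.full(128, 0.5)
x[65:] += 0.01                                   # small change, hard to spot directly

# Single-level Haar transform (orthonormal): pairwise averages and differences.
approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)      # smoothed sequence
detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)      # change-sensitive coefficients

peak = int(np.argmax(np.abs(detail)))            # pair index of the largest detail
print(peak, 2 * peak + 1)                        # localizes the change near sample 65
```

The detail coefficients are zero wherever the sequence is flat and spike only at the pair straddling the shift, which is exactly the localization property the text attributes to the Haar transform.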


38.5 Conclusion In this paper, the transmission time characteristics of MMS messages in the intelligent substation are analyzed at different granularities by Fourier analysis and wavelet analysis. By analyzing the time characteristics of message transmission, a time characteristic baseline can be established as the basis for further anomaly detection. Acknowledgements This work was supported by "Research on Lightweight Active Immune Technology for Electric Power Supervisory Control System", a science and technology project of State Grid Co., Ltd in 2019.

References

1. Ralston, P.A.S., Graham, J.H., Hieb, J.L.: Cyber security risk assessment for SCADA and DCS networks. ISA Trans. 46(4), 583–594 (2007)
2. Keith, S., Joe, F., Karen, S.: Guide to industrial control systems (ICS) security. Natl. Inst. Stand. Technol. (2011)
3. Garcia, T.P., Diaz, V.J., Macia, F.G.: Anomaly-based network intrusion detection: techniques, systems and challenges. Comput. Secur. 28(1), 18–28 (2009)
4. National Infrastructure Plan. Department of Homeland Security, Washington DC, USA (2009)
5. Critical Infrastructure Protection: Challenges and Efforts to Secure Control Systems. GAO-04-354. General Accounting Office (GAO), Washington DC, USA (2004)
6. Brandle, M., Naedele, M.: Security for process control systems: an overview. IEEE Secur. Priv. 6(6), 24–29 (2008)
7. International Electrotechnical Commission. http://www.iec.ch/smartgrid/standards/

Chapter 39

Reliability Evaluation Model of Power Communication Network Considering the Importance of Transmission Service Wang Tingjun, Ma Shangdi, Liu Xuebing, Li Shanshan and Zhang Shuo

Abstract The traditional reliability evaluation method of the power communication network ignores the transmission services carried by the network and lacks consideration of service link importance, making it difficult to study the reliability of the whole communication network effectively and comprehensively. Based on the actual operation of the power system communication network, this paper first proposes a link contribution model from the perspective of topological structure; the model can calculate the reliability contribution of different links to the whole network. We then establish a reliability evaluation model of the power communication network based on the importance of service links. Finally, an example of a backbone network topology is given to verify the rationality and validity of the model. Keywords Power communication network · Transmission service · Link importance · Reliability assessment

39.1 Introduction In recent years, with the acceleration of informatization and intelligence of power grids, information transmission and interaction between power communication networks have become more frequent and in-depth [1]. As the core of power information communication, communication network bears a large number of production and management business, and it is the basis and guarantee of the normal operation of power system [2]. With the increasing degree of integration of information technology, all kinds of business of communication network operation are widely distributed and complex. Its reliability is directly related to the safe and stable operation of power grid [3]. Once the communication network is broken down, the cascading effect may lead to communication failure between power systems. Meanwhile such damage could lead to stagnation of electricity production [4]. In order to ensure the safe and W. Tingjun (B) · M. Shangdi · L. Xuebing · L. Shanshan · Z. Shuo State Grid Baoding Electric Power Supply Company, Baoding City 071051, China e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_39

stable operation of the power grid, it is urgent to consider the security and reliability of the power system and the communication system in a unified way [5]. Traditional methods find it difficult to study the reliability of the whole communication network effectively and comprehensively [6]. Based on the analysis of power communication services, this paper proposes a link contribution model from the perspective of the power communication network topology, quantitatively calculates service importance, and establishes a reliability evaluation model of the power communication network based on service link importance, which has important theoretical and practical significance for analyzing the influence of communication services and equipment on the reliability of the communication network.

39.2 Link Importance Analysis of Communication Services Communication service link is an important part of power communication network. Its normal and stable operation will directly affect the reliability of communication network [7]. The analysis of service link importance is an important basis for reliability evaluation of communication network.

39.2.1 Service Link Weights

Since different types of links pose different risks to the communication network [8], the service link weight is defined here to characterize the influence of a link's status on the communication network. The boundary interface number is defined as

B_k = \sum_{i \ge j} \frac{L_{ij}^k}{L_{ij}}    (39.1)

where L_{ij}^k is the minimum distance between nodes i and j within communication service channel k, and L_{ij} is the minimum distance between network nodes i and j. In graph theory terms, assuming the basic size of a communication service link is unit 1, the boundary interface numbers of the service links are normalized, giving the weight of link k:

u_k = B_k \Big/ \sum_{k=1}^{m} B_k    (39.2)
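Equations (39.1) and (39.2) can be sketched numerically. The per-channel distance ratios below are invented placeholders, and the normalization u_k = B_k / sum(B_k) follows the reconstruction of Eq. (39.2) above.

```python
# Hedged sketch of Eqs. (39.1)-(39.2): boundary interface numbers B_k and
# normalized link weights u_k. The ratios L^k_ij / L_ij per node pair are
# invented placeholder values for three service channels.
ratios = {
    "k1": [1.0, 0.5, 1.0],   # channel k1 spans three node pairs
    "k2": [1.0, 1.0],
    "k3": [0.5],
}

B = {k: sum(v) for k, v in ratios.items()}      # Eq. (39.1): sum of distance ratios
total = sum(B.values())
u = {k: b / total for k, b in B.items()}        # Eq. (39.2): normalized weights

print(B)
print(u)                                        # weights sum to 1 by construction
```

A channel whose service paths coincide with many shortest paths accumulates a larger B_k and thus a larger share of the total weight.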


39.2.2 Service Link Risk Values

The risk value of a service link refers to the degree of damage to the communication network when the link fails. Because a communication link carries various communication services, the effect of a link fault on the communication network can be treated as equivalent to the effect of the corresponding service faults on the network [9]. Based on the importance of the service link, the risk value of the service link is

R_k = W_i \cdot C_k    (39.3)

where C_k is the failure probability of link k and W_i represents the importance of service i carried by the link. Since optical cable is commonly used as the transmission medium in power communication networks, the communication channel is affected by a variety of risk factors, including the type of medium, the construction mode, and the operation time; the value of C_k can be determined comprehensively from statistical data and the actual operation status of the network. Assuming that link k carries x types of services and that the number of services of type i is n_i, the risk value of the service link can be calculated by formula (39.4):

R_k = \left( \sum_{i=1}^{x} W_i n_i \right) \cdot C_k    (39.4)

39.2.3 Service Link Importance

Service importance refers to the degree of influence on the stable operation of the power grid when a service fails or is interrupted [10]; the greater the influence, the higher the service importance. Service importance W_i is jointly determined by the service link weight and the service link risk value, that is, W_i = w_k R_k. If service link k carries x types of services, the number of services of type i is q_i, and the service importance is W_i, then the comprehensive service importance of the link is determined by:

I_k = \sum_{i=1}^{x} W_i q_i    (39.5)
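The link risk and comprehensive importance of Eqs. (39.4)-(39.5) reduce to weighted sums. The service importances, counts, and failure probability below are placeholders (the importances are borrowed from Table 39.1 later in the chapter; the counts and C_k are invented).

```python
# Hedged sketch of Eqs. (39.4)-(39.5): link risk R_k and comprehensive
# link importance I_k for one link carrying three service types.
W = [0.9761, 0.8450, 0.6380]   # importance of each service type on link k
n = [2, 1, 3]                  # number of services of each type (n_i = q_i here)
C_k = 0.001                    # assumed failure probability of link k

R_k = sum(w * ni for w, ni in zip(W, n)) * C_k   # Eq. (39.4)
I_k = sum(w * qi for w, qi in zip(W, n))         # Eq. (39.5)
print(R_k, I_k)
```

Note that with n_i = q_i the two quantities differ only by the factor C_k, which is why the example section can derive both from the same service importance vector.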


39.3 The Importance Evaluation System of Service Link Based on Fuzzy Analytic Hierarchy Process

Based on the foregoing description and analysis of service links, the power service evaluation index system is established, as shown in Fig. 39.1.

(1) Calculation of first-level index weights. The weights of the first-level indexes are calculated and the index judgment matrix is created. The security index, real-time index, and reliability index are considered comprehensively, and the fuzzy consistent matrix is constructed according to the 2.1 method:

C1    a1     a2     a3
a1    a11    a12    a13
a2    a21    a22    a23
a3    a31    a32    a33

The first-layer fuzzy matrix is obtained by mathematical transformation, and the weight ordering vector is derived.

(2) Calculation of second-level index weights. The weights of the secondary indexes are calculated and the corresponding judgment matrices are created. The selected indexes are further subdivided according to the service link evaluation system. Based on this analysis, the second-level evaluation matrices for real-time, reliability, and security are created, and the importance of each secondary index is then calculated following the method used for the primary indexes. Finally, the comprehensive fuzzy weight of each service link is obtained by the fuzzy calculation method and used as a parameter for power communication network evaluation.

Fig. 39.1 Service link evaluation system (first-level indexes: real-time, safety, and reliability indicators; second-level indexes: delay time, real-time type, real-time requirement, error rate, protection channel, bandwidth, security zone, and safety requirement)


39.4 Construction of Reliability Evaluation Model of Power Communication Network This paper studies the reliability of the power communication network on the basis of the traditional network topology. Service link importance is taken into account and incorporated into the power communication network reliability model to build the network topology model.

39.4.1 Construction of Network Model The establishment of the network model is the prerequisite for the construction of the reliability evaluation model of the entire power communication network. The network topology model can be defined as G = (V, E, S, W ), where V = {V 1 , V 2 ,…, V m }, E = {E 1 , E 2 , …, E m }. V represents the set of abstract network nodes, and E represents the set of network service links. In the network, it is assumed that the direction of communication service is bidirectional. S represents the total set of all communication services hosted in the communication network, and W represents the importance of the service link between each node pair in the communication network.

39.4.2 Power Communication Network Reliability Evaluation Model

Based on the above network model, service importance is taken as a key parameter for the reliability calculation and analysis of the power communication network. The evaluation steps are shown in Fig. 39.2. The network risk value is defined here as the weighted sum of the risk values of all links in the network; the overall risk value of the network is calculated by formula (39.6):

R_n = \sum_{k=1}^{m} w_k \cdot R_k    (39.6)

Under the premise that it is difficult to accurately estimate network link risk, network risk equilibrium is taken as one of the important indexes. At the same time, in order to reflect more directly the degree of aggregation, dispersion, and difference between individual services in the communication network, the variation coefficient from probability theory is introduced. Based on the variation coefficient, the risk equilibrium index of the communication network is further refined, and the reliability of the communication network is calculated as shown in Eq. (39.7):

V_n = \frac{1}{R_n} \sqrt{ \frac{1}{m} \sum_{k=1}^{m} I_k (R_k - R_n)^2 }    (39.7)

where I_k is the importance of service link k in the communication network, R_k is the risk value of the service link, and R_n is the network risk degree.

Fig. 39.2 Network reliability evaluation process (real-time, safety, and reliability indicators feed the service link importance evaluation through first- and second-level fuzzy matrices and fuzzy computation; the resulting service link importance and network risk value yield the network reliability)
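Equations (39.6) and (39.7) can be sketched numerically. All vectors below are invented placeholders, and the sketch assumes the 1/m normalization typical of a coefficient of variation, since the exact normalization in the source rendering of Eq. (39.7) is unclear.

```python
# Hedged sketch of Eqs. (39.6)-(39.7): overall network risk R_n and the
# variation-coefficient-based risk equilibrium V_n, for a toy 3-link network.
import math

w = [0.5, 0.3, 0.2]          # link weights u_k (placeholders)
R = [0.002, 0.001, 0.004]    # link risk values R_k (placeholders)
I = [0.9, 0.8, 0.6]          # link importances I_k (placeholders)
m = len(w)

R_n = sum(wk * rk for wk, rk in zip(w, R))                  # Eq. (39.6)
V_n = (1.0 / R_n) * math.sqrt(
    sum(ik * (rk - R_n) ** 2 for ik, rk in zip(I, R)) / m   # Eq. (39.7), assumed 1/m
)
print(R_n, V_n)
```

A smaller V_n indicates that risk is spread more evenly across the links; a single high-risk link inflates the importance-weighted deviation term and hence V_n.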

39.5 The Example Analysis In order to verify the effectiveness of the model, a simulation experiment was conducted on a local basic power communication network in a province. The basic network consists of 8 communication nodes and 12 communication links. Based on graph theory, the abstract power communication network model is shown in Fig. 39.3, where the network service link set is L = {L 1 , L 2 , …, L 12 } and the link interface set is B = {B1 , B2 , …, B12 }; the dotted lines in the network diagram are the service channels. The details of each service are listed in Table 39.1.

39 Reliability Evaluation Model of Power Communication …


Fig. 39.3 Topology map of communication network service connection

Table 39.1 Typical service importance

ID    Service type                                           Service importance
S1    Relay protection                                       0.9761
S2    Safety, stability, and control                         0.9438
S3    Dispatching data network                               0.8450
S4    Transmission and transformation condition monitoring   0.9061
S5    Distribution automation                                0.6380
S6    Marketing business management                          0.8036
S7    Customer liaison system                                0.6012
S8    Power quality management                               0.3655
S9    Conference telephone system                            0.5390
S10   Administrative telephone service                       0.4639

39.5.1 Construction of Network Model

According to the above method, the business importance vector is calculated as r = {0.9761, 0.9438, 0.8450, 0.9061, 0.6380, 0.8036, 0.6012, 0.3655, 0.5390, 0.4639}, and the failure probability of every service link CL in the network is set to 0.1%. From formula (39.4), the link risk value vector is RL = {0.002683, 0.000916, 0.001471, 0.000647, 0.003424, 0.000647, 0.000823, 0.000915, 0.000464, 0.001288, 0.001380, 0.000915}. From formula (39.1), the link weight vector is w = {0.0933, 0.1265, 0.0401, 0.0865, 0.0665, 0.0548, 0.1320, 0.1503, 0.0872, 0.0624, 0.0506, 0.0843}. Taking the above calculation results as parameters, the overall network risk value is calculated by the overall-risk formula to be about 0.00134. Further, substituting the calculated results into formula (39.7), the network risk equilibrium degree is about 0.6303. The calculated index values of the network are shown in Table 39.2.

Table 39.2 Network reliability calculation

ID    Service link risk value   Link reliability
L1    0.0933                    0.4872
L2    0.1265                    0.1247
L3    0.0401                    0.7241
L4    0.0865                    0.5925
L5    0.0665                    0.5845
L6    0.0548                    0.6124
L7    0.0320                    0.6521
L8    0.1503                    0.0875
L9    0.0872                    0.5747
L10   0.0624                    0.5934
L11   0.0506                    0.6541
L12   0.0843                    0.6054

39.5.2 Network Reliability Analysis

According to the calculation and analysis of network reliability in Figs. 39.4 and 39.5, it can be seen that the service links have an impact on network reliability: the higher the importance of a service link, the lower the tolerable risk of network failure and the greater its impact on network reliability. If high-importance service links fail, the network may be temporarily paralyzed. In the risk calculation for the constructed network, except for L2 and L8, most links are safe, with risk values less than 0.1, which conforms to the actual risk status of the communication network. According to Fig. 39.4, compared with the other links in the network, the abnormal risk values of L2 and L8 are relatively large.

Fig. 39.4 Link risk and reliability


Fig. 39.5 Link risk value distribution

As can be seen from Fig. 39.4, the weight of link L2 is the largest among all links, and its risk value is higher than that of the other links. On the other hand, although the weight of L8 is small, it carries the largest number and variety of services, resulting in a risk value as high as 0.1503. Therefore, it can be concluded that links L2 and L8 have relatively high risk values and are unreliable service links in the communication network.

39.6 Conclusion

Based on the communication network service link, this paper proposes a link contribution value model from the point of view of network topology. Based on the importance of the service link, a more comprehensive and effective reliability evaluation model is established to analyze the reliability of the communication network. Experiments show that the proposed evaluation method can comprehensively evaluate network reliability and operation status and provide targeted guidance for network risk control.

References

1. Ying, Z.: Power communication network reliability analysis and evaluation methods. J. Telecommun. Electr. Power Syst. 32(226), 13–16 (2011)
2. Wang, L., Qu, Z., Li, Z.: The design and implementation of attack path extraction model in power cyber physical system. J. JCM 11(9), 834–840 (2016)
3. Xing, N.Z., Guo, J.Q., Yu, R.: Electric power communication network optimization algorithm based on the equivalent network. In: Applied Mechanics and Materials, vol. 596, pp. 653–658. Trans Tech Publications (2014)
4. Zhou, J., Liu, G., Zhao, Z., et al.: Research and simulation on the resource optimization for the power communication backbone network. J. Opt. Commun. Technol. 2 (2011)
5. Liu, J.X., Chen, S.D., Wang, Y.G.: Study on node importance of complex network based military command control networks. In: International Conference on Machine Learning and Cybernetics, vol. 3, pp. 920–923. IEEE (2012)
6. Tang, F., Wang, B., Zha, X., Ma, Z., Shao, Y.: Power system transient stability assessment based on two-stage parallel hidden Markov model. Proc. Chin. Soc. Electr. Eng. 33(10), 90–97 (2013)
7. Wollschlaeger, M., Sauter, T., Jasperneite, J.: The future of industrial communication: automation networks in the era of the internet of things and industry 4.0. IEEE Ind. Electron. Mag. 11(1), 17–27 (2017)
8. Tong, X.Y., Wang, X.R.: Inference and countermeasure presupposition of network attack in incident on Ukrainian power grid. J. Autom. Electr. Power Syst. 40(7), 144–148 (2016)
9. Kar, P., Roy, A., Misra, S., Obaidat, M.S.: On the effects of communication range shrinkage of sensor nodes in mobile wireless sensor networks due to adverse environmental conditions. IEEE Syst. J. 12(3), 2048–2055 (2018)
10. Liu, N., Yu, X., Zhang, J.: Coordinated cyber-attack: inference and thinking of incident on Ukrainian power grid. J. Autom. Electr. Power Syst. 40(6), 144–147 (2016)

Chapter 40

Optimal Safety Link Configuration Method for Power Communication Network Considering Global Risk Balance

Liu Xiaoqing, Ma Qingfeng, Wang Tingjun, Ma Shangdi, Liu Xuebing and Li Shanshan

Abstract Under the smart grid environment, the new operation and management mode of the electric power communication network requires frequent cooperation and business transmission between systems, which results in intensive information transmission problems. Therefore, the study of security link configuration methods that reduce the risk of electric power communication network business channels has become a hot issue in the field. This paper presents a new optimal security link configuration method for the electric power communication network considering global risk balance. First, the business security link configuration model of the electric power communication network is given, and the business security link configuration problem is described and mathematically modeled. Then, the fuzzy analytic hierarchy process (FAHP) is used to construct the business importance degree evaluation and ranking model. Finally, the improved Dijkstra algorithm and PSO algorithm are used to solve the multi-objective optimization problem, and the optimal security link is configured. The simulation experiments show that the proposed method can effectively reduce the global risk of the electric power communication network while retaining availability and effectiveness.

Keywords Electric power communication network · Global risk balance · Business importance degree · Security link selection

40.1 Introduction

In recent years, with the rapid development of China's smart grid, the power communication network has been continuously extended, and its coverage now spans the key links of the grid. Due to the complexity of power communication services, the risk level and planning strategy of the power communication network differ from those in other fields, and existing research results from other fields cannot be applied directly. Literature [1] proposes a set of reliability evaluation methods for safety links of power communication networks considering global business risk balance, but it does not give specific methods for optimal safety link planning and selection. Literature [2] and [3, 4] give network topology optimization methods considering node importance, capacity demand relationships, and fiber-optic cable sharing. Literature [5] designs a routing allocation mechanism that can reduce the overall security risk of power communication networks. Literature [6, 7] analyzes the vulnerability of power communication networks based on complex network theory. However, these works do not evaluate the reliability of the safety link from the service layer of the power communication network. This paper studies the global risk balance problem of the optimal safety link of the power communication network and builds a safety link selection model that calculates the global risk of the network.

L. Xiaoqing (B): Jilin Institute of Chemical Technology, Jilin City, China. e-mail: [email protected]
M. Qingfeng · W. Tingjun · M. Shangdi · L. Xuebing · L. Shanshan: State Grid Baoding Electric Power Supply Company, Baoding City, China
© Springer Nature Singapore Pte Ltd. 2020. J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_40

40.2 Power Communication Network Service Security Link Selection Model

40.2.1 Problem Description

In the power communication network environment, the problem of security link selection for power communication services means that, in order to avoid a globally concentrated risk distribution while ensuring that the whole network is safe and reliable [8–10], services are spread evenly across the entire communication network according to their risk differences when the service security link is selected, so as to maintain a global risk balance. Figure 40.1 shows a power communication network service distribution map, which can be expressed as G(V, E), wherein the vertex set V = {v1, v2, …, v9} is the set of service safety link nodes, the edge set E = {eij, i ∈ Z+, j ∈ Z+, j > i} is the set of channels between the link nodes, and the dotted lines with arrows, sk, k ∈ Z+, are the communication services existing between the nodes. It can be seen from the figure that the channel e15 .

Fig. 40.1 Power communication network service distribution map

40.2.2 Risk Data Model

The network global risk function refers to the overall risk level of the power communication network, which is the sum of the node risks and the edge risks, as shown in Eq. (40.1):

P = Σ_{i=1}^{m} V(vi) + Σ_{i≠j}^{n} M(eij)    (40.1)

where P is the global risk of the network, V(vi) is the risk of node vi, and M(eij) is the risk of edge eij.

If node vi fails or an abnormality occurs, the traffic passing through the node will be interrupted. According to the degree of node aggregation in the communication network, the severity of the impact is expressed by Eq. (40.2):

V(vi) = (2 · Li / (ki · (ki − 1))) · Σ_{t=1}^{Q(vi)} Wt    (40.2)

If edge eij fails or an abnormality occurs, the severity of the impact of the service interruption, according to the degree of concentration of each node in the communication network, is expressed by Eq. (40.3):

M(eij) = ( Σ_{j=1}^{n} (etj + ejt) / Σ_{i=1}^{n} Σ_{j=1}^{n} eij ) · Σ_{t=1}^{Q(eij)} Wt    (40.3)

where ki, the number of links connected to node i, is the degree of node i, and Wt is the weight of the t-th service on node vi or edge eij, determined according to the importance of the service. The specific calculation method is given in the business importance evaluation in the next section.


The bandwidth function of channel eij between nodes i and j is:

NB = bandwidth(eij)    (40.4)

The delay function is shown in Eq. (40.5):

De = Σ_{i=1}^{m} de(vi) + Σ_{i=1, j=1, j>i}^{n} de(eij)    (40.5)

The service channel availability function, i.e., the availability of the nodes and edges carrying communication services, is shown in Eq. (40.6):

Q = Π_{i=1}^{Nsk} B(vi) · Π_{i=1, j=1, j>i}^{Nsk} B(eij)    (40.6)

So the data model of the global risk balance problem for the service-oriented power communication network is obtained, as shown in Eq. (40.7):

⎧ Min(P)
⎪ Max(Q)
⎨ NB ≤ B
⎩ De ≤ D        (40.7)

The formula shows that the power communication network has the highest availability when every bandwidth satisfies NB ≤ B and the delay satisfies De ≤ D. On this basis, the security link with the lowest global risk can be sought.
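A minimal sketch of how the data model of Eq. (40.7) can be applied: candidate links are first filtered by the bandwidth and delay constraints, then the Min(P)/Max(Q) objectives select among the survivors. The dictionary fields and numbers below are illustrative assumptions, not values from the paper.

```python
def feasible(links, B, D):
    """Keep candidate links that satisfy the constraints of Eq. (40.7):
    bandwidth NB <= B and delay De <= D."""
    return [l for l in links if l["NB"] <= B and l["De"] <= D]

def best_link(links, B, D):
    """Among feasible candidates, minimise global risk P first, then
    maximise availability Q (the Min(P)/Max(Q) objectives)."""
    cands = feasible(links, B, D)
    return min(cands, key=lambda l: (l["P"], -l["Q"])) if cands else None
```

In a lexicographic reading of the two objectives, risk is minimised first and availability breaks ties; the paper instead solves the multi-objective problem with an improved Dijkstra and PSO algorithm.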

40.3 Optimal Safety Link Selection Method Considering Global Risk Balance

In this paper, the fuzzy analytic hierarchy process (FAHP) [11–13] is used to construct the business importance evaluation and ranking model. The model treats the shortest path and the selection of important business links as optimization constraints.

(1) Establishment of the hierarchy analysis structure of power communication business importance. Based on a comprehensive consideration of the power communication business, this paper divides business importance into three levels according to the different business attributes. The hierarchy analysis structure of power communication business importance is shown in Fig. 40.2.

Fig. 40.2 Power communication business importance hierarchy analysis structure

(2) Fuzzy matrix construction and consistency verification for the importance of power communication business. According to the hierarchy analysis structure and expert judgment information, the business importance fuzzy matrix R is constructed; the business importance scaling method is introduced first, as shown in Table 40.1. Assuming that a certain level factor C is related to its next-layer factors m1, m2, …, mn, the fuzzy judgment matrix is:

R = | r11  r12  …  r1n |
    | r21  r22  …  r2n |
    | …    …    …  …   |
    | rn1  rn2  …  rnn |        (40.8)

where rii = 0.5, i = 1, 2, …, n; rij = 1 − rji, i, j = 1, 2, …, n; and, for consistency, rij = rik − rjk + 0.5, i, j, k = 1, 2, …, n. Each rij is a triangular fuzzy number, rij = (aij, bij, cij), where aij, bij, and cij are the worst, most likely, and best estimates of the relative importance of business importance factors ci and cj with respect to the upper-layer business importance factor C. The fuzzy judgment matrix reflects, to some extent, whether human logical judgment is consistent: if T is slightly more important than C, and C is obviously more important than M, then it follows that T is definitely more important than M.

Table 40.1 Power communication business importance marking method

Scaling          Meaning
0.9              One business importance factor is more important than the other
0.7              One business importance factor is stronger than the other
0.5              One business importance factor is significantly more important than the other
0.3              One business importance factor is slightly more important than the other
0.1              Two business importance factors are equally important
0.2/0.4/0.6/0.8  Intermediate values of the above adjacent judgments

(3) Ranking of importance of power communication business. By sorting the above fuzzy judgment matrices, the relative weight of each layer with respect to the upper-layer element can be obtained from the set of fuzzy judgment matrices specified by p experts. The specific steps are as follows:
➀ Summarize the preference information of the p experts and calculate the fuzzy comprehensive business importance judgment matrix;
➁ Normalize the fuzzy evaluation value of each individual business importance factor to obtain the relative weight vector of the fuzzy business importance factors;
➂ Compare the weight values pairwise and write the probability matrix.
Through the above steps, formulas (40.9) and (40.10) are obtained, and formula (40.9) can be solved to obtain the order of importance:

rij = 0.5 + a(wi − wj)    (40.9)

In summary, this paper first uses FAHP to rank all services in order of importance from high to low, and then configures the optimal security link for the services with higher importance first. In this way, the global risk of the power communication network is kept lowest.

⎧ 2a²(n−1)w1 − 2a²w2 − … − 2a²wn + λ = a · Σ_{j=1}^{n} (r1j − rj1)
⎪ −2a²w1 + 2a²(n−1)w2 − … − 2a²wn + λ = a · Σ_{j=1}^{n} (r2j − rj2)
⎨ …
⎪ −2a²w1 − 2a²w2 − … + 2a²(n−1)wn + λ = a · Σ_{j=1}^{n} (rnj − rjn)
⎩ w1 + w2 + … + wn = 1        (40.10)
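For fuzzy complementary judgment matrices of this type (rij + rji = 1, rii = 0.5), a commonly used closed-form weight formula exists; it is consistent with Eq. (40.9) for the parameter choice a = n − 1. This sketch uses that closed form as an assumption rather than solving the least-squares system (40.10) directly, as the paper does.

```python
def fahp_weights(R):
    """Weights from a fuzzy complementary judgment matrix R
    (rij + rji = 1, rii = 0.5), via the common closed-form ranking
    formula  wi = (sum_j rij + n/2 - 1) / (n * (n - 1))."""
    n = len(R)
    return [(sum(row) + n / 2 - 1) / (n * (n - 1)) for row in R]
```

For a perfectly consistent matrix built from rij = 0.5 + (n − 1)(wi − wj), the formula recovers the original weights exactly, and the weights always sum to 1.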

40.4 Simulation Experiment and Analysis

A simulation experiment on the power communication network shown in Fig. 40.3 is carried out using the optimal safety link selection method proposed in this paper. The figure contains 9 nodes and the channels between the link nodes. A total of 34 communication services are generated between the nodes, and the business importance is ranked according to the evaluation method in Sect. 40.2.1. The availability values of the nodes are randomly generated in the range [0.90, 0.99].


Fig. 40.3 Power communication simulation network structure

Experiment 1: Comparison of secure link selection. The 34 business importance values are divided into four intervals, namely [1~30], [30~60], [60~90], and [90~100]; Fig. 40.4 shows the edge risk distribution in the power communication network topology. Under the proposed method, relatively few services fall in the [90~100] interval. The LB and AAR algorithms stand in contrast: although most of the services carried on the edges are likewise less important or medium-importance communication services, these two algorithms still place services of [90~100] importance on them.

Fig. 40.4 Edge risk degree distribution map in electric power communication network topology

Experiment 2: Comparison of transmission delays of different links. The test is carried out over 60 min, and the service execution efficiency of different links is compared by collecting the service transmission time and average delay information. As can be seen from Fig. 40.5, the average delay of the optimal safety link selected by the DPSO algorithm is significantly lower.

Fig. 40.5 Service transmission delay

Experiment 3: Comparison of iterative calculation time for safety links. As shown in Fig. 40.6, over the 200 iterative calculations, the global optimization ability of the DPSO algorithm is the strongest, while the other two methods suffer from local optimal solutions. The simulation experiments show that the DPSO algorithm proposed in this paper has high global search capability.

Fig. 40.6 Iterative calculation performance of secure link calculation


40.5 Conclusion

This paper proposes a new DPSO-based method for optimal safety link selection in power communication networks considering global risk balance. First, the security link selection model of the power communication network is given, and the problem of business security link selection is described and mathematically modeled. Then, the business importance evaluation and ranking model is constructed by FAHP. Finally, the improved Dijkstra algorithm and particle swarm optimization (PSO) algorithm are used to solve the multi-objective optimization problem and configure the optimal safety link.

References

1. Ziyan, Z., Xi, C., Jianming, L.: Issues of establishing a reliable management system in power telecommunication network. J. Electr. Power Inf. Commun. Technol. 27(168), 58–61 (2006)
2. Shi, J., Zong, R., Liu, Y.: Study on the invulnerability and topology optimization of power communication network. J. Telecommun. Electr. Power Syst. 30(203), 11–13 (2009)
3. Zhou, J., Liu, G., Zhao, Z., Chen, X.: A network optimization method based on resource sharing of power optical cable lines. J. Power Syst. Technol. 35(5), 199–203 (2011)
4. Yuan, L.: Research on SDH network optimization in power telecommunication network. J. Telecommun. Electr. Power Syst. 29(3), 33–35 (2008)
5. Qing-tao, Z., Xuesong, Q., Shaoyong, G., Feng, Q., Luoming, M.: Risk balancing based routing mechanism for power communications service. J. Electron. Inf. Technol. 35(6), 1318–1324 (2013)
6. Guo, J., Wang, D.R.: Vulnerability analysis on power communication network based on complex network theory. J. Telecommun. Electr. Power Syst. 30(9), 6–10 (2009)
7. Zhou, J., Xiong, S.Q., Su, B.: A quantified method to evaluate operational quality of power communication network and its application. J. Power Syst. Technol. 36(9), 168–173 (2012)
8. Zhang, M.Q., Xu, M., Qing, W.: Research of the best repair path based on an improved particle swarm optimization in power communication network. J. Sci. Technol. Eng. 8(22), 5990–5995 (2008)
9. Chang, H.: A multipath routing algorithm for degraded-bandwidth services under availability constraint in WDM networks. In: 26th International Conference on Advanced Information Networking and Applications Workshops (WAINA 2012), Fukuoka, pp. 881–884 (2012)
10. Zhao, Z., Liu, J.: A new communication services optimization method based on services risk balancing degree for power system. In: Computer Science and Service System (CSSS), Nanjing, pp. 994–997 (2011)
11. Huang, W.: Adaptive particle swarm optimization algorithm and the application research. Zhejiang University, Hangzhou (2006)
12. Du, S., Dai, B.: Hierarchical division of security risk in electric power communication network. J. New Technol. 20, 117 (2013)
13. Jin, X.: Research on service reliability analysis method of electric power communication transmission network. North China Electric Power University, Beijing (2011)

Chapter 41

Design of Data Acquisition Software for Steam Turbine Based on Qt/Embedded

Han Zhang, Hongtao Yin and Ping Fu

Abstract This paper puts emphasis on a design of data acquisition software for a steam turbine. The software is developed with Qt/Embedded and runs on an embedded computer. It aims at improving the safety of the steam turbine, whose operating state can be indicated by physical quantities such as temperature, pressure, or the rotational speed of the turbine blades. The software acquires data on these quantities and analyzes it; according to the acquired data, it determines whether the steam turbine is in a normal state. When the steam turbine is running in an abnormal state, the turbine is forced to take specific actions by the output signals of this software. Besides, the software offers users a friendly graphical interface, on which the data and signals are shown in real time. It has been tested and verified that the design works normally.

Keywords Data acquisition software · Qt/embedded · Embedded development

41.1 Introduction

The steam turbine is a rotary power machine that converts the energy of steam into mechanical work, and it accounts for a large part of the world's power generation [1]. Nowadays, electrical equipment is becoming large-scale, complex, and automated, and people place a higher focus on the safety [2] of operation of power plant equipment. To make sure the steam turbine runs safely, there are many alternative approaches to data acquisition. One design is based on the PCI bus [3]; it realizes data transmission between a grating data acquisition card and a PC over PCI. The PLC is a microprocessor-based controller widely used to control machines and processes [4]. Krzysztof Sieczkowski and Tadeusz Sondej proposed a MATLAB-based data acquisition method and tested it with three communication interfaces: UART, Bluetooth, and WiFi [5]. Besides, a VME-based data acquisition and control system has been designed and implemented for gyrotron operation [6]. Common development environments under Linux include Eclipse, Qt/Embedded, and vim. Qt/Embedded is a C++-based framework for developing graphical interfaces [7]; it has good cross-platform features and offers mature support for designers. A handheld data acquisition terminal with a touch screen was developed in Qt/Embedded [8], and there is also a design of a smart home control system whose GUI is based on Qt/Embedded [9]. According to the requirements of this software, Qt/Embedded is selected as the development environment in this design.

The rest of this paper is organized as follows. Section 41.2 is a brief introduction to the structure of the system. Section 41.3 provides an overview of the mechanisms of Qt/Embedded used in the design. The software design is described in Sect. 41.4. Section 41.5 presents the test results and related analysis.

H. Zhang · H. Yin · P. Fu (B): Automatic Test and Control Institute, Harbin Institute of Technology, Harbin, China. e-mail: [email protected]
H. Zhang, e-mail: [email protected]; H. Yin, e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2020. J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_41

41.2 Structure of System

The structure of the overall data acquisition system is shown in Fig. 41.1. The core of the system is an embedded computer whose control center is an ARM chip; the software runs on this embedded computer. Besides, there are digital output and input modules, a PROFIBUS module, an A/D module, and a chassis. The above-mentioned modules are plugged into the slots on the chassis. Acquired data can be sent to the upper computer by PROFIBUS. For this system, the embedded computer, equipped with a Linux system, is the control center. Transmission of data between modules relies on the VME bus. The data acquisition system has a 16-channel analog current input to receive data from the steam turbine. It also receives binary signals that indicate the state of the system via eight channels. The system controls components of the steam turbine by digital output signals.

Fig. 41.1 Structure of data acquisition system

41.3 Mechanisms in Qt/Embedded

There are three significant mechanisms used in this design that help the program run as intended: threads, semaphores, and signal/slot.

A thread is the smallest unit of processor scheduling. A thread can access its process's memory space and resources, and share them with other threads in the same process. In Qt/Embedded, the class QThread lets users work with threads.

The semaphore mechanism is used for communication between threads in Linux. For threads, a semaphore represents a kind of resource. For example, suppose variable S is declared as a semaphore. If S is greater than zero, the value of S is the amount of the resource available to threads. Similarly, when S is equal to zero, there is no resource for threads to occupy, and threads that need this resource will be in the waiting state.

The signal/slot mechanism is a high-level program interface. It is used to replace the callbacks found in other development environments, which makes cooperation among the various components simple and user interface development easy [10]. A signal sender does not care who receives the signal; a receiver receives the signal with a slot function and does not care who sent it. The function connect associates a signal with a slot, which keeps the communication between objects simple and clear.
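The counting-semaphore behavior described above can be illustrated in a few lines. The sketch below uses Python's threading.Semaphore purely as a language-neutral illustration (the software itself uses Qt/Embedded and Linux semaphores); the worker stands in for taskAcquire waiting for the timer to release a semaphore before each cycle.

```python
import threading

# S = 0: no resource available, so the worker blocks on acquire().
sem = threading.Semaphore(0)
results = []

def worker(cycles):
    for i in range(cycles):
        sem.acquire()        # wait until the "timer" grants one cycle
        results.append(i)    # stand-in for one data-acquisition pass

t = threading.Thread(target=worker, args=(3,))
t.start()
for _ in range(3):           # stand-in for three 10 ms timer timeouts
    sem.release()            # S += 1, unblocking one waiting acquire()
t.join()
```

After three releases, the worker has completed exactly three cycles, mirroring how each timer timeout permits one acquisition pass.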

41.4 Design of Software

41.4.1 Structure of Data Acquisition Software

The overall structure of the data acquisition software is shown in Fig. 41.2. There are four layers in the structure [11]. The top is the application layer, which contains the applications. The next layer is the operating system, composed of the embedded Linux kernel and its graphical user interface. The third layer is the device drivers; the software relies on drivers to write/read data to/from physical addresses. The bottom layer is the hardware, which comprises all the devices.

Fig. 41.2 Structure of data acquisition software (application / Qt for Embedded Linux / Linux kernel / device drivers / hardware)

Fig. 41.3 Structure of software (the main program creates the threads taskControl, taskAcquire, taskPRO, and taskMonitor on top of the drivers)

The structure of the application program is shown in Fig. 41.3. According to the

requirements of this design, the main program creates four threads to achieve four tasks, respectively. TaskAcquire acquires data from the A/D module, which converts analog signals to digital signals; the data represent the temperature, pressure, or speed of the steam turbine. It then converts the binary data to decimal values, which are sent to the other threads. TaskControl receives data from taskAcquire, analyzes it according to rules derived from industrial requirements, and writes data into registers to control devices on the steam turbine via digital output signals. TaskPRO sends the acquired data to the upper computer via PROFIBUS. TaskMonitor offers users a panel displaying the values and state of the steam turbine. TaskAcquire, taskControl, and taskPRO all interact with the drivers.

Table 41.1 Structure of arrays

Name               Type of array    Length
globalADValue[]    Float            20
globalDIValue[]    Unsigned char    16
globalSWValue[]    Unsigned char    8
globalSendPlc[]    Float            21

Table 41.2 Interface to deliver data

Interface type    Type of data            Sender      Receiver
Message queue     SAMPLEDATA (structure)  Thread 1    Thread 2
Shared memory     SAMPLEDATA (structure)  Thread 1    Thread 3, Thread 4

Thread 1: taskAcquire; Thread 2: taskControl; Thread 3: taskPRO; Thread 4: taskMonitor

41.4.2 Data Structure

Before programming, first define the variables for the data to be used. The arrays that store the raw input data are named according to the devices the data comes from. The details of the arrays are shown in Table 41.1. globalADValue[0 ~ 20] represents the collected temperature, pressure, and speed data, which come from the A/D module. globalDIValue[0 ~ 16] represents a variety of motion signals; these data come from the digital output module. globalSWValue[0 ~ 7] represents alarm signals; these data come from the digital input module. The internal interface of the data acquisition software is mainly based on the inter-thread communication mechanisms in Linux. Data communication between tasks is based on shared memory and a message queue. The internal interface is shown in Table 41.2. SAMPLEDATA is a self-defined structure used to save processed data.
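The message-queue handoff of Table 41.2 can be sketched as follows. SampleData is a hypothetical stand-in for the SAMPLEDATA structure (its real field layout is not given in the chapter), and Python's queue.Queue stands in for the Linux message queue used between taskAcquire and taskControl.

```python
import queue
from dataclasses import dataclass

# Hypothetical stand-in for the self-defined SAMPLEDATA structure.
@dataclass
class SampleData:
    ad: list   # processed analog values (cf. globalADValue[])
    di: list   # digital input states    (cf. globalDIValue[])

mq = queue.Queue()  # message queue: Thread 1 (taskAcquire) -> Thread 2 (taskControl)

def task_acquire(mq):
    """Producer side: push one processed sample into the queue."""
    mq.put(SampleData(ad=[21.5, 3.2], di=[0, 1]))

def task_control(mq):
    """Consumer side: pop the next sample for analysis."""
    return mq.get()

task_acquire(mq)
sample = task_control(mq)
```

A queue decouples the producer's 10 ms acquisition cycle from the consumer's analysis rate, which is the usual reason to prefer it over direct shared-variable access.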

41.4.3 Tasks of the Software

The main program first gets the user's parameters. It not only creates and starts the threads but also sets a timer that controls taskAcquire. The timer emits a timeout signal every 10 ms, and this signal is bound to the slot function timer_upgate, which releases the semaphores that taskAcquire and taskPRO wait on to continue. The flow of the main program is shown in Fig. 41.4.

Fig. 41.4 Flow of main program

The thread that acquires data from the modules is named taskAcquire. The raw data read from the registers is in binary form, and taskAcquire performs a series of mathematical calculations to translate it into decimal form. The flowchart of this thread is shown in Fig. 41.5. To ensure the reliability of the data, each channel is read 10 times; the maximum and minimum values are discarded, and the average of the remaining eight values is taken.

Fig. 41.5 Flow of taskAcquire
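The averaging rule used by taskAcquire (read each channel 10 times, discard the maximum and minimum, average the remaining eight) can be sketched as:

```python
def channel_average(readings):
    """Average of 10 raw readings for one channel after discarding the
    single maximum and minimum, as taskAcquire does for reliability."""
    if len(readings) != 10:
        raise ValueError("expected 10 readings per channel")
    trimmed = sorted(readings)[1:-1]    # drop one min and one max
    return sum(trimmed) / len(trimmed)  # mean of the remaining 8
```

This trimmed mean suppresses single-sample glitches (e.g., a spike on the analog input) without the lag of a long moving average.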


The thread that controls devices is named taskControl. It analyzes the information from taskAcquire and outputs digital signals. The thread that transfers information to the upper computer via PROFIBUS is named taskPRO; it receives data from taskAcquire and sends it to the computer. The thread named taskMonitor handles information display; it shows users the real-time data. Qt/Embedded provides a visual interface for GUI design, in which the designer simply drags components and sets their parameters.

41.4.4 Drivers

In Linux, application programs communicate with peripheral equipment through device drivers. To transfer data between drivers and applications, Linux offers special functions: the application does not care how the system reads or writes, it just passes the driver the data or an address. Briefly speaking, these driver functions hide the details and provide a simple interface. In the driver program, a structure named file_operations is defined, which includes declarations of functions such as open(), read(), write(), ioctl(), and close(). We implement these functions in detail to transfer data between the application and the peripheral equipment.

41.5 Test Result

We test the software from two aspects: first, its panel should display information in real time; second, its output signals should be consistent with the program logic. We use a simulator instead of a steam turbine to supply the data acquisition system with all kinds of input signals. Figure 41.6 shows the main panel of the software. It displays the values acquired from the data acquisition system, covering all the information we want to show, so the real-time display function is achieved. In Fig. 41.6, the turbine speed is higher than overspeed values I and II, the self-test signal is valid, and the protection signal is invalid, so the output signals overspeed value I and overspeed value II are valid. This matches the logical determination. To test the determination logic of the software, we supply the data acquisition system with different analog inputs that meet the conditions in Table 41.3, then observe the output signals and record them in Table 41.3. In Table 41.3, signals A and B are input digital signals, and conditions A–F represent conditions on the related data.


Fig. 41.6 Interface of information display

From Table 41.3, we can see that the actual output is the same as the ideal output. When the input signals or acquired data meet specific conditions, the software sends the corresponding output signal (valid or invalid). We can therefore conclude that the judgment logic of the software is correct.

41.6 Conclusion

This paper introduces the design of data acquisition software for steam turbines based on Qt/Embedded on Linux. There are four threads: data acquisition, device control, data transmission, and data display. The design is tested from two aspects: real-time information display and program output logic. We conclude that the software displays data in real time and that its output signals are correct. Generally speaking, this software can help a steam turbine run safely.


Table 41.3 Record of test

Conditions | Ideal output signal | Ideal value | Actual output signal | Actual value
Signal A: valid | Detection | Valid | Detection | Valid
Signal A: invalid | Detection | Invalid | Detection | Invalid
Signal B: invalid | Turning gear | Valid | Turning gear | Valid
Signal B: valid and Signal A: valid | Turning gear | Invalid | Turning gear | Invalid
Signal A: valid and Signal B: invalid | Turning gear | Valid | Turning gear | Valid
Condition A | Oil pressure low value II | Valid | Oil pressure low value II | Valid
Condition B | Oil pressure low value I | Valid | Oil pressure low value I | Valid
Condition C | Over speed value I | Valid | Over speed value I | Valid
Condition D | Over speed value I, Over speed value II, Shutdown | Valid | Over speed value I, Over speed value II, Shutdown | Valid
Condition E | Over speed value I, Over speed value II | Invalid | Over speed value I, Over speed value II | Invalid
Condition F | Oil pressure low value I | Invalid | Oil pressure low value I | Invalid

Signal A: self-test; Signal B: protection
Condition A: pressure of oil pipeline is lower than oil pressure low value II
Condition B: pressure of oil pipeline is lower than oil pressure low value I
Condition C: rotational speed of turbine is higher than the over speed value I
Condition D: rotational speed is higher than the over speed value II
Condition E: rotational speed of turbine is lower than the overspeed value I
Condition F: pressure of oil pipeline is higher than oil pressure low value II

References 1. Rossi, P., Raheem, A., Abhari, R.S.: Numerical model of liquid film formation and breakup in last stage of a low-pressure steam turbine. J. Eng. Gas Turbines Power (2017) 2. Zhou, G., et al.: Performance monitoring of steam turbine regenerative system based on extreme learning machine. In: 2017 Prognostics and System Health Management Conference (PHMHarbin), pp. 1–7. Harbin (2017) 3. Bjelica, O., Lale, S.: Development environment for monitoring, data acquisition and simulation of PLC controlled applications. Telfor J. 6(1), 912–915 (2014)


4. Huang, Y.: A communication scheme between grating data acquisition card and personal computer based on PCI bus. In: International Conference on Electronic Measurement & Instruments, pp. 224–228. IEEE (2011) 5. Sieczkowski, K., Sondej, T.: A method for real-time data acquisition using Matlab software. In: Mixed Design of Integrated Circuits and Systems (MIXDES), 2016 International Conference, pp. 437–442. IEEE (2016) 6. Patel, J., Patel, H., Rajanbabu, N., et al.: VME based data acquisition and control system for Gyrotron based ECRH system on SST-1. In: Fusion Engineering, pp. 1–4. IEEE (2013) 7. Fuying, T.: Soft keyboard design and implementation based on Linux QT. Comput. Modern (12), 179–181 (2011) 8. Zhang, X., Zhang, J., Meng, J.: Research on the touch screen of embedded handheld data acquisition terminal based on Qt. Embedded, 476–479 (2012) 9. Liang, Y., Wan, S.: The design of smart home control system. In: International Conference on Instrumentation and Measurement, Computer, Communication and Control, pp. 311–314. IEEE (2014) 10. Wuhan: Research on portable high speed data acquisition, measure and control system. Chin. J. Sci. Instrum. 27(8), 956–960 (2006) 11. Sun, M., Wu, S.: A software development of DICOM image processing based on QT, VTK and ITK. In: IEEE International Conference on Medical Imaging Physics and Engineering, pp. 231–235. IEEE (2014)

Chapter 42

A Reliable Data Transmission Protocol Based on Network Coding for WSNs Ning Sun, Hailong Wei, Jie Zhang and Xingjie Wang

Abstract Nowadays, wireless sensor network (WSN) technology is applied in more and more fields, such as medicine, the military, and environmental protection, and WSNs increase the efficiency of people's access to information. However, the bandwidth of a wireless device is limited, and when the network is unreliable, a high packet loss rate occurs. Wireless sensor technology is used in various fields to achieve network load balancing, prolong network lifetime, and improve network coverage; achieving these goals requires better network quality, of which reliability and energy consumption are important evaluation factors. This paper proposes an improved reliable transmission mechanism based on network coding in WSNs and demonstrates, through simulation experiments, the superiority of the mechanism in reliable transmission rate and energy consumption. Keywords Wireless sensor networks · WSNs · Network coding · Transmission reliability · Multi-path routing

42.1 Introduction

Wireless sensor networks (WSN) [1] have grown rapidly with the development of wireless communication technology. They have attracted the attention of electronic information workers owing to advantages such as low consumption, low cost, and wide applicability, and therefore hold a remarkable position. Although WSN

N. Sun (B) · H. Wei · J. Zhang · X. Wang Hohai University, Changzhou 213022, China e-mail: [email protected] H. Wei e-mail: [email protected]; [email protected] J. Zhang e-mail: [email protected] X. Wang e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_42


N. Sun et al.

has shown great superiority in many applications, many problems remain to be solved. For example, WSN nodes are often densely distributed in the network area, and their energy and ability to withstand harsh environments are very limited, so faults are inevitable and the success rate of data packet transmission is low. A WSN takes the task as its unit: a transmission task succeeds only when the sink node receives the correct data packets. Real-time performance, reliability, and the other indicators used to measure task performance are collectively referred to as Quality of Service (QoS) [2]. The loss rate of the wireless channel is high, and the reliability of the multi-hop communication mode is low. In aviation and disaster relief applications, for example, the uncontrollable geographical environment is very likely to cause sensor nodes to malfunction, and the data loss rate of packet transmission may reach 5–10% or even higher. Existing methods for improving the reliability of packet transmission fall into two categories [3]. One is forward error correction coding [4], which increases the success rate of packet transmission by adding parity information to the encoded data. This method relies heavily on the links in the network: when a node in the path fails, the packet cannot be delivered to the sink node. The other is multi-path transmission [5], which sends packets to the destination node over multiple multi-hop paths; as long as the packets on one path reach the sink, the transmission succeeds. To ensure a high success rate, the source node often transmits packets over many paths, which results in high network load and high energy consumption.
By combining network coding and multi-path transmission, packets are encoded with random linear network coding and transmitted to the destination node through multiple paths, which reduces the number of transmission paths while increasing the success rate of packet delivery.

42.2 Related Works

In wireless sensor networks with low data transmission reliability, forward error correction (FEC) is usually used to enhance transmission reliability. In [6], the source node encodes packets with FEC and transmits them over multiple paths, and intermediate nodes check the packets before forwarding them. In [7], nodes are allowed to obtain partially lost packets from adjacent links to decode the packets of the current link, which improves the packet delivery rate under unreliable links. In [8], nodes around the sink encode the received packets to reduce the load in the sink region. Network coding is used in [9] to improve ARQ efficiency, and COPE is proposed in [10] to improve network throughput through network coding. Cai et al. [11, 12] combine FEC and network coding to improve packet transmission reliability. In addition, regional boosting [13] and a gradient-based routing strategy [14] have been used to find optimal paths that increase network throughput and improve the delivery rate.

42 A Reliable Data Transmission Protocol Based on Network Coding …


In [15], the source node calculates the number of transmission paths according to the hop count from source to sink, the expected reliability requirement, and the native channel error rate. Before forwarding a packet, each relay node acts as a source node for the packet and recalculates the number of paths, and so forth, so the packets are transmitted to the sink over multiple paths. However, many paths from the source are needed to ensure high transmission reliability, so the network load is heavy. Although the above methods can improve the packet delivery rate to a certain degree, the traditional FEC method requires more redundant data and cannot solve the problem of a single path containing a failed node. In multi-path transmission, obtaining good transmission reliability requires the source node to forward packets over many paths, so the network load is heavy and the energy consumption of the forwarding nodes is high at the same time. Considering the shortcomings of existing techniques, this work puts forward a packet transmission method based on network coding and multi-path transmission: the source node calculates the number of next hops according to the network status and encodes the packets with random linear network coding, improving the reliability of packet transmission while reducing the number of transmission paths.

42.3 Network Coding-Based Multi-path Transmission (NCMPT) Protocol

42.3.1 Network Model

As Fig. 42.1 shows, nodes are randomly deployed. There is only one sink, and every node in the network can communicate with the sink in one hop or multiple hops. It is assumed that normal nodes have sufficient buffer space to store packets, that the links are independent of each other, and that the working status of one link does not affect another. If the energy of a node falls below a certain threshold, the node is considered to be in a failure state. The channels between nodes are bidirectional and symmetric. In addition, we assume that there are always enough nodes in the network for normal operation, and that each node has its own timer.

Fig. 42.1 Network model (source, relay, and sink nodes connected by paths)



42.3.2 Route Establishment

Because sensors fail easily and their energy is limited, some nodes may stop working after the network starts. In this work, the sink periodically broadcasts a routing update packet to refresh the network status, i.e., the link status and node status. The message structure, shown in Fig. 42.2, has the following fields:

GroupType: the grouping type, covering initial routing update messages, encoded packets, and path-building feedback messages; it distinguishes whether a received packet belongs to route building or data transmission.
GroupID: the group id; each message has its own unique id.
SAddress: the sender address; each hop records the SAddress value in the update message and sets its own address into the routing update message before forwarding it.
RouteTable: the route table, recording the available routes from a node to the sink.
TCount: the hop count from the sender to the sink.

Once a node receives a routing update message, it sets a timer as T = μd, where μ is a time parameter depending on the deployment of nodes in the network (set to 0.5 in this work) and d is the Euclidean distance from the last-hop node to this node. The timer countdown begins when the node receives the routing update message for the first time. Within the waiting period, the current node checks its neighbor table and adds the node with the smallest TCount to its neighbor list; note that the node with the smallest TCount is not necessarily unique. At the same time, it replaces the SAddress field with its own address, adds 1 to TCount, and forwards the routing message. As a node gets farther and farther from the sink, the source of routing updates is no longer unique. In this case, the current node checks the SAddress and TCount fields of the message: if the node corresponding to SAddress is not in the neighbor table, it is added to the neighbor list; otherwise, if the TCount is smaller than the recorded value, the neighbor table is updated. In this way, each node knows the hop count between itself and the sink node; that is, all nodes are now in the routing table. Assuming that the initial energy of each node is E_0, the energy consumption consists mainly of two parts: sending packets, and encoding and decoding packets. When the remaining energy of a node falls below the threshold, the node is removed from the neighbor table.
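The neighbor-table update on receiving a routing update message can be sketched as follows. The dictionary-based node and message layout is a hypothetical simplification of the structure in Fig. 42.2:

```python
import math

MU = 0.5  # time parameter μ used in this work

def on_routing_update(node, msg):
    """Update the neighbor table and build the message to forward."""
    if "timer" not in node:                      # first update: start timer T = μ·d
        d = math.dist(node["pos"], msg["sender_pos"])
        node["timer"] = MU * d
    known = node["neighbors"].get(msg["SAddress"])
    if known is None or msg["TCount"] < known:   # new neighbor, or a shorter route
        node["neighbors"][msg["SAddress"]] = msg["TCount"]
    # forward with our own address and an incremented hop count
    return {"SAddress": node["id"], "sender_pos": node["pos"],
            "TCount": msg["TCount"] + 1, "GroupID": msg["GroupID"]}
```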
Otherwise, if the TCount is smaller than the record, the neighbor table is updated. In this way, each node can know the hop between itself and the sink node, that is, all nodes are now in the routing table. Assuming that the initial energy of each node is E 0 , the energy consumption mainly consists of two parts, one is sending the packet, and the other is the node encoding and decoding the packet. When the remaining energy of a node is lower than the threshold value, the node is removed from the neighbor table. GroupType

GroupID

Fig. 42.2 Routing update package structure

SAddress RouteTable TCount


42.3.3 Data Transmission

Let the expected reliability be r_s, the channel error rate e_s, and the number of hops from source to sink h_s. The expected number of paths needed to transmit a packet is calculated as

N_s = log(1 − r_s) / log(1 − (1 − e_s)^h_s)   (42.1)

Assume the source node has N packets to transmit. Each group of M packets is copied into N_s copies and encoded by RLNC. A network-coded packet can be expressed as

Y_i = Σ_{j=1}^{m} λ_ij X_j,  (i = 1, 2, …, n)   (42.2)

where λ_ij is the jth coding coefficient of the ith encoded packet over a finite field F_q of size q, and X_j is the jth packet. After network coding, the transmitted packet structure is as Fig. 42.3 shows, where Type is the packet type telling the receiver that this is a source packet, PacketID is the packet id, SAddress is the source node id, Num is the number of source packets, Vector is the vector of coding coefficients, and Data is the encoded payload.

A. Route selection

After the routing update message has been broadcast, the neighbors of a node i whose TCount is h can be divided into three sets H−, H0, and H+: H− contains the nodes with TCount h − 1, H0 the nodes with TCount h, and H+ the nodes with TCount h + 1. Because the hop counts from the different sets to the sink differ, the path number allocation ratio is

P_h− / 1 = P_h0 / (1 − e_s) = P_h+ / (1 − e_s)^2   (42.3)
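Equation (42.1) can be evaluated as below. Rounding the result up to a whole number of paths is our assumption, since the text does not state how fractional values are handled:

```python
import math

def expected_paths(r_s, e_s, h_s):
    """Eq. (42.1): number of paths needed so that at least one h_s-hop path
    delivers the packet with probability at least r_s."""
    p_path = (1 - e_s) ** h_s                 # success probability of one path
    n = math.log(1 - r_s) / math.log(1 - p_path)
    return math.ceil(n)                       # assumed: round up to whole paths
```

For example, with r_s = 0.9, e_s = 0.1, and h_s = 3 this yields 2 paths.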

Fig. 42.3 Package structure: Type | PacketID | SAddress | GroupID | Num | Vector | Data

If a node wants to select N_s paths to transmit a packet, it selects next hops from set H− first; P_h0 is assigned only when N_s > |H−|, and P_h+ only when N_s > |H−| + |H0|, where |H_i| denotes the number of nodes in node set H_i. If h−, h0, and h+ denote the numbers of paths assigned to sets H−, H0, and H+ respectively, they should satisfy the following condition:

|H−| · h− + |H0| · h0 + |H+| · h+ = N_s   (42.4)
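The preference order described above (fill H− first, then H0, then H+) can be sketched as a greedy allocation. This is our reading of the rule, not the authors' code:

```python
def allocate_paths(n_s, sizes):
    """Assign n_s paths across the neighbor sets (|H-|, |H0|, |H+|),
    preferring the sets with smaller hop counts, per the rule in the text."""
    alloc = []
    for size in sizes:
        take = min(n_s, size)   # H0 is used only once H- is exhausted, etc.
        alloc.append(take)
        n_s -= take
    return alloc
```

For instance, allocate_paths(4, (2, 3, 5)) assigns two paths to H−, two to H0, and none to H+.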

B. Data forwarding

When a node receives its first packet of a group, it starts counting down from time T. During the waiting period, each time the node receives a packet (Fig. 42.3), it checks whether the packet is a coded packet that is linearly independent of the packets already in its storage buffer; if so, the packet is stored, otherwise it is dropped. When the timer reaches zero, if the number of received packets with the same GroupID is at least M, the receiver re-encodes the received packets with RLNC as in Eq. (42.2) to increase the linear independence between packets and thus the decoding success rate at the sink. Otherwise, it sends a request packet to the sender asking for retransmission, until the number of packets with the same GroupID is sufficient for re-encoding or the number of retransmission requests reaches a fixed limit, in which case all packets with this GroupID are dropped. If a node has received enough packets within time T, it calculates the number of next-hop paths as

N_f = log(1 − r_l) / log(1 − (1 − e_f)^h_l) − (1 − e_f)   (42.5)

where r_l is the expected reliability of forwarding a packet at the forward node, e_f is the local channel error rate of the forward node, and h_l is the number of hops from the forward node to the sink.

C. Decoding

Once it receives a packet, the sink stores the packet for decoding if it is linearly independent of the packets already received; otherwise, the sink drops it. If the sink receives S packets within time T after the first packet, it decodes the network-coded packets as

[X_1; X_2; …; X_m] = [ε_11^s ε_12^s … ε_1m^s; ε_21^s ε_22^s … ε_2m^s; …; ε_m1^s ε_m2^s … ε_mm^s]^(−1) [P_1^s; P_2^s; …; P_m^s]   (42.6)
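The RLNC encoding of Eq. (42.2) and the decoding of Eq. (42.6) can be demonstrated as follows. As a simplification we work in the prime field F_257 (the text only requires some finite field F_q), and Gauss-Jordan elimination stands in for the matrix inverse of Eq. (42.6):

```python
import random

P = 257  # a small prime; any finite field F_q works for RLNC

def encode(packets, n):
    """Eq. (42.2): combine m source packets into n coded packets Y_i."""
    m, length = len(packets), len(packets[0])
    coded = []
    for _ in range(n):
        coeffs = [random.randrange(P) for _ in range(m)]     # coefficients λ_ij
        data = [sum(c * pkt[k] for c, pkt in zip(coeffs, packets)) % P
                for k in range(length)]
        coded.append((coeffs, data))
    return coded

def decode(coded, m):
    """Eq. (42.6): recover X by inverting the coefficient matrix mod P."""
    A = [list(c) for c, _ in coded]
    Y = [list(d) for _, d in coded]
    row = 0
    for col in range(m):
        piv = next((r for r in range(row, len(A)) if A[r][col]), None)
        if piv is None:
            raise ValueError("received packets are not linearly independent")
        A[row], A[piv] = A[piv], A[row]
        Y[row], Y[piv] = Y[piv], Y[row]
        inv = pow(A[row][col], P - 2, P)          # Fermat inverse in F_P
        A[row] = [a * inv % P for a in A[row]]
        Y[row] = [y * inv % P for y in Y[row]]
        for r in range(len(A)):
            if r != row and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * b) % P for a, b in zip(A[r], A[row])]
                Y[r] = [(y - f * b) % P for y, b in zip(Y[r], Y[row])]
        row += 1
    return Y[:m]
```

Any m linearly independent coded packets suffice to recover the m source packets, which is why intermediate nodes and the sink discard linearly dependent ones.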

Table 42.1 Simulation parameter setting

Network area: 100 m × 100 m
Packet size: 1024 byte
Communication radius: 25 m
Initial energy: 50 × 10^4 J
Number of sensor nodes: 50
Maximum hop number: 3

Fig. 42.4 Packet delivery rate (PDR): PDR versus packet loss rate (left) and versus hop count (right), comparing NCMPT and ReInForm with path numbers PN = 3 and PN = 5

42.4 Simulation

42.4.1 Simulation Setting

The simulation is conducted in Matlab R2012b. The simulation settings are listed in Table 42.1.

42.4.2 Simulation Results

Figure 42.4 shows how the packet loss rate and the hop count affect the PDR. The simulation compares the performance of NCMPT and ReInForm with different numbers of paths.

42.5 Conclusion

A reliable transmission mechanism based on network coding in WSNs was proposed. This mechanism reduces the waste of network resources caused by flooding while ensuring transmission reliability. As the channel loss rate increases, the reliability of the transmission mechanism


proposed in this paper decreases gradually, but it remains higher than that of the traditional reliable routing and coded multi-path mechanisms, and the effect becomes more pronounced as the number of paths grows. Acknowledgements The work is supported by the National Natural Science Foundation of China (No. 61601169).

References 1. Liu, X.Y., Zhu, Y., Kong, L., et al.: CDC: compressive data collection for wireless sensor networks. IEEE Trans. Parallel Distrib. Syst. 26(8), 2188–2197 (2015) 2. Zeng, L., Benatallah, B., Ngu, A.H.H., et al.: QoS-aware middleware for web services composition. IEEE Trans. Softw. Eng. 30(5), 311–327 (2004) 3. Mahmood, M.A., Seah, W.K.G., Welch, I.: Reliability in wireless sensor networks: a survey and challenges ahead. Comput. Netw. 79, 166–187 (2015) 4. Milankovich, A., Ill, G., Lendvai, K., et al.: Wakeup signal length optimization combined with payload aggregation and FEC in WSNs. In: Proceedings of European Wireless 2015; 21th European Wireless Conference. VDE, pp. 1–6 (2015) 5. Ahmed, A.M., Paulus, R.: Congestion detection technique for multipath routing and load balancing in WSN. Wirel. Netw. 23(3), 881–888 (2017) 6. Xu, J., Li, K., Min, G.: Reliable and energy-efficient multipath communications in underwater sensor networks. IEEE Trans. Parallel Distrib. Syst. 23(7), 1326–1335 (2012) 7. Yin, J., Yang, Y., Wang, L., et al.: A reliable data transmission scheme based on compressed sensing and network coding for multi-hop-relay wireless sensor networks. Comput. Electr. Eng. 56, 366–384 (2016) 8. Rout, R.R., Ghosh, S.K.: Enhancement of lifetime using duty cycle and network coding in wireless sensor networks. IEEE Trans. Wirel. Commun. 12(2), 656–667 (2013) 9. Nguyen, D., Tran, T., Nguyen, T., et al.: Wireless broadcast using network coding. IEEE Trans. Veh. Technol. 58(2), 914–925 (2009) 10. Katti, S., Rahul, H., Hu, W., et al.: XORs in the air: practical wireless network coding. IEEE/ACM Trans. Netw. (ToN) 16(3), 497–510 (2008) 11. Cai, N., Yeung, R.W.: Network coding and error correction. In: Proceedings of the IEEE Information Theory Workshop, pp. 119–122. IEEE (2002) 12. Koetter, R., Kschischang, F.R.: Coding for errors and erasures in random network coding. IEEE Trans. Inform. Theory 54(8), 3579–3591 (2008) 13. 
Yang, B., Xu, J.B.: Novel data transmission method in wireless sensor network using network coding. Comput. Eng. Appl. 49(5), 99–102 (2013) 14. Cheng, Z., Cheng, Y., Feng, D.Q.: Network coding based reliable data transmission policy in wireless sensor network. J. Comput. Appl. 32(11), 3102–3106 (2012) 15. Deb, B., Bhatnagar, S., Nath, B.: ReInForM: reliable information forwarding using multiple paths in sensor networks. In: 28th Annual IEEE International Conference on Local Computer Networks, 2003. LCN’03. Proceedings, pp. 406–415. IEEE (2003)

Chapter 43

Design of Virtual Cloud Desktop System Based on OpenStack Yongxia Jin, Jinxiu Zhu, Hongxi Bai, Huiping Chen and Ning Sun

Abstract In this paper, a virtual cloud desktop system based on the OpenStack architecture is proposed to satisfy the requirements of teachers and students in experiment classes. We design the control and management components of the virtual desktop system, with particular attention to the schedule management algorithm. Based on the OpenStack cloud platform, we implement the server architecture, integrating the advantages of OpenStack into the desktop virtualization system. Keywords OpenStack · Virtual desktop · Schedule management

43.1 Introduction

A cloud desktop system is an important delivery method for cloud resources: users' desktop environments are stored on the cloud platform, and users access them from a terminal device via the Internet. Cloud desktop technology introduces a whole new solution for deploying and managing teaching rooms in colleges and universities. A teaching room deploys various experimental environments for different majors and different course requirements, and is characterized by centralized management and batch deployment. Using cloud desktop technology can effectively achieve the goal of unified scheduling

Y. Jin (B) · J. Zhu · H. Bai · H. Chen · N. Sun Hohai University, Changzhou 213022, China e-mail: [email protected] J. Zhu e-mail: [email protected] H. Bai e-mail: [email protected] H. Chen e-mail: [email protected] N. Sun e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_43


Y. Jin et al.

and management, which improves the level of management and application of machine rooms and labs. At present, typical commercial cloud desktop products such as XenDesktop and VMware View are extremely mature; however, they are costly and cannot meet the teaching demands of schools well. Scholars have therefore done a great deal of research on cloud desktop system architecture, key technologies, hotspot algorithms, etc. In [1], a virtualized resource pool and its management system are designed based on the QEMU-KVM virtualization framework, which is a useful reference for building a cloud desktop server. A virtualized desktop is designed and implemented in [2], which focuses on the reliability and safety of the system and tests and analyzes the solution. Aiming at the virtualized desktops and user characteristics of a private education cloud, [3] introduces a unified authentication model for users; it designs a unified management system for users and cloud desktops and reduces data redundancy in the system. Colleges and universities have established their own school cloud platforms one after another to meet their needs.

43.2 System Architecture

We build the virtual cloud desktop system on OpenStack as a remote desktop virtualization solution for individual users. The architecture, shown in Fig. 43.1, mainly includes the virtualization platform, the cloud desktop management platform, and the cloud terminal.

(1) Virtualization platform. The virtualized resource pool is built on the OpenStack cloud platform and is the infrastructure of the cloud desktop system. It virtualizes CPU, memory, disk, and network resources and exposes them through APIs so that the cloud desktop management platform can schedule and use these services.

(2) Cloud desktop management platform. This is the core and scheduling center of the cloud desktop system, handling user management, scheduling arrangements, and strategic control of the virtual cloud desktops. Its purpose is to manage virtualized desktops more conveniently, allocate resources more reasonably, and deliver desktops to users more conveniently. It includes user management, connection management, monitoring management, and scheduling management.

(3) Cloud terminal. This is the terminal device for displaying and operating the cloud desktop. Users of the cloud platform use specific client programs and a remote access protocol to access the desktop of the virtual machine.

43 Design of Virtual Cloud Desktop System Based on OpenStack


Fig. 43.1 The virtual cloud desktop architecture based on OpenStack

43.3 Deployment and Implementation

43.3.1 OpenStack Service Deployment

The hardware resources of the virtualized cloud desktop system consist of high-performance servers, storage devices, networks, and other hardware devices. The KVM virtualization technology turns these physical resources into a unified pool of virtualized resources. We use the open-source cloud computing solution OpenStack to build the management platform for virtualized resources, providing user authentication, image template management, virtual machine instance management, virtual machine network resource management, and more. The OpenStack platform consists of one control node and multiple computing nodes. We deploy two virtual machines on the control node, CC (Cloud Controller) and NC (Network Controller). Most OpenStack control services run in these two virtual machines and start with the control node system. Running the control services in virtual machines is beneficial for migration and backup and makes it simple to adjust the resources allocated to the services. The control node of the cloud platform mainly deploys the database service, the message queue service, and some essential components. It also


includes the OpenStack computing service Nova, the image service Glance, the network service Neutron, and the identity service Keystone. The computing nodes mainly deploy Nova to provide computing services; they host all the virtual machine instances of the cloud desktops.

43.3.2 Server Deployment

The cloud desktop management platform runs on the server and provides scheduling management, resource monitoring, client connection, etc. It mainly deploys the following four service modules:

(1) User Management. For the authentication and management of terminal users, user permissions can be controlled directly by OpenStack Keystone. The client can access OpenStack directly to get information about the virtual machines.

(2) Connection Service. We deploy it in the Cloud Controller, where it normally runs as a background daemon that handles multiple concurrent client connection requests. This module configures the server address and packet filtering rules for the monitor server; it is based on a SocketServer framework that handles clients asynchronously, listening for and receiving client requests. Given a username and password, it sends back the user's virtual desktop list.

(3) Monitoring Management. This module monitors the performance indices of the various computing resources in the OpenStack platform (CPU, memory, storage, network, etc.) and consists of an information collection thread and an information control thread. The collection thread obtains the performance data of virtual machines and physical machines periodically. The control thread manages the monitoring data, shares it with the scheduling management module, and persists it to the monitoring databases. To monitor physical machines, we use shell scripts to collect the states of the various resources; for example, we can run shell commands or read specific files in the Linux file system to get information about the CPU, memory, and disks of physical machines, and then use these data to calculate the usage rates. To monitor virtual machines, we obtain CPU, memory, and other performance and status data through the Libvirt API in the KVM environment, and from these data calculate the resource usage rates. For example, the CPU monitoring of a virtual machine proceeds as follows:

1. Import libvirt and connect to the local QEMU virtual machine monitor.
2. Get the Domain object via the virtual machine instance ID created by OpenStack.
3. Call the Libvirt API twice within a very short interval to get the CPU time of the Domain object, recording the current time.
4. Get the number of cores of the Domain object.

43 Design of Virtual Cloud Desktop System Based on OpenStack


5. Calculate the CPU usage rate from the difference between the two CPU time readings relative to the length of the calling cycle:

CPU usage = (CPU time difference × 100) / (calling cycle × core number × 10^9)    (43.1)
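Assuming the CPU time readings come back in nanoseconds (as libvirt reports them), Eq. (43.1) can be sketched as follows; this is an illustrative helper, not code from the system itself:

```python
def cpu_usage_percent(cpu_time_ns_start, cpu_time_ns_end, cycle_seconds, core_count):
    """Eq. (43.1): percentage CPU usage of a domain over one calling cycle.

    cpu_time_ns_* are cumulative CPU times in nanoseconds (as returned by
    libvirt's domain info); cycle_seconds is the sampling interval.
    """
    delta_ns = cpu_time_ns_end - cpu_time_ns_start
    return delta_ns * 100.0 / (cycle_seconds * core_count * 1e9)

# A 2-core domain that consumed 1 s of CPU time during a 1 s cycle is 50% busy.
print(cpu_usage_percent(0, 1_000_000_000, 1.0, 2))  # → 50.0
```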

(4) Scheduling Management. Based on the performance monitoring data collected from the cloud platform and the teaching requirements model, this module handles the deployment and scheduling of virtual machines. The deployment of virtual machines and the use of dynamic scheduling mechanisms are key to achieving load balancing on the cloud platform. When virtual machines are created, those that need to be powered on are placed on suitable physical nodes according to the state of the resource pool, which improves resource utilization and load balancing and effectively prevents the frequent migration of virtual machines caused by uneven load while the cloud desktop system is in use. This section focuses on the deployment of VMs at creation time.

OpenStack's Nova scheduler is responsible for finding suitable physical nodes for virtual machine instances and supports four schedulers: ChanceScheduler selects available nodes with a random algorithm; SimpleScheduler chooses the node with the least load among the available nodes; FilterScheduler implements a scheduling algorithm based on filtering and weighting; and MultiScheduler routes different scheduling requests to different sub-schedulers.

FilterScheduler is the default scheduler of OpenStack. Its core idea is to filter out unavailable nodes, sort the available ones, and select the node with the smallest weight as the optimal node on which to create the virtual machine instance. Scheduling is divided into two phases: filtering and weighting. The filtering stage applies a set of filters to produce the set of all compute nodes that satisfy the request's conditions. The weighting stage uses a specific cost function to calculate a weight for each node, sorts the nodes by weight to obtain the sequence of nodes that can satisfy the instance request at least cost, and selects one of them on which to create the virtual machine.
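The two-phase filter-then-weigh flow described above can be sketched in a few lines. The `Node` records and the weigher below are illustrative assumptions (not OpenStack's real classes); the weigher mimics the default preference for the node with the most remaining memory, with smaller weight meaning better:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_vcpus: int
    free_mem_gb: int

def schedule(nodes, req_vcpus, req_mem_gb, weigh):
    """FilterScheduler-style selection: filter out nodes that cannot host the
    request, then pick the candidate with the smallest weight."""
    candidates = [n for n in nodes
                  if n.free_vcpus >= req_vcpus and n.free_mem_gb >= req_mem_gb]
    if not candidates:
        raise RuntimeError("no valid host found")
    return min(candidates, key=weigh)

nodes = [Node("compute01", 4, 8), Node("compute02", 16, 64), Node("compute03", 2, 2)]
# Default-style weigher: prefer the node with the most free memory
# (negated so that smaller weight = better node).
best = schedule(nodes, req_vcpus=2, req_mem_gb=4, weigh=lambda n: -n.free_mem_gb)
print(best.name)  # → compute02
```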
By default, the FilterScheduler calculates weights based on each node's remaining memory. This algorithm is simple to implement and has low complexity, but the overall resource utilization of the nodes is not high, so in most cloud platform applications the virtual machine scheduling mechanism needs to be optimized. Reference [6] introduces the weight of the virtual machine into the weighting process and proposes the VM Weighted Filter Scheduling Algorithm. Reference [7] defines a curriculum requirement model and a physical machine load model for campus cloud platform applications and proposes a deployment algorithm based on course requirements. In this paper, building on OpenStack's default scheduling strategy and the actual storage, computing, and other resource needs of a teaching cloud environment, we define a virtual machine scheduling strategy suited to university experimental teaching applications, aiming at efficient resource use and load balancing. In the filtering stage, we determine whether the computing node


Y. Jin et al.

meets the requirements based on the hardware requirements configured by the user, the type of the user-specified instance, and the current resource allocation of the compute node. We use custom filters for node filtering to obtain the set of all available nodes that satisfy the conditions. In the weighting stage, we evaluate the priority of each node for deploying virtual machines based on the resource requirements of the course virtual machines and the load status of the nodes, and select the most appropriate compute node on which to create the user's virtual machine instances.

The research in [8, 9] shows that the load balance of a system can be described by the variance of the loads of the nodes in the cluster: the smaller the load variance, the more balanced the system. Therefore, when calculating the weights, we first obtain the performance monitoring data of all compute nodes. Then, based on the characteristics of the requested instance, we evaluate the load status each compute node would have after the virtual machine is deployed on it, calculate the resulting load balance of the system, and finally select the node that yields the lowest system load variance as the target node on which to place the virtual machine.

The virtual machine resources requested by a course are defined as R = (cpu, memory, num), where cpu indicates the number of virtual CPUs requested, memory indicates the requested virtual memory size (GB), and num indicates the number of virtual machines requested. Assume that there are N nodes in the set. The load rate of node i can be described by the ratio of the virtual resources allocated on the node to the total amount of virtual resources that the node can provide. Considering the CPU and memory resources of the node together, the load rate of node i is defined as

L_i = Σ_{r ∈ {cpu, memory}} [ U_r/T_r + (R_r × P_i)/T_r ]    (43.2)

where T_r is the total amount of virtual resource r provided by the node, U_r is the amount of virtual resource r already consumed on the node, R_r is the demand of the virtual machine for resource r, and P_i ∈ {0, 1} indicates whether the virtual machine is assigned to node i.

We evaluate the load balance of the entire system by calculating the load variance over the nodes in the set. We calculate the average load rate L_avg of the nodes by Eq. (43.3) and the load variance δ by Eq. (43.4). The node for which δ is minimal is the node that is finally allocated the virtual machine; deploying the virtual machine on this node gives the system the best load-balancing effect.

L_avg = (1/N) Σ_{i=1}^{N} L_i    (43.3)

δ = √[ (1/N) Σ_{i=1}^{N} (L_i − L_avg)² ]    (43.4)
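Eqs. (43.2)–(43.4) transcribe directly into code. The node layout below (per-resource `(capacity, consumed)` pairs) is an illustrative assumption:

```python
import math

def load_rate(node, request, assigned):
    """Eq. (43.2): load rate L_i of a node, including the requested VM
    when assigned = 1 (P_i), excluding it when assigned = 0."""
    total = 0.0
    for r in ("cpu", "memory"):
        T, U = node[r]            # capacity T_r and consumed amount U_r
        R = request[r]            # VM demand R_r for resource r
        total += U / T + (R * assigned) / T
    return total

def load_stddev(loads):
    """Eqs. (43.3)-(43.4): average load L_avg and deviation delta over N nodes."""
    n = len(loads)
    avg = sum(loads) / n
    return math.sqrt(sum((l - avg) ** 2 for l in loads) / n)

nodes = [
    {"cpu": (16, 8), "memory": (64, 32)},   # node 0: half used
    {"cpu": (16, 4), "memory": (64, 16)},   # node 1: quarter used
]
request = {"cpu": 2, "memory": 4}
# Cluster load if the VM were placed on node 1:
loads = [load_rate(n, request, int(i == 1)) for i, n in enumerate(nodes)]
print(load_stddev(loads))
```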


Cloud desktop systems for instructional applications often require the batch creation of virtual machines of the same specification. For newly requested virtual machine resources, we choose the most suitable node on which to deploy each virtual machine so as to balance the load. We first obtain load monitoring data for each node, then evaluate the system load variance that would result from assigning the virtual machine to each compute node, select the node whose assignment minimizes the load variance, and deploy the virtual machine on that node. We then update the remaining number of required virtual machines and the allocated resources on the nodes; if requested virtual machines remain unallocated, we recompute the system load variance from the updated load data and select the next suitable node, repeating until all requested virtual machines have been allocated. The algorithm evaluates the load variance of the nodes according to the resource requirements of the course virtual machines and selects the node with the smallest resulting load variance to place each virtual machine. When a new virtual machine request arrives, the node with more free resources and better performance has a higher probability of being selected as the destination node, which improves both node resource utilization and system load balance.
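The batch-placement loop just described can be sketched as a greedy procedure. This is a simplified model in which each node's load is a single scalar and every identical VM adds a fixed increment; it is an illustration of the idea, not the system's actual implementation:

```python
import math

def place_batch(loads, vm_load, count):
    """Greedy batch placement: repeatedly assign one VM to the node whose
    assignment minimizes the cluster's load standard deviation.

    loads   -- current load rate of each node (mutated as VMs are placed)
    vm_load -- extra load one VM adds to a node (simplified scalar model)
    count   -- number of identical VMs to place
    Returns the list of chosen node indices, in placement order.
    """
    def stddev(xs):
        avg = sum(xs) / len(xs)
        return math.sqrt(sum((x - avg) ** 2 for x in xs) / len(xs))

    placement = []
    for _ in range(count):
        best = min(
            range(len(loads)),
            key=lambda i: stddev(loads[:i] + [loads[i] + vm_load] + loads[i + 1:]),
        )
        loads[best] += vm_load      # update node load before placing the next VM
        placement.append(best)
    return placement

# Three nodes with uneven starting loads; four identical VMs to place.
# The lightest node (index 1) is filled first, evening out the cluster.
print(place_batch([0.6, 0.2, 0.4], vm_load=0.1, count=4))
```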

43.4 Performance of Cloud Desktop System

The virtual cloud desktop system server is configured with ten high-performance physical machines. IBM X360 servers (64 GB of memory, 1 TB hard disk, 4 × 6-core Xeon CPUs, and 2 Gigabit NICs) form the physical server cluster. One of them serves as the control node, and the remaining nine are compute nodes (Compute01–Compute09). Based on the basic management and operation of the OpenStack platform, we have realized the integrated configuration and unified deployment of the desktop application environment and successfully met the needs of experimental teaching. Compared with a traditional physical experiment environment, the experiment environment in the virtual cloud desktop system has the advantages of rapid deployment, easy backup and recovery, and high security of user data. The cloud desktop management platform's virtual machine scheduling strategy comprehensively considers the course virtual machine resource requirements and server load status. When virtual machines are deployed, the server resource load is balanced as far as possible; system performance is improved while business demand is guaranteed, and the cloud desktop system stays in a relatively stable state. To verify the effectiveness of the proposed scheduling algorithm, we compared it with other scheduling algorithms in terms of compute node resource load. In the experiment, the course created 100 virtual machines with two virtual cores and 4 GB of virtual memory each. We used the default FilterScheduler algorithm and the scheduling algorithm in this paper to satisfy the above course request. When the system reaches a stable state, the CPU and memory utilization of the compute nodes


Fig. 43.2 CPU load in the cluster (Compute01–Compute09; A: default FilterScheduler, B: proposed scheduling algorithm)

Fig. 43.3 Memory load in the cluster (Compute01–Compute09; A: default FilterScheduler, B: proposed scheduling algorithm)

change as shown in Figs. 43.2 and 43.3, where A is the default FilterScheduler algorithm and B is the scheduling algorithm proposed in this paper. With the proposed scheduling strategy, the gaps in CPU utilization and memory utilization among the cluster nodes are small, which gives a clear advantage in load balancing: the load indices of the nodes are balanced more evenly.

43.5 Conclusion

This paper adopts OpenStack as the basic service architecture for desktop virtualization, proposes a virtual cloud desktop system for teaching and experimental users, and designs the control and management components for virtual desktops. The server side is implemented on the OpenStack cloud platform, integrating OpenStack's advantages into the desktop virtualization system. Through the implementation of modules such


as user management, connection management, monitoring management, and scheduling management, the functional and performance requirements of the teaching cloud desktop system are well satisfied, and users can easily create and manage personalized teaching and experimental environments independently.

References

1. Huachao, Y.: Design and implementation of the desktop cloud server-end architecture based on QEMU-KVM. M.S. thesis, South China University of Technology, Guangzhou, China (2013)
2. Ying, L.: Design and realization of VDI-based virtual desktop. M.S. thesis, Shanghai Jiao Tong University, Shanghai, China (2014)
3. Liang, S.: Design and implementation of the cloud platform User Centralized Management System based on OpenStack. M.S. thesis, Sun Yat-sen University, Zhongshan, China (2015)
4. Gang, W., Jing, W., Jing, G., Qian, M.: A comprehensive scheduling method of virtual machines for campus cloud platform. Comput. Digital Eng. 43(10), 1736–1741 (2015)
5. Haihua, L.: Design and implementation of desktop cloud-based university experimental teaching scheduling management system. M.S. thesis, South China Agricultural University, Guangzhou, China (2016)
6. Varma, M.K., Nandimandalam, Eunmi, C.: The VM weighted filter scheduling algorithm for OpenStack cloud. In: 12th International Conference on Future Information Technology 2017, LNCS, vol. 9999, pp. 307–313. Springer, Heidelberg (2017)
7. Ma, Q., Wang, J., Wang, G.: A deployment scheme of virtual machines for campus cloud platform. In: International Conference on Service Sciences 2014, pp. 187–192. IEEE (2014)
8. Shaoka, Z., Liyao, L., Xiao, L., Cong, X., Jiahai, Y.: Architecture and scheduling scheme design of TsinghuaCloud based on OpenStack. J. Comput. Appl. 33(12), 3335–3338 (2013)
9. Zhilong, D., Zhemin, D., Liutao, L.: Research of dynamic scheduling of resources under the environment of OpenStack. J. Northwestern Polytechn. Univ. 34(4), 650–655 (2016)

Chapter 44

Characteristics of Content in Online Interactive Video and Design Strategy

Bai He

Abstract Online interactive video has increasingly attracted attention in the field of marketing, so it is of practical significance to discuss issues of interactive video content production. Compared with traditional linear video, online interactive video involves a more sophisticated mode of content production and more complex procedures. Based on an analysis of the content features of online interactive video, this paper proposes several recommendations for the design of interactive video content. It is hoped that the conclusions can offer useful insights for the content production of online interactive video.

Keywords Online interactive video · Characteristics of content · Design strategy

44.1 Introduction

According to Cisco's report, by 2022 global IP video traffic will account for 82% of all IP traffic, and Internet video traffic will grow fourfold from 2017 to 2022 [1]. With the rapid development of digital and Internet technologies, the platforms and content forms of online video have also developed quickly: from video-sharing websites to mobile video apps, from video-on-demand to live video, and from linear video to interactive video. Online video services rely on abundant technical means and diversified communication terminals, and present rich communication forms and novel communication characteristics. In recent years, more and more platform operators have begun to provide online interactive video services. By using interactive online videos, users can selectively obtain information according to their own preferences, or create new story structures,

B. He (B) School of Journalism and Communication, Minjiang University, Fuzhou 350108, People’s Republic of China e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_44




thus greatly enhancing the users' initiative in information services and provision. Since the Internet produces a large amount of video content every day, brands face growing challenges in attracting users' attention amid such abundant information. Reports show that interactive video ads extend the amount of time consumers spend with a brand, driving a 47% lift in time spent with the ad versus non-interactive ads [2]. Interactive video is quickly becoming a mature and widely used marketing tool, and more and more brand owners prefer digital marketing through interactive videos: 23% of video marketers have used interactive video as a channel, and of those, 83% say it has been successful for them [2]. Beyond marketing, online interactive video is also often used for interactive education, collection of customer feedback data, personnel management, and so on.

44.2 Literature Review

44.2.1 Interactivity

According to Rafaeli (1988), interactive communication is the continuous transmission of messages with mutual feedback [3]. Deighton (1996) suggested that interactivity presents two communication characteristics: the ability to communicate with individuals and the ability to collect and remember individual feedback [4]. Hoffman and Novak (1996) believed that interactivity on hypermedia computers is achieved by a machine or through interpersonal media [5]. Stewart Alsop, the former editor-in-chief of InfoWorld, classified interactivity into four levels: (1) watching, (2) navigating, (3) using, and (4) programming. He regarded "watching" as the lowest level, without any interactivity. At the second level, "navigating", users can jump from one item to another in a relatively random way without getting drawn into anything. At the third level, "using", users get something useful when they interact with content or media. "Programming" is the strongest form of interactivity, by which users can define concepts, assign meaning to content, and control the entire interaction process [6]. Ha and James (1998) proposed five dimensions of interactivity: (1) playfulness, (2) choice, (3) connectedness, (4) information collection, and (5) reciprocal communication [7]. Sally J. McMillan and Jang-Sun Hwang (2002) pointed out that the three key dimensions of interactivity are the direction of communication, user control, and time [8].

44.2.2 Design of Online Interactive Video

Most research on online interactive video has focused on video technology and the use of interactive video in education; content creation and creative development are rarely discussed. Liu Junyao, Yu Yang, and Peng Lan (2014) believed that one


major characteristic of interactive video lies in that the viewers have a choice [9]. Luo Huanyi (2015) proposed that the goals of network video interaction include: (1) good orientation, (2) good navigation, and (3) good usability [10]. Santeri Saarinen et al. (2017) put forward production guidance for interactive omnidirectional videos (iODV), including: (1) avoid objects very close to the camera, (2) consider an appropriate viewpoint, (3) present details with embedded content, and (4) ensure the visibility and clarity of interactive objects and pathways [11].

44.3 Characteristics of Content in Online Interactive Video

Online interactive video gives users a non-passive watching experience: viewers are required to take actions and provide information feedback. When watching traditional linear videos, users can interact only by pausing, fast-forwarding, rewinding, and sending "bullet screen" comments during playback; beyond these operations, linear video allows no other interaction. Online interactive video adds more diverse interactive options that allow users to interact with the video by clicking, dragging, scrolling, hovering, touching, gestures, voice control, camera, and other methods during playback. Users gain stimulating, interactive experiences through links, chapters, quizzes, shopping carts, personalization, and more added to video assets. Several different functions can be built into an interactive video; the most commonly used options include:

– Hotspots: clickable hotspots set up on the video screen; these buttons can direct viewers to an individual page or display content directly in the video, such as a product display page or sales page.
– Camera views: users can watch the content from all directions by clicking "switch camera" or by dragging the picture during playback.
– Branches: users can control and select what they watch through different branch paths.
– Data inputs: users can enter names, ages, and other information in a pop-up form.
– Quizzes: evaluation results and personalized outcomes presented via buttons and branches at the end of the video.
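As a toy illustration of the hotspot mechanic listed above, a player only needs to know which overlays are active at the current playback time. The data layout here is my own assumption, not any real platform's schema:

```python
# Hypothetical hotspot descriptors: an active time window (seconds) plus an action.
hotspots = [
    {"label": "jacket",  "start": 5.0,  "end": 12.0, "action": "open:product/123"},
    {"label": "handbag", "start": 9.0,  "end": 15.0, "action": "open:product/456"},
    {"label": "replay",  "start": 58.0, "end": 60.0, "action": "seek:0"},
]

def active_hotspots(hotspots, t):
    """Return the labels of the overlays the player should render at time t."""
    return [h["label"] for h in hotspots if h["start"] <= t < h["end"]]

print(active_hotspots(hotspots, 10.0))  # → ['jacket', 'handbag']
```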
Based on the above interactive functions, interactive content can be summarized into three types.

Customized online interactive video

This type of video allows creators to build relevant information or links into hotspots in certain video frames. Although the video plays back linearly, during playback the hotspot information is activated, and the user can click or touch the tags to obtain relevant information of certain picture



Fig. 44.1 Hotspots provide a clickable call-to-action in a video for viewers

elements in a video and to jump to the corresponding purchase link or add the item to a shopping basket. Video shopping guides for products are the most common form. For instance, the British luxury clothing brand Ted Baker created an interactive video for their website landing page. It depicts a fashionable man and woman strolling through the woods with "click to shop" tags pointing at their various items of clothing and accessories. When the viewer clicks on one, a pop-up appears with a description of the item, its price, and a link to purchase it (Fig. 44.1).

Conversational online interactive video

In a conversational interactive video, viewers typically interact with the video in a turn-based manner, as if the user were having a genuine conversation with it. A conversational interactive video provides multiple answers for the user to choose from at an event node, or designs multiple storylines. The user can get a personalized story through interactivity, creating an engaging experience that viewers will want to "play" again and again. The promotional video for the movie Focus, created by Warner Bros., is a great example (Fig. 44.2).

Exploratory online interactive video

Exploratory interactive videos generally provide only a few hints (or even none), so that users must explore the interaction points and methods in the videos themselves. Such videos are generally based on real-life scenes; the scenes keep playing until the users find the interaction points. Alternatively, users can obtain a different viewing angle by switching cameras, or explore the video content in a 360° view by dragging the mouse, for example, in displays of tourist attractions and car interiors. In addition to the above three types, a hybrid online interactive video integrates the characteristics of all three; this type provides stronger interactivity and flow experience.


Fig. 44.2 Multiple scenarios can be designed in online interactive video

44.4 Design Strategy for Online Interactive Video

Regarding the creation of interactive videos, the traditional film and television production process (the general sequence of "creation—story—script—shooting—post-production") is still followed. However, as interactivity increases, traditional content design and storytelling methods are challenged. Video creators not only need to consider the arrangement of interactive elements in their workflow but also need to design and shoot multiple plots, which means more content is needed than for linear videos (Fig. 44.3). For more effective content production and creative design based on the characteristics of interactive videos, we shall at least focus on the following:

44.4.1 Appropriate Interaction Style

Online interactive videos can support a variety of interactive tools and styles. However, before designing an interactive video, we should take the purpose of the interactive design as the starting point. This requires designers to carefully consider the target audience of the video and its expected results. Besides, the interaction style should be designed according to the characteristics of the products or other objects to be displayed. For example, a 360° full-view drag-and-drop interaction design is more suitable for displaying the interior space of a building, while conversational content design is more suitable for conveying product-use experience. The interactive elements must be designed based on product characteristics and brand needs, so as to select an appropriate interaction style. Excellent online interactive videos are not merely used to display visual spectacle or to feel a game-like process. The interactive content of the videos shall be well designed as if a website or APP



Fig. 44.3 The users can design multi-story content using the composer of the online interactive video platform

user interface were being designed, and we need to consider whether a good user experience is achieved. Regarding the design of interactive elements, attention should be paid to the following: (1) the consistency of the video's visual style should be kept; the interface can be designed according to the plots and actions, thus integrating interactive elements with the scenes. (2) The timing and size of the interactive elements should be appropriate, so as to avoid excessive interference with the viewed images. (3) To avoid excessive damage to the flow experience, the interactive operations should not be too complicated.

44.4.2 Innovative Interactive Narrative

The production of interactive video content does not mean that creators give up control of their stories, nor that viewers will have little flow experience. Rather, creators are required to build interactive narratives in the process of interactive video design. Interactive narrative refers to a narrative technique by which users participate in and influence the narrative; it combines traditional narrative with user interaction. Users can influence the story through interaction with events or characters, so the narrative structure is usually dynamic or nonlinear. Interactive narrative can be classified into plot-based interaction and character-based interaction: plot-based interaction concentrates on the choice of plots, while character-based interaction concentrates on the selection of character actions and the scheduling of characters. Effective


interactive video design requires a strong plot, so that the audience can feel that they are playing a role in the story.
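A plot-based branching structure of the kind described above can be represented as a small graph of scenes and choices. The scene ids and clip names below are invented purely for illustration:

```python
# Each scene names the clip to play and the choices offered at its end.
story = {
    "intro":     {"clip": "intro.mp4", "choices": {"trust": "alley", "refuse": "walk_away"}},
    "alley":     {"clip": "alley.mp4", "choices": {"run": "ending_a", "hide": "ending_b"}},
    "walk_away": {"clip": "walk.mp4",  "choices": {}},
    "ending_a":  {"clip": "end_a.mp4", "choices": {}},
    "ending_b":  {"clip": "end_b.mp4", "choices": {}},
}

def play(story, decisions, start="intro"):
    """Follow the viewer's decisions through the branch graph;
    return the sequence of clips played."""
    node, path = start, []
    for choice in decisions:
        path.append(story[node]["clip"])
        node = story[node]["choices"][choice]   # branch on the viewer's pick
    path.append(story[node]["clip"])
    return path

print(play(story, ["trust", "hide"]))  # → ['intro.mp4', 'alley.mp4', 'end_b.mp4']
```

Each distinct decision sequence yields a personalized path through the same asset set, which is exactly why interactive video needs more shot content than a linear cut.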

44.4.3 Creating Playful and Gamified Experiences

When designing interactive videos, whether the content is playful needs to be considered. Among the five dimensions of interactivity, Ha and James (1998) mentioned playfulness first. Good video marketing involves not only product sales but also a pleasant audience experience. This does not simply mean that the video plots are appealing; the interactive design of the videos must also be attractive, so that video marketing can reach a new height through entertainment. Gamification, understood as the use of game design elements and game mechanics in non-game contexts for the purpose of engagement [12], has become a hot topic in recent years. Interactive videos grew out of interactive movies, in which the concepts of game design were applied to the linear film and a multi-ending story was produced with nonlinear plot development. Gamified design means that typical elements of gameplay (scoring, competition, problem-solving, and other components) are integrated into the design of interactive videos, giving users a more positive sense of participation. The process of gamified interactive video design is not a matter of mixing bits and pieces of game components, but of building a clear design process and a definite design framework. The best-known design framework is the one presented in Six Steps to Gamification by Werbach and Hunter (2012) [13], commonly known as 6D. Based on Werbach and Hunter's framework, I propose the key points of gamification design for online interactive video, as shown in Table 44.1.

44.5 Conclusions

In summary, this paper proposed several focuses for online interactive video design as a reference for relevant designers: (1) using appropriate interaction methods, (2) creating gamified and playful experiences, and (3) innovating interactive narrative techniques. Online interactive video is a nonlinear, game-like medium that supports clicking, dragging, scrolling, and other interaction with video content. It is a popular, traceable, and efficient new medium which converts users and helps marketers achieve their marketing goals. Nowadays, the development of integrated Internet technology provides new opportunities for the creation of plots, characters, and personality in interactive videos, which can maximize the appeal of film language. Interaction design will undeniably be a fast-growing emerging industry, and as one of its most attractive directions, online interactive video will inevitably attract more attention.



Table 44.1 Design strategy for online interactive video based on the 6D design framework

1. Define business objectives
   Description: Set an expected goal and effect for the video to be designed.
   Example: number of interaction behaviors, product sales quantity, gathering customer feedback.

2. Delineate target behaviors
   Description: What interactions do you want the users to perform, and how will you measure them?
   Example: hotspot clicking, camera switching, adding products to an internal wishlist, etc.

3. Describe your players
   Description: Set a target audience and build user portraits.
   Example: car owners, housewives, students.

4. Devise activity cycles
   Description: Design an engagement loop of motivation, action, and feedback.
   Example: motivation: the user wants the protagonist to take action; action: the user clicks the option button; feedback: the protagonist takes action and the plot unfolds; motivation: the user wants the protagonist to take the next action.

5. Don't forget the fun
   Description: Ask yourself: "Is it fun? Would the users do this voluntarily?"
   Example: interesting plot, interesting interactive behavior design.

6. Deploy the appropriate tools
   Description: Choose appropriate interaction components; test and iterate.
   Example: hotspots, branches, data inputs, quizzes.

References

1. Cisco Visual Networking Index: Forecast and Trends, 2017–2022, White Paper. https://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/white-paper-c11-741490.html. Accessed 27 Feb 2019
2. The State of Video Marketing (2019). https://info.wyzowl.com/state-of-video-marketing-2019-report
3. Rafaeli, S.: Interactivity: from new media to communication. In: Hawkins, R.P., Wiemann, J.M., Pingree, S. (eds.) Advancing Communication Science: Merging Mass and Interpersonal Process, pp. 110–134. Sage, Newbury Park, CA (1988)
4. Deighton, J., Sorrell, M.: The future of interactive marketing. Harv. Bus. Rev. 74(6), 151–160 (1996)
5. Hoffman, D., Novak, T.: Marketing in hypermedia computer-mediated environments: conceptual foundations. J. Mark. 60(3), 50–68 (1996)
6. King, E.: Redefining relationships: interactivity between news producers and consumers 4(4), 26–32 (1998)
7. Ha, L., James, E.L.: Interactivity reexamined: a baseline analysis of early business web sites 42(4), 457–474 (1998)
8. McMillan, S.J., Hwang, J.-S.: Measures of perceived interactivity: an exploration of the role of direction of communication, user control, and time in shaping perceptions of interactivity 31(3), 29–42 (2002)
9. Junyao, L., Yang, Y., Lan, P.: Interactive video: new ways of presenting news. Press Circles (10), 57–60 (2014)
10. Huanyi, L.: Interactive analysis of video under the background of media convergence. University of Jinan (2015)


11. Saarinen, S., Mäkelä, V., Kallioniemi, Hakulinen, J., Turunen, M.: Guidelines for designing interactive omnidirectional video applications. In: IFIP Conference on Human-Computer Interaction, LNCS, vol. 10516, pp. 263–272. Springer, Heidelberg (2017)
12. Deterding, S., Dixon, D., Khaled, R., Nacke, L.: From game design elements to gamefulness: defining "gamification". In: Proceedings of the 15th International Academic MindTrek Conference, pp. 9–15. Tampere, Finland (2011)
13. Werbach, K., Hunter, D.: For the Win: How Game Thinking Can Revolutionize Your Business. Wharton Digital Press (2012)

Chapter 45

Implementation of Asynchronous Cache Memory

Jigjidsuren Battogtokh

Abstract This paper presents a novel asynchronous instruction cache suitable for self-timed systems. DCVSL is used to generate a completion signal, which serves as the reference for handshake control. The proposed CAM is a very simple extension of the basic circuitry that generates a completion signal based on the delay-insensitive (DI) model. The cache has a 2.75 KB CAM for an 8 KB instruction memory. We designed and simulated the proposed asynchronous cache, including its content-addressable memory.

Keywords Cache · CAM · DCVSL

45.1 Introduction

The generic purpose of a cache is to reduce the average time the CPU needs to access main memory [1]. The instruction cache is a smaller, faster memory that stores copies of instructions recently fetched from program memory; the average latency of a cache access is small compared with that of a main-memory access [2]. In general, the energy consumption of a cache accounts for a significant part of the energy utilization of an embedded system. For instance, a cache employing the sleep-mode approach prevents cache access until it is woken up. Within such a design paradigm, asynchronous circuit techniques appear as a natural choice to realize the sleep-mode functionality [3]. Implementation of a handshaking protocol then becomes responsible for activating the active blocks while leaving the other blocks in the wait state. An asynchronous circuit employing the delay-insensitive (DI) delay model is an option capable of both energy efficiency and performance enhancement.

J. Battogtokh (B) Department of Electronics and Communication Engineering, School of Engineering and Applied Sciences, National University of Mongolia, Ulaanbaatar, Mongolia e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 156, https://doi.org/10.1007/978-981-13-9714-1_45


Section 45.2 presents the proposed asynchronous cache architecture, the CAM, and completion-signal generation using differential cascode voltage switch logic (DCVSL). Simulation results and conclusions are provided in Sects. 45.3 and 45.4, respectively.

45.2 Proposed Asynchronous Cache

Self-timed circuits have a number of computation blocks that are coupled through interconnection blocks. The main architecture of the asynchronous cache, including the CAM, the handshake block, and related blocks, is shown in Fig. 45.1. Since there is no system clock to coordinate operations between computational blocks, we employ a handshake protocol [4]. The completion signal of a combinational block serves as the reference signal for the handshake protocol in an asynchronous circuit [5]. We design the asynchronous cache using the DI model for the CAM and a bundled-delay model for the external memory (Fig. 45.2).

Cache Match State
First, the address in the Program Counter is loaded onto the address bus. The PC_req signal then accesses the cache and enables the input latch of the CAM, so a valid address is loaded into the CAM. If the address is present in the CAM (a match state), the completion signal is activated. PC_ack is then enabled, which disables PC_req. The data in the CAM memory is moved to the Instruction Register of the CPU and the search operation is finished. The timing diagram of the match state is shown in Fig. 45.3.

Fig. 45.1 Main architecture of proposed asynchronous cache


Fig. 45.2 Main architecture of CAM block

Fig. 45.3 Match state timing diagram of cache

Cache Mismatch State
First, the address in the Program Counter is loaded onto the address bus. The PC_req signal then accesses the cache and enables the input latch of the CAM, so a valid address is loaded into the CAM. If the search operation fails to find the address in the CAM, the PM_req signal is activated after the bundled delay t and transfers the address to main memory. Once the data is found in main memory, the PM_ack signal goes high, and PC_ack loads the instruction into the Instruction Register. The mismatch-state timing diagram is shown in Fig. 45.4.
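The two request–acknowledge sequences above can be condensed into a behavioral model. The Python sketch below is illustrative only: the signal names (PC_req, PC_ack, PM_req, PM_ack) follow the text, while the dictionary-based CAM and main memory, and the cache fill on a miss, are assumptions made for the example.

```python
# Hypothetical behavioral model of the match/mismatch handshake (no timing).
def cache_access(cam, main_memory, address):
    if address in cam:                  # match state
        completion = 1                  # completion signal activated
        pc_ack = 1                      # PC_ack enabled, which disables PC_req
        return cam[address], completion, pc_ack
    pm_req = 1                          # mismatch: PM_req after the bundled delay t
    instruction = main_memory[address]  # address transferred to main memory
    pm_ack = 1                          # PM_ack goes high once data is found
    cam[address] = instruction          # assumed cache fill (not stated in the text)
    pc_ack = 1                          # PC_ack loads the Instruction Register
    return instruction, 0, pc_ack
```

A hit returns the instruction with the completion signal asserted; a miss returns it with completion low, after the modeled main-memory fetch.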

45.2.1 Content Addressable Memory

Content-addressable memory (CAM), or associative memory, is a storage device that can be addressed by its own contents [6]. Each CAM cell includes comparison logic, so input data is compared simultaneously with all of the stored data. The main architecture of the CAM block is shown in Fig. 45.2. A CAM cell has two basic functions: bit storage (as in RAM) and bit comparison (unique to CAM).
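The storage-plus-comparison behavior can be sketched in software. The following Python model is a behavioral illustration, not a circuit description; representing words as bit tuples is an assumption for the example.

```python
def cam_search(stored, key):
    """Compare a search key against every stored word in parallel.

    Each bit comparison is an XNOR of the stored bit and the search bit;
    a row's match line stays asserted only if all bits compare equal.
    """
    match_lines = []
    for word in stored:
        ml = all(not (b ^ s) for b, s in zip(word, key))  # XNOR per bit
        match_lines.append(int(ml))
    return match_lines
```

For example, searching three stored words for the key `(1, 1, 0)` asserts only the match line of the row holding that word.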


Fig. 45.4 Mismatch state timing diagram of cache

Figures 45.5 and 45.6 show a NAND-type CAM cell and a NOR-type CAM cell, respectively. In both cases the bit is stored in an SRAM cell whose cross-coupled inverters hold the data nodes D and D̄; NMOS access transistors and bit lines are used to read and write the stored bit. Although some CAM cell implementations use lower-area DRAM cells, CAM cells typically use SRAM storage. The bit comparison, which is logically equivalent to an XOR of the stored bit and the search bit, is implemented in somewhat different fashions in the NOR and NAND cells [8]. The match-state timing diagram of the CAM is shown in Fig. 45.7. When the PC_req signal goes high, the EN signal of the latch is enabled; by disabling the RES signal, a valid address is taken into the address latch. If the result of the search operation

Fig. 45.5 NAND-type CAM cell


Fig. 45.6 NOR-type CAM cell

Fig. 45.7 Match state timing diagram of CAM

is matched, the completion signal is activated; otherwise, the completion signal remains inactive. When the PC_req signal is disabled, the EN signal of the latch is disabled; the RES signal of the latch is then enabled, the address latch is refreshed to 0, and the completion signal is likewise reset to 0. The mismatch-state timing diagram of the CAM is shown in Fig. 45.8.
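The EN/RES sequencing just described can be summarized as a small behavioral model. In this hypothetical Python sketch, the CAM contents are modeled as a set of addresses, which is an assumption made for illustration.

```python
# Hypothetical sketch of the CAM latch-control sequence; EN and RES are the
# latch enable/reset signals named in the text.
def cam_cycle(cam_contents, address, pc_req):
    if pc_req:
        en, res = 1, 0                             # EN enabled, RES disabled
        latched = address                          # valid address taken into the latch
        completion = int(latched in cam_contents)  # activated only on a match
    else:
        en, res = 0, 1                             # EN disabled, then RES enabled
        latched = 0                                # address latch refreshed by 0
        completion = 0                             # completion signal replaced by 0
    return latched, completion
```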

45.2.2 Completion Signal and DCVSL Gate

Self-timed circuits consist of a number of computation blocks coupled through interconnection blocks. The system employs a handshake protocol instead of the more classical clock signal. A combinational block generates a completion signal indicating that the block is ready to move on to a new operation. DCVSL is used to generate such a completion signal with a very simple extension of the basic circuitry, which leads to self-timed designs with minimal overhead for the completion circuitry (Fig. 45.9).


Fig. 45.8 Mismatch state timing diagram of CAM

Fig. 45.9 Completion signal for matching

A DCVSL gate has NMOS trees that implement the logic function with two output nodes, Q and Q̄. In the precharge phase, while the input I is low, both Q and Q̄ are high. In the evaluate phase, one path through the NMOS trees conducts, causing one of the nodes Q and Q̄ to be discharged; the result of the evaluate phase is thus a pair of complementary output values. The completion signal required by the self-timed components is generated by a NAND gate over the two outputs. Figure 45.10 shows a circuit for the completion signal detecting the match state, and Table 45.1 shows the truth table for the CAM cell.
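The precharge/evaluate behavior and the NAND-based completion detection can be captured at truth level. The Python sketch below is a behavioral model under the stated assumptions, not a transistor-level description.

```python
def nand(a, b):
    return int(not (a and b))

# Truth-level sketch of a DCVSL gate with completion detection.
def dcvsl(f, *inputs):
    q, qb = 1, 1                 # precharge: both output nodes high
    busy = nand(q, qb)           # NAND of the rails is 0 -> not yet complete
    out = int(bool(f(*inputs)))  # evaluate: one NMOS tree conducts
    q, qb = out, 1 - out         # exactly one node discharges
    done = nand(q, qb)           # complementary outputs -> completion = 1
    return out, busy, done
```

The completion output is low while both rails sit at the precharged value and goes high as soon as the outputs become complementary, regardless of whether the gate evaluates to 0 or 1.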


Fig. 45.10 CAM size

Table 45.1 CAM truth table

SL  D  ML  Content
0   0  1   Match
0   1  0   Mismatch
1   0  0   Mismatch
1   1  1   Match

45.3 Simulation Results

The asynchronous cache is 32 bit × 2048 words, corresponding to 8 KB. The CAM decodes the 2048 rows of the cache memory, so its total size is 2048 × 11 bit, i.e., 2.75 KB; the 11 least-significant bits of the address identify a row. Figure 45.10 shows the CAM for the instruction memory. We implemented the CAM block in VHDL. A CAM cell is described by a latch for the bit line, a latch for the search line, and an XNOR gate comparing the bit line with the search line; if the data on all search lines and bit lines are equal, the row matches. The match lines of all CAM cells are connected to AND gates, the outputs of the AND gates are connected to DCVSL gates, and the outputs of the DCVSL gates are finally connected to an OR gate, which generates the completion signal. Figure 45.11 shows the VHDL simulation results of the asynchronous CAM, and Fig. 45.12 shows the simulated CAM circuit with DCVSL.
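As a quick consistency check, the quoted capacities follow directly from the row count and bit widths; the short Python sketch below merely reproduces that arithmetic.

```python
rows = 2048      # cache lines decoded by the CAM
word_bits = 32   # instruction width
tag_bits = 11    # 2 ** 11 = 2048, so the 11 LSBs identify a row

inst_mem_kb = rows * word_bits / 8 / 1024  # instruction memory in KB
cam_kb = rows * tag_bits / 8 / 1024        # CAM size in KB
```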

45.4 Conclusions

This paper proposed a CAM based on the DI model for an asynchronous cache. The cache architecture mainly consists of four units: control logic, content-addressable memory, completion-signal logic, and the instruction memory. We obtain the completion signal of the CAM using DCVSL; it acts as the reference signal in the system. We designed and simulated a cache with a 2.75 KB CAM for an 8 KB instruction memory. The results show that the cache hit ratio reaches up to 95% with a pseudo-least-recently-used (LRU) replacement policy.


Fig. 45.11 VHDL coding simulation result of the asynchronous CAM

Fig. 45.12 Simulated circuit CAM with DCVSL

References

1. Peir, J.K., Hsu, W.W., Smith, A.J.: Functional implementation techniques for CPU cache memories. IEEE Comput. 48, 100–110 (1999)
2. Tuominen, J., Santti, T., Plosila, J.: Comparative study of synthesis for asynchronous and synchronous cache controllers. In: IEEE Norchip Conference, pp. 11–14, March 2006
3. Guillory, S.S., Saab, D.G., Yang, A.: Fault modeling and testing of self-timed circuits. In: IEEE Chip-to-System Test Concerns for the 90's, pp. 62–66, April 1991
4. Li, J.-F.: Testing ternary content addressable memories with comparison faults using march-like tests. IEEE Comput.-Aided Des. Integr. Circ. Syst. 26(5), 919–931 (2007)
5. Cheng, K.H., Wei, C.H., Jiang, S.Y.: Static divided word matching line for low-power content addressable memory design. In: IEEE Circuits and Systems ISCAS, vol. 2, pp. 629–632, May 2004
6. Lo, J.-C.: Fault-tolerant content addressable memory. In: IEEE Proceedings of ICCD, pp. 193–196, October 1993
7. Aly, R.E., Nallamilli, B.R., Bayoumi, M.A.: Variable-way set associative cache design for embedded system applications. Circ. Syst. MWSCAS 3, 1435–1438 (2003)


8. Chaudhary, V., Chen, T.-H., Sheerin, F., Clark, L.T.: Critical race-free low-power NAND match line content addressable memory tagged cache memory. IET Comput. Digital Tech. 2, 40–44 (2008)
9. Cheng, K.H., Wei, C.H., Jiang, S.Y.: Static divided word matching line for low-power content addressable memory design. In: Circuits and Systems ISCAS, vol. 2, pp. 629–632 (2004)
10. Delgado-Frias, J.G., Yu, A., Nyathi, J.: A dynamic content addressable memory using a 4-transistor cell. Des. Mixed-Mode Integr. Circ. Appl. 2, 110–113 (1999)
11. Kumaki, T., Kouno, Y., Ishizaki, M., Koide, T., Mattausch, H.J.: Application of multi-ported CAM for parallel coding. Circ. Syst. APCCAS 3, 1859–1862 (2006)

Author Index

Abbas, Sherazi Syed Waseem, 259 Agrawal, Somya, 65, 73 Ali, Sikandar, 259 Amarbayasgalan, Tsatsral, 273 Bai, Hongxi, 393 Batbayar, S., 325 Battogtokh, Jigjidsuren, 413 Chang, Chin-Chen, 65 Chang, Kuo-Chi, 41 Chen, Chien-Ming, 97, 125 Chen, Ching-Ju, 133, 143 Chen, Chin-Ling, 73 Chen, Hanlin, 189 Chen, Huiping, 393 Chen, Shuo-Tsung, 133, 143 Chen, Sida, 299 Chu, Kai-Chun, 41 Chu, ZeNan, 173 Dao, Thi-Kien, 53 Davagdorj, Khishigsuren, 265 Delger, Oyungerel, 289 Deng, Xiaoting, 97 Deng, Yong-Yuan, 73 Ding, Qun, 107 Dong, Hongxiang, 181 Fang, Weidong, 189 Fu, Ping, 117, 375 Gantulga, B., 325 Gao, Xue, 281

Garmaa, D., 325 Geng, Bin, 163 Gong, Yuxin, 87 Guan, Ti, 335, 345 Guo, Baolong, 221 Han, Jian, 117 He, Bai, 403 Horng, Der-Juinn, 41 Hu, Rong, 189 Huang, Lei, 79 Huang, Xin, 107 Jae, Moon Hyun, 259 Jargalsaikhan, Bilguun, 259 Jiang, Xin-hua, 201 Jiang, Xunzhi, 87 Jiangzhu, Long, 11 Jin, Yongxia, 393 Jing, Huang, 41 Jing, Ziyang, 163 Kang, In Uk, 259 Lan, Xinyue, 231 Lee, Chin-Feng, 65, 73 Lee, Jong Yun, 251, 259 Lee, Yu-Qi, 125 Li, Nan, 163 Li, Song-Jiang, 153 Li, Yinan, 315 Liang, Xiao-Cong, 125 Liao, Lyuchao, 79


Lin, Dong-Peng, 73 Lin, Lin, 335, 345 Lin, Meng-Ju, 133, 143 Lin, Yuh-Chung, 41 Liu, Donglei, 315 Liu, Shi-Jian, 79 Liu, Wen Qiang, 281 Liu, Yong, 335, 345

Ma, Qiang, 335, 345 Meng, Lian, 163 Munkhdalai, Lkhagvadorj, 251 Munkhtsetseg, N., 325 Namsrai, Erdenetuya, 289 Ngo, Truong-Giang, 53 Nguyen, Trong-The, 53, 79 Ni, Rongrong, 231 Pan, Jeng-Shyang, 41, 53, 79 Park, Hyun Woo, 241 Park, Kwang Ho, 265 Peng, Hongkun, 21 Qi, Wen, 3 Qiao, Jiaqing, 117 Qingfeng, Ma, 365 Qiu, Xiangkai, 309 Ryu, Keun Ho, 241, 251, 265, 273 Saqlain, Muhammad, 259 Shangdi, Ma, 355, 365 Shanshan, Li, 355, 365 Shih, Chia-Shuo, 65 Shuo, Zhang, 355 Su, Xin, 201 Sun, Cong Hui, 281 Sun, Ning, 385, 393 Tingjun, Wang, 355, 365 Tsendsuren, Ganbat, 289 Tseveenbayar, Munkhtuya, 289 Van Huy, Pham, 273

Wang, Juefei, 315 Wang, Ling, 281 Wang, Peng, 153 Wang, Renbiao, 31 Wang, Shen, 87 Wang, Wei, 173 Wang, Wenting, 335, 345 Wang, Xingjie, 385 Wei, Hailong, 385 Weng, Cai-Jie, 79 Wu, ChaoXia, 173 Wu, Tsu-Yang, 97, 125 Xiaolin, Huang, 11 Xiaoqing, Liu, 365 Xie, Chao-Fan, 211 Xu, Lin, 211 Xu, Lu-Xiong, 211 Xue, Xingsi, 201 Xuebing, Liu, 355, 365 Yang, Di, 153 Yang, Henan, 181 Yang, Hua-Min, 153 Yeh, Jyh-Haw, 125 Yijun, Ge, 11 Yin, Hongtao, 375 Yingjie, Li, 11 Zeng, Wei-Dong, 79 Zhan, Dechen, 87 Zhang, Fuquan, 211, 315 Zhang, Han, 375 Zhang, Jie, 385 Zhang, Ping, 79 Zhang, Shun-miao, 201 Zhao, Ming, 133, 143 Zhao, Yao, 231 Zheng, Huilin, 241 Zheng, Yan, 221 Zhou, Jie, 221 Zhou, Liang, 3 Zhou, Tie Hua, 281 Zhou, Yu-Wen, 41 Zhu, Jinxiu, 393 Zou, Xing, 97