Frontier Computing: Theory, Technologies and Applications (FC 2018) [1st ed.] 978-981-13-3647-8;978-981-13-3648-5

This book presents the proceedings of the 6th International Conference on Frontier Computing, held in Kuala Lumpur, Malaysia.


Language: English · Pages: XXV, 2003 [2028] · Year: 2019


Table of contents :
Front Matter ....Pages i-xxv
Research on Application’s Credibility Test Method and Calculation Method Based on Application Behavior Declaration (Xuejun Yu, Ran Xiao)....Pages 1-10
The Selection of DNA Aptamers Against the Fc Region of Human IgG (Wen-Pin Hu, Hui-Ting Lin, Wen-Yu Su, Rouh-Mei Hu, Wei Yang, Wen-Yih Chen et al.)....Pages 11-19
Application of Deep Reinforcement Learning in Beam Offset Calibration of MEBT at C-ADS Injector-II (Jinqiang Wang, Xuhui Yang, Binbin Yong, Qingguo Zhou, Yuan He, Lihui Luo et al.)....Pages 20-28
Predicting Students’ Academic Performance Using Utility Based Educational Data Mining (K. T. S. Kasthuriarachchi, S. R. Liyanage)....Pages 29-39
Theme Evolution Analysis of Public Security Events Based on Hierarchical Dirichlet Process (Hong Hu, Xiao Wei, Zhanxing Hao)....Pages 40-49
Recurrent Neural Networks for Analysis and Automated Air Pollution Forecasting (Ching-Fang Lee, Chao-Tung Yang, Endah Kristiani, Yu-Tse Tsan, Wei-Cheng Chan, Chin-Yin Huang)....Pages 50-59
A Social Influence Account of Problematic Smartphone Use (Chi-Ying Chen, Shao-Liang Chang)....Pages 60-63
Simulation Analysis of Information-Based Animal Observation System (Lin Hui, Yi-Cheng Chen, Kuei Min Wang, Chiao Ming Peng, Kai-Ze Weng)....Pages 64-73
Effect of Fintech on the Cost Malmquist Productivity Index in the China Banking Industry (Wei-Liu Liao, Cui-You Yao, Yi-Ping Yang)....Pages 74-88
A Graph-Based Approach for Semantic Medical Search (Qing Zhao, Yangyang Kang, Jianqiang Li, Dan Wang)....Pages 89-98
On Construction of a Power Data Lake Platform Using Spark (Tzu-Yang Chen, Chao-Tung Yang, Endah Kristiani, Chun-Tse Cheng)....Pages 99-108
Selection Issues of Kernel Function and Its Parameters of Hard Margin Support Vector Machine in a Real-World Handwriting Device (Yan Pei, Lei Jing, Jianqiang Li)....Pages 109-117
Adaptive Threshold-Based Algorithm for Multi-objective VM Placement in Cloud Data Centers (Nithiya Baskaran, R. Eswari)....Pages 118-129
Monitoring Bio-Chemical Indicators Using Machine Learning Techniques for an Effective Large for Gestational Age Prediction Model with Reduced Computational Overhead (Faheem Akhtar, Jianqiang Li, Yu Guan, Azhar Imran, Muhammad Azeem)....Pages 130-137
Hybrid Technology in Video Annotation by Using the APP and Raspberry Pi—Applied in Agricultural Surveillance System (Yong-Kok Tan, Lin-Lin Wang, Deng-Yuan Theng)....Pages 138-143
Film Classification Using HSV Distribution and Deep Learning Neural Networks (Ching-Ta Lu, Jun-Hong Shen, Ling-Ling Wang, Chia-Hua Liu, Chia-Yi Chang, Kun-Fu Tseng)....Pages 144-153
Control Strategy-Based Intelligent Planning of Service Composition (Yishui Zhu, Lei Tang, Jun Zhang, Zongtao Duan, Hua Jiang)....Pages 154-163
IoT Enabled Environmental Monitoring System (Leo Willyanto Santoso, Markus Daud Giantara)....Pages 164-173
Neighbor Link-Based Spatial Index for k Nearest Neighbor Queries in Wireless Systems (Jun-Hong Shen, Ching-Ta Lu, Hong-Ray Chu)....Pages 174-180
A Memory-Friendly Life Film Editing System for Smart Devices (Shih-Nung Chen, Po-Zung Chen)....Pages 181-186
The Implementation of Conversation Bot for Smart Home Environment (Chuan-Feng Chih, Steen J. Hsu, Pei-Ting Chen, Yun-Ju Chen, Chun-Yu Lu)....Pages 187-192
A Study on the User Cognitive Model of Learning Management System (Hsin-Chao Ho, Mong-Te Wang, Shu-Chuan Shih, Chiu-Hsing Kuo, Ching-Pin Tsai)....Pages 193-202
GVM Based Copy-Dynamics Model for Electricity Load Forecast (Binbin Yong, Liang Huang, Fucun Li, Jun Shen, Xin Wang, Qingguo Zhou)....Pages 203-211
Combining Voice and Image Recognition for Smart Home Security System (Hung-Te Lee, Rung-Ching Chen, Wei-Hsiang Chung)....Pages 212-221
Evaluation of New Energy Operation Based on Comprehensive Evaluation Method (Hang Yin, Zifen Han, Jin Li, Xiang Wu, Ningbo Wang, Yan Li)....Pages 222-234
Comparison of Different Machine Learning Methods to Forecast Air Quality Index (Bo Liu, Chao Shi, Jianqiang Li, Yong Li, Jianlei Lang, Rentao Gu)....Pages 235-245
The Implementation of Wi-Fi Log Analysis System with ELK Stack (Yuan-Ting Wang, Chao-Tung Yang, Endah Kristiani, Yu-Wei Chan)....Pages 246-255
The Implementation of NetFlow Log System Using Ceph and ELK Stack (Yuan-Ting Wang, Chao-Tung Yang, Endah Kristiani, Ming-Lun Liu, Ching-Han Lai, Wei-Je Jiang et al.)....Pages 256-265
A Virtual Interactive System for Merchandising Stores (Meng-Yen Hsieh, Hua-Yi Lin, Tien-Hsiung Weng)....Pages 266-277
Data Imputation in EEG Signals for Brainprint Identification (Siaw-Hong Liew, Yun-Huoy Choo, Yin Fen Low)....Pages 278-286
A Supervised Approach for Patient-Specific ICU Mortality Prediction Using Feature Modeling (Gokul S. Krishnan, S. Sowmya Kamath)....Pages 287-295
Interpretable Learning: A Result-Oriented Explanation for Automatic Cataract Detection (Jianqiang Li, Liyang Xie, Li Zhang, Lu Liu, Pengzhi Li, Ji-jiang Yang et al.)....Pages 296-306
Scalable Data-Storage Framework for Smart Manufacturing (Hsiao-Yu Wang, Chen-Kun Tsung)....Pages 307-313
Reposition Cyber-Physical System to Minimizing the Gap between Cyber and Physical (Chun-Tai Yen, Chen-Kun Tsung)....Pages 314-318
Design and Implementation of a Learning Emotion Recognition System (Kuan-Cheng Lin, Li-Chun Sue, Jason C. Hung)....Pages 319-323
Leveraging Explicit Products Relationships for Improved Collaborative Filtering Recommendation Algorithm (Shunpan Liang, Jinqing Zhao, Fuyong Yuan, Fuzhi Zhang)....Pages 324-335
Establishment of Model of Open Audition for Freshmen Athletes (Yu-Yang Chen, Tai-Lun Chan, Mong-Te Wang)....Pages 336-348
A Difference Detection Mechanism Between API Cache and Data Center in Smart Manufacturing (Chun-Tai Yen, Chen-Kun Tsung, Wen-Fang Wu)....Pages 349-353
Elementary School Teachers’ Attitudes Toward Flipped Classrooms (Pao-Chu Huang, Li Yue, Hsuan-Pu Chang)....Pages 354-358
Matching Game-Based Power Control in Cooperative Cognitive Radio Networks (Min-Kuan Chang, Yong-Jen Mei, Chao-Tung Yang, Yu-Wei Chan, Wun-Ren Chen)....Pages 359-363
A Coalitional Graph Game Framework for Broadcasting in Wireless Networks (Yu-Wei Chan, Min-Kuan Chang, Wei-Chun Ho)....Pages 364-369
Examining the Impact of Exercise Tracking Data on Promoting Regular Exercise Among University Students (Chia-Hsien Wen, Wei-Yueh Chang, Tsan-Ching Kang, Chen-Lin Chang, Yu-Wei Chan)....Pages 370-373
Understanding the Motivation for Exercise Through Smart Bracelets: The Importance of a Healthy Lifestyle (Chia-Hsien Wen, Wei-Yueh Chang, Tsan-Ching Kang, Chen-Lin Chang, Yu-Wei Chan)....Pages 374-377
A Research of Demodulation Based Technique for Frequency Estimation (Zhi Quan, Chaoyi Ma)....Pages 378-387
The Implementation of a Campus Air Monitoring System Using LoRa Network (Yu-Sheng Lin, Chao-Tung Yang, Tzu-Chiang Chiang, Jung-Chun Liu, Tsung-Che Yang)....Pages 388-397
Fuzzy Prediction for Time-Series Data—A Case Study at Taichung City Open Data of Air Pollution (Wei-Sheng Huang, Tzu-Chiang Chiang, Chao-Tung Yang, Chung-Chi Lin)....Pages 398-407
Editing k-Nearest Neighbor Reference Set by Growing from Two Extreme Data Points (Jinwoo Park, Sungzoon Jo, Hyeon Jo)....Pages 408-418
Realization of Combined Systemic Safety Analysis of Adverse Train Control System Using Model Checking (Anit Thapaliya, Gihwon Kwon)....Pages 419-430
Synthesizing Use and Misuse Case Specification with Integrated Safety Analysis in Railway Control System (Anit Thapaliya, Gihwon Kwon)....Pages 431-443
Risk Assessment for STPA with FMEA Technique (Ngoc-Tung La, Gihwon Kwon)....Pages 444-455
Social Network Analysis on Tourism-Related Civil Complaints in Busan Metropolitan City (Na-Rang Kim, Soon-Goo Hong)....Pages 456-464
Analysis on Variables Affecting Youth Stress with Bayesian Networks (Euihyun Jung)....Pages 465-471
GPU-Based Fast Motion Synthesis of Large Crowds Based on Multi-joint Models (Mankyu Sung)....Pages 472-483
Sperm Count Analysis Using Microscopic Image Processing (Hyun-Mo Yang, Dong-Woo Lim, Yong-Sik Choi, In-Hwan Kim, Ailing Lin, Jin-Woo Jung)....Pages 484-489
A Method to Real-Time Update Speaker Pronunciation Time-Database for the Application of Informatized Caption Enhancement by IBM Watson API (Yong-Sik Choi, In-Hwan Kim, Hyun-Mo Yang, Dong-Woo Lim, Ailing Lin, Jin-Woo Jung)....Pages 490-495
A Study of Fandom Crowdsourcing Method Using Big Data (Se Jong Oh, Mee Hwa Park, Jeong Uijeoung, Ill Chul Doo)....Pages 496-502
Scenario Based Practical Information Protection Training System Using Virtualization System (Sungkyu Yeom, Dongil Shin, Dongkyoo Shin)....Pages 503-509
Cyber Battle Damage Assessment Framework and Detection of Unauthorized Wireless Access Point Using Machine Learning (Duhoe Kim, Doyeon Kim, Dongil Shin, Dongkyoo Shin, Yong-Hyun Kim)....Pages 510-519
A Text Mining Approach to Study Individuals’ Food Choices and Eating Behavior Using Twitter Feeds (Ayuna Dondokova, Satyabrata Aich, Hee-Cheol Kim, Gyung Hye Huh)....Pages 520-527
A Multi Criteria Decision Modelling Approach for Gait Analysis of Parkinson’s Disease Using Wearable Sensors to Compare the Classification Performance Based on the Different Feature Selection Methods (Satyabrata Aich, Kamalakanta Muduli, Hee-Cheol Kim)....Pages 528-534
Comparison of BVH and KD-Tree for the GPGPU Acceleration on Real Mobile Devices (SeungWoo Chung, MinKyoung Choi, DaeGeun Youn, SeongKi Kim)....Pages 535-540
Sentiment Analysis of Korean Teenagers’ Language Based on Sentiment Dictionary Construction (Jason Kim, Min Kyoung Kim, Yeoeun Park, Eomji Kim, Junhee Lee, Dongho Kim et al.)....Pages 541-550
Research on Vocal Tic Symptom Detecting Using SVM/HMM (Su-Seong Chai, InA Kim, Kyu-Chul Lee)....Pages 551-559
A Study on Visual Programming Platform Design for VR/AR SW Education (Hae-Jong Joo, Ho-Bin Song, Min-Kyu Park)....Pages 560-565
Interface for VR, MR and AR Based on Eye-Tracking Sensor Technology (Pill-Won Park, Ho-Bin Song, Hae-Jong Joo)....Pages 566-570
A Study of Security Model for Mobile Education Service (Dae Bum Lee, Hai-Gil Choi)....Pages 571-578
How Easy Is It to Surf the Semantic Web? (Jungmin Lee, Changu Kang, Jisun Chae, Hyeonmin Park, Seongbin Park)....Pages 579-583
Success Factor for Autonomous Vehicle (Hwa-Young Jeong)....Pages 584-587
Development of Supporting Tool for Executing Synthesized Automaton Representing Controller (Seonil Jung, Gihwon Kwon)....Pages 588-595
Application of Data Mining Techniques on New Product Development Decision-Making (You-Shyang Chen, Chien-Ku Lin, Chyi-Jia Huang)....Pages 596-600
Smart Counter of Hospital for Patient Drug Pickup (Cheng-Ming Chang)....Pages 601-605
Impact of Collaboration with LINE Application on Work Performance (Jerome Chih-Lung Chou)....Pages 606-610
IOT Based Smart Water Monitoring Using Image Processing (Jieh-Ren Chang, Nakul Agarwal, Yipeng Bao, Abhishek Sharma)....Pages 611-622
A Study on Blockchain Applications in Healthcare (Abhishek Sharma, Ying-Hsun Hung, Punit Agarwal, Muskan Kalra)....Pages 623-628
A Study on Computer Vision Techniques for Self-driving Cars (Nakul Agarwal, Cheng-Wei Chiang, Abhishek Sharma)....Pages 629-634
Real Time Human Fall Detection Using Accelerometer and IoT (Kritika Johari, Jing-Wei Liu, Thinagaran Perumal, Abhishek Sharma, Tanmay Chaturvedi, Jieh-Ren Chang)....Pages 635-639
A Study of the Interactive Styles of an Augmented Reality System for Cultural Innovative Product Design (Cheng-Wei Chiang, Jing-Wei Liu)....Pages 640-646
Mechanism and Application of Smart City by Developing Intelligent Cloud-Based Transportation Vehicle Surveillance System (You-Shyang Chen, Yao-Wen Kan)....Pages 647-652
A Study on the Necessity of Integrating Games into College Physical Education (Di Zhang)....Pages 653-657
An Empirical Study and Analysis of the Performance Model for the Foreign Language Learning (Junmin Wu)....Pages 658-664
Construction and Application of the English Corpus Based on the Statistical Language Model (Bo Zhang)....Pages 665-670
Design and Implementation of the Teaching Management System for English Language and Literature (Xichang Zhang)....Pages 671-677
Design of Multi-vehicle Flexible Rotational System Based on Green Dismantling of End-of-Life Vehicles (Shaoyi Bei, Liang Zhao, Tao Wang, Zijian Gao)....Pages 678-686
Design of the In-depth Intelligent Educational System Based on the English Learning (Fanghui Zhao)....Pages 687-693
Design of the In-depth Intelligent Learning System Based on the College English Teaching (Rongrong Ruan)....Pages 694-700
Discussions on the Construction of Digital Campus in Higher Vocational Colleges (Jianjian Luo)....Pages 701-705
Fuzzy Evaluation Mode of the Innovative Foreign Language Talents Based on the Analytic Hierarchy Process (Caihong Han)....Pages 706-712
Investigation and Countermeasure Study on Influencing Factors of Career Planning of Teaching Auxiliary Staff in Colleges and Universities (Chonghong Wei)....Pages 713-721
On the Construction of Digital Campus in the Development of Higher Vocational Colleges (Haiying Zhou)....Pages 722-727
Research and Implementation of the Key Algorithms for the Big Data in Education (Meiliu Luo)....Pages 728-733
Research on the Competency Models of the Foreign Language Teachers in Colleges and Universities (Fengxia Guo)....Pages 734-740
Research on the Construction of the Multivariate English Teaching Mode Combined with the Cloud Technology (Qinyuan Hui)....Pages 741-747
Research on the Design of the Multimedia Intelligence Teaching System Based on the Oral English Curriculum (Ke Zhang)....Pages 748-753
Research on the Evaluation Methods of the Guidance and Test on the Student Employment Based on the Computer Model (Wenzhao Zhang)....Pages 754-760
Research on the Multimedia Assisted English Teaching Mode Based on the Computer Platform (Ning Wang)....Pages 761-766
A New Method for Calculating Excess Air Ratio (Yalan Ye, Wenhao Jiang, Hongming Wang, Xiang An)....Pages 767-776
High Adaptability Storage Management File System in Industrial (Huizhong Liu)....Pages 777-783
A Wifi Location Method Which Supports the Combination of Position and Location Fingerprint (Mingyuan Xin)....Pages 784-791
Robust Control for Time-Delay Singular Systems Based on Passivity Analysis (Chunyan Ding, Qin Li)....Pages 792-802
Oscillation Criteria for a Class of Third-Order Neutral Dynamic Equations with Distributed Deviating Arguments (Yuanxian Hui, Peiluan Li, Xunhuan Deng)....Pages 803-814
Studies on College Translation Teaching Models in an Information and Internet Age (Wanfang Zhang)....Pages 815-819
Study on the Calibration Method of Gaze Point in Gaze Tracking (Yongsheng Zhou, Changyuan Wang, Hongbo Jia)....Pages 820-830
Innovation of “Internet +” Teaching Model Based on Organic Philosophy of Whitehead (Lin Lin, Xue-bo Yang, Li Yang, Bing-jie Xu)....Pages 831-840
The Analysis of the Correlation Between Market Value and Volatility—Based on CSI 300 (Weiqing Wang, Jiajun Fu, Ziyuan Li)....Pages 841-848
Applied Research of “Progressive in Class” Flipped Classroom Teaching Model (Yingying Sun)....Pages 849-853
Calculating the Resistance Inductance Parameter of the Interconnection with High Order Basis Function Method (Baojun Chen, Yanjie Ju)....Pages 854-863
Research on Equipment Status and Operation Information Acquisition Based on Equipment Control Bus (Xu Li, Chen Meng, Cheng Wang, Zhenghu Chen)....Pages 864-871
Design and Analysis of Tool Wear Detection Mechanism (Xiao-Yun Li, Yi-Jui Chiu)....Pages 872-879
Information Literacy Training for Students in Public Security Colleges and Universities Under the New Normal (Jing Leng, Miaomiao Zhang)....Pages 880-886
Layout Optimization of Book Sorting Machine Based on Sequential Control (Junhao Jiang, Peijiang Chen)....Pages 887-895
Analysis of Characteristics of High-Temperature Disasters in Flue-Cured Tobacco Based on GIS (Zhiguo Ma, Wenqing Li)....Pages 896-901
Typical Standards Developing Organization Case Analysis and Its Enlightenments to China’s Standardization Development (Qing Xu)....Pages 902-908
Research on the Restrictive Factors and Coping Strategies of Individual Income Tax Reform (Peng Sun, Weishuang Xu)....Pages 909-914
College Students’ Autonomous English Learning in Computer and Network-Based Self-access Center (Yulin Weng)....Pages 915-922
A Development Research of Teachers in Private Higher Vocational Colleges (Yang Boru, Zhou Wen)....Pages 923-928
Adaptive Harris Corner Detection Algorithm Based on Modified Detector (Peijiang Chen, Liying Wu)....Pages 929-935
Research on Energy Efficiency Optimization for New Renewable Energy Sources Data Center (Changgeng Yu, Liping Lai)....Pages 936-943
The Application of Configuration Software and ActiveX Technology in Blast Furnace Material Auto Raising Control System (Jin Yaling, Guo Jianshuang)....Pages 944-949
Research on Lateral Acceleration of Lane Changing (Wensheng Sun, Shufeng Wang)....Pages 950-960
Anchorage Effect on Rock Mass by Discontinuous Deformation Analysis for Rock Failure Method (Wen Wang, Moli Zhao, Rui Han, Xiangrong Shi)....Pages 961-964
Study on the Construction of College MOOC in the Era of Mobile Internet (Gao Bin)....Pages 965-973
Design of Shared Campus Travel Tools (Linan Zheng, Jian Zhao)....Pages 974-980
Analysis of Acceptance Degree of Safety Importance Element (Jiang Wei, Chunyang Liang)....Pages 981-988
IT Professional Robot Simulation Training System Based on Project Penetration and Swimming Pool Teaching Methods (Zhen Zhao, Guozhu Liu, Mengwei Xie, Yanqing Li, Jinghao Xu, Yang Gao et al.)....Pages 989-996
Study on the Relationship Between Urbanization and Tertiary Industry in Hubei Province (Guoyang Zhou, Wenbin Liu)....Pages 997-1006
Research on the Improvement of the Teaching Effect of the Traditional Computer Basic Public Course Based on the edX Technology MOOC Online Course (Hequn Wu, Mende Enhe, Lixia Suo)....Pages 1007-1012
Research on the Status Quo of Online Independent Learning for Postgraduates and Countermeasures (Xiaoli Wang, Zhanbo Liu, Lirong Su)....Pages 1013-1018
Simulation of Automated Stereo Warehouse System Based on Flexsim (Jiahui Liu, Guangqiu Lu, Haiqin Wang)....Pages 1019-1029
Some Suggestions on the Reform of Running Schools in Yunnan Higher Vocational Colleges (Wen Zhou, Boru Yang)....Pages 1030-1035
Auto Evaluation of Syntactic Analysis via Blending Word Similarity and Tree Status (Ruijuan Hu, Huifeng Tang)....Pages 1036-1046
Some Thoughts on Fire Emergency Communication Construction in the New Period (Yufeng Fan, Haotai Sun)....Pages 1047-1055
Individualized Design of Porous Titanium Alloy Implants Based on CT Data (Jinjun Tang, Jiang Zhang, Yulei Li, Yuelai Dai, Qun Wang)....Pages 1056-1061
Research on Planetary Transmission Scheme of Annular Stereo Garage Based on Simulation Analysis (Jingqi Wang, Xingquan Guan, Kaihan Wei, Haidie Chen, Mingze Gao)....Pages 1062-1069
The Application of Seeker Optimization Algorithm in Boiler Main Steam Pressure Control System (Ai Li, Xiong Wei)....Pages 1070-1076
Research on Application of Data Mining Based on Improved APRIORI Algorithm in Enrollment Management in Colleges and Universities (Hequn Wu, Mende Enhe, Jinyu Wang)....Pages 1077-1082
The Design of Simulation Sandbox System Based on Industrial Engineering (Guangqiu Lu, Jiahui Liu, Haiqin Wang)....Pages 1083-1092
Optimization of Control Strategy for Turbocharged Diesel Engine Under Transient Condition (Qiang Liu, Zhongchang Liu, Jing Tian, Yongqiang Han, Jun Wang, Jian Fang)....Pages 1093-1099
Research on Employees’ Intention of Bootleg Innovation (Wenxiang Wang, Yuanjian Qin)....Pages 1100-1108
Research on Teaching Reform of Java Programming Course (Jian Xu)....Pages 1109-1114
Analyses the Application of Multimedia Technology in Teaching (Shuang Yu, Guang Li, Xiaohui Cai)....Pages 1115-1119
Underlying Design and Implementation of a Wireless Data Transmission Device (Weixiong Wang, Wenjun Su, Yang Hu)....Pages 1120-1124
Task Pricing of “Taking Pictures for Making Money” (Han Li, Yi Cheng, Bingjun He, Shi Kang)....Pages 1125-1129
3-D Zero Offset Modeling (Yang Qiqiang)....Pages 1130-1135
Research on the Application of VR Technology in the Preservation of the Cultural Sites of Koguryo (Chunyan Wang, Yanjie Zhan)....Pages 1136-1143
Analysis of Teaching Reform on Costume Design and Engineering Specialty in Application-Oriented Universities (Xu Lan)....Pages 1144-1148
Application of FACTS Technology in China’s Transmission Systems (Jin Yiding)....Pages 1149-1156
Exploration and Practice of Teaching Mode Reform Based on “Maker Education” (Yingying Sun, Chunsu Zhang)....Pages 1157-1160
Numerical Simulation Method for Roll Forging and Forging Process of 7050 Aluminum Alloy Control Arm and Analysis of Final Forging Coarse Grain (Heng-yi Yuan, Lu Wang, Li-xin Yuan)....Pages 1161-1168
The Innovation of CAI in Art Design Education (Jing Cao, Gang Li)....Pages 1169-1174
The Control of Sensing Performance of CeO2-Sm2O3 Humidity Sensor (Chunjie Wang, Yue Wang, Lu Fu)....Pages 1175-1180
Study on Rapid Green Synthesis of Nanometer Silver Sol by Visible Light Reduction Method (Guangnian Xu, Guolian Ruan, Jiguang Zhu, Juncheng Jin)....Pages 1181-1187
The Design and Realization of the System Shows the Train Information Based on MCU (Guo Rui, Wu Tao)....Pages 1188-1193
Research on the Elements and Structure of Urban Innovation System (Bingfeng Liu, Mingjuan Jiang)....Pages 1194-1199
Research on PID Controller Based on Adaptive Internal Model Control (Xia Liu, Li Li, Qiyan Yan)....Pages 1200-1204
Research on the Risk of Computer Audit Under the Information Environment (Chan Li, Xiaochun Liu)....Pages 1205-1211
Modeling and Control of the Output Boost Converter for Photovoltaic Cells (Y. Shao, Wenbin Liu)....Pages 1212-1222
The Reform and Practice of Computer Public Teaching in University (Liming Zhang)....Pages 1223-1227
The Narrative of the Fourth Party Logistics (Wenguang Liang, Yi Ye)....Pages 1228-1234
Statistics Analysis on Political Skill and Career Success of IT Engineers (Qinglan Luo)....Pages 1235-1243
A FIR Digital Filter for Pulse Signal Processing (Yang Zhao)....Pages 1244-1248
The Design and Research of Intelligent Range Hood for Traceable Oil Fume Concentration Based on Sensor (Zhao Jian, Zheng Linan)....Pages 1249-1257
The Effect of Rural Financial Development on the Urban-Rural Income Gap—Data Analysis Based on Hubei Province (Chang Tan, Wenbin Liu, Xiaodi Qin, Yan Peng)....Pages 1258-1265
Improving Performance of Polymer Solar Cells by Double Plasma Resonance (Xu Peng, Li Wei)....Pages 1266-1271
An Early Warning System Design to Detect Human Diseases Based on Plantar Pressure (Enxiang Yu, Yunpeng Ma)....Pages 1272-1279
Research on Routing Algorithm of SDN Openflow Controller (Mingyuan Xin)....Pages 1280-1285
Comparative Analysis of Didi and Uber App (Wang Huimei, Liu Yadi)....Pages 1286-1291
Partial Discharge Test and Quantitative Analysis of High Voltage Switchgear (Gui Junfeng)....Pages 1292-1297
The Evolutionary Analysis on the Expropriation of Large Shareholders on Minority Shareholders (Yueping Wang)....Pages 1298-1305
Prediction of Smart Meter Life Assessment Based on Weibull Distribution (Baoliang Zhang, Penghe Zhang, Yang Xue, Tao Wang, Lingqing Guo)....Pages 1306-1314
An Analysis of Innovative Talents Training of Arts Education Under the Development of Creative Economy in the Guangxi Province (Wanli Gu)....Pages 1315-1320
Data Acquisition Researching of Ultraviolet Spectrometer Based on Raspberry Pi (Xu Shuping, Huang Mengyao, Xu Pei)....Pages 1321-1330
Histogram Equalization Image Enhancement Based on FPGA Algorithm Design and Implementation (Huihua Jiao, JieQing Xing, Wei Zhou)....Pages 1331-1340
Study on Income Distribution Mechanism of Renewable Energy Power Supply Chain (Qichang Li, Guangxia Li, Wang Lin, Qiang Song, Yundong Xiao)....Pages 1341-1354
Design a Link Adaptive Management Algorithm for BLE Device (Xichao Wang, Guobin Su, Yanbiao Hao, Lichuan Luo)....Pages 1355-1364
An On-Line Identification Method for Short-Circuit Impedance of Transformer Winding Based on Sudden Short Circuit Test (Yan Wu, Lingyun Gu, Xue Zhang, Jinyu Wang)....Pages 1365-1375
Construction of Mathematical Model for Statistical Analysis of Continuous Range Information System (Ye Zheng)....Pages 1376-1382
An Electricity-Stealing Identification Method Based on Outlier Algorithm (Ying Wang, Mingjiu Pan, Zhou Lan, Lei Wang, Liying Sun)....Pages 1383-1388
ANNs Combined with Genetic Algorithm Optimization for Symbiotic Medium of Two Oil-Degrading Bacteria Cycloclasticus Sp. and Alcanivorax Sp. (Zhang Shaojun, Wang Mingyu, Liu Bingbing, Pang Shouwen, Zhang Chengda)....Pages 1389-1397
Application Analysis of Beidou Satellite Navigation System in Marine Science (Zhiliang Fan)....Pages 1398-1403
Research on Optimization and Innovation of ERP Financial Module (Gao Shuang, Yang Lingxiao)....Pages 1404-1408
The Initial Application of Interactive Genetic Algorithm for Ceramic Modelling Design (Xing Xu, Jiantao Pi, Ao Xu, Kangle He, Jing Zheng)....Pages 1409-1419
Study of Properties of Solutions for a Viscoelastic Wave Equation System with Variable-Exponents (Yunzhu Gao, Qiu Meng, Haojie Guo, Jing Li, Changling Xu)....Pages 1420-1426
Functional Tolerance Optimization Design for Datum System (Bensheng Xu, Can Wang, Hongli Chen)....Pages 1427-1434
The Distribution of Financial Resources and the Difference of Regional Economic Growth—An Empirical Study Based on Spatial Spillover Effects in 31 Provinces in China (Zhaoyi Xu, Pengfei Chen)....Pages 1435-1443
Digital Image Information Hiding Technology Based on LSB and DWT Algorithm (Limei Chen)....Pages 1444-1453
A Review of the Application Model of WeChat in the Propaganda of Universities (Wenguang Liang, Yi Ye, Man Bao, Rongrui Liu)....Pages 1454-1458
Design of Deer Park Environment Detection System Based on a Zigbee (Mingtao Ma)....Pages 1459-1464
Equal-Distance Coupling Method in OFDM System Under Frequency Selective Channel (Xiaorui Hu, Jun Ye, Songnong Li, Ling Feng, Yongliang Ji, Lin Gong et al.)....Pages 1465-1469
Construction of Search Engine System Based on Multithread Distributed Web Crawler (Hongsheng Xu, Ganglong Fan, Ke Li)....Pages 1470-1476
Gene Regulatory Network Reconstruction from Yeast Expression Time Series (Ming Zheng, Mugui Zhuo)....Pages 1477-1481
Analysis and Design of Key Parameters in Intelligent System of Lime Rotary Kiln (Tingzhong Wang, Lingli Zhu)....Pages 1482-1488
Study on the Environmental Factors Affecting the NOx Results of Heavy Duty Vehicle PEMS Test (Huang Liyan, Liu Gang, Wang Detao, Zhang Xian)....Pages 1489-1496
Analysis and Expected Effect of the Phase III Fuel Consumption Standard for Light Duty Commercial Vehicles (Wang Zhao, Bao Xiang, Zheng Tianlei)....Pages 1497-1507
Construction of Driving Cycle Based on SOM Clustering Algorithm for Emission Prediction (Feng Li, Jihui Zhuang, Xiaoming Cheng, Jiaxing Wang, Zhenzheng Yan)....Pages 1508-1515
Fatigue Detection Based on Facial Features with CNN-HMM (Ting Yan, Changyuan Wang, Hongbo Jia)....Pages 1516-1524
Construction and Evaluation of a Blending Teaching Model of Linear Algebra and Probability Statistics in the “Internet +” Background by Using the Gradient Newton Combination Algorithm (Mandan Hou)....Pages 1525-1532
Research on Algorithm of Transfer Learning Based on Sensor Location (Fan Yang, Yutai Rao)....Pages 1533-1536
Study on the Application of Big Data Analysis in Monitoring Internet Rumors (Zijiang Zhu, Weihuang Dai, Yi Hu)....Pages 1537-1545
Eyes and Mouth States Detection for Drowsiness Determination (Yuexin Tian, Changyuan Wang, Hongbo Jia)....Pages 1546-1554
Feasibility and Risk Analysis of Data Security System Based on Power Architecture (Lei Yao)....Pages 1555-1560
Application of the Extension Point and Plug-Ins Idea in Transmission Network Management System (Wu Ping)....Pages 1561-1568
Reliability Modelling of a Typical Peripheral Component Interconnect (PCI) System with Dynamic Reliability Modelling Diagram (Guan Yi, Zhao Jiacong)....Pages 1569-1576
Mid-long Term Load Forecasting Model Based on Principal Component Regression (Xia Xinmao, Zhang Zengqiang, Yuan Chenghao, Yu Zhiyong)....Pages 1577-1583
Research on Intrusion Detection Algorithm in Cloud Computing (Yupeng Sang)....Pages 1584-1592
An Improved Medical Image Fusion Algorithm (Hui Li, Qiang Miao, Hua Shen)....Pages 1593-1601
Construction Study on the Evaluation Model of Business Environment of Cross-Border E-Commerce Enterprises (Zhao Yuxin, Zhuang Meinan, Wang Yatong)....Pages 1602-1611
Research on Information Construction of Enterprise Product Design and Manufacturing Platform (Yan Ningning)....Pages 1612-1617
Planning and Simulation of Mobile Charging for Electric Vehicles Based on Ant Colony Algorithm (Yang Fengwei, Liu Jinxin, Zhang Zixian, Chen Peng)....Pages 1618-1624
Research on Urban Greenway Design Based on Big Data (Wenjun Wang)....Pages 1625-1629
Design and Implementation of Art Rendering Model (Haichun Wei)....Pages 1630-1637
LiveCom Instant Messaging Service Platform (Qiang Miao, Hui Li, Xiaolong Song)....Pages 1638-1646
Empirical Research on Brand Building of Commonweal Organization Based on Data Analysis (Shujing Gao)....Pages 1647-1653
The Influence Study of Online Free Trail Activities on Product Sales Volume (Shushu Gu, Yun Lu, Xi Chen, Ranran Hua, Han Zhang)....Pages 1654-1662
Research on Behavior Oriented Scientific Research Credit Evaluation Method (Yan Zhao, Li Zhou, Bisong Liu, Zhou Jiang)....Pages 1663-1671
Realization and Improvement of Bayes Image Matting (Xu Qin, Ding Xinghao)....Pages 1672-1677
Research on Image Processing Algorithm in Intelligent Vehicle System Based on Visual Navigation (Congbo Luo, Zihe Tong)....Pages 1678-1688
A Study on Influencing Factors of International Applied Talent Cultivation in Higher Vocational Education (Jian Yong, Fanyi Kong, Xiuping Sui)....Pages 1689-1694
Research on the Platform Structure of Product Data Information Management System (Bingfeng Liu, Mingjuan Jiang)....Pages 1695-1699
Research on a New Clustering Validity Index Based on Data Mining (Chaobo Zhang)....Pages 1700-1704
Optimization of Ultrasonic Extraction for Citrus Polysaccharides by Response Surface Methodology (Yongguang Bi, Yanshan Lu, Zhipeng Su)....Pages 1705-1712
Research on the Transmission Mode of Asynchronous Link in a Telegraph Switching System (Weixiong Wang, Wenjun Su)....Pages 1713-1717
Prediction of Smart Meter Life Assessment Based on BP Neural Network (Penghe Zhang, Baoliang Zhang, Yang Xue, Tao Wang, Lingqing Guo)....Pages 1718-1727
Numerical Research on the Effect of Base Cavity upon the Projectile Flow Field (Zhihong Ye, Zhongxian Li, Haibo Lu)....Pages 1728-1733
Recognition of Marine Oil Spill with BP Artificial Neural Networks (Zhang Shaojun, Wang Mingyu, Pang Shouwen, Lv Wenxiang, Liu Bingbing)....Pages 1734-1741
Jiugu Mining District Commercial WI-FI Construction Project (Qi Yunrui)....Pages 1742-1750
Tolerance Information Representation and Reasoning Method Based on Ontology (Bensheng Xu, Weiqing Wang, Can Wang, Meifa Huang)....Pages 1751-1758
A Cooperative Control Scheme for Voltage Rise in Distribution Networks (Wu Liu, Jingfeng Zhou, Jian Wu, Weitang Fu, Mingliang Yang, Dawei Huang et al.)....Pages 1759-1769
A New OFDM System Based on Companding Transform Under Multipath Channel (Guangcheng Xie, Kaibo Luo, Yang Wang, Dexiang Yang, Jun Ye, Quan Zhou)....Pages 1770-1778
Construction of Gene Regulatory Networks Based on Ordered Conditional Mutual Information and Limited Parent Nodes (Ming Zheng, Mugui Zhuo)....Pages 1779-1784
Innovation Research of Cross Border E-commerce Shopping Guide Platform Based on Big Data and Artificial Intelligence (Jiahua Li)....Pages 1785-1792
Research on Interior Design of Smart Home (Hongxing Yi)....Pages 1793-1800
Antioxidant Activities of Polysaccharides from Citrus Peel (Yanshan Lu, Zhipeng Su, Yongguang Bi)....Pages 1801-1807
Design of Rural Home Security System Based on the Technology of Multi-characters Fusion (Shuchun Chen, Peng Chen, Libo Tian, Tao Wang)....Pages 1808-1814
Research on Offline Transaction Model in Mobile Payment System (Songnong Li, Xiaorui Hu, Fengling, Yu Zhang, Wei Dong, Jun Ye et al.)....Pages 1815-1820
The In-Use Performance Ratio of China Real World Vehicles and the Verification of Denominator/Numerator Increment Activity Compliance (Qian Guogang, Xie Nan, Yang Fan)....Pages 1821-1828
Simulation and Analysis of Vehicle Performance Based on Different Cycle Conditions (Yangmin Wu, Zhien Liu, Guangwei Xi)....Pages 1829-1840
Comparison Research on Lightweight Plan of Automotive Door Based on Life Cycle Assessment (LCA) (Yusong He, Yiwen Xie)....Pages 1841-1850
The Analysis of the New Energy Buses Operating Condition in the North China (Xiaoqin Yang, Lu Zhang, Yuze Zhang, Qiang Lu)....Pages 1851-1862
Experimental Study on Influence Factors of Emission and Energy Consumption for Plug-in Hybrid Electric Vehicle (Le Liu, Lihui Wang, Chunbei Dai)....Pages 1863-1873
Design of On-Line Measurement System for Fine Particle Number Concentration of Vehicle Exhaust Based on Diffusion Charge Theory (Zhouyang Cong, Tongzhu Yu, Huaqiao Gui, Yixin Yang, Jiaoshi Zhang, Yin Cheng et al.)....Pages 1874-1884
Application of On-Board-On-Line Surveillance in Environment Supervision (Wei Gu, Liqiao Li, Jingliang Wu, Hong Cai, Yun Li, Gang Li)....Pages 1885-1895
Research on Diagnostics Methods of CNG Engine After Treatment Catalyst (Tianchi Xie, Hongqi Liu, Haipeng Deng, Yupeng Wang, Ying Gao)....Pages 1896-1903
Privacy-Preserving Authentication and Service Rights Management for the Internet of Vehicles (Wei-Chen Wu, Horng-Twu Liaw)....Pages 1904-1912
RSU Beacon Aided Trust Management System for Location Privacy-Enhanced VANETs (Yu-Chih Wei, Yi-Ming Chen, Wei-Chen Wu, Ya-Chi Chu)....Pages 1913-1924
Constructing Prediction Model of Lung Cancer Treatment Survival (Hsiu-An Lee, Louis R. Chao, Hsiao-Hsien Rau, Chien-Yeh Hsu)....Pages 1925-1933
Research on the Combination of IoT and Assistive Technology Device—Prosthetic Damping Control as an Example (Yi-Shun Chou, Der-Fa Chen)....Pages 1934-1938
A Study on the Demand of Latent Physical and Mental Disorders in Taipei City (Jui-hung Kao, Horng-Twu Liaw, Po-Huan Hsiao)....Pages 1939-1946
Real-Time Analyzing Driver’s Image for Driving Safety (Kuo-Feng Wu, Horng-Twu Liaw, Shin-Wen Chang)....Pages 1947-1951
A Web Accessibility Study in Mobile Phone for the Aging People with Degradation of Vision (Chi Nung Chu)....Pages 1952-1956
The Comparison Between Online Social Data and Offline Crowd Data: An Example of Retail Stores (Jhu-Jyun Huang, Tai-Ta Kuo, Ping-I Chen, Fu-Jheng Jheng)....Pages 1957-1965
The Historical Review and Current Trends in Speech Synthesis by Bibliometric Approach (Guang-Feng Deng, Cheng-Hung Tsai, Tsun Ku)....Pages 1966-1978
A Study on Social Support, Participation Motivation and Learning Satisfaction of Senior Learners (Hsiang Huang, Zne-jung Lee, Wei-san Su)....Pages 1979-1984
A Health Information Exchange Based on Block Chain and Cryptography (Wei-Chen Wu, Yu-Chih Wei)....Pages 1985-1990
The Relationship of Oral Hygiene Behavior and Knowledge (Cheng youeh Tsai, Frica Chai, Ming-Sung Hsu, Wei-Ming Ou)....Pages 1991-1995
Back Matter ....Pages 1997-2003


Lecture Notes in Electrical Engineering 542

Jason C. Hung · Neil Y. Yen · Lin Hui, Editors

Frontier Computing Theory, Technologies and Applications (FC 2018)

Lecture Notes in Electrical Engineering Volume 542

Series Editors:
Leopoldo Angrisani, Department of Electrical and Information Technologies Engineering, University of Napoli Federico II, Napoli, Italy
Marco Arteaga, Departament de Control y Robótica, Universidad Nacional Autónoma de México, Coyoacán, Mexico
Bijaya Ketan Panigrahi, Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, Delhi, India
Samarjit Chakraborty, Fakultät für Elektrotechnik und Informationstechnik, TU München, München, Germany
Jiming Chen, Zhejiang University, Hangzhou, Zhejiang, China
Shanben Chen, Materials Science & Engineering, Shanghai Jiao Tong University, Shanghai, China
Tan Kay Chen, Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
Rüdiger Dillmann, Humanoids and Intelligent Systems Lab, Karlsruhe Institute for Technology, Karlsruhe, Baden-Württemberg, Germany
Haibin Duan, Beijing University of Aeronautics and Astronautics, Beijing, China
Gianluigi Ferrari, Università di Parma, Parma, Italy
Manuel Ferre, Centre for Automation and Robotics CAR (UPM-CSIC), Universidad Politécnica de Madrid, Madrid, Madrid, Spain
Sandra Hirche, Department of Electrical Engineering and Information Science, Technische Universität München, München, Germany
Faryar Jabbari, Department of Mechanical and Aerospace Engineering, University of California, Irvine, CA, USA
Limin Jia, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Alaa Khamis, German University in Egypt El Tagamoa El Khames, New Cairo City, Egypt
Torsten Kroeger, Stanford University, Stanford, CA, USA
Qilian Liang, Department of Electrical Engineering, University of Texas at Arlington, Arlington, TX, USA
Ferran Martin, Departament d’Enginyeria Electrònica, Universitat Autònoma de Barcelona, Bellaterra, Barcelona, Spain
Tan Cher Ming, College of Engineering, Nanyang Technological University, Singapore, Singapore
Wolfgang Minker, Institute of Information Technology, University of Ulm, Ulm, Germany
Pradeep Misra, Department of Electrical Engineering, Wright State University, Dayton, OH, USA
Sebastian Möller, Quality and Usability Lab, TU Berlin, Berlin, Germany
Subhas Mukhopadhyay, School of Engineering & Advanced Technology, Massey University, Palmerston North, Manawatu-Wanganui, New Zealand
Cun-Zheng Ning, Electrical Engineering, Arizona State University, Tempe, AZ, USA
Toyoaki Nishida, Graduate School of Informatics, Kyoto University, Kyoto, Kyoto, Japan
Federica Pascucci, Dipartimento di Ingegneria, Università degli Studi “Roma Tre”, Rome, Italy
Yong Qin, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Gan Woon Seng, School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore, Singapore
Joachim Speidel, Institute of Telecommunications, Universität Stuttgart, Stuttgart, Baden-Württemberg, Germany
Germano Veiga, Campus da FEUP, INESC Porto, Porto, Portugal
Haitao Wu, Academy of Opto-electronics, Chinese Academy of Sciences, Beijing, China
Junjie James Zhang, Charlotte, NC, USA

The book series Lecture Notes in Electrical Engineering (LNEE) publishes the latest developments in Electrical Engineering - quickly, informally and in high quality. While original research reported in proceedings and monographs has traditionally formed the core of LNEE, we also encourage authors to submit books devoted to supporting student education and professional training in the various fields and applications areas of electrical engineering. The series covers classical and emerging topics concerning:

• Communication Engineering, Information Theory and Networks
• Electronics Engineering and Microelectronics
• Signal, Image and Speech Processing
• Wireless and Mobile Communication
• Circuits and Systems
• Energy Systems, Power Electronics and Electrical Machines
• Electro-optical Engineering
• Instrumentation Engineering
• Avionics Engineering
• Control Systems
• Internet-of-Things and Cybersecurity
• Biomedical Devices, MEMS and NEMS

For general information about this book series, comments or suggestions, please contact leontina.[email protected]. To submit a proposal or request further information, please contact the Publishing Editor in your country:

China: Jasmine Dou, Associate Editor ([email protected])
India: Swati Meherishi, Executive Editor ([email protected]); Aninda Bose, Senior Editor ([email protected])
Japan: Takeyuki Yonezawa, Editorial Director ([email protected])
South Korea: Smith (Ahram) Chae, Editor ([email protected])
Southeast Asia: Ramesh Nath Premnath, Editor ([email protected])
USA, Canada: Michael Luby, Senior Editor ([email protected])
All other countries: Leontina Di Cecco, Senior Editor ([email protected]); Christoph Baumann, Executive Editor ([email protected])

** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex, SCOPUS, MetaPress, Web of Science and Springerlink **

More information about this series at http://www.springer.com/series/7818

Jason C. Hung · Neil Y. Yen · Lin Hui

Editors

Frontier Computing Theory, Technologies and Applications (FC 2018)


Editors Jason C. Hung Department of Computer Science and Information Engineering National Taichung University of Science and Technology Taichung, Taiwan

Neil Y. Yen School of Computer Science and Engineering The University of Aizu Aizu-Wakamatsu, Japan

Lin Hui Department of Innovative Information and Technology Tamkang University Yilan County, Taiwan

ISSN 1876-1100  ISSN 1876-1119 (electronic)
Lecture Notes in Electrical Engineering
ISBN 978-981-13-3647-8  ISBN 978-981-13-3648-5 (eBook)
https://doi.org/10.1007/978-981-13-3648-5
Library of Congress Control Number: 2018963066

© Springer Nature Singapore Pte Ltd. 2019

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore.

Contents

Research on Application’s Credibility Test Method and Calculation Method Based on Application Behavior Declaration (Xuejun Yu and Ran Xiao) .... 1
The Selection of DNA Aptamers Against the Fc Region of Human IgG (Wen-Pin Hu, Hui-Ting Lin, Wen-Yu Su, Rouh-Mei Hu, Wei Yang, Wen-Yih Chen, and Jeffrey J. P. Tsai) .... 11
Application of Deep Reinforcement Learning in Beam Offset Calibration of MEBT at C-ADS Injector-II (Jinqiang Wang, Xuhui Yang, Binbin Yong, Qingguo Zhou, Yuan He, Lihui Luo, and Rui Zhou) .... 20
Predicting Students’ Academic Performance Using Utility Based Educational Data Mining (K. T. S. Kasthuriarachchi and S. R. Liyanage) .... 29
Theme Evolution Analysis of Public Security Events Based on Hierarchical Dirichlet Process (Hong Hu, Xiao Wei, and Zhanxing Hao) .... 40
Recurrent Neural Networks for Analysis and Automated Air Pollution Forecasting (Ching-Fang Lee, Chao-Tung Yang, Endah Kristiani, Yu-Tse Tsan, Wei-Cheng Chan, and Chin-Yin Huang) .... 50
A Social Influence Account of Problematic Smartphone Use (Chi-Ying Chen and Shao-Liang Chang) .... 60
Simulation Analysis of Information-Based Animal Observation System (Lin Hui, Yi-Cheng Chen, Kuei Min Wang, Chiao Ming Peng, and Kai-Ze Weng) .... 64

Effect of Fintech on the Cost Malmquist Productivity Index in the China Banking Industry (Wei-Liu Liao, Cui-You Yao, and Yi-Ping Yang) .... 74
A Graph-Based Approach for Semantic Medical Search (Qing Zhao, Yangyang Kang, Jianqiang Li, and Dan Wang) .... 89
On Construction of a Power Data Lake Platform Using Spark (Tzu-Yang Chen, Chao-Tung Yang, Endah Kristiani, and Chun-Tse Cheng) .... 99
Selection Issues of Kernel Function and Its Parameters of Hard Margin Support Vector Machine in a Real-World Handwriting Device (Yan Pei, Lei Jing, and Jianqiang Li) .... 109
Adaptive Threshold-Based Algorithm for Multi-objective VM Placement in Cloud Data Centers (Nithiya Baskaran and R. Eswari) .... 118
Monitoring Bio-Chemical Indicators Using Machine Learning Techniques for an Effective Large for Gestational Age Prediction Model with Reduced Computational Overhead (Faheem Akhtar, Jianqiang Li, Yu Guan, Azhar Imran, and Muhammad Azeem) .... 130
Hybrid Technology in Video Annotation by Using the APP and Raspberry Pi—Applied in Agricultural Surveillance System (Yong-Kok Tan, Lin-Lin Wang, and Deng-Yuan Theng) .... 138
Film Classification Using HSV Distribution and Deep Learning Neural Networks (Ching-Ta Lu, Jun-Hong Shen, Ling-Ling Wang, Chia-Hua Liu, Chia-Yi Chang, and Kun-Fu Tseng) .... 144
Control Strategy-Based Intelligent Planning of Service Composition (Yishui Zhu, Lei Tang, Jun Zhang, Zongtao Duan, and Hua Jiang) .... 154
IoT Enabled Environmental Monitoring System (Leo Willyanto Santoso and Markus Daud Giantara) .... 164
Neighbor Link-Based Spatial Index for k Nearest Neighbor Queries in Wireless Systems (Jun-Hong Shen, Ching-Ta Lu, and Hong-Ray Chu) .... 174
A Memory-Friendly Life Film Editing System for Smart Devices (Shih-Nung Chen and Po-Zung Chen) .... 181

The Implementation of Conversation Bot for Smart Home Environment (Chuan-Feng Chih, Steen J. Hsu, Pei-Ting Chen, Yun-Ju Chen, and Chun-Yu Lu) .... 187
A Study on the User Cognitive Model of Learning Management System (Hsin-Chao Ho, Mong-Te Wang, Shu-Chuan Shih, Chiu-Hsing Kuo, and Ching-Pin Tsai) .... 193
GVM Based Copy-Dynamics Model for Electricity Load Forecast (Binbin Yong, Liang Huang, Fucun Li, Jun Shen, Xin Wang, and Qingguo Zhou) .... 203
Combining Voice and Image Recognition for Smart Home Security System (Hung-Te Lee, Rung-Ching Chen, and Wei-Hsiang Chung) .... 212
Evaluation of New Energy Operation Based on Comprehensive Evaluation Method (Hang Yin, Zifen Han, Jin Li, Xiang Wu, Ningbo Wang, and Yan Li) .... 222
Comparison of Different Machine Learning Methods to Forecast Air Quality Index (Bo Liu, Chao Shi, Jianqiang Li, Yong Li, Jianlei Lang, and Rentao Gu) .... 235
The Implementation of Wi-Fi Log Analysis System with ELK Stack (Yuan-Ting Wang, Chao-Tung Yang, Endah Kristiani, and Yu-Wei Chan) .... 246
The Implementation of NetFlow Log System Using Ceph and ELK Stack (Yuan-Ting Wang, Chao-Tung Yang, Endah Kristiani, Ming-Lun Liu, Ching-Han Lai, Wei-Je Jiang, and Yu-Wei Chan) .... 256
A Virtual Interactive System for Merchandising Stores (Meng-Yen Hsieh, Hua-Yi Lin, and Tien-Hsiung Weng) .... 266
Data Imputation in EEG Signals for Brainprint Identification (Siaw-Hong Liew, Yun-Huoy Choo, and Yin Fen Low) .... 278
A Supervised Approach for Patient-Specific ICU Mortality Prediction Using Feature Modeling (Gokul S. Krishnan and S. Sowmya Kamath) .... 287
Interpretable Learning: A Result-Oriented Explanation for Automatic Cataract Detection (Jianqiang Li, Liyang Xie, Li Zhang, Lu Liu, Pengzhi Li, Ji-jiang Yang, and Qing Wang) .... 296

Scalable Data-Storage Framework for Smart Manufacturing (Hsiao-Yu Wang and Chen-Kun Tsung) .... 307
Reposition Cyber-Physical System to Minimizing the Gap between Cyber and Physical (Chun-Tai Yen and Chen-Kun Tsung) .... 314
Design and Implementation of a Learning Emotion Recognition System (Kuan-Cheng Lin, Li-Chun Sue, and Jason C. Hung) .... 319
Leveraging Explicit Products Relationships for Improved Collaborative Filtering Recommendation Algorithm (Shunpan Liang, Jinqing Zhao, Fuyong Yuan, and Fuzhi Zhang) .... 324
Establishment of Model of Open Audition for Freshmen Athletes (Yu-Yang Chen, Tai-Lun Chan, and Mong-Te Wang) .... 336
A Difference Detection Mechanism Between API Cache and Data Center in Smart Manufacturing (Chun-Tai Yen, Chen-Kun Tsung, and Wen-Fang Wu) .... 349
Elementary School Teachers’ Attitudes Toward Flipped Classrooms (Pao-Chu Huang, Li Yue, and Hsuan-Pu Chang) .... 354
Matching Game-Based Power Control in Cooperative Cognitive Radio Networks (Min-Kuan Chang, Yong-Jen Mei, Chao-Tung Yang, Yu-Wei Chan, and Wun-Ren Chen) .... 359
A Coalitional Graph Game Framework for Broadcasting in Wireless Networks (Yu-Wei Chan, Min-Kuan Chang, and Wei-Chun Ho) .... 364
Examining the Impact of Exercise Tracking Data on Promoting Regular Exercise Among University Students (Chia-Hsien Wen, Wei-Yueh Chang, Tsan-Ching Kang, Chen-Lin Chang, and Yu-Wei Chan) .... 370
Understanding the Motivation for Exercise Through Smart Bracelets: The Importance of a Healthy Lifestyle (Chia-Hsien Wen, Wei-Yueh Chang, Tsan-Ching Kang, Chen-Lin Chang, and Yu-Wei Chan) .... 374
A Research of Demodulation Based Technique for Frequency Estimation (Zhi Quan and Chaoyi Ma) .... 378

The Implementation of a Campus Air Monitoring System Using LoRa Network (Yu-Sheng Lin, Chao-Tung Yang, Tzu-Chiang Chiang, Jung-Chun Liu, and Tsung-Che Yang) .... 388
Fuzzy Prediction for Time-Series Data—A Case Study at Taichung City Open Data of Air Pollution (Wei-Sheng Huang, Tzu-Chiang Chiang, Chao-Tung Yang, and Chung-Chi Lin) .... 398
Editing k-Nearest Neighbor Reference Set by Growing from Two Extreme Data Points (Jinwoo Park, Sungzoon Jo, and Hyeon Jo) .... 408
Realization of Combined Systemic Safety Analysis of Adverse Train Control System Using Model Checking (Anit Thapaliya and Gihwon Kwon) .... 419
Synthesizing Use and Misuse Case Specification with Integrated Safety Analysis in Railway Control System (Anit Thapaliya and Gihwon Kwon) .... 431
Risk Assessment for STPA with FMEA Technique (Ngoc-Tung La and Gihwon Kwon) .... 444
Social Network Analysis on Tourism-Related Civil Complaints in Busan Metropolitan City (Na-Rang Kim and Soon-Goo Hong) .... 456
Analysis on Variables Affecting Youth Stress with Bayesian Networks (Euihyun Jung) .... 465
GPU-Based Fast Motion Synthesis of Large Crowds Based on Multi-joint Models (Mankyu Sung) .... 472
Sperm Count Analysis Using Microscopic Image Processing (Hyun-Mo Yang, Dong-Woo Lim, Yong-Sik Choi, In-Hwan Kim, Ailing Lin, and Jin-Woo Jung) .... 484
A Method to Real-Time Update Speaker Pronunciation Time-Database for the Application of Informatized Caption Enhancement by IBM Watson API (Yong-Sik Choi, In-Hwan Kim, Hyun-Mo Yang, Dong-Woo Lim, Ailing Lin, and Jin-Woo Jung) .... 490
A Study of Fandom Crowdsourcing Method Using Big Data (Se Jong Oh, Mee Hwa Park, Jeong Uijeoung, and Ill Chul Doo) .... 496

Scenario Based Practical Information Protection Training System Using Virtualization System (Sungkyu Yeom, Dongil Shin, and Dongkyoo Shin) .... 503
Cyber Battle Damage Assessment Framework and Detection of Unauthorized Wireless Access Point Using Machine Learning (Duhoe Kim, Doyeon Kim, Dongil Shin, Dongkyoo Shin, and Yong-Hyun Kim) .... 510
A Text Mining Approach to Study Individuals’ Food Choices and Eating Behavior Using Twitter Feeds (Ayuna Dondokova, Satyabrata Aich, Hee-Cheol Kim, and Gyung Hye Huh) .... 520
A Multi Criteria Decision Modelling Approach for Gait Analysis of Parkinson’s Disease Using Wearable Sensors to Compare the Classification Performance Based on the Different Feature Selection Methods (Satyabrata Aich, Kamalakanta Muduli, and Hee-Cheol Kim) .... 528
Comparison of BVH and KD-Tree for the GPGPU Acceleration on Real Mobile Devices (SeungWoo Chung, MinKyoung Choi, DaeGeun Youn, and SeongKi Kim) .... 535
Sentiment Analysis of Korean Teenagers’ Language Based on Sentiment Dictionary Construction (Jason Kim, Min Kyoung Kim, Yeoeun Park, Eomji Kim, Junhee Lee, Dongho Kim, and Seonho Kim) .... 541
Research on Vocal Tic Symptom Detecting Using SVM/HMM (Su-Seong Chai, InA Kim, and Kyu-Chul Lee) .... 551
A Study on Visual Programming Platform Design for VR/AR SW Education (Hae-Jong Joo, Ho-Bin Song, and Min-Kyu Park) .... 560
Interface for VR, MR and AR Based on Eye-Tracking Sensor Technology (Pill-Won Park, Ho-Bin Song, and Hae-Jong Joo) .... 566
A Study of Security Model for Mobile Education Service (Dae Bum Lee and Hai-Gil Choi) .... 571
How Easy Is It to Surf the Semantic Web? (Jungmin Lee, Changu Kang, Jisun Chae, Hyeonmin Park, and Seongbin Park) .... 579
Success Factor for Autonomous Vehicle (Hwa-Young Jeong) .... 584

Development of Supporting Tool for Executing Synthesized Automaton Representing Controller (Seonil Jung and Gihwon Kwon) .... 588
Application of Data Mining Techniques on New Product Development Decision-Making (You-Shyang Chen, Chien-Ku Lin, and Chyi-Jia Huang) .... 596
Smart Counter of Hospital for Patient Drug Pickup (Cheng-Ming Chang) .... 601
Impact of Collaboration with LINE Application on Work Performance (Jerome Chih-Lung Chou) .... 606
IOT Based Smart Water Monitoring Using Image Processing (Jieh-Ren Chang, Nakul Agarwal, Yipeng Bao, and Abhishek Sharma) .... 611
A Study on Blockchain Applications in Healthcare (Abhishek Sharma, Ying-Hsun Hung, Punit Agarwal, and Muskan Kalra) .... 623
A Study on Computer Vision Techniques for Self-driving Cars (Nakul Agarwal, Cheng-Wei Chiang, and Abhishek Sharma) .... 629
Real Time Human Fall Detection Using Accelerometer and IoT (Kritika Johari, Jing-Wei Liu, Thinagaran Perumal, Abhishek Sharma, Tanmay Chaturvedi, and Jieh-Ren Chang) .... 635
A Study of the Interactive Styles of an Augmented Reality System for Cultural Innovative Product Design (Cheng-Wei Chiang and Jing-Wei Liu) .... 640
Mechanism and Application of Smart City by Developing Intelligent Cloud-Based Transportation Vehicle Surveillance System (You-Shyang Chen and Yao-Wen Kan) .... 647
A Study on the Necessity of Integrating Games into College Physical Education (Di Zhang) .... 653
An Empirical Study and Analysis of the Performance Model for the Foreign Language Learning (Junmin Wu) .... 658
Construction and Application of the English Corpus Based on the Statistical Language Model (Bo Zhang) .... 665
Design and Implementation of the Teaching Management System for English Language and Literature (Xichang Zhang) .... 671

Design of Multi-vehicle Flexible Rotational System Based on Green Dismantling of End-of-Life Vehicles (Shaoyi Bei, Liang Zhao, Tao Wang, and Zijian Gao) .... 678
Design of the In-depth Intelligent Educational System Based on the English Learning (Fanghui Zhao) .... 687
Design of the In-depth Intelligent Learning System Based on the College English Teaching (Rongrong Ruan) .... 694
Discussions on the Construction of Digital Campus in Higher Vocational Colleges (Jianjian Luo) .... 701
Fuzzy Evaluation Mode of the Innovative Foreign Language Talents Based on the Analytic Hierarchy Process (Caihong Han) .... 706
Investigation and Countermeasure Study on Influencing Factors of Career Planning of Teaching Auxiliary Staff in Colleges and Universities (Chonghong Wei) .... 713
On the Construction of Digital Campus in the Development of Higher Vocational Colleges (Haiying Zhou) .... 722
Research and Implementation of the Key Algorithms for the Big Data in Education (Meiliu Luo) .... 728
Research on the Competency Models of the Foreign Language Teachers in Colleges and Universities (Fengxia Guo) .... 734
Research on the Construction of the Multivariate English Teaching Mode Combined with the Cloud Technology (Qinyuan Hui) .... 741
Research on the Design of the Multimedia Intelligence Teaching System Based on the Oral English Curriculum (Ke Zhang) .... 748
Research on the Evaluation Methods of the Guidance and Test on the Student Employment Based on the Computer Model (Wenzhao Zhang) .... 754

Research on the Multimedia Assisted English Teaching Mode Based on the Computer Platform (Ning Wang) .... 761
A New Method for Calculating Excess Air Ratio (Yalan Ye, Wenhao Jiang, Hongming Wang, and Xiang An) .... 767
High Adaptability Storage Management File System in Industrial (Huizhong Liu) .... 777
A Wifi Location Method Which Supports the Combination of Position and Location Fingerprint (Mingyuan Xin) .... 784
Robust Control for Time-Delay Singular Systems Based on Passivity Analysis (Chunyan Ding and Qin Li) .... 792
Oscillation Criteria for a Class of Third-Order Neutral Dynamic Equations with Distributed Deviating Arguments (Yuanxian Hui, Peiluan Li, and Xunhuan Deng) .... 803
Studies on College Translation Teaching Models in an Information and Internet Age (Wanfang Zhang) .... 815
Study on the Calibration Method of Gaze Point in Gaze Tracking (Yongsheng Zhou, Changyuan Wang, and Hongbo Jia) .... 820
Innovation of “Internet +” Teaching Model Based on Organic Philosophy of Whitehead (Lin Lin, Xue-bo Yang, Li Yang, and Bing-jie Xu) .... 831
The Analysis of the Correlation Between Market Value and Volatility—Based on CSI 300 (Weiqing Wang, Jiajun Fu, and Ziyuan Li) .... 841
Applied Research of “Progressive in Class” Flipped Classroom Teaching Model (Yingying Sun) .... 849
Calculating the Resistance Inductance Parameter of the Interconnection with High Order Basis Function Method (Baojun Chen and Yanjie Ju) .... 854
Research on Equipment Status and Operation Information Acquisition Based on Equipment Control Bus (Xu Li, Chen Meng, Cheng Wang, and Zhenghu Chen) .... 864

Design and Analysis of Tool Wear Detection Mechanism . . . 872
Xiao-Yun Li and Yi-Jui Chiu

Information Literacy Training for Students in Public Security Colleges and Universities Under the New Normal . . . 880
Jing Leng and Miaomiao Zhang

Layout Optimization of Book Sorting Machine Based on Sequential Control . . . 887
Junhao Jiang and Peijiang Chen

Analysis of Characteristics of High-Temperature Disasters in Flue-Cured Tobacco Based on GIS . . . 896
Zhiguo Ma and Wenqing Li

Typical Standards Developing Organization Case Analysis and Its Enlightenments to China’s Standardization Development . . . 902
Qing Xu

Research on the Restrictive Factors and Coping Strategies of Individual Income Tax Reform . . . 909
Peng Sun and Weishuang Xu

College Students’ Autonomous English Learning in Computer and Network-Based Self-access Center . . . 915
Yulin Weng

A Development Research of Teachers in Private Higher Vocational Colleges . . . 923
Yang Boru and Zhou Wen

Adaptive Harris Corner Detection Algorithm Based on Modified Detector . . . 929
Peijiang Chen and Liying Wu

Research on Energy Efficiency Optimization for New Renewable Energy Sources Data Center . . . 936
Changgeng Yu and Liping Lai

The Application of Configuration Software and ActiveX Technology in Blast Furnace Material Auto Raising Control System . . . 944
Jin Yaling and Guo Jianshuang

Research on Lateral Acceleration of Lane Changing . . . 950
Wensheng Sun and Shufeng Wang

Anchorage Effect on Rock Mass by Discontinuous Deformation Analysis for Rock Failure Method . . . 961
Wen Wang, Moli Zhao, Rui Han, and Xiangrong Shi


Study on the Construction of College MOOC in the Era of Mobile Internet . . . 965
Gao Bin

Design of Shared Campus Travel Tools . . . 974
Linan Zheng and Jian Zhao

Analysis of Acceptance Degree of Safety Importance Element . . . 981
Jiang Wei and Chunyang Liang

IT Professional Robot Simulation Training System Based on Project Penetration and Swimming Pool Teaching Methods . . . 989
Zhen Zhao, Guozhu Liu, Mengwei Xie, Yanqing Li, Jinghao Xu, Yang Gao, Yang Liu, and Qiang Hu

Study on the Relationship Between Urbanization and Tertiary Industry in Hubei Province . . . 997
Guoyang Zhou and Wenbin Liu

Research on the Improvement of the Teaching Effect of the Traditional Computer Basic Public Course Based on the edX Technology MOOC Online Course . . . 1007
Hequn Wu, Mende Enhe, and Lixia Suo

Research on the Status Quo of Online Independent Learning for Postgraduates and Countermeasures . . . 1013
Xiaoli Wang, Zhanbo Liu, and Lirong Su

Simulation of Automated Stereo Warehouse System Based on Flexsim . . . 1019
Jiahui Liu, Guangqiu Lu, and Haiqin Wang

Some Suggestions on the Reform of Running Schools in Yunnan Higher Vocational Colleges . . . 1030
Wen Zhou and Boru Yang

Auto Evaluation of Syntactic Analysis via Blending Word Similarity and Tree Status . . . 1036
Ruijuan Hu and Huifeng Tang

Some Thoughts on Fire Emergency Communication Construction in the New Period . . . 1047
Yufeng Fan and Haotai Sun

Individualized Design of Porous Titanium Alloy Implants Based on CT Data . . . 1056
Jinjun Tang, Jiang Zhang, Yulei Li, Yuelai Dai, and Qun Wang

Research on Planetary Transmission Scheme of Annular Stereo Garage Based on Simulation Analysis . . . 1062
Jingqi Wang, Xingquan Guan, Kaihan Wei, Haidie Chen, and Mingze Gao

The Application of Seeker Optimization Algorithm in Boiler Main Steam Pressure Control System . . . 1070
Ai Li and Xiong Wei

Research on Application of Data Mining Based on Improved APRIORI Algorithm in Enrollment Management in Colleges and Universities . . . 1077
Hequn Wu, Mende Enhe, and Jinyu Wang

The Design of Simulation Sandbox System Based on Industrial Engineering . . . 1083
Guangqiu Lu, Jiahui Liu, and Haiqin Wang

Optimization of Control Strategy for Turbocharged Diesel Engine Under Transient Condition . . . 1093
Qiang Liu, Zhongchang Liu, Jing Tian, Yongqiang Han, Jun Wang, and Jian Fang

Research on Employees’ Intention of Bootleg Innovation . . . 1100
Wenxiang Wang and Yuanjian Qin

Research on Teaching Reform of Java Programming Course . . . 1109
Jian Xu

Analyses the Application of Multimedia Technology in Teaching . . . 1115
Shuang Yu, Guang Li, and Xiaohui Cai

Underlying Design and Implementation of a Wireless Data Transmission Device . . . 1120
Weixiong Wang, Wenjun Su, and Yang Hu

Task Pricing of “Taking Pictures for Making Money” . . . 1125
Han Li, Yi Cheng, Bingjun He, and Shi Kang

3-D Zero Offset Modeling . . . 1130
Yang Qiqiang

Research on the Application of VR Technology in the Preservation of the Cultural Sites of Koguryo . . . 1136
Chunyan Wang and Yanjie Zhan

Analysis of Teaching Reform on Costume Design and Engineering Specialty in Application-Oriented Universities . . . 1144
Xu Lan


Application of FACTS Technology in China’s Transmission Systems . . . 1149
Jin Yiding

Exploration and Practice of Teaching Mode Reform Based on “Maker Education” . . . 1157
Yingying Sun and Chunsu Zhang

Numerical Simulation Method for Roll Forging and Forging Process of 7050 Aluminum Alloy Control Arm and Analysis of Final Forging Coarse Grain . . . 1161
Heng-yi Yuan, Lu Wang, and Li-xin Yuan

The Innovation of CAI in Art Design Education . . . 1169
Jing Cao and Gang Li

The Control of Sensing Performance of CeO2-Sm2O3 Humidity Sensor . . . 1175
Chunjie Wang, Yue Wang, and Lu Fu

Study on Rapid Green Synthesis of Nanometer Silver Sol by Visible Light Reduction Method . . . 1181
Guangnian Xu, Guolian Ruan, Jiguang Zhu, and Juncheng Jin

The Design and Realization of the System Shows the Train Information Based on MCU . . . 1188
Guo Rui and Wu Tao

Research on the Elements and Structure of Urban Innovation System . . . 1194
Bingfeng Liu and Mingjuan Jiang

Research on PID Controller Based on Adaptive Internal Model Control . . . 1200
Xia Liu, Li Li, and Qiyan Yan

Research on the Risk of Computer Audit Under the Information Environment . . . 1205
Chan Li and Xiaochun Liu

Modeling and Control of the Output Boost Converter for Photovoltaic Cells . . . 1212
Y. Shao and Wenbin Liu

The Reform and Practice of Computer Public Teaching in University . . . 1223
Liming Zhang

The Narrative of the Fourth Party Logistics . . . 1228
Wenguang Liang and Yi Ye


Statistics Analysis on Political Skill and Career Success of IT Engineers . . . 1235
Qinglan Luo

A FIR Digital Filter for Pulse Signal Processing . . . 1244
Yang Zhao

The Design and Research of Intelligent Range Hood for Traceable Oil Fume Concentration Based on Sensor . . . 1249
Zhao Jian and Zheng Linan

The Effect of Rural Financial Development on the Urban-Rural Income Gap—Data Analysis Based on Hubei Province . . . 1258
Chang Tan, Wenbin Liu, Xiaodi Qin, and Yan Peng

Improving Performance of Polymer Solar Cells by Double Plasma Resonance . . . 1266
Xu Peng and Li Wei

An Early Warning System Design to Detect Human Diseases Based on Plantar Pressure . . . 1272
Enxiang Yu and Yunpeng Ma

Research on Routing Algorithm of SDN Openflow Controller . . . 1280
Mingyuan Xin

Comparative Analysis of Didi and Uber App . . . 1286
Wang Huimei and Liu Yadi

Partial Discharge Test and Quantitative Analysis of High Voltage Switchgear . . . 1292
Gui Junfeng

The Evolutionary Analysis on the Expropriation of Large Shareholders on Minority Shareholders . . . 1298
Yueping Wang

Prediction of Smart Meter Life Assessment Based on Weibull Distribution . . . 1306
Baoliang Zhang, Penghe Zhang, Yang Xue, Tao Wang, and Lingqing Guo

An Analysis of Innovative Talents Training of Arts Education Under the Development of Creative Economy in the Guangxi Province . . . 1315
Wanli Gu

Data Acquisition Researching of Ultraviolet Spectrometer Based on Raspberry Pi . . . 1321
Xu Shuping, Huang Mengyao, and Xu Pei


Histogram Equalization Image Enhancement Based on FPGA Algorithm Design and Implementation . . . 1331
Huihua Jiao, JieQing Xing, and Wei Zhou

Study on Income Distribution Mechanism of Renewable Energy Power Supply Chain . . . 1341
Qichang Li, Guangxia Li, Wang Lin, Qiang Song, and Yundong Xiao

Design a Link Adaptive Management Algorithm for BLE Device . . . 1355
Xichao Wang, Guobin Su, Yanbiao Hao, and Lichuan Luo

An On-Line Identification Method for Short-Circuit Impedance of Transformer Winding Based on Sudden Short Circuit Test . . . 1365
Yan Wu, Lingyun Gu, Xue Zhang, and Jinyu Wang

Construction of Mathematical Model for Statistical Analysis of Continuous Range Information System . . . 1376
Ye Zheng

An Electricity-Stealing Identification Method Based on Outlier Algorithm . . . 1383
Ying Wang, Mingjiu Pan, Zhou Lan, Lei Wang, and Liying Sun

ANNs Combined with Genetic Algorithm Optimization for Symbiotic Medium of Two Oil-Degrading Bacteria Cycloclasticus Sp. and Alcanivorax Sp. . . . 1389
Zhang Shaojun, Wang Mingyu, Liu Bingbing, Pang Shouwen, and Zhang Chengda

Application Analysis of Beidou Satellite Navigation System in Marine Science . . . 1398
Zhiliang Fan

Research on Optimization and Innovation of ERP Financial Module . . . 1404
Gao Shuang and Yang Lingxiao

The Initial Application of Interactive Genetic Algorithm for Ceramic Modelling Design . . . 1409
Xing Xu, Jiantao Pi, Ao Xu, Kangle He, and Jing Zheng

Study of Properties of Solutions for a Viscoelastic Wave Equation System with Variable-Exponents . . . 1420
Yunzhu Gao, Qiu Meng, Haojie Guo, Jing Li, and Changling Xu

Functional Tolerance Optimization Design for Datum System . . . 1427
Bensheng Xu, Can Wang, and Hongli Chen


The Distribution of Financial Resources and the Difference of Regional Economic Growth—An Empirical Study Based on Spatial Spillover Effects in 31 Provinces in China . . . 1435
Zhaoyi Xu and Pengfei Chen

Digital Image Information Hiding Technology Based on LSB and DWT Algorithm . . . 1444
Limei Chen

A Review of the Application Model of WeChat in the Propaganda of Universities . . . 1454
Wenguang Liang, Yi Ye, Man Bao, and Rongrui Liu

Design of Deer Park Environment Detection System Based on a Zigbee . . . 1459
Mingtao Ma

Equal-Distance Coupling Method in OFDM System Under Frequency Selective Channel . . . 1465
Xiaorui Hu, Jun Ye, Songnong Li, Ling Feng, Yongliang Ji, Lin Gong, and Quan Zhou

Construction of Search Engine System Based on Multithread Distributed Web Crawler . . . 1470
Hongsheng Xu, Ganglong Fan, and Ke Li

Gene Regulatory Network Reconstruction from Yeast Expression Time Series . . . 1477
Ming Zheng and Mugui Zhuo

Analysis and Design of Key Parameters in Intelligent System of Lime Rotary Kiln . . . 1482
Tingzhong Wang and Lingli Zhu

Study on the Environmental Factors Affecting the NOx Results of Heavy Duty Vehicle PEMS Test . . . 1489
Huang Liyan, Liu Gang, Wang Detao, and Zhang Xian

Analysis and Expected Effect of the Phase III Fuel Consumption Standard for Light Duty Commercial Vehicles . . . 1497
Wang Zhao, Bao Xiang, and Zheng Tianlei

Construction of Driving Cycle Based on SOM Clustering Algorithm for Emission Prediction . . . 1508
Feng Li, Jihui Zhuang, Xiaoming Cheng, Jiaxing Wang, and Zhenzheng Yan

Fatigue Detection Based on Facial Features with CNN-HMM . . . 1516
Ting Yan, Changyuan Wang, and Hongbo Jia


Construction and Evaluation of a Blending Teaching Model of Linear Algebra and Probability Statistics in the “Internet +” Background by Using the Gradient Newton Combination Algorithm . . . 1525
Mandan Hou

Research on Algorithm of Transfer Learning Based on Sensor Location . . . 1533
Fan Yang and Yutai Rao

Study on the Application of Big Data Analysis in Monitoring Internet Rumors . . . 1537
Zijiang Zhu, Weihuang Dai, and Yi Hu

Eyes and Mouth States Detection for Drowsiness Determination . . . 1546
Yuexin Tian, Changyuan Wang, and Hongbo Jia

Feasibility and Risk Analysis of Data Security System Based on Power Architecture . . . 1555
Lei Yao

Application of the Extension Point and Plug-Ins Idea in Transmission Network Management System . . . 1561
Wu Ping

Reliability Modelling of a Typical Peripheral Component Interconnect (PCI) System with Dynamic Reliability Modelling Diagram . . . 1569
Guan Yi and Zhao Jiacong

Mid-long Term Load Forecasting Model Based on Principal Component Regression . . . 1577
Xia Xinmao, Zhang Zengqiang, Yuan Chenghao, and Yu Zhiyong

Research on Intrusion Detection Algorithm in Cloud Computing . . . 1584
Yupeng Sang

An Improved Medical Image Fusion Algorithm . . . 1593
Hui Li, Qiang Miao, and Hua Shen

Construction Study on the Evaluation Model of Business Environment of Cross-Border E-Commerce Enterprises . . . 1602
Zhao Yuxin, Zhuang Meinan, and Wang Yatong

Research on Information Construction of Enterprise Product Design and Manufacturing Platform . . . 1612
Yan Ningning


Planning and Simulation of Mobile Charging for Electric Vehicles Based on Ant Colony Algorithm . . . 1618
Yang Fengwei, Liu Jinxin, Zhang Zixian, and Chen Peng

Research on Urban Greenway Design Based on Big Data . . . 1625
Wenjun Wang

Design and Implementation of Art Rendering Model . . . 1630
Haichun Wei

LiveCom Instant Messaging Service Platform . . . 1638
Qiang Miao, Hui Li, and Xiaolong Song

Empirical Research on Brand Building of Commonweal Organization Based on Data Analysis . . . 1647
Shujing Gao

The Influence Study of Online Free Trail Activities on Product Sales Volume . . . 1654
Shushu Gu, Yun Lu, Xi Chen, Ranran Hua, and Han Zhang

Research on Behavior Oriented Scientific Research Credit Evaluation Method . . . 1663
Yan Zhao, Li Zhou, Bisong Liu, and Zhou Jiang

Realization and Improvement of Bayes Image Matting . . . 1672
Xu Qin and Ding Xinghao

Research on Image Processing Algorithm in Intelligent Vehicle System Based on Visual Navigation . . . 1678
Congbo Luo and Zihe Tong

A Study on Influencing Factors of International Applied Talent Cultivation in Higher Vocational Education . . . 1689
Jian Yong, Fanyi Kong, and Xiuping Sui

Research on the Platform Structure of Product Data Information Management System . . . 1695
Bingfeng Liu and Mingjuan Jiang

Research on a New Clustering Validity Index Based on Data Mining . . . 1700
Chaobo Zhang

Optimization of Ultrasonic Extraction for Citrus Polysaccharides by Response Surface Methodology . . . 1705
Yongguang Bi, Yanshan Lu, and Zhipeng Su

Research on the Transmission Mode of Asynchronous Link in a Telegraph Switching System . . . 1713
Weixiong Wang and Wenjun Su


Prediction of Smart Meter Life Assessment Based on BP Neural Network . . . 1718
Penghe Zhang, Baoliang Zhang, Yang Xue, Tao Wang, and Lingqing Guo

Numerical Research on the Effect of Base Cavity upon the Projectile Flow Field . . . 1728
Zhihong Ye, Zhongxian Li, and Haibo Lu

Recognition of Marine Oil Spill with BP Artificial Neural Networks . . . 1734
Zhang Shaojun, Wang Mingyu, Pang Shouwen, Lv Wenxiang, and Liu Bingbing

Jiugu Mining District Commercial WI-FI Construction Project . . . 1742
Qi Yunrui

Tolerance Information Representation and Reasoning Method Based on Ontology . . . 1751
Bensheng Xu, Weiqing Wang, Can Wang, and Meifa Huang

A Cooperative Control Scheme for Voltage Rise in Distribution Networks . . . 1759
Wu Liu, Jingfeng Zhou, Jian Wu, Weitang Fu, Mingliang Yang, Dawei Huang, and Chenglian Ma

A New OFDM System Based on Companding Transform Under Multipath Channel . . . 1770
Guangcheng Xie, Kaibo Luo, Yang Wang, Dexiang Yang, Jun Ye, and Quan Zhou

Construction of Gene Regulatory Networks Based on Ordered Conditional Mutual Information and Limited Parent Nodes . . . 1779
Ming Zheng and Mugui Zhuo

Innovation Research of Cross Border E-commerce Shopping Guide Platform Based on Big Data and Artificial Intelligence . . . 1785
Jiahua Li

Research on Interior Design of Smart Home . . . 1793
Hongxing Yi

Antioxidant Activities of Polysaccharides from Citrus Peel . . . 1801
Yanshan Lu, Zhipeng Su, and Yongguang Bi

Design of Rural Home Security System Based on the Technology of Multi-characters Fusion . . . 1808
Shuchun Chen, Peng Chen, Libo Tian, and Tao Wang


Research on Offline Transaction Model in Mobile Payment System . . . 1815
Songnong Li, Xiaorui Hu, Fengling, Yu Zhang, Wei Dong, Jun Ye, and Hongliang Sun

The In-Use Performance Ratio of China Real World Vehicles and the Verification of Denominator/Numerator Increment Activity Compliance . . . 1821
Qian Guogang, Xie Nan, and Yang Fan

Simulation and Analysis of Vehicle Performance Based on Different Cycle Conditions . . . 1829
Yangmin Wu, Zhien Liu, and Guangwei Xi

Comparison Research on Lightweight Plan of Automotive Door Based on Life Cycle Assessment (LCA) . . . 1841
Yusong He and Yiwen Xie

The Analysis of the New Energy Buses Operating Condition in the North China . . . 1851
Xiaoqin Yang, Lu Zhang, Yuze Zhang, and Qiang Lu

Experimental Study on Influence Factors of Emission and Energy Consumption for Plug-in Hybrid Electric Vehicle . . . 1863
Le Liu, Lihui Wang, and Chunbei Dai

Design of On-Line Measurement System for Fine Particle Number Concentration of Vehicle Exhaust Based on Diffusion Charge Theory . . . 1874
Zhouyang Cong, Tongzhu Yu, Huaqiao Gui, Yixin Yang, Jiaoshi Zhang, Yin Cheng, and Jianguo Liu

Application of On-Board-On-Line Surveillance in Environment Supervision . . . 1885
Wei Gu, Liqiao Li, Jingliang Wu, Hong Cai, Yun Li, and Gang Li

Research on Diagnostics Methods of CNG Engine After Treatment Catalyst . . . 1896
Tianchi Xie, Hongqi Liu, Haipeng Deng, Yupeng Wang, and Ying Gao


Constructing Prediction Model of Lung Cancer Treatment Survival . . . 1925
Hsiu-An Lee, Louis R. Chao, Hsiao-Hsien Rau, and Chien-Yeh Hsu

Research on the Combination of IoT and Assistive Technology Device—Prosthetic Damping Control as an Example . . . 1934
Yi-Shun Chou and Der-Fa Chen

A Study on the Demand of Latent Physical and Mental Disorders in Taipei City . . . 1939
Jui-hung Kao, Horng-Twu Liaw, and Po-Huan Hsiao

Real-Time Analyzing Driver’s Image for Driving Safety . . . 1947
Kuo-Feng Wu, Horng-Twu Liaw, and Shin-Wen Chang

A Web Accessibility Study in Mobile Phone for the Aging People with Degradation of Vision . . . 1952
Chi Nung Chu

The Comparison Between Online Social Data and Offline Crowd Data: An Example of Retail Stores . . . 1957
Jhu-Jyun Huang, Tai-Ta Kuo, Ping-I Chen, and Fu-Jheng Jheng

The Historical Review and Current Trends in Speech Synthesis by Bibliometric Approach . . . 1966
Guang-Feng Deng, Cheng-Hung Tsai, and Tsun Ku

A Study on Social Support, Participation Motivation and Learning Satisfaction of Senior Learners . . . 1979
Hsiang Huang, Zne-jung Lee, and Wei-san Su

A Health Information Exchange Based on Block Chain and Cryptography . . . 1985
Wei-Chen Wu and Yu-Chih Wei

The Relationship of Oral Hygiene Behavior and Knowledge . . . 1991
Cheng youeh Tsai, Frica Chai, Ming-Sung Hsu, and Wei-Ming Ou

Author Index . . . 1996

Research on Application’s Credibility Test Method and Calculation Method Based on Application Behavior Declaration

Xuejun Yu and Ran Xiao

Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
[email protected], [email protected]

Abstract. This article builds on the application behavior declaration (ABD), using “words match deeds” as the standard of judgment: the declared “words” are compared with the actual “deeds”, and a recessive parameter model is then used to calculate the credibility of the behavior. The paper presents the concept of the action path of application behavior, the definition of three types of program events, the recessive parameter model, and the method of credibility-degree calculation. It provides a new idea for research on credibility verification methods.

Keywords: Software credibility · Software behavior · Behavior declaration · Credibility verification · Credibility calculation

1 Introduction

Modern software architecture provides great computing ability but also brings huge challenges. The issues of safety, reliability, and credibility become increasingly prominent and lead to mismatches between system running results and user expectations, so that the system cannot run normally. In Ref. [1], the TCPA/TCG committed to the research of a new-generation trusted computing platform that will be more secure and more credible. In Ref. [2], Shen Changxiang proposed the trusted platform control module (TPCM), which integrates cryptography and control together and works as a trusted root with a self-control mechanism, whereas traditional trusted platforms work as passive devices. This study is based on the idea of “words match deeds” to determine whether software behavior is credible. In this idea, “words” refers to the expected behavior of the software, “deeds” refers to the actual behavior of the software, and “match” means that the “words” and the “deeds” are the same. The idea of “words match deeds” embodies the relationship between behavior and expectation and is in line with the current academic definition of credibility [3]. This idea is also the credibility standard on which this article is based.

Foundation item: This work is part of the important and special project “Research and Information System Construction of Evaluation System of Air Pollution Prevention Certification” (Item No.: 2017YFF0211801) sponsored by the Ministry of Science and Technology.

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1–10, 2019. https://doi.org/10.1007/978-981-13-3648-5_1


Behavior declaration refers to a collection in which application software describes its own behavior. This collection includes all the functional behaviors of the software, behaviors that may infringe users’ rights, behaviors that may affect the normal operation of applications, and behaviors that may cause unexpected changes in the hardware and software environment configuration [4]. This paper focuses on two aspects. The first is how to perceive the action path of application behavior while the software runs; the action path is defined as the collection of program events executed when the application is running. The second is a calculation method for the credibility-degree, which is defined as the similarity between the actual behavior of the software and the behavior in the behavior declaration.

2 Design of Credibility Verification Method

The method is based on “words match deeds”: it compares the actual behavior of the software with the behavior declaration to determine whether the behavior is credible. The keys to the method are the monitoring of the actual behavior of the software and the calculation of the credibility-degree.

2.1 Definition of ABD

This method uses JSON to represent software behavior because it is easy to read and write. Software behavior can be seen as the process of completing a series of program settings, which this method divides into three types of events. As shown in Fig. 1, events are first divided, by whether they are interactive, into click events and other events; the other events are then divided, by whether they are visible, into visible exposure events and invisible environment events.

Fig. 1. Event classification.

An “action” comprises several events of the three kinds, which can be expressed as the structure in Fig. 2.


Fig. 2. Action structure.
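As a rough illustration of this structure, an action and its events might be serialized to JSON as below. The field names (`actionId`, `events`, `eventId`, `type`) are hypothetical, since the paper does not give a concrete schema; only the three event types are taken from the text.

```python
import json

# Hypothetical ABD fragment: one declared "action" grouping events of the
# three types defined above (Click-Event, Exposure-Event, Environment-Event).
action = {
    "actionId": "play_music",
    "events": [
        {"eventId": "btn_play_click", "type": "Click-Event"},
        {"eventId": "cover_image_shown", "type": "Exposure-Event"},
        {"eventId": "speaker_invoked", "type": "Environment-Event"},
    ],
}

declaration = {"application": "demo-player", "actions": [action]}
text = json.dumps(declaration, indent=2)   # easy to read and write
parsed = json.loads(text)                  # and easy to parse back
print(len(parsed["actions"][0]["events"])) # number of declared events
```

JSON's round-trip property is what makes it convenient here: the same declaration file can be read by the test tool on the client and by the judging system on the server.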

2.2 Application Implements Verification Module

In the software development phase, the application implements trusted testing tools. Figure 3 shows the structure of a test tool based on Linux. It includes three parts: an interface module, a tracking module, and a verification module. The interface module encapsulates the lower layers and provides an entrance to the upper layers. The tracking module triggers the tracking points on the action path of the software and sends information to the server, which then makes credibility decisions on the events. The verification module checks whether the software and the module itself have been tampered with.

2.3 Tracking Module Records Actions of Application

Fig. 3. Tool structure.

A tracking point is a marker installed on the software’s action path. Whenever the software acts in accordance with the program settings, the tracking point is triggered and reports the program information to the judging system according to the rules. The purpose of the module is to record the tracking events caused by software behavior, so that the action path can be traced and the system can make a credibility decision. The behavior statement in Fig. 2 mentions three types of program events, namely Click-Event, Exposure-Event, and Environment-Event. The three event types represent different actions and have different triggering conditions, as shown in Fig. 4.

Fig. 4. Example of events.

Click-Event means that a UI element displayed to the user by the software application is triggered when the element is clickable and has a feedback action after clicking; for example, clicking a button triggers a Click-Event. Exposure-Event indicates that a UI element displayed by the software application appears in the interface; for example, an Exposure-Event is triggered when an image is displayed to the user. Environment-Event is triggered when an invisible action occurs in the software application; for example, calling the system speaker to play music triggers an Environment-Event. When a behavior occurs, the tracking point follows the workflow shown in Fig. 5. First, the tracking module requests the tracking data source for the details of the tracking point according to the event keyword (event id). Then, the tracking module determines the operation on the associated id according to this detailed information: if the event is a Click-Event, it combines the Unix timestamp with the event id to generate a unique id and carries it; if it is an Exposure-Event or an Environment-Event, the associated id is read and carried directly. Finally, the module sends the processed event information to the server.

Fig. 5. Workflow of tracking module.
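The per-event handling just described can be sketched as follows. This is an illustrative reading of Fig. 5, not the authors' implementation; the function name and record fields are assumptions.

```python
import time

def process_tracking_event(event_id, event_type, associated_id=None):
    """Illustrative tracking-point handler (names assumed, not the paper's API)."""
    if event_type == "click":
        # a Click-Event combines the Unix timestamp with its event id
        # to generate a new unique associated id and carries it
        associated_id = f"{int(time.time() * 1000)}-{event_id}"
    # Exposure- and Environment-Events carry the associated id they read
    return {"event_id": event_id, "type": event_type,
            "associated_id": associated_id}

click = process_tracking_event("1001", "click")
exposure = process_tracking_event("2001", "exposure", click["associated_id"])
env = process_tracking_event("3001", "environment", click["associated_id"])
```

The shared associated id is what later lets the server group these three events back into one action.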

2.4

Verify Program Analysis Tracking Information

As shown in Fig. 6, the server provides three interfaces to the client for receiving the three different types of tracking events and placing them in three event queues. Whenever a Click-Event is dequeued, an action has occurred, so the Action Queue creates an Action object to host it. The newly dequeued Click-Event is held by the Action object, and the associated id carried by the Click-Event is given to the Action object, which then waits for events with the same associated id to join it. In this way, multiple tracking events can be combined into a single action path representing one software behavior. Finally, the server reads the behavior declaration file and parses it into a Declaration Queue, which is a collection of declarations. After an Action is completed, the server dequeues it from the Action Queue and compares it with the Declaration Queue to determine whether the behavior is trustworthy.


Fig. 6. Workflow of server.
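The queue-merging step can be sketched as follows. This is an illustrative reading of Fig. 6, not the authors' code, and the dictionary-based action path is a simplification of the Action object.

```python
# Sketch: a dequeued Click-Event creates an Action; Exposure- and
# Environment-Events join the Action carrying the same associated id;
# the assembled path is then compared with the behavior declaration.
def assemble_actions(click_events, other_events):
    actions = {}  # associated id -> action path (ordered list of event ids)
    for ev in click_events:              # each Click-Event opens a new Action
        actions[ev["associated_id"]] = [ev["event_id"]]
    for ev in other_events:              # other events join by associated id
        if ev["associated_id"] in actions:
            actions[ev["associated_id"]].append(ev["event_id"])
    return actions

def is_credible(action_path, declared_path):
    # dominant indicator: the monitored path must match the declaration exactly
    return action_path == declared_path

clicks = [{"event_id": "1001", "associated_id": "t1-1001"}]
others = [{"event_id": "2001", "associated_id": "t1-1001"},
          {"event_id": "3001", "associated_id": "t1-1001"}]
actions = assemble_actions(clicks, others)
print(actions["t1-1001"])  # ['1001', '2001', '3001']
```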

3 Credibility Verification

3.1 Standard of Credibility Verification

The standard of credibility judgment is made from two aspects: the dominant indicator must be correct and the recessive indicator must be normal. A behavior is judged credible only if it first satisfies the dominant indicator and then keeps the recessive indicator within the normal range. The decision process is shown in Fig. 7.

Fig. 7. Credible indicator.

First, a correct dominant indicator means that the action paths match completely, that is, the triggering of the three kinds of tracking events in the Action is completely consistent with the behavior declaration. As an example, Fig. 8b shows the JSON representation of the behavior "click the button to play web music" in the behavior declaration. It can be seen that the Click-Event is the beginning of the action path in the Action of "words". According to the result-id in the Click-Event, events with event-id "2001", "2002" and "3001" are expected to be triggered (e.g., the dotted line on the right side of Fig. 8b). The dominant indicator compares the events in the Action monitored in the experiment with the Action events in the behavior declaration, as shown by the solid arrows in Fig. 8, checking whether the events of "words" and "deeds" are the same. If they match, the recessive indicators are checked next; if not, the behavior is judged not credible.

Fig. 8. Compare action of “words” and “deeds”.

Second, the recessive indicator refers to relevant system parameters in environmental events, such as memory usage and network usage. The purpose of the recessive indicator is to detect whether the program has made a fraudulent report, that is, whether a tracking point was not triggered on the action path or the actual behavior does not match the tracking point. To monitor such behavior, the client's test module automatically fills in the system parameters upon reporting, so that abnormalities can be detected. The dotted line in Fig. 8 represents the recessive indicator.

3.2 Recessive Parameter Model and Calculation

In environmental events, the system parameters of the same software behavior should theoretically be the same, but errors may occur due to the software's own program settings or the real-time system environment. The recessive indicator model is a method abstracted to parse the system parameters of Environment-Events, determine the error range, and calculate the credibility-degree of a test sample based on that range. The model uses the K-means algorithm and adds a rule for calculating the credibility-degree. The model is constructed as follows:

Step 1. Obtain the sample set of behavior parameters through multiple experiments on the sample behavior.
Step 2. Calculate the central particle O' of the sample set according to Euclidean distance.
Step 3. Taking O' as the center, remove the interference points from the sample set, and then recalculate the sample center particle O.
Step 4. Taking O as the center and the data point farthest from O as the model edge, obtain what we call the recessive indicator model.

In the experiment, we choose memory usage and network traffic as the eigenvalues of the recessive indicator model. The two-dimensional model is shown in Fig. 9a. Circle O is the data set of the sample behavior: most of the sample data lie around the center of the circle, and the data outside the circle are noise with larger errors.

Fig. 9. Two-dimensional model of recessive indicator.
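Steps 1–4 can be sketched numerically as follows. This is only our reading of the construction: we take the "central particle" to be the mean of the samples and treat the samples farthest from it as the interference points to remove, since the paper does not spell out the exact K-means-based rule.

```python
import math

def build_model(samples, trim=0.1):
    """Sketch of Steps 1-4; trim is an assumed fraction of points to discard."""
    def center(pts):
        return tuple(sum(c) / len(pts) for c in zip(*pts))
    o1 = center(samples)                                    # Step 2: O'
    keep = sorted(samples, key=lambda p: math.dist(p, o1))  # Step 3: drop the
    keep = keep[: max(1, int(len(keep) * (1 - trim)))]      # farthest points
    o = center(keep)                                        # recomputed O
    r = max(math.dist(p, o) for p in keep)                  # Step 4: radius
    return o, r

# samples are (memory usage, network traffic) pairs; the last one is noise
o, r = build_model([(10, 10), (11, 10), (10, 11), (9, 10), (30, 40)])
```

With the noisy point removed, the model's circle hugs the remaining cluster.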

We define that a point at the center has a credibility-degree of 100.00%; the credibility-degree decreases as the distance from the center of the circle increases; and a point on the edge has a credibility-degree of 50.00%. As shown in Fig. 9a, point a, inside the circle, is highly credible; point b lies outside the circle at a distance close to 2r from the center, so its credibility-degree is close to 0.00%. The calculation of the credibility-degree combines the dominant factor and the recessive indicator model. The dominant factor is 1 if the dominant indicator is satisfied and 0 if it is not, and thus plays a decisive role. Taking two-dimensional space as an example, the mathematical abstraction of the recessive indicator model is shown in Fig. 9b: the circle with center O and radius r covers the data range of the sample behavior. The credibility-degree at the center is P, and the credibility-degree at the edge is t(o, r). Let d(o, a) denote the Euclidean distance between points o and a, d(o, r) the radius of circle o, and f the dominant factor. Then the credibility-degree of point a, t(o, a), can be expressed as:

t(o, a) = f · ( P − (d(o, a) / d(o, r)) · t(o, r) )    (1)
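Under this reconstruction of Eq. (1), the boundary values quoted in the text (100.00% at the center, 50.00% on the edge, about 0.00% near 2r) check out numerically, with dominant factor f = 1:

```python
# Numeric check of Eq. (1): P is the center credibility (100%), t_edge the
# edge credibility (50%), r the model radius, d_oa the distance from center.
def credibility_degree(d_oa, r, f=1, P=1.0, t_edge=0.5):
    return f * (P - (d_oa / r) * t_edge)

assert credibility_degree(0.0, 2.0) == 1.0   # center: 100.00%
assert credibility_degree(2.0, 2.0) == 0.5   # edge: 50.00%
assert credibility_degree(4.0, 2.0) == 0.0   # distance 2r: 0.00%
```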

3.3

ð1Þ

Credibility Judgment of Similar Behavior

If the same Environment-Event is triggered in the action paths of two software behaviors, the two behaviors are similar. Based on the similarity of the system parameters of the same environmental events, the recessive indicator model is used to make credibility judgments on similar behaviors. The purpose is to determine whether there is any suspicious action in the behavior that causes the parameters to be abnormal. As shown in Fig. 10, the data set of A overlaps most of the data set of B, while the data set of C overlaps only at the edge. It can therefore be determined that behavior C has a very high probability of containing an unreported action.

Fig. 10. Judgment of similar behavior.

The credibility-degree calculation of similar behavior actually compares the degree of coincidence between set A and set B. Based on the method of calculating the credibility-degree of a single point with respect to a set, we convert the problem into the average of the credibility-degrees of every point in B with respect to A. This average is the credibility-degree of the similar behavior, and can be expressed as:

T(A, B) = (1/N) · Σ_{n=1}^{N} t(A, B_n)    (2)
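A numerical sketch of Eq. (2), reusing the single-point degree of Eq. (1) with dominant factor 1; flooring the degree at zero for points beyond 2r is our assumption, not stated in the paper:

```python
import math

# The credibility-degree of behavior B relative to A is the average
# single-point degree t(A, b) of B's sample points against A's recessive
# indicator model (center o, radius r).
def similar_behavior_credibility(points_b, o, r, P=1.0, t_edge=0.5):
    def t(point):
        d = math.dist(o, point)                 # Euclidean distance to center
        return max(0.0, P - (d / r) * t_edge)   # single-point degree, >= 0
    return sum(t(b) for b in points_b) / len(points_b)

# one point at A's center (degree 1.0) and one on its edge (degree 0.5)
print(similar_behavior_credibility([(0, 0), (2, 0)], (0, 0), 2.0))  # 0.75
```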

4 Conclusion

In this paper, the credibility verification method solves three problems. First, a tracking module is used to obtain the action path of an application while it is running. Second, the tracking information is reassembled into an action and compared with the behavior declaration. Third, a calculation method for the credibility-degree is put forward. This provides a new idea for credibility verification based on ABD.

References
1. Trusted Computing Group: TCG Architecture Overview Specification revision [EB/OL]. [2009-03-08]. http://www.trustedcomputinggroup.org
2. Shen, C.X., Zhang, H.G., Wang, H.M.: Research and development of trusted computing. Chin. Sci.: Inf. Sci. 40(2), 139–156 (2010)
3. Yu, X.J., Jiang, G.: Research on application's credibility verification based on ABD. Wuhan Univ. J. Nat. Sci. 21(1), 063–068 (2016)
4. Su, D.: A software behavior dynamic trusted research method and its trusted elements. Netw. Secur. Technol. Appl. 4, 14–17 (2013)

The Selection of DNA Aptamers Against the Fc Region of Human IgG

Wen-Pin Hu1,2(✉), Hui-Ting Lin3, Wen-Yu Su1, Rouh-Mei Hu1, Wei Yang4, Wen-Yih Chen5, and Jeffrey J. P. Tsai1

1 Department of Bioinformatics and Medical Engineering, Asia University, Taichung City 41354, Taiwan
{wenpinhu,wenyusu,rmhu,president}@asia.edu.tw
2 Department of Medical Laboratory Science and Biotechnology, China Medical University, Taichung City 40402, Taiwan
3 Department of Physical Therapy, I-Shou University, Kaohsiung City 82445, Taiwan
[email protected]
4 Institute of Chemical Engineering, National Taipei University of Technology, Taipei City 10608, Taiwan
[email protected]
5 Department of Chemical and Materials Engineering, National Central University, Jhong-Li 32001, Taiwan
[email protected]

Abstract. Aptamers can bind to various kinds of target molecules, and they therefore have high potential for use in therapeutic, diagnostic, biosensing and purification applications, among others. An aptamer that can specifically bind to the Fc region of an antibody is very attractive for biosensor and diagnostic applications. Herein, an RNA aptamer specific to the Fc region of human IgG was converted to a DNA sequence and used as the template for producing mutated sequences for selection. Computational approaches were applied in the selection process, and the evaluation results were then verified by an ELISA assay. Four new DNA aptamers were discovered, and their binding ability to the Fc region of human IgG was confirmed. According to the experimental results, these DNA aptamers could not bind as large amounts of antibody as the original RNA sequence. However, they still have potential for immobilizing human IgG in diagnostic or biosensor applications.

Keywords: Aptamer · Fc fragment · IgG · Molecular simulation · Mutated sequence

1 Introduction

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 11–19, 2019. https://doi.org/10.1007/978-981-13-3648-5_2

Aptamers are short nucleic acid sequences, usually 12–80 nucleotides in length. They are a relatively new class of recognition elements with highly specific binding ability to targets, with functions comparable to those of antibodies. The selection technique called systematic evolution of ligands by


exponential enrichment (SELEX) has been used to efficiently discover new DNA and RNA aptamers. Aptamers are currently utilized in both therapeutic and diagnostic applications. At present, the aptamers reported in the literature are both RNAs and DNAs, but most are RNA molecules. Aptamers can react with a variety of target molecules, such as metal ions, small molecules like amino acids, short peptides, macromolecules (proteins), and even pathogenic microorganisms. Scientists have also found RNA aptamers that can specifically bind to the Fc region of human immunoglobulin G (IgG) [1]. For biosensors, the surface chemical modifications and the orientations of immobilized antibodies are critical to the sensing results. Appropriate strategies for surface modification and antibody immobilization can enhance the sensitivity and specificity of biosensing. To immobilize antibodies with a well-ordered orientation on a sensing chip, protein A and protein G are the most commonly used biomolecules for binding IgG antibodies, through their specific affinity for the Fc fragments of IgG molecules. Such binding orients the Fab fragments of the immobilized IgG molecules towards the buffer containing the analytes and improves the binding efficiency between antibodies and target molecules. Beyond biosensors, aptamers that specifically recognize the Fc region of IgG are also of great interest for diagnostic applications and antibody purification. Some researchers have discovered RNA aptamers that recognize the Fc region of mouse IgG [2], and another research group selected DNA aptamers against the Fc region of mouse IgG [3]. Compared with RNA aptamers, DNA aptamers are much more stable, cheaper, and easier to generate and synthesize [4]. Herein, the aim of this study is to find new DNA aptamers that can bind to the Fc region of human IgG.
The RNA aptamer reported by Miyakawa's group [1] was selected and converted into the original source for generating new DNA aptamer sequences by a set of mutation rules. In addition, we used molecular simulations to evaluate the binding ability of each aptamer to human IgG. We also analyzed the binding interface between the aptamer and IgG to examine whether the important amino acids on the antibody that constitute the aptamer binding site were involved in the interaction. Using this selection strategy, we found four new DNA aptamers with good binding capability to IgG and verified the real performance of each screened aptamer in an enzyme-linked immunosorbent assay (ELISA) test.

2 Materials and Methods

2.1 Materials and Reagents

Human IgG, streptavidin high-capacity coated plates (96 wells), TWEEN® 20, potassium dihydrogen phosphate (KH2PO4) and sodium chloride were purchased from Sigma-Aldrich (St. Louis, Missouri, USA). DNA and RNA aptamers were synthesized by MDBio, Inc. (Taipei City, Taiwan), and the 5′ end of each sequence was modified with biotin. Potassium chloride and sodium phosphate dibasic (Na2HPO4) were obtained from J. T. Baker (Center Valley, Pennsylvania, USA). A horseradish peroxidase (HRP) conjugation kit (1 mg), TMB ELISA substrate and 450 nm stop solution for TMB substrate were acquired from abcam plc.


(Cambridge, UK). We prepared 1X PBS buffer and adjusted its pH to 7.4. The composition of 1X PBS buffer is: 10 mM sodium phosphate dibasic, 137 mM sodium chloride, 2 mM potassium dihydrogen phosphate and 2.7 mM potassium chloride. All chemicals used in this study were reagent grade.

2.2 Parent Sequence and Mutation Rules for Child Sequences

The original sequence for generating new mutated sequences is Apt 8, reported by Miyakawa et al. [1], an RNA sequence 23 nt in length with high affinity to IgG1. The sequence of Apt 8 is 5′-GGAGGUGCUCCGAAAGGAACUCC-3′. Figure 1 shows the secondary structure of the Apt 8 aptamer predicted by the RNAfold web server [5]; it contains two stems and two loops. To find new DNA aptamers that can bind to the Fc region of human IgG, we replaced the uracil in the Apt 8 sequence with thymine to obtain a DNA sequence as the parent sequence, which we name Apt 8_DNA. However, the substitution of thymine for uracil does not guarantee that the DNA aptamer retains the high affinity to IgG1. To clarify this issue, we first used computational methods to compare the binding ability of Apt 8 and Apt 8_DNA to IgG. After that, we adopted three rules to generate a pool of new mutated sequences.

Fig. 1. The secondary structure of Apt 8 aptamer predicted by the RNAfold web server [5].

Based on the sequence of Apt 8_DNA, 207 mutated sequences were produced manually by the defined rules. The three rules for the generation of mutated sequences are as follows:

1. Paired mutations at positions 1–4 and 20–23 in stem 1 (12 mutated aptamer sequences).
2. Single-point mutations at positions 5–8 and 19 in loop 1 (15 mutated aptamer sequences).
3. Two-point mutations at positions 5–8, 19 and 12–15 in loop 1 and loop 2 (180 mutated aptamer sequences).
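As an illustration of rules 2 and 3 (rule 1's paired stem mutations are omitted for brevity), the sketch below regenerates the single- and two-point mutants and reproduces the stated counts of 15 and 180. This is our own code, not the authors' procedure, which was carried out manually.

```python
# Parent Apt 8_DNA: Apt 8 with every U replaced by T; positions are 1-based.
PARENT = "GGAGGTGCTCCGAAAGGAACTCC"
BASES = "ACGT"

def point_mutants(seq, positions):
    """All single-point mutants at the given 1-based positions."""
    out = []
    for p in positions:
        for b in BASES:
            if b != seq[p - 1]:
                out.append(seq[:p - 1] + b + seq[p:])
    return out

def two_point_mutants(seq, group1, group2):
    """All mutants with one substitution in each (disjoint) position group."""
    out = []
    for m in point_mutants(seq, group1):
        out.extend(point_mutants(m, group2))
    return out

loop1 = [5, 6, 7, 8, 19]
loop2 = [12, 13, 14, 15]

rule2 = point_mutants(PARENT, loop1)             # 5 positions x 3 bases = 15
rule3 = two_point_mutants(PARENT, loop1, loop2)  # 15 x (4 x 3) = 180
print(len(rule2), len(rule3))  # 15 180
```

Together with the 12 paired stem mutants of rule 1, this gives the 207 sequences stated above.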

2.3 Computational Assay

After producing the 207 mutated sequences, we used the RNAfold web server to analyze their secondary structures. According to the analysis results presented in dot-bracket notation (DBN), all of the mutated sequences have clear secondary structures. The crystal structure of a human IgG-aptamer complex (PDB code: 3AGV), which contains the Fc fragment of human IgG1 and an anti-Fc RNA aptamer, was downloaded from the Protein Data Bank (PDB). We used Accelrys Discovery Studio (DS) 4.1 to remove the anti-Fc RNA aptamer from this file and saved the structure of the Fc fragment of human IgG1 for subsequent computational simulations. To obtain the structural model of each mutated aptamer sequence, we changed the letter code T in the DNA sequence to U and then used the RNAComposer web server to generate a 3D model. Afterwards, the atoms of the pyrimidine bases in the 3D RNA model were edited in DS 4.1 to convert uracil back to thymine. Such changes might slightly affect the overall structure, so the energy-minimization function in DS 4.1 was used to adjust and obtain the final 3D structures of the DNA aptamers. The ZDOCK simulation function in DS was applied to evaluate the interactions between the aptamers and human IgG; it has previously been used to study the binding capabilities of aptamers to their target proteins [6]. ZDOCK uses fast Fourier correlation techniques to evaluate the shape complementarity between the protein and the aptamer molecule. In addition to the ZDOCK scoring function, the ZRANK scoring function was used to re-rank the ZDOCK results for better prediction outcomes [7, 8]; ZRANK considers van der Waals attractive and repulsive energies, short- and long-range repulsive and attractive energies, and desolvation. The binding interaction between Apt 8 and human IgG has been studied thoroughly by Miyakawa et al. [1]: the amino acids of the IgG Fc fragment involved in binding the aptamer fall into three clusters (L314–G316, L338–A344 and V397–L406) [1]. In a different study, reported by Nomura et al. [9], TYR373 and PRO374 were two further amino acids found to be involved in the interaction between the aptamer and the IgG Fc fragment. Based on this valuable information, we further checked the amino acids involved in the aptamer-protein binding interface of the best docking pose of each aptamer after the simulations. To obtain more accurate predictions, the presence of the important amino acids reported in these previous studies in the binding interface of the best docking pose was used as an additional index to evaluate the docking results. The hardware used to run the simulations is an HP server comprising two Intel® Xeon processors (each with 8 computing cores), 44 GB of memory, and a 64-bit Windows Server 2008 operating system.

2.4 ELISA Test

Human IgG was dissolved in 1X PBS to a final concentration of 1 mg/ml. To prepare the HRP-conjugated antibody, 3 ml of IgG solution and the modifier in the conjugation kit were mixed together. The antibody sample was then pipetted directly onto the lyophilized HRP mix, and the mixture was withdrawn and re-dispensed once or twice with a pipette. The vial was placed in the dark at room temperature for 3 h. After the incubation, quencher reagent was added to the vial and mixed gently. After 30 min, the conjugates were diluted 100 times with 1X PBS. The solution containing the HRP-conjugated antibody was subsequently used in the ELISA test to recognize the aptamers immobilized on the well surface of the ELISA plate. Each kind of biotin-labelled aptamer was dissolved in PBS to a final concentration of 1 µM. One hundred µL of biotin-labelled aptamer was added to each well, and the plate was covered with a plate sealer and incubated for 1 h and 30 min at 37 °C. Afterwards, wash buffer (0.05% Tween 20 in PBS) was used to fill each well (approximately 350 µL) and then completely removed, and this wash process was repeated three times. After the wash, the prepared HRP-conjugated antibody solution was added to each well in a volume of 100 µL, and the plate was incubated for 1 h at 37 °C. The wash process was then repeated three times. Once the wash was completed, we added 90 µL of substrate solution to each well, covered the plate with a new plate sealer, and incubated it at 37 °C. After about 15 min, 50 µL of stop solution was added to each well and the color turned yellow instantly. The ELISA plate was then measured with a micro-plate reader set to 450 nm to determine the optical density of each well.

3 Results and Discussions

3.1 Molecular Simulation Results for Apt 8 and Apt 8_DNA

The sequence of Apt 8_DNA was obtained directly by replacing the uracil bases in the Apt 8 sequence with thymine bases, so we had to check whether the binding ability of Apt 8_DNA to human IgG differed significantly from that of Apt 8. Table 1 shows the simulation results and the important amino acids involved in the binding interface between each aptamer and IgG. The important amino acids listed in Table 1 include the three amino acid clusters for binding the aptamer reported by Miyakawa et al. [1]. Comparing the simulation results of the two aptamers, Apt 8_DNA has one fewer interaction with important amino acids. Although the binding ability of Apt 8_DNA to IgG is predicted to be slightly inferior to that of Apt 8 based on the ZRANK score, Apt 8_DNA should still retain good binding ability to human IgG. Therefore, Apt 8_DNA was used as the template sequence to generate mutated sequences by the rules described in Sect. 2.2.

3.2 The Candidate Sequences Predicted by Molecular Simulations

In order to find aptamers with good binding ability to human IgG, we used two criteria to select candidate sequences from the simulation results: the ZRANK score of the aptamer must be smaller than −90, and the percentage of amino acids highly important to aptamer binding among all amino acids in the binding interface must be greater than 25%. Take the aptamer named R3_6_12#7 as an example: 11 important amino acids mentioned in the two studies above are involved in the binding interface between R3_6_12#7 and human IgG. The total number of amino acids in the binding interface is 39, so the percentage of important amino acids in the binding interface is 28.2% (11/39).

Table 1. Scores and amino acids in the binding interface for the best docking poses of two aptamers.

Item                                                | Apt 8    | Apt 8_DNA
ZDOCK score                                         | 56.97    | 57.14
ZRANK score                                         | −90.655  | −88.561
Total number of amino acids in the binding interface | 39       | 45
Important amino acids in the binding interface       | TRP313, ALA339, VAL397, LEU398, ASP399, PHE404, PHE405, LEU406 | LYS338, VAL397, LEU398, ASP399, PHE404, PHE405, LEU406

All amino acids in the binding interface of R3_6_12#7 are listed below; the important amino acids for binding with the aptamer are marked in bold in the original.

R3_6_12#7: SER239, PHE241, LEU242, PHE243, PRO244, PRO245, LYS246, PRO247, LYS248, VAL264, TRP313, GLY316, GLU318, LYS320, ILE332, GLU333, LYS334, THR335, ILE336, SER337, LYS338, ALA339, LYS340, GLY341, TYR373, PRO374, SER375, ASP376, ILE377, ALA378, VAL379, THR393, THR394, PRO395, PRO396, VAL397, LEU398, PHE404, LEU406

By using these criteria, five aptamers (R3_5_12#8, R3_6_12#7, R3_6_15#6, R3_7_15#1 and R3_19_13#7) were selected from the 207 mutated sequences. "R3" in the name of an aptamer means it was produced by the third mutation rule. The second and third numbers in the name indicate the mutation positions in the sequence, and the number after the # symbol represents the order of that combination. The sequences of the five selected aptamers are:

– R3_5_12#8: 5′-GGAGtTGCTCCAAAAGGAACTCC-3′
– R3_6_12#7: 5′-GGAGGCGCTCCCAAAGGAACTCC-3′
– R3_6_15#6: 5′-GGAGGGGCTCCGAACGGAACTCC-3′
– R3_7_15#1: 5′-GGAGGTCCTCCGAATGGAACTCC-3′
– R3_19_13#7: 5′-GGAGGTGCTCCGTAAGGACCTCC-3′
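As a worked check of the two criteria, the filter below re-derives the candidate list from the ZRANK scores and interface counts reported in Tables 1 and 2. This is illustrative code, not the authors' pipeline.

```python
# name: (ZRANK score, important residues, total interface residues)
candidates = {
    "R3_5_12#8":  (-96.026, 14, 37),
    "R3_6_12#7":  (-92.591, 11, 39),
    "R3_6_15#6":  (-91.891, 12, 29),
    "R3_7_15#1":  (-92.562, 16, 46),
    "R3_19_13#7": (-93.255, 11, 34),
    "Apt 8_DNA":  (-88.561,  7, 45),  # parent: fails the ZRANK criterion
}

def passes(zrank, important, total):
    # criterion 1: ZRANK below -90; criterion 2: >25% important residues
    return zrank < -90 and important / total > 0.25

selected = [name for name, v in candidates.items() if passes(*v)]
print(selected)  # the five mutated aptamers; Apt 8_DNA is filtered out
```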

Table 2 shows the scores obtained from the simulations and the percentage of important amino acids in the binding interface for each aptamer.

Table 2. The simulation results of the selected aptamers.

Aptamer     | ZDOCK | ZRANK   | Important amino acids in the binding interface (important/all)
R3_5_12#8   | 51.28 | −96.026 | 37.8% (14/37)
R3_6_12#7   | 54.7  | −92.591 | 28.2% (11/39)
R3_6_15#6   | 44.6  | −91.891 | 41.4% (12/29)
R3_7_15#1   | 49.32 | −92.562 | 34.8% (16/46)
R3_19_13#7  | 41.55 | −93.255 | 32.4% (11/34)

R3_6_12#7 has the smallest percentage of important amino acids in the binding interface; hence we excluded this aptamer from the ELISA assay.

3.3 Results of ELISA Assay

R3_5_12#8, R3_6_15#6, R3_7_15#1 and R3_19_13#7 were selected and applied in the ELISA assay, and Apt 8 reported by Miyakawa et al. [1] was also tested as a comparison. Figure 2 shows the results obtained from the analysis of the five aptamers binding to human IgG in the ELISA assay. Among these five aptamers, Apt 8 has the largest optical density. It is worth noting that the other four DNA aptamers also show good binding responses to human IgG, although the responses are inferior to that of Apt 8. In addition, the four selected DNA aptamers do not exhibit the stronger binding ability to human IgG predicted by the ZRANK scores obtained from the simulations. The average optical densities for Apt 8, R3_5_12#8, R3_6_15#6, R3_7_15#1 and R3_19_13#7 are 2.75, 2.45, 2.46, 2.33 and 2.45, respectively. Nevertheless, we think the four selected DNA aptamers still have application value, especially the three with similar average optical densities (R3_5_12#8, R3_6_15#6 and R3_19_13#7), because these DNA aptamers are suitable for immobilizing human IgG on a solid surface (such as a chip surface) for detecting a target molecule. Compared with Apt 8, these DNA aptamers may immobilize smaller amounts of human IgG on solid surfaces, and a smaller amount of immobilized human IgG can reduce the final measured signal produced by the biorecognition event. Even so, these DNA aptamers still offer several advantages: (a) high temperature stability; (b) cost-effective production; (c) the possibility of chip regeneration. From the ELISA assay, we confirm that the selected DNA aptamers indeed bind human IgG, validating the feasibility of the whole DNA aptamer selection process.

4 Conclusions

By using computational approaches, several DNA aptamers were evaluated to have binding ability to the Fc fragment of human IgG. After applying the selection criteria, our experiments demonstrated that four new aptamers could bind to the Fc fragment of human IgG. We also noticed that the simulation results did not fully match the experimental outcomes: for example, R3_5_12#8, with a ZRANK score of −96.026, did not show better experimental results than R3_6_15#6, let alone Apt 8. We think computational approaches can help the selection of aptamers, but experiments are essential for validating the computational results. The four aptamers reported in this study are suitable for binding human IgG and may be extended to the diagnosis of human diseases.


Fig. 2. Comparison of optical density (OD) results for the analysis of five aptamers binding to human IgG in the ELISA assay. The standard deviations for Apt 8, R3_5_12#8, R3_6_15#6, R3_7_15#1 and R3_19_13#7 are 0.23, 0.14, 0.11, 0.31 and 0.16, respectively.

Acknowledgements. The authors gratefully acknowledge the financial support provided by the Ministry of Science and Technology, Taiwan, under contract number MOST 105-2221-E468-017.

References
1. Miyakawa, S., Nomura, Y., Sakamoto, T., Yamaguchi, Y., Kato, K., Yamazaki, S., Nakamura, Y.: Structural and molecular basis for hyperspecificity of RNA aptamer to human immunoglobulin G. RNA 14, 1154–1163 (2008)
2. Sakai, N., Masuda, H., Akitomi, J., Yagi, H., Yoshida, Y., Horii, K., Furuichi, M., Waga, I.: RNA aptamers specifically interact with the Fc region of mouse immunoglobulin G. Nucleic Acids Symp. Ser. (Oxf.), 487–488 (2008)
3. Ma, J., Wang, M.G., Mao, A.H., Zeng, J.Y., Liu, Y.Q., Wang, X.Q., Ma, J., Tian, Y.J., Ma, N., Yang, N., Wang, L., Liao, S.Q.: Target replacement strategy for selection of DNA aptamers against the Fc region of mouse IgG. Genet. Mol. Res. 12, 1399–1410 (2013)
4. Zhu, Q., Liu, G., Kai, M.: DNA aptamers in the diagnosis and treatment of human diseases. Molecules 20, 20979–20997 (2015). https://doi.org/10.3390/molecules201219739
5. RNAfold web server. Available online: http://rna.tbi.univie.ac.at//cgi-bin/RNAWebSuite/RNAfold.cgi. Accessed 28 Mar 2018


6. Kumar, J.V., Chen, W.-Y., Tsai, J.J.P., Hu, W.-P.: Molecular simulation methods for selecting thrombin-binding aptamers. In: Park, J., Barolli, L., Xhafa, F., Jeong, H.Y. (eds.) Information Technology Convergence. Lecture Notes in Electrical Engineering, vol. 253, pp. 743–949 (2013)
7. Hsieh, P.-C., Lin, H.-T., Chen, W.-Y., Tsai, J.J.P., Hu, W.-P.: The combination of computational and biosensing technologies for selecting aptamer against prostate specific antigen. Biomed Res. Int. 2017, Article ID 5041683 (2017)
8. Hu, W.-P., Kumar, J.V., Huang, C.-J., Chen, W.-Y.: Computational selection of RNA aptamer against angiopoietin-2 and experimental evaluation. Biomed Res. Int. 2015, Article ID 658712 (2015)
9. Nomura, Y., Sugiyama, S., Sakamoto, T., Miyakawa, S., Adachi, H., Takano, K., Murakami, S., Inoue, T., Mori, Y., Nakamura, Y., Matsumura, H.: Conformational plasticity of RNA for target recognition as revealed by the 2.15 Å crystal structure of a human IgG-aptamer complex. Nucleic Acids Res. 38, 7822–7829 (2010)

Application of Deep Reinforcement Learning in Beam Offset Calibration of MEBT at C-ADS Injector-II

Jinqiang Wang1, Xuhui Yang1, Binbin Yong1, Qingguo Zhou1, Yuan He2, Lihui Luo3, and Rui Zhou1(B)

1 School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
{jqwang16,yangxh16,yongbb14,zhouqg,zr}@lzu.edu.cn
2 Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou, China
3 Cold and Arid Regions Environmental and Engineering Research Institute, Chinese Academy of Sciences, Lanzhou, Gansu, China

Abstract. As a "Strategic Priority Research Program", the high-current superconducting proton driver linac is becoming the equipment of choice for nuclear waste disposal and cancer treatment: it can accelerate protons to high energies for the transmutation of nuclear waste and for damaging cancer cells. However, deflection of the beam in the MEBT section causes beam loss, which results in a low-quality beam. The root cause is the lack of a method that can dynamically adjust its own calibration strategy based on the beam position information. This paper uses the Asynchronous Advantage Actor-Critic (A3C) method, based on Deep Reinforcement Learning (DRL), to find an optimal control strategy in a changeable environment, solving a calibration problem that the existing traditional physical and numerical methods cannot handle. The final calibration by our method reduces the mean beam offset to 0.4510 mm, which shows that the proposed method is very competitive for linac beam offset calibration.

Keywords: MEBT · Beam offset calibration · Deep reinforcement learning · A3C

1 Introduction

With the development of the economy and technology, the disposal of nuclear waste and the treatment of cancer have become two major issues of contemporary society. However, the half-life of nuclear waste elements is close to a century, so such waste cannot decay away in a short time, which raises the incidence of diseases such as cancer [1]. Furthermore, the rapid growth of cancer not only endangers individuals but also increases the social burden [2].

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 20–28, 2019. https://doi.org/10.1007/978-981-13-3648-5_3


Nowadays, a proton linac accelerator, as the main part of the China Accelerator-Driven System (C-ADS) Injector-II project developed by the Institute of Modern Physics, Chinese Academy of Sciences (IMP-CAS), is a piece of application equipment that can harness high-energy particles for the transmutation of nuclear waste elements [3] and for radiotherapy to destroy or damage cancer cells [4]. It is an exceptionally complex system of seven parts, each consisting of thousands of accessories. The structure and functionality of the linac have now been completed, and a high-quality beam of up to 25 MeV energy is produced [5]. However, the excursion of the beam is a very serious and urgent problem [6], which limits the energy level the proton beam can reach. Many results have been achieved in this field, most of them based on traditional methods using physical and numerical calculations. The transfer matrix algorithm [7] processes a quadrupole magnet, a calibration coil, and a BPM monitor through complex matrix transformations; it successfully solves the problem of simultaneously calibrating multiple BPM values and achieves a globally optimal result. On this basis, Husain et al. [8] adopted a global COD correction method based on singular value decomposition (SVD) of the orbital response matrix, which achieved good results on beam calibration of closed trajectories in the vertical and horizontal directions. Beyond that, Beam-Based Alignment (BBA) [9] is also a classic and widely used method, owing to its completely automated data processing; Jena et al. [10] used it to optimize the horizontal and vertical offsets down to 1.3 mm rms and 0.43 mm rms. In addition, Fourier-harmonic iteration [11] can also be used for beam calibration in accelerators. Finally, the "null comparison method" [12] is a typical approach to linac beam behavior.
For a complicated proton accelerator system, a method that dynamically learns and calibrates the beam offset is undoubtedly a preeminent choice. Reinforcement learning, rooted in behavioral psychology, can handle the interaction between the agent and the environment, and it has proved its superiority in control in Atari games [13] and in intelligent control [14]. It has very good application scenarios for optimal control in industrial manufacturing. In this paper, we use the Asynchronous Advantage Actor-Critic (A3C) approach [15], which has sufficient ability to handle continuous control processes and shows a clear advantage for beam offset calibration in a linac.

2 Physics Structure

C-ADS Injector II is a sophisticated piece of equipment consisting of an Electron Cyclotron Resonance (ECR) source, a Low Energy Beam Transport line (LEBT), a Radio Frequency Quadrupole (RFQ), a Medium Energy Beam Transport line (MEBT), a set of superconducting cavities and a High Energy Beam Transport line (HEBT) [16]. As the dominant part of this study, the MEBT undertakes the beam matching in the vertical and horizontal directions between the RFQ and the superconducting cavities. It consists of seven quadrupoles (Q1-Q7), two bunchers (B1, B2), one scraper and one Faraday cup (FC2), as shown in Fig. 1. A calibration coil is integrated


Fig. 1. The MEBT layout structure. In the upper left corner of the figure is a cutaway view of a quadrupole whose coil is the calibration coil used in this paper. The red indicates the vertical control voltage and the green shows the horizontal voltage. The upper-right corner coordinate structure represents the calculation method of the calibrated beam offset.

into each quadrupole to calibrate beam offsets by controlling its voltage in both the horizontal and vertical directions. In addition, BPM detectors, mounted inside the quadrupole magnets Q1, Q4, Q5 and Q7, are mainly used to measure the beam offsets in the horizontal and vertical directions [17]. The calibration coils in the seven quadrupoles are divided into focusing coils and defocusing coils: the coils inside Q1, Q3, Q5 and Q7 are focusing coils, and those inside Q2, Q4 and Q6 are defocusing coils. The beam offset calibration is controlled by the calibration coil voltages, which are usually set group-wise in the sequence (c1, c2, c3), (c4), (c5, c6, c7) in the horizontal and vertical directions, respectively.
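The coil and BPM layout described above can be captured in a small data structure. This is a sketch only; the identifiers are ours, since the paper publishes no code.

```python
# Data-structure sketch of the MEBT calibration-coil layout described
# in the text. Identifiers are illustrative; the paper publishes no code.

# Coil type per quadrupole: Q1, Q3, Q5, Q7 focusing; Q2, Q4, Q6 defocusing.
COIL_TYPE = {f"Q{i}": ("focusing" if i % 2 == 1 else "defocusing")
             for i in range(1, 8)}

# Voltages are set group-wise, in both transverse planes.
COIL_GROUPS = [("c1", "c2", "c3"), ("c4",), ("c5", "c6", "c7")]

# BPM detectors sit inside quadrupoles Q1, Q4, Q5 and Q7.
BPM_HOSTS = ("Q1", "Q4", "Q5", "Q7")

def set_group_voltages(group_index, horizontal, vertical):
    """Return a {coil: (h_volt, v_volt)} setting for one coil group."""
    return {c: (horizontal, vertical) for c in COIL_GROUPS[group_index]}
```

The three-group structure mirrors the (c1, c2, c3), (c4), (c5, c6, c7) calibration sequence used by the sub-models of Sect. 4.2.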

3 Deep Reinforcement Learning Model

The sequence of voltages applied by the calibration coils determines the degree of beam offset calibration. The calibration process is controlled by inputting voltage values through the system's real-time feedback. This paper uses the Asynchronous Advantage Actor-Critic (A3C) algorithm, a reinforcement learning method in which the agent learns by continuously interacting with the environment to improve its own behavior. It can overcome the defects of common physical methods, which cannot compute the complex magnetic field changes between magnets. Generally, we define a finite state set $S$ and a finite action set $A$ in the sequential decision process. The essence of reinforcement learning is a sequential decision problem, and its ultimate goal is to learn an optimal policy that selects the action $a_t \in A$ in the corresponding state $s_t \in S$ to obtain a reward $r_t$. The maximized discounted cumulative reward is defined as $R_t = \sum_{t=0}^{T} \gamma^{t} r_{t+1}$ with discount factor $\gamma \in (0, 1]$. Meanwhile, the policy is defined as $\pi(s)$, which represents the probability distribution of the agent's mapping on the random environmental states $S$. Based on the strategy $\pi$, we define the action-value


function $Q^{\pi}(s, a) = \mathbb{E}_{\pi}\left[\sum_{t=0}^{T} \gamma^{t} r_{t+1} \mid s_t = s, a_t = a\right]$, and the optimized action-value function is $Q^{*}(s, a) = \max_{\pi} Q^{\pi}(s, a)$. Based on these basic elements of reinforcement learning, A3C is used as an asynchronous algorithm whose core idea is to exploit multi-core hardware: the agent is copied and assigned to each thread to train and to update the gradients of the global parameters. We use the strategy $\pi(a_t|s_t; \theta')$ and the estimated value function $V(s_t; \theta'_v)$ for the agent, and the reward $R$ is defined as Eq. (1):

$$R = \begin{cases} 0 & \text{for terminal } s_t \\ V(s_t; \theta'_v) & \text{for non-terminal } s_t \end{cases} \quad (1)$$

In the model training process, we use gradient ascent to update the parameters of the strategy $\pi$ according to Eq. (2), and gradient descent to update the state value function with the TD method [18] according to Eq. (3):

$$d\theta = d\theta + \nabla_{\theta'} \log \pi(a_i|s_i; \theta')\,(R - V(s_i; \theta'_v)) \quad (2)$$

$$d\theta_v = d\theta_v + \partial (R - V(s_i; \theta'_v))^2 / \partial \theta'_v \quad (3)$$

where $\theta'$ is the parameter of the strategy $\pi$, $\theta'_v$ denotes the parameters of the state value function, and $R$ is updated recursively through $R = r_i + \gamma R$. Generally, we use $A(s_t, a_t; \theta, \theta_v)$, an estimate of the advantage function defined as Eq. (4), to express $(R - V(s_i; \theta'_v))$ in Eqs. (2) and (3):

$$A(s_t, a_t; \theta, \theta_v) = \sum_{i=0}^{k-1} \gamma^{i} r_{t+i} + \gamma^{k} V(s_{t+k}; \theta_v) - V(s_t; \theta_v) \quad (4)$$

where $\theta$ and $\theta_v$ represent the shared variables of the global network's strategy $\pi$ and state value function, respectively, and $k$ varies from state to state.
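Equations (1)-(4) amount to a backward recursion: each A3C worker starts from the bootstrap value of Eq. (1), accumulates R = r_i + γR, and takes R − V(s_i) as the advantage. A minimal sketch in plain Python (function and variable names are ours, not the authors' code):

```python
# Sketch of Eqs. (1)-(4): the k-step bootstrapped return and the
# advantage used by each A3C worker. Names are illustrative.

def n_step_advantages(rewards, values, bootstrap_value, gamma=0.99):
    """rewards[i] and values[i] = V(s_i) for the k visited states;
    bootstrap_value is V(s_{t+k}) for a non-terminal last state,
    or 0 if the episode ended there (Eq. 1).
    Returns A(s_i) = R_i - V(s_i) for each step (Eq. 4)."""
    R = bootstrap_value                 # Eq. (1)
    advantages = [0.0] * len(rewards)
    for i in reversed(range(len(rewards))):
        R = rewards[i] + gamma * R      # R = r_i + gamma * R
        advantages[i] = R - values[i]   # the (R - V(s_i)) of Eqs. (2)-(3)
    return advantages

# Two-step example with gamma = 0.5 and V(s_{t+2}) = 1.0:
adv = n_step_advantages([1.0, 0.0], [0.2, 0.1], 1.0, gamma=0.5)
# step 1: R = 0.0 + 0.5 * 1.0 = 0.5,  A = 0.5 - 0.1 = 0.4
# step 0: R = 1.0 + 0.5 * 0.5 = 1.25, A = 1.25 - 0.2 = 1.05
```

In A3C these advantages scale the log-policy gradient of Eq. (2), while their squares drive the value-network update of Eq. (3).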

4 Experiments and Results Analysis

4.1 Experimental Environment

The experiments in this paper are carried out on accelerator simulation software provided by IMP-CAS. Our experimental computing platform employs a quad-core, eight-thread Intel Xeon CPU, and the open-source machine learning framework TensorFlow, developed by Google, is used for the implementation.

4.2 Experimental Model Setup

As one of the most important components of reinforcement learning, the scale of rewards and punishments is vital to the learning of the model. A reward-and-punishment mechanism that accords with the accelerator beam calibration rules is proposed in this paper to guide the agent's learning. We adopt the principle


of a dynamic incentive mechanism: the closer the beam is to the central axis, the greater the reward. The reward is set as $r = -|d_i|$, with $d_i = \sqrt{x_{(i)}^2 + y_{(i)}^2}$, when the distance from the center satisfies $0 \le d_i < 2$. When $d_i \ge 2$, the reward is set to $r = -2$, which is considered a poor calibration. Meanwhile, the reward is set to 1 when $d_i = 0$, indicating the best calibration level. Among the crucial modeling decisions, this paper uses the horizontal direction as an example; the model applies equally in the vertical direction, because the effect of the coil's electric field force on the beam deflection is much greater than that of the proton's own mass. In the modeling process, we created three sub-models to control the voltage values of the first, second, and third sets of calibration coils, and each set of calibration coils has its own strategy and value network, as shown in Fig. 2. The reason three sub-models are used is that a single model cannot handle a particle that still has a residual offset after calibration as the input to the next state of reinforcement learning, which is contrary to the agent rule. The biggest difference between the models is that the parameters and hyperparameters of the network structures differ; the other structures are the same. Another point that deserves special mention is

Fig. 2. The figure shows the structure of the beam calibration model. The upper left subgraph shows the execution flow of the A3C algorithm. The upper right subgraph shows the global neural network structure of the asynchronous model. The lower four diagrams represent each agent running in the thread.


that the output of the strategy network is divided into two parts: one is the normal-distribution mean vector $\mu$ and the other is the normal-distribution variance scalar $\sigma^2$. Training uses actions sampled from the normal distribution formed by the two, while practical application uses the mean $\mu$ as the action. The structure of the strategy network for the first coil group (c1, c2, c3) is (1, relu, 512, tanh, 3), its value network structure is (1, 200, relu, 1), and the learning rate of both networks is 1e-3. Likewise, the strategy network structures for (c4) and (c5, c6, c7) are (1, 402, relu, 1, tanh/softmax) and (1, 500, relu, 3, tanh/softmax) respectively, where in the last activation function $\mu$ is produced through tanh and $\sigma^2$ is activated via softmax. Moreover, the value networks for the second and third sets of calibration coils are (1, 254, relu, 1) and (1, 254, relu, 1) respectively.

4.3 Results Analysis

In order to measure the calibration effect of the model on the migration of the proton beam, we use the distance d from the axis to evaluate the beam offset. Based on the beam position distribution in the accelerator database, 500 points satisfying a Gaussian distribution are randomly generated as a test sample and plotted as BPM1 in Fig. 3; they indicate the measured positions of the proton beam entering the MEBT section from the RFQ.

Fig. 3. All BPM1 points in the graph satisfy a Gaussian distribution with σ² = 2.0, μ = 0. The four coordinate distribution plots represent the point distribution entering the MEBT, the BPM2 distribution after calibration by the first set of coils, the BPM3 distribution after calibration by the second set of coils, and the BPM4 distribution after calibration by the last set of coils.
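The test-point generation and the distance-based reward of Sect. 4.2 can be sketched as follows. The reward definition is partly garbled in the source, so the exact piecewise form (in particular the minus sign on d_i) is our reading of it, chosen so that the reward grows as the beam approaches the axis and meets r = −2 continuously at d_i = 2; σ = √2 follows from the σ² = 2.0 of Fig. 3.

```python
# Sketch of the Gaussian test-point generation and the piecewise reward
# of Sect. 4.2. The reward's sign convention is our assumption; the
# source text is partly garbled here.
import math
import random

def gaussian_points(n, sigma=math.sqrt(2.0), seed=0):
    """n (x, y) offsets with x, y ~ N(0, sigma^2), as for BPM1 (Fig. 3)."""
    rng = random.Random(seed)
    return [(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)) for _ in range(n)]

def distance(x, y):
    """d_i = sqrt(x_i^2 + y_i^2), the beam's offset from the axis."""
    return math.hypot(x, y)

def reward(d):
    """+1 at d = 0 (best); -2 for d >= 2 (poor); otherwise -d."""
    if d == 0.0:
        return 1.0
    if d >= 2.0:
        return -2.0
    return -d

points = gaussian_points(500)
mean_offset = sum(distance(x, y) for x, y in points) / len(points)
```

The 500 generated points play the role of the BPM1 sample; mean_offset is the quantity tracked in Table 1.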

Figure 3 visually reflects the beam offset calibration effect from BPM1, entering the MEBT segment, to BPM4, leaving the MEBT segment. In particular, after the second set of coil calibrations, the deflection of the beam converges significantly. To further illustrate the experimental results, we use six sets of 200, 500, 800, 1000, 2000, and 10,000 two-dimensional points for large-scale testing. The specific experimental test results are shown in Table 1. The results clearly demonstrate that the calibrated BPM4 offset reaches a minimum value of 0.4356 mm in the second group's 500-point test, and the calibration amount reaches a maximum of 0.8180 mm in the 1000-point test. Another significant result is that the overall BPM3 result is smaller than BPM4. This

Table 1. The beam offset calibration results distribution of test points.

| TIMES   | BPM1 (mm) | BPM2 (mm) | BPM3 (mm) | BPM4 (mm) | REDUCTION (BPM1 − BPM4) (mm) |
| 200     | 1.2556    | 0.9312    | 0.4093    | 0.4667    | 0.7889 |
| 500     | 1.2530    | 0.9214    | 0.3912    | 0.4356    | 0.8174 |
| 800     | 1.2409    | 0.9153    | 0.3824    | 0.4428    | 0.7981 |
| 1000    | 1.2798    | 0.9507    | 0.4129    | 0.4618    | 0.8180 |
| 2000    | 1.2506    | 0.9257    | 0.3999    | 0.4475    | 0.8031 |
| 10000   | 1.2463    | 0.9200    | 0.3982    | 0.4462    | 0.8001 |
| Average | 1.2543    | 0.9273    | 0.3983    | 0.4501    | 0.8042 |

Note: The mean calibrated offset reached 0.4356 mm in the 500-point test, while the calibration amount reached 0.8180 mm in the 1000-point test, the maximum of the six test groups. In addition, in the largest test, with 10,000 points, the average offset of the beam finally reached 0.4462 mm. Overall, the beam was calibrated from 1.2543 mm down to 0.4501 mm, a calibration amount of 0.8042 mm, which comprehensively reflects the average capacity of the beam offset calibration.

is because the beam still drifts after the calibration by the first two sets of coils. In order to give a comprehensive evaluation of the experimental results, this paper combines multiple sets of test samples into an average result. Finally, the average beam offset of 1.2543 mm at BPM1, out of the RFQ segment, reaches an average of 0.4510 mm at BPM4 after the three sets of MEBT coils, demonstrating an outstanding average reduction of 0.8042 mm.
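The REDUCTION column of Table 1 is simply BPM1 − BPM4, and the last row averages the six tests. A quick check with the table's numbers (the computed averages agree with the printed ones to within rounding):

```python
# Consistency check of Table 1: REDUCTION = BPM1 - BPM4, and the last
# row averages the six tests. Values are copied from the table.
rows = {                 # test points: (BPM1, BPM4) in mm
    200:   (1.2556, 0.4667),
    500:   (1.2530, 0.4356),
    800:   (1.2409, 0.4428),
    1000:  (1.2798, 0.4618),
    2000:  (1.2506, 0.4475),
    10000: (1.2463, 0.4462),
}
reduction = {n: round(b1 - b4, 4) for n, (b1, b4) in rows.items()}
avg_bpm1 = round(sum(b1 for b1, _ in rows.values()) / len(rows), 4)
avg_bpm4 = round(sum(b4 for _, b4 in rows.values()) / len(rows), 4)
```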

5 Conclusion

The main goal of this paper is to use deep reinforcement learning to calibrate the beam offset of the proton accelerator's MEBT section. Because traditional beam offset calibration is based on physical calculations, it cannot adjust its own strategy when the environment is complex. The A3C-based model in this paper proves effective for the beam offset calibration of the proton linac MEBT segment, and it may also benefit research in fields related to optimal control. More work to extend the proposed model to the control of complex systems will be undertaken in the future.

Acknowledgements. This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDA03030100, the National Natural Science Foundation of China under Grants No. 61402210 and No. 60973137, the Ministry of Education-China Mobile Research Foundation under Grant No. MCM20170206, the State Grid Corporation Science and Technology Project under Grant No. SGGSKY00FJJS1700302, the Program for New Century Excellent Talents in University under Grant No. NCET-12-0250, the Major National Project of High Resolution Earth


Observation System under Grant No. 30-Y20A34-9010-15/17, Google Research Awards and a Google Faculty Award, the Double First-Class Funding-International Cooperation and Exchange Program under Grant No. 27560001, and the Fundamental Research Funds for the Central Universities under Grant No. lzujbky-2018-k07.

References

1. Slovic, P., Flynn, J.H., Layman, M.: Perceived risk, trust, and the politics of nuclear waste. Science 254, 1603–1607 (1991)
2. Chen, W., Zheng, R., Baade, P.D., et al.: Cancer statistics in China, 2015. CA Cancer J. Clin. 66, 115 (2016)
3. Salvatores, M., Slessarev, I., Ritter, G., Fougeras, P., Tchistiakov, A., Youinou, G., Zaetta, A.: Long-lived radioactive waste transmutation and the role of accelerator driven (hybrid) systems. Nucl. Instrum. Methods Phys. Res. 414, 5–20 (1998)
4. Gillette, E.L., Gillette, S.M.: Principles of radiation therapy. Semin. Vet. Med. Surg. 2, 129–134 (1995)
5. Liu, S.H., Wang, Z.J., Jia, H., He, Y., Dou, W.P., Qin, Y.S., Chen, W.L., Yan, F.: Physics design of the CIADS 25 MeV demo facility. Nucl. Instrum. Methods Phys. Res. 843 (2016)
6. Geng, H., Meng, C., Guo, Z., Tang, J., Li, Z., Pei, S.: Error analysis for the C-ADS MEBT2. In: Proceedings of the 4th International Particle Accelerator Conference (IPAC), pp. 390–398 (June 2013)
7. Corbett, W.J., Lee, M.J., Ziemann, V.: A fast model calibration procedure for storage rings. In: Proceedings of the IEEE Particle Accelerator Conference, pp. 108–110 (1993)
8. Husain, R., Ghodke, A.D., Yadav, S., Holikatti, A.C., Yadav, R.P., Fatnani, P., Puntambekar, T.A., Hannurkar, P.R.: Measurement, analysis and correction of the closed orbit distortion in Indus-2 synchrotron radiation source. Pramana 80, 263–275 (2013)
9. Zhang, M.Z., Li, H.H., Jiang, B.C., Liu, G.M., Li, D.M.: Beam based alignment of the SSRF storage ring. Chin. Phys. C 33, 301–305 (2009)
10. Jena, S.K., Husain, R., Gandhi, M.L., Agrawal, R.K., Yadav, S., Ghodke, A.D.: Beam based alignment and its relevance in Indus-2. Rev. Sci. Instrum. 86, 093303 (2015)
11. Sasaki, S., Soutome, K., Tanaka, H.: Beam based calibration of BPM position sensitivity at SPring-8 storage ring. American Institute of Physics, pp. 425–432 (2002)
12. Chen, W.L., Wang, Z.J., Feng, C., Dou, W.P., Tao, Y., Jia, H., Wang, W.S., Liu, S.H., He, Y.: Beam-based calibrations of the BPM offset at C-ADS Injector II. Chin. Phys. C 40, 158–161 (2016)
13. Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., Riedmiller, M.: Playing Atari with deep reinforcement learning (2013). arXiv:1312.5602
14. Steingrover, M., Schouten, R., Peelen, S., Nijhuis, E.H.J., Bakker, B.: Reinforcement learning of traffic light controllers adapting to traffic congestion. In: Proceedings of the Seventeenth Belgium-Netherlands Conference on Artificial Intelligence, pp. 216–223 (October 2005)
15. Mnih, V., Badia, A.P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., Silver, D., Kavukcuoglu, K.: Asynchronous methods for deep reinforcement learning. In: Proceedings of the 33rd International Conference on Machine Learning (ICML), pp. 1928–1937 (June 2016)
16. He, Y., Wang, Z.J., Liu, Y., Chen, X., Jia, H., Yao, Y., Li, C., Zhang, B., Zhao, H.W.: The conceptual design of Injector II of ADS in China. In: Proceedings of the 2nd International Particle Accelerator Conference (IPAC), pp. 2613–2615 (September 2011)
17. Jia, H., Yuan, Y., Song, M., He, Y., Luo, C., Zhang, X.: Design of the MEBT1 for C-ADS Injector II. In: Proceedings of the 52nd ICFA Advanced Beam Dynamics Workshop on High-Intensity and High-Brightness Hadron Beams, pp. 115–118 (September 2012)
18. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction, 2nd edn., pp. 129–147 (2017)

Predicting Students' Academic Performance Using Utility Based Educational Data Mining

K. T. S. Kasthuriarachchi1 and S. R. Liyanage2

1 Faculty of Graduate Studies, University of Kelaniya, Dalugama, Sri Lanka
2 Faculty of Computing and Technology, University of Kelaniya, Dalugama, Sri Lanka

Abstract. Knowledge extracted from educational data can be used by educators to obtain insights into how the quality of teaching and learning can be improved, how different factors affect students' performance, and how qualified students can be trained for industry requirements. This research focuses on building a knowledge-based system that classifies using a set of rules. The main purpose of the study is to analyze the student attributes that most influence module performance in tertiary education in Sri Lanka. The study gathered data about students at a reputed degree-awarding institute in Sri Lanka, used three different data mining algorithms to predict the influential factors, and evaluated the results for interestingness using an objective-oriented utility-based method. Subsequently, the age of the students, their family background with regard to parents' occupations, the average monthly income of the family, their English language fluency and their knowledge of Mathematics were identified as the interesting factors. The findings of this study will positively affect future decisions regarding the progress of students' performance, the quality of the education process and the future of the education provider.

Keywords: Educational data mining · Knowledge discovery in databases · Interestingness · Objective-oriented utility-based mining

1 Introduction

During recent decades, Information Technology has been influencing every aspect of a country: mechanical, e-governance, social, educational and others. Sri Lanka, which intends to become a knowledge hub as one of its national goals, is no exception to the influence of Information Technology. Likewise, the business process outsourcing sector has become a competitive industry in the national economy, bringing foreign exchange into the country, creating numerous employment opportunities, and influencing the brain drain. As a result, the IT-related academic and professional education sector has become an intermediary industry supplying qualified people to the IT business.

Data Mining is an important step in Knowledge Discovery in Databases (KDD), used by educators to extract essential information and make decisions related to

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 29–39, 2019. https://doi.org/10.1007/978-981-13-3648-5_4


the pedagogical development as well as to improve the instructional design of the educational sector. Analysis of learners' records, and identification of how different factors affect their performance, is valuable for making important decisions, initiating corrective actions by the educators, and providing feedback to the learners in the system. This study was carried out to identify the most influential factors for students' module performance using a deep analysis with data mining and interestingness measurements. The analysis was conducted using the steps of KDD. Naïve Bayes, Random Forest and Decision Tree algorithms were executed, and the factors from the Decision Tree algorithm were selected, since it proved to have the highest prediction accuracy on the data set. The patterns derived by a mining algorithm might be novel to the user or might be very general; therefore, the patterns are evaluated using interestingness measurements. The organization of this paper is as follows: the literature review highlights current studies in the same domain; Materials and Methods discusses the flow of the research and its important steps; next, the results and discussion section interprets the research results; and finally, the conclusion is given.

2 Literature Review

Data mining can be successfully applied in different industries, including education, to analyze students' data, derive knowledge, and make predictions. Most of the existing studies have been based on modelling students, visualization of students' data, giving support to course creators, and grouping students. There are other tasks that can be performed in educational data mining, such as students' performance prediction, analysis of students' social network usage, and online learning data analysis [1]. Student persistence and dropout in both face-to-face and online educational institutions was modelled using a theoretical model to address the increasing number of dropouts [2]. A student-flow model was developed to monitor students' progression from the first to the final year of their study [3]. Since educational data mining is a progressive research area in the education domain, there is an increasing number of data mining studies in education, covering enrolment management, graduation, academic performance, web-based education, retention and dropout [4]. Based on an open learning model, one researcher reported that background characteristics are not good predictors of whether the student will reach the target of the study, since they are data available at the starting point, whereas other factors may contribute to the difficulties that students face during their study [5]. Another research study identified that knowing the factors which contribute to the achievement of an "at-risk student" may enable educational institutions to build students' ingenuity [3]. Student modelling develops cognitive models by modelling the skills and knowledge of students. Data mining systems have been utilized to analyze eagerness, enjoyment, learning strategies and emotional status in order to model students.
Regression and classification algorithms have been utilized to test the results of various predictions in data mining activities, such as student mental models in Intelligent Tutoring Systems [6]. Student models have also been developed using sequential pattern mining, in which the knowledge


had been acquired automatically [7]. Classification techniques have been utilized to limit the development costs of building user models and to enable transferability in intelligent learning environments [8]. Visualization of educational data in its various forms is an approach to performing mining with graphical methods. Students' online activities, such as involvement in learning and answering, mistakes, participation, instructors' remarks on students' work, overviews of discussions, access to resources, and results on assignments, are different kinds of data that can be understood using visualization strategies. Course creators, instructional designers and administrative staff in institutes can obtain guidelines for their tasks by looking at the output of data mining algorithms on educational data. Clustering, classification, sequential pattern analysis, dependency modelling, and prediction have been utilized to enhance web-based learning environments and to evaluate the learning process [9]. Cluster analysis, association analysis, and case-based reasoning have additionally been utilized to set up course instruments and apportion homework at different difficulty levels [10]. Research has also been carried out to discover information that helps educators investigate students' data further, or to identify teaching components and assessments in adaptive learning environments [11]. Grouping students into different categories according to individual behaviors is another type of research done for the benefit of the education industry, and there are many investigations in this area by various researchers. Cluster analysis has been used to partition a set of students into subsections in order to recognize the features and attributes common to the cases in each cluster [12].
Another investigation analyzed students' personalities and learning styles based on data gathered from online courses, using a fuzzy clustering algorithm [13]. Clustering and Bayesian networks were utilized by another group of researchers to group students according to their skills, and the K-means clustering algorithm has been utilized to cluster students who show similar learning behaviors in online learning records, exam scores and assignment scores [14]. However, none of these studies focused on interpreting and presenting the most useful and interesting patterns to the user, beyond the raw results of the analysis. A few studies are available that produced mining results after an interestingness evaluation of the derived patterns, but they used medical data for decision making [15]. No studies are available that present mining results on educational data together with interestingness decisions. Therefore, the objective of this research was to extract the most influential and interesting factors affecting students' academic performance.

3 Materials and Methods

The research study was carried out as shown in Fig. 1. Initially, the dataset is passed through the main steps of KDD, an iterative process of finding knowledge in the raw data of large databases. It consists of data selection, pre-processing, data transformation, data mining and data interpretation steps.


Fig. 1. The flow of study

Data is gathered into databases from questionnaires, surveys, interviews, and other databases. While collecting this data, some attribute values can be missing; such data are called incomplete data. If there are discrepancies between attribute values, they are called outlier values or inconsistent data. Noisy data, which contains errors, may also be present in the dataset. In the data pre-processing step, all missing values should be handled, noisy data identified and smoothed, and contradictions resolved, to generate a consistent collection of data for analysis. Data integration regularly requires connecting data from various data stores. There may be naming conflicts between different sources even when they mean the same thing, and there may also be redundant data; these must be avoided in order to perform the data mining tasks on a complete set of data. An appropriate data mining method and algorithms have to be chosen for the analysis. The mining can be predictive or descriptive: classification, clustering, regression, summarization, dependency modelling and deviation detection are the types of descriptive and predictive data mining tasks. Naïve Bayes, decision tree and random forest are classification algorithms, K-Means is a clustering algorithm, and linear regression, logistic regression and neural networks are regression algorithms. The extracted patterns are visualized for interpretation. Identifying conflicts between previously believed knowledge and the discovered knowledge, and incorporating the discovered knowledge into a system, is called consolidating the discovered knowledge. This study used the Naïve Bayes, Random Forest and Decision Tree algorithms for the data mining task.
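As an illustration of the classification step, here is a minimal categorical Naïve Bayes with Laplace smoothing in plain Python. The toy attributes (English fluency, Mathematics level) and data are made up for the example and are not the study's dataset.

```python
# Minimal categorical Naive Bayes with Laplace smoothing, illustrating
# one of the three classifiers used in the study. The attributes and
# data below are made up for the example, not the study's dataset.
import math
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """rows: list of tuples of categorical attribute values."""
    label_counts = Counter(labels)
    # value_counts[label][attribute_index][value] -> count
    value_counts = defaultdict(lambda: defaultdict(Counter))
    for row, y in zip(rows, labels):
        for j, v in enumerate(row):
            value_counts[y][j][v] += 1
    return label_counts, value_counts

def predict_nb(model, row, alpha=1.0):
    """Return the label with the highest smoothed log-posterior."""
    label_counts, value_counts = model
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for y, ny in label_counts.items():
        score = math.log(ny / total)          # log prior
        for j, v in enumerate(row):
            cnt = value_counts[y][j]
            # Laplace smoothing over the values seen for attribute j
            score += math.log((cnt[v] + alpha) / (ny + alpha * (len(cnt) + 1)))
        if score > best_score:
            best, best_score = y, score
    return best

# Toy usage: (English fluency, Mathematics level) -> module result.
model = train_nb([("good", "high"), ("good", "low"),
                  ("poor", "low"), ("poor", "low")],
                 ["pass", "pass", "fail", "fail"])
```

Decision Tree and Random Forest follow the same train/predict interface; the study selected the Decision Tree's factors because it gave the highest prediction accuracy.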

Predicting Students’ Academic Performance …


Next, the interestingness of the rules/patterns derived in the mining stage was evaluated using interestingness measures. Interestingness measures provide a standard method to filter and rank the rules/patterns extracted by a data mining algorithm. They can be used in three situations: first, during the pattern extraction process, to select the attribute–value pairs to be included in the patterns; second, to measure the weight of goodness of the identified patterns; and third, to identify the patterns which differ from the rule set [16]. Researchers use nine criteria for measuring the interestingness of patterns: conciseness, coverage, reliability, peculiarity, diversity, novelty, surprisingness, utility, and actionability [17]. These nine criteria are further categorized into three groups: objective, subjective and semantics-based. An objective measure is based on the raw data and grounded in probability theory, statistics or information theory; conciseness, coverage, reliability, peculiarity and diversity are objective measures. If knowledge about both the user and the data is required for the measurement, the measure falls into the subjective category; novelty and surprisingness are subjective measures. A semantic measure considers the semantics and explanations of the patterns. Since it also requires knowledge of the data, it can be considered a special type of subjective measure; unlike subjective measures, however, a utility function is used to achieve the user's goal. Utility and actionability fall into the semantic group. Therefore, the utility based method was used for the interestingness measurement. Utility based measures capture not only the statistical aspect of the data but also the utility of the mined patterns.
This makes them particularly relevant here, since utility based measures reflect the user's specific objectives as well as the utility of the mined patterns [15]. There are several utility based measures with slight differences among them. First, weighted item set mining assigns a weight value (horizontal weight) to each item in a pattern to represent its importance. The normalized weighted support method is similar, but can be used when there are more items in the pattern. When weights are assigned to each transaction according to its significance in the dataset, the approach is called the vertical weighted support method, and the mixed weighted support method combines vertical and horizontal weights. Finally, the objective oriented utility based (OOA) method allows the user to set objectives for the mining process, so that a given objective is supported both statistically and semantically. The attributes are partitioned into two groups, target and non-target, and the analysis studies how interestingly the non-target attributes support the target attributes in the patterns. The OOA method was proposed to model patterns that are both statistically and semantically related to achieving an objective and its utility [15]. The user sets an objective for the mining process, and the attributes in the dataset are divided into target attributes (objects), which appear in the consequents of the rules, and non-target attributes, which appear in the antecedents [15]. Proceeding with OOA, the support, confidence and utility value of the rules can be computed.



Support:

s% = count({I1, …, Im, obj}, DB) × 100 / |DB|  (1)

where count({I1, …, Im, obj}, DB) denotes the number of records in the database which match the rule including the object (obj), and |DB| is the total number of records in the database.

Confidence:

c% = count({I1, …, Im, obj}, DB) × 100 / count({I1, …, Im}, DB)  (2)

where count({I1, …, Im}, DB) denotes the number of records in the database which match the rule excluding the object (obj).

Utility: the summation of all positive and negative utility values of each class of attributes satisfying the expected object (obj) is computed to decide the most utilized attributes.
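As an illustration of Eqs. (1) and (2) and the utility sum, the sketch below evaluates one OOA-style rule over a toy dataset. The records, the rule, and the grade weights (in the spirit of Table 2) are invented for the example; only the formulas come from the text:

```python
# OOA rule evaluation sketch: support (Eq. 1), confidence (Eq. 2)
# and a simple utility sum over grade weights. Data and weights
# are illustrative, not the study's dataset.

WEIGHT = {"A": 1.4, "B": 0.8, "C": 0.2, "D": -0.4}  # assumed weights

def rule_measures(db, antecedent, obj):
    """antecedent and obj are predicates over a record."""
    n_ant = sum(1 for r in db if antecedent(r))
    n_both = sum(1 for r in db if antecedent(r) and obj(r))
    support = n_both * 100.0 / len(db)                     # Eq. (1)
    confidence = n_both * 100.0 / n_ant if n_ant else 0.0  # Eq. (2)
    # Utility: sum of the grade weights over records matching the rule.
    utility = sum(WEIGHT[r["Grade"]] for r in db if antecedent(r))
    return support, confidence, utility

db = [
    {"Attendance": 80, "Grade": "A"},
    {"Attendance": 90, "Grade": "B"},
    {"Attendance": 40, "Grade": "D"},
    {"Attendance": 70, "Grade": "C"},
]
# An R1-style rule: high attendance -> grade C or above.
s, c, u = rule_measures(db,
                        antecedent=lambda r: r["Attendance"] > 50,
                        obj=lambda r: r["Grade"] in ("A", "B", "C"))
print(s, c, round(u, 2))  # 75.0 100.0 2.4
```

The same three measures are computed for every rule produced by the classifier, with the grade for the module as the object.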

4 Results and Discussion

As the initial step, data were gathered and the dataset was prepared. Data on students taking a first-year programming module (English medium) at an IT degree awarding institute in Sri Lanka were collected using a questionnaire. The data were then collated into a comma separated value file for use in the KDD process. The dataset consisted of 250 instances with 11 attributes. Table 1 describes the dataset.

Table 1. Description of dataset

Age – Age of the student
Gender – Gender of the student
Location – Hometown
Father's Job (Fjob) – Occupation of the father
Mother's Job (Mjob) – Occupation of the mother
AverageFamilyIncome – Average family income per month
A/LStream – A/L examination stream
A/LEnglishGrade – Grade for English in A/L examination
O/LEnglishGrade – Grade for English in O/L examination
FrequencyofAttendancetoLectures – Frequency of attendance to lectures
Grade – Grade obtained for the module



The dataset was pre-processed to remove all inconsistencies, missing data and duplicates, after which 203 instances remained. Next, the data were transformed for the data mining tasks. The mining was carried out in the R software package, which contains several packages and libraries for analyzing datasets. The dataset was divided into training and testing sets to initiate the mining task. The Naïve Bayes, Random Forest and Decision Tree algorithms were used to mine the dataset, and their accuracies were recorded using tenfold cross validation repeated 3 times. The prediction accuracies were 92.17, 97.1 and 98.9% for Naïve Bayes, Random Forest and Decision Tree respectively. Since the Decision Tree algorithm outperformed the rest, the rules it generated were selected for further analysis. In measuring variable importance for the decision tree, 7 attributes were considered for the interestingness measurement: FrequencyofAttendancetoLectures, Age, Mother's Job (Mjob), O/LEnglishGrade, Father's Job (Fjob), AverageFamilyIncome and A/LStream, listed in the order of importance derived by the algorithm. The following rules were generated by the prediction algorithm:

R1: If FrequencyofAttendancetoLectures is above 50%, then the student passes the module.
R2: If Age of the student is between 19 and 25, then the student passes the module.
R3: If Mother is doing a job, then the student passes the module.
R4: If O/L English Grade is above 'S', then the student passes the module.
R5: If Father is doing a job, then the student passes the module.
R6: If AverageFamilyIncome is above 50,000, then the student passes the module.
R7: If A/LStream is Mathematics, Biology or Commerce, then the student passes the module.

Next, the Objective Oriented utility based method was used to evaluate the interestingness of the selected attributes and their performance.
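The evaluation protocol mentioned above (tenfold cross validation repeated 3 times) can be sketched independently of the specific learner. The 203-instance toy label set and the majority-class stand-in model below are illustrative placeholders for the actual R classifiers:

```python
# Tenfold cross validation repeated 3 times, the protocol used to
# score the classifiers. The labels and the majority-class stand-in
# learner are toys, not the study's data or models.
import random
from collections import Counter

def repeated_cv_accuracy(labels, k=10, repeats=3, seed=0):
    rng = random.Random(seed)
    idx = list(range(len(labels)))
    scores = []
    for _ in range(repeats):
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]
        for fold in folds:
            test = set(fold)
            train = [i for i in idx if i not in test]
            # Stand-in "model": predict the majority class of the train split.
            majority = Counter(labels[i] for i in train).most_common(1)[0][0]
            scores.append(sum(labels[i] == majority for i in fold) / len(fold))
    return sum(scores) / len(scores)

labels = ["Pass"] * 160 + ["Fail"] * 43    # 203 instances, as in the study
acc = repeated_cv_accuracy(labels)
print(0.7 < acc < 0.87)  # True: a majority learner scores near 160/203
```

Any of the three classifiers can be dropped in place of the majority-class predictor; the averaging over 30 fold scores is what the reported accuracies correspond to.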
First, the target and non-target attributes were identified. FrequencyofAttendancetoLectures, Age, Mother's Job (Mjob), O/LEnglishGrade, Father's Job (Fjob), AverageFamilyIncome and A/LStream were treated as non-target attributes, and the grade for the module was selected as the target attribute (obj). The objective was set as obtaining a grade of C+ or above. Based on the ranking of grades offered by the institute, a weight value was assigned to each grade as shown in Table 2. Then the utility of each rule was calculated: for a given rule, the non-target attribute was taken as a class and the utility of each instance value was measured over the dataset. Next, the support, confidence and utility of the non-target attributes for achieving the target were calculated. The calculated values for rule R1, whose non-target attribute is Frequency of Attendance to Lectures, are illustrated in Table 3; the same three values were computed for all rules derived by the prediction algorithm. Table 3 shows that item set 3 and item set 4 obtained the highest support and confidence; however, item set 4 has the highest utility.


Table 2. Values of obj (Grade)

A+ – 1.6
A – 1.4
A− – 1.2
B+ – 1.0
B – 0.8
B− – 0.6
C+ – 0.4
C – 0.2
C− – 0
D+ – −0.2
D – −0.4
E – −0.6

Table 3. Support, confidence and utility of rule R1 (obj: Grade ≥ C+), listing for each item set of frequency of attendance to lectures its support (%), confidence (%) and utility.

It should be satisfied that the proportion of income distribution is increased:

k > (1/D)[((2b − c)/2 + 1/2)·(Pe/(b − c))² + F − M].

Q. Li et al.

2.4 Government Quantitative Market Pricing (Quota System)

Under such policies, the government requires power generation enterprises or power grid enterprises to meet a target market share of renewable energy. A lack of acquisition enthusiasm among power grid enterprises can easily lead to difficulties or limited operation of renewable energy power. Therefore, the subject of the quota obligation should be first the power grid enterprise and then the power generation enterprise; in this paper, the grid company mainly undertakes the quota obligation. The government stipulates that power grid enterprises must purchase a certain proportion or amount of renewable energy power. In the supply chain structure, this obligation falls on the power grid enterprise rather than the power generation enterprise. According to the macroscopic goal of energy conservation and emission reduction, the government sets the amount H of renewable energy power that the grid enterprise must integrate. A power grid enterprise that fails to complete the task must buy quota from the market at a price of k yuan/kWh. When a < H, the income of the grid enterprise is S1 = −(H − a)k. A grid enterprise that over-fulfils its quota sells the surplus for profit: when a ≥ H, the profit is S2 = (a − H)k. If θ denotes the probability of failing to complete the quota, so that the quota is completed with probability (1 − θ), the expected quota income is E(S) = θS1 + (1 − θ)S2 = k(a − H). The profit function of the power grid enterprise is

Π42 = DP − Pe·a − pe(D − a) + (a − H)k − (1/2)c·a²  (29)
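Since S1 = −(H − a)k equals (a − H)k, the expected quota income E(S) = θS1 + (1 − θ)S2 collapses to k(a − H) for any completion probability. A quick numeric check, with arbitrary illustrative values for a, H and k:

```python
# Verify E(S) = theta*S1 + (1 - theta)*S2 = k*(a - H) for any theta.
# a (renewable power purchased), H (quota) and k (quota price, yuan/kWh)
# are arbitrary illustrative numbers.

def expected_quota_income(a, H, k, theta):
    S1 = -(H - a) * k        # income when the quota is missed (a < H)
    S2 = (a - H) * k         # income when the quota is exceeded (a >= H)
    return theta * S1 + (1 - theta) * S2

a, H, k = 120.0, 100.0, 0.5
for theta in (0.0, 0.25, 0.9):
    assert expected_quota_income(a, H, k, theta) == k * (a - H)
print(k * (a - H))  # 10.0
```

This is why θ drops out of the quota income term in Eq. (29): the penalty and the resale profit are the same linear function of (a − H).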

At this point, the renewable energy power market equilibrium is a sequential non-cooperative Stackelberg game with the power grid company as the leader and the power generation enterprise as the follower. In this game, the power grid enterprise determines the purchase price Pe* according to the renewable energy quota information stipulated by the government, and the power generation enterprise determines its optimal power generation after observing the grid enterprise's price. As before, backward induction gives the optimal generation capacity of the power generation enterprise in (7). Substituting formula (7) into Eq. (29) and setting the first order condition to 0 yields the optimal purchase price of the grid enterprise:

Pe* = b(Pe + k)/(2b + c)  (30)

Substituting formula (30) into Eq. (7):

Q* = (Pe + k)/(2b + c)  (31)

Substituting formulas (30) and (31) into formulas (3) and (29) gives the equilibrium renewable energy income of the power generation enterprise and the power grid enterprise.

Study on Income Distribution Mechanism …

Π41 = b(Pe + k)²/(2(2b + c)²),  (32)

Π42 = (Pe + k)²/(2(2b + c)) − Hk.  (33)

The social benefits of renewable energy power are

Π43 = ((Pe + k)/(2b + c))·w.  (34)

From formula (33) we get ∂Π42/∂k = (Pe + k)/(2b + c) − H = 0, so the quota price that is stationary for the power grid enterprise's income is k* = H(2b + c) − Pe. Because ∂²Π42/∂k² = 1/(2b + c) > 0, Π42 is a strictly convex function of k and k* is the minimum quota price. Moreover, H(2b + c) − Pe can be read as the difference between the integrated purchase cost of renewable energy and that of conventional energy, which leads to Proposition 4.
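The first and second order conditions above can be checked numerically. In the sketch below, b, c, Pe and H are arbitrary positive illustrative values and Π42(k) follows Eq. (33):

```python
# Numeric check of Eq. (33): Pi42(k) = (Pe + k)^2/(2(2b + c)) - H*k
# is strictly convex in k with stationary point k* = H(2b + c) - Pe.
b, c, Pe, H = 0.8, 0.3, 1.0, 2.0    # arbitrary illustrative parameters

def pi42(k):
    return (Pe + k) ** 2 / (2 * (2 * b + c)) - H * k

k_star = H * (2 * b + c) - Pe       # from dPi42/dk = (Pe+k)/(2b+c) - H = 0
eps = 1e-5
# Central differences: first derivative vanishes at k*,
# second derivative equals 1/(2b + c) > 0 (strict convexity).
d1 = (pi42(k_star + eps) - pi42(k_star - eps)) / (2 * eps)
d2 = (pi42(k_star + eps) - 2 * pi42(k_star) + pi42(k_star - eps)) / eps ** 2
print(abs(d1) < 1e-8, abs(d2 - 1 / (2 * b + c)) < 1e-3)  # True True
```

Because Π42 is convex, k* is the income-minimizing quota price for the grid enterprise, which is exactly the threshold used in Proposition 4.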

Proposition 4. When the market price of quotas is less than the additional cost of renewable energy power supply (the difference between the integrated purchase costs of renewable and conventional energy), namely k < H(2b + c) − Pe, power grid companies would rather buy quotas than make efforts to integrate renewable energy power into the grid directly.

2.5 Comparison of Main Income in Different Policy Scenarios

Comparing the earnings of each agent under the different regulation policies in Table 1, this paper derives the following. (1) Both sides of the supply chain obtain higher returns under the government regulation policies than under spontaneous market adjustment (policy 1), because Π31 and Π41 exceed Π11, Π32 and Π42 exceed Π12, and Π33, Π43 ≥ Π13. However, whether the income distribution of the supply chain under policies 3 and 4 is reasonable depends on variables such as the feed-in tariff or the quota, so the benefits of both sides cannot necessarily be allocated reasonably under government control. (2) The benefits of society and of power generation enterprises under the government pricing policy (policy 3) are higher than under the fixed price system (policy 2), while those of power grid enterprises are not necessarily so [9]. Renewable energy obviously brings social welfare Π33 > Π23, and power generation company earnings satisfy Π21 < Π31. Analyzing the revenue of grid enterprises, when k > (1/D)[(3/2)c(Pe/(b − c))² + F − M + 2b(Pe)²/(2b + c)²], China's existing policy can raise the profits of power grid enterprises above those of the fixed price system.
Table 1. The benefit comparison of all agents in supply chains

Policy scenario i – Power generation company Πi1 – Power grid enterprises Πi2 – Social welfare Πi3
1. Market pricing – b(Pe)²/(2(2b + c)²) – (Pe)²/(2(2b + c)) – Pe·w/(2b + c)
2. Government fixed price market (fixed price system) – 2b(Pe)²/(2b + c)² – 2b(Pe)²/(2b + c)² – 2Pe·w/(2b + c)
3. Government pricing (China policy) – b(Pe/(b − c))² – Dk − (3/2)c(Pe/(b − c))² − F + M – Pe·w/(b − c)
4. Market pricing government quantitative (quota system) – b(Pe + k)²/(2(2b + c)²) – (Pe + k)²/(2(2b + c)) − Hk – (Pe + k)·w/(2b + c)

The condition k > (1/D)[(3/2)c(Pe/(b − c))² + F − M + 2b(Pe)²/(2b + c)²] is thus sufficient for the income of the grid enterprise under China's existing policy to be higher than under the fixed price system; the right-hand side can be understood as the additional cost of renewable energy. (2) Every agent of the supply chain may obtain its highest income under the quota system, and the income can be reasonably distributed. (3) When Pe ≤ k ≤ Pe(1 + 3c/(b − c)), we have Π21 ≥ Π41 ≥ Π31, so the most favorable policies for power generation enterprises are, in order, the fixed price system (policy 2), the quota system (policy 4) and the government pricing policy (policy 3). Here Pe(1 + 3c/(b − c)) is a function of the proportion of the electricity cost in the distribution of renewable energy cost. When k > Pe(1 + 3c/(b − c)), the integrated power supply cost allocation gives Π42 > Π22, that is, power grid enterprises obtain higher returns under the quota system than under the fixed price system. When

M < (Pe + k)²/(2(2b + c)) − Hk − Dk + (3/2)c(Pe/(b − c))² + F,

power grid enterprises under the quota system can achieve higher returns than under China's existing policy.


  3c , then P43 > P33. The quota On the basis of fixed quota H,When k [ Pe 1 þ bc system is higher than the social benefits brought by the existing Chinese policies, because after the relaxation of price control, k internalized the extra cost of renewable energy power, because the conventional energy grid electricity price (Pe) includes the cost of conventional energy and electricity production, and b, c reflects the cost of renewable energy power, and the function of these variables reflects the additional cost of renewable energy power supply [10]. The distribution of earnings between the power plant and the power grid corporation has to do with the technical factors of b and c, and it has to do with market factors like k and H. It can be understood that that additional cost of internalize renewable energy through the market is determined by the market to determine the more reasonable income distribution.

3 Conclusion

This paper compares the distribution of renewable energy income between power generation enterprises and power grid enterprises under four compulsory grid-integration policies, and draws the following conclusions. (1) The supply chain agents obtain higher profits under the regulation policies (policies 2, 3 and 4) than under spontaneous market adjustment (policy 1), but a reasonable allocation of revenue may not be realized under the regulation policies. (2) The key factor in income distribution among the parties in the supply chain is the cost of renewable energy generation. On the one hand, the income of all parties increases as renewable energy generation technology develops; on the other hand, the share of income going to power generation enterprises increases while that of power grid enterprises decreases, leaving the income distribution unbalanced. (3) Under the quota system, all parties can achieve higher returns than under the other policies together with a reasonable distribution of earnings, because the amount of renewable energy is controlled while the extra cost is internalized through market prices. Therefore, from the perspective of higher income and reasonable distribution, the quota system should be the development direction for renewable energy policy. To maximize the profits of the renewable power supply chain agents and social welfare, ensure balanced income distribution, and foster the healthy development of renewable energy power, it is suggested that the government adopt the quota policy, using macroeconomic regulation of the total renewable energy quota to achieve its development goals, and encourage technological advances that reduce the cost of renewable energy. In addition, to improve the initiative of both sides of the supply chain, it is suggested that the government continue to support power generation enterprises and increase its support for power grid enterprises.


References

1. Wang, F., Yin, H.T., Li, S.D.: China's renewable energy policy: commitments and challenges. Energy Policy 38(4), 1872–1878 (2010)
2. Harmelink, M., Voogt, M., Cremer, C.: Analyzing the effectiveness of renewable energy supporting policies in the European Union. Energy Policy 34(3), 343–351 (2006)
3. Menanteau, P., Finon, D., Lamy, M.L.: Prices versus quantities: choosing policies for promoting the development of renewable energy. Energy Policy 31(8), 799–812 (2003)
4. Li, H.: The renewable energy policy in the vision of wind power projects. Reform 21(11), 51–59 (2008)
5. Luo, X., Zhang, L., Li, C.: Study on the incentive policies for renewable energy. Renew. Energy 24(4), 3–6 (2006)
6. Alphen, K.V., Kunz, H.S., Hekkert, M.P.: Policy measures to promote the widespread utilization of renewable energy technologies for electricity generation in the Maldives. Renew. Sustain. Energy Rev. 12(7), 1959–1973 (2008)
7. Wang, B.: Analysis of equilibrium based on market mechanism of renewable energy development policy. Master Dissertation of Northwestern University, Xi'an (2010)
8. Li, H., Xie, M., Du, X.: Research on renewable energy subsidies effectiveness in China based on the empirical analysis of residents' willingness to pay for environmental protection. Finance Econ. 33(3), 102–109 (2011)
9. Weng, Z., Chen, H.: A comparison of the consumer burden induced by two policies to promote renewable energy from the viewpoint of the interests of international competition. J. SJTU Philos. Soc. Sci. 16(5), 58–63 (2008)
10. Yuan, X.L., Zuo, J.: Pricing and affordability of renewable energy in China: a case study of Shandong province. Renew. Energy 36(3), 1111–1117 (2011)

Design a Link Adaptive Management Algorithm for BLE Device

Xichao Wang1(✉), Guobin Su1, Yanbiao Hao1, and Lichuan Luo2

1 MorningCore Technology Co. Ltd., Haidian, Beijing 10083, China
[email protected]
2 School of Electronic Information Engineering, Beihang University, Haidian, Beijing 10083, China

Abstract. This paper proposes an algorithm framework for real-time adjustment of Bluetooth low energy link parameters in a dynamic wireless environment. First, using the convex relationship between the device connection parameters and system power consumption, combined with statistics on the channel packet error rate, connection parameters suitable for the current channel environment are selected dynamically. Second, the data buffers between the host and the controller are monitored in real time, connection parameter update operations are initiated at the appropriate time, and different adjustment weights are assigned to connection maintenance and data transmission operations, making the adjustment of connection parameters more accurate. Simulation and experiment show that the algorithm can reduce system power consumption by up to 34% compared with the traditional fixed connection parameter setting, while under normal conditions the connection loss probability is only 20–62% of that with fixed parameters, so the link stays stable at low power consumption. The algorithm improves the accuracy and real-time performance with which the system parameters track channel changes.

Keywords: Bluetooth low energy · Packet error rate · Connection parameters · Adaptively adjust

1 Introduction

Bluetooth low energy (BLE) technology is widely used in various communication fields because of its simplicity and openness. In particular, with the release of the Bluetooth 5 standard, the application of BLE in the Internet of Things has expanded [1–3]. With large numbers of BLE terminals deployed in a given application scenario, the channel environment of a device deteriorates and its performance drops greatly [4]. In a dynamically changing wireless environment, meeting throughput requirements while minimizing system power consumption and maintaining link synchronization is the key trade-off in BLE system design [5]. The BLE protocol provides several system-settable parameters to suit different application requirements. The most important is the connection interval TCI (Connection Interval, CI), which controls the wake-up and sleep cycles of the

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1355–1364, 2019. https://doi.org/10.1007/978-981-13-3648-5_175


X. Wang et al.

device [6]. When the TCI increases, the BLE device stays connected without being woken up frequently, which reduces system power consumption. However, as the TCI increases, the number of packet exchanges between the master and the slave within the supervision timeout TST decreases. This increases the chance of connection loss, and the recovery process after a lost connection consumes more energy than maintaining the connection [5, 6]. BLE link stability and power consumption are related both to the device polling algorithm [7] and to the setting of the system connection parameters [6]. Literature [5] proposed the CABLE algorithm, which uses the relationship between the TCI and system power consumption to adjust the TCI in real time and improve the stability of link maintenance. However, it does not consider the case where the application layer has continuous data transmission requirements, and its cross-layer design reduces versatility and the real-time performance of parameter adjustment. Literature [6] proposed a power management framework for BLE systems based on real-time status detection, but it ignores the maintenance of the link idle state; moreover, the framework needs the application layer to provide some parameters, which reduces versatility and makes implementation and calculation more complex. Based on the idea of the CABLE algorithm, combined with that power management framework, this paper proposes an algorithm for the power consumption and link management of BLE devices in a complex wireless environment: Modified Connection Parameter Adaptation (M-CPA). The algorithm improves system stability compared to fixed parameter settings while reducing power consumption. Compared with the CABLE algorithm, it adds a data buffer status monitoring mechanism: according to the application layer throughput requirements, the algorithm adjusts the connection parameters in time to better meet upper-layer application requirements. At the same time, the packet error rate (PER) calculation and parameter adjustment are completed in the controller, which reduces the probability of the algorithm being misadjusted due to cross-layer design [8]. Through theoretical modeling, the optimal TCI value is calculated dynamically from the PER. The algorithm provides a solution to the problem of balancing system power consumption, link stability and throughput. Simulation and experiments show that it reduces system power consumption while satisfying the dynamic throughput demand of the upper layer, and maintains link stability at lower power consumption in high-PER environments.

2 Introduction

2.1 Establishment and Maintenance of Connection

BLE link layer operation is divided into two processes: initialization and connection establishment. During initialization the Scanner performs a scanning operation and perceives the surrounding broadcaster (Advertiser). The Advertiser generates a broadcast event in each broadcast period (Adv_Interval), and the Scanner scans for broadcast packets on each channel in each scan interval (Scan_Interval). When the Scanner receives the broadcast packet of the target Advertiser, it becomes the Initiator. After a 150 μs interval, it sends a connection request (CONNECTION_REQ) and establishes a connection with the peer device [7]. After the connection is established, the Initiator and the Advertiser become Master and Slave respectively, time-synchronized through connection establishment. At the beginning of each TCI cycle, both the Master and the Slave wake up and the Master initiates a connection event (ConnEvent) for data interaction. One or more data exchanges can be performed within one TCI cycle. A ConnEvent terminates when the two sides have no data to exchange or on consecutive cyclic redundancy check (CRC) error events (Fig. 1).

Fig. 1. Data exchange when the BLE device is connected

After the Master and Slave establish a connection, they start the link layer timeout timer to monitor and maintain the link. When there is no data to exchange, the protocol allows the BLE devices to exchange empty packets to keep the link valid. When the two parties fail to complete any successful interaction within the TST for whatever reason, the connection is considered lost; both enter the connection timeout process and start re-establishing the connection. Re-establishing a connection consumes more energy than maintaining it [9–13]. Before the timeout mechanism fires, the two parties have ⌊TST/TCI⌋ chances to exchange packets [14]. With TST fixed, reducing the TCI increases the chances of a successful packet interaction, reduces the probability of connection timeouts, and avoids the energy cost of repeated connection setup operations [15].
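The trade-off can be made concrete: with a fixed supervision timeout, halving the connection interval doubles the number of interaction chances before a timeout. A small sketch, with illustrative (not protocol-mandated) timing values:

```python
# Chances to exchange a packet before the supervision timeout fires:
# N = floor(T_ST / T_CI). The timing values below are illustrative.

def interaction_chances(t_st_ms, t_ci_ms):
    return t_st_ms // t_ci_ms

T_ST = 4000                      # supervision timeout, ms
for t_ci in (100, 200, 400):     # candidate connection intervals, ms
    print(t_ci, interaction_chances(T_ST, t_ci))
# A smaller T_CI gives more chances to keep the link alive,
# at the cost of waking the radio more often.
```

This N is exactly the quantity that reappears in the power model of Sect. 3.1.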

2.2 Update of Connection Parameters

The BLE protocol provides a parameter update mechanism that allows a BLE device to adjust system parameters dynamically according to channel conditions and throughput requirements after entering the connected state, so that system performance and power consumption are optimized in a dynamic environment. The most important parameter is the TCI, which determines the sleep and wake-up periods of the BLE node as well as the number of packet interactions the node can perform within a given time range, and thus greatly affects the wireless link, power consumption, throughput and other performance [15]. In practice, however, this parameter is usually fixed after the BLE device establishes a connection; it is not adjusted dynamically according to channel status and system requirements, and its potential goes unused. In a dynamic wireless environment, systems that fix the TCI are more likely to suffer frequent loss of connectivity than systems that adjust it dynamically, leading to greater system power consumption and performance degradation [5, 8, 12].

3 Design and Implementation of M-CPA Algorithm

In order to solve the above problems, this paper proposes an adaptive connection parameter adjustment algorithm based on real-time channel state assessment, implemented in the Controller. First, channel PER statistics are combined with a theoretical model to dynamically select the TCI value optimal for the current system requirements. Then, according to the filling status of the current Controller and Host data buffers, the throughput demand is estimated and the connection parameter update operation is started at a proper time, so that the system maintains link stability at lower power consumption while satisfying the application layer throughput requirement.

3.1 Design of TCI Selection Algorithm Based on PER

Based on the TCI selection algorithm and power management framework in Literature [5] and Literature [6], this paper optimizes both so as to adjust the TCI dynamically while considering system power consumption and throughput requirements; the connection adjustment algorithm is consistent with [5]. Literature [5] establishes a functional relationship between the power consumption of a BLE node and the TCI: the problem reduces to choosing the optimal TCI between TCI,min and TCI,max such that P(TCI) is minimal, where TCI,max and TCI,min are the constants given by the LE_Create_Connection command when the BLE node establishes a connection [7]. The node power consumption P(TCI) is expressed as

P(TCI) = Evalid/TCI + Erecover/STavg(TCI)  (1)

where Evalid is the energy the node spends on data exchange, Erecover is the energy of link re-establishment, and STavg(TCI) is the average time until a timeout event; Evalid and Erecover are calculated according to [9–11]. Following Literature [5], STavg(TCI) is

STavg(TCI) = Σ_{k=0}^{∞} (tT + k·tR)·pT·pR^k = TST + TCI·[1 − (N + 1)p^N + N·p^{N+1}] / [(1 − p)·p^N]  (2)

Based on the above formula and formula (1), the final system power consumption function is expressed as follows:

Design a Link Adaptive Management Algorithm …

$$P(T_{CI}) = \frac{E_{valid}}{T_{CI}} + \frac{E_{recover}}{\,T_{ST} + T_{CI}\,\dfrac{1-(N+1)p^{N}+Np^{N+1}}{(1-p)\,p^{N}}\,} \quad (3)$$

where N is the number of packet exchanges possible between the BLE nodes before T_ST, N = T_ST/T_CI, and p is the statistical probability of packet error (PER) over a period of time. To find the best TCI, this paper makes the following assumption:

$$T_{CI,opt} \in \left\{\, x \;\middle|\; x = \frac{T_{ST}}{n},\; n \in \mathbb{N}^{+} \right\} \quad (4)$$

The range of n is T_ST/T_CI,max ≤ n ≤ T_ST/T_CI,min. Replacing T_CI in P(T_CI) with T_ST/n and taking the second derivative with respect to n gives:

$$P''(n) = E_{recover}\, T_{ST}\, \frac{(1-p)\,p^{n}\left\{-2\ln p\,(1-p^{n}) + n(\ln p)^{2}(1+p^{n})\right\}}{(1-p^{n})^{3}} \quad (5)$$

Since formula (5) is greater than zero for n > 0 and 0 < p < 1, P(n) is a convex function of n. We can therefore search for the optimal value of n incrementally, starting from n = T_ST/T_CI,max and stopping at the first n (written n_opt) for which P(n) < P(n + 1), or when n reaches T_ST/T_CI,min; then T_CI,opt = T_ST/n_opt, and P(n_opt) is the smallest value of P(n) in the range. The protocol stipulates that the TCI must be an integer multiple of 1.25 ms, so the selected optimal value T_ST/n_opt is approximated by the nearest lower multiple of 1.25 ms:

$$T_{CI,opt} = 1.25\,\text{ms} \times \left\lfloor \frac{T_{ST}/n_{opt}}{1.25\,\text{ms}} \right\rfloor \quad (6)$$
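The selection procedure above can be sketched as follows. The energy and timing values in the example (E_valid, E_recover, the T_CI range, and the PER) are illustrative assumptions, not values taken from the paper:

```python
import math

def st_avg(n, p, t_st):
    """Average supervision-timeout reach time of formula (2), with
    T_CI = T_ST / n and N = n packet exchanges possible before T_ST."""
    f = (1 - (n + 1) * p**n + n * p**(n + 1)) / ((1 - p) * p**n)
    return t_st + (t_st / n) * f

def power(n, p, t_st, e_valid, e_recover):
    """System power model P(n) of formula (3) after substituting T_CI = T_ST/n."""
    return e_valid * n / t_st + e_recover / st_avg(n, p, t_st)

def select_tci(p, t_st, tci_min, tci_max, e_valid, e_recover):
    """Incremental search: P(n) is convex in n, so stop at the first n with
    P(n) < P(n + 1), then round down to a 1.25 ms multiple (formula (6))."""
    n = max(1, math.ceil(t_st / tci_max))    # smallest candidate n
    n_max = math.floor(t_st / tci_min)       # largest candidate n
    while n < n_max and power(n, p, t_st, e_valid, e_recover) >= power(n + 1, p, t_st, e_valid, e_recover):
        n += 1
    # the small epsilon guards the floor against floating-point round-off
    return 1.25e-3 * math.floor((t_st / n) / 1.25e-3 + 1e-9)

# Assumed example: PER p = 0.2, supervision timeout T_ST = 1 s,
# T_CI in [7.5 ms, 400 ms], E_valid = 50 uJ, E_recover = 5 mJ.
tci_opt = select_tci(0.2, 1.0, 0.0075, 0.4, 50e-6, 5e-3)
```

With these assumed numbers the search stops at n = 4, giving T_CI,opt = 250 ms, already an exact multiple of 1.25 ms.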

At this point, the optimal TCI value has been chosen through theoretical modeling. Experiments prove that the above approximation has a negligible impact on system performance.

3.2 M-CPA Algorithm Model Design

The M-CPA algorithm is based on the channel estimation strategy and uses the connection parameter update mechanism of the BLE protocol to adjust the connection parameters of the system in real time, so as to maintain a stable connection between BLE nodes at lower power consumption while meeting the application-layer throughput requirements. Figure 2 shows the overall architecture of M-CPA for two connected BLE nodes, one acting as the Master and the other as the Slave; when the Slave initiates the parameter update, the algorithm framework is the same as for the Master. The main body of the algorithm is located in the Controller and consists of five basic function modules: the TCI selection module, the PER evaluation module, the Buffer monitoring and management module, the channel adjustment module, and the parameter update module. To avoid frequent parameter updates caused by misjudgment, a scheduling interval Tschedule is set: an update actually triggers only when the time since the last parameter update initiation exceeds Tschedule, which avoids the extra power consumption brought by frequent updates.

X. Wang et al.

Fig. 2. M-CPA algorithm architecture diagram

The PER is calculated from statistics on the transmission status of each data channel over a period of time (see [5] for the calculation method) and is then provided to the TCI selection module to select the optimal TCI. The Buffer monitoring module obtains the current buffer status parameters in real time and refines the TCI selection according to the number of entries already filled in the data buffer; the relationship between the buffer level and the TCI can be calculated by the formulas in [5, 6]. Since both the PER evaluation module and the Buffer monitoring module influence the selection of the TCI, their adjustments are assigned the weights ηBuffer and ηPER respectively. When the adjustment is driven by the PER we call the strategy PRper; otherwise we call it PRbuffer, as shown in Table 1.

Table 1. PER calculation and buffer monitoring weight settings

PER (%)   Buffer (entries)   ηBuffer   ηPER   Strategy
≤20       0                  0         0      PRper
≤20       >5                 1         0      PRbuffer
>20       >5                 0         1      PRper
>20       0                  0         1      PRper
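The weight assignment of Table 1 can be expressed as a small decision function. The 20% PER threshold follows the table; collapsing the buffer condition to empty vs. non-empty, and the returned tuple layout, are our own simplifications:

```python
def select_strategy(per, buffered):
    """Return (eta_buffer, eta_per, strategy) following Table 1.
    per: statistical packet error rate in percent;
    buffered: number of entries currently held in the data Buffer."""
    if buffered == 0:
        # Link maintenance only: minimize power consumption.
        return (0, 0, "PRper") if per <= 20 else (0, 1, "PRper")
    if per <= 20:
        # Data pending and channel good: maximize throughput.
        return (1, 0, "PRbuffer")
    # Data pending but channel poor: prioritize link stability.
    return (0, 1, "PRper")
```

For example, a device with 8 buffered entries switches from PRbuffer to PRper as soon as the measured PER rises above 20%.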

When there is no data to transmit and only the link needs to be maintained, ηBuffer is 0; only the current system power consumption is considered, so that link stability is maintained at the lowest power consumption. When there is data to transmit, i.e., data is buffered in the Buffer and the current channel status is good, ηBuffer is set to 1; the system then mainly aims to send the data in the Buffer to the air interface at maximum throughput.

When there is data to transmit but the current channel status is poor, ηBuffer is 0 and ηPER is 1. The system then gives priority to link stability: link loss and connection re-establishment caused by an excessively high PER are reduced, improving system performance and reducing the chance of Buffer overflow [14, 15]. The system also maintains status information for each data channel. After a period of time, the status of each channel is automatically assessed; when the PER of some channels is high, the system initiates a channel update operation so that the Master and Slave always work on channels with better status and less interference, which improves link stability.

4 Simulation Test and Performance Analysis

This paper evaluates the performance of the M-CPA algorithm by simulation and by testing in an actual application scenario. To facilitate the collection of power statistics, the device under test uses an external power supply, and a power consumption meter records the power consumed by the BLE node over a period of time. In the experiment, the test platform is connected to a smartphone, acts as the server end, and sends temperature and humidity readings to the phone in real time, providing a saturated and stable upper-layer stimulus for the experiment.

4.1 Simulation Verification

In the simulation, we set the PER to a number of fixed values and, for each fixed PER, measured the power consumption of systems using different TCI values to verify the superiority of M-CPA. Figure 3a shows the simulation results: the system TCI is set to several common values, the PER of each wireless channel is fixed, and the power consumption of the system is measured. The results show that, for every TCI, the power consumption of the BLE device increases as the PER increases, because a higher PER leads to frequent connection loss and hence to connection recovery operations. When PER = 0.6 the channel quality is poor and the power consumption changes sharply; a fixed TCI cannot adapt to the channel state, and the TCI needs to be adjusted again to minimize power consumption. The results also show that the TCI value with the smallest power consumption keeps changing as the PER changes. This proves that adjusting the TCI in real time as the PER changes improves the power consumption of the BLE system.

4.2 Actual Verification

The BLE system adopting the M-CPA power management strategy is deployed in the actual application scenario. For comparison, the TCI is also fixed at several common values and the performance of those devices is compared with that of the M-CPA system. During the test, the BLE devices are provided with a stable upper-layer stimulus so that the system always sends application data at saturated traffic. Each TCI configuration is tested for one hour of power consumption, yielding the result shown in Fig. 3b. The analysis shows that the overall power consumption of the BLE system using the M-CPA power management strategy is 20–34% lower than that of the fixed-TCI systems, which proves that M-CPA selects the optimal TCI for the current channel environment to adjust the system connection state.

Fig. 3. Simulated and actually tested BLE device power consumption: (a) device energy consumption in fixed-PER mode; (b) power consumption of different TCI modes in the actual channel environment

We then selected a number of BLE devices using the M-CPA policy as test devices, deployed them at different locations in the lab, and provided consistent upper-layer stimuli. The actual power consumption of each device is measured after a fixed period of test time, after which the TCI value is changed and the test is continued. After several rounds of testing, the experimental results are shown in Fig. 4a. The optimal TCI values of different BLE devices vary with the current channel environment, and communication between BLE devices significantly increases system power consumption when the TCI value does not suit the current channel. A BLE device adopting the M-CPA policy dynamically changes the TCI according to the current channel environment, so the device always stays in a lower power consumption state: compared with fixed-TCI devices in the same environment, its power consumption is reduced by 5–25%. On the same test platform, we used a BLE wireless packet capture tool to count the number of lost connections for each test case in the actual channel environment. The results in Fig. 4b show that, over a given period in the same wireless channel environment, a BLE node using the M-CPA system loses its connection significantly less often than a node with a fixed TCI value. The number of lost connections is reduced by 24–60%, because a smaller TCI increases the number of packet exchanges that can be performed during TST and reduces the probability of timeouts.

Fig. 4. Energy consumption and lost-connection statistics in the actual channel environment: (a) energy consumption of BLE devices deployed at different locations; (b) loss of connection in the actual channel environment

5 Conclusion

This paper presents M-CPA, a BLE power consumption and link management algorithm based on channel state assessment and a Buffer monitoring mechanism. Through analysis of the relationship between the BLE connection parameter TCI and the channel state, a TCI selection and adjustment algorithm is proposed. The algorithm monitors the data buffer, uses the Controller to evaluate channel quality, and dynamically selects the TCI so as to adjust the system connection parameters and enhance system stability. Test results in both the simulation and the actual channel environment prove that the algorithm improves system reliability in a dynamic wireless environment, reduces the probability of connection loss, and further reduces the energy the BLE node consumes to maintain the link. The algorithm architecture is general and adaptable, providing a reference for different manufacturers in designing and implementing stable low-power BLE products.

References

1. Hussain, S.R., Mehnaz, S., Nirjon, S., Bertino, E.: Secure seamless Bluetooth low energy connection migration for unmodified IoT devices. IEEE Trans. Mob. Comput. 17(4)
2. Yang, G.: The improvement points of Bluetooth 5 performance. Safety EMC 06, 25–54 (2017)
3. Meng, F., Liu, H., Wang, M., Lin, S., Tian, T.: Design of the CMOS low power sub-sampler with integrated filtering for the internet of things. J. Xidian Univ. 44(3), 108–113 (2017)
4. Jeon, W.S., Dong, J.G.: Enhanced channel access for connection state of Bluetooth low energy networks. IEEE Trans. Veh. Technol. 66(9), 8469–8481 (2017)
5. Lee, T., Han, J., Lee, M.-S., Kim, H.-S., Bahk, S.: CABLE: connection interval adaptation for BLE in dynamic wireless environments. In: International Conference on Sensing, Communication, and Networking, San Diego, CA, USA, pp. 1–9. IEEE (2017)
6. Kindt, P., Yunge, D., Gopp, M., Chakraborty, S.: Adaptive online power-management for Bluetooth low energy. In: IEEE Conference on Computer Communications, Hong Kong, pp. 2695–2703 (2015)
7. Xu, F., Zhuang, Y., Feng, F.: A dynamic Bluetooth polling scheme based on memory. J. Xidian Univ. 34(7), 80–83 (2007)
8. Bluetooth SIG: Bluetooth core specification version 4.2 [OL]. 3 April 2018. http://www.bluetooth.com/

An On-Line Identification Method for Short-Circuit Impedance of Transformer Winding Based on Sudden Short Circuit Test

Yan Wu1, Lingyun Gu1, Xue Zhang2, and Jinyu Wang3

1 Beijing Key Laboratory for Energy Saving of Distribution Transformer, Beijing, China
2 State Grid Beijing Electric Power Company, Beijing, China
3 School of Electrical and Electronic Engineering, North China Electric Power University, Beijing 102206, China
[email protected]

Abstract. This paper presents an online identification method for the short-circuit impedance of transformer windings based on the sudden short circuit test. Accurate calculation of the short-circuit impedance parameters is achieved by collecting the voltage and current from the sudden short circuit test; a related simulation model is established, and the adaptability of the test plan is proved by comparison with the measured current waveform. The proposed identification method collects the transient inductance values with greater convenience: on one hand, it greatly saves testing time and enhances test efficiency; on the other hand, it greatly lowers the cost in manpower and equipment, enhancing the test capability and supporting test efficiency.

Keywords: Short circuit test method · Winding impedance · Online identification

1 Introduction

During the short circuit test of a transformer, it is difficult to determine how to use the voltage and current data to conveniently obtain the short-circuit impedance of the windings. Up to now, the parameter test methods for the short-circuit impedance of transformer windings have mainly been of four kinds: Low Voltage Impulse, Frequency Response Analysis, Electric Capacity Alternation Analysis, and Short-Circuit Reactance. All of these methods require the transformer to be out of service, fail to identify the status and condition of the transformer windings online so that failures can be detected instantly, and thus belong only to offline identification [1–7]. However, the need for online identification of the short-circuit impedance of the winding is soaring; especially during the short circuit test of transformers, an accurate short-circuit impedance of the transformer winding under real-time conditions is greatly needed. Moreover, for transformers operating in practice, real-time identification of the short-circuit impedance data of the winding is of great importance for extending the life-span of the transformer and avoiding accidents.

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1365–1375, 2019.
https://doi.org/10.1007/978-981-13-3648-5_176

As summarized from the analysis above, compared with other ways of identifying the short-circuit impedance, online identification is characterized by real-time operation, low disturbance, and a simple and clear criterion. In practice it can show the real-time condition of the winding, so that maintenance can be prepared in advance and the transformer taken out of service promptly when needed. Given this problem, this paper first theoretically analyzes how the impedance parameters of the transformer winding behave, proposes an online identification method for the short-circuit impedance parameters of the transformer based on short-circuit test data, realizes the accurate calculation of the short-circuit impedance parameters by means of the Prony fitting method using only the collected short-circuit voltage and current, and verifies the correctness of the method by establishing a related simulation model and testing the current waveform data of a real short-circuit endurance test.

2 Theoretical Analysis of the Short-Circuit Test Process

Starting the analysis from the single-phase transformer, the transient behaviour of the transformer under various conditions can be calculated by listing the differential equations of the transient process and applying the Laplace transformation; the physical nature of the transient process can be better understood from the solution process. Figure 1 shows the circuit of a transformer whose secondary side is short-circuited while the primary side is switched on [8].

Fig. 1. The circuit when the secondary side of the transformer is short-circuited and the primary side is switched on

When the impedance of the excitation branch is ignored, the circuit voltage equations can be written as follows using the equivalent circuit of the transformer:

$$\begin{cases} i_1 R_1 + L_1 \dfrac{di_1}{dt} + L_m \dfrac{d(i_1+i_2)}{dt} = U_m \sin(\omega t + \varphi) \\[4pt] i_2 R_2 + L_2 \dfrac{di_2}{dt} + L_m \dfrac{d(i_1+i_2)}{dt} = 0 \end{cases} \quad (1)$$

where φ is the initial phase angle at switch-on. Applying the Laplace transformation to the expression above gives:

$$\begin{cases} I_1 R_1 + s L_1 I_1 + s L_m (I_1 + I_2) = U_m \dfrac{s \sin\varphi + \omega \cos\varphi}{s^2 + \omega^2} \\[4pt] I_2 R_2 + s L_2 I_2 + s L_m (I_1 + I_2) = 0 \end{cases} \quad (2)$$
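Formula (2) relies on the Laplace transform of U_m sin(ωt + φ). A quick numerical check of this transform pair, with arbitrarily chosen values of U_m, ω, φ and s (assumptions, not values from the paper), can be sketched as:

```python
import numpy as np

# Arbitrary test values: 1 V amplitude, 50 Hz source, phase 0.3 rad, s = 40.
Um, w, phi, s = 1.0, 2 * np.pi * 50, 0.3, 40.0

# Evaluate the one-sided Laplace integral numerically with the trapezoid rule;
# by t = 0.5 s the factor exp(-s*t) has decayed to ~2e-9, so the tail is negligible.
t = np.linspace(0.0, 0.5, 200001)
y = np.exp(-s * t) * Um * np.sin(w * t + phi)
dt = t[1] - t[0]
numeric = (y.sum() - 0.5 * (y[0] + y[-1])) * dt

# Closed form appearing on the right-hand side of formula (2)
analytic = Um * (s * np.sin(phi) + w * np.cos(phi)) / (s**2 + w**2)
```

The two values agree to well below 1e-6, confirming the transform used in the derivation.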

Solving for the secondary-side current of the transformer in the Laplace domain gives:

$$I_2 = \frac{A \cdot s \cdot (s \sin\varphi + \omega \cos\varphi)}{(s^2 + \omega^2)(s^2 + Bs + C)} \quad (3)$$

The time-domain solution of the secondary-side current is:

$$i_2 = 2\,|k_1| \cos(\omega t + \theta_2) + k_3 e^{p_1 t} + k_4 e^{p_2 t} \quad (4)$$

It is clear from this derivation that when an AC voltage is applied to the primary side of the transformer, the current on the secondary side is composed of three sub-components: one forced component and two exponentially decaying free components. Combining the two free components into a single exponentially decaying free component, namely $I_m e^{-t/\tau}$, the transient short-circuit current at the switching instant is composed of one AC component and one decaying DC component:

$$i = I_m \cos(\omega t) + I_m e^{-t/\tau} \quad (5)$$
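Formula (5) can be illustrated numerically; the amplitude I_m = 100 A and time constant τ = 32 ms used below are arbitrary assumptions, not values from the paper:

```python
import math

def short_circuit_current(t, i_m, tau, omega=2 * math.pi * 50):
    """Transient short-circuit current of formula (5): a steady AC component
    plus a DC component decaying with time constant tau."""
    return i_m * math.cos(omega * t) + i_m * math.exp(-t / tau)

# At the switching instant the two components add to 2 * i_m (the fully
# offset worst case); after ~10 time constants only the AC part remains.
peak_at_switch_on = short_circuit_current(0.0, 100.0, 0.032)
after_ten_tau = short_circuit_current(0.32, 100.0, 0.032)
```

This worst-case doubling at t = 0 is why the first few cycles of a sudden short circuit show the pronounced decaying offset exploited by the identification method.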

Combined with the preceding theoretical analysis, the online calculation of the short-circuit impedance can be established. First, the real-time short-circuit transient waveform of the transformer is obtained from the transformer short-circuit test; decomposing the winding current waveform yields the decaying DC component and the AC component, and fitting the transient waveform yields the decaying time constant of the short circuit. At the same time, the equivalent resistance of the system is obtained from the steady-state short-circuit data, and the equivalent short-circuit inductance L is obtained from the transient process of Eq. (5). The impedance parameters of the short-circuit process are thus obtained by the procedure described above.

3 On-Line Identification Method for the Short-Circuit Impedance of the Winding Based on the Prony Method

The Prony method was proposed in 1795 by Gaspard Riche de Prony; it approximates an objective function by a linear combination of exponential functions [9–11]. It is not only a signal analysis method but also a system identification method, and can be expressed as:

$$f(x) \approx \sum_{i=1}^{n} C_i e^{a_i x} \quad (6)$$

Assuming $\mu_k = e^{a_k \Delta t}$, where $\Delta t$ is the sampling interval, the following system of equations is obtained:

$$\begin{aligned} C_1 + C_2 + \cdots + C_n &= f(0) \\ C_1\mu_1 + C_2\mu_2 + \cdots + C_n\mu_n &= f(1) \\ &\;\,\vdots \\ C_1\mu_1^{n} + C_2\mu_2^{n} + \cdots + C_n\mu_n^{n} &= f(n) \\ &\;\,\vdots \\ C_1\mu_1^{N-1} + C_2\mu_2^{N-1} + \cdots + C_n\mu_n^{N-1} &= f(N-1) \end{aligned} \quad (7)$$

To find the values of $\mu_1, \mu_2, \ldots, \mu_n$, they are taken to be the roots of the characteristic equation:

$$\mu^{n} = a_1 \mu^{n-1} + a_2 \mu^{n-2} + \cdots + a_{n-1}\mu + a_n \quad (8)$$

The following linear system for the coefficients $a_1, \ldots, a_n$ can then be constructed from the sample data:

$$\begin{bmatrix} f(n-1) & f(n-2) & \cdots & f(0) \\ f(n) & f(n-1) & \cdots & f(1) \\ f(n+1) & f(n) & \cdots & f(2) \\ \vdots & \vdots & & \vdots \\ f(N-2) & f(N-3) & \cdots & f(N-n-1) \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_n \end{bmatrix} = \begin{bmatrix} f(n) \\ f(n+1) \\ f(n+2) \\ \vdots \\ f(N-1) \end{bmatrix} \quad (9)$$

When N > 2n, the least-squares solution a_1, a_2, …, a_n of Eq. (9) can be calculated, and μ_1, μ_2, …, μ_n are then obtained by solving Eq. (8). C_1, C_2, …, C_n are calculated by further solving Eq. (7) by least squares, and finally the exponential expansion of f(x) is formed. According to this analysis, the Prony algorithm can decompose the DC and AC components when processing short-circuit test data, and can then conveniently calculate the decaying time constant of the short-circuit transient process. Based on the application results reported in the references, the accuracy of the algorithm's data processing meets the requirements of practical projects [9].
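The three Prony steps just described (least squares for the a_i via Eq. (9), roots of Eq. (8) for the μ_i, then least squares for the C_i via Eq. (7)) can be sketched as follows; the function and variable names are our own:

```python
import numpy as np

def prony(f, n):
    """Prony decomposition: fit samples f[0..N-1] with sum_i C_i * mu_i**k.
    The steps follow Eqs. (9), (8) and (7) in that order."""
    f = np.asarray(f, dtype=float)
    N = len(f)
    # Eq. (9): least-squares predictor, f[m] = a1 f[m-1] + ... + an f[m-n]
    A = np.column_stack([f[n - i:N - i] for i in range(1, n + 1)])
    a, *_ = np.linalg.lstsq(A, f[n:], rcond=None)
    # Eq. (8): mu_i are the roots of mu^n - a1 mu^(n-1) - ... - an = 0
    mu = np.roots(np.concatenate(([1.0], -a)))
    # Eq. (7): least-squares amplitudes from the Vandermonde system
    V = np.vander(mu, N, increasing=True).T      # V[k, i] = mu_i ** k
    C, *_ = np.linalg.lstsq(V, f.astype(complex), rcond=None)
    return C, mu

# Recover a two-exponential signal exactly: f[k] = 3*0.9^k + 2*0.5^k
samples = [3 * 0.9**k + 2 * 0.5**k for k in range(20)]
C, mu = prony(samples, 2)
```

Because the synthetic signal satisfies the linear recurrence exactly, the fit recovers the geometric ratios 0.9 and 0.5 and the amplitudes 3 and 2 to machine precision; on noisy short-circuit data the same three steps return a least-squares approximation instead.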


Fig. 2. The simulation model

Table 1. Transformer detailed parameters

Rating capacity (kVA)            200
Frequency (Hz)                   50
Connecting method of windings    Dyn11
U1N (kV)                         10
IN (A)                           11.55
U2 (kV)                          0.4
R1 (Ω)                           2.998
L1 (H)                           50.76e−3
R2 (Ω)                           0.0038
L2 (H)                           0.2e−3
Rk (Ω)                           5.373
Xk (Ω)                           60.0445
Lk (H)                           0.1911
Rm (Ω)   Xm (Ω)   Lm (H)   Rs (Ω)   Ls (H)   k   s   Rtotal (Ω)   Ltotal (H)   Lk,measured (H)

4 Simulation Analysis

The Matlab/Simulink simulation model is illustrated in Fig. 2. The transformer is a three-phase double-winding transformer with a capacity of 200 kVA and a rated voltage of 10 kV/0.4 kV; its other detailed parameters are listed in Table 1. The value of the short-circuit impedance can be calculated from the decay process of the short-circuit current when the short-circuit impedance is unknown. The transformer enters its short-circuit state when the voltage of phase A crosses the zero point, and its current begins to decay; the decaying current waveform of phase A is shown in Fig. 3.

Fig. 3. Short circuit current attenuation waveform in phase A

Fitting the current waveform after the short circuit with the Prony method yields the decaying time constant of the DC component; the fitting result is shown in Fig. 4. The analytical expression of Eq. (10) is obtained from the curve fitting; the decaying expression of the DC component is:

$$I_a = 12000 \cdot e^{-31t} + 7600 \cdot \cos(314\,t) \quad (10)$$

Fig. 4. Curve fitting results

where I_a is the short-circuit current in phase A and t is the time, both in SI units.

Table 2. Comparison of transformer short-circuit inductance parameters and fitted results

                              Model    Fitted   Error (%)
Time constant (s)             0.0322   0.0323   0.31
Short-circuit inductance (H)  0.1911   0.1942   1.62

According to the zero-input response theory of the first-order R-L circuit, the coefficient 31 in the expression is the reciprocal of the decaying time constant of the DC current component:

$$\tau = \frac{L}{R} \quad (11)$$

According to Eq. (11), together with the voltage and current data of the short circuit and the calculated resistance value R, the short-circuit inductance can be obtained, from which the short-circuit reactance follows. The calculation results are shown in Table 2. As the table shows, the error between the inductance identified from the short-circuit current waveform and the parameter of the transformer model itself is rather small, which demonstrates the effectiveness and accuracy of this plan.
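The identification arithmetic of formulas (10) and (11) can be checked against Table 2. The total short-circuit resistance used below is an assumed value (about 6.02 Ω), inferred only so that L = τR reproduces the fitted inductance; it is not stated explicitly in the text:

```python
# Formula (10): I_a = 12000 * exp(-31 t) + 7600 * cos(314 t)
decay_rate = 31.0              # 1/s, exponent of the fitted DC component
tau = 1.0 / decay_rate         # formula (11): tau = L / R, so tau ~ 0.0323 s

# Assumed total short-circuit resistance (hypothetical, chosen to be
# consistent with the fitted inductance reported in Table 2).
R_total = 6.02                 # ohm
L_identified = tau * R_total   # identified short-circuit inductance, in H

# Relative error against the model inductance from Table 2
L_model = 0.1911               # H
error_percent = abs(L_identified - L_model) / L_model * 100
```

Under this assumption the identified inductance comes out near 0.1942 H with an error of about 1.62%, matching the values reported in Table 2.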


Fig. 5. 3 kVA transformer

5 Test Verification

A short-circuit endurance test of a transformer has been conducted using a distribution transformer and the test platform, to verify the correctness and effectiveness of the Prony method. The 3 kVA transformer in the lab is illustrated in Fig. 5. Based on the short-circuit endurance test of the transformer, with the voltage and current data of each phase of the winding collected by the power analysis equipment and the Prony decomposition applied to the collected data, the DC decaying component of the current in each phase can be calculated, and the reactance value of each phase further obtained. The voltage and current waveforms of each phase recorded in the on-spot data test are shown in Figs. 6 and 7.

Fig. 6. Short circuit instantaneous phase voltage waveform diagram

Fig. 7. Short circuit instantaneous phase current waveform diagram

Fig. 8. Short circuit current waveform of winding in phase A

Fig. 9. The fitting results

The instantaneous short-circuit current waveform of phase A of the transformer, obtained by analyzing one of the phases, is shown in Fig. 8. It is clear from the figure that the first three cycles of the short-circuit current show a clear decaying tendency. From the voltage and current data of the short circuit, the value of the resistance R can be calculated, and the Prony algorithm is applied to the above data; Fig. 9 shows the decomposed DC component. The DC decaying component after fitting is:

Table 3. Comparison and error analysis of short-circuit inductance in phase A

                          Measured value (mH)   Theoretical value after calculation (mH)   Fitted value (mH)
Short-circuit inductance  0.292                 0.280                                      0.3047
Error (%)                 –                     4.109                                      4.349

$$I = 12 \cdot e^{-1500\,t}$$

In this way, the value of the short-circuit inductance is obtained as 0.3047 mH. By comparing the inductance value measured with an inductance meter, the inductance value calculated theoretically from the short-circuit test, and the inductance value fitted by the Prony algorithm, the accuracy of the Prony algorithm can be verified. The short-circuit inductance values of phase A obtained by these three methods, together with their errors, are listed in Table 3. The error between the phase-A short-circuit inductance fitted by the Prony method and the theoretically calculated value is around 4%, which lies in a reasonable range and verifies the correctness of the Prony method.

6 Conclusion

The steps of the online identification method based on the Prony method in this paper are as follows:

(1) Obtain the transient waveform of the short-circuit current of the transformer from the transformer short-circuit test;
(2) Decompose the current waveform of the winding to obtain the decaying DC component and the AC component of the winding current, and obtain the decaying time constant of the short circuit, i.e. the DC component $i_{As}(t) = I_m e^{-t/\tau}$, by fitting the transient waveform;
(3) Obtain the equivalent resistance of the system from the steady-state short-circuit data, and then obtain the equivalent short-circuit inductance L of the transient process.

The transient inductance value can be conveniently calculated with this online identification method for the short-circuit inductance, which can directly indicate the deformation status of the winding. It not only greatly saves test time and enhances test efficiency, but also greatly lowers the cost of manpower and material resources, enhancing the identification capability of the test and providing support for the test benefits.

Acknowledgments This project is supported by the Key Laboratory Opening Fund Project of Energy Saving Technology for Distribution Transformer in Beijing (China Electric Power Research Institute).

References

1. Sidhu, T.S., Sachdev, M.S.: On-line identification of magnetizing inrush and internal faults in three-phase transformers. IEEE Trans. Power Deliv. (1992)
2. Yan, L.: Common detection methods of power transformer winding deformation. Shandong Dianli Jishu 195, 46–48 (2013). (in Chinese)
3. Zhang, Z., Huan, H.: Simulation of power transformer winding deformation detection based on FEM frequency response method. Transformer 54(10), 19–20 (2017). (in Chinese)
4. Wei, H., Liu, Y.: On line diagnosis of distribution transformer winding deformation based on short-circuit reactance method. Electr. Meas. Instrum. 14, 47–48 (2014). (in Chinese)
5. Bellini, A., Filippetti, F., Tassoni, C., Capolino, G.-A.: Advances in diagnostic techniques for induction machines. IEEE Trans. Ind. Electron. 55, 4109–4126 (2008)
6. Wang, M., Vandermaar, A.J., Srivastava, K.D.: Improved detection of power transformer winding movement by extending the FRA high frequency range. IEEE Trans. Power Deliv. (2005)
7. Sun, X., He, W., Zhan, J.: Current situation and development of power transformer winding deformation detection and diagnosis technology. High Volt. Eng. 42(4), 1208–1209 (2016)
8. Li, Y., Li, L., Jing, Y.: Calculation and analysis of leakage magnetic field and passing short circuit impedance of axial double split generator transformer. High Volt. Eng. (2014). (in Chinese)
9. Ma, Y., Zhao, S., Gu, X.: Reduced order identification of low-frequency oscillation transfer function and PSS design based on improved multi-signal Prony algorithm. Power Syst. Technol. (2017). (in Chinese)
10. Grund, C.E., Paserba, J.J., Hauer, J.F.: Comparison of Prony and eigenanalysis for power system control design. IEEE Trans. Power Syst. (1993)
11. Feilat, E.A.: Prony analysis technique for estimation of the mean curve of lightning impulses. IEEE Trans. Power Deliv. (2006)

Construction of Mathematical Model for Statistical Analysis of Continuous Range Information System

Ye Zheng

School of Basic Education, Jiangsu Food & Pharmaceutical Science College, Huaian 223000, Jiangsu, People’s Republic of China
[email protected]

Abstract. Rough set theory is a new mathematical tool for dealing with uncertain knowledge, first proposed by the Polish scientist Pawlak in 1982. It has developed into an important research direction of artificial intelligence, has a very wide application background in data mining and knowledge discovery, and has seen many successful applications. Combining the concept of mutual information with intuitionistic fuzzy rough sets, a heuristic algorithm based on the mutual information gain rate in intuitionistic fuzzy rough sets is proposed, and a method for measuring attribute importance in a fuzzy decision table under an intuitionistic fuzzy environment is given; this method takes into account not only the size of the attribute range but also the distribution of its values. This paper focuses on rule extraction and knowledge reduction for the continuous range information system, gives a complete method of rule extraction and two new concepts of knowledge reduction, discusses the relationships between the various coordination sets, and discusses the effect of the allowable error on rule extraction.

Keywords: Continuous range · Statistical analysis · Model construction

1 Introduction

With the rapid development of the Internet, information and data in various fields have increased dramatically, and at the same time the problem of uncertainty has become more and more obvious. How to extract valuable, potential knowledge from massive, strongly interfered, messy data, and how to deal with incomplete, uncertain information in fields such as machine learning, pattern recognition, data mining, and intelligent control, is a subject of great concern in artificial intelligence (Pawlak 1982). Rule extraction and knowledge reduction are two important aspects of knowledge discovery. At present, research and achievements in rule extraction and knowledge reduction are limited to information systems whose conditional attribute values are discrete (Pawlak 1991) [1]. However, the problems encountered in practical work often involve continuous conditional attribute values, so it is important to study the continuous range information system [2].

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1376–1382, 2019.
https://doi.org/10.1007/978-981-13-3648-5_177

Since the original research literature on rough set theory was mostly in Polish, the theory did not receive much attention from international academic circles at the time (Toja-Silva 2016), and its study was confined to some countries in Eastern Europe until the end of the century, when it gradually attracted the attention of scholars all over the world. In recent years, because of its extensive application in machine learning (Barbosa 2014), knowledge discovery, data mining, and decision support and analysis, research interest has gradually warmed (Pisello 2014). The first International Symposium on Rough Set Theory, held in Poland, established the theory as a new research topic in computer science and showed that rough set theory and its applications have broad development prospects (Dongmei 2017). In this paper, we define a relation on the continuous range information system and the indistinguishable set of x, and thereby define the upper and lower approximation sets of knowledge in the information system. In addition, the concepts of rules and precision are proposed, and the division of U is successfully achieved (Wong 2016). The three previously defined knowledge reductions are extended to the continuous range information system, and two new definitions of knowledge reduction are given: rule reduction and precision rule reduction. The relationships between the two newly defined coordination sets and the originally defined coordination sets are discussed (Salata 2015). Finally, the influence of the system allowable error on rule extraction is discussed [3].

2 Continuous Range Information System 2.1

Definition of Continuous Range Information System

The target information system is a special information system with both conditional and target attributes (Jie 2015). Definition 2.1 We Call (U, A, F, D, G) for the Target Information System, where U is the object set, U = {x1, x2, …,xn} A is a set of conditional attributes, A = {a1, a2, …,ap} D is the set of target attributes, D = {d1, d2, …,dq} F is the set of relations between U and A, F = {fk: U ! Vk, k  p}, Vk is the range of ak G is the set of relations between U and D, F = {gk: U ! Vk′, k  q}, Vk′ is the range of dk. If the value range Vk (k  p), Vk′ (k  q) is a finite value, called Pawlak information system, if Vk (k  p). Is the range of [0,1], it is called continuous range information system [4]. For the continuous range information system, we can define the relation RWB and [x] WB: RWB = {(x, y): | fl (x) − fl (y) |  W, al2(B), W  0, where W is called the system allowable error.

1378

Y. Zheng

Now X U, We Can Get It in the Relationship RWB Under the Upper and Lower Approximation Set n o W RW B ðXÞ ¼ x : ½xB X n o W RW ð X Þ ¼ x: ½ x  \ X ¼ 6 B B For the target information system (U, A, F, D, G), we denote RD = {(x, y): gk (x) = gk (y) (k  q)}, then RD is X. The equivalence relation is U /RD = {D1, D2, …, Dr}. 2.2

Knowledge and Knowledge Base

Rough set theory holds that knowledge is the ability to classify objects, and the set of objects discussed is U, which is called domain. If X  U, then called X is a concept (formal concept). A collection of several concepts is called a knowledge [5]. The following discussion only discusses the division of knowledge on the domain U, so a knowledge is an equivalent relation on U. We call the tuple K = (U, R) as a knowledge base, where U is the universe, and R is the set of equivalence relations on U [5]. For the knowledge base K = (U, R), if P  U and P 6¼ u, then P is called indefinitely on P, and is ind (P), ieind (P) = P (Table 1). Table 1. Statistical analysis of continuous range information system U x1 x2 x3 x4 x5 x6

a1 0 0.2 0.4 0.6 0.6 1

a2 0.3 0.8 1 1 1 0.5

d 1 1 2 2 3 3

For x 2 U, R 2 R, remember [x] R = {y 2 U; (x, y) 2 R} is the equivalence class with x below R. The elements of U /R = {[X] R; x 2 U} are called R elementary concepts. U /R is called a knowledge. For P  U, it is clear that [x] ind (P) = [x] R, the commodity set U /ind (P) is called the basic knowledge of P, also known as P basic set, where element is called P basic concept The 2.3

Knowledge Reduction of Continuous Domain Information System

The concept of knowledge reduction in continuous range is introduced below. For the continuous range information system (U, A, F, D, G), given W  0, B A,

Construction of Mathematical Model …

1379

W (1) If x 2 U, _W B (x) = _ A (x), then B is the distribution coordination set. If B is a distribution coordination set, and any true subset of B is not a distribution coordination set, then B is a distribution reduction [6]. W (2) If x 2 U, VW B (x) = VA (x), then B is the maximum distribution coordination set. If B is the maximum distribution coordination set, and any true subset of B is not the largest distribution coordination set, then B is the maximum distribution reduction. W (3) If x 2 U, fW B (x) = fA (x), then called B is the allocation coordination set. If B is the allocation of coordination sets, and any true subset of B is not assigned to the coordination set, then B is called the allocation reduction. W (4) If CW B = CA , then called B is the rule coordination set. If B is a rule coordination set, and any true subset of B is not a rule coordination set, then B is a rule reduction. W (5) If B is the rule coordination set, and qW B = qA , then B is the precision rule coordination set. If B is a precision rule coordination set, and any true subset of B is not a precision rule coordination set, then B is the precision rule reduction.

The above five concepts of knowledge reduction are defined by defining a minimum coordination set that satisfies a certain condition. The first three coordination sets are from the point of view of the object. See the literature [2]. The second set of coordination sets introduced by this paper is from the point of view of set-valued functions, and the rule coordination set requires that the rules function be equal [7] (Table 2). Table 2. Fuzzy decision table

1 2 3 4 5 6

Temperature Hot Mild Cool 0.9 0.5 0 0.8 0.3 0.2 0.8 0.6 0.3 0.2 0.9 0.4 0.6 0.5 0.9 0 0.3 0.5

Hummidity High Normal 0.9 0.5 0.7 0.6 0.8 0.5 0.6 0.8 0.5 0.4 0.1 0.3

Windy Flase Ture 0 0.5 0.7 0.6 0.8 0.4 0.6 0.5 0.5 0.4 0.4 0.8

Class Positive 0.8 0.6 0.5 0.6 0.4 0.6

Negative 0.4 0.5 0.8 0.9 0.5 0.6

3 Construction of Mathematical Model for Statistical Analysis of Continuous Range Information System In 1982, Rough Set theory was proposed, although it was a powerful tool for dealing with fuzzy information, but it was powerless to deal with raw fuzzy data. Pawlack’s Rough set theory deals with symbolic values, which are clear data, but in people’s real life, it is more about fuzzy concepts and fuzzy knowledge. Classical rough set theory cannot deal with them effectively [7]. The authors promote nuclei and reduction, but they are limited to pre-given readings. However, they have the disadvantage of losing

1380

Y. Zheng

information relative to the more general fuzzy similarity relation, although the rough set is extended to the fuzzy environment, the fuzzy equivalence relation or the fuzzy similarity relation is put forward (Table 3).

Table 3. Data of construction of mathematical model x1 x2 x3 x4 x5 x6

x1

x2

x3

x4 x5 x6

ab cd abd ad acd

bcd b abc b cd bd b cd b cd

A new attribute importance measure method [28] in the algorithm W mutual information as the importance of the property, which tends to select the value range contains more value of the property, from the information theory point of view, is inclined to choose more The chaos of the property, in fact, this tendency is not necessarily reasonable. This paper presents a new method of measurement: Zða, R, DÞ¼ðWðRUa; DÞ  WðR; DÞÞ=HðaÞ (1) where H (a) = —Epi logpi, pi is the ratio of the number of objects whose value a is ai to the total number of objects N. Based on the above measurement method, a method based on mutual information gain Heuristic Attribute Reduction Algorithm [8]. The input of the algorithm is a compatible decision information system S = (U, CUD, V, f), where C is the set of conditional attributes and D is the decision attribute set. The output of the algorithm is a relative reduction of the decision system. The steps of the algorithm are described as follows: (1) Calculate the mutual information W (C; D) of the condition attribute set C and the decision attribute set D; (2) Calculate the kernel R = KD (C) and calculate M (R, D); (3) Let Ccandidate = C-R calculate the importance of each attribute in Ccandidate according to formula (1), and select Z (a, R, D) to achieve the largest attribute ai; (4) R = RUai; (5) If W (R, D) = W (C, D), then turn to step 3. Attribute reduction is one of the contents of rough set theory. The information in the literature is the attribute importance, which tends to choose more worthy attribute in the selected range, and the feeding tendency is not necessarily reasonable. Therefore, this paper presents an intuitionistic fuzzy rough set method based on mutual information gain rate. In order to obtain a better relative attribute reduction in the decision system, an attribute reduction algorithm based on mutual information gain rate is proposed. The algorithm considers the mutual information of the selected condition

Construction of Mathematical Model …

1381

attribute and the decision attribute, and also considers the distribution of the value of the selected attribute. The attribute importance measure method based on the mutual information gain rate is defined from the information theory, and this measure is heuristic Information, the algorithm adds the most important condition attribute from the empty set to the selection attribute set until the mutual information of the selected condition attribute set and the decision attribute set is equal to the mutual information of the whole set of attribute attributes and decision attributes. The results show that the algorithm can reduce the decision system more effectively, and reduce the number of objects after reduction. Based on the mutual importance of the information on the importance of the importance of measurement, and construct the corresponding heuristic algorithm [9]. The theory of information is a series of theories established by Shannon to solve the communication process. A system that communicates information consists of a source, a sink, and a channel that connects both. The information is used to eliminate the uncertainty, and the size of the information is measured by the uncertainty of the elimination. There is a lot of research on the importance of defining attributes from information theory, using conditional entropy, mutual information, and various combinations of the two. In decision-making information systems, it is important to note that the conditional attributes are more important for decision making, and the mutual information between conditional attributes and decision attributes reflects the size of the information. Rough set theory is proposed by Pawlak to deal with imprecise, Mathematical tools for incomplete problems. The emphasis on the knowledge of the knowledge base is to reduce the knowledge of the knowledge base and to derive the decision rules of the research problem [10]. 
We introduce the concept of mutual information into the fuzzy coarse sugar concentration, and use it to measure the relative importance of fuzzy attributes in fuzzy decision tables [11].

4 Conclusions This paper focuses on the fuzzy information system, and proposes a new extended relation—fuzzy indistinguishable relation. Based on this fuzzy indistinguishable relation, the properties of fuzzy reduction and fuzzy kernel and fuzzy relative reduction and relative kernel are discussed. Theorem and its realization are discussed one by one. And discusses the interdependence between attributes. In order to obtain the fuzzy reduction relative reduction kernel and fuzzy kernel relative kernel, we propose a fuzzy discriminant relative discrimination matrix and distinguish the relative distinguishing function. The relations between the corresponding properties and theorem and fuzzy reduction and fuzzy relative reduction are discussed. A quick and easy way to obtain a kernel and a reduction, to achieve ambiguous, dynamic simplification of fuzzy information, and to illustrate. Finally, this paper constructs the appropriate mathematical model of continuous numerical domain information system, and then confirms the establishment of the hypothesis, and validates the application potential of the model through examples.

1382

Y. Zheng

References 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11.

Barbosa, S., Ip, K.: Renew. Sustain. Energy Rev. 40, 1019–1029 (2014) Chan, A.L.S.: Energy 85, 620–634 (2015) Dongmei, H., Ledong, Z., Quanshun, D., et al.: J. Fluids Struct. 69, 355–381 (2017) Jie, L., Heidarinejad, M., Gracik, S., et al.: Energy Build. 86, 449–463 (2015) Krysgkiewing, M.: Int. J. Intell. Syst. 16, 105–120 (2001) Pawlak, Z.: Int. J. Comput. Inf. 11(5), 341–356 (1982) Pawlak, Z.: Klawer Academic Publishiers, Bosten (1991) Pisello, A.L., Castaldo, V.L., Poli, T., et al.: J. Urban Technol. 21(1), 3–20 (2014) Salata, F., Golasi, I., Vollaro, A., et al.: Energy Build. 99, 32–49 (2015) Toja-Silva, F., Lopez-Garcia, O., Peralta, C., et al.: Appl. Energy 164, 769–794 (2016) Wong, I., Baldwin, A.N.: Build. Environ. 97, 34–39 (2016)

An Electricity-Stealing Identification Method Based on Outlier Algorithm Ying Wang1(&), Mingjiu Pan2, Zhou Lan2, Lei Wang2, and Liying Sun2 1

2

China Jiliang University, Hangzhou 310018, China [email protected] Economic and Technical Research Institute of Zhejiang Electric Power Corporation, Hangzhou 310008, China

Abstract. In order to more effectively identify the electricity-stealing behavior of power users, and protect the interests of enterprises and legitimate users. Based on the analysis of the lack of current application of anti-stealing technology and in connection with the several common means of stealing, combined with the different data between normal electricity and abnormal electricity using and collected by electricity collection system equipped by most power supply companies. A new electricity-stealing identification method based on outlier algorithm is proposed. The main use of the outlier algorithm is based on the distance outlier algorithm. Based on the analysis of the distance-based outlier algorithm, the algorithm is simulated by MATLAB based on the other data. The validity and reliability of the algorithm will be analyzed after the field audit is carried out. Keywords: Outlier algorithm  Electricity-stealing system  Anti electricity-stealing



Electric data collection

1 Introduction The lack of electricity supply has made the State Grid Corporation attach more importance to the utilization of electricity, and has tried every means to ensure a low line loss rate. However, there are always some special areas where the line loss rate of the distribution network is high, and no doubt customers steal electricity is one of the reasons [1]. There are more and more high-tech means to steal electricity. The current anti-tampering technology only judges whether there is theft of electricity based on whether the line loss rate is greater than 15%. For those who are stealing electricity, how much electricity to steal, and when to steal electricity cannot be determined. With the development of remote meter reading technology and information management systems, power supply companies are equipped with power user power collection systems in most places to observe customer power consumption information at remote terminals. And through this system power supply enterprise has accumulated a large amount of historical data of electricity consumption, analyzed these historical data, found out the inherent rules of electricity consumption of users, and thus can

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1383–1388, 2019. https://doi.org/10.1007/978-981-13-3648-5_178

1384

Y. Wang et al.

excavate the behavioral characteristics of electricity consumption of users. The above analysis process can be implemented using the outlier algorithm in data mining technology [2–6].

2 Outlier Analysis 2.1

A Survey of Outliers and Detection Methods

There are some exceptions in the database. Such anomalies can be called outliers. Detection of outlier data is also referred to as exception mining or deviation detection. The method of detecting outliers is also an important part of data mining and is mainly used to find objects that are significantly different from most other objects. The types of outliers can be roughly classified into the following types: Classification from the scope of data can be divided into global outliers and local outliers. This classification is due to the fact that certain data objects are overall There is no outlier feature, but from a local point of view, it shows a certain degree of outliers. Classifying the type of the data itself can be divided into two types: outliers and numerical outliers. The criterion of this classification is to distinguish the dataset’s attribute types. Classifying from the number of data attributes can be divided into one-dimensional outliers and multi-dimensional outliers because one object may have one or more attributes. Outlier detection methods can be roughly divided into four categories: statistical model-based methods, distance-based outlier detection, density-based outlier detection, and cluster-based outlier detection [7–9]. 2.2

Distance-Based Outlier Detection Method

When power stealing occurs, the power parameters collected by the power consumption information acquisition system must be different. For example, if a certain phase in the three-phase current suddenly goes to zero, the voltage drop value is outside the normal fluctuations, such as this time fluctuation. The actual problem of large changes in data in the presence of anomalies is that the difficulty of describing or accounting with a distribution model is very large. Therefore, considering the participation of various factors, this paper applies a distance-based outlier detection algorithm to determine the stealing behavior. The distance-based outlier concept was first proposed by Edwin M. Noll and Raymond. The understanding of the distance-based outlier DB (pet, D) definition can be such that there is now a large data set. G and an object X to be judged, if a large data set G occupies pet (a value between 0 and 1, generally expressed as a percentage), the number of parts and the distance of the object X to be measured are greater than D (may As an outlier criterion, this object X can be judged to be a distance-based outlier with parameters pet and D. In simple terms, this data is very gregarious, and the distance to other data is often greater than the distance between other data. In the distance-based outlier detection algorithm, calculating the distance between objects as the Euclidean distance is currently one of the most commonly used methods. It refers to the actual distance between two data points or the distance between a vector

An Electricity-Stealing Identification Method …

1385

point and the origin. For example, the Euclidean distance calculated in twodimensional or three-dimensional space is the actual (linear) distance between two points [10]. In a common two-dimensional space, the Euclidean distance between the data object O1 (X1, Y1) and another data object O2 (X2, Y2) is calculated as Eq. (1). DðO1; O2Þ ¼

2.3

qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ðX1  X2 Þ2 þ ðY1  Y2 Þ2

ð1Þ

Distance-Based Outlier Detection Method

The original data collected by the power consumption information collection system to be measured should be preprocessed. When the amount of data is too redundant, the data can be reduced in dimension. When the attribute unit of the data to be detected or the range of values and the spatial dimension are not the same, the data should be standardized. Then the values of PCT and D are calculated, and then the data object to be measured is determined according to the definition of the outlier. This article selects the more intuitive positive active power based on the data collected by the actual power consumption information collection system (instantaneous active, reactive, three-phase current, three-phase voltage, forward active, one quadrant, and four quadrant reactive power). Analyze the application of distance-based outlier detection algorithms. In the data preprocessing phase, this paper analyzes the customer’s positive real power. Due to the effect of active power when using electricity, the active power at each time point will change slightly and fluctuate within a certain range. In addition, the frequency of current power acquisition system for users’ active acquisition is generally every 15 min. Once, there are as many as 96 data in 24 h. If analyzing one month or several months, big data sets high requirements for the accuracy of the system algorithm and the efficiency of analysis. Therefore, dimensionality reduction is required. In this paper, the dimensionality reduction method is to calculate the daily total active power of the user, i.e. the daily electricity consumption (the daily electricity consumption is the total active power of the current day minus the total active power of the previous day). In this way, when the power change occurs when the power is stolen, it can also reduce the monthly data to 30, eliminating the inconvenience caused by the interference data. 
For the calculation of PCT, this paper firstly averages the input data, and then averages the numbers after removing the average over a certain range. After calculating the absolute value of the difference between each data and the second average, The number within a certain range (which can be considered as the expected normal data size, which can generally be judged based on practical experience) is the proportion of the total sample for the PCT. The algorithm flow for selecting suspected users is shown in Fig. 1.

1386

Y. Wang et al.

Start

Data preprocessing

The range of the normal budget data is to take the percentage of the total sample

Find the Euclidean distance d, calculate the outlier criterion D

i=0 i=i+1(i ≤ n)

d≥D

The object is classified as suspected of stealing electricity

End Fig. 1. Distance-based outlier algorithm and electricity stealing user identification flow chart

3 Algorithm Verification In order to further determine the effectiveness and reliability of the algorithm in practical applications, the data collected by the power consumption information acquisition system was selected for MATLAB simulation verification. As shown in Fig. 2, this is a part of the electricity consumption data collected by the electrical information collection system for a certain textile company. Generally, the visual reflection of the data is difficult to judge. When using the data stealing method based on data mining described in this article, the data is analyzed; as shown in Fig. 2, after the MATLAB program running analysis,

An Electricity-Stealing Identification Method …

Fig. 2.

1387

The 72 days data of a textile corporation

the red circle part of the figure is the data The outliers (from the graph can be found very similar “outliers” around the outliers), but if the results are flawed in combination with the actual power situation, the data above the data mean is also judged to be outliers. Point, from the perspective of electricity use, stealing electricity will only be less active, and will not be more effective than daily, so the outliers above the mean are due to normal electricity data (high electricity consumption). After the problem is solved in the algorithm, the analysis and operation result of this group of data is shown in Fig. 3. Obviously, the result is satisfactory, and it can indicate the power stealing point in the data.

Fig. 3.

The outlier identification in the case of high power consumption is not considered

The five outliers identified by the algorithm are individually extracted as shown in Table 1. The first (lower than the mean 2) and the other 4 (above the mean 2) daily power consumptions of the test data can be visually displayed. The difference is very obvious. However, in actual investigation, the company did not stop production on this day, and there was no situation where the company did not consume active power, and it could be determined that data were abnormally generated by the power theft.

1388

Y. Wang et al. Table 1. Outliers detected by the algorithm and data values Sequence of the date W (kwh) 1 1.44 15 46.62 59 62.48 60 69.3 61 73.5

Average of W (kwh) 47.42 / 28.65

4 Conclusion In this paper, the power supply information acquisition system is basically equipped with the data mining technology in the data era development scenarios designed based on data mining stealing behavior identification method. The active power in two sets of electricity data of a textile mill was selected for application analysis. The results show that the proposed method for the identification of electricity stealing behavior is effective, and can accurately identify users who steal electricity, and provide a new direction for the practitioners of the power industry to use data mining technology to achieve efficient anti-theft. Acknowledgements The research was sponsored by the Fund Project of Zhejiang Province Natural Science Foundation for Youths (LQ17E070003).

References 1. Wu, A., Ni, B.: Power System Line Loss. China Electric Power Press (1999) 2. Wang, Z.: The Application of New Data Acquisition System in the Field of Anti-Tampering. Dalian Technology University (2006) 3. De lang, X.: Improvement of anti-tampering method of electric energy metering cabinet. China Mach. 6(11), 72–73 (2013) 4. Lu, F., Cheng Wen, W.: Research and design of customer relationship management system based on data mining. Power Inf. Technol. 5(7), 86–89 (2007) 5. Teng Fei, C.: Development and Application of On-Line Inspection Device Based on Electricity Information Collection System. North China Electric Power University (2012) 6. Chao, Cheng, Hang Jin, Z.: Research on anti-stealing research based on outlier algorithm and electric information collection system. Electr. Power Syst. Prot. Control 43(17), 70–74 (2015) 7. Chen, H.: Data Mining Technology in Power Management and Anti-stealing System Application and Research. Wuhan University (2004) 8. Hu, T.: Research on Outlier Detection Algorithm in Data Mining. Xiamen University (2014) 9. Jiang R., Lu R., Wang, Y.: Energy-theft detection issues for advanced metering infrastructure in smart grid. Tsinghua Sci. Technol. 19(2), 105–120 (2014) 10. Ze mao, Z., Kun jin H. et al.: Algorithm and application of abnormal data mining based on distance. School of Computer and Information Engineering, Hohai University 22(9), 106–107 (2005)

ANNs Combined with Genetic Algorithm Optimization for Symbiotic Medium of Two Oil-Degrading Bacteria Cycloclasticus Sp. and Alcanivorax Sp. Zhang Shaojun1,2, Wang Mingyu1, Liu Bingbing1(&), Pang Shouwen1, and Zhang Chengda1 1

2

College of Naval Architecture and Marine Engineering, Shandong Jiaotong University, 1508 Hexing Road, Weihai 264310, China [email protected] Weihai Marine Oil Pollution Treatment Engineering Technology Research Center, Weihai 264310, China

Abstract. In order to study the symbiotic medium of two petroleum degrading bacteria, Cycloclasticus sp. and Alcanivorax sp. Single factor experiment, uniform design and neural network combined with genetic algorithm were used to optimize medium components. The concentration of carbon source (diesel) is 0.5%, the concentration of nitrogen source (NH4)2SO4 is 8 g/L, phosphorus source is KH2PO4:Na2HPO4 mol ratio 3:1, yeast powder concentration is 0.03 g/L, culture time is 3D, culture temperature is 20, pH = 7.6, loading amount is 30 mL, inoculation amount is 1.5%, and rocking bed speed is 150 rpm. The test results were consistent with expectations. The degradation rate reached 89.80% after optimization and 14.4% higher than the original 78.50%. Through the optimization of the symbiotic medium, the culture medium suitable for the symbiosis of bacteria group is obtained, which shows that the neural network and genetic algorithm have remarkable superiority in the optimization of culture medium. Keywords: Artificial neural networks (ANNs) Oil-degrading bacteria

 Genetic algorithm 

1 Introduction Oil spill at sea brings huge ecological disasters to the ocean. After the accident, the maritime department has immediately invested a lot of manpower and material resources to clean up, but the effects of the residual oil on the sea water and the coastal zone still exist for a long time, and it is difficult to eliminate it. The final destination of residual oil is the natural degradation of microorganism, but the cycle of natural degradation lasted for several years or even decades[1, 2]. In order to shorten the damage cycle and accelerate degradation, screening and cultivation of highly efficient oil degrading bacteria has always been the focus of marine environmental research [3, 4]. © Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1389–1397, 2019. https://doi.org/10.1007/978-981-13-3648-5_179

1390

Z. Shaojun et al.

There are more than 200 known hydrocarbons in petroleum. The biodegradation advantages of different species are different. Single cultures are difficult to degrade compounds such as naphthenic hydrocarbons, alkanes and aromatic hydrocarbons. The complexity of petroleum components determines that the degradation of the petroleum components requires the joint participation of various microbes. The petroleum degrading bacteria group can make full use of the symbiotic synergistic effect of the degrading bacteria to achieve the simultaneous degradation of petroleum components [5, 6]. The interaction of microbes in the process of degradation constitutes a complex microecosystem. The microecosystem has the ability to self-control and self-adapt to external interference [7]. The more complex the structure is, the stronger the stability, and the stronger the ability to adapt to the environment change. When mixed culture, the nutrient components of each strain were found to be different. The antagonism between various components was complicated. The appropriate culture medium environment is the most important factor to ensure the growth of microbes and the more thorough degradation of residual oil. But the preparation of 10 kinds of bacteria group culture medium needs at least several hundred combinations. It is very necessary to build an intelligent algorithm to optimize the complex coexistence. To design a suitable symbiotic medium, the steps are mainly composed of three parts: experimental design, mathematical modeling and optimization design. Reasonable experimental design can use less experimental data to obtain better modeling effect. A reasonable model can accurately predict the influence of the medium ratio on the target, and the optimization method is to search out the optimal solution in the subspace on the basis of the established mathematical model. 
Artificial neural network (artificial neural network) is a “black box” model, which is based on the input and output data to build a model [8, 9]. The statistical information of the network is stored in the connection weight matrix, which can be used to reflect the problem that is difficult to describe in the conventional mathematical model under the nonlinear condition [10, 11]. Genetic algorithm (GA) is an efficient global optimization search algorithm based on natural selection and genetic theory, which combines the survival rule of the fittest and the random information switch system of the internal chromosome in the process of biological evolution. The combination of artificial neural networks and genetic algorithms can be used to describe the complex relationships among factors by introducing nonlinear models, and on the basis of genetic algorithms, the optimal values are found by global optimization. Many scholars have been using neural networks and genetic algorithms to optimize microbiological media [12, 13]. ANNs is an efficient information processing technology. It has a strong nonlinear mapping ability and can reflect very complex nonlinear relations. The number of input and output endpoints of the network is not limited. It is suitable for nonlinear modeling, prediction diagnosis and adaptive control of multivariable. The data obtained by different combination methods can find the regular prediction results through the learning training on the established neural network model. The task group has achieved good results by using artificial neural network to predict the emission of NOx [14] and polycyclic aromatic hydrocarbons (PAH) of diesel engines. Cycloclasticus sp. and Alcanivorax sp. are two strains of petroleum degrading bacteria isolated and screened in our laboratory, and have a good effect on the degradation of naphthenic and n-alkanes. In this paper, the optimization method of

ANNs Combined with Genetic Algorithm Optimization …


In this paper, an optimization method combining a neural network with a genetic algorithm is applied to optimize the culture medium of the two strains in order to improve petroleum degradation. This provides a theoretical basis and technical reference for the commercial production of petroleum degrading preparations and for their large-scale environmental application using inexpensive culture media.

2 Materials and Methods

2.1 Screening of Petroleum Degrading Strains

Enrichment, separation and purification of petroleum degrading bacteria: a measured amount of oil-contaminated soil was added to a 250 mL flask containing 100 mL of medium and incubated for 7 days on a rocking bed at 30 °C and 160 r/min; a portion of the culture was then transferred to a 250 mL flask containing 100 mL of fresh medium and shaken under the same conditions. This 7-day enrichment was repeated 3 times. An inoculation loop was used to streak the enrichment culture onto plates; after several streakings, the purified strain was transferred to a test-tube slant and stored in the refrigerator. Rescreening of petroleum degrading bacteria: 50 g of oil-contaminated soil was weighed, dissolved in 100 mL of sterilized medium, mixed into a slurry and placed in a 250 mL flask. Under aseptic conditions, 2 mL of the prepared bacterial suspension was inoculated into the flask. Each strain was tested in 3 parallel samples, with 2 blank controls: one with mercuric chloride added and one without bacteria. Cultures were shaken at 30 °C and 160 r/min, and a sample was taken every 2 days to measure the petroleum hydrocarbon content until it no longer changed. Bioremediation with petroleum degrading bacteria: 2 kg of oil-contaminated soil was placed in a round porcelain basin; the slant cultures SY23 and SY43 were activated, made into a solid medium with sawdust (1:7, mass ratio), and mixed into the oil-contaminated soil. The soil moisture content was kept at about 20%, and the soil was remediated at ambient temperature.

2.2 Culture Medium

See Table 1.

2.3 ANNs Recognition Model Establishment and Training Process

Additional nutrient components were added to the optimized fermentation matrix, and their effects on mycelial biomass were investigated in 3 replicates.

Z. Shaojun et al.

Table 1. Medium and formula used in the laboratory

Inorganic salt medium: NaCl 5.0 g, Na2HPO4 1.5 g, KH2PO4 3.5 g, (NH4)2SO4 4.0 g, MgSO4·7H2O 0.7 g, distilled water 1000 mL, pH 7.2–7.4; sterilization at 121 °C for 30 min.
Enrichment medium: Beef paste 3 g, peptone 10 g, NaCl 5 g, distilled water 1000 mL, pH 7.2–7.4; sterilization at 121 °C for 30 min.
Glucose carbon-source medium: Glucose 20 g, yeast powder 1 g, KH2PO4 3 g, NaCl 5 g, distilled water 1000 mL, pH 7.2–7.4; sterilization at 121 °C for 30 min; the solid version additionally contains 2% agar.
LB solid medium: Peptone 1%, beef paste 0.5%, NaCl 1%, distilled water 100 mL, agar 1.5%, pH 7.0; sterilization at 15 lb for 20 min.
Crude oil medium: Crude oil 2.0 g, Na2HPO4 1.5 g, KH2PO4 3.5 g, (NH4)2SO4 4.0 g, MgSO4·7H2O 0.7 g, distilled water 1000 mL, pH 7.2–7.4; sterilization at 121 °C for 30 min.
Slant medium: 2% (w/v) agar added to the enrichment medium, 3 mL dispensed per tube, sterilized at 121 °C for 30 min, then stored at 4 °C.
Plate medium: 2% (w/v) agar added to the enrichment medium, sterilized at 121 °C for 30 min, 15–20 mL poured per plate, refrigerated at 4 °C after solidification.
Oil plate (solid selection medium with crude oil as carbon source): 10× inorganic salt solution 10%, water 90%, yeast extract 0.1%, agar 1.5%, crude oil 2%, natural pH; sterilized at 15 lb for 20 min, poured into plates, then a thin layer of sterilized crude oil spread evenly on the surface.

Uniform test design of additional nutrient components: based on the single-factor tests of added nutrients, the nutrient components were selected and a uniform design with 8 factors at 6 levels and 24 runs was constructed; each treatment was performed in 3 replicates. Neural networks have strong nonlinear input-output mapping ability and are especially suitable for the highly nonlinear, unstructured models of microbial fermentation. The genetic algorithm is a directed global random search method with no restrictions on the objective function or search space, so it is well suited to optimizing neural network models and other non-explicit analytic functions. The optimization combining the BP neural network with the genetic algorithm is shown in Fig. 1. The experimental medium combinations are divided into a training group and a prediction group. The training group is used to train the BP neural network; the prediction group is then used to test the trained network, giving the neural network model. With the model output as the objective function of the GA, the optimal medium combination is found through the global search of the genetic algorithm.
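As an illustration of how a uniform design table drives the experimental runs, the sketch below maps a made-up table of level indices onto candidate factor settings. Both the toy table and the level values are hypothetical placeholders, not the paper's 8-factor, 6-level, 24-run design.

```python
# Sketch: mapping a uniform-design table to actual medium recipes.
# The design rows and level values below are illustrative, not the paper's data.
def build_runs(design, levels):
    """design: list of runs, each a list of 1-based level indices per factor.
    levels: dict mapping factor name -> ordered list of candidate settings."""
    factors = list(levels)
    runs = []
    for row in design:
        runs.append({f: levels[f][idx - 1] for f, idx in zip(factors, row)})
    return runs

# Two illustrative factors at 6 levels (the paper uses 8 factors, 6 levels, 24 runs)
levels = {
    "glucose_g_per_L": [0, 4, 8, 12, 16, 20],
    "yeast_g_per_L":   [0.0, 0.2, 0.4, 0.6, 0.8, 1.0],
}
design = [[1, 4], [2, 1], [3, 5], [4, 2], [5, 6], [6, 3]]  # toy design table
runs = build_runs(design, levels)
print(runs[0])  # medium recipe for the first run
```

Each run is then prepared as a medium and its measured biomass becomes one training sample for the network.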

Fig. 1. The optimization of combining BP neural network with genetic algorithm (the experimental runs are split into a training group that trains the BP network and a prediction group that tests it; parameters are adjusted until the fit is satisfactory, after which the trained network's output drives the GA through selection, crossover and mutation until the optimized medium components are obtained)
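The loop in Fig. 1 can be sketched end to end. This is a minimal NumPy illustration under stated assumptions: a small 8-5-1 network trained by backpropagation stands in for the paper's BP network, a toy GA with averaging crossover and Gaussian mutation stands in for its genetic operators, and the data are synthetic, not the paper's measurements.

```python
# Illustrative BP + GA loop: fit a small 8-5-1 network on medium/biomass data,
# then search the input space for the medium with maximal predicted biomass.
import numpy as np

rng = np.random.default_rng(0)

def init_net():
    return [rng.normal(0, 0.5, (8, 5)), np.zeros(5),
            rng.normal(0, 0.5, (5, 1)), np.zeros(1)]

def forward(net, X):
    W1, b1, W2, b2 = net
    return np.tanh(X @ W1 + b1) @ W2 + b2

def train(net, X, y, lr=0.05, epochs=2000):
    W1, b1, W2, b2 = net
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        err = (h @ W2 + b2) - y                 # gradient of 0.5 * MSE
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)        # backprop through tanh
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return [W1, b1, W2, b2]

def ga_search(net, pop=40, gens=60):
    P = rng.random((pop, 8))                    # media encoded in [0, 1]^8
    for _ in range(gens):
        fit = forward(net, P).ravel()           # predicted biomass = fitness
        parents = P[np.argsort(fit)[::-1][:pop // 2]]   # selection: best half
        kids = (parents[rng.integers(0, len(parents), pop // 2)] +
                parents[rng.integers(0, len(parents), pop // 2)]) / 2  # crossover
        kids += rng.normal(0, 0.05, kids.shape)                        # mutation
        P = np.clip(np.vstack([parents, kids]), 0, 1)
    return P[np.argmax(forward(net, P).ravel())]

X = rng.random((24, 8))                         # 24 design runs (synthetic)
y = (np.sin(X.sum(1, keepdims=True)) + 1) / 2   # synthetic biomass response
net = train(init_net(), X, y)
best_medium = ga_search(net)
print(best_medium)
```

The returned vector is the candidate medium ratio predicted to maximize biomass; in practice it would be verified experimentally, as the paper does.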

3 Results and Discussion

3.1 Data Statistics and Analysis

A program written in C++ and called from MATLAB 2016a, originally built for GA-based emission control (with emission concentration as the objective function), was modified to suit this problem. The errors at the learning and testing stages are described by the RMS, R² and mean error percentage values, defined respectively as:

$$\mathrm{RMS} = \left( \frac{1}{p} \sum_{j} \left| t_j - o_j \right|^2 \right)^{1/2} \qquad (1)$$

$$R^2 = 1 - \frac{\sum_{j} \left( t_j - o_j \right)^2}{\sum_{j} o_j^2} \qquad (2)$$

$$\text{Mean \% Error} = \frac{1}{p} \sum_{j} \left| \frac{t_j - o_j}{t_j} \right| \times 100 \qquad (3)$$

where t is the target value, o is the output value, and p is the number of patterns. The experimental results are used as training and test data for the ANNs, and the RMS, R² and mean error percentage values are used to compare them. According to the number of inspected components and the number of optimization indicators, the network topology is designed as 8→5→1 (Fig. 2).
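Eqs. (1)–(3) can be computed directly. Note that R² here follows the paper's definition, with the sum of squared outputs in the denominator rather than the variance about the mean:

```python
# Error metrics from Eqs. (1)-(3): RMS, R^2 (per the paper's definition),
# and mean percentage error between targets t and network outputs o.
import numpy as np

def rms(t, o):
    t, o = np.asarray(t, float), np.asarray(o, float)
    return np.sqrt(np.mean((t - o) ** 2))

def r2(t, o):
    t, o = np.asarray(t, float), np.asarray(o, float)
    return 1 - np.sum((t - o) ** 2) / np.sum(o ** 2)   # denominator: sum of o^2

def mean_pct_error(t, o):
    t, o = np.asarray(t, float), np.asarray(o, float)
    return 100 * np.mean(np.abs((t - o) / t))

t = [1.0, 2.0, 4.0]
o = [1.0, 2.0, 4.0]
print(rms(t, o), r2(t, o), mean_pct_error(t, o))  # 0.0 1.0 0.0 for a perfect fit
```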

Fig. 2. Artificial neural network topology

In this experiment, particle swarm optimization (PSO) is used to optimize the weights and thresholds of the BP neural network (8→5→1): 5 × 8 + 5 = 45 weights and 5 + 1 = 6 thresholds, so the individual coding length is 45 + 6 = 51. Of the 24 sets of data, 19 groups were used as training samples and 5 groups as test samples. The PSO parameters were set as follows: c1 = c2 = 1.49445; Maxgen = 100; Sizepop = 30; Vmax = 1, Vmin = −1; Popmax = 5, Popmin = −5. The absolute value of the difference between the simulated output of the neural network on the training data and the actual value is used as the fitness value of the PSO algorithm.
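A sketch of this PSO stage, using the stated settings (c1 = c2 = 1.49445, 30 particles, 100 iterations, velocity bound 1, position bound 5) and the training error as fitness. The data are synthetic and the code is a generic global-best PSO, not the authors' implementation:

```python
# PSO over the 51 parameters (45 weights + 6 thresholds) of the 8-5-1 network.
import numpy as np

rng = np.random.default_rng(1)
DIM = 8 * 5 + 5 * 1 + 5 + 1        # 45 weights + 6 thresholds = 51

def unpack(v):
    W1 = v[:40].reshape(8, 5); W2 = v[40:45].reshape(5, 1)
    return W1, v[45:50], W2, v[50:51]

def predict(v, X):
    W1, b1, W2, b2 = unpack(v)
    return (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()

def fitness(v, X, y):
    return np.mean(np.abs(predict(v, X) - y))   # Error = |Sim - Train|

def pso(X, y, n=30, iters=100, c1=1.49445, c2=1.49445, w=0.7,
        vmax=1.0, pmax=5.0):
    P = rng.uniform(-pmax, pmax, (n, DIM))      # particle positions
    V = rng.uniform(-vmax, vmax, (n, DIM))      # particle velocities
    pbest = P.copy()
    pcost = np.array([fitness(p, X, y) for p in P])
    g = pbest[pcost.argmin()].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, DIM)), rng.random((n, DIM))
        V = np.clip(w * V + c1 * r1 * (pbest - P) + c2 * r2 * (g - P),
                    -vmax, vmax)
        P = np.clip(P + V, -pmax, pmax)
        cost = np.array([fitness(p, X, y) for p in P])
        better = cost < pcost
        pbest[better], pcost[better] = P[better], cost[better]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()

X = rng.random((19, 8))             # 19 training samples, as in the text
y = X.sum(1) / 8                    # synthetic training targets
g_best, train_err = pso(X, y)
print(g_best.shape, round(train_err, 4))
```

The resulting 51-element vector initializes the BP network, which the GA stage then treats as a fixed predictor.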


Error = |Sim − Train|

where Sim is the output value of the neural network simulation and Train is the actual output value of the neural network training samples. For the genetic algorithm stage, the negative of the predicted value of the PSO-based BP neural network model is used as the fitness of the GA, so that the GA searches for the maximum of the BP network output, that is, the maximum biomass of the fermentation. Parameter settings: number of iterations Maxgen = 100, population size Sizepop = 30, crossover probability Pcross = 0.4, mutation probability Pmutation = 0.2. The genetic algorithm was implemented with the GAOT toolbox in MATLAB.

3.2 Artificial Neural Network Modeling and Its Global Optimization

Using the trained network model, the 5 prediction-set samples were simulated; the results are shown in Fig. 3. The error between the simulated and measured OD260 values is less than 10%, so the network generalizes well and can be used to predict biomass.

Fig. 3. Actual and predicted biomass (OD260) values for Cycloclasticus sp. and Alcanivorax sp.

During prediction, the network parameters were kept the same as in training; Table 2 shows the prediction results. As can be seen from Table 2, the identification success rate of the BP network is 100%, so it can be used for fingerprint identification of oil spilled at sea.


4 Conclusion

Artificial neural networks (ANNs) have good nonlinear approximation and self-learning ability; in theory they can approximate any complex nonlinear relation, but a network's complexity strongly affects its performance. In this experiment, the two petroleum degrading strains isolated and screened in the laboratory were used. A model was obtained by fitting the uniform-design experimental results with a BP artificial neural network based on the PSO algorithm, and the BP-GA model was then used for global optimization of the culture medium. The optimized medium is: carbon source (diesel) concentration 0.5%, nitrogen source (NH4)2SO4 8 g/L, phosphorus source KH2PO4:Na2HPO4 at a 3:1 molar ratio, yeast powder 0.03 g/L, culture time 3 days, culture temperature 20 °C, pH 7.6, liquid volume 30 mL, inoculum volume 1.5%, shaking speed 150 rpm. The experimental results agreed with the predictions: after optimization, the degradation rate reached 89.80%, 14.4% higher than the original 78.50%. The culture medium designed in this experiment lays a foundation for the commercial production and application of petroleum degrading bacteria.

References

1. Bejarano, A.C., Michel, J.: Large-scale risk assessment of polycyclic aromatic hydrocarbons in shoreline sediments from Saudi Arabia: environmental legacy after twelve years of the Gulf war oil spill. Environ. Pollut. 158(5), 1561–1569 (2010)
2. Joydas, T.V., Qurban, M.A., Al-Suwailem, A.: Macrobenthic community structure in the northern Saudi waters of the Gulf, 14 years after the 1991 oil spill. Mar. Pollut. Bull. 64(2), 325–335 (2012)
3. Chaerun, S.K., Tazaki, K., Asada, R.: Bioremediation of coastal areas 5 years after the Nakhodka oil spill in the Sea of Japan: isolation and characterization of hydrocarbon-degrading bacteria. Environ. Int. 30(7), 911–922 (2004)
4. Hassanshahian, M., Chaurasia, M., Tebyanian, H.: Isolation and characterization of alkane degrading bacteria from petroleum reservoir waste water in Iran (Kerman and Tehran provenances). Mar. Pollut. Bull. 73(1), 300–305 (2013)
5. Morales, G., Ferrera-Cerrato, R., Rivera-Cruz, M.C.: Diesel degradation by emulsifying bacteria isolated from soils polluted with weathered petroleum hydrocarbons. Appl. Soil Ecol. 121, 127–134 (2017)
6. Sarkar, P., Roy, A., Pal, S.: Enrichment and characterization of hydrocarbon-degrading bacteria from petroleum refinery waste as potent bioaugmentation agent for in situ bioremediation. Biores. Technol. 242, 15–27 (2017)
7. Lau, M.K., Baiser, B., Northrop, A.: Regime shifts and hysteresis in the pitcher-plant microecosystem. Ecol. Model. 382, 1–8 (2018)
8. Bhattacharya, S., Dineshkumar, R., Dhanarajan, G.: Improvement of ε-polylysine production by marine bacterium Bacillus licheniformis using artificial neural network modeling and particle swarm optimization technique. Biochem. Eng. J. 126, 8–15 (2017)


9. Yang, Q., Gao, H., Zhang, W.: Biomass concentration prediction via an input-weighed model based on artificial neural network and peer-learning cuckoo search. Chemometr. Intell. Lab. Syst. 171, 170–181 (2017)
10. Subashchandrabose, S.R., Wang, L., Venkateswarlu, K.: Interactive effects of PAHs and heavy metal mixtures on oxidative stress in Chlorella sp. MM3 as determined by artificial neural network and genetic algorithm. Algal Res. 21, 203–212 (2017)
11. Dhanarajan, G., Rangarajan, V., Bandi, C.: Biosurfactant-biopolymer driven microbial enhanced oil recovery (MEOR) and its optimization by an ANN-GA hybrid technique. J. Biotechnol. 256, 46–56 (2017)
12. Prakasham, R.S., Sathish, T., Brahmaiah, P.: Imperative role of neural networks coupled genetic algorithm on optimization of biohydrogen yield. Int. J. Hydrogen Energy 36(7), 4332–4339 (2011)
13. Pappu, S.M.J., Gummadi, S.N.: Artificial neural network and regression coupled genetic algorithm to optimize parameters for enhanced xylitol production by Debaryomyces nepalensis in bioreactor. Biochem. Eng. J. 120, 136–145 (2017)
14. Wang, M.Y., Zhang, S.J., Zhang, X.: Prediction of PAHs emitted from marine diesel engine using artificial neural networks combining genetic algorithms. Appl. Mech. Mater. 599–601, 1233–1236 (2014)

Application Analysis of Beidou Satellite Navigation System in Marine Science

Zhiliang Fan

Department of Information Engineering, High-Tech Institute of Xi'an, Xi'an 710025, China
[email protected]

Abstract. Globally, the shortage of resources is becoming increasingly serious. In the Twelfth Five-Year Plan, China emphasized the status of marine resources and planned their development. In the process of developing marine resources, advanced equipment and reasonable development planning are especially important, as they are at the core of the whole undertaking. This paper describes the function and basic structure of the Beidou satellite system and analyzes its working principle in detail. The Beidou system is explained mainly from four aspects, rescue, positioning, communication and data transfer, along with the significance and methods of applying the system to ocean engineering. Finally, the advantages of the Beidou system in marine engineering are summarized.

Keywords: Beidou system · Marine science · Location

1 Introduction

Marine engineering refers to the development, utilization, protection and restoration of marine resources. The main body of such a project is a new, rebuilt or expanded project on the seaward side of the coast, divided into coastal engineering, offshore engineering and deep-sea engineering. Facing the continued growth of the global population, the gradual exhaustion of land resources and the environmental crisis, countries pay more and more attention to marine resources, and the technology and equipment of marine engineering play a decisive role in their exploitation and utilization. With the development of science and technology, various advanced technologies and theories are being applied in the field of ocean engineering [1]. The Beidou satellite navigation system is a global positioning satellite navigation system independently built and operated by China and compatible with the other satellite navigation systems in the world.

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1398–1403, 2019. https://doi.org/10.1007/978-981-13-3648-5_180


2 Beidou Satellite Navigation System

2.1 System Composition and Function

The Beidou satellite navigation system developed by China consists of three segments: the user segment, the ground segment and the space segment. Beidou user terminals and other navigation devices make up the user segment; the ground monitoring stations, the master control station and the injection stations constitute the ground segment; and the space segment consists of a large number of satellites of various types [2]. The main capabilities of the Beidou satellites are as follows. First, high-density, high-speed information transmission: more than 120 Chinese characters can be transmitted in a single message, and the transmission speed is very fast. Second, high positioning accuracy: Beidou positioning not only works in all weather but also achieves high accuracy.

2.2 System Positioning Accuracy

The Beidou satellite navigation system independently developed by China has attracted the attention of experts at home and abroad. Its accuracy has been tested in many ways, using detection methods such as precise point positioning, single point positioning and pseudo-range differential positioning [3]. The test results are as follows: the accuracy of pseudo-range single point positioning is better than ten meters, the relative baseline positioning accuracy reaches the centimeter level, and the accuracy of dynamic pseudo-range differential positioning is better than four meters. Among these methods, pseudo-range single point positioning provides accuracy sufficient for general civilian navigation and positioning [4].
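As a rough illustration of how pseudo-range single point positioning works, the sketch below solves for a receiver position and clock bias from four satellite pseudo-ranges by iterative least squares. The geometry and units are synthetic, not real BeiDou ephemerides:

```python
# Pseudo-range single point positioning sketch: given n >= 4 satellite positions
# and measured pseudo-ranges rho_i = ||s_i - p|| + b, recover the receiver
# position p and clock bias b by Gauss-Newton iteration.
import numpy as np

def solve_position(sats, rho, iters=8):
    """sats: (n, 3) satellite positions; rho: (n,) pseudo-ranges."""
    est = np.zeros(4)                               # [x, y, z, clock_bias]
    for _ in range(iters):
        d = np.linalg.norm(sats - est[:3], axis=1)  # geometric ranges
        pred = d + est[3]                           # predicted pseudo-ranges
        J = np.hstack([(est[:3] - sats) / d[:, None],   # d(pred)/d(position)
                       np.ones((len(sats), 1))])        # d(pred)/d(bias)
        delta, *_ = np.linalg.lstsq(J, rho - pred, rcond=None)
        est += delta
    return est

# Synthetic check: four satellites, a known receiver, arbitrary units.
sats = np.array([[20000.0, 1000.0, 2000.0],
                 [1000.0, 20000.0, 3000.0],
                 [2000.0, 3000.0, 20000.0],
                 [15000.0, 15000.0, 15000.0]])
true_pos = np.array([100.0, 200.0, 50.0])
bias = 30.0
rho = np.linalg.norm(sats - true_pos, axis=1) + bias
est = solve_position(sats, rho)
print(est)
```

Real receivers additionally correct for satellite clock, ionospheric and tropospheric errors; differential positioning cancels much of those, which is why it reaches the higher accuracies quoted above.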

3 The Application of Beidou System in Marine Science

3.1 High Precision Positioning

Against the background of the continuous development of science and technology in China, the construction of the ground segment of the Beidou system has become more complete, further advancing Beidou satellite technology. On this basis, the accuracy of China's Beidou satellite navigation system has reached the standard required for deep sea exploration. The system has formally provided services to the whole world since June 2015, enabling Beidou users to receive signals of different service levels in any corner of the world, with usable precision even at the lowest level. Figure 1 shows the satellite-based high-precision augmentation service system [5].


Fig. 1. Satellite-based high-precision augmentation service system

3.2 Ship Operation Management and Command and Dispatch

The fine construction of ships under modern scientific and technological conditions is highly dependent on information. Construction information such as ship position, speed, course, draught, rake depth, mud concentration, mud flow velocity, engine speed and fuel consumption is collected by sensors and must be transmitted to the central platform in time for analysis, storage and display. In addition to real-time operation status reports, operating ships submit daily construction and ship machine reports in a fixed format [6]. Ship operation management and command and dispatch is divided into two parts: the management and command-and-dispatch center, and the corresponding ship operation management system established on each working ship. The structure of the center is shown in Fig. 2. As the figure shows, the work information transmitted to the Beidou satellite transmission center is produced by the operating ship [7]. The control center analyzes and stores the related information, and all data are put on record. The subsystems of each department and the central management system receive all kinds of data in real time and manage the business with the support of the database. During operation of the central control system, the central management station can send information to its subordinate working ships.
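The fixed-format reports described above might be modeled as a compact record. The field names below are assumptions for illustration, not the actual reporting protocol; since Beidou short messages carry a limited payload (the text above notes just over one hundred Chinese characters per message), a compact encoding matters in practice.

```python
# Hypothetical sketch of a fixed-format construction report, serialised for
# transmission from a working ship to the central platform.
import json
from dataclasses import dataclass, asdict

@dataclass
class ShipWorkReport:
    ship_id: str
    lat: float
    lon: float
    speed_kn: float            # speed over ground, knots
    course_deg: float
    draught_m: float
    rake_depth_m: float        # dredge rake depth
    mud_concentration: float
    mud_flow_velocity: float
    engine_rpm: float
    fuel_consumption_lph: float

def encode(report: ShipWorkReport) -> str:
    """Serialise a report for transmission to the central platform."""
    return json.dumps(asdict(report), sort_keys=True)

def decode(msg: str) -> ShipWorkReport:
    return ShipWorkReport(**json.loads(msg))

r = ShipWorkReport("CSD-01", 38.98, 117.75, 3.2, 95.0, 4.5, 12.0,
                   1.18, 4.6, 1450.0, 210.0)
assert decode(encode(r)) == r      # lossless round trip
print(encode(r)[:60])
```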

3.3 Real Time Data Service

Tidal level data is the basis for meticulous dredging construction. As ships become larger, construction areas lie ever farther from shore, and real-time transmission of tidal level data becomes more and more difficult.


Fig. 2. Architecture of ship operation management, command and dispatch center

Beidou satellite communication can solve this problem well. Owing to the long-distance transmission capability of the Beidou satellites, real-time tidal data along the whole coast can be collected and managed, including tidal information in the construction areas of Southeast Asia and West Asia, realizing full coverage of the tidal information service in the dredging field, with a data publishing service system built on an SOA architecture. The Beidou tidal level information service is all-weather and region-wide; its architecture is shown in Fig. 3 [8].

Fig. 3. Architecture of the tidal level information service in Beidou

4 Conclusion

The Beidou satellite navigation system developed by China has great advantages in positioning and information transmission. It stands at a high level among global positioning systems, demonstrating China's scientific, technological and comprehensive national strength [9]. The application of the Beidou system in marine engineering has greatly promoted the development of China's marine engineering and provides a higher guarantee for the life and safety of the personnel involved. The research and development of various Beidou user terminals has also promoted the industrialization of the Beidou system in China. The system has been successfully applied in the new Beidou sounding system and in intelligent transportation, and in the civil field the market for Beidou in China will continue to expand. The key to the global development of the Beidou satellite is to build a large and well developed Beidou industry [10].

References

1. Li, W.Q., Fu, X., Wang, W.Y., et al.: Application of BeiDou 2nd generation satellite navigation system in marine data buoy supervision and management. Shandong Sci. 25(6), 21–26 (2012)
2. Chen, X.E., Guo, S.J., Qin, H.F., et al.: Integrated shore-board monitoring system for marine incinerator based on Beidou satellites navigation system. Ship Eng. (2016)
3. Yang, Y.X., Li, J.L., Wang, A.B., et al.: Preliminary assessment of the navigation and positioning performance of BeiDou regional navigation satellite system. Sci. China Earth Sci. 57(1), 144–152 (2014)
4. Zhou, S.S., Hu, X., Li, L., et al.: Applications of two-way satellite time and frequency transfer in the BeiDou navigation satellite system. Sci. China (Phys. Mech. Astron.) 59(10), 109511 (2016)
5. Zhu, J., Wang, J., Zeng, G., et al.: Precise orbit determination of BeiDou regional navigation satellite system via double-difference observations. In: China Satellite Navigation Conference (CSNC) 2013 Proceedings, pp. 77–88. Springer, Berlin (2013)
6. Zhang, Q., Sui, L., Jia, X., et al.: SIS error statistical analysis of Beidou satellite navigation system. Geomatics Inf. Sci. Wuhan Univ. 423(1), 175–188 (2014)


7. Liu, W., Hu, Y.: Application of generalized binary offset carrier modulation in BeiDou satellite navigation system. J. Commun. Technol. Electron. 59(11), 1206–1214 (2014)
8. Wang, X., Liu, R.: Constellation design of the Beidou satellite navigation system and performance simulation. In: Proceedings of 2012 National Conference on Information Technology and Computer Science (2012)
9. Yang, Y., Xu, Y., Li, J., et al.: Progress and performance evaluation of BeiDou global navigation satellite system: data analysis based on BDS-3 demonstration system. Sci. China Earth Sci. 61(5), 614–624 (2018)
10. Huang, L.: Commercial development strategy of Beidou satellite navigation system. National Defense Science & Technology (2017)

Research on Optimization and Innovation of ERP Financial Module

Gao Shuang and Yang Lingxiao

Dalian Neusoft University of Information, No. 8, Software Park Road, Dalian, China
[email protected]

Abstract. With the development of informatization, intelligent technology and third-party payment platforms in China, the market size of ERP is increasing year by year. Based on an analysis and investigation of enterprise ERP financial implementation, this paper examines the problems existing in implementation according to the management status of the enterprise financial module. Building on this analysis and on existing developments, the paper proposes an innovative optimization scheme covering detailed demand analysis of the accounting process, the establishment of the database and the use of electronic vouchers. It lays a solid theoretical foundation for the rapid implementation and application of an advanced management mode and has important theoretical value for the future development of enterprise financial management.

Keywords: Electronic voucher · Three-flow integration · Information fusion · Financial module

1 ERP Overview

ERP integrates information technology with advanced management ideas and can greatly reduce the manpower, material and financial resources required; it has become a cornerstone of the survival and development of modern enterprises [1]. Originally developed as enterprise management software covering material resource management (logistics), human resource management, financial resource management and information resource management (information flow), it was quickly accepted by businesses worldwide and has developed into one of the modern enterprise management theories.

The financial module is the core of ERP and is based on the flow and exchange of information between the various modules. The main process at this stage is as follows. First, the accounting information involved in the enterprise is entered into the system, and the financial data are retrieved and processed to obtain feedback. Second, the data are summarized according to the current accounting subjects [2]. Finally, the enterprise's financial staff use the current financial statements of each stage as a reference to output charts. This provides detailed interpretation and prediction for the sales, procurement plans and final performance evaluation of the enterprise [3]. The enterprise financial module and each module link are interrelated and mutually reinforcing (see Fig. 1).

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1404–1408, 2019. https://doi.org/10.1007/978-981-13-3648-5_181

Fig. 1. The relationship between the enterprise financial module and the other module links: interrelated and mutually reinforcing

2 The Management Status of the Enterprise Financial Module

In China, most enterprise management systems are outsourced, and this software is not necessarily suitable for the enterprise. Enterprises often treat the ERP system as a technical project rather than a management project [4]. In the existing financial module, the primary functions are to collect, store and process accounting information and to assist its management, ignoring the more important management idea of providing decision support for enterprise decision-makers and employees. A business is like a big machine: it works well only when its departments work together, and the company's procurement, sales, inventory and other links are closely related to finance. As third-party payment becomes more widespread, electronic certificates are gradually replacing paper vouchers, and the entry of financial data is expected to enter an era of paperless, automated work. Under the existing working mode, however, original vouchers and accounting vouchers must be sorted and transferred between departments [5]; staff spend a great deal of energy on this and may still make omissions, resulting in low efficiency. In the existing accounting process there may be missing data and filling errors during voucher entry, a risk of information omission in departmental audits, and many systems provide no data integration and analysis function, so the timeliness and accuracy of financial information cannot be fully guaranteed (Fig. 2).


Fig. 2. The existing accounting process

For enterprises to develop better, a long-term view is needed: perfect the functions of the financial module, let employees understand the advantages of information systems management, strengthen each department's regulation and control of business processes, and make the financial module adjustable and flexibly configurable, improving the overall operating efficiency of the enterprise [6].

3 Optimization and Innovation of the Financial Module Based on ERP

By directly encrypting electronic vouchers on the machine and transferring them in an orderly way to the corresponding department for review, the scheme not only avoids data loss and wasted time but also strengthens supervision and control of each department's business processes. To ensure the authenticity, integrity and reliability of financial data, the scheme collects and applies the data, integrates all documents, and establishes a consolidated database holding the financial information of each module, from which quarterly reports and account books are generated automatically. This is efficient, facilitates the retrieval and classification of information, speeds up its transmission, and avoids the errors that occur during manual bookkeeping. It better realizes the integration of the three flows, optimizes the integration of production, supply and marketing, reduces enterprise costs, and ensures the timeliness and accuracy of financial information. The specific operation process of the scheme is as follows:
(1) Logging in with ordinary user permissions, the accounting personnel scan the reviewed paper vouchers pasted on fixed books into PDF format with an electronic scanner, convert them into editable text, and enter them into the database. Electronic vouchers, once examined and qualified, are scanned into the computer and recorded into the database directly.
(2) The audit personnel receive the scanned documents directly, integrate the data in the ERP database with the obtained original vouchers, filter, supplement and process the information, and apply an electronic signature after verification and matching. This brings financial control

Research on Optimization and Innovation of ERP Financial Module

(3)

(4)

(5)

(6)

1407

awareness into the business cycle of the whole enterprise, improve the financial management ability, improve the overall transmission speed, and make the accounting data safe and accurate. The database is classified and processed according to the financial data entered, and the data sub-modules such as purchasing and sales are kept, such as the inventory status of records and the purchase price, the sales situation of a product in different months, the sales price, the quantity of sales, the price discount of different customers, etc. The way that this information is stored is not only able to extract information about all kinds of data and modules that you want to look at, but you can further study the amount of inventory, purchasing, and sales of each of these materials. Moreover, perfecting constantly updated in the database data, can be generated automatically according to the common situation entries, and it can help managers to understand the situation of the logistics, information flow and cash flow volatility [8]. According to different trading subjects, the company will conduct automatic inspection reconciliation. the main contents include accounts receivable and other accounts receivable and accounts payable, other payables and advance and advance payment, such as dividends and dividends payable receivable account reconciliation rules, and main business income and main business cost, other business income and other business costs such as reconciliation rules [9]. The report module sets up an internal transaction data collection form, and it automatically collects internal trading data from the ledger through the database, and then the PC is running through the internal transaction data, which can be monitored at any time, so that the transcripts are more standardized. Comprehensive utilization of integrated data to automatically create personalized reports. 
Based on the information stored in the database, such as the quantity fluctuation of the sales product, the price fluctuation, the change of the purchase demand of the customer, the price fluctuation of the raw material provided by the supplier, etc., further study the production and operation situation, and automatically generate the trend analysis table, so that the management personnel can master the sales status of each quarter, make more appropriate planning for product customization, realize the classification statistics and summary analysis, and automatically generate various management reports and analysis reports while processing various data, which is beneficial to the management personnel to make correct decisions. In units of year, The financial management module is used to construct a scientific financial analysis system. Through the analysis and measurement of all financial statements (such as financial statements, cash flow statements, assets income statement, sales trend chart, and price floating table, etc.), it analyzes the sales trend of its products, analyzes whether the sales volume of the product is related to the season, the purchase price and the change of the market environment, etc., to find out the weak links existing in the company, and then take effective measures to improve and improve the management, so as to make the management more refined, improve the management efficiency, and ensure the economic efficiency of the enterprise to improve steadily [10].
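The automatic inspection and reconciliation of paired accounts described above can be sketched in a few lines of Python. The function name, the account subjects and the amounts below are illustrative assumptions, not part of the paper's system:

```python
from collections import defaultdict

def reconcile(receivables, payables):
    """Pair receivable and payable entries by trading subject and
    report any subject whose totals do not match."""
    totals = defaultdict(lambda: [0.0, 0.0])
    for subject, amount in receivables:
        totals[subject][0] += amount
    for subject, amount in payables:
        totals[subject][1] += amount
    # Keep only the subjects whose two sides disagree.
    return {s: (r, p) for s, (r, p) in totals.items() if abs(r - p) > 1e-9}

# Hypothetical ledger entries: (trading subject, amount)
recv = [("CustomerA", 1000.0), ("CustomerB", 250.0)]
pay = [("CustomerA", 1000.0), ("CustomerB", 200.0)]
print(reconcile(recv, pay))  # → {'CustomerB': (250.0, 200.0)}
```

In a real ERP module the two entry lists would be queried from the consolidated database, and the reconciliation rules ([9]) would decide which account pairs to match.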


G. Shuang and Y. Lingxiao

4 Conclusion

The financial management module in the ERP environment is a relatively sophisticated management business. To keep pace with the development of the information age, it helps the enterprise understand its financial situation and overall development and improve its competitiveness. Starting from the core of the ERP system, the enterprise should develop a more reasonable and optimal information plan, advance and innovate the individual modules as far as possible, maximize the efficiency of the financial budget, and thereby pursue the goal of maximizing business interests.

References

1. Qi, L.: Combined with the traditional accounting on the internal control of enterprises under the accounting information system. Chinese Enterprise Accounting of Villages and Towns 3, 147–148 (2013)
2. Lin, Chen: Accounting information environment under the college bank reconciliation issues and thinking. Contemp. Econ. 14, 84–85 (2013)
3. Li, Zhang: One Belt and One Road strategy should focus on issues and implementation path. China Econ. Trade Herald 27, 13–15 (2014)
4. Wang, M., Zhao, X.: Study on internal control and risk management of financial accounting information system under ERP environment. Friends Account. 25, 55–57 (2013)
5. Nan, Z.: The monitoring method of ACS electronic reconciliation result confirmation needs to be improved. Financ. Econ. 11, 154–156 (2013)
6. Prieto, N., Uttaro, B., Mapiye, C., Turner, T.D., Dugan, M.E.R., Zamora, V., Young, M., Beltranena, E.: Meat Sci. 98(4), 585 (2014)
7. Riovanto, R., De Marchi, M., Cassandro, M., Penasa, M.: Food Chem. 134(4), 2459 (2012)
8. Prieto, N., Dugan, M.E.R., López-Campos, O., McAllister, T.A., Aalhus, J.L., Uttaro, B.: Meat Sci. 90(1), 43 (2012)
9. Prieto, N., López-Campos, O., Aalhus, J.L., Dugan, M.E.R., Juárez, M., Uttaro, B.: Meat Sci. 98(2), 279 (2014)
10. Pla, M., Hernández, P., Ariño, B., Ramírez, J.A., Díaz, I.: Food Chem. 100(1), 165 (2007)

The Initial Application of Interactive Genetic Algorithm for Ceramic Modelling Design

Xing Xu1(&), Jiantao Pi1, Ao Xu1, Kangle He1, and Jing Zheng2

1 School of Information, Jingdezhen Ceramic Institute, Jingdezhen 333403, China
[email protected]
2 School of Foreign, Jingdezhen Ceramic Institute, Jingdezhen 333403, China

Abstract. The interactive genetic algorithm has very high application value and development prospects in design fields dominated by the subjective perception of human beings, such as music creation, art creation, architectural design and product color design. This paper applies the interactive genetic algorithm to the design of ceramic products, a field where it has not been applied before, and surveys evolutionary algorithms in different application fields. The design scheme of ceramic products mainly comes from subjective aesthetic viewpoints and modes of thinking. In order to obtain the ceramic product that most satisfies the user in terms of design style and color matching, this paper utilizes the interactive genetic algorithm, transforming the problem of optimizing the design of ceramic products into an implicit performance optimization. The specific steps are: first, the design scheme and color matching are coded; then, through the operations of the genetic algorithm such as selection, crossover and mutation, the next generation is generated; then the user repeatedly fills in a satisfaction value for each individual. Iteration by iteration, the ceramic product's modelling and color cater more and more to the user's preferences, and after many evolutionary processes the optimal individual most favored by the user is finally generated.

Keywords: Interactive genetic algorithm · Genetic algorithm · Ceramic · Modelling design
© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1409–1419, 2019. https://doi.org/10.1007/978-981-13-3648-5_182

1 Introduction

The development of history is the development of science, culture and technology, and ceramics play an important role in the history of science, technology and culture in China. Ceramics is the collective name for pottery and porcelain. Pottery was invented in the Neolithic period, around 8000 BC. The Yellow River basin, mother of hundreds of millions of Chinese descendants, was the birthplace of the earliest painted pottery. The earliest painted pottery culture in the world dates back to the first-stage culture of the Earth Bay in Gansu, when pottery already had a regular shape and simple ornament [1–3]. Over the long course of history, porcelain was invented after pottery; the first porcelain to appear was Celadon. Compared with pottery, porcelain has many unique advantages, such as strong texture, durability, delicacy and density, and glazed porcelain has high leakage resistance. In the Eastern Han Dynasty, the Yue Kiln appeared in Zhejiang, where Celadon gradually matured. With the continuous development of society and the progress of technology, Celadon dominated the ceramic market during the Wei, Jin, and Northern and Southern Dynasties. At the same time, white porcelain appeared in the north of China and gradually developed, rivaling Celadon in the Tang Dynasty. As leaders, both white porcelain and Celadon occupied a high position in the ceramic market, forming the pattern of "South Green, North White". The Tang Dynasty was a flourishing time in ancient Chinese history with great economic, cultural and political development. Against this historical background, ceramics made great progress, the greatest achievement being the Tang Tricolor Pottery known through ancient and modern times: three colors of glaze, yellow, green and white, painted on the embryos, which is how Tang Tricolor Pottery got its name. Tang modelling is rich and diverse, exquisite and vivid, with all kinds of characters, animals, flowers and birds drawn on the porcelain surface. During the Song Dynasty, with the rapidly growing commodity economy, the demand for and use of ceramic products increased. At this time, different ceramics with various styles and merits appeared in different regions, such as the five great kilns, namely Guan Ware, Ge Ware, Ru Ware, Ding Ware and Jun Ware. In the Ming and Qing dynasties, the level of ceramic production achieved a greater breakthrough, reaching a peak of perfection. Molded ceramics were both practical and ornamental, with varieties including blue glaze, red glaze, yellow glaze, peacock green glaze and black glaze, giving pleasure to both the mind and the eyes.
The development and evolution of ceramic product modelling design has mainly gone from simplicity to complexity, from low-level to high-level, and from practicality to appreciation and then to a combination of the two. With the influence of modern industry on every aspect of people's lives, the style of traditional Chinese ceramics is changing: the simple and honest traditional style has become rare, replaced by the modern style pursued by the majority of ceramic manufacturers, marked by dynamic forms, geometry, curves and images. The design and manufacture of ceramic products should have a natural style of smoothness and harmony, as well as an ancient style of poetic beauty. These characteristics find expression in the construction of fine tea sets, the shape of freehand vases, and the natural atmosphere of ceramic artworks. Today, ceramic works play an indispensable part in people's lives: all kinds of tableware, tea sets, tiles and porcelains of different functions and other daily necessities, as well as vases, sculptures, jewelry and other ornaments. In the process of ceramic product modelling design, the designer should fully consider the consumer's appreciation concept and consumption psychology. Appreciation concepts vary with living conditions, age, gender and other differences. Take the elderly and the young as an example: most young people prefer products with innovative, dynamic design and futuristic elements, while older people prefer those with ancient rhyme and a natural atmosphere. Furthermore, in the process of modelling design and production, designers and manufacturers should take into account the psychological differences of different consumer groups and devise different styles of ceramic products to better meet consumer demand.

2 Genetic Algorithm

2.1 Interactive Genetic Algorithm

With the development of the traditional genetic algorithm, it has come to be used more and more widely, yet some problems and limitations have also been exposed. The algorithm can only solve optimization problems with an explicit representation (a function with an explicit performance index), because a definite performance-index function is needed to calculate the fitness of an individual. However, many problems, such as art, music composition, data mining and knowledge learning, cannot be expressed by explicit functions. These problems, stored in the human brain and determined by the subjective perception of human beings, are implicit [4–6]. Traditional genetic algorithms are unable to solve them. To compensate for this shortcoming, the interactive genetic algorithm was proposed to handle implicit problems.

The interactive genetic algorithm has two definitions. The narrow definition: an evolutionary optimization problem in which the individual's fitness is determined by the subjective evaluation of human beings. The generalized definition [7, 8]: an evolutionary optimization problem with a human-computer interaction process, in which the human not only assigns fitness values to individuals but also intervenes in the evolutionary process. The interactive genetic algorithm came into being because people were not satisfied with existing methods, combined with the development of the computer; its purpose is to combine human intelligence and computer technology to make up for the deficiencies of purely genetic operation. Its characteristics include:

- Subjectivity of evaluation: users are involved in the interactive genetic algorithm, but not in the traditional genetic algorithm.
- Noise in evaluation: in the initial stage the user is unfamiliar with the objects, resulting in inaccurate individual fitness values, and fatigue sets in after assigning fitness values for a long time.
- Variability of optimization results: users have different psychological preferences and can hardly distinguish subtle differences in individuals' performance [9–11].

The process of the interactive genetic algorithm contains the following steps:

(1) Encode the actual problem.
(2) Set the values of the relevant parameters of the algorithm, such as the crossover probability Pc, the mutation probability Pm and the population size N. Randomly generate the initial population P(0), and set the evolutionary generation t = 0.
(3) Generate the phenotype of each individual after decoding. Users evaluate the fitness value of each individual based on personal feelings and preferences.
(4) Judge whether the algorithm meets the stopping condition. If it does, output the corresponding results and terminate.
(5) Apply the selection operator and genetic manipulation. The new population P(t + 1) is derived from the evolution of population P(t), with t increased by one.
(6) Return to step (3).
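The steps above can be sketched as a minimal Python loop. The population size, gene length, probabilities and the stand-in fitness function below are illustrative assumptions; in a real interactive run, the fitness values in step (3) come from user ratings rather than a function:

```python
import random

POP_SIZE, GENE_LEN = 8, 8
PC, PM = 0.8, 0.05  # crossover and mutation probabilities set in step (2)

def user_fitness(ind):
    # Placeholder for the human evaluation of step (3):
    # pretend the "user" prefers individuals with more 1-bits.
    return sum(ind) + 1

def evolve(generations=30):
    # Step (2): random initial population P(0)
    pop = [[random.randint(0, 1) for _ in range(GENE_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(generations):
        scores = [user_fitness(ind) for ind in pop]       # step (3)
        nxt = []
        while len(nxt) < POP_SIZE:                        # step (5)
            # fitness-proportional selection of two parents (copies)
            a, b = (x[:] for x in random.choices(pop, weights=scores, k=2))
            if random.random() < PC:                      # single-point crossover
                cut = random.randrange(1, GENE_LEN)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for ind in (a, b):                            # bit-flip mutation
                for i in range(GENE_LEN):
                    if random.random() < PM:
                        ind[i] ^= 1
                nxt.append(ind)
        pop = nxt[:POP_SIZE]                              # step (6): back to (3)
    return max(pop, key=user_fitness)                     # step (4): best individual

best = evolve()
```

Replacing `user_fitness` with a prompt that shows the decoded individual and reads the user's satisfaction value turns this skeleton into the interactive variant.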

2.2 The Application of Interactive Genetic Algorithm

The costume assistant design system is established with the interactive genetic algorithm. The designer's clothing design mainly covers three parts: the neckline, waist and sleeve. Applying the interactive genetic algorithm to the design, these three parts are each encoded in 8-bit binary: the style of the part is expressed by the high 4 bits and the color by the low 4 bits. There are 16 different styles and colors in each section, giving a search space of 2^24. Table 1 indicates the meaning of the string representation of the corresponding binary code.

Table 1. Partial codes and styles of clothing

Neckline: small neckline; middle neckline; large neckline; small floret collar; middle floret collar; …
Sleeve: small short sleeve; middle short sleeve; flower short sleeve; middle sleeve; long sleeve; …
Waist: small tight waist; middle tight waist; middle wide waist; wide waist; folded tight waist; …
Color: light red; red; light blue; blue; light yellow; …
Skirt: miniskirt; middle skirt; folded flower skirt; folded flower middle skirt; folded flower long skirt; …
Costume assistant design is an effective application of interactive genetic algorithm in real life. All aspects of traditional clothing design are now completed by the computer, thus improving efficiency and reducing human energy and economic input. The development of this application has far-reaching influence on the development of clothing industry. It is an important research direction to better apply genetic algorithm in costume assistant design and generalize it to other fields.
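The 8-bits-per-part chromosome described above can be decoded with a few lines of Python. The split into high/low 4-bit fields follows the text; the resulting integer indices are stand-ins for the style and color entries of Table 1:

```python
def decode(chrom):
    """Split a 24-bit clothing chromosome into (style, color) index
    pairs for its three parts (neckline, waist, sleeve)."""
    assert len(chrom) == 24
    parts = []
    for i in range(0, 24, 8):
        byte = chrom[i:i + 8]
        style = int(byte[:4], 2)   # high 4 bits: one of 16 styles
        color = int(byte[4:], 2)   # low 4 bits: one of 16 colors
        parts.append((style, color))
    return parts

print(decode("00010010" * 3))  # → [(1, 2), (1, 2), (1, 2)]
```

With 16 styles and 16 colors per part, the three 8-bit fields indeed give the 2^24 search space mentioned above.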

3 The Application of IGA for Ceramic Modelling Design

3.1 Interactive Platform Design

The interactive platform needs a simple and clear graphical user interface: the color and shape of the products displayed on the platform should be visually clear, and each function button should be easy for users to understand and operate. The buttons include: execute algorithm, evolve next generation, evaluate, continue and stop. The algorithm can be manipulated interactively through these buttons. The genetic algorithm selects individuals with high fitness values, or certain designated individuals, for optimization, and these are passed directly into the next population. It is therefore necessary to add an input box underneath each product displayed on the platform, where the user can directly enter a fitness value according to preference. The ultimate goal of the algorithm is to generate the product favored by the user, so a separate area is needed to show the best individual to the user.

3.2 Chromosome Coding

Chromosomes are a term from biology, where chromosome differences lead to different biological traits. In genetic algorithms, chromosomes play the same role. A chromosome is generally expressed as a string such as 10100011, and a set of multiple chromosomes (multiple individuals) is called a population. In biology, genes are the components of chromosomes; each digit in the string above represents a gene. The key problem of a genetic algorithm is designing the coding: the quality and speed of problem solving are determined by how appropriate the coding is, and the arrangement of chromosomes, the crossover operators and the mutation operators are all influenced by it. Three commonly used coding schemes are: ① binary coding; ② Gray coding; ③ real-number coding. Among these, binary coding, composed of 0s and 1s, is the most common; it facilitates the operation of genetic operators and conforms to the principle of the minimal code alphabet. The ceramic product modelling design mainly involves color and shape. After a field visit to the manufacturers of Jingdezhen ceramics, we found that, in terms of shape, products are roughly divided into bottle type and pomegranate type. Since the shape is simple and single, a one-bit code is adopted; in contrast, color contains many categories, so a three-bit code is used. The combined four-bit code represents the specific phenotype of the individual. For instance, the code 0010 stands for a black bottle type. The specific codes are shown in Table 2.

3.3 Design of the Algorithm

(1) Improved Design of Selection Operator

The selection operator is also called the reproduction operator. The objective of selection: the genetic algorithm passes the individuals chosen by the selection operator directly into the next generation. Some optimized individuals are no longer crossed or mutated, but are inherited directly; for the other individuals, new ones are generated by cross matching and passed on to the next generation. Passing individuals with high fitness values on directly ensures that the optimal solutions already produced are not destroyed by the algorithm. The coding, cleaned up from the original MATLAB-style pseudocode:

function newpop = selection(pop, fitvalue)
totalfit = sum(fitvalue);            % total fitness of the population
b = max(fitvalue);                   % highest individual fitness value
fitvalue = fitvalue / totalfit;      % selection probability of each individual
fitvalue = cumsum(fitvalue);         % cumulative value of each line
[px, py] = size(pop);                % population size
ms = rand(px, 1);                    % one random roulette draw per new slot
fitin = 1; newin = 1;
while newin <= px
    if ms(newin) < fitvalue(fitin)   % compare the draw with the cumulative fitness
        newpop(newin, :) = pop(fitin, :);
        newin = newin + 1;           % newin plus 1
        fitin = 1;                   % fitin set to 1: restart the scan for the next draw
    else
        fitin = fitin + 1;           % compare ms with each individual's fitness in sequence
    end
end
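The same roulette-wheel logic can be written as runnable Python for quick testing; the function name and the toy population here are ours, not the paper's:

```python
import random
from itertools import accumulate

def roulette_select(pop, fitvalue):
    """Fitness-proportional (roulette-wheel) selection mirroring the
    pseudocode above: draw one random point per slot and scan the
    cumulative fitness until it is exceeded."""
    total = sum(fitvalue)
    cum = list(accumulate(f / total for f in fitvalue))  # cumulative probabilities
    newpop = []
    for _ in range(len(pop)):
        ms = random.random()
        for ind, c in zip(pop, cum):
            if ms < c:
                newpop.append(ind[:])  # copy, so later mutation is safe
                break
    return newpop

pop = [[0, 0], [0, 1], [1, 1]]
selected = roulette_select(pop, [1.0, 2.0, 7.0])  # third individual is most likely
```

Individuals with higher fitness occupy a wider slice of the cumulative range and are therefore selected more often, which is exactly the behavior the MATLAB-style sketch implements.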

(2) Improved Design of Crossover Operator

Crossover: genetic algorithm design is mainly the design of genetic operators. At present, favorable crossover operators include JOX, LOX, SXX, PPX, GT, SPX and POX. Everything the POX operator produces is a feasible solution, able to inherit the positive features of the parents. Hence, this paper mainly takes the POX operator as the prototype and improves it as the genetic operator. The crossover operation code is as follows:

function newchrom = crossover(chrom1, chrom2)
n = the chromosome size of the previous generation;
npares = floor(n/2);
cruzar = rand(npares, 1) …

…

1. When Sum > threshold Th, it can be concluded that the embedded secret data information bit equals zero; when Sum < Th, the embedded bit equals one. In this way, the secret data information sequence A0 in Eq. (13) is obtained.

2. According to Eq. (14), the original secret data information A is obtained by the modulo-2 sum of A0 and the pseudo-random sequence Bkey = {b1, b2, …, bn}: A0 ⊕ Bkey → A.

The experimental results of embedding and extracting secret data information with the frequency-domain algorithm based on the discrete wavelet transform are shown in Figs. 2 and 3.
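The two extraction steps above can be sketched in Python. The function names, the correlation sums and the threshold value are hypothetical stand-ins, since Eqs. (13) and (14) are not reproduced here:

```python
def extract_bits(sums, th):
    """Step 1: threshold decision. A correlation sum above Th decodes
    to bit 0, below Th to bit 1, yielding the masked sequence A0."""
    return [0 if s > th else 1 for s in sums]

def unmask(a0, bkey):
    """Step 2: recover the original secret A by modulo-2 (XOR) sum
    of A0 with the pseudo-random key sequence Bkey."""
    return [x ^ b for x, b in zip(a0, bkey)]

sums = [5.2, 1.1, 4.8, 0.3]          # hypothetical per-bit correlation sums
a0 = extract_bits(sums, th=3.0)      # → [0, 1, 0, 1]
bkey = [1, 0, 1, 1]                  # hypothetical key sequence
print(unmask(a0, bkey))              # → [1, 1, 1, 0]
```

XOR-masking with Bkey is its own inverse, which is why the same modulo-2 sum used at embedding time recovers A at extraction time.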

Fig. 2. Discrete wavelet transform algorithm embedding secret information: (a) picture after the data information is embedded; (b) original carrier picture

Digital Image Information Hiding Technology …

Fig. 3. Discrete wavelet transform algorithm extracting secret data information: (a) extracted secret data information; (b) original secret data information

2.2 Least Significant Bit (LSB) Algorithm

The least significant bit algorithm in the spatial domain embeds confidential data information in the least significant bits of the pixels of the carrier image, which has the smallest impact on the quality of the carrier picture. Usually 1–3 bits of each byte carry embedded secret data; when 4 or more bits are replaced, the carrier picture degrades visibly and the hiding effect is reduced. When extracting, the secret data can be recovered as long as the number of embedded bits and their positions are known. This paper only discusses the case of replacing the lowest bit. The embedding process is divided into the following three steps.

1. The pixels of the original picture in the spatial domain are converted from decimal to binary. As shown in Fig. 4, a 3 × 3 block of the picture is taken as an example.

255 253 254        11111111 11111101 11111110
253 255 253        11111101 11111111 11111101
252 255 254        11111100 11111111 11111110

Fig. 4. 8-bit binary representation of the pixels of the original picture

2. Each bit of the binary secret data replaces the corresponding least significant bit of the carrier data. When the binary data sequence to be embedded is [0 1 1 0 0 0 1 0 0], the replacement process is shown in Fig. 5.
3. The binary data containing the secret data is converted back into decimal pixels to obtain the stego image.

L. Chen

11111111 11111101 11111110        11111110 11111101 11111111
11111101 11111111 11111101        11111100 11111110 11111100
11111100 11111111 11111110        11111101 11111110 11111110

Fig. 5. Binary secret data replace the least significant bits of the carrier data (left: original carrier; right: after embedding)
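The three-step LSB embedding can be reproduced in a few lines of Python. The helper names are ours; the pixel block is the 3 × 3 example from Fig. 4 with the secret sequence [0 1 1 0 0 0 1 0 0]:

```python
def lsb_embed(pixels, bits):
    """Replace the least significant bit of each pixel with one secret bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def lsb_extract(pixels):
    """Read the least significant bit of each pixel back out."""
    return [p & 1 for p in pixels]

# The 3x3 example block from Fig. 4, row by row, and the secret sequence.
carrier = [255, 253, 254, 253, 255, 253, 252, 255, 254]
secret = [0, 1, 1, 0, 0, 0, 1, 0, 0]
stego = lsb_embed(carrier, secret)
print(stego)  # → [254, 253, 255, 252, 254, 252, 253, 254, 254], matching Fig. 5
assert lsb_extract(stego) == secret
```

Since only the lowest bit of each pixel changes, every stego value differs from the carrier by at most 1, which is why the visual impact on the carrier picture is minimal.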

3 Comparison of Image Information Hiding Algorithms in the Spatial Domain and the Transform Domain

In this part, the spatial-domain image information hiding algorithm is compared with the transform-domain image information hiding algorithm, focusing on robustness, information hiding rate and time complexity. For the spatial domain we mainly take the improved least significant bit algorithm as the example; for the transform (frequency) domain, an algorithm based on the discrete wavelet transform.

3.1 Comparison of Robustness

Taking a picture of a girl as the carrier image (24-bit, 256 × 256), see Fig. 6a, the picture to be hidden is a 64 × 64 binary character picture, see Fig. 6b. We use the improved least significant bit algorithm (LSB algorithm) and the algorithm based on the discrete wavelet transform (DWT frequency-domain algorithm) to implement the concealment and extraction of the digital image information separately; the embedding proportion in the DWT-based algorithm is a = 0.01. The hiding results are shown in Figs. 7 and 8 respectively. After the stego images are attacked by added noise, filtering and shearing, the extracted images are shown in Fig. 6d, h.

Fig. 6. Images extracted after attacks on the stego images produced by the spatial-domain and transform-domain algorithms


Fig. 7. Improved least significant bit algorithm hiding result

Fig. 8. Discrete wavelet transform algorithm hiding result

Figure 6a–d shows the images extracted after the stego image produced by the improved spatial-domain least significant bit algorithm is attacked with 0.01 and 0.02 salt-and-pepper noise, Gaussian filtering, and cutting off one-sixteenth of the image; Fig. 6e–h shows the corresponding results for the stego image produced by the frequency-domain algorithm based on the discrete wavelet transform under the same attacks. Comparing (a)–(d) with (e)–(h) in Fig. 6 shows that both information hiding algorithms have strong anti-attack performance. From Fig. 6c, g, the frequency-domain algorithm based on the discrete wavelet transform performs better after Gaussian filtering than the improved least significant bit algorithm. Comparing the images in Fig. 6 with the hidden binary image, the NC values between the extracted images and the originally embedded image after each attack are shown in Table 1.
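The NC values in Table 1 can be computed as a similarity score between the extracted and the originally embedded image. The paper does not state its exact formula, so this pure-Python sketch assumes the standard normalized-correlation definition:

```python
import math

def nc(a, b):
    """Normalized correlation between two equal-length pixel sequences;
    1.0 means the extracted image equals the embedded one."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

img = [1, 0, 1, 1, 0, 1]        # toy binary "watermark"
print(nc(img, img))             # → 1.0 (identical images)
noisy = [1, 0, 1, 0, 0, 1]      # one pixel flipped by an attack
print(round(nc(img, noisy), 4)) # → 0.866
```

The closer NC is to 1, the better the extracted watermark survived the attack, which is how the rows of Table 1 are read.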

Table 1. NC values between the extracted and originally hidden images after attacks on the stego images, for the LSB and DWT algorithms

Hiding/extraction algorithm              0.01 salt-and-pepper   0.02 salt-and-pepper   Gaussian filter   Cut 1/16
Improved spatial-domain LSB algorithm    0.9958                 0.9872                 0.5690            0.9836
DWT-based frequency-domain algorithm     0.9204                 0.8644                 0.9147            0.9912

3.2 Comparison of Hidden Capacity Between LSB Algorithm and DWT Algorithm

As noted earlier, the hidden bit rate of the basic least significant bit algorithm is one-eighth: 1 bit of information is hidden per 8-bit pixel, so for an 8-bit grayscale image the amount of hidden information equals the number of pixels in the carrier image. The improved least significant bit algorithm mentioned in the previous chapter hides 6 bits of information in each 24-bit pixel, so its information hiding rate is one-fourth, and the amount of digital image information that can be hidden is six times the number of pixels in the carrier. In the frequency-domain algorithm based on the discrete wavelet transform, the digital image information is hidden in the low-frequency part, with a decomposition level of 2 or higher. When the wavelet decomposition has two levels, the number of low-frequency coefficients is one-sixteenth of the number of pixels in the carrier image. This section selects a 24-bit BMP carrier image, so with each of the three RGB components having a one-sixteenth hiding rate, the hiding rate of the whole image is three-sixteenths. Therefore, the hiding capacity of the improved spatial-domain least significant bit algorithm (one-fourth) is larger than that of the frequency-domain algorithm based on the discrete wavelet transform (three-sixteenths).

4 Conclusion

At present, as a new type of data information security technology, information hiding has attracted more and more attention from researchers and has become a hot research topic [7–10]. Different from traditional encryption methods, information hiding using digital images as a carrier conceals the information within the image itself and can better withstand the attacks of malicious parties. Because the human visual system is only weakly sensitive to small changes, digital image files have become an ideal carrier for information hiding. In this paper, the implementation of the digital image information hiding techniques proposed by predecessors is elaborated in detail through their algorithms and calculation models, implemented in mathematical software in the most concise and feasible way. On this basis, we compare the advantages and disadvantages of the spatial-domain and frequency-domain algorithms for digital image information hiding, and finally give the scope of application of these algorithms and their effects.

References

1. Ma, W., Han, Y.: Multi-sensor fire monitoring system based on image hiding technology. Fire-Fight. Sci. Technol. 9, 1269–1272 (2016)
2. Zheng, W., Ma, X., Zhao, C.: An improved BP coding algorithm for LDPC codes. J. Hebei Univ. Nat. Sci. Ed. 5, 547–553 (2016)
3. Guo, J., Wang, Z.: Research on LSB watermarking algorithm based on chaotic sequences. Sci. Technol. Plaza 04, 167–169 (2016)
4. Zhang, S., Gao, C., Zhang, L.: Development and the latest application of digital image correlation technology in stress and strain measurement. Imag. Sci. Photochem. 02, 193–198 (2017)
5. Zhang, Y., Wang, Y.: Analysis of robustness of LSB image hiding algorithm based on chaotic sequence enhancement. J. Xi'an Univ. Technol. 10, 850–854 (2015)
6. Tang, Y., Gao, Y., Yu, J., Ye, Q.: Digital image watermarking algorithm based on gain-invariant quantization in DCT domain. J. Chongq. Univ. Posts Telecommun. Nat. Sci. Ed. 2, 223–231 (2017)
7. Liu, Y.: Digital image compression algorithm based on discrete cosine transform. J. Wuxi Vocat. Techn. Coll. 1, 43–46 (2017)
8. Prieto, N., Uttaro, B., Mapiye, C., Turner, T.D., Dugan, M.E.R., Zamora, V., Young, M., Beltranena, E.: Meat Sci. 98(4), 585 (2014)
9. Riovanto, R., De Marchi, M., Cassandro, M., Penasa, M.: Food Chem. 134(4), 2459 (2012)
10. Prieto, N., Dugan, M.E.R., López-Campos, O., McAllister, T.A., Aalhus, J.L., Uttaro, B.: Meat Sci. 90(1), 43 (2012)

A Review of the Application Model of WeChat in the Propaganda of Universities

Wenguang Liang, Yi Ye(&), Man Bao, and Rongrui Liu

Wuzhou University, No. 82, Fuminsan Road, Wuzhou, Guangxi, China
[email protected]

Abstract. The study discusses the application model of WeChat from the perspective of its practical application in university propaganda work. By reading and sorting the academic literature on the communication model and application of WeChat, the theoretical framework of this study is formed, and the application method of WeChat in university propaganda work is concluded.

Keywords: WeChat communication · Propaganda in universities · Application

1 A Review of the Communication Model of WeChat

1.1 The Concept of WeChat Communication

The functions of WeChat make people's communication more convenient, and through the launch of a series of plug-in features, WeChat has become a reliable communication tool in everyday life. Among them, voice notepad, Weibo, QQ synchronization assistant, mailbox, drifting bottle and other functions are popular. Take the "drifting bottle" function for example: it offers both a "throw one" and a "collect one" option [1]. With "throw one", a user selects a drifting bottle in WeChat, records a voice message or enters text, and throws the bottle into the virtual sea formed by the network. Other users who pick up the bottle can read or listen to its contents and choose whether to reply. With "collect one", users can salvage drifting bottles from the sea and then talk to the person who threw the bottle or read the text inside. At present, WeChat allows each user to pick up 20 bottles a day. Using this communication function, information can be spread to a wider range in a shorter period of time [2]. Nowadays, many people are keen to use the drifting-bottle function to find friends outside their existing communication circle. Using WeChat functions for marketing has also become a new method in today's marketing. For example, one company makes use of the "shake" function of WeChat to help people in need through a micro-charity platform based on small point donations. The enterprise transmits information through WeChat functions, making it easy and convenient for users to offer their kindness and help people in difficulty through a simple action, while the enterprise gains a series of advertising opportunities and an enhanced image.

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1454–1458, 2019. https://doi.org/10.1007/978-981-13-3648-5_187


From the above overview of the functions of WeChat, WeChat propaganda can strengthen the relationship between enterprises and consumers, enhance consumers' trust in and reliance on enterprises, promote the development of the marketing system, and improve the image of enterprises.

1.2 The Features of WeChat Communication

Complementarity. As a unique online instant messaging product with Chinese characteristics, WeChat offers voice, text, picture, and video functions that are almost free, and it supports group chat just as Tencent QQ does. Compared with QQ, which is more entertainment-oriented, WeChat pays more attention to the needs of office workers. It pioneered the WeChat official account admin platform, enabling enterprises to deliver information and attract business opportunities in a variety of ways. At the same time, the WeChat terminals of traditional media have tried to use WeChat to carry out new forms of propagation [3]. For example, in addition to distributing its printed newspaper, the Wuzhou University Newspaper also operates a WeChat official account that regularly disseminates news to students in the form of voice, video, text, and so on. In his "remedial media" theory, Levinson [2] points out that all media are remedial media that remedy the shortcomings of earlier media and make media more humanized. McLuhan likewise believes that the emergence of any new medium is a new extension of individual capabilities. WeChat adopts a three-dimensional communication method that brings people closer to each other through speech and offers users a new communication experience, giving the most original means of communication, the mouth, an extension in the media. From this point of view, WeChat can be considered a product of media convergence whose purpose is to improve the way information is disseminated [3]. Universities should not devote excessive energy to WeChat alone when conducting propaganda; WeChat can be regarded as a supplementary means to be combined with other propaganda methods.

Community. Since WeChat is an instant messaging product, it inevitably builds on personal circles of communication, which then form a network of communication communities.
The German sociologist Tönnies defines the word "community" in "Community and Society" as a community of life characterized by region, consciousness, behavior and so on: a society of homogeneous people with common values, close relationships, mutual help and support, and humane social relations [4]. On the Internet, the definition of community is not limited to physical space but extends to ideological fit, for example using people's sense of belonging and group identity to define the scope of a community. The "circle of friends" function of WeChat integrates, to a certain extent, social functions such as QQ space, mobile phone contacts, and "people nearby", achieving a degree of integration between online virtual communities and real communities. The circle of friends also has a privacy function: "likes" or comments on others' posts can only be seen by the owner of the post and mutual friends, which enhances the authenticity of the virtual community while protecting users' privacy. Take the "Top Ten Singers on Campus" competition at


Wuzhou University, for example: once the information is posted on the official account, students in the Wuzhou University community receive the news and can relay it again through the virtual community of their own circles of friends, so that more people learn about the "Top Ten Singers on Campus" competition held by Wuzhou University [5]. In other words, WeChat is not just a propaganda platform but also a huge community; it reproduces the context of community media and can realize secondary propagation of information through the relationship network among friends. Since WeChat users in information communication are known to each other, communication is more active, which improves its quality, effectiveness and frequency [3]. When universities apply WeChat for propaganda, they need to give full play to the circle-of-friends function of WeChat and seek a wider propaganda space.

Interactivity. Universities use the WeChat platform to interact with students in various ways, such as voice, photos, video, and text, to stimulate students' needs. For example, the Marketing Association held a "Marketing Special Forces" contest and conducted a popular vote on WeChat so that the audience could take part directly from outside the venue and vote for their favorite team. This allowed the participating teams to engage in zero-distance interaction with the audience and brought a fresh viewing experience. Moreover, in this way, the stickiness of students who watch and like the contest is raised; they come to care about the results almost unconsciously, achieving the purpose of holding the contest. At the same time, it lays a good foundation for the next "Marketing Special Forces" contest and attracts more sponsors or sponsorship funds.
Most importantly, online voting on WeChat makes society outside the school pay more attention to Wuzhou University, which also indirectly builds the university's reputation. In general, WeChat offers a uniquely seamless interactive experience and deep social behavior, which can increase user stickiness. Compared with traditional interactive approaches such as advertising or professional promoters, WeChat propaganda is characterized by high stickiness, low cost, and savings in time and labour.

High Integration of Transmitters and Recipients. In "Introduction to Internet Communication", Kuang [6] indicates that "Internet communication combines the characteristics of mass communication (unidirectional) and interpersonal communication (bidirectional) information dissemination to form a dispersed network of transmission structures". In such a transmission structure, any network node can produce and publish information, and all information produced and published by network nodes can flow into the network in a non-linear manner. That is to say, WeChat communication has a high degree of interactivity. Therefore, recipients can be transformed into transmitters of information on the basis of a certain amount of self-distribution [6]. The propagation of universities works the same way. It not only enables students to receive information, but also stimulates the self-distribution of recipients, so that students become transmitters who disseminate information to their own online virtual communities. This forms a network-like communication structure in which the disseminated information flows into the network in a non-linear way, producing better propagation effects [7].

1.3 The Interaction and Integration of WeChat Communication

As discussed above, the WeChat communication model has the following characteristics: complementarity, community, interactivity, and a high integration of transmitters and recipients. This also shows that, in practical educational applications, WeChat should be combined with other propagation methods; it is not possible to rely on one method alone [8]. Only comprehensive publicity methods can achieve the best results. When using WeChat for propagation, it is necessary to exploit its highly interactive features: develop interactive topics, actively design various interactive links, increase the audience's stickiness, and give the audience a fresh and playful interactive experience. WeChat communication means information is transmitted from one online community to another. It is therefore important to understand that the transmitters and receivers of WeChat communication are highly integrated, allowing students both to receive information and to become transmitters [9]. Students can send information to their own online communities, spreading it as widely as possible across various online communities.

2 The Role and Prospect of WeChat in the Propaganda of Universities

As a means of network communication in the new era, WeChat fills gaps left by previous modes of communication, effectively improves the efficiency of communication, and enlarges the scope of the audience. Its high degree of interactivity increases the stickiness of audiences, giving them a better ability to accept information, while information recipients spontaneously disseminate information to their own online communities, which indirectly widens the scope of communication [10]. There is no doubt, however, that because of this enhanced speed, receptivity, and scope, harmful information can also be spread in a short time through we-media or individual initiative, producing negative effects that those who disseminate the information may not anticipate: the speed and scope of dissemination can resemble a plague.

Acknowledgements. This study was supported by the Research Project of the Guangxi Education System on Maintenance of School Safety and Stability (Grant No. 20161B075).

References
1. Zhang, H.: Analysis of the communication mode of WeChat from the perspective of communication. News Commun. 6, 65 (2014). (in Chinese)
2. Levinson, P.: Digital McLuhan: A Guide to the Information Millennium, 1st edn. Routledge, London (2001)
3. Jing, M., Zhou, Y., Ma, D.: Communication mode and characteristics of WeChat and reflections. News Writ. 7, 41–45 (2014). (in Chinese)
4. Tönnies, F.: Community and Society (trans. Lin, R.). Peking University Press, Beijing (1999). (in Chinese)
5. Wang, W.: Advantage analysis of WeChat marketing and traditional marketing. SME Manag. Technol. 8, 164–165 (2014). (in Chinese)
6. Kuang, W.: Introduction to Online Communication, 2nd edn. Higher Education Press, Beijing (2009). (in Chinese)
7. Huang, L.: Regulation of WeChat publicity by law. People's Forum 31(7), 148–149 (2017). (in Chinese)
8. Ma, H.: Research on the effect of WeChat communication based on content marketing. Commer. Res. 11(451), 123–129 (2014). (in Chinese)
9. Li, X.: Analysis on the situation of the spread of traditional culture by WeChat public accounts: based on surveys in five universities in Changchun. New Media Res. 22, 106–108 (2017). (in Chinese)
10. Teng, L.: Research on the "virtual-reality" interactive communication mechanism of WeChat expression. New Media Res. 1, 1–2 (2017). (in Chinese)

Design of Deer Park Environment Detection System Based on ZigBee Mingtao Ma(&) Jilin Agricultural Science and Technology College, Jilin 132101, China [email protected]

Abstract. Jilin Province is a major animal husbandry province in China. After rapid development during the "12th Five-Year" period, animal husbandry has undergone tremendous changes in both the scale and quality of breeding. With this rapid development, people pay more and more attention to the quality and safety of livestock products, and information technology has become key to promoting the development of animal husbandry. Some achievements have been made in applying information technology to livestock and poultry production and production management, nutrition and feed, and disease prevention and control. However, because information technology develops so rapidly, the management software currently sold on the market is obsolete, which lowers the level of deer rearing management. A new deer feeding and management system is therefore needed to meet market demand.

Keywords: ZigBee · Deer farm · Sensor

Deer farms have special requirements, such as being built in relatively quiet places. The enclosure should be higher than that of a pig farm, more than 2.5 m. When a doe is fawning, a separate fawn pen is required; otherwise attacks by adult deer on the fawn cannot be avoided. Because deer are easily frightened, they are generally raised in groups. Designing a deer farm environmental monitoring system therefore brings great economic benefits for deer farms and for intensive, meticulous management.

1 The Characteristics and Structure of the Deer Farm Environmental Detection System Based on ZigBee Technology

The deer park environment detection system is based on a smoke sensor, a photoresistor (light) sensor, and a temperature and humidity sensor, combined with ZigBee wireless communication technology, so that smoke concentration, light intensity, temperature and humidity can all be detected in real time. The workflow of the system is shown in Fig. 1. Each sensor sends the collected information to a ZigBee terminal node; the terminal node collects and analyses the data and then transfers it to the ZigBee coordinator over the wireless network [1]. The ZigBee coordinator sends

1460

M. Ma

Fig. 1. System workflow: smoke concentration, temperature/humidity, and photoresistor sensors → ZigBee end node → ZigBee coordinator → (wireless link) → serial port module → PC.

the data to the serial port module. The serial module and the PC are connected through a UART bus, over which the data is finally delivered to the PC. The system structure of the environmental detection system is generally divided into three parts: the lower driver layer, the central communication layer and the upper user layer, as shown in Fig. 2.

Fig. 2. System structure: lower driver layer, central communication layer, and upper user layer.

The lower driver layer acquires information through a modular structure, including the CC2530 module, the photoresistor sensor, the smoke sensor, and the temperature and humidity sensor. It is the data source of the upper user layer and the foundation supporting the operation of the whole platform. In a sense,


sensor nodes in the system have dual functions [2]: they are responsible both for data detection and for data transfer. The coordinator node uses a ZigBee wireless communication network built on CC2530 wireless micro-sensor modules. ZigBee is a self-organizing wireless communication network: each terminal node automatically joins the network to transmit data, and every node in the network can communicate with every other. If a terminal node cannot transmit its data directly to the coordinator, the data can be relayed through other nodes, which then forward it to the coordinator [3]. The central communication layer connects the lower sensing layer and the upper user layer. Its function is to process the data coming up from the sensing layer and send it to the user layer for storage and analysis, while also receiving control instructions from the user layer and feeding them back to the sensing layer. It contains a specific communication protocol that specifies which data may pass through the communication layer to the driver layer or the user layer, and which data may not. The upper user layer is responsible for collecting all the data from the communication layer, displaying the final data in the form of curves, and storing it to facilitate later data query and deletion, so that managers can read the dynamic environmental information inside the deer farm promptly and clearly. Control instructions can also be sent to the central communication layer according to actual needs, so as to meet the monitoring needs of the user level [4].
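As a concrete illustration of the communication layer's "which data may pass" rule, the sketch below packs and validates a sensor frame in Python. The frame layout, field order, and XOR checksum are hypothetical examples, not the actual DL-LN33/CC2530 wire format, which the paper does not specify; a frame that fails its checksum is rejected rather than forwarded.

```python
import struct

# Hypothetical frame for the central communication layer: node id, three
# little-endian floats (temperature in deg C, humidity in %RH, smoke in ppm),
# a 16-bit raw light reading, then a one-byte XOR checksum over the payload.
FMT = "<B3fH"

def pack_frame(node_id, temp, humid, smoke, light):
    payload = struct.pack(FMT, node_id, temp, humid, smoke, light)
    checksum = 0
    for b in payload:
        checksum ^= b
    return payload + bytes([checksum])

def unpack_frame(frame):
    payload, checksum = frame[:-1], frame[-1]
    calc = 0
    for b in payload:
        calc ^= b
    if calc != checksum:
        raise ValueError("checksum mismatch: frame rejected")
    node_id, temp, humid, smoke, light = struct.unpack(FMT, payload)
    return {"node": node_id, "temp": temp, "humid": humid,
            "smoke": smoke, "light": light}
```

A corrupted frame raises an error at the communication layer, so bad data never reaches the user layer's storage and display.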

2 Hardware Design

In order to extend the function of the system and reduce interference with the radio frequency circuit, each node is divided into two parts: the RF circuit and a main board with the information transmission function. The coordinator and router are both composed of an RF circuit module and a motherboard; their hardware designs are identical, and the difference lies in the programming and the functions realized. The core task of the RF circuit module is the information transmission of the lower driver layer. Because many circuits are integrated inside it, only a small amount of external circuitry is needed for it to function [5]. The RF circuit module leads out all the P0 ports, P1 ports and the P2_0–P2_2 pins of the CC2530, connecting to the motherboard through a pin interface that corresponds to the interface on the main board. The CC2530 is used as the main control chip. It integrates the ZigBee protocol stack, works in the 2.4 GHz band, uses an 8051 processor core, and can access the bus through three different registers. It has 21 I/O ports that can be configured as input or output, and a high-frequency transceiver. The operating voltage is 2–3.6 V, and the power consumption is very low. Moreover, the CC2530 increases the storage capacity relative to the CC2430 and improves the signal transmission capacity while keeping power consumption very low, so the system does not need an added power amplifier to extend the transmission distance, thus reducing cost and improving the stability of wireless transmission [6].


The main board is not only the "medium" connecting the RF circuit module and the sensor module, but also the power source of the whole node, and it provides the interface for communication between the node and the PC. The power circuit, the RS232 interface, and the JTAG (joint test action group) interface circuit together constitute the motherboard. The main board provides one interface each to the RF circuit module and the sensor module; through these two interfaces the connection between the sensor module and the RF circuit module is realized, and the main board supplies power to both modules [7]. The normal working voltage of the CC2530 in the deer environment detection system is 3.3 V, while a common adapter takes 220 V input and gives 5 V output, so a voltage conversion chip is needed to step down the 5 V supply; here the AMS1117-3.3 chip is used to produce a stable supply voltage for the system. The key element of the entire ZigBee wireless communication network is the mains-powered main node, which occupies the leading position. The router used by the system is the DL-LN33 module, a wireless ad hoc multi-hop network module that completes networking automatically, without configuration and without relying on WiFi or base stations. After the network is set up, the module supplies wireless communication service to the user's microcontroller. The module is characterized by easy development, stable communication, automatic networking, automatic multi-hop, absence of a control center, and many-to-many network communication. This scheme is more flexible and stable than other ad hoc wireless communication solutions. The DL-LN33 pin configuration is used for the routing module [8].

3 The Design of Software

3.1 The Software Design of the Coordinator

After the coordinator is powered on, the hardware and software systems are initialized and the network is built. The coordinator then builds the ZigBee network and checks whether this has succeeded. If it fails, the coordinator repeats the last step; if it succeeds, it continues to accept join requests from nodes, assigns each node a network address, sends the network response to the node, and, once the node has joined, receives the data from the environment collection module (Fig. 3) [9].

3.2 The Software Design of the Terminal Node

When a node is successfully powered on, it first initializes. After initialization, it sends a join request to the coordinator in order to enter the network; if the request fails, it retries. On success it obtains the network address assigned by the coordinator and attempts to join the network; if joining fails, it repeats the previous step [10]. Once the node has joined, it automatically enters the dormant state to reduce the power consumption of the system and extend the service life of the node. In the case of

Fig. 3. Software flow of the coordinator.

emergencies, the node is triggered immediately; otherwise it automatically collects the environment information of the current enclosure at fixed intervals. After a successful acquisition it checks whether the smoke concentration exceeds the predetermined value. If it does, a fire alarm is raised; if it does not, the node sends the packaged data to its parent node. If the transmission succeeds, sleep mode is entered; otherwise the node keeps attempting to transmit until the data is successfully delivered to the parent node.
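The terminal-node flow just described (sample on a schedule, check the smoke threshold, send to the parent with retries, then sleep) can be sketched as a single wake-up cycle in Python. The threshold value, retry count, and sensor stub are made-up placeholders, not figures from the paper:

```python
import random

SMOKE_LIMIT = 50.0       # assumed alarm threshold; the paper gives no number
MAX_SEND_RETRIES = 5     # assumed retry count before giving up this cycle

def read_sensors():
    # Stand-in for the real CC2530 sensor drivers.
    return {"smoke": random.uniform(0, 60), "temp": random.uniform(15, 30),
            "humid": random.uniform(30, 70), "light": random.uniform(0, 100)}

def node_cycle(send_to_parent, alarm, sample=None):
    """One wake-up cycle of a terminal node: sample, check the smoke
    threshold, forward the packet to the parent node with retries, then
    return to sleep. Returns True if the data was delivered."""
    if sample is None:
        sample = read_sensors()
    if sample["smoke"] > SMOKE_LIMIT:
        alarm(sample)                      # fire-alarm branch
    for _ in range(MAX_SEND_RETRIES):      # retry until the parent accepts
        if send_to_parent(sample):
            return True                    # delivered -> enter sleep mode
    return False                           # give up until the next cycle
```

Injecting `send_to_parent` and `alarm` as callables mirrors the fact that transmission and alarming are handled by separate hardware paths (the RF module and the alarm output).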

4 Conclusion

The ZigBee-based wireless deer environment detection system aims to detect the indoor temperature, humidity, smoke concentration and light intensity in the deer farm. If any of these exceeds its predetermined value, the system automatically raises an alarm so that the responsible managers can take effective measures. Furthermore, this environmental detection system overcomes many problems of traditional approaches, such as hand-held monitoring, extensive wiring, large data volumes without real-time access, and high costs in money and manpower. Because of limited conditions, the system still has some shortcomings, but with the progress of science and technology the monitoring system will become more intelligent and humanized.

Acknowledgements. This work was supported by a Jilin Provincial Education Department project (Grant No. JJKH20170339KJ).


References
1. Yin, C.: Shandong province pig industry development strategy research. Shandong Agric. Univ. 05, 41–45 (2014)
2. Design and research on distributed real-time monitoring system of scale livestock and poultry farms. Jiangsu University, pp. 42–45 (2009)
3. Design and research of an intelligent pig breeding management system. Agricultural Network Information, pp. 59–62 (2015)
4. Wu, Y., Wang, J., Cheng, W.: Knowing the rural recreation households should pay attention to the prevention of poultry disease. Sci. Breed. 05 (2014)
5. Gao, W.: Research on integrated information management system of pig farm (English). J. Agric. Eng. 230–236 (2015)
6. Fan, Z.: The MSTP architecture of service oriented. Fenghuo Commun. Sci. Technol. 10 (2004)
7. Zhang, Y.: Design of grassland environment monitoring system based on ZigBee wireless sensor network. Electric. Autom. 37(4), 27–29
8. Chen, C., Zhou, J., Zheng, N.: Design of multi-point distributed fire monitoring system based on DSP and ZigBee technology. J. Heilongjiang Bayi Agric. Univ. 28(1), 93–99 (2016)
9. Li, W., Huang, Q.Z.: Intelligent emergency light control system based on ZigBee network. J. Southwest Univ. Natl. (Nat. Sci. Ed.) 43(3), 291–297 (2017)
10. Zhao, L., Bao, L.G., Sun, K., et al.: Research and design of coal spontaneous combustion monitoring system based on ZigBee in thermal power plant. J. Inner Mongolia Univ. Natl. (Nat. Sci. Ed.)

Equal-Distance Coupling Method in OFDM System Under Frequency Selective Channel Xiaorui Hu(&), Jun Ye, Songnong Li, Ling Feng, Yongliang Ji, Lin Gong, and Quan Zhou Chongqing Electric Power Research Institute, Chongqing 400015, China [email protected]

Abstract. Orthogonal frequency division multiplexing (OFDM) is very sensitive to frequency errors caused by phase noise and Doppler shift, which disturb the orthogonality among subcarriers and cause intercarrier interference (ICI). A simple method to combat ICI is proposed in this letter. The main idea is to map each data symbol onto a pair of subcarriers rather than onto a single subcarrier. Unlike the conventional adjacent coupling and symmetric coupling methods, the proposed equal-distance coupling method exploits frequency diversity more efficiently. Numerical results show that our proposed method provides a robust signal-to-noise ratio (SNR) improvement over the conventional coupling methods.

Keywords: Equal-distance coupling · OFDM · ICI

1 Introduction

One of the major problems in OFDM systems is the high sensitivity of the modulation to frequency-offset errors caused by oscillator inaccuracies and Doppler shift. In such situations the orthogonality of the subcarriers is no longer maintained, which results in ICI. A number of methods have been developed to reduce ICI. References [1–4] proposed effective ICI self-cancellation schemes in which one data symbol is mapped onto a pair of adjacent subcarriers with opposite sign; [5] proposed a similar data-conjugate method, in which one data symbol is mapped onto a pair of adjacent subcarriers with opposite sign and a conjugate relationship. In [6] a symmetric coupling method was proposed to obtain frequency diversity under multipath channels, and [7] improved the symmetric data-conjugate method by maximal ratio combining (MRC). The adjacent coupling methods in [1–5] are based on the fact that the channel responses of two adjacent subcarriers are nearly the same, so the ICI can be efficiently self-cancelled, but no channel selectivity can be exploited. Most existing ICI reduction methods focus on cancelling ICI to increase the carrier-to-interference ratio (CIR); however, under a frequency selective channel the deeply faded subcarriers significantly degrade system performance. From this point of view, the conventional ICI reduction methods are no longer suitable under frequency selective fading channels. The symmetric coupling methods [6, 7] have already shown significant performance gains from frequency diversity. In this letter, we propose a more efficient equal-distance coupling method to combat ICI by fully exploiting the frequency diversity.

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1465–1469, 2019. https://doi.org/10.1007/978-981-13-3648-5_189


2 Equal-Distance Coupling in OFDM Systems

Let $N$ be the total number of subcarriers in the OFDM system. In the adjacent coupling method, data symbol $D_k$ is mapped onto the $2k$-th and $(2k+1)$-th subcarriers, and the two adjacent subcarriers suffer almost the same fading. The separation distance between the two subcarriers in each couple is always one subcarrier, denoted $d_{\mathrm{adjacent}} = 1$. In the symmetric coupling method, data symbol $D_k$ is mapped onto the $k$-th and $(N-k-1)$-th subcarriers. The separation distance between them is

$$d_{\mathrm{symmetric}}(k) = \begin{cases} N-1-2k, & \text{if } N-1-2k \le N/2 \\ 2k+1, & \text{if } N-1-2k > N/2 \end{cases}, \quad 0 \le k \le N/2-1 \qquad (1)$$

The minimum value of $d_{\mathrm{symmetric}}(k)$ is 1, attained at $k = 0$ or $N/2-1$; the maximum value is $N/2-1$, attained at $k = N/4-1$ or $N/4$. In our proposed equal-distance coupling method, the data symbol $D_k$ is mapped onto the $k$-th and $(N/2+k)$-th subcarriers. Let $X_k$ $(k = 0, \ldots, N-1)$ denote the modulated symbol on the $k$-th subcarrier of an OFDM symbol; the relationship between data symbols and subcarriers is

$$X_k = D_k, \quad X_{k+N/2} = D_k, \quad k = 0, \ldots, N/2-1 \qquad (2)$$
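The separation distances in (1) and (2) can be checked numerically. A small sketch, following my reading of the two mappings, with $N = 64$ as used later in the simulations:

```python
def d_symmetric(k, n):
    """Circular separation between coupled subcarriers k and n-1-k (Eq. (1))."""
    d = n - 1 - 2 * k
    return d if d <= n // 2 else n - d      # n - (n-1-2k) = 2k+1

def equal_distance_pairs(n):
    """Equal-distance coupling (Eq. (2)): D_k goes onto subcarriers k and k+n/2."""
    return [(k, k + n // 2) for k in range(n // 2)]

N = 64
sym = [d_symmetric(k, N) for k in range(N // 2)]
print(min(sym), max(sym))                           # -> 1 31 (varies from 1 to N/2-1)
print({b - a for a, b in equal_distance_pairs(N)})  # -> {32} (always N/2)
```

This makes the contrast concrete: symmetric coupling produces couples as close as one subcarrier apart, while every equal-distance couple is exactly $N/2$ apart.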

The separation distance between the two subcarriers in each couple is $d_{\mathrm{equal\text{-}dis}} = N/2$. With this increased separation, the probability that the two subcarriers in a couple suffer the same fading is reduced, and with the increased frequency diversity our method outperforms the conventional symmetric coupling method significantly. Experiencing multipath fading and AWGN, the received signal on the $k$-th subcarrier contaminated by frequency-offset errors can be written as

$$Y_k = \sum_{l=0}^{N-1} H_l X_l Q_{k-l} + Z_k = Q_0 H_k X_k + \sum_{l=0,\, l \ne k}^{N-1} Q_{k-l} H_l X_l + Z_k \qquad (3)$$

where $H_k$ is the channel frequency response, $Z_k$ denotes the frequency-domain AWGN, and $Q_k = \frac{1}{N}\sum_{n=0}^{N-1} e^{j\phi_n} e^{j2\pi nk/N}$, with $\phi_n$ the frequency-offset error. The received symbols $Y_k$ and $Y_{N/2+k}$ are combined by maximal ratio combining (MRC) to achieve the frequency diversity. From (3), $\hat{D}_k$ can be written as

$$\hat{D}_k = \frac{H_k^* Y_k + H_{N/2+k}^* Y_{N/2+k}}{|H_k|^2 + |H_{N/2+k}|^2} = D_k + \frac{H_k^* I_k + H_{N/2+k}^* J_k}{|H_k|^2 + |H_{N/2+k}|^2} + W_k \qquad (4)$$

where
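The $Q_k$ definition and the MRC combiner admit a quick numerical sanity check (a sketch with toy values, not the paper's simulation): with zero phase noise, $Q_k$ reduces to a Kronecker delta (no ICI), and in the noiseless case the combiner in (4) returns $D_k$ exactly.

```python
import cmath

def ici_coeffs(phase_errors):
    """ICI coefficients Q_k = (1/N) sum_n exp(j*phi_n) exp(j*2*pi*n*k/N).
    With phi_n = 0 for all n, Q_0 = 1 and every other Q_k = 0 (no ICI)."""
    n_sub = len(phase_errors)
    return [sum(cmath.exp(1j * phi) * cmath.exp(2j * cmath.pi * n * k / n_sub)
                for n, phi in enumerate(phase_errors)) / n_sub
            for k in range(n_sub)]

def mrc_combine(y_k, y_pair, h_k, h_pair):
    """MRC estimate of D_k from the coupled pair, as in Eq. (4)."""
    return ((h_k.conjugate() * y_k + h_pair.conjugate() * y_pair)
            / (abs(h_k) ** 2 + abs(h_pair) ** 2))
```

The combiner weights each branch by its conjugate channel gain, so a deeply faded subcarrier contributes little while its strongly received partner dominates the estimate.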

$$I_k = \sum_{l=0,\, l \ne k}^{N/2-1} \left( H_l X_l Q_{k-l} + H_{N/2+l} X_l Q_{k-N/2-l} \right) + H_{N/2+k} X_k Q_{-N/2} \qquad (5)$$

$$J_k = \sum_{l=0,\, l \ne k}^{N/2-1} \left( H_l X_l Q_{N/2+k-l} + H_{N/2+l} X_l Q_{k-l} \right) + H_k X_k Q_{N/2} \qquad (6)$$

$$W_k = \frac{H_k^* Z_k + H_{N/2+k}^* Z_{k+N/2}}{|H_k|^2 + |H_{N/2+k}|^2} \qquad (7)$$

$D_k$ is the desired signal, $\dfrac{H_k^* I_k + H_{N/2+k}^* J_k}{|H_k|^2 + |H_{N/2+k}|^2}$ is the ICI, and $W_k$ is the AWGN.

In [6, 7], the CIR is analyzed only under a flat fading channel, where $H_k = 1$. However, in most cases the real channel is frequency selective. In this letter we first derive the CIR under a frequency selective channel. From (4), the CIR of our proposed method is

$$\mathrm{CIR}_{\mathrm{proposed}} = \frac{\left( |H_k|^2 + |H_{N/2+k}|^2 \right)^2}{\left| H_k^* I_k + H_{N/2+k}^* J_k \right|^2} \qquad (8)$$

In [7], ICI self-cancellation is effective only under a flat fading channel with $H_k = 1$; under a frequency-fading channel, the ICI term in [7] cannot be self-cancelled. The improved BER performance of [7] under a frequency selective channel is mainly due to the frequency diversity gain within each couple. There, the separation distance of a couple varies from one subcarrier up to $N/2-1$ subcarriers, whereas in our proposed method the separation distance of every couple is always $N/2$. Compared with the conventional symmetric coupling method, the probability that the two subcarriers in a couple fade simultaneously is therefore significantly reduced. Computer simulation is carried out to prove the effectiveness of our proposed method.

3 Numerical Results

Simulations are performed to demonstrate the ICI-suppression effect of our proposed coupling scheme. The following system parameters are assumed: (1) an OFDM system of N = 64 with each subcarrier modulated by 64-QAM; (2) a baseband sampling rate of 20 MHz; (3) the PHN is generated, according to the Matlab code recommended by the IEEE 802.11g standard, as i.i.d. Gaussian samples passed through a single-pole Butterworth filter with a 3 dB bandwidth of 100 kHz [5]. A two-path and a six-path channel model defined in the IEEE 802.11g standard are used. The two-path power profile is p = [0, −3] dB with delay profile τ = [0, 0.15] μs. The six-path power profile is p = [0, −3.6, −7.2, −10.8, −18, −25.2] dB with delay profile τ = [0, 0.1, 0.2, 0.3, 0.5, 0.7] μs. Each path is an independent, zero-mean complex


X. Hu et al.

Gaussian random process. Channel estimation is assumed to be perfect and the channel is equalized by the zero-forcing algorithm. Figures 1 and 2 show the raw BER performance under the two-path and six-path channels with a PHN standard deviation of σ = 3°. Both symmetric coupling with MRC and our proposed method outperform the conventional adjacent coupling method, which proves that under a frequency-selective channel, frequency diversity suppresses ICI more efficiently than conventional ICI self-cancellation. Compared with symmetric coupling with MRC, the SNR improvement of our proposed method is more than 5 dB at a BER of 10−3. This is because in [7] each couple of subcarriers has an unequal subcarrier distance, so the two subcarriers in one couple may be strongly correlated with each other, such as the 0th and the (N − 1)th subcarriers. In our method every couple has the equal subcarrier distance d_equal-dis = N/2, so the probability that the two subcarriers in a couple fade simultaneously is significantly reduced.
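The PHN generation in item (3) of the parameter list can be imitated in a few lines. This is a rough Python stand-in for the referenced Matlab model, replacing the single-pole Butterworth filter by an equivalent one-pole low-pass IIR and rescaling the output empirically to the target standard deviation; it is a sketch, not the standard's code:

```python
import numpy as np

def phase_noise(n, fs=20e6, f3db=100e3, sigma_deg=3.0, seed=0):
    """White Gaussian samples through a one-pole low-pass filter,
    rescaled so the output standard deviation equals sigma_deg degrees."""
    rng = np.random.default_rng(seed)
    c = np.exp(-2 * np.pi * f3db / fs)   # pole giving ~100 kHz 3 dB cut-off
    w = rng.standard_normal(n)
    phi = np.empty(n)
    acc = 0.0
    for i in range(n):
        acc = c * acc + (1 - c) * w[i]   # one-pole IIR recursion
        phi[i] = acc
    phi *= np.deg2rad(sigma_deg) / phi.std()  # empirical rescale to target std
    return phi

phi = phase_noise(4096)
print(np.rad2deg(phi.std()))  # 3.0 by construction
```

The resulting phase samples \phi_n would then multiply the time-domain OFDM signal as e^{j\phi_n}.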

Fig. 1. Raw BER performance in the two-path channel under different SNR, when the standard deviation of the PHN is σ = 3°

Fig. 2. Raw BER performance in the six-path channel under different SNR, when the standard deviation of the PHN is σ = 3°


4 Conclusion

A method to combat ICI using frequency diversity in OFDM systems has been presented. Compared with the conventional symmetric coupling method, our proposed method provides an equal-distance separation in each subcarrier couple to take full advantage of the frequency selectivity. Simulation results have proved the robust SNR gain of the proposed method.

Acknowledgements. This work is supported by the Chongqing Electric Power Research Institute, a State Grid Corporation of China technology project (cstc2016jcyjA0214), and the National Key Technology Support Program (2015BAG10B00).

References

1. Zhao, Y., Häggman, S.G.: Sensitivity to Doppler shift and carrier frequency errors in OFDM systems—the consequences and solutions. In: Proc. IEEE VTC 1996, pp. 2474–2478 (1996)
2. Armstrong, J.: Analysis of new and existing methods of reducing intercarrier interference due to carrier frequency offset in OFDM. IEEE Trans. Commun. 47(3), 365–369 (1999)
3. Zhao, Y., Häggman, S.G.: Intercarrier interference self-cancellation scheme for OFDM mobile communication systems. IEEE Trans. Commun. 49(7), 1185–1191 (2001)
4. Zhang, J., Rohling, H., Zhang, P.: Analysis of ICI cancellation scheme in OFDM systems with phase noise. IEEE Trans. Broadcast. 50(2), 97–106 (2004)
5. Ryu, H.G., Li, Y., Park, J.S.: An improved ICI reduction method in OFDM communication systems. IEEE Trans. Broadcast. 51(3), 395–400 (2005)
6. Sathananthan, K.: Cancellation technique to reduce intercarrier interference in OFDM. Electron. Lett. 36, 2078–2079 (2000)
7. Tang, S., Ke, G.: Intercarrier interference cancellation with frequency diversity for OFDM systems. IEEE Trans. Broadcast. 53(1), 132–136 (2007)
8. Phase Noise Matlab Model: IEEE P802.11-Task Group G, 2000 [Online]. Available: http://grouper.ieee.org/groups/802/11/Reports/tgg_update.htm

Construction of Search Engine System Based on Multithread Distributed Web Crawler

Hongsheng Xu1,2, Ganglong Fan1,2, and Ke Li1,2

1 Luoyang Normal University, Luoyang 471934, China
[email protected]
2 Henan Key Laboratory for Big Data Processing & Analytics of Electronic Commerce, Luoyang 471934, China

Abstract. The search engine mainly performs automatic information collection for the user's requests. The web crawler is an important part of a search engine: it automatically extracts web pages, downloading them from the web for the search engine. The search system faces hundreds of millions of pages across the Internet; there are several grab servers in each data center, and several crawlers may be deployed on each grab server. This constitutes a multithread distributed web crawler search system. The paper presents the construction of a search engine system based on a multithread distributed web crawler.

Keywords: Multithread · Distributed web crawler · Search system · Information collection · URL

1 Introduction

A web crawler is an automatic web page extraction program: it downloads pages from the World Wide Web for the search engine and is an important part of the engine. A traditional crawler starts with the URLs of one or more initial web pages and obtains the URLs on those pages. In the process of crawling, new URLs are continuously extracted from the current pages and put into the queue until the system stops under certain conditions. With the rapid expansion of the Internet, a search engine on its own can no longer keep up with current market conditions, so there is now a division of labor and cooperation between search engines, and there are providers specializing in search engine technology and search database services. Inktomi, for example, is not a direct user-oriented search engine, but provides full-text web search services to other search engines such as Overture [1], LookSmart, MSN and HotBot. Domestic Baidu also falls into this category. So in this sense, they are search engines of search engines.

The topic-focused web crawler filters out links irrelevant to the subject matter according to a certain web page analysis algorithm, keeps the relevant links and puts them into the URL queue to be crawled; according to a certain search strategy, it selects the next web page URL to grab and repeats the above process until a certain stop condition of the system is reached. All the web pages captured by

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1470–1476, 2019. https://doi.org/10.1007/978-981-13-3648-5_190


the crawler are stored by the system, analyzed, filtered and indexed. For a topic crawler, the analysis results may also feed back into and guide the subsequent crawling process.

The breadth-first search strategy searches the next level only after the current level has been fully crawled. The design and implementation of the algorithm are relatively simple, and in order to cover as many pages as possible, the breadth-first method is generally used [2]. Many studies also apply breadth-first search to focused crawlers; the basic idea is that pages within a certain link distance of the initial URLs have a high probability of thematic relevance. Another method combines breadth-first search with web filtering: pages are first grabbed with the breadth-first strategy, and irrelevant pages are then filtered out. The disadvantage of these methods is that as the number of crawled pages increases, a large number of irrelevant pages will be downloaded and filtered, and the algorithm becomes less efficient.

Web crawling strategies can be divided into three types: depth-first, breadth-first and best-first. Depth-first easily causes the crawler to get trapped, so the most commonly used strategies are breadth-first and best-first. The best-first strategy predicts, according to a certain web page analysis algorithm, the similarity between a candidate URL and the target page, or its relevance to the topic, and selects the one or more best-rated URLs for crawling. It only visits pages that the algorithm predicts to be "useful". One problem is that many relevant pages on the crawler's path may be ignored, because the best-first strategy is a local optimal search algorithm. Therefore, it is necessary to improve the best-first strategy in order to jump out of the local optimum.
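One common way to let a best-first crawler escape a local optimum is to mix occasional random exploration into the greedy frontier selection (ε-greedy). The sketch below is illustrative, not from the paper; the link graph and the relevance score are hypothetical:

```python
import heapq
import random

def best_first_order(seeds, links, score, eps=0.1, limit=20, seed=42):
    """Visit pages best-first by predicted relevance; with probability eps
    pick a random frontier URL instead, to jump out of local optima."""
    rng = random.Random(seed)
    frontier = [(-score(u), u) for u in seeds]   # max-heap via negated score
    heapq.heapify(frontier)
    visited, order = set(seeds), []
    while frontier and len(order) < limit:
        if rng.random() < eps:                   # exploration step
            i = rng.randrange(len(frontier))
            frontier[i], frontier[-1] = frontier[-1], frontier[i]
            _, url = frontier.pop()
            heapq.heapify(frontier)
        else:                                    # greedy best-first step
            _, url = heapq.heappop(frontier)
        order.append(url)
        for v in links.get(url, []):
            if v not in visited:
                visited.add(v)
                heapq.heappush(frontier, (-score(v), v))
    return order

# Toy link graph; the score is the count of 'topic' in the URL (hypothetical).
g = {"a": ["a/topic1", "a/other"], "a/topic1": ["a/topic1/deep"], "a/other": []}
print(best_first_order(["a"], g, lambda u: u.count("topic")))
```

A real crawler would replace the toy score with the page-analysis relevance predictor described above.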

2 Text-Based Web Page Analysis Algorithm and Web Search Strategy

A search engine is a tool for retrieving information. Its retrieval methods can be divided into two types. One is the directory type: the crawler program collects the resources of the network, divides them into different catalogues according to resource type, and then continues to classify them layer by layer; when people search for information, they descend through the classification layer by layer and finally reach the information they need. The other is keyword retrieval, which users most often use [3]: the search engine retrieves the addresses of the resources matching the keywords entered by the user, and then feeds these addresses back to the user.

We study the technical characteristics of Map/Reduce distributed computing, including the execution flow and its concrete Java implementation, and decompose each stage of the web crawler in order to recast it in the MapReduce programming model to meet the needs of Hadoop distributed computing. We also study search engine technology and the implementation characteristics of the distributed crawler Nutch, such as its plug-in mechanism. On this basis, the grabbing process of Nutch is changed on a large scale, distributed crawlers are developed, various grab functions are realized, and the design scheme and code implementation are completed [4].


A large-scale distributed deployment of more than 150 nodes is realized on the cluster, achieving terabyte-level web crawling capacity and high single-machine crawling speed. With the popularity of AJAX/Web 2.0 and related technologies, how to grab dynamic pages such as AJAX pages has become an urgent problem for search engines: if search engines still adopt the plain "crawling" mechanism, it is impossible to capture the effective data of AJAX pages. In addition, the web crawler mainly faces these problems. (1) Forced use of Cookies: some webmasters force users to accept Cookies to remember login information; if Cookies are not enabled, the page cannot be accessed or does not display normally, so the spider cannot access it. (2) Login requirements: some enterprise and personal sites require registration and login before the article content can be seen; this is unfriendly to spiders, which will neither register nor log in. The most important object for the crawler is the URL, from which it obtains the required file content for further processing. An accurate interpretation of the URL is therefore critical to understanding the web crawler: the uniform resource locator is a string that describes information resources on the Internet in a unified, traceable format, covering files, server addresses and directories. The visual view of the World Wide Web is butterfly-shaped, and the web crawler generally starts from the left-hand structure of the butterfly. Portal home pages contain many valuable links. Run and finished queues are used to hold links in different states [5]. For a large amount of data, an in-memory queue is not enough, so a database is usually used to simulate the queue; this method supports massive data capture and also provides breakpoint continuation.
Threads read the queue head from the run queue and continue execution if it exists, as is shown by Eq. (1); otherwise they stop crawling.

P_1 = \sum_{i=0}^{m(k)} E\{x(k) x'(k) \mid h_i(k), Z^k\}\, b_i(k) = \sum_{i=0}^{m(k)} \left[ \hat{x}_i(k|k)\, \hat{x}'_i(k|k) + P_i(k|k) \right] b_i(k)    (1)

Depth-first means starting from the start page, selecting a URL, entering it, analyzing the URLs in that page and selecting one to re-enter: a link is tracked in depth, one route being fully processed before the next route is handled. The algorithm has the following disadvantage: the links provided by a portal are often the most valuable and have high PageRank, but with every layer the page value and PageRank drop correspondingly. This implies that important pages are usually close to the seeds, while pages grasped too deeply are of low value. A crawler is generally a multithreaded program downloading multiple target HTML pages at once; it can be written in PHP, Java, Python and so on, and general-purpose search engine crawlers usually work this way. However, if the target site dislikes crawlers, it is likely to block the server's IPs; server IPs are not easy to change, and the bandwidth consumed is also expensive.
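The database-simulated run/finished queue with breakpoint continuation described above can be sketched in a few lines; the table schema and class below are illustrative, not from the paper:

```python
import sqlite3

class UrlQueue:
    """Database-simulated crawl queue: URLs move from 'run' to 'finished',
    so a crawl can resume where it stopped (breakpoint continuation)."""
    def __init__(self, path=":memory:"):       # a file path persists the state
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS q "
                        "(url TEXT PRIMARY KEY, state TEXT DEFAULT 'run')")

    def put(self, url):
        # PRIMARY KEY deduplicates: re-inserting a known URL is a no-op.
        self.db.execute("INSERT OR IGNORE INTO q(url) VALUES (?)", (url,))

    def next(self):
        """Read the queue head; return None when the run queue is empty."""
        row = self.db.execute("SELECT url FROM q WHERE state='run' "
                              "ORDER BY rowid LIMIT 1").fetchone()
        if row is None:
            return None
        self.db.execute("UPDATE q SET state='finished' WHERE url=?", row)
        return row[0]

q = UrlQueue()
q.put("http://example.com/")
q.put("http://example.com/a")
q.put("http://example.com/")           # duplicate is ignored
print([q.next(), q.next(), q.next()])  # the two distinct URLs, then None
```

Passing a file path instead of ":memory:" keeps the queue on disk, which is what makes resuming after a crash possible.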


An index library is a reverse (inverted) index of the pages fetched by the whole system. It is not built directly from the pages but by merging the indexes generated from many small segment indexes [6]. Nutch uses Lucene to build the index, so all Lucene-related tools and APIs are used to build the index library. Note that Lucene's concept of a segment is completely different from Nutch's, so don't confuse them: Lucene's segment is part of the Lucene index library, while Nutch's segment is the part of the WebDB being fetched and indexed. Topic crawlers do not pursue high coverage but selectively fetch theme-related pages, which gives them low resource occupancy, convenient index-database updating and accurate cached pages [7]. However, their implementation faces the following difficulties: how to model the topic, how to determine the correlation between a page and the topic, and how to accommodate different topics in one crawler system. Topic crawlers filter theme-irrelevant links according to a certain web page analysis algorithm; following a certain scheduling strategy, the URLs to be fetched next are selected from the queue, and the results of the pages stored in the system are fed back to guide the subsequent crawling process.
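One simple way to score the page/topic correlation mentioned above is cosine similarity between a page's term counts and a topic vector; this is a generic sketch, not the paper's algorithm:

```python
import math
from collections import Counter

def relevance(page_text, topic_terms):
    """Cosine similarity between a page's term-count vector and a topic
    vector: a minimal page/topic correlation score for a topic crawler."""
    p = Counter(page_text.lower().split())
    t = Counter(w.lower() for w in topic_terms)
    dot = sum(p[w] * t[w] for w in t)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in t.values())))
    return dot / norm if norm else 0.0

topic = ["crawler", "search", "index"]
print(relevance("a web crawler builds a search index", topic))  # ~0.577
print(relevance("recipe for apple pie", topic))                 # 0.0
```

A crawler would keep only links whose anchor or page text scores above a chosen threshold.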

3 Construction of Search System Based on Multithread Distributed Web Crawler

The depth-first search strategy starts from the start page, selects a URL to enter, analyzes the URLs in the page, and selects one to re-enter; links are grabbed this way, one route being fully processed before the next route is started. The design of the depth-first strategy is relatively simple. However, portals often offer the most valuable links with very high PageRank, but with each layer the value of the page and its PageRank decline accordingly. This suggests that important pages tend to be close to the seeds, while pages grabbed too deep are of low value. At the same time, the grab depth of this strategy has a direct impact on the hit rate and capture efficiency, which is the key to the strategy.

The data source needed by the crawler system can be crawled in a directed way by writing a web crawler program and customizing the filtering conditions and crawling strategy during crawling; both the amount of data acquired and the crawling efficiency can be well guaranteed. The information visualization module can dynamically display the news on rich interactive charts with the help of the Google Visualization API, and a rich Internet application is built with ExtJS, so the whole system provides a good user experience. When the user uses the system, it needs to be able to record and skip exceptions; the system also needs high portability, reliability, testability and maintainability. The web crawler system downloads and collects web pages from the World Wide Web. As the crawler system is part of the search engine, the search engine uses the information gathered by the crawler, and the system needs to design reasonable storage files and establish an index, as is shown by Eq. (2) [8].


6V = 6 \sum_{i=1}^{4} V_i = \sum_{i=1}^{4} \left( \prod_{j \neq i,\, j=1}^{4} R_j \sin\theta'_i \right) \sqrt{\sum_{i=1}^{4} y_i \prod_{j \neq i,\, j=1}^{4} R_j^2} - \sqrt{\sum_{i=1}^{4} \frac{\sin^2\theta'_i}{y_i}}    (2)

A dynamic URL is simply a URL containing question marks, equals signs and parameters; dynamic URLs are not conducive to search engine spiders crawling and grabbing. Some website pages use Flash for visual effects, such as Flash advertisements and charts; these the search engine can capture and include without problems. But many websites' home pages are one large Flash file, and this is called a spider trap: in the HTML code the spider grabs there is just a link and no text. Although a large Flash effect may look beautiful, the search engine unfortunately cannot see or read any of its content [9].

In crawler systems, the URL queue is an important component, and the order in which the URLs in the queue are fetched is an important issue, because it decides which page is crawled first and which next. The methods that determine this order are called grab strategies. Among the common crawling strategies, depth-first traversal means that the crawler starts from the start page, tracks one link, processes that line completely, and then moves on to the next start page to continue tracking links.
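The dynamic-URL property described above ("question marks, equals signs and parameters") can be checked with the standard library; a minimal sketch with illustrative example URLs:

```python
from urllib.parse import urlparse

def is_dynamic(url):
    """Flag URLs carrying a query string ('?', '=' parameters), which the
    text notes are harder for spiders to crawl and index."""
    parts = urlparse(url)
    return bool(parts.query) or "?" in url or "=" in parts.path

print(is_dynamic("http://ex.com/item?id=3&page=2"))  # True
print(is_dynamic("http://ex.com/static/page.html"))  # False
```

A crawler can deprioritize or canonicalize such URLs before putting them into the grab queue.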

4 Experiments and Analysis

The web crawler accesses the background HTML code, analyzes the URLs, filters them and puts the results into the run queue. Be wary of the "crawler trap" when getting a URL: even if a URL can access content, there is no guarantee that a corresponding page exists on the server side. For example, dynamic web applications may generate an inexhaustible set of addresses within the site, letting the crawler loop infinitely in one place and never end. One way to deal with the crawler trap is to check the length of the URL (or the number of "/" characters); once a threshold is exceeded, the URL is no longer fetched.

Following certain machine learning algorithms, a web page analysis algorithm evaluates the correlation between the initial URLs and the topic to be acquired, and selects the one or more best-rated URLs to crawl; only pages predicted "useful" by the page analysis algorithm are visited. The advantage of this algorithm is improved crawling efficiency, but topic-related pages reachable only through filtered URLs are easily ignored, so the best-first strategy is a local crawling algorithm and should be optimized in use so that it can jump out of the local optimum. Moreover, imagine that a user queries a hot topic and the crawler has not yet grabbed the relevant pages; then PageRank cannot be used to evaluate page importance. PageRank's computing objects are the pages already grabbed, with no new pages added to the calculation, a method known as off-line computing. This method is suitable for sorting results, but it is not suitable for crawler


scheduling, as is shown by Eq. (3); a new algorithm, OPIC (On-line Page Importance Computation), is therefore proposed [10].

x^{(1)}(t) = \left( x^{(0)}_1 - \frac{u}{a} \right) e^{-a(t-1)} + \frac{u}{a}    (3)
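The crawler-trap check described at the start of this section (reject a URL once its length or its number of "/" segments exceeds a threshold) is a one-line test; the thresholds below are illustrative, not from the paper:

```python
def looks_like_trap(url, max_len=200, max_depth=12):
    """Threshold check against crawler traps: overly long URLs or URLs
    with too many '/' path segments are skipped."""
    return len(url) > max_len or url.count("/") > max_depth

deep = "http://ex.com/" + "a/" * 30     # endlessly nested dynamic path
print(looks_like_trap(deep))                 # True (too many '/')
print(looks_like_trap("http://ex.com/a/b"))  # False
```

In practice the thresholds are tuned per site, since some legitimate sites do use deep paths.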

1. The network crawler based on multithreading is designed.
2. Through HTTP, the page code corresponding to each URL in the URL list is crawled and extracted.
3. The information is extracted and the algorithm determines whether the web page is related to the set theme.
4. Breadth-first search: starting from a link in the web page, all links on the page are visited, the next layer is accessed through a recursive algorithm, and the above steps are repeated, as is shown by Fig. 1.

Fig. 1. Experimental comparison of search engine system based on multithread distributed web crawler and single thread.

Web-Harvest is a Java open-source web data extraction tool. It can collect specified web pages and extract useful data from them. Its implementation principle is that, according to a predefined configuration file, the whole content of the page is obtained via an HTTP client, then text/XML content filtering is performed using XPath, XQuery, regular expressions and so on, selecting the accurate data for vertical search. The main difficulties of web crawler implementation include how to build the URL search queue, how to exclude unusable links such as js files and CSS files, and how to deduplicate the links already fetched. There are also many sites built with AJAX whose useful links are contained in JS files. In the user interface layer, the system is constructed with the ExtJS framework combined with the Google Visualization API. Ext JS is an open-source JavaScript framework with powerful functionality and a beautiful interface that uses AJAX techniques for developing RIA applications; the Google Visualization API provides graphic tools with extremely rich interaction effects. The foreground web pages developed with these two ensure a good user experience and can meet the needs of the user's actual operation.


5 Summary

Generally speaking, crawling systems face hundreds of millions of web pages across the Internet; a single crawler cannot accomplish such a task, and more than one crawler is often needed to handle it together. Common grab systems are usually a distributed three-tier structure: the bottom layer consists of data centers located in different geographical locations; there are several grab servers in each data center, and several crawler programs may be deployed on each grab server. This constitutes a basic distributed grab system.

Acknowledgements. This paper is supported by the Henan Key Laboratory for Big Data Processing & Analytics of Electronic Commerce, the major science and technology research project of the Henan Province Education Department (17B520026), and key scientific research projects of Henan province universities (17A880020, 15A120012).

References

1. Winter: Chinese Search Engine Technology Decryption: Web Spider. People's Post and Telecommunications Press, Beijing (2014)
2. Wang, J., Pan, J., Zhang, F.: Research on web text mining technology. Comput. Res. Dev. 5, 513–520 (2015)
3. Wisenut: WiseNut Search Engine White Paper. China Electric Power Press, Beijing (2011)
4. Hedan, S., Yannan, P.: A review of research on search engines. Computer Technology and Development (2016)
5. Wu, P., Ding, Z.: Research on search engines based on multi-thread distribution. Mod. Libr. Inf. Technol. 6(3), 100–106 (2014)
6. Jiang, J.: Research on mainstream distributed search engine technology. Sci. Technol. Eng. 7(10) (2017)
7. Xu, H., Zhang, R.: Novel approach of semantic annotation by fuzzy ontology based on variable precision rough set and concept lattice. Int. J. Hybrid Inf. Technol. 9(4), 25–40 (2016)
8. He Guangyi, L.: Design and implementation of distributed search engine. Comput. Appl. (2013)
9. Pani, S.K., Mohapatra, D., Ratha, B.K.: Integration of web mining and web crawler: relevance and state of art. Int. J. Comput. Sci. Eng. 772 (2010)
10. Bedi, P., Thukral, A., Banati, H., Behl, A., Mendiratta, V.: A multi-threaded semantic focused crawler. J. Comput. Sci. Technol. 2, 16 (2012)

Gene Regulatory Network Reconstruction from Yeast Expression Time Series

Ming Zheng and Mugui Zhuo

Guangxi Colleges and Universities Key Laboratory of Professional Software Technology, Wuzhou University, Wuzhou, China
[email protected]

Abstract. The gene regulatory network is the manifestation of life function at the level of gene expression. A combination of a linear regulation model, regulatory element identification and gene clustering is used to interpret the gene regulatory network of yeast in the cell cycle and under environmental stress. The results show that the gene regulatory network is re-regulated under different environmental conditions. In an adaptive environment, the dominant functions belong to the gene regulatory network related to cell growth and proliferation; in response to environmental stress, the cells reconstruct the regulatory network, inhibit cell growth and proliferation related genes, induce genes related to adaptive carbohydrate metabolism and structural repair, and may also initiate meiosis to produce spores. Among cell cycle and environmental stress response related genes, the binding site of the transcription factor Mcm1 and the binding site of Dal82 on pathway-related genes were searched, respectively. It is therefore feasible to estimate the gene regulatory network from yeast expression time series, and the estimate is quite consistent with the experimental observations so far.

Keywords: Gene regulatory network · Genome expression time series · Cell cycle · Environmental stress

1 Introduction

The cellular biochemical network is the basis of the function of living systems and has at least three levels: gene expression, metabolism and signal transduction. Gene expression is not only regulated directly by transcription factors but is also coupled with metabolic networks and signal transduction; enzymes and proteins are themselves products of genes. Therefore, when the biochemical network is projected into gene space, it reduces to a genetic network [1]. At present, several methods have been used to estimate gene networks from cDNA microarrays, such as Boolean networks [2], Bayesian networks [3], differential equations [4] and linear models [5] (including SVD, singular value decomposition [6]). These methods also have a number of uncertain factors; for example, the number of genes in current gene expression array measurements (more than thousands) is far greater than the number of time sampling points (about 2), so the network parameters can't be uniquely

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1477–1481, 2019. https://doi.org/10.1007/978-981-13-3648-5_191


determined. These network models describe only the interactions between genes and do not distinguish direct (causal) interactions from indirect ones. The former refers to the relationship between a gene and a gene encoding a transcription factor; the latter, for example, to a metabolite that affects one gene's expression while another gene encodes the enzyme regulating that metabolite. At present, interactions between transcription factors and cis elements can be identified using "transcription factor [7]/cis element [8]" databases or whole-genome transcription-factor binding-site data. Despite the lack of time samples in genome expression data, which will improve as the number of sampling points grows, a useful reconstruction of the gene regulation network can still be given [9]. Interpreting the regulatory relationships of disease genes also contributes to drug-target identification and drug design, providing valuable clues for the diagnosis and treatment of complex diseases [10]. This paper combines a linear regulation model, regulatory element recognition and gene clustering to interpret the yeast gene regulatory network in the cell cycle and under environmental stress.

2 Method

2.1 Material

cDNA microarrays can be used to measure genome transcription data. If measurements are made over a period of time, that is, at a number of time points, a genome expression time series is formed; the expression time series of each gene is called its expression profile, and the gene regulatory network is estimated from these time series. As a case study, we used three sets of Saccharomyces cerevisiae genome expression time series. One is the cell cycle (alpha-factor synchronized) expression time series, containing 6178 genes and 18 time sample points, from the experiments of Spellman et al. The other two are expression time series of the cellular response to environmental stress, containing 6152 genes: heat shock from 25 to 37 °C, 8 time points; and cells treated with hydrogen peroxide, 10 time points; these two sets of experiments are taken from Gasch et al. The database for yeast gene sequence information and functional annotation is SGD (http://genome-www.stanford.edu/saccharomyces/).

2.2 Data Preprocessing

In the gene expression time series, a gene is eliminated if the proportion of its missing values exceeds 20%. A gene is also eliminated if the absolute value of its expression profile never exceeds the threshold, since this indicates that it does not participate in the regulation process. For the cell cycle data, the threshold was 2; for environmental stress, the threshold was 3.
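The two preprocessing rules above (missing-value fraction and absolute-value threshold) can be sketched as a vectorized filter; the toy matrix is illustrative:

```python
import numpy as np

def filter_genes(expr, missing_frac=0.2, threshold=2.0):
    """Keep a gene only if at most 20% of its time points are missing (NaN)
    and its expression profile reaches the absolute-value threshold,
    following the preprocessing rules of Sect. 2.2."""
    missing_ok = np.isnan(expr).mean(axis=1) <= missing_frac
    active = np.nanmax(np.abs(expr), axis=1) >= threshold
    return missing_ok & active

expr = np.array([
    [0.1, 0.3, 2.5, 1.0],       # active, complete -> kept
    [0.1, 0.2, 0.3, 0.1],       # never reaches the threshold -> dropped
    [np.nan, np.nan, 3.0, 1.0]  # 50% missing -> dropped
])
print(filter_genes(expr))
```

The threshold argument would be set to 2 for the cell cycle data and 3 for the environmental stress data.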

2.3 Linear Network Model

The linear network model assumes that the interaction between genes is linear and non-instantaneous; that is, the expression level of gene i at time t_{k+1} is a weighted sum of the expression levels of all n genes j (j = 1, 2, …, n) at time t_k, as shown in Eq. (1):

y_i(t_{k+1}) = \sum_{j=1}^{n} R_{ij}\, y_j(t_k), \quad t_k = t_0 + k \Delta t, \quad k = 0, 1, 2, \ldots, T-1    (1)

Here R_{ij} indicates the regulatory strength of gene j on gene i, and \Delta t indicates the average transfer time of the interaction. When the network parameters are given, formula (1) describes the dynamics of the gene network, which is a common setting in engineering. When instead the gene expression time series y_i(t) are given and the regulatory network parameters R_{ij} are to be estimated, we have the inverse problem known as reverse engineering; the purpose of this article is the latter. Several papers have solved this inverse problem by singular value decomposition [8]. However, current measurements are limited to n \gg T, so the solution set is not unique and it is difficult to give it biological significance. In this paper, we first cluster the genes to find the regulatory network between classes, while intra-class gene interactions are identified from other knowledge or information. Co-expressed genes with similar expression profiles are assigned to one class, and the average expression profile within a class is defined as the expression profile of a prototype gene. The regulatory network of the prototype genes still has the form of (1), but the number of genes involved is not n but the total number p of prototype genes (that is, the total number of clusters). The inverse problem is then to find the network parameters R_{ij} that minimize the residual error of fitting the prototype gene expression time series with the linear network model.
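The reverse-engineering step for the prototype genes can be sketched as a least-squares fit of (1); this assumes p ≤ T − 1 (after clustering) so the solution is unique, and the toy data are illustrative:

```python
import numpy as np

def estimate_R(Y):
    """Least-squares fit of the linear model y(t_{k+1}) = R y(t_k) in (1),
    from a p-by-T matrix of prototype expression profiles (p clusters,
    T time points); minimizes the one-step prediction residual."""
    A, B = Y[:, :-1], Y[:, 1:]                 # y(t_k) and y(t_{k+1})
    R, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)
    return R.T                                  # so that B ≈ R @ A

# Toy check: data generated by a known R is recovered exactly.
rng = np.random.default_rng(1)
R_true = rng.standard_normal((3, 3)) * 0.5
Y = np.empty((3, 8))
Y[:, 0] = rng.standard_normal(3)
for k in range(7):
    Y[:, k + 1] = R_true @ Y[:, k]
print(np.allclose(estimate_R(Y), R_true))  # True
```

With p prototypes and T time points the fit is well posed exactly when T − 1 ≥ p, which is why the clustering step precedes the estimation.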

3 Result

We found 563 genes significantly expressed during the cell cycle, which were divided into 6 classes, and linear modeling was used to obtain the regulatory network between the 6 prototypes. The expression of prototype gene 1 peaks in G1 phase and is mainly related to cell mating. Prototype gene 2 is expressed in G1 phase and mainly comprises genes related to carbohydrate metabolism, amino acid synthesis and decomposition, energy metabolism, and ribosomal protein and rRNA synthesis. From these genes, we searched for the DNA binding sites of the transcription factors Gal4, QBP, Bas2, Pho4, Dal82 and Gcn4, which regulate carbohydrate, amino acid and nucleic acid metabolism related genes (see Table 2). The expression of prototype gene 3 peaks in G1 phase and is mainly related to cell cycle control, DNA synthesis and spore growth related genes. The result is shown in Fig. 1.


M. Zheng and M. Zhuo

Fig. 1. Graphical representation of the linear model for 14 genes on data set heat shock

4 Conclusion

In this paper, a linear network model, regulatory element identification and gene clustering are combined to estimate the gene regulatory network of yeast in the cell cycle and the environmental stress response. Applied to these two data sets as analytical examples, the method yields dynamic gene interactions that agree well with the experimental observations reported so far, and it can be extended to the analysis of gene regulatory networks in other species. Current cDNA microarray measurements still have technical limitations, such as insufficient time sampling points (the Nyquist sampling theorem) and background noise interference.

Acknowledgements. This work was supported by grants from the National Natural Science Foundation of China (No. 61502343), the Guangxi Natural Science Foundation (No. 2017GXNSFAA198148, 2015GXNSFBA139262), the foundation of Wuzhou University (No. 2017B001), and the Guangxi Colleges and Universities Key Laboratory of Professional Software Technology, Wuzhou University.

Gene Regulatory Network Reconstruction from Yeast …


References

1. Kandpal, M., Kalyan, C.M., Samavedham, L.: Genetic programming-based approach to elucidate biochemical interaction networks from data. IET Syst. Biol. 7(1), 18–25 (2013)
2. Calzone, L., Barillot, E., Zinovyev, A.: Predicting genetic interactions from Boolean models of biological networks. Integr. Biol. 7(8), 921–929 (2015)
3. Nickless, A., Rayner, P.J., Erni, B., et al.: Comparison of the genetic algorithm and incremental optimisation routines for a Bayesian inverse modelling based network design. Inverse Probl. 34(5) (2018)
4. Zhou, D.N.X., Zhao, Y.H., Baker, J.A., et al.: The effect of alcohol on the differential expression of cluster of differentiation 14 gene, associated pathways, and genetic network. PLoS One 12(6) (2017)
5. Grada, A., Madihally, S., Gasem, K.A.: Transdermal delivery of insulin using novel chemical penetration enhancers designed via in silico, non-linear QSPR modeling, utilizing genetic algorithms and artificial neural networks. J. Invest. Dermatol. 136(8), B12–B12 (2016)
6. Nariman-Zadeh, N., Darvizeh, A., Dadfarmai, M.H.: Adaptive neurofuzzy inference systems networks design using hybrid genetic and singular value decomposition methods for modeling and prediction of the explosive cutting process. AI EDAM 17(4–5), 313–324 (2003)
7. Washington, T.A., Smith, J.L., Grossman, A.D.: Genetic networks controlled by the bacterial replication initiator and transcription factor DnaA in Bacillus subtilis. Mol. Microbiol. 106(1), 109–128 (2017)
8. Hu, P., Liu, M.L., Zhang, D., et al.: Global identification of the genetic networks and cis-regulatory elements of the cold response in zebrafish. Nucleic Acids Res. 43(19), 9198–9213 (2015)
9. Liu, J.H., Wang, X.L., Li, J., et al.: Reconstruction of the gene regulatory network involved in the sonic hedgehog pathway with a potential role in early development of the mouse brain. PLoS Comput. Biol. 10(10) (2014)
10. Kogelman, L.J.A., Kadarmideen, H.N.: Weighted interaction SNP hub (WISH) network method for building genetic networks for complex diseases and traits using whole genome genotype data. BMC Syst. Biol. 8 (2014)

Analysis and Design of Key Parameters in Intelligent System of Lime Rotary Kiln

Tingzhong Wang and Lingli Zhu

College of Information Technology, Luoyang Normal University, Luoyang 471934, China
[email protected]

Abstract. This paper establishes a three-dimensional model based on the structure of the existing preheater, simulates the preheater, optimizes the corresponding structure according to the simulation results, and finally determines the optimal structural parameters. The residence time of lime in the rotary kiln is simulated and studied, and a calculation method for the residence time of the lime rotary kiln is formed. Finally, the paper completes the knowledge acquisition and functional realization of the intelligent design system for the lime rotary kiln.

Keywords: Intelligent system · Design parameters · Cylinder · Lime rotary kiln · Knowledge acquisition



1 Introduction

A rotary kiln is a rotary calciner and belongs to the building-materials class of equipment. According to the type of material treated, rotary kilns can be divided into cement kilns, metallurgical-chemical kilns and lime kilns. They are widely used for the mechanical, physical or chemical treatment of solid materials in building materials, metallurgy, the chemical industry, environmental protection and many other industries [1]. Their use varies from industry to industry: in the building-materials industry they calcine cement clinker; in the chemical industry they produce soda, calcined phosphate fertilizer and barium sulphide; in non-ferrous metallurgy they calcine aluminum hydroxide. Rotary kilns for calcining active lime are generally built in several capacity classes, such as 300 t, 400 t and 100 t, with a calcining-zone temperature of about 1350 °C. Along its length a rotary kiln can be divided into the discharge end, cooling zone, firing zone, preheating zone and feed end, with temperatures ranging from 1000 to 1350 °C, so the requirements for refractories differ from zone to zone [2]. The refractory in the kiln not only withstands thermal shock but also bears erosion and wear from the material and the stress caused by the rotation of the kiln body; therefore, not only must the physical and chemical indexes of the refractory be strictly specified, but the masonry construction must also be strictly controlled. A high-output lime kiln mainly consists of the kiln body, feeding device, distribution device, combustion device, ash-unloading device, electrical and instrument control devices, dust-removal device and so on. The structure and calcination
© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1482–1488, 2019. https://doi.org/10.1007/978-981-13-3648-5_192


form of different lime kilns differ; the process flow is basically the same, but equipment costs vary greatly, and so, naturally, does the performance in use. In modern industrial production, cement clinker is calcined in rotary kilns mostly by the dry process or the wet process. A cement rotary kiln generally consists of a cylinder, supporting devices (with and without retaining wheels), a drive device, a kiln head, a kiln-end seal device, a coal-injection pipe device and so on. Its functions include the following: the fuel combustion device provides the combustion space and thermal field for the clinker, so that the fuel burns completely and a uniform temperature field is provided, meeting the conditions for clinker calcination and the formation of clinker minerals. The rotary kiln equipment is made up of the cylinder body, supporting device, supporting device with retaining wheels, transmission device, movable kiln head, kiln-tail seal device, combustion device and so on. The material is crushed, dried, milled and then calcined in the rotary kiln; the mill is one of the main items of equipment on each production line. Rotary kilns serve industry, agriculture, construction, the chemical industry and other fields, and are suitable for calcining a variety of materials such as lime, active calcium, ceramsite sand, bauxite, kaolin, calcium aluminate, zinc oxide, gypsum and cement. Rotary kilns are getting ever closer to everyday life, yet many people do not understand their role: a rotary kiln turns one substance into another.

2 Final Specification for Use of Lime Rotary Kilns

The calcination operation of the rotary kiln has an important effect on the calcination quality of Portland cement clinker. High-quality clinker is characterized by a high content of the minerals C3S and C2S, a low alkali content, a small average mineral grain size and well-developed clinker. When the raw-meal quality parameters (grinding fineness, particle size distribution, chemical composition, harmful components, rate values and so on) remain constant, the calcination regime of the rotary kiln, that is, the thermal schedule of heating rate, peak temperature, holding time, kiln speed and cooling rate, determines the content and activity of the clinker silicate minerals C3S and C2S and the size of the alite crystals in the clinker; these are mainly determined by the burnability of the raw meal and the calcination conditions of the kiln. Therefore, the effects of coal quality, flame shape and temperature, calcination temperature, firing-zone length, kiln type and specification, kiln speed, heating rate and cooling rate on clinker calcination quality are discussed. The rotary kiln is an open calcining kiln with a simple structure and unobstructed air flow; sulfur in the flue gas is discharged in time, and the sulfur content of the fuel is low, which meets the requirements of steelmaking. At the same time, the material rolls and is heated evenly in the kiln, so product quality is stable, the raw-burning and over-burning rates are very low, and lime of high activity suitable for steelmaking can be calcined. Under the same conditions, the activity of lime produced in a rotary kiln is higher than that of a gas-fired kiln, averaging 340–380 ml and reaching as high as 400 ml.


The thrust roller is the limit device that restricts the rotary kiln from moving up or down along its axis. Because the supporting roller is wider than the kiln tire, the roller and tire ring can move up and down relative to each other and wear evenly; a thrust roller is arranged at the end of the tire ring [3]. The thrust roller acts only as a stop and has no power of its own. When the kiln moves down, the supporting rollers are offset so that their axes form a small angle with the kiln centerline; the rollers then exert an upward force on the kiln body and make the shell move up. Sometimes raw powder is sprinkled on the rollers, or the rollers are wiped clean, to increase the friction coefficient and likewise drive the kiln body upward. When the kiln moves up, the friction between the roller and the tire ring can be reduced by sprinkling graphite powder between them, as described by Eq. (1) [4]:

$B_r = \dfrac{0.202\,E\,(Q + G_r)(i + 1)}{[p_0]^2\,D_r}$    (1)

When the rotary kiln is running, the raw meal is fed in at the kiln tail. Because of the inclination and slow rotation of the cylinder, the material rolls and moves both around the circumference and along the axis of the kiln and is continuously transported towards the kiln head, while pulverized coal is blown in and fired from the kiln head. During this flow the material is continuously heated and completes its physical and chemical reactions [5]; its temperature can reach 1450 °C. The fired clinker is discharged from the kiln head into the cooler. The hot gas formed by heat exchange with the material enters the kiln-tail system from the feed end of the kiln and finally passes from the chimney into the atmosphere, as described by Eq. (2):

$X = \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} = \begin{pmatrix} A_S + N \\ A_U + N \end{pmatrix}$    (2)

The cylinder of the rotary kiln is made of steel plate and lined with refractory material; it is installed at a specified slope to the horizontal and rests on wheels at each supporting device. A large gear ring is fixed by tangential spring plates to the cylinder near the feed end, with a pinion meshing beneath it. In normal operation the main drive motor transmits power through the main reducer to this open gear set to rotate the kiln. Because of the inclination and slow rotation of the cylinder, the material moves both around the circumference and along the axis (from the high end to the low end) as its process is completed. The clinker produced passes through the kiln hood into the cooler for cooling. Fuel is injected at the kiln head, and the exhaust gas produced by combustion exchanges heat with the material and is drawn off at the kiln tail.
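The residence-time calculation developed by the paper is not reproduced in this excerpt. As a hedged stand-in, a classical textbook estimate (the Sullivan / U.S. Bureau of Mines formula, not the paper's own model) relates residence time to the slope, diameter and speed described above; all numerical inputs below are assumed for illustration.

```python
import math

# Classical Sullivan / U.S. Bureau of Mines estimate of rotary-kiln
# residence time (a standard textbook formula, NOT the paper's model):
#   t [min] = 1.77 * L * sqrt(theta) / (s * D * n)
# theta: material angle of repose (deg), s: kiln slope (deg),
# n: rotation speed (rpm), L and D: length and diameter (same unit).
def residence_time_min(L, D, slope_deg, rpm, repose_deg=35.0):
    return 1.77 * L * math.sqrt(repose_deg) / (slope_deg * D * rpm)

# Illustrative inputs (assumed, not taken from the paper):
t = residence_time_min(L=50.0, D=3.0, slope_deg=2.0, rpm=1.5)
print(round(t, 1))  # 58.2 min for these assumed inputs
```

The formula makes the qualitative behavior explicit: steeper slope, larger diameter or faster rotation all shorten the residence time proportionally.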


3 Analysis and Design of Key Parameters in Intelligent System of Lime Rotary Kiln

The main structure of the rotary kiln consists of the cylinder, tire rings, supporting devices, transmission device, and the kiln-head and kiln-tail sealing devices. The cylinder is a multi-section tube welded from steel plates of different thicknesses [6]. The material is thoroughly mixed in the cylinder, where heat exchange and chemical reactions take place, and is then transported to the discharge end. To make the material move effectively, the cylinder is given a certain slope, and it is usually lined with refractory material to protect the shell and retain sufficient heat. The calcining zone is located in the middle of the furnace: as material enters this zone, the blower supplies the right amount of combustion air, the fuel begins to burn and releases a large amount of heat, and the temperature gradually rises to 1100–1200 °C. Calcium carbonate decomposes into calcium oxide and carbon dioxide, and the gas released preheats the charge in the preheating zone. The decomposition rate of the limestone depends on how quickly the carbon dioxide produced at the calcining-zone temperature is carried away, and it is also related to the fuel ratio and the incoming air. When the decomposition depth matches the particle size, the limestone is burned thoroughly; oversized lumps tend to be under-burned (raw), while undersized particles tend to be over-burned.
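The decomposition reaction described above fixes a simple theoretical mass balance, which can be checked directly from the molar masses:

```python
# Mass balance of the calcination reaction CaCO3 -> CaO + CO2,
# checked from molar masses (g/mol).
M_CACO3, M_CAO, M_CO2 = 100.09, 56.08, 44.01
lime_yield = M_CAO / M_CACO3    # kg CaO per kg pure CaCO3
co2_release = M_CO2 / M_CACO3   # kg CO2 per kg pure CaCO3
print(round(lime_yield, 2), round(co2_release, 2))  # 0.56 0.44
```

So pure limestone loses about 44% of its mass as CO2 on full calcination, which is why the CO2 must be carried away quickly for decomposition to proceed.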
Specialized manufacturers of high-yield and large lime kilns combine advanced foreign technology with domestic research to design high-quality, high-output lime kilns, mainly in three series (coal-fired shaft kilns, gas-fired shaft kilns and rotary kilns) comprising more than a dozen kiln types, with daily outputs of 100–700 t. The special equipment includes a top distributor, a multi-channel adjustable ash discharger, a computer-designed hood, a sealed ash-unloading machine, an electronic weighing device, a hoisting device, dust-removal and desulphurization equipment, and a fully automatic control system, as described by Eq. (3) [7]:

$\sigma = \dfrac{S_a\,M_I}{W} = \dfrac{S_a\,M_I}{0.1\,d_e^3} \le [\sigma_1]$    (3)

The inlet at the kiln head and the outlet at the kiln end are made of ordinary steel, with no strict requirement on the material [8]. A tug support is designed at the bottom of the kiln end for convenient maintenance. The kiln body accounts for most of the production work of the rotary kiln and must be made of manganese steel, a high-strength wear-resistant steel used under harsh working conditions involving impact, extrusion and material wear; ordinary steel does not meet such requirements and cannot be used. The large rollers, supporting wheels and retaining wheels must be made of No. 45 steel and used after annealing in order to meet the strength requirements, and the large and small gear rings must


adopt No. 45 steel as well. The hardness is in the range HB170–210, and the ring-gear teeth can only be used after multi-stage heat treatment. The refractory bricks lining the rotary kiln have a specific gravity of 2.5–2.9 and a service temperature of 500–1300 °C; the choice of refractory brick depends on the material being calcined and the size of the kiln body.

4 Experiments and Analysis

In general, the coal quality required for rotary-kiln calcination is ash A ≤ 30%, volatile matter V of 18–30%, lower heating value QDW ≥ 5000 kcal/kg, and pulverized-coal fineness of 8–15%. In fact, high-quality coal is currently in tight supply in China and relatively expensive, so many manufacturers cannot meet this requirement. After burning, the ash of the pulverized coal falls entirely onto the surface of the clinker particles in the firing zone, causing silicification of the clinker surface; this changes the mineral composition of the clinker surface layer, decreasing the C3S content and increasing the C2S content, and thus affects clinker quality. The corresponding countermeasures at present are as follows. One is to increase the amount of coal fed to the calciner of the dry kiln and reduce the amount injected at the kiln head [9], controlling the ratio at about 6:4, so as to increase the mixing of coal ash with the burnt raw material in the calciner and reduce the negative influence of coal ash on clinker quality. Another is to control the coal quality of the kiln-end calciner and the kiln head separately: feeding low-calorific-value coal to the calciner and high-calorific-value coal to the kiln head reduces the adverse effect of poor coal on the clinker quality at the kiln head. Fine-grained limestone of 10–50 mm can be calcined directly in the rotary kiln, and the 0–30 mm fine fraction, about 30–40% of total output, cannot be used in other kiln types [10]. With the trend towards "concentrate" iron and steel raw materials, limestone is gradually being replaced by quicklime, and fine-grained limestone cannot otherwise be comprehensively utilized. Building rotary-kiln production lines therefore makes full use of high-quality limestone resources and meets the sustainable-development policy of the lime industry, as described by Eq. (4):

$\phi_{j,i}(t) = 2^{j/2}\,\phi(2^{j} t - i), \quad i \in \mathbb{Z}$    (4)

When the kiln tire of the rotary kiln moves down and touches the Y1 switch, the hydraulic system starts to act: it pushes for 1 min and stops for 4 min, repeating this 1-min-push, 4-min-stop cycle until the kiln tire ring reaches the Y5 position. The kiln then begins to move back: the hydraulic pressure is relieved for 2 min and held for 4 min, and this action is likewise repeated. The whole process then starts over again. When the hydraulic system stops operating, its internal pressure remains unchanged. The other end of the high-speed shaft of the main reducer carries an auxiliary transmission device, whose motor drives the high-speed shaft of the main reducer through an auxiliary reducer to rotate the kiln cylinder at a low speed. The role of the auxiliary drive is to make the kiln tube


stay in a specified position when the kiln is shut down for overhaul and bricklaying, as described by Eq. (5):

$\bar{P}^{(b)}(1|0) = \Phi_w(0)\,P^{(b)}(0)\,\Phi_w^{T}(0) + \bar{Q}_w(0) = \Phi_w(0)\,P_0\,\Phi_w^{T}(0) + \bar{Q}_w(0) = W_X \left[ \Phi(0)\,P_0\,\Phi^{T}(0) + Q(0) \right] W_X^{T} = W_X\,P^{(v)}(1|0)\,W_X^{T}$    (5)

The supporting device is an important part of the rotary kiln. It bears the whole weight of the kiln cylinder and positions it so that it runs safely and smoothly. Two requirements must be met: first, the centerline of the installed cylinder must be a straight line; second, the two supporting wheels of the same station must carry the same force so that both rollers wear evenly.
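The timed push/hold hydraulic cycle described above can be modelled as a simple state machine. The per-step minutes come from the text; the number of pump pulses needed to traverse from switch Y1 to Y5 is an assumption for illustration.

```python
# The timed hydraulic cycle from the text as a simple state machine.
# Minutes per step come from the text; the number of pump pulses needed
# to traverse from switch Y1 to Y5 is an assumption for illustration.
PULSES_PER_TRAVERSE = 5

def traverse_minutes(act_min, hold_min, pulses=PULSES_PER_TRAVERSE):
    """Total minutes for one traverse: act (push or relieve), then hold."""
    return pulses * (act_min + hold_min)

up = traverse_minutes(1, 4)    # push 1 min, stop 4 min, until Y5
down = traverse_minutes(2, 4)  # relieve 2 min, stop 4 min, back down
print(up, down)  # 25 30
```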

5 Summary

When material enters the rotary kiln at the kiln tail, the rotating cylinder exerts forces on it and the material gradually moves towards the kiln head. When the kiln stops, the material rests in the lower part of the kiln cross-section; when the kiln turns again, internal friction carries the material up with the kiln body to a certain position before it rolls down to form a new material surface. Because the filling rate and rotary speed differ along the kiln, the travel speed differs between zones: it is fastest in the decomposition zone and slowest in the burning zone.

Acknowledgements. This paper is supported by Scientific and Technological Key Projects in Henan Province (152102210123).

References

1. Wang, H.-H.: Mechanical Behavior Analysis of Large Rotary Kiln Supporting System. East China University of Technology (2012)
2. Li, X.: Research on equal life optimization of rotary kiln support system based on axis inspection. J. Appl. Mech. (2014)
3. Li, X.: The Influence of the Supporting Wheel Deflection of Large-Scale Rotary Kiln on Maximum Contact Stress. Hunan Science and Technology University (2015)
4. Zhang, W.: Repair and Adjustment Method of Rotary Kiln Supporting Wheel Device. Hunan Zhongye Changtian Heavy Industry Technology Co., Ltd. (2014)
5. Zhu, L., Zhi, X., Qiao, B.: Research and development of active lime precalciner. Manuf. Autom. (2014)
6. Serroukh, A., Walden, A.T.: Wavelet scale analysis of bivariate time series I: motivation and estimation. J. Nonparametr. Stat. 13, 1–36 (2011)


7. Wang, Z., Ho, D.W.C., Liu, X.: Variance-constrained control for uncertain stochastic systems with missing measurement. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 35(5), 746–753 (2005)
8. Dong, P.: Study based on SPSS of the organizational factors of the supervision staffs. AISS 5(9), 98–104 (2013)
9. Magni, L., de Nicolao, G., Magnani, L., et al.: A stabilizing model-based predictive control algorithm for nonlinear systems. Automatica 37(9), 1351–1362 (2001)
10. Zhu, L., Zhang, K., Qiao, B.: Research on key parameters of active lime rotary kiln and development of intelligent design system. Mine Mach. 1, 100–108 (2012)

Study on the Environmental Factors Affecting the NOx Results of Heavy Duty Vehicle PEMS Test

Huang Liyan, Liu Gang, Wang Detao, and Zhang Xian

Beiqi Foton Motor Co., Ltd., Beijing 102206, China
[email protected]

Abstract. Portable emissions measurement system (PEMS) testing has become an important means of testing and researching vehicles' actual road emissions. This paper takes a heavy vehicle with SCR aftertreatment technology as the research object. By comparing the emission of NOx pollutants at different altitudes, temperatures and loads, the factors that affect the NOx results of heavy-vehicle PEMS tests are studied.

Keywords: Heavy vehicle · Low temperature · High altitude · Load · Vehicle emission test system (PEMS)

It was found that NOx emissions decreased with increasing load. NOx pollutant levels in high-altitude areas were slightly higher than in the plain area, but showed no significant correlation with altitude. To meet the requirements under every environmental boundary condition stipulated by the regulations, vehicle emission results should be verified under the worst conditions (low temperature, high altitude and low load). Moreover, chassis dynamometer simulation should be used only for initial calibration; the results must later be verified on the actual road.

1 Foreword

With the continuous increase of passenger and cargo traffic on China's roads and the effective control of light-duty petrol vehicles, heavy-duty diesel vehicles have become the focus of pollution control [1]. In recent years, to further control emissions under actual use conditions, the management departments have introduced stricter new-vehicle emission limits and have vigorously strengthened compliance inspections. Judging from the situation in various countries, effective control of in-use vehicle emissions, ensuring compliance with regulatory requirements throughout the normal use period, is the fundamental approach [2]. Through research on the in-use conformity of heavy vehicles, the United States and the European Union have adopted the portable emissions measurement system (PEMS) for in-use vehicle emissions testing: the vehicles are driven entirely under actual road conditions, and pollutant emissions are tested and recorded in real
© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1489–1496, 2019. https://doi.org/10.1007/978-981-13-3648-5_193


time. This method effectively supervises the emission control of heavy-duty vehicles in actual operation. Since PEMS research has only just begun in China, it still remains largely theoretical and lacks testing and research in the actual environment. Therefore, this paper compares the NOx emissions of vehicles at different altitudes, ambient temperatures and loads, and studies the factors that affect the NOx emissions of heavy-duty vehicles measured with PEMS.

2 Introduction

2.1 Test Equipment

In recent years, on-board emission testing technology has continuously improved and matured, and PEMS has been introduced into the regulations of the United States, the European Union and China. Many mature PEMS products are available worldwide, with measurement accuracy gradually approaching that of laboratory emission test equipment. The PEMS used in this article is the OBS-2200 produced by HORIBA, Japan. The main emissions of diesel engines are CO, HC (in small amounts), NOx and particulate matter (PM), of which NOx and PM are the most important pollutants. The national standards, the Beijing local standard, and the American and European regulations on pollutant emissions of heavy-duty vehicles all focus on NOx. Therefore, this article focuses on the NOx emissions of heavy-duty vehicles; consistent with the national standards, the results are expressed in g/kWh.

2.2 Test Vehicle

In China, SCR is the most important technical route for the aftertreatment system; it has the advantage of high fuel efficiency and is the route adopted by most domestic manufacturers. Therefore, the aftertreatment system of the vehicle studied in this paper is an SCR.
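The brake-specific (g/kWh) form used by the standards is obtained by integrating instantaneous NOx mass flow and engine work over the cycle; a minimal sketch with synthetic 1 Hz series (placeholders, not measured data):

```python
import numpy as np

# Brake-specific NOx (g/kWh): total NOx mass over the cycle divided by
# total engine work. The 1 Hz series below are synthetic placeholders.
dt = 1.0                                 # s, sampling interval
nox_gps = np.full(600, 0.002)            # NOx mass flow, g/s
power_kw = np.full(600, 60.0)            # engine power, kW

nox_g = nox_gps.sum() * dt               # g emitted over the cycle
work_kwh = power_kw.sum() * dt / 3600.0  # kWh of work over the cycle
bs_nox = nox_g / work_kwh                # brake-specific NOx, g/kWh
print(round(bs_nox, 2))                  # 0.12 for these constant placeholders
```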

3 Research of Influencing Factors

3.1 Effect of Ambient Temperature on Emission Results

3.1.1 Test Method

The test method mainly follows the heavy-duty chassis dynamometer emission test method in the Beijing local standard DB11/965-2017. The vehicle runs the C-WTVC cycle of GB/T 27840-2011. Comparison tests are conducted with the same measurement equipment, the same test vehicle and the same test personnel at different ambient temperatures: −15 °C, −10 °C, −7 °C, −3 °C, +2 °C, +10 °C, +20 °C and +30 °C. The altitude is 45 m above sea level.


The test vehicle continuously runs three C-WTVC cycles as one complete test. To ensure the repeatability of the results, a statistical check is made on the collected data, each set of valid data comprises no fewer than three runs, and the arithmetic average of the valid data is taken as the result. The emission test starts simultaneously with the cycle, and the on-board measurement system records the vehicle's emission data over the cycle. Measurement data are rounded off according to GB/T 8170-2008 "Rules of rounding off numerical values and expression and judgment of limiting values" and processed according to GB/T 4883-2008 "Statistical interpretation of data-Detection and treatment of outliers in the normal sample". If the measured data do not meet the requirements of the standard, further trials are added.

3.1.2 Test Results

Figure 1 shows the results of NOx emissions at different temperatures. The emission of NOx pollutants shows a downward trend from −15 °C to 30 °C. For vehicles with the SCR aftertreatment route, the injection temperature of the aftertreatment directly affects the NOx result: at low ambient temperature, the exhaust temperature is lowered and the aftertreatment may not reach its start-up temperature.

Fig. 1. The results of NOx emissions at different temperatures
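The statistical screening prescribed by the test method (outlier rejection, then an arithmetic mean of at least three valid runs) can be sketched as follows; the run values and the Grubbs critical value (n = 4, alpha = 0.05, from standard tables) are illustrative assumptions, not the paper's data:

```python
import statistics

# Sketch of the averaging-with-outlier-screening the method describes:
# a Grubbs-style check, then the arithmetic mean of the valid repeats.
# Runs and critical value are illustrative assumptions.
runs = [1.76, 1.62, 1.71, 2.95]                 # NOx, g/kWh
mean, sd = statistics.mean(runs), statistics.stdev(runs)
g = max(abs(x - mean) / sd for x in runs)       # largest studentized deviation
G_CRIT = 1.481                                  # Grubbs critical value, n=4
valid = runs if g <= G_CRIT else [x for x in runs if abs(x - mean) / sd < g]
result = round(statistics.mean(valid), 2)
print(result)  # 1.7: the gross outlier 2.95 is dropped before averaging
```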

3.2 Impact of Load on Emissions Results

3.2.1 Test Method

The test method mainly follows the Beijing local standard DB11/965-2017, using the same measurement equipment, the same test vehicle and the same test personnel, in the same area, on the same route and under different loads.

3.2.2 Test Results

Table 1 shows the NOx results under different loads. As the load increases, NOx emissions decrease. Figures 2 and 3 show the third and fifth test cases, respectively. As can be seen from Fig. 2, NOx emissions are high in the urban and suburban sections. After the exhaust temperature rises to 200 °C in the high-speed section, NOx drops significantly. In the low-temperature urban section, the SCR aftertreatment

Table 1. The results of NOx pollutants under different loads

No. | Load (%) | Ambient temperature (°C)     | NOx (g/kWh) | Urban (%) | Suburban (%) | Highway (%)
1   | 10       | Lowest: −12.5; Highest: −6.6 | 1.76        | 48.2      | 25.0         | 26.8
2   | 10       | Lowest: −7.7; Highest: −5.2  | 1.19        | 46.5      | 24.1         | 29.4
3   | 10       | Lowest: −13.3; Highest: −4.1 | 1.62        | 44.5      | 25.9         | 29.7
4   | 50       | Lowest: −9.2; Highest: −3.9  | 1.15        | 43.2      | 27.9         | 28.9
5   | 100      | Lowest: −10.7; Highest: −5.0 | 0.88        | 41.9      | 27.9         | 30.2
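A quick least-squares check on the load and NOx columns of Table 1 confirms the downward trend reported in the text:

```python
import numpy as np

# Least-squares trend of the Table 1 data: NOx (g/kWh) versus load (%).
load = np.array([10, 10, 10, 50, 100], dtype=float)
nox = np.array([1.76, 1.19, 1.62, 1.15, 0.88])
slope, intercept = np.polyfit(load, nox, 1)
print(slope < 0)  # the fitted slope is negative: NOx falls as load rises
```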

Fig. 2. The third test case (urban, suburban and highway segments)

Fig. 3. The fifth test case (urban, suburban and highway segments)

cannot reach its start-up temperature (180 °C), because the average exhaust temperature in urban conditions is only about 120 °C. In the suburban section urea injection begins, but the exhaust temperature is still low, so the spray effect is poor and NOx remains high; only after the high-speed section, when the SCR operates normally, does NOx fall. It


can be seen that low ambient temperature lowers the exhaust temperature and leads to excessive emissions. Comparing Fig. 2 with Fig. 3 shows that increasing the load raises the exhaust temperature significantly in the medium- and low-speed sections, resulting in a marked decrease in NOx emissions. From Fig. 4 it can be seen that the brake-specific NOx emission without urea injection is 2.7–4.4 times that with normal urea injection.

Fig. 4. The comparison of NOx emissions before and after injection of urea under different loads

3.3 Influence of Altitude on Emissions Results

China's high-altitude areas are very extensive: areas above 1000 m occupy more than 65% of the country's land area, and areas above 2000 m more than 33%. For a diesel engine, atmospheric pressure is an important factor affecting its overall performance. China's current heavy-duty diesel emission regulations specify that the test conditions for emissions certification of heavy-duty engines be no more than 1000 m above sea level (or the equivalent atmospheric pressure of 90 kPa), so the impact of altitude on engine emissions has not been considered from the point of view of certification. According to China's altitude distribution and the distribution of motor-vehicle ownership, the heavy-duty vehicle standard (6th phase) is limited to an altitude of 2400 m, in line with the upper altitude limit specified in the China 6 standards for light-duty vehicles. Therefore, the NOx emissions of the same vehicle at different altitudes were studied.

3.3.1 Test Method

Lacking an altitude test facility, the study of the effect of altitude on emission results mainly used the same driver, the same vehicle and the same road composition at different altitudes.

3.3.2 Test Results

Figure 5 shows the emission test results at different altitudes. NOx pollutants in high-altitude areas are slightly higher than those in plain areas, but there is no obvious correlation with altitude. The main reason is that the nitrogen oxides from vehicle engines contain NO and NO2, mostly NO, which is produced from N2 at high combustion temperatures. High temperature, long duration, and an oxygen-enriched state during


H. Liyan et al.

Fig. 5. The emissions test results at different altitudes

combustion are the three factors that generate NOx. The main feature of the plateau is the thin air, which mainly leads to deterioration of power and fuel consumption, incomplete combustion, and increased CO, with little impact on NOx emissions [4].

3.4

Impact of the Gradient Variation on Emissions Results

The NOx emissions measured by the chassis-dynamometer simulation method and by the actual road test method are compared in order to study the impact of gradient variation on the emissions results.

3.4.1 Test Method
On the actual road, a PEMS test measures NOx emissions while simultaneously recording the road spectrum (vehicle speed, latitude, and longitude); the road spectrum is then imported into the chassis dynamometer to form a simulated road-condition curve. Using the same measuring equipment, test vehicle, and test personnel, the data of the vehicle's powertrain (load rate, engine speed, and driving gear position) were monitored to compare the actual road with the chassis dynamometer.

3.4.2 Test Results
Table 2 shows the test results of the actual road method and the chassis-dynamometer simulation method. The comparison shows that the NOx emission results from the chassis-dynamometer simulation are significantly lower than the actual road test results.

Table 2. The test results of the actual road method and the chassis-dynamometer simulation method

Condition                        Load       WBW-NOx result (90th percentile)  Threshold value of power  Percentage of valid windows
Actual road                      Half load  3.40                              0.20                      0.78
Chassis dynamometer simulation   Half load  2.24                              0.20                      0.74
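Table 2 reports a 90th-percentile NOx over work-based windows, a power-validity threshold, and the share of valid windows. The sketch below is a minimal, hypothetical version of such a window evaluation (window closure at a reference work, validity when average window power exceeds 20% of rated power); it is not the exact procedure of DB11/965-2017, and the data layout and closure rule are assumptions.

```python
# Hypothetical sketch of a work-based-window (WBW) NOx evaluation.
# Window boundaries, the power-validity rule, and the data layout are
# illustrative assumptions, not the regulation's exact procedure.

def wbw_nox(time_s, nox_g_per_s, power_kw, ref_work_kwh, rated_power_kw,
            power_threshold=0.20):
    """Return (90th-percentile brake-specific NOx in g/kWh, share of valid windows)."""
    valid, total = [], 0
    n = len(time_s)
    for start in range(n):
        work_kwh = nox_g = power_sum = 0.0
        count = 0
        for i in range(start, n - 1):
            dt_s = time_s[i + 1] - time_s[i]
            work_kwh += power_kw[i] * dt_s / 3600.0   # integrate engine work
            nox_g += nox_g_per_s[i] * dt_s            # integrate NOx mass
            power_sum += power_kw[i]
            count += 1
            if work_kwh >= ref_work_kwh:              # window closes at the reference work
                total += 1
                if power_sum / count >= power_threshold * rated_power_kw:
                    valid.append(nox_g / work_kwh)    # brake-specific NOx of a valid window
                break
    if not valid:
        return None, 0.0
    valid.sort()
    idx = int(round(0.9 * (len(valid) - 1)))
    return valid[idx], len(valid) / total
```

With 1 Hz data, one window opens at every sample, mirroring how the "percentage of valid windows" column in Table 2 can be below 1.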

Study on the Environmental Factors Affecting the NOx Results …


From Fig. 6, it can be seen that the relationship between the exhaust temperature and the NOx pollutants is independent of the test site.

Fig. 6. Relationship between exhaust temperature and NOx emission

It can be seen from Fig. 7 that when the gradient variation is not severe, the driver can easily follow the target speed curve, and the engine speed and instantaneous fuel consumption simulated on the chassis dynamometer agree well with the actual road data.

Fig. 7. Comparison of engine data when gradient variation is not great

It can be seen from Fig. 8 that when the gradient variation is severe, the driver cannot keep up with the target speed curve (especially when shifting gears). In this case, the consistency between the indoor simulation and the data collected on the actual mountain road is poor, with large deviations.

Fig. 8. Comparison of engine data when gradient variation is great


4 Conclusion

From the perspective of testing, the factors that affect vehicle emissions mainly include load, ambient temperature, and altitude. Accordingly:

(1) In order to ensure that heavy-duty vehicles meet the regulatory requirements under any environmental boundary conditions during conformity-of-production and in-service inspection, the worst-case conditions (low temperature, high altitude, low load) should be verified as far as possible during the design calibration process;
(2) The chassis dynamometer cannot simulate road gradient and therefore does not match the actual situation. The chassis-dynamometer simulation can only be used for initial calibration and needs to be verified on the actual road afterwards. To bring the chassis-dynamometer simulation closer to the actual road, the simulation software needs to be improved: at present, the time–velocity curve method cannot reflect gradient changes, which affects the test results.

From a design point of view, for heavy-duty vehicles equipped with SCR, the main factor affecting NOx emissions is the exhaust temperature. Therefore, in order to ensure compliance with regulatory requirements, improvements can be made in the following areas:

(1) In the diesel engine calibration, raise the exhaust temperature quickly, for example by properly adopting a post-injection strategy or by increasing intake throttling;
(2) To minimize the heat loss from the exhaust manifold to the catalyst section, use a material with better heat-insulation performance in this section;
(3) Consider using a copper molecular-sieve catalyst instead of a vanadium-based catalyst to increase the low-temperature activity of the SCR catalyst.

References

1. Bao, X., Hu, J.: Research on the on-board emission test method for the compliance of in-service heavy vehicles. J. Environ. Eng. Technol. (2011)
2. Li, M.: The development direction of vehicle emissions testing pollutant measurement—on-board emission measurement. Environmental Protection and Energy Conservation (2006)
3. DB11/965-2017: Limits and measurement method of emissions from heavy-duty vehicles (PEMS method, phases IV and V) (2017)
4. Vehicle Emissions and Control Technology, 2nd edn. People's Communication Press (2012)

Analysis and Expected Effect of the Phase III Fuel Consumption Standard for Light Duty Commercial Vehicles

Wang Zhao, Bao Xiang, and Zheng Tianlei

China Automotive Technology and Research Center Auto Standardization Research Institute, Tianjin 300300, China
[email protected]

Abstract. The process of formulating the Phase III fuel-consumption limit standard for light-duty commercial vehicles in China is described. The gap between the fuel-consumption level of China's light-duty commercial vehicles and the international advanced level is analyzed. The process of determining the overall energy-saving target of the standard and the basis for determining the limit program are introduced, and the reasons for changing the standard's evaluation parameters are explained. Finally, the energy savings brought about by the implementation of the Phase III standard are predicted based on parameters such as output, fuel-consumption level, annual mileage, and years of service.

Keywords: Light-duty commercial vehicle · Energy saving · Fuel consumption

1 Introduction

Light-duty commercial vehicles refer to N1 and M2 vehicles with a maximum design total mass of not more than 3.5 tons, including light trucks, pick-up trucks, and light buses. In recent years, the Chinese auto industry has continued to develop at high speed. The total amount of fuel consumed by automobiles has continued to grow and has become the main component of China's new oil consumption. In 2015, China's apparent oil consumption was about 541 million tons, of which net imports of crude oil were 330 million tons; foreign dependency was 60.9%, and it can be predicted that the proportion of automotive fuel consumption in China's oil consumption will continue to increase. The energy and environmental problems caused by fuel consumption have become increasingly prominent. How to deal with the rapid development of the auto industry, the continued expansion of car ownership, and the resulting energy and environmental issues is related not only to the future competitiveness and sustainable development of the Chinese auto industry, but also to China's energy security in the coming period. The "Limits of Fuel Consumption for Light-duty Commercial Vehicles" was issued in 2007 and has played an important role in reducing the fuel consumption of light commercial vehicles. Regulation of fuel consumption is developing worldwide, and major countries and regions are working to develop more

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1497–1507, 2019. https://doi.org/10.1007/978-981-13-3648-5_194


stringent energy conservation laws and regulations. China's "Energy-saving and New Energy Vehicle Industry Development Plan", issued in 2012, clearly stated that commercial vehicles should also reach the international advanced level by 2020. The Phase III fuel consumption standard for light commercial vehicles, the "Limits of Fuel Consumption for Light-duty Commercial Vehicles" (GB 20997-2015), was officially released in 2015. The standard states that from January 1, 2018, all newly certified vehicle models must meet the Phase III requirements, and from 2020, all vehicles must meet them.

2 Determination of Overall Energy-Saving Targets

2.1

Effect of Different Evaluation Systems on Fuel Consumption

The fuel consumption evaluation system has an important influence on the effect of standard implementation and on actual fuel-consumption levels. Under a corporate-average fuel consumption evaluation system, not every product must meet the target value; some products are allowed to exceed it. Therefore, the actual average fuel consumption is generally close to the target value. Under a fuel-consumption limit system for individual models, however, the limit value constrains every vehicle: all vehicles must meet the limit, so the actual average fuel-consumption level is usually better than the limit value. This has been demonstrated in the evaluation and analysis of the fuel-consumption limit standards for passenger cars and light-duty commercial vehicles in China. As shown in Fig. 1, taking passenger cars as an example, the average fuel consumption of passenger cars was 10–15% lower than the limit value after the "Fuel Consumption Limits for Passenger Cars" (GB 19578-2004, Phase II) was fully implemented. As shown in Fig. 2, a similar conclusion holds for light commercial vehicles: taking N1 diesel vehicles, which account for the major market share in China, as an example, after the 2011 standard was fully implemented, the fuel consumption of N1 diesel vehicles was about 12% lower than the Phase II limits. This means that when the limit value and the target value are numerically the same, a fuel-consumption limit evaluation system is effectively 10–15% stricter than a corporate-average evaluation system, and the resulting actual fuel-consumption level is 10–15% lower.

2.2

Energy Saving Targets for Light-Duty Commercial Vehicles in China

In the standard pre-research, it was found that the decline in the fuel consumption of light-duty commercial vehicles in recent years was very limited, as shown in Fig. 3. Therefore, in setting the overall energy saving target for 2020, we must not only take into account the sustainable and healthy development of the industry, but also consider


Fig. 1. Comparison of average fuel consumption and limits of passenger cars in 2010

Fig. 2. The extent to which the average fuel consumption of N1 diesel vehicles is lower than the Phase II limit

the national overall expectations and requirements for energy-saving work. Considering the adaptability of existing technologies, the predicted development of future energy-saving technologies, and the specific characteristics of the light-duty commercial vehicle market in China, the tightening of the limits was determined following the principle that the energy-saving targets be consistent with those in the Plan. As mentioned above, major countries and regions around the world have set energy-saving targets for 2020 and beyond for the fuel consumption of light-duty commercial vehicles, and a roughly 20% reduction from the 2012 fuel-economy level is broadly common to all of them. The EU target of 147 g/km for new light-duty commercial vehicles in 2020 is a huge challenge for the companies concerned. Although no implementation measures have been announced yet, based on comprehensive information and with reference to the implementation of the 2017 energy-saving targets, it can be expected that the gradual



Fig. 3. The trend of fuel consumption of light-duty commercial vehicles in China from 2010 to 2012

introduction plan will continue to be used, increasing the proportion of models that must reach the target year by year. This means that by 2020, the EU's fuel-consumption target for light commercial vehicles will not yet be fully implemented. In addition, considering the coordination of fuel-consumption and emission standards, the light-duty commercial vehicle fuel-consumption standard was planned to apply to newly certified vehicle models from 2018 onwards. Therefore, when comparing the fuel-consumption requirements for light-duty commercial vehicles in China and the EU, it is more appropriate to use 2018 as the benchmark. According to the previous analysis of the relationship between limits and targets, setting China's light-duty commercial vehicle limits about 15% looser than the EU's 2018 targets achieves the same energy-saving effect. On this basis, considering the actual situation of the light-duty commercial vehicle market in China and following the principle of energy-saving targets consistent with the Plan, the working group agreed that the fuel consumption of China's new commercial vehicles in 2020 should be at least 20% lower than in 2012, using the EU's 2018 target as a reference.

2.3

Limits Proposal

The light-duty commercial vehicle fuel-consumption limit proposal is based on the 2010–2014 light-vehicle fuel-consumption label record database and was determined through systematic analysis of the key features, energy-saving technologies, and fuel-consumption levels of light-duty commercial vehicles in China.

2.3.1 Comparison of N1 Diesel Vehicles with the International Advanced Level
China's light-duty commercial vehicles are mainly N1, and almost all new commercial vehicles in the EU are diesel products, which form the basis for the EU CO2 emission targets; therefore, N1 diesel vehicles are taken as the object of comparison with the EU target. Based on the analysis method described above, taking 2012 as the reference year and the European target as the benchmark, the required decline in the fuel consumption


of N1 diesel vehicles was 27%; the corresponding declines for N1 gasoline vehicles, M2 gasoline vehicles, and M2 diesel vehicles were 23%, 18%, and 18%, respectively.

2.3.2 Limits Compliance Rate Analysis
Against the proposed limits, the 2012 compliance rate of N1 gasoline vehicles was approximately 10.5%, of N1 diesel vehicles about 8.4%, of M2 gasoline vehicles 0%, and of M2 diesel vehicles 9.9%. In 2015, the compliance rate of N1 gasoline vehicles was approximately 27%, of N1 diesel vehicles about 18.5%, of M2 gasoline vehicles 5.8%, and of M2 diesel vehicles about 20.5%. The 2012 fuel-consumption distribution and the limit proposal for commercial vehicles are shown in Fig. 4.
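The compliance-rate calculation above, and the contrast with the corporate-average system of Sect. 2.1, can be sketched as follows; the model names, fuel-consumption values, and sales figures are invented for illustration.

```python
# Invented example fleet; fuel consumption in L/100 km, sales in units.
models = [("A", 7.2, 40_000), ("B", 7.9, 25_000), ("C", 8.4, 10_000)]
limit = 8.0  # per-model limit, also used here as the corporate-average target

# Per-model limit system: the compliance rate is the share of models
# whose certified fuel consumption meets the limit.
compliance_rate = sum(1 for _, fc, _ in models if fc <= limit) / len(models)

# Corporate-average system: only the sales-weighted mean must meet the
# target, so individual models may exceed it.
total_sales = sum(s for _, _, s in models)
fleet_avg = sum(fc * s for _, fc, s in models) / total_sales

print(round(compliance_rate, 2))  # 0.67: model C exceeds the limit
print(round(fleet_avg, 2))        # 7.59: the fleet average still meets the 8.0 target
```

The example shows why a limit system with the same numerical value binds more tightly: model C would fail the limit outright, while the fleet average comfortably meets the target.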

Fig. 4. The fuel consumption data in 2012 and the limit proposal

From the perspective of technical reserves, light-duty commercial vehicles have long relied on price competition, and technological progress has been very slow. Outdated engines such as the 491 and 4JB1, introduced from Japan nearly 30 years ago and long since phased out abroad, are still the mainstay of the market, seriously hindering the market introduction of new technologies and new products. This has formed a vicious circle of bad money driving out good, and has kept the fuel consumption of light-duty commercial vehicles in China at a high level for a long time. Decisive and effective measures must be taken to promote comprehensive technological innovation. In addition, in order to further verify the technical feasibility of the limit proposal, during the drafting of the standard a statistical analysis of the best 5, 10, and 15% of the fuel-consumption values in the existing database was also performed in a "top runner" manner and compared with the lowest-fuel-consumption models offered by the major light-duty commercial vehicle companies. The results show that the "top runner" values and the low-consumption models provided by the companies agree well with the limits proposal and can meet the requirements of the new standard.

2.4

Comparison with European Fuel Consumption Target

As shown in Figs. 5 and 6, from a numerical point of view, the fuel-consumption limits for most of the mass segments of N1 and M2 diesel vehicles in China are on average 16% and 12% looser, respectively, than the EU targets, but in the heavy mass segments they are slightly stricter than the EU targets.

Fig. 5. Comparison of N1 limits and targets in the EU

Fig. 6. Comparison of M2 limits and targets in the EU

As analyzed earlier, because of the difference in actual binding force between individual-vehicle limits and corporate-average targets, a fuel-consumption limit value 10–15% higher than a corporate-average target is equivalent in effect. Taking into account the mass-distribution characteristics of N1 and M2 diesel vehicles in China, it can be considered that the "Limits of Fuel Consumption for Light-duty Commercial Vehicles" further narrows the gap between China and the international advanced level and is close to the EU CO2 emission regulations for 2018.

3 Expected Effect of the Standard

The specific calculation process is as follows. First, using existing data as the basis for extrapolation, the average fuel-consumption levels of the various vehicle types from 2015 to 2030 are estimated according to the conclusions of the economic analysis of energy-saving technologies, the planning results in industrial policy documents, and the results of industry surveys. Second, the production of the various vehicle types from 2015 to 2030 is forecast according to market development and industry analysis. Then, according to the survey results, parameters such as vehicle running time and average annual mileage are set for the different vehicle types. Finally, the parameters are put into the calculation model to generate the results. Light-duty commercial vehicles are classified into gasoline and diesel according to fuel type, and into N1 and M2 vehicles. The formula is:

C_p = Σ_{i=1}^{N} [(FC_R − FC_i) × V_i × D / (100 × 1000)] × ρ    (1)

in which:
C_p   saved fuel consumption compared to the base year;
i     the number of the year;
N     average years of running;
V_i   sales in year i;
FC_i  average fuel consumption in year i;
FC_R  average fuel consumption in the reference year;
D     average annual mileage;
ρ     fuel density.

The formula for CO2 emissions is:

C_CO2 = C_p × k_f    (2)

in which:
C_CO2  total CO2 emissions;
k_f    conversion factor between fuel and CO2: 23.8 for gasoline, 26.1 for diesel.

Through investigation and visits, this paper collected and analyzed data on the production, sales, population, and annual mileage of light-duty commercial vehicles, and established a calculation model on this basis. The calculation model can be used to compute the fuel consumption saved by all vehicles in a given year after the implementation of the standard.


The Phase III standard for light-duty commercial vehicles was released in 2015 and applies to newly certified vehicles from 2018. Therefore, 2016 is set as the starting year for calculating each manufacturer's savings. The parameters are set as follows according to the input conditions of the calculation model, and the results are shown in Table 3.

(1) Production data for 2015 and 2016 come from the "China Automotive Industry Yearbook". The average annual growth rate for 2017–2030 is forecast at 4%, and the output forecasts for each year are shown in Table 1.

Table 1. Forecast of light-duty commercial vehicle production 2015–2030 (Unit: 10,000)

Year  N1-gasoline  N1-diesel  M2-gasoline  M2-diesel
2015  16.2         146.0      34.5         6.1
2016  16.9         151.9      35.9         6.3
2017  17.5         157.9      37.3         6.6
2018  18.2         164.2      38.8         6.8
2019  19.0         170.8      40.3         7.1
2020  19.7         177.7      41.9         7.4
2021  20.5         184.8      43.6         7.7
2022  21.3         192.1      45.4         8.0
2023  22.2         199.8      47.2         8.3
2024  23.1         207.8      49.1         8.7
2025  24.0         216.1      51.0         9.0
2026  25.0         224.8      53.1         9.4
2027  26.0         233.8      55.2         9.7
2028  27.0         243.1      57.4         10.1
2029  28.1         252.9      59.7         10.5
2030  29.2         263.0      62.1         11.0

According to statistics, N1 and M2 vehicles account for 80% and 20% of all light-duty commercial vehicles, respectively. Gasoline and diesel vehicles account for 10% and 90% of N1 vehicles, and 85% and 15% of M2 vehicles, respectively. The fuel consumption in 2020 follows the objectives set out in the "Energy-saving and New Energy Vehicle Industry Development Planning (2012–2020)" and "Made in China 2025" and the expected effect of implementing the "Limits of Fuel Consumption for Light-duty Commercial Vehicles", i.e. a 20% decline compared with 2012. As Europe has not yet established an energy-saving target for light-duty commercial vehicles in 2025, it is assumed that the average fuel consumption of light-duty commercial vehicles in China will reach Europe's 2020 level by 2023 and the international advanced level by 2025, with linear interpolation for the years in between. After 2025, the average annual decline in fuel consumption is set at 2.5%. Table 2 shows the fuel-consumption levels of light commercial vehicles in each year.


Table 2. Forecast of fuel consumption level in 2015–2030 (Unit: L/100 km)

Year  N1-gasoline  N1-diesel  M2-gasoline  M2-diesel
2015  7.31         7.40       9.60         8.58
2016  7.11         7.20       9.37         8.37
2017  6.91         7.00       9.14         8.17
2018  6.72         6.80       8.91         7.96
2019  6.52         6.60       8.68         7.75
2020  6.32         6.40       8.45         7.54
2021  6.01         6.16       8.14         7.36
2022  5.71         5.92       7.84         7.18
2023  5.40         5.68       7.54         7.00
2024  5.27         5.54       7.35         6.83
2025  5.13         5.40       7.17         6.65
2026  5.01         5.26       6.99         6.49
2027  4.88         5.13       6.81         6.33
2028  4.76         5.00       6.64         6.17
2029  4.64         4.88       6.48         6.01
2030  4.52         4.76       6.32         5.86
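The post-2025 trajectory in Table 2 (a 2.5% annual decline) can be checked numerically. The snippet below does so for the N1-diesel column; the computed values match the tabulated ones to within rounding.

```python
# Check of the assumed post-2025 trajectory (2.5% annual decline),
# using the N1-diesel column of Table 2 as an example.

fc = 5.40                                       # N1-diesel level in 2025 (Table 2)
table_values = [5.26, 5.13, 5.00, 4.88, 4.76]   # Table 2 entries for 2026-2030

computed = []
for _ in range(5):
    fc *= 1 - 0.025                             # 2.5% annual decline
    computed.append(fc)

# Each computed value agrees with Table 2 to within 0.01 L/100 km.
print(computed)
```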

The average annual mileage and average running time of N1 vehicles are 50,000 km and 8 years, respectively; for M2 vehicles they are 70,000 km and 8 years. Considering that new vehicles are produced in different months of the year, the mileage of the first year is counted as half (Table 3). As mentioned above, the conversion factor between gasoline consumption (L/100 km) and CO2 emissions (g/km) is 23.8, and the density is taken as 0.725 kg/L; it follows that the CO2 emission from consuming 1 kg of gasoline is about 3.283 kg. The conversion factor for diesel is 26.1 and the density 0.835 kg/L, so the CO2 emission from consuming 1 kg of diesel is about 3.126 kg. From the calculation results in Table 3, it can be concluded that, with 2016 as the base year, by the time the standard is fully implemented in 2020 a total of 3.130 million tons of fuel can be saved and CO2 emissions reduced by 9.533 million tons; by 2025, a total of 33.665 million tons of fuel can be saved and CO2 emissions reduced by 106.798 million tons.
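The per-kilogram CO2 figures quoted above follow from a one-line conversion: 1 L/100 km corresponds to k_f g/km of CO2, i.e. 100·k_f grams of CO2 per litre, and dividing by the density gives the emission per kilogram of fuel.

```python
# Reproducing the per-kilogram CO2 factors quoted in the text from the
# L/100 km -> g/km conversion factors and the fuel densities.

def co2_per_kg_fuel(kf_g_per_km_per_l100, density_kg_per_l):
    # 1 L/100 km corresponds to kf g/km of CO2, i.e. kf * 100 g of CO2 per
    # litre; dividing by the density gives grams of CO2 per kilogram of fuel.
    return kf_g_per_km_per_l100 * 100 / density_kg_per_l / 1000  # kg CO2 / kg fuel

print(round(co2_per_kg_fuel(23.8, 0.725), 3))  # 3.283 for gasoline
print(round(co2_per_kg_fuel(26.1, 0.835), 3))  # 3.126 for diesel
```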

Table 3. Total amount of fuel consumption saved for light-duty commercial vehicles in each year (Unit: 10,000 tons; rows: calendar year, columns: model year)

Year   2017  2018  2019  2020   2021   2022   2023   2024   2025   2026   2027   2028   2029   2030
2017    9.8
2018   19.6  20.4
2019   19.6  40.8  31.8
2020   19.6  40.8  63.7  44.2
2021   19.6  40.8  63.7  88.3   60.1
2022   19.6  40.8  63.7  88.3  120.1   77.2
2023   19.6  40.8  63.7  88.3  120.1  154.4   95.6
2024   19.6  40.8  63.7  88.3  120.1  154.4  191.1  108.9
2025    5.2  40.8  63.7  88.3  120.1  154.4  191.1  217.8  122.9
2026    5.2  10.8  63.7  88.3  120.1  154.4  191.1  217.8  245.8  137.6
2027         10.8  16.8  88.3  120.1  154.4  191.1  217.8  245.8  275.2  153.0
2028               16.8  23.3  120.1  154.4  191.1  217.8  245.8  275.2  306.1  169.2
2029                     23.3   31.8  154.4  191.1  217.8  245.8  275.2  306.1  338.4  186.2
2030                            31.8   40.8  191.1  217.8  245.8  275.2  306.1  338.4  372.4  204.0

4 Conclusions

The "Limits of Fuel Consumption for Light-duty Commercial Vehicles" (GB 20997-2015), known as the Phase III fuel-consumption standard, is an important measure for implementing industrial policies such as the "Energy Conservation and New Energy Vehicle Industry Development Planning (2012–2020)" and "Made in China 2025". The overall energy-saving targets of the Phase III light-duty commercial vehicle fuel-consumption standard were set under the guidance of the relevant national documents on energy-saving management and industrial development. They not only consider the product structure, technical status, and market development of light-duty commercial vehicles in China, but also take the international development trend as a reference. At the same time, the standard is kept consistent in stringency with the fuel-consumption standards for passenger cars and heavy-duty commercial vehicles, taking into account its operability and enforceability. The final overall energy-saving target, fuel-consumption indicators, and evaluation system proposed in the standard are in line with the requirements of the national energy-saving and emission-reduction strategy, which is beneficial to the sustainable and healthy development of the automotive industry. It is estimated that, with 2016 as the reference year, the implementation of the standard will result in cumulative savings of 3.104 million tons of fuel and a reduction in CO2 emissions of 9.533 million tons by 2020, with significant social and economic benefits.

References

1. State Council of the People's Republic of China: Energy-Saving and New Energy Automotive Industry Development Plan (2012–2020) (2012)
2. State Council of the People's Republic of China: Made in China 2025 (2015)
3. Xiang, B., Zhao, W.: Evaluation of Implementation Effect of Fuel Consumption Standard for Light-duty Commercial Vehicles. China Sustainable Energy Project, Energy Foundation (2016)
4. China Automotive Technology and Research Center, China Automotive Industry Association: China Automotive Industry Yearbook. China Automotive Industry Yearbook Publisher, Tianjin (2014–2016)
5. Prieto, N., Uttaro, B., Mapiye, C., Turner, T.D., Dugan, M.E.R., Zamora, V., Young, M., Beltranena, E.: Meat Sci. 98(4), 585 (2014)
6. Riovanto, R., De Marchi, M., Cassandro, M., Penasa, M.: Food Chem. 134(4), 2459 (2012)
7. Prieto, N., Dugan, M.E.R., López-Campos, O., McAllister, T.A., Aalhus, J.L., Uttaro, B.: Meat Sci. 90(1), 43 (2012)
8. Prieto, N., López-Campos, Ó., Aalhus, J.L., Dugan, M.E.R., Juárez, M., Uttaro, B.: Meat Sci. 98(2), 279 (2014)
9. Pla, M., Hernández, P., Ariño, B., Ramírez, J.A., Díaz, I.: Food Chem. 100(1), 165 (2007)
10. Pullanagari, R.R., Yule, I.J., Agnew, M.: Meat Sci. 100, 156 (2015)

Construction of Driving Cycle Based on SOM Clustering Algorithm for Emission Prediction

Feng Li, Jihui Zhuang, Xiaoming Cheng, Jiaxing Wang, and Zhenzheng Yan

Electro Mechanic Engineering College, Hainan University, Haikou 570228, China
[email protected]

Abstract. The purpose of this study was to investigate the taxi driving cycle and emission factors in Haikou. Taxi driving data were collected in Haikou City, kinematic fragments were extracted from the driving data using the short-stroke method, and a series of characteristic parameters was used to characterize the driving mode of each segment. The SOM clustering algorithm was used to cluster the kinematic fragments, data fragments were extracted based on the cluster analysis, and the taxi driving cycle for Haikou was constructed. The Pearson correlation coefficient was used to verify the representativeness of the constructed driving cycle. This method can reproduce typical traffic conditions under complex traffic situations. Based on the taxi distribution and driving-cycle data for Haikou City, the COPERT model was used to calculate the taxi emission factors for Haikou City. Compared with other existing driving cycles around the world, taxi driving conditions in Haikou City are characterized by short idle time, frequent acceleration and deceleration, high average acceleration and deceleration, and long periods of slow driving. The CO, CO2, VOC, and NOx emission factors for taxis in Haikou are 0.329, 2.230, 0.170, and 0.362 g/km, respectively.

Keywords: SOM clustering · Taxi · Driving cycle · COPERT model

1 Introduction

Vehicle emissions have become the main source of atmospheric pollution in modern cities [1]. Taxis are an important part of urban public transport and one of the basic facilities for the normal operation of a city. Taxis have the advantages of long operating hours, high delivery efficiency, convenience, and low transportation cost. Giving priority to the development of urban public transport is an important means of increasing the efficiency of transport-resource use and easing traffic congestion [2]. In the past decade, the rapid increase in the number of private cars has led to urban traffic congestion, serious emission problems, and a fuel-consumption crisis. In emissions testing, the driving cycle is a basic concept. The construction quality of the driving cycle directly determines the accuracy of exhaust-emission analysis and further affects the achievement of emission-reduction targets. The European driving cycle is used in China to measure vehicle emissions and fuel consumption, but emission test results based on this

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1508–1515, 2019. https://doi.org/10.1007/978-981-13-3648-5_195


operating condition cannot truly reflect the actual on-road emissions. When applied to specific areas, this problem is even more pronounced [3]. In order to accurately assess the actual fuel consumption and emission levels of local vehicles, vehicle driving cycles in line with local traffic conditions have been developed in succession, among them the French vehicle driving cycle developed in 1989, the driving cycle for road vehicles in India, the Melbourne driving cycle developed in Australia in 1982, the typical-road driving cycle developed in Hong Kong in 2000, and the Tianjin electric vehicle road-condition cycle [4–6]. The COPERT model is a macro-scale vehicle emission model funded and developed by the European Environment Agency. The basic calculation principle of the model is to start from basic emission factors and apply a series of correction factors to calculate vehicle emission factors under actual conditions. M. S. Alam et al. used the COPERT model to assess the impact of Irish emission-reduction policies on greenhouse-gas emissions [7]. R. Smit et al. used the COPERT model to estimate emission factors for Australian vehicles [8]. The COPERT model calculates correction factors for the various influences based on the local information input by the user, and then yields a localized emission inventory. This study collected actual road-traffic data for Haikou City, constructed a typical road driving cycle, and calculated taxi emission factors for Haikou City with the COPERT model. This is of great significance for promoting the design and development of vehicles suited to regional road-traffic conditions and for reducing fuel consumption and emissions.
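The correction-factor principle described above can be sketched generically. The factor names and numbers below are illustrative placeholders, not actual COPERT coefficients or its API.

```python
# Schematic of the correction-factor approach: a baseline emission factor
# is scaled by multiplicative corrections for local conditions. All names
# and values here are invented for illustration, not real COPERT data.

def corrected_emission_factor(base_ef_g_per_km, corrections):
    """Apply multiplicative corrections (e.g. for temperature, mileage, fuel)."""
    ef = base_ef_g_per_km
    for factor in corrections.values():
        ef *= factor
    return ef

ef = corrected_emission_factor(
    0.40,  # hypothetical baseline NOx emission factor, g/km
    {"temperature": 0.95, "mileage_degradation": 1.10, "fuel_quality": 0.98},
)
print(round(ef, 3))  # 0.41
```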

2 Construction Method of Driving Cycle

2.1 Data Acquisition Process

In this study, taxis in Haikou were selected as research objects for constructing the road driving cycle. The data came from the project 'Research on Product Testing Conditions of China's New Energy Vehicles and Development—Haikou City Urban Data Collection'. Vehicle data collection terminals gathered data through the OBD port, obtained vehicle position and time information through a GPS module, and read vehicle ECU data through a CAN module. All data were written to an SD card and transmitted to the monitoring platform by the data center. Five taxis were selected and data were collected continuously for 31 days (July 1, 2017–July 15, 2017). The driving data cover working days and non-working days, including peak and off-peak traffic periods. After excluding invalid records, more than 300,000 valid travel records were obtained. No routes were prescribed for the taxis during data collection, which avoids the systematic errors caused by artificially choosing test locations and road types, so that data acquisition and kinematic segmentation more accurately reflect the actual traffic conditions the vehicles operate in. The movement of a car from one idling state to the next idling state is defined as a short-stroke (micro-trip) segment [9]. To analyze and evaluate each short-stroke segment, several characteristic parameters are defined: acceleration time, uniform-speed time, deceleration time, idle time, maximum speed, average speed, standard deviation of speed, maximum

1510

F. Li et al.

acceleration, maximum deceleration, and standard deviation of acceleration. A MATLAB m-file program was written to extract the micro-trip segments. A point with acceleration greater than 0.1 m/s² is classified as accelerating and one with acceleration less than −0.1 m/s² as decelerating; when the acceleration lies between these thresholds, zero speed is classified as idling and non-zero speed as uniform (constant) speed. In total, 2647 micro-trip segments were extracted.
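The segmentation described above was done with a MATLAB m-file; an equivalent Python sketch is shown below (the function names and the assumption of a 1 Hz speed trace are ours, not the paper's):

```python
def classify_point(speed, accel, acc_thr=0.1):
    """Label one sample of the trace: 'acc', 'dec', 'idle' or 'cruise'.

    Thresholds follow the paper: |accel| > 0.1 m/s^2 marks acceleration or
    deceleration; otherwise zero speed is idling and non-zero speed is
    uniform (constant) speed.
    """
    if accel > acc_thr:
        return "acc"
    if accel < -acc_thr:
        return "dec"
    return "idle" if speed == 0 else "cruise"


def split_microtrips(speeds):
    """Split a 1 Hz speed trace (km/h) into micro-trip segments.

    A micro-trip runs from the start of one idling period to the start of
    the next, so each returned segment begins with idling.
    """
    trips, current = [], []
    for v in speeds:
        # start a new micro-trip when the vehicle comes to rest again
        if v == 0 and current and current[-1] != 0:
            trips.append(current)
            current = []
        current.append(v)
    if current:
        trips.append(current)
    return trips
```

Each extracted segment can then be summarized by the characteristic parameters listed above before clustering.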

SOM Algorithm Clustering

The SOM clustering algorithm was proposed in 1982 by Professor Kohonen, a Finnish neural network expert. It is a neural network trained without supervision; the self-organizing process is in fact unsupervised learning. The SOM structure consists of an input layer and an output layer: the input layer corresponds to a high-dimensional input vector, and the output layer consists of a series of ordered nodes organized on a two-dimensional grid, connected to the input by weight vectors. In this paper, the SOM network has 4 input-layer neurons and 2 × 1 output-layer neurons. The basic idea of the SOM clustering algorithm is to find, at each learning step, the output unit with the shortest distance to the input, i.e., the winning unit, and update it. The weights of neighboring units are updated at the same time so that the output nodes preserve the topological features of the input vectors [10]. The SOM clustering procedure is shown in Fig. 1:

Fig. 1. SOM clustering algorithm processing flow chart.

The advantage of the SOM clustering algorithm is that the SOM network determines the number of clusters adaptively through its learning process. Through learning, the SOM network can reflect the topological structure and category characteristics of the input patterns in its output layer; the network is self-stabilizing and requires no external evaluation function. Its disadvantage is that poor initial connection weights or improper parameter selection can make convergence excessively slow. The computational strategy adopted in some steps of the SOM clustering process strongly influences clustering speed; for example, the winning neuron is traditionally found by an exhaustive search, which makes training time too long [11]. SOM clustering divided all segments into two categories, with "smoother" segments accounting for 86.9% and "more congested" segments accounting for 13.1%.
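The winner-selection and weight-update loop described above can be sketched in Python. This is a minimal winner-take-all version: a full SOM would also update the winner's neighbours with a shrinking radius, and the choice of deterministic initialization is ours, not the paper's:

```python
def nearest(weights, x):
    """Index of the weight vector closest (squared Euclidean) to x."""
    return min(range(len(weights)),
               key=lambda j: sum((w - xi) ** 2
                                 for w, xi in zip(weights[j], x)))


def train_som(data, n_units=2, epochs=50, lr0=0.5):
    """Minimal 1-D SOM with winner-take-all updates (2x1 output layer).

    data: list of equal-length feature vectors, e.g. 4 normalized
    kinematic parameters per micro-trip segment. With only two output
    units the neighbourhood update is omitted for brevity.
    """
    n = len(data)
    # deterministic init: spread the units over the data set
    weights = [list(data[(i * (n - 1)) // max(1, n_units - 1)])
               for i in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)          # decaying learning rate
        for x in data:
            win = nearest(weights, x)            # winning unit
            weights[win] = [w + lr * (xi - w)    # pull winner towards x
                            for w, xi in zip(weights[win], x)]
    return weights
```

After training, each micro-trip is assigned to the cluster of its nearest weight vector, yielding the "smoother"/"more congested" split.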

Driving Cycle Construction

According to the proportions of the two segment classes obtained by SOM clustering, segments were randomly drawn from the 2647 segments. MATLAB was used to calculate the kinematic characteristic values of each segment, such as the acceleration ratio, deceleration ratio, uniform-speed ratio, and idling ratio. These were compared with the characteristic values of the complete data set, and the selection whose characteristic parameters correlated most strongly with those of the complete data was retained. The parameters of the overall and selected segments are compared in Table 1.

Table 1. Comparison of overall segments and selected segments.

Characteristic parameter                    Overall segments  Selected segments
Average speed (km/h)                        20.14             21.72
Maximum speed (km/h)                        87                86
Speed standard deviation (km/h)             10.76             11.65
Maximum acceleration (m/s2)                 16.389            17.03
Maximum deceleration (m/s2)                 −14.278           −15.69
Standard deviation of acceleration (m/s2)   0.308             0.299
Acceleration time ratio (%)                 33.28             32.57
Deceleration time ratio (%)                 25.88             24.41
Idling time ratio (%)                       16.45             17.26
Uniform time ratio (%)                      24.39             25.76

Using MATLAB's corr(a,b) function, the Pearson correlation coefficient between the two condition-parameter vectors is 0.997; the two are highly correlated, indicating that the constructed cycle reflects the driving behaviour of vehicles in the urban area of Haikou City. The construction flow of the driving cycle is shown in Fig. 2, and the final synthesized taxi driving cycle for Haikou City is shown in Fig. 3. From the condition data and the driving-cycle curve it can be concluded that taxi driving in Haikou City is characterized by short idle times, frequent acceleration and deceleration, relatively large average acceleration and deceleration, and long periods spent in low-speed operation.
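The correlation check can be reproduced with a plain Python equivalent of corr(a,b), using the Table 1 values:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length vectors,
    equivalent to MATLAB's corr(a, b) for column vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# characteristic parameters of the overall and selected segments (Table 1)
overall  = [20.14, 87, 10.76, 16.389, -14.278, 0.308, 33.28, 25.88, 16.45, 24.39]
selected = [21.72, 86, 11.65, 17.03, -15.69, 0.299, 32.57, 24.41, 17.26, 25.76]
r = pearson(overall, selected)   # close to 1: highly correlated
```

Note that the parameter vector mixes units (km/h, m/s², %), so the coefficient measures agreement of the raw numbers rather than a physically normalized similarity.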


Fig. 2. Driving cycle construction flow chart.

Fig. 3. Taxi road driving cycle in Haikou City.

3 Parameter Configuration of COPERT Model and Calculation of Emission Factors

The COPERT model was funded by the European Environment Agency (EEA) and calculates the annual pollutant emissions of a single vehicle or of a whole fleet. The calculation process of the model is shown in Fig. 4. The calculation base year is 2017; the weather data come from the National Weather Service, and the total fuel consumption, number of vehicles, and activity data come from the National Bureau of Statistics. The number of natural gas taxis in Haikou is 4203; vehicle numbers and mileage are shown in Table 2. The city's taxis consist mainly of natural gas vehicles and electric vehicles. The electric


Fig. 4. COPERT model calculation process.

vehicles do not generate exhaust emissions while driving and are therefore not included in the emission factor calculation. The annual natural gas consumption of Haikou's taxis is 5543.244 TJ. The fleet consists mainly of small passenger cars. The vehicle activity data were collected and aggregated by the road-test telematics system, as shown in Table 3.

Table 2. Number of vehicles and mileage.

Category       Fuel         Model  Emission standard  Quantity  Annual mileage (km)
Passenger car  Natural gas  Small  Euro IV            4203      240,000

Table 3. Vehicle activity data.

                  City off-peak  Urban peak  Suburban road  Highway
Proportion (%)    15             35          40             10
Speed (km/h)      86             40          60             90
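COPERT's speed-dependent emission curves are internal to the model, but the way the road-section shares of Table 3 weight per-section factors into a fleet-level factor can be illustrated. The per-section CO factors below are hypothetical placeholders, not COPERT outputs:

```python
# road-section activity shares from Table 3
shares = {"city_off_peak": 0.15, "urban_peak": 0.35,
          "suburban": 0.40, "highway": 0.10}

# hypothetical speed-dependent CO factors per section (g/km); the real
# COPERT values come from EEA-fitted functions of the section speed
co_factor = {"city_off_peak": 0.25, "urban_peak": 0.55,
             "suburban": 0.30, "highway": 0.20}

# fleet-level factor = activity-weighted average over road sections
ef_co = sum(shares[s] * co_factor[s] for s in shares)
```

The same weighting applies per pollutant; COPERT additionally multiplies in corrections for fuel, temperature, and emission standard.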

Calculated with the COPERT model, the CO, CO2, VOC, and NOx emission factors for taxis in Haikou are 0.329, 2.230, 0.170, and 0.362 g/km, respectively. These emission factors are compared with those of taxis in other cities [12, 13] in Fig. 5. As Fig. 5 shows, the CO and VOC emission factors in Haikou City are lower than those in Beijing (CO: 0.6 g/km) and Qingdao (CO: 1.84 g/km), mainly because the other two cities were studied in earlier years and the national emission standards have become progressively stricter since then. Qingdao's values are far higher than those of Beijing and Haikou because that study calculated a comprehensive emission factor that included some gasoline vehicles. The NOx emission factors of the three cities are similar, so from the perspective of energy saving and emission reduction this indicator still has large room for improvement.

The data plotted in Fig. 5 (emission factors in g/km):

         CO     VOC   NOX
Haikou   0.329  0.17  0.262
Beijing  0.6    0.2   0.22
Qingdao  1.84   0.66  0.36

Fig. 5. Comparison of taxi emission factors in cities.

Compared with traditional fuel vehicles, natural gas taxis have obvious advantages in energy conservation and emission reduction: they emit fewer types of harmful pollutants, emit less of them, and have lower operating costs.

4 Conclusion

The taxi driving cycle for Haikou City constructed from kinematic segments reflects the actual driving behaviour of taxis in Haikou and provides a practical method for constructing localized driving cycles in other study areas. Comparing the driving cycle built with the SOM clustering method against the original data, the correlation coefficients of the characteristic parameters are all above 0.90 and the errors are below 10%, so the cycle captures the overall characteristics of Haikou's roads. Analysing pollutant emissions with the COPERT model provides an important basis for studying urban vehicle exhaust emissions, reducing emissions, and supporting sustainable development. In the construction of urban public transport, developing clean-energy and environment-friendly vehicles plays an important role in reducing urban exhaust pollution, and converting traditional fuel vehicles to gas vehicles can balance economic and environmental benefits.

Acknowledgements. Fund project: National Science and Technology Ministry Science and Technology Support Program "China's New Energy Vehicles and Development—Haikou City Urban Data Collection".


References

1. Han, X., Naeher, L.P.: A review of traffic-related air pollution exposure assessment studies in the developing world. Environ. Int. 32(1), 106–120 (2006)
2. Editorial Department of China Journal of Highway and Transport: Review on China's traffic engineering research progress: 2016. China J. Highw. Transp. 29(06), 1–161 (2016)
3. Liu, H., He, K., Barth, M.: Traffic and emission simulation in China based on statistical methodology. Atmos. Environ. 45(5), 1154–1161 (2011)
4. Fotouhi, A., Montazeri-Gh, M.: Tehran driving cycle development using the k-means clustering method. Sci. Iranica 20(2), 286–293 (2013)
5. Zhao, H., Cheung, C.S., Hung, W.T.: Development of a driving cycle for automotive emission evaluation. Acta Sci. Circumst. (03), 312–315 (2000)
6. Zhuang, J., Xie, H., Yan, Y.: GPRS based driving cycle self-learning for electric vehicle. J. Tianjin Univ. 43(04), 283–286 (2010)
7. Alam, M.S., Hyde, B., Duffy, P., et al.: Assessment of pathways to reduce CO2 emissions from passenger car fleets: case study in Ireland. Appl. Energy 189, 283–300 (2017)
8. Smit, R., Kingston, P., Wainwright, D.H., et al.: A tunnel study to validate motor vehicle emission prediction software in Australia. Atmos. Environ. 151 (2016)
9. Shi, Q., Zheng, Y., Jiang, P.: A research on driving cycle of city roads based on microtrips. Autom. Eng. 33(3), 256–261 (2011)
10. Tang, X., Qiu, G., Li, Y., et al.: A clustering algorithm based on particle swarm optimization and self-organizing map. J. Huazhong Univ. Sci. Technol. (Nat. Sci.) 35(5), 31–33 (2007)
11. Cai, L.: Improvement of SOM clustering algorithm and its application in text mining. Nanjing University of Aeronautics (2011)
12. Wu, D., Lin, Y., Peng, M., et al.: Prediction of emission factors from LPG light-duty vehicles with MOBILE 6.2. Adm. Tech. Environ. Monit. 21(1), 46–49 (2009)
13. Wei, H., Huo, W., Zhao, H., et al.: Study on the emission characteristics of a natural gas taxi fleet with MOBILE6.2 based on the MATLAB grey model method. China New Technol. New Prod. (6), 114–116 (2017)

Fatigue Detection Based on Facial Features with CNN-HMM

Ting Yan1, Changyuan Wang1, and Hongbo Jia2

1 School of Computer Science and Engineering, Xi'an Technological University, Xi'an, China
{15029078611,cyw901}@163.com
2 Institute of Aviation Medicine, Military Medical University, Air Force, Beijing, China
[email protected]

Abstract. In order to meet the new requirements of driving safety detection, we propose a CNN-HMM method of fatigue detection that can be used to detect driver fatigue. From human experience and knowledge of facial activity, it is known that the eyes, mouth, and head posture differ significantly under different mental conditions, so we take these three facial features as indicators of the fatigue state. To detect and identify fatigue-related facial features as efficiently as possible, we use a Convolutional Neural Network, which performs strongly in the field of pattern recognition, to extract fatigue features. We then use a Hidden Markov Model to model the extracted features and estimate the driver's fatigue state. We used the Driver Drowsiness Video Dataset created by the NTHU Computer Vision Lab to verify the proposed architecture; the detection accuracy was 91.1%. Experiments show that our method achieves a high recognition rate for the driver's fatigue state.

Keywords: Fatigue detection · Feature extraction · Convolutional neural network · Dynamic features · Hidden Markov model

1 Introduction

With the rapid development of the transportation industry, traffic accidents have gradually increased. According to relevant statistics, fatigue driving is one of the main causes of casualties, and for large trucks in particular it is the leading cause of traffic accidents. For various reasons, such as lack of sleep or long driving hours, a driver's physiological and mental functions become disordered during driving. These disorders affect the driver's attention, perception, thinking, judgment, and driving actions, causing accidents and endangering the lives of the driver and others. It is therefore of great significance to detect the driver's fatigue state in real time to ensure personal safety and reduce the associated economic losses, and this remains an important topic for further research.

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1516–1524, 2019. https://doi.org/10.1007/978-981-13-3648-5_196


In order to reduce traffic accidents caused by fatigue driving, many researchers have developed fatigue detection technologies. Every method has two sides, and each fatigue detection method has certain disadvantages. For example, fatigue can be detected from the subject's physiological signals; although the accuracy is high, this is intrusive to the subject. It is thus impossible to say which method is best at present, but the currently popular approach is detection based on facial features. Related research shows that facial behaviour varies with a person's fatigue state. In the non-fatigued phase, people occasionally laugh or speak, and the head sometimes turns left or right. In the fatigued phase, people occasionally yawn, the blinking frequency increases, and blinks become slower; in addition, the head inclines forward/back/left/right more often, because when people are tired or dozing off the head unconsciously tilts in one direction [1]. Based on these changes in facial behaviour, we define the eyes, mouth, and head posture as the indicators reflecting the driver's fatigue status; with these features and suitable algorithms, the driver's fatigue status can be identified. In recent years the Convolutional Neural Network (CNN) [2] has been widely used in pattern recognition and image processing. Its weight-sharing structure reduces the complexity of the network and the number of weights, and images can be input directly into the network, avoiding the feature extraction and data reconstruction steps of traditional recognition algorithms. Therefore, CNN is used in our proposed method for feature extraction.
In order to analyze the extracted features and judge the subject's fatigue state, we use Hidden Markov Models to model the extracted features and identify the driver's mental state. The Hidden Markov Model (HMM) [3] is a classifier that uses probabilistic statistics to model sequential data. It is widely used for pattern recognition in image sequences because it has advantages in analyzing the spatial-temporal features of objects and is characterized by simple modeling, a small amount of computation, fast running speed, and a high recognition rate. The proposed fatigue detection model is shown in Fig. 1. Our main work can be divided into the following two parts:

Fig. 1. The proposed fatigue detection architecture. The cubes represent the CNN/HMM models, and the rectangular boxes represent the corresponding features or classification results from the models


(1) Fatigue can be estimated from the facial features (eyes, mouth, and head posture) that clearly reflect the fatigue state. We therefore use a convolutional neural network, which performs well in pattern recognition, to extract fatigue-related features. The network is trained according to pre-defined labels (when the labels are defined, all the relevant states of each feature should be covered as far as possible). We then remove the output layer of the CNN and run it to extract facial features.
(2) Since fatigue is a continuous process, it cannot be identified accurately from static pictures; we need to integrate dynamic feature information over a period of time. For this purpose, we use two HMMs (HMM-fatigue and HMM-non-fatigue) to model the extracted features and simulate the dynamic relationships between them. First, we use the Baum-Welch algorithm to adjust the model parameters λ = (N, M, π, A, B), i.e., we find the five parameters that maximize P(O|λ). Then we use the forward-backward algorithm to efficiently calculate the probability of the observed sequence under each trained model. Finally, we use Eqs. (2)–(4) to obtain the subject's fatigue state.

2 Related Work

Fatigue detection technology essentially captures and analyzes information about the driver's physiological behaviour, such as the eyes, mouth, heart, and brain electrical activity, during driving in order to identify the driver's fatigue status. Common fatigue detection methods fall into three categories according to the information they use: (1) Methods that detect the driver's physiological signals, i.e., Heart Rate Variability (HRV) [4], pulse, electroencephalography (EEG) [5], and other physiological parameters [6]. These are currently the most accurate fatigue detection methods, but the required hardware is relatively expensive and interferes with the driver, which limits their practical adoption. (2) Methods that detect driving behaviour, i.e., monitoring the vehicle under the driver's control, such as speed, acceleration, braking, and driving conditions [7, 8], to infer the driver's fatigue status. The main problem is that the accuracy varies with driving experience, driving conditions, and road conditions, and some fatigue situations cannot be judged at all: if the driver dozes off on a straight road for a few seconds, the position and speed of the vehicle may not change. (3) Methods that detect facial features, i.e., fatigue-related features such as the eyes, mouth, and head posture. These methods use image processing to evaluate the driver's facial features and determine his mental status. For example, Zhang et al. [9] detect fatigue from eye movement, and Abtahi et al. [10] judge whether the subject is yawning


by detecting mouth movement. Such methods have the advantages of low intrusiveness and low cost, although their accuracy is lower than that of the first category, so they are gradually being accepted and adopted by many researchers and developers, especially automobile manufacturers. Considering the advantages and disadvantages of the above methods, the former two categories face practical obstacles, so we adopt the method based on facial features. Our proposed method consists of a convolutional neural network [9, 11, 12] and two hidden Markov models [1, 13, 14]; both models have been used successfully in many pattern recognition applications. For example, Shih et al. [11] used a CNN to extract fatigue-related spatial features. Park [12] applied three CNN models (AlexNet, VGG-FaceNet, and FlowImageNet) to fatigue detection and achieved excellent results. Weng et al. [13] used HMMs to model extracted facial features for fatigue detection. Ronao et al. [14] proposed a two-stage continuous Hidden Markov Model to recognize human activity. Lahbiri et al. [15] presented a simple algorithm based on hidden Markov models for automatic recognition of facial expressions. Inspired by these studies [1, 9–15], we use a CNN to extract fatigue-related facial features and HMMs to simulate the dynamic changes of the extracted features and evaluate the driver's fatigue state.

3 CNN for Extraction of Feature Parameters

The Convolutional Neural Network is now used in many fields, such as image recognition and speech analysis, and is a hot research topic. CNN has four characteristics: local connections, weight sharing, pooling operations, and a multi-layer structure. These characteristics enable a CNN to learn features automatically from data through multiple layers of non-linear transformations, replacing manually designed features, and its deep structure gives it strong expressive and learning ability. In addition, increasing the network depth increases its nonlinearity, so it can better fit the objective function and obtain better distributed features. The CNN structure we use is shown in Fig. 2. We pair each convolutional layer with a ReLU (Rectified Linear Units) layer, followed by a pooling layer, and this structure is repeated until the image is spatially reduced to a small enough size, after which fully connected layers combine all of the feature maps and the output layer produces the result. After the CNN model is established, we train it on the training set; the trained CNN, with its output layer removed, is then used to extract features. In this network, the input image size is fixed at 128 × 128. The convolutional kernels of the two convolutional layers are of size 5 × 5 × 6 and 5 × 5 × 16, respectively. The kernels of both pooling layers are 2 × 2 in length and width, with depth equal to that of the preceding convolutional layer; the pooling form is max-pooling. The last pooling layer is followed by two fully connected layers. The first fully connected layer contains
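The spatial sizes implied by this architecture can be checked with a few lines (assuming unit stride and no padding, which the paper does not state explicitly):

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial size after a convolution (valid padding by default)."""
    return (size + 2 * pad - kernel) // stride + 1

size = conv_out(128, 5)   # conv1, 5x5x6 kernels:  128 -> 124
size = size // 2          # 2x2 max-pooling:       124 -> 62
size = conv_out(size, 5)  # conv2, 5x5x16 kernels:  62 -> 58
size = size // 2          # 2x2 max-pooling:        58 -> 29
flat = size * size * 16   # feature volume fed to the 1024-unit FC layer
```

Under these assumptions the first fully connected layer would see a 29 × 29 × 16 feature volume.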


Fig. 2. CNN structure used for feature extraction

1024 neurons. To avoid overfitting, Dropout is applied before the data enter the second fully connected layer, with Dropout_ratio set to 0.7. For the second fully connected stage we define two parallel output layers: a layer with 2 neurons identifies the state of the eyes (for which we defined two labels), and a layer with 3 neurons identifies the state of the mouth and the head posture. The output layer of the proposed network uses the Softmax function to classify the features of the input image.

4 HMM for Simulation of Dynamic Features and Fatigue Decision

We use HMMs for fatigue detection, which is essentially a probability calculation: after the models are trained on the training set, the conditional probability of the test sequence is computed under each model, and the model with the highest probability gives the detection result. An HMM is described by a five-tuple λ = (N, M, π, A, B), where N is the number of model states, with state set {S1, S2, ..., SN}; M is the number of possible observations per state, with observation set {V1, V2, ..., VM}; A = (aij)N×N is the state transition probability matrix, where aij = P(qt+1 = Sj | qt = Si), 1 ≤ i, j ≤ N, is the probability of moving from state Si at time t to state Sj at time t+1; B = (bjk)N×M is the observation probability matrix, where bjk = P(Ot = Vk | qt = Sj), 1 ≤ j ≤ N, 1 ≤ k ≤ M, is the probability of observing Vk in state Sj; and π = (π1, π2, ..., πN), with πi = P(q1 = Si), 1 ≤ i ≤ N, is the initial state probability distribution at time t = 1. We use two HMM models: HMM-fatigue and HMM-non-fatigue, with observation vector bt = {a_eye, a_mouth, a_head}, where a_eye = {p_stillness, p_sleepy}, a_mouth = {p_stillness, p_laugh/talk, p_yawn}, and a_head = {p_stillness, p_rotation, p_inclination}. These vectors are obtained from the last fully connected layer of the CNN (with the CNN output layer removed) and serve as training and test data for the HMMs. We choose the Baum-Welch algorithm to train the HMM models. In order to find the model maximizing P(O|λ) among all possible models, P(O|λ) must be

Fatigue Detection Based on Facial Features with CNN-HMM

1521

calculated for each HMM model. However, computing this probability directly is prohibitively expensive. Inspired by the literature [13], the forward-backward method is used to sum the probabilities over all possible state sequences, as shown in Eq. (1). The likelihood difference, Eqs. (2)–(4), is then used to estimate the subject's fatigue state.

P(O|λ) = Σ_S P(O|S,λ) P(S|λ) = Σ_S π_{s1} b_{s1}(o_1) ∏_{t=2}^{T} a_{s_{t-1} s_t} b_{s_t}(o_t)    (1)

d_t = P_fatigue(O|λ) − P_non-fatigue(O|λ),  0 < t ≤ τ    (2)

d_i = Σ_{t=1}^{τ} d_t,  τ = 20    (3)

state = 1 / (1 + e^{−d_i}) × 100%    (4)

where s_t ∈ {S1, S2, ..., SN} is the state at time t, o_t ∈ {V1, V2, ..., VM} is the observation at time t, and T is the total number of video frames. τ is a preset period of time, here set to 20 frames, so i ∈ (1, T/τ). The fatigue state is judged by Eq. (4): when the state value exceeds 50%, we take the corresponding model's state as the subject's mental state.
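A sketch of the scaled forward algorithm and the likelihood-based decision follows. For numerical stability it works with log-likelihoods rather than the raw probability difference of Eq. (2); this is a common implementation choice, not the paper's exact formulation:

```python
import math

def forward_log(pi, A, B, obs):
    """Log-likelihood log P(O | lambda) via the scaled forward algorithm.

    pi: initial state probabilities; A: state transition matrix;
    B: emission matrix (rows: states, cols: observation symbols);
    obs: sequence of observation indices.
    """
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    log_p = 0.0
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
        s = sum(alpha)               # rescale each step to avoid underflow
        log_p += math.log(s)
        alpha = [a / s for a in alpha]
    return log_p + math.log(sum(alpha))

def fatigue_state(ll_fatigue, ll_nonfatigue):
    """Logistic mapping of the likelihood difference, as in Eq. (4)."""
    d = ll_fatigue - ll_nonfatigue
    return 1.0 / (1.0 + math.exp(-d))   # > 0.5 indicates fatigue
```

In use, the feature sequence of each 20-frame window would be scored under both trained models and the two log-likelihoods passed to fatigue_state.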

5 Experiment

5.1 Preprocessing of Data

We use the Driver Drowsiness Video Dataset as the experimental dataset. It contains 440 videos of 22 subjects, of varying sizes, shot under infrared (IR) lighting in 5 scenarios: Non-Glasses, Glasses, Sunglasses, Night-Non-Glasses and Night-Glasses. The video frames have a size of 640 × 480 and a frame rate of 30 FPS, except for the Night-Non-Glasses and Night-Glasses scenarios, whose frame rate is 15 FPS. The labels used to train the proposed fatigue detection model are: eyes (0 for normal, 1 for sleepy eyes); mouth (0 for speaking or laughing, 1 for normal, 2 for yawning); head posture (0 for left/right rotation, 1 for normal, 2 for up/down/left/right inclination). Before training and testing the proposed model, we normalize the frame size to reduce the computational complexity of the input, which requires detecting the face in each image. Facial detection is shown in Fig. 3; histogram equalization is applied before detection. The face detector is the Viola-Jones algorithm [16]: its fast feature computation (the integral image), effective classifier learning (AdaBoost), and efficient cascade classification strategy greatly improve the speed of face detection.
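Histogram equalization, applied before face detection, can be sketched for a flat grayscale image; in practice OpenCV's cv2.equalizeHist performs the same operation:

```python
def equalize(img, levels=256):
    """Histogram equalization of a flat grayscale image (list of ints).

    Maps each intensity through the normalized cumulative histogram,
    spreading the dynamic range before face detection.
    """
    n = len(img)
    hist = [0] * levels
    for p in img:
        hist[p] += 1
    # cumulative distribution function over intensity levels
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:                     # constant image: nothing to spread
        return list(img)
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in img]
```

This pre-step matters under the low-light night scenarios, where raw contrast is poor.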


Fig. 3. Preprocessing of the image (the number of frames obtained with the relevant OpenCV code is 195,600; the size of the detected region is 128 × 128)

5.2 Experimental Analysis

We divide the facial images into a training set of 194,640 images and a test set of 960 images. We use the training set to train the CNN and generate the feature extraction model; the training process is shown in Fig. 4. We then evaluate the generated model on the test set, with results shown in Table 1. The evaluation standard for the architectures is given in Eq. (5):

Fig. 4. The process of CNN training (the blue line indicates the change of the loss rate and the green line the accuracy rate)

ACC = TP / (TP + FP) × 100%    (5)

As Fig. 4 shows, the loss rate reaches its minimum and the accuracy stabilizes after about 2500 training iterations; beyond that point the loss rate remains nearly constant at around 0.1, and the feature accuracy is approximately 0.9. After CNN training is completed, we test


Table 1. Accuracy of features under different scenarios (the last row is the average accuracy of each feature across scenarios)

Scenario           Eye (%)  Mouth (%)  Head posture (%)
Non-Glasses        95.0     95.6       95.9
Glasses            90.1     95.7       96.0
Sunglasses         73.9     95.0       95.1
Night-Non-Glasses  84.8     85.9       88.2
Night-Glasses      79.6     86.2       89.1
Accuracy           84.7     91.7       92.8

the model, and the results show that in the Non-Glasses scenario the accuracy of all features is highest, above 95%. In the Sunglasses scenario the eye has the lowest accuracy, 73.9%. In the Night-Non-Glasses and Night-Glasses scenarios the accuracy of all features is low, because the low brightness at night degrades the test results. We use the feature vectors output by the last fully connected layer of the CNN to train and test the HMMs; the test results are shown in Table 2. In the Non-Glasses scenario the accuracy of the model rises to 93.2%. In the Night-Non-Glasses and Night-Glasses scenarios the accuracy is lower for a reason similar to the CNN's: due to the low light intensity, the feature vectors extracted by the CNN are less accurate, which lowers the final accuracy. Despite this, the overall accuracy is 87.9%, which shows that the proposed method is effective and feasible.

Table 2. Accuracy of fatigue detection in different scenarios

Scenario           Fatigue (%)  Non-fatigue (%)  Average accuracy (%)
Non-Glasses        93.6         92.8             93.2
Glasses            89.9         93.5             91.7
Sunglasses         87.6         89.8             88.7
Night-Non-Glasses  85.9         85.8             85.9
Night-Glasses      79.6         80.6             80.1
Overall            87.3         88.5             87.9

6 Conclusion

Fatigue detection is an important area of pattern recognition. We propose the CNN-HMM model based on the fatigue-related facial features and the persistence of fatigue. In order to detect the fatigue state effectively, we defined the labels of the fatigue-related facial features and of the driver's fatigue state. In this model, the CNN extracts the facial features, which are closely related to the accuracy and reliability of fatigue detection. The extracted features are input into the HMM, and the HMM learns the changes of the features across the fatigue/non-fatigue states. The trained model can accurately judge the driver's fatigue state.
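The CNN-to-HMM hand-off described above can be illustrated by the decoding step alone: given a sequence of per-frame observations, a two-state HMM picks the most likely fatigue/non-fatigue state sequence. The sketch below is a generic Viterbi decoder with illustrative probabilities, not the authors' trained model:

```python
import numpy as np

def viterbi(obs, start, trans, emit):
    """Most likely hidden-state sequence (e.g. 0 = non-fatigue, 1 = fatigue)
    for a discrete-observation HMM, decoded in log space."""
    start, trans, emit = map(np.asarray, (start, trans, emit))
    logp = np.log(start) + np.log(emit[:, obs[0]])  # initial state scores
    back = []                                       # backpointers per step
    for o in obs[1:]:
        cand = logp[:, None] + np.log(trans)        # cand[from_state, to_state]
        back.append(cand.argmax(axis=0))            # best predecessor per state
        logp = cand.max(axis=0) + np.log(emit[:, o])
    states = [int(logp.argmax())]
    for b in reversed(back):                        # trace the path backwards
        states.append(int(b[states[-1]]))
    return states[::-1]
```

With sticky transitions and reliable emissions, a run of "fatigue-like" observations is decoded as a sustained fatigue state, which is exactly the persistence property the paper exploits.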


T. Yan et al.


Construction and Evaluation of a Blending Teaching Model of Linear Algebra and Probability Statistics in the "Internet +" Background by Using the Gradient Newton Combination Algorithm

Mandan Hou

Heilongjiang University of Finance and Economics, Harbin 150025, China
[email protected]

Abstract. Currently, the blending teaching mode based on "Internet +" is gradually prevailing in colleges and universities, leading the new direction of development of the higher education mode. This paper establishes a discrete process neural network model and uses the gradient Newton combination algorithm to carry out a comparative analysis of the teaching effects of blending teaching classes and ordinary classes in the course "Linear Algebra and Probability and Statistics" in the "Internet +" background. The experimental results show that the blending teaching classes show a great improvement in application ability and a good teaching effect.

Keywords: Blending teaching model · Gradient Newton combination algorithm · Discrete process neural network

1 Introduction

In recent years, MOOCs, micro courses, flipped classrooms and SPOCs have been widely used around the world. The methods of education and teaching face unprecedented opportunities and challenges, and the traditional classroom teaching model has been greatly impacted. The slogan "popular entrepreneurship and innovation" proposed by China also applies to the education sector, and innovative education [1–3] emerges as the times require. So-called innovative education is education that takes the cultivation of innovative talents as its basic value orientation. The "blending teaching" under the "Internet +" background can not only exert the coherent advantages of the traditional teaching knowledge system, but can also take advantage of digital teaching to break the limitations of time and space. How to construct a blending teaching model is a question worth considering for all college teachers. Yu et al. divided the implementation of blending teaching in the context of "Internet +" into four links [4–6]: design, classroom teaching, online teaching and developmental teaching evaluation. Blending teaching under the background of "Internet +" combines the characteristics of the various disciplines, and the design of the learning environment and online teaching present

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1525–1532, 2019. https://doi.org/10.1007/978-981-13-3648-5_197


different styles. Therefore, teachers in the same discipline should study, discuss and practice methods more suitable for the students of their university. The traditional teaching of the Linear Algebra and Probability Statistics course follows "theoretical knowledge (concepts, properties, theorems) + example problems + problem solving", which enables students to master basic knowledge quickly and to improve computing ability and logical reasoning ability, but brings no significant increase in application ability or creativity. Linear Algebra and Probability Statistics has extensive and in-depth applications in engineering technology, the national economy, computer technology, biotechnology, medicine, navigation, aerospace, the military and other fields [7–9]. The three basic mathematics disciplines in ordinary colleges and universities are Advanced Mathematics, Linear Algebra, and Probability Theory and Mathematical Statistics. Advanced Mathematics is offered in the first semester of the freshman year, and Linear Algebra and Probability Theory and Mathematical Statistics are set in the next semester. The academic hours of Linear Algebra and of Probability Theory and Mathematical Statistics at most universities are between 50 and 70, while the academic hours of the two courses at our university before 2015 were 32. Taking into account the actual situation of students in applied universities, in 2015 the two basic mathematics courses were merged into one, namely Linear Algebra and Probability Statistics, which was reformed and innovated with the joint efforts of all teachers in the basic teaching and research section, both to provide basic theoretical knowledge and to develop students' application skills. Blending teaching was performed on some students of the 2016 classes in the economics, management and accounting departments. After one year of practice, the results are very good.

2 Construction of the Blending Teaching Model of Linear Algebra and Probability Statistics

In the second semester of the 2015–2016 school year, our university merged Linear Algebra and Probability Theory and Mathematical Statistics into Linear Algebra and Probability and Statistics. The main reason is that students' mathematics performance in the college entrance examination was too low to learn the two courses separately; merging the two courses into one might improve the passing rate. After one semester of practice, it was found that the passing rate of the entire college was higher than in the two previous semesters, indicating that the innovation was relatively successful. In the 2016–2017 school year, for the purpose of improving students' application ability, small-scale blending teaching was applied to some students. One large class of the economics department (of three large classes in total), one large class of the management department (of three large classes in total) and four large classes of the accounting department (of eleven large classes in total) were selected, with each teacher being responsible for one large class. Teachers drew for their classes: the first four teachers selected their classes randomly while the latter two took the remaining ones. The selected classes met the sampling criteria. The final classes and numbers of students are shown in Table 1. For students who adopted blending teaching starting from the first semester of the freshman year, the final grades were divided into two parts, of which the written

Table 1. Test classes and numbers

Number/class                 Male   Female   Total
Department of economics 1     41      72      113
Department of management 1    32      65       97
Department of accounting 1    27      84      111
Department of accounting 2    28      83      111
Department of accounting 3    29      84      113
Department of accounting 4    28      85      113

test accounted for 70% and the usual scores for 30% (the same proportion as for students of the ordinary classes). For convenience of calculation, usual grades were out of 100 points; for ordinary classes, attendance was 30 points, classroom performance and notes 40 points, and assignments and usual tests 30 points. The usual grades of blending teaching were divided into attendance, accounting for 30 points, and other items accounting for 70 points:

(1) Statistics and explanations on applications of the subject, with a WeChat group presentation;
(2) Learning an Internet course and writing an outline of about 100 words;
(3) Anonymous evaluation and scoring of the exercises of 6 students of the experimental class (an evaluation of more than 20 words, covering advantages and disadvantages, counts as effective);
(4) Three open-book class tests, scoring 10 points each, 30 points in total.

At the end of the first semester of the freshman year, students' written test scores and their ability to apply advanced mathematics were counted. Questionnaires were also conducted in the classes parallel to the six experimental classes. In the Linear Algebra and Probability Statistics course of the second semester, these classes remained experimental classes; at the end of the semester, written test scores and the ability to apply advanced mathematics were again counted.
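The weighting above can be sanity-checked with a few lines of arithmetic; the function names and the example point values are ours, and the split of the 40 points among the first three items is not specified in the text:

```python
def usual_score_blending(attendance, applications, outline, peer_review, tests):
    """Usual score for the blending class: attendance (max 30 points) plus the
    other four items (70 points together, with the three open-book tests
    capped at 30); the per-item split of the remaining 40 points is assumed."""
    return attendance + applications + outline + peer_review + tests

def final_grade(written, usual):
    """Final grade: written test 70%, usual score 30% (both out of 100)."""
    return 0.7 * written + 0.3 * usual
```

For example, a usual score of 87 combined with a written score of 80 yields a final grade of 82.1.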

3 Discrete Process Neural Network Model

3.1 Discrete Process Neural Network

Compared with a general neural network, which can only describe the instantaneous mapping relationship between input and output values, a process neural network describes the cumulative effect, or aggregation effect, of the inputs on the time axis. A process neural network is composed of four operations: time-varying process signal input, spatial weighted aggregation, time-effect accumulation and threshold excitation output [10]. The network topology of a multiple-input single-output system with one hidden layer of discrete process neurons is shown in Fig. 1.

Fig. 1. Topological structure diagram of discrete process neural network


In Fig. 1, $x_1(t_l), x_2(t_l), \ldots, x_n(t_l)$ $(l = 1, 2, \ldots)$ are the $n$ discrete input time series. $\Sigma$ denotes the spatial weighting operator for the input time series, and $\Sigma_t$ is the accumulation operator for time effects on the discrete input signals. The hidden layer has $m$ nodes that implement the weighted aggregation and accumulation of the discrete input signals over space and time; the output node is a non-time-varying neuron. The mapping between network inputs and outputs can be written as

$$y = g\left(\sum_{j=1}^{m} v_j\, f\left(\sum_{i=1}^{n}\sum_{l=1}^{T} \omega_{ij}(t_l)\, x_i(t_l)\, \Delta t_l - \theta_j\right) - \theta\right) \qquad (1)$$

where the length of the time series is a finite $T$; $\omega_{ij}(t_l)$ $(i = 1, 2, \ldots, n;\; j = 1, 2, \ldots, m;\; l = 1, 2, \ldots, T)$ is the connection weight between input-layer node $i$ and hidden-layer node $j$ at $t_l$; $v_j$ is the connection weight between hidden-layer node $j$ and the output node; $\theta_j$ is the excitation threshold of hidden-layer node $j$; $f$ is the excitation function of the hidden-layer neurons; $g$ is the excitation function of the output node; and $\theta$ is the excitation threshold of the output node. From Eq. (1), a process neural network containing one process-neuron hidden layer and a linear output node can be written as

$$y = \sum_{j=1}^{m} v_j\, f\left(\int_0^T \sum_{i=1}^{n} \tilde{w}_{ij}(t)\, x_i(t)\, dt - \theta_j\right) \qquad (2)$$

where $\tilde{w}_{ij}(t) = \sum_{l=1}^{L} w_{ij}^{(l)} b_l(t)$, and $b_1(t), b_2(t), \ldots, b_L(t)$ is a set of finite basis functions in the space $C[0, T]$. The network error analysis is shown in Eq. (3):

$$E = \sum_{k=1}^{K} (y_k - d_k)^2 = \sum_{k=1}^{K} \left(\sum_{j=1}^{m} v_j\, f\left(\sum_{i=1}^{n}\sum_{l=1}^{L} w_{ij}^{(l)} \int_0^T b_l(t)\, x_{ki}(t)\, dt - \theta_j\right) - d_k\right)^2 \qquad (3)$$

3.2 Gradient Newton Combination Algorithm

For the process neural network output error function of Eq. (3), write $W = (w_{11}^{(1)}, \ldots, w_{n1}^{(1)}, w_{12}^{(1)}, \ldots, w_{n2}^{(1)}, \ldots, w_{1m}^{(L)}, \ldots, w_{nm}^{(L)}, v_1, \ldots, v_m, \theta_1, \ldots, \theta_m)$. Then $E$ is a function of $W$, that is, $E = E(W)$. The gradient descent method is given by

$$W(k+1) = W(k) - a(k)\, \nabla E(W(k)) \qquad (4)$$

Write $F(W) = (f_1, f_2, \ldots, f_K)$. The problem of solving the process neural network can then be expressed as

$$F(W) = 0 \qquad (5)$$

The nonlinear system (5) contains $K$ equations in $n \cdot L \cdot m + 2m$ unknowns. When $K \geq n \cdot L \cdot m + 2m$, the system will in general have solutions (or solutions within the error tolerance), which can be found by Newton's iterative formula

$$F'(W(s))\, \Delta W(s) = -F(W(s)) \qquad (6)$$
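Equation (1) maps directly onto array operations. A minimal numerical sketch follows; the array shapes, function names and the defaults for the excitation functions $f$ and $g$ are our assumptions:

```python
import numpy as np

def dpnn_forward(x, omega, v, theta_hidden, theta_out,
                 dt=1.0, f=np.tanh, g=lambda z: z):
    """Eq. (1): y = g(sum_j v_j f(sum_i sum_l w_ij(t_l) x_i(t_l) dt - theta_j) - theta).

    x            : (n, T) array, n discrete input time series of length T
    omega        : (n, m, T) array, time-varying input-to-hidden weights
    v            : (m,) hidden-to-output weights
    theta_hidden : (m,) hidden thresholds; theta_out : output threshold
    """
    # spatial aggregation + time-effect accumulation for each hidden node j
    hidden_in = np.einsum('imt,it->m', omega, x) * dt - theta_hidden
    return g(v @ f(hidden_in) - theta_out)
```

With identity excitation functions, a single series x = (1, 2), unit weights and zero thresholds, the output is simply the accumulated sum 3.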

4 Evaluation Results of the Discrete Process Neural Network for the Blending Teaching Mode

4.1 Discrete Process Neural Network Blending Teaching Mode

To use the discrete process neural network model, the input nodes must be selected well, so the input data are chosen as follows. The gender of each student (1 for male, 0 for female), department (1 for economics, 2 for management, 3 for accounting), college entrance examination results and the self-evaluation of advanced mathematics application ability form $x_1(t_l)$; the scores of the first three parts and the final application-ability evaluation of advanced mathematics form $x_2(t_l)$; the three test scores and the final exam paper scores of advanced mathematics form $x_3(t_l)$; the three parts of the usual scores and the final application-ability evaluation of Linear Algebra and Probability Statistics form $x_4(t_l)$; and the three test scores and the final exam scores of Linear Algebra and Probability Statistics form $x_5(t_l)$. That is, $n = 5$ and $T = 4$. The training sample size K = 650 is rather rich, so according to past experience one hidden layer is selected, with m = 10 hidden nodes. The purpose of this experiment is to study the evaluation of students' mathematics application ability under the blending teaching model in the context of "Internet +", and therefore the output node is the self-evaluation of application ability.

4.2

The Process Neural Network Based on the Gradient Newton Combination Algorithm

Step 1: Set the error accuracy $\varepsilon > 0$; set the learning iteration counter $s = 0$ and the maximum number of learning iterations $M$; select the basis functions $b_1(t), b_2(t), \ldots, b_L(t)$;
Step 2: Initialize the connection weights and excitation thresholds $v_j$, $w_{ij}^{(l)}$, $\theta_j$ $(i = 1, \ldots, n;\ j = 1, \ldots, m;\ l = 1, \ldots, L)$;
Step 3: Calculate the error function $E$ from Eq. (3); if $E < \varepsilon$ or $s > M$, output the calculation result $W(0)$; otherwise, correct the connection weights and excitation thresholds by the gradient descent algorithm;
Step 4: Update $W(s)$ according to Eqs. (5) and (6), and set $s = s + 1$;
Step 5: Calculate the error function $E$ from Eq. (3); if $E < \varepsilon$ or $s > M$, output the calculation result $W(s)$.
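The five steps can be sketched as a two-phase optimizer: a gradient-descent warm-up on E(W) = ||F(W)||², followed by Newton corrections from Eq. (6) with a numerically estimated Jacobian. The learning rate, tolerances and the toy residual in the test are our assumptions, not the paper's settings:

```python
import numpy as np

def _jacobian(F, W, h=1e-6):
    """Forward-difference Jacobian of the residual vector F at W."""
    r0 = np.asarray(F(W))
    J = np.empty((r0.size, W.size))
    for i in range(W.size):
        Wh = W.copy()
        Wh[i] += h
        J[:, i] = (np.asarray(F(Wh)) - r0) / h
    return J

def gradient_newton_train(F, W0, lr=0.01, eps=1e-2, max_iter=100):
    """Steps 1-5 as a two-phase optimizer: gradient descent on
    E(W) = ||F(W)||^2, then Newton steps F'(W) dW = -F(W) (Eq. (6))."""
    W = np.array(W0, dtype=float)
    for _ in range(max_iter):                      # gradient phase (Step 3)
        r = np.asarray(F(W))
        if r @ r < eps:
            return W
        W = W - lr * 2.0 * _jacobian(F, W).T @ r   # grad E = 2 J^T F(W)
    for _ in range(max_iter):                      # Newton phase (Step 4)
        r = np.asarray(F(W))
        if r @ r < eps:
            break
        W = W + np.linalg.lstsq(_jacobian(F, W), -r, rcond=None)[0]
    return W
```

The least-squares solve handles the over-determined case K > n·L·m + 2m mentioned in the text, where Eq. (6) only holds approximately.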

4.3 Evaluation Results

Using MATLAB software, 650 sets of data were selected as the training set for network training. With the error accuracy e = 0.01 selected, training was completed after s = 1879 iterations, and another 8 randomly extracted students were predicted. The results are shown in Table 2.

Table 2. Comparison between the predicted value and the actual value

Sample            1      2      3      4      5      6      7      8
Actual value      9.3    8.5    7.8    8.7    9.0    7.0    8.3    9.4
Predicted value   9.27   8.45   7.91   8.65   9.03   7.01   8.24   9.36
Error rate (%)    0.32   0.58   1.41   0.57   0.33   0.14   0.72   0.43
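The error-rate row of Table 2 can be reproduced as |predicted − actual| / actual × 100; a quick check with the values copied from the table:

```python
actual    = [9.3, 8.5, 7.8, 8.7, 9.0, 7.0, 8.3, 9.4]
predicted = [9.27, 8.45, 7.91, 8.65, 9.03, 7.01, 8.24, 9.36]

# relative error in percent, one value per sample
error_rate = [abs(p - a) / a * 100.0 for a, p in zip(actual, predicted)]
```

Each computed value agrees with the published "Error rate (%)" row to within rounding (under 0.01 percentage points).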

In order to better compare whether the experimental classes and the ordinary classes differ greatly in mathematics application ability, a questionnaire survey was conducted on the six large classes that parallel the experimental classes. The average values of the survey are shown in Table 3.

Table 3. Comparison of application ability questionnaire survey (average score)

Class                1      2      3      4      5      6
Experimental class   8.134  8.432  8.745  8.690  8.656  8.776
Ordinary class       7.679  7.912  8.123  8.212  8.163  8.265

5 Conclusions

From the experiment it can be seen that the application abilities of the students in the classes that applied blending teaching have been greatly improved. Through the discrete process neural network blending teaching model, some students' abilities are predicted, and students at different levels of application ability can be assigned appropriately more or less homework, in the expectation of better learning and teaching methods and higher application value.

Acknowledgements. This work is supported by the General Research Project of Higher Education and Teaching Reform of the Heilongjiang Provincial Department of Education (SJGY20170357).

References

1. Kang, Y.: The post MOOC era of online education—SPOC analysis. Tsinghua Univ. Educ. Res. 2(1), 85–93 (2014)
2. Abu, L.: Curriculum design based on blended learning. China Educ. Technol. Equip. 3(3), 104–106 (2013)
3. Wu, M., Yin, L., Liu, X.: Teaching reform of probability theory and mathematical statistics based on teaching turnover mode. Sci. Educ. 1(11), 148–149 (2015)
4. Xu, W., Wei, C., Sun, S., Chen, Y.: Teaching mode reform of financial regulations and accounting occupation moral course—based on "blended teaching mode Internet + virtual workplace". China Manag. Inf. 21(09), 184–185 (2018)
5. Shen, J.: Explore vocational image processing course teaching mode of blended Internet +. (04), 126–128 (2018)
6. Su, H.: Internet + the background of online and offline teaching model—a case study of reinforced calculation course as an example. J. Huainan Vocat. Tech. Coll. 17(06), 93–95 (2017)
7. Zhong, C.: Internet + education under the background of law of motion animation "curriculum blended teaching model research". Beauty Age 11, 105–106 (2017)
8. Li, D.: Research on SPOC hybrid teaching mode Internet + era of integrated business English course. J. Kaifeng Inst. Educ. 36(11), 126–127 (2016)
9. Da Li, Q.: Integrating the idea of mathematical modeling into the main course of mathematics. Chin. Univ. Math. 1, 9–11 (2006)
10. He, X.G., Xu, S.: Process neuron network. Science Press, Beijing (2007)

Research on Algorithm of Transfer Learning Based on Sensor Location

Fan Yang¹ and Yutai Rao²

¹ Software Engineering Institute, Hubei Radio & TV University, Wuhan 430074, Hubei, China
[email protected]
² Dean's Office, Hubei Radio & TV University, Wuhan 430074, Hubei, China
[email protected]

Abstract. In the research of sensor positioning, the accurate positioning of sensors in wireless sensor networks is one of the current research hotspots. Nodes in a wireless sensor network move from time to time within a local area, and the moving distance is often not large. This not only changes the topology of the network, but also increases the difficulty of accurate positioning: the training samples change dynamically and the positioning must be adjusted dynamically, so the original positioning algorithms are not applicable. Aiming at the high dimensionality and nonlinearity of sensor positioning, this paper proposes a semi-supervised local linear embedded algorithm. This algorithm not only improves the generalization ability of the mapping model, but also has high modeling efficiency.

Keywords: Sensor positioning · Semi-supervised learning · Local linear embedded algorithm

1 Introduction

Currently, there are many transfer learning algorithms dedicated to the positioning of sensors. Typical learning methods include Locally Linear Embedding (LLE), Laplacian Eigenmaps (LE) and Isometric Mapping (Isomap). In current research, Li Shancang et al. of Xi'an Jiaotong University proposed the Hessian Locally Linear Embedding (HLLE) algorithm based on locally linear embedding. The HLLE algorithm derives unknown nodes from known nodes that hardly move. Based on the Laplacian mapping algorithm, Jeffery et al. [1] of Hong Kong University of Science and Technology proposed a dual mobile location algorithm based on Laplacian mapping and SVD decomposition; the algorithm can predict moving unknown nodes and correct their positions. Wang Chengqun et al. of Zhejiang University proposed an improved Isomap-based localization algorithm based on the isometric mapping algorithm, which can calculate the positions of unknown nodes that do not move.
In practical applications, the nodes in a wireless sensor network will actually move irregularly or suddenly [2]. How to locate such nodes in real time is the problem to be solved in this paper. In this paper, the sensor positioning problem is studied in a

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1533–1536, 2019. https://doi.org/10.1007/978-981-13-3648-5_198


semi-supervised framework, and a semi-supervised local linear embedded algorithm is proposed to achieve a balance between modeling efficiency and precise positioning.

2 ATL-BSL Algorithm

In order to improve the efficiency of sensor positioning modeling and the accuracy of sensor positioning, this paper proposes a semi-supervised local linear embedded algorithm for the high dimensionality and nonlinearity of positioning data. The ATL-BSL algorithm is still a manifold learning method [3]; when learning nonlinear data, it can better reflect the essential nonlinear structure of the data. The semi-supervised ATL-BSL algorithm is based on the sensors' movement characteristics: each positioning round uses the non-moving sensors as learning samples and then trains the other, mobile nodes [4]. Due to the semi-supervised nature of ATL-BSL, positioning accuracy can be improved while the dimension is reduced.
The general idea of the ATL-BSL algorithm is to set a known number of nodes in the wireless sensor network that will not move much in the future, and to mark the positions of these nodes according to the collected node signals. The other unknown nodes in the wireless network, including unknown nodes that keep moving, rely on the marked nodes as samples to obtain their real-time locations [5].

Step 1: Set up N known nodes and determine their locations. These N nodes are used as samples in each subsequent learning round; that is, there are sample data D = {d_i | i = 1, 2, 3, ..., N}, with position information L = {(x_i, y_i, z_i) | i = 1, 2, 3, ..., N}.
Step 2: For each sample point p_i, select the n nearest unknown nodes as its K-neighborhood, computing the point-to-point distances with the distance formula.
Step 3: Calculate the local reconstruction weight matrix of each sample point and its neighboring unknown nodes, finding the weight matrix by minimizing the reconstruction objective function.
Step 4: Map the sample points and the unknown nodes in their neighborhoods into the low-dimensional space.
Step 5: Obtain the set of neighboring node coordinates of each unknown node through the mapping matrix.
Step 6: From the nodes obtained in Step 5, select the q adjacent nodes by calculating the Euclidean distance.
Step 7: Use the centroid algorithm to find the coordinates of the unknown node.
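Steps 2 and 3 follow the standard locally linear embedding recipe: pick nearest neighbours, then solve for constrained reconstruction weights. A minimal sketch of the weight solve for one point follows; the small regularization term for numerical stability is our assumption:

```python
import numpy as np

def lle_weights(point, neighbors, reg=1e-3):
    """Solve for LLE reconstruction weights w minimizing
    ||point - sum_j w_j * neighbors[j]||^2 subject to sum_j w_j = 1."""
    Z = neighbors - point                        # shift neighbours to the query point
    C = Z @ Z.T                                  # local Gram matrix (k x k)
    C = C + reg * np.trace(C) * np.eye(len(C))   # regularize (C may be singular)
    w = np.linalg.solve(C, np.ones(len(C)))      # unnormalized weights
    return w / w.sum()                           # enforce the sum-to-one constraint
```

For a point at the centre of four symmetric neighbours, the solver returns uniform weights of 0.25 that reconstruct the point exactly.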

3 Simulation Results

In order to verify the effectiveness of the algorithm, it is validated on randomly generated data in MATLAB simulation experiments. All nodes in the simulation are randomly arranged in a 1000 m × 1000 m region, and the communication radius of a sensor node is assumed to be 100 m. For comparison experiments, we also run Jeffery's localization algorithm [6, 7] and Wang's Isomap algorithm, which locates and corrects unknown nodes simultaneously [8]. The experimental environment is MATLAB 2015a [9]. To verify the validity of the algorithm, each experiment is run 50 times on the dataset and the average value is used as the evaluation criterion. Figure 1 compares the positioning accuracy of the ATL-BSL algorithm, Jeffery's algorithm and Wang's algorithm when the training sample proportion is taken as 20–80% [10]. Even when the ATL-BSL algorithm faces many unknown sample points, the error becomes smaller and the positioning effect remains stable [11, 12].

Fig. 1. Comparison of positioning errors for the three algorithms

Acknowledgements. This article is funded by the Science and Technology Research Project of the Hubei Provincial Department of Education (B2016592): Research on Network Instability Control Methods for Wireless Networks. Associate Professor Rao Yutai of Hubei Radio and TV University is the corresponding author of this article; we thank him for his support.


References

1. Huang, Z., Huang, Y., Xu, S., et al.: Discrimination of the traditional Chinese medicine from schisandra fruits by flash evaporation-gas chromatography/mass spectrometry and fingerprint analysis. Chromatographia 78(15–16), 1083–1093 (2015)
2. Hong, X., Wang, J.: Discrimination and prediction of pork freshness by e-nose. In: Proceedings of International Conference on Computer and Computing Technologies in Agriculture, pp. 1–14. Springer, Berlin (2011)
3. Palma, P.D., Vito, S.D., Miglietta, M., et al.: E-nose as a potential quality assurance technology for the detection of surface contamination by aeronautic fluids. In: Sensors and Microsystems, pp. 443–446. Springer International Publishing (2014)
4. Ridder, D.D., Kouropteva, O., Okun, O., et al.: Supervised locally linear embedding. In: Proceedings of International Conference on Artificial Neural Networks and Neural Information Processing, pp. 333–341. Springer, Berlin (2003)
5. Mardini, W., Khamayseh, Y., Almodawar, A.A., et al.: Adaptive RSSI-based localization scheme for wireless sensor networks. Peer-to-Peer Netw. Appl. 6, 1–14 (2016)
6. Ran, Q., Feng, R., Yu, N., et al.: A weighted least squares source localization algorithm using TDOA measurements in wireless sensor networks. In: Proceedings of the 2016 6th International Conference on Electronics Information and Emergency Communication (ICEIEC) (2016)
7. Cheon, J., Hwang, H., Kim, D., et al.: IEEE 802.15.4 ZigBee-based time-of-arrival estimation for wireless sensor networks. Sensors 16(2) (2016)
8. Ahmad, T., Li, X.J., Seet, B.-C.: A self-calibrated centroid localization algorithm for indoor ZigBee WSNs. In: Proceedings of the 2016 8th IEEE International Conference on Communication Software and Networks (ICCSN) (2016)
9. Jiang, R., Yang, Z.: An improved centroid localization algorithm based on iterative computation for wireless sensor network. Acta Phys. Sin. 65(3) (2016)
10. Ji, X., Hou, C., Hou, Y., et al.: A distributed learning method for —regularized kernel machine over wireless sensor networks. Sensors 16(7) (2016)
11. Li, J., Song, N., Yang, G., et al.: Improving positioning accuracy of vehicular navigation system during GPS outages utilizing ensemble learning algorithm. Inf. Fusion 35, 1–10 (2017)
12. Abdellatif, M.M., Oliveira, J.M., Ricardo, M.: The self-configuration of nodes using RSSI in a dense wireless sensor network. Telecommun. Syst. 62(4), 695–709 (2016)

Study on the Application of Big Data Analysis in Monitoring Internet Rumors

Zijiang Zhu, Weihuang Dai, and Yi Hu

South China Business College, Guangdong University of Foreign Studies, Guangzhou 510545, China
[email protected]

Abstract. The network is not only a virtual society but also a place of public opinion. Based on the current prevalence of internet rumors, this paper explores the characteristics of their dissemination, such as strong abruptness and wide circulation. Combined with the advantages of big data in the analysis and processing of massive social network information, the monitoring of internet rumors based on big data technology and its prevention design are proposed. Taking the analysis of Sina Weibo data as an example, a distributed big data vertical search engine based on a Nutch/Hadoop cloud platform is built to discover public opinion on the Internet at the earliest moment, using directed collection and whole-network search, and internet rumors are searched vertically. This strengthens the crackdown on internet rumors and provides a new way to promote healthy Internet public opinion and maintain social stability and harmony.

Keywords: Internet rumors · Big data · Data capture · Data mining

1 Introduction

The network is both a virtual society and a place for public opinion. Strengthening the crackdown on internet rumors in order to promote healthy public opinion and righteousness on the Internet and to safeguard social stability and harmony is an important guarantee for a society under the rule of law [1]. For some lawbreakers, the Internet has become a paradise for breaking the law. Examples include the false information that severely distorted Lei Feng's image; the claim that the government paid 200 million Yuan in compensation to the foreign tourists in the "7∙23" EMU accident; and the well-known netizens "Qin Huo Huo" and "Lierchaisi" and the famous online exposer "Zhou Lubao" maliciously slandering public figures in order to increase their own visibility and influence on the Internet and illegally seek benefits [2]. This phenomenon of committing serious crimes by manipulating Internet public sentiment has severely hindered the healthy operation and harmonious development of society. It has brought serious challenges to the Internet in the era of the rule of law, and it is extremely urgent to speed up research on network defamation.

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1537–1545, 2019. https://doi.org/10.1007/978-981-13-3648-5_199


2 Harm of Internet Rumors With the rapid development of Internet and multimedia technologies, especially social networks such as Twitter and Weibo, the rumors have changed greatly in terms of their survival methods and transmission methods, and they have shifted to more relying on the spread of the Internet and multimedia. With the characteristics of high speed, wide spread, and strong suddenness, a new term is coined—“internet rumors.” Due to the characteristics of the Internet spread, the development of rumors has reached its limit. Regardless of the speed of replication and the norms, the development of rumors has reached an unprecedented peak, and its lethality is also more powerful. As a result, Internet becomes a place of rumors. For instance, the “maggot citrus incident” has caused severe sluggish sales of citrus throughout the country; the earthquake rumors have caused millions of people going to the streets of Shanxi to “take refuge”; “leather milk powder” has hit domestically produced dairy products; rumors spreading in the QQ group have triggered a nationwide “robbing salt” event; the forged “public announcement No. 47” and “the food touched with blood drops spreading virus” rumors caused panic; and so on. More and more scams and rumors are flooding the Internet, including food safety, natural disasters, public safety, commercial interests, and instigation of social unrest. Even more and more lawbreakers are taking advantages of loopholes in laws and technologies to create rumors, which creates dissatisfaction. They control public opinion in their hands, and mix the righteous attributes of Internet public sensation, disturb the stability and harmony of society, and seek illegal interests for themselves [3]. Internet rumors not only ruin personal reputation, but also cause great troubles to victims and victim enterprises. What’s more serious is that they damage the image of the country and affect social stability. 
If measures are not taken actively, internet rumors will bring incalculable losses to the country and society.

3 The Spreading Principles and Characteristics of Internet Rumors

3.1 Causes of Internet Rumors [4]

The American social psychologists Allport and Postman gave a general formula for rumor generation in their book The Psychology of Rumor: R = i × a. In the formula, R (Rumor) represents the breadth and depth of rumor spread, i (Importance) represents the importance of the information to a group of people, and a (Ambiguity) represents the ambiguity of the information. The formula tells us that a rumor can be produced only when the importance i and the ambiguity a are both present; when either is insufficient, the rumor cannot be produced. Internet rumors share this feature. Secondly, the spread of internet rumors has a profound social background. Contemporary China is in a critical period of transition, and social problems such as housing, medical insurance and old-age care tend to intensify further. The official information disclosure system is neither complete nor timely, which gives a few lawbreakers the opportunity to create rumors. The rapid

Study on the Application of Big Data Analysis …


development of modern science and technology has made it convenient for people to publish and disseminate news; in particular, the emergence of Weibo and WeChat has increased the speed and breadth of rumor spread.

3.2 The Transmission Route of Internet Rumors

There are two classical models of rumor spread in Western communication studies [5]. One is the Shannon-Weaver model proposed by the American mathematician C. Shannon and the scientist W. Weaver; the other is the Schramm model proposed by the American communication scholar W. Schramm. The Shannon-Weaver model regards information dissemination as single-threaded and places signal noise only at the channel stage. In the Schramm model, both sides of information dissemination are main bodies, interacting through the acceptance and transmission of information to jointly complete the dissemination. According to the Shannon-Weaver model and the Schramm model, the transmission path of internet rumors can be summarized as shown in Fig. 1 below.

[Figure 1 diagram: nodes include "Unsupported or rumor information", "Release rumor", "Comments", "Simplify/Modify", "Transmission", "Multistage transmission", "Noise A", "Noise B" and "Refute a rumor"]

Fig. 1. Transmission routes of internet rumors

As seen from Fig. 1, the spread of an internet rumor generally goes through the following steps: (1) unconfirmed or intentionally fabricated information reaches the publisher; (2) the publisher, affected by noise A after receiving the information, sends it out; (3) followers who see the information comment on it or forward it after their own interpretation, and some simplify or alter the information according to their own understanding and release it again; (4) after the followers release the information, their interpersonal circles forward or comment on it again, forming multi-level retransmission that eventually causes widespread diffusion of the rumor; (5) a few informed people or official organizations make statements and verifications against the doubtful points of the information, seek out evidence to refute the rumor, and then publish the refutation. In this process, the forwarding link is the most important part of internet information dissemination; the extensive dissemination and distortion of rumors are carried out through this link.


3.3 The Spread Characteristics of Internet Rumors

The obvious differences between internet rumors and traditional media rumors are as follows [6]: (1) Different spread platforms. Traditional media spread mainly through newspapers, magazines, television and other media in the form of text and multimedia; the contents are checked and approved before release, and the speed of spread is slow. Internet rumors spread mainly through BBS, Weibo, WeChat and other network tools; although the contents are also in the form of text and multimedia, they are not reviewed, and the spread is extremely fast. (2) Different spread actors. Traditional media are representatives of the mass media, operate at a professional level, and spread information generally one-way. The main actors behind internet rumors are ordinary people with grassroots characteristics; the spread is bidirectional with strong feedback, and the same person can be both a communicator and a receiver. (3) Different spread effects. Traditional media communicate slowly and are constrained by time and space, so the spread content is not easily deformed. Internet rumors are node-based and exhibit fission-like spread at extremely high speed; they are not restricted by time and space and are easily deformed in the process of spread.

4 Study on the Predictability of Outbreak Points and the Power-Law Distribution of Big Data Flow Behavior

The daily behavior pattern of humans is not random but "bursty." In his monograph Bursts: The Hidden Pattern Behind Everything We Do, Barabási claims to have found an orderly pattern underlying human behavior that has long been considered completely accidental. He calls this pattern "bursty": people's work, entertainment and other activities are intermittent, erupting suddenly in the short term and then falling almost silent. Barabási also points out that many things in the natural and artificial world follow the power law, and that once a power-law pattern appears, an outbreak point will appear [7]. The Internet is held together by a few highly linked nodes: very few nodes receive massive numbers of clicks, while the vast majority of websites receive only a handful of visits. The power-law distribution determines the structure and direction of the Internet, and it also dominates the rhythm of human activity. Under increasingly sophisticated digital technology, with information gathered from all over the world, human behavior can no longer be regarded as a collection of irrelevant, casual, independent accidental events [8]. When life is digitized, formulated and modeled, we find that everyone is very similar: we all exhibit a bursty pattern, and it is very regular. People look casual and accidental, but they are easy to predict. In the past, because there was no relevant big data, it was difficult to explore human behavior; in the big data era, everything is under ever-changing but increasingly sophisticated surveillance, and clues to our every movement can be found in a database.
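The heavy-tailed click distribution described above can be illustrated with a small simulation. The following Python sketch (not part of the original study; the distribution parameter is an illustrative assumption) draws per-site visit counts from a Pareto distribution and shows that a tiny fraction of sites captures a large share of all visits:

```python
import random

def simulate_visits(n_sites, alpha, seed=42):
    """Draw per-site visit counts from a Pareto (power-law) distribution."""
    rng = random.Random(seed)
    return [rng.paretovariate(alpha) for _ in range(n_sites)]

def top_share(visits, fraction):
    """Share of all visits captured by the top `fraction` of sites."""
    ranked = sorted(visits, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return sum(ranked[:k]) / sum(ranked)

# alpha close to 1 gives a very heavy tail, as with website popularity.
visits = simulate_visits(100_000, alpha=1.2)
print(f"Top 1% of sites capture {top_share(visits, 0.01):.0%} of all visits")
```

With a tail exponent this heavy, the top 1% of sites dominates total traffic, which is the "few highly linked nodes" structure the text describes.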


It is the existence of these records that detonates the personal privacy crisis. There is ample evidence that most human behavior is subject to laws, models and principles, and that its reproducibility and predictability are comparable to those of the natural sciences [9]. If you want to predict the future, you must first understand the past. When the entropy is low, we can be particularly sure about something. Each event is followed by a uniquely arranged series of events; although it is impossible to predict exactly when something will happen, it still has an inherent order even though it looks casual. If we can carefully distinguish contingency from predictability, we can predict characteristics of the social structure.

5 The Application of Big Data in the Analysis of Internet Rumors

In light of the wide spread of internet rumors and their serious harm to social stability, this paper notes that people generally propose to crack down on internet rumors in the following ways: first, the self-purification function of the microblogging platforms and their institutions; second, the traditional media playing a verification role in refuting internet rumors; third, the disclosure and transparency of official information [10]. Undeniably, all of these can only be carried out after a rumor has occurred and has already influenced public opinion; the governance effect is poor, and rumors cannot be nipped in the bud. Relying on modern intelligent means, in particular applying big data technology to refute rumors on the Internet, is a worthwhile attempt and an inevitable trend of social development. This study attempts to combine big data technology with the analysis of internet rumors to implement event topic tracking, website tracking, figure tracking, geographic tracking, organization tracking and activist tracking, so as to achieve omni-directional, three-dimensional public opinion tracking, real-time monitoring, trend analysis, timely alarms, timely rumor refutation and other functions.

5.1 Information Capture

We set up a distributed big data vertical search engine based on the Nutch/Hadoop cloud platform. It combines directional acquisition with full-network search, obtains public sentiment from network monitoring and other public sources, stores it in the massive database of the public opinion platform, builds real-time indexes over the public opinion data in bulk with the sentiment search engine, discovers public sentiment at the first moment, and conducts a vertical search for internet rumors. The vertical search engine in this study is based on Nutch/Hadoop; it searches only online public opinion big data and integrates ICTCLAS, a Chinese word segmentation tool developed by the Institute of Computing Technology of the Chinese Academy of Sciences. It selectively collects web pages related to internet public opinion, including microblogs, communities and BBS, and improves the quality of information processing. The following is an example of analyzing Sina Weibo data.


1. Big Data Capture Based on the Sina Weibo API

Internet data acquisition is usually done with web crawlers. A crawler program, starting from an entrance URL, saves the web page content as text files in the local storage system according to a certain crawling strategy, extracts the valid addresses in the page as the next entrance URLs, and terminates when the crawl is completed or the preset crawling conditions are met. By changing URLs in this way, a large amount of public opinion platform information can be captured. Compared with a web crawler, the open API interface of an internet public opinion platform can obtain the corresponding data more succinctly, which guarantees that the program obtains the platform data efficiently. OAuth certification is the prerequisite for obtaining data from the public opinion platform. The specific process of obtaining user resource authorization is as follows: (1) The user applies to the OAuth service provider of the internet public opinion platform to obtain the application-specific App Key and App Secret, and uses the HMAC-SHA1 algorithm to digitally sign the requests sent by the user. (2) The user gets a Request Token through the internet public opinion open platform. The Request Token, not yet authorized by the server, contains the corresponding key, encryption algorithm, request timestamp, random string and version number. (3) The user sends a request to the Request Token authorization address of the platform server; the server approves the request and issues the as-yet-unauthorized OAuth token and the corresponding OAuth secret. (4) The token and secret obtained in the previous step are sent to the user authorization address of the platform to apply for Request Token authorization.
(5) After authorization, the user initiates a request to the Access Token address of the internet public opinion platform and exchanges the Request Token authorized in the previous step for an Access Token. (6) The server agrees to the user's request and issues the Access Token authorized by the platform together with the corresponding key. (7) The user signs subsequent requests with the authorized Access Token in OAuth form based on the HTTP header, so that the software obtains authorization to use the user's identity resources.
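The HMAC-SHA1 signing mentioned in step (1) can be sketched as follows. This is a generic OAuth 1.0a signature per RFC 5849, not Sina's actual endpoint; the URL, keys and token values below are placeholders:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def oauth1_signature(method, url, params, consumer_secret, token_secret=""):
    """Build an OAuth 1.0a HMAC-SHA1 request signature (RFC 5849)."""
    enc = lambda s: quote(str(s), safe="~")
    # 1. Percent-encode and sort all parameters into the parameter string.
    param_str = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    # 2. Signature base string: METHOD & encoded URL & encoded parameters.
    base = "&".join([method.upper(), enc(url), enc(param_str)])
    # 3. Signing key: consumer secret + "&" + token secret.
    key = f"{enc(consumer_secret)}&{enc(token_secret)}".encode()
    digest = hmac.new(key, base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

params = {
    "oauth_consumer_key": "APP_KEY",        # App Key from the platform (placeholder)
    "oauth_token": "REQUEST_TOKEN",         # token being exchanged (placeholder)
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": "1500000000",
    "oauth_nonce": "abc123",
    "oauth_version": "1.0",
}
sig = oauth1_signature("POST", "https://api.example.com/oauth/access_token",
                       params, "APP_SECRET", "TOKEN_SECRET")
print(sig)
```

The resulting base64 signature is attached as the oauth_signature parameter of the request in steps (3)-(5).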

2. Page Parsing Based on an Internet Crawler

The crawler sends the original user name and password, encoded in Base64, to the server in a certain format. The server extracts the string from the authorization information contained in the HTTP header and decodes it to obtain the original user name and password, so the program can simulate the web page login process. The main code to simulate the login process is as follows:


……
code = username + ":" + password;
auth = "Basic " + Base64.encode(code, "utf-8");
connection = openConnection(url);
connection.setRequestMethod("GET");
connection.setRequestProperty("Authorization", auth);
inputStream = connection.getInputStream();
……

The encoded user name and password are sent as the Authorization property each time a connection is requested, and the input stream is used to obtain the content returned by the request, which completes the login process. The input stream is then read, and the content of the web page at the specified URL can be taken out and stored in the local storage system as text files according to a certain classification standard. Through the combination of these two information acquisition technologies, public opinion information can be obtained from the internet public opinion platform.
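The crawling strategy described above (seed URL, save page, extract links as new entrances, repeat until done) can be sketched as a breadth-first loop. This Python sketch replaces live HTTP fetching with an in-memory page graph so it stays runnable; the URLs and page texts are invented for illustration:

```python
from collections import deque

# Toy "web": page URL -> (page text, outgoing links). A real crawler would
# fetch these over HTTP; here the graph is in memory.
PAGES = {
    "http://opinion.example/1": ("rumor about salt shortage",
                                 ["http://opinion.example/2"]),
    "http://opinion.example/2": ("weather report",
                                 ["http://opinion.example/3",
                                  "http://opinion.example/1"]),
    "http://opinion.example/3": ("salt rumor refuted by officials", []),
}

def crawl(seed, max_pages=100):
    """Breadth-first crawl: fetch a page, store its text, queue unseen links."""
    stored, queue, seen = {}, deque([seed]), {seed}
    while queue and len(stored) < max_pages:
        url = queue.popleft()
        text, links = PAGES.get(url, ("", []))
        stored[url] = text                # save page content "locally"
        for link in links:                # extracted links become new entrances
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return stored

pages = crawl("http://opinion.example/1")
print(len(pages), "pages crawled")
```

The `seen` set plays the role of the crawler's visited-URL table, preventing the loop back to page 1 from causing repeated fetches.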

5.2 Data Pre-processing

Big data preprocessing includes cleaning, deduplication, noise removal, classifier construction, and automatic extraction of the elements and keywords of public opinion. Through preprocessing, public opinion information is judged accurately at the first moment, and the most valuable information is extracted precisely. (1) Statistical analysis. To meet the needs of complex information mining, a probabilistic model with clear goals and tasks is needed; a statistical data mining model is applied to the objects to be extracted, and statistical analysis techniques are used to mine the information of interest. (2) Association rules. When users access web pages or microblogs, they often browse, in the same visit, a collection of pages with no ordering relationship; mining discovers the intrinsic links among these pages, which are the association rules. An obtained association rule can then be used as a heuristic to analyze the pages a remote client may request and to predict user behavior. (3) Cluster analysis. The essence of cluster analysis is to establish a classification method that automatically classifies a batch of data according to their degree of similarity, without prior knowledge; each class is a collection of many similar individuals, and there are obvious differences between different classes. (4) Classification. Classification divides data items according to predefined categories; the criteria are predefined, and the process is similar to mail sorting. Classification requires extracting the key attributes describing the known information and classifying with a supervised inductive learning algorithm. For Chinese content, understanding-based word segmentation, mechanical word segmentation, or the Chinese lexical analysis system ICTCLAS can be used.
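Two of the steps above, deduplication and keyword extraction, can be sketched as follows. This is an illustrative fragment, not the platform's implementation; the stopword list and sample posts are invented, and a real system would use ICTCLAS for Chinese segmentation rather than whitespace splitting:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "in", "is", "and", "to"}

def preprocess(posts):
    """Clean and deduplicate raw posts (lowercase, strip punctuation, drop repeats)."""
    seen, cleaned = set(), []
    for post in posts:
        norm = re.sub(r"[^\w\s]", "", post.lower()).strip()
        if norm and norm not in seen:
            seen.add(norm)
            cleaned.append(norm)
    return cleaned

def keywords(posts, k=3):
    """Extract the k most frequent non-stopword terms as candidate keywords."""
    counts = Counter(w for p in posts for w in p.split() if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(k)]

raw = ["Salt shortage coming!", "salt shortage coming", "Officials deny salt rumor."]
clean = preprocess(raw)
print(clean)             # the near-duplicate second post is removed
print(keywords(clean))   # 'salt' ranks first by frequency
```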

5.3 Data Mining

The main task of data mining is to analyze the preprocessed big data of rumors and public opinion and to use a variety of effective algorithms to extract urgent and important internet rumors and keywords from the data, providing the data foundation for the subsequent analysis of internet public opinion and for the rumor alarm module. Typical algorithms include the wrapper-model-based information extraction algorithm STALKER, and the multi-slot information extraction rule learner WHISK, which is based on inductive logic programming [11]. This paper uses the STALKER algorithm to build the big data mining system; its structure is shown in Fig. 2.
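The core idea of wrapper-based extraction in the STALKER family is landmark rules of the form "skip to this sequence of landmarks, then read the slot." The following Python sketch shows the SkipTo mechanism on an invented page fragment; the markup and rules are illustrative, not taken from the paper:

```python
def skip_to(text, landmarks, start=0):
    """STALKER-style SkipTo: consume landmarks in order, return the index just
    after the last one, or -1 if any landmark is missing."""
    pos = start
    for mark in landmarks:
        idx = text.find(mark, pos)
        if idx < 0:
            return -1
        pos = idx + len(mark)
    return pos

def extract(text, start_rule, end_landmark):
    """Extract the slot between the start rule's end and the next end landmark."""
    begin = skip_to(text, start_rule)
    if begin < 0:
        return None
    end = text.find(end_landmark, begin)
    return text[begin:end].strip() if end >= 0 else None

# Hypothetical page fragment; the rules below are illustrative.
page = "<b>Topic:</b> salt shortage rumor <b>Source:</b> Weibo"
print(extract(page, ["Topic:</b>"], "<b>"))   # prints "salt shortage rumor"
```

A learned wrapper is essentially a set of such start/end rules, induced from a few labeled example pages.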

[Figure 2 diagram: components include Graphical user interface, Model assessment, Knowledge base, Data mining engine, Database or data warehouse server, Data cleaning, Data integration, Database, Filtering, Data warehouse]

Fig. 2. Structure of big data mining system

5.4 Public Opinion Service Platform

The public opinion service platform is developed with WEB and Android technology. Based on the mined public opinion information, the platform provides information services including timely warning and push, statistical analysis, automatic report generation, and public opinion guidance and control; pop-up alarms, mail alarms, SMS alarms and other methods are used to release information timely and accurately, fully satisfying the information and decision requirements of public opinion monitors [12].
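The alarm-dispatch logic of such a platform can be sketched as a simple threshold rule over mined items. The type, field names and threshold values below are illustrative assumptions, not the paper's design; a real platform would tune thresholds per alarm channel:

```python
from dataclasses import dataclass

@dataclass
class OpinionItem:
    topic: str
    heat: float          # e.g. forwards plus comments per minute (assumed metric)
    rumor_score: float   # 0..1 output of the mining stage (assumed metric)

def dispatch_alarms(items, heat_threshold=100.0, rumor_threshold=0.8):
    """Select topics that should trigger pop-up / mail / SMS alarms."""
    return [it.topic for it in items
            if it.heat >= heat_threshold and it.rumor_score >= rumor_threshold]

feed = [
    OpinionItem("salt shortage", heat=540.0, rumor_score=0.93),
    OpinionItem("weather", heat=80.0, rumor_score=0.10),
]
print(dispatch_alarms(feed))   # -> ['salt shortage']
```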

6 Conclusion

Applying big data analysis to internet rumors is an important way to deal with internet rumors and public opinion in the future, and it will surely bring a brand-new environment to the Internet and its users. However, many factors need to be considered in the concrete combination and application. In particular, this is a


key technology for coping with the rapidity of information dissemination on the Internet today and for ensuring that public opinion can be searched out at the first moment and quickly controlled. In this paper, search engine speed optimization, rumor source tracing and other technologies are covered relatively little; these will become important indicators in the evaluation system and directions for future study.

Acknowledgements. This article was supported by the Characteristic Innovation Project of Colleges and Universities of Guangdong Province (Natural Science), 2016, No. 2016KTSCX182, and by the Youth Innovation Talent Project of Colleges and Universities of Guangdong Province, 2016, No. 2016KQNCX230.

References

1. Deng, Z., Liu, Z., Wang, X.: Summary of internet rumor research: causes, influences, and dispelling mechanisms. Leg. Syst. Expo. 6, 46–49 (2013). (in Chinese)
2. Yu, R., Li, Y.: Beijing websites building the platform to refute rumors together. People's Daily (004) (2013). (in Chinese)
3. Wang, L., Cao, S.: A comparative analysis of the research subjects of internet public opinion under the perspective of subject intersection. J. China Soc. Sci. Tech. Inf. 36(2), 159–169 (2017). (in Chinese)
4. Xie, Y.: Blue Book of Public Opinion: Report on Chinese Public Opinion and Crisis Management. Social Sciences Academic Press, Beijing (2015). (in Chinese)
5. Huang, H.: Internet rumors participation psychoanalysis and management strategies in the era of big data. J. Chengdu Univ. Technol.: Soc. Sci. Ed. 25(2), 81–85 (2017). (in Chinese)
6. Biega, J., Kuzey, E., Suchanek, F.M.: Inside YAGO2s: a transparent information extraction architecture. In: Proceedings of the 22nd International Conference on World Wide Web, pp. 325–328. ACM, New York (2013)
7. Sun, L., Yin, P.: Study on emotional intensity of internet public opinion based on big data technology. Comput. Digit. Eng. 46(1), 160–166 (2018). (in Chinese)
8. Xue, S., Lu, R., Ren, Y.: Weibo hot topic discovery based on speed growth. Appl. Res. Comput. (9), 2598–2601 (2013). (in Chinese)
9. Li, J., He, Y., Xiong, Q.: Study on internet opinion text mining based on big data technology. J. Inf. 32(10), 1–6 (2014). (in Chinese)
10. Tang, M., Su, X., Zhang, Y.: Intelligence acquisition for emergencies oriented to big data. Inf. Sci. 36(3), 46–50 (2018). (in Chinese)
11. Toulson, R., Wilmshurst, T.: Fast and Effective Embedded Systems Design, pp. 257–290. Newnes, Oxford (2017)
12. Zhang, S., Wang, L.: Study on internet public opinion analysis technology based on knowledge and data driven factors. Mod. Inf. 38(4), 106–111 (2018). (in Chinese)

Eyes and Mouth States Detection for Drowsiness Determination

Yuexin Tian1, Changyuan Wang1, and Hongbo Jia2

1 Xi'an Technological University, Xi'an, China
{tian_yuexin,acyw901}@163.com
2 Institute of Aviation Medicine, Military Medical University, Air Force, Beijing, China
[email protected]

Abstract. People are often fatigued in daily life; sometimes fatigue can seriously affect our normal life and even endanger lives. We therefore need a method that can detect drowsy persons in time. In this paper, the states of the eyes and mouth are used to determine whether a person is actually drowsy. The Percentage of Eye Closure (PERCLOS) is used to judge whether the eyes are closed or open, and the Percentage of Mouth Closure (PMRCLOS) is used to judge whether the mouth is normal or yawning. The state of the person is determined through the threshold time from the normal state to the drowsy state. Specifically, the method comprises the following steps: (1) improving the image quality in preprocessing; (2) using the Active Shape Model (ASM) to detect the human face; (3) extracting the Histogram of Oriented Gradients (HOG) features of the eyes and mouth; (4) using a Support Vector Machine (SVM) to classify the states. The proposed method proved effective when contrasted with the predictions of a human rater.

Keywords: Drowsiness detection · Eyes state · Mouth state · Threshold time · HOG feature · SVM
1 Introduction

Drowsiness is an involuntary human physical activity. Webster's Dictionary defines drowsiness as a feeling of being sleepy and lethargic [1]. Since drowsiness is directly related to human concentration and activeness, drowsiness detection has been applied in fields such as human behavioral analysis, fatigue detection and alertness-level measurement [2]. Drowsiness detection methods can generally be split into two classes: contact methods and contactless methods [3]. Contact methods detect drowsiness from physiological signals such as the electrocardiogram (ECG) [4], electroencephalogram (EEG) [5], electrooculogram (EOG) [6] and electromyogram (EMG) [7]; the person's state is inferred from changes in these signals. However, the great majority of such devices are costly and require direct contact with people, which places mental pressure on them, so the detection results are not precise.

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1546–1554, 2019. https://doi.org/10.1007/978-981-13-3648-5_200

Contactless methods test fatigue mainly by machine vision: drowsiness is detected from changes in people's appearance, so their fatigue states are not disturbed by devices. The ability to detect and describe salient features is an important component of a face recognition system [8]. However, most existing methods are simulated in ideal environments and ignore lighting and noise, and the great majority analyze appearance by detecting the state of the eyes while neglecting the state of the mouth. As is well known, people's mouths open wide when they yawn, and the face can be captured with a camera. In this situation, we propose a new method to detect fatigue. We obtain video streaming from a web camera and develop the system with Visual Studio 2012 and OpenCV. Specifically, the method utilizes ASM to detect the face and locate the eyes and mouth, then extracts HOG features of the eyes and mouth and uses an SVM to classify their states. Additionally, the method accurately detects the states of the eyes and mouth at large head angles and adapts to variable illumination conditions. The details of the method are shown in Fig. 1.

[Figure 1 flow: Video Streaming → Image preprocessing → Facial detection → Detect eyes and mouth → Extract HOG features → SVM classify]

Fig. 1. System processing flow

2 Methodology

2.1 Image Preprocessing

Image preprocessing is an essential step in machine vision; it increases the image quality and visual effect. Normally the input image contains a lot of noise, and computation on the raw image is complex, which is adverse for the following steps. Here a median filter, a widely used smoothing technique, is used for noise removal [9]. The steps of image preprocessing are shown in Fig. 2.
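The median filtering step can be sketched in pure Python on a small grayscale patch (in practice OpenCV's built-in median blur would be used; this sketch uses edge replication at the borders, one common convention):

```python
def median_filter(img, k=3):
    """k x k median filter on a 2-D grayscale image (list of lists),
    with edge pixels replicated at the borders."""
    h, w, r = len(img), len(img[0]), k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            window.sort()
            out[y][x] = window[len(window) // 2]   # middle value of the window
    return out

# A flat patch corrupted by one salt-noise pixel: the median removes it.
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
print(median_filter(noisy))   # -> [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
```

Unlike a mean filter, the median ignores the 255 outlier entirely, which is why it is preferred for impulse noise.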

[Figure 2 flow: Capture the video → Image serialization → Grayscale → Median filtering → Histogram equalization]

Fig. 2. The steps of image preprocessing

2.2 Facial Detection

ASM is based on a geometric facial shape model (or point distribution model, PDM), in which the coordinates of the eyes, eyebrows, nose, mouth, chin, etc. are labeled as the landmark set of the face shape [10]. The advantage of this algorithm is that it uses statistical theory to find the most suitable positions of the selected features from the training set [11]. The ASM algorithm is composed of a global shape model and a local texture model, which alternate during detection so that the shape of the model converges gradually. A detection result of the model is shown in Fig. 3. The global shape model is obtained by principal component analysis (PCA) over a large number of manually labeled face images. In this paper, 77 feature points are selected to model the face.

Fig. 3. Model testing result

The face shape can be expressed as Eq. 1:

x = (x1, y1, x2, y2, …, x77, y77)  (1)

where the shape x is composed of 77 landmarks and (xi, yi) is the coordinate of landmark i. For a training set containing K samples (K > 300), the mean shape x̄ is given by Eq. 2:

x̄ = (1/K) Σ_{i=1}^{K} x_i  (2)

The covariance matrix W of the training samples is calculated by Eq. 3:

W = (x − x̄)(x − x̄)^T  (3)

The eigenvectors of W are obtained from Eq. 4:

W p_k = λ_k p_k  (4)

where λ_k is an eigenvalue of W and p_k is the corresponding eigenvector. By the PCA principle, the larger λ_k is, the more important the variation along p_k is for the overall model. Selecting the first M eigenvalues, the new matrix P is formed as in Eq. 5:

P = [p1, p2, …, pM],  λ1 > λ2 > … > λM  (5)

So any face shape can be approximated by Eq. 6:

x ≈ x̄ + P b  (6)

where b = (b1, b2, …, bM)^T is the weight vector.
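The eigen-decomposition in Eq. 4 can be sketched with power iteration, which finds the dominant eigenvalue/eigenvector pair of the covariance matrix, i.e. the first column p1 of the basis P in Eq. 5. The 2x2 matrix below is a toy example with known eigenvalues 3 and 1, not real shape data:

```python
def mat_vec(m, v):
    """Matrix-vector product for a matrix given as a list of rows."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

def power_iteration(m, iters=200):
    """Return the dominant eigenvalue and unit eigenvector of m."""
    v = [1.0] * len(m)
    for _ in range(iters):
        w = mat_vec(m, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]          # renormalize each step
    eigenvalue = sum(vi * wi for vi, wi in zip(v, mat_vec(m, v)))  # Rayleigh quotient
    return eigenvalue, v

# Toy 2x2 "covariance" matrix with eigenvalues 3 (vector [1, 1]/sqrt(2)) and 1.
cov = [[2.0, 1.0],
       [1.0, 2.0]]
lam, p1 = power_iteration(cov)
print(round(lam, 6), [round(c, 6) for c in p1])
```

Deflating the matrix and repeating yields p2, p3, … in decreasing eigenvalue order, exactly the ordering required in Eq. 5; in practice a library eigensolver would be used on the full 154x154 covariance matrix.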

2.3 Feature Extraction

Feature extraction produces the feature vector, which is important for machine learning. There are various feature extraction techniques, such as the Scale-Invariant Feature Transform (SIFT), the Fourier transform and Gabor filters. The Histogram of Oriented Gradients (HOG) feature was proposed by Dalal and Triggs in 2005 [12]. The image is divided into 16 × 16 blocks, each block is divided into 2 × 2 cells, and the gradient orientation histogram of each cell is calculated within the block. For face recognition, unsigned orientations in the range 0°–180° work better than 0°–360°, and the effect is good when the number of histogram bins is 8–10. The histograms of all cells in a block are combined into the histogram of the block. The orientation and magnitude of each pixel are calculated by Eqs. 7 and 8:

θ(x, y) = tan⁻¹[(I(x, y+1) − I(x, y−1)) / (I(x+1, y) − I(x−1, y))]  (7)

m(x, y) = √{[I(x, y+1) − I(x, y−1)]² + [I(x+1, y) − I(x−1, y)]²}  (8)

where I(x, y) is the pixel value at coordinate (x, y).

2.4 Feature Classification

Feature classification is a cornerstone of machine learning; classifiers are trained with samples. Typical classifiers include linear classifiers, Bayes classifiers, distance-based classifiers and so on. In this paper we use the Support Vector Machine (SVM), a linear classifier in its basic form, first proposed by Cortes and Vapnik in 1995 [13]. It is based on the principle of structural risk minimization: given limited samples, it seeks the best compromise between the learning ability and the complexity of the model to obtain the best generalization ability [14]. The original data are projected into a high-dimensional space by a kernel function, converting low-dimensional nonlinear data into high-dimensional linearly separable data so that classification can be done well. The kernel functions are generally divided into four classes: the linear kernel, the polynomial kernel, the radial basis function (RBF) kernel and the sigmoid (tanh) kernel. Here we use the RBF kernel because it maps fatigue features better [15].
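The RBF kernel used here can be sketched directly (the gamma value is an illustrative assumption; in a real system it would be tuned by cross-validation along with the SVM's C parameter):

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dist = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return math.exp(-gamma * sq_dist)

a, b = [1.0, 2.0], [2.0, 0.0]
print(rbf_kernel(a, a))   # identical feature vectors map to similarity 1.0
print(rbf_kernel(a, b))   # more distant vectors decay toward 0
```

The SVM decision function is a weighted sum of such kernel evaluations between the input HOG vector and the support vectors, which is what makes the classifier nonlinear in the original feature space.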

2.5 Drowsiness Detection

There are many studies on methods for detecting fatigue states. Here the states of the eyes and mouth are our criteria of judgement. We all know that when people are sleepy, the duration of eye closure is longer than in the active state. Currently the PERCLOS method [16, 17] is widely used for its high accuracy in judging visual fatigue. PERCLOS is a measure of the proportion of time during which the eyes are closed [18]. Under the P80 criterion, a frame is recorded as closed eyes when the pupil is covered over more than 80% of the eye area, and we count the proportion of closed-eye frames in a certain period of time; P70 is defined analogously, and under the EM criterion closed eyes are recorded when the pupil is covered over more than half of the eye area. When people are active, the eye state is P80 and the mouth state is M20; when people are fatigued, we denote this as P20 and M80 [19]. PERCLOS is defined by Eq. 9 [20]:

PERCLOS = (total number of closed-eye frames in one minute / total number of detection frames in one minute) × 100%  (9)

Referring to the principle of PERCLOS, the standard for judging yawning is calculated by Eq. 10:

PMRCLOS = (total number of yawning-mouth frames in one minute / total number of detection frames in one minute) × 100%  (10)

According to Schiffman [21], the average blink duration of a person is 100–400 ms, and the number of blinks per minute is 10–15 [22]. For an active person, the total duration of closed eyes is therefore less than 400 × 15 = 6000 ms per minute, so a person is determined to be drowsy when the duration of closed eyes exceeds 6 s. In this paper, yawns are divided into shallow yawns and deep yawns; the main difference between them is the duration for which the mouth is enlarged. When a person yawns shallowly, the mouth is enlarged for about 3 s, while a deep yawn lasts 4–5 s or even longer. When the enlarged mouth continues for more than 3 s, it is considered a yawn and the number of yawning frames is recorded. When a person makes one deep yawn or at least two shallow yawns, we decide the person is drowsy; that is, a person is determined to be drowsy when the total duration of yawning exceeds 6 s.
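The PERCLOS computation (Eq. 9) and the 6-second decision rule can be sketched over per-frame classifier outputs. The state labels and the one-minute window at 25 fps are assumptions consistent with the setup described later in the paper:

```python
def perclos(eye_states):
    """Eq. 9: percentage of frames in the window in which the eyes are closed."""
    return 100.0 * sum(s == "closed" for s in eye_states) / len(eye_states)

def is_drowsy(eye_states, mouth_states, fps=25):
    """Closed-eye or yawning duration over 6 s within the window -> drowsy."""
    closed_s = sum(s == "closed" for s in eye_states) / fps
    yawn_s = sum(s == "yawn" for s in mouth_states) / fps
    return closed_s > 6.0 or yawn_s > 6.0

# One minute at 25 fps (1500 frames) with 200 closed-eye frames (8 s) -> drowsy.
eyes = ["closed"] * 200 + ["open"] * 1300
mouth = ["normal"] * 1500
print(round(perclos(eyes), 2), is_drowsy(eyes, mouth))
```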


3 Results and Discussion

3.1 Implementation

We capture frame images from the video stream and use the ASM algorithm to detect the biggest face in each image. The algorithms were executed on a Lenovo Xiaoxin 300 notebook with an Intel Core i7 processor at 2.1 GHz and 8 GB of RAM. We use a web camera to capture video frames at 25 frames per second with a resolution of 640 × 480 pixels. Traditional face datasets cover different races, lighting conditions, ages, etc., but lack samples of yawning, so an ASM trained on such a dataset cannot match a yawning mouth well. We therefore manually marked some yawning samples in the dataset. The ASM result is shown in Fig. 3. After obtaining the facial feature points, we intercept the images of the eye and mouth areas, as shown in Fig. 4. In subsequent processing we compute the corresponding HOG features; fed into the trained SVM, the different states of the eyes and mouth are classified.

Fig. 4. The eyes and mouth areas detection

3.2 Validation

Self-rating methods of drowsiness include some form of introspective assessment by the driver. The 7-point Stanford Sleepiness Scale (SSS) and the 9-point Karolinska Sleepiness Scale (KSS) are the two sleepiness scales most commonly used in studies [23]. In this paper, the KSS was used to rate drowsiness; according to the state of the person, the KSS ratings are divided into active and sleepy. A human rater watches the video and decides whether the person is drowsy by rating them on the KSS. Figure 5 shows the KSS rating scale.


Y. Tian et al.

Fig. 5. Karolinska sleepiness scale divided into two classes

Our final goal is to detect whether a person is drowsy through the states of the eyes and mouth. To ensure the correctness of the experimental results, we refer to the KSS scale. We use 4 videos to verify the correctness of this method. The results of the verification are shown in Table 1.

Table 1. Results of drowsy detection

Sub. no. | Time [s] | PERCLOS [s] | PMRCLOS [s] | Prediction | Human rater
1        | 36       | 2.48        | 1.38        | Active     | 2
2        | 100      | 17.58       | 9.4         | Drowsiness | 8
3        | 59       | 7.93        | 6.37        | Drowsiness | 6
4        | 73       | 14.57       | 8.73        | Drowsiness | 7
5        | 125      | 16.26       | 9.39        | Drowsiness | 8

From the table above, we can see that the results of drowsy detection are very good. The wrong result is highlighted in the table.

4 Summary

In this paper, we use a web camera to obtain images, and the tool is developed with OpenCV and Visual Studio. In particular, ASM is used to detect the face area and obtain the facial landmarks of a person. According to the serial numbers of the landmarks, the eye and mouth areas are obtained, and the HOG features of the eyes and mouth are extracted. An SVM is trained on many images of eye and mouth states, and the trained SVM classifies the eye states and mouth states separately. According to the longest duration of closed eyes and yawning mouth, the drowsiness state is determined. The system is user-friendly, contactless equipment, and this method can be widely used in cars, classrooms, hospitals, etc.



Acknowledgements. This paper is supported by the local special program of the Shaanxi Provincial Department of Education (No. 16JF012) and the National Natural Science Foundation of China (No. 61572392).

References

1. Sandberg, D., et al.: The characteristics of sleepiness during real driving at night—a study of driving performance, physiology and subjective experience. Sleep 34(10), 1317 (2011)
2. Pauly, L., Sankar, D.: Detection of drowsiness based on HOG features and SVM classifiers. In: IEEE International Conference on Research in Computational Intelligence and Communication Networks. IEEE (2016)
3. Yang, G., Lin, Y., Bhattacharya, P.: A driver fatigue recognition model based on information fusion and dynamic Bayesian network. Inf. Sci. 180(10), 1942–1954 (2010)
4. Wu, Q., Zhao, Y., Bi, X.: Driving fatigue classified analysis based on ECG signal. 2(4), 544–547 (2012)
5. Chai, R., Naik, G.R., Nguyen, T.N., et al.: Driver fatigue classification with independent component by entropy rate bound minimization analysis in an EEG-based system. IEEE J. Biomed. Health Inform. 21(3), 715–724 (2017)
6. Suman, D., Malini, M., Anchuri, S.: EOG based vigilance monitoring system. In: India Conference, pp. 1–6. IEEE (2016)
7. Ahmad, Z., Jamaludin, M.N., Omar, A.H.: Development of wearable electromyogram for the physical fatigue detection during aerobic activity. 7(1) (2018)
8. Yuille, A.L., Hallinan, P.W., Cohen, D.S.: Feature extraction from faces using deformable templates. Int. J. Comput. Vision 8(2), 99–111 (1992)
9. Barman, S., Samanta, A.K., Kim, T.H., et al.: Design of a view based approach for Bengali character recognition. Int. J. Adv. Sci. Technol. 15 (2010)
10. Ayhan, O., Abaci, B., Akgul, T.: Improved active shape model for variable illumination conditions. In: IEEE International Workshop on Multimedia Signal Processing, pp. 322–327. IEEE (2013)
11. Nhan, D.T., Bao, T.Q., Dinh, T.Q.: A study on warning system about drowsy status of driver. In: Seventh International Conference on Information Science and Technology, pp. 215–222. IEEE (2017)
12. Kumar, G., Bhatia, P.K.: A detailed review of feature extraction in image processing systems. In: 2014 Fourth International Conference on Advanced Computing & Communication Technologies (ACCT). IEEE (2014)
13. Cristianini, N., Shawe-Taylor, J.: An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press, Cambridge (2000)
14. Wang, Z.-H., Jia, Y.-S., Chen, X.: Driver fatigue detection based on SVM. Sci. Technol. Eng. (2011)
15. Jin, L., Niu, Q., Hou, H., et al.: Driver cognitive distraction detection using driving performance measures. Discret. Dyn. Nat. Soc. 2012, 1555–1565 (2012)
16. Grace, R., Byrne, V.E., Legrand, J.M., et al.: A machine vision based drowsy driver detection system for heavy vehicles. In: Proceedings of the Ocular Measures of Driver Alertness Conference, pp. 26–27 (1999)
17. He, J., Roberson, S., Fields, B., Peng, J., Cielocha, S.: Fatigue detection using smartphones. J. Ergonomics 3, 120 (2013). https://doi.org/10.4172/2165-7556.1000120
18. Wierwille, W.W.: Historical perspective on slow eyelid closure: whence PERCLOS. In: Ocular Measures of Driver Alertness, Technical Conference Proceedings (1999)
19. Wang, P., Shen, L.: A method of detecting driver drowsiness state based on multi-features of face. In: International Congress on Image and Signal Processing, pp. 1171–1175. IEEE (2013)
20. Dinges, D.F., Grace, R.: PERCLOS: a valid psychophysiological measure of alertness as assessed by psychomotor vigilance. Tech Brief (1998)
21. Schiffman, H.R.: Sensation and Perception: An Integrated Approach. John Wiley and Sons, Inc., New York (2001)
22. https://www.ucl.ac.uk/media/library/blinking
23. Liu, C., Hosking, S., Lenne, M.: Predicting driver drowsiness using vehicle measures: recent insights and future challenges. J. Saf. Res. 40(4), 239–245 (2009)

Feasibility and Risk Analysis of Data Security System Based on Power Architecture

Lei Yao
Chengdu Polytechnic, Chengdu 610041, Sichuan, China
[email protected]

Abstract. With the rapid development of "Internet+", data security issues have penetrated into every industry and business function area today. The implementation of this project will enable China to achieve breakthroughs in the field of data security, master a number of independent core technologies to drive the development of upstream and downstream related industries, significantly improve China's data security technology, further optimize the structure of China's information security industry, and prosper the information security market.

Keywords: Data security · Security technology · Risk analysis

Our world is full of data and information; we cannot live without the Internet, and global informatization brings us infinite possibilities. However, the security of information has always been a problem that troubles us. According to surveys, global spending on basic information security reached $86 billion in 2015 and will reach $170 billion in 2020. According to IDC data, in 2014 alone the market size of China's backup all-in-one machines reached $126 million, an increase of over 14.0% compared to 2013. In 2014, China's disaster recovery market reached approximately RMB 7 billion, approximately 10% higher than in 2013. IDC predicts that in 2017 China's backup all-in-one market will reach a scale of $300 million, an increase of 89.1% compared to 2014, and that China's disaster recovery market will exceed RMB 10 billion. This shows that the market for data security has broad prospects [1].

With the rapid development of cloud computing and the big data industry in particular, and with the government accelerating intelligent manufacturing in order to drive industrial transformation, data security in the big data environment has changed greatly [2]. The world is developing so fast that traditional data security solutions have long been unable to meet the security requirements of big data. This is mainly reflected in the following.

The first problem is that big data backup takes a long time and data is easily lost. With the changes of the times, enterprise applications have gradually become richer and more diversified [3]. This has led to a geometric increase in the amount of enterprise data, and the growth rate keeps accelerating; in particular, unstructured data is gradually becoming the main force of this growth [4].
The backup mode of simple structured data is no longer practical, and ordinary enterprise users struggle with backup operations through complex interfaces. When backing up big data, if the backup is performed while the service is running, the incremental data accumulated since the last backup may not be backed up in time during idle periods, while increasing the backup interval may result in the loss of more data when an accident occurs.

The second problem is that when the logic of big data is damaged, the backed-up data cannot be used normally. Logical damage refers to an error in the relationships between the data. The traditional storage-replication disaster recovery model can hardly isolate any logical errors [5]. Once database logic errors occur, including database bugs, disk bad blocks, and network or memory errors, disk mirroring will propagate these errors to the disk of the backup database without loss, and eventually the backup database will not work properly.

The third problem is that big data backup is time-consuming and labor-intensive, and business recovery is difficult. After the big data platform encounters physical damage, logic errors, or human-induced data corruption, traditional backup and recovery methods can result in long service interruptions due to the large amount of data and limited network bandwidth [6]. At present, companies need more flexible, fast, and refined data recovery methods. When data recovery is required, the contents of the backup cannot be viewed in advance, so after a long recovery the restored data may still not be the desired data [7]. This seriously affects the normal development of business and brings huge economic losses and negative impacts to enterprises and society.

In February 2014, the Party Central Committee established the Central Cyber Security and Informationization Leading Group, with General Secretary Xi Jinping taking the chair. This marked that information security had received sufficient attention and had risen to the level of national strategy [8].

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1555–1560, 2019. https://doi.org/10.1007/978-981-13-3648-5_201
In 2015, the fifteenth meeting of the Standing Committee of the Twelfth National People's Congress passed the new National Security Law on July 1, and information security became the commanding height of national strategic security. In 2017, our province introduced development programs for five major high-end growth industries, with the information security industry in first place. The plan called for an industry scale of 55 billion yuan in 2017 to drive information security-related manufacturing and growth industries [9]. The scale of the information service industry has exceeded 220 billion yuan, and the electronic information industry in our province has reached the trillion-yuan level. By 2020, the scale of the industry will reach 110 billion yuan and will drive the scale of related industries to exceed 380 billion yuan, becoming a high-end growth industry that drives the economic transformation and upgrading of our province in the new period. The new generation of information technology industry was identified by Sichuan Province as one of its seven strategic emerging industries. The implementation of this project will enable China to achieve breakthroughs in the field of data security, master a number of independent core technologies to drive the development of upstream and downstream related industries, significantly improve China's data security technology, further optimize the structure of China's information security industry, and prosper the information security market.


1 The Overview and Trends of Related Technology Development at Home and Abroad

With the rapid development of "Internet+", data security issues have penetrated into every industry and business function area. It is worth noting, however, that IT operations personnel often pay more attention to the availability and security of the server system and tend to ignore the importance of data security. In fact, data is the backbone of all types of information applications and business systems. In the event of a disaster, hardware resources such as routers, servers, and storage devices can be rapidly restored or reconfigured, but if data is damaged or lost, it is difficult to recover or restore [10]. Therefore, real-time protection of data security has become a top priority for information security.

Tracing the development of data security technology, the world data security market originated in the 1970s. In 1979, SunGard established the world's first disaster recovery center in Philadelphia, USA, focusing on data backup and system backup, and achieved secure storage by transporting backup tapes to a dedicated storage location. In the mid-to-late 1990s, the concept of business continuity emerged: people gradually turned disaster recovery from an IT perspective to a business perspective and used the business to measure disaster recovery goals.

From the point of view of the RedPower server, at the 2015 OpenPOWER China Summit Forum, Wuxi Zoom Server Co., Ltd. released the world's first RedPower dual-way server based on the CP1 processor developed by Suzhou Zhonghong Hongxin. For the first time, domestic servers became synonymous with high performance, high thread counts, high frequency, and high bandwidth: RedPower first implemented 2-way, 192-thread "national production servers." This project is a data security product based on high-performance domestically produced RedPower servers.
It protects electronic systems in all aspects through multidimensional linked-list CDP technology, covering operating systems, files, and databases [11]. Using the O&M migration function, it can mount data from any point in time to a virtual machine for disaster recovery simulation exercises to verify its effectiveness. In extreme cases, the virtual machine can use the version data of the most recent point in time to take over the production server's business and maintain the continuity of related services. The product of this project is based on independent R&D and independent intellectual property rights. It is the first domestic, independently researched and developed data security software, and its core technology has reached the international advanced level. It can fully replace similar foreign products, greatly improve the level of China's data security technology, further optimize the structure of China's information security industry, and prosper the information security market [12]. At the same time, the implementation of this project will promote the development of upstream and downstream industries and effectively raise the economic level of the region.
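The point-in-time recovery capability described above is the essence of continuous data protection (CDP): every write is journaled so the data can be reconstructed as of any moment. The sketch below is our own illustration of that idea, not the product's implementation; it replays journaled block writes, sorted by timestamp, onto a base image up to a chosen recovery point.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <vector>

// One journaled write: at time `ts`, block `block` was set to `value`.
struct JournalEntry {
    std::uint64_t ts;
    std::size_t   block;
    int           value;
};

// Rebuild the volume as of time `ts` by replaying the journal
// (assumed sorted by timestamp) over the base image.
std::map<std::size_t, int> recoverAt(const std::vector<int>& base,
                                     const std::vector<JournalEntry>& journal,
                                     std::uint64_t ts) {
    std::map<std::size_t, int> vol;
    for (std::size_t i = 0; i < base.size(); ++i) vol[i] = base[i];
    for (const auto& e : journal) {
        if (e.ts > ts) break;    // ignore writes after the recovery point
        vol[e.block] = e.value;  // apply the journaled write
    }
    return vol;
}
```

Choosing an earlier `ts` naturally excludes a later logical error, which is why journal-based CDP can isolate logical damage that plain disk mirroring propagates.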


2 Risk Analysis

2.1 Policy and Legal Risk

The exposure of the "Prism" incident in 2013 sounded an alarm for information security. The basic resources and technological advantages possessed by a small number of leading countries in information technology have become weapons of mass destruction and have seriously threatened the national security of other countries. Based on this situation, the Chinese government has adopted a number of measures for the construction of information security. In 2014, the frequent occurrence of security crises around the world forced all countries to increase their investment in information security construction, turning it into a national security and armament competition, and China's emphasis on information security reached an unprecedented level. On February 27, the Central Cyber Security and Informatization Leading Group was established. Xi Jinping, General Secretary of the CPC Central Committee, President of the country, and Chairman of the Central Military Commission, personally assumed the position of team leader, with Li Keqiang and Liu Yunshan as deputy leaders. Xi Jinping stressed that cyber security and informationization were major strategic issues that concerned national security and national development and affected the work and life of the broad masses of the people. In 2015, the fifteenth meeting of the Standing Committee of the Twelfth National People's Congress passed the new National Security Law on July 1, and information security became the commanding height of national strategic security. In September of the same year, the State Council executive meeting presided over by Premier Li Keqiang passed the "Program of Action for the Promotion of Big Data Development", which clearly stated that government affairs information and public data should be shared and opened up, and at the same time emphasized the need to strengthen information security, such as the protection of private data.
Therefore, in the field of data security, purely domestic data backup products will be favored by the government for the sake of national security, so the project's policy and legal risks are not significant.

2.2 Technology Risk

This project builds on the data protection industry experience accumulated over many years and carries out independent, innovative research and development. The core product, the Black-Bridge data backup and recovery system, is ported to the Power Form server, and the product's performance is optimized using Power Form's CAPI+FPGA. The Black-Bridge data backup and recovery system has passed China's national information security product certification, the National Secrecy Bureau's confidential product certification, the army B-level certification for military information security products, the Ministry of Public Security's information system security product certification, and China's mandatory product certification. It has a solid technical foundation, and the technical risk is low.

2.3 Market Risk

Market risk mainly lies in sluggish product sales and the rise of substitutes. We focus on the field of data security, and since our establishment in 2008 we have accumulated rich customer resources and a deep corporate reputation across enterprises, public institutions, and other organizations in various industries. As the most authoritative and comprehensive data, the value of government data for social and economic development is widely recognized, and the opening of government data and the formation of big data platforms are irreversible. However, the security of the various types of information and data stored in these systems is directly related to the interests of the government and the country. Therefore, how to fully guarantee the reliability, consistency, and integrity of data resources is the most important issue for all departments holding public data, and as government big data continues to grow, the demand for data security will inevitably continue to grow as well. On the other hand, according to surveys, global spending on basic information security reached $86 billion in 2015 and will reach $170 billion in 2020. According to IDC data, in 2014 alone the market size of China's backup all-in-one machines reached $126 million, an increase of over 14.0% compared to 2013, and China's disaster recovery market reached approximately RMB 7 billion, approximately 10% higher than in 2013. Thus, the prospects of the data security market are broad. Based on domestic policy orientation and the self-controllability of domestic data security products, the project has a promising market and no market risk.

2.4 Economic and Environment Risk

With the release of the "Made in China 2025" plan, smart manufacturing has been given the heavy responsibility of "overtaking on the curve" in China's industrial economy. The "National Smart Manufacturing Standards System Construction Guide" issued in 2015 further pointed out that intelligent manufacturing integrates next-generation information technologies such as the Internet of Things, big data, and cloud computing with manufacturing links such as design, production, management, and service, yielding advanced manufacturing processes with in-depth self-perception, intelligent self-decision optimization, and precise self-execution control. In 2017 and 2018, further requirements were made on the basis of the original Guide. Therefore, as an important part of the new generation of information technology, information security has become the commanding height of national strategic security and is strongly supported by national policies. The entire macroeconomic environment is very conducive to the industrialization of this project.

2.5 Natural Disaster Risk Analysis

Natural disasters such as earthquakes, floods, and fires are unpredictable and inevitable. It is precisely because of the irresistible nature of natural disasters and the unavoidability of man-made disasters that disaster recovery systems need to be built for information systems. Only when a disaster recovery system is in place can the system and data be quickly restored after a disaster and the damage caused by the disaster be minimized. Data security products are an important part of the disaster recovery system and must be configured during data center construction. In recent years, data disaster incidents such as system failures, data loss, and business interruptions caused by rain, snow, earthquakes, and other natural disasters have gradually awakened people's awareness of disaster recovery. Therefore, natural disasters pose no negative risk to this project and will instead positively promote it. In summary, the research of this project is very necessary and will play a positive role in promoting information security.

Acknowledgements. Scientific Research Project Funded by the Sichuan Provincial Education Department, "Research on Data Security System Based on Power Architecture" (18ZA0171).

References

1. Tao, G.: China Information World (2015)
2. Huang, H.: Take stock of information security market in 2014: scale growth. Commun. World (2014)
3. Anhui, S.: Henan launch new strategy for high growth industries. Inf. Deciders Mag. (2014)
4. Ye, K., Wang, L.: Design of emergency disaster recovery platform for Nanjing environmental monitoring data information. Pollut. Prev. Tech. (2017)
5. Hu, L.: Design and implementation of data disaster recovery plan in accounting information system. Master's Thesis, East China Normal University (2011)
6. Wang, Y.: On the significance of establishing a disaster recovery base for urban construction archives. Yunnan Arch. (2009)
7. Wu, Y.: Taking off IOE is surging, and China is saving national information security. Digit. Manuf. Ind. (2014)
8. Wang, Y.: Intelligent manufacturing will help the Chinese economy overtake on corners. China Secur. J. (2015)
9. Deng, Z.: The construction of network power must adhere to the speed and quality. Legal Daily (2014)
10. Zheng, Z.: See the construction of the library disaster recovery from the strong earthquake in Japan. Libr. Work. Study (2011)
11. Gao, Q., Chen, J.: The cognitive differences and coopetition relations between Chinese and the American network sovereignty concepts. Int. Forum (2016)
12. Liu, J.: Developing information security industry and promoting industrial transformation and upgrading. Sichuan Party Construction (City Edition) (2015)

Application of the Extension Point and Plug-Ins Idea in Transmission Network Management System

Wu Ping
College of Information and Control Engineering, Weifang University, Weifang 261061, China
[email protected]

Abstract. This paper innovatively puts forward the extension-point and plug-in design idea in the design of a transmission network management system. The configuration files are used to acquire the extension points, and each extension point is then used to acquire and load the corresponding functional plug-in. The plug-in design idea can effectively reduce the coupling between the different units in the server, which makes the function modules of the whole network management system more flexible to load and delete.

Keywords: Plug-ins · Extension point · Network management

1 Introduction

The continuous development of communication technology and the separation and reorganization of telecom operators at home and abroad have led to the coexistence of multiple devices, multiple services, and various technologies in the domestic transmission network. Using a classification and partition management mechanism to realize the unified management of large-capacity, multi-vendor equipment, and to provide resource management, fault location, business management, customer management, network analysis, network planning, and other services, is the main problem to be solved in current network management systems. Therefore, it is the general trend to implement a new generation of network management system that is compatible with various network management products and enables smooth access between network management systems. The network management system studied in this paper aims to be independent of specific equipment, to have better scalability, and to be portable across different platforms.

2 Design of Transmission Network Management Server System

In this system, the CORBA architecture is adopted as the host middleware. In order to achieve distributed management of the platform and smooth access to other network management products, CORBA is used as the internal bus and the northbound interface. In addition, the system follows the TMN architecture, the whole system is designed to comply with open-system standards, and the implementation uses software reuse and object-oriented technology. The system uses a layered development mode: the lower layer provides services to the upper layer, shielding the implementation of the underlying system while keeping the interface unchanged, which reduces system coupling and gives good reusability. The composition of the CORBA-based transmission network management system is shown in Fig. 1.

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1561–1568, 2019. https://doi.org/10.1007/978-981-13-3648-5_202

Fig. 1. Structure diagram of network management software system

In order to reduce the coupling between the modules and to improve development efficiency and software portability, the transmission network management server is divided internally into layers. It is mainly divided into three levels: the network management support layer, the network management framework layer, and the network management application layer, as shown in Fig. 2. The design of the transmission network management server uses CORBA as the bus. A request sent by the client is converted to the server's message format through the UEP platform and adapter, and then sent to the ORB. Because the network management server contains multiple POAs (supporting many different servants), they need to be managed and controlled through the POA Manager: requests are queued and dispatched to the specified POA through scheduling. Finally, each request is distributed to the appropriate servant by the POA. A servant is implemented as a C++ object and is eventually mapped to a CORBA object [1]. The relationship between the ORB, POA Manager, POAs, and network management servants is shown in Fig. 3.


Fig. 2. Internal hierarchy structure of network management server

Fig. 3. Relationship between ORB, POA manager, POA and network management servant

3 Extension Point and Plug-in Design

In the design of the network management server, a design idea based on extension points and plug-ins is innovatively adopted, so that each function plug-in in the server can be loaded dynamically. Each extension point is obtained from the system configuration file, then the corresponding function plug-in is acquired according to the extension point and loaded.

3.1 Extension Point

The extension point defines the interface between two subsystems and shields their implementations. For an extension point, an interaction method and parameters (an interface) need to be defined. This enables development based on interface programming and allows the connections between subsystems to be determined at runtime [2]. The extension is the implementer of the extension point: it implements the interface defined by the extension point. In this system, the implementation of the extension point interface is not reflected in a specific physical realization but in the conceptual architecture. For example, in nm.context.servant.performance, nm represents the network management system, nm.context is the first context to start, and servant.performance indicates that the servant to be started is performance. Context is responsible for loading all registered servants and maintaining the status of each servant.

class INMPlatformServant : public IExtension {
public:
    virtual ~INMPlatformServant(void) {}
    virtual bool Ping(void) const = 0;
    virtual void PreInit(IDefaultContext* ctx, const char* sessionName,
                         const char* cfgFile) = 0;
    virtual CBuffer* Get(CBuffer& buf, AdditionInfo& addinfo)
        throw (CRemoteProcessFailedException) = 0;
    virtual void Set(CBuffer& buf, NMPlatform::AdditionInfo& addinfo)
        throw (CRemoteProcessFailedException) = 0;
};

IExtension defines only one interface, and every servant in the system inherits from IExtension; that is, each servant is an extension point of the system. Each extension point is an independent functional module, which is physically a dynamic library. Each servant implements Get(), Set(), and so on. The name of a servant extension point is obtained by reading the configuration files; from this name, ExtensionFactory obtains the corresponding dynamic library, which is finally loaded by CExtensionLibrary::Loadlibrary(). When a sub-server starts, it obtains the names of its service extension points from further configuration files and loads them in the same way. For example, with the servant configured as follows, the system first loads the database sub-server, then reads its configuration file, NM-database-config.xml, through which each service plug-in is obtained and loaded.

NM.context.servant.database
NM-database-config.xml
NM.context.svtproxy.orbproxy

The network management server is composed of an ORBContext and multiple servant and task extension points. Context registers multiple servants or tasks, each of which creates an extension point. Each servant also contains multiple service extension points, and each service implements a relatively independent function.
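The extension-point-to-plug-in lookup described in this section can be sketched as a small registry. This is a hypothetical, simplified stand-in for ExtensionFactory (a real implementation resolves each name to a dynamic library, e.g. via dlopen, rather than an in-process factory table):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <memory>
#include <string>

// Minimal stand-in for the IExtension interface in the text.
struct IExtension {
    virtual ~IExtension() = default;
    virtual std::string Name() const = 0;
};

// Hypothetical registry mapping an extension-point name from the
// configuration file to a factory that creates the plug-in.
class ExtensionFactory {
public:
    using Creator = std::function<std::unique_ptr<IExtension>()>;
    void Register(const std::string& point, Creator c) {
        creators_[point] = std::move(c);
    }
    std::unique_ptr<IExtension> Create(const std::string& point) const {
        auto it = creators_.find(point);
        return it == creators_.end() ? nullptr : it->second();
    }
private:
    std::map<std::string, Creator> creators_;
};

// Example plug-in implementing one extension point.
struct PerformanceServant : IExtension {
    std::string Name() const override { return "performance"; }
};
```

Because plug-ins are looked up only by name, adding or removing a function module changes the configuration file and the registry, not the callers.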

3.2 Plug-in

The system implements a specific plug-in for each extension point. When the system starts, it can dynamically load each plug-in according to the extension points pre-designed in the configuration file, thus realizing the dynamic expansion of the whole system. The organizational relationship between the plug-ins is shown in Fig. 4. The NM.context plug-in is loaded when the system starts; it then loads and initializes the servant and task plug-ins, and the specific servants are loaded according to the servant names in the configuration file. For example, the extension point NM.context.servant.performance in the configuration file indicates that the performance plug-in is loaded after the servant is initialized. This plug-in in turn includes NM.context.servant.performance.pmcollction, NM.context.servant.performance.pmtaskmgr, and other service extension points.

Fig. 4. The organizational relationship between the plug-ins
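The extension-point loading described above — reading plug-in names from configuration and resolving each name to a loadable module through ExtensionFactory — can be illustrated with a minimal sketch. This is not the paper's C++ implementation: the registry class and the DatabaseServant stand-in below are hypothetical, and only the extension-point name NM.context.servant.database comes from the text.

```python
class ExtensionFactory:
    """Maps extension-point names to plug-in constructors. A stand-in for
    resolving a dynamic library via CExtensionLibrary::LoadLibrary()."""
    def __init__(self):
        self._registry = {}

    def register(self, point_name, constructor):
        self._registry[point_name] = constructor

    def load(self, point_name):
        # In the real system this step opens a dynamic library; here we
        # simply call the registered constructor.
        return self._registry[point_name]()


class DatabaseServant:
    """Hypothetical servant; every servant implements get()/set()."""
    name = "NM.context.servant.database"

    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value):
        self._store[key] = value


factory = ExtensionFactory()
factory.register("NM.context.servant.database", DatabaseServant)

# Extension-point names read from the configuration file drive the loading.
configured_points = ["NM.context.servant.database"]
servants = {p: factory.load(p) for p in configured_points}
servants["NM.context.servant.database"].set("alarm", 1)
print(servants["NM.context.servant.database"].get("alarm"))  # 1
```

The point of the design is visible even in this toy: adding or removing a plug-in changes only the registration and the configured name list, not the loader.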

4 The Method of Calling Between the Plug-Ins in the Server

As shown in Fig. 5, when the application layer executes a Get command for performance collection tasks, it must first obtain the service from the OrbContext. This is because the CDefaultOrbContext class contains a map (std::map LocalServices) from which service references are retrieved:

IRemoteServiceProxy* pService = pOrbContext->GetService("nm_svr_pmcol");
CBuffer* pResult = pService->Get(asnBuf, addInfo);

At the same time, the ORB obtains the local proxy of the servant and returns the pointer to the current application. The application then calls the Get() method of IRemoteServiceProxy, which obtains the CORBA object of the specified servant through the naming service and sends the command to the specified service.


W. Ping

Fig. 5. The call relationship of the server component

5 Load Plug-In

The system reads the overall configuration file when it starts, then loads the subsystems of the local process; each subsystem reads its internal configuration file and flexibly loads each function plug-in according to the extension points in that file.

(1) The start of Context. When the server starts, it first acquires the ORBContext and completes its initialization, which essentially initializes the OrbPoa and establishes the CORBA objects [3]. It then determines whether the local POA is the RootPOA. If it is, the ContextManager is created and initialized locally; if not, the local POA queries the ContextManager, initializes the session servant and establishes a connection with the ContextManager. The servants are then registered. Finally, the ORB loads every servant/task according to the configuration file [4].

(2) Load the service plug-in. The Context maintains the status of each service plug-in. If a plug-in has already been initialized, the call returns directly; if it is currently being initialized, the caller is blocked until initialization completes; if it has not been loaded, the loading process is started and all waiters are released when initialization completes.

(3) Load the servant plug-in. The Context maintains the status of each servant plug-in in the same way as for service plug-ins. The Context needs to load all locally unloaded servant plug-ins; some servants are loaded locally and some are loaded on remote notification from the ContextManager, but the loading process is identical.


When a servant is loaded, a system plug-in status notification is sent.

(4) Load the task plug-in. The Context maintains the status of each task plug-in in the same way: an initialized plug-in returns directly, a caller is blocked while initialization is in progress, and an unloaded plug-in starts the loading process, with all waiters released when initialization completes. The Context needs to load all locally unloaded task plug-ins.

(5) Restarts of Context. SubContext restart: the startup process is the same as the normal Context startup. The ContextManager will receive a repeated login at this point, because the Context has already logged in. There are two cases of duplication: a reboot, and a configuration error (two Contexts configured with the same ID). For an accurate judgement, the RegContext method returns a unique ID, and Ican::Session_I adds a GetSessionID method to return this ID [5]. For a repeated login, the ContextManager first checks whether the previous context reference is still valid. If it is invalid, the previous context is considered to have dropped out and is replaced by the newly logged-in one; if it is valid, the case is handled as a deployment error. MainContext restart: after the ContextManager starts, all valid references are queried from the naming service, and these Contexts are considered already registered.
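The plug-in status handling described in steps (2)–(4) — return if already initialized, block concurrent callers while initialization is in progress, release all waiters on completion — follows a classic condition-variable pattern. A minimal Python sketch (the C++ server presumably uses equivalent thread primitives; the class and plug-in names here are illustrative):

```python
import threading

class PluginLoader:
    """Illustrative plug-in status machine: an initialized plug-in returns
    at once, callers arriving during initialization block, and all waiters
    are released when initialization completes."""
    NOT_LOADED, LOADING, READY = "not_loaded", "loading", "ready"

    def __init__(self):
        self._states = {}
        self._cond = threading.Condition()

    def load(self, name, init_fn):
        with self._cond:
            state = self._states.get(name, self.NOT_LOADED)
            if state == self.READY:
                return                          # already initialized
            if state == self.LOADING:
                # Block the caller until the initializing thread finishes.
                while self._states.get(name) != self.READY:
                    self._cond.wait()
                return
            self._states[name] = self.LOADING   # this caller does the loading
        init_fn()                               # initialize outside the lock
        with self._cond:
            self._states[name] = self.READY
            self._cond.notify_all()             # release all waiters

loader = PluginLoader()
loaded = []
loader.load("NM.context.servant.database", lambda: loaded.append("db"))
loader.load("NM.context.servant.database", lambda: loaded.append("db"))
print(loaded)  # ['db'] -- the second call returns without re-initializing
```

Running the initializer outside the lock is what allows other plug-ins to be loaded concurrently while one initialization is in flight.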

6 Conclusions

The server system in this paper adopts a management approach based on extension points and plug-ins: each independent subsystem is designed as an extension point and physically compiled into a dynamic library. The system reads the overall configuration file when it starts, then loads the subsystems of the local process; each subsystem reads its internal configuration file and flexibly loads each function plug-in according to the extension points in that file. The advantages of this design are: (1) distributed or centralized management of servers can be achieved merely by modifying the configuration files; (2) adding and removing server function plug-ins is very easy; (3) the coupling between the internal function plug-ins of the server is reduced.

Acknowledgements. This research was supported by Shandong Provincial Natural Science Foundation (project number: ZR2017QF011), Shandong Province Higher Educational Science and Technology Program (project number: J16LB10), and Weifang City Science and Technology Development Program (project number: 2017GX017).


References
1. Zhu, Q., Zheng, B.: CORBA Principle and Application. Beijing University of Posts and Telecommunications Press, Beijing (2001). (in Chinese)
2. He, D.: Research on Transmission Network Management Performance Server Based on CORBA. University of Electronic Science and Technology of China, Chengdu (2009)
3. Ruiyu, Z., Junjun, H.: Modern Communication Network. Peking University Press, Beijing (2017). (in Chinese)
4. Michi, H., Steve, V.: Advanced Programming of C++ Based on CORBA. Tsinghua University Press, Beijing (2000). (in Chinese)
5. Huston, S.D., Johnson, J.C.E., Syyid, U.: The ACE Programmer's Guide. Addison-Wesley Professional (2004)

Reliability Modelling of a Typical Peripheral Component Interconnect (PCI) System with Dynamic Reliability Modelling Diagram

Guan Yi and Zhao Jiacong
Dalian Neusoft University of Information, Dalian, China
{guanyi,zhaojiacong}@neusoft.edu.cn

Abstract. With the widespread use of large and complex computer systems in diverse industries, the demand for systems with high reliability is increasing as well. System reliability is a sub-concept of system dependability and is measured by the Mean Time to Fail. The performance of a system is determined by the quantities, qualities and arrangement of the components that construct it. The reliability model should therefore be constructed with reference to the life distributions and maintenance policies of components, sub-assemblies and assemblies of system components. A number of system reliability modelling tools have been proposed for this purpose. Current research shows that, apart from Dynamic Reliability Block Diagrams (DRBD), no alternative can adequately model dynamic behaviors of a system such as dynamics, dependencies, redundancy and load sharing. In order to guarantee the correctness of the designed DRBD model, the verification methodology of Colored Petri Nets (CPN) is introduced to verify the DRBD model. The multiprocessor Peripheral Component Interconnect (PCI) system is one of the most widely used computer systems, and accurately modelling its reliability contributes to establishing a computer system with high reliability.

Keywords: Dynamic reliability block diagrams · Dynamic behaviors · System reliability

1 Introduction

With the widespread use of computer technology in various industries, how to guarantee the reliability of complex computer-based systems is receiving increasing attention. System reliability is a sub-concept of system dependability and is measured by the Mean Time to Fail [1]. This indicates that system reliability can be evaluated by a model which is able to represent the time to entire system failure. The performance of a system is determined by the quantities, qualities and arrangement of the components that construct it [2]. It follows that the reliability model should be constructed with reference to the life distributions and maintenance policies of components, sub-assemblies and assemblies of system components [3, 4]. Additionally, the increasing complexity of distributed computer systems motivates the construction of dynamic system reliability modelling tools that can model dynamic behaviours of a system, such as dynamics, dependencies, redundancy and load sharing [5, 6]. These situations have motivated dynamic tools such as the Dynamic RBD (DRBD) [7, 8]. DRBD was first proposed by Distefano and Xing [5] and later updated to a much simpler structure by Xing and Xu [4, 5]. Compared with current alternatives, DRBD is more complete in modelling dynamic properties between components of a system [7, 8]. From the developed DRBD model, the exact analytical solution for the reliability of a system can be calculated. In order to guarantee the correctness of the designed DRBD model and to update the current system, the verification methodology of Colored Petri Nets (CPN) is introduced to verify the DRBD model. The advantage of this verification methodology is that the converted CPN models can be analyzed automatically by CPN Tools. The state space tools provided by CPN Tools calculate CPN properties such as boundedness and liveness; interpreting these properties for DRBD allows design flaws such as deadlocks and faulty states to be identified. A typical Peripheral Component Interconnect (PCI) system is widely used by Enterprise Resource Planning (ERP) systems. This project contributes to the analysis and improvement of the reliability of a typical multiprocessor system by innovatively introducing DRBD models.

The rest of this paper is arranged as follows. Section 2 discusses related work and the theoretical background of this project. DRBD model design and verification are depicted in Sect. 3. System updating is given in Sect. 4. Section 5 concludes this project and proposes future work.

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1569–1576, 2019. https://doi.org/10.1007/978-981-13-3648-5_203

2 Related Work

Based on the time-variant characteristics of the original DRBD, Xing and Xu [8] proposed a newly structured DRBD which focuses on managing the relationships between different components. The newly proposed DRBD extends the traditional RBD by fully considering the various dependencies and system dynamics. DRBD mainly contains two parts: the state-based RBD (SRBD) and dynamic controller blocks [5–8]. Specifically, DRBD shares the same theory as RBD in terms of static behavior modeling [4–6, 8], while distinguishing itself by introducing the State Dependent Controller (SDEP) and Spare Part Controller (SPARE) [4, 5] to model dynamic behaviors. Controller blocks handle dynamic behaviors based on the state-event mechanism [5]. Briefly, DRBD characterizes each component's condition with states and the evolution of states with events [1, 5, 7, 8]. In time-variant systems, every component's condition changes as time passes [8]; correspondingly, the component's state changes among the Active event (A), Deactivation event (D) and Failed event (F) [8]. SDEP is introduced to model the state dependency relationship between a trigger and a target component. The SDEP shown in Fig. 1 represents the dependency relationship (F, D). SDEP can currently model nine types of dependencies between components: (A, A), (A, D), (A, F), (D, A), (D, D), (D, F), (F, A), (F, D), and (F, F). Additionally, SDEP not only has the advantage of modelling more types of dynamic relationships, but is also able to express the logical AND relationship for multiple components. SPARE is introduced for the dynamic behaviour modelling of spare components. A spare component is a backup for a failed trigger component [7].


The difference between spare and normal components is that a spare component has a default initial state of Deactivation, which comes in three temperatures: hot, warm and cold [7, 9]. The working mechanism of the SPARE controller is illustrated in Fig. 2, where C means cold temperature and W and H stand for warm and hot, respectively. The trigger component sends a Deactivation or Failure event to the SPARE controller; SPARE then generates an Activation event and sends it to the target component, whose state changes from Deactivation to Active. The temperature is defined by the time and power consumption required for backup. Specifically, a hot Deactivated spare component is powered and ready to replace the trigger component at any time; a cold Deactivated component is unpowered until a failed component needs to be replaced; and a warm Deactivated component trades off the activation time and power consumption between the hot and cold temperatures.

Fig. 1. SDEP controller

Fig. 2. SPARE controller
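The SPARE behaviour just described — a spare starting in the Deactivation state at some temperature, then activated when the trigger deactivates or fails — can be sketched as a small state machine. The event and state names follow the paper; the classes and the trigger/spare pairing below are illustrative, not the formal DRBD semantics.

```python
class Component:
    def __init__(self, name, state="Active"):
        self.name, self.state = name, state

class SpareController:
    """Sketch of SPARE: the spare starts Deactivated at a given temperature
    (cold/warm/hot) and is activated when the trigger deactivates or fails."""
    def __init__(self, trigger, spare, temperature="hot"):
        assert temperature in ("cold", "warm", "hot")
        self.trigger, self.spare, self.temperature = trigger, spare, temperature
        spare.state = "Deactivated"     # default initial state of a spare

    def on_trigger_event(self, event):
        # A Deactivation (D) or Failure (F) event on the trigger makes the
        # controller send an Activation (A) event to the spare component.
        if event in ("D", "F"):
            self.spare.state = "Active"

primary = Component("Primary")
backup = Component("Spare")
ctrl = SpareController(trigger=primary, spare=backup, temperature="warm")
primary.state = "Failed"
ctrl.on_trigger_event("F")
print(backup.state)  # Active
```

The temperature attribute is carried along but not simulated here; in a fuller model it would determine the delay and power cost of the Deactivation-to-Active transition.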

3 Reliability Modelling of PCI

The practical reliability modelling work of this project is described in this chapter in two steps. The first step gives an introduction to, and the system structure of, the chosen PCI system. The second step specifies this PCI system's data diagram model and models the reliability of the system with DRBD. The following two sections detail each step in turn.

3.1 PCI System

In a PCI system, the bus provides the communication pathway for devices [10]. PCI is a high-bandwidth, processor-independent bus with the advantages of supporting both single- and multi-processor systems and making use of synchronous timing and a centralized arbitration scheme [10]. PCI is therefore widely used in diverse systems, such as personal computers and server systems, which underlines the importance of reliability modelling for PCI systems. Figure 3 [11] shows the typical multi-processor PCI system that is the case study of this paper. This multi-processor PCI system contains four subsystems: CPU, I/O controller, Bus_1 and Bus_2 (Fig. 3). Bus_1 supports three devices: ISA_Bridge, Video and PCI_Bridge. The ISA_Bridge in the system supports the I/O controller device, which controls the keyboard, mouse and floppy [11]. The function of PCI_Bridge is only to connect Bus_1 with Bus_2 so as to achieve data transfer between them; PCI_Bridge only passes data from the upstream bus to the downstream bus. In Fig. 3, data are transferred from Bus_1 to Bus_2 if they carry the memory addresses of either SCSI or LAN, both of which are devices of Bus_2. SCSI is the small computer system interface, which can be used to support local disk drives and other peripherals [11]; LAN is for the ethernet network connection. The entire PCI system is operational if CPU, I/O controller, Bus_1, all devices of Bus_1, Bus_2, and at least one of SCSI and LAN are functioning.

Fig. 3. PCI system structure
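The operability condition stated above maps to a static series-parallel reliability expression: every series component must function, while SCSI and LAN form a 1-out-of-2 parallel pair, giving R_sys = (∏ R_series) · (1 − (1 − R_SCSI)(1 − R_LAN)). A sketch with hypothetical component reliabilities (the paper gives no numbers, and this is only the static RBD baseline — the dynamic DRBD behaviour is not captured here):

```python
from math import prod

# All of these components are in series: any single failure stops the system.
SERIES = ["CPU", "IO_controller", "Bus_1", "ISA_Bridge", "Video",
          "PCI_Bridge", "Bus_2"]

def pci_system_reliability(r):
    """Static series-parallel reliability implied by the operability
    condition: every series component works AND at least one of SCSI/LAN."""
    r_series = prod(r[c] for c in SERIES)
    r_parallel = 1 - (1 - r["SCSI"]) * (1 - r["LAN"])
    return r_series * r_parallel

# Hypothetical per-component reliabilities for illustration only.
r = {c: 0.99 for c in SERIES}
r.update(SCSI=0.95, LAN=0.95)
print(f"system reliability: {pci_system_reliability(r):.4f}")
```

The parallel term is what the SCSI/LAN redundancy buys: with both at 0.95, the pair contributes 0.9975 rather than 0.95 to the product.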

3.2 DRBD Model Construction

The process for using DRBD to model system reliability is summarized in [12], which divides it into five steps: system specification, subsystem identification, structural linking, dynamic linking and reiteration [12]. Xing and Xu stated that the entire system DRBD model is a serial connection of the subsystem DRBD models [13]. Figure 3 has already specified each component of the system and the identification of the four subsystems. This section therefore designs the data diagram of the PCI system and the DRBD models of the subsystems, and then connects the subsystem DRBD models in series to form the entire DRBD model. There are four subsystems. The I/O controller and CPU subsystems are independent simple components of the entire DRBD system; the failure of either the I/O controller or the CPU leads to the failure of the whole system. The dynamic interactions between components mainly exist within the Bus_1 and Bus_2 subsystems. Consider first the Bus_1 subsystem: the failure of component Bus_1 means the Deactivation of all devices attached to it. Additionally, the information transmitted by one device is available for reception by the other devices attached to the bus [10]. In Fig. 3, the failure of either PCI_ISA or Video sends a "Failed" message to PCI_Bridge, which makes the Bus_2 subsystem inaccessible. However, PCI_ISA and Video do not affect each other's state, because the bus allows only one component to communicate its signal at a time [10]. Secondly, in the Bus_2 subsystem, the failure of component Bus_2 makes both SCSI and LAN unusable. Based on this analysis, the data diagram of the system is constructed as Fig. 4 [13]. The arcs show the interactions between components, running from trigger component to target component. The dynamic interactions between components are contained in the Bus_1 and Bus_2 subsystems (Fig. 4).

Fig. 4. Data diagram of PCI system

Translating the data diagram into a DRBD model: the Failure of Bus_1 incurs the Deactivation of its attached devices PCI_ISA, Video and PCI_Bridge; the Failure of PCI_ISA leads to the Deactivation of PCI_Bridge; and the Failure of Video likewise leads to the Deactivation of PCI_Bridge. Clearly, any failure among the components Bus_1, PCI_ISA, Video and PCI_Bridge leads to the Deactivation of Bus_2. The components PCI_ISA and Video communicate information through Bus_1 independently, and both output data through PCI_Bridge. PCI_Bridge provides the connection between Bus_1 and Bus_2 and is serially connected to Bus_2; in the DRBD model, the dynamic interactions of Bus_1 connect directly to Bus_2. There are two state dependency relationships for Bus_2: the Failure of Bus_2 leads to the Deactivation of SCSI and of LAN, respectively. Finally, the entire system DRBD model (Fig. 5) is constructed by connecting the four subsystems serially.

Fig. 5. DRBD model of entire PCI system
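The dependency relationships just listed can be written down as a small table and propagated mechanically. The sketch below reads the (F, D)-style links off the text above; the propagation code itself is illustrative and is not the CPN-based analysis used later in the paper.

```python
# (trigger component, event) -> [(target component, resulting event), ...]
# Links transcribed from the DRBD description of the Bus_1/Bus_2 subsystems.
SDEP = {
    ("Bus_1", "F"):      [("PCI_ISA", "D"), ("Video", "D"), ("PCI_Bridge", "D")],
    ("PCI_ISA", "F"):    [("PCI_Bridge", "D")],
    ("Video", "F"):      [("PCI_Bridge", "D")],
    ("PCI_Bridge", "D"): [("Bus_2", "D")],
    ("PCI_Bridge", "F"): [("Bus_2", "D")],
    ("Bus_2", "F"):      [("SCSI", "D"), ("LAN", "D")],
    ("Bus_2", "D"):      [("SCSI", "D"), ("LAN", "D")],
}

def propagate(component, event, states):
    """Apply an event to a component and follow SDEP links transitively."""
    queue = [(component, event)]
    while queue:
        comp, ev = queue.pop()
        new_state = {"F": "Failed", "D": "Deactivated", "A": "Active"}[ev]
        if states.get(comp) == new_state:
            continue                    # already in this state; stop cascading
        states[comp] = new_state
        queue.extend(SDEP.get((comp, ev), []))
    return states

states = {c: "Active" for c in
          ["CPU", "I/O", "Bus_1", "PCI_ISA", "Video", "PCI_Bridge",
           "Bus_2", "SCSI", "LAN"]}
propagate("Bus_1", "F", states)
print(states["Bus_2"], states["SCSI"])  # Deactivated Deactivated
```

Tracing a Bus_1 failure this way reproduces the cascade described in the text: all Bus_1 devices deactivate, PCI_Bridge's deactivation takes Bus_2 down, and SCSI and LAN follow, while CPU and I/O remain Active.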


4 Verification and Findings

A CPN model is introduced in this part for DRBD model verification and system updating. The proposed conversion algorithms are first applied to convert the DRBD models into CPN; CPN Tools is then used for CPN generation and analysis, so as to remove design flaws of the DRBD models and improve the accuracy of the reliability model of this system. The designed DRBD model is converted into the CPN net shown in Fig. 6. Through the boundedness and liveness calculations of CPN Tools and the consideration of the reactivation relationship between devices, two problems were tracked: design errors and the absence of a home marking. After verification, the system was updated based on the verification of the Bus_1 and Bus_2 DRBD reliability models; for example, an SDEP controller performing the (A, A) transition was added between the components ISA and Bus_2. Additionally, in order to model the dynamic behaviors between LAN and SCSI, a SPARE controller was introduced to model the dynamic relationship between them. It can therefore be seen that the verified DRBD model constitutes a more accurate system reliability model.

Fig. 6. CPN net of DRBD


5 Conclusion

System reliability is receiving increasing attention with the widespread use of large and complex systems, especially computer systems. High reliability should be achieved through accurate and productive reliability modeling tools, together with formal verification methodologies that guarantee the precision of these tools' results. As one of the significant subsystems of complex computer systems, PCI systems play an important role. Compared with current system reliability modelling tools, DRBD is able to fully model the dynamic behaviours of a system with dynamic controllers. Another advantage of the DRBD model is that it can be formally verified with CPN Tools, which provides state space analysis tools to analyze the properties of the CPN models converted from DRBDs. DRBD and CPN Tools are therefore introduced in this project for the construction of a more accurate system reliability model for a computer system. An accurate system reliability model can not only be used for measuring the reliability of the current system, but can also be referenced for future improvement of the system. Based on the DRBD model and the analysis of CPN Tools, further reliability improvement of the PCI system could be achieved by adding a "Host Bridge" component to establish an AND-logic SDEP controller between the up-stream and down-stream PCI systems.

References
1. Distefano, S., Puliafito, A.: Dynamic reliability block diagrams: overview of a methodology. ESREL 7 (2007)
2. Rausand, M., Hoyland, A.: System Reliability Theory: Models and Statistical Methods (2003)
3. Vesely, W.E., et al.: Fault Tree Handbook. No. NUREG-0492. Nuclear Regulatory Commission, Washington, DC (1981)
4. Distefano, S., Xing, L.: A new approach to modeling the system reliability: dynamic reliability block diagrams. In: RAMS'06 Proceedings, pp. 189–195 (2006)
5. Robidoux, R., Xu, H.P., Xing, L.D., Zhou, M.C.: Automated modelling of dynamic reliability block diagrams using coloured Petri nets. IEEE Trans. Syst. Man Cybern. A, Syst. Hum. 40(2), 337–351 (2010)
6. Manian, R., Dugan, J., Coppit, D., Sullivan, K.: Combining various solution techniques for dynamic fault tree analysis of computer systems. In: Proceedings of 3rd International Symposium on High-Assurance Systems Engineering (HASE'98), Washington, D.C., USA, pp. 21–28 (1998)
7. Distefano, S., Puliafito, A.: Dynamic reliability block diagrams: overview of a methodology. In: Proceedings of the Safety and Reliability Conference (ESREL07) (2007)
8. Xing, L., Xu, H., Amari, S.V., Wang, W.: A new framework for complex system reliability analysis: modeling, verification, and evaluation
9. Stallings, W.: Computer Organization and Architecture: Designing for Performance. Pearson Education, India (2000)
10. Rusling, D.A.: The PCI system. http://www.tldp.org/LDP/tlk/dd/pci.html
11. Ullman, J.D.: Elements of ML Programming. Prentice-Hall (1998)
12. Xing, L.: Efficient analysis of systems with multiple states. In: Proceedings of the IEEE 21st International Conference on Advanced Information Networking and Applications, 21–23 May 2007, Niagara Falls, Canada, pp. 666–672 (2007)


13. Dugan, J.B., Doyle, S.A.: New results in fault-tree analysis. In: Tutorial Notes of the Annual Reliability and Maintainability Symposium (1997)
14. Abd-Allah, A.: Extending reliability block diagrams to software architectures. Technical Report USC-CSE-97-501, Department of Computer Science, University of Southern California (1997)
15. Duke, R., Rose, G., Smith, G.: Object-Z: a specification language advocated for the description of standards. Comput. Stand. Interfaces 17, 511–513 (1995)

Mid-long Term Load Forecasting Model Based on Principal Component Regression

Xia Xinmao1, Zhang Zengqiang1, Yuan Chenghao2, and Yu Zhiyong1
1 State Grid Xinjiang Economic Research Institute, Xinjiang 830002, China
2 North China Electric Power University, Beijing 102206, China
[email protected]

Abstract. Most of the factors influencing multivariate regression prediction models are macroeconomic indicators, and these indicators tend to be strongly correlated. The regression prediction model therefore suffers from strong multicollinearity, which leads to distortion or inaccuracy of the model estimates. To solve this problem, this paper first performs principal component analysis on the indicators, extracts two principal components to eliminate the multicollinearity between the indicators, takes into account the policy background of the "Electrification of Xinjiang", and finally establishes a mid-long term load forecasting model based on principal component regression. The principal component regression prediction model has a better prediction effect, and testing shows that its prediction accuracy is satisfactory.

Keywords: Load forecasting · Electrification of Xinjiang · Principal component regression

1 Introduction

Since the beginning of the 21st century, the macro economy in most regions of China has experienced a period of rapid development, with GDP growth rates of up to 10% in some years and rapid growth of electricity consumption. However, as the economy has developed, China's economic development has entered a "New Normal": the industrial structure is facing upgrading and transformation, economic growth has shifted to a medium-high speed, and the growth rate of electricity consumption has also slowed down [1]. On the one hand, China is actively promoting supply-side structural reforms, "cutting overcapacity and destocking", and accelerating the elimination of backward and inefficient industries, which also has a negative effect on the growth of electricity consumption. On the other hand, in order to deal with the increasingly severe problems of environmental pollution and resource shortage, China has actively promoted electric power substitution; relevant documents include the "Energy Development Strategic Action Plan (2014–2020)" (National Office (2014) No. 31) and the "Guidance on Promoting Electricity Substitution" (Development and Reform Energy [2016] No. 1054) [2]. Extensive electric power substitution has therefore also played a positive role in the growth of

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1577–1583, 2019. https://doi.org/10.1007/978-981-13-3648-5_204


electricity consumption. Against this more complex background, more and more influencing factors must be considered in medium and long term load forecasting, and the difficulty is increasing accordingly. The core of power load forecasting is to establish a mathematical model based on historical data that expresses the law of development and change of the electrical load, so as to obtain reasonable forecast results and provide a basis and guarantee for grid companies to make correct decisions [3]. With regard to medium and long term load forecasting methods, scholars at home and abroad have conducted considerable research. Yuan Tiejiang et al. analyzed and compared the characteristics of current medium and long term load forecasting methods, improved the traditional electricity consumption elasticity coefficient method, combined the GM(1, 1) model with second exponential smoothing, established a new comprehensive forecasting model, and used a Genetic Algorithm to optimize the weights of each prediction model [4]. Tan Zhongfu et al. considered the influence of various random factors such as politics, economy, population and climate on the long-term load of the power system, proposed a combined method based on econometrics and system dynamics, and demonstrated that the method has high prediction accuracy [5]. Tan Zhongfu et al. also proposed a variable-weight buffered grey model to solve the problems of the traditional grey model and buffer operator in medium and long term load forecasting; the model combines variable-weight buffer operators with a background-value-optimized grey model to achieve dynamic pretreatment of the raw load data [6]. The traditional methods of time series prediction and grey prediction, which are based only on electricity consumption or the characteristics and laws of the load itself, do not consider the increasingly complex influencing factors [7–9].

As most of the influencing factors are macroeconomic indicators, multivariate regression forecasting methods suffer from relatively serious multicollinearity between variables, which makes the regression coefficients of the variables unreliable and greatly reduces model accuracy [10]. Traditional BP Neural Network prediction is prone to local optima, over-fitting and poor prediction accuracy, and its results for medium and long term load forecasting in particular are not ideal [11–13]. To solve these problems, this paper establishes a principal component regression prediction model that takes the influence of policy into account. First, principal component analysis is used to reduce the dimensionality of the influencing factors, filter out extraneous information and obtain several principal components. Second, a multivariate regression model is fitted by least squares on the obtained principal component variables; testing shows that it has higher prediction accuracy.

2 Data Processing

Figure 1 shows the electricity consumption data of Xinjiang in 2000–2017. In recent years, due to the declining economic situation, the growth rate of electricity demand in Xinjiang has slowed down, and the generating hours of power generation enterprises have dropped sharply. The fact that cogeneration units account for about 90% of the installed thermal power capacity in Xinjiang causes great difficulties for peak regulation during the heating period. In order to ensure heating for residents, wind and solar curtailment is very serious and the contradiction between power supply and demand is prominent. In order to enhance the ability to absorb surplus electric power, improve the energy consumption structure and raise the level of regional electrification, the Xinjiang Autonomous Region issued the "Work Plan for Accelerating the Electrification of Xinjiang" on November 14, 2016. This plan sets the targets for electricity substitution in the Xinjiang region during the 13th Five-Year Plan (2016–2020), and constitutes a qualitative indicator among the factors affecting electricity consumption. Besides this qualitative indicator, this paper also selected seven quantitative indicators: regional GDP (100 million CNY), industrial added value (100 million CNY), total social fixed assets investment (100 million CNY), regional annual per capita GDP (CNY), fiscal expenditure (100 million CNY), per capita disposable income (100 million CNY) and population (ten thousand people). These seven variables are represented by X1–X7, respectively.

Fig. 1. Electricity consumption in Xinjiang region in 2000–2017

Table 1 summarizes the indicators for the Xinjiang region from 2000 to 2017. Data from 2000 to 2013 are used for model training, and data from 2014 to 2017 are used for verification. Qualitative indicators are treated by introducing dummy variables: in 2016 and 2017, when the "Work Plan for Accelerating the Electrification of Xinjiang" was in effect, the indicator is set to 1, and to 0 in other years. This qualitative indicator is represented as X8.

Table 1. Indicators for Xinjiang region from 2000 to 2017

Year  X1        X2       X3        X4        X5       X6        X7
2000  1365.00   422.00   612.00    7382.37   207.00   5817.00   1849.00
2001  1485.00   450.00   706.00    7914.98   281.81   6590.30   1876.19
2002  1598.28   473.00   812.63    8389.26   366.00   6941.00   1905.15
2003  1875.00   571.00   1002.00   9695.18   368.00   7220.61   1933.95
2004  2200.15   745.00   1160.00   11207.47  410.00   7503.00   1963.11
2005  2604.00   992.00   1352.32   12952.97  553.00   8100.00   2010.35
2006  3018.98   1218.73  1567.05   14726.73  727.58   9120.00   2050.00
2007  3494.42   1405.11  1850.84   16678.30  875.00   10313.44  2095.19
2008  4203.41   1790.70  2314.00   19726.82  1173.34  11432.00  2130.81
2009  4273.57   1579.88  2827.23   19797.60  1474.87  12258.00  2158.63
2010  5418.81   2105.00  3539.00   24803.70  1885.56  13644.00  2184.68
2011  6574.54   2764.14  4712.77   29766.42  2598.34  15514.00  2208.71
2012  7530.32   2929.90  6258.38   33726.21  3079.50  17921.00  2232.78
2013  8510.00   2895.95  8148.41   37847.00  3519.60  19874.00  2264.30
2014  9264.10   3179.60  9744.68   40607.00  3782.97  23214.00  2298.47
2015  9324.80   2690.04  10729.32  40034.00  4169.00  26274.60  2360.00
2016  9617.23   2440.94  9983.86   40427.00  4496.73  28463.43  2398.08
2017  10920.09  3229.09  11795.64  45099.00  4946.80  30775.00  2421.34

3 Model Establishment

Principal component regression is an improved statistical analysis method based on least squares regression; its core techniques are principal component analysis and multiple regression analysis. Principal component analysis is a multivariate statistical analysis method that converts multiple variables into a few comprehensive indicators through dimensionality-reduction techniques, and it is also a commonly used data mining technique. Principal component analysis filters out miscellaneous information and yields a series of comprehensive indicators. Each comprehensive indicator contains as much of the useful information of the original variables as possible and is an optimally weighted linear combination of the original variables.

The principal components of the above eight influencing factors were analyzed. According to the scree plot in Fig. 2, two principal components are retained. As shown in Table 2, the cumulative contribution of the two extracted principal components is as high as 99.671%, which shows that they retain most of the information of the original variables. The KMO statistic is 0.715, and the variables pass Bartlett's test of sphericity, indicating that they are suitable for principal component analysis.

Mid-Long Term Load Forecasting Model Based on Principal …


Fig. 2. Scree plot

Table 2. Loading list of principal component analysis

           Initial eigenvalues                Extraction sums of squared loadings
Component  Sum    Variance %  Cumulative %    Sum    Variance %  Cumulative %
1          6.974  87.174      87.174          6.974  87.174      87.174
2          1.000  12.497      99.671          1.000  12.497      99.671
3          0.015  0.185       99.856
4          0.009  0.115       99.971
5          0.002  0.022       99.993
6          0.000  0.004       99.998
7          0.000  0.002       100.000
8          0.000  0.000       100.000

Finally, two principal components are extracted:

z1 = 0.143Y1 + 0.143Y2 + 0.143Y3 + 0.143Y4 + 0.143Y5 + 0.143Y6 + 0.142Y7 − 0.010Y8   (1)

z2 = 0.002Y1 − 0.030Y2 + 0.025Y3 − 0.008Y4 + 0.026Y5 + 0.007Y6 + 0.044Y7 + 0.998Y8   (2)

In Eqs. (1) and (2), Yi represents Xi after standardization. The meaning of the two principal components is as follows: z1 represents a macroeconomic indicator and z2 represents a policy indicator. z1, z2 and the power consumption data are used to build a multiple regression model. As can be seen from Table 3, there is no multicollinearity between the variables.
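The pipeline described above — standardize the indicators, extract the leading components, then regress consumption on the component scores — can be sketched in NumPy. This is a minimal illustration on synthetic data, not the paper's actual data or its SPSS output:

```python
import numpy as np

def principal_component_regression(X, y, k=2):
    """Standardize X, extract the top-k principal components, run OLS on them."""
    Y = (X - X.mean(axis=0)) / X.std(axis=0)        # Yi: standardized Xi
    corr = (Y.T @ Y) / X.shape[0]                   # correlation matrix
    w, V = np.linalg.eigh(corr)
    V = V[:, np.argsort(w)[::-1][:k]]               # top-k eigenvectors
    Z = Y @ V                                       # component scores z1, z2
    Zc = np.hstack([np.ones((len(y), 1)), Z])       # add intercept column
    coef, *_ = np.linalg.lstsq(Zc, y, rcond=None)   # least-squares fit
    return coef, V

# Hypothetical stand-in for the 14 training years and 8 indicators
rng = np.random.default_rng(0)
X = rng.normal(size=(14, 8))
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=14)
coef, V = principal_component_regression(X, y)
```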

Table 3. Results of principal component regression

    Coefficient                      t       Sig.   Collinearity statistics
    B         Standard error                        Tolerance  VIF
C   2084.336  60.710           34.333  0.000
z1  1573.466  71.998           21.854  0.000  0.679      1.472
z2  116.418   26.641           4.370   0.001  0.679      1.472

The principal component regression prediction model of power consumption is:

Ŷ = 2084.336 + 1573.466 z1 + 116.418 z2   (3)

Substituting formulas (1) and (2) into (3), the fitted principal component regression prediction model is finally obtained:

Ŷ = 2084.336 + 225.854X1 + 221.371X2 + 228.194X3 + 224.585X4 + 228.498X5 + 226.023X6 + 229.135X7 + 101.185X8   (4)

4 Model Test

The principal component regression model established in this paper is used to forecast the electricity consumption of Xinjiang in 2014–2017. The results are shown in Fig. 3. From the results, the maximum relative error of the model is 1.59% and the average relative error is 1.04%; the prediction accuracy is high, so the model can provide a useful reference for the selection of medium and long-term load forecasting methods.

Fig. 3. Forecasting results for 2014–2017: actual value, predicted value, and relative error



Research on Intrusion Detection Algorithm in Cloud Computing

Yupeng Sang

Department of Information Engineering, Heilongjiang International University, Harbin 150025, China
[email protected]

Abstract. Traditional intrusion detection, as one of the network security defense technologies, plays an important role in the field of network security. In a cloud environment, however, constraints such as response speed and data size prevent it from meeting real-time and validity requirements. Building an intrusion detection system for the cloud computing environment is therefore an important subject. Aiming at the design requirements of intrusion detection in the cloud environment, this paper studies intrusion detection algorithms, proposes the Extreme Learning Machine (ELM) algorithm as the intrusion detection classification algorithm, and verifies the rationality of this choice. At the same time, because the redundancy and noise in massive high-dimensional intrusion detection data reduce detection efficiency, the principal component analysis (PCA) algorithm is applied for feature extraction and dimensionality reduction, which improves detection efficiency and reduces detection time. Experimental results show that the algorithm is effective.

Keywords: Cloud computing · Intrusion detection · Extreme learning machine · PCA

1 Introduction

Research on intrusion detection has found that selecting a fast and efficient detection algorithm is the most important issue for an entire intrusion detection system. Neural network algorithms have long been applied in research on intrusion detection algorithms [1, 2]. However, with the arrival of the era of big data and cloud computing, traditional neural networks need a lot of time for training and their detection rate is not ideal, which makes them unsuitable for cloud computing applications. In recent years, a new type of single-hidden-layer feed-forward neural network, the extreme learning machine (ELM), has been proposed. It has received extensive attention and application from researchers for its fast calculation speed, strong generalization ability and simple model, and it can deal with problems such as classification and regression [3, 4].

With the rapid growth of network size and the emergence of cloud computing, the data collected by an intrusion detection system are large in quantity, complicated in structure and high in dimension. Two problems arise when dealing with these data. On the one hand, the high dimensionality will increase

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1584–1592, 2019. https://doi.org/10.1007/978-981-13-3648-5_205

the training and detection time of the entire intrusion detection process. On the other hand, a large number of redundant and noisy features will degrade the detection effect and cause problems such as overfitting during learning. In order to cope with intrusion detection on high-dimensional and complicated data, this paper selects principal component analysis (PCA) as the dimension reduction method to extract features, so as to ensure the detection effect while decreasing the time consumed.

2 Extreme Learning Machine Algorithm

The extreme learning machine (ELM) [5, 6] is a single-hidden-layer feed-forward neural network algorithm [7] proposed in 2004. It makes up for the deficiencies of traditional feed-forward neural network training: it greatly improves the training speed, decreases computation time, and offers a good detection rate and generalization ability, so it can be applied to both classification and regression problems.

Before analyzing the extreme learning machine algorithm, it is necessary to review the single-hidden-layer feed-forward neural network. Such a network has three layers: an input layer, a hidden layer and an output layer. The input layer contains a number of input nodes, and the number of nodes equals the dimension of a sample. The hidden layer also contains many nodes, which compute high-order statistics of the input. The output layer produces the final results through a small number of nodes. In a feed-forward neural network the signal travels one way, from the input layer through the hidden layer to the output layer, with no back propagation.

The ELM algorithm is in essence an advanced version of the single-hidden-layer feed-forward neural network. It randomly sets the initial values of the input-layer weights and hidden-layer offsets; once the hidden nodes are set, the output weights are obtained by the least squares method. The entire calculation avoids repeated iterations, which saves time and improves the calculating speed.

Given N distinct training samples (u_i, y_i) ∈ Rⁿ × Rᵐ, where u and y represent the input vector and the target vector respectively, let the number of hidden-layer nodes be L and the activation function be g(x). The overall model can be written as formula (1):

Σᵢ₌₁ᴸ β_i g(a_i · u_j + b_i) = o_j,  j = 1, …, N   (1)

In formula (1), a_i = [a_i1, a_i2, …, a_in]ᵀ is the weight vector connecting the i-th hidden node with the input nodes, β_i = [β_i1, β_i2, …, β_im]ᵀ is the weight vector connecting the i-th hidden node with the output nodes, and b_i is the offset of the i-th hidden node. When the network approximates the training samples with zero error, the following formula (2) is obtained:

Σᵢ₌₁ᴸ β_i g(a_i · u_j + b_i) = y_j,  j = 1, …, N   (2)

The ELM algorithm can be summarized as follows:

Step 1: randomly generate the input weights a_i and the hidden-node offsets b_i;
Step 2: calculate the hidden-layer output matrix H;
Step 3: calculate the output weight vector β = H⁺T, where H⁺ is the Moore–Penrose generalized inverse of H and T is the target matrix.

Compared with other algorithms, the extreme learning machine has advantages in both speed and precision:
1. The calculation speed of the extreme learning machine is fast: an experiment that takes the algorithm only a few seconds often takes a traditional algorithm much longer.
2. Its generalization ability is excellent, so it can cope with many problems.
3. It directly builds a single-hidden-layer feed-forward neural network and thereby avoids problems such as local minima.
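The three steps above can be sketched in NumPy as follows; the sigmoid activation and the toy binary dataset are assumptions for illustration, not the paper's experimental setup:

```python
import numpy as np

def elm_train(X, Y, L, seed=0):
    """Basic ELM training: random input weights/offsets, least-squares output weights."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-1, 1, (X.shape[1], L))    # Step 1: input weights a_i
    b = rng.uniform(-1, 1, L)                  # Step 1: hidden-node offsets b_i
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))     # Step 2: sigmoid hidden output matrix H
    beta = np.linalg.pinv(H) @ Y               # Step 3: beta = H^+ T (pseudo-inverse)
    return A, b, beta

def elm_predict(X, A, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))
    return H @ beta

# Toy binary problem (hypothetical data, not KDD CUP 99)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)
A, b, beta = elm_train(X[:150], y[:150], L=30)
acc = ((elm_predict(X[150:], A, b, beta) > 0.5) == y[150:]).mean()
```

Note that there is no iterative weight update: the single pseudo-inverse solve is what gives ELM its speed advantage over back-propagation training.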

3 Feature Extraction of Principal Component Analysis (PCA)

The principal component analysis (PCA) algorithm [8, 9] is a multivariate statistical analysis algorithm that finds a few important variables among many through a linear transformation. The low-dimensional data obtained after dimensionality reduction represent the initial high-dimensional data. PCA feature extraction calculates the covariance matrix from the original sample matrix, obtains the characteristic (eigenvector) matrix from the covariance matrix, and finally multiplies the original sample matrix by the resulting characteristic matrix to generate a low-dimensional sample matrix. Though the newly obtained sample matrix has a low feature dimension, it still contains most of the characteristic information of the initial sample matrix. At present, the PCA algorithm is widely applied in data compression and other research areas; owing to the increase in the dimension of intrusion detection data, PCA technology is also applied in intrusion detection.

The principal component analysis (PCA) algorithm proceeds as follows:

1. Original sample matrix: transform the original data samples into matrix form, where the number of samples is n and the feature dimension of each sample is m. The original sample matrix is shown in formula (3):

X = [x_ij]   (3)


2. Data standardization: from the original sample matrix, first calculate the column mean x̄_j = (1/n) Σᵢ₌₁ⁿ x_ij and the standard deviation s_j = √((1/n) Σᵢ₌₁ⁿ (x_ij − x̄_j)²), then standardize the data as y_ij = (x_ij − x̄_j) / s_j. The standardized data matrix is denoted Y = [y_ij].
3. Obtain the correlation coefficient matrix A = (1/n) Yᵀ Y and perform the eigendecomposition A = Q Λ Qᵀ of the correlation coefficient matrix A to get the eigenvalues λ.
4. Determine the number k of principal components to keep according to the cumulative variance contribution: principal components are retained until their cumulative contribution reaches 85%. In this way the original samples are transformed from the m dimension to the k dimension.

4 Intrusion Detection Dataset

When analyzing and researching intrusion detection algorithms, the corresponding detection data must first be collected. This paper selects the KDD CUP 99 data set [10] to evaluate the performance of the intrusion detection algorithm. The KDD CUP 99 data set was created by DARPA for the 1999 KDD competition. It has been widely applied in the intrusion detection field and carries a certain authority for comparing the advantages and disadvantages of intrusion detection algorithms.

The KDD CUP 99 data set contains a total of approximately 7 million data samples. The first part is the training data set, with about 5 million connection records; the other part is the test data set, with about 2 million connection records. In the training data set, each record consists of 41 features and a tag, and the type of a record can be distinguished by its tag. The entire data set can be divided into five categories: normal behavior and four attack modes (Dos, U2R, R2L, and Probing attacks), which are described below.

Dos (Denial of Service) attack: Dos attacks exploit defects of network protocols or directly consume the resources of the attacked host, so that the computer or network cannot serve the related service or resource-access requests. The attack causes the failure or breakdown of the target service system, so that it cannot meet the service requests of legitimate users. In the KDD CUP 99 data set, Dos attacks include apache2, neptune, land, smurf, udpstorm, and so on.

U2R (User to Root) attack: a U2R attack means that an unauthorized local user manages to operate with super-user privileges. It circumvents some verifications or directly obtains root privileges through vulnerabilities of the system or website, and then performs illegal operations. The U2R attack types in the KDD CUP 99 data set include httptunnel, perl, ps, rootkit, and so on.

R2L (Remote to Local, remote user attack): R2L attacks generally result from poor remote access control. The weakness of user passwords is exploited by attackers, and intruders without legitimate access can obtain data by sending and receiving data packets. The R2L attack types in KDD CUP 99 include ftp_write, imap, named, spy, and so on.

Probing (surveillance and other probing, port attack): a probing attack finds vulnerabilities and obtains information such as the IP address of the target computer by scanning the DNS server, the computer network, and other ports. The probing attack types in KDD CUP 99 data sets include ipsweep, mscan, nmap, and so on.

Across the four types of intrusion behavior in the entire data set, there are 39 different network attack methods in total. The data distribution of KDD CUP 99 is similar to reality: the intrusion behaviors in the training set and in the test set are not exactly the same, and some attack methods do not exist in the training samples, which makes the data environment similar to a real one. The KDD CUP 99 training set contains 22 types of the 4 attack modes; the remaining 17 attack types appear only in the test set. The specific attack types and their distribution are listed in Table 1.

Table 1. KDD CUP 99 dataset attack type and distribution

Attack mode     Category description      Training set type                                               Test set type
Dos attack      Denial of service attack  back, land, neptune, pod, smurf, teardrop                       apache2, mailbomb, udpstorm, processtable
U2R attack      Rights attack             perl, rootkit, loadmodule, buffer_overflow                      httptunnel, ps, sqlattack, xterm
R2L attack      Remote user attack        ftp_write, guess_passwd, phf, multihop, imap, spy, warezclient  named, xsnoop, xlock, sendmail, worm, snmpgetattack, snmpguess
Probing attack  Port attack               ipsweep, nmap, portsweep, satan                                 saint, mscan

5 Experimental Results and Analysis

5.1 Intrusion Detection Performance Evaluation of the Extreme Learning Machine Algorithm

To test the intrusion detection performance of the ELM algorithm, the kddcup.data_10_percent sample set is selected for experimentation. Experimental environment: i5 2.4 GHz CPU, 4 GB memory; Windows 8.1; MATLAB 2012a. When applying the ELM algorithm, the number of hidden nodes must be determined first, since it greatly influences the test accuracy and time. Therefore, this paper selects the optimal number of hidden nodes through experiments to improve the detection performance of the ELM algorithm. In the experiment, the sigmoid function is chosen as the hidden-layer activation function. 6000 samples were selected, of which 1000 were used as training samples and the

remaining 5000 were used as test samples. The ELM algorithm was applied to classify the samples, and the influence of the number of hidden nodes on the test accuracy and test time was analyzed to determine the optimal number of hidden nodes. The results are shown in Figs. 1 and 2.
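A hidden-node sweep of this kind can be sketched as below; the random 41-feature data merely mimics the shape of the KDD records and is not the real sample set:

```python
import time
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical stand-in for 1000 training / 5000 test records with 41 features
rng = np.random.default_rng(0)
Xtr, Xte = rng.normal(size=(1000, 41)), rng.normal(size=(5000, 41))
ytr = (Xtr[:, 0] > 0).astype(float)
yte = (Xte[:, 0] > 0).astype(float)

results = []  # (hidden nodes L, test accuracy, training time in seconds)
for L in (5, 10, 20, 30, 50):
    A = rng.uniform(-1, 1, (41, L))
    b = rng.uniform(-1, 1, L)
    t0 = time.perf_counter()
    beta = np.linalg.pinv(sigmoid(Xtr @ A + b)) @ ytr   # least-squares output weights
    elapsed = time.perf_counter() - t0
    pred = sigmoid(Xte @ A + b) @ beta > 0.5
    results.append((L, (pred == yte).mean(), elapsed))
```

Plotting accuracy and time against L reproduces the shape of the trade-off discussed below: accuracy saturates while time keeps growing with L.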

Fig. 1. Number of hidden nodes versus detection rate of the ELM algorithm

Fig. 2. Number of hidden nodes versus detection time of the ELM algorithm

From the experimental results in Fig. 1, it can be found that, overall, the greater the number of hidden nodes in the ELM algorithm, the higher the detection accuracy. Once the number of hidden nodes exceeds 30, the influence of additional hidden nodes on detection accuracy slows down and gradually stabilizes. From Fig. 2, it can be found that the detection time grows as the number of ELM hidden nodes increases. A comprehensive analysis of the above experiments shows that the experimental results are optimal when the number of hidden nodes is set to 30: the detection accuracy of the algorithm is high and the detection time is low.

5.2

Intrusion Detection Performance Analysis of PCA Dimension Reduction

The kddcup.data_10_percent sample set of the KDD CUP 99 data set is applied to verify the effect of PCA feature extraction in the experiments; the experimental environment is the same as in the previous section. Before PCA dimension reduction, the experimental data must be processed in advance, namely digitized and normalized. The PCA algorithm is then applied to reduce the dimension of the data set, the contributions of the relevant features are analyzed, and the main features are selected according to the size of their contributions. The analysis of the data contribution rate is shown in Fig. 3.

Fig. 3. PCA contribution degree

It can be found from Fig. 3 that, in the experiment performed with MATLAB, the contribution rates of the top ten principal components did not meet the requirement. Since MATLAB can display at most 10 principal components, the analysis must be continued. Table 2 shows the contribution of each component after PCA dimensionality reduction. After dimension reduction with the PCA algorithm, the contribution rate of the component with the highest contribution is 26.61%, and that of the second-highest

component is approximately 11.92%, and so on. So that the data after PCA dimensionality reduction still achieve a high detection effect, the cumulative contribution rate is used: all principal components up to a cumulative contribution greater than 85% are taken as the main components of the original sample set.

Table 2. Contribution rate and cumulative contribution rate of PCA principal component analysis

Principal component  Characteristic value  Contribution rate (%)  Cumulative contribution rate (%)
1                    10.39                 26.61                  26.66
2                    4.66                  11.92                  38.62
3                    3.38                  8.68                   47.29
4                    2.85                  7.35                   54.64
5                    1.84                  4.85                   59.39
6                    1.54                  3.96                   63.38
7                    1.16                  2.99                   69.25
8                    1.35                  2.91                   69.24
9                    1.03                  2.84                   72.02
10                   1.01                  2.60                   74.69

6 Conclusions

This paper studies the KDD CUP 99 intrusion detection data set and uses the ELM algorithm to carry out intrusion detection experiments on it. The features of the data set are analyzed by the principal component analysis (PCA) algorithm, and 10-dimensional features are selected for detection. The experiments show that the ELM algorithm has obvious advantages for intrusion detection, greatly shortening the detection time. After PCA dimensionality reduction, the experiments show that the detection rate is improved and the detection time is further shortened.

References 1. Helman, P., Liepins, G., Richards, W.: Foundations of intrusion detection. In: Proceedings of the Fifth Computer Security Foundations Workshop, pp. 114–120 (1992) 2. Jiang, J., Ma, H.: Summary of network security intrusion detection research. J. Softw. 11, 1460–1466 (2012) 3. Anderson, J.P.: Computer security threat monitoring and surveillance. Fort Washington: Pennsylvania 11(2), 65–68 (1980) 4. Helman, P., Liepins, G., Richards, W.: Foundations of intrusion detection. In: Proceedings of the Fifth Computer Security Foundations Workshop, pp. 114–120 (2002) 5. Huang, G.B., Zhu, Y.Q., Siew, C.K.: Extreme learning machine: a new learning scheme of feedforward neural networks. Proc. IJCNN 2, 985–990 (2004)


6. Huang, G.B., Chen, L.: Convex incremental extreme learning machine. Neuro Comput. 70, 3056–3062 (2007) 7. Huang, G.B., Chen, Y.Q., Babri, H.A.: Classification ability of single hidden layer feedforward neural networks. IEEE Trans. Neural Netw. 11(3), 799–801 (2014) 8. Mao, Y., Zhou, X.: Overview of feature selection algorithms. Pattern Recognit. AI 20(2), 211–218 (2007) 9. Jolliffe, I.T.: Principal Component Analysis. Springer, Berlin (2002) 10. KDD Cup 1999 Data: Information and Computer Science. University of California, Irivine. http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html

An Improved Medical Image Fusion Algorithm

Hui Li, Qiang Miao, and Hua Shen

Dalian Neusoft University of Information, Dalian, Liaoning 116023, China
[email protected]

Abstract. Medical imaging diagnosis has become widely used with the progress of technology and social life. High-quality medical image fusion technology can provide more effective diagnostic images. This paper presents an improved medical image fusion algorithm with three parts. The first is image median filtering; the second uses the modality independent neighborhood descriptor to realize image registration; the last is the medical image fusion step, comprising the wavelet transform, weighted average, and IHS algorithms. The paper uses CT, MRI and SPECT images and evaluates the results with entropy, PSNR, RMSE and average gradient. The results show that the fused medical image carries rich medical information and is useful for different medical image fusion scenarios.

Keywords: Medical image fusion · Image registration · Wavelet transform · Weighted average · IHS

1 Introduction of Medical Image Fusion

In clinical diagnosis, a single-modality medical image carries too little information and therefore cannot provide comprehensive and useful information. Multi-modality imaging can describe the human body from different angles [1]. A fused multi-modality medical image can show the information of several images in one image: it combines useful information, such as anatomic and functional information, into an image with richer and more practical information, which provides an important reference for clinical diagnosis and medical treatment.

2 Design of the Improved Medical Image Fusion Algorithm

The improved medical image fusion algorithm presented in this paper is shown in Fig. 1. The first step is image pre-processing, followed by image registration through feature extraction, and finally one of three image fusion algorithms.

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1593–1601, 2019. https://doi.org/10.1007/978-981-13-3648-5_206


Fig. 1. Flow of improved medical fusion algorithm

2.1 Median Filtering

Median filtering is used to filter noise from the image. A sliding window containing an odd number of points is chosen and the gray values inside it are sorted [2]; the median value is assigned to the central pixel of the window. Assume the input image is f_ij and the output image is g_ij; then the formula of the median filtering is shown in (1), where A stands for the window and {f_ij} is the two-dimensional data sequence of the image:

g_ij = med_A { f_ij }   (1)
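Eq. (1) corresponds directly to a windowed median; a minimal sketch using SciPy's `median_filter` (the toy image with a single impulse-noise pixel is hypothetical):

```python
import numpy as np
from scipy.ndimage import median_filter

# Toy input f with one impulse-noise pixel
f = np.array([[10, 10, 10, 10],
              [10, 255, 10, 10],   # the noisy pixel
              [10, 10, 10, 10],
              [10, 10, 10, 10]])

# g_ij = med_A { f_ij } with a 3x3 odd-sized window A
g = median_filter(f, size=3)
```

Every 3×3 window contains the outlier at most once, so the median suppresses it while leaving the uniform background unchanged.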

2.2

Modality Independent Neighborhood Descriptor Algorithm

The modality independent neighborhood descriptor (MIND) algorithm uses an independent neighborhood descriptor to define the similarity between images on the basis of the sum of squared differences [3]. Image self-similarity is first used to construct an image descriptor, and the self-similarity itself is defined through a Gaussian-weighted patch distance. The flow of the MIND algorithm is shown in Fig. 2.

Fig. 2. Flow of MIND algorithm: image after filtering → overlap estimation → feature extraction → image matching → image after registration

In the algorithm, a multidimensional image descriptor represents the different image structures of a local area. The formula is shown in (2):

MIND(I, x, r) = (1/n) exp(−Dp(I, x, x + r) / V(I, x)),  r ∈ R   (2)

where Dp is the patch distance, V is the estimate of the variance, R is the spatial search area, n is a normalization constant, and r ∈ R defines the search area. Through this calculation, the position x of each image can be represented by a vector of size |R| relating two voxels of one image. A distance measure Dp(x1, x2) is defined to measure the sum of squared differences between patches.

An Improved Medical Image Fusion Algorithm

1595

Define a voxel x = (x, y, z)ᵀ. After image registration the voxel becomes x′, parameterized by q = (q1, …, q12); the formula is shown in (3):

u = x′ − x = q1·x + q2·y + q3·z + q10 − x
v = y′ − y = q4·x + q5·y + q6·z + q11 − y
w = z′ − z = q7·x + q8·y + q9·z + q12 − z   (3)
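A much-simplified 2-D sketch of the descriptor in Eq. (2) — shifting the image by each offset r, taking a Gaussian-weighted squared difference as Dp, estimating V as the mean over offsets, and normalizing — might look like this. The offset set, sigma, and normalization below are illustrative choices, not the authors' settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mind_descriptor(img, offsets=((0, 1), (1, 0), (0, -1), (-1, 0)), sigma=0.5):
    """Toy MIND: one response map per offset r in the search area R."""
    dp = []
    for dy, dx in offsets:
        shifted = np.roll(img, shift=(dy, dx), axis=(0, 1))
        # Dp(I, x, x+r): Gaussian-weighted squared patch difference
        dp.append(gaussian_filter((img - shifted) ** 2, sigma))
    dp = np.stack(dp)                    # |R| distance maps
    v = dp.mean(axis=0) + 1e-6           # variance estimate V(I, x)
    m = np.exp(-dp / v)
    return m / m.max(axis=0)             # normalize so the max response is 1

img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0                    # simple square structure
d = mind_descriptor(img)
```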

2.3 Medical Image Fusion Algorithm

2.3.1 IHS Transform Algorithm

The IHS (Intensity, Hue, Saturation) fusion algorithm can retain most of the spectral information. The first step is to extract the I, H and S components from the multispectral image with the IHS transform. Then I is replaced by I′ to obtain the I′HS representation. Finally, the fused image is obtained through the inverse IHS transform [4]. The IHS algorithm needs at least one multispectral image; in this paper, we choose the SPECT image. The flow of the fusion algorithm is shown in Fig. 3.

Fig. 3. Flow of IHS transform algorithm
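As a rough illustration of the intensity-substitution idea (not the exact IHS transform used in the paper), one can approximate the intensity I by the channel mean and swap in a structural grayscale image:

```python
import numpy as np

def ihs_like_fuse(rgb, gray):
    """Replace the (approximate) intensity of a color image with a grayscale image."""
    i = rgb.mean(axis=2, keepdims=True)              # crude intensity component I
    return np.clip(rgb - i + gray[..., None], 0.0, 1.0)  # substitute I' and clip

# Hypothetical inputs: a color (e.g. SPECT-like) image and a structural image in [0, 1]
rgb = np.random.default_rng(0).random((8, 8, 3))
gray = np.full((8, 8), 0.5)
f = ihs_like_fuse(rgb, gray)
```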

2.3.2 Weighted Average Algorithm

The weighted average fusion algorithm obtains the new image through a weighted average of the pixel gray values of the source images [5]. Assume the two source images are A and B; with equal weights, the fused pixel value is A·50% + B·50%. The formula of the weighted average algorithm is shown in (4), and the flow of the weighted average fusion algorithm is shown in Fig. 4.

Fig. 4. Flow of weighted average algorithm: source images → choose the same weight → assignment matching → weighted mean → image after fusion

f3(x, y) = ω1·f1(x, y) + ω2·f2(x, y)   (4)


2.3.3 Wavelet Transform Algorithm

The wavelet transform algorithm can capture more edge information, and its fusion effect is more suitable for human vision. In this paper, two wavelet decomposition levels are applied to each source image to obtain the approximations and details of each level; the coefficients are then merged and the images fused according to high- and low-frequency rules, and the fused image is finally obtained through the inverse wavelet transform. The principle of the algorithm is shown in Fig. 5.

Fig. 5. Principle of wavelet transform algorithm

The first step of the wavelet fusion algorithm is to calculate the matching degree of the two source images. If the matching degree is not smaller than 0.7, the weighted average rule is chosen because the corresponding local energies are close; otherwise the wavelet coefficient with the larger local energy is selected [6]. The flow of the algorithm is shown in Fig. 6.

Fig. 6. Flow of wavelet transform algorithm
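The decomposition and fusion rule can be sketched with a one-level Haar transform (a simplification of the paper's two-level scheme: here the matching degree is computed per sub-band rather than per local window, and all names are mine):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar decomposition into approximation (LL) and
    detail (LH, HL, HH) sub-bands; img must have even dimensions."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 4, (a - b + c - d) / 4,
            (a + b - c - d) / 4, (a - b - c + d) / 4)

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    out = np.zeros((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse_subband(ca, cb, threshold=0.7):
    """Paper's rule, per sub-band: if the matching degree is at least 0.7
    (energies are close), average; otherwise keep the larger-energy
    coefficient at each position."""
    ea, eb = np.sum(ca ** 2), np.sum(cb ** 2)
    match = 2 * np.sum(ca * cb) / (ea + eb + 1e-12)
    if match >= threshold:
        return 0.5 * (ca + cb)
    return np.where(ca ** 2 >= cb ** 2, ca, cb)

def wavelet_fusion(img_a, img_b):
    bands = zip(haar_dwt2(img_a), haar_dwt2(img_b))
    return haar_idwt2(*(fuse_subband(ca, cb) for ca, cb in bands))
```

Fusing an image with itself reproduces the image exactly, which is a quick sanity check of both the transform pair and the fusion rule.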

2.3.4 Evaluation of the Medical Fusion Algorithm
This paper uses entropy, peak signal-to-noise ratio (PSNR), root mean square error (RMSE) and average gradient (grad) to evaluate medical image fusion quality.


The fusion quality is better when the entropy, PSNR and grad are larger and the RMSE is smaller. The entropy is the average information content per message; its formula is shown in (5):

E(X) = -\sum_i p(x_i) \log(p(x_i))    (5)

In the formula, the summation runs over the pixel gray values x_i from 0 to 255. Formula (6) defines the RMSE between the fused image and the standard reference image:

RMSE = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} (X_{o,i} - X_{m,i})^2 }    (6)

Formula (7) defines the PSNR:

PSNR = 10 \lg \frac{255 \times 255}{\mathrm{RMSE}^2}    (7)

The grad value represents the image sharpness; its formula is shown in (8):

grad = \frac{1}{(M-1)(N-1)} \sum_{i=1}^{M-1} \sum_{j=1}^{N-1} \sqrt{ \frac{(F(i,j)-F(i+1,j))^2 + (F(i,j)-F(i,j+1))^2}{2} }    (8)
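The four indicators of Eqs. (5)-(8) can be sketched as follows (assuming 8-bit images and a base-2 logarithm for the entropy, which the paper does not state explicitly):

```python
import numpy as np

def entropy(img):
    """Eq. (5): Shannon entropy over the 256 gray levels."""
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def rmse(ref, fused):
    """Eq. (6): root mean square error against a reference image."""
    return np.sqrt(np.mean((ref.astype(float) - fused.astype(float)) ** 2))

def psnr(ref, fused):
    """Eq. (7): PSNR = 10 * lg(255^2 / RMSE^2)."""
    return 10 * np.log10(255.0 ** 2 / rmse(ref, fused) ** 2)

def avg_gradient(img):
    """Eq. (8): mean of the root of averaged squared forward differences."""
    f = img.astype(float)
    dx = f[:-1, :-1] - f[1:, :-1]   # F(i,j) - F(i+1,j)
    dy = f[:-1, :-1] - f[:-1, 1:]   # F(i,j) - F(i,j+1)
    return np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2))
```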

3 Simulation of the Improved Medical Image Fusion Algorithm

The algorithm simulation is based on MATLAB. The CT and MRI image registration after filtering is shown in Fig. 7: the first image is the source CT image after filtering, the second is the source MRI image after filtering, and the third is the registration of the two. The medical fusion simulation interface, designed with the MATLAB Guide editor, is shown in Fig. 8. The user can choose among the three fusion algorithms and obtain the simulation results [7].


Fig. 7. Simulation of image registration

Fig. 8. Simulation interface

4 Test of the Improved Medical Image Fusion Algorithm

4.1 Test of CT/MRI Image Fusion Algorithm

This paper uses two groups of test images with a size of 256 × 256 and gray levels from 0 to 255. The lesion location is marked with a border in the images below. The first test images are CT and MRI brain images. The wavelet transform and weighted average fusion images are shown in Fig. 9: the first two images are the source CT and MRI images, the third is the wavelet transform fusion image, and the fourth is the weighted average fusion image [8].

(a) CT; (b) MRI; (c) wavelet transform; (d) weighted average

Fig. 9. CT/MRI fusion image

The evaluation of the fusion algorithms is shown in Table 1. The evaluating indicators are entropy (imen), peak signal-to-noise ratio (PSNR), root mean square error (MSE) and average gradient (AVE).

Table 1. Evaluation of CT/MRI fusion

Fusion algorithm    imen    PSNR     MSE       AVE
Weighted average    4.3783  27.4785  116.2070  0.0395
Wavelet transform   4.4223  27.5351  114.7540  0.0721

The evaluation results show that the imen, PSNR and AVE values of the wavelet transform algorithm are larger, and its MSE value is smaller, than those of the weighted average fusion algorithm. This means the wavelet transform algorithm retains more information and gives a better fusion result.

4.2 Test of CT/SPECT Image Fusion Algorithm

The second test images are CT and SPECT brain tumor images. The wavelet transform and IHS transform fusion images are shown in Fig. 10: the first two images are the source CT and SPECT images, the third is the wavelet transform fusion image, and the fourth is the IHS transform fusion image. The evaluation of the fusion algorithms is shown in Table 2.

(a) CT; (b) SPECT; (c) wavelet transform; (d) IHS transform

Fig. 10. CT/SPECT fusion image

Table 2. Evaluation of CT/SPECT fusion

Fusion algorithm    imen    PSNR     MSE       AVEGRAD
Wavelet transform   4.3172  27.3652  116.3170  0.0311
IHS transform       4.4323  27.5456  114.6640  0.0721

The evaluation results show that the imen, PSNR and AVEGRAD values of the IHS transform algorithm are larger, and its MSE value is smaller, than those of the wavelet transform fusion. This means the IHS transform algorithm retains more source image information and provides a medical image with a precise tumor location [9, 10].

5 Conclusion

Medical image fusion has become popular in recent years because the common medical imaging modalities each have limitations: CT images have high resolution but display soft tissue lesions poorly; MRI images display soft tissue lesions well but have lower resolution; SPECT images provide excellent functional and metabolic information but poor anatomical information. The simulation shows that the proposed fusion algorithm obtains better fusion results and provides rich medical information.

Acknowledgements. This work was, in part, supported by the Liaoning Province Natural Science Fund (Grant No. 20170540052) and the Liaoning Province Science Fund for Research and Development Program (Grant Nos. 2017225078, 2017225079).

References
1. Glatard, T., Lartizien, C., Gibaud, B.: A virtual imaging platform for multi-modality medical image simulation. IEEE Trans. Med. Imaging 32(1), 110 (2013)
2. Yang, Y., Huang, S., Gao, J.: Multi-focus image fusion using an effective discrete wavelet transform based algorithm. Meas. Sci. Rev. 14(2), 102–108 (2014)
3. Heinrich, M.P., Jenkinson, M., Bhushan, M.: MIND: modality independent neighbourhood descriptor for multi-modal deformable registration. Med. Image Anal. 16(7), 1423–1435 (2012)
4. Tongying, L.I., Zhu, H.: Study on fusion and recognition method of human brain medical image under MATLAB environment. China Sciencepaper 19(3), 4–19 (2016)
5. Bai, C., Liu, H.: Image fusion based on wavelet transform with MATLAB. Science Mosaic (2014)
6. Shen, Z.: Forensic image fusion in wavelet domain based on MATLAB GUI. Forensic Science & Technology (2016)
7. Lei, J.-J., Yang, Z., Liu, G., Guo, J.: Review of noise robust speech recognition. Appl. Res. Comput. 26 (2009)
8. Wu, J., Zhao, J., Li, Q.: The IMCRA algorithm based on wiener filtering. J. Xian Univ. Posts Telecommun. 22(05), 73–77 (2017)
9. Chen, H., Qiu, X.-H.: Research on speech enhancement of improved spectral subtraction algorithm. Comput. Technol. Dev. 4, 17 (2014)
10. Chen, C.-B.: Research and Development in the Home Environment for Speech Enhancement System. South China University of Technology (2015)

Construction Study on the Evaluation Model of Business Environment of Cross-Border E-Commerce Enterprises Zhao Yuxin(&), Zhuang Meinan, and Wang Yatong Dalian Neusoft Institute of Information, Dalian, Liaoning 116023, People’s Republic of China [email protected]

Abstract. In the face of the explosive growth of cross-border e-commerce, China has gradually focused on the construction of the business environment. This paper constructs a business environment evaluation index system for cross-border e-commerce enterprises in Liaoning Province and builds the corresponding evaluation model based on the analytic hierarchy process (AHP), researching and analysing the evaluation factors through a large body of literature and combining qualitative and quantitative analysis. This paper finds that the first priority in optimizing the business environment of cross-border e-commerce enterprises in Liaoning Province is the construction of the government policy environment, followed by the construction of the social service environment and of the legal supervision environment.

Keywords: Business environment · Cross-border e-commerce · Evaluation model

1 Background and Significance of the Study

In May 2017, the China E-Commerce Research Center released the China E-Commerce Market Data Monitoring Report for 2016, which shows that cross-border e-commerce market transactions in China reached 6.7 trillion yuan in 2016, an increase of 24% over the same period of the previous year. The reported data of recent years show that, with the introduction of the concept of "Internet +" and of national policies encouraging cross-border e-commerce development, "increased penetration of e-commerce + accelerated transformation of traditional foreign trade" has driven the explosive growth of cross-border e-commerce; cross-border e-commerce transactions are growing at a rate of about 30% a year. In the face of this explosive development, China has gradually focused on the construction of the business environment. On June 13, 2017, Premier Li Keqiang emphasized that the business environment is productivity at a national teleconference on deepening the reform of "decentralization, combination of decentralization and management, and optimization of services" [1]. In September of the same year, the Beijing Municipal Committee of the Communist Party of China issued the "Implementation Plan for the First Action to Reform and Optimize the Business Environment," pointing out that we
© Springer Nature Singapore Pte Ltd. 2019. J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1602–1611, 2019. https://doi.org/10.1007/978-981-13-3648-5_207


must create a more open investment environment, a more convenient trade environment, a more favorable production and operation environment, a more refined talent development environment, and a more equitable legal environment. To sum up, how to analyze and evaluate the business environment and build a good cross-border e-commerce business environment in Liaoning Province has become an urgent problem to be solved [2]. The purpose of this paper is to analyze the business environment of cross-border e-commerce enterprises in Liaoning Province and to study the factors that affect it, so as to set up a scientific and practical evaluation index system and model for the business environment of cross-border e-commerce enterprises in Liaoning Province, and to put forward suggestions and improvement strategies combined with examples.

2 Overview of the Research on the Business Environment

The term "business environment" was originally derived from the World Bank Group's business environment project [3]. The project, launched in 2002, aims to survey domestic SMEs and assess the regulations applicable during the business cycle of an enterprise. It mainly investigates the laws and regulations that improve or hinder the business activities of an enterprise, assesses the regulations applicable over the enterprise life cycle, and discusses the effects of legal supervision and regulation of enterprises on social productive forces, unemployment, economic growth and poverty [4]. The World Bank defines the business environment as the time and cost an enterprise needs to comply with policies and regulations in starting a business, operating, trading, paying taxes, going bankrupt, and enforcing contracts. Many factors influence the business environment of enterprises [5]. Widely mentioned in the relevant research is the business environment index designed by the World Bank to measure the objective environment in which enterprises operate in various countries; its ranking represents the degree of ease of doing business in each country [6]. With the rapid development of networks and the economy, differences in regional characteristics, such as political systems, cultural differences and business ideas, have gradually become a new focus. Based on their own countries' political, economic and cultural backgrounds and development status, many scholars have carried out research on the factors affecting the business environment [7].
For example, Hanil Jeong believes that the overall environment of business operations is composed of parts, each of which exists independently and has a specific role, and which affect the production and business activities of the enterprise at different stages of development; in the long run, compared with the internal environment, the influence of the surrounding environment on the enterprise is greater and wider. Ricardo Pinheiro-Alves et al. [8] found that the business environment assessment index is widely used in the location decisions of multinational corporations. The correlation between different variables in the business environment index was then verified by factor analysis and Cronbach coefficient analysis. The results show that the investment


decisions of multinational corporations will be affected by the business environment [9]. Xu Ke and Wang Ying studied China's business environment at the enterprise level based on survey data from the World Bank; the results show that the main problems facing China's business environment are difficulty of financial access, the low education level of human resources, competition from the informal sector and a high tax burden. Judging from current research on the business environment, scholars at home and abroad have mostly studied one or more aspects of the business environment of small and medium-sized enterprises in a country or region [10]. However, there are few studies that analyze and evaluate the business environment in combination with the business development of cross-border e-commerce enterprises, and some scholars only put forward opinions and suggestions on the development of cross-border e-commerce from a theoretical point of view. What is needed, and what few studies provide, is to take Liaoning Province as the research object, combine the province's cross-border e-commerce development situation with the current situation of its enterprises, construct a suitable evaluation system and model for the cross-border e-commerce business environment, and carry out quantitative analysis with empirical research methods.

3 The Construction of Business Evaluation Index System

3.1 Study and Analysis of Business Environment Assessment Factors

In researching and analyzing the cross-border e-commerce business environment assessment factors for Liaoning Province, we collated the literature mainly along three lines: assessment factors related to the business environment, cross-border e-commerce evaluation factors, and other evaluation factors. The literature research shows that there are few studies on the business environment assessment of cross-border e-commerce enterprises in Liaoning Province; although a few documents mention specific areas, their selection of indicators lacks local characteristics. Nevertheless, scholars have conducted in-depth studies on the impact of the business environment, the World Bank's business environment assessment, the factors affecting the development of cross-border e-commerce, and the relationship between cross-border e-commerce and the business environment, which provides guidance for the selection of indicators in this paper. We preliminarily combed and compared the evaluation factors of the business environment of cross-border e-commerce enterprises in Liaoning Province, based on the principles of being scientific, comprehensive, systematic and practical, and combined with the development of cross-border e-commerce in Liaoning Province, yielding a total of 5 dimensions and 40 original influencing factors.

3.2 Establishment of Business Environment Assessment Index System

Based on the above mentioned 5 dimensions and 40 influencing factors, we selected 10 industry experts in cross-border e-commerce industry and relevant workers with

experience in cross-border e-commerce enterprises in Liaoning Province as interviewees for unstructured expert interviews. We simplified and classified the single evaluation indexes that impact the business environment of cross-border e-commerce in Liaoning Province from the two aspects of enterprise ownership and importance, and clarified the evaluation dimensions and measurement indicators of the business environment. First, we provided experts with basic information on the indicators and sought their views; second, we analyzed and summarized the experts' views and fed the statistical results back to them; finally, the experts revised their views according to the feedback, each round of scoring referring to the results of the previous round. The conclusions of the expert interviews were formed after three rounds of consultation and feedback. We selected 28 evaluation factors to measure the business environment of cross-border e-commerce enterprises in Liaoning Province along the five dimensions of government policy environment, legal supervision environment, social service environment, financing and payment environment, and market economy environment. The resulting business environment assessment system of cross-border e-commerce in Liaoning Province is shown in Table 1.

4 Construction of Business Environment Assessment Model

4.1 Determination of Model Evaluation Objectives

The purpose of this model is to study the factors that affect the business environment of cross-border e-commerce enterprises and to analyze the business environment of cross-border e-commerce enterprises in Liaoning Province, and then to put forward strategies to improve it in combination with the province's cross-border e-commerce development. In summary, the model is built on the evaluation index system of the business environment of cross-border e-commerce enterprises; its evaluation object is the business environment of cross-border e-commerce enterprises in Liaoning Province, and the objective of the evaluation is to measure this business environment scientifically, comprehensively, systematically and practically.

4.2 Method Selection of Model Construction

In order to select appropriate model construction methods, we sorted out and compared the basic meaning, advantages and disadvantages of AHP, grey system theory and the fuzzy comprehensive evaluation method. The literature shows that both AHP and fuzzy comprehensive evaluation can transform qualitative evaluation into quantitative evaluation; they have clear trains of thought, are simple and easy to operate, give clear results, and are strongly systematic. In addition, they can solve non-deterministic problems to a certain

Table 1. The evaluation system of the business environment of cross-border e-commerce

Primary indicator: Government policy environment
  Secondary indexes: Government decision-making transparency; Degree of support for government decision-making; The degree of implementation of government decisions; The state of streamlining government and delegating authorities; Preferential status of tax policy
Primary indicator: Legal regulatory environment
  Secondary indexes: Extent of protection of intellectual property rights; The perfection degree of the commercial dispute settlement mechanism; Degree of perfection of the legal system of cross-border e-commerce; Supervision of relevant laws and regulations; The construction degree of credit mechanism
Primary indicator: Social service environment
  Secondary indexes: Customs clearance efficiency; Port efficiency; The extent to which trade barriers are not prevalent; Support degree of information network service; Support degree of relevant education and training; Degree of perfection of matching service system; Cross-border logistics cost; Transport time for cross-border logistics
Primary indicator: Financing payment environment
  Secondary indexes: Difficulty level of bank loan; Degree of perfection of guarantee system; The guarantee degree of credit insurance service; The breadth of financing channels; Payment security; Convenience of payment system; Reasonableness of payment and settlement
Primary indicator: Market economy environment
  Secondary indexes: Market access system; Convenience of registration system; Market management order

extent. In summary, at the level of model construction, the analytic hierarchy process (AHP) is selected to construct the evaluation model and determine the weights of the evaluation factors; at the level of model application, the fuzzy comprehensive evaluation method is chosen to evaluate enterprises concretely.

4.3 Constructing Hierarchical Structure Model

The business environment assessment system of cross-border e-commerce enterprises in Liaoning Province is a complex system composed of multi-level, multi-criteria and

multi-index. Based on the business environment evaluation system, we use AHP to build the evaluation model in three levels: target layer, criterion layer and indicator layer (Fig. 1). The target layer is the overall evaluation of the business environment of cross-border e-commerce enterprises in Liaoning Province. The criterion layer evaluates the business environment from five aspects: government policy, legal supervision, social services, financing and payments, and the market economy. The indicator layer is a further decomposition of the criterion layer; its evaluation indexes form the third level of the evaluation system and comprise 28 items.

4.4 Establishment of Judgment Matrix and Consistency Test

The analytic hierarchy process (AHP) is based on judgments of the relative importance of each factor at each level. Based on the hierarchy model, each indicator in the upper level is used as the benchmark, the relative importance of the indicators in the level below it is analyzed by pairwise comparison, and the judgment matrix is obtained. The pairwise comparisons use the 1-9 scale method, which takes the integers 1-9 and their reciprocals. This paper carried out an expert investigation by the method of pairwise comparison, taking experts in the province's cross-border e-commerce industry and relevant workers with experience in cross-border e-commerce enterprises as the subjects, and constructed the judgment matrices from their scores. The judgment matrices for A-B, B1-C, B2-C, B3-C, B4-C and B5-C are shown in Tables 2, 3, 4, 5, 6 and 7.

Table 2. Judgement matrix of A-B

A    B1   B2   B3   B4   B5
B1   1    2    2    3    4
B2   1/2  1    1/2  3    3
B3   1/2  2    1    2    3
B4   1/3  1/3  1/2  1    2
B5   1/4  1/3  1/3  1/2  1

Table 3. Judgement matrix of B1-C

B1   C1   C2   C3   C4   C5
C1   1    2    1/2  3    1/3
C2   1/2  1    1/3  2    1/3
C3   2    3    1    4    1/2
C4   1/3  1/2  1/4  1    1/4
C5   3    3    2    4    1

Table 4. Judgement matrix of B2-C

B2   C6   C7   C8   C9   C10
C6   1    3    2    2    3
C7   1/3  1    2    1/2  3
C8   1/2  1/2  1    1/3  2
C9   1/2  2    3    1    4
C10  1/3  1/3  1/2  1/4  1

Table 5. Judgement matrix of B3-C

B3   C11  C12  C13  C14  C15  C16  C17  C18
C11  1    2    8    5    6    4    3    3
C12  1/2  1    9    7    8    4    2    3
C13  1/8  1/9  1    1/3  1/2  1/3  1/5  1/5
C14  1/5  1/7  3    1    1/2  1/3  1/5  1/4
C15  1/6  1/8  2    2    1    1/3  1/5  1/4
C16  1/4  1/4  3    3    3    1    1/4  1/3
C17  1/3  1/2  5    5    5    4    1    1/2
C18  1/3  1/3  5    4    4    3    2    1

Table 6. Judgement matrix of B4-C

B4   C19  C20  C21  C22  C23  C24  C25
C19  1    7    8    3    4    1/4  1/3
C20  1/7  1    2    1/4  1/3  1/8  1/7
C21  1/8  1/2  1    1/5  1/4  1/8  1/7
C22  1/3  4    5    1    2    1/4  1/3
C23  1/4  3    4    1/2  1    1/5  1/4
C24  4    8    8    4    5    1    2
C25  3    7    7    3    4    1/2  1

Table 7. Judgement matrix of B5-C

B5   C26  C27  C28
C26  1    1/2  2
C27  2    1    2
C28  1/2  1/2  1
Whether a judgment matrix in AHP has satisfactory consistency is measured by CR. When CR = CI/RI < 0.1, the judgment matrix is considered to have satisfactory consistency; otherwise the judgment matrix must be adjusted until it does. Because there are many evaluation factors in the indicator layer, manual calculation is error-prone, so this paper uses the Yaahp software to analyze the data. Yaahp is an AHP-FCE analysis tool that provides and tightly integrates the functions of the analytic hierarchy process (AHP) and fuzzy comprehensive evaluation (FCE). Table 8 shows the detailed results of the consistency test obtained with Yaahp, in which the consistency index CI = (λmax − n)/(n − 1) and CR = CI/RI.

Table 8. Consistency test results

Judgement matrix   λmax    CI      RI (order)   CR
A-B                5.1546  0.0387  1.1200 (5)   0.0345
B1-C               5.1146  0.0287  1.1200 (5)   0.0256
B2-C               5.2136  0.0534  1.1200 (5)   0.0477
B3-C               8.5223  0.0747  1.4100 (8)   0.0529
B4-C               7.4384  0.0731  1.3600 (7)   0.0537
B5-C               3.0536  0.0268  0.5200 (3)   0.0516
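The figures in Table 8 can be reproduced with a few lines of linear algebra. The sketch below (my own code, using numpy's eigen-decomposition rather than Yaahp) computes the principal-eigenvector weights, λmax and the consistency ratio for the A-B matrix of Table 2; the RI values are the standard random indices matching Table 8:

```python
import numpy as np

RI = {3: 0.52, 5: 1.12, 7: 1.36, 8: 1.41}  # random indices as used in Table 8

def ahp_weights(matrix):
    """Principal-eigenvector weights of a pairwise judgment matrix,
    with CI = (lambda_max - n) / (n - 1) and CR = CI / RI."""
    M = np.asarray(matrix, dtype=float)
    n = M.shape[0]
    eigvals, eigvecs = np.linalg.eig(M)
    k = np.argmax(eigvals.real)
    lam = eigvals.real[k]              # lambda_max
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                    # normalized weight vector
    cr = (lam - n) / (n - 1) / RI[n]
    return w, lam, cr

# Judgment matrix A-B (criterion layer) from Table 2.
A_B = [[1,   2,   2,   3,   4],
       [1/2, 1,   1/2, 3,   3],
       [1/2, 2,   1,   2,   3],
       [1/3, 1/3, 1/2, 1,   2],
       [1/4, 1/3, 1/3, 1/2, 1]]
w, lam, cr = ahp_weights(A_B)  # lam ~ 5.15, cr ~ 0.035, w ~ Table 9 weights
```

With this matrix the computed CR agrees with the 0.0345 reported for A-B, and the weight ordering matches the criterion-layer weights of Table 9 (B1 > B3 > B2 > B4 > B5).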

In summary, the CR of judgment matrix A-B is 0.0345, that of B1-C is 0.0256, that of B2-C is 0.0477, that of B3-C is 0.0529, that of B4-C is 0.0537, and that of B5-C is 0.0516. All results are less than 0.1, so all matrices pass the consistency test and have satisfactory consistency.

4.5 Weight Calculation and Hierarchy Order of Influencing Factors

The consistency test of the judgment matrices proves the validity of the data. The emphasis the interviewees placed on each evaluation factor is fully reflected in the weight values of the evaluation factors. From Table 9 we find that, at the criterion layer, the government policy environment ranks first with a weight of 0.3661; the social service environment ranks second with 0.2474; the legal regulatory environment is third with 0.2050; next is the financing payment environment with 0.1100; and last is the market economy environment with 0.0716. The data show that the government policy environment plays the most important role in evaluating the business environment of cross-border e-commerce enterprises in Liaoning Province, followed by the social service environment, then the legal regulatory environment and the financing and payment environment, while the market economy environment is least important. Among the secondary indicators, the preferential status of tax policy is the most important, followed by the degree of implementation of government policy. In addition, the degree of protection of intellectual property rights, customs clearance efficiency, port efficiency and transparency in government decision-making also rank relatively high. But the importance of the level of support for relevant education, the

Table 9. Weight table of business environment assessment model

Government policy environment (B1), weight 0.3661:
  Transparency in government decision-making (C1): 0.0603
  Degree of government policy support (C2): 0.0388
  The degree of government policy implementation (C3): 0.0987
  The state of streamlining government and delegating authorities (C4): 0.0243
  Preferential status of tax policy (C5): 0.1439
Legal regulatory environment (B2), weight 0.2050:
  Extent of protection of intellectual property rights (C6): 0.0740
  The perfection degree of the commercial dispute settlement mechanism (C7): 0.0348
  Degree of perfection of the legal system of cross-border e-commerce (C8): 0.0245
  Supervision of relevant laws and regulations (C9): 0.0568
  The construction degree of credit mechanism (C10): 0.0149
Social service environment (B3), weight 0.2474:
  Customs clearance efficiency (C11): 0.0733
  Port efficiency (C12): 0.0626
  The extent to which trade barriers are not prevalent (C13): 0.0058
  Support degree of information network service (C14): 0.0089
  Support degree of relevant education and training (C15): 0.0094
  Degree of perfection of matching service system (C16): 0.0169
  Cross-border logistics cost (C17): 0.0348
  Transport time for cross-border logistics (C18): 0.0356
Financing payment environment (B4), weight 0.1100:
  Difficulty level of bank loan (C19): 0.0191
  Degree of perfection of guarantee system (C20): 0.0034
  The guarantee degree of credit insurance service (C21): 0.0026
  The breadth of financing channels (C22): 0.0103
  Payment security (C23): 0.0071
  Convenience of payment system (C24): 0.0398
  Reasonableness of payment and settlement (C25): 0.0277
Market economy environment (B5), weight 0.0716:
  Market access system (C26): 0.0222
  Convenience of registration system (C27): 0.0353
  Market management order (C28): 0.0140

supportiveness of information network service, payment security, the extent to which trade barriers are not prevalent, the degree of perfection of the guarantee system and the guarantee degree of credit insurance service is relatively low. As shown in Table 9, the criterion-layer and indicator-layer weights obtained with the Yaahp software ultimately yield an assessment model of the business environment of cross-border e-commerce enterprises in Liaoning Province that contains the weight of each factor. In summary, the criterion-layer indicators in the evaluation model, ordered from high to low importance, are: government policy environment, social service environment, legal supervision environment, financing and payment environment, and market economy environment. Among the specific indicators, the preferential status of tax policies is most important in the government policy environment; customs clearance efficiency is most important in the social service environment; the degree of protection of intellectual property is most important in the legal regulatory environment; the convenience of the payment system carries the highest weight in the financing and payment environment; and the convenience of the registration system is most important in the market economy environment.

References
1. Huang, Y.: Study on business environment assessment system based on business environment report. Enterp. Reform Manage. 16, 93 (2015). (in Chinese)
2. Jeong, H., Cho, H., Jones, A.: Business process models for integrated supply chain planning in open business environment. J. Serv. Sci. Manage. 1, 1–13 (2012)
3. Pinheiro-Alves, R.: The ease of doing business index as a tool for investment location decisions. Econ. Lett. 1, 66–70 (2012)
4. Xu, K., Wang, Y.: Re-understanding of China's business environment in the post-crisis era: based on the World Bank's empirical analysis of the survey data of 2,700 private enterprises in China. Reform. Strategy 7, 119–123 (2014). (in Chinese)
5. Sun, T.: Research on Taxation of Cross-Border E-Commerce Transactions. Guangdong University of Foreign Studies (2017). (in Chinese)
6. Li, X., Wang, Y.: Service function integration and optimization of cross-border logistics enterprises under the multi-mode of cross-border e-commerce. J. Commer. Econ. 5, 78–80 (2017). (in Chinese)
7. Guo, H.: On the countermeasures of cross-border e-commerce development from the angle of industrial clusters: taking Hebei Province as the example. China Bus. Market 5, 55–56 (2017). (in Chinese)
8. Ding, T.: Research on the development of cross-border e-commerce in China under the "one belt and one road" strategy. Manage. Adm. 7, 101–102 (2017). (in Chinese)
9. Yang, T.: Study on the establishment of business environment evaluation index system: based on the comparative analysis of the provinces of Jiangsu, Zhejiang and Guangdong. J. Commer. Econ. 13, 28–29 (2015). (in Chinese)
10. Guo, J., Liu, Y.: Study on ways to improve the international competitiveness of cross-border e-commerce logistics industry in Hangzhou: based on fuzzy analytic hierarchy process. China J. Commer. 12, 5–7 (2017). (in Chinese)

Research on Information Construction of Enterprise Product Design and Manufacturing Platform

Yan Ningning

School of Management and Economics, Jingdezhen Ceramic Institute, Jingdezhen, China
[email protected]

Abstract. To develop and grow, an enterprise must strengthen internal management, and that management rests on data and information of many kinds. Now that software and hardware technologies are relatively mature, only with information management tools can an enterprise meet the practical application needs of finance, logistics, production, quality management, and other functions. The results show that enterprises should focus on scientific and technological innovation, seize the opportunity, accelerate the absorption of the design and manufacturing technology of foreign companies, and accumulate advanced design technology and information integration technology to form an integrated product design and manufacturing development system with independent intellectual property rights, thereby digitalizing the design means and design process, shortening the product development cycle, and further improving the product design and innovation ability of enterprises. This paper analyzes how to build three technical platforms of design, manufacturing and products at an internationally advanced level, and how to further improve the overall level of these platforms through the implementation and application of information projects.

Keywords: Information construction platform · Product design · Manufacturing

1 Introduction

With the continuous development of information technology, recent years have seen a worldwide boom in the construction of network environments and the dissemination of data and information. As the amount of information stored in computers keeps growing, data backup and disaster recovery have become topics of concern [1]. The most valuable asset of an enterprise is its data; to ensure the continuous operation and success of the business, computer-based information must be protected [2]. Human error, hard disk damage, computer viruses, natural disasters and the like can all cause data loss and inflict incalculable losses on an enterprise [3]. For any core business system, the loss of business data is a major disaster that entails the loss of system files, customer data and business data, making normal operation difficult [4]. The key problem then is how to restore the computer system as soon as possible so that it can operate normally.

At present, product innovation capability is the core of an enterprise's competitiveness. Providing customers with satisfactory products in the shortest possible time at competitive prices, while securing the best interests of the enterprise, has become the key to survival and sustainable development. The accumulation and reuse of knowledge is the foundation of innovation [5]. In daily operation an enterprise generates large amounts of data; organizing this data and refining it into knowledge assets that can be reused later is essential. Enterprise informatization, which uses IT to improve the efficiency of business processes and enhance competitive advantage, is therefore urgent, and it is also a key factor in realizing business strategy.

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1612–1617, 2019. https://doi.org/10.1007/978-981-13-3648-5_208

2 Construction of Product Design and Manufacture Platform

2.1 PDM Product Data Information Management System

As noted above, product innovation capability is the core of an enterprise's competitiveness, and the accumulation and reuse of knowledge is the foundation of innovation. PDM (Product Data Management) is the data foundation of enterprise informatization. Implementing a PDM system helps an enterprise build the basic platform of its information system, that is, to achieve the following objectives:

- A single source of product data, with data integrity, accuracy and consistency, realizing document management and ensuring a smooth flow of information.
- A system integration platform that manages CAD design, CAM manufacturing, CAE analysis, CAPP computer-aided process planning, ERP and other system tools in an integrated way. It provides not only a unified platform for managing data results but also detailed management of the data creation process.
- Standardized coding management that classifies items scientifically and effectively, promotes design and knowledge reuse, and enhances enterprise knowledge management.
- Electronic processes that improve design efficiency and allow the progress of design tasks to be queried quickly and accurately.
- Standardized management of the version changes caused by engineering changes.

Centralized engineering data management improves project management and the manufacturing process management functions that work in synergy with the group, other vehicle factories, and outsourcing units.

2.2 PDM Management Basic Requirements

Product design document management. An enterprise-level product drawing and document management system is introduced. Through the implementation of the product data management project, it integrates closely with the enterprise's various application software; ensures the consistency, integrity, validity, security and confidentiality of data; forms a unified, integrated research and development system platform; supports heterogeneous computer environments; and integrates with the CAD systems to guarantee the consistency of 2D and 3D data. This creates a single source of product data and makes it convenient to manage the changes caused by engineering modifications correctly, so that design, process and other staff at all levels can access the relevant product information according to the rights granted by their posts, carry out their respective work, and improve the quality and speed of product development.

Accumulation and management of enterprise knowledge. The accumulation of product R&D knowledge forms the technological deposit of the enterprise. Through the platform of the R&D management system, the following objectives are achieved: all kinds of knowledge bases, such as standard parts, materials, tooling and general parts, can be maintained centrally; R&D technical documents are managed effectively, which requires effective management of the R&D process; and a CAX (CAD/CAM/CAE) application platform, a CAPP application platform, a PDM system platform and an ERP system platform gradually take shape.

Unimpeded flow of design data across the company. Besides close integration with the design software, the design data must be delivered to the CAPP design system and the ERP system in a timely, accurate and complete manner, in order to resolve the current multiple data sources, unify standard and scientific material coding, eliminate or reduce duplicated manual labor, remove information islands between systems, and ensure that the design data source flows smoothly throughout the whole factory.

3 Main Functional Modules

3.1 Drawing Document Management

Teamcenter provides a complete Web-centric, Java-based user interface. This means users can enter the system directly through a browser or a client interface and use the functions the system provides, as shown in Fig. 1. For product structures, components, files and so on, it supports both single-attribute and multi-attribute queries. Related data can be queried arbitrarily, and related queries can be defined in the query builder as required. Query conditions can be matched against the full string or a substring.

Fig. 1. Java portal and web browser

3.2 Product Research and Development Process Management

The first key point of electronic process management is real-time operation: data and decisions must be transmitted to the relevant people immediately, so that timeliness is not lost. The second is integrity: all necessary information must be transmitted to all the people who need it. The third is recording: all causes, actions, changes, results and the practitioners involved in the process must be fully recorded so that they can be traced during the process and afterwards. The fourth is monitoring: the manager must have full control of the current state and progress of the process and, when necessary, take appropriate management actions such as acceleration, suspension or interruption. As shown in Fig. 2, an agent mechanism can be configured according to the delegation rules of the enterprise: during a business trip or any other period in which a user cannot give approvals, the system automatically transfers the data awaiting that user's approval to the designated agent.
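The agent mechanism described above can be sketched in a few lines (a minimal illustration; the class and field names are invented for this example and are not Teamcenter's API):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Approver:
    name: str
    away_from: Optional[date] = None    # start of absence, if any
    away_until: Optional[date] = None   # end of absence, if any
    agent: Optional["Approver"] = None  # delegate who approves while away

    def is_away(self, today: date) -> bool:
        return (self.away_from is not None and self.away_until is not None
                and self.away_from <= today <= self.away_until)

def route_for_approval(approver: Approver, today: date) -> Approver:
    """Return who actually receives the approval task: the approver
    themselves, or their agent while they are away (delegation chains
    are followed, with a guard against cycles)."""
    seen = set()
    while approver.is_away(today) and approver.agent is not None:
        if approver.name in seen:  # break delegation cycles
            break
        seen.add(approver.name)
        approver = approver.agent
    return approver

# While Li is on a business trip (1-7 May), tasks route to her agent Wang.
wang = Approver("Wang")
li = Approver("Li", date(2018, 5, 1), date(2018, 5, 7), agent=wang)
print(route_for_approval(li, date(2018, 5, 3)).name)  # Wang
print(route_for_approval(li, date(2018, 5, 9)).name)  # Li
```

In a real workflow engine the routing decision would be evaluated each time a task is dispatched, so approvals resume automatically when the original approver returns.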

Fig. 2. Agent setting

3.3 Extended Configuration Management

Different enterprises express the product structure differently, for example as a product list, a bill of material, or a parts table. For a unified terminology, the term BOM (Bill of Material) is used in the following sections. BOM generation and editing: the system provides a convenient graphical BOM interface for building, modifying and querying. Users can easily set up or modify components, define the relationships between parts, and establish substitute parts, effective dates, batch numbers, and so on. Multi-level BOM expansion: expanding the BOM structure makes the structure clear at a glance. As shown in Fig. 3, the components are expanded, and the relevant data and drawings can be displayed using the product data function.
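Multi-level expansion of this kind amounts to a recursive walk over the parent-child structure; the following sketch illustrates the idea (the data and function names are invented for illustration, not the system's API):

```python
# A BOM as a parent -> list of (child, quantity) mapping (illustrative data).
BOM = {
    "bicycle": [("frame", 1), ("wheel", 2)],
    "wheel":   [("rim", 1), ("spoke", 36), ("tire", 1)],
    "frame":   [("tube", 4)],
}

def expand(item, qty=1, level=0, out=None):
    """Recursively expand a multi-level BOM, accumulating the total
    quantity of every component relative to one top-level item."""
    if out is None:
        out = []
    out.append((level, item, qty))
    for child, n in BOM.get(item, []):
        expand(child, qty * n, level + 1, out)
    return out

for level, item, qty in expand("bicycle"):
    print("  " * level + f"{item} x{qty}")
# e.g. one bicycle needs 2 wheels and therefore 2 * 36 = 72 spokes
```

The same traversal underlies both the "clear at a glance" tree display and the quantity roll-ups that an ERP system needs for material planning.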

Fig. 3. Multi-layer BOM expansion

4 Conclusions

The establishment of the enterprise informatization project follows the guiding ideology of "overall planning, step-by-step implementation, key breakthroughs, benefit driven", combined with the overall requirements for building the three platforms of the XXX group enterprises and with the application level of the enterprise's software and hardware systems. Implementing the project can raise the overall level of information management in the enterprise.

The principle of overall planning and step-by-step implementation: in implementing enterprise informatization construction, we must focus on the long-term development goals of the enterprise while basing ourselves on the current foundation and the urgent problems to be solved. To improve efficiency and practicality, the master plan is implemented step by step, fully considering the current situation and requirements of the enterprise as well as the interfaces to future and subsequent systems, so as to avoid forming new information islands.


References
1. Liu, F., Ng, G.S.: Artificial ventilation modeling using neuro-fuzzy hybrid system. Int. Joint Conf. Neural Netw. 3(5), 2859–2864 (2006)
2. Zhang, W., Yuan, Z.: PLC studies in China area contribution to the development of international standard environmental electromagnetisms. In: The 4th Asia-Pacific Conference, vol. 8, no. 10, pp. 759–764 (2008)
3. Yoon, Y., DeSouza, G.N., Kak, A.C.: Real-time tracking and pose estimation for industrial objects using geometric features. In: Proceedings of the International Conference on Robotics and Automation, Taiwan, vol. 9, no. 20, pp. 3473–3478 (2003)
4. Wang, X.: The Strategies' Research of Electric Power Marketing Based on the Demand Side Management Theory. North China Electric Power University (2008)
5. Didden, M.H.: Demand side management in a competitive European market: who should be responsible for its implementation. Energy Policy 7(25), 26–38 (2003)

Planning and Simulation of Mobile Charging for Electric Vehicles Based on Ant Colony Algorithm

Yang Fengwei1, Liu Jinxin1, Zhang Zixian2, and Chen Peng1

1 College of Mechanical Engineering, Southwest Jiaotong University, Chengdu 611756, China
[email protected]
2 School of Information Science and Technology, Southwest Jiaotong University, Chengdu 611756, China

Abstract. The popularization of electric vehicles is closely related to the construction of charging infrastructure. However, fixed charging stations are not flexible enough, given their limited number and distribution. To promote mobile charging as a supplement, this paper proposes a planning method for the mobile charging of electric vehicles. First, the roads are described as a series of nodes with a certain topology. Then, planning based on the ant colony algorithm is conducted: for simultaneous charging requests from multiple users, the optimal charging point is determined and the paths for the electric vehicles are provided. Finally, mobile charging is simulated by taking the roads around Tianfu Square in Chengdu as an example, and the simulation result is visualized in an application developed on the Android platform.

Keywords: Electric vehicle · Mobile charging · Path planning · Ant colony algorithm

1 Introduction

Over the past decade, the development and application of electric vehicles have received considerable attention amid the worsening global environment and the depletion of energy and resources. However, there are still many obstacles to the popularization of electric vehicles. The charging process is not as convenient as refueling, which limits long-distance travel, and the lack of charging infrastructure also increases users' doubts about the convenience of using electric vehicles. A number of studies have discussed the planning of fixed charging infrastructure [1, 2]. The user experience is crucial in a mobile charging service, so mobile charging should be well planned by providing optimal paths for electric vehicles.

The optimal path analysis of transportation networks has been widely studied in geographic information science, applied mathematics, and transportation logistics, since selecting an appropriate path is important for improving the efficiency of a transport task [3]. With the advancement of science and technology, the methods for solving traditional path planning problems have evolved from exact algorithms to heuristic algorithms and then to bio-inspired intelligence algorithms, such as the genetic algorithm, which simulates biological evolution, and the ant colony algorithm, which emulates the behavior of creatures such as ants [4]. The ant colony algorithm is a bio-inspired optimization method proposed in recent years, discovered by observing the foraging process of ants in nature, whose foraging behavior closely resembles a shortest-path search. In practical applications, the ant colony algorithm can solve problems with a huge number of variables and constraints [5], so it is also suitable for the shortest-path search in the planning of mobile charging.

This paper studies the planning of mobile charging to support the charging of electric vehicles. In the second section, the topology of the road network is described for route choice. In the third section, the ant colony algorithm is introduced for the planning by analyzing the characteristics of the mobile charging process. Finally, a simulation is conducted that obtains user requests and provides them with the charging destination and the optimal paths to reach it through the Android application.

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1618–1624, 2019. https://doi.org/10.1007/978-981-13-3648-5_209

2 The Principle of the Algorithm and Its Modeling

A transportation network can be described as a directed graph [6] or a weighted graph [7]; constructing the road network topology on actual maps is thus the basis for studying the shortest paths of urban roads.

The ant colony algorithm was first proposed by Marco Dorigo in 1992 and is a positive-feedback probabilistic path selection algorithm. The idea originated from the foraging behavior of ants [8]. When ants look for food, they leave pheromone on the path; pheromone is a means of communication between ants about path length. The higher the pheromone concentration, the greater the probability that ants will find food through that path. Since each ant secretes the same amount of pheromone, the amount of pheromone per unit length on longer paths is lower than on shorter ones. Through this positive feedback of pheromone on path length, more and more ants select the shorter paths; finally, the ants find the shortest path, constructing their solutions from the existing pheromone trails and from heuristic information available a priori [9].

The ant colony algorithm can be used to solve the traveling salesman problem (TSP) [10], but the problem in this paper differs from the TSP in two respects. First, not every two nodes in the actual road network are directly connected. Second, the traditional TSP path is a closed loop, whereas the paths planned here are routes radiating from a center. Therefore, the non-closed-loop feature must be considered when the ant colony algorithm is applied to the shortest-path planning for the meeting of multiple vehicles. The definitions of the parameters are shown in Table 1.

$\eta_{ij}$ reflects the heuristic level from node $i$ to node $j$, i.e., the expectation of an ant transferring from node $i$ to node $j$. It is expressed as

$$\eta_{ij} = \frac{1}{d_{ij}} \quad (1)$$
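The weighted road graph can be stored as a distance matrix D, from which the heuristic values of Eq. (1) follow directly. A minimal sketch with invented distances (`INF` marks node pairs with no direct road section):

```python
INF = float("inf")  # no direct road between the two nodes

# Distance matrix D for a tiny 4-node road network (illustrative values, meters).
D = [
    [0,   120, INF, 300],
    [120, 0,   80,  INF],
    [INF, 80,  0,   150],
    [300, INF, 150, 0],
]

# Heuristic eta_ij = 1/d_ij for every existing section (Eq. (1)):
# shorter sections get larger heuristic values and are more attractive.
eta = [[1.0 / d if 0 < d < INF else 0.0 for d in row] for row in D]

print(eta[0][1])  # heuristic for the 120 m section between nodes 0 and 1
```

Setting the heuristic to 0 for missing sections keeps the selection probability of nonexistent roads at zero, which is how the "not every two nodes are connected" property is handled.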

Table 1. Parameter definition of the ant colony algorithm

- m: the number of ants
- n: the number of nodes
- D: the matrix of distances d_ij between nodes i and j
- τ_ij(t): the amount of pheromone on path (i, j) at time t, with a given initial value
- η_ij: the visibility (heuristic value) of path (i, j)
- ρ: the volatilization coefficient of the pheromone
- p^k_ij(t): the probability of ant k transferring from node i to node j at time t
- Tabu: the m × n matrix recording the nodes visited by the m ants
- Tau: the n × n matrix of pheromone between the n nodes
- Q: the total amount of pheromone left by an ant on its path in one iteration
- Best_dist: the minimum value of the sum of all path weights

The m ants use the pheromone concentration on each path and the path lengths to select the next node from the current node. An ant at node $i$ at time $t$ transfers to node $j$ with probability $p^k_{ij}(t)$, where the preferred node $j$ is the one that maximizes $\tau_{ij}(t)\,\eta_{ij}(t)$:

$$j = \arg\max_{j}\,\{\tau_{ij}(t)\,\eta_{ij}(t)\} \quad (2)$$

where

$$p^k_{ij}(t) = \begin{cases} \dfrac{\tau_{ij}(t)\,\eta_{ij}(t)}{\sum_{j \in \mathrm{allowed}_k} \tau_{ij}(t)\,\eta_{ij}(t)}, & j \in \mathrm{allowed}_k \\ 0, & \text{otherwise} \end{cases} \quad (3)$$

Formula (2) illustrates that the ants always tend to travel on paths with shorter distances and higher pheromone. The value of $p^k_{ij}(t)$ determines whether an ant exploits the experience of other ants or explores paths autonomously. Each ant leaves pheromone on the road between the nodes, and the pheromone concentration is updated according to

$$\tau_{ij}(t+1) = (1-\rho)\left(\tau_{ij}(t) + \Delta\tau^{b}_{ij}\right) \quad (4)$$

where

$$\Delta\tau^{b}_{ij} = \begin{cases} \dfrac{Q}{D^{b}_{ij}}, & (i, j) \in \mathrm{Path}_b \\ 0, & \text{otherwise} \end{cases} \quad (5)$$
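The selection rule of Eq. (3) and the update rule of Eqs. (4)-(5) can be sketched directly in code (a minimal illustration using the paper's symbols as variable names; this is not the authors' MATLAB implementation):

```python
import random

def choose_next(i, allowed, tau, eta):
    """Roulette-wheel selection of the next node j from node i,
    with probability p_ij proportional to tau_ij * eta_ij over the
    allowed (not yet visited, reachable) nodes -- Eq. (3)."""
    weights = [tau[i][j] * eta[i][j] for j in allowed]
    total = sum(weights)
    if total == 0:
        return random.choice(allowed)
    r, acc = random.uniform(0, total), 0.0
    for j, w in zip(allowed, weights):
        acc += w
        if r <= acc:
            return j
    return allowed[-1]

def update_pheromone(tau, best_path, dist, rho, Q):
    """Deposit Q/D_ij on each section of the best path, then let all
    pheromone evaporate by the factor (1 - rho) -- Eqs. (4)-(5)."""
    for i, j in zip(best_path, best_path[1:]):
        tau[i][j] += Q / dist[i][j]
    for row in tau:
        for j in range(len(row)):
            row[j] *= (1 - rho)
    return tau
```

Note the order of operations in `update_pheromone`: the deposit is added first and the evaporation factor is applied to the sum, matching the form $(1-\rho)(\tau_{ij}(t)+\Delta\tau^{b}_{ij})$ of Eq. (4).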


$1-\rho$ is the residual coefficient of the pheromone, with $0 < \rho < 1$. $\Delta\tau^{b}_{ij}$ represents the pheromone increment on each road section after a route choice process, $\mathrm{Path}_b$ is the updated optimal path, and $D^{b}_{ij}$ denotes the length of section $(i, j)$ on the optimal path $L_b$. After a group of ants has completed its travel, the ant with the shortest path is selected and the pheromone concentration on its path is enhanced according to formula (4); then the pheromone on all paths partly evaporates. As the $k$-th ant travels from node $i$ to node $j$ in a round of route choice, the sections it passes are summed up according to

$$L_k = L_k + D_{ij} \quad (6)$$

Then the paths of the ants that arrive at the preset nodes from the intermediate node are separated and compared: for each preset node, the ant with the smallest weight among all the ants reaching that node is retained, and if a path with a lower weight is found in the next round of searching, the retained path is replaced. Finally, the minimum-weight paths to the predetermined nodes are summed according to formula (7) to obtain the optimal path:

$$\mathrm{best\_dist} = \sum_{v=1}^{u} \min(L_k), \quad k \in \mathrm{path}(v) \quad (7)$$

In formula (7), $\mathrm{path}(v)$ represents the paths ending at the predetermined node $v$. The steps to implement the algorithm are as follows.

Step 1: Load the coordinates of the n nodes into an n × 2 matrix and the weights of the sections connecting the n nodes into an n × n adjacency matrix, and initialize the parameters.

Step 2: Randomly generate four different nodes, three representing the locations of the electric vehicles that need charging and one representing the mobile charging vehicle. Take the average position of the four nodes as the central node from which all ants start.

Step 3: Let the m ants start from the central node by placing it in the first column of the list Tabu, and let them visit the next node according to the above rules combined with the roulette-wheel selection scheme. Pheromone is left on the roads passed and stored in the table Tau.

Step 4: An ant completes its route choice when it has no node left to select or reaches a preset node. By summation, calculate the access sequence and path length to the preset nodes.

Step 5: According to the rules of the ant colony algorithm, update the pheromone on the best paths to the four predetermined nodes, and return to Step 3 for the next iteration.

Step 6: When the algorithm reaches the maximum number of iterations, output the optimal paths and the minimum weight.
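Steps 1-6 can be condensed into a runnable sketch for a single start-goal pair (a simplified re-implementation on a toy graph, not the authors' MATLAB code; the graph, parameters and node choices are invented for illustration):

```python
import random
random.seed(0)

INF = float("inf")

# Toy weighted road network (symmetric adjacency matrix, invented values).
D = [
    [0,   4,   INF, INF, 7],
    [4,   0,   3,   INF, INF],
    [INF, 3,   0,   2,   INF],
    [INF, INF, 2,   0,   5],
    [7,   INF, INF, 5,   0],
]
n = len(D)
eta = [[1.0 / D[i][j] if 0 < D[i][j] < INF else 0.0 for j in range(n)]
       for i in range(n)]

def ant_walk(start, goal, tau, max_steps=20):
    """One ant: roulette-wheel walk from start toward goal (Eq. (3))."""
    path, visited, length = [start], {start}, 0.0
    while path[-1] != goal and len(path) <= max_steps:
        i = path[-1]
        allowed = [j for j in range(n) if 0 < D[i][j] < INF and j not in visited]
        if not allowed:
            return None, INF  # dead end: discard this ant (Step 4)
        weights = [tau[i][j] * eta[i][j] for j in allowed]
        r, acc, nxt = random.uniform(0, sum(weights)), 0.0, allowed[-1]
        for j, w in zip(allowed, weights):
            acc += w
            if r <= acc:
                nxt = j
                break
        length += D[i][nxt]
        path.append(nxt)
        visited.add(nxt)
    return (path, length) if path[-1] == goal else (None, INF)

def aco_shortest(start, goal, m=10, iters=30, rho=0.3, Q=10.0):
    tau = [[1.0] * n for _ in range(n)]
    best_path, best_dist = None, INF
    for _ in range(iters):
        for _ in range(m):                 # Steps 3-4: m ants travel
            path, dist = ant_walk(start, goal, tau)
            if dist < best_dist:
                best_path, best_dist = path, dist
        if best_path:                      # Step 5: reinforce the best path
            for i, j in zip(best_path, best_path[1:]):
                tau[i][j] += Q / D[i][j]
        for row in tau:                    # evaporation, Eq. (4)
            for j in range(n):
                row[j] *= (1 - rho)
    return best_path, best_dist           # Step 6

# Central node 0; one requesting vehicle waits at node 3.
path, dist = aco_shortest(0, 3)
print(path, dist)  # the short route 0-1-2-3 (weight 9) beats 0-4-3 (weight 12)
```

The paper's full scheme runs walks from the central node toward all four preset nodes simultaneously and retains, per preset node, the minimum-weight path (Eq. (7)); the sketch above shows the mechanism for one of those node pairs.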


3 Computer Simulation Experiments and Results

In this paper, the road network topology is based on the roads around Tianfu Square on the Chengdu map, as shown in Fig. 1. The road network includes 51 nodes marked by red numbers, and the weights of the road sections between these nodes are marked by black numbers in Fig. 2.

Fig. 1. Roads around Tianfu Square in Chengdu map

Fig. 2. Road network topology with nodes and weights

The simulation was programmed in MATLAB 2014a on the Windows 10 Pro platform, with a computer with 8 GB of memory and an i7-4500U CPU as the server. In the simulation, three electric vehicles request the charging service (blue icons) and there is one mobile charging vehicle (green icon). The simulation results are visualized in the client application developed on the Android platform as shown in Fig. 3, and the initial vehicle positions are shown in Fig. 3a. In Fig. 3b, the charging point is marked with a red icon, and the paths recommended to each vehicle after the optimization are displayed as red lines. The optimal weight of this path planning is 3567.


Fig. 3. The mobile charging path planning on the Android platform: a the initial vehicle positions; b the optimal paths and charging point

4 Conclusion

The large-scale popularization of electric vehicles is imperative, and the mobile charging vehicle helps solve the problem of insufficient charging infrastructure. For route choice, the topology of the road network is constructed, and the ant colony algorithm is introduced for the path planning of multiple vehicles that request the charging service simultaneously. Calculated by the ant colony algorithm, the optimized charging point is determined and the paths for the electric vehicles are generated. The simulation is conducted in a charging situation with three electric vehicles and one charging vehicle, and the charging point and the optimal paths are all displayed on the user interface of the application developed on the Android platform.

Acknowledgements. The work is supported by the Fundamental Research Funds for the Central Universities (Grant No. 2682017CX036).

References
1. Kuby, M., Lim, S.: The flow-refueling location problem for alternative-fuel vehicles. Socio-Econ. Plan. Sci. 39(2), 125–145 (2005)
2. Chung, S.H., Kwon, C.: Multi-period planning for electric car charging station locations: a case of Korean expressways. Eur. J. Oper. Res. 242(2), 677–687 (2015)
3. The best path analysis in military highway transport based on DEA and multiobjective fuzzy decision-making. Math. Probl. Eng. 6, Article ID 206024 (2014)
4. Tian, D., Junjie, H., Sheng, Z., Wang, Y., Ma, J., Wang, J.: Swarm intelligence algorithm inspired by route choice behavior. J. Bionic Eng. 13(4), 669–678 (2016)
5. Hasany, R.M., Shafahi, Y.: Ant colony optimisation for finding the optimal railroad path. Proc. Inst. Civil Eng. Transp. 170(4), 218–230 (2017)


6. Fang, Y., Chu, F., Mammar, S., Zhou, M.: Optimal lane reservation in transportation network. IEEE Trans. Intell. Transp. Syst. 13(2), 482–491 (2012)
7. Khan, K.U., Dolgorsuren, B., Anh, T.N., Nawaz, W., Lee, Y.-K.: Faster compression methods for a weighted graph using locality sensitive hashing. Inform. Sci. 421, 237–253 (2017)
8. Colorni, A., Dorigo, M., Maniezzo, V., et al.: Distributed optimization by ant colonies. In: Proceedings of the 1st European Conference on Artificial Life, pp. 134–142 (1991)
9. Mavrovouniotis, M., Li, C., Yang, S.: A survey of swarm intelligence for dynamic optimization: algorithms and applications. Swarm Evol. Comput. 33, 1–17 (2017)
10. Mavrovouniotis, M., Yang, S.: Ant colony optimization with immigrants schemes for the dynamic travelling salesman problem with traffic factors. Appl. Soft Comput. 13(10), 4023–4037 (2013)

Research on Urban Greenway Design Based on Big Data

Wenjun Wang

City College of WUST, Wuhan, China
[email protected]

Abstract. This paper takes the comparative analysis of urban greenway big data as its research foundation and analyzes how the micro-environment influences users' choice of path when exercising on urban greenways, focusing on micro-environmental factors such as the continuity, comfort and diversity of pedestrian spaces and landscape nodes. How to use big data to examine the urban greenway system, how to move the definition of visual aesthetics from subjective imagination to a rational, data-based footing, and how to evaluate landscape resources with big data form the basis of this article. By comparing and analyzing walking-path data, it explores the construction of the urban environment, in order to improve the humanized construction of the current urban walking and fitness environment and to provide a basis for creating good domestic urban greenway designs.

Keywords: Big data · Urban greenway · Environmental design

1 Introduction

In recent years, cycling, hiking and other forms of low-carbon travel and green fitness have become an important part of sustainable development. The improvement of living standards leads people to pursue a higher quality of life, and a healthy lifestyle is more and more widely accepted. However, urban development has occupied a large amount of outdoor space, so urban greenways are especially precious, and improving their environmental quality is of positive significance for the quality of people's outdoor activities. With the advent of the information age, big data has become an important tool for analyzing people and cities, providing innovative technical methods and objective data guarantees for research. Through big data, we can more objectively analyze and give feedback on people's intentions in the city.

2 Problems and Current Situation

As national fitness becomes fashionable in cities, people place higher demands on urban greenways [1]. At the present stage, however, the urban greenway environment in our country is not sufficiently valued from the perspective of national fitness. Many cities give it little consideration, and urban greenway construction suffers from many problems: the spatial layout is unreasonable, paths are occupied, and the environmental quality and design can hardly meet the public's fitness needs. Managers often pay attention only to the traffic attributes of paths and neglect improving the quality of the urban public environment; planning and design offer little encouragement for hiking and fitness, and designers pay too little attention to hikers' feelings and to the humanized design of the fitness environment. As a result, urban fitness space cannot be fully utilized and national fitness cannot be carried out well [2].

The information storm brought by big data is changing people's way of thinking and way of life, and how to collect, sort, mine and apply big data has become the focus of many industries [3]. In recent years, big data has been applied to urban construction to reduce energy consumption intelligently and to predict traffic congestion. Big data for landscape planning and design covers spatial geographic data and historical humanities data, including natural geographic data such as geology, soil, hydrology, vegetation and atmosphere, as well as cultural and historical data such as population, history, culture, buildings, roads and communities [4]. At the same time, with the construction of smart cities and smart scenic areas, more data is enriching the spatio-temporal content of landscape big data, such as dynamic monitoring data of cities and smart scenic areas, real-time statistics of pedestrian flows in blocks, positioning data of people and communication vehicles, social network data of citizens and tourists, network media data, and citizens' calorie-consumption data from fitness trackers [5]. Landscape planning and design big data is an important data source and a quantitative scientific basis for digital landscape planning and design.

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1625–1629, 2019. https://doi.org/10.1007/978-981-13-3648-5_210

3 Big Data Acquisition and Parsing

3.1 Acquisition

Data, including numerical and non-numerical data, is an important means for us to obtain information through observation, experiment, analysis or calculation. In a narrow sense, data is usually thought of as numbers, i.e., numerical data, its simplest form. In a broad sense, data also includes images, text, audio, abstract geometry, graphics, tables and so on; anything from which we can obtain information about things can be called data. The data studied in this paper are data in the broad sense [6]. The acquisition of object visualization data is the key of this paper and the basis of the subsequent analysis [7], and it is mainly reflected in the acquisition of path-recording data and environmental image data.

3.2 Parsing

Mobile devices have become an indispensable part of people's lives; mobile devices and various apps record people's daily activities, and the big data reflected by the mobile devices of the urban population provide a basis for urban greenway planning and design. At present, many people exercising on urban greenways use intelligent terminals to locate their own fitness activity, uploading walking or cycling path information

Research on Urban Greenway Design Based on Big Data

to network platforms to display and share their movement data. For example, Strava and similar apps provide objective trajectory positioning data for studying fitness on urban greenways. Navigation and positioning data are typical location-based service (LBS) data: they contain location information and track information, depict the spatio-temporal behavior patterns of crowds well, and reveal the relationship between people and space and its spatio-temporal characteristics. They are an objective, quantitative decision basis for urban greenway planning, management and service, and are of great significance for improving intelligent planning, smart management and accurate service [8]. The big data of social networks contain a great deal of spatio-temporal, semantic, emotional and related information [9]. Analyses of social network data on user attributes, temporal characteristics, spatial distribution and sentiment support smart urban greenway planning, fine-grained management and precision services [10]. At the same time, mining this big data reveals users' age composition, spatial behavior, preferences and satisfaction with scenic spots, and further analysis of the correlation between urban greenway landscape nodes provides support for urban greenway planning and design.
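As a concrete illustration of how shared trajectory data might be aggregated to find heavily used greenway segments, the following is a minimal sketch; the grid resolution, the coordinates and the two sample tracks are invented for illustration, and a real study would use map-matched LBS traces.

```python
from collections import Counter

def popularity_grid(tracks, cell=0.001):
    """Count how many GPS fixes fall into each lat/lon grid cell.

    tracks: iterable of tracks, each a list of (lat, lon) fixes.
    cell: grid resolution in degrees (~100 m near the equator).
    Dense cells suggest heavily used greenway segments.
    """
    counts = Counter()
    for track in tracks:
        for lat, lon in track:
            counts[(round(lat / cell), round(lon / cell))] += 1
    return counts

# Two hypothetical runners share most of one path, so its cells dominate.
tracks = [
    [(39.9000, 116.4000), (39.9010, 116.4010), (39.9020, 116.4020)],
    [(39.9000, 116.4000), (39.9010, 116.4010), (39.9050, 116.4100)],
]
grid = popularity_grid(tracks)
busiest = max(grid, key=grid.get)   # a cell visited by both runners
```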

4 The Design Strategy

The choice of "path" in an urban greenway is closely related to the quality of its environment. First, path and environment are complementary: good environmental quality leads to heavy use of a path, and increased use of a path in turn promotes the construction of the surrounding environment. In addition, from a dialectical point of view, the former is subjective while the latter is an objective factor: the user's subjective choice of a greenway path is influenced by the objective environment, and the objective environmental quality is reflected in the user's subjective behavior.

4.1 Comfort

The application of big data here embodies its humanistic character. As analyzed above, spatio-temporal big data have humanistic characteristics; urban and rural landscape planning and design, on the premise of respecting natural and social laws, revolve around satisfying people's needs for production, living, leisure, recreation and entertainment, embodying a people-oriented approach. Landscape planning and design based on spatio-temporal big data must therefore always embody these humanistic characteristics: quantitative analysis of the big data characterizes multidimensional features such as the natural, social, cultural and emotional, and planning and design methods then satisfy these features across the dimensions of space, time, facilities and environment. The improvement of facilities mainly means the improvement of public service facilities, another important factor in the comfort and humanity of the urban greenway environment. Perfect facilities are an important embodiment of the humanized design

W. Wang

of the hiking and fitness environment. Walking and exercising in an environment full of care can make users feel happy and mentally relaxed. Specifically, this means safety facilities, rest facilities and lighting facilities. Since walking and cycling on the city's greenways is heavy exercise, it is easy to get tired. Perfect rest facilities not only let hikers rest and relieve fatigue efficiently during the walk, but also create a relaxed atmosphere in which hikers can communicate and share experiences; they therefore become an environmental factor that affects urban hikers' choice of path. Safety facilities mainly refer to signage and protective facilities; by providing convenient and easily recognized information they play a guiding and warning role, further secure hikers and their activities, and create a pleasant hiking and fitness environment.

4.2 Coherence

Because exercising on an urban greenway is a continuous process, hikers and riders always hope for a continuous, complete path, unblocked by motor vehicles and traffic lights; this not only ensures the safety of the exercise but also preserves the continuity of the state of motion and the persistence of the behavior. The continuity of walking fitness is therefore mainly a matter of meeting the need for behavioral continuity, and the continuity of the path in the fitness environment is an important factor ensuring it. A pedestrian walkway system that is relatively independent yet organically connected can guarantee the continuity of the path from the whole to the part and become a concentration area of pedestrian path data. Where pedestrian spaces are connected into a system, urban hikers choose those paths for fitness with relatively high frequency; elevated road systems, by contrast, are usually fast lanes whose complexity does not favor the formation of a pedestrian space system, so hikers choose them with low frequency. A systematic pedestrian walkway can not only provide the path distance required by fitness as a whole, but also form circular paths locally to meet users' needs for path coherence. Complete spaces such as urban parks and pedestrian plazas are more likely to form systematic pedestrian walkways that ensure the continuity of the hiking path, and fitness enthusiasts choose them with relatively high frequency.

4.3 Landscape Character

When exercising on an urban greenway, users choose areas with high environmental attraction and good landscape quality as hiking paths; they are inclined toward a good environmental landscape. Urban hikers consistently tend toward areas of good landscape quality, especially good natural landscapes such as urban green space, city parks, landscape lakes, monuments and preserved historical buildings. At the same time, fitness enthusiasts pay more attention to environmental landscape images that carry historical, cultural and natural landscape


information. These kinds of landscape environments not only contain rich information elements but also offer open space, providing hikers with a good place to exchange and rest and a good view of the landscape.

5 Summary

An urban greenway needs to meet fitness enthusiasts' needs for the path environment. Different micro-environments influence users' choice of path, and users' choice of path in turn reflects the strengths and weaknesses of environmental quality. The application of big data to urban greenway research, and the advantage of big data in behavior analysis, provide a basis for urban greenway design.

References

1. Dang, A.R., Jian, X., Biao, T.: Research progress of the application of big data in China's urban planning. China City Plan. Rev. 24, 24–30 (2015). (in Chinese)
2. Hawelka, B., Sitko, I.: Geo-located Twitter as proxy for global mobility patterns. Cartogr. Geogr. Inf. Sci. 41, 260 (2014)
3. Vu, H.Q., Gang, L., Law, R.: Exploring the travel behaviors of inbound tourists to Hong Kong using geotagged photos. Tour. Manag. 46, 737–777 (2015)
4. Song, W.J., Wang, L.Z.: Geographic spatiotemporal big data correlation analysis via the Hilbert-Huang transformation. J. Comput. Syst. Sci. 89 (2017)
5. Xinyi, N., Liang, D.: Understanding urban spatial structure of Shanghai central city based on mobile phone data. China City Plan. Rev. 4 (2015)
6. Guo, L., Li, Z.: Understanding travel destination from structured tourism blogs. In: Proceedings of the 2015 Wuhan International Conference on e-Business, pp. 144–151 (2015)
7. Palomares, G., Carlos, J.: Identification of tourist hot spots based on social networks: a comparative analysis of European metropolises using photo-sharing services and GIS. Appl. Geogr. 63, 408–417 (2015)
8. Susan, G.A., Joanne, H.: Data visualization as a communication tool. Library Hi Tech News 32(2), 1–9 (2015)
9. McNeely, C.L.: The big (data) bang: policy prospects and challenges. Rev. Policy Res. 31(4) (2014)
10. Namyun, K., Taylor, V.S.: Influences of wildland-urban interface and wildland hiking areas on experiential recreation outcomes and environmental setting preferences. Landsc. Urban Plan. 127 (2014)

Design and Implementation of Art Rendering Model

Haichun Wei(&)

Arts Department, Heilongjiang International University, Harbin 150025, China
[email protected]

Abstract. Digital art painting plays an important role, and is a hot spot, in research on NPR (Non-Photorealistic Rendering). In painting research, the core process is to generate a personalized artistic painting from an image through limited interaction. In order to bring the result closer to a truly artistic effect, and drawing on painters' long-standing practice, human visual perception of art and professional painting effects are analyzed, and a mathematical model of the painting process is abstracted, from perception to final rendering; most operations are automatic, and personal elements are added through a few interactions. In the model, the Mean Shift algorithm performs an over-segmentation; the result is used to build a Markov Random Field, and, using the Graph Cut algorithm, the objects in the image are segmented layer by layer. The main structure of each object is then parsed by the Primary Sketch algorithm, and from the main structure a direction field is produced by oriented diffusion. Through color transfer, the image takes on the colors of art paintings, which provides pigments for rendering. Using the direction field and the artistic pigments based on the original image, the digital art painting is finally rendered by a multi-layer "pursuit" algorithm. The whole process basically simulates the human painting process.

Keywords: Non-photorealistic rendering · Direction field · Graph cut · Color transfer

1 Introduction

With the great success of the computer graphics community in realistic rendering, people have also sensed some new problems [1–3]. Because realistic renderings reflect reality with too much precision, the resulting works lack artistic sensation, making it difficult to express an artist's conception. This is precisely the point currently considered the most serious problem of realistic rendering technology: after more than 40 years of development it has drawbacks such as excessive precision, redundant information and a stiff feeling [4–6], and some important details are not properly emphasized. A new and more interesting research direction has therefore emerged, which hopes to use computers to simulate human hand-painted works of art. This research motivation is subtler, more interesting and more humane than the goal of realistic graphics; it is more about creating useful and beautiful images [7]. At this point, such research is no longer a complete recourse to

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1630–1637, 2019.
https://doi.org/10.1007/978-981-13-3648-5_211


physics, but increasingly involves traditional fields such as cognitive science, art, graphic design and computer vision. This new area of research is called non-photorealistic graphics. Its main content is to use computer hardware and software to simulate pictures and animations with different painting effects [8]. Non-photorealistic rendering technology not only enables everyone to create, easily and quickly, images full of artistic charm and imagination; it also makes it possible to create videos with artistic representations, such as videos in oil painting, watercolor or Chinese painting styles. It can therefore be widely applied in the media industry to provide a broader and more convenient creation platform, so that the created works can be more personalized and more expressive of the mood of the moment, such as happiness, sadness or homesickness [9, 10]. Digital art painting, the main component of non-photorealistic drawing, aims to bring more artistic effect and more of the self to the picture. This paper studies a new art drawing algorithm by combining human visual perception of an image with computer vision and computer graphics. The purpose is not only to achieve an artistic painting effect, but to simulate the artist's entire process of painting: the composition, the analysis of primary and secondary relationships, the changes in color, and the position, direction and thickness of each stroke. With this algorithm, the system can draw oil paintings and embroidery by using different brush models, such as oil painting brushes and embroidery threads.

2 Art Modeling Mathematical Model

2.1 Drawing of Art Painting Process

The drawing of an artistic painting combines a painter's subjective and objective worlds. When a beautiful scene enters the human brain, the brain performs a pictorial analysis of the image just acquired from the objective world. The first step is to segment the image, separating the objects inside one by one, and then to determine the context of each object. Different people obtain different segmentation results, and different front-to-back occlusion relationships, according to their different recognition of, or emphasis on, the objects in the scene, reflecting each person's individuality. This determines the order, and the primary and secondary relationships, in which each object is drawn, and completes only the first step of the analysis. After that, the brain subdivides further, analyzing the main structure of each object and determining the drawing direction of the main structure, namely the position and direction of the brush, before considering the remaining directions. According to painters long engaged in the art industry and teachers of painting, the stroke direction of the other parts of a picture depends strongly on the stroke direction of its main structure. From this relationship, the brain obtains the brush direction of the unstructured parts. This is the whole process of the interpretation of the painting.

2.2 Drawing of Art Painting Model

In order to further explain the generation process of art painting and finally achieve its computer simulation, this paper abstracts it into a mathematical process, shown in Fig. 1.

Fig. 1. The mathematical process of the creation of Art Painting

In this mathematical description of Fig. 1, steps 1, 2 and 3 represent the direction field model, 4 represents the color transformation model, and 5 represents the brush placement model. The brush model is a pre-defined underlying model, obtained by collecting and analyzing samples of the various brushes of a class of painters. From this point onwards, this article focuses on further mathematical analysis of the direction field model, the color transformation model and the brush placement model.

1. The Direction Field Model

The direction field model is divided into 3 levels: the object hierarchy of the image, referred to as "image layering"; the main structure analysis; and the direction diffusion.

(1) Image stratification

A picture is composed of multiple layers; each layer consists of multiple objects, and the previous layer occludes the next. Each person has his or her own interpretation of the same picture, so a certain amount of interaction is allowed in the hope of maximizing personalization. The layering is expressed as

$W = p(I)$   (1)

(2) Primary sketch

After obtaining the different objects, each object is analyzed to get its main structure:

$W' = \arg\max P(I \mid W')\, P(W')$   (2)


(3) Directional diffusion

Once the main structure of an object is obtained, the brush direction at any position on the main structure is known. The directions of the other parts, however, cannot be determined directly, because those parts are generally texture-like or noise-like. Guided by professionals long engaged in painting, the analysis shows that the non-primary-structure part is strongly influenced by the main structure. This article therefore defines a direction probability smoothing term $p(\hat{V}_{nsk} \mid \hat{V}_{sk})$ and maximizes it to obtain the direction of the non-primary-structure part:

$p(\hat{V}_{nsk} \mid \hat{V}_{sk}) \propto \prod_{v \in \widehat{nsk}} \exp\big\{ -\sum_{v \in nsk} \sum_{u \in \partial v} e(s_v, s_u)^2 \big\}$   (3)

2. Color Transformation Model

Because each person has different color perceptions and preferences, the transformation model in this paper is not static and allows human interaction. It does, however, contain a basic reference model: a statistical model obtained by statistically characterizing thousands of existing oil paintings.

3. Brush Setting Model

Once the direction field and the color transformation model are available, the brush placement order must be determined, because an unreasonable order brings unsightly results. For this reason, this article defines an energy formula for the order of the brushes placed on the same layer of objects:

$J_n = \arg\min \sum_{i=1}^{k} \big[ \rho(I, J + k) \big]$   (4)

3 Implementation of Art Painting Drawing

3.1 Image Stratification

Image layering is the first step an artist takes when composing a picture. The artist first analyzes which objects exist in the image, the front-to-back occlusion relations between them, and the category of each object, providing information for the further drawing process. To achieve this quickly, this paper uses the Mean Shift algorithm to perform an initial segmentation; a Markov field is then built on the segmentation result and optimized by Graph Cut.
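As a self-contained illustration of the Mean Shift step only, here is a minimal flat-kernel sketch on toy one-dimensional "color" data; the real system runs on pixel colors and follows the segmentation with the MRF/Graph Cut pass, which is not shown.

```python
def mean_shift(points, bandwidth=1.0, iters=20):
    """Flat-kernel mean shift: move each mode to the mean of the
    sample points that fall within `bandwidth` of it."""
    modes = [list(p) for p in points]
    for _ in range(iters):
        for i, m in enumerate(modes):
            near = [p for p in points
                    if sum((a - b) ** 2 for a, b in zip(p, m)) <= bandwidth ** 2]
            modes[i] = [sum(c) / len(near) for c in zip(*near)]
    return modes

def label(modes, tol=0.5):
    """Group converged modes into segment labels."""
    reps, labels = [], []
    for m in modes:
        for k, r in enumerate(reps):
            if sum((a - b) ** 2 for a, b in zip(m, r)) <= tol ** 2:
                labels.append(k)
                break
        else:
            reps.append(m)
            labels.append(len(reps) - 1)
    return labels

# Two well-separated 1-D "colors" collapse into two segments.
pts = [(0.0,), (0.2,), (0.1,), (5.0,), (5.1,)]
labels = label(mean_shift(pts, bandwidth=1.0))
```

Points whose modes converge to the same density peak receive the same label, which is the over-segmentation the Graph Cut stage then refines.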

3.2 Primary Sketch

In the automatic drawing of digital art paintings such as oil paintings, the key step is how to lay out the brush: given the input image and a brush model, how should the brush model be modulated by the image data so that the intended artistic effect is expressed? Drawing on artists' actual painting processes, the main structure of the image (also known as the primary sketch) is the main factor guiding the actual layout of the brush. This paper therefore studies how to extract the primary sketch and how to base the digital art on it.

3.3 Directional Diffusion

Through the main structure analysis we obtain the main element map, which gives the main structures of different objects on different layers while ignoring the unwanted texture parts. The direction of subsequent drawing can then be determined from the direction of the resulting main structure. How to determine the direction and position of the brush in the non-structured parts was settled through discussion with painters and from principles of painting: when creating a work, after the drawing of the main structure is complete, the drawing of the other, unstructured parts is affected by the drawing direction of the surrounding main structure. A survey of the literature suggests that diffusion techniques are a good fit for this.

3.4 Color Transformation Model

In traditional art painting, pigments and colors are very important factors. Artists often use different colors to create different atmospheres, express different emotions, and convey their thinking and artistic concepts to the audience. In the study of drawing digital art paintings, analyzing and simulating the way artists use color is therefore a very important issue. Based on computational colorimetry, this paper makes a comparative statistical analysis of traditional paintings and natural images, and accordingly applies color conversion to natural images to simulate the color effects of paintings. When light from several sources with different dominant frequencies is mixed, a series of lights of other colors can be obtained by changing the intensities of the individual sources; the source colors used to generate the other colors are called the base colors. Among the commonly used color models, models based on base colors are the most common. The spectrum representation is shown in Fig. 2.


Fig. 2. The spectrum of visible light is represented by RGB model
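The paper's reference model is built from statistics over thousands of paintings. A common simplification of such statistical color transfer, and not necessarily the authors' exact method, is to match the mean and standard deviation of each channel of the source image to those of a reference painting:

```python
def transfer_channel(src, ref):
    """Shift and scale one channel of the source image so that its mean
    and standard deviation match the reference painting's channel
    (Reinhard-style statistics matching, shown here as a sketch)."""
    n = len(src)
    mu_s = sum(src) / n
    mu_r = sum(ref) / len(ref)
    sd_s = (sum((v - mu_s) ** 2 for v in src) / n) ** 0.5 or 1.0
    sd_r = (sum((v - mu_r) ** 2 for v in ref) / len(ref)) ** 0.5
    return [(v - mu_s) * sd_r / sd_s + mu_r for v in src]

# Push a dull channel toward a warmer painting palette (toy values).
src = [100, 110, 120, 130]
ref = [150, 170, 190, 210]
out = transfer_channel(src, ref)
```

Applied per channel (usually in a decorrelated color space), this makes the natural image take on the painting's palette while preserving its relative contrasts.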

3.5 Brush Model

In the field of non-photorealistic rendering there has been much research on brush models, most of it based on physical processes. The advantage of such brushes is high controllability: by changing their physical properties and simulating the physical processes, cumulative painting effects can be achieved. Their disadvantage is poor realism, because completely simulating the physical attributes of brushes, paints and papers, and their interactions, is very difficult. In the experiments of this paper, drawing on statistics and computer vision, a sample-based method was chosen to generate brushes. A large number of brush samples of different lengths and shapes was obtained from professional painters and organized into a data dictionary. Figure 3 shows some of the samples in the brush library.

Fig. 3. Some samples of oil painting brush in brush library

3.6 Brush Setting Model

In this paper, the brush placement model uses a multi-layered structure with different objects: different brush strokes are used for different objects, producing different artistic styles, and the same object has multiple layers. Each layer employs a pursuit algorithm, with alpha blending based on the image itself between layers. Given a dictionary of strokes of different sizes, shapes and colors, the total energy change of each stroke after placement on the canvas is calculated, and the strokes are sorted into a linked list by that change. Each time, the stroke with the greatest change is taken out and placed on the canvas in the direction of the direction field; the list is then recalculated, re-sorted and iterated until the change falls below a threshold value, which depends on the object. The total energy is defined by the distance between the histogram of the color-converted image and the histogram of the currently drawn result; the change in total energy is the change in energy before and after adding a brush. The result is shown in Fig. 4.

Fig. 4. Drawing result
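The greedy stroke-ordering loop just described can be sketched as follows; the pixelwise distance below stands in for the paper's histogram-based energy, and the stroke representation is a toy one invented for illustration.

```python
def energy(canvas, target):
    """Toy energy: pixelwise distance between canvas and target
    (the paper uses a histogram distance instead)."""
    return sum(abs(c - t) for c, t in zip(canvas, target))

def apply_stroke(canvas, stroke):
    """A 'stroke' here is (position, length, value) on a 1-D canvas."""
    pos, length, value = stroke
    out = list(canvas)
    for i in range(pos, min(pos + length, len(out))):
        out[i] = value
    return out

def paint(canvas, target, strokes, threshold=1):
    """Repeatedly place the stroke that reduces the energy most,
    until no stroke improves it by more than the threshold."""
    order = []
    while True:
        best = min(strokes,
                   key=lambda s: energy(apply_stroke(canvas, s), target))
        gain = energy(canvas, target) - energy(apply_stroke(canvas, best), target)
        if gain <= threshold:
            break
        canvas = apply_stroke(canvas, best)
        order.append(best)
    return canvas, order

target = [5, 5, 5, 9, 9, 9]
strokes = [(0, 3, 5), (3, 3, 9), (0, 6, 1)]
canvas, order = paint([0] * 6, target, strokes)
```

Each iteration strictly reduces the energy by more than the threshold, so the loop terminates; the recorded `order` is the brush placement sequence.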

4 Conclusions

Based on painters' long-term practice, this paper first analyzes human visual perception of art and the artist's process of drawing an oil painting, and abstracts a set of mathematical models covering the path from visual perception to final drawing. Guided by these models, and with the help of image layering, color conversion, direction field diffusion and related knowledge, the traditional algorithm for generating oil paintings is improved to realize the proposed abstract mathematical model.


References

1. Guo, Q.: Generating realistic calligraphy words. IEEE Trans. Fundam. E78-A(11), 1556–1558 (1995)
2. Helman, P., Liepins, G., Richards, W.: Foundations of intrusion detection. In: Proceedings of the Fifth Computer Security Foundations Workshop, pp. 114–120 (1992)
3. Ai, Z., Cao, Y., Xiao, L., Wang, H.: Graphics driven perception efficient rendering model for heterogeneous hardware. J. Syst. Simul. 28(10), 2394–2399 (2016)
4. Zhu, L., Yue, A., Zhou, C.: Rapid rendering method for 3D point cloud model for urban buildings. J. Comput. Aided Des. Gr. 27(08), 1442–1450 (2015)
5. Li, Y., Li, Z., Wu, B., Yin, Z.: A 3D GIS parallel rendering model using OpenMP technology. J. Wuhan Univ. 38(12), 1495–1498 (2013)
6. Huang, X., Song, J., Yu, C.: Industrial control computer based on OpenGL ES's 3D model of mobile platform. 26(01), 60–62 (2013)
7. Ji, Z., Wang, Y.: Computer aided design and graphics for maintaining the intrinsic properties of 3D models. 24(09), 1151–1155 (2012)
8. Wang, X., Qin, X., Xin, L.: Research progress in non-photorealistic rendering technology. Comput. Sci. 37(09), 20–27 (2010)
9. Zhang, Y., Li, L., Jin, Y., Zhu, H.: Study on the structured drawing model of tree based river system based on graph theory. J. Wuhan Univ. 06, 537–539 (2014)
10. Xiao, X., Chen, X., Tang, M., Dong, J.: Drawing based on multiple rendering models. J. Zhejiang Univ. 06, 36–41 (2006)

LiveCom Instant Messaging Service Platform

Qiang Miao(&), Hui Li, and Xiaolong Song

Dalian Neusoft University of Information, Dalian 116023, Liaoning, China
[email protected]

Abstract. This paper presents "LiveCom", an instant messaging service platform frame that works across messaging platforms and across operating systems. LiveCom enables communication among different chat applications, and among different groups of the same chat application, in an instant messaging environment. The frame is written in Python and provides user-defined application access to popular instant messaging platforms. It is highly portable across operating systems and allows developers to access the frame in a unified way.

Keywords: Instant messaging · Service platform · Python

1 Introduction

Nowadays, network-based instant messaging has become more and more important. Users with collaboration needs in particular gather in groups on platforms such as QQ, WeChat and Baidu Hi, where they release announcements and assignments, share work progress, and co-process work tasks. Most mature cross-messaging-platform service frames support only a few fixed platforms, such as communication between Telegram and QQ, and lack generality. Take emoticons as an example: Telegram uses Emoji as its default emoticon set, but QQ uses its self-defined emoticon set, which is still being extended; the two emoticon sets must be converted when transmitting information. A service frame supporting fixed platforms handles only the information conversion among the platforms it already knows, so development costs rise whenever the frame is migrated to, or extended with, new platforms.
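The emoticon-conversion problem mentioned above can be made concrete with a small sketch; the QQ codes and the mapping tables here are invented for illustration, and real platforms ship much larger sets.

```python
# Hypothetical mapping tables: each platform maps to a shared
# intermediate vocabulary, so adding a platform adds one table, not N.
QQ_TO_COMMON = {"/wx": "smile", "/dy": "shock"}
COMMON_TO_EMOJI = {"smile": "\U0001F642", "shock": "\U0001F632"}

def qq_to_telegram(text):
    """Translate QQ emoticon codes to Emoji via the shared vocabulary."""
    for code, name in QQ_TO_COMMON.items():
        text = text.replace(code, COMMON_TO_EMOJI.get(name, name))
    return text

msg = qq_to_telegram("hello /wx")
```

Routing every conversion through a common vocabulary is one way a general frame avoids writing a dedicated converter for every pair of platforms.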

2 Application of Development Technology

Python is an object-oriented, interpreted programming language: functions, modules, numbers and strings are all objects, and Python fully supports inheritance, overloading, derivation and multiple inheritance. Python's design focuses on readability and simple syntax [1], and a task can be accomplished with less code than in C++ or Java. The threading module is an encapsulation of the low-level _thread module [2]; it provides a high-level interface for operating system thread control and lock encapsulation for basic thread synchronization.

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1638–1646, 2019.
https://doi.org/10.1007/978-981-13-3648-5_212


The subprocess module provides cross-platform sub-process management for Python [3]. It was designed to replace some older functions in the os module (such as os.system and os.spawn*), and provides more flexible access to the standard data streams (standard input, standard output and standard error) and the exit status of sub-processes. The queue module implements multi-producer, multi-consumer queues and is used to exchange information safely among threads.
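The stdlib primitives described above can be exercised with a tiny producer-consumer sketch, the same pattern a message bus between adapter threads relies on:

```python
import queue
import threading

# Messages flow between threads through a thread-safe Queue.
bus = queue.Queue()
received = []

def consumer():
    while True:
        item = bus.get()
        if item is None:          # sentinel: shut the worker down
            break
        received.append(item.upper())
        bus.task_done()

worker = threading.Thread(target=consumer)
worker.start()
for text in ["hello", "world"]:
    bus.put(text)
bus.put(None)
worker.join()                      # received == ["HELLO", "WORLD"]
```

queue.Queue handles all locking internally, so producer and consumer threads never touch a shared structure directly.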

3 LiveCom Service Platform Frame

3.1 Overall Design of LiveCom

Cross-platform generality, covering both messaging platforms and operating system platforms, is central to the design [4]. Front adapters can be programmed for different operating system requirements without changing the frame. The overall design is shown in Fig. 1. Access comes from outside [5]: messages and instructions enter through the platform interface and pass to the message routing layer for screening and addressing [6]. The next hop is decided by the message routing layer's configuration state. If a message is to be transmitted to another platform interface, it is forwarded directly; if it is to reach a message processing service or the application layer, it is transmitted to the listening interface of the specific service or application program [7]. Response information from applications and services is transmitted back to the user through the message routing layer and the platform interface.

- User application program (application program layer)
- Message processing service (service layer)
- Message routing and session management (message routing layer)
- Platform interface and operating system interface (interface and adapter layer)
- Operating system and Python interpreter (operating system and runtime layer)

Fig. 1. Overall design
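The routing decision described above can be sketched as a small dispatch table; the rule table, channel names and handler below are illustrative, not the frame's real API.

```python
# Routing rules: (platform, channel) -> action. "forward" passes the
# message straight to another platform interface; "service" hands it
# to a message processing service in the service layer.
routes = {
    ("qq", "group-42"): ("forward", "telegram"),
    ("qq", "group-99"): ("service", "echo"),
}

services = {"echo": lambda msg: "echo: " + msg["text"]}
delivered = []

def route(msg):
    action, target = routes.get((msg["platform"], msg["channel"]),
                                ("drop", None))
    if action == "forward":
        delivered.append((target, msg["text"]))
    elif action == "service":
        # the service's reply goes back out through the same platform
        delivered.append((msg["platform"], services[target](msg)))

route({"platform": "qq", "channel": "group-42", "text": "hi"})
route({"platform": "qq", "channel": "group-99", "text": "hi"})
```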

3.2 Overall Structure of Frame

The frame structure is shown in Fig. 2. The user space corresponds to the various application programs, which can be written in any language; a user application program connects to the kernel through a user application adapter.


[Figure 2 shows the frame structure: in user space, user applications attach through back-end adapters; in kernel space, the scheduling service layer (user session manager, message routing manager and other scheduling programs) connects the user application adapters, kernel application adapters and front-end adapters; the platform interface layer links message front-end service interfaces to message servers, local message servers and message proxy servers.]

Fig. 2. Frame structure

The kernel space is the operating environment of the LiveCom frame [8]; the message bus and the adapters live there. The kernel modules are associated with each other through the library registration and library loading functions of the API manager. The kernel application adapter is a kind of back-end adapter that can accomplish tasks independently, without external application support. The platform access part consists of the actual interfaces connecting to each messaging platform [9]; in some cases it is not an actual platform service but an agent program connecting to the front adapter. The message front-end service interface A shown in Fig. 2 is an example [10]: it is connected to the local message service proxy, and the actual message service is handled through a two-layer proxy server program.

3.3 Function Structure Design

The function structure of the system is shown in Fig. 3.

(1) Message Front-End Adapter

LiveCom Instant Messaging Service Platform


[Fig. 3 shows the event flows among the components: message front-end adapters A/B, the user application adapter, the service access adapter, the session manager, the message routing manager, and other scheduling-layer services exchange message sending/receiving events, application sending/receiving events, service demand/reply events, and session, process, and routing control commands through the pushing and popping stack queues of the message bus.]

Fig. 3. Function structure of system

Message front-end adapters convert message events from the messaging platform into a general format and push them onto the pushing stack queue of the message bus; they also take outgoing message events from the popping stack queue of the message bus, convert them back into platform-specific messages, and send them out.

(2) User Application Adapter
The application back-end adapter is in charge of sending application-layer events to the pushing stack queue of the message bus, and accepts control from the session manager to manage the application process. The user application adapter can also send session control commands to inform the session manager to close expired sessions.

(3) Schedule Layer Service Adapter
The schedule layer service adapter is an adapter type; all adapters of this type are in charge of event processing. There are three default adapters of this type: the session manager, the message routing manager, and the name resolution service.
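The front-end adapter behavior described above can be sketched as follows. This is a minimal illustration, not the platform's actual code: the class and field names (`FrontEndAdapter`, the event dictionary keys) are assumptions; only the queue-based push/pop flow follows the text.

```python
import queue

# Sketch of a message front-end adapter: platform-specific events are
# converted to a general format and pushed onto the bus's pushing stack
# queue; outgoing events are taken from the popping stack queue and
# converted back to the platform's own format.
class FrontEndAdapter:
    def __init__(self, incoming_queue, outgoing_queue):
        self.incoming_queue = incoming_queue   # pushing stack queue of the bus
        self.outgoing_queue = outgoing_queue   # popping stack queue of the bus

    def on_platform_message(self, raw):
        # Convert a platform-specific event into the general message format.
        event = {"type": "message", "source": raw["from"], "text": raw["body"]}
        self.incoming_queue.put(event)

    def send_pending(self):
        # Take one general-format event off the popping stack queue and
        # return it in the platform's own format for sending.
        event = self.outgoing_queue.get_nowait()
        return {"to": event["target"], "body": event["text"]}

incoming, outgoing = queue.Queue(), queue.Queue()
adapter = FrontEndAdapter(incoming, outgoing)
adapter.on_platform_message({"from": "user1", "body": "hello"})
print(incoming.get()["text"])  # hello
```

The same two queues would be shared with the routing manager and the back-end adapters, which is what lets the kernel stay agnostic to each platform's message format.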



(4) Session Manager
The session manager maintains and coordinates the relation between user-space processes and the message front-end communication routing. When the session manager receives a new session creation request, the message routing manager applies the routing adjustment and the initial process creation to the session adapter. In this way the data flow of the specific user application adapter process and the data flow of the message front-end adapter are connected. The session manager also accepts process management requests from the service access controller and forwards them to the user application adapter for processing.

(5) Message Routing Manager
The message routing manager is in charge of coordinating the information routing relations of the front-end and back-end adapters. Other adapters can adjust the routing status of events and messages in the frame through routing control commands. The routing transmission of the message front-end is shown in Fig. 4.

[Fig. 4 shows the path: message front-end adapter → pushing stack queue → message routing manager → popping stack queue → message front-end adapter.]

Fig. 4. Front-end message routing transmit process
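The Fig. 4 path can be sketched in a few lines. This is an illustrative sketch under stated assumptions, not LiveCom's real API: the class name, the routing-table shape, and the "routing control command" method are all hypothetical.

```python
import queue

# Sketch of the routing manager: it pops events from the pushing stack
# queue, looks the source adapter up in a routing table that is adjusted
# by routing control commands, and places the event on the popping stack
# queue tagged with the destination adapter.
class MessageRoutingManager:
    def __init__(self, incoming_queue, outgoing_queue):
        self.incoming_queue = incoming_queue   # pushing stack queue
        self.outgoing_queue = outgoing_queue   # popping stack queue
        self.routes = {}                       # source adapter -> destination adapter

    def control(self, source, destination):
        # A "routing control command": adjust the routing table.
        self.routes[source] = destination

    def step(self):
        # Route one event from the pushing stack to the popping stack.
        event = self.incoming_queue.get_nowait()
        event["destination"] = self.routes[event["source"]]
        self.outgoing_queue.put(event)

push_q, pop_q = queue.Queue(), queue.Queue()
router = MessageRoutingManager(push_q, pop_q)
router.control("front_end_A", "front_end_B")
push_q.put({"source": "front_end_A", "text": "hello"})
router.step()
```

In the real frame the routing step would run in the bus worker threads rather than being called manually.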

(6) Service Access Controller
The service access controller is in charge of collecting service request events from user application service programs under user control, querying the requested data from each adapter or sending the corresponding control commands, and returning the results to the application service program in user space.

(7) Name Resolution Server
The name resolution server resolves the actual source names of message objects from front-end or back-end adapters into user-friendly or user-identifiable name information.
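The name resolution idea can be illustrated with a simple lookup. This is a minimal sketch with hypothetical identifiers, not the platform's implementation:

```python
# Sketch of name resolution: raw platform-specific source identifiers are
# mapped to user-identifiable names; unknown identifiers fall back to the
# raw form. The "qq:..." identifiers below are made up for illustration.
class NameResolutionServer:
    def __init__(self):
        self._names = {}          # raw source identifier -> friendly name

    def register(self, raw_id, friendly_name):
        self._names[raw_id] = friendly_name

    def resolve(self, raw_id):
        # Return the raw identifier itself if no friendly name is known.
        return self._names.get(raw_id, raw_id)

resolver = NameResolutionServer()
resolver.register("qq:10001", "Alice")
print(resolver.resolve("qq:10001"))  # Alice
print(resolver.resolve("qq:10002"))  # qq:10002
```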



4 Implementation of LiveCom Service Platform Frame

4.1 Basic Environment Configuration

LiveCom supports POSIX-compatible platforms such as Linux and FreeBSD, as well as Windows. The Python version should be 3.6 or above. The design may impose extra environment restrictions depending on the requirements of the adapter modules. Taking the CQHTTP front-end adapter as an example, if the local platform proxy is started, the local operating system must be able to execute Windows application programs: either Windows is used as the running platform, or the Windows-specific components that CQHTTP relies on run on POSIX-compatible platforms through Wine as a compatibility layer. The platform proxy can also be based on network communication, in which case the adapter adds no environment restrictions on the local platform, but extra equipment is required.

4.2 Implementation of Components

The implemented components include the event bus, API manager, API host class, API client class, API broadcast receiver class, API loader, UMP basic class (APIUMPModuleUtils), message routing of the scheduling layer service, name resolution service, CQHTTP front-end adapter, session manager, back-end adapter, local standard stream interface (NPSSI), and session control service.

(1) Event Bus
Table 1 lists the components required by the initialization of the event bus class.

Table 1. Event bus components

Component name | Type | Application
self._gc_collected_refs | integer (int) | Confines the total quantity of garbage-collected references
self._tree | dictionary key-value pairs (dict) | Records the data structure of the event bus
self._token | integer (int) | Access token of the event bus, used for authentication when stopping the bus
self.name | string (str) | Name of the event bus
self._incoming_queue | queue object (Queue) | Message queue object of the pushing stack
self._outgoing_queue | queue object (Queue) | Message queue object of the popping stack
self._incoming_thread_run_mark | bool value (bool) | Marks the running status of the pushing stack queue thread
self._outgoing_thread_run_mark | bool value (bool) | Marks the running status of the popping stack queue thread
self._first_started | bool value (bool) | Marks the first initialization of the event bus
self._thread_worker_running | bool value (bool) | Marks whether the pushing/popping stack processing functions have started
self.APIManager | API manager object (APIManager) | Exists only on the system bus: the API manager object
self.is_sysbus | bool value (bool) | Marks whether the bus is the system bus
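A constructor matching Table 1 might look like the sketch below. The attribute names come from Table 1; the initialization logic (e.g. how the access token is generated) is an assumption for illustration, not the platform's actual code.

```python
import queue

# Sketch of an event bus initializer mirroring the components in Table 1.
class EventBus:
    def __init__(self, name, is_sysbus=False, api_manager=None):
        self._gc_collected_refs = 0            # int: garbage-collection counter
        self._tree = {}                        # dict: data structure of the bus
        self._token = id(self)                 # int: token checked when stopping the bus (assumed scheme)
        self.name = name                       # str: bus name
        self._incoming_queue = queue.Queue()   # pushing stack queue
        self._outgoing_queue = queue.Queue()   # popping stack queue
        self._incoming_thread_run_mark = False # pushing stack thread running?
        self._outgoing_thread_run_mark = False # popping stack thread running?
        self._first_started = False            # first initialization done?
        self._thread_worker_running = False    # worker functions started?
        self.is_sysbus = is_sysbus             # True only for the system bus
        # Only the system bus holds the API manager object.
        self.APIManager = api_manager if is_sysbus else None

sysbus = EventBus("system", is_sysbus=True)
print(sysbus.is_sysbus)  # True
```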



(2) API Manager Components
The API manager has only one instantiated object in a single LiveCom frame instance; the object is created together with the system bus. Table 2 lists the components required by the API manager initialization.

Table 2. API manager components

Component name | Type | Application
self._group_reference_table | dictionary key-value pairs (dict) | Records monitoring-group reference status information between API hosts and clients
self._apihost_info_dataset | dictionary key-value pairs (dict) | Records monitoring information used by API hosts
self._sysbus | event bus (EventBus) | Accesses the upper system bus object
self.APILoader | API loader object (APILoader) | Created with the API manager; in charge of the loading order relations among modules
self._broadcast_receiver_object | dictionary key-value pairs (dict) | Records references of API broadcast receivers for information storage
self._broadcast_connector_object | dictionary key-value pairs (dict) | Records references of API broadcast connectors for information storage
self._broadcast_refs | dictionary key-value pairs (dict) | Records references of the monitoring group of each broadcast receiver and connector
self.user_extra_data | dictionary key-value pairs (dict) | Stores user-defined data of the corresponding API manager

4.3 System Test

(1) The message bus test results are shown in Fig. 5; they indicate that the message bus operates normally. The messages marked in red are incorrect on purpose, so the result also shows that the system handles exceptions correctly. (2) The message routing transmission test result is shown in Fig. 6. (3) The test result of multi-group message transmission in an actual application is shown in Fig. 7.


Fig. 5. Message bus test result

Fig. 6. Message routing transmit result

Fig. 7. Instance of multi-group message transmit




5 Conclusion

LiveCom implements the basic functions of user interaction logic and message transmission. Adding support for popular service platforms such as WeChat and Telegram, as well as cross-platform media interaction and conversion, is planned for future development. Except for the processing logic of the front-end adapters, LiveCom has already implemented a cross-platform media format conversion standard and tool library. Performance optimization is also important in further development.

Acknowledgements. This research was supported by the College Students' Innovation and Entrepreneurship Project of Dalian Neusoft University of Information in the first half of 2018 (Project No. 201812005: LiveCom Instant Messaging Service Platform) and the 2018 Project of the 13th Five-Year Plan of Education Science in Liaoning Province (Project No. JG18DB037: Research of Applied Talents Cultivation Mode Based on Integration of Production and Education; Project No. JG18DB036: Research of Innovative Talents Cultivation Modes of IT in "School-Small Enterprise").

References
1. https://www.python.org/
2. He, H., Wang, Z., Wang, X., Ji, L.: Analyzing and processing worksheet on multi-platform based on Python. Electron. Des. Eng. 19, 67–70 (2011)
3. Chen, C.: Research on cloud-based service-oriented Python parallel computing. University of Electronic Science and Technology of China (2014)
4. IEEE, Open Group: POSIX.1-2008/IEEE Std 1003.1-2008/The Open Group
5. Technical Standard Base Specifications, Issue 7 [S]. http://pubs.opengroup.org/onlinepubs/9699919799/ (2016)
6. Python Software Foundation: threading—Thread-based parallelism [EB/OL]. https://docs.python.org/3/library/threading.html (2018)
7. Julien, D., Wang, F.: The Hacker's Guide to Python, 3rd edn. Posts & Telecom Press (2016)
8. Python Software Foundation: queue—A synchronized queue class [EB/OL]. https://docs.python.org/3/library/queue.html (2018)
9. Python Software Foundation: subprocess—Subprocess management [EB/OL]. https://docs.python.org/3/library/subprocess.html (2018)
10. Peter, W., Jeffrey, E., Allen, B.D.: Event-Driven Programming [EB/OL]. http://openbookproject.net/thinkcs/python/english3e/events.html (2012)

Empirical Research on Brand Building of Commonweal Organization Based on Data Analysis Shujing Gao(&) Tianjin Normal University Jingu College, No. 393, Extension of Bin Shui West Road, Xi Qing District, Tianjin, China [email protected]

Abstract. The brand of a commonweal organization is an intangible asset that brings a premium to the organization and produces added value. Its carrier is the name, term, mark, symbol, or design, or a combination of these, used to distinguish its products or services from others'. Although commonweal organizations are not run for profit, it is of profound social significance for them to improve brand awareness, establish their own brand culture, build a valuable brand image, and attract social attention. In this study, 455 valid samples were obtained by means of a questionnaire survey with cluster sampling; the data were compiled and analyzed by computer. Through the analysis and discussion of this first-hand data, the brand building of public welfare organizations is studied empirically. After the data analysis, we draw a conclusion and put forward our own view: through the strength of the brand culture of public organizations, more people are attracted to public organizations; in this way, the brand influence of public organizations can be further enhanced, providing a virtuous cycle for their brand building.

Keywords: Brand building · Commonweal organization · Data analysis

1 Introduction

Commonweal organizations generally refer to organizations or institutions that do not take profit maximization as the primary goal but take public welfare as the main goal. Their work involves environmental protection, rescue, poverty alleviation, the rights and interests of women and children, animal protection, and so on. The brand of a commonweal organization is an intangible asset that brings a premium to the organization and produces added value. Its carrier is the name, term, mark, symbol, or design, or a combination of these, used to distinguish its products or services from others' [1]. Its source of appreciation is the impression its carrier leaves in the public mind. The brand building of commonweal organizations refers to the behavior and efforts of their owners to plan, design, publicize, and manage their brands. It is of profound social significance to improve brand awareness, establish an own brand culture, and set up a valuable brand image of
© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1647–1653, 2019. https://doi.org/10.1007/978-981-13-3648-5_213


S. Gao

commonweal organizations and attract social attention, although commonweal organizations are not for profit-making purposes. The brand building of a commonweal organization mainly rests on the following four pillars: brand identification, brand planning, brand extension, and brand assets [2]. In view of the availability and quantifiability of index data, the researchers mainly measure the brand building of commonweal organizations from two aspects, brand recognition and brand equity, because brand planning and brand extension are not easy to quantify [3]. The specific factors are brand awareness, reputation, brand loyalty, cultural cohesion, social influence, and brand association (Fig. 1).

Fig. 1. Four pillars of commonweal organization brand building

2 Research Design

2.1 Research Procedure

Preparatory Phase. According to the needs of the investigation, determine the scope of the survey subjects and the questionnaire items, analyze which items are important and whom to investigate, and analyze the characteristics of the respondents as the basis for formulating the questionnaire.

Questionnaire Arrangement Phase. Determine the structure of the questionnaire and arrange the problems. Inspect, screen, design, and lay out each item in detail; the necessity and feasibility of every question must be carefully considered.

Trial and Revision Phase. The designed questionnaire is administered on a small scale in order to find the problems in the first draft and make the necessary revisions to refine the questionnaire.

Implementation Phase. The final version of the questionnaire was made into a formal questionnaire. An anonymous questionnaire was adopted, and cluster sampling was used to investigate second- and third-year undergraduates. A total of 476 questionnaires were distributed; 21 unqualified samples were excluded, and 455 valid samples were obtained.



Summing-up Phase. The data of the valid questionnaires were compiled, imported into the computer, analyzed, and arranged; finally, conclusions were drawn.

2.2 The Structure and Content of the Questionnaire

The questionnaire was divided into three parts. The first part investigates the brand recognition of public welfare organizations [4]. The main items are as follows: The brand has a high reputation [5]. The organization's brand enjoys high social approval. The organization has a strong visual impact. I was impressed by a particular trait of the brand. I like the slogan of the brand. The core values of the brand resonate with me [6]. I like the organizational culture of the brand. I would like to be involved in the activities of the organization. This part adopts a differential scale using a seven-point Likert-type format [7].
The second part of the questionnaire investigates the brand communication of public welfare organizations. The main questions are: How many public welfare organizations are there in your city? When did you last see the brand of a public welfare organization? Through which channels do you contact the brands of public welfare organizations? Which commonweal organization's slogan impressed you most? What do you know about the activities of public welfare organizations? Which representative public welfare organizations do you know? In which aspects should public organizations improve their brand building? What kind of spirit does a public welfare organization show?
The third part of the questionnaire investigates the brand assets of public welfare organizations, including the following items: It is a leader among public welfare organizations. The organization has a good social reputation and is very popular. The organization actively serves the community [8]. The organization is upright and free of corruption. Even if there is negative information about the organization, I believe it is not true. If I take part in social activities, those of this public organization are preferred. Public organizations have contributed much to society [9]. I would like to introduce the charity to the people around me. The charity's brand image is distinctive. The brand gives me a feeling of comfort [10]. Every time I think of the organization, I feel reassured.
For each aspect, two or more questions were asked from different angles to ensure the scientific soundness and rationality of the questionnaire, for example: "The core values of the brand resonate with me" and "I like the organizational culture of the brand". The data are sorted in Table 1 and plotted in Fig. 2. The correlation coefficient between A and B is 0.999 and is significant at the 0.01 level, which indicates a significant positive correlation between A and B. Multiple groups of similar tests showed that the reliability and validity of the questionnaire were sufficient.

Table 1. Reliability and validity

Item | Very agree | A little agree | Be unable to explain clearly | A little disagree | Totally disagree
A: The core values of the brand can resonate with me | 118 | 199 | 116 | 22 | 0
B: I like the organizational culture of the brand | 115 | 200 | 111 | 29 | 0
Fig. 2. The scatter diagram
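The reported coefficient of 0.999 can be reproduced directly from the Table 1 response counts with a plain Pearson correlation over the five response bins:

```python
import math

# Response counts for items A and B from Table 1
# (Very agree ... Totally disagree).
a = [118, 199, 116, 22, 0]
b = [115, 200, 111, 29, 0]

def pearson(x, y):
    # Pearson correlation coefficient: covariance over the product
    # of standard deviations (computed from raw sums, no libraries).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((u - mx) * (v - my) for u, v in zip(x, y))
    var_x = sum((u - mx) ** 2 for u in x)
    var_y = sum((v - my) ** 2 for v in y)
    return cov / math.sqrt(var_x * var_y)

print(round(pearson(a, b), 3))  # 0.999
```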

3 Research and Analysis Results

3.1 Which Ways Do People Contact the Brand of Public Welfare Organizations?

According to Fig. 3, people mainly contact public welfare organizations through the Internet. With the rapid development of the information age, network communication has become the mainstream channel, with TV transmission secondary. Public welfare organizations should pay more attention to these two channels.

Fig. 3. Question: Which ways do you contact the brand of public welfare organizations?


3.2 When Did People Last See the Brand of a Public Welfare Organization?

As shown in Fig. 4, most of the 455 respondents last saw the brand of a public interest organization sometime this year; the rest are scattered over this month, this week, the last two or three days, and today. This shows that the publicity of public welfare organizations is insufficient and that public attention to public welfare needs to be improved.

Fig. 4. Question: When did you last see the brand of public welfare organization?

3.3 How Should Public Welfare Organizations Improve Their Brand Building?

As shown in Fig. 5, most respondents rank social influence first for the brand building of public welfare organizations; it clearly reaches the peak value of the sample. Only a few people believe that brand association should be promoted first, which suggests that the brand association of public organizations is already quite good. Second place is mainly distributed across reputation, cultural cohesion, and social influence. Third place has the most uniform distribution of sample sizes.

Fig. 5. Question: In which aspects should public organizations improve their brand building?

3.4 What Kind of Spirit Does a Public Welfare Organization Show?

The researchers found that many answers have dual characteristics, and all of them commend the brand spirit of the organizations, which shows that the reputation of public welfare organizations among the public is very high. The researchers divided the complex data into four categories: mental, health, self-strengthening, and team spirit. The mental category concerns improving personal spiritual accomplishment, such as friendship, strength, courage, dedication, teamwork, and friendliness. The health category concerns improving personal health, such as green living, culture, health, and sports. The self-strengthening category concerns improving the quality of individuals and groups and making them more courageous, such as innovation, sports, "higher, faster, stronger", selfless dedication, willingness to help others, struggle, and self-improvement. The team spirit category concerns improving team synergy, such as unity, cooperation, friendship, tidiness, team spirit, dedication, a friendly spirit, public participation, and solidarity.

From the data statistics, we can see that public organizations help individuals improve their quality and culture and improve their teamwork ability. The numbers of people who mentioned mental health and physical health were roughly equal. On the whole, although the numbers differ, the differences among the categories are not large. Generally speaking, public interest organizations benefit society.

4 Conclusions and Suggestions

In order to attract more public interest in public welfare organizations, they should do well in brand building. Through the strength of their brand culture, more people are attracted to public organizations; in this way, the brand influence of public organizations can be further enhanced, providing a virtuous cycle for their brand building. Through the investigation of the brand construction of public welfare organizations, we can see that some of them have a high reputation and have contributed much to society. At the same time, some people are not familiar with the brands of public welfare organizations. Public welfare organizations should pay more attention to their own brand building so as to win more support from society and better contribute to it.



References
1. Gao, S.: Practical Technical Manual for Market Research, 1st edn. The North Literature and Art Press (2018). (in English)
2. Galan Ladero, M.M., Galera Casquet, C., Singh, J.: Understanding factors influencing consumer attitudes toward cause related marketing. Int. J. Nonprofit Volunt. Sect. Mark. 20(1), 52–70 (2015). (in English)
3. Chokkalingam, T.S., Ramachandran, T.: The perception of donors on existing regulations and code of governance in Singapore on charities and non-profit organizations—a conceptual study. Asian Soc. Sci. 11(9), 89 (2015). (in English)
4. Li, D.: Contemporary Advertising. China Development Press, Beijing (2015). (in Chinese)
5. Chen, F., Sun, Y., Wang, B.: Study on the influencing factors of micro-blog fans' loyalty. Inf. Mag. 12(1), 120–126 (2014). (in Chinese)
6. Chen, G.: Management of Organizational Behavior. Qinghua University Press, Beijing (2014). (in Chinese)
7. Wu, J., Nie, Y.: Marketing. Higher Education Press, China (2014). (in Chinese)
8. Vanauken, B.: Developing the brand building organization. J. Brand Manag. 7(4), 281–290 (2000). (in English)
9. So, K.K.F., King, C., Hudson, S., Fang, M.: The missing link in building customer brand identification: the role of brand attractiveness. Tour. Manag. 59, 640–651 (2017). (in English)
10. Manthiou, A., Kang, J., Hyun, S.S., Fu, X.X.: The impact of brand authenticity on building brand love: an investigation of impression in memory and lifestyle-congruence. Int. J. Hosp. Manag. 75, 38–47 (2018). (in English)

The Influence Study of Online Free Trial Activities on Product Sales Volume

Shushu Gu1,2, Yun Lu1, Xi Chen1(&), Ranran Hua1, and Han Zhang1

1 Business School, Nanjing University, Nanjing, China
[email protected]
2 JinLing College, Nanjing University, Nanjing, China

Abstract. Online free trial weakens information asymmetry to some extent. By analyzing online free trial data from Taobao and Tmall, the authors discuss the influences of product attributes and trial application activities on the sales volume of trial products. The results show that product evaluation score, product evaluation quantity, and the number of free trial samples have significantly positive influences on product sales volume, while consumers' proportion of successfully obtaining trial products has a significantly negative influence on product sales volume.

Keywords: Online trial · Free trial · Online sales volume

1 Introduction

With the popularity of the Internet, online consumption has become increasingly common, and it is increasingly difficult for consumers to select among homogeneous products on the Internet [1]. Consumers urgently need to be able to purchase the products they need at a suitable price. Online free trial products therefore emerged, with the main purpose of bringing Internet traffic to the online store and promoting purchases. Online free trial activities mean that customers obtain free trial products through online application, so as to reduce the uncertainty and risks relating to products as perceived by customers (Rogers 1995) [2]. This reduces information asymmetry to some extent and guides customers to a store, while improving product popularity. For merchants, online free trial activities not only improve the popularity and sales volume of trial products, but also indirectly enhance the public praise of a store or a brand and help recommend other products [3]. Moreover, consumers take delight in accepting free trials, which greatly reduces the risks they face; consumers who gain a trial chance obtain practical free commodities, which helps them make a suitable decision [4]. Existing studies on trials mainly focus on public praise, while scholars seldom discuss the traffic-generating role of trial products and the specific operation of trial marketing. This thesis makes innovations in this respect.

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1654–1662, 2019. https://doi.org/10.1007/978-981-13-3648-5_214



2 Literature Review

If consumers have not used a product, they face uncertainty about its value [5]. Free trial can directly reduce this uncertainty: before consumers purchase a product, free trial gives them a deeper understanding of it. Scholars pay more attention to the practical significance of studying product trial, so studies concentrate on trial marketing and new-product market development. When a brand-new product, or a new product of a mature brand, is listed, consumers encounter two risks in the purchasing process: uncertainty about product functions, and the poor experience of bad functions. Trying before purchasing enables consumers to obviously reduce the uncertainty of product functions and the risks caused by the poor consequences of improper selection (Jiang Yan 2013). The marketing effect of trial marketing can be summarized into a direct effect and an indirect effect (Heiman et al. 2001) [6]. The direct effect is short-term: trial marketing directly increases the sales volume of trial products and other products in a store and gains direct economic earnings. Starting from gifts and the principle of reciprocity, the study of Jiang Yan (2013) finds that trial marketing brings impulse buying [7]. The indirect effect is long-term: trial marketing improves the public praise and business reputation of merchants (Heiman et al. 2001), which is good for gaining customers' lifelong value and improving brand value. Heiman et al. (2001) indicated that balanced sales and business reputation have a positive correlation with trial efficiency [8]; the costs of short-term trial marketing can be offset by long-term business reputation. Even though the marketing mode of free trial is very popular and consumers' acceptance is relatively high, some problems are inevitable. Enterprises face a large challenge in retaining the customers brought by free trial promotion activities, and should realize that customers brought by free trial differ from general customers [9]. Datta et al. (2015) studied how using trial marketing to gain customers influences the customer relationship, thereby affecting customer retention, the response to enterprise marketing activities, and customers' lifelong value. Trial marketing generates some cannibalization effects while also causing a positive network effect (Cheng and Tang 2010). Cheng and Tang indicated that trial marketing can indeed increase the user base of products, so as to improve product value [10]. However, trial products may encroach on the market of official products; in other words, consumers' demands are satisfied by trial products instead of by purchasing official products.

3 Research Hypotheses In this thesis, the author thinks that online free trail activities should firstly focus on the subjects of trial activities—attributes of trial products. Consumers mainly give considerations to price favorable strength, monthly sales volume, evaluation score, evaluation comments and collection of trial products. The larger price favorable strength is, the higher earning degree perceived by consumers will be. In this way, it can simulate consumers’ purchasing desire, so as to positively affect product sales volume. Meanwhile, monthly product sales volume, evaluation score, evaluation comments and

1656

S. Gu et al.

collection are key indexes to measure historical sales performance and popularity of commodities. All of these will make consumers recognize products, improve the brand consciousness, reinforce consumers’ brand loyalty, and further improve advertising conversion effect and persuasive effect, so as to affect sales volume. Based on it, the author proposes the following hypotheses: H1: Price favorable strength of online free trail products has the positive influences on trail product sales volume. H2: Evaluation score of online free trail products has the positive influences on trail product sales volume. H3: Evaluation comments of online free trail products have the positive influences on trail product sales volume. H4: Collection of online free trail products has the positive influences on trail product sales volume. As a marketing mode that is recognized and welcomed by online retailers and consumers, online free trail marketing has the excellent performance in sales. Advertising can change consumer preferences from multiple aspects. Firstly, advertising can publicize the product attribute and arouse consumers’ brand consciousness. This is the notification effect of advertising. Secondly, advertising will affect consumers’ brand comments and improve the brand loyalty, implying the persuasive effect of advertising. Thirdly, advertising will affect product evaluation when consumers experience products, implying the advertising conversion effect. Since advertising reminds the guiding functions, potential consumers will concern the specific features of products. The trial activities mentioned in this thesis get involved in advertising conversion role. Trail application activities bring more application and sales volume. In this thesis, the author thinks that product quantity of online free trail activities will directly affect consumers’ expectations on trail qualification. The more quantity is, the higher possibility of consumers’ successful application will be. 
Meantime, participation will be higher, bringing a stronger conversion effect; as a result, the influence on sales volume will be greater. The number of applicants for trial activities and the original price of trial products directly affect consumers' participation. The higher consumers' participation, the wider the spread of trial activities and the stronger their strength, and the advertising conversion effect will undoubtedly be stronger as well; the influence on product sales volume is likely to increase. Furthermore, not all applicants will successfully obtain the free trial products. At the same time, some consumers with free trial qualification cannot obtain the free trial products for various reasons. These people are likely to purchase such products in the store directly, forming a compensation mechanism that satisfies their demand for product experience and use. On this basis, the author puts forward the following hypotheses:

H5: The free trial sample quantity has a positive influence on sales volume.
H6: The price of trial products in the non-active stage has a positive influence on product sales volume.
H7: Consumers' proportion of successfully receiving trial products has a negative influence on product sales volume.
H8: The number of trial activity applicants has a positive influence on product sales volume.

The Influence Study of Online Free Trail Activities …


4 Data Analysis

4.1 Data Description

Data used in this study come from Taobao and Tmall. The dataset mainly includes 169 trial products from April 2, 2015 to September 7, 2015, covering nine categories, such as food, digital, clothing, cosmetics, and mother-and-infant products. The dataset of trial products mainly contains two categories of attributes: the first covers attributes of the commodities themselves, such as discount strength, monthly sales volume, evaluation score, evaluation quantity, and collection count; the second covers attributes relating to the trial activities, such as quantity of trial products, normal price, and total applications. In addition, the author collects the stores' description score, service score, logistics score, target stores, the average industrial refund rate, refund speed, and the difference level of the dispute rate.

4.2 Empirical Model

Based on the features of the dataset, the time level of the relevant variables is the day. The subscript i denotes each product in the Taobao trial center, and the subscript t denotes each day. The dependent variable is the sales volume Sales_it of product i at time t, i.e., the total sales of product i on day t. The independent variables fall into product attributes, trial application activity attributes, and product trial report attributes. The product attributes are as follows: Price_Discount_it is the price discount strength of product i on day t, calculated as the price reduction divided by the product's original price; Monthly_Sold_it is the monthly sales of product i displayed on the product page on day t; Rating_Score_it is the evaluation score of product i on day t; Rating_Count_it is the evaluation quantity of product i on day t; Bookmark_it is the collection count of product i on day t. The trial application activity attributes are as follows: Quota_i is the number of free trial samples provided for trial product i; Usual_Price_i is the original price of the trial product in the non-active stage; Accept_in_Total_it is the proportion of consumers who successfully received trial product i among all applicants by day t; Final_Applicant_it is the number of applicants for product i in the trial activities by day t. In order to control for the influence of store attributes on product sales volume, Rate_Description_it, Rate_Service_it, Rate_Delivery_it, Avgrefund_Diff_it, Refundrate_Diff_it and Complaints_Diff_it are introduced for the store of product i on day t.
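As a small illustration of the variable construction described above, the sketch below computes Price_Discount from a pair of prices (the function name and the sample prices are hypothetical; only the formula comes from the text):

```python
def price_discount(original_price, current_price):
    # Price_Discount: the price reduction divided by the product's original price.
    return (original_price - current_price) / original_price

# Hypothetical example: a 100-yuan product sold at 60 yuan.
print(price_discount(100.0, 60.0))  # 0.4
```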
Among them, Avgrefund_Diff_it, Refundrate_Diff_it and Complaints_Diff_it are obtained by subtracting the industry average from the store's average refund, refund rate and complaint rate within 30 days for product i, so as to objectively measure the service level of the store. To study the influence of the product attributes and trial application activity attributes on product sales volume, the author constructs model (1) to stand


for the stage from trial product application through the end of application to the product trial report:

Sales_it = β0 + β1 Price_Discount_it + β2 Monthly_Sold_it + β3 Rating_Score_it + β4 Rating_Count_it + β5 Bookmark_it + β6 Quota_i + β7 Usual_Price_i + β8 Accept_in_Total_it + β9 Final_Applicant_it + β10 Rate_Description_it + β11 Rate_Service_it + β12 Rate_Delivery_it + β13 Avgrefund_Diff_it + β14 Refundrate_Diff_it + β15 Complaints_Diff_it + ε_it    (1)

where ε_it is the random error term.

4.3 Data Analysis

First of all, descriptive statistics are computed for the data, with the results shown in Table 1.

Table 1. Descriptive statistical table

Variable          Obs.  Mean        Std. dev.  Min       Max
Sales             1827  26.86535    131.1618   1         2739
Price_Discount    1827  0.4589168   0.2846554  0         0.9016064
Monthly_Sold      1827  1047.348    2791.448   0         23,266
Rating_Score      1827  4.268692    1.569871   0         5
Rating_Count      1827  5092.154    31848.49   0         340,024
Bookmark          1827  5194.999    9919.16    2         61,724
Quota             1827  12.948      15.16806   1         90
Usual_Price       1827  587.6196    879.1591   17.9      3888
Accept_in_Total   1827  0.0016898   0.0023891  1.97e−06  0.0176768
Final_Applicant   1827  43567.07    92577.75   796       507,978
Rate_Description  1827  4.82173     0.0673686  4.6       5
Rate_Service      1827  4.799781    0.0764775  4.6       5
Rate_Delivery     1827  4.789491    0.0818671  4.5       5
Avgrefund_Diff    1827  0.0832591   1.218242   −1        19.84416
Refundrate_Diff   1827  0.3160749   1.452695   −1        14.33136
Complaints_Diff   1827  −0.5017569  1.06805    −1        19.14286

Notes There are a total of 1827 observation records, 128 products and 94 observational days

Next, the store attributes of product i (store description score, store service score, store logistics score, average refund speed, store refund rate, and complaint rate), the control variables of this thesis, are analyzed with a random effects model. The results are reported as model (1) in Table 2. The findings show that the store service score has a significantly positive influence on product sales volume (0.004***), the


store logistics score has a significantly positive influence on product sales volume (0.010***), and the store refund rate has a significantly positive influence on product sales volume (0.003***). These three variables embody the store's service level, especially the after-sales service, logistics and refunds that most closely concern consumers. The analytical results indicate that consumers pay attention to sellers' online service level when shopping online.

Table 2. Model analysis result table

Variable          (1)                  (2)                  (3)
Rate_Description  −75.935 (64.423)     −98.803 (65.071)     −81.873 (64.648)
Rate_Service      241.220 (82.820)***  226.118 (85.501)***  240.360 (86.428)***
Rate_Delivery     211.322 (81.579)***  118.212 (79.984)     211.141 (81.764)***
Avgrefund_Diff    −2.155 (2.519)       −0.152 (2.457)       −2.172 (2.525)
Refundrate_Diff   6.239 (2.128)***     6.475 (2.052)***     6.963 (2.131)***
Complaints_Diff   1.897 (2.923)        2.460 (2.867)        1.485 (2.982)
Price_Discount                         5.103 (11.749)
Monthly_Sold                           0.016 (0.001)***
Rating_Score                           4.508 (2.055)**
Rating_Count                           0.001 (0.000)***
Bookmark                               0.000 (0.000)
Quota                                                       1.335 (0.282)***
Usual_Price                                                 −0.000 (0.006)
Accept_in_Total                                             −6314.756 (1755.288)***
Final_Applicant                                             0.000 (0.000)
_cons             246.484 (233.979)    −47.987 (237.440)    271.232 (240.483)
N                 1827                 1827                 1827
*p < 0.1; **p < 0.05; ***p < 0.01


In addition to the control variables, this study also conducts random effects model analysis on the independent variables of product attributes (price discount strength, monthly sales volume, product evaluation score, product evaluation quantity and product collection). The results are reported as model (2) in Table 2. The findings show that the monthly sales volume displayed on the product page has a significantly positive influence on product sales volume (0.000***), the product evaluation score has a significantly positive influence on product sales volume (0.028**), and the product evaluation quantity has a significantly positive influence on product sales volume (0.000***). The product price discount strength (0.664) and product collection (0.340) have positive but insignificant influences on product sales volume. The analytical results indicate that a product's historical sales data and the public praise in product comments are important reference information when consumers make online shopping decisions. Next, the author conducts random effects model analysis on the trial application activity attributes (free trial sample quantity, original price in the non-active stage, consumers' proportion of successfully receiving trial products, and number of applicants). The results are reported as model (3) in Table 2. The results indicate that free trial samples have a significantly positive influence on product sales volume (0.000***): when more trial samples are offered in the trial activities, more consumers experience the product, the influence of the trial activity spreads wider, and product purchase and the spread of public praise are promoted, improving product sales volume.
Meanwhile, the consumers' proportion of successfully receiving trial products has a significantly negative influence on product sales volume (0.000***), implying that when the proportion of successful applicants is smaller (i.e., the proportion of unsuccessful applicants is larger), consumers without trial products are more likely to purchase the product directly to satisfy their demand for experience and use. Next, the two categories of independent variables are added to the model step by step to verify the model's significance and goodness of fit. The results show that the monthly sales volume displayed on the product page has a significantly positive influence on product sales volume (0.000***), the product evaluation score has a significantly positive influence on product sales volume (0.046**), and the product evaluation quantity has a significantly positive influence on product sales volume (0.000***). Among the product trial application activity attributes, free trial samples have a significantly positive influence on product sales volume (0.046**). These results indicate that when consumers make online shopping decisions, they pay more attention to the product attributes and the sellers' online service level. As a promotion mode, online free trial activities can indeed bring traffic, but cannot guarantee the conversion of that traffic.
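For illustration only, the sketch below fits a pooled OLS version of model (1) on synthetic data with numpy. The paper itself estimates a random effects panel model (for which a dedicated panel econometrics library would normally be used), only two regressors are simulated here, and all numeric values are synthetic, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-ins for two of the model (1) regressors.
price_discount = rng.uniform(0.0, 1.0, n)
monthly_sold = rng.uniform(0.0, 5000.0, n)
noise = rng.normal(0.0, 5.0, n)
# True coefficients chosen arbitrarily for the simulation.
sales = 10.0 + 20.0 * price_discount + 0.01 * monthly_sold + noise

# Pooled OLS via least squares; a random effects estimator would add
# a product-level error component on top of this specification.
X = np.column_stack([np.ones(n), price_discount, monthly_sold])
beta, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(beta.round(3))  # approximately [10, 20, 0.01]
```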

5 Conclusions

Through the data analysis, it can be found that product attributes and product trial application activities have significant influences on product sales volume. The specific results are given in Table 3.


Table 3. Hypothesis testing table

Category                  Hypothesis                                                          Full-model support  Sub-model support
Product attributes        H1 Price discount strength → product sales volume (+)               N                   N
                          H2 Product evaluation score → product sales volume (+)              Y                   Y (model (2) in Table 2)
                          H3 Product evaluation quantity → product sales volume (+)           Y                   Y (model (2) in Table 2)
                          H4 Product collection → product sales volume (+)                    N                   N
Trial application         H5 Free trial samples → product sales volume (+)                    Y                   Y (model (3) in Table 2)
activity attributes       H6 Original price in the non-active stage → product sales volume (+)  N                 N
                          H7 Consumers' proportion of successfully receiving trial
                             products → product sales volume (−)                              N                   Y (model (3) in Table 2)
                          H8 Trial applicants → product sales volume (+)                      N                   N

Acknowledgements. This research was supported by China National Natural Science Foundation Project (71771118; 71471083); China National Natural Science Foundation major project (71390521); Jiangsu province University Philosophy and Social Science Research Project (2016SJD630160); China National Social Science Fund major project (15ZDB126); Jiangsu Natural Science Fund Project (BK20151388); China Jiangsu province social science talent project and the top six talent peak project.

References

1. Datta, H., Foubert, B., van Heerde, H.J.: The impact of free-trial acquisition on customer usage, retention, and lifetime value (2013)
2. Datta, H., Foubert, B., van Heerde, H.J.: The challenge of retaining customers acquired with free trials. J. Mark. Res. 52(2) (2012). Social Science Electronic Publishing
3. Foubert, B., Gijsbrechts, E.: Try it, you’ll like it—or will you? The perils of early free-trial promotions for high-tech service adoption. Mark. Sci. 35(5), 810–826 (2016)
4. Heiman, A., McWilliams, B., Shen, Z., Zilberman, D.: Learning and forgetting: modeling optimal product sampling over time. Manage. Sci. 47, 532–546 (2001)
5. Heiman, A., McWilliams, B., Zilberman, D.: Demonstrations and money-back guarantees: market mechanisms to reduce uncertainty. J. Bus. Res. 54(1), 71–84 (2001)
6. Scott, C.A.: The effects of trial and incentives on repeat purchase behavior. J. Mark. Res. 13(3), 263–269 (1976)


7. Verhagen, T., van Dolen, W.: Online Purchase Intentions: A Multi-channel Store Image Perspective. Elsevier Science Publishers B.V. (2007)
8. Verhoef, P.C., Donkers, B.: The effect of acquisition channels on customer loyalty and cross-buying. J. Interact. Mark. 19(2) (2010)
9. Hang, Yu., Wang, Z.: Research on the effects of online word-of-mouth communication. Inf. Mag. 6, 100–106 (2013) (in Chinese)
10. Zuo, W., Xu, W., Chang, F.: Relationship between electronic word of mouth and purchase intention in social commerce environment: a social capital perspective. Nankai Bus. Rev. 4, 140–150 (2014) (in Chinese)

Research on Behavior Oriented Scientific Research Credit Evaluation Method

Yan Zhao, Li Zhou, Bisong Liu, and Zhou Jiang

Social Credit Branch, China National Institute of Standardization, Beijing, China
[email protected]

Abstract. Scientific research credit is an important part of social credit. From the view of the current scientific research credit system framework, the foundation and most important parts of the construction of the scientific research credit system are establishing an evaluation index system and an evaluation method. In this paper, we put forward a behavior oriented scientific research credit evaluation model, which indirectly reflects the capacity and willingness of scientific research subjects. The Behavior Oriented Scientific Research Credit Evaluation includes the following three aspects: S&T plan program credit, academic research credit, and the related credit of the scientific research subjects. Furthermore, the evaluation index, evaluation methods, scoring rules and evaluation process are also given in this paper, and the principles of “veto power” and “encouraging trustworthiness and punishing dishonesty” are considered in this model. All in all, carrying out Behavior Oriented Scientific Research Credit Evaluation and making full use of credit evaluation results to encourage trustworthiness and punish dishonesty can improve the fairness and effectiveness of S&T resource allocation and enhance the credit awareness of scientific research subjects, thus promoting the overall construction and development of the whole social credit system.

Keywords: Credit · Scientific research credit · Credit evaluation · Index design

1 Introduction

Scientific research credit is an important part of social credit [1]. On the whole, however, the foundation of the construction of the scientific research credit system is very weak, and behaviors of breaching promises in scientific research often occur. Therefore, the construction of a scientific research credit system is very urgent. The establishment of a scientific research credit management system for the national S&T plan program can prevent the occurrence of misconduct and corruption in the implementation and evaluation of projects. From the view of the current scientific research credit system framework, the foundation and most important parts of the construction of the scientific research credit system are establishing an evaluation index system and an evaluation method for scientific research credit.

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1663–1671, 2019. https://doi.org/10.1007/978-981-13-3648-5_215

The design of a credit evaluation index for the national S&T plan program will help record and evaluate the credit of the relevant administrative organizations, responsible


institutions and researchers, consultants, evaluation experts, etc. during the key stages of project establishment, budgeting and acceptance check. Moreover, it is the foundation of the S&T plan program credit information database. Credit evaluation results can be employed to encourage institutions and personnel with good credit and to punish those with poor credit accordingly. Taking the credit status of experts and institutions as an important basis for hiring and decision making can further enhance experts' sense of responsibility and prevent the occurrence of misconduct and corruption in the national S&T plan program [2]. In a word, making full use of credit evaluation results on scientific research to encourage trustworthiness and punish dishonesty can improve the fairness and effectiveness of S&T resource allocation and enhance the credit awareness of scientific research subjects. Furthermore, by innovating the management mechanism, strengthening evaluation and supervision, and standardizing research credit behavior from the viewpoint of management, we can prevent and curb corruption fundamentally, thus promoting the overall construction and development of the whole social credit system.

2 Behavior Oriented Scientific Research Credit Evaluation Index System

From the perspective of credit management, it is necessary to make a clear definition before conducting credit investigation or credit rating in a certain field. Credit is the willingness and capacity to meet commitments, including provisions stipulated in laws, regulations and mandatory standards, contractual provisions, and reasonable social expectations. Scientific research credit is an important part of social credit. Therefore, in this study, scientific research credit is defined as the willingness and capacity of the subjects of scientific research to meet scientific research commitments. In particular, it refers to the extent to which scientific research subjects seek truth from facts, do not deceive, do not falsify, abide by relevant laws and regulations, and abide by scientific values, the scientific spirit and the code of conduct of scientific activities [3]. A literature review shows that previous research on scientific research credit evaluation mainly focuses on direct evaluation of the willingness and capacity of scientific research subjects to fulfill scientific research commitments [4–9]. However, such methods have the following shortcomings: first, the evaluation indices themselves are relatively abstract; second, it is difficult to collect information; third, the evaluation relies heavily on the rich practical experience, subjective judgment and analytical capacity of the evaluators. While willingness and capacity are the causes, behavior is the result: from the perspective of behavioral theory, behavior is the external expression of willingness and capacity.
Therefore, in this paper, we put forward a behavior oriented scientific research credit evaluation model, which indirectly reflects the capacity and willingness of the scientific research subject by collecting all aspects of the behavioral performance of the scientific research subjects in the scientific research activities, so as to evaluate the scientific research credit of the scientific research subjects.


For scientific researchers, scientific research involves both undertaking S&T plan program and carrying out other academic research. Therefore, scientific research credit includes S&T plan program credit, as well as academic research credit. In addition, related credit in other fields should also be considered. For scientific researchers, such poor credit record as tax evasion, arrears of public utilities, overdue record of credit cards, financial fraud and other crimes, cannot be ignored as related credit record. Therefore, the Behavior Oriented Scientific Research Credit Evaluation should at least include the following three aspects: S&T plan program credit, academic research credit and the related credit of the scientific research subjects. The object of the S&T plan program credit evaluation is the credit information of the S&T plan program, including four aspects: the basic information, the general information, poor behavior record information and good behavior record information. The basic information includes the identity information of scientific research subject, including the name, address, legal representative, registered capital, competent department, contact person of scientific research organization and so on, as well as the name, title, post, phone number, ID number, specialty of the project leader. General information refers to the implementation records in accordance with the relevant laws, regulations, rules and regulations on project management and fund management of S&T plan program. The poor behavior record information mainly refers to the misconduct of the scientific research subject in the process of undertaking S&T plan program and the punishments, including serious behaviors of breaching promises, the misconduct on scientific research, and behavior on financial violation. 
The good behavior record information refers to the major social and economic value, outstanding contributions and various awards achieved by scientific research subjects in the process of undertaking the S&T plan program. S&T plan program credit covers the whole process of the S&T plan program, including the stages of demand collection, guide release, project application and budget arrangement, supervision and inspection, and examination and acceptance, covering both project management and fund management. Academic research credit refers to the extent to which scientific research subjects meet commitments in publishing papers, writing books, publishing patents, attending academic conferences and so on, apart from undertaking the S&T plan program. Related credit indicates whether scientific research subjects have poor behavior records in other fields, for example tax evasion, arrears of public utilities, overdue credit card records, financial fraud and other crimes. The overall idea of the design of the scientific research credit evaluation index is shown in Fig. 1. First, the goal of scientific research credit evaluation is defined at the Objective Layer. Based on this goal, eight rules for the whole scientific research credit evaluation are defined. Then, on the basis of the evaluation rules, with scientificity, rationality and completeness guaranteed, the scientific research credit evaluation indices are designed. Primary indices include S&T plan program credit, academic research credit and related credit, covering the current policy documents and the main contents of scientific research credit under the current situation. Secondary indices are designed by specializing and refining the primary indices.


Fig. 1. Overall idea of the design of scientific research credit evaluation index

S&T plan program credit, as the core of scientific research credit, should be taken into consideration during the stage of index design. Specifically, secondary indices on the S&T plan program credit of scientific research organizations include good behavior record, plagiarism, fabrication, falsification, violation of the ethics and morality of scientific research, forgery, improper means, violation of rules and regulations in project management, lack of necessary rules and regulations, poor implementation of the project, and violation of rules and regulations in fund management. Secondary indices on academic research credit include violation of the policy document “Five not allowed in publishing academic papers” [10] and withdrawal of academic papers by editors. Secondary indices on related credit include poor public records and records of joint punishments. At the Observed Variables Layer, the secondary indices are explained accordingly.


3 Behavior Oriented Scientific Research Credit Evaluation Method

The behavior oriented scientific research credit evaluation method proposed in this study aims at positive guidance, covers S&T plan program credit, academic research credit and related credit, and assumes that scientific research subjects are honest and trustworthy. Initially, a basic level and score are given. When behaviors of breaching promises occur, the corresponding score is deducted according to the rules. In light of the principle of “veto power”, once serious dishonesty or serious violation of rules and regulations occurs, the credit grade is reduced to the lowest level. Since S&T plan program credit is the core of scientific research credit, the effects of both “encouraging trustworthiness” and “punishing dishonesty” should be considered. In view of the objective existence of discredited behavior, bonus points for good credit behavior are only valid for scientific research subjects without behaviors of breaching promises. The evaluation of academic research credit and related credit mainly focuses on preventing the risk of discredit; namely, the corresponding score is deducted according to the rules when a behavior of breaching promises occurs. For scientific researchers' S&T plan program credit evaluation, once a behavior of breaking promises occurs, the corresponding score is deducted directly according to the evaluation index and scoring rules. For the S&T plan program credit evaluation of a scientific research organization, the scientificity and rationality of the evaluation must be considered carefully. It is worth mentioning that scientific research units differ greatly in size. For huge scientific research organizations such as Tsinghua University and the Chinese Academy of Sciences, annual expenditure on scientific research reaches hundreds of billions.
If one researcher responsible for a certain project has been dishonest and the corresponding score is deducted directly according to the evaluation index and scoring rules, it is unfair and unreasonable for the whole scientific research organization. In the long run, negative thoughts such as “the more you do, the more mistakes you make” will dominate, which is not good for the development of scientific research.

3.1 Credit Grade

Based on the S&T plan program credit information, academic research credit information and related credit information collected by various channels, basic scientific research credit evaluation is carried out. According to the scoring rules, the credit grades of scientific research subjects are divided into four initial grades of A, B, C and D. For the research subjects with initial level of A, an additional credit evaluation is started, and the credit grades can be rated as grade AA and AAA according to the scoring rules and the final credit score. Table 1 gives a summary of credit evaluation scale for scientific research.

Table 1. Credit evaluation scale for scientific research

Name of the grade  Range     Remarks
AAA                90–99     Additional grade, only valid for subjects whose basic grade is A
AA                 81–89     Additional grade, only valid for subjects whose basic grade is A
A                  80        Basic grade, initial grade
B                  60–79     Basic grade
C                  50–59     Basic grade
D                  Below 49  Basic grade

3.2 Evaluation Process

The initial grade of scientific research credit is comprehensively evaluated according to the credit rating of scientific research subjects and the scientific research credit rating rules. The evaluation process is detailed in Fig. 2. Before the basic evaluation of scientific research credit, the completeness of the scientific research credit records of the subjects shall be confirmed. During the evaluation period, scientific research subjects with S&T plan program credit or academic research credit records are initially rated as grade A (with an initial score of 80). Then, according to the poor credit records of the subjects and the scoring rules, the credit score is calculated. Scientific research subjects with credit scores between 60 and 79 points, between 50 and 59 points, and below 49 points are rated as grades B, C and D respectively. For scientific researchers' credit evaluation, once a behavior of breaking promises occurs, the corresponding score is deducted directly according to the evaluation index and scoring rules. For the credit evaluation of a scientific research organization, the basic credit score can be calculated according to formula (1) and the basic credit level determined accordingly:

C = 80 − Σ_{i=1}^{n} w_i c_{1i} − Σ_{j=1}^{m} f_j c_{2j} − Σ_{k=1}^{p} c_{3k}    (1)

Among them, C is the basic scientific research credit score of the scientific research organization; n is the total number of poor behaviors in the S&T plan program; m is the total number of poor behaviors in academic research; p is the total number of poor behaviors in related credit; w_i is the weight of poor behavior i in the S&T plan program; c_{1i} is the deduction of credit points for poor behavior i in the S&T plan program; f_j is the weight of poor behavior j in academic research; c_{2j} is the deduction of credit points for poor behavior j in academic research; c_{3k} is the deduction of credit points for poor behavior k in related credit. On the basis of the basic evaluation of scientific research credit, additional credit evaluation shall be started for subjects who have been rated as grade A initially. Then

[Fig. 2. Scientific research credit evaluation process: start → screen the completeness of credit records → subjects without S&T plan program credit or academic research credit records are rated as grade Bn; subjects with such records are rated as grade A initially → basic evaluation → rated as grades B, C or D, or confirmed as grade A (with dynamic adjustment) → additional evaluation for grade-A subjects → rated as grade AA or AAA → end]

according to the scoring rules and the final credit score, the subjects can be rated as grade AA or AAA. In addition, S&T plan program credit is the core of scientific research credit evaluation, and academic research credit is also a key point of the evaluation. If scientific research subjects have neither S&T plan program credit records nor academic research credit records, they will be rated as Bn.
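Formula (1) and the grade bands of Table 1 can be sketched as follows; the function names, weights and deduction values in the example call are hypothetical illustrations, not values from the paper:

```python
def basic_credit_score(program_items, academic_items, related_deductions):
    """Formula (1): program_items is a list of (w_i, c1_i) pairs,
    academic_items a list of (f_j, c2_j) pairs, and
    related_deductions a list of c3_k deductions."""
    score = 80.0  # every subject starts from the initial grade-A score
    score -= sum(w * c for w, c in program_items)
    score -= sum(f * c for f, c in academic_items)
    score -= sum(related_deductions)
    return score

def basic_grade(score):
    # Basic grade bands from Table 1; grades AA and AAA require the
    # additional evaluation described in the text, not covered here.
    if score >= 80:
        return "A"
    if score >= 60:
        return "B"
    if score >= 50:
        return "C"
    return "D"

# Hypothetical subject: one weighted program deduction, one academic
# deduction, and one related-credit deduction.
score = basic_credit_score([(0.5, 10)], [(1.0, 8)], [2])
print(score, basic_grade(score))  # 65.0 B
```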


4 Summary Scientific research credit is an important part of social credit. From the view of the current scientific research credit system framework, the foundation and most important parts of the construction of scientific research credit system are establishing evaluation index system and evaluation method. Through the method of literature review, previous research on scientific research credit evaluation mainly focus on direct evaluation on willingness and the capacity of scientific research subjects to fulfill scientific research commitments. However, the shortcoming of such method is as follows: Firstly, the evaluation index itself is relatively abstract. Secondly, it is difficult to collect information. Thirdly, the evaluation mainly relies on the rich practical experience, subjective judgment and analysis capacity of the evaluators. From the perspective of behavioral theory, behavior is the external expression of willingness and capacity. Therefore, in this paper, we put forward a behavior oriented scientific research credit evaluation model, which indirectly reflects the capacity and willingness of the scientific research subject by collecting all aspects of the behavioral performance of the scientific research subjects in the scientific research activities, so as to evaluate the scientific research credit of the scientific research subjects. Eight rules of the whole scientific research credit evaluation are defined. Then, on the basis of the evaluation rules, primary indices including S&T plan program credit, academic research credit and related credit have been designed. It covers the current policy documents and the main contents of scientific research credit under current situation. Secondary indices have been designed by specializing and refining primary indices. Also, evaluation methods, scoring rules and evaluation process are also given in this paper. 
Furthermore, the principles of "veto power" and "encouraging trustworthiness and punishing dishonesty" are considered in this model as well.

5 Acknowledgments
This research was supported by the National Key R&D Program of China (Grant No. 2017YFF0207600) and by credit evaluation projects entrusted by the Supervision Service Center for Science and Technology Funds, Ministry of Science and Technology, in 2017.


Research on Behavior Oriented Scientific Research Credit …



Realization and Improvement of Bayes Image Matting

Xu Qin1,2 and Ding Xinghao1

1 Xiamen University, Xiamen, China
[email protected]
2 Jiujiang University, Jiangxi, China

Abstract. Image matting is an image separation technology, and there are many ways to implement foreground extraction. Bayes matting is a matting algorithm with interactive input: in the original image, the selected foreground and background areas are marked with different colors, and the user then runs the interactive matting algorithm to achieve the expected image segmentation. The realization of image matting gives very good results, with a clear distinction between the foreground and background parts at the edges. But if the foreground colors are close to the background colors at the edges, the extracted foreground is confused. As a preparation step, the improved algorithm first identifies the face and draws its outline; the image within this outline is classified as part of the foreground without running the matting calculation. This ensures the integrity of the foreground image. The result can quite precisely separate the foreground area and the background area.

Keywords: Image matting · Bayes matting · Improvement · Integrity

1 Introduction
Image matting is an image separation method; it is an ultra-fine image separation technique. It usually takes the content to be separated out of the background, and generalized image separation even includes the separation of different levels of a target. In the field of image matting, many very successful algorithms have been developed, such as the famous Bayes matting, KNN matting and Poisson matting. This paper studies image matting methods and tries to improve them to solve the problem of foreground/background discrimination.

2 Matting Algorithm
Image segmentation and image matting are similar, but the corresponding algorithms are not exactly the same. The image matting algorithm is more complex because it must solve an ill-posed problem, so its precision is far higher than that of image segmentation algorithms. Of course, it is also very slow, which makes basic engineering application difficult [1]. The matting functions of commercial software are generally implemented through image segmentation algorithms, such as upgraded versions of the GrabCut algorithm.

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1672–1677, 2019. https://doi.org/10.1007/978-981-13-3648-5_216


The basic method of image matting is to divide a complete image into two regions: the foreground area and the background area [2]. The part the user is interested in is called the foreground, and the part the user wants to ignore is called the background. Bayesian matting, the classic algorithm for image matting, appeared early, so it is no longer the most advanced or most widely used technology; but as a teaching example it is a good choice. Compared with other matting algorithms its technical details are simple, and many subsequent algorithms have borrowed some of its techniques, so the Bayes algorithm should be studied when learning image matting. The core problem of image matting is to solve the following matting equation (1):

C = αF + (1 − α)B    (1)

Here C is a known pixel in the image to be processed (or can be understood as the whole image) [3], F is the foreground image (the corresponding foreground pixel), and B is the background image (the corresponding background pixel). α is called the matte (or opacity), representing the proportion of the foreground color in the overall color [4]. C represents the color of a pixel in the image, F the foreground color of that pixel, and B its background color. Moreover, C, F and B each contain the components of the three channels R, G and B, so formula (1) becomes formula (2):

C_R = αF_R + (1 − α)B_R
C_G = αF_G + (1 − α)B_G
C_B = αF_B + (1 − α)B_B    (2)
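As a concrete illustration, the per-channel compositing of formulas (1)–(2) can be sketched in NumPy (a minimal sketch; the array shapes and names are our assumptions, not from the paper):

```python
import numpy as np

def composite(F, B, alpha):
    """Blend foreground F and background B with matte alpha.

    F, B: float arrays of shape (H, W, 3) holding the R, G, B channels.
    alpha: float array of shape (H, W) with values in [0, 1].
    Returns C = alpha*F + (1 - alpha)*B, applied per channel as in (2).
    """
    a = alpha[..., np.newaxis]        # broadcast alpha over the 3 channels
    return a * F + (1.0 - a) * B

# Tiny example: one fully opaque and one fully transparent pixel.
F = np.array([[[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]])   # red foreground
B = np.array([[[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])   # blue background
alpha = np.array([[1.0, 0.0]])
C = composite(F, B, alpha)
# C[0, 0] equals the foreground color, C[0, 1] the background color.
```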

Therefore, image matting means: given C, solve for F, B and α. Because F, B and α are all unknown, it is clearly not easy to determine so many unknowns; with so many unknown parameters the problem is underdetermined, which is the difficult point of matting [5]. So some additional constraints must be added, and these are usually given in the form of a trimap. The trimap is a map of the same size as the image to be partitioned, but its pixels take only three values: 0, 128 (approximately) and 255. The black part marks the known background, the white part marks the known foreground, and the grey part is the intersection of foreground and background to be further refined, which can be interpreted as the edge of the foreground. The fusion coefficient α is a value between 0 and 1. It gives the ratio of foreground to background in the image to be processed [6]. Obviously, for the identified background, α = 0; for the identified foreground, α = 1. On the fused edge between foreground and background, α is between 0 and 1. In the fusion equation only C is known and F, B and α are unknown, so solving for F, B and α is the core of the matting problem. In the process of image matting, it is necessary to mark the parts to be separated with corresponding symbols. The commonly used marking methods are the trimap and strokes. The trimap divides the image into three parts (the foreground area, the background area and the unknown area) by describing the object boundary in the image [7]. The strokes method instead uses a graffiti style: the foreground and background areas are scribbled directly on the image, and the remaining unlabeled part is treated as the unknown area to be calculated.

3 Realization of Bayes Image Matting
Bayes matting is a matting algorithm with interactive input. The user's mouse input determines the areas whose F, B and α values are known. A moving circular window is used to sample pixels from the known parts; the center of the window is a pixel to be calculated in the unknown region [8]. In order to collect as many sample points as possible from the known foreground and background regions, the algorithm adjusts the radius of the circular window, which finally ensures that every sample in the input image can be separated into the foreground and background parts. By taking logarithms, formula (3) is converted from multiplications to additions; at the same time, since P(C) is a constant term, it is omitted:

arg max_{F,B,α} P(F, B, α | C)
= arg max_{F,B,α} P(C | F, B, α) P(F) P(B) P(α) / P(C)    (3)
= arg max_{F,B,α} L(C | F, B, α) + L(F) + L(B) + L(α)
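In the original Bayesian matting formulation [1], Eq. (3) is maximized by alternating two steps: with α fixed, F and B are solved from a linear system built from the Gaussian color models; with F and B fixed, α has the closed form α = ((C − B)·(F − B)) / ‖F − B‖². A minimal sketch of that α update for a single RGB pixel (variable names are ours):

```python
import numpy as np

def update_alpha(C, F, B, eps=1e-8):
    """Closed-form alpha for fixed foreground F and background B.

    Projects C - B onto F - B, which maximizes the data term
    L(C | F, B, alpha) under a Gaussian noise model.
    C, F, B: length-3 RGB vectors (floats).
    """
    d = F - B
    alpha = np.dot(C - B, d) / (np.dot(d, d) + eps)
    return float(np.clip(alpha, 0.0, 1.0))

C = np.array([0.5, 0.5, 0.5])
F = np.array([1.0, 1.0, 1.0])
B = np.array([0.0, 0.0, 0.0])
# C lies halfway between B and F, so the update returns 0.5.
```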

The implementation process is as follows: the algorithm uses a continuously sliding window to sample the neighborhood; the window is pushed inward from the two edges between the unknown region and the known regions, and the calculation advances with it. We select some original pictures, shown in Fig. 1. The original image can be processed with a common drawing tool: the selected foreground and background areas are marked in the original image with color pens of different colors, so the user determines the pixel values of the foreground and background parts by this annotation [9]. After the foreground and background areas are marked, the image is saved in the work folder and the interactive matting algorithm is run to realize the expected image segmentation. The simulation was carried out and the image segmentation realized; the separated foreground-area and background-area images were obtained respectively [10] and are shown in Fig. 2. The segmentation of the fine hairs can be clearly seen from the image: the boundary of each hair at the dividing line is particularly distinct and no longer a fuzzy piece. The interactive Bayes matting technique realizes clear segmentation of hair images.

Fig. 1. Input picture (panels a and b).

Fig. 2. Parts of foreground and background area (panels a and b).

4 Problem and Improvement Method
There are many ways to realize image matting. In the implementation results, the foreground and background have a very good edge distinction and the final effect is very good. But if the foreground colors are close to the background colors at the edges, the final foreground picture is poor. Figure 3 shows a case where the overall matting result is relatively good and the character is extracted as the whole foreground; but the eyes of the figure are similar in color to the background and are not distinguished at the edges. The matting algorithm treats the eyes as part of the background and covers them with a uniform white, which spoils the face in the extracted foreground.

Fig. 3. Matting result with confusion.

In the preparation stage of image matting, we add morphological recognition of the image, such as human body and face recognition. When a face or body is encountered in the picture, the most critical parts, such as the face and the body trunk, are identified by recognition; their outlines are then drawn as circles or rectangles. Everything within this range is assigned to the foreground without running the matting calculation. This ensures that the required foreground image is complete and nothing is cut away by mistake. The result of the improved method is relatively good, as shown in Fig. 4: the character is extracted as the whole foreground, including the waist ornaments. The human face, the most important part, is clearly displayed including the eyes [11], and physical characteristics such as the waist ornaments are retained.

Fig. 4. Result of improvement method.

5 Conclusion
The image separation technique takes the content to be separated out of the background. The realization of image matting usually gives very good results, but if the foreground and background colors are close at the edges, the foreground image is confused. This paper studies an improvement method to solve this discrimination problem: faces, torsos and other key regions are identified in advance and outlined, and everything within this scope is treated as foreground without running the matting calculation. This ensures the integrity of the image. The result effectively realizes the separation of easily confused foreground and background.

Acknowledgements. This research was supported by the Scientific Research Project of JJU (2016KJ001).

References
1. Chuang, Y.Y., Curless, B., Salesin, D.H., Szeliski, R.: A Bayesian approach to digital matting. In: Proceedings of IEEE Computer Vision and Pattern Recognition (CVPR 2001), vol. II, pp. 264–271 (2001)
2. Li, X.Y., Zhou, W.X., Wu, S.J., Li, D., Hu, X.H.: The image background of the virtual technology based on Bayesian matting. Comput. Knowl. Technol. 13(28), 211–214 (2017)
3. Lv, J.J., Zhan, Y.W.: Improved Bayes matting algorithm. Comput. Eng. 36(3), 213–217 (2010)
4. Sindeyev, M., Konushin, V., Vezhnevets, V.: Improvements of Bayesian matting. Proc. Graphicon, 88–95 (2007)
5. Wu, W., Zheng, Z.: An improved sample pixel weight calculation in Bayesian matting. J. Chongqing Univ. Sci. Technol. 19(4), 84–86 (2017)
6. Renukalatha, S., Suresh, K.V.: Segmentation of noisy PET images using Bayesian matting. In: International Conference on Contemporary Computing and Informatics (IC3I), pp. 1161–1165 (2014)
7. Selvakumar, J., Lakshmi, A., Arivoli, T.: Brain tumor segmentation and its area calculation in brain MR images using K-mean clustering and Fuzzy C-mean algorithm. In: IEEE International Conference on Advances in Engineering, Science and Management, pp. 186–190 (2012)
8. Czernin, J., Allen-Auerbach, M., Schelbert, H.R.: Improvements in cancer staging with PET/CT. J. Nucl. Med. 2007(9), 48–78 (2007)
9. Aravind, B.N., Suresh, K.V.: Multispinning for image denoising. Int. J. Intell. Syst. 21(3), 271–291 (2012)
10. Diaz, I., Boulanger, P., Greiner, R., Murtha, A.: A critical review of the effect of de-noising algorithms on MRI brain tumor segmentation. Eng. Med. Biol. Soc., 3934–3937 (2011)
11. Chotikakamthorn, N.: Alternative formulation of Bayesian-based digital matting technique. Electron. Lett. 42(20) (2006)

Research on Image Processing Algorithm in Intelligent Vehicle System Based on Visual Navigation

Congbo Luo1 and Zihe Tong2

1 Changchun Sci-Tech University, Changchun 130600, China
[email protected]
2 Jilin Science and Technology Vocational College, Changchun 130123, China
[email protected]

Abstract. In this article, the writers study an image navigation algorithm for a smart car and introduce a navigation algorithm based on image restoration, which resolves the problem of the smart car running off course. Firstly, the algorithm preprocesses the video image collected by the image sensor, including binarization, denoising and background segmentation, so that a clear runway boundary line is obtained. Then the boundary equation is calculated by the boundary extraction algorithm: the trend of the boundary line is judged, and the optimum fitting algorithm is selected according to the boundary line type. The navigation algorithm studied in this paper is more accurate than others; it improves the smart car's ability to analyze the route and makes the smart car run more stably.

Keywords: Video navigation · Image preprocessing · Image restoration

1 Introduction
In recent years, with the increase in car ownership, urban traffic safety has become an important issue affecting social development [1]. Developing high-performance, high-reliability intelligent vehicles has become one of the main means to reduce traffic accidents, and it can also reduce energy consumption. At present, most video navigation algorithms focus on structured runways, and most of them can realize automatic driving [2]. However, this does not mean that the relevant research has met actual requirements, because the complexity of lane detection depends on the detection algorithm, and there are still many problems to be solved; for example, ambient light intensity interferes with video navigation [3]. This paper mainly studies the detection and identification algorithm of an intelligent vehicle at the bends of a structured runway [4]. A navigation algorithm based on region growing is proposed.

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1678–1688, 2019. https://doi.org/10.1007/978-981-13-3648-5_217


2 Research on Intelligent Vehicle Hardware System
The intelligent vehicle hardware system mainly includes the minimum system board with the K60 as its core, the speed detection module, image acquisition module, steering control module, motor control module and power management module. The hardware structure of the intelligent vehicle is shown in Fig. 1.

Fig. 1. Intelligent vehicle hardware structure block diagram

3 Research on Image Processing Algorithm
The basic idea of the intelligent vehicle navigation system algorithm is to first extract the lane marker line from the road image collected by the image sensor and describe it by a mathematical equation [5]. This equation is then fed into the control system of the intelligent vehicle to carry out the next control action.

3.1 Lane Image Preprocessing

3.1.1 Binary Processing of Image Information
The selected image sensor is the OV6620 camera. The collected images have a size of 640 × 480, with grayscale values ranging from 0 to 255. In order to improve the processing speed of the system and ensure its real-time performance, binary processing of the original image is required. The main idea of binarizing a gray image is to distinguish different regions by the different features of the pixels. In the original image each pixel has its own characteristics; some are similar and some differ considerably. So-called image segmentation assigns pixels with the same characteristics to one region and different pixels to another region. The most commonly used segmentation method is to binarize the original image with a threshold value, and in the binarization process the most important parameter is that threshold. The grayscale of all pixels in the binary image takes only two values:


0 and 255, where 0 represents black and 255 represents white. Common image binarization algorithms include fixed-threshold binarization and adaptive-threshold binarization. The image processing algorithm in this paper uses the fixed-threshold method. Fixed-threshold binarization is defined by formula (1):

g(x, y) = 255, if f(x, y) ≥ T
g(x, y) = 0,   if f(x, y) < T    (1)
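Formula (1) amounts to a single vectorized comparison; a minimal NumPy sketch (the default threshold 105 is the value the paper settles on below):

```python
import numpy as np

def binarize(f, T=105):
    """Fixed-threshold binarization of formula (1).

    Pixels with gray value >= T become 255 (white), the rest 0 (black).
    f: uint8 array of shape (H, W); T: the fixed threshold.
    """
    return np.where(f >= T, 255, 0).astype(np.uint8)

img = np.array([[100, 105], [110, 50]], dtype=np.uint8)
g = binarize(img)
# g == [[0, 255], [255, 0]]
```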

In formula (1), g(x, y) is the grayscale value of a pixel after binarization, T is the selected fixed threshold, and f(x, y) is the grayscale value of the original pixel. The fixed threshold used for binarization in this paper is obtained by analyzing the gray histogram of the image. The image processed in this paper is a discrete image, so the histogram is used to represent the probability that a certain grayscale value appears in the image. The calculation formula is shown in Eq. (2):

p(S_k) = n_k / n,  k = 0, 1, …, 255    (2)

In formula (2), S_k is the k-th grayscale level of the original image, n_k is the number of pixels whose grayscale value is S_k, and n is the total number of pixels in the entire image. As can be seen from formula (2), the histogram is a probability statistic of the distribution of the different grayscale values in the image; from it we can see the grayscale distribution of the whole image. Figure 2 shows the original runway image collected by the image sensor. According to formula (2), the histogram of the image is obtained; the result is shown in Fig. 3.

Fig. 2. Original runway image
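The histogram of Eq. (2) and the threshold choice can be sketched as follows (a minimal sketch; the search range 100–110 follows the paper's analysis, and the function names are ours):

```python
import numpy as np

def gray_histogram(f):
    """p(S_k) = n_k / n for k = 0..255 over a uint8 image f."""
    counts = np.bincount(f.ravel(), minlength=256)
    return counts / f.size

def valley_threshold(p, lo=100, hi=110):
    """Pick the least-populated gray level in [lo, hi] as the threshold."""
    return lo + int(np.argmin(p[lo:hi + 1]))

# Synthetic image: dark boundary pixels (50), light background (130),
# and one stray pixel in the valley between the two clusters.
img = np.concatenate([np.full(90, 50), np.full(90, 130),
                      np.full(1, 103)]).astype(np.uint8)
p = gray_histogram(img)
t = valley_threshold(p)
# p sums to 1, and t lies in the search range 100..110.
```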

It can be seen from Fig. 3 that the grayscale values in the original image are mainly below 150, and there are two ranges where the distribution is relatively concentrated, namely 0–100 and 110–150. Analysis of the original image shows that the area with gray values of 0–100 is the black boundary line of the runway, and the area with gray values of 110–150 is the white background of the runway. According to the above data, a value in 100–110 can be selected as the fixed threshold of image binarization [6]. Finally, 105 was selected as the fixed threshold, so as to minimize interference.

Fig. 3. Histogram of runway image

3.1.2 Denoising of Image Information After Binarization
After binarization, some scattered interference points appear in the image [7]. To facilitate the subsequent control algorithm, these interference points need to be removed. In this paper, the dilation-erosion method is used to remove noise from the binary images. The erosion algorithm is a boundary-point elimination algorithm: it shrinks the boundary inward and thereby eliminates small, meaningless noise points. The erosion algorithm performs an "and" operation between a 3 × 3 structural element and the part of the original binary image it covers. If the result is all 1, the pixel remains 1 after processing; otherwise it becomes 0. This shrinks the original binary image by one circle of pixels. The binary images in this paper mainly involve regular structural elements, so the processing can be reduced by the principle of matrix decomposition: the product of {1,1,1} and {1,1,1}T is used in place of the original structural element, which greatly reduces the number of pixel accesses and improves the real-time performance of the whole software. Here, the erosion algorithm is mainly used to eliminate the scattered black spots in the background area: the image is scanned with the structural element from top to bottom and from left to right, and any interference point found is eroded to white. The dilation algorithm is used to fill holes in the center of an object: it merges the background points in contact with the object into the object and expands the object boundary outwards. The dilation algorithm performs an "or" operation between a 3 × 3 structural element and the part of the original binary image it covers. If the results are all 0, the pixel remains 0 after processing; otherwise it becomes 1. This enlarges the original binary image by one circle. Like the erosion algorithm, the dilation algorithm can also use the product of {1,1,1} and {1,1,1}T in place of the original structural element to improve the real-time performance of the software. Here, the dilation algorithm is mainly used to remove the white interference points inside the black boundary line. From top to


bottom, from left to right, the image is scanned with the structural element, and any interference point found is dilated to black. The above method is used to denoise the binarized image, as shown in Fig. 4.

a. Binary image after processing

b. Image after de-noising

Fig. 4. Denoising process renderings
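The 3 × 3 "and"/"or" operations described above are standard binary erosion and dilation; applying erosion then dilation (a morphological opening) removes isolated specks while restoring the size of the surviving regions. A self-contained NumPy sketch (the paper's separable {1,1,1} optimization is omitted for clarity):

```python
import numpy as np

def erode3(mask):
    """3x3 erosion: a pixel survives only if the "and" over its whole
    3x3 neighborhood is foreground."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask)
    for dy in range(3):
        for dx in range(3):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate3(mask):
    """3x3 dilation: a pixel becomes foreground if the "or" over its
    3x3 neighborhood contains any foreground."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask)
    for dy in range(3):
        for dx in range(3):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

# Erosion followed by dilation removes an isolated speck while leaving
# larger solid regions unchanged.
speck = np.zeros((7, 7), dtype=bool)
speck[3, 3] = True                     # one isolated white pixel
cleaned = dilate3(erode3(speck))
# cleaned contains no foreground pixels
```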

3.1.3 Image Segmentation Algorithm After Denoising
Image segmentation is an algorithm that divides the whole image into regions with different special meanings. By separating the regions of interest from the complex image, the corresponding analysis and recognition can be carried out. In order to segment the boundary line and the background area of the runway, an algorithm based on edge detection is adopted in this paper. Edge detection is the key to the edge detection and segmentation algorithm. Generally there are two edge detection methods, serial and parallel. In the serial method, whether a pixel is an edge is determined by the detection result of the previous pixel; in the parallel method, it is affected by the adjacent pixels. Since changes of image grayscale are reflected in the gradient of the grayscale distribution, the edge detection operator is obtained by differentiating the local image. The gradient of the image grayscale distribution is calculated by formula (3):

∇f(x, y) = (∂f/∂x) i + (∂f/∂y) j    (3)

From formula (3) we can find the local grayscale change. The magnitude of the gradient can be expressed as e(x, y):

e(x, y) = √(f_x²(x, y) + f_y²(x, y))    (4)

e(x, y) can be used as the edge detection operator. Alternatively, the sum of the absolute values of the two partial derivatives can be used as the operator:

e(x, y) = |f_x(x, y)| + |f_y(x, y)|    (5)

In addition to the gradient operator, the first-derivative extrema used in edge detection can be obtained by many operators, such as the Roberts operator and the Prewitt operator. The Roberts operator, because of its detection directions, has a much better effect than comparable methods, is sensitive to the noise in the image, and has good positioning accuracy. Therefore, the Roberts operator is selected here for the image segmentation algorithm, taking the differences between the two pairs of diagonally adjacent pixels as the operator:

Δ_x f = f(i, j) − f(i + 1, j + 1)    (6)
Δ_y f = f(i, j + 1) − f(i + 1, j)    (7)

R(i, j) = √((Δ_x f)² + (Δ_y f)²)  or  R(i, j) = |Δ_x f| + |Δ_y f|

where the convolution kernel of Δ_x f is
[ 1  0
  0 −1 ]
and the convolution kernel of Δ_y f is
[ 0  1
 −1  0 ]

where, the convolution operator of Dx f is   0 1 Dx f is . 1 0 Select the appropriate threshold value TH , when Rði; jÞ [ TH is, ði; jÞ is the edge point, and the edge image is the set of all ði; jÞ. The result of edge detection of the denoised image using Roberts operator is shown in Fig. 5.

Fig. 5. Edge detection results
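The Roberts cross detection of formulas (6)–(7) can be sketched in NumPy as follows (the threshold value here is illustrative, not the one used in the paper):

```python
import numpy as np

def roberts_edges(f, TH=30.0):
    """Roberts cross operator on a grayscale image f.

    dx = f(i, j) - f(i+1, j+1) and dy = f(i, j+1) - f(i+1, j);
    a pixel is an edge point when sqrt(dx^2 + dy^2) > TH.
    The output is one row and one column smaller than the input.
    """
    g = f.astype(float)
    dx = g[:-1, :-1] - g[1:, 1:]
    dy = g[:-1, 1:] - g[1:, :-1]
    return np.hypot(dx, dy) > TH

# A vertical black/white step: edges appear only along the step column.
f = np.zeros((4, 4))
f[:, 2:] = 255
e = roberts_edges(f)
# Column 1 of e is all True; columns 0 and 2 are all False.
```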

3.2 Extraction of Runway Boundary
Extracting the runway boundary from the road image is the key to the vision-navigation intelligent vehicle. In this paper, since the runway does not contain many elements, there are obvious boundary lines between the runway area and the background area, and the black boundary line can be regarded as the dividing line between them. When the runway is segmented from the whole image, the complexity of the extraction algorithm can be reduced by using the black line at the boundary of the runway.


The extraction algorithm steps are as follows:
1. Find the maximum and minimum slope of the runway boundary in the video image.
2. Scan the image from top to bottom along two half-width lines and record the coordinates of the first pixel with grayscale value 255 on each.
3. Determine the left and right scanning areas.
4. Change the pixels inside the runway to 255 by region growing.
5. Scan the entire image until the runway image is formed.

3.2.1 Determine the Scanning Area
The slope k of the runway boundary in the video image is calculated by formula (8):

k = h / x    (8)

where h is the height of the camera above the ground, and x is the horizontal distance between the camera and the runway boundary. Assuming that the control algorithm is effective and the intelligent vehicle keeps running inside the runway, the range of x is [b/2, l/2], where b is the vehicle width and l is the runway width. Therefore, the value range of the slope k is [2h/l, 2h/b]. The two scan lines, each half the length of the image, lie in the left and right parts of the image. The image is scanned from top to bottom to find the first pixel with a grayscale value of 255. On the left side, the area between the two rays with slopes 2h/l and 2h/b starting from that pixel is the scanning area; correspondingly, on the right half, the area between the two rays with slopes −2h/l and −2h/b starting from the first such pixel is the scanning area.

3.2.2 Regional Growth Method
A large amount of experimental data shows that the runway area is generally in the lower half of the original image. A baseline is selected for the runway area, and the entire enclosed area is separated from it by the region growing method. In this way only one pass is needed to separate the runway in the whole image, which greatly improves efficiency. The region growing algorithm is explained by the growth example of two adjacent image lines. Suppose segment AB is a segment within the identified runway range on line i, and segment CD is a not-yet-detected segment on line i + 1. The endpoint coordinates of the two segments are A(x_a, y_i), B(x_b, y_i), C(x_c, y_{i+1}) and D(x_d, y_{i+1}) respectively. If the endpoints of the two segments AB and CD satisfy Eq. (9):

x_c − x_b > x_d − x_a    (9)
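Eq. (9) is the connectivity test that decides whether a candidate segment CD on row i + 1 joins the runway region grown so far. The printed condition is hard to recover exactly from the scan; the standard test at this point in a region-growing algorithm is a horizontal-overlap check, sketched here under that assumption (function name is ours):

```python
def connects(ab, cd):
    """Whether candidate segment CD on row i+1 connects to segment AB
    on row i, i.e. whether their x-ranges overlap horizontally.

    ab, cd: (x_start, x_end) pairs with x_start <= x_end.
    Note: this is our reading of the intended condition, not a verbatim
    transcription of Eq. (9).
    """
    xa, xb = ab
    xc, xd = cd
    return xc <= xb and xd >= xa

# A segment directly below AB connects; a disjoint one does not.
```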

The CD line segment is then a valid segment on line i + 1, and the gray values of the pixels between the corresponding endpoints are set to 255. If, according to the above algorithm, multiple valid segments are obtained in one row, the leftmost pixel of all valid segments is taken as the starting point, the rightmost valid pixel as the end point, and the connection between these two points as the final valid segment. The above algorithm scans the image line by line, and the valid segment of every line is incorporated into the runway area, so that the entire runway area is obtained. When a scanned line contains no valid segment, the runway area has reached its end in that direction, and the scanning stops.

3.3 Description Algorithm of Runway Boundary

The runway boundary obtained by the preceding calculation needs to be described in mathematical language before the position of the runway can be analyzed: the section of runway close to the vehicle is generally straight, while the distant section may be curved. The direction and curvature of a bend are obtained by the algorithm. In order to improve the fitting effect, a software algorithm is used to determine the trend of the runway before the boundary is fitted.

3.3.1 Boundary Alignment Judgment
As can be seen from Fig. 6, the shape of the runway can be analyzed from the possible configurations of the boundary lines of the runway ahead. There are usually only two possibilities: going straight or turning. If the boundary lines are two straight lines, the runway goes straight; if the boundary lines are two curves, the runway is turning.

Fig. 6. Runway boundary trajectory trend map

The boundary lines of the runway in Fig. 6 can be roughly divided into four parts. The parts above points PL2 and PR2 are called the upper halves of the boundary lines, and those below are called the lower halves. The line between PL1 and PL2 is called the left boundary, and the line between PR1 and PR2 the right boundary. In the whole image the lower boundary appears longer than the upper boundary; this is a perspective effect, and in fact the upper boundary line represents a real length far longer than that of the lower boundary. The direction of the runway is generally shown by the lower boundary and the curvature by the upper boundary. When the smart car is at the lateral center of the runway, the slope of the left boundary is positive and that of the right boundary is negative. The deviation angle between the center position of the smart car and the runway ahead can be analyzed by comparing the slopes of the left and right boundaries.


C. Luo and Z. Tong

and right center of the runway, the slope of the left boundary is positive and that of the right boundary is negative. The deviation angle between the center position of the smart car and the runway ahead can be analyzed by comparing the slopes of the left and right boundaries. In the image, the extension lines of the left and right lower boundaries intersect at a point, called the quasi-vanishing point. The distance between this point and the intersection of the left and right upper boundary lines indicates the degree of bending of the runway: if the runway is straight, the two points coincide, and the greater the distance between them, the sharper the bend. According to these properties, the bending degree of the runway can be analyzed from the upper and lower boundary lines. If the quasi-vanishing point is close to the intersection of the upper boundary lines, the runway is straight; if the upper boundary bends right, the runway turns right; if the upper boundary bends left, the runway turns left. Since the directions of the left and right upper boundary lines are consistent, only the slope of one line needs to be analyzed, which simplifies the algorithm. The left upper and lower boundary lines are selected as the analysis benchmark, and the slopes of their fitted lines are computed respectively. If the slopes of the fitted lines of both the upper and the lower boundary are positive, and the slope of the lower boundary's fitted line is greater than that of the upper boundary's, the runway turns right; if the two slopes are approximately equal, the runway is straight. The least squares method is used to fit the upper and lower boundaries. First, the pixel points on the boundary line to be fitted are determined; then the values of k and b that minimize \sum_{i=1}^{n} (k x_i + b - y_i)^2 are obtained. The fitted line is formula (10):

y = kx + b    (10)
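The quasi-vanishing-point test described above can be sketched as follows (an illustrative fragment with made-up slopes and intercepts, not the authors' code):

```python
# Illustrative sketch: locate the quasi-vanishing point as the intersection of
# the two fitted lower-boundary lines, and measure its distance to the
# intersection of the upper-boundary lines (~0 means a straight runway).
import math

def line_intersection(k1, b1, k2, b2):
    """Intersection of y = k1*x + b1 and y = k2*x + b2 (slopes must differ)."""
    x = (b2 - b1) / (k1 - k2)
    y = k1 * x + b1
    return x, y

def bend_indicator(lower_left, lower_right, upper_left, upper_right):
    """Each argument is a (k, b) pair of a fitted boundary line. Returns the
    distance between the quasi-vanishing point (lower lines) and the
    intersection of the upper lines."""
    qvp = line_intersection(*lower_left, *lower_right)
    upper = line_intersection(*upper_left, *upper_right)
    return math.dist(qvp, upper)

# Straight runway: the upper boundaries are extensions of the lower ones,
# so both intersections coincide and the indicator is 0.
d = bend_indicator((1.0, 0.0), (-1.0, 10.0), (1.0, 0.0), (-1.0, 10.0))
```

A bent runway changes the slope of the upper boundaries, which moves their intersection away from the quasi-vanishing point and makes the indicator grow with the sharpness of the bend.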

3.3.2 Runway Boundary Equation
After the direction of the runway boundary has been determined, the boundary equation can be obtained. If the runway is straight, the boundary line equation is y = kx + b (where k is the slope of the line fitted to the upper boundary). If the runway is curved, the upper boundary needs to be further fitted with a quadratic curve, whose equation can be set as Eq. (11):

y = ax^2 + bx + c    (11)

As with the fitted line, the selected boundary pixel points are substituted into Eq. (11), and the coefficients a, b and c that minimize the sum of squared differences are obtained, thus determining the fitted curve.
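Both fits, Eq. (10) and Eq. (11), can be obtained from the same least-squares normal equations. The sketch below is a pure-Python illustration, not the paper's implementation:

```python
# Minimal least-squares polynomial fit (degree 1 for Eq. (10), degree 2 for
# Eq. (11)) via the normal equations and Gaussian elimination.

def polyfit(xs, ys, deg):
    """Return coefficients [c0, c1, ..., cdeg] of the least-squares polynomial
    y = c0 + c1*x + ... minimizing sum_i (p(x_i) - y_i)^2."""
    n = deg + 1
    # Normal equations A c = r with A[j][k] = sum x^(j+k), r[j] = sum y*x^j.
    A = [[sum(x ** (j + k) for x in xs) for k in range(n)] for j in range(n)]
    r = [sum(y * x ** j for x, y in zip(xs, ys)) for j in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda i: abs(A[i][col]))
        A[col], A[piv] = A[piv], A[col]
        r[col], r[piv] = r[piv], r[col]
        for i in range(col + 1, n):
            f = A[i][col] / A[col][col]
            for k in range(col, n):
                A[i][k] -= f * A[col][k]
            r[i] -= f * r[col]
    # Back substitution.
    c = [0.0] * n
    for i in reversed(range(n)):
        c[i] = (r[i] - sum(A[i][k] * c[k] for k in range(i + 1, n))) / A[i][i]
    return c

# Straight boundary: fit y = kx + b as in Eq. (10); data lie on y = 2x + 1.
b, k = polyfit([0, 1, 2, 3], [1, 3, 5, 7], 1)
# Curved boundary: fit y = ax^2 + bx + c as in Eq. (11); data lie on y = x^2.
c0, c1, c2 = polyfit([-2, -1, 0, 1, 2], [4, 1, 0, 1, 4], 2)
```

With real boundary pixels the residuals are nonzero and the returned coefficients are the least-squares optimum rather than an exact interpolation.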

Research on Image Processing Algorithm …


4 Control Algorithm

Through the above processing, the direction and bending degree of the runway are obtained. These data can be used to control the motor speed and the steering gear, so that the smart car can drive fast and stably within the boundary of the runway. The steering and motor speed control algorithm used in this paper is the classical PID algorithm; specifically, the incremental PID algorithm is used for motor speed control. Its advantages are a small memory footprint and fast operation. The PWM pulse signal that controls the motor is obtained by comparing the measured motor speed with the given value.
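The incremental PID speed loop can be sketched as follows; the gains and PWM clamp range are illustrative assumptions, not values from the paper:

```python
# Sketch of an incremental PID speed controller: only the change of the
# control quantity is computed each step, which needs little memory and
# avoids error accumulation. Gains and limits are made-up examples.

class IncrementalPID:
    def __init__(self, kp, ki, kd, out_min=0.0, out_max=100.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.e1 = self.e2 = 0.0   # previous two errors
        self.output = 0.0         # current PWM duty (%)

    def update(self, setpoint, measured):
        e = setpoint - measured
        # Incremental form: du = Kp*(e - e1) + Ki*e + Kd*(e - 2*e1 + e2)
        delta = (self.kp * (e - self.e1)
                 + self.ki * e
                 + self.kd * (e - 2 * self.e1 + self.e2))
        self.output = min(self.out_max, max(self.out_min, self.output + delta))
        self.e2, self.e1 = self.e1, e
        return self.output

pid = IncrementalPID(kp=0.8, ki=0.2, kd=0.05)
duty = pid.update(setpoint=1.54, measured=1.20)  # target speed vs. encoder speed
```

On a microcontroller the returned duty cycle would be loaded into the PWM peripheral each control period.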

5 Test Data and Result Analysis

In order to verify the validity of the hardware design and the software algorithm, a smart car based on the Freescale K60 microcontroller was built and tested on the runway. Figure 7 shows a photo of the test runway. The entire test runway has a total length of 37 m, including straight lines, large S bends, small S bends, cross bends and hairpin bends. The car passed multiple tests at different speeds on the test runway. The test data analysis diagram is shown in Fig. 8.

Fig. 7. Test runway photos

Fig. 8. Time diagram of single test lap


Through the tests, the fastest lap time was 20.44 s and the average speed was 1.54 m/s. In the intelligent vehicle tests, the image processing algorithm in the navigation system of this paper could accurately determine the boundary lines of the runway and analyze the direction of the runway, so as to realize the autonomous driving function.

6 Conclusion

In this paper, an intelligent vehicle video navigation algorithm based on image restoration is proposed. In addition, an intelligent vehicle hardware system based on the Freescale K60 microcontroller is built to test the proposed navigation algorithm. The test data support the conclusion that the algorithm can extract the runway boundary accurately and rapidly.


A Study on Influencing Factors of International Applied Talent Cultivation in Higher Vocational Education

Jian Yong1, Fanyi Kong2, and Xiuping Sui2

1 Weihai Vocational College, Weihai, Shandong, China
[email protected]
2 Shandong Jiaotong University, Jinan, Shandong, China

Abstract. With the tide of global economic integration, international exchanges and cooperation in higher education have become more and more frequent and deep. The internationalization of higher education has become a hot issue and a new trend in the reform and development of higher education institutions around the world. This paper uses the Analytic Hierarchy Process, combining qualitative and quantitative analysis, to explore the problems, influencing factors and countermeasures in the training of internationalized applied talents in higher vocational education, and expounds the connotation, standards, evaluation methods and evaluation steps of such training.

Keywords: Higher vocational education · AHP · Internationalization

1 Background

With the tide of global economic integration, international exchanges and cooperation in higher education are becoming more and more frequent and deep [1]. The internationalization of higher education has become a hot issue and a new trend in the reform and development of colleges and universities around the world [2]. Although some achievements have been made in the internationalization of education in our country, many problems remain in the process: an international concept has not yet taken shape, the development of two-way student education is slow, an international curriculum system and personnel training mode have not been established, and the international development of different departments is unbalanced. Following the experience of internationalization of higher education, we should actively recruit international teaching staff, build an internationalized curriculum system, actively carry out Chinese-foreign cooperation in running schools, actively promote two-way student education, and strengthen international scientific and technological cooperation.

National Education Science 13th Five-Year Plan 2016 key topic of the Ministry of Education, "STCW New Convention Driven International Applied Talent Cultivation Theory and Practice Research" (approved as DJA160380).

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1689–1694, 2019. https://doi.org/10.1007/978-981-13-3648-5_218


On the basis of the development trend of internationalization of higher education and the international experience of universities and colleges at home and abroad, this paper studies the development concept of higher education internationalization and the practice of running a school, and actively explores a practical and feasible path of internationalization [3]. Using the analytic hierarchy process (AHP), this paper combines qualitative and quantitative analysis methods to explore the existing problems, influencing factors and countermeasures for the training of international applied talents in higher vocational education, and expounds the connotation, standards, evaluation methods and evaluation steps of such training.

2 Analytic Hierarchy Process

The Analytic Hierarchy Process (AHP) is a simple, flexible and practical multi-criteria decision-making method proposed by the American operations research professor T. L. Saaty in the early 1970s [4]. It is especially suitable for problems that are difficult to quantify fully. AHP modeling is carried out in the following four steps: establish the hierarchical structure model; construct all judgment matrices at each level [5]; perform hierarchical single ranking and its consistency check; perform total hierarchical ranking and its consistency check [6, 7].
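The single-ranking and consistency-check steps can be sketched as follows, with the principal eigenvector approximated by power iteration. The random index (RI) values are Saaty's standard table, and A0 is the first-level judgment matrix given in Sect. 3; the code itself is an illustration, not the authors' implementation:

```python
# Sketch of the AHP consistency check: power iteration for the principal
# eigenvector w and eigenvalue lambda_max, then CI = (lambda_max - n)/(n - 1)
# and CR = CI / RI(n).

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_check(A, iters=200):
    """Return (weights, lambda_max, CI, CR) for a pairwise judgment matrix A."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    # Estimate lambda_max as the average of (A w)_i / w_i.
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n
    CI = (lam - n) / (n - 1)
    CR = CI / RI[n]
    return w, lam, CI, CR

# First-level judgment matrix A0 from Sect. 3.
A0 = [[1.0, 3.0, 2.0, 2.0, 2.0],
      [1/3, 1.0, 2.0, 3.0, 3.0],
      [0.5, 0.5, 1.0, 2.0, 2.0],
      [0.5, 1/3, 0.5, 1.0, 1.0],
      [0.5, 1/3, 0.5, 1.0, 1.0]]
w, lam, CI, CR = ahp_check(A0)   # CR < 0.1, so A0 is acceptably consistent
```

The weights and λmax computed this way agree with the values reported for A0 below (W ≈ (0.3588, 0.2568, 0.1712, 0.1066, 0.1066), λmax ≈ 5.2892).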

2.1 Influencing Factors of International Applied Talent Cultivation in Higher Vocational Education

The influencing factors of the training of internationalized applied talents in higher vocational education include: internationalization of strategic positioning, internationalization of personnel elements, internationalization of operational elements, internationalization of organizational elements, and internationalization of courses. The second-level indicators include:

A. Internationalization of strategic positioning: education concepts, unified understanding, policy initiatives, school atmosphere, and implementation strength.
B. Internationalization of personnel elements: internationalization of leadership, internationalization of students, international exchange of students, and international exchange of teachers.
C. Internationalization of operational elements: scientific research cooperation and exchanges, Chinese-foreign cooperation in running schools, funding for education, and education expenditure.
D. Organizational elements: international academic research institutions, foreign-related management institutions, foreign-related societies, and foreign-related study groups.
E. Internationalization of courses: bilingual teaching materials, original teaching materials, internationalization of teachers, and mutual recognition of credits.


[Hierarchical structure model — goal level: training of internationalized applied talents in higher vocational education; criterion level: internationalization of strategic positioning, personnel elements, operational elements, organizational elements, and courses; indicator level: the second-level indicators listed in Sect. 2.1.]

3 Establish Judgment Matrices

Establish the judgment matrix of the first-level indicators:

A0    A1       A2       A3       A4       A5
A1    1.0000   3.0000   2.0000   2.0000   2.0000
A2    0.3333   1.0000   2.0000   3.0000   3.0000
A3    0.5000   0.5000   1.0000   2.0000   2.0000
A4    0.5000   0.3333   0.5000   1.0000   1.0000
A5    0.5000   0.3333   0.5000   1.0000   1.0000

The results of the calculation are as follows: CI = 0.0723, CR = 0.0646. The contrast matrix A0 passes the consistency test; the weight vector is W = (0.3588, 0.2568, 0.1712, 0.1066, 0.1066), with λmax = 5.2892.

A1     A11      A12      A13      A14      A15
A11    1.0000   2.0000   3.0000   2.0000   2.0000
A12    0.5000   1.0000   2.0000   3.0000   3.0000
A13    0.3333   0.5000   1.0000   1.0000   3.0000
A14    0.5000   0.3333   1.0000   1.0000   1.0000
A15    0.5000   0.3333   0.3333   1.0000   1.0000


Establish the judgment matrix of internationalization of strategic positioning. The results of the calculation are as follows: CI = 0.0697, CR = 0.0622. The contrast matrix A1 passes the consistency test; the weight vector is W = (0.3439, 0.2722, 0.1590, 0.1225, 0.1024), with λmax = 5.2786.

A2     A21      A22      A23      A24
A21    1.0000   4.0000   2.0000   3.0000
A22    0.2500   1.0000   2.0000   2.0000
A23    0.5000   0.5000   1.0000   1.0000
A24    0.3333   0.5000   1.0000   1.0000

Establish the judgment matrix of internationalization of personnel elements. The results of the calculation are as follows: CI = 0.0655, CR = 0.0727. The contrast matrix A2 passes the consistency test; the weight vector is W = (0.4946, 0.2183, 0.1534, 0.1337), with λmax = 4.1964.

A3     A31      A32      A33      A34
A31    1.0000   4.0000   2.0000   3.0000
A32    0.2500   1.0000   0.5000   2.0000
A33    0.5000   2.0000   1.0000   2.0000
A34    0.3333   0.5000   0.5000   1.0000

Establish the judgment matrix of internationalization of operational elements. The results of the calculation are as follows: CI = 0.0323, CR = 0.0358. The contrast matrix A3 passes the consistency test; the weight vector is W = (0.4761, 0.1547, 0.2523, 0.1169), with λmax = 4.0968.

A4     A41      A42      A43      A44
A41    1.0000   0.5000   0.3333   0.2500
A42    2.0000   1.0000   2.0000   2.0000
A43    3.0000   0.5000   1.0000   1.0000
A44    4.0000   0.5000   1.0000   1.0000


Establish the judgment matrix of organizational elements. The results of the calculation are as follows: CI = 0.0690, CR = 0.0767. The contrast matrix A4 passes the consistency test; the weight vector is W = (0.1059, 0.3841, 0.2424, 0.2676), with λmax = 4.2071.

A5     A51      A52      A53      A54
A51    1.0000   2.0000   0.3333   0.5000
A52    0.5000   1.0000   0.5000   1.0000
A53    3.0000   2.0000   1.0000   2.0000
A54    2.0000   1.0000   0.5000   1.0000

Establish the judgment matrix of internationalization of courses. The results of the calculation are as follows: CI = 0.0690, CR = 0.0767. The contrast matrix A5 passes the consistency test; the weight vector is W = (0.1832, 0.1661, 0.4193, 0.2314), with λmax = 4.2071. Finally, the total hierarchical consistency is tested according to the same principle.
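The total hierarchical ranking can then be sketched by combining the weight vectors obtained above: the global weight of each second-level indicator is its local weight multiplied by the weight of its criterion (an illustration, not the authors' code):

```python
# Sketch of the total hierarchical ranking. The criterion weights come from
# A0, and each sub-vector from the corresponding second-level judgment matrix.

criteria = [0.3588, 0.2568, 0.1712, 0.1066, 0.1066]   # W of A0
sub = [
    [0.3439, 0.2722, 0.1590, 0.1225, 0.1024],          # A1: strategic positioning
    [0.4946, 0.2183, 0.1534, 0.1337],                  # A2: personnel elements
    [0.4761, 0.1547, 0.2523, 0.1169],                  # A3: operational elements
    [0.1059, 0.3841, 0.2424, 0.2676],                  # A4: organizational elements
    [0.1832, 0.1661, 0.4193, 0.2314],                  # A5: courses
]
# Global weight of each second-level indicator: criterion weight * local weight.
global_weights = [wc * ws for wc, wsub in zip(criteria, sub) for ws in wsub]
total = sum(global_weights)   # should be 1 up to rounding
```

Since each local weight vector and the criterion vector each sum to one, the 21 global weights sum to one as well, which is a convenient sanity check on the synthesis.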

4 Conclusion

The factors affecting the training of international talents in higher vocational education include internationalization of strategic positioning, internationalization of personnel elements, internationalization of operational elements, organizational elements, and internationalization of courses, among which internationalization of strategic positioning has the highest weight [9, 10]. Among the second-level indicators: under internationalization of strategic positioning, education concepts has the highest weight; under internationalization of personnel elements, leadership internationalization is highest; under internationalization of operational elements, Chinese-foreign cooperation in running schools is highest; under organizational elements, foreign-related academic groups are highest; and under internationalization of courses, bilingual teaching materials have the lowest weight.

References
1. Discretion and bias in performance evaluation: the impact of diversity and subjectivity. 30(1), 67–78 (Jan 2005)
2. http://thequalityportal.com/q_ahp.htm. Retrieved 2007-08-21
3. Eur. Manag. J. 21(3), 323–337 (June 2003)


4. Prieto, N., Uttaro, B., Mapiye, C., Turner, T.D., Dugan, M.E.R., Zamora, V., Young, M., Beltranena, E.: Meat Sci. 98(4), 585 (2014)
5. Riovanto, R., De Marchi, M., Cassandro, M., Penasa, M.: Food Chem. 134(4), 2459 (2012)
6. Prieto, N., Dugan, M.E.R., López-Campos, O., McAllister, T.A., Aalhus, J.L., Uttaro, B.: Meat Sci. 90(1), 43 (2012)
7. Prieto, N., López-Campos, O., Aalhus, J.L., Dugan, M.E.R., Juárez, M., Uttaro, B.: Meat Sci. 98(2), 279 (2014)
8. Pla, M., Hernández, P., Ariño, B., Ramírez, J.A., Díaz, I.: Food Chem. 100(1), 165 (2007)
9. Pullanagari, R.R., Yule, I.J., Agnew, M.: Meat Sci. 100, 156 (2015)
10. Saudland, A., Wagner, J., Nielsen, J.P., Munck, L., Norgaard, L., Engelsen, S.B.: Appl. Spectrosc. 54(3), 413 (2000)

Research on the Platform Structure of Product Data Information Management System

Bingfeng Liu and Mingjuan Jiang

School of Management and Economics, Jingdezhen Ceramic Institute, Jingdezhen, China
[email protected]

Abstract. The product data information management system improves the enterprise's information application management level as a whole; at the same time, it provides a reliable data basis for enterprises to strengthen internal management and to further analyze and control the cost of product materials. The comprehensive application of the system effectively promotes the standardization of enterprise management and the accuracy and timeliness of data, and has a positive and far-reaching impact on the further improvement of the enterprise management level. Facing fierce competition in the international and domestic markets, many enterprises cannot respond quickly and accurately to market changes. Implementing an enterprise informatization project can greatly improve new-product design ability and management level, and also improve the competitiveness of the enterprise in manufacturing. Promoting enterprise development through informatization has become inevitable.

Keywords: Platform structure · Product data · Information management system

1 Introduction

Since the implementation of the ERP project, most domestic enterprises have set up a leading group for informatization work, headed personally by the director [1]. The unified leadership of enterprise informatization has created favorable conditions for the implementation and popularization of informatization projects. The enterprise has a good network and hardware base: through many years of effort, it has built up a backbone network system application platform, and the coverage of product design workstations in the technology center has reached more than 80% [2], providing a good hardware and network environment for the popularization and application of information technology. The enterprise also has a team of skilled personnel [3]. The implementation and application of enterprise OA, ERP and PDM, and of the national debt information project, have improved the overall management level of the enterprise [4]. At the same time, it has trained a batch of key users who know the business and are familiar with the software operation process, as well as the system management and technical support

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1695–1699, 2019. https://doi.org/10.1007/978-981-13-3648-5_219


personnel of the department of information science and technology, laying a good foundation for the implementation and application of future informatization projects [5].

2 System Platform Structure

2.1 Product Data Information Management System

Based on the needs and objectives of the enterprise, and drawing on the information construction experience of domestic and foreign enterprises in the locomotive and rolling stock industry, the composition of the whole information system of the enterprise is depicted from an overall perspective. On the PDM product data management platform, the design system, the process manufacturing system and the enterprise resource system together complete full life cycle management of the product [6]. The principle is overall planning and step-by-step implementation: starting from the application in the motor train design department, where results are easy to reproduce, the application is gradually improved and then popularized in other departments. Based on the robust, open, advanced and flexible TeamCenter system platform, a research and development support management platform is established to meet the rapid development of the enterprise business. At present, effective document management is urgently needed to promote the popularization and application of 3D CAD in the design system. In the process of building a unified PDM platform, several major business problems of current concern to the enterprise need to be solved step by step: coding management, document management, product development process management, close integration with CAD, CAPP and ERP systems, extended configuration management, and project management.

2.2 Integration with CAPP System

For the CAPP system, the product management software company has a special system to solve the problem; the enterprise uses the domestic XTCAPP system [7]. Through the TeamCenter integration development package, the design EBOM data is easily transferred to the CAPP system to produce the manufacturing MBOM data. The enterprise will manage product and process data in a single database and operating environment. TeamCenter manages these processes and data automatically, and transmits information to the ERP system through an interface to make production plans and material preparation [8]. When the preparation and approval of all parts of a product are finished, TeamCenter issues a complete manufacturing data package that allows the production department to start production.

2.3 Integration with ERP System

For commonly used ERP systems, TeamCenter provides ready-made integration modules as integration interfaces, such as integration with ERP systems like SAP and Oracle.


The XXXERP system has been preliminarily implemented in the enterprise. For the integration of the PDM and ERP systems, two approaches can be adopted: transfer of intermediate files, or development of an intermediate integration interface.

Transfer Mode of Intermediate Files
According to the ERP system's requirements for design data sources, relevant reports are output through secondary development of the TeamCenter system and transmitted to ERP as intermediate files. In the primary stage of realizing PDM-ERP integration, this simple implementation is considered, as it facilitates data transmission and allows manual intervention to ensure that the ERP data source is correct and complete [9].

Developing Intermediate Interfaces
Using the openness of TeamCenter, a specific interface program is developed; the public information of TeamCenter and ERP is stored in an agreed database, and data processing and dynamic information exchange are carried out through the interface program.
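As a concrete illustration of the intermediate-file mode, the sketch below writes design BOM records to a CSV hand-off file for an ERP import job. The field names, file name and record layout are hypothetical assumptions for illustration, not TeamCenter or ERP specifics:

```python
# Illustrative sketch of the intermediate-file transfer mode: design BOM
# records (assumed already exported from the PDM side into Python dicts)
# are written to a CSV file that the ERP import job can pick up.
import csv

def write_bom_interchange(records, path):
    """records: iterable of dicts with part_no, revision, description, qty."""
    fields = ["part_no", "revision", "description", "qty"]
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for rec in records:
            writer.writerow({k: rec[k] for k in fields})

write_bom_interchange(
    [{"part_no": "P-1001", "revision": "A", "description": "bogie frame", "qty": 2}],
    "ebom_to_erp.csv",
)
```

A plain text format like this is what makes manual inspection and intervention possible in the primary stage, at the cost of the tighter coupling a database interface would give.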

3 Extended Configuration Management

There are different expressions of product structure in different enterprises, such as product list, bill of material, and parts table. To unify terminology, the term BOM (Bill of Material) is used in the following chapters.

BOM Generation and Editing
A convenient graphical BOM interface is provided for building, modifying and querying. Users can easily set up or modify components, set the relationships between parts, and establish replacement parts, effective dates, batch numbers, etc.

The Expansion of Multi-Layer BOM
The expansion of the BOM structure makes the structure clear at a glance. As shown in Fig. 1, the components are expanded, and the relevant data and drawings can be displayed using the Product Data function.

Fig. 1. The expansion of multi-layer BOM
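The multi-layer expansion, and the difference analysis described next, can be sketched with a toy data model (invented for illustration; a real PDM system stores far richer structures):

```python
# Minimal sketch of multi-layer BOM expansion and BOM difference analysis.
# The BOM is a dict mapping each parent part to its (child, quantity) list.

def expand(bom, part, level=0, out=None):
    """Depth-first expansion of a multi-layer BOM, yielding (level, part)."""
    out = [] if out is None else out
    out.append((level, part))
    for child, qty in bom.get(part, []):
        expand(bom, child, level + 1, out)
    return out

def bom_diff(bom_a, bom_b, part):
    """Parts used by only one of two BOM revisions of the same product."""
    used_a = {p for _, p in expand(bom_a, part)}
    used_b = {p for _, p in expand(bom_b, part)}
    return used_a - used_b, used_b - used_a

rev_a = {"loco": [("frame", 1), ("motor", 4)], "motor": [("rotor", 1)]}
rev_b = {"loco": [("frame", 1), ("motor", 4)], "motor": [("rotor2", 1)]}
removed, added = bom_diff(rev_a, rev_b, "loco")   # {'rotor'}, {'rotor2'}
```

The expansion's (level, part) pairs correspond to the indented tree view a PDM client shows, and the two difference sets are exactly what a change-comparison report would list.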


BOM Difference Analysis
By comparing the differences in the parts used for the same or different products, the user can understand the changes before and after a BOM revision, avoid errors when making corrections, and trace the correction process. A comparative result report can also be produced.

Product Version Configuration (Revision Configuration)
Through the release approval process, each released part version is attached with a status, so that for the different stages of the product lifecycle (conceptual design, prototype stage, small batch trial production, mass production and manufacturing …) the user can use BOM version configuration management to consult the BOM product data of each stage at any time; the design history is preserved and old data can be recalled.

Product Structure Variant Configuration Management
As shown in Fig. 2, for the selection of different product options, the engineer can define the types and conditions of the product configuration in a conditional way. Through a convenient condition-setting method, the user can require the system to automatically derive the corresponding BOM.

Fig. 2. Variant configuration

Effectivity Management of Parts and Product Structure
As shown in Fig. 3, once the expiration date is set, the system will not allow the use of parts or structures that

Fig. 3. Effectivity configuration


exceed the expiration date, which can solve the problem of inventory increases caused by improper procurement. The user can also specify a past date to display the product structure at that time, so that the maintenance engineer can easily look up the product structure of the past. Effectivity can be set by batch number, serial number, or date.
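The date-effectivity rule can be sketched as follows; the part records and dates are illustrative assumptions, not data from any real system:

```python
# Sketch of date effectivity: a part may only be used while the query date is
# inside its effectivity window; querying a past date reconstructs the
# structure as it was at that time.
from datetime import date

def is_effective(part, on):
    """part: dict with optional 'eff_from'/'eff_to' date bounds."""
    start = part.get("eff_from", date.min)
    end = part.get("eff_to", date.max)
    return start <= on <= end

def effective_structure(parts, on):
    """Filter a flat part list down to those effective on a given date."""
    return [p["no"] for p in parts if is_effective(p, on)]

parts = [
    {"no": "P-1", "eff_to": date(2018, 12, 31)},   # expired at end of 2018
    {"no": "P-2", "eff_from": date(2018, 1, 1)},   # effective from 2018 on
]
today_view = effective_structure(parts, date(2019, 6, 1))   # ['P-2']
past_view = effective_structure(parts, date(2018, 6, 1))    # ['P-1', 'P-2']
```

Effectivity by batch number or serial number would follow the same pattern with the date comparison replaced by a range check on the batch or serial value.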

4 Conclusions

For some parts that can be replaced by another part in a specific structure, replacement can be defined; replacement parts can then be substituted wherever the original part is used. TeamCenter can build related product structures and display related BOMs, such as the engineering BOM (E-BOM) and the manufacturing BOM (M-BOM), based on the life cycle of the product. The user can build a product structure for each stage in a graphical interface, based on the requirements of each phase of the product lifecycle, establishing the relevance of the components displayed in that stage. The system can expand a BOM on a product structure according to the configuration items and other conditions set above, avoiding the trouble of maintaining a large number of BOMs.

References
1. Wang, X.: The Strategies' Research of Electric Power Marketing Based on the Demand Side Management Theory. North China Electric Power University (2008)
2. Chiu, C.-F., Shih, T.K., Wang, Y.-H.: An integrated analysis strategy and mobile agent framework for recommendation system in EC over internet. Tamkang J. Sci. Eng. 5(16), 159–174 (2002)
3. Zhou, M.: Research on Several Key Problems of Electricity Demand Side Market Operation. North China Electric Power University (2005)
4. Prieto, N., Uttaro, B., Mapiye, C., Turner, T.D., Dugan, M.E.R., Zamora, V., Young, M., Beltranena, E.: Meat Sci. 98(4), 585 (2014)
5. Riovanto, R., De Marchi, M., Cassandro, M., Penasa, M.: Food Chem. 134(4), 2459 (2012)
6. Prieto, N., Dugan, M.E.R., López-Campos, O., McAllister, T.A., Aalhus, J.L., Uttaro, B.: Meat Sci. 90(1), 43 (2012)
7. Prieto, N., López-Campos, O., Aalhus, J.L., Dugan, M.E.R., Juárez, M., Uttaro, B.: Meat Sci. 98(2), 279 (2014)
8. Pla, M., Hernández, P., Ariño, B., Ramírez, J.A., Díaz, I.: Food Chem. 100(1), 165 (2007)
9. Pullanagari, R.R., Yule, I.J., Agnew, M.: Meat Sci. 100, 156 (2015)

Research on a New Clustering Validity Index Based on Data Mining

Chaobo Zhang

School of Economics and Management, Beijing University of Posts and Telecommunications, Beijing 100876, China
[email protected]

Abstract. Clustering analysis has made great achievements in its development, but many problems remain. This paper mainly studies the question of determining the optimal number of clusters in cluster analysis. The KMIBCP (K-means Intra-Between-Cluster Partition) algorithm, based on the K-means clustering algorithm, is proposed. The KMIBCP algorithm uses the IBCP (Intra-Between-Cluster Partition) validity index to analyze the validity of the clustering results produced by the K-means clustering algorithm and to determine the optimal number of clusters. Experimental results on artificial datasets verify the effectiveness of the proposed algorithm.

Keywords: Data mining · Clustering validity index · K-means clustering

1 Introduction

The K-means clustering algorithm is one of the most widely used algorithms in clustering analysis. Many researchers have studied and improved the algorithm in detail [1], gradually forming an improved clustering strategy and making it a typical partition clustering algorithm. Literature [2, 3] improved K-means by adopting genetic algorithms and continuously adjusting the weights during the clustering process. Literature [4–6] clustered after giving different weights to the variables and improved the clustering quality by continuously adjusting the weights; this method is somewhat equivalent to changing the distance function between the sample and the cluster center. The K-means algorithm can process big data sets efficiently; especially when the samples are distributed in compact clusters, good clustering results can be achieved [7, 8]. Clustering validity indices are used to calculate the appropriate cluster number k, i.e., the optimal cluster number kopt. However, due to the defects of existing validity indices, their clustering validity test results are unsatisfactory when the clustering structure is difficult to identify, and it is difficult to obtain the correct optimal cluster number [9, 10]. A new validity index, the IBCP index, is presented in this paper. On this basis, an algorithm for determining the optimal cluster number is proposed, which evaluates the clustering results of the K-means algorithm and determines the optimal cluster number of the samples.

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1700–1704, 2019. https://doi.org/10.1007/978-981-13-3648-5_220


2 Basic Algorithm

2.1 K-Means Clustering Algorithm

The algorithm selects the cluster number k and k initial cluster centers, and each sample is assigned to one of the k clusters according to the minimum distance principle. After that, the cluster centers and the category of each sample are adjusted continuously until the sum of the squared distances from each sample to the center of its category is minimized. The algorithm steps are as follows:

Algorithm: K-means clustering algorithm
(1) From n samples, k samples are selected as the initial cluster centers (z1, z2, …, zk).
(2) For each sample xi, find the nearest cluster center zv and assign xi to the cluster uv marked by zv.
(3) Recalculate each cluster center as the average of the samples assigned to it.
(4) Compute D = \sum_{i=1}^{n} [\min_{r=1…k} d(x_i, z_r)]^2.
(5) If the value of D converges, return (z1, z2, …, zk; U) and terminate the algorithm; otherwise go to step (2).
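Steps (1)–(5) can be sketched in pure Python (a minimal illustration with Euclidean distance and a fixed seed, not the paper's implementation):

```python
# Minimal K-means sketch following steps (1)-(5) above.
import math, random

def kmeans(samples, k, max_iter=100, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(samples, k)                       # step (1)
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        for x in samples:                                  # step (2)
            v = min(range(k), key=lambda r: math.dist(x, centers[r]))
            clusters[v].append(x)
        new_centers = [                                    # step (3)
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
        if new_centers == centers:                         # step (5): converged
            break
        centers = new_centers
    D = sum(min(math.dist(x, z) for z in centers) ** 2 for x in samples)  # step (4)
    return centers, clusters, D

pts = [(0, 0), (0, 1), (1, 0), (30, 30), (30, 31), (31, 30)]
centers, clusters, D = kmeans(pts, 2)
```

On these two well-separated toy clusters the iteration settles on the natural 3/3 split regardless of which two points are drawn as initial centers.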

2.2 The Method for Determining the Optimal Clustering Number

The basic idea of the algorithm for determining the optimal cluster number is:

Algorithm: Determination of the optimal cluster number for the traditional K-means algorithm
1. Select the search range [kmin, kmax] of the cluster number; usually kmin = 2 and kmax = Int(\sqrt{n}).
2. For k = kmin to kmax:
   (a) randomly select k initial cluster centers Z^k;
   (b) using the K-means clustering algorithm, update the membership matrix U^k and the cluster centers Z^k;
   (c) check the termination conditions and, if they are not satisfied, return to (b);
   (d) use the clustering result to calculate the validity index value.
3. Compare the validity index values; the k at which the validity index reaches its optimum is the optimal number of clusters kopt.
4. Output the clustering results: the cluster centers Zopt, the membership matrix Uopt, and the optimal clustering number kopt.


C. Zhang

3 The Method for Determining Optimal Clustering Number Based on the IBCP Index

The process of evaluating the performance of clustering results is called clustering validity analysis. In general, a good clustering should reflect the internal structure of the data set as far as possible, making intra-cluster samples as similar as possible and between-cluster samples as dissimilar as possible. In terms of a distance measure, the optimal clustering is the one that minimizes the intra-cluster distance and maximizes the between-cluster distance. A number of clustering validity indices have been proposed, but due to their inherent defects it is generally difficult to find the correct optimal clustering number. To handle this situation, a new clustering validity index, the IBCP index, is presented in this paper; it can evaluate the clustering results of the K-means algorithm and can be used to determine the optimal clustering number. Combining the K-means algorithm with the IBCP clustering validity index, an algorithm for analyzing the clustering effect and determining the optimal number of clusters is proposed, denoted KMIBCP.

Algorithm: KMIBCP
1. Select the search range of the cluster number as $[k_{\min}, k_{\max}]$.
2. For $k = k_{\min}$ to $k_{\max}$:
   (1) call the K-means algorithm;
   (2) calculate the IBCP index value of each single sample;
   (3) calculate the average IBCP index value.
3. Determine the optimal number of clusters.
4. Output the optimal number of clusters, the validity index values and the clustering results.
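The KMIBCP-style search loop can be sketched as follows. Note that the IBCP index itself is defined elsewhere in the paper, so the `validity` function below is a simple placeholder (mean intra-cluster distance divided by the minimum between-center distance, smaller is better), and the deterministic farthest-point initialization is an implementation choice, not the paper's method:

```python
import numpy as np

def farthest_point_init(data, k):
    # deterministic, spread-out initial centers (an implementation choice)
    idx = [0]
    for _ in range(k - 1):
        d = np.linalg.norm(data[:, None, :] - data[idx][None, :, :], axis=2)
        idx.append(int(d.min(axis=1).argmax()))
    return data[idx].astype(float)

def kmeans(data, k, iters=50):
    centers = farthest_point_init(data, k)
    for _ in range(iters):
        labels = np.linalg.norm(data[:, None, :] - centers[None, :, :],
                                axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return centers, labels

def validity(data, centers, labels):
    # placeholder validity index, NOT the IBCP index of the paper
    intra = np.mean(np.linalg.norm(data - centers[labels], axis=1))
    k = len(centers)
    between = min(np.linalg.norm(centers[i] - centers[j])
                  for i in range(k) for j in range(i + 1, k))
    return intra / between

# three well-separated clusters; search range [2, Int(sqrt(n))] as in Sect. 2.2
rng = np.random.default_rng(2)
data = np.vstack([rng.normal(c, 0.4, size=(40, 2))
                  for c in [(0.0, 0.0), (8.0, 0.0), (0.0, 8.0)]])
k_max = int(np.sqrt(len(data)))
scores = {k: validity(data, *kmeans(data, k)) for k in range(2, k_max + 1)}
best_k = min(scores, key=scores.get)
```

On this synthetic data the score is minimized at the true cluster count, illustrating step 3 of the algorithm.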

4 Experimental Results and Analysis

To test the performance of the validity index IBCP and of the KMIBCP algorithm in determining the optimal clustering number, four data sets were tested and compared against common indices such as the CH index, the DB index and the IGP index. The experiment reported here uses an artificial data set, SM1. The SM1 data set consists of two-dimensional data drawn from two Gaussian distributions centered at (0, 0) and (30, 30). There are 100 samples in the (0, 0) category and 300 samples in the (30, 30) category. Their covariance matrices are $I_2$ and $50 I_2$ respectively, where $I_2$ is the 2-order identity matrix. The data set is thus characterized by two clusters of different density, with slight overlap and a loose clustering structure. The optimal clustering number of the SM1 data set was estimated by the different validity indices, as shown in Table 1, where each percentage value is the fraction of runs in which the corresponding optimal clustering number was obtained when the algorithm was run w times; in the experiment, w = 50. The underlined value means that the final clustering number obtained is


the correct optimal clustering number. The CH index, DB index and IGP index have good performance and stable evaluation results, so the correct optimal clustering number can be obtained in most runs. The structure distribution and clustering results of the SM1 data set are shown in Fig. 1.

Table 1. Optimal clustering numbers evaluated by several validity indexes for data set SM1

Index | Optimal number of clusters (%)     | Final cluster number
      |   2     3     4     5     Other    |
CH    |  90     0     0     0     0        | 2
DB    |  90     0     0     0     0        | 2
IGP   | 100     0     0     0     0        | 2
IBCP  | 100     0     0     0     0        | 2

Fig. 1. Clustering result of dataset SM1 for k = 2


5 Conclusions

The K-means clustering algorithm requires the user to provide the clustering number k based on empirical knowledge. In most cases, however, the clustering number k cannot be determined in advance. This paper studied the method of determining the optimal clustering number for the K-means algorithm. The basic algorithm for determining the K-means optimal clustering number was first given. Subsequently, in order to handle the clustering validity problem of the K-means algorithm, the paper defined the sample clustering distance and the sample clustering deviation distance from the perspective of the geometric structure of the samples, designed a new clustering validity index, the IBCP index, and on this basis proposed the KMIBCP method for determining the optimal clustering number of the K-means algorithm.


Optimization of Ultrasonic Extraction for Citrus Polysaccharides by Response Surface Methodology

Yongguang Bi, Yanshan Lu, and Zhipeng Su

College of Pharmacy, Guangdong Pharmaceutical University, Guangzhou 510006, Guangdong, China
[email protected]

Abstract. The ultrasonic-assisted extraction of polysaccharides from citrus peel was modeled by response surface methodology using a three-factor, three-level Box-Behnken design. Based on the single-factor tests, the Box-Behnken design was employed to optimize the extraction variables (extraction temperature, ultrasonic power, and ratio of distilled water to raw material) for the extraction yield of polysaccharides. The statistical analyses showed that the quadratic terms (X1², X2² and X3²) and the interaction of X2 with X3 had a significant effect on the yield (p < 0.05). The optimized conditions were as follows: extraction temperature of 50 °C, ultrasonic power of 370 W, and ratio of distilled water to raw material of 31:1 mL/g. Under these conditions, the extraction yield of polysaccharides was 30.77 ± 0.48% (n = 3), which was close to the model-predicted value of 30.72%. This study can provide theoretical guidance for the industrial extraction of polysaccharides and improve the utilization of citrus peels.

Keywords: Citrus polysaccharides · Response surface methodology · Extraction process

1 Introduction

Citrus peel, named chenpi, has been widely used in traditional Chinese medicine and is officially recorded in the Chinese Pharmacopoeia [1]. It is well-known that citrus peels possess a variety of pharmacological properties, such as anti-inflammatory and anticarcinogenic activity, as well as the prevention, reduction and treatment of chronic diseases such as gastrointestinal disorders and cardiovascular disease [2]. However, the citrus industry consumes large quantities of citrus fruits as fresh produce and juice, and produces a large amount of by-products such as peels and seed residues, which can account for up to 50% of the total fruit weight [3]. Thus, studies on the bioactive compounds of citrus peel benefit the citrus processing industry, citrus growers and society. The polysaccharide is one of the most important bioactive compounds among the components of citrus peels. Polysaccharides exhibit a wide range of physiological activities, including antioxidant, immuno-modulating, anti-ulcer and antitumor

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1705–1712, 2019. https://doi.org/10.1007/978-981-13-3648-5_221


effects [4]. Therefore, it is meaningful to optimize the extraction conditions of polysaccharides from citrus peels. Among extraction techniques, ultrasound-assisted extraction (UAE) offers high reproducibility in a shorter time, requires lower energy input, simplifies manipulation, and significantly reduces solvent consumption and operating temperature [5]. Consequently, UAE has been used in the extraction of many natural products, such as polysaccharides, saponins and alkaloids [6, 7]. Response surface methodology (RSM), a collection of mathematical and statistical techniques, derives a model equation that can then be applied for response prediction and the determination of optimal conditions [8]. The advantages of RSM are its efficiency and the easier arrangement and interpretation of experimental trials [9]. It has therefore been widely applied to optimize the extraction conditions of flavonoids, phenolics and polysaccharides from different materials [10, 11]. In this study, the main objective was to optimize the ultrasound-assisted extraction variables (extraction temperature, ultrasonic power and ratio of distilled water to raw material) of polysaccharides from citrus peels in terms of extraction yield using RSM. The alduronic acid content of citrus peels was also studied.

2 Materials and Methods

2.1 Materials and Chemicals

Citrus fruits were purchased from Pujiang, Sichuan (China). The glucose standard was from Shanghai Yuanye Biological Technology Co., Ltd. (Shanghai, China). The other reagents were of analytical grade.

2.2 Preparation of Standard Curves of Glucose

The glucose calibration curve followed an earlier report with some modifications [12]. A stock solution (0.9 mg/mL) was prepared by dissolving 0.09 g of glucose in 100 mL of distilled water. The stock solution was diluted with distilled water to give six standard solutions with glucose concentrations ranging from 9 to 90 mg/mL. 2.0 mL of each standard solution was transferred to a 10 mL test tube; then 1.5 mL of phenol solution and 5.0 mL of concentrated sulfuric acid were added to each tube. After mixing, the mixture was left to stand for 5 min and then bathed in boiling water for 15 min. The absorbance of the mixture was measured at 485 nm with a UV-Vis spectrophotometer (752, Shanghai, China). The 5.0% phenol solution was prepared by diluting 5.0 mL of phenol with 95 mL of distilled water. The linear regression equation and coefficient of determination (R²) were:

$$A = 0.0125\,C + 0.0137, \qquad R^2 = 0.9975 \quad (1)$$

where A is the absorbance and C is the concentration of polysaccharides (mg/mL).

2.3 Extraction Procedure

The dried citrus peels were powdered with a pulverizer (DFY-600, Wenling, China) and passed through a 40-mesh sieve. The powders were then degreased with petroleum benzin. Two grams of the degreased dry citrus peel powder was used for each run in a test tube. The ultrasonic-assisted extraction of polysaccharides from the degreased samples was performed in an ultrasonic homogenizer (scientz-II D, Ningbo, China). The effects of the extraction temperature, the extraction time, the ultrasonic power and the ratio of distilled water to raw material were studied.

2.4 Determination of Total Polysaccharides Yield

After ultrasonic extraction, the extracted slurry was filtered to collect the filtrate. The concentration of total polysaccharides in the filtrate was determined using the method described in Sect. 2.2. The extraction yield was calculated as follows:

$$\text{Extraction yield}\,(\%) = \frac{C V \times 10^{-3}}{W} \times 100\% \quad (2)$$

where C (mg/mL) is the concentration of total polysaccharides in the filtrate, V (mL) is the filtrate volume and W (g) is the weight of degreased dry citrus peel powder.
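Eqs. (1) and (2) combine into a short calculation, sketched below; the mass and volume values are illustrative only (chosen near the optimized 31:1 mL/g ratio with 2 g of powder), and any dilution of the filtrate before the absorbance measurement is ignored:

```python
def conc_from_abs(a):
    """Invert the calibration curve of Eq. (1): A = 0.0125*C + 0.0137."""
    return (a - 0.0137) / 0.0125

def extraction_yield(c_mg_ml, v_ml, w_g):
    """Eq. (2): yield (%) = C * V * 1e-3 / W * 100."""
    return c_mg_ml * v_ml * 1e-3 / w_g * 100.0

# illustrative numbers: filtrate concentration recovered from an absorbance
# reading, 62 mL of filtrate, 2 g of degreased powder
c = conc_from_abs(0.0125 * 9.925 + 0.0137)   # round-trips to 9.925 mg/mL
y = extraction_yield(c, 62.0, 2.0)           # lands near the reported ~30.8 %
```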

2.5 Experimental Design

The Box-Behnken design (BBD) with three factors at three levels was employed to obtain the best combination of extraction variables for total polysaccharides, based on the single-factor test results. Extraction temperature (X1), ultrasonic power (X2) and ratio of distilled water to raw material (X3) were the independent variables. The Box-Behnken experimental design is given in Table 1. The experimental data were fitted to a quadratic polynomial model described by the following equation:

$$Y = a_0 + b_1 X_1 + b_2 X_2 + b_3 X_3 + c_{12} X_1 X_2 + c_{13} X_1 X_3 + c_{23} X_2 X_3 + d_1 X_1^2 + d_2 X_2^2 + d_3 X_3^2 \quad (3)$$

where Y represents the dependent (response) variable and X1, X2, X3 the independent variables; $a_0$ is the intercept, $b_1, b_2, b_3$ are the linear coefficients, $c_{12}, c_{13}, c_{23}$ the interaction coefficients, and $d_1, d_2, d_3$ the quadratic coefficients.

3 Results and Discussion

3.1 Effect of Extraction Temperature on Extraction Yield

The extraction temperature was varied from 40 to 80 °C while the ultrasonic power and the ratio of distilled water to raw material were fixed at 380 W and 40:1 mL/g, respectively. The effect of extraction temperature on the extraction yield of total polysaccharides is shown in Fig. 1a. When the extraction temperature increased, the

Table 1. Experimental results for the Box-Behnken design

Run | X1 (Extraction temperature, °C) | X2 (Ultrasonic power, W) | X3 (Ratio of distilled water to raw material, mL/g) | Y (Extraction yield, %)
1   | 40 | 285 | 30 | 28.2
2   | 60 | 285 | 30 | 28.6
3   | 50 | 285 | 40 | 29.6
4   | 50 | 380 | 30 | 30.7
5   | 50 | 380 | 30 | 30.9
6   | 60 | 380 | 20 | 27.8
7   | 40 | 475 | 30 | 28.8
8   | 50 | 475 | 40 | 27.8
9   | 50 | 380 | 30 | 30.6
10  | 60 | 475 | 30 | 28.2
11  | 60 | 380 | 40 | 28.1
12  | 50 | 475 | 20 | 28.3
13  | 50 | 380 | 30 | 30.5
14  | 50 | 380 | 30 | 30.8
15  | 40 | 380 | 40 | 27.9
16  | 40 | 380 | 20 | 27.4
17  | 50 | 285 | 20 | 28.1

Fig. 1. Effects of extraction temperature (a), ultrasonic power (b) and ratio of distilled water to raw material (c) on extraction yield of polysaccharides


extraction yield increased at first, reached a maximum at 50 °C, and then decreased as the extraction proceeded. A possible explanation is that the polysaccharides in citrus peel were oxidized at high temperature, even though a rising temperature accelerates the diffusion and dissolution of the molecules. Therefore, 50 °C was favorable for extracting the polysaccharides.

3.2 Effect of Ultrasonic Power on Extraction Yield

The ultrasonic power was varied from 95 to 475 W while the extraction temperature and the ratio of distilled water to raw material were 50 °C and 40:1 mL/g, respectively. The effect of ultrasonic power on the extraction yield of total polysaccharides is shown in Fig. 1b. With increasing ultrasonic power, the extraction yield increased slowly, reached a maximum at 380 W, and then decreased rapidly once the ultrasonic power exceeded 380 W. This phenomenon may be attributed to destruction of the polysaccharide structure under high ultrasonic power. Therefore, 380 W was suitable for extracting the polysaccharides.

3.3 Effect of Ratio of Distilled Water to Raw Material on Extraction Yield

The extraction was carried out with the ratio of distilled water to raw material in the range of 10:1–50:1 mL/g while the extraction temperature and ultrasonic power were set at 50 °C and 380 W, respectively. The effect of the ratio on the extraction yield is shown in Fig. 1c. As the ratio increased, the extraction yield improved at first, reached its maximum at 30:1 mL/g, and then slightly decreased. With an increasing ratio, the dissolution of polysaccharides increased as well; once the ratio exceeded 30:1 mL/g, however, the decreasing ultrasonic energy density in the extract became dominant, leading to a lower extraction yield. Therefore, a ratio of distilled water to raw material of 30:1 mL/g was sufficient for extracting the polysaccharides.

3.4 Optimization of the Extraction Parameters for Polysaccharides by RSM

As shown in Table 1, the design comprised 17 runs, including 5 replicates at the center point. The data were analyzed by multiple regression with the Design-Expert software version 8.0, and the quadratic polynomial model for the response variable is:

$$Y = 30.70 + 0.050 X_1 - 0.18 X_2 + 0.23 X_3 - 0.25 X_1 X_2 - 0.050 X_1 X_3 - 0.50 X_2 X_3 - 1.45 X_1^2 - 0.80 X_2^2 - 1.45 X_3^2 \quad (4)$$
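As a numerical cross-check, the regression behind Eq. (4) and its stationary point can be reproduced from Table 1 with ordinary least squares in coded units. The coding of the levels (−1/0/+1 for 40/50/60 °C, 285/380/475 W and 20/30/40 mL/g) and the decoding back to physical units are read off the design and are assumptions of this sketch:

```python
import numpy as np

# Table 1 runs in coded units (same order as the table)
X = np.array([
    [-1, -1,  0], [ 1, -1,  0], [ 0, -1,  1], [ 0,  0,  0], [ 0,  0,  0],
    [ 1,  0, -1], [-1,  1,  0], [ 0,  1,  1], [ 0,  0,  0], [ 1,  1,  0],
    [ 1,  0,  1], [ 0,  1, -1], [ 0,  0,  0], [ 0,  0,  0], [-1,  0,  1],
    [-1,  0, -1], [ 0, -1, -1]], dtype=float)
y = np.array([28.2, 28.6, 29.6, 30.7, 30.9, 27.8, 28.8, 27.8, 30.6,
              28.2, 28.1, 28.3, 30.5, 30.8, 27.9, 27.4, 28.1])

x1, x2, x3 = X.T
M = np.column_stack([np.ones_like(y), x1, x2, x3, x1*x2, x1*x3, x2*x3,
                     x1**2, x2**2, x3**2])
coef, *_ = np.linalg.lstsq(M, y, rcond=None)  # recovers Eq. (4) up to rounding

# stationary point of Y = a0 + b.x + x'Bx  ->  x* = -(2B)^(-1) b
b = coef[1:4]
B = np.array([[coef[7],     coef[4] / 2, coef[5] / 2],
              [coef[4] / 2, coef[8],     coef[6] / 2],
              [coef[5] / 2, coef[6] / 2, coef[9]]])
x_opt = np.linalg.solve(-2.0 * B, b)
y_opt = coef[0] + b @ x_opt + x_opt @ B @ x_opt    # predicted maximum yield

temp  = 50 + 10 * x_opt[0]    # about 50 degC
power = 380 + 95 * x_opt[1]   # about 366 W (rounded to 370 W in the paper)
ratio = 30 + 10 * x_opt[2]    # about 31:1 mL/g
```

The predicted maximum lands near the reported 30.72%, and the decoded optimum matches the reported conditions after rounding.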

As shown in Table 2, the results of the analysis of variance (ANOVA) are given. The model F-value (32.34) with a low probability P value (<

$$n_i = \begin{cases} n_{i-1}, & V_{\max} \le V_{upper},\; V_{\min} \ge V_{lower} \\ n_{i-1} - \operatorname{ceil}\!\left(\dfrac{V_{\max} - V_{upper}}{\Delta V_T}\right), & V_{\max} > V_{upper},\; V_{\min} \ge V_{lower} \\ n_{i-1} + \operatorname{ceil}\!\left(\dfrac{V_{lower} - V_{\min}}{\Delta V_T}\right), & V_{\max} \le V_{upper},\; V_{\min} < V_{lower} \\ n_{i-1}, & V_{\max} > V_{upper},\; V_{\min} < V_{lower} \end{cases} \quad (5)$$

where $n_i$ is the position of the transformer tap, $n_{i-1}$ is the tap position at the previous moment, $\Delta V_T$ is the voltage span between tap positions of the transformer, $V_{\max}$ and $V_{\min}$ are the maximum and minimum bus voltages respectively, and ceil denotes upward rounding. The OLTC voltage regulation model is shown in Fig. 2 [12].

Fig. 2. The OLTC voltage regulation model

The automatic voltage control (AVC) relay [13] gathers the voltages $V_1$ and $V_2$ at both ends of the feeder and drives the tap operation by comparison with the reference voltages: the reference at the head of the feeder is set to $V_{upper}$ and the reference at the end is set to $V_{lower}$. A voltage dead-band is set to reduce unnecessary OLTC actions; the dead-band covers the cases $V_1 \le V_{upper}, V_2 \ge V_{lower}$ and $V_1 > V_{upper}, V_2 < V_{lower}$. When $V_1 < V_{upper}$ and $V_2 < V_{lower}$, the AVC relay drives the tap up, lifting the feeder voltage; when $V_1 \le V_{upper}, V_2 \ge V_{lower}$ or $V_1 > V_{upper}, V_2 < V_{lower}$, the voltage is in the dead-band and the OLTC does not act; when $V_1 > V_{upper}$ and $V_2 > V_{lower}$, the AVC relay drives the tap down, reducing the feeder voltage.
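The tap-update rule of Eq. (5) can be sketched as follows (a minimal sketch; the convention that lowering the tap position lowers the feeder voltage, and vice versa, is assumed):

```python
import math

def next_tap(n_prev, v_max, v_min, v_upper, v_lower, dv_t):
    """Tap position update of Eq. (5); dv_t is the voltage span per tap step."""
    if v_max > v_upper and v_min >= v_lower:
        # overvoltage only: move the tap down to lower the feeder voltage
        return n_prev - math.ceil((v_max - v_upper) / dv_t)
    if v_max <= v_upper and v_min < v_lower:
        # undervoltage only: move the tap up to raise the feeder voltage
        return n_prev + math.ceil((v_lower - v_min) / dv_t)
    # both voltages in band, or both limits violated: dead-band, no action
    return n_prev
```

For example, with limits 0.95–1.05 p.u. and a 0.0125 p.u. step, an overvoltage of 1.06 p.u. moves the tap one step down, while a simultaneous over- and undervoltage leaves the tap unchanged (the dead-band case in which, as Sect. 5.2 shows, OLTC regulation alone fails).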

4 A Cooperative Control Scheme for Voltage Rise in Distribution Networks

The cooperative control scheme combines local control and centralized control. The local control requires the inverter to absorb reactive power to counteract the effect on the voltage of the active power generated by the DG; the distribution network then becomes equivalent to a traditional single-source network. Finally, the centralized control requires the OLTC to regulate the voltage to the specified limits. In this process, determining the amount of reactive power absorbed by the inverter is the core of the control scheme; it is derived below. The single-line model of the distribution network is shown in Fig. 1. The current injected into the generator bus by the DG is given by

$$I_G = \left( \frac{P_G + j Q_G}{V_G} \right)^{\!*} = \frac{P_G - j Q_G}{V_G \cos\delta - j V_G \sin\delta} \quad (6)$$

where $I_G$ is the complex current injected by the generator. Assuming that no load is connected to the bus, we have [2]

$$V_0 V_G \cos\delta = V_G^2 - R P_G - X Q_G \quad (7)$$

$$V_0 V_G \sin\delta = X P_G - R Q_G \quad (8)$$

From (7) and (8) we obtain

$$V_G^4 - (2 R P_G + 2 X Q_G + V_0^2) V_G^2 + f = 0 \quad (9)$$

where $f = R^2 P_G^2 + X^2 Q_G^2 + X^2 P_G^2 + R^2 Q_G^2$. If there is no voltage loss between the secondary of the transformer and the generator bus, we obtain

$$2 R P_G + 2 X Q_G = \frac{f}{V_G^2} \quad (10)$$

For $V_G = 1$ p.u., an approximate quadratic equation in $Q_G$ follows from (10):


W. Liu et al.

$$Q_G^2 - \frac{2X}{R^2 + X^2}\, Q_G + P_G^2 - \frac{2 R P_G}{R^2 + X^2} \approx 0 \quad (11)$$

The reactive power required to counteract the effect of the distributed generation on the voltage is

$$Q_{GN} = \frac{X}{R^2 + X^2} - \sqrt{ \left( \frac{X}{R^2 + X^2} \right)^{2} - P_G^2 + \frac{2 R P_G}{R^2 + X^2} } \quad (12)$$

When the inverter absorbs the reactive power $Q_{GN}$, the influence of the DG is counteracted and the network voltage is restored to its drooping distribution. The AVC relay then collects the voltages at both ends of the feeder, and the OLTC adjusts the voltage according to the rule of (5), completing the voltage regulation.
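Eq. (12) can be evaluated directly. The sketch below uses one 20 km LGJ-120 feeder section from the case study in Sect. 5, converted to per-unit on the 10 MVA / 12.66 kV base; the DG output of 0.1 p.u. is an illustrative value, and the result is checked against the quadratic (11):

```python
import math

def q_absorb(p_g, r, x):
    """Q_GN of Eq. (12) in per-unit: a root of the approximate
    quadratic (11) for V_G = 1 p.u."""
    z2 = r * r + x * x
    disc = (x / z2) ** 2 - p_g ** 2 + 2.0 * r * p_g / z2
    return x / z2 - math.sqrt(disc)

# 20 km of LGJ-120 feeder (r0 = 0.27 ohm/km, x0 = 0.379 ohm/km) in per-unit
z_base = 12.66 ** 2 / 10.0          # ohm, on the 10 MVA / 12.66 kV base
r = 0.27 * 20.0 / z_base
x = 0.379 * 20.0 / z_base
q_gn = q_absorb(0.1, r, x)          # negative sign convention = absorption
```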

5 Simulation Results

5.1 Control Settings

The performance of the cooperative control scheme is illustrated on the IEEE33 system, whose diagram is shown in Fig. 3 [14]. The head of the feeder is equipped with the OLTC, and the other buses can selectively connect IIDGs. The secondary of the OLTC transformer and the end of each feeder are equipped with AVC relays to capture the voltage at both ends of the feeder. The flowchart of the cooperative control scheme for voltage rise in distribution networks is shown in Fig. 4. The voltages at both ends of the feeder are collected by the AVC relay and compared with the reference voltages; when the voltage exceeds the limits, the inverter absorbs reactive power to offset the influence of the DGs. It is then checked whether the voltage is within the specified limits; if not, OLTC voltage regulation is started, otherwise the process finishes.


Fig. 3. The IEEE33 system diagram

5.2 Simulation Analysis

The system configuration is as follows. The active and reactive power loads of the distribution network are 5200 kW and 3220 kVAr, respectively. The power reference is 10 MVA and the voltage reference is 12.66 kV. The upper and lower limits



Fig. 4. The flowchart of the cooperative control scheme for voltage rise in distribution networks

of voltage are 0.95–1.05 p.u. Bus 0 serves as the balance bus, to which the OLTC is connected, and the others are PQ buses. The distance between adjacent buses is 20 km, and the feeder type is LGJ-120 with $r_0 = 0.27\ \Omega/\mathrm{km}$ and $x_0 = 0.379\ \Omega/\mathrm{km}$. The normal daily load curve of the system is shown in Fig. 5; the system has the lowest load at 6 o'clock and the highest load at 19 o'clock. The bus voltages at 19 o'clock are shown in Fig. 6. The analysis shows that bus 32 has the lowest voltage of all buses at 19 o'clock.

Fig. 5. The normal daily load curve of the system


Fig. 6. The bus voltages at 19 o’clock

In the case of no DG connection, the voltage distribution droops along the feeder. It can be deduced that bus 32 has the lowest voltage and bus 1 the highest voltage among all buses. Therefore, when using the OLTC to adjust the voltage, it is sufficient to verify that the voltages of buses 1 and 32 are within the specified limits. The 24-h voltages of the system in the original state and with OLTC participation are shown in Fig. 7. The voltage fluctuates with the load; the main problem is that the voltage of bus 32 falls below the lower limit when the load is heavy. With the OLTC participating in voltage regulation, the system voltage is greatly improved and is maintained within the specified limits, which proves the effectiveness of the OLTC regulation scheme.

Fig. 7. The 24-h voltages of the system in the original state and with OLTC participation

A DG is connected at bus 1, and its normal daily output curve is shown in Fig. 8. The voltages of the DG connection bus and the feeder terminal bus are shown in Fig. 9. Under the influence of the DG, the voltage of the DG connection bus is raised. The problem is that the voltage of bus 32 falls below the lower limit from 19 to 21 o'clock, and at the same time OLTC voltage regulation fails. Analysis of the voltages shows that if the OLTC were operated to raise the voltage, the voltage of bus 32 could be brought within the specified


Fig. 8. The normal daily output curve of DG

Fig. 9. The voltages of the DG connection bus and the feeder terminal bus under centralized control

limits. However, doing so would make the voltage of bus 1 exceed the upper limit. OLTC regulation follows the rule of (5), and in the above case the voltage is in the dead-band, so the tap does not operate. Under the cooperative control scheme, the inverter absorbs the reactive power determined by (12) and eliminates the influence of the DG. The voltages of the DG connection bus and the feeder terminal bus are shown in Fig. 10. Because the reactive power absorbed by the inverter counteracts the influence caused by the DG, the OLTC works normally from 19 to 21 o'clock and the voltage is maintained within the specified limits, which verifies the effectiveness of the cooperative control scheme.

6 Conclusion

The connection of large numbers of DGs causes the voltage rise problem in distribution networks, changes the original voltage distribution, and makes traditional OLTC voltage regulation no longer applicable. To solve this problem, a cooperative control scheme is proposed. When no DG is connected, the OLTC is used to adjust


Fig. 10. The voltages of the DG connection bus and the feeder terminal bus under cooperative control

the voltage; when a DG is connected, the inverter absorbs reactive power to offset the voltage rise, and the voltage is then adjusted with the OLTC. Through theoretical analysis and verification on the IEEE33 system, it can be seen that the cooperative control scheme keeps the system voltage within the specified limits and effectively ensures the normal operation of the distribution network, with no extra investment required.

Acknowledgements. This work was supported by Neijiang Power Supply Company, State Grid Sichuan Electric Power Company.

References
1. Liu, L.: Cooperative control of feeder voltage for distribution network with photovoltaic connected. J. Electr. Power Sci. Technol. 32(3), 43–49 (2017) (in Chinese)
2. Carvalho, P.M.S., Correia, P.F., Ferreira, L.A.F.M.: Distributed reactive power generation control for voltage rise mitigation in distribution networks. IEEE Trans. Power Syst. 23(2), 766–771 (2009)
3. Masters, C.L.: Voltage rise: the big issue when connecting embedded generation to long 11 kV overhead lines. Power Eng. J. 16(1), 5–12 (2002)
4. Dugan, R.C., McGranaghan, M.F., Santoso, S., Beaty, H.W.: Electric power systems quality. In: Distributed Generation and Power Quality, p. 9. McGraw-Hill, New York (2002)
5. Leisse, I., Samuelsson, O., Svensson, J.: Coordinated voltage control in distribution systems with DG-control algorithm and case study. In: CIRED Workshop, pp. 29–30. Lisbon, Portugal (2012)
6. You, Y., Liu, D., Zhong, Q., et al.: Multi-objective optimal placement of energy storage systems in an active distribution network. Autom. Electr. Power Syst. 38(18), 46–52 (2014) (in Chinese)
7. Majumder, R.: Aspect of voltage stability and reactive power support in active distribution. IET Gener. Transm. Distrib. 8(3), 42–45 (2014)
8. Calderaro, V., Conio, G., Galdi, V., et al.: Optimal decentralized voltage control for distribution systems with inverter-based distributed generations. IEEE Trans. Power Syst. 29(1), 230–241 (2014)
9. Sansawatt, T., O'Donnell, J., Ochoa, L.F., et al.: Decentralized voltage control for active distribution networks. In: 44th International Universities Power Engineering Conference (UPEC), pp. 1–4. Glasgow, UK (2009)
10. Kong, X., Zhang, Z., Yin, X., Wang, F., He, M.: Study on fault current characteristics and fault analysis method of power grid with inverter interfaced distributed generation. Proc. CSEE 33(34), 65–74 (2013) (in Chinese)
11. Zhang, J., Wang, L.: Coordinated voltage control strategy for active distribution networks. J. Northeast Electr. Power Univ. 37(4), 14–19 (2017) (in Chinese)
12. Salih, S.N., Chen, P.: On coordinated control of OLTC and reactive power compensation for voltage regulation in distribution systems with wind power. IEEE Trans. Power Syst. 31(5), 4026–4035 (2016)
13. Meng, T.: Research on voltage regulation strategy for distribution network with DG. Shandong University (2016) (in Chinese)
14. Liang, J., Lin, S., Liu, M.: A method for distributed optimal reactive power control of active distribution network. Power Syst. Technol. 42(1), 230–237 (2018) (in Chinese)

A New OFDM System Based on Companding Transform Under Multipath Channel

Guangcheng Xie1, Kaibo Luo2, Yang Wang3, Dexiang Yang4, Jun Ye1, and Quan Zhou1

1 Chongqing Electric Power Research Institute, Chongqing 400015, China
[email protected]
2 State Grid Chong Qing Electric Power Company, Chongqing 400015, China
3 State Grid Chong Qing QI NAN POWER SUPPLY COMPANY, Chong Qing 401420, China
4 State Grid Chong Qing NAN AN POWER SUPPLY COMPANY, Chong Qing 401336, China

Abstract. Companding technology has been widely used to reduce the PAPR of orthogonal frequency division multiplexing (OFDM) systems because of its low implementation complexity and simple realization. However, in the traditional OFDM system it also causes serious constellation spreading and a high bit-error-rate (BER) under multipath channel conditions. To reduce the constellation spreading problem, a novel system is proposed that adds a pair of FFT/IFFT transforms before the received signal is directly expanded at the receiving terminal. Between the FFT and IFFT transforms, frequency-domain equalization is used to eliminate the multipath interference. Numerical results show that the BER of the proposed system is significantly lower than that of conventional OFDM systems.

Keywords: Companding · PAPR · Equalization · FFT/IFFT · OFDM

1 Introduction

Orthogonal frequency division multiplexing (OFDM) has been attracting substantial attention due to its excellent performance under severe channel conditions [1]. It is believed to be a suitable technique for broadband wireless communications and has been used in many wireless standards, such as DAB (digital audio broadcasting), DVB (digital video broadcasting), digital HDTV (high-definition television), the ETSI HIPERLAN/2 standard, the IEEE 802.11a standard for wireless local area networks (WLAN), and the IEEE 802.16a standard for wireless metropolitan area networks (WMAN) [2]. However, a main drawback of OFDM is its high peak-to-average power ratio (PAPR) [1]. This means that the RF front-end power amplifier must be highly linear; otherwise it gives rise to nonlinear distortion and signal spectrum expansion, which degrade the receiver BER performance [3]. It also increases the complexity of the transmitting and receiving equipment, which is undesirable for today's lightweight, compact electronic products. Up to now, to reduce the

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1770–1778, 2019. https://doi.org/10.1007/978-981-13-3648-5_229


PAPR, several techniques have been investigated, such as clipping [4], partial transmit sequence (PTS) [5], selective mapping (SLM) [6], coding [7, 8], and companding [9]. Clipping is the simplest technique but it causes additional clipping noise, which degrades the system performance. Coding seems attractive, but to date, no good coding solutions are known which can maintain a reasonable coding rate for arbitrary numbers of sub-carrier. Some new approach which based on those technique have been proposed to reduce or eliminate the realize complexity of OFDM system. Non-linear companding transform is an effective technique in reducing the PAPR of OFDM signals. In addition, the schemes based on companding technique have low implementation complexity and no constraint on modulation format and sub-carrier size [2]. However, in the multipath channel environment, companding transform will lead to very obvious constellation diagram spread. In order to reduce PAPR based on companding technique of OFDM system at the multipath propagation, we introduce a novel OFDM system, which is add a new FFT and IFFT transform at the receiving terminal, between the first FFT/IFFT transform, we use an equalization in frequency domain to restrain constellation diagram spread. An effective trade-off between the PAPR and BER performances can be achieved. Theoretical derivation has also been provided to conform our present system have a lower BER than conventional system. In addition, it also allows more flexibility choice the companding method to satisfy various design system requirement. The rest of this paper is organized as follows: Sect. 2 describes the conventional OFDM system model and briefly introduces the theory of companding scheme. The theoretical performance analyses that we proposed novel OFDM system which can restrain Constellation diagram spread with use equalization in frequency domain are present in Sect. 3. 
Section 4 presents computer simulation results to demonstrate the effectiveness of the proposed system under multipath propagation. Finally, the conclusion is given in Sect. 5.
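Concretely, the PAPR discussed above can be measured on a discrete baseband OFDM symbol as max|s(n)|² / mean|s(n)|². The sketch below (illustrative, not from the paper; N and the QPSK mapping are assumptions) generates one random QPSK OFDM symbol via a unitary IFFT and measures its PAPR.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

# Illustrative OFDM symbol: N random QPSK sub-carriers -> time domain via IFFT.
rng = np.random.default_rng(0)
N = 256
bits = rng.integers(0, 2, (N, 2))
S = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)  # QPSK map
s = np.fft.ifft(S) * np.sqrt(N)   # unitary IFFT, matching the 1/sqrt(N) convention
print(f"PAPR = {papr_db(s):.2f} dB")
```

A random QPSK symbol of this length typically shows a PAPR of several dB above a constant-envelope signal, which is the effect the companding techniques in this paper aim to reduce.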

2 Conventional OFDM System with Companding

Figure 1 shows the conventional OFDM system that uses a companding technique to reduce PAPR. As the flow chart shows, the input bit-stream passes through baseband mapping and an IFFT at the transmitting terminal. Let S(0), …, S(N−1) represent the modulated data sequence at the IFFT input, i.e., an OFDM signal with N subcarriers. Neglecting the rectangular pulse and sampling the baseband OFDM symbol in the time domain at interval T/N, where T is the duration of one OFDM symbol, i.e., t = kT/N (k = 0, 1, …, N−1), the discrete OFDM signal can be described as:

s(n) = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} S(k) e^{j2\pi kn/N}    (1)

where n = 0, 1, …, N−1. After the IFFT block, both the real and imaginary parts of s(n) are processed by the compressing transformation, and the compressed signal can be written as:

1772

G. Xie et al.

[Block diagram: original signal → coding → QPSK mapping → S/P → IFFT → companding → multipath channel (+ noise) → S/P → expanding → FFT → P/S → demodulation → decoding]

Fig. 1. Conventional OFDM system

s_c(n) = f(s(n))    (2)

where f(x) denotes the companding function, for which different companding algorithms can be used, such as the µ-law and exponential companding methods, and s_c(n) is the compressed version of s(n). The OFDM symbol is then amplified by a solid-state power amplifier (SSPA) before being sent over the multi-path channel. At the receiving end, the received signal can be denoted as:

r_c(n) = \sum_{l=0}^{L-1} s_c(n-l)\, h(l) + z(n)    (3)

where L is the total number of paths and h(l) denotes the channel impulse response of the l-th path. The noise z(n) is an independent, zero-mean Gaussian random process. In the conventional OFDM system, the received signal r_c(n) is passed through the expanding function at the receiving end to recover the decompressed values:

r(n) = f^{-1}(r_c(n))    (4)

where f^{-1}(x) is the inverse of f(x). After expanding, the signal r(n) is sent to the FFT block for demodulation and inverse mapping to recover the original bit stream.
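As a concrete illustration of the companding pair f and f^{-1} above, here is a sketch of a µ-law compressor and expander applied elementwise to the real and imaginary parts. The parameter values (µ = 255, peak v = 1) are assumptions for illustration, not taken from the paper.

```python
import numpy as np

MU = 255.0  # mu-law parameter; an assumed value, not specified in the paper

def compress(x, v=1.0, mu=MU):
    """mu-law compressor f(x) of Eq. (2), applied to real/imaginary parts."""
    return v * np.sign(x) * np.log1p(mu * np.abs(x) / v) / np.log1p(mu)

def expand(y, v=1.0, mu=MU):
    """Inverse transform f^{-1}(y) of Eq. (4) used at the receiver."""
    return v * np.sign(y) * ((1.0 + mu) ** (np.abs(y) / v) - 1.0) / mu

# Round trip: expanding exactly undoes compressing (in the noise-free case).
x = np.linspace(-1.0, 1.0, 11)
roundtrip = expand(compress(x))
```

Because the compressor boosts small amplitudes (e.g. compress(0.5) ≈ 0.88 here), the dynamic range of the time-domain signal shrinks, which is precisely why the PAPR drops.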

3 Proposed OFDM System

Figure 2 shows our proposed OFDM system, which uses the companding transform in a multipath channel environment. From Fig. 2 it is easy to see that the main difference between our proposed system and the traditional OFDM system lies at the receiving terminal. After multipath propagation, the received signal is first transformed from the time domain to the frequency domain by an FFT. An equalization is then applied in the frequency domain to eliminate the multipath effect. Next, an IFFT transforms the signal back into the time domain for expanding, and a second FFT transforms the expanded signal into the frequency domain for demodulation and decoding. Based on the analysis of the conventional OFDM system in Sect. 2, presume that s(n)

[Block diagram: original signal → coding → QPSK mapping → S/P → IFFT → companding → multipath channel (+ noise) → FFT → equalization → IFFT → expanding → FFT → P/S → demodulation → decoding]

Fig. 2. Our proposed new OFDM system

represents the transmit signal after the IFFT at the transmitting end. After the companding transform, s(n) becomes s_c(n), and the signal is sent by the RF terminal. The time-domain OFDM symbol passes through the multi-path channel; at the receiving end, assuming perfect synchronization, the received samples r_c(n) can be denoted as:

r_c(n) = s_c(n) * h(l, n) + z(n)
       = \sum_{l=0}^{L-1} h(l, n)\, s_c(n-l) + z(n)
       = \frac{1}{N} \sum_{k=0}^{N-1} \sum_{l=0}^{L-1} h(l, n)\, S_c(k)\, e^{j2\pi k(n-l)/N} + z(n)    (5)

where L is the number of channel taps, h(l, n) is the multi-path channel impulse response, and z(n) is the impulsive noise process, whose probability density is that of an independent, zero-mean Gaussian random variable. To eliminate the multi-path effect, r_c(n) is sent to the FFT block; its frequency-domain signal R_c(m) is derived as:

R_c(m) = \sum_{n=0}^{N-1} r_c(n)\, e^{-j2\pi mn/N}
       = \sum_{n=0}^{N-1} \left( \frac{1}{N} \sum_{k=0}^{N-1} \sum_{l=0}^{L-1} h(l, n)\, S_c(k)\, e^{j2\pi k(n-l)/N} + z(n) \right) e^{-j2\pi mn/N}
       = \frac{1}{N} \sum_{k=0}^{N-1} \sum_{l=0}^{L-1} H(l, m)\, S_c(k)\, e^{j2\pi k(n-l)/N} + Z(m)    (6)

where H(l, m), m = 0, 1, …, N−1, represents the channel frequency response of the l-th path, k = 0, …, N−1, and N is the total number of sub-carriers. Z(m) is the frequency-domain form of z(n). Using the pilot information, an appropriate channel estimation method yields the channel estimate. Denoting the estimated channel frequency response by \hat{H}(l, m), when the estimate \hat{H}(l, m) is extremely accurate we have

\frac{H(l, m)}{\hat{H}(l, m)} \triangleq \partial_l \to 1    (7)

Applying equalization to (6), the equalized received signal can be expressed as:

R^c_{eq}(m) = \frac{1}{N} \sum_{k=0}^{N-1} \sum_{l=0}^{L-1} \frac{H(l, m)\, S_c(k)\, e^{j2\pi k(n-l)/N}}{\hat{H}(l, m)} + \frac{Z(m)}{\hat{H}(l, m)}
            = \frac{1}{N} \sum_{k=0}^{N-1} \sum_{l=0}^{L-1} \partial_l\, S_c(k)\, e^{j2\pi k(n-l)/N} + \frac{Z(m)}{\hat{H}(l, m)}
            = s_c(n)\, \hat{\partial}_l(m) + \frac{Z(m)}{\hat{H}(l, m)}    (8)

where \hat{\partial}_l(m) = \sum_{l=0}^{L-1} \frac{H(l, m)}{\hat{H}(l, m)}\, e^{-j2\pi ml/N} \approx N\,\delta(m - kN), k \in \mathbb{Z}.

Then formula (8) can be rewritten as

R^c_{eq}(m) = s_c(n)\, \hat{\partial}_l(m) + \frac{Z(m)}{\hat{H}(l, m)} = N\, s_c(n)\, \delta(m - kN) + \frac{Z(m)}{\hat{H}(l, m)}    (9)

From (9) we can conclude that frequency-domain equalization can nearly eliminate the influence of multi-path propagation on a system that uses companding to reduce PAPR. The main remaining impairment of the received signal is the equalized noise, as (9) shows. After that, an IFFT converts the frequency-domain signal R^c_{eq}(m) into the time-domain signal r^c_{eq}(n) for expanding; after expanding, the signal can be denoted as:

r_{eq}(n) = f^{-1}\left( \mathrm{IFFT}\left( R^c_{eq} \right) \right)
          = f^{-1}\left( \frac{1}{N} \sum_{m=0}^{N-1} \left( N\, s_c(n)\, \delta(m - kN) + \frac{Z(m)}{\hat{H}(l, m)} \right) e^{j2\pi nm/N} \right)
          = f^{-1}\left( s_c(n) \sum_{m=0}^{N-1} \delta(m - kN)\, e^{j2\pi nm/N} + \mathrm{IFFT}\left( \frac{Z(m)}{\hat{H}(l, m)} \right) \right)
          = s(n) + W_m    (10)

where W_m = f^{-1}\left( \mathrm{IFFT}\left( \frac{Z(m)}{\hat{H}(l, m)} \right) \right).

Here s(n) is the transmit signal, W_m is the interference signal after frequency-domain equalization in the multi-path propagation system, and f^{-1}(x) is the de-companding function. A detailed analysis of the proposed system shows that, in a multi-path channel environment, the received signal is a superposition of delayed copies of the transmit signal. From (3) it is easy to see that after compressing, the real and imaginary parts of the transmit signal are expanded or narrowed; under multi-path propagation the constellation points are no longer concentrated and become mixed together, and it is hard to obtain an accurate theoretical expression for r_c(n). Different companding functions can be used; in this letter we adopt the µ-law companding algorithm and compare the constellation diagrams to analyze the advantages and disadvantages of the conventional OFDM system and our proposed OFDM system.
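The proposed receiver chain (first FFT, one-tap frequency-domain equalization, IFFT, expanding, second FFT) can be sketched as below. This is an illustrative model, not the authors' code: it assumes the cyclic prefix has been removed so the channel acts as a circular convolution, a perfectly known channel, no noise, and identity companding, purely to show that the chain then recovers the sub-carrier symbols exactly.

```python
import numpy as np

def proposed_receiver(r_c, H_hat, expand_fn):
    """Receiver chain of Fig. 2 (a sketch): first FFT, one-tap equalization
    as in Eq. (8), IFFT back to the time domain, expanding, second FFT."""
    R_eq = np.fft.fft(r_c) / H_hat          # equalize each sub-carrier
    r_eq = np.fft.ifft(R_eq)                # time domain again, still companded
    s_hat = expand_fn(r_eq.real) + 1j * expand_fn(r_eq.imag)
    return np.fft.fft(s_hat)                # sub-carrier symbols for demapping

# Noise-free demo with a known 2-tap channel and identity companding.
rng = np.random.default_rng(1)
N = 64
S = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
s = np.fft.ifft(S)
h = np.zeros(N, dtype=complex); h[0], h[1] = 1.0, 0.5
r = np.fft.ifft(np.fft.fft(s) * np.fft.fft(h))   # circular convolution
S_hat = proposed_receiver(r, np.fft.fft(h), lambda x: x)
```

With a real companding pair in place of the identity, the expanding step sits between the IFFT and the second FFT exactly as Eq. (10) prescribes.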

4 Performance Simulation

To show intuitively that the proposed OFDM system achieves better PAPR and BER performance than the traditional one, we simulate a baseband OFDM system as described in Sects. 2 and 3. In this simulation the number of data sub-carriers is N = 192; in addition, 8 pilot points and 56 guard sub-carriers are inserted, so the sub-carrier modulator and demodulator are implemented with a 256-point IFFT and FFT, and the cyclic prefix length is L_{GP} = 64. For simplicity, we assume the bit-stream data are randomly generated and modulated with QPSK. The µ-law companding method is used throughout. To highlight the effect of multipath propagation on the proposed system, a six-path channel model following the 3GPP standard is used. The power profile is p = [−3, 0, −2, −6, −8, −10] dB, and the corresponding path delays after sampling are τ = [0, 2, 4, 12, 18, 38]. The noise on each path is an independent, zero-mean complex Gaussian random process. The constellation diagrams of the conventional and the proposed OFDM system after multi-path propagation are portrayed in Figs. 3 and 4, respectively, at SNR = 30 dB. From Fig. 3 it is obvious that the constellation points are spread and mixed together, accompanied by considerable phase rotation, which complicates the threshold decision and yields a large bit error rate. In Fig. 4,


the constellation, which is much better than what Fig. 3 shows, is concentrated and well focused. Without doubt, when a companding technique is used to reduce PAPR, multipath propagation leads to a large BER and degrades system performance; the proposed OFDM system can reduce the constellation point spread.

Fig. 3. Constellation diagram of conventional OFDM system through multi-path propagation

Fig. 4. Constellation diagram of our proposed OFDM system through multi-path propagation

Figure 5 depicts the BER versus SNR (dB), over the range from 1 to 30 dB, for our proposed OFDM system and the traditional system. It can be seen that when the SNR is below 13 dB the difference between the two systems is not very obvious. However, as the SNR increases, the BER gap between the traditional OFDM system and our proposed system grows. Clearly, our proposed method achieves a lower BER over the multi-path channel. This is because at high SNR the signal power is much larger than the noise power; from (10), once the equalization is done, the interference signal W_m is very small compared with the signal s(n) awaiting demodulation, so multi-path propagation has little effect on the transmitted signal.

[BER (log scale) versus SNR (dB) curves for the proposed and conventional systems]

Fig. 5. Comparison of BER between conventional OFDM system and our proposed system

Comparing the simulation flows of the two systems: in both, the companding algorithm is used on the transmit side to reduce the PAPR of the OFDM signal. The traditional OFDM system does nothing to eliminate the multi-path effect, and the received signal is directly expanded; this usually causes serious deterioration of the constellation diagram and increases the BER. It is well known that equalization can decrease the multi-path influence. Inspired by this, we use frequency-domain equalization to eliminate the multi-path channel interference before expanding. Compared with the traditional OFDM system, our proposed system reduces the BER; its disadvantage is the increased computational complexity.
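For reference, the six-path tapped-delay-line channel described in this section can be sketched as below. The Rayleigh-fading tap model and the negative signs in the power profile are our reading of the typographically damaged original, so treat both as assumptions.

```python
import numpy as np

# Six-path power-delay profile as read from the simulation section.
p_db = np.array([-3.0, 0.0, -2.0, -6.0, -8.0, -10.0])   # tap powers in dB
tau = np.array([0, 2, 4, 12, 18, 38])                   # tap delays (samples)

def random_channel(rng):
    """One realization of the tapped-delay-line channel: independent
    zero-mean complex Gaussian taps scaled to the power profile."""
    gains = np.sqrt(10.0 ** (p_db / 10.0) / 2.0)
    taps = gains * (rng.standard_normal(6) + 1j * rng.standard_normal(6))
    h = np.zeros(tau.max() + 1, dtype=complex)
    h[tau] = taps                                       # place taps at delays
    return h
```

Each call produces a fresh impulse response h(l) of length 39 (the largest delay plus one), which can then be convolved with the companded OFDM signal as in Eq. (3).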


5 Conclusion

A simple OFDM system based on a companding technique to reduce PAPR is proposed in this paper; it provides a new way to eliminate the multipath channel effect by adding an extra FFT/IFFT pair and an equalizer at the receiving end. Simulation results show that our proposed system exhibits a lower bit-error-rate than conventional ones, especially at high SNR. The system also retains good PAPR-reduction ability, because it is applicable to any companding format. At the same time, the signal compressed by the proposed system shows a good constellation diagram under multipath propagation.

Acknowledgements. This work is supported by the Chongqing Electric Power Research Institute, a State Grid Corporation of China technology project (cstc2016jcyjA0214), and the National Key Technology Support Program (2015BAG10B00).

References

1. van Nee, R., Prasad, R.: OFDM for Wireless Multimedia Communications. Artech House, Boston, MA (2000)
2. Jiang, T., Yang, Y., Song, Y.-H.: Exponential companding technique for PAPR reduction in OFDM systems. IEEE Trans. 51, 244–248 (2005)
3. Gong, L., Yang, S.-H., Chen, Y.: Research on the reduction of PAPR for OFDM signals by companding and clipping method. In: IEEE Conference, pp. 1–4 (2010)
4. Kim, D., Stuber, G.L.: Clipping noise mitigation for OFDM by decision-aided reconstruction. IEEE Commun. Lett. 3, 4–6 (1999)
5. Wang, L.Y., Liu, J., Zhang, G.W.: Reduced computational complexity PTS scheme for PAPR reduction of OFDM signals. In: IEEE Conference, pp. 1–4 (2010)
6. Wang, C.-L., Yang, Y.-O.: Low-complexity selected mapping schemes for peak-to-average power ratio reduction in OFDM systems. IEEE Trans. 53, 4652–4660
7. Shepherd, S., Orriss, J., Barton, S.: Asymptotic limits in peak envelope power reduction by redundant coding in orthogonal frequency division multiplexing. IEEE Trans. Commun. 46, 5–10 (1998)
8. Grant, A.J., van Nee, R.: Efficient maximum likelihood decoding of peak power limiting codes for OFDM. In: 48th IEEE Vehicular Technology Conference, Ottawa, 18–21 May 1998, pp. 2081–2084 (1998)
9. Wang, Y., Ge, J., Wang, L., Li, J., Ai, B.: Nonlinear companding transform for reduction of peak-to-average power ratio in OFDM systems. IEEE Trans., 1–7 (2012)
10. Su, H.W., Kun, S., Zhang, L., Zhang, Q., Xu, Y.L., Zhang, R., Li, H.P., Sun, B.Z.: Meat Sci. 98, 110 (2014)

Construction of Gene Regulatory Networks Based on Ordered Conditional Mutual Information and Limited Parent Nodes

Ming Zheng and Mugui Zhuo
Guangxi Colleges and Universities Key Laboratory of Professional Software Technology, Wuzhou University, Wuzhou, China
[email protected]

Abstract. The reconstruction of gene regulatory networks is the basis of functional genomics research; it helps to understand the mechanism of gene regulation and to explore complex living systems and their essence. The traditional Bayesian method has high complexity and can only construct small-scale gene regulatory networks, while information-theoretic methods produce many false positives and cannot infer the direction of regulation. Based on ordered conditional mutual information and limited parent nodes, this paper proposes the OCMIF algorithm for fast construction of gene regulatory networks. First, a gene-regulation correlation network is constructed by ordered conditional mutual information; then the number of parent nodes of each gene is restricted according to topological prior knowledge of gene regulatory networks, and the Bayesian method is used to infer the network structure, effectively reducing the time complexity of the algorithm. Simulation experiments on artificial synthetic networks and a real biological molecular network show that the OCMIF method not only builds gene regulatory networks with high precision but also has low computational complexity.

Keywords: Gene regulatory network · Bayesian network model · Ordered conditional mutual information · Finite parent node · Causal orientation

1 Introduction

Gene regulatory networks (GRNs) [1] are relationship networks formed by gene interactions in a genome. They can reveal life phenomena and their essence from the angle of gene action and are an important subject of functional genomics; constructing a gene regulatory network helps to understand the mechanism of gene regulation, predict unknown gene functions, understand disease pathogenesis, and accelerate drug research and development. Gene-chip technology and high-throughput sequencing produce large-scale gene expression data, and many computational methods have been developed to infer gene regulatory networks from these data. These methods can be classified as supervised learning [2], information-theoretic [3], and model-based methods [4]. The supervised learning method can construct gene regulatory networks with high

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1779–1784, 2019. https://doi.org/10.1007/978-981-13-3648-5_230


precision, but it needs prior information for guidance, and known gene regulation labels are scarce. The information-theoretic method can construct a large-scale gene network without prior regulatory information, but the constructed network is only a correlation network, not the real gene regulatory network. Model-based methods can provide a deeper understanding of the dynamic behavior of gene regulation, but the model parameters strongly affect construction precision, and when the constructed network contains many genes, learning the conditional dependencies takes a long time, so the time complexity of the algorithm is high. Model-based methods generally include the ordinary differential equation model [5], the multiple linear regression model [6], the linear programming model [7], the Boolean network model [8], and the Bayesian network model [9]. The differential equation model has a simple expression, requires few parameters, and its identification process is relatively easy, but the accuracy of the gene regulatory network it constructs is limited. The Boolean network model is relatively simple, but because it is a discrete mathematical model it cannot reflect the actual situation of the cell very well. The linear regression and linear programming models are simple linear mathematical models that can only handle linear relationships in gene expression data, not nonlinear ones, so their scope of application is small. In addition, these models are relatively weak at handling data noise, and in the absence of abundant gene expression data they cannot construct regulatory networks efficiently. The Bayesian network model can capture the inherent noise in gene expression data and reveal gene expression levels based on statistical hypotheses.

However, the Bayesian network model needs to spend a lot of time learning conditional dependencies, which leads to high computational complexity and prevents its use for constructing large-scale gene regulatory networks. By analyzing the topological structure of existing gene regulatory networks [10], we found that the in-degree of gene nodes decays exponentially: most gene nodes in a regulatory network are regulated by only a few parent nodes, and only a few are regulated by many. That is to say, the number of regulatory factors of a gene in the network is limited, and these limited regulatory genes are called "finite parent nodes". In view of this, a network construction algorithm based on ordered conditional mutual information and finite parent nodes (OCMIF) is proposed for the fast construction of gene regulatory networks. Starting from gene expression data, the OCMIF algorithm first constructs the initial gene correlation network using mutual information, then deletes redundant gene association edges based on ordered conditional mutual information, and finally learns the Bayesian network structure with a strategy that restricts the number of parent nodes, building the gene regulatory network quickly. The OCMIF algorithm can construct gene regulatory networks with high precision while effectively reducing the computational cost.


2 Method

2.1 Dataset

To evaluate the performance of the algorithm, this paper verifies the construction performance of the OCMIF algorithm on four computer-simulated networks, one synthetic network, and one real biological molecular network. The four simulated networks (Net10, Net20, Net50, Net100) come from the DREAM competition data, which provide gene expression data and gold-standard networks derived from the experimentally verified regulatory networks of yeast and Escherichia coli. Net10, Net20, Net50, and Net100 contain 10, 20, 50, and 100 genes with 10, 45, 77, and 166 regulatory edges, respectively. The synthetic network IRMA is a Saccharomyces cerevisiae synthetic network containing 5 genes and 6 regulatory edges. The real biological molecular network is the Escherichia coli SOS DNA repair network, containing 9 genes and 24 regulatory edges.

2.2 Mutual Information

Mutual information (MI) not only measures the nonlinear correlation between genes but also effectively handles high-dimensional, small-sample gene expression data. This measure and conditional mutual information (CMI) have been widely used in the construction of gene networks. If the expression data of a gene are expressed as a vector X (or Y), whose elements are the gene's expression at different times or under different conditions, then the correlation between gene variables X and Y can be measured by the mutual information MI(X, Y), as in Eq. (1):

MI(X, Y) = \sum_{x \in X,\, y \in Y} p(x, y) \log \frac{p(x, y)}{p(x)\, p(y)}    (1)
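Equation (1) can be evaluated empirically from discretized expression vectors. The sketch below (illustrative, not the paper's implementation) estimates MI from joint and marginal frequency counts.

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Empirical MI(X;Y) of Eq. (1) for two discretized expression vectors
    of equal length, using plug-in frequency estimates (natural log)."""
    n = len(x)
    pxy = Counter(zip(x, y))          # joint counts
    px, py = Counter(x), Counter(y)   # marginal counts
    mi = 0.0
    for (a, b), c in pxy.items():
        p_ab = c / n
        # p_ab * log(p_ab / (p_a * p_b)), with counts substituted directly
        mi += p_ab * np.log(p_ab * n * n / (px[a] * py[b]))
    return mi
```

A perfectly dependent pair gives MI equal to the entropy of the variable (ln 2 for a balanced binary gene), while an independent pair gives MI near zero, which is what the thresholding in the OCMIF pipeline exploits.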

2.3 Bayesian Network Model

The goal of Bayesian network (BN) structure learning is to find, based on the training data D, the BN structure that best matches D. Current BN structure-learning methods can be classified as constraint-based methods and score-and-search methods. Constraint-based methods find the conditional independencies implied by the data through conditional independence tests and then look for a network structure consistent with them; this approach is intuitive, but too many conditions must be tested and high-order tests have large errors. Score-and-search methods use a scoring function to find the highest-scoring structure in the search space; this is a statistically driven search method. Scoring functions generally include the Bayesian statistical method, the equivalent Bayesian information criterion (BIC) method, the minimum description length (MDL) method, and entropy-based methods. Because of the good performance of the mutual information test (MIT) scoring function, this paper uses the MIT score to evaluate the joint probability of the BNs.

2.4 OCMIF Algorithm

The algorithm constructs a gene correlation network by ordered conditional mutual information and then builds the gene regulatory network by limiting the number of parent nodes of each gene, which effectively reduces the time complexity. It consists of three parts: (A) construct the initial gene correlation network based on mutual information (MI); (B) delete redundant correlation edges based on ordered conditional mutual information (CMI); (C) restrict the number of parent nodes per gene and construct the gene regulatory network based on the Bayesian network model.
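The three stages (A)–(C) can be sketched as follows. This skeleton is illustrative only: the `mi` and `cmi` estimators, the threshold, and the final Bayesian scoring step are left abstract, and all names are hypothetical rather than the authors' code.

```python
import itertools
import numpy as np

def ocmif_sketch(mi, cmi, expr, mi_thresh, max_parents=3):
    """Illustrative skeleton of the three OCMIF stages:
    (A) MI-based initial correlation network, (B) CMI-based pruning of
    redundant edges, (C) candidate parent sets capped at `max_parents`."""
    genes = list(expr)
    # (A) keep an undirected edge when MI exceeds the threshold
    edges = {frozenset((a, b)) for a, b in itertools.combinations(genes, 2)
             if mi(expr[a], expr[b]) > mi_thresh}
    # (B) drop an edge if some third gene renders it conditionally independent
    for a, b in [tuple(e) for e in edges]:
        for z in genes:
            if z not in (a, b) and cmi(expr[a], expr[b], expr[z]) <= mi_thresh:
                edges.discard(frozenset((a, b)))
                break
    # (C) candidate parents per gene, ranked by MI and capped at max_parents;
    # Bayesian (e.g. MIT-score) search over these small sets would follow.
    parents = {g: sorted((n for e in edges if g in e for n in e - {g}),
                         key=lambda n: -mi(expr[g], expr[n]))[:max_parents]
               for g in genes}
    return parents
```

Capping the parent sets is where the "finite parent nodes" observation pays off: the Bayesian search then scores only a handful of candidate structures per gene instead of an exponential space.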

3 Result

To test the performance of the OCMIF algorithm in constructing gene regulatory networks, we first ran it on the small-scale network data sets (Net10, Net20, IRMA, SOS) and then on the large-scale networks (Net50, Net100), discussed the experimental results, and analyzed the effectiveness and running time of the OCMIF algorithm. Inferring the optimal structure of a gene regulatory network with Bayesian networks is an NP-hard problem. Although learning Bayesian networks with constraint-based and score-and-search strategies can reduce the search space to a certain extent, the time complexity remains high for large-scale gene regulatory networks. Network analysis shows that the in-degree of gene nodes decays exponentially: most genes in a regulatory network are regulated by only a few genes (1–3), and only a few genes are regulated by many parent genes. In addition, using MI and CMI to build a gene correlation network that determines the correlations between genes effectively reduces the search space of Bayesian structure learning. The result for Net20 is shown in Fig. 1.

4 Conclusion

The traditional Bayesian method has high complexity and can only construct small-scale gene regulatory networks, while existing fast Bayesian models construct large-scale networks with low precision and relatively high time complexity. In view of this, a fast gene-regulatory-network construction method based on ordered conditional mutual information and limited parent nodes, the OCMIF algorithm, is proposed in this paper. Starting from gene expression data, OCMIF first constructs an initial gene correlation network using mutual information, then deletes redundant gene association edges based on ordered conditional mutual information, and finally learns the Bayesian network structure by restricting the number of parent nodes, thereby constructing the gene regulatory network.


Fig. 1. Net20 GRN produced by the OCMIF algorithm proposed in this paper

Acknowledgements. This work was supported by grants from the National Natural Science Foundation of China (No. 61502343), the Guangxi Natural Science Foundation (No. 2017GXNSFAA198148, 2015GXNSFBA139262), the foundation of Wuzhou University (No. 2017B001), and the Guangxi Colleges and Universities Key Laboratory of Professional Software Technology, Wuzhou University.

References

1. Khan, A., Saha, G., Pal, R.K.: An approach for reduction of false predictions in reverse engineering of gene regulatory networks. J. Theor. Biol. 445, 9–30 (2018)
2. Maetschke, S.R., Madhamshettiwar, P.B., Davis, M.J., et al.: Supervised, semi-supervised and unsupervised inference of gene regulatory networks. Brief. Bioinform. 15(2), 195–211 (2014)
3. Liu, F., Zhang, S.W., Gao, H.Y.: Inferring gene regulatory networks based on ordered conditional mutual information and limited parent nodes. Prog. Biochem. Biophys. 44(5), 443–450 (2017)
4. Pajaro, M., Alonso, A.A., Otero-Muras, I., et al.: Stochastic modeling and numerical simulation of gene regulatory networks with protein bursting. J. Theor. Biol. 421, 51–70 (2017)
5. Wu, H.L., Lu, T., Xue, H.Q., et al.: Sparse additive ordinary differential equations for dynamic gene regulatory network modeling. J. Am. Stat. Assoc. 109(506), 700–716 (2014)
6. Dong, Z.J., Song, T.C., Yuan, C.: Inference of gene regulatory networks from genetic perturbations with linear regression model. PLoS One 8(12) (2013)


7. Wang, Y., Joshi, T., Xu, D., et al.: Supervised inference of gene regulatory networks by linear programming. In: Li, K., Irwin, G.W. (eds.) Computational Intelligence and Bioinformatics, Pt 3, pp. 551–561 (2006)
8. Menini, L., Possieri, C., Tornambe, A.: Boolean network representation of a continuous-time system and finite-horizon optimal control: application to the single-gene regulatory system for the lac operon. Int. J. Control 90(3), 519–552 (2017)
9. Sanchez-Castillo, M., Blanco, D., Tienda-Luna, I.M., et al.: A Bayesian framework for the inference of gene regulatory networks from time and pseudo-time series data. Bioinformatics 34(6), 964–970 (2018)
10. Lopes, F.M., Martins, D.C., Barrera, J., et al.: A feature selection technique for inference of graphs from their known topological properties: revealing scale-free gene regulatory networks. Inf. Sci. 272, 1–15 (2014)

Innovation Research of Cross Border E-commerce Shopping Guide Platform Based on Big Data and Artificial Intelligence

Jiahua Li
Guangzhou Vocational College of Science and Technology, Guangzhou 510550, China
[email protected]

Abstract. Through a framework development model based on the computer service-oriented architecture (SOA) mechanism, the platform sets up commodity management, order management, account management, financial management, and financial settlement on the basis of the exchange between the application layer and the basic layer. Cross-border e-commerce platforms mainly include intelligent recommendation, intelligent image recognition, and intelligent overseas platform ordering, functions that traditional cross-border e-commerce cannot achieve; the platform can therefore support the intelligent transformation of cross-border e-commerce. This paper uses big data and artificial intelligence to design an intelligent guide system for a cross-border e-commerce shopping guide platform based on the K-Means algorithm and the DEA evaluation model.

Keywords: Cross-border e-commerce · Shopping guide platform · Big data · Artificial intelligence

1 Introduction

Cross-border e-commerce has developed rapidly in recent years and has become an important form of foreign trade; its research value lies in the rapid growth of trade volume [1]. Through SWOT analysis, the development of third-party overseas logistics can be sped up [2]. By promoting the development of third-party cross-border electronic payment platforms, the application of law, and the solution of supervision problems, the service level for cross-border e-commerce customers can be improved, the protection of intellectual property strengthened, and the construction of the credit system promoted [3]. A good environment conducive to the healthy and sustainable development of cross-border e-commerce can thus be built, which promotes its rapid development [4]. It will be good for consumers when e-commerce giants start to attach importance to this part of the business; after all, they have greater capability and a stronger voice [5]. In seeking overseas commodity resources, they can be authorized by brands or large international retailers to guarantee commodity quality and price advantages, and they are also more powerful in laying out cross-border logistics systems to improve the speed and efficiency of logistics.

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1785–1792, 2019. https://doi.org/10.1007/978-981-13-3648-5_231


2 A Big Data Analysis Method Based on the Intelligent Shopping Guide System

With the combination of the Internet, the Internet of Things, mobile technology, and other new applications with e-commerce, a large amount of user network-behavior data in various forms is produced and accumulated; this is known as e-commerce big data. This state has prompted the e-commerce industry to examine the importance of data, form a new management concept for data, extract effective data from big data, combine it with specific e-commerce services, mine the value of data resources, and carry out precise, personalized, and intelligent customer-service innovation, achieving the double effect of reducing cost and improving efficiency. For both domestic and cross-border e-commerce, the role of big data cannot be ignored.

A cross-border e-commerce shopping guide platform based on big data and artificial intelligence can enhance the value of competitive advantage. The data sources of modern e-commerce are not limited to the enterprise's own web site; enterprises make increasing use of social media such as e-mail, micro-blogs, web logs, and interactive communities to collect related data. These data reflect, from different aspects, the business status of the enterprise, the state of its customers, and the trends of its competitors. The decision-making behavior of enterprises is based on the analysis of data. Therefore, the more comprehensive the data, the more social, real-time, accurate, and targeted the platform becomes, and the closer it is to the customer, which makes its competitive advantage in the market more sustainable.

A cross-border e-commerce shopping guide platform based on big data and artificial intelligence can mine data-driven operational value.
The sheer volume of big data is the basic guarantee for e-commerce enterprises to lock in and understand consumers. Through continuous integration of data resources, e-commerce enterprises can facilitate the sharing of information and resources along the supply chain and blur the boundaries between business nodes, so as to optimize the whole e-commerce business process; the improved fluency between business nodes raises business efficiency. At the same time, the interaction data generated by e-commerce transactions under the big data model not only provide omni-directional market information for e-commerce enterprises but also, for the network trading platform, create an active data platform for the emerging industry chain at the core of e-commerce transactions.

A cross-border e-commerce shopping guide platform based on big data and artificial intelligence can reshape multiple business opportunities. For e-commerce enterprises, low cost and high efficiency are the weapons that win the market, and the winning strategy rests on the analysis and optimization of big data. By collecting massive consumer data, user needs can be tapped further, so that enterprises can accurately predict the potential customer market and improve the transaction success rate. On the other hand, driven by big data, consumers' ability to obtain, filter and analyze data information is constantly improving. The enhancement of their ability to recognize data accurately is beneficial

Innovation Research of Cross Border E-commerce Shopping …


for consumers to focus their attention on their own network behavior, which in turn benefits the development and promotion of the intelligent business and services of e-commerce enterprises. In addition, lower cost and market occupation bring huge business opportunities.

A cross-border e-commerce shopping guide platform based on big data and artificial intelligence can improve the value of logistics service quality. With breakthroughs in cloud computing, the Internet of Things and data applications, cooperation between e-commerce and logistics is becoming ever closer. E-commerce enterprises and logistics enterprises share common service objects through their transactions, so the analysis of customer data is no longer limited to the one-way operation of e-commerce enterprises. Big data changes the service direction and service content of the logistics industry: through customer data analysis, logistics enterprises can choose delivery methods more reasonably, optimize routes, provide differentiated services, improve the quality of logistics service, and improve the brand image of the industry.

A cross-border e-commerce shopping guide platform based on big data and artificial intelligence can also create consumer perceived value. As the main force in the application of Internet technology, consumers contribute the largest share of consumption data to big data, and these data are converted into valuable commercial data when enterprises analyze them. Under the big data environment, the Internet consumption system has created an open data system. The money Internet consumers invest in network applications is increasingly spent on personally satisfying experiences, and as the objects of network consumption expand, intelligent, humanized, differentiated and interactive network services appear first in consumption.
To the maximum extent, consumers gain a sense of belonging, satisfaction and happiness, which achieves value creation between merchants and consumers.

At present there are three main modes of cross-border e-commerce: the proprietary (self-operated) mode, the purchasing-agent mode, and the buyer mode. Amazon's global purchase is a typical self-operated model, delivering goods directly to consumers through self-built or third-party logistics. Taobao's global purchase is a typical purchasing-agent model, in which the traditional Taobao sellers have become overseas students, overseas Chinese and similar groups. Platforms of the "products of the world" and "honey panning" type belong to the buyer model: the platform recruits overseas buyers through qualification audits, and the selected buyers sell to consumers on the platform. Jingdong has adopted a mixed model, combining a self-operated platform with overseas direct sourcing and third-party merchants introduced onto the platform; the two complement each other, enriching the platform's SKUs while reducing Jingdong's own financial pressure. There have always been disputes over the merits and faults of these models. Jingdong's self-operated mode is the consistent embodiment of its B2C development model: self-operation can guarantee the quality of goods on the platform to the maximum extent, while large-volume self-operated procurement saves transportation costs and confers bargaining power. Of course, the premise of the Jingdong platform mode is strong capital strength. The qualification audit of merchants can

also guarantee quality to a great extent, as can Jingdong's self-operated logistics. Taobao's global purchasing-agent model maximizes the diversity of goods on the platform, but its quality assurance is inferior to the self-operated and B2C platform models. The buyer mode ensures the basic quality of commodities through the initial vetting of buyers, but limited SKUs and higher costs are its disadvantages. In a word, the various modes of cross-border e-commerce each have advantages and disadvantages, but the potential of the purchasing-agent model is clearly stronger than that of the buyer model, while Jingdong's unique self-operated platform model has advantages over the purchasing-agent model in quality and logistics.

3 Big Data and Artificial Intelligence Design of the Intelligent Shopping Guide System

3.1 K-means Algorithm

The K-means algorithm is widely accepted and used by e-commerce providers. The K-means clustering algorithm was proposed by J. B. MacQueen as early as 1967; the k fuzzy-mean clustering algorithm is a generalization of it in which the membership degree can take any value in the interval [0, 1]. The basic idea of the algorithm is to minimize the weighted within-class sum of squared errors. The k fuzzy-mean classification is implemented iteratively: an optimization function (the objective function) is set up, and the classification that minimizes it is regarded as the best result. The minimum is usually reached by iteration, and the algorithm stops when successive iterations no longer change the assignments.

K-means is a classical clustering algorithm. It begins with a set of cluster centers selected at random; in each iteration, every sample point is assigned to the nearest cluster according to the calculated similarity. The steps of the K-means algorithm are as follows.

Step 1. Select k objects at random from the m data objects as the initial cluster centers.

Step 2. For each of the remaining objects, compute its similarity to the cluster centers and assign it to the cluster with the highest similarity.

Step 3. Recompute the center of each newly obtained cluster as the mean of all objects in the cluster, and repeat the process until the standard measure function converges.

The groups are formed so as to reduce the value of the objective function given by formula (1):




$\sum_{i=1}^{n} \min_{j \in \{1,2,\ldots,k\}} \lVert x_i - p_j \rVert^{2}$  (1)

The cluster center is determined so as to minimize the value of the target function, by formula (2):

$\sum_{i=1}^{m} \lVert y_i - w \rVert^{2} = \sum_{i=1}^{m} \lVert y_i - \bar{y} \rVert^{2} + m \lVert \bar{y} - w \rVert^{2}$  (2)

The necessary and sufficient condition is

$w = \bar{y} = \frac{1}{m} \sum_{i=1}^{m} y_i$  (3)
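The identity in (2) is the standard decomposition of a sum of squares about an arbitrary point $w$; writing out the expansion (our derivation, not in the original) shows why the cross term vanishes and condition (3) follows:

```latex
\sum_{i=1}^{m}\lVert y_i - w\rVert^{2}
  = \sum_{i=1}^{m}\bigl\lVert (y_i-\bar{y}) + (\bar{y}-w) \bigr\rVert^{2}
  = \sum_{i=1}^{m}\lVert y_i-\bar{y}\rVert^{2}
    + 2\Bigl(\sum_{i=1}^{m}(y_i-\bar{y})\Bigr)^{\top}(\bar{y}-w)
    + m\,\lVert \bar{y}-w\rVert^{2}
```

Since $\sum_{i=1}^{m}(y_i-\bar{y}) = 0$, the middle term drops out, and the remaining expression is minimized exactly when $w = \bar{y}$, i.e. condition (3).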

Find the point with the longest distance from the initial point, and denote it as $T_{i1}$:

$T_i = \arg\min_{j} \lVert X_j - C_j \rVert$  (4)

The point $T_{i2}$ has the longest distance from $T_{i1}$. Find the points whose distance is less than or equal to N/K, and denote them as cluster $i$:

$C_j = \frac{1}{N_j} \sum_{s=1}^{N_j} X_s$  (5)
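The iterative procedure of Steps 1–3 can be sketched as follows (a minimal Python illustration using 2-D points and Euclidean distance; all function and variable names are ours, not from the paper):

```python
import random

def kmeans(points, k, max_iter=100):
    """Minimal K-means sketch: points is a list of (x, y) tuples."""
    # Step 1: pick k random points as the initial cluster centers.
    centers = random.sample(points, k)
    for _ in range(max_iter):
        # Step 2: assign every point to the nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda j: (p[0] - centers[j][0]) ** 2
                                            + (p[1] - centers[j][1]) ** 2)
            clusters[j].append(p)
        # Step 3: recompute each center as the mean of its cluster
        # (formula (3)); stop when the centers no longer move.
        new_centers = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centers[j]
            for j, c in enumerate(clusters)
        ]
        if new_centers == centers:
            break
        centers = new_centers
    return centers, clusters
```

Production systems would normally use an optimized library implementation; the sketch only mirrors the three steps above.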

3.2 DEA Evaluation Model Based on the K-means Algorithm

In order to bring efficiency evaluation closer to the real production situation, a DEA evaluation model based on K-means clustering is proposed. The model restores homogeneity among heterogeneous decision making units (DMUs) and makes the efficiency frontier more realistic. First, the K-means clustering algorithm is used to separate the DMUs into homogeneous clusters, and DEA efficiency is evaluated for the homogeneous units within each cluster. Then the central point of each cluster is selected as a homogeneous virtual DMU, and the DEA model is solved over these centers. The efficiency of each original DMU is obtained by combining its within-cluster evaluation with the projection onto the cluster center, so each unit's efficiency is determined jointly by its cluster center and its evaluated efficiency within the cluster. The model can therefore reflect both the homogeneity of the clustered DMUs and the relative efficiency of different DMUs, avoiding the negative effects caused by DMU heterogeneity. The model framework is shown in Fig. 1.
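The paper does not give the underlying DEA linear program. As a minimal illustration of the combination rule in Fig. 1 only, consider the single-input, single-output special case, where CCR efficiency reduces to an output/input ratio normalized by the best unit in the group; all function and variable names here are our assumptions:

```python
def efficiency(units):
    """Single-input/single-output DEA special case: each unit is an
    (input, output) pair; efficiency = productivity relative to the best."""
    ratios = [out / inp for inp, out in units]
    best = max(ratios)
    return [r / best for r in ratios]

def clustered_dea(clusters):
    """Combine within-cluster (relative) efficiency with the efficiency
    of the cluster centers (central), as in Fig. 1:
    new efficiency = relative efficiency x central efficiency."""
    # Represent each cluster by its mean input and mean output.
    centers = [
        (sum(u[0] for u in c) / len(c), sum(u[1] for u in c) / len(c))
        for c in clusters
    ]
    central = efficiency(centers)
    return [
        [rel * central[j] for rel in efficiency(c)]
        for j, c in enumerate(clusters)
    ]
```

A real multi-input, multi-output evaluation would replace `efficiency` with a linear-programming DEA solver; the combination step stays the same.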

[Fig. 1 flow: DMU 1, DMU 2, …, DMU n → homogeneous clusters 1…m → cluster centers 1…m → relative DEA efficiency and central DEA efficiency; new DEA efficiency = relative DEA efficiency × central DEA efficiency.]

Fig. 1. DEA model framework based on K-means clustering

As can be seen from Fig. 2, the efficiency curve of the decision units obtained by this method is smoother than that of the traditional method, and the number of inflection points decreases significantly. This shows that the proposed method can reduce the data heterogeneity between different decision making units and eliminate its negative impact, so that the evaluated efficiency is closer to actual production. At the same time, it reduces the management and operation costs of the enterprise.


[Fig. 2 plots evaluated efficiency (0.0–1.2) against decision unit index (0–20) for the traditional evaluation method and the evaluation method based on K-means clustering.]

Fig. 2. The comparison of traditional evaluation method and K-means clustering method

4 Conclusion

The benefits of an innovative cross-border e-commerce shopping guide platform based on big data and artificial intelligence are as follows.

Enriching the extension of B2C e-commerce. Traditional foreign trade continues to slump while cross-border e-commerce exports rise sharply, and B2C performance is particularly striking. The scale of cross-border e-commerce exports is 5.2 trillion, and export B2C growth (42%) outpaces export B2B growth (27%). Over the past five years, the CAGR of cross-border e-commerce exports is 29%, of which cross-border export B2B grew 27% and cross-border export B2C grew 43%.

Meeting the upgrade of consumer demand. In recent years, with rising consumption levels, the development of logistics, and the popularization of mobile payment, more and more middle-class consumers have extended their purchasing desires abroad. Developing cross-border e-commerce to keep satisfying these core user groups is therefore inevitable.

Releasing the value of logistics advantages. One disadvantage of traditional overseas online shopping ("haitao") is that the waiting time is too long. The cross-border e-commerce platform's solution is to reduce the time and transportation costs of global procurement through its self-operated mode. On the other hand, once goods clear customs, domestic logistics can play to its advantages, especially in last-kilometer delivery; shipping from the bonded zone to the user is basically the same as domestic distribution.



Research on Interior Design of Smart Home

Hongxing Yi

City College of WUST, Wuhan, China
[email protected]

Abstract. Today, with the rapid development of the Internet of Things, a new concept is emerging in the interior design industry: the "smart home". The concept of smart home interior design plays an important role in improving people's quality of life. By studying the intelligent control system of the smart home, this paper expounds the advantages of intelligent design. By analyzing its influence on interior space design and combining the characteristics of interior design, design techniques adapted to both the intelligent control system and interior design are summed up, so that the two can be coordinated and unified in the design scheme. This can create a more harmonious living space and make the life and work environment safer, more convenient, energy-saving and environmentally friendly.

Keywords: Smart home · Interior design · Intelligent control system

1 Introduction With the development of our society and economy, people have higher requirements for indoor living environment. In line with the concept of sustainable development and people-oriented design, China’s interior design is developing towards diversification. Among them, under the rapid development of the Internet of things, smart home has gradually entered the life of people, forming a new trend of development.

2 What Is "Smart Home"

Smart home is the embodiment of the Internet of Things under the influence of the Internet. Taking the house as the platform, it connects all kinds of home equipment through the Internet of Things, providing home appliance control, lighting control, telephone remote control, indoor and outdoor remote control, anti-theft alarm, environmental monitoring, HVAC control, infrared forwarding, programmable timing control and other functions and means to build efficient housing [1]. The management system for its facilities and family schedules can improve the security, convenience, comfort and artistry of the home, and realize an environmentally friendly, energy-saving living environment [2].

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1793–1800, 2019. https://doi.org/10.1007/978-981-13-3648-5_232


3 The Development Trend of Interior Design in China

3.1 Humanized Development of Interior Design

"People oriented, service for people" is the eternal theme of interior design. The interior design of residential space is centered on users and aims at improving people's quality of life. Interior designers need to meet people's requirements for comfort, create a comfortable and beautiful indoor environment, satisfy needs at the physiological, psychological and social levels, and let people feel the comfort, beauty, safety and convenience of the living environment in indoor space. In the future, interior design will be more people-oriented, and design will show more humane care.

3.2 Sustainable Development of Interior Design

In the development of interior design, sustainable development is likewise an eternal theme. In pursuit of natural and healthy indoor space, designers generally favor natural materials (such as wood, stone and bamboo), so the destruction of the natural environment is becoming more and more serious, which runs counter to the sustainable development we advocate [3]. Designers with a sense of social responsibility will be more inclined to choose ecological materials. Eco-friendly materials do not release toxic and harmful gases; they can replace natural materials and give our interior design a higher level of environmental protection. Therefore, ecological and green residences are new trends in the development of interior design [4].

Throughout the development of interior design, both humanized development and sustainable development are themes of the design industry [5]. With the progress of science and technology and the rapid development of the Internet of Things, smart home interior design can better realize these themes. As intelligent technology develops, intelligent control systems have been applied to the design of home space, and our living space will become more convenient, environmentally friendly and efficient [6]. Therefore, the smart home will become a new direction for the interior design of residential space in the near future.

4 The Composition of the Intelligent Home Control System

4.1 Intelligent Security Alarm

The intelligent security alarm mainly addresses the safety problems that can occur in residential buildings, chiefly fire prevention, burglary prevention and gas-leak prevention. It can be divided into two modes: the away-from-home alarm mode and the at-home alarm mode. If the system is set to the away-from-home alarm state, all the equipment in the room is armed; once danger is detected, any device can report to the host to complete the alarm, and the alarm can also be completed by telephone or mobile phone [7]. If it is set to the at-home alarm state, the owner can move about freely in the house: the system's equipment has direction-identification capability and can detect the direction of a human body, playing an anti-theft role. If there is a fire or gas leak at home, the host, informed by the sensors, will contact the user and turn off the gas valve.
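The two alarm modes described above can be modeled as a small state machine; the following sketch uses hypothetical class, sensor and action names for illustration only:

```python
class SecurityAlarm:
    """Toy model of the at-home / away-from-home alarm modes."""

    def __init__(self):
        self.mode = "at_home"  # or "away"

    def set_mode(self, mode):
        assert mode in ("at_home", "away")
        self.mode = mode

    def on_event(self, sensor, triggered):
        """Return the actions to take for a sensor reading."""
        # Fire and gas leaks are handled in every mode.
        if sensor == "gas" and triggered:
            return ["close_gas_valve", "call_owner"]
        if sensor == "fire" and triggered:
            return ["alarm", "call_owner"]
        if sensor == "motion" and triggered:
            # In away mode any motion is treated as an intrusion;
            # in at-home mode the residents may move freely.
            return ["alarm", "call_owner"] if self.mode == "away" else []
        return []
```

A real system would add direction identification and telephone callback, but the mode-dependent dispatch is the essential structure.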

4.2 Intelligent Control of Lighting

The intelligent lighting control system can control indoor lamps and lanterns as desired [8]. It can switch automatically according to a set schedule and the indoor illumination level, or switch lights automatically using human-body sensing. It not only brings convenience and comfort to life but also saves energy and labor. Through system integration it can also realize convenient control modes: not just the control of a single lamp, but linkage control as well [9], operable by remote control from other rooms. It can also be linked with the background music system, so that as the music starts the lighting follows the needs of the scene, fully reflecting personalization.
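The switching rules just described (time schedule, illumination level, human-body sensing) might be combined as in this sketch; the thresholds, default hours and names are our illustrative assumptions, not values from the paper:

```python
def light_on(hour, lux, presence,
             on_hour=18, off_hour=23, lux_threshold=50):
    """Decide whether a lamp should be switched on.

    hour     -- current hour of day (0-23)
    lux      -- measured indoor illumination
    presence -- True if the body sensor detects someone
    """
    scheduled = on_hour <= hour < off_hour   # within the preset time window
    too_dark = lux < lux_threshold           # below the illumination setting
    return presence and scheduled and too_dark
```

Linkage control would then apply the same decision across a group of lamps, or tie it to a scene selected by the background music system.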

4.3 Intelligent Control of Electrical Apparatus

Intelligent electrical control mainly refers to choosing appropriate control forms, through the establishment of system links, such as telephone, remote computer and timing control, to realize intelligent control of the traditional electrical equipment in our lives, such as the home TV, air conditioning, water dispenser and water heater [10]. The control of electrical equipment can be combined with the lighting system to form a more complete and integrated smart home system.

4.4 Intelligent Background Music

The intelligent background music system connects the sound-source signal, through professional wiring, to any room or place where background music is needed (including the bathroom, the kitchen and the balcony); dedicated background-music speakers are controlled independently in each room through the corresponding control panel, so that every room can hear beautiful background music [11]. The greatest advantage of the intelligent system is the ability to share sound sources: it can output a source to an individual player operated separately, and switch the selected player's source through the remote control. It also suits regular listening habits: in the morning, music can be heard in bed, and the system can be integrated into the lighting control system to switch between different scene modes (Fig. 1).


Fig. 1. The composition of intelligent home control system

The home intelligent system is multi-functional and technically rich; its functions and contents are complicated and developing rapidly. Besides the intelligent security alarm, intelligent lighting control, intelligent electrical control and intelligent background music described above, there are also intelligent video sharing, intelligent door-lock control, video systems and so on (see Fig. 1). With the continuous improvement and development of the smart home system, the contents of the smart home will be enriched and strengthened in the short term.

5 Advantages of Smart Home Design

5.1 Efficient and Convenient Life Experience

Against the background of Internet of Things technology, the smart home meets users' varied needs for the living-space environment to a certain extent. Through the intelligent control system, people can manage household electrical equipment in an integrated way, realizing intelligent control of household appliances and lighting. Various scene operations can also be carried out according to the requirements of the indoor environment, so that multiple devices form a linkage mechanism. As intelligence develops, many forms of control become possible: remote control, sensor-based control, mobile-phone control and centralized control. The smart home can also provide users with personalized, customized services. Relying on smart home interior design, people can be freed from tedious household affairs, improve efficiency, reduce the cost of control, and gain great convenience in daily life.

5.2 Safe and Reliable Life Guarantee

In the smart home design process, the introduction of security technology makes the home more secure and reliable than the traditional form. With the development of Internet of Things technology, people install video intercoms in modern residential districts, or install security and anti-theft systems in the home, for the sake of domestic safety. In this way, people can monitor the home indoor environment in real time through mobile terminals in their idle time, effectively deterring lawbreakers and preventing the intrusion of thieves. Once an illegal intrusion occurs, the home alarm system will automatically raise the alarm and call the owner, providing security for people's lives and letting them live in a more comfortable and safe environment.

5.3 Design Concept of Sustainable Development

Sustainable development is the trend of interior design in China and indeed the world. Following the environmental concepts of green and ecological residence, smart home interior design also implements the environmental concept of sustainable development. The smart home mainly controls water meters, electricity meters, gas meters and other household equipment through an integrated intelligent automation control system; in essence, it monitors energy consumption and reduces the waste of energy in daily life, achieving the goal of energy saving and environmental protection. At the same time, temperature control of the central air conditioning and heating system and intelligent control of the lighting system reduce electricity consumption, so as to achieve sustainable development.

6 The Influence of Smart Home on Interior Design

Smart home design is a comprehensive systems engineering effort covering many specialties, including architecture, structure, HVAC, water supply and drainage, electrical engineering and interior design. In the age of the Internet of Things, the first problems interior designers face are how to update traditional interior design concepts, how to accommodate the differences in design procedures and techniques, and how to combine the interior design elements of the traditional living space with the smart home, so as to achieve a beautiful and harmonious environmental effect.

6.1 The Influence on the Interior Style

The smart home will influence the interior design style of the living room. Style can reflect a certain era and locality and meet the needs of the owners; a good indoor style can cultivate people's sentiments, improve the living environment, and realize the subjective function of coordinated development. The purpose the smart home pursues is to serve people better, emphasizing harmony and subjective initiative between people and the living environment. Smart home products are mostly simple in form, showing a strong sense of the times and modernity. If the interior style the owner expects is not a simple style while an intelligent home system is also demanded, the designer should integrate the intelligent system into the interior style, adjust the relationship between the two in the interior design, and ensure the unity of the design and its embodiment.

6.2 The Impact on the Interior Space Organization

The smart home makes the combination of indoor space more flexible. Applying wireless smart home network technology in interior design truly realizes a no-"line" connection and establishes a new life experience and enjoyment of home equipment. Moreover, the wireless smart home has the advantages of low cost, flexible networking and strong mobility. For interior decoration, construction procedures such as slotting, wiring and drilling through walls are eliminated, and the flexibility of indoor space organization is greatly enhanced. Interior designers can organize indoor space more rationally according to users' needs, improve the utilization of indoor space, and provide a more comfortable spatial environment.

6.3 Influence on Indoor Lighting

The intelligent home control system can monitor indoor humidity, temperature and brightness in real time and adjust the physical environment for maximum comfort while saving energy. For example, the lighting control system automatically adjusts light illumination according to the intensity of indoor light and the actual needs of the user, creating different indoor environmental effects and avoiding unnecessary energy waste. Moreover, some intelligent lighting systems now on the market need no special wiring, so lighting fixtures can be updated and upgraded without affecting the home decoration. The family's lighting can also be controlled by other methods, at any time and without restriction. The intelligent lighting system offers a more humanized service, letting people truly feel the high-end enjoyment and interest that an intelligent system brings to life.

7 The Application of Smart Home in Interior Design

7.1 Intelligent Vestibule

The vestibule is the first space from outdoor to indoor, and its design is an important embodiment of the owner's living environment. The vestibule of an intelligent residence differs from that of an ordinary one: its design should consider not only the owner's psychological needs, behavioral needs and basic storage functions, but also the entry of intelligent equipment into the house, the installation of the intelligent equipment's control panels, and the placement of the equipment itself. For example, the entrance door should be equipped with an electronic lock and a wireless door alarm, and a dimming switch or a color touch screen should be placed in the hall. All of this requires early consideration and planning in the design, and the reserved area should be enlarged to meet the installation requirements of the intelligent facilities.

7.2 Intelligent Living Room

The living room serves people's daily leisure, and its design needs to establish good living conditions and meet people's personalized needs. First, among smart home applications, the automatic playing function of the large-screen or projection TV in the living room offers the best visual enjoyment, allowing the user to fully experience a cinema atmosphere at home; however, large-screen equipment places certain requirements on the space and area of the living room. Secondly, intelligent lighting equipment can configure corresponding lighting modes for a variety of functions, and the user only needs simple operations on the remote control.

7.3 Intelligent Bedroom

The bedroom is a resting environment. When designing the bedroom environment, we need to ensure the safety, comfort and privacy of the space, while fully considering the bedroom's air quality and lighting needs. Installing an air purification device inside the bedroom can improve the air environment very well. In the intelligent design of lighting, two-way switches can be used to control the two lights of the bedroom space, and a variety of mode functions can be installed at the head of the bed, for example a reading mode, a night mode and a rest mode. The design process should embody factors such as energy efficiency, convenience and science. In addition, electric curtains and automatic clothes hangers also increase the convenience of people's lives.

8 Summary

To sum up, with the development of intelligent technology, smart home interior design has gradually matured. The application of intelligent control systems has indeed improved people's quality of life and created an efficient, comfortable, safe and environmentally friendly living environment. In future development and construction, we should continue to follow the concepts of humanization and sustainable development, further improve the design of intelligent control systems, combine smart home design with the interior design of traditional living space, and create a more humanized and intelligent living environment.

References 1. Diaodi, O., Yiqi, Y.: Analysis of the development of smart home in interior space design under the background of internet of things. Value Eng. 36, 219–220 (2017). (in Chinese) 2. Pei, Y.: Preliminary study on the development of intelligent home in the future design of interior space. Design 16, 108–109 (2017). (in Chinese) 3. Wei, Y.D.: The development of smart home in the future interior space design. Ability Wisdom 1, 245 (2018). (in Chinese) 4. Nan, Y.: Study on Indoor Space Design and Application Development of Intelligent Home Furnishing, pp. 11–26. Changchun University of Technology (2013). (in Chinese) 5. Yan, L.: Application Study of Intelligent Residential Space, pp. 12–23. Nanjing Forestry University (2012). (in Chinese)

1800

H. Yi

6. Wei, L.: The development of smart home in the future interior space design. Residence 20, 55 (2017). (in Chinese)
7. Xiufu, T.: Research on the application and development of smart home in the future interior space design. Home Drama 22, 171 (2016). (in Chinese)
8. Haitian Electric Business Finance Research Center: Reading a Smart Home in a Book. Tsinghua University Press, Beijing (2016). (in Chinese)
9. Xiudi, X.: Smart Home Products From Design To Operation. Post & Telecom Press, Beijing (2015). (in Chinese)
10. Haiyang, C., Shicheng, J.: On the adaptive development of intelligent home for interior design. Shanxi Archit. 25, 22–24 (2016). (in Chinese)
11. LNCS Homepage: https://baike.baidu.com/item/

Antioxidant Activities of Polysaccharides from Citrus Peel

Yanshan Lu, Zhipeng Su, and Yongguang Bi(&)

College of Pharmacy, Guangdong Pharmaceutical University, Guangzhou 510006, Guangdong, China
[email protected]

Abstract. Citrus peel, named chenpi, has been widely used in traditional Chinese medicine for many years. In this paper, the antioxidant activity of the polysaccharides from citrus peels was measured by the DPPH radical scavenging assay, the hydroxyl radical scavenging assay and the ferric reducing antioxidant power (FRAP) assay. The results showed that the polysaccharides from citrus peels possessed significant radical scavenging activity, with DPPH and •OH IC50 values of 0.388 mg/L and 0.982 mg/mL, respectively. Moreover, their ferric reducing activity at a concentration of 1.0 mg/mL (0.964 mmol/L FeSO4) was stronger than that of BHT at the same concentration (0.772 mmol/L FeSO4). In conclusion, the polysaccharides from citrus peels possessed significant antioxidant activity, and we propose that they can be used as an abundant source of natural free radical scavengers.

Keywords: Antioxidant activity · Free radical scavenging · FRAP assay

1 Introduction

Citrus fruits, with the largest amount of production, trade and processing, are cultivated in many countries and regions. According to statistics of the Food and Agriculture Organization of the United Nations (FAO), the citrus cultivated area in the world has reached 7.8 million m2 in recent years, and the total output is 1–4 billion tons [1, 2]. However, by-products from citrus fruits such as peels and seed residues not only lead to the loss of valuable resources but also aggravate increasingly serious disposal problems [3]. Thus, it is necessary to study the bioactive compounds of citrus peels. Citrus peels contain numerous biologically active compounds, including natural flavonoids, phenolics, polysaccharides and essential oils, which are used in the food, cosmetic and pharmaceutical industries [4–6]. Polysaccharides generally refer to high molecular polymers in which more than 10 monosaccharide molecules are linked by glycosidic bonds. The number of monosaccharide molecules can reach hundreds or even thousands, and the polysaccharide structure also contains uronic acids, amino sugars and sugar alcohols besides the monosaccharides. In recent years, with the development of separation and analysis techniques and of molecular biology, a deeper understanding of the diversity of the biological functions of polysaccharides has emerged. A large number of pharmacological and clinical studies have confirmed that polysaccharides are widely involved in various life activities, such as cell recognition, growth, differentiation, metabolism, embryonic development, cell canceration, viral infection and immune response, which indicates that polysaccharides play an important role in the regulation of life activities [7, 8]. Moreover, polysaccharides have a variety of biological activities, such as antioxidant and antitumor effects [9]. Consequently, it is meaningful to do research on polysaccharides from citrus peels. In this study, the antioxidant activity of the polysaccharides was evaluated by the DPPH assay, the hydroxyl radical scavenging assay and the ferric reducing antioxidant power assay.

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1801–1807, 2019. https://doi.org/10.1007/978-981-13-3648-5_233

2 Materials and Methods

2.1 Materials and Chemicals

Citrus fruits were purchased from Pujiang, Sichuan (China). The glucose standard was from Shanghai Yuanye Biological Technology Co., Ltd (Shanghai, China). 1,1-Diphenyl-2-picrylhydrazyl (DPPH) and 2,4,6-tri(2-pyridyl)-s-triazine (TPTZ) were from Sigma (USA). 2,6-Butylated hydroxytoluene (BHT) was of analytical grade and purchased from Aladdin Reagent Co., Ltd (Shanghai, China). The other reagents were of analytical grade.

2.2 Preparation of the Standard Curve of Glucose

The standard curve of glucose followed an earlier report with some modifications [10]. A stock solution (0.9 mg/mL) was prepared by dissolving 0.09 g of glucose in 100 mL of distilled water. The stock solution was diluted with distilled water to give six standard solutions with glucose concentrations ranging from 9 to 90 µg/mL. 2.0 mL of each standard solution was transferred to a 10 mL test tube. Then 1.5 mL of phenol solution and 5.0 mL of concentrated sulfuric acid were added to each tube. After mixing, the mixture was left to stand for 5 min and bathed in boiling water for 15 min. The absorbance of the mixture was measured at 485 nm with a UV-Vis spectrophotometer (752, Shanghai, China). The 5.0% phenol solution was prepared by diluting 5.0 mL of phenol with 95 mL of distilled water. The linear regression equation and its coefficient of determination (R²) were as follows:

A = 0.0125C + 0.0137, R² = 0.9975   (1)

where A is the absorbance and C is the glucose concentration (µg/mL).

2.3 Preparation of the Calibration Curve of FeSO4

Different concentrations of FeSO4 solution (0.2, 0.4, 0.6, 0.8, 1.0 and 1.2 mmol/L) were prepared, and 150 µL of each was transferred to a 10 mL test tube. Then 4.5 mL of FRAP solution was added to each tube. The mixture was vortexed thoroughly and reacted in a 37 °C water bath for 10 min, and the absorbance was then measured at 593 nm [11]. The FRAP solution was prepared by mixing acetate buffer (pH = 3.6, 300 mmol/L), TPTZ (10 mmol/L) and FeCl3 (20 mmol/L) in the ratio 10:1:1. The linear regression equation and its coefficient of determination (R²) were as follows:

A = 0.6606C − 0.0667, R² = 0.9993   (2)

where A is the absorbance and C is the concentration of FeSO4 (mmol/L).

2.4 Extraction Procedure

The dried citrus peels were powdered with a pulverizer (DFY-600, Wenling, China) and then passed through a 40-mesh sieve. After that, the powders were degreased with petroleum benzin. Eight grams of the degreased dry citrus peel powder was used for the experiment. The ultrasonic-assisted extraction of polysaccharides from the degreased samples was performed in an ultrasonic homogenizer (Scientz-IID, Ningbo, China). The extraction conditions were an extraction temperature of 50 °C, an ultrasonic power of 370 W and a ratio of distilled water to raw material of 31:1 mL/g.

2.5 Determination of Total Polysaccharides Yield

After ultrasonic extraction, the extracted slurry was filtered to collect the filtrate. The concentration of total polysaccharides in the filtrate was determined using the method described in Sect. 2.2. The extraction yield of total polysaccharides was calculated as follows:

Extraction yield (%) = (C × V × 10⁻³ / W) × 100%   (3)

where C (mg/mL) is the concentration of total polysaccharides in the filtrate, V (mL) is the volume of the filtrate and W (g) is the weight of degreased dry citrus peel powder.

2.6 DPPH Radical Scavenging Assay

The antioxidant activity of the total polysaccharides from citrus peel was measured by the DPPH assay with minor modifications [12]. Briefly, 2.0 mL of 0.2 mmol/L DPPH in ethanol was added to 2.0 mL of the extract solution at different concentrations. The mixture was shaken thoroughly and incubated at room temperature for 30 min in the dark, and the absorbance was then measured at 517 nm. The DPPH radical scavenging effect was calculated by the following equation:

Scavenging effect (%) = [1 − (Ai − Aj) / A0] × 100%   (4)

where A0 is the absorbance of the control solution (ethanol and DPPH solution), Ai is the absorbance of the sample solution (extract and DPPH solution) and Aj is the absorbance of the blank (ethanol and extract) (Fig. 1).
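Equation (4) can be sketched numerically as follows; the absorbance values here are hypothetical, for illustration only:

```python
def dpph_scavenging_effect(a0: float, ai: float, aj: float) -> float:
    """DPPH radical scavenging effect (%) per Eq. (4).

    a0: control (ethanol + DPPH), ai: sample (extract + DPPH),
    aj: blank (ethanol + extract), all read at 517 nm.
    """
    return (1.0 - (ai - aj) / a0) * 100.0

# Hypothetical absorbance readings, not measured data.
effect = dpph_scavenging_effect(a0=0.80, ai=0.35, aj=0.05)
print(f"{effect:.1f}%")  # -> 62.5%
```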

Fig. 1. DPPH radical scavenging effect of polysaccharides from citrus peel

2.7 Hydroxyl Radical Scavenging Assay

The hydroxyl radical scavenging effect of the total polysaccharides from citrus peel was assessed [13]. Briefly, the reaction solution contained 1.0 mL of FeSO4 (9 mmol/L), 2.0 mL of salicylic acid (9 mmol/L), 1.0 mL of H2O2 (8.8 mmol/L), 4 mL of distilled water and 2.0 mL of the extract solution at different concentrations. The mixture was vortexed thoroughly and incubated at room temperature for 15 min, and the absorbance was then measured at 510 nm. The hydroxyl radical scavenging ability was calculated by the following formula:

Scavenging effect (%) = [1 − (Di − Dj) / D0] × 100%   (5)

where D0 is the absorbance of the control solution (water instead of the sample), Di is the absorbance of the sample solution and Dj is the absorbance of the blank, prepared as for Di but with water instead of H2O2 (Fig. 2).
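The IC50 values reported later are the concentrations at which scavenging reaches 50%; a minimal sketch of estimating an IC50 from a measured scavenging curve by linear interpolation, using hypothetical data points:

```python
def ic50_linear(points):
    """Estimate IC50 (concentration at 50% scavenging) by linear interpolation
    between the two measured points that bracket 50%. `points` is a list of
    (concentration, scavenging_percent) tuples sorted by concentration."""
    for (c1, s1), (c2, s2) in zip(points, points[1:]):
        if s1 <= 50.0 <= s2:
            return c1 + (50.0 - s1) * (c2 - c1) / (s2 - s1)
    raise ValueError("50% scavenging is not bracketed by the data")

# Hypothetical scavenging curve (mg/mL, %) for illustration only.
curve = [(0.2, 20.0), (0.6, 40.0), (1.0, 52.0), (1.4, 60.0)]
print(round(ic50_linear(curve), 3))  # -> 0.933
```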


Fig. 2. The hydroxyl radical scavenging effect of total polysaccharides from citrus peel

2.8 Ferric Reducing Antioxidant Power (FRAP) Assay

One of the most efficient assays to assess antioxidant capacity is the ferric reducing antioxidant power (FRAP) assay. Briefly, the method described in Sect. 2.3 was used, except that the standard solution was replaced by the sample solution at concentrations of 0.4–1.2 mg/mL, with BHT as the control. The reducing power of the sample solutions was finally expressed in mmol Fe2+ per liter.
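Inverting the calibration line of Eq. (2) converts a measured FRAP absorbance into an FeSO4-equivalent reducing power; a minimal sketch with a hypothetical absorbance reading:

```python
def frap_feso4_equiv(absorbance_593nm: float) -> float:
    """Convert a FRAP absorbance at 593 nm into an FeSO4-equivalent
    concentration (mmol/L) by inverting the calibration line of Eq. (2):
    A = 0.6606*C - 0.0667."""
    return (absorbance_593nm + 0.0667) / 0.6606

# Hypothetical absorbance reading, for illustration only.
print(round(frap_feso4_equiv(0.570), 3))  # -> 0.964
```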

3 Results and Discussion

3.1 DPPH Radical Scavenging Activity

The DPPH assay was applied to evaluate the radical scavenging activity of the polysaccharides from citrus peels because of the stability of DPPH radicals and the rapid reaction process compared with other methods [14]. The scavenging capacity of the total polysaccharides from citrus peel on the DPPH free radical is presented in Fig. 1. The figure shows that the scavenging effect of the polysaccharides increased rapidly with increasing concentration, and the IC50 was 0.388 mg/L. These results demonstrate that the total polysaccharides from citrus peels had a noticeable DPPH scavenging effect.

3.2 Hydroxyl Radical Scavenging Activity

The hydroxyl radical scavenging assay is commonly used to evaluate the antioxidant activity of compounds in vitro. The hydroxyl radical scavenging effect of the total polysaccharides from citrus peel is shown in Fig. 2: the scavenging rate increased quickly with increasing concentration, and the IC50 was 0.982 mg/mL. These results indicate that the total polysaccharides from citrus peels had a noticeable scavenging effect on hydroxyl free radicals.

Fig. 3. The correlation between the content of polysaccharides from citrus peel and reducing ability

3.3 Ferric Reducing Antioxidant Power (FRAP)

The FRAP assay has recently been developed as a direct test of "total antioxidant power" because of its simplicity, sensitivity and speed. The ferric reducing activity of the total polysaccharides from citrus peel is shown in Fig. 3: the ferric reducing activity of the polysaccharides increased quickly with increasing concentration. Moreover, their ferric reducing activity at a concentration of 1.0 mg/mL (0.964 mmol/L FeSO4) was stronger than that of BHT at the same concentration (0.772 mmol/L FeSO4).

4 Conclusion

The polysaccharides from citrus peels possessed significant radical scavenging activity, with DPPH and •OH IC50 values of 0.388 mg/L and 0.982 mg/mL, respectively. Moreover, their ferric reducing activity at a concentration of 1.0 mg/mL (0.964 mmol/L FeSO4) was stronger than that of BHT at the same concentration (0.772 mmol/L FeSO4). In conclusion, the polysaccharides from citrus peels possessed significant antioxidant activity.


Acknowledgements. This work was financially supported by a project of the Guangdong Provincial Science and Technology Department (No. 2016A020210133) and a Guangdong Ocean and Fishery Bureau project (No. A201501C11).

References

1. Johnson, T.M.: Citrus Juice Production and Fresh Market Extension Technologies (2001)
2. Houjiu, W.: The Situation and Outlook of Citrus Processing Industry in China (2001)
3. Wang, X., Chen, Q., Lü, X.: Pectin extracted from apple pomace and citrus peel by subcritical water. Food Hydrocolloids 38(3), 129–137 (2014)
4. Diaz, S., Espinosa, S., Brignole, E.A.: Citrus peel oil deterpenation with supercritical fluids: optimal process and solvent cycle design. J. Supercrit. Fluids 35(1), 49–61 (2005)
5. Donpedro, K.N.: Investigation of single and joint fumigant insecticidal action of citruspeel oil components. Pest Manag. Sci. 46(1), 79–84 (2015)
6. Manthey, J.A., Grohmann, K.: Phenols in citrus peel byproducts. Concentrations of hydroxycinnamates and polymethoxylated flavones in citrus peel molasses. J. Agric. Food Chem. 49(7), 3268–3273 (2001)
7. Marquardt, T., Denecke, J.: Congenital disorders of glycosylation: review of their molecular bases, clinical presentations and specific therapies. Eur. J. Pediatr. 162(6), 359–379 (2003)
8. Rudd, P.M., Elliott, T., Cresswell, P.: Glycosylation and the immune system. Science 291(5512), 2370–2376 (2001)
9. Yamada, H.: Pectic polysaccharides from Chinese herbs: structure and biological activity. Carbohyd. Polym. 25(4), 269–276 (1994)
10. Li, J.W., Ding, S.D., Ding, X.L.: Optimization of the ultrasonically assisted extraction of polysaccharides from Zizyphus jujuba cv. jinsixiaozao. J. Food Eng. 80(1), 176–183 (2007)
11. Benzie, I.F., Strain, J.J.: The ferric reducing ability of plasma (FRAP) as a measure of "antioxidant power": the FRAP assay. Anal. Biochem. 239(1), 70–76 (1996)
12. Thaipong, K., Boonprakob, U., Crosby, K.: Comparison of ABTS, DPPH, FRAP, and ORAC assays for estimating antioxidant activity from guava fruit extracts. J. Food Compos. Anal. 19(6), 669–675 (2006)
13. Spigno, G., Tramelli, L., De Faveri, D.M.: Effects of extraction time, temperature and solvent on concentration and antioxidant activity of grape marc phenolics. J. Food Eng. 81(1), 200–208 (2007)
14. Gulcin, I., Sat, I.G., Beydemir, S.: Comparison of antioxidant activity of clove (Eugenia caryophylata Thunb) buds and lavender (Lavandula stoechas L.). Food Chem. 87(3), 393–400 (2004)

Design of Rural Home Security System Based on the Technology of Multi-characters Fusion

Shuchun Chen1, Peng Chen2, Libo Tian3, and Tao Wang4(&)

1 Hebei Software Institute, Baoding, Hebei, China
2 Shijiazhuang University, Shijiazhuang, China
3 Hebei Kunneng Power Engineering Co. Ltd., Wuhan, China
4 School of Electrical and Electronic Engineering, North China Electric Power University, Changping District, 102206 Beijing, China
[email protected]

Abstract. The central server of the system is developed on an embedded platform, the EC5-1719CLDNA. The system realizes the functions of calling 110 and of remote monitoring and control through GSM and GPRS technology, and realizes real-time sensor information collection and transmission through ZigBee wireless sensor technology. Furthermore, it develops a smart home security system framework that combines streaming media technology, multi-feature fusion technology, wireless network transmission technology and other advanced technologies. In particular, it designs the access control system using multi-feature fusion technology and a neural network fusion strategy, which enhances the security of the rural home security system. All in all, the system realizes home safety precautions and remote intelligent control.

Keywords: Smart home · Multi-characters fusion · Zigbee · Neural network

1 Introduction

In recent years, high-tech products have been emerging constantly; cell phones, computers, TVs and other home appliances are stepping into an intelligent era at a surprising speed and entering ordinary people's homes. On the basis of these various electronic products, researchers created the smart home system by comprehensively applying current technologies [1]. Such a system is used in living, working and commercial environments; combined with wireless sensor technology, networked automatic control, streaming media and other modern technologies, it integrates different devices with their environment and establishes highly efficient buildings and daily routine management [2]. As a result, the indoor environment is greatly improved in security, convenience and intelligence, while a comfortable and energy-saving living environment is realized in the post-E era.

Notably, with urban–rural integration, the difference between urban and rural areas is becoming smaller and smaller, and the rural economy and technology around some cities are even more advanced than in urban areas, so the countryside is also entering the era of intelligent products. Urbanization in rural areas stimulates the need for smart homes [3]. The appearance of the smart home has fundamentally changed life in rural areas: its intelligence has improved people's daily living and raised the quality of their lives. What is more, security has become a main focus [4]. Security is the primary requirement that people place on a smart home, covering functions such as anti-theft, fire protection, gas poisoning prevention and the corresponding alarms and warnings. However, the current smart home market is uneven: many systems have only a single function, insufficient accuracy, burdensome wiring or data that is easily lost, so the market cannot meet people's requirements for smart homes. This paper therefore introduces a smart home security system based on multi-characters fusion technology; it addresses the security problem in the smart home field and, at the same time, meets the requirement of smart control of home appliances [5].

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1808–1814, 2019. https://doi.org/10.1007/978-981-13-3648-5_234

2 The Overall Framework of the System

2.1 The Functions the System Realizes

The system focuses on home security monitoring, anti-theft protection and alarms. Among these functions, multi-characters fusion technology is used for identification in the door access system; real-time dynamic monitoring of moving objects and wireless image transmission are realized by the wireless camera control system; the overall collection and monitoring of indoor environment data are based on sensor data acquisition and a ZigBee wireless network; and the alarm and remote monitoring functions are realized with GSM and GPRS network technology.

2.2 The Structural Design of the System

Based on the main functions of the system, the central server is designed as its core. Together with four subsystems (the intelligent door access subsystem, the real-time dynamic monitoring subsystem, the indoor environment monitoring subsystem, and the alarm and remote control subsystem), it makes up the overall structure of the smart home security system. Figure 1 shows the overall structure [6].
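The central server's routing role in this structure can be sketched as follows; the sensor keys and thresholds here are hypothetical illustrations, not values from the actual system:

```python
# Hypothetical alarm thresholds for the indoor environment monitoring
# subsystem; the real system's sensors and limits may differ.
SENSOR_THRESHOLDS = {"gas_ppm": 300, "smoke_ppm": 150, "temperature_c": 60}

def check_environment(readings: dict) -> list:
    """Return the alarm messages the central server would push over
    GSM/GPRS for any sensor reading that exceeds its threshold."""
    alarms = []
    for sensor, value in readings.items():
        limit = SENSOR_THRESHOLDS.get(sensor)
        if limit is not None and value > limit:
            alarms.append(f"ALARM {sensor}={value} exceeds {limit}")
    return alarms

print(check_environment({"gas_ppm": 420, "temperature_c": 25}))
```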

3 Research on the Core Technology of the System

With high performance, low power consumption and rich expansion interfaces, the EC5-1719CLDNA meets the functional requirements of the system. Information on indoor gas, smoke and temperature and on outdoor wind speed and rainfall is collected over the ZigBee wireless network. The GSM/GPRS network is accessed through Huawei GTM900C modules; communication between the microprocessor and the central server runs over an RS232 serial port and covers instruction execution and data transmission, including the sending of voice, SMS and multimedia messages and the remote monitoring and control of home appliances. A wireless camera is used for fixed-point monitoring, and streaming media technology is used to detect moving objects indoors and outside the window. At the same time, the system can send alarm messages and photos taken by the cameras to designated people at any time.

Fig. 1. Overall structure (block diagram: the smart door access control subsystem with face, voice and fingerprint signal collection, photo collection, visual intercom, relay and electromagnetic switch; the real-time dynamic monitoring subsystem with wireless cameras; the indoor environment monitoring subsystem over ZigBee; and the alarm and remote control subsystem over GSM/GPRS, all connected to the EC5-1719CLDNA central server)

The most central technique is that this system introduces multi-characters fusion identification [7]. By identifying one or more of face, voice and fingerprint, the parameters of the door access control system are resolved. This arbitrary-combination multi-feature identification technology not only improves the precision of the system but also enlarges its range of application. The main content is introduced briefly below.

Conventional identification is single-feature identification: a face identification system only uses the face photo taken by a camera, a voice identification system only uses the voice signal collected by a microphone, and fingerprint identification only matches the fingerprint information collected by a biometric fingerprint entry device [8]. However, a single biometric feature is prone to identification errors, especially in dim, noisy and other harsh environments, where the error rate is larger. This paper builds a system based on the fusion of face, voice and fingerprint. Experiments show that multi-character combined identification is more precise, especially in complex environments (interference from viewing angle, lighting, noise and so on), where its advantages stand out and its stability improves.

3.1 The Outline of Identification Based on Three Characters

Among the single-feature identification methods for face, voice and fingerprint, face identification is the most mature, adopting PPBTF (Pixel-Pattern-Based Texture Features), i.e. textural features based on pixel patterns; voice identification adopts MFCC (Mel-Frequency Cepstral Coefficients), using the Mel-frequency scale to warp the actual frequency axis and thereby simulate the non-linear relation between perceived strength and frequency; fingerprint identification adopts a point-pattern matching algorithm. These three single-feature identification methods have already been discussed in detail elsewhere, so they are not repeated here. For the fusion of the three features, this paper improves the PPBTF feature identification method [9]. In face identification, PPBTF directly extracts a gray-scale map and is therefore sensitive to lighting, so this paper describes the pattern of the image by introducing principal components; after this improvement, which shields the disturbance of gray-scale variation, the stability is greatly improved.

3.2 The Strategy and Method of Fusion

With the methods above, the face, fingerprint and voice features are obtained. Since these three features are independent of each other, parameter-level and feature-level fusion cannot be performed, and fusion is carried out only at the decision-making level; taking advantage of BP neural network fusion, a non-linear fusion of the three features is conducted. The framework is shown in Fig. 2.
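A minimal sketch of such decision-level fusion, assuming three matcher scores normalized to [0, 1]; the network weights below are hypothetical placeholders, whereas in the system described they would be learned by back-propagation for each enrolled person:

```python
import math

def _sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def fuse_scores(face: float, voice: float, fingerprint: float) -> float:
    """Decision-level fusion of three matcher scores with a tiny
    one-hidden-layer feed-forward network (hypothetical fixed weights)."""
    w_hidden = [(4.0, 2.0, 3.0, -4.0), (2.0, 4.0, 2.0, -3.5)]  # (wf, wv, wp, bias)
    w_out = (3.0, 3.0, -3.0)                                   # (h1, h2, bias)
    h = [_sigmoid(wf * face + wv * voice + wp * fingerprint + b)
         for wf, wv, wp, b in w_hidden]
    return _sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])

# A strong face and fingerprint match can compensate for a weak voice match.
accept = fuse_scores(face=0.9, voice=0.4, fingerprint=0.95) > 0.5
print("access granted" if accept else "access denied")  # -> access granted
```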

Fig. 2. Multi-characters fusion framework (the face recognizer extracts PPBTF features, the voice recognizer extracts MFCC features and the fingerprint recognizer uses point-pattern matching; together with the personal information, the three features go through fusion training and recognition, and the result is output)

The training process of the BP algorithm uses a multilayer perceptron with error back-propagation: a series of non-linear mappings is applied layer by layer until a separable representation is found in some feature space.

3.3 The Result of the Fusion Experiment

For the identification experiment with the BP neural network fusion method, a database was built with the faces, voices and fingerprints of 30 people. Face images were gathered with ordinary cameras; voice was gathered with an ordinary microphone plugged into a computer, sampled at 16 kHz with 8-bit quantization; fingerprints were gathered with an LD-800 biometric fingerprint reader, whose response is very fast: the time between access denial and the door-open response is less than 0.13 s. A BP neural network was built for each person. To compare performance, interference factors were introduced into both the single-feature identification experiments and the multi-feature fusion identification experiment. The results show that, in an interference environment, the system is more stable after adopting multi-feature fusion technology. The results are listed in Tables 1, 2 and 3.

Table 1. Comparison of speech recognition rates (%)

SNR              45   40   35   30   25   20   15   10
Voice subsystem 100   98   90   80   70   50   30   10
Fusion system   100  100   98   95   93   93   90   89

Table 2. Comparison of face recognition rates (%)

Angle between camera and face   90   75   60   45   30
Face subsystem                 100   98   80   60   20
Fusion system                  100  100   96   90   83

Table 3. Comparison of fingerprint recognition rates (%)

Mixed noise coefficient   0.1  0.15  0.2  0.25  0.3
Fingerprint subsystem     100    97   90    70   45
Fusion system             100   100   99    97   95
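The consistent advantage of fusion can be checked directly against the Table 1 figures; a short sketch:

```python
# Recognition rates (%) from Table 1 at each SNR level.
snr = [45, 40, 35, 30, 25, 20, 15, 10]
voice_subsystem = [100, 98, 90, 80, 70, 50, 30, 10]
fusion_system = [100, 100, 98, 95, 93, 93, 90, 89]

# The fusion system never falls below the single-feature subsystem,
# and the gap widens as the SNR drops.
gaps = [f - v for f, v in zip(fusion_system, voice_subsystem)]
assert all(g >= 0 for g in gaps)
print(dict(zip(snr, gaps)))  # -> {45: 0, 40: 2, 35: 8, 30: 15, 25: 23, 20: 43, 15: 60, 10: 79}
```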

3.4 The Hardware Design of the Door Access System

The door access system is composed of a wireless camera, a microphone, a fingerprint reader, a controlled rotary motor and an MSP430 expansion board. The MSP430 communicates with the central processing unit over an RS232 serial port; besides receiving control messages from the central server, it is the execution unit of the voice module. Before the system starts working, it collects the face, voice and fingerprint of each authorized person, which make up the matching templates. When there is a home-entry request, the voice module issues voice prompts to assist the multi-feature identification; if the person is already registered in the system, the central server sends an instruction to the MSP430 controller to activate the electromagnetic switch and open the door; otherwise, a warning message is sent. Part of the hardware circuit is shown in Fig. 3.

3.5 The Design of the Software

This paper brings the multi-characters fusion identification method into the door access subsystem as the core module, alongside the monitoring module of the whole system. To settle conflicts between the two modules, a multithreaded program was adopted, designed with the Microsoft Foundation Classes under Windows.

Fig. 3. The circuit diagram of the access control system (wireless camera and live fingerprint recording instrument)

OpenCV is a multi-platform, open-source computer vision library that provides interfaces to cameras and implements many universal algorithms for image processing and computer vision. With the help of the function modules of OpenCV, feature identification in the identification system can be realized. For example, during the collection of face photos, the CvCapture structure selects the camera: CvCapture *pcapture = cvCaptureFromCAM(n), where n is the camera index. OpenCV also provides powerful support for real-time face detection; this paper detects faces in the target image with cvHaarDetectObjects. The program flow chart is shown in Fig. 4.

Fig. 4. The program flow chart of face detection (the image is scanned with the Haar cascade via cvHaarDetectObjects; parameters such as scale_factor = 1.05, min_neighbors and min_size = cvSize(0, 0) control the multi-scale scan, and detected faces are stored)

4 Conclusion

After the design, the subsystems were tested in their software and hardware units and the whole circuit was debugged; they work well, so the intended goal and all functions of the system were achieved. This paper creatively introduces multi-characters fusion technology into the door access control subsystem for identification and briefly introduces its core techniques. Experiments show that multi-feature fusion greatly improves the stability of the identification system and significantly raises its security and degree of intelligence; the use of wireless communication and wireless cameras makes routing easy and has a remarkable anti-theft effect for single-family residences. The system is worth promoting for its high performance and easy installation; it is suitable for both dense and scattered houses, villas and so on, so it has social value and application prospects. However, the system omits the design of an outdoor anti-theft function; with more time and effort, the functions could be extended to make the whole system better.

References

1. Han, X.: The Design and the Achieve of Smart Home Security. Dalian University of Technology (2009)
2. Li, L., Li, Y., Cai, G., et al.: The research and development of smart security system based on the internet of things. Control Eng. 22(5), 1001–1005 (2015)
3. Wang, Y.: The Research of Identification System Based on Multi-character Fusion. Southeast University (2010)
4. Ru, L., Yang, J., Su, Y.: The identification system based on multi-character fusion of nervous network. Comput. Eng. Des. 25(2), 277–280 (2004)
5. Wang, Y., Shi, J.: The design of smart light control system based on BP nervous network. Comput. Meas. Control 24(2), 91–93 (2016)
6. Cao, H., Cao, L., Jian, X.: The voice and face identification method based on nervous network fusion. Comput. Eng. 33(11), 184–186 (2007)
7. Prieto, N., López-Campos, Ó., Aalhus, J.L., Dugan, M.E.R., Juárez, M., Uttaro, B.: Meat Sci. 98(2), 279 (2014)
8. Pla, M., Hernández, P., Ariño, B., Ramírez, J.A., Díaz, I.: Food Chem. 100(1), 165 (2007)
9. Pullanagari, R.R., Yule, I.J., Agnew, M.: Meat Sci. 100, 156 (2015)

Research on Offline Transaction Model in Mobile Payment System

Songnong Li(&), Xiaorui Hu, Fengling, Yu Zhang, Wei Dong, Jun Ye, and Hongliang Sun

Chongqing Electric Power Research Institute, Chongqing 400015, China
[email protected]

Abstract. Mobile payment is a killer wireless network service in e-commerce. Currently, the typical e-commerce modes based on mobile payment still face problems in meeting consumers' daily needs, such as supporting macro payments, supporting offline transactions and improving the validity of payments. This paper puts forward an offline-transaction e-commerce system model based on mobile payment, which comprises the offline POS terminal, the mobile device and the payment center. The key idea of this model is to use the mobile device as a medium to transfer the offline terminal's transaction voucher and the payment center's payment confirmation in order to complete the transaction. In this model, the transaction voucher, a randomly generated ID on the offline POS terminal, is signed with digital signature technology at the payment center to generate the payment confirmation. The digital signature verification of the payment confirmation, which applies emerging ID-based cryptography for key agreement and authentication, guarantees the validity of the offline transaction. The offline transaction model based on mobile payment saves wiring costs and also makes the transaction process more convenient.

Keywords: Offline POS terminal · Digital signature · E-commerce · Mobile payment

1 Introduction

Mobile payment is defined as a payment conducted via handheld devices such as a mobile phone or a PDA (personal digital assistant) [1]. Nowadays, mobile devices have become necessities of people's daily life, and they have become a business tool that increases the opportunity to connect with consumers. The data show that the mobile payment market size of China in 2012 was 120 billion Yuan, and it was expected that China's mobile payment transaction scale would exceed 500 billion Yuan by 2015 [2]. Achieving mobile payment in e-commerce (Electronic Commerce), making the consumption process more convenient, is an inevitable trend that has drawn the attention of telecom operators, banking institutions, merchants, and content service providers. Mobile payment is an evolution of e-payment that will facilitate e-commerce. Convenience and security are two important factors influencing the wide use of mobile payment services. For consumers, convenience means low transaction cost, a simple transaction procedure, and interoperability, whereas for merchants it means
© Springer Nature Singapore Pte Ltd. 2019. J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1815–1820, 2019. https://doi.org/10.1007/978-981-13-3648-5_235


low installation as well as operational costs. Security is another utmost concern for consumers and merchants. Security concerns include keeping consumers' private information from being stolen and misused; if users think a payment method is not safe, they will not prefer it. Obviously, cryptography is an efficient way to ensure security and enables users to perform secure payment transactions using mobile devices. Ref. [3] proposes five types of mobile payment: B2B (Business to Business), B2C (Business to Consumer), C2C (Consumer to Consumer), B2G (Business to Government), and P2P (Person to Person). Whatever the type of mobile payment, it requires party B to be always online to check the payment state of party A. This paper works on the B2P type, allowing an offline merchant terminal to implement mobile payment and provide the service conveniently and securely. The rest of the paper is organized as follows: In Sect. 2.1, we introduce the most common existing transaction methods of e-commerce based on mobile payment in China and point out the advantages and disadvantages of these methods. In Sect. 2.2, we present an offline POS (point-of-sale) terminal transaction method based on mobile payment and provide the framework and procedure of the new method, which can accomplish the whole e-commerce mobile payment process. In Sect. 3, we describe the method used to ensure the security and validity of the offline POS terminal transaction. In Sect. 4, we conclude the paper.

2 The Current Mobile Payment

E-commerce implemented via mobile devices in China is now in a stable development stage. Based on the most popular e-commerce transaction modes using mobile payment, we describe two typical application methods in China.

2.1 Internet Mobile Payment Method

The internet mobile payment method [4] is an alternative on the internet. Users just need to provide their cell number to finish the transaction, and the settlement is charged to their mobile carrier phone bill. The advantages of this method are that the transaction can be conducted anywhere, anytime; the process is also simplified to a certain degree by excluding banks' and credit card companies' participation [5]; and merchants and consumers do not need to introduce new components or equipment. However, it only supports some micro-payment services such as downloading paid software, recharging game accounts, mobile value-added services, and other virtual digital products, and the payment is mixed up with the call charge. Clearly, it cannot meet people's daily needs.

2.2 POS Mobile Payment Method

This is an e-payment (electronic payment) method allowing people to pay at a point of sale (POS) with a mobile device. Consumers choose the commodities and then use the mobile device to settle with the merchant's POS terminal [6]. POS terminals are widely used in various large and middle-sized shopping malls, which has enormously facilitated consumers' daily lives. What's more, this method supports bigger payments, with a bank account bound to the cell phone or phone wallet. The obvious drawback of this method is that the merchants' installation and wiring costs are very high, since the POS terminal must always be online with the payment center in order to check consumers' payment state. Moreover, it increases the equipment and wiring maintenance costs. For the purposes of making consumption more convenient and reducing merchants' equipment, maintenance, and human resource costs, we present the offline POS terminal transaction method based on mobile payment.

3 Our Proposed Off-line POS Transaction

In this paper, completing the off-line POS terminal e-commerce transaction via mobile payment requires a prepaid account, which must be pre-registered with a certain payment center to enable mobile payment. An offline e-commerce transaction system model based on mobile devices is composed of the offline POS terminal, the mobile device, and the payment center, as shown in Fig. 1.

Fig. 1. The offline POS transaction system framework (offline POS terminal, mobile device, and payment center, linked by short-range wireless and mobile wireless communication; a payment transaction passes through the purchase, payment, and settlement phases)

(1) Offline POS terminal: The offline POS terminal includes a short-range wireless communication module, such as an NFC module, a Bluetooth communication module, or an IR communication module, to exchange payment information with the mobile device. The main functions of the offline POS terminal are receiving the transaction request, transferring transaction information, receiving the payment confirmation, and completing the transaction. (2) Mobile device: The mobile device installs a short-range wireless communication module to exchange information with the offline POS terminal. The mobile


device must support mobile payment; that is to say, it should be registered with a certain payment center. Transferring transaction information and payment confirmations, and paying for the transaction, are the major functions of the mobile device. (3) Payment center: The payment center could be the merchant itself, or a bank settlement center contracted with the merchant. The payment center is responsible for handling accounts and generating payment confirmations. The mobile device is the key link in completing the offline transaction, since it is the medium implementing the communication between the offline POS terminal and the payment center, as well as paying for the transaction. Figure 1 also illustrates the conceptual trading model for our offline POS terminal transaction system. As in the general conceptual trading model for e-commerce, the process of the offline POS terminal payment transaction consists of three phases: the purchase phase, the payment phase, and the settlement phase [7]. In our trading model, the process begins at the purchase phase, where the consumer proposes the transaction request on the offline POS terminal. Then the process enters the payment phase, in which the consumer issues electronic means of payment (equal to the value of the merchandise) to the payment center, which we call "payment". The payment center checks the account balance; if the payment is successful, the consumer gets the payment confirmation and is permitted to purchase the merchandise or services. The payment center then enters the settlement phase, in which it deposits the electronic money into the merchant's account. The specific flow of the offline POS terminal transaction method based on the mobile device mainly includes the following steps, as shown in Fig. 2.


Fig. 2. The offline POS terminal transaction procedure based on mobile payment

(1) Propose the transaction request. Proposing the transaction request simply requires the consumer to select the products or services on the offline POS terminal. Simultaneously, the transaction information is generated along with the transaction request confirmation. (2) Obtain the transaction information. The consumer uses the mobile device to obtain the transaction information via short-range wireless communication technology. The transaction information, generated by


the offline POS terminal, includes the transaction number, transaction amount, service content or product number, transaction voucher, and so on. The transaction voucher can be a random-number ID that identifies each transaction. (3) Send the payment information to the payment center. Mobile technology provides various possibilities for communicating with the payment center. Normally, a mobile device may send or receive information through SMS (Short Message Service) or WAP (Wireless Application Protocol) technology [8]. The payment information includes the transaction information and the payment request. The payment request is produced at the time the transaction information is sent to the payment center. (4) Process the payment request. After accepting the transaction information from the mobile device, the payment center checks the consumer's mobile account balance and transfers the funds. Then, the payment center gives feedback to the consumer. The feedback information contains the real transaction funds, account balance, service content, and the payment state, which indicates whether the payment was successful. If the payment is successful, the payment center generates a payment confirmation based on the transaction voucher. If the feedback indicates failure, for reasons such as insufficient account balance or out-of-service status, the transaction is stopped. (5) Receive the feedback and payment confirmation. In this step, the mobile device receives the feedback to check the real transaction content, account balance, etc. If the payment is finished, the consumer must ensure that the payment confirmation is obtained; otherwise, the consumer must contact the payment center to deal with the trading incident. (6) Verify the payment confirmation and provide the transaction content. The offline POS terminal receives the payment confirmation from the mobile device, and then verifies the validity of the payment confirmation via a certain regulation agreed upon with the payment center.
Obviously, authentication of the payment confirmation is a key factor in this offline transaction model, since it gives the offline POS terminal permission to decide whether to provide the service or merchandise. (7) In the last step, if the payment confirmation is valid, the offline POS terminal provides the service and goods; otherwise, the transaction is stopped.
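The voucher signing and verification in steps (4) and (6) can be sketched as follows. This is only a minimal illustration: it stands in for the paper's ID-based signature scheme with a shared-key HMAC, and the names (SHARED_KEY, new_voucher, payment_confirmation, verify_confirmation) are hypothetical, not from the paper.

```python
import hashlib
import hmac
import secrets

# Stand-in for the key material that the paper's ID-based key agreement
# would establish between the payment center and the offline POS terminal.
SHARED_KEY = b"demo-shared-key"

def new_voucher() -> str:
    # Steps (1)-(2): the offline POS terminal generates a random
    # transaction voucher ID identifying this transaction.
    return secrets.token_hex(16)

def payment_confirmation(voucher: str) -> str:
    # Step (4): after a successful payment, the payment center "signs"
    # the voucher to produce the payment confirmation.
    return hmac.new(SHARED_KEY, voucher.encode(), hashlib.sha256).hexdigest()

def verify_confirmation(voucher: str, confirmation: str) -> bool:
    # Step (6): the offline POS terminal verifies the confirmation
    # without contacting the payment center.
    expected = hmac.new(SHARED_KEY, voucher.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, confirmation)
```

In the paper's actual scheme, verification would rely on public ID-based parameters rather than a shared key, so the POS terminal would hold no signing capability of its own.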

4 Conclusion

Mobile payment is a killer service in e-commerce. We first analyzed the current typical mobile payment methods in China and pointed out the merits and drawbacks of each method. We then presented a new offline transaction model based on mobile payment, which aims to provide a convenient, reliable, and low-cost method for e-commerce.


In this paper, we showed the framework of the offline POS transaction system, which includes the offline POS terminal, the mobile device, and the payment center. We described the specific flow of the offline POS transaction based on mobile payment, which uses the mobile device as a medium to transfer the offline terminal's transaction information and payment confirmation, as well as to pay for the transaction. We proposed taking advantage of digital signature technology to ensure the validity and security of the transaction process.
Acknowledgements. This work is supported by Chongqing Electric Power Research Institute, State Grid Corporation of China technology project cstc2016jcyjA0214, and the National Key Technology Support Program (2015BAG10B00).

References

1. Au, Y.A., Kauffman, R.J.: The economics of mobile payments: understanding stakeholder issues for an emerging financial technology application. J. Electron. Commer. Res. Appl. 7(2), 141–164 (2008)
2. Enfodesk. http://www.enfodesk.com/SMinisite/maininfo/articledetail-id-316870.html
3. Singh, B., Jasmine, K.S.: Comparative study on various methods and types of mobile payment system. https://doi.org/10.1109/mncapps.2012.44
4. Henkel, J.: Mobile payment—The German and European perspective. In: Silberer, G. (ed.) Mobile Commerce. Gabler Publishing, Wiesbaden (2001)
5. Zheng, X., Chen, D.: Study of mobile payments system. https://doi.org/10.1109/coec.2003.1210227
6. Lin, Y., Chang, M., Rao, H.: Mobile prepaid phone services. IEEE Pers. Commun. 7, 4–14 (2000)
7. Li, X., Zhu, W., He, M.: Secure remote mobile payment architecture and application. In: 2010 International Symposium on Computer Communication Control and Automation (3CA), vol. 1, pp. 487–490, 5–7 May 2010
8. Wikipedia, "Mobile payment". http://en.wikipedia.org/wiki/mobile_payment

The In-Use Performance Ratio of China Real World Vehicles and the Verification of Denominator/Numerator Increment Activity Compliance Qian Guogang(&), Xie Nan, and Yang Fan China Automotive Technology and Research Center, Tianjin, China [email protected]

Abstract. The data stream collected from in-use vehicles in China reveals that the increment of numerators and denominators in congested traffic is slower than under common running conditions. This article analyzes the necessity of a minimum OBDCOND count when selecting samples for computing IUPR. A limit value for IUPR suitable for social application in China is 0.336. The compliance of the increments of OBDCOND and of numerators such as the catalyst's can be judged from the in-use vehicle data stream. Such a logic check can be essential to prohibit violations via forged data for OBDCOND or CATCOMP, etc.

Keywords: In-use vehicle · Data stream · OBD · IUPR · OBDCOND · General denominator

1 Introduction

During the first 10 years of OBD II in California, a few manufacturers were found to adopt trickery designs for some on-board diagnostic functions, restricting the enable conditions to confined situations such as certification test conditions. IUMPR, also called IUPR, indicates how often a specific On-Board Diagnostic (OBD) monitor operates relative to vehicle driving, as defined in Eq. (1) [1]. Traffic conditions and driver running style can both influence the numerator and the denominator, and thereby the IUPR. With the implementation of IUPR in the China Stage 5b emission standard since 2013 in several megacities [2], the IUPR function has gradually been equipped. Collecting numerator and denominator values together with the speed data stream from real-world vehicles for several months can provide a sketch of the IUPR situation in application.

IUPR_M = Numerator_M / Denominator_M    (1)

Numerator_M measures the number of times a fault could have been detected; Denominator_M measures vehicle activity.
© Springer Nature Singapore Pte Ltd. 2019. J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1821–1828, 2019. https://doi.org/10.1007/978-981-13-3648-5_236


2 Acquisition of China Fleet Data

Telematics, based on telecom technology and ISO 15031, offers an applicable approach to winning car owners' cooperation, since car owners taking part in this program can still drive their cars as usual, without inconvenience. The data acquisition system consists of slot widgets with 3G telecom chips, a 3G telecom net-frame, and a database. The slot widgets are plugged into the OBD socket to communicate with the vehicles, and the data stream they collect is uploaded to the database through the telecom net-frame. In total, 51 widgets were installed on social in-use Chinese vehicles, among which 34 were installed on China Stage 5b vehicles. These 51 vehicles are more representative than type-approval vehicles and thus better reveal a manufacturer's fleet in-use compliance. Six of the vehicles are of local technology; the numbers of vehicles of European, Japanese/Korean, and US origin technology are 25, 14, and 6, respectively. Data from each vehicle were collected for several weeks or months. The parameters include IGNCNTR, OBDCOND, CATCOMP, CATCOND, O2SCOMP, O2SCOND, EGRCOMP, EGRCOND, AIRCOMP, AIRCOND, EVAPCOMP, EVAPCOND, SO2SCOMP, SO2SCOND, and the O2 sensor signal. OBDCOND is the general denominator counter, which increments by 1 when all three criteria are met: ① total trip time over 600 s; ② accumulated time at speeds above 40 km/h over 300 s; ③ a consecutive idle of over 30 s.
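The three OBDCOND criteria above can be sketched as a simple check over a trip's speed samples. The function name and the assumption of evenly spaced (e.g., 1 Hz) speed readings are mine, not the paper's:

```python
def obdcond_should_increment(speeds_kmh, dt=1.0):
    """Return True if a trip meets the three OBDCOND criteria:
    total time of at least 600 s, at least 300 s accumulated above
    40 km/h, and a consecutive idle of at least 30 s.

    speeds_kmh: speed samples (km/h), assumed evenly spaced dt seconds apart.
    """
    total_time = len(speeds_kmh) * dt
    time_above_40 = sum(dt for v in speeds_kmh if v > 40)
    # Longest run of consecutive idle (speed == 0) time.
    longest_idle = run = 0.0
    for v in speeds_kmh:
        run = run + dt if v == 0 else 0.0
        longest_idle = max(longest_idle, run)
    return total_time >= 600 and time_above_40 >= 300 and longest_idle >= 30
```

A congested trip with short driving spells below 40 km/h can fail criterion ② even when it is long enough overall, which is consistent with the slower OBDCOND increment observed in Sect. 3.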

3 Analysis of Data

3.1 Comparison of Congested Traffic and Common Running

Trips in the congested group mostly have a speed sequence consecutively below 50 km/h, and the driving period between two adjacent idles is mostly only several minutes. The numerator and denominator increment behavior in this group differs markedly from the other trips, which are grouped as common running, as illustrated in a previous study [3]. When a vehicle sustains congestion for one week and runs in common traffic the week after, these two periods are sorted into the two groups separately. In total, 24 vehicles contributed data to this analysis. If a vehicle always runs these two conditions alternately every day, it cannot be segmented into the groups above. 3.1.1 Numerator Increment Behavior in Congested Versus Common Running CATCOMP, O2SCOMP, SO2SCOMP, and EGRCOMP are the numerator parameters for the catalyst, front oxygen sensor, rear oxygen sensor, and EGR/VVT, respectively. The average daily increment of CATCOMP is shown in Fig. 1. Some vehicles contributed more than one sample to the congested or common running group, shown as additional points. The distribution for common running lies mostly above that for congestion; the former shows an average value of 1.32, a sharp distinction from


the average value for congested traffic, which is 0.34. These two numbers represent the average daily diagnostic frequency for the catalyst.

Fig. 1. Average daily increment number for CATCOMP

The average daily increment numbers for O2SCOMP, SO2SCOMP, and EGRCOMP are shown in Figs. 2, 3 and 4.

Fig. 2. Average daily increment number for O2SCOMP

Fig. 3. Average daily increment number for SO2SCOMP

Their increment frequency is lower in congested traffic than in common running, similar to the behavior of CATCOMP (Table 1).


Fig. 4. Average daily increment number for EGRCOMP

Table 1. Monitoring item average daily increment

                    Catalyst   O2 sensor   Rear O2 sensor   VVT/EGR
Common running      1.32       1.57        1.74             2.6
Congested traffic   0.34       0.53        0.67             1.57

3.1.2 Denominator Increment Behavior in Congested Versus Common Running The scatter plot of the average daily increment of IGNCNTR versus the average daily increment of OBDCOND is shown in Fig. 5. The congested group, with a linear regression coefficient of 0.034, lies mostly far below the points of the common running group. The increment activity of the general denominator is also slowed by congested traffic.

Fig. 5. General denominator increment and ignition counter increment

3.2 The Necessity of a Minimum Denominator Requirement

When a manufacturer selects representative vehicles for IUPR statistics, the denominators of the sampled vehicles should meet the requirements listed in Table 2.


Table 2. Requirements about samples

No.   Monitoring items                                       Denominator minimum
1     EVAP system, secondary air, cold start strategy, AC    75
2     PM filter, oxidation catalytic converter               25
3     Catalyst converter, O2 sensor, EGR, VVT, and others    150

There are 11 new cars among the samples, which reveal a gradual change in running habits over the first few months. These vehicles' increment steps for IGNCNTR and OBDCOND are fairly unusual, as the curves in Fig. 6 show.

Fig. 6. The scatter of new car’s OBDCOND and ignition counter

The curve for Car 12 is obviously nonlinear, such that a discrepancy exists if the IUPR is calculated when OBDCOND is 200 compared with the value when OBDCOND was 80. Car 7 is another, similar case of a nonlinear relation. Assuming a vehicle operates routinely with almost no significant variation, its IUPR converges to its authentic value when OBDCOND increases to about one thousand or several thousand. By taking snapshots of the IUPR values when a vehicle's OBDCOND reaches 75, 150, and near 1000, the divergence among these three can be calculated and evaluated for several monitoring items, as shown in Table 3. If such divergence is too prominent to be omitted, this case supports the necessity of a minimum denominator count. Among the three monitored components, Car 12 showed significant values in D75_M and D150_M for catalyst monitoring only, while Car 19 not only showed notable values in D75_M and D150_M, but also a distinctive gap between D75_M and D150_M for each. It is clear that these two vehicles would introduce deviation if taken as samples when their OBDCOND is merely in the dozens, and such instances prove that the requirement on


denominators' minimum number is indispensable to avoid such distortions. The sample-quantity requirement of "15" also contributes positively to compensating for such bias when an unusual sample is introduced.

Table 3. Two new cars' IUPR at two stages

Item             Car   IUPR theory value   IUPR (D=75)   D75_M   IUPR (D=150)   D150_M
Catalyst         12    0.066               0.24          0.174   0.133          0.067
Catalyst         19    1.13                0.707         0.423   0.86           0.27
O2 sensor        12    0.066               0.133         0.067   0.073          0.007
O2 sensor        19    1.33                0.8           0.53    0.913          0.417
Rear O2 sensor   12    0.197               0.227         0.03    0.207          0.01
Rear O2 sensor   19    0.933               0.693         0.24    0.773          0.16

3.3 The Recommendation About IUPR Limit 0.336

The IUPR limit 0.336 was suggested because CARB's target for social application aimed at "90 percent of the vehicles detecting malfunctions within 2 weeks". CARB made an investigation during 2000–2002: about 244 vehicles in 3 US cities were sampled and the daily frequency of f-trips (filtered trips) was analyzed, from which the limit 0.336 was selected. The data from the 51 Chinese vehicle samples provide a similar analysis opportunity for China, with an analogous procedure. First, count each vehicle's OBDCOND increment over a 14-day period to obtain a reference array. Second, multiply the reference array by a number x such that exactly 46 of the 51 elements become greater than 2; this x is taken as the proper limit. Here the analysis result is 0.336.
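The two-step selection of the limit can be sketched as follows, under my reading that x must scale the 14-day OBDCOND increments so that the target share of vehicles (46 of 51, about 90%) reaches more than 2 monitoring opportunities. The function name and this interpretation are assumptions, not the paper's code:

```python
def iupr_limit(two_week_increments, target_fraction=0.9):
    # Step 1: the caller supplies each vehicle's OBDCOND increment over a
    # 14-day period (the reference array).
    # Step 2: choose x so that the k-th most active vehicle (k = target
    # share of the fleet) just reaches 2 when scaled by x, i.e. roughly
    # that share of vehicles exceeds 2 events in two weeks at ratio x.
    k = round(target_fraction * len(two_week_increments))  # 46 for 51 vehicles
    kth_largest = sorted(two_week_increments, reverse=True)[k - 1]
    return 2.0 / kth_largest
```

For instance, with the hypothetical array [10, 8, 6, 4] and a 75% target, the third most active vehicle's increment is 6, giving a limit of 2/6.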

4 Against Defeat Devices for OBDCOND and Numerators

4.1 Verification of OBDCOND Increment

According to formula (1), a lower denominator yields a higher calculated IUPR, so it is necessary to adopt supervisory measures against defeat devices by which OEMs intentionally miss or delay the increment of denominators. A vehicle's denominators for the catalyst and O2 sensors mostly share the same value as its OBDCOND, so OBDCOND is an ideal target, as in SAE J1699-3's test and assessment. Here, 34 China 5b vehicles' velocity values, ignition signals, and real-time OBDCOND data streams were used to inspect the increment activity, and these samples showed no violations.

4.2 Verification of Numerators' Increment

Obviously, a higher numerator yields a higher score, so it is necessary to adopt supervisory measures against activity that pretends to have performed a diagnostic and increments the corresponding numerator. Taking the catalyst diagnostic as an example, the data stream provides an opportunity to inspect the increment activity of CATCOMP. The parameters necessary to judge a vehicle include its velocity, fuel injection flow rate, etc. Take the trip in Fig. 7 as an example: the moment when CATCOMP increments can be denoted t_catcomp, and the events occurring around t_catcomp (i.e., 30 s before or after that moment) can serve as proof of a catalyst diagnostic. As shown in Fig. 7, the two curves representing speed and fuel injection rate in the "zoom in" window show that an enrichment fuel injection actually occurred 5–15 s prior to t_catcomp. Since an enrichment fuel injection a few seconds before the catalyst diagnostic decision is a typical phenomenon, the CATCOMP increment in this trip is unlikely to be fraudulent. Other parameters besides the fuel injection rate signal, such as the rear O2 sensor voltage, can also support this judgment.

Fig. 7. Judge the increment of CATCOMP with trip segment data
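The plausibility check described above can be sketched as a search for an enrichment event in the window before t_catcomp. The function name, the (time, fuel-rate) sample format, and the enrichment threshold are illustrative assumptions rather than the paper's exact criteria:

```python
from statistics import median

def catcomp_plausible(samples, t_catcomp, window=30.0, enrich_factor=1.2):
    """samples: (time_s, fuel_rate) pairs from the trip data stream.
    Flag a CATCOMP increment at t_catcomp as plausible if the fuel
    injection rate rose well above the trip median shortly before it."""
    baseline = median(rate for _, rate in samples)
    return any(
        t_catcomp - window <= t <= t_catcomp and rate > enrich_factor * baseline
        for t, rate in samples
    )
```

In practice, cross-checking several signals (fuel rate, rear O2 sensor voltage) over the same window would make the judgment more robust than any single threshold.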

5 Conclusion

The relation between general denominator increment activity and vehicle driving conditions in daily running is investigated, and a technical approach to verifying in-use vehicles' increment logic for the numerator and denominator, the core elements of IUPR, is presented.


The running characteristics of 24 vehicles show that numerators increment more slowly in congested conditions than in common running, and denominators show the same trend. Parameters from 11 new cars show that the relation between OBDCOND and IGNCNTR for new cars can be nonlinear, which means that a minimum value for denominators is reasonable. The analysis based on 51 vehicles suggests that 0.336 could be a proper IUPR limit for China. In-use vehicles equipped with widgets can contribute valuable data to verify the compliance of the increment logic for numerators and denominators of certain vehicle types, such as CATCOMP and OBDCOND. A practical new method to fight against defeat devices is suggested.
Acknowledgements. 2017YFC0212101, High resolution vehicle emission component spectrum characterization and simulation technology.

References

1. CCR 1968.2: Malfunction and diagnostic system requirements—2004 and subsequent model-year passenger cars, light-duty trucks, and medium-duty vehicles and engines
2. Limits and measurement methods for emissions from light-duty vehicles (CHINA 5)
3. Daniel, R., Brooks, T., Pates, D.: Analysis of US and EU driving styles to improve understanding of market usage and the effects on OBD monitor IUMPR

Simulation and Analysis of Vehicle Performance Based on Different Cycle Conditions Yangmin Wu1,2(&), Zhien Liu1,2, and Guangwei Xi1,2

1 Hubei Key Laboratory of Advanced Technology for Automotive Components, Wuhan University of Technology, Wuhan 430070, China [email protected]
2 Hubei Collaborative Innovation Center for Automotive Components Technology, Wuhan 430070, China

Abstract. In this paper, a vehicle dynamics simulation model is established according to the structure of the vehicle and the performance parameters of each of its parts, and performance prediction models for the NEDC cycle, the WLTC cycle, and the China cycle (CHINA VI) are constructed. The differing effects of the three test cycles on vehicle dynamics, fuel economy, and emission performance are studied. The results provide a theoretical basis for matching engines and vehicles and for selecting among different emission technology routes.

Keywords: Cycle conditions · Vehicle performance · Simulation analysis · Performance prediction

1 Introduction

Power and fuel economy are the most basic and important performance attributes of an automobile. With constant attention to environmental pollution problems in various countries, the limits in emission regulations are becoming stricter and stricter, so automobile emission performance has also become a necessary standard for measuring overall vehicle performance. Normally, the reasonable matching of the automotive power system determines the power and fuel economy of the vehicle; however, according to the test standard, the vehicle's test cycle is usually performed on a chassis dynamometer, which differs considerably from real-vehicle road tests. Therefore, in order to obtain accurate data, real-vehicle road test experiments must be conducted. But if all experiments were carried out on the road, development costs would increase, so in the development stage the vehicle's emissions and power can be predicted and analyzed with the help of computer software. Chen et al. established an off-road vehicle model based on GT-DRIVE, simulated its power and fuel economy, and optimized it on the basis of the original engine. After simulating the NEDC cycle, they found that improving the diesel engine can effectively improve vehicle performance [1]. Yang Dong et al. used GT-DRIVE to simulate the emission test cycle of an off-road vehicle and obtained the operating conditions of
© Springer Nature Singapore Pte Ltd. 2019. J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1829–1840, 2019. https://doi.org/10.1007/978-981-13-3648-5_237


the engine at various time points during the vehicle's test cycle. Specific operating methods were used to obtain six typical operating points, which can be used to guide engine calibration experiments [2]. At present, emission studies on cycle conditions generally use a number of typical conditions to represent the overall cycle, and the NEDC cycle is generally selected. This cannot cope with the new China 6 emission regulation that China will adopt; the requirements of the WLTC cycle and the China driving cycle are rarely addressed. In this paper, a vehicle simulation model is established for the NEDC cycle, the WLTC cycle, and the cycle of the Limits and Measurement Methods for Emissions from Light Vehicles (CHINA VI) standard. The vehicle's characteristics under each cycle are predicted, and their differences are analyzed to provide suggestions for optimizing the calibration of new vehicles in China.

2 Driving Cycle

2.1 New European Driving Cycle (NEDC)

The New European Driving Cycle (NEDC) is used to evaluate the emission levels and fuel economy of passenger cars (excluding light trucks and commercial vehicles) during vehicle testing. In the emission regulations prior to the China 6 light-vehicle standard, this cycle was mainly used as the vehicle test cycle in China. The NEDC cycle consists of Part 1 (the urban operation cycle) and Part 2 (the suburban operation cycle). The cycle parameters are shown in Fig. 1 and Table 1; the total cycle time is 1180 s. The NEDC cycle contains more constant-speed driving and less acceleration and deceleration, and the acceleration and deceleration stages are uniform. Therefore, many countries have adopted other cycles in their latest emission regulations to better cover actual driving conditions.

Fig. 1. NEDC


Table 1. The parameters of NEDC

Number   Cyclic mileage (km)   Average speed (km/h)   Maximum speed (km/h)   Cycle time (s)
Part 1   4.052                 19                     50                     780
Part 2   6.955                 62.6                   120                    400

2.2 Worldwide Harmonized Light Vehicles Test Cycles (WLTC)

The Worldwide harmonized Light vehicles Test Cycle (WLTC) is the globally unified light-vehicle test cycle. It is based mainly on actual driving data for M1, M2, and N1 vehicles collected in Europe, the United States, Japan, South Korea, and India. Because the test cycle was developed from a large amount of data covering different road types and driving conditions, it is closer to actual driving. As shown in Fig. 2 and Tables 2 and 3, the WLTC cycle is divided into three classes according to the power-mass ratio, and light passenger cars basically belong to Class 3b. A complete WLTC cycle is divided into four parts: low speed, middle speed, high speed, and extra high speed. The cycle mileage is longer, acceleration and deceleration are more intense, the maximum speed is higher, there are more operating conditions, and the operating conditions of the vehicle are covered more widely.

Fig. 2. WLTC class 3b

Table 2. The three grades of WLTC

Class    Power-mass ratio  Maximum design speed
Class 1  Pmr ≤ 22          Vmax < 70 km/h / Vmax ≥ 70 km/h
Class 2  22 < Pmr ≤ 34     Vmax < 90 km/h / Vmax ≥ 90 km/h
Class 3  Pmr > 34          Vmax < 135 km/h / Vmax ≥ 135 km/h

Y. Wu et al.

Table 3. The parameters of WLTC

Number        Cyclic mileage (km)  Average speed (km/h)  Maximum speed (km/h)  Cycle length (s)
Low speed     3.09                 18.9                  56.5                  589
Middle speed  4.76                 39.5                  76.6                  433
High speed    7.16                 56.7                  97.4                  455
Over speed    8.25                 92.0                  131.3                 323
Total         23.27                46.5                  131.3                 1800

2.3 China Light Vehicles Test Cycles

China began implementing light vehicle emission standards in 2000, followed by the 2nd, 3rd, 4th, and 5th national standards. The emission control levels of these standards are equivalent to those implemented in Europe, but implementation in China lagged behind. In order to further control pollutant emissions from motor vehicles, strengthen pollution prevention and control, and promote the technological upgrading of the automotive industry to protect the environment and human health, in December 2016 the Ministry of Environmental Protection and the General Administration of Quality Supervision, Inspection and Quarantine jointly issued the China VI standard for light vehicles. From July 1, 2020, all light vehicles sold and registered must meet the requirements of this standard [3]. In addition to drawing on the emission standards of the United States and Japan and continuing the EU's emission standards, the China VI standard for light vehicles also incorporates actual conditions in China, supported by designed experiments and widely solicited opinions [4]. The China light vehicles test cycle is shown in Fig. 3. The cycle includes low-speed, medium-speed, high-speed, and extra-high-speed segments, and the test time is the same as that of the WLTC cycle; however, some of its speed curves are smoother. The China test cycle is based on road traffic test data from more than 20 cities, so it reflects Chinese driving habits and working conditions, is closer to China's actual national conditions, and is of great significance.

Fig. 3. China light vehicles test cycles


3 The Model of Simulation

In the process of building the whole vehicle simulation model, the corresponding modules are first selected, each component module is constructed separately, and each component is modeled parametrically. The vehicle power transmission path is usually Engine - Clutch - Transmission - Main Reducer - Differential - Drive Shaft - Brake - Wheel. Then, according to the path and control requirements of the power transmission, connection functions are used to establish the physical or signal connections between the modules, and the simulation model of the whole vehicle is completed at the end.

3.1 Engine Model

The basic data included in the engine model are the engine type, displacement, idle speed, engine rotational inertia, fuel density, and fuel calorific value. The engine's BMEP, BSFC, and FMEP are all entered into the module as MAP diagrams. In the kinematics mode, the vehicle speed curve or cycle condition is input, and the corresponding engine speed and torque are calculated in conjunction with the other defined vehicle components. The fuel consumption rate and emission characteristics in this state are obtained by linear interpolation of the fuel consumption rate MAP and the emission MAP. The basic engine parameters of this article are shown in Table 4.

Table 4. The basic parameters of engine

Item                         Value
Engine type                  L4, naturally aspirated
Compression ratio            10.3
Displacement (L)             1.599
Maximum power (kW)           82
Maximum torque (N m)         147
Firing order                 1-3-4-2
Idle speed (r/min)           800
Maximum speed (r/min)        5800
Fuel calorific value (J/kg)  4.35E7
Fuel density (kg/m3)         756
Rotation inertia (kg m2)     0.25
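The linear interpolation over the engine MAPs described above can be sketched as a bilinear lookup on a speed-load grid. The grid values below are purely illustrative, not this engine's measured BSFC MAP:

```python
import bisect

def bilinear(xs, ys, table, x, y):
    """Bilinear interpolation on a rectangular grid.
    xs, ys: ascending axis values; table[i][j] = value at (xs[i], ys[j])."""
    i = max(0, min(bisect.bisect_right(xs, x) - 1, len(xs) - 2))
    j = max(0, min(bisect.bisect_right(ys, y) - 1, len(ys) - 2))
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return (table[i][j] * (1 - tx) * (1 - ty)
            + table[i + 1][j] * tx * (1 - ty)
            + table[i][j + 1] * (1 - tx) * ty
            + table[i + 1][j + 1] * tx * ty)

# illustrative BSFC MAP: rows = engine speed (r/min), cols = torque (N m)
speeds = [800, 2000, 4000, 5800]
torques = [20, 80, 147]
bsfc = [[420, 320, 300],   # g/kWh values are assumed, for demonstration only
        [380, 280, 260],
        [390, 270, 255],
        [430, 300, 280]]
```

At each simulated operating point (speed, torque), the kinematic solver would query such a table to obtain the instantaneous fuel consumption rate.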

3.2 Clutch Model

The clutch module mainly transmits and cuts off the power. If the transmission is manual, the clutch is controlled by the driver module; if the transmission is automatic, the clutch is controlled by an external module [5]. The basic information of the clutch is shown in Table 5. The formula for the torque T transmitted by the clutch is as follows:

Table 5. The parameters of clutch

Item                                Value
Type                                Dry monolithic, diaphragm spring
Effective radius (mm)               90
Maximum static clutch torque (N m)  300

T = Tmax · fn · μ/μs    (1)

in which Tmax is the maximum static clutch torque, fn is the clutch load, and μs is the maximum static friction coefficient of the clutch; in this paper, the wet friction coefficient μ is set to 0.3.
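Equation (1) can be evaluated directly. Note the paper gives μ = 0.3 and Tmax = 300 N m but does not state μs, so the value used here is an illustrative assumption:

```python
def clutch_torque(f_n, t_max=300.0, mu=0.3, mu_s=0.35):
    """Torque transmitted by the clutch, Eq. (1): T = Tmax * fn * mu / mu_s.
    f_n:   clutch load (engagement fraction, 0..1)
    t_max: maximum static clutch torque (N m), from Table 5
    mu:    wet friction coefficient (0.3 in this paper)
    mu_s:  maximum static friction coefficient (0.35 is an assumed value,
           not given in the paper)."""
    return t_max * f_n * mu / mu_s
```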

3.3 Gearbox Model

The gearbox model requires the number of gears, the speed ratio of each gear, the moment of inertia, and other user-defined parameters. The torque and speed output by the engine crankshaft are changed through the different gears to achieve the vehicle speed and driving force required under different driving conditions. This module only includes the gear ratios, gear efficiencies, and other transmission information; the actual shift strategy is controlled by the driver model. The basic parameters of the gearbox are shown in Table 6.

Table 6. The parameters of gearbox

Gear number  Gear ratio
1            3.46
2            1.887
3            1.26
4            0.95
5            0.78

3.4 Vehicle Body Model

The basic parameters of the car body are shown in Table 7. The basic data contained in the car body module include the total vehicle mass, passenger and cargo mass, initial speed, drag coefficient, front and rear wheelbase, etc. The body module calculates the road resistance and dynamic load according to the drag coefficient. The maximum climbable gradient, maximum acceleration, and maximum speed of the vehicle in the static model are calculated from the basic parameters of the vehicle body. The driving equation of a vehicle with a stepped fixed-ratio transmission is as follows:


Table 7. The parameters of vehicle body

Item                       Value
Vehicle mass (kg)          1450
Vehicle frontal area (m2)  2.5
Vehicle drag coefficient   0.32
Vehicle wheelbase (m)      2.725

Ft = Ff + Fw + Fi + Fj    (2)

That is:

(Ttq · ig · i0 · ηT) / r = G·f·cos α + (CD·A / 21.15)·ua² + G·sin α + δ·m·(du/dt)    (3)

in which Ft is the driving force (N); Ff is the rolling resistance (N); Fw is the air resistance (N); Fi is the grade resistance (N); Fj is the acceleration resistance (N); Ttq is the engine torque (N m); ig is the transmission ratio of the transmission; i0 is the drive ratio of the main reducer; ηT is the mechanical efficiency of the transmission system; r is the rolling radius; G is the vehicle gravity (N); CD is the air resistance coefficient; A is the frontal area (m2); f is the rolling resistance coefficient; α is the slope angle; ua is the vehicle speed (km/h); du/dt is the vehicle's acceleration (m/s2); δ is the rotary mass conversion coefficient.
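The resistance side of Eq. (3) can be evaluated with the Table 7 parameters. The rolling resistance coefficient f is not given in the paper, so the 0.015 below is an illustrative assumption:

```python
import math

def running_resistance(v_kmh, m=1450.0, cd=0.32, area=2.5,
                       f=0.015, slope=0.0, g=9.8):
    """Steady-state running resistance (N), the right-hand side of Eq. (3)
    without the acceleration term: rolling + air + grade resistance.
    v_kmh: vehicle speed ua (km/h); slope: road angle alpha (rad).
    f = 0.015 is an assumed rolling resistance coefficient."""
    weight = m * g                        # G, vehicle gravity (N)
    rolling = weight * f * math.cos(slope)
    air = cd * area / 21.15 * v_kmh ** 2  # CD * A / 21.15 * ua^2
    grade = weight * math.sin(slope)
    return rolling + air + grade
```

Sweeping this function over speed and intersecting it with the 5th-gear driving force curve would locate the maximum speed discussed in Sect. 4.1.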

3.5 Vehicle Driver Model

The basic parameters that the driver module needs as input include the accelerator pedal position, brake pedal position, clutch pedal position, and shift strategy. The pedal positions are entered in table form to simulate the driver's operation during actual vehicle running. The shift strategy is controlled by engine speed, as shown in Table 8. The driver module serves as the main controller for the system input; it controls the accelerator pedal, brake pedal, and clutch position, and also selects the manual transmission gear.

Table 8. Driver shift strategy

Gear                                       1     2     3     4     5
Angular speed at gear up-shifts (r/min)    3500  2800  2500  2500  -
Angular speed at gear down-shifts (r/min)  -     2000  1800  1800  1800
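Table 8's engine-speed shift strategy amounts to a simple rule: shift up when the engine speed exceeds the current gear's up-shift threshold, shift down when it falls below the down-shift threshold. A sketch (thresholds taken from Table 8, assumed to be in r/min):

```python
UP_SHIFT = {1: 3500, 2: 2800, 3: 2500, 4: 2500}    # r/min, from Table 8
DOWN_SHIFT = {2: 2000, 3: 1800, 4: 1800, 5: 1800}  # r/min, from Table 8

def next_gear(gear, engine_rpm):
    """Return the gear the driver model selects at this engine speed."""
    if gear in UP_SHIFT and engine_rpm >= UP_SHIFT[gear]:
        return gear + 1
    if gear in DOWN_SHIFT and engine_rpm <= DOWN_SHIFT[gear]:
        return gear - 1
    return gear
```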

3.6 Vehicle Model

Figure 4 shows the complete vehicle model. The vehicle power transmission system is mainly composed of the engine, clutch, gearbox, transmission shaft, main reducer, brake, tire, road, and vehicle body. In the model, 1 is the engine module, used to simulate engine operating conditions; 2 is the clutch module, which controls the shift process; 3 is the transmission module; 4 is the transmission shaft, which transmits power and torque; 5 is the main reducer; 6 is the brake module, which performs the braking operation according to signals from the driver and the controller; 7 is the tire module, which simulates the tire condition; 8 is the road surface module, which simulates the actual road surface state; 9 is the vehicle model, used to simulate the state of the vehicle; 10 is the environment model, which describes the air properties, simulates the ambient temperature and pressure, and is used to calculate the drag force; 11 is the driver module, which controls the acceleration, deceleration, and shifting of the vehicle during driving.

Fig. 4. Vehicle model

3.7 Test Cycle Model

The economy simulation uses the kinematic calculation mode: the user inputs the speed curve or cycle driving condition together with the road condition, the load transferred from the road to the vehicle is calculated, and a reverse solution yields the torque, speed, and fuel consumption rate of the engine and drive system, giving the fuel consumption per 100 km or the constant-speed fuel consumption needed to satisfy the speed curve. Alternatively, the dynamic calculation mode can be adopted: the dynamic response of the vehicle is calculated from the engine throttle position, the clutch position, the brake action, and so on. In this mode the motion of the vehicle is controlled by the accelerator action and, unlike the kinematic model, a forward calculation finally gives the operation of the vehicle [6]. In the kinematic computation mode, the vehicle's running state can be defined by the controller, and vehicle speed control, acceleration control, or mixed-mode control can be selected. This paper studies the economy and emission performance over cycle driving conditions, so the vehicle speed control model is selected [7]. The test cycle needs a defined driving condition. Based on the cycle data, the NEDC cycle, the WLTC cycle, and the China cycle condition are defined respectively. The China cycle conditions are based on Sect. 2.3 of this article, and the speed (km/h) versus time (s) cycle form is used as input to the vehicle speed control model.

4 Results

4.1 Dynamics

As shown in Fig. 5, the driving force-running resistance balance diagram, the acceleration chart, and the climbable gradient map are calculated. When the speed of the car is lower than the maximum speed, the driving force is greater than the driving resistance; the maximum speed is the highest speed the car can reach on a good road, given by the intersection of the engine's 5th-gear driving force curve and the resistance curve. The maximum speed of the vehicle is 190.58 km/h, and the corresponding power is 79.47 kW. From the graph, the maximum acceleration in 1st gear is 3.26 m/s2, indicating large acceleration and good dynamic performance. The maximum climbable gradient usually refers to the maximum gradient in first gear; from the chart, the maximum climbable gradient in first gear is 56.6%, which provides sufficient climbing ability.

Fig. 5. Driving force diagram, acceleration chart, slope gradient diagram

4.2 Economy

The economic indicators mainly include the constant-speed fuel economy and the fuel economy over driving cycles [8]. The constant-speed fuel consumption per 100 km is defined as the fuel consumed by the vehicle running 100 km at a constant speed in top gear in a good road environment. It cannot reflect the fuel economy of the vehicle in actual driving, so the cycle fuel consumption per 100 km is proposed, which reflects the economy of the vehicle under different driving conditions. Figure 6 shows the fuel consumption per 100 km for the three cycles. It can be seen from the figure that the NEDC cycle has higher fuel consumption in the urban part and lower fuel consumption in the suburban part, while the WLTC cycle and the China cycle show the highest fuel consumption around their lowest-speed segments.
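Cycle fuel consumption per 100 km is the integrated fuel volume divided by the integrated distance. A minimal sketch (the traces below are synthetic, not simulation output):

```python
def litres_per_100km(time_s, speed_kmh, fuel_lph):
    """Integrate fuel volume (L) and distance (km) over a cycle trace
    with the trapezoidal rule, then normalize to 100 km."""
    fuel = dist = 0.0
    for i in range(1, len(time_s)):
        dt_h = (time_s[i] - time_s[i - 1]) / 3600.0
        fuel += (fuel_lph[i] + fuel_lph[i - 1]) / 2.0 * dt_h
        dist += (speed_kmh[i] + speed_kmh[i - 1]) / 2.0 * dt_h
    return 100.0 * fuel / dist

# synthetic example: one hour at a steady 50 km/h burning 3 L/h
t = [0, 1800, 3600]
v = [50.0, 50.0, 50.0]
q = [3.0, 3.0, 3.0]
```

In the kinematic mode, the fuel-rate trace would come from the BSFC MAP lookup at each operating point of the cycle.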


Fig. 6. Economy of the three cycles

As shown in Table 9, the fuel consumption per 100 km is 7.7 L/100 km under the NEDC cycle, 5.8 L/100 km under the WLTC cycle, and 5.6 L/100 km under the China cycle. The NEDC cycle fuel consumption is higher than that of the WLTC cycle and the China cycle because the latter two increase the cycle time by 620 s compared with the NEDC cycle, which shortens the proportion of cold-start time and the percentage of parking time [9]. The average driving speed also increases from 44.1 to 53.2 km/h, closer to the economic speed, and the number of short trips is reduced from 13 to 8, so the economy improves.

Table 9. The fuel consumption

Item                    Value
NEDC cycle (L/100 km)   7.7
WLTC cycle (L/100 km)   5.8
China cycle (L/100 km)  5.6

4.3 Emission

Figures 7, 8 and 9 show the emissions over the three cycles, and Table 10 lists the emissions per kilometre under the three cycle conditions; panels (a), (b), (c), and (d) show the CO, HC, NOx, and soot emissions versus cycle time, respectively. As can be seen from the figures, in the acceleration and deceleration phases the driving state of the vehicle changes, and the emissions of CO, HC, etc. all increase significantly. Since the working conditions of the NEDC cycle change little and most phases are uniform acceleration or deceleration, the urban part of the cycle shows a certain regularity. The other cycles are more complex than the NEDC cycle because their driving conditions and speeds change more, so their emissions change more quickly; but their coverage of driving conditions is wider and better reflects the complex conditions of the actual driving process, so the prediction is closer to actual operation, which is also the Chinese characteristic of the China cycle [10]. The HC emissions of the NEDC cycle are slightly lower in the high-speed segment, and the CO emissions of the NEDC cycle and the WLTC cycle are lower in the high-speed segment. The main reasons why the emissions of the NEDC cycle are above those of the other two cycles are the long duration of the cold start and the incomplete combustion of large quantities of fuel, which increase emissions. In the high-speed segment, the WLTC cycle and the China cycle accelerate significantly within a shorter time than the NEDC cycle, resulting in lower emissions.

Fig. 7. The emission of NEDC cycle

Fig. 8. The emission of WLTC cycle

Fig. 9. The emission of China cycle

Table 10. The emission of three cycles

Emission     CO (g/km)  HC (g/km)  NOx (g/km)  Soot (g/km)
NEDC cycle   7.47       0.55       1.05        4.3e-3
WLTC cycle   3.31       0.24       0.42        2.0e-3
China cycle  3.71       0.18       0.74        1.7e-3


5 Conclusion

(1) Through the parametric modeling method, combined with the theory of each part, the engine model, clutch model, vehicle body model, and cycle prediction model were established respectively, and the whole vehicle model was built.

(2) Through kinematic and dynamic calculation methods, a complete vehicle performance prediction model was constructed to predict the dynamic performance of the entire vehicle. Through the emission prediction models, the emission performance of the vehicle over the NEDC cycle, the WLTC cycle, and the China cycle was predicted. The cold-start time and the acceleration time of the NEDC cycle are longer, so its economy and emission performance are poorer; the WLTC cycle and the China cycle perform better than the NEDC cycle, and their economy is also better. The prediction of vehicle dynamics, economy, and emissions over the NEDC cycle, the WLTC cycle, and the China cycle is of great significance for selecting a technical route that satisfies the requirements of China's VI national standard, and can provide suggestions for the optimization of new vehicles in China.

References

1. Chen, C., Zhao, W.: Performance simulation and optimization of off-roader. Mach. Des. Manuf. 8, 81–83 (2011)
2. Yang, D., Xu, Y., et al.: A method to study engine emission of light-duty vehicle based on GT-DRIVE. Combustion Energy Purification Branch of China Internal Combustion Engine Society (2011)
3. GB 18352.6-2016: Limits and measurement methods for emissions from light vehicles (CHINA 6)
4. Wu, C., Zhao, L., et al.: Introduction to China VI limit and measurement methods for emissions from light-duty vehicles. Intern. Combust. Engine Parts 7, 28–30 (2017)
5. Chen, C., Wang, W.: Simulation and analysis of off-roader dynamic and fuel economy. J. Nanchang Univ. (Eng. Technol.) 32(4), 339–343 (2010)
6. Sufen, Y.: City driving cycle research and matching optimization of power train system. Wuhan University of Technology (2013)
7. Prieto, N., Uttaro, B., Mapiye, C., Turner, T.D., Dugan, M.E.R., Zamora, V., Young, M., Beltranena, E.: Meat Sci. 98(4), 585 (2014)
8. Riovanto, R., De Marchi, M., Cassandro, M., Penasa, M.: Food Chem. 134(4), 2459 (2012)
9. Prieto, N., Dugan, M.E.R., López-Campos, O., McAllister, T.A., Aalhus, J.L., Uttaro, B.: Meat Sci. 90(1), 43 (2012)
10. Prieto, N., López-Campos, Ó., Aalhus, J.L., Dugan, M.E.R., Juárez, M., Uttaro, B.: Meat Sci. 98(2), 279 (2014)

Comparison Research on Lightweight Plan of Automotive Door Based on Life Cycle Assessment (LCA)

Yusong He and Yiwen Xie

Passenger Vehicle, SAIC Motor Corporation Limited, Shanghai 201804, China
[email protected], [email protected]

Abstract. With the rapid development of the automobile industry, problems of industrial energy and environment are becoming increasingly prominent. The overuse of fossil energy and the exhaust emissions during vehicle driving are mainly responsible for PM pollution and photochemical events (Shi et al. in Environ. Sci. 1105–1116, 2015 [1]). The Chinese government is pushing ahead with the implementation of automotive eco-design, and the policy of energy saving and emission reduction is strengthened day by day. Lightweight automobile materials are one of the ways automobile manufacturers save energy and reduce emissions.

Keywords: Eco-design · Quantitative analysis · Lightweight design · Life cycle assessment

In this paper, a whole life cycle assessment (LCA) of a new automobile door was carried out for two different material designs, before and after lightweighting. The comprehensive environmental impact of the pure steel design and the aluminum design over the whole life cycle of the door is analyzed. Through modeling analysis, it is concluded that, under the same conditions, the carbon emissions of the two designs reach a balance point when the mileage reaches 129,234 km, after which the advantage of the lightweight aluminum door technology is gradually reflected. The research method and conclusions of this paper can serve as a reference or guide for quantifying lightweight material selection and the comprehensive environmental influence in the process of automobile product design.

1 Preface

Promoting the construction of ecological civilization and taking the road of green development have become important parts of China's development. China has taken green development and the construction of ecological civilization as an important part of the national development plan, and developing a green economy has become a requirement of the new era of economic development [2]. Therefore, China has successively issued policies that add life cycle assessment (hereinafter referred to as "LCA") to the national policy system. From 2015, the State Council issued policies such as […], and the Ministry of Industry and Information Technology issued the "Green Industrial Development Plan (2016–2020)", the "Green Manufacturing Engineering Implementation Guidelines (2016–2020)", and other rules. Green development is not only the road of China's new era but also the direction of global enterprises: HP, for example, has integrated environmental protection concepts into enterprise development and operation, and Baosteel has set up its own LCA team to conduct life cycle assessment. LCA is a method used to evaluate the environmental impact of a product (or service) during its entire life cycle, that is, from the acquisition of raw materials, through the production and use of the product, to its disposal after use. According to ISO 14040 [3], the method includes four interrelated steps (Fig. 1).

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1841–1850, 2019.
https://doi.org/10.1007/978-981-13-3648-5_238

Fig. 1. LCA frame diagram

2 LCA of Automobile Doors

2.1 Goal and Scope Definition

This research selected a new energy automobile door (one set) as the analysis object, analyzed the whole life cycle of the door (including raw material acquisition, manufacturing, the driving stage, and recovery) using LCA evaluation software and databases, obtained a comparison of environmental impact emissions and the main emission factors, and compared the differences and advantages of the two designs, in order to improve the design, optimize the production process, and reduce the environmental impact.

1. Based on the traditional steel body design, with the rest of the body parts unchanged, the advantages and disadvantages of the replacement lightweight door schemes are compared under the following situations, supposing the automobile is scrapped after a 150 thousand km driving phase.
2. Analyze and compare the environmental impact when the raw aluminum of the aluminum door is replaced with domestic hydroelectric aluminum.
3. This research did not include the environmental impact of the machinery and equipment, factory construction, human resources, and living facilities involved in the production process.

The information of the two designs is shown in Table 1.


Table 1. Designs of a new energy automobile door

Design       Weight
Steel doors  61.18 kg
Al doors     38.46 kg

The production process and the LCA boundary of the two designs are shown in Figs. 2 and 3.

Fig. 2. The production process and the LCA boundary of the steel design

2.2 Inventory Analysis

The inventory analysis started with the acquisition of raw materials and terminated at the recovery stage of the products. The quantified content of the research includes resources, energy consumption, and the waste gas, liquid, and solids discharged into the environment. The data include primary data and secondary data. Primary data, such as the BOM, were directly provided by manufacturers and suppliers; the data are actual site values, and the data quality is high. The fuel economy of the vehicle uses the enterprise's measured value. Secondary data, such as external power and energy and raw materials (steel, aluminum, etc.), are derived from the German GaBi database. The data model takes full account of China's power structure and thermal efficiency, and the average of the China power grid was finally used.

Fig. 3. The production process and the LCA boundary of the aluminum design

2.3 Impact Assessment

The environmental indicators selected in this study fall into two categories. (1) The inventory analysis results select the main life cycle inventory emissions: carbon dioxide, sulfur dioxide, nitrogen oxides, and PM2.5. (2) The characterized impact categories mainly use the CML2001 [4] evaluation method (the guiding index of the China Eco-Car Assessment Programme, C-ECAP); the impact types include global warming potential, acidification, photochemical oxidant creation, eutrophication, and ozone depletion. The specific impacts are quantified according to the CML2001 classification method.
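CML2001 characterization multiplies each inventory flow by a substance-specific factor and sums the result per impact category. A sketch with illustrative factors (the real CML2001 factor tables ship with the LCA software):

```python
# illustrative CML2001-style GWP100 factors (kg CO2 eq. per kg emitted);
# real factors come from the CML2001 tables in the LCA software
GWP_FACTORS = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}

def characterize(inventory, factors):
    """Sum inventory flows (kg) weighted by characterization factors."""
    return sum(mass * factors.get(substance, 0.0)
               for substance, mass in inventory.items())

inventory = {"CO2": 100.0, "CH4": 2.0}  # hypothetical inventory result
```

The same pattern, with different factor tables, yields the acidification, eutrophication, ozone depletion, and photochemical ozone creation potentials reported below.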

3 Modeling and Analysis of the Environmental Impact of the Auto Door

Supposing the automobile is scrapped after a 150 thousand km driving phase, the environmental impacts and factors of the two schemes were analyzed through LCA modeling.

3.1 Analysis of Environmental Impact Emissions at Different Stages of the Traditional Steel Design

The environmental impact of each phase of the steel design is shown in Table 2 and Fig. 4.

Table 2. The environmental impact of each phase of the steel design (absolute value)

Environmental impact indicators         Value        Production phase  Using phase  Recovery phase  Unit
Carbon dioxide                          657.88       147.8070          597.3170     −87.2399        kg
Sulfur dioxide                          1.92         0.3494            1.6739       −0.1023         kg
Nitrogen oxide                          1.61         0.3551            1.3566       −0.0978         kg
PM2.5                                   0.18         0.0850            0.1039       −0.0136         kg
Acidification potential                 3.14         0.6017            2.7074       −0.1706         kg SO2 eq.
Eutrophic potential                     0.23         0.0514            0.1955       −0.0128         kg Phosphate eq.
Global warming potential                706.85       153.9920          646.4240     −93.5675        kg CO2 eq.
Ozone depletion potential               5.52353E−07  3.21E−09          1.43E−10     5.49E−07        kg R11 eq.
Photochemical ozone creation potential  0.29         0.2563            0.0730       −0.0433         kg Ethene eq.

Fig. 4. The environmental impact of each phase of the steel design (driving 150,000 km)


The overall environmental impact of the steel design shows that the using phase is higher than the other phases in eight of the environmental indicators; the exception is the ozone depletion potential. In particular, the global warming potential emission of the using phase is 597.3 kgCO2e, a share of 91.45%. The ozone depletion potential appears mainly in the recovery phase, with a percentage of 99.39%; it is derived mainly from the recycling of steel waste.

3.2 Analysis of Environmental Impact Emissions at Different Stages of the Lightweight Aluminum Design

The environmental impact of the lightweight aluminum design in the various phases is shown in Table 3 and Fig. 5.

Table 3. The environmental impact of each phase of the aluminum design (absolute value)

Environmental impact indicators         Value       Production phase  Using phase  Recovery phase  Unit
Carbon dioxide                          616.60      340.9860          377.8030     −102.1848       kg
Sulfur dioxide                          2.02        1.1674            1.0587       −0.2106         kg
Nitrogen oxide                          1.44        0.7476            0.8581       −0.1707         kg
PM2.5                                   0.08        0.0565            0.0657       −0.0416         kg
Acidification potential                 3.19        1.8221            1.7124       −0.3434         kg SO2 eq.
Eutrophic potential                     0.21        0.1101            0.1237       −0.0245         kg Phosphate eq.
Global warming potential                679.08      379.4340          408.8630     −109.2220       kg CO2 eq.
Ozone depletion potential               4.0832E−09  4.48E−09          9.02E−11     −4.87E−10       kg R11 eq.
Photochemical ozone creation potential  0.24        0.1621            0.1071       −0.0297         kg Ethene eq.


Fig. 5. The environmental impact of each phase of the aluminum design (driving 150,000 km)


According to the overall environmental impact of the aluminum design, emissions in the production phase and the using phase are almost equal. The main reason is that aluminum production is more energy-consuming, while the aluminum door is lighter and consumes less energy during the using phase, so emissions grow more slowly in use. Taking the GWP as an example, 379 kgCO2e is emitted in the production phase and 408 kgCO2e in the using phase.

3.3 Comparative Analysis of Environmental Impact of Two Kinds of Door Designs

In order to compare the advantages and disadvantages of the two designs, three different situations were selected for analysis and comparison.

Situation 1: Comparison of the steel door and the aluminum door when driving 150 thousand kilometres.

Situation 2: After how many kilometres of driving can the environmental impact of the aluminum doors balance that of the steel doors; that is, where is the balance point?

Situation 3: Analyze and compare the carbon emissions of the production phase when the raw aluminum of the aluminum door is replaced with domestic hydroelectric aluminum.

3.3.1 Situation 1: Driving 150 Thousand km
As Table 4 and Fig. 6 show, the environmental impact indexes of the aluminum design are superior to those of the steel design except for sulfur dioxide and acidification potential. The main reason is that the emissions in the production phase of the aluminum door are higher than those of the steel door, and sulfur dioxide and acidification potential are positively correlated with them. Therefore, the aluminum design is inferior to the steel design in these two indicators.

Table 4. Comparison on the results of the whole life cycle environmental impact between two designs (driving 150,000 km, including recovery)

Environmental impact indicators         Steel design  Aluminum design  Unit
Carbon dioxide                          657.88        616.60           kg
Sulfur dioxide                          1.92          2.02             kg
Nitrogen oxide                          1.61          1.44             kg
PM2.5                                   0.18          0.08             kg
Acidification potential                 3.14          3.19             kg SO2 eq.
Eutrophic potential                     0.23          0.21             kg Phosphate eq.
Global warming potential                706.85        679.08           kg CO2 eq.
Ozone depletion potential               5.52353E−07   4.0832E−09       kg R11 eq.
Photochemical ozone creation potential  0.29          0.24             kg Ethene eq.


Fig. 6. Comparison of the environmental impact of the two designs in the whole life cycle (driving 150,000 km, including recovery)

3.3.2 Situation 2: Analysis of the Environmental Impact Balance Point of the Two Designs
Taking the carbon emission index as an example, a mileage simulation of the door life cycle was carried out using the scenario analysis function of the LCA software, and the balance point of the carbon emissions of the two kinds of doors was obtained. It can be seen from Fig. 7 that the steel design is superior to the aluminum design in terms of environmental friendliness in the initial stage of driving. Under the same conditions, when the mileage reaches 129,234 km, the carbon emissions of the two designs strike a balance; thereafter, the advantages of the aluminum design gradually become prominent. This result is in line with the trend of automobile lightweighting.


Fig. 7. The balance point trend diagram of carbon emission of two designs (analyze until recovery)
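The balance point can be approximated linearly from the GWP figures in Tables 2 and 3: the aluminum door's extra production emissions are paid back by its lower per-kilometre use-phase emissions. This back-of-envelope estimate ignores the recovery phase and any non-linearity the LCA software's scenario analysis accounts for, so it lands near, not exactly at, the reported 129,234 km:

```python
def breakeven_km(prod_a, use_a, prod_b, use_b, ref_km=150_000.0):
    """Mileage at which design B's cumulative emissions drop below design A's.
    prod_*: production-phase GWP (kg CO2 eq.)
    use_*:  use-phase GWP over ref_km (kg CO2 eq.), assumed linear in mileage."""
    per_km_a = use_a / ref_km
    per_km_b = use_b / ref_km
    return (prod_b - prod_a) / (per_km_a - per_km_b)

# GWP values from Tables 2 and 3 (steel = A, aluminum = B)
km = breakeven_km(153.992, 646.424, 379.434, 408.863)
```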

Comparison Research on Lightweight Plan of Automotive Door …


3.3.3 Situation 3: Replacing the Raw Material of the Aluminum Door with Domestic Hydroelectric Aluminum
At present, in domestic and foreign markets, the raw aluminum used in aluminum doors is mostly produced with thermal power, so the corresponding environmental emissions are relatively high. To choose a greener raw material, we assumed in the modeling analysis that hydroelectric aluminum supplied by a hydroelectric aluminum manufacturer in China is used in place of the original material. Taking carbon emission as an example, the production phases of the steel design and the aluminum design were compared. As shown in Fig. 8, the carbon emission of the aluminum door in the production phase is 84 kgCO2e, about 78% lower than that with the original thermal-power aluminum from Japan (379 kgCO2e).

Carbon emission in the production phase [kg CO2 eq.]: steel door 135.37; aluminum door (hydroelectric aluminum, China) 84.49; aluminum door (raw aluminum, Japan) 379.43.

Fig. 8. Environmental impact comparison of two kinds of aluminum used in car doors

4 Conclusion
Through LCA modeling and comparative analysis, the environmental impact of the aluminum design proves better than that of the steel design. The advantages are as follows:
(1) Energy saving: According to the weight-reduction and energy-saving coefficient of the new energy automobile, the aluminum design saves a total of 261.75 kWh over the 150,000 km life cycle compared with the steel design.
(2) Life cycle inventory emissions: Compared with the steel design, the aluminum design emits 699.81 kg less carbon dioxide, 1.75 kg less sulfur dioxide, 1.67 kg less nitrogen oxides and 0.21 kg less PM2.5.
(3) Carbon emission: The greenhouse gas emission of the aluminum design is 740.45 kgCO2e less than that of the steel design.
(4) Balance point analysis: The steel design is superior to the aluminum design in environmental friendliness in the initial stage of driving. When the mileage reaches 129,234 km, the carbon emissions of the two designs reach a balance; beyond this point, the advantages of the aluminum design become increasingly prominent. This result is in line with the trend toward automobile lightweighting. In particular, when thermal-power aluminum is replaced by hydroelectric aluminum, the aluminum design outperforms the steel design from the very beginning of the use phase, which has important reference value for the selection of raw material suppliers.
(5) Aluminum replacement analysis: In this study, hydroelectric aluminum was used to replace Japanese thermal-power aluminum in the modeling analysis. The carbon emission of the aluminum door in the production phase is 84 kgCO2e, about 78% lower than that with the original thermal-power aluminum from Japan (379 kgCO2e).

References
1. Shi, X., Sun, Z., Li, X., et al.: Comparative study on life cycle environmental impacts of electric taxis and fuel taxis in Beijing. Environ. Sci. (3), 1105–1116 (2015)
2. Wu, C.: Reflections on green development and ecological civilization construction. Commer. Econ. 04, 4–5 (2017)
3. ISO 14040:2006, Environmental Management-Life Cycle Assessment-Principles and Framework
4. Benetto, E., Becker, M., Welfring, J.: Life cycle assessment of Oriented Strand Boards (OSB): from process innovation to ecodesign. Environ. Sci. Technol. 43(15), 6003–6009 (2009)

The Analysis of the New Energy Buses Operating Condition in the North China

Xiaoqin Yang1, Lu Zhang1(&), Yuze Zhang1, and Qiang Lu1,2

1 Transportation Institute, Inner Mongolia University, Huhhot 010070, China
[email protected]
2 School of Automotive Engineering, Dalian University of Technology, Dalian 116024, China

Abstract. In recent years, new energy buses have developed rapidly with the support of the government's new energy policy. Only by accurately understanding the operating conditions of new energy buses can their production and application be better supported. Besides a vehicle's own characteristics, its operating conditions are influenced by the operating environment, such as atmospheric pressure, climate and load. Therefore, this research addresses new energy buses in north China, where the altitude is high and the temperature is low. The data of new energy buses were collected in real time by means of a GPS speed sensor and a vehicle terminal. Using the SPSS software, three typical working conditions were obtained based on principal component analysis and the K-means clustering method: the suburban road condition, the urban bus lane condition and the urban congested road condition.

Keywords: New energy car · Operating condition · Principal component analysis · Clustering method

1 Introduction
New energy bus sales were 14,300 in 2014, 75,000 in 2015 and 160,000 in 2016. Influenced by the new energy policy, the market in 2017 went through four stages (downturn, warming, stabilization and explosion), and 100,000 new energy buses were sold over the year. It is estimated that the number of public transport buses in China will reach 700,000 in 2020; if China's public transport fully adopts new energy sources by then, the new energy bus market will be considerable. Given its technical level, the new energy bus is clearly favored by policy and occupies the leading position among future buses [1–3]. The control method of a new energy bus's energy storage system is established on the basis of standard working conditions. A large number of studies show that the actual operating conditions of new energy buses differ greatly from the national standard operating conditions, so a set of operating conditions suited to the new energy bus should be established. At the same time, road types, climate, vehicle models, traffic rules, road conditions, drivers and other factors all affect the operating conditions of a vehicle, while China spans very wide ranges of longitude and latitude and the climate
© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1851–1862, 2019. https://doi.org/10.1007/978-981-13-3648-5_239


X. Yang et al.

conditions in each place are different. It follows that the national standard conditions cannot fully express the driving characteristics of individual cities. This paper focuses on the operation of new energy buses in northern cities with high altitude and low temperature. The main international driving cycles are the European driving cycle (EDC), the American driving cycle (USDC) and the Japanese driving cycle (JDC) [4–8].

2 Data Acquisition

2.1 Research Object

The data collection site, the northern city of Baotou, Inner Mongolia, was selected for two reasons. First, Baotou lies in the north of the country, its climate characteristics are distinct, the temperature range is large, and its three main urban districts cover a variety of operating conditions. Second, the Baotou bus transport group has purchased hundreds of new energy buses of brands such as BYD, Yinlong and Beiben, and in recent years has put them into service on routes 1, 2, 5, 35 and 23. Among them, route 23 is an indispensable part of the traffic network of Baotou City, connecting the Kundulun, Qingshan and Jiuyuan districts. Its running characteristics basically cover the average and maximum speeds of the other bus routes in Baotou; it features long stop durations, fixed running times, short intervals between runs and frequent starting and braking, and the roads along the line basically cover all road conditions in Baotou, including congested and uncongested sections, bus lanes, suburban and urban sections, and 4-lane, 6-lane and 8-lane sections. The route runs from the Kunhe South Bridge bus station to the Labor Building station, passing 45 stops over a total length of 22.6 km. The main-road section from Kunhe South Bridge station to the Labor Building station is congested during peak hours, while the section from the Labor Building station to the junction of Cultural Road and Qingdong Road is a secondary road with normal traffic. The bus operation route is shown in Fig. 1.

Fig. 1. Running route of 23# bus line

The Analysis of the New Energy Buses Operating …

2.2 Acquisition Process

Given the fixed route of the bus, its fixed stops, the short intervals between runs and other factors, data were recorded every second by the vehicle terminal acquisition equipment and used to establish the working conditions of the hybrid electric bus. The test equipment is shown in Fig. 2. During the test, the departure schedule of the hybrid electric bus was strictly observed and passengers boarded and alighted according to their normal habits.

Fig. 2. Vehicle terminal equipment

Before the start of the test, the test vehicle was checked to ensure the safety performance of the hybrid electric bus, the experimental equipment was installed and tested preliminarily, and the presence, normality and stability of the signal as well as the firmness of the equipment mounting were verified. The data on the actual road conditions of the new energy bus were collected on the route 23 bus line. The collection lasted 7 days, with 5 runs per day; the total mileage reached 1582 km and 297,584 effective data points were collected through the equipment. These data covered the peak period, the off-peak period, working days and the weekend.

2.3 The Overall Characteristics of the Data

The overall driving characteristics of the route 23 buses under normal operating conditions, analyzed in terms of the acceleration ratio, deceleration ratio, uniform speed ratio and idle speed ratio, are shown in Table 1. The acceleration ratio is the proportion of the whole journey that the bus spends in continuous processes with acceleration greater than or equal to 0.1 m/s2; the deceleration ratio is the proportion spent in continuous processes with acceleration less than or equal to −0.1 m/s2; the uniform speed ratio is the proportion spent with the absolute value of acceleration below 0.1 m/s2; and the idle speed ratio is the proportion of the journey during which the engine is working but the speed is zero [9–12].
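As a rough sketch of how these four time ratios could be computed from a 1 Hz speed trace (the ±0.1 m/s² thresholds follow the definitions above; the helper function and the toy trace are our own illustration, not the authors' code):

```python
# Classify each second of a 1 Hz speed trace (km/h) into one of the four
# driving states defined in the text and return the time ratios.

def driving_state_ratios(speed_kmh):
    counts = {"accel": 0, "decel": 0, "uniform": 0, "idle": 0}
    for i, v in enumerate(speed_kmh):
        # backward-difference acceleration in m/s^2 (1 s sampling interval)
        a = 0.0 if i == 0 else (v - speed_kmh[i - 1]) / 3.6
        if v == 0:
            counts["idle"] += 1
        elif a >= 0.1:
            counts["accel"] += 1
        elif a <= -0.1:
            counts["decel"] += 1
        else:
            counts["uniform"] += 1
    return {k: c / len(speed_kmh) for k, c in counts.items()}

ratios = driving_state_ratios([0, 0, 5, 12, 20, 20, 20, 14, 6, 0])  # toy trace
print(ratios)
```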


The speed distribution law was analyzed and statistical results obtained (see Table 2). Further statistical analysis of the collected data, after removing abnormal data (absolute acceleration greater than 7 m/s2), yields the acceleration distribution of the route 23 buses under normal operating conditions (see Table 3).

Table 1. The running characteristics of the 23# bus line

Total mileage (km)                       1582
Average speed, including idling (km/h)   15.944
Average driving speed (km/h)             19.967
Acceleration ratio                       0.2395
Deceleration ratio                       0.1827
Uniform velocity ratio                   0.4069
Idle speed ratio                         0.1709

Table 2. Regularities of velocity distribution

Speed range (km/h)  Midpoint (km/h)  Proportion
V = 0               0                0.1945
0 < V < 5           2.5              0.2255
5 < V < 10          7.5              0.0515
10 < V < 15         12.5             0.0571
15 < V < 20         17.5             0.0708
20 < V < 25         22.5             0.0830
25 < V < 30         27.5             0.0730
30 < V < 35         32.5             0.0808
35 < V < 40         37.5             0.0585
40 < V < 45         42.5             0.0371
45 < V < 50         47.5             0.0297
50 < V < 55         52.5             0.0188
55 < V < 60         57.5             0.0074
60 < V < 65         62.5             0.0008
65 < V < 70         67.5             0.0001

From Tables 1 and 2 it can be seen that the proportion of zero-speed (idle) time of the route 23 buses in Baotou, Inner Mongolia Autonomous Region, stays at about 19%, and the speed remains basically low. This shows that the bus speed is not strongly related to the performance of the bus itself but is determined by the urban road facilities and road conditions, which is consistent with normal bus operation. Table 3 shows that the route 23 new energy buses in Baotou maintain low acceleration and low speed for long periods, which is also in line with the particular performance of a bus. The maximum acceleration and maximum deceleration are small, which shows that the new energy bus starts and decelerates smoothly.

Table 3. Regularities of acceleration distribution

Acceleration range (m/s2)  Midpoint (m/s2)  Proportion
−7 < a < −6                −6.5             0.0088
−6 < a < −5                −5.5             0.0126
−5 < a < −4                −4.5             0.0175
−4 < a < −3                −3.5             0.0233
−3 < a < −2                −2.5             0.0332
−2 < a < −1                −1.5             0.0519
−1 < a < 0                 −0.5             0.1799
0 < a < 1                  0.5              0.1958
1 < a < 2                  1.5              0.1275
2 < a < 3                  2.5              0.0523
3 < a < 4                  3.5              0.0263
4 < a < 5                  4.5              0.0166
5 < a < 6                  5.5              0.0043
6 < a < 7                  6.5              0.0008

Table 4. The results of principal component analysis

Component            1     2     3     4     5     6     7     8     9     10
Initial eigenvalue   4.1   2.6   1.6   0.7   0.5   0.4   0.1   0.0   0.0   0.0
Variance (%)         41.3  25.9  16.1  6.7   4.8   3.6   1.1   0.4   0.1   0.0
Accumulation (%)     41.3  67.2  83.3  90.0  94.7  98.3  99.5  99.9  100.0 100.0

3 Working Condition Analysis

3.1 Division of Working Conditions

Each short working condition of a bus runs from one idle starting point to the next. Owing to the particularities of bus operation, 600 s was chosen as the length of a short working section. Thirty relatively complete and representative short sections were selected and their characteristic parameters calculated. The characteristic parameters are the maximum speed (vmax), average speed (v̄), speed standard deviation (sv), idling time ratio (ηi), speed-up time ratio (ηa), deceleration time ratio (ηd), maximum acceleration (aa-max), maximum deceleration (ad-max), average acceleration (āa) and average deceleration (ād).
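A hedged sketch of extracting such characteristic parameters from one short working condition, assuming a 1 Hz speed trace in km/h; the function and its naming are illustrative, not the authors' procedure:

```python
# Extract the ten characteristic parameters of one short working condition
# from a 1 Hz speed trace in km/h. Time ratios here are computed over the
# n-1 acceleration samples; this is an illustration, not the paper's code.
import statistics

def segment_features(v_kmh):
    n = len(v_kmh)
    acc = [(v_kmh[i] - v_kmh[i - 1]) / 3.6 for i in range(1, n)]  # m/s^2
    accel = [a for a in acc if a >= 0.1]
    decel = [a for a in acc if a <= -0.1]
    return {
        "v_max": max(v_kmh),                              # maximum speed
        "v_mean": sum(v_kmh) / n,                         # average speed
        "v_std": statistics.pstdev(v_kmh),                # speed std deviation
        "eta_i": sum(1 for v in v_kmh if v == 0) / n,     # idling time ratio
        "eta_a": len(accel) / len(acc),                   # speed-up time ratio
        "eta_d": len(decel) / len(acc),                   # deceleration time ratio
        "aa_max": max(accel, default=0.0),                # maximum acceleration
        "ad_max": min(decel, default=0.0),                # maximum deceleration
        "aa_mean": sum(accel) / len(accel) if accel else 0.0,
        "ad_mean": sum(decel) / len(decel) if decel else 0.0,
    }

features = segment_features([0, 10, 20, 20, 10, 0])  # toy trace, not 600 s
```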

3.2 Principal Component Analysis

Research of this kind inevitably involves many variables if it is to be comprehensive. Computing these variables one by one would reduce research efficiency; on the other hand, the variables are correlated to some extent, so the data overlap, which also affects the research conclusions. Principal component analysis exploits the correlation between variables: using the idea of dimensionality reduction, several variables are converted through linear combination into a few uncorrelated variables, called principal components. In this paper, principal component analysis is used to reduce the dimension of the 10 parameters so that the classification of working conditions is more concise and clearer. There are 30 short working condition samples, each with 10 characteristic parameters (i.e., 10 indexes), so a 30 × 10 matrix is obtained:

Y_{30\times 10} = \begin{pmatrix} y_{1,1} & y_{1,2} & \cdots & y_{1,10} \\ \vdots & \vdots & \ddots & \vdots \\ y_{30,1} & y_{30,2} & \cdots & y_{30,10} \end{pmatrix}   (1)

where y_{i,j} denotes the j-th characteristic parameter of the i-th short working condition, i = 1–30, j = 1–10.

(1) Standardize the original matrix Y_{30×10} so that each characteristic parameter has zero mean and standard deviation 1, giving the normalized matrix

x_{30\times 10} = \begin{pmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,10} \\ \vdots & \vdots & \ddots & \vdots \\ x_{30,1} & x_{30,2} & \cdots & x_{30,10} \end{pmatrix}, \qquad x_{i,j} = \frac{y_{i,j} - u_j}{\sqrt{r_j}}   (2)

where u_j is the mean value and r_j the variance of the j-th characteristic parameter in Y_{30×10}.

(2) Compute the correlation matrix R:

R = \begin{pmatrix} r_{1,1} & r_{1,2} & \cdots & r_{1,10} \\ \vdots & \vdots & \ddots & \vdots \\ r_{10,1} & r_{10,2} & \cdots & r_{10,10} \end{pmatrix}, \qquad r_{i,j} = \frac{\sum_{k=1}^{30} (x_{ki} - \bar{x}_i)(x_{kj} - \bar{x}_j)}{\sqrt{\sum_{k=1}^{30} (x_{ki} - \bar{x}_i)^2 \sum_{k=1}^{30} (x_{kj} - \bar{x}_j)^2}}   (3)


By solving the characteristic equation |λE − R| = 0, the 10 eigenvalues of matrix R, λ_1 ≥ λ_2 ≥ … ≥ λ_10 ≥ 0, are obtained; the eigenvector of each eigenvalue λ_k is then found from the equation system Rβ = λ_k β, yielding the unit eigenvector β_k.

(3) Calculate the contribution rate and cumulative contribution rate of the principal components. The contribution rate is the proportion of the variance of the k-th component in the total variance of the principal components; it describes how much of the information of the original 10 characteristic parameters the k-th component reflects:

φ_k = λ_k / \sum_{i=1}^{10} λ_i

The cumulative contribution rate is the running sum of the component variances and reflects the combined explanatory power of the first k components:

ψ_k = \sum_{i=1}^{k} λ_i / \sum_{i=1}^{10} λ_i

The purpose of principal component analysis is dimensionality reduction: the original 10 characteristic parameters are replaced with as few principal components as possible, generally those whose cumulative contribution rate exceeds 80%.

3.3 The Result of Principal Component Analysis

In this paper, the principal component analysis function of the SPSS software is applied to the characteristic parameters of the 30 short working conditions, and 10 principal components (numbered 1–10) are obtained. From the eigenvalues and cumulative contribution rates it can be seen that the original 10 components can be represented by only 3 principal components, achieving the purpose of dimensionality reduction. The eigenvalue of each principal component, obtained through SPSS, is shown in Table 4. The eigenvalues of the correlation coefficient matrix indicate how much of the original characteristic parameter information a principal component contains: the larger the eigenvalue, the more information the component includes, and when the eigenvalue is less than 1 the component contains less information than a single original characteristic parameter. To achieve dimensionality reduction, we therefore select the principal components whose eigenvalues are greater than 1. From Table 4, the first 3 eigenvalues are all greater than 1 and the cumulative contribution rate is 83.332%, so the first three principal components suffice to reflect the driving data represented by the original 10 characteristic parameters. Computing the loadings of the first three principal components gives the loading matrix in Table 5. The principal component loading reflects the correlation between a principal component and a characteristic parameter: the greater the absolute value of the loading coefficient, the stronger the correlation.

Table 5. Principal component loading matrix

Component  Zscore  Zscore  Zscore  Zscore  Zscore  Zscore  Zscore  Zscore  Zscore    Zscore
           (vmax)  (ād)    (ηd)    (ηa)    (v̄)     (ηi)    (sv)    (āa)    (aa-max)  (ad-max)
1          0.919   −0.19   −0.76   −0.014  0.859   −0.425  0.851   0.793   0.209     −0.596
2          0.008   0.831   0.461   −0.242  0.437   −0.866  −0.374  0.497   −0.358    0.415
3          −0.12   0.461   −0.094  0.868   0.016   −0.216  −0.138  −0.11   0.725     −0.145

According to the loading values in Table 5, an absolute value greater than 0.5 indicates a strong correlation. (1) Principal component 1 mainly reflects 6 characteristic parameters: the maximum speed, the deceleration time ratio, the average speed, the speed standard deviation, the average acceleration and the maximum deceleration. (2) Principal component 2 mainly reflects 2 characteristic parameters: the average deceleration and the idling time ratio. (3) Principal component 3 mainly reflects 2 characteristic parameters: the speed-up time ratio and the maximum acceleration. In the data view of the SPSS software, the standardized principal component scores (F1, F2, F3) of each short working condition are given; multiplying each by the corresponding √λ_k restores the unstandardized principal component scores, which form the basis for the cluster analysis of the working conditions below.
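The PCA pipeline of Sects. 3.2 and 3.3 (standardization, correlation matrix, eigen-decomposition and selection by cumulative contribution rate) can be recreated in a few lines of numpy; the random matrix below is only a stand-in for the real 30 × 10 feature matrix:

```python
# Numpy recreation of the PCA steps: z-score the 30x10 matrix, form the
# correlation matrix, take its eigenvalues, and keep components until the
# cumulative contribution rate exceeds 80%. Placeholder data, not the
# measured bus features.
import numpy as np

rng = np.random.default_rng(0)
Y = rng.normal(size=(30, 10))                   # stand-in feature matrix

X = (Y - Y.mean(axis=0)) / Y.std(axis=0)        # step (1): standardization
R = (X.T @ X) / X.shape[0]                      # step (2): correlation matrix
lam = np.linalg.eigvalsh(R)[::-1]               # eigenvalues, descending
contrib = lam / lam.sum()                       # contribution rates phi_k
cumulative = np.cumsum(contrib)                 # cumulative rates psi_k
k = int(np.searchsorted(cumulative, 0.80)) + 1  # components to keep
print(k, round(float(cumulative[k - 1]), 3))
```

The trace of a correlation matrix equals the number of variables (here 10), so the eigenvalues always sum to 10, matching the contribution-rate formulas above.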

4 K-Means Clustering

4.1 The Idea and Process of K-Means Clustering

Clustering analysis is a classification method based on the characteristics of the objects themselves; its principle is to classify individuals according to their similarity. There are many clustering methods, such as hierarchical clustering, non-hierarchical clustering, fuzzy clustering and two-step clustering, and hierarchical clustering is further divided into Q-type and R-type clustering. Each method has its own procedure and steps, and each cluster produced gathers objects of the


same characteristics. The K-means clustering function of the SPSS software is used to classify the 30 short working conditions according to the final principal component scores. K-means clustering, also called fast sample clustering, is the most commonly used non-hierarchical clustering method. Its principle is to select some points in the sample as centers, determine the category of each remaining sample by computing its distance to the center points, and finally complete the classification through a suitable convergence criterion. The steps of K-means clustering are as follows:

(1) According to the given number of classes k, select by some method k samples from the n samples as cluster centers (z_1–z_k). Compute the Euclidean distance from every sample to each cluster center, i.e., the actual distance between two points in n-dimensional space:

D_{xy} = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}

In this formula, x_i and y_i are the coordinates of the two samples. After the Euclidean distances are computed, each sample is assigned to the nearest category, yielding k initial classes.

(2) Compute the mean of the samples in each initial class and use it as the new cluster center.

(3) Using the distance formula in step (1), compute the distances from all samples to the new cluster centers, reassign the samples, and take the new means as the next cluster centers; repeat until the requirements are met or the iteration limit is reached, at which point the final cluster centers are obtained.

4.2 Results of K-Means Clustering

K-means clustering first requires a given number of classes. In theory, more classes yield more accurate results, but the resulting working conditions become harder to distinguish. In this paper, the SPSS software is used to run K-means clustering on the 30 short conditions, dividing them into 3 or 4 categories according to the principal component scores. With 3 categories, category 1 contains eighteen short conditions, category 2 three and category 3 nine; with 4 categories, category 1 contains eight short conditions, category 2 eight, category 3 eleven and category 4 three. The classification results show that the distances of the samples to their respective cluster centers are not much better with 4 categories than with 3, and in the 4-category case one category contains only 3 working conditions, so the 3-category classification is adopted for further study.
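The clustering steps above, together with the nearest-to-center selection of representatives used in Sect. 4.3, can be sketched as follows; the synthetic scores, the deterministic initialization and the helper names are our own illustration, not the SPSS computation:

```python
# Plain k-means on 3-D principal component scores, then picking, per cluster,
# the sample closest to its center as the representative working condition.
import random

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def centroid(cluster):
    return tuple(sum(x) / len(cluster) for x in zip(*cluster))

def kmeans(points, centers, iters=20):
    """Assign each point to its nearest center, recompute centers, repeat."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)), key=lambda c: dist2(p, centers[c]))
            clusters[j].append(p)
        centers = [centroid(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

# Synthetic scores forming three separated groups, standing in for the
# unstandardized principal component scores of the 30 short conditions.
random.seed(7)
scores = [(random.gauss(mx, 0.3), random.gauss(my, 0.3), random.gauss(0.0, 0.3))
          for mx, my in ((0, 0), (5, 0), (0, 5)) for _ in range(10)]

# One deterministic initial center per group keeps this toy example stable.
centers, clusters = kmeans(scores, [scores[0], scores[10], scores[20]])

# Representative condition per category: the sample closest to its center.
reps = [min(c, key=lambda p: dist2(p, ctr)) for ctr, c in zip(centers, clusters)]
print([len(c) for c in clusters])
```

Selecting the nearest actual sample rather than the centroid itself mirrors the paper's choice of an observed short condition as the representative.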

4.3 The Establishment of Representative Working Conditions

In general, a typical working condition is fitted from the cluster center; but to obtain more practical working conditions, we instead select, in each category, the short condition closest to the cluster center. The thirtieth, twenty-ninth and eighth conditions are thus the representative working conditions, denoted Rep1, Rep2 and Rep3. Table 6 lists the characteristic parameters of the three representative working conditions, and Figs. 3, 4 and 5 show their speed-time histories.

Table 6. The characteristic values of the representative operating conditions

Condition  vmax  v̄     sv    ηi    ηd    aa-max  ηa    ad-max  āa     ād
Rep1       51.7  20.3  13.9  0.15  0.33  1.73    0.54  −2.99   0.454  −0.75
Rep2       32.5  9.93  9.21  0.29  0.34  1.3     0.4   −2.1    0.42   −0.495
Rep3       36.7  16.6  11.3  0.17  0.41  1.63    0.43  −3.44   0.521  −0.546

Fig. 3. Suburbs road condition

Fig. 4. Urban congested road condition


Fig. 5. Urban bus lanes condition

From the 3 representative working conditions extracted by principal component analysis and the K-means clustering method, it can be seen that: (1) the Rep1 condition belongs to suburban roads, characterized by a high average speed, high maximum speed, high acceleration frequency and relatively low deceleration frequency; (2) the Rep2 condition belongs to congested urban sections, characterized by a low average speed and frequent acceleration and deceleration; (3) the Rep3 condition belongs to the urban exclusive bus lane, characterized by a relatively high average speed and a uniform frequency of acceleration and deceleration.

5 Conclusion
By collecting real-time running data of new energy buses on a typical bus route of the target city, thirty 600 s short working conditions were constructed, each described by ten characteristic parameters: the maximum speed (vmax), average speed (v̄), speed standard deviation (sv), idling time ratio (ηi), speed-up time ratio (ηa), deceleration time ratio (ηd), maximum acceleration (aa-max), maximum deceleration (ad-max), average acceleration (āa) and average deceleration (ād). In the data analysis, 600 s was used as the cut-off for designing the short conditions; where computing capacity permits, the length of the short conditions should be adjusted and a more reasonable study interval selected. The principal component analysis showed that the first three principal components have the highest eigenvalues, with principal component 1 the largest; the characteristic parameters strongly correlated with principal component 1 are, in order, the maximum speed, the average speed, the speed standard deviation, the average acceleration, the deceleration time ratio and the maximum deceleration. Finally, three typical working conditions were extracted by the K-means clustering method: the suburban road condition, the urban exclusive bus lane condition and the urban congested road condition. Next, the characteristic parameters of the typical working conditions can be used to identify the driving conditions of new energy buses and to propose control strategies that are beneficial to energy saving, environmental protection and better power performance.

References
1. Chen, Y.: Development status and application prospect of new energy vehicle. China South. Agric. Mach. 48(13), 142–143 (2017)
2. Ren, B., Ren, X., et al.: Analysis of developing conditions and problems of the domestic new energy bus. Energy Conserv. Environ. Prot. Transp. (4), 28–32 (2013)
3. Zhang, H., Lv, Y.: Information flow characteristics analysis of vehicular ad-hoc network. Chin. J. Eng. Math. 34(5), 449–457 (2017)
4. Mata, C., Leite, W.D.O., Moreno, R., et al.: Prediction of NOx emissions and fuel consumption of a city bus under real operating conditions by means of biharmonic maps. J. Energy Eng. 142(4), 04016018 (2016)
5. Zhong, S., Huang, J., et al.: Frame design and key technical analysis of EMI test system for new energy vehicle dynamic condition. China Meas. Test 43(8), 76–79 (2017)
6. Sun, L., Lin, X., et al.: Management strategy based on type recognition and multivariate nonlinear regression optimization. China Mech. Eng. (22), 2695–2700 (2017)
7. Bai, D.: Study on Control Strategy Optimization for CNG Hybrid Electric Bus Based on Driving Cycle Recognition. Jilin University (2014)
8. Suraparaju, K.R., Pillai, G.: Type 2 fuzzy logic-based robust control strategy for power sharing in microgrids with uncertainties in operating conditions. Int. Trans. Electr. Energy Syst. 27 (2017)
9. Liu, J.: Vehicle Driving Cycle Discrimination Based on Multi-Sensor Information Fusion. Changchun University of Technology (2016)
10. Tang, B.: Research on Driving Cycle and Energy Control Strategy for Pure Electric Buses in Hefei. Kunming University of Science and Technology (2011)
11. Meng, J.: Study of Xi'an Typical Section of City Bus Driving Cycles. Chang'an University (2014)
12. Zhu, J.: The City Bus Driving Cycle Construction. Hefei University of Technology (2011)

Experimental Study on Influence Factors of Emission and Energy Consumption for Plug-in Hybrid Electric Vehicle

Le Liu(&), Lihui Wang, and Chunbei Dai

Testing Laboratory, China Automotive Technology and Research Center Co., Ltd, Tianjin 300300, China
[email protected]

Abstract. Because the power train of a plug-in hybrid electric vehicle (PHEV) combines a conventional fuel path with electric drive, its emission and fuel consumption differ significantly between standard tests and actual use, and countries and regions have adopted different emission and energy consumption standards for PHEVs. This paper analyzes the differences between the emission and energy consumption test requirements of the American, European and Chinese PHEV standards and further examines how various conditions (driving cycle and ambient temperature) affect emission and energy consumption performance. The results indicate that emission and energy consumption are affected by the complexity and mileage of the driving cycle, that low or high ambient temperature may deteriorate vehicle emission and fuel consumption performance to some extent, and that fuel consumption may rise owing to air-conditioning load or low battery power. For a more comprehensive evaluation of the emission and energy consumption performance of PHEVs, high temperature and air-conditioning tests as well as the petrol-electric conversion method should be added to the national standards.

Keywords: PHEV · Emission · Energy consumption · Driving cycle · Temperature

1 Introduction
To address increasingly severe environmental issues and gradually growing fuel consumption, PHEVs can make full use of electrical energy and reduce traditional fossil fuel consumption, achieving energy conservation and emission reduction; they are therefore currently a focal research topic in the automobile industry [1]. Advanced countries and regions have various standards for testing the emissions and fuel consumption of PHEVs, mostly supplemented on the basis of the emission rules for conventional vehicles, since a PHEV has two power sources and the corresponding measurement procedures are somewhat more complicated than those for conventional vehicles. There are primarily two major hybrid power standard systems, the European and the American; their test requirements for the emissions, fuel consumption and driving range of PHEVs are basically consistent and include maximum and minimum charge state tests
© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1863–1873, 2019. https://doi.org/10.1007/978-981-13-3648-5_240

1864

L. Liu et al.

(at 25 °C (normal temperature) and −7 °C (low temperature)); however, their test and calculation methods differ. The American standards use five driving cycles (FTP75, HWFET, US06, SC03, and the low temperature FTP72) [2]; the Chinese standards, by contrast, adopt the NEDC cycle with reference to the European standards [3, 4]. A single-cycle or a continuous multi-cycle condition may be selected for Test Method A; in contrast, the China 6 and corresponding EU standards switch to the WLTC cycle and, like the American standards, enforce continuous multi-cycle charge-depleting tests [5]. Under the American and EU standards, the emission results for the various charge levels are not weighted, whereas the emission results for Conditions A and B must be weighted in accordance with the requirements of the China GB standards. For the calculation of fuel consumption, the American standards require weighting the FTP75 and HWFET fuel consumption results into an integrated fuel consumption under the charge-depleting (CD) mode by means of the petrol-electric conversion method, while the five-cycle fuel consumption results are weighted into an integrated fuel consumption under the charge-sustaining (CS) mode. Under the EU and China GB standards, the fuel consumption results for Conditions A and B are not weighted and no petrol-electric conversion method is introduced. The test requirements for PHEV emissions thus differ significantly between the EU/China GB standards and the American standards. This study further investigated the effects of factors such as driving cycle, ambient temperature and battery level on the emissions and energy consumption of PHEVs: various ambient temperatures (−7, 25 and 35 °C) and air conditioning states were set, various driving cycles were selected, and the battery level was adjusted. The emissions, fuel consumption and driving range were measured under the charge-depleting and charge-sustaining modes to thoroughly investigate the corresponding influencing rules.
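The city/highway weighting step described above can be made concrete with a small sketch. It combines per-distance fuel consumption results from a city (FTP75) and a highway (HWFET) test with the conventional 55 %/45 % split; the weight and the sample figures are illustrative assumptions, not values quoted from SAE J1711.

```python
def combined_consumption(ftp75_l_per_100km: float, hwfet_l_per_100km: float,
                         city_weight: float = 0.55) -> float:
    """Combine city (FTP75) and highway (HWFET) fuel consumption results.

    For consumption expressed per unit distance (L/100 km), a city/highway
    split is a simple linear weighting. The 0.55 default mirrors common US
    practice but is an illustrative assumption here.
    """
    return city_weight * ftp75_l_per_100km + (1.0 - city_weight) * hwfet_l_per_100km

# Example: 5.2 L/100 km city, 4.4 L/100 km highway
print(round(combined_consumption(5.2, 4.4), 2))  # 0.55*5.2 + 0.45*4.4 = 4.84
```

Note that this linear form applies to consumption per distance; for fuel economy expressed as distance per fuel (e.g., mpg), the weighting would be harmonic instead.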

2 Test Method for Emissions and Energy Consumption of PHEVs Under Various Conditions

For a thorough comparison of the energy consumption and emission tests of PHEVs under various conditions, a set of PHEV test processes was developed in this study with reference to the related requirements of the American and China GB standards, comprehensively taking objective factors such as temperature conditions and air conditioning states into account [6]; the processes are shown in Fig. 1.

2.1 Driving Cycle

When a vehicle is tested or certified, its emission and fuel consumption results may be influenced by several factors, among which the test driving cycle is the core part of the test conditions. An objective comparison of the test cycles of the American, EU and China GB standards was carried out here; the representative FTP75 operating-mode curve for the American standard is shown in Fig. 2. A comprehensive comparison of the basic features of the various driving cycles is presented in Table 1 [7]. Different driving cycles may produce different emission and fuel consumption results, primarily reflecting their cycle periods, driving ranges and

Experimental Study on Influence Factors of Emission …

1865

Fig. 1. PHEV test processes

Fig. 2. Primary driving cycles for PHEV standards of various countries

Table 1. Statistics of basic features of driving cycles for standards of various countries.

Name   Period (s)  Range (km)  Mean velocity (km/h)  Maximum velocity (km/h)  Maximum deceleration (m/s²)  Maximum acceleration (m/s²)  Idle speed ratio (%)
FTP75  1874        17.77       34.1                  91.2                     1.78                         1.78                         19.40
NEDC   1180        11.04       33.68                 120                      1.39                         1.04                         24.80
WLTC   1800        23.27       46.54                 131.3                    1.5                          1.67                         13.20

cold-start deterioration states: a higher maximum velocity and acceleration increase the vehicle load; the complexity of the shift points determines whether the engine operates in its optimum rotational speed range; the idle speed ratio affects emissions and fuel consumption; and the mean velocity determines how close operation is to the vehicle velocity of optimal fuel consumption. Besides the driving cycle, the test ambient temperature is also a significant influencing factor for the emissions and energy consumption of PHEVs.
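The cycle features compared in Table 1 can be computed directly from a second-by-second speed trace. A minimal sketch (the toy trace below is made up for illustration, not a real FTP75/NEDC/WLTC profile):

```python
def cycle_stats(speed_kmh, dt=1.0):
    """Basic driving-cycle statistics from a 1 Hz speed trace in km/h.

    Returns the period (s), range (km), mean/max velocity (km/h),
    max acceleration/deceleration (m/s^2) and idle ratio (%),
    i.e. the quantities compared in Table 1.
    """
    v_ms = [v / 3.6 for v in speed_kmh]                   # km/h -> m/s
    acc = [(b - a) / dt for a, b in zip(v_ms, v_ms[1:])]  # finite differences
    return {
        "period_s": (len(speed_kmh) - 1) * dt,
        "range_km": sum(v * dt for v in v_ms) / 1000.0,   # rectangle rule
        "mean_v_kmh": round(sum(speed_kmh) / len(speed_kmh), 2),
        "max_v_kmh": max(speed_kmh),
        "max_acc_ms2": round(max(acc), 2),
        "max_dec_ms2": round(-min(acc), 2),
        "idle_ratio_pct": round(100.0 * sum(1 for v in speed_kmh if v == 0)
                                / len(speed_kmh), 1),
    }

# Tiny synthetic trace: idle, accelerate, cruise, brake, idle
trace = [0, 0, 10, 20, 30, 30, 30, 15, 0, 0]
print(cycle_stats(trace))
```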


3 Test Cycle Factor

The emissions and fuel consumption of 9 PHEVs were tested under various test cycles in this study. To analyze the effect of each driving cycle on emissions and fuel consumption separately, and to avoid differences in requirements such as calculation methods, test methods (Condition A) and loading resistance among the standards of various countries, this experimental study focused on an objective analysis of the emission and fuel consumption results for Test Condition B. The recommended resistance coefficient was adopted based on the reference mass.

3.1 Analysis of Emissions of PHEVs Under Various Test Cycles

CO emissions primarily come from the cold start, transient acceleration and ultra-high-speed driving phases. The longer the cycle mileage is, the smaller the contribution of cold-start emissions to the integrated result; thus, the FTP75 CO emissions, which correspond to a longer mileage, are slightly lower than the NEDC ones, while the high maximum velocity and acceleration peaks of the WLTC cause the highest CO emissions, as shown in Fig. 3. HC emissions, on the other hand, come almost entirely from the cold start stage, so a longer cycle mileage weakens the effect of cold-start HC emissions on the integrated emissions of the entire cycle; thus, the HC emission levels from high to low are NEDC, FTP75 and WLTC. For most vehicles in good condition, NOx emissions are relatively small (Fig. 3). Modal analysis of the test cycles shows that NOx emissions are very small in all stages of the NEDC and FTP75 except the cold start; thus, the integrated emissions fall with the long cycle mileage of the FTP75 operating mode. More NOx emissions occur during the strong instantaneous accelerations and the high-speed stage of the WLTC operating mode, whose constant-speed share is only a small part and whose acceleration rate, maximum acceleration and speed all peak. This indicates that there are more transient driving modes and high-speed states in the WLTC operating mode, so its exhaust temperature and NOx emissions are higher than under any other operating mode.

Fig. 3. Analysis of PHEV emission results under various operating modes


The effects of the various cycles on emissions differ significantly in three aspects: (1) vehicle speed at the cold start stage; (2) acceleration during transient processes; and (3) acceleration during high-speed driving.

3.2 Analysis of Fuel Consumption of PHEVs Under Various Test Cycles

The fuel consumption results of the PHEVs at Condition B under the various operating modes are shown in Fig. 4. The FTP75 and NEDC fuel consumptions are approximately equal because their mean velocities are nearly the same. The acceleration rate of the FTP75 cycle is higher and it includes more transient operating modes, so vehicles need more energy input; on the other hand, the maximum speed, and the fuel consumption associated with it, is higher for the NEDC cycle. These combined factors give rise to relatively close fuel consumptions for the FTP75 and NEDC cycles. Comparatively, the WLTC cycle has a lower idle speed ratio and a higher maximum vehicle speed, so its fuel consumption should be higher than that of any other cycle; on the other hand, the mean velocity of the WLTC cycle is close to the vehicle speed of optimal fuel consumption, which improves its fuel consumption performance. For a vehicle equipped with a small-displacement engine, the frequent transient operating modes and very high vehicle speeds force the small-displacement engine to operate at full load. However,

Fig. 4. Analysis of fuel consumption results of PHEVs under various operating modes

a PHEV is driven jointly by its motor and engine; thus, the fuel consumption of a small-displacement vehicle is only slightly higher than the NEDC fuel consumption of Vehicle 4 or 5. For any PHEV, the performance under both the maximum and minimum charge states (Conditions A and B) of its energy storage device must be comprehensively taken into account when evaluating its emissions and energy consumption. The Condition A tests defined in the current EU and China 5 standards are single-cycle tests; in contrast, a multi-cycle Condition A test method is defined in the EU, China 6 and new American PHEV standards; thus, under the test methods of the standards based on FTP75 and WLTC, the emission and energy consumption results at Condition A are not zero.


The final weighted calculation results are much higher than those based on the current NEDC test method and are more in line with actual driving situations. For the calculation of the fuel consumption of a PHEV in accordance with the American standards, a petrol-electric conversion factor is used, but there is no such calculation method in the China standards; thus, the fuel consumption weighted according to the requirements of the American standards approaches the level of a conventional fuel vehicle and is well above the NEDC/WLTC weighted fuel consumption.
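The range-based weighting of Condition A (charge-depleting) and Condition B (charge-sustaining) results mentioned above can be sketched as follows. The formula shape C = (De·C_A + Dav·C_B)/(De + Dav), with Dav = 25 km as the assumed average distance between two battery charges, follows the ECE R101 family of regulations the paper cites; treat the default and the example numbers as illustrative assumptions rather than quotations.

```python
def weighted_consumption(c_condition_a: float, c_condition_b: float,
                         electric_range_km: float, d_av_km: float = 25.0) -> float:
    """Range-weighted combination of Condition A and Condition B results.

    C = (De*C_A + Dav*C_B) / (De + Dav), where De is the electric range
    and Dav is the assumed average distance between two charges. The
    25 km default is taken from the ECE R101-style convention and is an
    illustrative assumption here.
    """
    de = electric_range_km
    return (de * c_condition_a + d_av_km * c_condition_b) / (de + d_av_km)

# Example: 0 L/100 km in CD mode over a 50 km electric range,
# 6.0 L/100 km in CS mode
print(round(weighted_consumption(0.0, 6.0, 50.0), 2))  # 25*6/75 = 2.0
```

The same weighting applies to emission results when Conditions A and B must be combined, as the China GB standards require.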

4 Testing Temperature Factor

The ambient temperature, as an essential test condition, is regarded as a main influencing factor for the energy consumption and emissions of PHEVs, especially for battery performance. Engine emissions are primarily concentrated in the cold start stage; thus, the initial test temperature significantly affects the emission level at the cold start stage and consequently worsens the overall emission level. Therefore, the emissions and energy consumption of 3 PHEVs of various types were studied at various temperatures, and emissions and energy consumption were measured with the air conditioning unit ON and OFF at various ambient temperatures. The test results indicate that, regardless of the test ambient temperature, the vehicle driving ranges exceed one test cycle, so the emission and fuel consumption results at Condition A under a single cycle are zero. For a parallel comparison of the effects of the various temperatures on PHEV emissions and fuel consumption, the results at Condition B were analyzed directly here.

4.1 Analysis of Emissions of PHEVs Under Various Temperatures

The effects of various temperatures on the emission results of the PHEVs are shown in Fig. 5. Figure 5 indicates that the CO emission results are significantly higher at low ambient temperature than at normal temperature, regardless of the air conditioning ON or OFF state, because CO is a product of incomplete combustion and primarily comes from the cold start stage, transient acceleration and deceleration operating

Fig. 5. Effects of various temperatures on emission results of PHEVs


modes, as well as operating modes in which the air-fuel ratio is not accurately controlled. At low ambient temperature, the light-off period of the catalytic converter is relatively long and more fuel is injected by the cold start control strategy to sustain the initial operation of the engine, so the air-fuel ratio is severely rich, combustion is uneven and more CO is generated. The CO emission results at high ambient temperature are markedly lower, by 36, 33 and 27% respectively, than those at normal temperature, regardless of the air conditioning ON or OFF state. Because the light-off period of the catalytic converter is shorter at high ambient temperature, the higher engine oil temperature lowers the fuel enrichment rate and the engine essentially works around the stoichiometric air-fuel ratio, generating only a small amount of CO emissions. A parallel comparison of the high and low ambient temperatures shows that CO emissions are not greatly affected by the operating mode (ON or OFF) of the air conditioning unit, since CO emissions primarily come from the cold start and high-speed enrichment stages. The relatively low air conditioning load does not make the air-fuel ratio deviate from its control target: even though the engine load grows to some degree and the engine outputs more torque and injects more fuel to maintain stable operation under the greater load, the air intake increases accordingly with the fuel injection, so the stoichiometric air-fuel ratio is approximately maintained and combustion is not deteriorated by the added air conditioning load; thus, CO emissions are not greatly affected by the operating mode (ON or OFF) of the air conditioning unit. Most HC emissions are generated during the cold start stage; thus, HC emissions are even more sensitive to the ambient temperature than CO emissions.
The HC emission level is significantly deteriorated in the low temperature test, whereas at high temperature it is better than at normal temperature. Figure 5 indicates that the NOx emission results change in a different way. NOx emissions are higher at high temperature than at normal temperature because NOx is a combustion product formed at high temperature; the emissions come from the cold start and high-speed driving stages. The ambient temperature has a certain influence on NOx at the cold start stage, and as the vehicle enters the high-speed driving stage, the exhaust temperature rises sharply. According to the mechanism of NOx formation, NOx emissions increase with the combustion temperature and the oxygen-rich oxidation reaction; a decrease in ambient temperature therefore reduces the oxidation reaction in the high temperature exhaust, while a higher ambient temperature shortens the time needed for the vehicle to produce high temperature exhaust, so NOx emissions rise because high temperatures occur earlier. The air conditioning load accounts for only a small part of the engine load compared with the cold start or high-speed driving stages, and the air-fuel ratio does not change; thus, NOx emissions are not affected.

4.2 Analysis of Effects of Various Temperatures on Energy Consumption of PHEVs

The effects of various temperatures on the fuel consumption of the PHEVs are shown in Fig. 6. As the ambient temperature changes, the vehicle fuel consumptions rise to


Fig. 6. Analysis of effects of various temperatures on fuel consumption results of PHEVs

different levels. The vehicle fuel consumption is optimal at normal temperature (25 °C). When the temperature is as low as −7 °C and the air conditioning unit is OFF, the vehicle fuel consumptions rise by 63, 56 and 69%, respectively, because the vehicles need more fuel at the cold start stage under low temperature to maintain stable engine operation. When the temperature is as high as 35 °C and the air conditioning unit is OFF, the vehicle fuel consumptions change little; once the engine coolant and fuel temperatures reach the fully warmed-up level, the ambient temperature hardly affects the engine fuel consumption. At the same ambient temperature, the operating state (ON or OFF) of the air conditioning unit affects fuel consumption much more than emissions. The air conditioning compressor is an engine accessory: when the air conditioning unit is in cooling mode, its compressor works and imposes a certain load on the engine, and to maintain stable operation the engine must generate more torque to cover the demands of the air conditioning unit and other devices. The engine therefore needs more fuel and intake air, with a stable air-fuel ratio maintained by the engine ECU control strategy, so combustion remains stable and emissions are not deteriorated. However, injecting more fuel into the engine increases the integrated fuel consumption over the test cycle, as shown in Fig. 6. The engine load does not rise when the ambient temperature is low and the air conditioning heater works, so the fuel consumption is approximately equal regardless of the air conditioning ON or OFF state at low ambient temperature.
On the other hand, although only a small amount of extra fuel is needed at the cold start stage under high ambient temperature, a larger cooling power, and thus more power dissipation by the air conditioning unit, is needed to keep the cabin temperature comfortable; the air conditioning cooling supply adds a certain load to the engine, so the engine needs additional fuel. When the ambient temperature is high and the air conditioning unit is ON, the vehicle consumes about 1.3 times as much fuel as when the air conditioning unit is OFF. The effects of various temperatures on the pure electric driving range are shown in Fig. 7. Because a pure electric driving range test takes a long time (normally about 3 h), the tests with the air conditioning unit OFF were not carried out at low or high ambient temperatures, to protect the health of the drivers. The vehicle driving


Fig. 7. Analysis of the effects of various temperatures on the driving range results of PHEVs

ranges fall by 13.8, 19 and 15% at high ambient temperature, respectively, but decrease by 36, 39 and 31% at low ambient temperature; the latter is more than double the former, because the effect of low ambient temperature on the battery discharging performance far exceeds the effects of air conditioning power consumption and high ambient temperature on the battery. Moreover, PHEVs work in pure electric mode while the driving range is measured, so there is no residual engine heat for their air conditioning heaters; PHEVs are generally equipped with electric heaters, so the air conditioning load also causes a certain energy consumption. The effect of temperature on the discharging performance is directly reflected in the discharging capacity and voltage: as the temperature falls, the battery internal resistance and electrochemical reaction impedance grow and the polarization resistance rapidly increases, while the discharging capacity and voltage fall; thus, the battery power and energy output are reduced.
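The qualitative mechanism described above (internal and polarization resistance growing as the temperature falls, pulling down the terminal voltage and deliverable energy) can be illustrated with a toy equivalent-circuit sketch. The exponential resistance-temperature law and all parameter values are assumptions for illustration only, not fitted cell data.

```python
import math

def terminal_voltage(ocv_v, current_a, temp_c,
                     r0_25c_ohm=0.05, r_pol_25c_ohm=0.03, k_per_c=0.03):
    """Toy equivalent-circuit model of a traction battery under discharge.

    V = OCV - I*(R0 + Rp), with both the ohmic (R0) and polarization (Rp)
    resistances growing exponentially as the temperature drops below 25 degC.
    The growth constant k_per_c is an illustrative assumption.
    """
    scale = math.exp(k_per_c * (25.0 - temp_c))   # resistance grows in the cold
    r_total = (r0_25c_ohm + r_pol_25c_ohm) * scale
    return ocv_v - current_a * r_total

for t in (25.0, -7.0):
    v = terminal_voltage(ocv_v=360.0, current_a=100.0, temp_c=t)
    print(f"{t:+5.1f} degC -> terminal voltage {v:.1f} V")
```

Lower terminal voltage at the same current means less deliverable power and an earlier cutoff, which is consistent with the larger range loss measured at −7 °C.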

5 Conclusions

This study analyzed the effects of various conditions on the emission and energy consumption results of PHEVs; the primary influencing factors are temperature, operating mode and battery reserve, and these conditions can greatly affect the emission and energy consumption results of PHEVs. Our conclusions are as follows:

CO, HC and NOx emissions at low ambient temperature are well above those at normal temperature. On the other hand, CO and HC emissions at high ambient temperature are lower than at normal temperature, but NOx emissions are severely deteriorated. The fuel consumption level is optimal at normal temperature; fuel consumption at high ambient temperature is higher than at low ambient temperature because of the load of the air conditioning compressor. The decrease in the pure electric driving range at low ambient temperature is roughly double that at high ambient temperature, primarily because of the adverse effects of low temperature on the battery.

As for the effects of the various operating modes on emissions, the influence of the driving mode is more complicated for CO and HC emissions. The emissions for the WLTC cycle


(high acceleration ratio and high vehicle speed) are higher than those for the NEDC or FTP75 cycle. Because HC emissions primarily come from the cold start stage, a longer cycle mileage weakens the effect of the cold start stage on HC emissions; thus, the HC emissions in descending order are NEDC, FTP75 and WLTC. Fuel consumptions are close for the FTP75 and NEDC operating modes, while the WLTC fuel consumption is slightly higher for a PHEV equipped with a small-displacement engine. The energy consumption is higher for the WLTC than for the NEDC or FTP75, so the WLTC driving range is the shortest. Different initial battery reserves affect the emission and energy consumption results of PHEVs to some extent: as the battery reserve decreases, the engine works for a longer time and emissions and fuel consumption rise. Vehicle emissions are not greatly affected by the operating mode (ON or OFF) of the air conditioning unit. When the ambient temperature is low and the air conditioning heater is ON, the vehicle cabin is warmed with residual engine heat and no extra engine load is needed, so fuel consumption is little affected by the air conditioning state. On the other hand, when the ambient temperature is high and the air conditioning unit is in cooling mode, the compressor works and adds load to the engine, so fuel consumption rises; however, the air intake rises accordingly and the air-fuel ratio is unchanged, so emissions are not greatly affected.

References

1. Qin, K., Chen, H., Fang, M., Zhang, C.: Investigation on the evaluation methods for fuel consumption and emissions of PHEV. Automobile Technol. (07), 11–16 (2010)
2. SAE International: Recommended practice for measuring the exhaust emissions and fuel economy of hybrid-electric vehicles, including plug-in hybrid vehicles. SAE J1711 (2010)
3. ECE: Uniform provisions concerning the approval of passenger cars powered by an internal combustion engine only, or powered by a hybrid electric power train, with regard to the measurement of the emission of carbon dioxide and fuel consumption and electric range, and of categories M1 and N1 vehicles powered by an electric power train only with regard to the measurement of electric energy consumption and electric range. ECE Regulation 101 (2005)
4. GB 18352.5-2013: Limits and measurement methods for emissions from light-duty vehicles. China Environmental Science Press, Beijing (2013)
5. GB 18352.6-2016: Limits and measurement methods for emissions from light-duty vehicles. China Environmental Science Press, Beijing (2016)
6. GB/T 19753-2013: Test methods for energy consumption of light-duty hybrid electric vehicles. China Environmental Science Press, Beijing (2013)
7. Hou, C., Wang, H., Ouyang, M.: PHEV fuel consumption and electricity consumption evaluation. Automot. Eng. 37(1), 1–8 (2015)



Design of On-Line Measurement System for Fine Particle Number Concentration of Vehicle Exhaust Based on Diffusion Charge Theory

Zhouyang Cong1,2,3, Tongzhu Yu1,3(✉), Huaqiao Gui1,3, Yixin Yang1,2,3, Jiaoshi Zhang1,3, Yin Cheng1,3, and Jianguo Liu1,2,3

1 Key Laboratory of Environmental Optics and Technology, Anhui Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Hefei 230031, China
[email protected]
2 University of Science and Technology of China, Hefei 230026, China
3 Anhui Provincial Key Laboratory of Environmental Optical Monitoring Technology, Hefei 230031, China

Abstract. The number concentration of particulate matter is an important parameter for measuring motor vehicle exhaust emissions. This article discusses the design of an on-line measurement system for fine particle number concentration. Based on charging models in the continuum regime, a unipolar charger with a free electron capture function was designed, and after a high-sensitivity I–V conversion circuit was built into the micro-current detection module, a wide-range Faraday cup electrometer was designed. The experimental results show that the charger can efficiently charge 23 nm–2.5 μm particles and achieve a stable discharge with an average discharge current of −5 μA (relative error less than 1%), and that the wide-range Faraday cup achieves a ±500 pA range with fA-level electrical signal measurement. Compared with the AVL-APC 489 instrument, the agreement of the integrated system in experimental tests is better than 87.2%, which meets the on-line testing requirements for the number concentration of motor vehicle exhaust particulates.

Keywords: Fine particle · Charger · Electrometer · Number concentration

1 Introduction

In recent years, with the continuous development of China's economy and rising energy consumption, air pollution problems have appeared. PM2.5 (particles with a diameter of less than 2.5 μm) is one of the important causes of haze weather and has been incorporated into China's air quality monitoring system by the National Environmental Protection Bureau [1]. PM2.5 mainly comes from pollution sources related to human activities, such as industrial production processes, power plant boilers, motor vehicle exhaust, dust, and straw burning. However, with the continuous

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1874–1884, 2019. https://doi.org/10.1007/978-981-13-3648-5_241

Design of On-Line Measurement System for Fine Particle …

1875

increase in the number of motor vehicles, road traffic emissions have become an increasingly important source of air pollution [2]. At present, the methods for measuring the number concentration of fine particles mainly include light diffraction, microscopy, and electrical mobility classification. In combination with optical particle counting [3], condensation particle counting, and aerosol electrometer techniques, the measurement of fine particle number concentration and particle size distribution can be achieved [4]. Among them, light diffraction has difficulty measuring the absolute concentration of particulate matter, and microscopic analysis is only applicable to static single particles, so both are seldom used in the on-line measurement of fine particles; diffusion charging [5], electrical mobility classification and kinetic classification have instead gradually become the main means of measurement [6]. In this paper, according to the requirements of measuring the small particle sizes and high concentrations of motor vehicle exhaust particles, a device for measuring the number concentration of fine particles in motor vehicle exhaust is developed using key technologies such as unipolar diffusion charging and an aerosol electrometer. In addition, the accuracy of the measurement system is evaluated through comparison tests with commercial equipment from abroad.

2 Overall Design

Because of the high concentration of motor vehicle exhaust gas, dilution sampling systems are required for pretreatment. At present, there is a series of mature commercial dilution sampling systems, such as the FPS4000 dilution system of DEKATI [7] (Finland). Part of the diluted gas is sampled and sent to the charger in the fine particle number concentration measurement system, and the excess gas is discharged as exhaust. The integrated measurement system for fine particle number concentration is shown in Fig. 1. It is mainly composed of a unipolar diffusion charger and a wide-range Faraday cup electrometer.

Fig. 1. Integrated block diagram of on-line measurement system for fine particle concentration

1876

Z. Cong et al.

In the charger, clean air enters the ionization chamber at a certain pressure, and the high-concentration free ions generated in the ionization chamber are ejected out of the chamber and collide with the particles in the sample gas, so that the particles are charged with high efficiency. The charger power control module controls the unipolar constant-current discharge voltage and the back-end capture voltage: the free ion concentration is changed by regulating the constant-current discharge voltage, and by changing the trapping voltage, free ions are trapped and the lower particle size limit of the trapped particulate matter can be changed. The charged particles are then sent to the Faraday cup electrometer, where they collide with the sensitive mesh electrode and lose their charge; the resulting current is measured by a high-sensitivity electrometer, and the total particle concentration in the gas is obtained by an inversion algorithm. The number concentration of the sample gas particles can be calculated by the following formula:

N = I / (Q · Σ_{n=1}^{∞} η_n · n · e)    (1)

where N is the number concentration of fine particles in the sample gas (#/cm³), I is the current from the particulate matter measured by the electrometer (A), n is the number of charges carried by a single particle, η_n is the fraction of particles carrying n charges in the unit flow rate, e is the elementary charge (1.6 × 10⁻¹⁹ C), and Q is the flow rate of the sample gas.
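A direct implementation of the inversion in Eq. (1) might look like the following sketch. The charge-fraction table η_n and the sample numbers are made up for illustration; in practice η_n comes from the charger calibration.

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C

def number_concentration(current_a, q_cm3_per_s, charge_fractions):
    """Invert the Faraday-cup current to a number concentration, Eq. (1).

    N = I / (Q * sum_n eta_n * n * e), where charge_fractions maps the
    number of elementary charges n to its fraction eta_n. With Q in cm^3/s
    and I in A, the result is in #/cm^3.
    """
    mean_charge = sum(n * eta for n, eta in charge_fractions.items())
    return current_a / (q_cm3_per_s * mean_charge * E_CHARGE)

# Hypothetical calibration: 60 % singly, 30 % doubly, 10 % triply charged
eta = {1: 0.6, 2: 0.3, 3: 0.1}
# 1 pA measured at a 16.7 cm^3/s (about 1 L/min) sample flow
print(f"{number_concentration(1e-12, 16.7, eta):.3g} #/cm^3")
```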

3 System Critical Modules and Performance Testing

3.1 Particle Charger

3.1.1 Structural Design
In previous studies, most charging devices directly ionized and charged the aerosol [8], which easily contaminates the discharge needle: particles deposit on the needle, reducing its ionization efficiency and requiring regular cleaning. In addition, the lack of a free ion removal device affects the accuracy of the particle charge measurement. Therefore, this article presents a long-life airborne particulate charging device with a free ion removal function [9]. The schematic diagram of the unipolar particle charger with the free electron capture function is shown in Fig. 2. The particle charger works as follows: (1) clean air enters the corona chamber at a certain flow and pressure; (2) the tip of the discharge needle, which is connected to the high voltage discharge module, ionizes the clean airflow, generating a large number of free electrons; (3) the clean airflow ejects the free electrons out of the corona chamber; (4) the sample gas enters the charging area at a certain flow rate, colliding with the free electrons in the opposite direction, and the free electrons attach to the particles to complete the charging process;


(5) charged particles and free electrons enter the free electron trapping area under the action of the gas flow. The excess free electrons are collected by the ground electrode under the action of the trapping electric field, while the charged particles are deflected only slightly by the capture field and are discharged from the outlet.

Fig. 2. The schematic diagram of a particle charger with free electron capture function

3.1.2 Theoretical Analysis
A key parameter for evaluating a unipolar charger is the N_i·t product. In the charging chamber, the ion concentration can be expressed as [10]:

N_i = I_i / (e · Q)    (2)

where I_i is the current measured on the ground electrode when only clean air flows in, and Q is the volume flow rate of the gas at the orifice of the charging chamber. The average charging time of the particles can be expressed as:

t = V_a / (Q_a + Q_c)    (3)

where V_a is the volume of the charging region, Q_a is the flow rate of aerosol into the charger, and Q_c is the flow rate of clean air into the charging chamber. For unipolar particle charging models, there are three regimes according to the Knudsen number Kn [11]: the free-molecular regime (Kn ≫ 1), the transition regime (Kn ≈ 1), and the continuum regime (Kn ≪ 1). The Knudsen number is given by:

Kn = 2λ / d_p    (4)

where λ is the mean free path of the gas and d_p is the diameter of the charged particle. For particles with a diameter d_p, the mean numbers of charges, n_d and n_f, obtained by unipolar diffusion charging and field charging [12], respectively, can be expressed as follows:

1878

Z. Cong et al.

  2pe0 kTt dp c i dp e 2 N i t nd ¼ ln 8pe0 kT e2

ð5Þ

3e pe0 Edp eZi Ni t e þ 2 e 4e0 þ eZi Ni t

ð6Þ

2

nf ¼

where k is the Boltzmann constant, Tt is the absolute temperature, ci is the mean pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi thermal velocity of the ions (ci ¼ 8kT=pme , me is the mass of the electron), e0 is the dielectric constant of the air, e is the relative permittivity of the particles. The theoretical value of the particle charge number np is estimated by the following equation: np ¼ nd þ nf

ð7Þ

3.1.3 Charger Performance Test In order to evaluate the corona discharge characteristics of the charger, an experimental platform test was set up, including an air pump, a diffusion drying tube (3062, TSI, USA), a high-efficiency filter, a plate type electrostatic precipitator [13], and a charge power control module (C50N, EMCO, USA), electrometer (6514, Keithley, USA), and a particle charger. The experimental gas connections are shown in Fig. 3.

Fig. 3. Charger corona discharge characteristics experimental platform

First, the discharge stability of the charger is tested. By monitoring the unipolar corona discharge current and using the built-in hardware PID circuit, the discharge current of the charger can be stabilized. At the same time, the constant current discharge current can be controlled within a certain range through the regulation of the reference voltage. In order to verify the stability of the discharge of the charger, a longterm (8 h) current dithering at a discharge current of −5 lA was tested, as shown in Fig. 4a. The 8-h measurement results showed that although the operating temperature

Design of On-Line Measurement System for Fine Particle …

1879

of the charger has a certain fluctuation, the discharge voltage can be adjusted by using the hardware PID to achieve an average discharge current of −5 lA and an error less than ±1%.

Fig. 4. a Corona discharge current stability test results; b Average charge of particles with different particle sizes

The charger was used to charge particles with different particle sizes. The theoretical and experimental values of the average charge of particles of different particle sizes are shown in Fig. 4b. The experimental data is plotted after the average of three measurements. It can be seen that the experimental data agree well with the theoretical values. The maximum relative error is less than 23.3% when the particle size is in the range of 0.1–0.4 lm and less than 10.2% when the particle size is in the range of 0.4– 1 lm. The error is caused by the diffusion or deposition process in the charger. As can be seen from the Fig. 6, the smaller the particle size, the more particle loss in diffusion process. 3.2

Wide-Range Faraday Cup Electrometer

3.2.1 Wide-Range Faraday Cup Electrometer Design In view of the high concentration of motor vehicle exhaust particulates, the commercial Faraday cup electrometer measurement range (such as TSI 3068B electrometer: ±12.5 pA) cannot meet the detection requirements, and a wide-range Faraday cup electrometer needs to be designed. We have made the following improvements. The improved wide-range Faraday cup electrometer includes the Faraday cup and the electrometer. Its working principle is as follows (the schematic is shown in Fig. 5): Charged particles enter the Faraday cup from the inlet. And when the charged particles are on the metal mesh, it is trapped and loses its charge. The charge lost in the metal net is connected to the micro current detection board through the sensitive electrode in the electrometer and the current value is measured. Since the current signal is of fA magnitude, a shield is used to shield the micro current detection module from external signals.

1880

Z. Cong et al.

Fig. 5. The wide-range Faraday cup electrometer schematic

According to the basic principle of FCAE, the number concentration of aerosols N can be obtained by accurately measuring the current caused by the charged particles and the volume flow of the aerosol through the Faraday cups [14]: N¼

I/C V  ne  gFCUP =60

ð8Þ

where I is the current value measured by the electrometer (A); c is the calibration parameter of the electrometer; V is the volumetric flow rate of the aerosol (L/min); n is the average charge number of all particles; e is the charge of the element; gFCUP is the detection efficiency of the Faraday cup. 3.2.2 Electrometer Performance Test Experiment The Keithley 6221 (the range is ±2 nA) was used as the current source to calibrate the range of the self-developed electrometer. The calibration schematic is shown in Fig. 6. The Keithley 6221 current source was used to adjust the current in the range of −500 to −100 pA and 100–500 pA with an interval value of 100 and 10 pA is the interval value within the range of −100 to 100 pA, and each position is measured five times. The result of the electrostatic measuring range measurement is shown in Fig. 7a. As can be seen from the figure, the self-study electrostatic measuring range can reach −500 to 500 pA. In order to ensure the authenticity of the effective signal, we make the electrometer in a room temperature (22 °C) and carry out a 22-h zero drift test. The test results are shown in Fig. 7b. The noise value is within ±2 fA, which meets the design requirements. In order to get the performance of the Faraday cup electrometer, we set up an experimental platform, which includes a standard aerosol generator (7388L, MSP, USA), a charge of particulate matter, two flat electrostatic depositors, condensed nuclear particle counter (3788, TSI, USA), charger power control module, electrometer (3068B, TSI, USA), radioactive charge neutralizer (241 Am) and self-developed faraday cup electrometer. Experimental gas circuit connection is shown in Fig. 8.

Design of On-Line Measurement System for Fine Particle …

1881

Fig. 6. Schematic diagram of electrostatic meter calibration

Fig. 7. a Range of electrometer; b Electrometer’s 22-h zero test

Fig. 8. Faraday cup electrometer overall performance measurement gas circuit connection diagram

Faraday cup electrometer performance test results were shown in Fig. 9. Compared with the results of TSI-3068B (within the range), the correlation is better than 99%.

1882

Z. Cong et al.

Fig. 9. Faraday cup electrometer test results

4 Overall System Performance Tests A self-developed fine particle number concentration measurement system of vehicle exhaust and the AVL-APC 489 exhaust gas monitor are used to build an experimental platform (Fig. 10). The exhaust pipe of the vehicle is connected to the sampling tube, and then diluted by a two-stage diluter. The self-developed system and the comparison device AVL-APC 489 exhaust gas monitor are connected at the same position. The test cycle uses NEDC sequencing.

Fig. 10. Schematic diagram of test platform for vehicle exhaust particle number concentration test

The correlation between the AVL-APC 489 and the self-research instrument urban circulation and the correlation of the two instruments are shown in Fig. 11a, b respectively. From the test results, it can be seen that the concentration of vehicle exhaust particulates is mainly concentrated on the order of 106. With the change of test conditions speed, the trend of AVL-APC 489 exhaust gas monitor and integrated system is consistent, and the correlation is better than 87.2%. The relationship between them is: YAVL ¼ 1:04Xour þ 0:02. The experimental correlation error is the difference between the self-developed instrument and the AVL-APC 489 measurement method. The self-research instrument is based on the diffusion charge principle, while the AVL-

Design of On-Line Measurement System for Fine Particle …

1883

APC 489 is based on the condensation growth and light scattering principles. The experimental results verified the accuracy of the self-developed instrument’s measurement and provides a method for calibration of the device.

Fig. 11. a Comparison between the urban circulation of AVL-APC 489 and self-developed instruments, b correlation between self-developed instruments and AVL-APC 489

5 Conclusion (1) The design of the unipolar charger with particle collection function based on the principle of diffusion charge is completed. We used a high-voltage corona discharge produces unipolar free ions to efficiently charge fine particles. In addition, a free ion trapping zone with adjustable trapping voltage is designed, which can effectively eliminate the interference of the presence of free ions on the back-end measurement results and ensure the authenticity. The experiment result shows that discharge process can achieve an average discharge current of −5 lA and an error of 7 g/kWh) of the diesels were urea pump malfunction and discontinued refilling of urea. Vehicles with the aftertreatment system in malfunction, in average, emitted as much as 7 times of the NOx emitted from vehicles with the aftertreatment system in sound operating conditions. The OBOLS system we developed was able to select vehicles of excessive NOx emissions with a high efficiency, as well as to provide accurate positioning information, such as drive paths, and with the capability of statistic analysis of emission data. It provides the government authorities for environment protection a new method to layout policies and regulations in a more reasonable and scientific way, and enforce these regulations more efficiently, which is worthy of further study and nationwide implementation. Keywords: Heavy duty diesels oxide



On-Board-On-Line surveillance



Nitrogen

1 Introduction During the 12th 5-year EDP term, the municipality of Wuhan is promoting industrial upgrading and reducing emissions from motorized vehicles, construction, cuisine, plant, etc. with an effort of “restoring blue sky”, which has resulted in remarkable air quality improvement and decrease in PM emission. However, as an exception, NOx emission has increased despite of decreased emission PM10, PM2.5, SO2 year over year in recent year [1]. Take 2017 as an example, the average NO2 concentration increased 8.7% over 2016 to a level of 50 mg/m3; and days of excessive NO2 emission increased 19 days over 2016 to 38 days, i.e. being doubled. As reported by reference [2], with cleaner manufacturing facilities and better emission control from coal burning, emissions from motorized vehicles becomes a more and more important source of air pollution, which has been identified as an important cause of foggy weather. According to the statistics between 2012 and 2016 shown in “Annual Report of Pollutions from © Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1885–1895, 2019. https://doi.org/10.1007/978-981-13-3648-5_242

1886

W. Gu et al.

Motorized Vehicle Emission of Wuhan” [3], emission of nitrogen oxides (NOx) has been steadily increasing over the last 5 years, with a percentage over the total NOx emission in the entire city climbing from 31% (2012) to 39% (2017). The number of heavy duty diesel vehicles represents only about 3% of the total number of motorized vehicles, but emitted about 80% of NOx by all vehicles. This indicates NOx emissions from heavy duty diesels has largely affected the air quality in Wuhan. However, it has been a great challenge to supervise and control emissions from heavy duty diesels in practice, for examples: firstly, with G4 and G5 diesels put into services, NOx emission becomes more and more important, and there are still missing national standards (methods and limits) in measuring and detection of NOx emission; secondly, currently implemented annual check is not able to assure vehicles successfully passed the annuals check meet regulations in real road services; thirdly, our environment protection agents have discovered some heavy duty diesels had their aftertreatment system not in sound service conditions, and some with OBD functions such as alerting and torque output limitation due to excessive NOx emission, being disabled. In this scenario, information technology has provided us an effective solution to collect emission and OBD data, with realtime on-line and remote surveillance capability, that can largely reduce the work load of environment protection agencies. In fact, some scholars have studied remote and on-board surveillance of vehicles recently. For example, Zeng used this method for monitoring safety of driving public buses through CAN signals [4]. Yuan transmitted OBD-II signals wirelessly with GIS technologies for provide an early alert of possible malfunctions and for minimizing down time of vehicles [5]. 
This paper disclosed a research work concerning a construction of an on-board and on-line surveillance (OBOLS) system including on-board devices for collecting information from CAN signals of vehicles, such as NOx emission concentration, OBD MIDs, engine operating conditions, etc. and sending the above information through public wireless network to a server, to form a big data system. This platform has shown a great efficacy in detecting vehicles with their aftertreatment devices disabled and emitting excessive pollutants, which shall be monitored closely by the environment protection agencies.

2 On-Board and On-Line Surveillance System and Remote OBD Platform As required by Act to Restoring Blue Sky in Wuhan 2016 (WZB (2016) 10) [6] to reinforce emission control from motorized vehicles, and to delegate the Bureau of Environment Protection of Wuhan, Wuhan Motorized Vehicle Emission Control & Administration Center, and Lotusfairy Power Technologies Co. jointly developed an On-Board and On-Line Surveillance (OBOLS) system with remote OBD functions, for monitoring and positioning heavy duty diesel (HDD) with an On-Board Signal Collection device (OBS) and GPS, to collect and send emission data from the HDDs in realtime through a public wireless network. The OBOLS system began construction in 2016 and completed in 2017.

Application of On-Board-On-Line Surveillance in Environment …

2.1

1887

OBOLS Structure

OBOLS system consists of OBS devices and servers. OBS collects and transmits information to servers wirelessly; while servers save, analyze and share the information. OBS collects information, positioning vehicle, encoding, encryption and sending data; servers contains functions such as decryption, decoding, database, specialist system, service terminals. 2.1.1 Collecting Data OBS uses MC9S08DZ60 embedded processor from Fiscal as the ECU, with high precision AD converters to read from temperature and pressure sensors, deploying a CAN transceiver from NXP to communicate with vehicle CAN bus for gathering engine operating information, NOx and OBD signals of the aftertreatment system, also equipping a GPS and Beidou dual positioning module N303 with an accuracy better than 10 m. The box is rated IP67 waterproof for improved reliability. a. Collecting Data From CAN bus and manually input information, OBS collect data such as information about the vehicle owner, vehicle registration, air intake temperature and pressure, rotating speed, output torque, fuel consumption, urea level, temperatures and NOx concentration before and after SCR, OBD MIDs, GPS positioning, etc. Encoding and encryption algorithm assures information being sent via public network with a CDMA and GPRS dual mode wireless transceiver be safe and secure. b. Database Based on MySQL database and using thrift of Facebook as RPC, and Apache Shiro for permit management, we were able to share data over network with full capabilities of data storage and backup. The database server contains multiple functions such as information analysis, calculation and management center (IACM), storage center, backup center, etc. 
2.1.2 Specialist System Specialist system (SS) tacks emission, location and operation data of vehicles, filters out vehicles emitting excessive pollutants, labels them and put them in the black-list, and further provides indexing, searching, interprets OBD information and names out malfunctions on-line and off-line, and finally notifies concerned parties the vehicles with malfunctioned aftertreatment devices unattended in a defined period of time via email, APP, WeChat, QQ, instant massages, etc. 2.2

Main Functions

2.2.1 Judging Vehicles not Meeting Regulations Judging if a vehicle meets national emission standards with the method defined in the national OBD regulation HJ437-2008 [7], and OBD MIDs and relative NOx emission received from the vehicle.

1888

W. Gu et al.

2.2.2 Statistical Analysis Statistically analyzing on-line ratio, percentage of malfunctioned aftertreatment devices, and of vehicles with emissions exceeded national standards, exhausted NOx concentration, mass in a particular period of time for a particular vehicle; locating the cause of any malfunction, etc.

3 Judging Algorithm 3.1

Calculating Relative NOx

3.1.1 Data Filtering Any data point meeting the following criteria is filtered out and without further consideration a. Engine speed of 70% less than the idle, e.g. radiotherapy and chemotherapy > radiotherapy > operation are the highest. Clinical phases of cancer show significant correlation with years of survival (p > 0.05). Except for death rates of 1A and 1B which are lower, death rates of other phases in three years are higher than 80%. Chronic kidney illness does not show significant correlation with years of survival (p > 0.05). Chronic obstructive pulmonary disease and years of survival are not significantly related (p > 0.05). High blood pressure and years of survival are significantly related (p < 0.05). Among others, death rate of patients without high blood pressure is higher. 3.1

Three-Year Survival Prediction Model for Lung Cancer Treatment

In this study, factors that are significantly associated with the survival risk of lung cancer treatment: age, gender, therapies, clinical phases of cancer, and hypertension as the predictors of the model, and establish a prediction model by neural network and Logistic regression. When ANN model shows Sensitivity = 77.6 and Specificity = 76.8, it can include 83% (AUROC, Area Under ROC curve) model precision. And LR analysis, when Sensitivity = 77.3 and Specificity = 73.8, it includes 81% (AUROC, Area Under ROC curve) model precision.

Fig. 5. Comparison of ROC curve of ANN and LR models

Constructing Prediction Model of Lung Cancer Treatment Survival

1933

According to ROC curve result, curve area of ANN model is larger than that of LR model. Thus, it concludes that in comparison to LR, ANN can more precisely predict survival rate of lung cancer patients in 3 years (Fig. 5).

4 Discussion Although in different countries, there are various risk predication models on lung cancer, this study still shows important meaning since it adopts database of national health insurance and includes 3 years of death probability prediction. Although it includes literatures and comorbidity factors suggested by physicians, there can be limitation in prediction model. Database of national health insurance in Taiwan precisely shows patients’ past medical history. In the future, it should include correlation analysis of all factors and recognize hidden related ones in order to upgrade prediction precision of model. This study treats database of national health insurance as research figures and there can be limitation. All figures are actual medical statistical figures, which do not include the screening of patients’ personal life habit and constraint factors. Interestingly, this study conducts correlation analysis between high blood pressure and death rate of lung cancer in three years and realizes that survival rate of patients with high blood pressure is higher than those without high blood pressure. According to literatures, some medicine of high blood pressure can protect patients from lung cancer. After taking the medicine, they tend to lower the possibility of lung cancer. However, construction of prediction model by different measures can enhance precision of prediction model. Two models established by this study show more than 80% of precision and they are prediction models with high precision. In order to effectively assist with physicians’ clinical diagnosis and patients’ options of therapy, this study constructs the result in the website and by simple options, prediction result can be immediately obtained.

References 1. Welfare, T.M.o.H.a.: 2016 Cause of Death Statistics Analysis (2016) 2. Young, R.P., et al.: COPD prevalence is increased in lung cancer, independent of age, sex and smoking history. Eur. Respir. J. 34(2), 380–386 (2009) 3. Yu, Y.-H., et al.: Increased lung cancer risk among patients with pulmonary tuberculosis a population cohort study. J. Thorac. Oncol. 6(1), 32–37 (2011) 4. 李龍驣, et al.: 肺結核與肺癌:病例對照研究. 台灣醫學. 1(2), 176–184 (1997) 5. Grose, D., Milroy, R.: Chronic obstructive pulmonary disease a complex comorbidity. J. Comorbidity 1, 45–50 (2011) 6. Wang, S., et al.: Impact of age and comorbidity on non-small-cell lung cancer treatment in older veterans. J. Clin. Oncol. 30(13), 1447–1455 (2012)

Research on the Combination of IoT and Assistive Technology Device—Prosthetic Damping Control as an Example Yi-Shun Chou(&) and Der-Fa Chen National Changhua University of Education, Changhua, Taiwan [email protected]

Abstract. In this study, the concept of the Internet of Things is referenced in the lower prosthetic damping controller. Through wireless communication, the wearer can use the mobile phone to directly control the set value of the damping coefficient. During the travel, the damping controller calculates the walking speed between the three axes. The damping coefficient is automatically adjusted by the closed feedback control model, so that the air pressure in the lower limb joints is automatically adjusted to a comfortable range so that the wearer can perform more rapid movement and get a more comfortable feeling, and retain personal dignity. Keywords: IoT

 Damping  Close loop

1 Introduction People need to cut off the lower extremities because of accidents, for example, due to traffic accidents, plant operating machinery errors, illnesses, etc. When the medical treatment is over, they need to wear prosthetic limbs to maintain their daily routine, but usually the place that most wearers feel uncomfortable is the unbalanced walking of the two feet makes the design of the knee joints of the prosthetic body below important. This will make the wearer have the biggest problem that feels different. At present, the knee joint design of the lower body prosthesis is mostly mechanical control, so the internal pressure of the joint is adjusted to make the wearer feel comfortable and impatient. In addition, although the wearer uses the lower body prosthesis, it can still perform fast walking, jogging or running. Motion, if you can not dynamically adjust the pressure inside the knee joint, so that the entire joint movement can not perform more rapid movement, so dynamic adjustment is necessary. This study hopes to change the original mechanically controlled knee joint to an electronic one through the concept of the Internet of Things and adjust it at any time through the wearer’s feelings, even through machine learning and closed-loop models for automatic control, without the wearer having to perform The pressure inside the knee joint must be adjusted before fast movement, and the dignity of the wearer is also preserved.

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1934–1938, 2019. https://doi.org/10.1007/978-981-13-3648-5_247

Research on the Combination of IoT and Assistive Technology …

1935

2 Literature Review The study pointed out that the air pressure of the prosthetic knee can be dynamically adjusted through PID control [1]. However, this is dynamically adjusted by a mechanical mechanism on a platform, and there is no physical help for the wearer. The study pointed out that the pressure of the prosthetic knee joint can be automatically adjusted through two double-linkages [2]. This study hopes to simulate the function of the real knee joint through different mechanical designs, so that the wearer does not feel the prosthesis, but only on the platform to verify functionality, does not explain the wearer’s feeling of use. As early as 2013, literature pointed out [3] that with the continuous advancement of technology, the promotion of wireless communication transmission technology, the diversification of sensors, the smaller and smaller identification chips, and the lack of power supply are all necessary. The main reason for the Internet of Things. Sensorequipped items can be sent through wireless communication to send out the required information. The identity of the chip can distinguish who is the user or the user’s basic information. The above two programs can put all kinds of information. Connecting to the Internet through a communicator allows people who need to know the information to use it conditionally. This use can be based on analyzing data and making decisions.

3 Methods In this study, a microcontroller, a Bluetooth communicator and a three-axis sensor were combined to form an electronic damping control system with the concept of the Internet of Things. The mechanical structure was used to dynamically adjust the internal gas pressure of the knee joint to allow the wearer to move while in motion. Get a comfortable feeling. The microcontroller is mainly controlled by a sensor input signal and an output motor to perform a closed-loop model control. When the sensor detects the difference in the wearer’s traveling speed, the position of the motor is automatically adjusted to change the damper. The pressure on the internal gas of the knee joint allows the joint movement to be measured, rather than causing resistance or loose feeling (Fig. 1). This study uses a closed-loop model because the triaxial sensor reads the displacement in one direction and then converts it into the wearer’s travel speed, but because the internal air pressure of the joint creates resistance or looseness, the wearer feels uncomfortable. The need to automatically adjust the motor to change the position of the damper, thus forming an input signal through the microprocessor after the calculation of the output motor position, which is to change the original use of simple mechanical model can only fix the pressure of the biggest difference. The closed-loop model needs to have different rules to calculate the ratio of input and output. Therefore, the concept of the Internet of Things is used to upload data to the cloud to calculate the regular expression, and then download this control specification to the controller inside the prosthetic as an optimization. Control so the wearer can use the control parameters that are more suitable for him after a period of time (Fig. 2).

1936

Y.-S. Chou and D.-F. Chen

Cloud PC

Mobile

DC MOTOR

Digital Accelerometer

Bluetooth V4.2

ARM Cortex M3

DC

Battery 3600mAH

DC Power

Fig. 1. System block

Cloud

Rule engine for PC / AI

Mobile

Bluetooth V4.2 Digital Accelerometer IN

ARM Cortex M3

Close Loop Model

parameters for pressure sensor

DC MOTOR

OUT

Fig. 2. Close loop controller model

4 Result Because it is a wearable device, battery life is taken into consideration in the entire design. Therefore, the triaxial sensor that determines the walking speed must be changed to an interrupt mode to extend the power, and the rest of the time needs to use the sleep mode to save power.

Research on the Combination of IoT and Assistive Technology …

1937

The three-axis sensor will generate three movements such as X, Y, and Z due to displacement. In fact, the wearer has only one direction of travel, and the remaining two directions can be neglected, but the circuit board must be fixed at the same position. Although the wearer’s different traveling speed will affect the displacement of the sensor, there are many ways to convert the displacement into the traveling frequency. This study uses the self-generated frequency of the sensor as the basis. The wearer uses this prosthetic arm. In slow motions, such as walking normally or slightly faster, the slower walking will automatically adjust without feeling uncomfortable. For fast running, the response will not be fast enough. There was an uncomfortable feeling of adjustment, but it returned to a normal feeling after about 5 s. The wearer will use the mobile phone and Bluetooth function to upload the speed of travel to the cloud for analysis. However, wearers prefer to control themselves to adjust the desired comfort level.

5 Discussion and Suggestion This study found that if the wearer can adjust the internal pressure of the lower body prosthetic knee, he can have the greatest comfort because of the wearer’s own feeling. Therefore, the mechanical method can’t always allow the wearer to adjust himself and needs to be improved. The electronic damping mechanism controller designed for this study to adjust the knee joint pressure is necessary and helpful to the wearer. This study originally intended to use the concept of the Internet of Things to use the data detected by the wearer’s travel speed through the three-axis sensor, and use Bluetooth communication to connect with the wearer’s mobile phone to send data to the cloud space for analysis and decision making. This is not significant because most of the wearers are walking at normal or slightly slower speeds, so that the wearer can meet most of the demands through mobile phone control. Although current wearers can adjust themselves to solve the problem, it is necessary for the controller to automatically adjust the internal pressure of the lower limb prosthetic knee through a machine learning mode. In the future, the response can be adjusted so that the wearer does not feel, This also does not require manual control to be able to meet a variety of movement patterns, so that the prosthetic with the actual human knee is the same is the biggest goal.

References 1. Bevilacqua, V., Dotoli, M.: Artificial neural networks for feedback control of a human elbow hydraulic prosthesis (2014) 2. Fu, Q., Wang, D.-H., Xu, L., Yuan, G.: A magnetorheological damper-based prosthetic knee (MRPK) and sliding mode tracking control method for an MRPK-based lower limb prosthesis (2017) 3. Gubbi, J.: Internet of Things (IoT): a vision, architectural elements, and future directions (2013)

1938

Y.-S. Chou and D.-F. Chen

4. Xu, L., Wang, D.-H., Fu, Q., Yuan, G., Hu, L.-Z.: A novel four-bar linkage prosthetic knee based on magnetorheological effect: principle, structure, simulation and control 5. Nandy, A., Mondal, S., Rai, L., Chakraborty, P., Nandi, G.C.: A study on damping profile for prosthetic knee 6. Pandit, S., Godiyal, A.K., Amit: An affordable insole-sensor-based trans-femoral, prosthesis for normal gait (2018)

A Study on the Demand of Latent Physical and Mental Disorders in Taipei City Jui-hung Kao1(&), Horng-Twu Liaw2, and Po-Huan Hsiao3 1

Computing Center, National Taipei University of Nursing and Health Sciences, Taipei, Taiwan [email protected] 2 Department of Geography, College of Science National Taiwan University, Taipei, Taiwan 3 Fire Department, New Taipei City Government, New Taipei City, Taiwan

Abstract. Like everyone else, the disabled have the human rights towards independence and autonomy. Owing to various factors, their disabilities lead to their often requiring external assistance for a life of independence and autonomy. Living independently on their own for the disabled has gradually become the systematic goal which society strives for. Under this presupposition, measures and facilities to support autonomous and independent living of the disabled should be the research focus, regarding the design and how they could become social progressive measures for the disabled. Spatial analysis will be introduced to discover zones in urgent needs of insertion of social welfare resources. The research mainly makes use of the 2015 Taipei City Government database of the disabled and statistical information of the Department of Social Welfare, Taipei City Government. Through GIS displays, states of distribution of the disabled and related resources are depicted. By integrating Taipei City road network diagrams and statistical information of the disabled from the Department of Social Welfare, areas of coverage of social welfare agencies (SWAs) supporting the disabled are calculated, as zones which these agencies are capable to serve the disabled. For areas which they cannot serve, improved relaxed variable kernel density estimator is employed to discover the hotspots. Combining the hotspots with the Ministry of the Interior’s basic statistical areas, calculating weighted population center points, and suitable candidate locations for establishing SWAs supporting the disabled are found. Results of our research indicate in highlighted fashions the importance of strengthening social assistance programs within the health provision system as well as the necessity for co-operation between social workers and health workers to improve the accessibility of medical and health service for the disabled. 
This research focuses on expounding the disabled and the SWAs supporting them, and on analyzing the degree of match between their needs and welfare resources, so as to review how policy execution changes the behaviors of the disabled and their chances of survival, and the necessity for social workers and health workers to work together to improve the disabled's access to medical and health services.

Keywords: Social welfare agencies · The disabled · Weighted population center points · Relaxed variable kernel density estimator

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1939–1946, 2019. https://doi.org/10.1007/978-981-13-3648-5_248

1940

J. Kao et al.

1 Introduction

Different kinds of disabilities have different degrees of impact. Since the body parts affected in those with limb disabilities are the arms, legs or trunk, the main body parts for motion, any impairment to them has both physiological and psychological impacts, and such persons therefore require special care and training [1]. Because of economic development in Taiwan and changing times, in recent years government bodies and social organizations have gradually paid more attention to the disabled. Together with legal amendments, the rights of the disabled in aspects such as medical service, education, care and employment are all better protected than in the past [2]. Since the government has to develop and maintain a suitable service base within what is feasible in the budget, health decision makers and administrators need to utilize existing resources reasonably and to spatially deploy health infrastructure and human resources so as to best satisfy citizens' needs. The same holds for care and treatment services. Since the demand on all kinds of care environments (e.g. families, hospices, hospitals, long-term care institutions, care facilities) to alleviate caring services and absorb their workloads increases incessantly, matching supply and demand depends above all on comprehending the demand [3]. The degree of aging (inhabitants 65 years old or above) of Taipei City is even more apparent than in other developed nations. Overall, the total number of elderly care centers in Taipei City is far below the needs of the elderly population. This issue has received emphasis from the Taipei City government. Official bodies and welfare institutions are pushing forward the development of various kinds of space and conducting needs evaluations for establishing social welfare facilities, including hospital beds as well as facilities for general and daily-life situations. Since 2006, the progress of these plans has been evaluated yearly.
In order to ascertain even more precisely the best locations for short-term construction of these facilities, these bodies need reliable evaluations to precisely locate social welfare facilities. However, existing evaluations cannot provide efficacious information to government departments and social welfare providers. Thus this research focuses on improving the spatial distribution evaluations for locating social welfare facilities. It proposes indexes of service provision and need for locating social welfare facilities. 2SFCA (two-step floating catchment area) distribution maps are drawn according to data on social welfare facilities in the Taipei City area, so as to evaluate how these facilities are distributed.

2 Methods

2.1 Two-Step Floating Catchment Area Method

Referring to related research by Wang and Luo [4], evaluations of grassroots medical resources mainly take a 30-min self-drive, or 15 miles (approx. 24 km), as the reasonable usable zone. However, the traffic environment of Taiwan differs significantly from that of the USA: traffic volume on Taiwanese roads is larger and the roads are generally narrower, so actual vehicular speeds in Taiwan are often slower and the actual life activity zone is correspondingly smaller.

A Study on the Demand of Latent Physical and Mental Disorders …

1941

Thus this research refers to studies by scholars such as Tseng and Chen [5, 6] in considering approachability. A comfortable walking speed is about 1.22 m/s, so it takes about 13 min to walk 1 km. Among the factors influencing service satisfaction in Tseng's research, satisfaction with approachability is highest when the distance between the residence and the service-providing unit is "under 30 min". Hence the optimized distance for this research is 2 km, about 26 min on foot. The 2-km zone reachable by road from the center of each catchment sphere is taken as the reasonable use zone of that sphere's social welfare resources, and calculations follow the two-step floating catchment area method. The two-step floating catchment area method proposed by Luo and Wang [7] broke through the limitation of framing activities by administrative borders, as mentioned above. The method not only considers the possibility of populations seeking medical service across districts, but also establishes a reasonable medical-service-seeking zone, so as to evaluate the spatial accessibility of medical resources. The method is divided into two steps [4, 8]:

1. For each SWA, the possible service population covered within the reasonable service zone (e.g., 2 km) is searched, in order to arrive at the provider-to-population ratio. Figure 1 shows a simple concept diagram of the two-step floating search method. It includes three SWAs (A, B, C), represented by village population centers, and fifteen population points within the catchment spheres of the SWAs. Using administrative region A as an example, the population (points 1, 2, 3, 4, 6, 7, 9, 10) within the 15-km distance zone is eight, so the social welfare resource ratio for administrative region A is 1/8 (each SWA can serve eight disabled persons). The ratio is 1/4 for administrative region B (points 4, 5, 8, 11), and 1/3 for administrative region C (points 11, 13, 15).

2. For each population point of potential need, possible social welfare providers within the reasonable service zone (e.g., 2 km) are searched, and the ratios obtained from the providers' perspective in step 1 are summed. The sum is the desired social service accessibility index. Extending 15 km from each catchment sphere's center point to search for possible social welfare resources, the results indicate that for points 1, 2, 3, 6, 7, 9 and 10 only SWA A lies within 15 km, so their provider-to-population ratios are all 1/8. Similarly for points 5, 8 and 11, only SWA B lies within the 15-km extended zone, so their ratio is 1/4. The catchment sphere of point 4, however, is special: for the disabled living there, two SWAs, A and B, lie within the 15-km extension zone, so the provider-to-population ratio for point 4 is 1/8 + 1/4 = 3/8. Calculated by the traditional administrative-region method, the ratio for the disabled living in village 4 would be 1/6 from the perspective of administrative region B (incl. points 4, 5, 8, 11, 13, 15); considered at the level of the catchment sphere (sphere 4) alone, the ratio would be 0.
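The two steps above can be sketched in Python. This is a minimal illustration, not the authors' implementation: the coordinates, capacities and populations below are invented toy values, and straight-line distance stands in for the road-network distance the paper actually uses.

```python
import math

def two_step_fca(agencies, demand_points, radius):
    """Two-step floating catchment area (2SFCA) accessibility index.

    agencies:      {name: (x, y, capacity)}
    demand_points: {name: (x, y, population)}
    radius:        reasonable service distance (same units as coordinates)
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Step 1: provider-to-population ratio for each agency, counting all
    # demand population that falls within the catchment radius.
    ratio = {}
    for j, (ax, ay, cap) in agencies.items():
        pop = sum(p for (px, py, p) in demand_points.values()
                  if dist((ax, ay), (px, py)) <= radius)
        ratio[j] = cap / pop if pop else 0.0

    # Step 2: for each demand point, sum the ratios of every agency
    # reachable within the same radius; that sum is its accessibility index.
    access = {}
    for i, (px, py, p) in demand_points.items():
        access[i] = sum(r for j, r in ratio.items()
                        if dist((px, py), agencies[j][:2]) <= radius)
    return access

# Toy example: one agency (capacity 1) within 2 km of two villages
# totalling 8 disabled persons; a third village is out of reach.
agencies = {"A": (0.0, 0.0, 1)}
villages = {"v1": (1.0, 0.0, 4), "v2": (0.0, 1.5, 4), "v3": (5.0, 5.0, 3)}
print(two_step_fca(agencies, villages, radius=2.0))
# v1 and v2 each get 1/8 = 0.125; v3 gets 0.0
```

Road-network distances (as used in the paper) would replace `dist` with shortest-path lengths on the street graph; the two-step logic itself is unchanged.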

2.2 Relaxed Variable Kernel Density Estimator

Fig. 1. The two-step floating catchment area (FCA) method

RVKDE is a kernel density estimation algorithm jointly developed by Prof. Yen-Jen Oyang of National Taiwan University and Prof. Darby Tien-Hao Chang of National Cheng Kung University [9, 10]. The basic concept of RVKDE is the random, independent selection of a sample (kernel point) set from the given data distribution, which is then used to construct an estimation function close to the primary data. Suppose the primary data are distributed in a feature space of m dimensions according to a density function f. RVKDE takes the sample set {s_1, s_2, ..., s_n} as kernel points, estimates a kernel function for each kernel point according to the data aggregated around it, and finally takes a linear weighted summation of all kernel functions to arrive at a probability density function $\hat{f}$ estimating f. For a spatial data point v, the estimate is

$$\hat{f}_j(v) = \frac{1}{|S_j|} \sum_{s_i \in S_j} \frac{1}{\left(\sqrt{2\pi}\,\sigma_i\right)^m} \exp\!\left(-\frac{\lVert v - s_i \rVert^2}{2\sigma_i^2}\right)$$

while

$$\sigma_i = d_i \cdot \beta, \qquad d_i = \frac{\bar{R}(s_i)\,\sqrt{\pi}}{\sqrt[m]{(k+1)\,\Gamma\!\left(\tfrac{m}{2}+1\right)}}, \qquad \bar{R}(s_i) = \frac{m+1}{m} \cdot \frac{1}{k} \sum_{h=1}^{k} \lVert \hat{s}_h - s_i \rVert$$


Here $\bar{R}(s_i)$ is a scaled average of the distances between kernel point $s_i$ and its k nearest data points $\hat{s}_h$; $\Gamma(\cdot)$ denotes the Gamma function; and $\beta$ and k are parameters to be set by cross-validation or by the user.
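A minimal NumPy sketch of the estimator, following the RVKDE construction in the text. The sample points and parameter values below are invented for illustration; this is not the authors' implementation.

```python
import numpy as np
from math import gamma, pi, sqrt

def rvkde(samples, k=3, beta=1.0):
    """Build an RVKDE density estimator from `samples` (n x m):
    every kernel point s_i gets its own bandwidth sigma_i derived
    from its k nearest neighbours, then the estimate is the mean
    of the per-point Gaussian kernels."""
    S = np.asarray(samples, dtype=float)
    n, m = S.shape

    # Scaled average distance from each kernel point to its k nearest
    # neighbours: R_bar(s_i) = (m+1)/m * (1/k) * sum of k NN distances.
    dists = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)
    knn = np.sort(dists, axis=1)[:, 1:k + 1]     # drop the self-distance 0
    R_bar = (m + 1) / m * knn.mean(axis=1)

    # Per-kernel bandwidths sigma_i = beta * d_i.
    d_i = R_bar * sqrt(pi) / ((k + 1) * gamma(m / 2 + 1)) ** (1 / m)
    sigma = beta * d_i

    def f_hat(v):
        v = np.asarray(v, dtype=float)
        dist2 = np.sum((v - S) ** 2, axis=1)
        kernels = np.exp(-dist2 / (2 * sigma ** 2)) / (sqrt(2 * pi) * sigma) ** m
        return kernels.mean()                    # 1/|S| times the kernel sum

    return f_hat

# Density should be high near a cluster of points and near zero far away.
pts = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1), (3, 3)]
f = rvkde(pts, k=2)
print(f((0.05, 0.05)), f((10, 10)))
```

In the paper the same estimate, applied to the coordinates of the extremely disabled, is what surfaces the density hotspots.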

3 Results

At the end of 2016, the total population of Taipei City was 2,682,893. Taipei City has 12 administrative districts, 456 service catchment areas and 11,490 basic statistical areas. The disabled population totals 119,443, amounting to 4.45% of the population of Taipei City. There are 16,136 extremely disabled (level III), amounting to 13.51% of the disabled in Taipei City. There are 97 social welfare agencies capable of serving them, with a total client capacity of 3,890. The administrative district with the largest number of basic statistical areas is Shilin, with 1,278; the smallest is Nangang, with 481. Shilin also has the largest extremely disabled population, 1,861; the smallest is Nangang, with 800 (as in Table 1). Employing the relaxed variable kernel density estimator (RVKDE), it is discovered that the extremely disabled are most concentrated in Wanhua, followed by Xinyi (as shown in Fig. 2).

Table 1. Distribution of social welfare agencies and the extremely disabled in Taipei City

District     Area (km2)   Basic statistical areas   SWAs   Client capacity of SWAs   Extremely disabled population
Zhongzheng   7.6071       759                       2      61                        897
Datong       5.6815       567                       10     401                       956
Zhongshan    13.6821      1196                      10     359                       1406
Songshan     9.2878       838                       4      131                       1168
Daan         11.3614      1270                      8      278                       1623
Wanhua       8.8522       767                       5      372                       1487
Xinyi        11.2077      902                       2      57                        1301
Shilin       62.3682      1278                      9      496                       1861
Beitou       56.8216      1160                      25     855                       1548
Neihu        31.5787      1226                      8      243                       1477
Nangang      21.8424      481                       1      20                        800
Wenshan      31.5090      1046                      13     617                       1612

3.1 Redressing Non-approachability to Social Welfare Resources for the Extremely Disabled

Fig. 2. Distribution of social welfare agencies and the extremely disabled in Taipei City

After gathering and organizing the related data, we take Nangang, the district with the greatest lack, as an example and commence spatial analysis, here chiefly meaning the evaluation of the spatial approachability of SWAs. The analysis mainly follows the aforementioned methods of road network analysis and the relaxed variable kernel density estimator. The steps for spatially redressing the non-approachability of the extremely disabled to social welfare resources are as follows:

I. Searching for the 2-km zones which SWAs cannot serve. Employing the relaxed variable kernel density estimator combined with the 2-km road network map of SWA services in Taipei City as the basis, the 2-km zones which SWAs cannot serve are found. We discovered that Nangang has the greatest deficiencies, followed by where Zhongzheng and Neihu meet [as shown in Fig. 3(1)].

II. Spatial analysis and mean center of the 2-km zones which SWAs cannot serve. This research selected Nangang, the district with the greatest lack as manifested by its 2-km SWA non-service zones, to conduct resource-input spatial analysis. We input information such as the extremely disabled population in the non-service zones and the areas of the zones' hotspots into active map layers for forthcoming statistical data conversion. We apply the relaxed variable kernel density estimator and Create Contour to the basic statistical areas containing non-service zones, and conduct mean center analysis of the extremely disabled on the pinpointed areas, to discover points for resource input and their basic statistical areas [as in Fig. 3(2, 3)].
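The weighted mean center used in step II reduces to a population-weighted average of the area coordinates. A small sketch, with hypothetical coordinates and populations standing in for the basic statistical areas:

```python
def weighted_mean_center(points):
    """Population-weighted mean center of (x, y, weight) tuples,
    as used in step II to pick candidate resource-input locations."""
    total = sum(w for _, _, w in points)
    x = sum(px * w for px, _, w in points) / total
    y = sum(py * w for _, py, w in points) / total
    return x, y

# Hypothetical areas: (x, y, extremely disabled population).
areas = [(0.0, 0.0, 10), (4.0, 0.0, 30), (0.0, 4.0, 10)]
print(weighted_mean_center(areas))  # pulled toward the heavier area: (2.4, 0.8)
```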


Fig. 3. 2 km road maps of SWA service provision in Taipei City

4 Conclusion

This essay utilizes data from the basic statistical areas of Taipei City to calculate the SNIS. It employs network analysis to enrich the study with the distance between SWAs and the extremely disabled, and conducts optimization measurements for the extremely disabled demographic group in need of SWAs. For basic statistical areas containing zones without service, mean center analysis is conducted on the extremely disabled population to discover points for resource input and their basic statistical areas. On the other hand, under limited means, the social service resources in the reachable zones may not be able to satisfy those with the highest demand for them. Using 2SFCA to evaluate spatial approachability, we can simultaneously discuss whether the resources in the reachable zones are sufficient. For areas with a relatively low service quantity-to-disability population ratio (SDPR), one may consider raising the service capacity of SWAs, or, by referring to the non-approachability analysis of the extremely disabled to social welfare resources, increasing the number of SWAs. This essay designed a methodology for evaluating the input of spatial resources, based on the spatial approachability of public service facilities and social service provisions. Using the two-step floating catchment area, the relaxed variable kernel density estimator and mean center analysis within this method as our basis, SWAs of future potential need are discovered. The results are used to evaluate the spatial approachability of the SNIS population of each basic statistical area, to serve government departments that lack the budgetary outlay to supply the amount of social welfare service needed. Under a limited budget, locations to deploy social welfare services must be efficaciously selected. Using RVKDE we can discover the priorities for deploying social welfare services, and the same method can be applied district by district. This two-level analysis provides government decision makers with very useful information.

References
1. World Health Organization: International Classification of Functioning, Disability and Health: ICF. World Health Organization (2001)
2. McIntyre, L.L., Blacher, J., Baker, B.: The transition to school: adaptation in young children with and without intellectual disability. J. Intellect. Disabil. Res. 50(5), 349–361 (2006)
3. Schuurman, N., Martin, M., Crooks, V.A., Randall, E.: The development of a spatial palliative care index instrument for assessing population-level need for palliative care services. Health Place 49, 50–58 (2018)
4. Wang, F., Luo, W.: Assessing spatial and nonspatial factors for healthcare access: towards an integrated approach to defining health professional shortage areas. Health Place 11(2), 131–146 (2005)
5. Tseng, K.-C.: Factors related to the long-term care utilization and satisfaction among caregivers: use of the behavioral model of health services utilization. Master Thesis, Graduate Institute of Nursing, Taipei Medical University (2002)
6. K.-Y.C., Chen, S.-C., Lu, C.-Y.: The application of frailty prevention by walking speed measurement in community. J. Exerc. Physiol. Fitness 21, 51–58 (2015)
7. Luo, W., Wang, F.: Measures of spatial accessibility to health care in a GIS environment: synthesis and a case study in the Chicago region. Environ. Plan. B Plan. Des. 30(6), 865–884 (2003)
8. McGrail, M.R., Humphreys, J.S.: Measuring spatial accessibility to primary care in rural areas: improving the effectiveness of the two-step floating catchment area method. Appl. Geogr. 29(4), 533–541 (2009)
9. Oyang, Y.-J., Hwang, S.-C., Ou, Y.-Y., Chen, C.-Y., Chen, Z.-W.: Data classification with radial basis function networks based on a novel kernel density estimation algorithm. IEEE Trans. Neural Netw. 16(1), 225–236 (2005)
10. Oyang, Y.-J., Ou, Y.-Y., Hwang, S.-C., Chen, C.-Y., Chang, D.T.-H.: Data classification with a relaxed model of variable kernel density estimation. In: Proceedings of the 2005 IEEE International Joint Conference on Neural Networks (IJCNN'05), vol. 5, pp. 2831–2836. IEEE (2005)

Real-Time Analyzing Driver's Image for Driving Safety

Kuo-Feng Wu1, Horng-Twu Liaw2, and Shin-Wen Chang3

1 National Taipei University of Nursing and Health Sciences, Taipei, Taiwan
[email protected]
2 College of Management, Shih Hsin University, Taipei, Taiwan
3 Fu Jen Catholic University, Taipei, Taiwan

Abstract. It is very important to alert drivers about the driving environment and possible crashes with other vehicles, and developing real-time automotive driver assistance systems has attracted much attention recently. To accomplish such a system, we have used a camera mounted on a vehicle to capture road scenes for lane and preceding-vehicle detection. Earlier image tracking captured images with a color webcam, but a webcam cannot work in the dark and is easily influenced by shadow, and shadow changes on the driver's face cannot be avoided when driving on the road. In this thesis we avoid shadow changes by using a Kinect: we obtain depth information of the driver's head image, then process and track it, using the motion of the driver's head as the line of vision for driving safety. Research method: first, the background is cut out and the foreground obtained by thresholding. After morphology and connected-component processing, histogram and face features are used to identify the face, which is then tracked by a particle filter to obtain the head motion. The driver's line of vision is represented by the angle of head rotation: head rotation changes the apparent length of the shoulders, so the angle is estimated by dividing the short shoulder length by the long shoulder length.

Keywords: Depth image · Head tracking · Driving safety

1 Introduction

1.1 Background

Recently, research on intelligent transportation systems (ITS) and autonomous guided vehicles (AGV) has become more popular, and safe driving has the greatest development potential. Using computer vision technology to analyze road images and detect road lines and vehicle information is a necessary function of safe driving. Such a system can help drivers understand the status of the car and provide an early-warning mechanism to avoid accidents, improving the safety, efficiency and comfort of transport systems while reducing the impact of traffic on the environment. When switching lanes, although drivers can use the rear-view mirror to obtain information about vehicles to the side and rear, the visual dead area at the side of the car means the driver does not get enough information to make safe and correct decisions.

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1947–1951, 2019. https://doi.org/10.1007/978-981-13-3648-5_249

It is possible to overcome the problems caused by the visual dead angle by allowing the driver to extend the view about 45° to the side of the vehicle (Bowles [1, 5]). The driver's vision also shrinks when the driver concentrates on driving the vehicle, increasing the visual dead area (Recarte and Nunes [1, 6]). For the above reasons, drivers who want enough information on the side of the car usually need to turn their heads to check the side visual dead area, which not only prevents the driver from concentrating on the situation ahead, leading to other types of accidents and collisions, but also adds to the driver's psychological burden. This research aims to develop a system that automatically detects whether there is a car in the side visual dead area, as a safety aid for motorists. A CCD camera replaces a conventional radar as the sensing device in a vehicle-side visual dead angle warning system. Based on image processing and computer vision technology, a warning system for preventing side bumps is developed. With the vehicle-side visual dead angle warning system, motorists can switch lanes more safely, significantly reducing side-impact accidents and the problems caused by and derived from the vehicle-side visual dead angle.

1.2 Literature Reviews

Applications of computer vision in intelligent transportation systems (ITS) are mostly concentrated on image-based driver assistance using images of the front or rear of the car. Enkelmann [2, 9] used adaptive template matching (AdTM) to track cars at different distances. A system combining radar and image information has also been used to detect obstructions at the rear of the vehicle [2, 3].

2 Methods

2.1 YCbCr Color Model

Because the original images are in the RGB color space, while the route-finding process in the system requires a gray-scale image, the original color RGB image is first converted to the YCbCr color space according to formula (1). YCbCr is often used in processing continuous imaging: Y represents luminance, while Cb and Cr are the blue and red chroma offsets. YCbCr is a standard format for digital images. Because neighboring pixels in an image are highly correlated, image coding tends to be repetitive; to reduce the volume of data, the color information is usually reduced to achieve compression [4–6].

$$\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} + \begin{bmatrix} 65.481 & 128.553 & 24.966 \\ -37.797 & -74.203 & 112 \\ 112 & -93.786 & -18.214 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (1)$$

with R, G and B normalized to [0, 1].
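The conversion can be applied per pixel as below. This sketch assumes 8-bit RGB input that is normalized to [0, 1] before the BT.601 matrix is applied, giving studio-range Y in [16, 235] and Cb, Cr in roughly [16, 240]:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert 8-bit RGB to BT.601 studio-range YCbCr per formula (1).
    Inputs are normalized to [0, 1] before applying the matrix."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    y  = 16  +  65.481 * r + 128.553 * g +  24.966 * b
    cb = 128 -  37.797 * r -  74.203 * g + 112.0   * b
    cr = 128 + 112.0   * r -  93.786 * g -  18.214 * b
    return y, cb, cr

# Pure white maps to maximum luma and neutral chroma.
print(rgb_to_ycbcr(255, 255, 255))  # approximately (235.0, 128.0, 128.0)
```

For lane finding only the Y (luminance) channel is kept, which is what makes the result a gray-scale image.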

2.2 Face Detection

In this paper, we first detect the area of the face, and then detect more accurate facial features within the face region. Face detection has been studied by many scholars, who have put forward its related preliminary concepts. The most common detection methods can be broadly divided into color-based and brightness-change-based search methods. For face recognition in gray-scale images, the facial features have their own regional characteristics; for example, the eye area is darker than the lower cheek. One can therefore design a rectangle with a black upper part and a white lower part to search the image by contrast. With only one feature rectangle the accuracy is not high, and the search may find many similar areas. Observing the eye feature area further, the nose is much brighter than the eye area, so another rectangle, black, white, black and white from left to right, can be designed for a second recognition pass. The accuracy and reliability of detection can be improved by this two-stage feature-rectangle search. For the facial features of other regions, the features can be detected simply by designing the corresponding feature rectangles. Finally, the correct face region is derived from the relationships between the facial features, their sizes and proportions. Researchers have also combined color and brightness to achieve more accurate human eye detection [8]. Regarding color, although many skin-color regions can be used for reference, skin color still differs greatly between races and cannot serve as a single basis. Regarding brightness, in a complex environment the lighting can make the facial features lose their original light-and-dark distribution, eventually leading to misjudgment. Facial feature detection must run in a real-time environment, so computation cost is one of the considerations.
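The rectangle-contrast features described above (Haar-like features) are normally evaluated on an integral image, so any rectangle sum costs four table lookups regardless of its size. A self-contained sketch follows; the tiny test image is invented, and a real detector in the Viola-Jones style would scan thousands of such features at many scales:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Pixel sum over the inclusive rectangle, using four lookups."""
    total = ii[bottom][right]
    if top:
        total -= ii[top - 1][right]
    if left:
        total -= ii[bottom][left - 1]
    if top and left:
        total += ii[top - 1][left - 1]
    return total

# Dark-above / bright-below pattern, like an eye region over a cheek:
# the feature value (lower sum minus upper sum) is large and positive.
img = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [9, 9, 9, 9],
       [9, 9, 9, 9]]
ii = integral_image(img)
upper = rect_sum(ii, 0, 0, 1, 3)   # dark half
lower = rect_sum(ii, 2, 0, 3, 3)   # bright half
print(lower - upper)               # 72
```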

2.3 Human Eye Detection and Tracking Technology

Tracking objects in images is an important part of image processing. With a static background, the simpler way is to first record the background image as a base image and subtract each pixel value of the target-object image from it. Since the background environment of the two images is the same, the image produced by the subtraction gives the location of the target object, achieving object tracking. Using information from consecutive images to analyze their differences, and thereby judge objects moving in the images, is known as temporal differencing [2]. For more complex environments, researchers have also used particle filters [7, 8, 10, 11], because many factors in reality cause the target object not to move linearly. A particle filter can solve non-linear problems by sampling, prediction and measurement; each iteration carries out these three stages, so its computation is large and complex.
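The background-subtraction/temporal-difference idea can be sketched as follows. The toy 8-bit grayscale frames are invented values; a real system would add noise filtering and the morphology steps described earlier:

```python
def temporal_difference(background, frame, threshold=25):
    """Return a binary mask marking pixels where the current frame
    differs from the stored background by more than `threshold`."""
    return [[1 if abs(f - b) > threshold else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 200, 10],
              [10, 10, 210]]
print(temporal_difference(background, frame))
# moving-object pixels show up as 1s: [[0, 1, 0], [0, 0, 1]]
```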


3 Results

In the system, when a vehicle enters the detection area (ROI) at the side visual angle, the system keeps tracking the markers in the ROI until they leave it. In the daytime, when the car clings to the inside lane, the security wall is transformed from the original image to the top view; the resulting image is more like a plane and does not look like a car carrying stereo-height information, so driving close to the inner carriageway does not cause a false vehicle-detection warning. The cover of a ditch creates a mark of a certain width in the top view; since this marker is judged to be a flat image by the height detection, it does not cause a false warning. Where the safety wall above the ditch cover is faulted, false judgments can occur; however, although a gap or shadow has a certain width in the projected top view, it can be judged as a plane rather than a stereo object after the three-dimensional height judgment, so it does not cause a false detection. The processing speed of the system varies with the erection angle of the camera and with day or night.

4 Conclusion

Generally, most imaging studies on dynamic vehicles concentrate on research and applications using images of the front or rear of the car; only a small number are aimed at analyzing and applying vehicle-side images. In practice, the most common accidents are collisions or side impacts of the type caused by the vehicle-side visual dead angle. Research on detecting the dead area at the side of the vehicle is mostly based on radar or other active sensors; if the sensor receives excessive reflections or its resolution is not high enough, erroneous warnings easily result. Our system design is mainly aimed at studying the dead-angle area of vehicle-side vision, but the vehicle and road-line detection modules of the system can also be applied to road-line and vehicle detection in front of or behind the car. The system is divided into two main modules: the first is mainly used for road-line detection, to determine the area the system should monitor, and the second is the vehicle-detection part. The most common approach to vehicle detection in front and rear images is to detect the shadow the vehicle casts on the road surface. In our system, the presence of vehicles can be detected whether they are in the distance or at the side of one's own vehicle.

5 Discussion

The features currently used in vehicle judgments are not accurate enough, and misjudgments still occur at times, so in later research we hope to find more representative car features as a basis for judgment. Information from radar or other sensing devices could be further integrated to achieve more accurate judgments, using more than one sensor for complementary and mutually supportive effects. At the current processing speed, the system's calculation pipeline meets the so-called human-eye real-time processing requirements; in the future, the system will actually be set up in a car to test its effectiveness.

References
1. Bowles, T.: Motorway overtaking with four types of exterior rear view mirror. In: International Symposium on Man-Machine Systems. IEEE (1969)
2. Enkelmann, W.: Video-based driver assistance: from basic functions to applications. Int. J. Comput. Vis. 45(3), 201–221 (2001)
3. Ruder, M., Enkelmann, W., Garnitz, R.: Highway lane change assistant. In: Intelligent Vehicle Symposium. IEEE (2002)
4. Garcia, C., Tziritas, G.: Face detection using quantized skin color regions merging and wavelet packet analysis. IEEE Trans. Multimedia 1(3), 264–277 (1999)
5. Phung, S.L., Bouzerdoum, A., Chai, D.: A novel skin color model in YCbCr color space and its application to human face detection. In: Proceedings of the 2002 International Conference on Image Processing. IEEE (2002)
6. Chai, D., Bouzerdoum, A.: A Bayesian approach to skin color classification in YCbCr color space. In: Proceedings of TENCON 2000. IEEE (2000)
7. Bertozzi, M., et al.: Artificial vision in road vehicles. Proc. IEEE 90(7), 1258–1271 (2002)
8. Yim, Y.U., Oh, S.-Y.: Three-feature based automatic lane detection algorithm (TFALDA) for autonomous driving. IEEE Trans. Intell. Transp. Syst. 4(4), 219–225 (2003)

A Web Accessibility Study in Mobile Phone for the Aging People with Degradation of Vision

Chi Nung Chu

Department of Management of Information System, China University of Technology, No. 56, Sec. 3, Shinglung Road, 116 Wenshan Chiu, Taipei, Taiwan, ROC
[email protected]

Abstract. This paper discusses the inconvenience and unadaptability of current mobile device interfaces for elderly people, especially compared with the desktop environment, resulting from their degraded eyesight. There has been little work on developing rules to guide the design and implementation of interfaces for elderly people with age-related vision degeneration and their applications. A set of practical design guidelines for mobile device interfaces for elderly people with degraded eyesight is proposed. The design of mobile device interfaces is a good starting point for designers of mobile phone websites to effectively include elderly people in digital life. The paper highlights what mobile phone interface design for users with a limited sense of vision can explore and prove, and where future research may provide further advancements.

Keywords: Mobile phone interface · Degradation of vision · Web accessibility

1 Introduction

With the development of science and technology, smart mobile devices have become part of everyone's daily life [1–3]. In this forthcoming trend, elderly people must face their social lives by learning and using the different technologies around the internet. However, unlike previous years, these mobile technology products have been introduced with changes from previous models, such as screen size, operations and interfaces; their manipulation methods are no longer mechanical, simple operations for elderly people. Instead, users are given graphical and textual feedback through screen operations, for which precise eyesight is needed [4]. Many of the elderly generation, even if they own devices provided by their children, have no intention of using high-tech products. As mobile technology grows rapidly, the mobile applications that have been comprehensively integrated into life make elderly people reconsider using mobile devices; it is an opportunity for them to take part in the digital society.

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1952–1956, 2019. https://doi.org/10.1007/978-981-13-3648-5_250

However, many interface designs hinder elderly people from using mobile phones. Aging is a cumulative change in the structure and function of the human body over time [5, 6]. The various sensory organs of the human body deteriorate with aging [7]. It is a normal but irreversible, continuous process; a person's vision reaches its peak in the teens and begins to degenerate after age 40 [8]. The phenomenon of presbyopia in the elderly with decreased vision is mainly due to the gradual hardening of the lens and loss of its elasticity; the function and structure of the eyes decline with increasing age. Internet access is no longer the preserve of young people: more and more elderly people use the smart phones at hand to communicate with others on social networks or even to shop online.

2 Web Accessibility for Aging People with Degradation of Vision in Mobile Phone

Studies show that visual problems tend to occur by early middle age, when people begin to notice that it is difficult to adjust the near focus and that declining visual acuity makes it hard to obtain a clear view [9]. Visual deterioration is one of the aging phenomena most evident to the elderly. The crystalline lens is the structure with which the eye adjusts the focus of incoming light: after light passes through the cornea and the lens, it is focused on the retina to give clear vision. Young people's lenses are soft and flexible and have an excellent adjustment function. After about forty years of age, however, the lens begins to harden and can no longer project images of distant and near objects sharply on the retina. Presbyopia is the main deteriorating function: the lens loses its elasticity, the adjustment function is reduced, and it becomes impossible to focus objects at different distances clearly on the retina, resulting in difficulty reading at close range. The degradation of vision is considered a critical factor affecting elderly people's manipulation of a product [10]. The decline of visual function also blurs what elderly people see in front of them, which affects their sensitivity to dark colors such as black or blue. The ability of elderly people to discriminate simple black-and-white text from its background is higher than their ability to discriminate colored text on colored backgrounds, whereas young people show no significant differences [11]. A study from the Japan Institute of Architecture shows that elderly people could not distinguish a blue font or pattern on a dark background, and were unable to discern yellow fonts or patterns on a white background.
Therefore, in developing a suitable view for elderly people, the color scheme of images and text should also be considered. Elderly people use visual perception as the main sensory channel for acquiring external information in any product manipulation, so visual status, which degrades with age, has a considerable impact on their access to information on the mobile phone. As a result of this visual deterioration, more visual search time is required to find a target when the target location is unknown [12, 13]. In visual search work, as the number of search items increases, the time required for visual search also increases [14, 15]. Difficulties caused by searching for visual details include reading small displays, misclassifying objects, and judging the distance between objects [16].

1954

C. N. Chu

3 Pilot Study of Interface Design Guidelines for the Aging People with Degradation of Vision

To establish the design guidelines for the web accessibility issues described in Sect. 2, experimental evaluations were carried out in which users with vision decline manipulated different types of information content and web pages on the small screen of a mobile phone. The evaluations compared three categories (information identification with searching, content font sizes, and colors) to determine how they affect user performance and satisfaction. A review of the status of mobile applications was also conducted to evaluate the needs of elderly people. The final design guidelines comprise three distinct principles, each grouped under several category headings. Selected examples of each category are listed below.

3.1 Accessible Text

Suitable Font Size. Provide a specific font-size option for every web page, corresponding to the degree of presbyopia, so that it can be selected by the people who need it. The degree of presbyopia increases with age: in general it is about 100° at age 45, increases to 200° by age 55, and rises to 250–300° by age 60, after which it no longer increases. The experimental results showed that the best reading distance between the eyes and the phone screen is around 40 cm (Table 1).

Table 1. Suitable font size and style
Degree of presbyopia | Font size
100 | 0.5 * 0.5 cm
200 | 0.7 * 0.7 cm
300 | 0.9 * 0.9 cm
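The mapping in Table 1 can be expressed as a small lookup. This is a sketch only: the table gives three anchor points, and the cut-offs between the listed degrees are an assumption of this example, not something stated in the study.

```python
def suggested_font_size_cm(presbyopia_degree):
    """Suggested square font size (cm per side) following Table 1.

    The cut-offs between the three listed degrees are an assumption.
    """
    if presbyopia_degree <= 100:
        return 0.5
    if presbyopia_degree <= 200:
        return 0.7
    return 0.9

print(suggested_font_size_cm(100))  # 0.5
print(suggested_font_size_cm(300))  # 0.9
```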

Distinguishable Colors between Foreground/Background. Make content easier for users to see, including by separating the foreground from the background. Providing a contrasting color between the background and the target directly affects the visual reception of the elderly [17]. Contrast sensitivity is the ability of the eyes to discriminate the difference in brightness between an object and its background; sensitivity is high over large areas and low over small areas [18]. The use of color should also be consistent within a given information system or web site: marking the same kind of information in the same color helps users memorize and identify the location of information more quickly than other methods [19]. As the contrast in hue, lightness, and chroma becomes more obvious, the visual effect becomes larger and better [20]. The visibility of text can be properly displayed against a distinguishable background color [21]. Experimental results showed that reading time becomes longer when the difference between the foreground color and the background color is not obvious (Table 2).

Table 2. Distinguishable contrast colors for the elderly people
Effect | Pattern
Good | (Black, White), (Black, Yellow), (Yellow, Blue), (Black, White)
Bad | (Yellow, White), (Blue, Green), (Red, Purple), (Red, Green), (Red, Blue), (Blue, White)
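The foreground/background contrast discussed above can also be quantified. The study does not give a formula; as a sketch, the WCAG 2.x relative-luminance contrast ratio is one standard way to score the "Good" and "Bad" pairs of Table 2.

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB colour with 0-255 channels."""
    def linearize(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colours, from 1:1 up to 21:1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white: maximum contrast (21:1), a "Good" pair in Table 2.
print(contrast_ratio((0, 0, 0), (255, 255, 255)))
# Yellow on white: very low contrast, consistent with its "Bad" rating.
print(contrast_ratio((255, 255, 0), (255, 255, 255)))
```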

3.2 Gettable Information

Open-and-Shut Layout. Information and user interface components must be presented to users in ways they can perceive. The intrinsically smaller screen of a mobile phone imposes a critical visual overhead on elderly people browsing information. Although dynamic images can provide a rich visual presentation, they increase the user's mental load: the more complex a dynamic image is, or the faster its changing frequency, the more it confuses elderly people and hinders their acquisition of information, producing psychological pressure during operation [21]. The experimental results show that (1) a consistent layout of the interface design on the mobile phone yields higher search efficiency, and (2) limiting the span and depth of the contents and functions presented on mobile phones matches the shorter working memory of elderly people. Under these guidelines, browsing the mobile Internet becomes somewhat less tedious for elderly people with vision degeneration.

4 Conclusion

Mobile phones are becoming a daily necessity for elderly people in this digital era: the rich functionality they provide has made them one of the indispensable gadgets of daily life. With the degradation of the visual sense, elderly people's perception of the mobile phone interface decreases significantly. As the interface designs of mobile phones become more advanced and complicated while ignoring the special needs arising from the vision decline of elderly people, specific design principles and guidelines need to be developed to serve those needs. In this paper, a set of guidelines and design recommendations is provided for mobile phones targeted at elderly people with vision degeneration. These guidelines were distilled and consolidated after a comprehensive review of the literature and several experiments. They can serve as a starting point for future designers and developers when designing web pages for elderly people manipulating mobile phone interfaces.

References
1. Lane, N.D., Miluzzo, E., Lu, H., Peebles, D., Choudhury, T., Campbell, A.T.: A survey of mobile phone sensing. IEEE Commun. Mag. 48(9), 140–150 (2010)
2. Salehan, M., Negahban, A.: Social networking on smartphones: when mobile phones become addictive. Comput. Hum. Behav. 29(6), 2632–2639 (2013)
3. Vaghefi, I., Lapointe, L., Boudreau-Pinsonneault, C.: A typology of user liability to IT addiction. Inf. Syst. J. 27(2), 125–169 (2017)
4. Chittaro, L.: Visualizing information on mobile devices. Computer 39(3), 40–45 (2006)
5. Gilleard, C., Higgs, P.: Ageing and the limiting conditions of the body. Sociological Res. Online 3(4), 1–11 (1998)
6. Reuter-Lorenz, P.A.: New visions of the aging mind and brain. Trends Cogn. Sci. 6(9), 394–400 (2002)
7. Gilleard, C., Higgs, P.: Frailty, disability and old age: a re-appraisal. Health Interdisc. J. Soc. Study Health, Illn. Med. 15(5), 475–490 (2011)
8. Abrahamson, J.I.: Eye changes after forty. Am. Fam. Physician 29(4), 171–181 (1984)
9. Fozard, J.L., Gordon-Salant, S.: Changes in vision and hearing with aging. In: Birren, J.E., Schaie, K.W. (eds.) Handbook of the Psychology of Aging, vol. 5, pp. 241–266. Gulf Professional Publishing, Houston, TX (2001)
10. Hawthorn, D.: Possible implications of aging for design. Appl. Ergon. 24(1), 9–14 (2000)
11. Charness, N., Bosman, E.: Human factors and design. In: Birren, J.E., Schaie, K.W. (eds.) Handbook of the Psychology of Aging, vol. 3, pp. 446–463. Academic Press, San Diego, CA (1990)
12. Caprani, N., O'Connor, N.E., Gurrin, C.: Touch screens for the older user. In: Cheein, F.A. (ed.) Assistive Technologies, pp. 104–128. Intech, Ireland (2012)
13. Munafo, J., Curry, C., Wade, M.G., Stoffregen, T.A.: The distance of visual targets affects the spatial magnitude and multifractal scaling of standing body sway in younger and older adults. Exp. Brain Res. 234(9), 2721–2730 (2016)
14. Plude, D.J., Hoyer, W.J.: Attention and performance: identifying and localizing age deficits. In: Aging and Human Performance, pp. 48–89. Wiley, New York (1985)
15. Rabbitt, P.: Speed of visual search in old age: 1950 to 2016. J. Gerontol. Ser. B 72(1), 51–60 (2017)
16. Welford, A.T.: Changes of performance with age: an overview. In: Charness, N. (ed.) Aging and Human Performance, pp. 333–365. Wiley, New York (1985)
17. Pirkl, J.: Transgenerational Design: Products for an Aging Population. Van Nostrand Reinhold, New York (1994)
18. Grandjean, E., Hunting, W., Nishiyama, K.: Preferred VDT workstation setting, body posture and physical impairment. Appl. Ergon. 15(2), 99–104 (1984)
19. Steinberg, E.R.: Computer-Assisted Instruction: A Synthesis of Theory, Practice, and Technology. Lawrence Erlbaum Associates, Hillsdale, NJ (1991)
20. Lippert, T.M.: Color difference prediction of legibility performance for CRT raster imagery. SID Digest of Technical Papers XVI, 86–89 (1986)
21. Megalakaki, O., Aparicio, X., Porion, A., Pasqualotti, L., Baccino, T.: Assessing visibility, legibility and comprehension for interactive whiteboards (IWBs) vs. computers. Educ. Psychol. 36(9), 1631–1650 (2016)
22. Fisk, A.D., Rogers, W.A.: Handbook of Human Factors and the Older Adult. Academic Press, San Diego, CA (1997)

The Comparison Between Online Social Data and Offline Crowd Data: An Example of Retail Stores

Jhu-Jyun Huang(&), Tai-Ta Kuo, Ping-I Chen, and Fu-Jheng Jheng

Innovative Digi-Tech-Enabled Application and Service Institute, Institute for Information Industry, Taipei, Taiwan
{Ginnyhuang,TaitaKuo,be,fjcheng}@iii.org.tw

Abstract. Social networks give companies a new way to promote their products. People can follow official online pages or hashtags, and "like" or "comment" on the content they are interested in. It is therefore no longer a matter of online versus offline: if retailers can offer customers a consistent experience by converging their offline and online channels, it would be useful if online social users could explain the personas of offline customers. In this study, we use 10 branches of the Taiwanese retail chain "Poya" as an example. By calculating and testing the correlation between the online and offline data series, we found a potential relationship between the online user data from Poya's official Facebook page and the offline customer data of the brick-and-mortar stores.

Keywords: Online to offline · Correlation analysis · Granger causality test

1 Introduction

Traditional marketing, such as print, telephone and TV broadcast, is slowly declining. Instead, more companies choose to invest effort and money in online marketing because of the power of social media. Social media is a new instrument of communication through which companies can interact with customers by sharing information and receiving feedback (Schejter and Tirosh 2015). Most companies maintain multiple official social media accounts, such as Facebook, Instagram and YouTube. Among these social networks, Facebook is the most popular in Taiwan: a market research report from the Institute for Information Industry1 shows that more than 90% of users had Facebook accounts in 2017. Using social media to market products and share activities such as discounts and promotional offers is effective, since people use the Internet almost every day. People can not only follow the official pages they are interested in but also "like" and "comment" on them.

1 Information source: "https://www.iii.org.tw/Press/NewsDtl.aspx?nsp_sqno=1934&fm_sqno=14".

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1957–1965, 2019. https://doi.org/10.1007/978-981-13-3648-5_251

1958

J.-J. Huang et al.

Thus, these unbounded online community spaces hold many potential opportunities. For instance, consumers who plan to visit a store tomorrow will check the store's rating and can obtain coupons before the visit by searching the Internet. Whether brick-and-mortar stores or e-tailers, most stores use official social media accounts to spread their information, so evaluating the performance of official page postings and the customer voice on the Internet has become an important task. How can we use these daily social network data to verify that Internet users are related to the real daily number of customers in a physical store? This is exactly what we are curious about. If we can find the relationship between daily user behavior on the Internet and the daily crowd in a real store, the online social users may be able to explain the characteristic patterns of offline customers. In other words, we could use online data to estimate what the offline crowd is interested in, and converging the data of the two platforms would enable more precise product recommendations to customers. To examine this assumption, this paper uses the retail chain Poya, which has many branches around Taiwan, as an example. We calculate the correlation between two series, the online data (from Poya's official Facebook page) and the offline data, and then use the Granger causality method to understand the causality between them. The remainder of this paper is organized as follows. Section 2 presents analysis methods that have been used on online or offline data. Section 3 presents the method and data. Section 4 provides an overview of the data analysis. Finally, Sect. 5 summarizes this study and discusses future research issues.

2 Literature Review

Several studies have used social network data and offline crowd data in their analyses, and many of them show that social media can affect customers' attitudes toward a brand. For instance, Barwise and Meehan (2010) showed that Facebook can help brands propagate word of mouth at amazing speed. Cho et al. (2014) applied structural equation modelling to show that online users have a significant effect on information about a brand's products, influencing people to purchase them, based on a survey of 233 South Korean Facebook users in their 20s and 30s. Skowron et al. (2016) combined two social networking sites, Twitter and Instagram, to analyze users' personality; their results indicate that the combination effectively reduces prediction errors. Laura (2016) reviewed the literature and concluded that both the online and offline platforms through which customers interact with a brand become links in a chain, attending the fusion not between the different platforms but between the client and the company.

3 Data

In this study, we discuss the relation between online social data and offline crowd data. We choose the retail chain Poya, which has many branches around Taiwan, as an example. For the online social data, we choose Poya's official Facebook page and collect all active users from October 01, 2017 to October 28, 2017, four weeks in total. For the offline crowd data, we choose 10 branches of Poya: Songshan Taipei, Beitun Taichung, Xinzhuang New Taipei, Pingtung, Fongshan Kaohsiung, Taitung, Tamsui New Taipei, Beigang Yunlin, North Dist. Tainan and Lukang Changhua. How do we collect the crowd data for these 10 branches? We can only delimit a 500 m by 500 m field around each Poya store; that is, we obtain the number of people within 250,000 m², and Poya's customers are a subset of this big square. To allow comparison with the online data, the chosen time span was likewise October 01, 2017 to October 28, 2017. Figure 1 shows the locations of the 10 Poya branches.

Fig. 1. The locations of the 10 Poya branches

4 Methodology

In this section, we provide an overview of our study process and the methods used in the data analysis.

4.1 Study Process

Step I. We first plot the time series of the 10 branches of Poya and of Poya's official Facebook page, and discuss the possible differences between them. If there is a difference, we proceed to Step II; if not, we skip Step II and jump to Step III.

Step II. We use the K-means clustering method to find out whether there are different groups among the 10 branches of Poya. If there is a significant difference between the groups, we analyze each group against the online data separately; otherwise, no separation is needed.

Step III. We compare the data of the 10 branches of Poya with the online active-user data and discuss their relevance. Here we not only observe the correlation between the online and offline data but also assess the lag between the two distributions. We therefore use two methods of evaluation: the correlation coefficient and the Granger causality test.

4.2 Cluster Analysis

Cluster analysis divides data into clusters according to a grouping criterion such as distance or correlation: data placed in the same cluster have similar characteristics, while data placed in different clusters have significantly different features. Hierarchical clustering and k-means clustering are the most pervasive methods; in this study we used K-means. K-means is a clustering algorithm first proposed by MacQueen (1967) that uses the partition clustering approach. We first assign the initial number of clusters K and then proceed iteratively: at each step the patterns are assigned to the cluster whose center is at minimum distance, and the cluster centers are recomputed. The iterative process continues until the convergence criterion is met.
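The clustering step can be sketched as follows. This minimal illustration uses scikit-learn (an assumed library choice, not one named by the authors); the branch names and daily-count series are synthetic stand-ins for the real 28-day data, and K = 2 is used only to keep the toy example small (the study itself arrived at three groups).

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-ins: one 28-day series of daily customer counts per branch.
rng = np.random.default_rng(0)
days = np.arange(28)
weekly = np.sin(2 * np.pi * days / 7)  # weekly periodicity in footfall

branches = {
    "Branch A": 500 + 80 * weekly + rng.normal(0, 10, 28),
    "Branch B": 520 + 75 * weekly + rng.normal(0, 10, 28),
    "Branch C": 200 - 60 * weekly + rng.normal(0, 10, 28),
    "Branch D": 210 - 55 * weekly + rng.normal(0, 10, 28),
}

# Each branch is one observation; its 28 daily counts are the features.
X = np.array(list(branches.values()))

# K-means: assign K initial centres, then iterate assignment/update steps
# until the centres converge.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for name, label in zip(branches, km.labels_):
    print(name, "-> cluster", label)
```

With these synthetic series, the two high-volume branches (A, B) and the two low-volume branches (C, D) fall into separate clusters.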

4.3 Correlation Analysis

Correlation is a simple way to discuss the relation or connection between two things. For example, if a survey tells us that salary and a happy life are highly correlated, the implication is that the more salary we have, the happier we are, so we conclude that there is a strong positive correlation. When correlation is calculated mathematically, the result is a correlation coefficient: a number between −1 and +1 that measures the degree of association, connectedness, or linkage between two variables. The closer the absolute value of this coefficient is to 1, the stronger the correlation between the two variables.
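Because the study examines correlations at several lags (Table 1), the lagged Pearson correlation is the relevant quantity. A minimal sketch with synthetic series, where the offline series is constructed to trail the online series by exactly one day:

```python
import numpy as np

# Synthetic stand-ins: the offline series trails the online series by one day.
online = np.array([120, 150, 180, 90, 60, 110, 170, 200, 95, 70], dtype=float)
offline = np.roll(online, 1)

def lagged_corr(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag]."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

print(lagged_corr(online, offline, 0))  # same-day correlation (weak)
print(lagged_corr(online, offline, 1))  # one-day lag recovers the match
```

At lag 0 the two series correlate only weakly, while shifting by one day yields a perfect correlation, mirroring the lag-1 pattern reported in Table 1.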

4.4 Granger Causality Test

Granger causality was proposed by Granger (1969). It is a method for determining causality between two or more scalar time-series variables. Grounded in probability theory, it uses empirical data sets to find patterns of correlation between the variables. In our research, we discuss the causal relation between two series. The general formalization of Granger causality for two scalar time series {X_t : x_t, x_{t-1}, …, x_{t-n}} and {Y_t : y_t, y_{t-1}, …, y_{t-n}} is as follows. If series X_t is causal to series Y_t, we say that X_t Granger-causes Y_t; put another way, the past of X_t contains information about Y_t and helps to explain the series Y_t. How do we evaluate the causal degree of two series? We can use the Granger causality test, which is based on F-tests: by testing on lagged values of X_t, we can determine whether those lags offer significant information about the future of Y_t. The null hypothesis of the test is that the lagged X_t do not explain Y_t, i.e., that series X_t is not useful in explaining Y_t.

5 Results

The purpose of this study is to determine the correlation between online data and offline data. The results of our analysis are shown below.

5.1 Data Preprocessing

First, by plotting the time series of the number of customers in Poya's 10 branches and of the active users of Poya's official Facebook page, we can observe the following three points, which are shown in Fig. 2.

Fig. 2. Time series of the number of customers in Poya's 10 branches and of the active users of Poya's official Facebook page


1. Both the online data and the offline data show periodic trends overall. Among the 10 brick-and-mortar branches of Poya, the crowd data are higher on weekends (10/1, 10/7–10/8, 10/14–10/15, 10/21–10/22, 10/28) at every branch except Pingtung. By contrast, the online data reach their peak on Fridays (10/6, 10/13, 10/20, 10/27) and drop slowly over the weekend. Hence, we speculate that there may be a time-lag interrelationship between the online user data and the offline crowd data.
2. As Fig. 2 makes obvious, all of the offline series dropped drastically on 10/22 except Beigang Yunlin.
3. The time series of four Poya branches (Beitun Taichung, Songshan Taipei, Tamsui New Taipei and Xinzhuang New Taipei) are relatively close, whereas for the other branches it is unclear whether a correlation exists.

5.2 The Result of Cluster Analysis

To understand whether the time-series trends differ between Poya's shops in different regions of Taiwan, we used the K-means clustering method. Since we do not need to examine the outlier of the series specifically in this study, we substituted for 10/22 the average of the data of the last three Sundays (10/1, 10/8 and 10/15). In Fig. 3, we find that the distributions of the daily number of customers in the 10 branches of Poya can be separated into three groups.
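The 10/22 substitution amounts to replacing one outlying Sunday with the mean of the three preceding Sundays. A minimal sketch with illustrative numbers (the counts here are invented, not the study's data):

```python
import numpy as np

# Illustrative Sunday counts for one branch; 10/22 is the outlier to replace.
sundays = {"10/01": 480.0, "10/08": 510.0, "10/15": 495.0, "10/22": 120.0}

# Substitute the outlier with the average of the three preceding Sundays.
sundays["10/22"] = np.mean([sundays["10/01"], sundays["10/08"], sundays["10/15"]])
print(sundays["10/22"])  # 495.0
```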

Fig. 3. The three groups obtained by using K-means to classify Poya's 10 branches

Group I: Beitun Taichung, Songshan Taipei, Xinzhuang New Taipei, Taitung.
Group II: Pingtung, Tamsui New Taipei, North Dist. Tainan, Beigang Yunlin.
Group III: Lukang Changhua, Fongshan Kaohsiung.

In Taiwan, Group I corresponds to commercial areas and Group II mainly to residential areas, while Lukang Changhua in Group III is known as a cultural tour area. We compare the time series of the Poya stores in the different regions with the online data in the following section. For consistency, the Fongshan Kaohsiung branch is grouped with the commercial area, because it is not considered a tourist area.

5.3 The Result of Correlation Analysis and Granger Causality Test

In this part, we examine the crowd data of Poya's 10 branches against the online active-user data of Poya's official page and discuss their relationship, using correlation analysis and the Granger causality test; the results are shown in Tables 1 and 2, respectively. Table 1 shows that at lag = 0 days, much of the branch crowd data correlates negatively with the online data; at lag = 1 day, six of the branches show a correlation above 0.4, while Beigang Yunlin shows almost no correlation; at lag = 2, the correlations decrease slightly. To observe whether the online and offline data have a lag relationship, we ran a Granger causality test, with the results in Table 2. We find not only a correlation but also a potential lag relationship between the online and offline data. In brief, most of the branch-store crowd data can be explained by the online data, especially in the commercial area. Furthermore, it is encouraging to see that the active users of Poya's official Facebook page may be able to explain the customers of Poya's brick-and-mortar stores.

Table 1. The correlation between online data and offline data

Lag | Commercial area | | | | Cultural tour area
    | Songshan Taipei | Beitun Taichung | Xinzhuang New Taipei | Fongshan Kaohsiung | Lukang Changhua
0 | 0.025 | −0.513 | −0.153 | 0.085 | −0.677
1 | 0.532** | 0.391 | 0.595** | 0.414* | 0.418*
2 | 0.238 | −0.342 | 0.367 | 0.102 | 0.412*
3 | −0.161 | −0.004 | −0.066 | −0.266 | 0.119

Lag | Residential area
    | Tamsui New Taipei | Taitung | Pingtung | Beigang Yunlin | North Dist. Tainan
0 | −0.468 | −0.426 | 0.291 | −0.038 | −0.426
1 | 0.474* | 0.006 | 0.471* | −0.052 | 0.283
2 | 0.460* | 0.333 | 0.019 | −0.113 | 0.385
3 | −0.049 | 0.167 | 0.004 | 0.115 | 0.022

*Correlation > 0.4, **Correlation > 0.5


Table 2. The F-test of Granger causality between online data and offline data

Lag | Commercial area | | | | Cultural tour area
    | Songshan Taipei | Beitun Taichung | Xinzhuang New Taipei | Fongshan Kaohsiung | Lukang Changhua
0 | 0.488 | 0.193 | 0.570 | 0.468 | 0.124
1 | 0.005* | 0.030* | 0.032* | 0.040* | 0.031*
2 | 0.238 | 0.090 | 0.058 | 0.160 | 0.079
3 | 0.059 | 0.161 | 0.071 | 0.195 | 0.119

Lag | Residential area
    | Tamsui New Taipei | Taitung | Pingtung | Beigang Yunlin | North Dist. Tainan
0 | 0.058 | 0.013 | 0.004* | 0.824 | 0.035*
1 | 0.015* | 0.004 | 0.201 | 0.788 | 0.110
2 | 0.043* | 0.004* | 0.402 | 0.308 | 0.078
3 | 0.067 | 0.018* | 0.348 | 0.022* | 0.144

*Significant when p-value < 0.05

6 Conclusion

In this study, we use the Taiwanese retail chain "Poya" as an example to calculate and test the correlation between two series, the online and offline data of consumers. We find a lag relationship between the online users of Poya's official Facebook page and the offline customers of the physical stores. Thus, we have an opportunity to use online user data to explain the characteristic patterns of offline customers by observing which websites or online pages the online users are interested in. In the future, based on our results, we can use online data to estimate or predict what the offline crowds favor and design a recommender system that converges the data of the two platforms.

Acknowledgements. This study is conducted under the "Big Data Technologies and Applications (4/4)" project of the Institute for Information Industry, which is subsidized by the Ministry of Economic Affairs of the Republic of China.

References
Barwise, P., Meehan, S.: The one thing you must get right when building a brand. Harv. Bus. Rev. 88(12), 80–84 (2010)
Cho, I., Park, H., Kim, J.K.: The relationship between motivation and information sharing about products and services on Facebook. Behav. Inf. Technol. 34(9), 858–868 (2014)
Granger, C.W.J.: Investigating causal relations by econometric models and cross-spectral methods. Econometrica 37(3), 424–438 (1969)
Laura, T.A.: Online-offline cooperation and complementation analysis. Màsters Oficials, Màster universitari en Enginyeria Industrial (2016)
MacQueen, J.: Some methods for classification and analysis of multivariate observations. In: Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, pp. 281–297 (1967)
Schejter, A.M., Tirosh, N.: "Seek the meek, seek the just": social media and social justice. Telecommun. Policy 39(9), 796–803 (2015)
Skowron, M., Ferwerda, B., Tkalčič, M., Schedl, M.: Fusing social media cues: personality prediction from Twitter and Instagram. In: WWW '16 Companion: Proceedings of the 25th International Conference Companion on World Wide Web, pp. 107–108 (2016)

The Historical Review and Current Trends in Speech Synthesis by Bibliometric Approach

Guang-Feng Deng(&), Cheng-Hung Tsai, and Tsun Ku

Institute for Information Industry, Taipei, Taiwan
[email protected]

Abstract. To build awareness of the development of speech synthesis, this study presents a citation and bibliometric analysis of research publications on speech synthesis during 1992–2017. It analyses 17,161 citations from a total of 1256 articles on speech synthesis published in 333 journals, based on the SCIE, SSCI and AH&CI databases retrieved via the Web of Science (WOS). Bradford's Law and Lotka's Law are used to examine, respectively, the distribution of journal articles and author productivity. Furthermore, the study determines the citation impact of speech synthesis using parameters such as the number of citations per study, the distribution of citations over time and among domains, the citations of authors and institutions, highly cited papers, and the citing journals and impact factors of the 17,161 citations. This study can help researchers better understand the history, current status and trends of speech synthesis.

Keywords: Citation analysis · Bibliometric analysis · Speech synthesis · Voice conversion · Bradford law · Lotka's law

1 Introduction Speech is the most natural way for human beings to acquire and share information and communicate with other people. It is reasonable that speech-related techniques will soon become indispensable for human-machine interfaces. In this respect, speech synthesis technologies have been among the most popular research topics in the field of human-computer interaction for years. Recently, as the emerging virtual personal assistants on mobile devices have become more and more popular, natural speech synthesis is becoming a vital assistive technology component [1–10]. Accordingly, the Speech Synthesis literature has also grown rapidly, and thus this study investigates the characteristics of the Speech Synthesis literature during Jan. 1992 to Dec. 2017 using bibliometric and citation analysis. The specific analysis technique applied here applies bibliography counting to analyze and quantify the growth of the literature on a subject using various laws [11–14]. Tracing the productometric analysis of Speech Synthesis publications requires performing citation analysis, which is necessary to judge the quality and impact of Speech Synthesis papers and their global recognition. Citation reveals the links between pairs of documents, the one, which cites and the other, which is cited. Citation © Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1966–1978, 2019. https://doi.org/10.1007/978-981-13-3648-5_252

The Historical Review and Current Trends in Speech …


expresses the importance of the material cited, as authors frequently refer to previous material to support, illustrate, or elaborate on specific points. Citation analysis is an important tool in quantitative studies of science and technology. The quality of specific publications can be assessed based on the number of citations in the literature. The use of citation analysis in research on the history of science is based on a literary model of the scientific process.

The Web of Science (WOS) database currently contains records going back to 1966 and covering over 22,000 journals, including over 23 million value-adding patent records in chemistry/biochemistry, engineering and electronics. Generally, each record in the WOS database contains an English-language title, a descriptive abstract, the document type, and full information on cited references and the number of citations. The bibliographic information includes the journal or other publication title, author names and affiliations, the language of the original document, etc. Indexed document types include books and monographs, conferences, symposia, meetings, journal articles, reports, theses and dissertations.

This study used a search command to retrieve the phrases "Speech Synthesis", "voice synthesis" or "voice conversion" from the descriptor field of the WOS database. The main study objective is to clarify the presence of Speech Synthesis in published citations during 1992–2017 indexed in SCIE, SSCI and AH&CI, retrieved using the Web of Science.
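The paper does not reproduce the exact search command. In WOS advanced-search syntax (using the TS topic-field and PY publication-year tags), a query of the following form would retrieve the records described above; this is a sketch, not the authors' verbatim command:

```text
TS=("speech synthesis" OR "voice synthesis" OR "voice conversion") AND PY=(1992-2017)
```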
This study has the following specific objectives: (1) explore the growth of the Speech Synthesis literature; (2) identify citation growth of the Speech Synthesis literature; (3) determine the time lag between paper publication and first citation; (4) clarify the domain-wise distribution of citations; (5) determine a core of primary journals in which the literature on Speech Synthesis is most heavily represented; (6) examine the distribution of citations among journals; (7) identify highly cited papers and track their citation life cycle; (8) reveal the distribution of the citing journals according to their impact factors; (9) identify the major contributing countries that publish the largest numbers of Speech Synthesis articles, and clarify the distribution of citing papers based on country of publication; (10) find the productivity distribution of authors and their institutions on this subject; (11) determine cited authorship productivity and Lotka's law; (12) plot the Bradford-Zipf graph.

2 Growth in the Published Speech Synthesis Literature

The first paper on Speech Synthesis to appear in the WOS database dates to 1992. This study finds that the database contains 1256 journal articles dealing with Speech Synthesis during 1992–2017. Table 1 lists the number of studies published each year. The table clearly indicates that before 2002, the database contained only about three hundred items of Speech Synthesis literature, which suggests that the collection of Speech Synthesis papers may not have been comprehensive during the initial stage of the WOS database. The WOS database indicates that 2003 was a significant year for the publication of literature, with 40 items dealing with Speech Synthesis appearing during that year. The article count peaked in 2015, when 85 articles were published, and the published literature steadily increased from 2005 to 2017. Figure 1


G.-F. Deng et al.

plots the annual numbers of published studies on Speech Synthesis and clearly reveals that the sharpest increase occurred in 2007. Based on the figure, this study predicts that Speech Synthesis research will continue to grow rapidly. Figure 1 also shows the cumulative growth of the Speech Synthesis literature based on the WOS. Once again, the WOS database reveals growth in published works on Speech Synthesis from 1999. Following 2003, the literature grows approximately linearly, exhibiting growth of about 50 items annually.

Table 1. Annual production of Speech Synthesis literature and citation frequency of Speech Synthesis publications

Year  Papers published  Cumulative papers  Citations received  Cumulative citations
1992        24                  1                  4                    0
1993        37                 38                 10                   10
1994        25                 63                 13                   23
1995        43                106                 59                   82
1996        29                135                 66                  148
1997        23                158                 68                  216
1998        29                187                 84                  300
1999        23                210                117                  417
2000        29                239                156                  573
2001        31                270                139                  712
2002        32                302                236                  948
2003        40                342                240                 1188
2004        53                395                242                 1430
2005        72                467                410                 1840
2006        70                537                543                 2383
2007        54                591                685                 3068
2008        29                620                707                 3775
2009        48                668                807                 4582
2010        73                741                966                 5548
2011        54                795                935                 6483
2012        60                855               1211                 7694
2013        69                924               1416                 9110
2014        83               1007               1780               10,890
2015        85               1092               1981               12,871
2016        65               1157               2179               15,050
2017        76               1256               1747               17,161


Fig. 1. Cumulative growth of Speech Synthesis literature & citation trends during 1992–2017

During 1992–2017, the Speech Synthesis papers received 17,161 citations. The annual average was 646 citations, and the average number of citations per article was 13.6. The number of citations peaked in 2016 at 2179, and continuous growth of citations was found throughout 1992–2016. Both the number of papers published and the citation rate peaked around 2015–2016, as earlier papers continued to receive citations. Table 1 shows the growth in the number of citations of the Speech Synthesis literature over the 25-year period. Figure 1 presents the growth and trends of citations of Speech Synthesis publications per year, and clarifies the information in the table.
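The per-article average quoted above can be checked directly from the totals in Table 1; a trivial sketch (the text rounds the result down to 13.6):

```python
total_citations = 17_161   # total citations received, 1992-2017 (Table 1)
total_papers = 1_256       # Speech Synthesis articles indexed in WOS

avg_per_paper = total_citations / total_papers
print(f"{avg_per_paper:.2f} citations per paper")  # 13.66
```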

3 Citation Frequency of Speech Synthesis Publications During 1992–2017

The development of any scientific and research subject depends heavily on its research output and intellectual publications. However, such outputs become redundant and profitless if scientists do not refer to them. If the citation likelihood is the same for every article, then the citation frequency should increase with the number of articles a journal publishes [14]. The data mostly supports this, although numerous articles are never cited. Considering its significance, the citation frequency of Speech Synthesis publications was identified and is listed in Table 2. Nine hundred and ninety-eight of the 1256 papers were cited, while the remaining two hundred and fifty-eight were not. Of the 1256 papers, one paper published in 1999 received 926 citations in Acoustics, followed by a paper published in 1998 in Computer Science with 482 citations, and a paper published in 2009 in the same field with 454 citations. This data clearly reflects that research on Speech Synthesis conducted in Acoustics received global recognition.


Table 2. Citation frequency of Speech Synthesis publications

Times cited  No. of papers  No. of citations  Cumulative citations
  0              258               0                    0
  1              156             156                  156
  2              103             206                  362
  3               81             243                  605
  4               65             260                  865
  5               51             255                 1120
  6               41             246                 1366
  7               32             224                 1590
  8               39             312                 1902
  9               35             315                 2217
 10               28             280                 2497
 11               23             253                 2750
 12               19             228                 2978
 13               29             377                 3355
 14               16             224                 3579
 15               24             360                 3939
 16               14             224                 4163
 17                6             102                 4265
 18                8             144                 4409
 19                8             152                 4561
 20               13             260                 4821
 21                7             147                 4968
 22               15             330                 5298
 23                8             184                 5482
 24                9             216                 5698
 25                6             150                 5848
 26                8             208                 6056
 27                6             162                 6218
 28                3              84                 6302
 29                4             116                 6418
 30                5             150                 6568
 31                3              93                 6661
 32                8             256                 6917
 33                6             198                 7115
 34                3             102                 7217
 35                4             140                 7357
 36                4             144                 7501
 37                4             148                 7649
 38                1              38                 7687
 39                2              78                 7765
 40                2              82                 7847
 41                5             210                 8057
 42                4             172                 8229
 43                3             132                 8361
 44                2              90                 8451
 45                1              46                 8497
 46                1              48                 8545
 48                4             196                 8741
 49                2             100                 8841
 50                7             357                 9198
 52                3             156                 9354
 53                1              53                 9407
 54                2             108                 9515
 55                1              55                 9570
 56                2             112                 9682
 58                1              58                 9740
 59                4             236                 9976
 60                1              60               10,036
 61                2             122               10,158
 63                1              63               10,221
 64                3             192               10,413
 65                2             130               10,543
 68                2             136               10,679
 69                1              69               10,748
 70                1              70               10,818
 77                3             231               11,049
 78                2             156               11,205
 79                1              79               11,284
 81                3             243               11,527
 82                1              82               11,609
 85                1              85               11,694
 86                1              86               11,780
 87                1              87               11,867
 90                2             180               12,047
 91                1              91               12,138
 92                2             184               12,322
 93                1              93               12,415
 94                1              94               12,509
 95                1              95               12,604
 96                1              96               12,700
 98                1              98               12,798
 99                1              99               12,897
101                1             101               12,998
102                1             102               13,100
111                1             111               13,211
112                1             112               13,323
118                1             118               13,441
119                2             238               13,679
122                1             122               13,801
123                1             123               13,924
124                1             124               14,048
126                1             126               14,174
152                1             152               14,326
172                1             172               14,498
183                1             183               14,681
248                1             248               14,929
370                1             370               15,299
454                1             454               15,753
482                1             482               16,235
926                1             926               17,161

4 Bradford's Law and the Journal Literature

As discussed previously, the journal article is the single most widespread form of publication. In total, 333 journals published the 1256 articles dealing with Speech Synthesis. Of these, 206 journals published only one article on Speech Synthesis. To identify a core group of journals containing a high proportion of articles on Speech Synthesis, Bradford's law, which has been widely employed to study the distribution of literature among journals, was applied. Figure 2 illustrates the Bradford plot (the cumulative number of papers published by the journals plotted against the logarithm of journal rank by article count) for the journal literature on Speech Synthesis. If the plot for data on a specific subject reveals a discontinuity in the S-shaped slope of the Bradford method, the phenomenon may result from the dispersion of the literature on the subject. Clearly, the curve in Fig. 2 does not follow the S-shape of the typical Bradford plot: the curve for Speech Synthesis fails to reproduce the final droop of the Bradford plot, suggesting that the literature on Speech Synthesis is not yet spread across numerous different journals. The approximately linear portion appears after a journal rank of about 14, so the top 14 journals can be considered the core journals of the Speech Synthesis literature.

Fig. 2. The Bradford plot of the Speech Synthesis literature
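The data behind such a plot can be produced directly from rank-ordered per-journal article counts; a minimal sketch using the top journals' counts from Table 3 (the full data set covers 333 journals):

```python
import math

# Per-journal article counts, sorted in descending order
# (top entries taken from Table 3; the real list has 333 journals).
counts = [195, 99, 70, 62, 46, 45, 41, 37, 29, 28, 23, 22, 22, 21]

cumulative, running = [], 0
for c in counts:
    running += c
    cumulative.append(running)

# Bradford plot coordinates: log10(rank) on the x-axis,
# cumulative number of papers on the y-axis.
points = [(math.log10(rank), cum)
          for rank, cum in enumerate(cumulative, start=1)]

for x, y in points:
    print(f"{x:.2f}  {y}")
```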


Table 3 ranks the journals by number of articles published. To avoid making the ranking table too long, the cut-off value was set at 11 articles: Table 3 thus lists the 20 journals that each published at least 11 articles on Speech Synthesis, in terms of article count and number of citations. Bradford's law claims that a sample of articles can be divided into three equal sets, where the sets contain numbers of journals on a given topic in proportions of 1 : n : n², and this may also be true for the literature on Speech Synthesis. The present sample could be divided into three parts, each containing approximately 420 records. The numbers of journals in the three parts were 14 : 60 : 256, representing approximate proportions of 1 : 4 : 16, so n = 4. The first 14 journals thus comprise approximately 33% of the literature, while the remaining 67% is scattered among 319 journals. This statistic shows that the literature on Speech Synthesis is relatively scattered. It also bears noting that 206 journals published only one article dealing with Speech Synthesis, and that the first three journals together cover approximately 29% of the literature. The journal with the largest number of articles was Speech Communication, with 195 articles, representing 15.53% of the total. This was followed by IEEE Transactions on Audio Speech and Language Processing with 99 articles (7.88%), and IEICE Transactions on Information and Systems with 70 articles (5.57%).

Table 3. Journals publishing more than 10 articles during 1992–2017

Rank  Source title                                                        Record count  % of 1256  No. of citations
 1    Speech Communication                                                     195        15.53        5105
 2    IEEE Transactions on Audio Speech and Language Processing                 99         7.88        2906
 3    IEICE Transactions on Information and Systems                             70         5.57        1010
 4    Computer Speech and Language                                              62         4.94         634
 5    IEEE ACM Transactions on Audio Speech and Language Processing             46         3.66         258
 6    Lecture Notes in Computer Science                                         45         3.58          90
 7    Lecture Notes in Artificial Intelligence                                  41         3.26          80
 8    Journal of the Acoustical Society of America                              37         2.95         754
 9    Text Speech and Dialogue Proceedings                                      29         2.31          67
10    IEEE Transactions on Speech and Audio Processing                          28         2.23        1420
11    IEICE Transactions on Fundamentals of Electronics
      Communications and Computer Sciences                                      23         1.83          93
12    IEEE Journal of Selected Topics in Signal Processing                      22         1.75         138
13    IEEE Signal Processing Letters                                            22         1.75          90
14    EURASIP Journal on Audio Speech and Music Processing                      21         1.67          55
15    Multimedia Tools and Applications                                         13         1.04          60
16    Electronics Letters                                                       12         0.96          25
17    Prosody Phonology and Phonetics                                           12         0.96           0
18    Speech Prosody in Speech Synthesis Modeling and Generation of
      Prosody for High Quality and Flexible Speech Synthesis                    12         0.96           6
19    IEEE Transactions on Consumer Electronics                                 11         0.88          66
20    Proceedings of the National Academy of Sciences of the
      United States of America                                                  11         0.88          44
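The 1 : n : n² zoning described above splits the article total into three roughly equal thirds and counts the journals in each zone (the paper reports 14 : 60 : 256, i.e. roughly 1 : 4 : 16). A minimal sketch of that partitioning, shown on a small made-up count list:

```python
def bradford_zones(counts):
    """Split rank-ordered per-journal article counts into three zones
    holding roughly equal numbers of articles; return the number of
    journals falling in each zone."""
    total = sum(counts)
    target = total / 3
    boundaries, journals, articles = [], 0, 0
    for c in counts:
        journals += 1
        articles += c
        if len(boundaries) < 2 and articles >= target * (len(boundaries) + 1):
            boundaries.append(journals)
    boundaries.append(len(counts))
    return [boundaries[0],
            boundaries[1] - boundaries[0],
            boundaries[2] - boundaries[1]]

# Toy example: 36 articles split into thirds of 12 articles each.
print(bradford_zones([6, 6, 4, 4, 4, 2, 2, 2, 2, 2, 2]))  # [2, 3, 6]
```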


5 Journal-Wise Citation of Papers on Speech Synthesis

From the citation perspective, the law describes a quantitative relationship between journals. Figure 3 shows the Bradford plot (the cumulative number of citing papers for each journal against the logarithm of its rank) for journals citing Speech Synthesis publications. The figure clearly exhibits the S-shape of the typical Bradford-Zipf plot, although the initial rise is somewhat faster than average. The approximately linear portion appears at a journal rank of 14, so the top 14 may be considered the core citing journals of Speech Synthesis.

Fig. 3. The Bradford plot of the Speech Synthesis literature from the citation perspective

During 1992–2017, a total of 333 journals contained 17,161 citations involving works dealing with Speech Synthesis. The journal with the largest number of citations was Speech Communication, with 5105 citations, representing 29.97% of the total. This was followed by IEEE Transactions on Audio Speech and Language Processing with 2906 citations (16.93%), and IEEE Transactions on Speech and Audio Processing with 1420 citations (8.27%). Table 4 lists the top 14 journals in terms of number of citations.

Table 4. Journals in terms of number of citations

Rank  Journal title                                                  Citations  Average citations per paper
 1    Speech Communication                                              5105            26.2
 2    IEEE Transactions on Audio Speech and Language Processing         2906            29.4
 3    IEEE Transactions on Speech and Audio Processing                  1420            50.7
 4    IEICE Transactions on Information and Systems                     1010            14.4
 5    Journal of the Acoustical Society of America                       754            20.4
 6    Computer Speech and Language                                       634            10.2
 7    IEEE-ACM Transactions on Audio Speech and Language Processing      258             5.6
 8    Proceedings of the IEEE                                            220            44.0
 9    Journal of Phonetics                                               189            31.5
10    Learning Disability Quarterly                                      169            42.3
11    Computational Linguistics                                          154            19.3
12    SADHANA - Academy Proceedings in Engineering Sciences              150            16.7
13    IEEE Journal of Selected Topics in Signal Processing               138             6.3
14    ACM Transactions on Graphics                                       129            25.8


6 Lotka's Law and Author Productivity

Table 5 lists the 2453 authors, including single authors and co-authors, who contributed to the publication of the 1256 articles dealing with Speech Synthesis. On average, each author published 0.51 papers. The vast majority (1861 authors, or 75.87%) contributed only one article and did so with the assistance of co-authors. The statistics thus differ from Lotka's law, which states that roughly 60% of authors contribute just one paper. Furthermore, 1% of the authors contributed more than ten articles. One author contributed as many as 38 articles, while the second and third ranking authors contributed 31 and 28 articles, respectively. This shows that a large share of the Speech Synthesis literature is contributed by a small number of highly productive authors. Table 6 lists the 13 most productive authors, each of whom published at least 13 articles, together with the number of articles published. Notably, the data on the top three authors indicates that their publications dealing with Speech Synthesis appeared between 2006 and 2017 in Scotland, 2006 and 2016 in Japan, and 1993 and 2017 in Japan, respectively. This study finds that Yamagishi was the most prolific author writing on Speech Synthesis, and all of his works focus on this field.

Table 5. Author productivity

No. of articles  No. of authors  % of 2453  Cumulative authors
      1              1861          75.87         1861
      2               310          12.64         2171
      3               118           4.81         2289
      4                63           2.57         2352
      5                38           1.55         2390
      6                18           0.73         2408
      7                10           0.41         2418
      8                 8           0.33         2426
      9                 4           0.16         2430
     10                 4           0.16         2434
     11                 6           0.24         2440
     12                 0           0.00         2440
     13                 3           0.12         2443
     14                 2           0.08         2445
     15                 0           0.00         2445
     16                 2           0.08         2447
     17                 1           0.04         2448
     18                 1           0.04         2449
     19                 1           0.04         2450
     20                 1           0.04         2451
     21                 1           0.04         2452
     38                 1           0.04         2453


Table 6. Authors publishing at least 13 articles

Rank  Author        Record count  % of 1256
 1    Yamagishi J.       38          3.03
 2    Toda T.            31          2.47
 3    Kobayashi T.       28          2.23
 4    Tokuda K.          25          1.99
 5    King S.            23          1.83
 6    Ling Z. H.         17          1.35
 7    Nose T.            16          1.27
 8    Rao K. S.          16          1.27
 9    Sagisaka Y.        14          1.12
10    Wu C. H.           14          1.12
11    Erro D.            13          1.04
12    Nakamura S.        13          1.04
13    Zen H.             13          1.04

7 Core Authors Cited in Speech Synthesis Papers

This study also aims to identify the key authors on Speech Synthesis from the citation perspective. It is universally accepted in the bibliometric world that the influence of an author's publications increases with the number of times their works are cited. An author is considered influential if researchers in a similar field frequently cite their contributions [14]. One author obtained as many as 1768 citations, while the second and third ranking authors accounted for 1252 and 1233 citations, respectively. This shows that a very large share of the citations to the Speech Synthesis literature is concentrated on a small number of authors. Table 7 lists the 17 most cited authors, each of whom obtained more than 100 citations. The data on the top three authors indicates that their citations appeared between 2003 and 2017, 2006 and 2017, and 2007 and 2017, respectively. Notably, most of the core cited authors are also among the highly productive authors. This study also finds that Tokuda, one of the most prolific authors writing on Speech Synthesis, obtained the most citations in this field.

Table 7. Authors who obtained more than 100 citations

Rank  Author        Number of citations  Record count  Average citations per article
 1    Tokuda K.           1768                25                 70.7
 2    Yamagishi J.        1252                38                 32.9
 3    Toda T.             1233                31                 39.8
 4    Zen H.               958                13                 73.7
 5    Stylianou Y.         730                11                 66.4
 6    Kobayashi T.         665                28                 23.8
 7    Ling Z. H.           362                17                 21.3
 8    King S.              356                23                 15.5
 9    Erro D.              263                13                 20.2
10    Nose T.              247                16                 15.4
11    Hernaez I.           232                11                 21.1
12    Rao K. S.            225                16                 14.1
13    Sagisaka Y.          201                14                 14.4
14    Wu C. H.             161                14                 11.5
15    Shikano K.           159                11                 14.5
16    Dutoit T.            144                11                 13.1
17    Tao J. H.            125                11                 11.4

Lotka's law was used to measure author productivity. Specifically, this study tested whether the author data conforms to the original formulation of Lotka. Lotka's law can be expressed as y = c / xⁿ, where y is the fraction of authors who published x articles relative to the total number of authors, x is the number of articles published by an author, c is a constant with value 0.6079, and n is the slope of the log-log plot (Tsay, 2000). Lotka's law describes the frequency of publication by authors in a given field and states that the number of authors making n contributions is about 1/n² of those making one contribution. This means that approximately 60% of the authors writing in a given field have just one publication; 15% have two publications (1/2² of 60%); 7% have three publications (1/3² of 60%); and so on. Furthermore, according to Lotka's law of scientific productivity, only six percent of the authors in a given field produce more than ten articles. Figure 4 shows the data on Speech Synthesis author productivity with a fitted line. The red solid line indicates the fit based on all the data. The data on authors with high numbers of publications are quite scattered and may not be representative; if the data on authors with more than 11 publications are omitted, the least-squares fit follows this fitted line. Notably, the literature suggests that data on high-productivity authors should be omitted to achieve a good fit.
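The log-log fit described above can be carried out with ordinary least squares on (log x, log y) pairs; a minimal sketch using the author-productivity counts from Table 5, with authors above 11 publications omitted as the text suggests:

```python
import math

# (articles x, authors) pairs from Table 5, restricted to x <= 11
# and to nonzero author counts.
data = [(1, 1861), (2, 310), (3, 118), (4, 63), (5, 38), (6, 18),
        (7, 10), (8, 8), (9, 4), (10, 4), (11, 6)]
total_authors = 2453

xs = [math.log10(x) for x, _ in data]
ys = [math.log10(a / total_authors) for _, a in data]

# Ordinary least squares: ys ~ intercept + slope * xs; Lotka's n = -slope.
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))

print(f"fitted Lotka exponent n = {-slope:.2f}")  # n comes out near 2.6
```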


Fig. 4. Distribution of author productivity

8 Conclusions

Citations clearly indicate the quality of the literature on Speech Synthesis. This investigation shows that, during the last 25 years, Speech Synthesis publications received 17,161 citations spread among 333 journals, with impact factors varying from 0.01 to 5.00. Speech Synthesis research was cited by 946 reputable institutions from 65 countries. This study examined the growth of the Speech Synthesis literature based on the WOS database and tested various characteristics of the literature using bibliometric techniques. The following results were obtained:

1. Following 2004, the annual citation count grew approximately linearly, increasing by roughly 100 citations per year, and peaked in 2016 at about 2179 citations.
2. The journal literature on Speech Synthesis roughly conforms to the typical S-shaped Bradford-Zipf plot during the initial stage, but does not show the final droop during the later stage. Clearly, Speech Synthesis is not yet widely referred to in the journal literature.
3. The Bradford-Zipf plot reveals 14 core journals. Approximately 33% of the literature is concentrated in these first 14 journals, with the remaining 67% scattered among the other 319 journals.
4. The journal with the largest number of articles was Speech Communication, with 195 articles (15.53% of the total); it was also the journal with the largest number of citations, with 5105 citations (29.97% of the total).
5. The vast majority (75.87%) of authors contributed only one article on Speech Synthesis. Moreover, the author productivity distribution does not fit the original Lotka law.
6. Only 1% of authors contributed more than ten articles on Speech Synthesis. The author with the largest number of papers on Speech Synthesis contributed 38 such papers.


Acknowledgements. This study was conducted under the "III system-of-systems driven emerging service business development Project" of the Institute for Information Industry, which is subsidized by the Ministry of Economic Affairs of the Republic of China.

References

1. Bradlow, A.R., Torretta, G.M., Pisoni, D.B.: Intelligibility of normal speech I: global and fine-grained acoustic-phonetic talker characteristics. Speech Commun. 20, 255–272 (1996)
2. Kawahara, H., Masuda-Katsuse, I., de Cheveigne, A.: Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency-based F0 extraction: possible role of a repetitive structure in sounds. Speech Commun. 27, 187–207 (1999)
3. Stylianou, Y.: Applying the harmonic plus noise model in concatenative speech synthesis. IEEE Trans. Speech Audio Process. 9, 21–29 (2001)
4. Stylianou, Y., Cappe, O., Moulines, E.: Continuous probabilistic transform for voice conversion. IEEE Trans. Speech Audio Process. 6, 131–142 (1998)
5. Toda, T., Black, A.W., Tokuda, K.: Voice conversion based on maximum-likelihood estimation of spectral parameter trajectory. IEEE Trans. Audio Speech Lang. Process. 15, 2222–2235 (2007)
6. Toda, T., Tokuda, K.: A speech parameter generation algorithm considering global variance for HMM-based speech synthesis. IEICE Trans. Inf. Syst. E90-D, 816–824 (2007)
7. Yamagishi, J., Kobayashi, T., Nakano, Y., Ogata, K., Isogai, J.: Analysis of speaker adaptation algorithms for HMM-based speech synthesis and a constrained SMAPLR adaptation algorithm. IEEE Trans. Audio Speech Lang. Process. 17, 66–83 (2009)
8. Zen, H., Tokuda, K., Black, A.W.: Statistical parametric speech synthesis. Speech Commun. 51, 1039–1064 (2009)
9. Zen, H., Tokuda, K., Masuko, T., Kobayashi, T., Kitamura, T.: A hidden semi-Markov model-based speech synthesis system. IEICE Trans. Inf. Syst. E90-D, 825–834 (2007)
10. Zen, H., Toda, T., Nakamura, M., Tokuda, K.: Details of the Nitech HMM-based speech synthesis system for the Blizzard Challenge 2005. IEICE Trans. Inf. Syst. E90-D, 325–333 (2007)
11. Takeda, Y., Kajikawa, Y.: Optics: a bibliometric approach to detect emerging research domains and intellectual bases. Scientometrics 78, 543–558 (2009)
12. Tsay, M.Y., Jou, S.J., Ma, S.S.: A bibliometric study of semiconductor literature, 1978–1997. Scientometrics 49, 491–509 (2000)
13. Shiau, W.-L.: A profile of information systems research published in Expert Systems with Applications from 1995 to 2008. Expert Syst. Appl. 38, 3999–4005 (2011)
14. Mishra, P.N., Panda, K.C., Goswami, N.G.: Citation analysis and research impact of National Metallurgical Laboratory, India during 1972–2007: a case study. Malays. J. Libr. Inf. Sci. 15, 91–113 (2010)

A Study on Social Support, Participation Motivation and Learning Satisfaction of Senior Learners

Hsiang Huang¹, Zne-jung Lee¹, and Wei-san Su²

¹ Huafan University, Taipei, Taiwan, ROC
[email protected]
² National Yunlin University of Science and Technology, Douliu, Yunlin, Taiwan, ROC

Abstract. This study focuses on the status of, and differences among, senior learners' social support, participation motivation and learning satisfaction, and explores their relationships using a structural equation model. Taking senior learners at a Senior University as the research object, after the questionnaires were collected, statistical analysis was conducted using confirmatory factor analysis, independent-samples t tests, one-way analysis of variance, Scheffé's post-hoc comparison method, and a structural equation model. The study found that: 1. the mean score for "information support" was at the 3-point level, while all other dimensions scored above 4; 2. different living areas affect social support, participation motivation and learning satisfaction; 3. social support positively influences participation motivation, social support positively influences learning satisfaction, and participation motivation positively influences learning satisfaction.

Keywords: Senior learners · Social support · Participation motivation · Learning satisfaction
1 Introduction

1.1 Research Motivation and Purpose

In an ageing environment with a changing demographic structure, many colleges and universities have begun to undertake senior education. The purpose of this study is to understand: first, senior learners' feelings about social support, their participation motivation, and their satisfaction with learning; second, whether these feelings differ for senior learners of different backgrounds; third, whether social support, participation motivation and learning satisfaction are related to each other.

1.2 Senior University

Senior University is a Ministry of Education program to give senior citizens more learning opportunities; it encourages schools to set up courses that meet the needs of senior citizens.

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1979–1984, 2019. https://doi.org/10.1007/978-981-13-3648-5_253

Enrollment requires only being 55 years of age or older and in good physical condition; no


academic qualifications are required. Participants can not only enjoy university-class equipment and teachers and learn new knowledge and skills, but also enjoy the same rights as regular college students, such as book borrowing, parking, medical care and psychological counseling, to experience university life.

2 Method

2.1 Framework

The research framework is shown in Fig. 1.

Fig. 1. A study on social support, participation motivation and learning satisfaction research framework

2.2 Statistical Analysis Method

Likert Items. A total of 164 pre-test questionnaires were issued and 124 valid responses were recovered. SPSS for Windows 18.0 was used to perform item analysis, factor analysis, and reliability and validity testing after data encoding.

Item Analysis. Cronbach's α should be greater than 0.7; the correlation between each item and the total score should be 0.3 or more and reach a significant level (0.05 or 0.01), as a criterion of internal consistency. The critical ratio (CR) of individual items should be at least 3, with a significant difference (α = 0.05 or 0.01). As a result of the item analysis, all items of the three questionnaires were retained.

Factor Analysis and Reliability Analysis. These served as the basis for selecting the formal questionnaire items. Selection criteria: the correlation between each item and the total score of its subscale needs to be more than 0.30, with p < 0.01; for factor analysis, a factor loading of 0.40 or more was the item-selection criterion, after which the internal consistency of each level of the corrected scale was observed. After the item and reliability analyses, the KMO test and Bartlett's test of sphericity were used to determine whether factor analysis was appropriate; factor analysis was then performed to establish the construct validity of the scale. The factor analysis used was


principal component analysis for factor extraction and Direct Oblimin for oblique rotation, retaining common factors with eigenvalues greater than one. The formal questionnaires produced by combining the above standards comprise: the senior learner social support scale, 19 questions; the senior learner participation motivation scale, 12 questions; the senior learner learning satisfaction scale, 16 questions; and senior learner background variables.

Structural Equation Model. This study uses AMOS software to analyze the overall structural model to verify the model proposed in this study.
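The reliability criterion above (Cronbach's α greater than 0.7) can be computed directly from item scores; a minimal sketch with made-up responses, not the study's actual data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha; `items` is a list of equal-length score lists,
    one list per questionnaire item."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Sum of per-item variances vs. variance of per-respondent totals.
    sum_item_var = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum_item_var / var(totals))

# Three perfectly consistent items yield alpha = 1.0.
print(cronbach_alpha([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]))
```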

3 Results

3.1 Social Support Analysis

Social Support Status. The highest score was "tool support" (M = 4.18), followed by "emotional support" (M = 4.17), "academic support" (M = 4.14) and "information support" (M = 3.95). This shows that senior learners have a positive attitude toward social support. This study finds that senior learners' social support comes from the arrangement of the Senior University's teaching equipment and content, as well as from interaction with people and other emotional aspects, indicating that the hardware provided by the Senior University has reached the level of senior learners' expectations. The mutual encouragement and sharing in the emotional dimension is also an indispensable element. As for the lower-scoring items, senior learners are generally less comfortable with communication, and the individual counseling provided by Senior Universities is also insufficient in the eyes of senior learners.

Difference Between Background Variables and Social Support. Significant differences: senior learners living in different regions differ significantly in "tool support"; according to Scheffé's method, senior learners living in the eastern region score higher than those living in the southern region. No significant differences: gender, age, education level, living status, participation status, and sources of information.

3.2 Participation Motivation Analysis

Participation Motivation Status. The highest score was "psychological motivation" (M = 4.28), followed by "physiological motivation" (M = 4.22) and "social motivation" (M = 4.1). This shows that senior learners have a positive attitude toward participation motivation. The lower-scoring items in this study were "It can help rehabilitate and treat diseases or sequelae" (M = 3.92), suggesting that rehabilitation and treatment of diseases are weaker participation motives, and "Improve social status and gain recognition" (M = 3.92), suggesting that senior learners participate out of self-affirmation and to live for themselves rather than for status.

1982

H. Huang et al.

Difference Between Background Variables and Participation Motivation. Significant differences: in "physiological motivation", by Scheffe's method, learners participating for the second time score higher than those participating for the first time; in "social motivation", subsequent comparisons show that learners living in the central region score higher than those living in the northern region. No significant differences: sex, age, education, living status, and source of information.

3.3 Learning Satisfaction Analysis

Learning Satisfaction Status. The highest score was "interpersonal satisfaction" (M = 4.33), followed by "course satisfaction" (M = 4.23) and "environmental satisfaction" (M = 4.18). This shows that senior learners have a positive attitude toward learning satisfaction. The study finds that senior learners' learning satisfaction comes from the senior university's teaching equipment and content as well as from emotional interactions with people; the hardware provided by the senior university meets senior learners' expectations, and mutual encouragement and emotional sharing are likewise indispensable. Difference Between Background Variables and Learning Satisfaction. Significant differences: in course satisfaction, by Scheffe's method, learners living in the eastern region score higher than those living in the southern region. No significant differences: gender, age, education level, living status, participation status, and sources of information.

3.4 Structural Equation Model Analysis

This study's path model achieved the best fit without corrective action. According to the path diagram of the model, social support has significant path coefficients with participation motivation, social support with learning satisfaction, and participation motivation with learning satisfaction (see Fig. 2). Social Support and Participation Motivation: path coefficient 0.74, t = 13.55, p < 0.05; the social support senior learners obtain positively influences their participation motivation. Social Support and Learning Satisfaction: path coefficient 0.49, t = 6.54, p < 0.05; the social support senior learners receive positively affects their learning satisfaction. Participation Motivation and Learning Satisfaction: path coefficient 0.32, t = 4.29, p < 0.05; senior learners' participation motivation positively affects their learning satisfaction.

A Study on Social Support, Participation Motivation …

1983

Fig. 2. Social support, participation motivation and learning satisfaction pattern path diagram

4 Discussion and Future Research Directions

4.1 Discussion

Social Support, Participation Motivation and Learning Satisfaction Status. The social support facets, from high to low, are "tool support", "emotional support", "academic support", and "message support", and overall social support scored at level 4 or above. Senior learners thus receive high support from the senior university's equipment and teaching resources, from the encouragement of teachers, relatives and friends, and from the teaching content; only the conveying and acceptance of messages sits at level 3. The participation motivation facets, from high to low, are "psychological motivation", "physiological motivation", and "social motivation"; senior learners' motivation comes from spiritual satisfaction and physical fitness more than from external support and recognition, and overall participation motivation also scored at level 4. The learning satisfaction facets, from highest to lowest, are "interpersonal satisfaction", "course satisfaction", and "environmental satisfaction"; overall learning satisfaction scored at level 4, with the highest satisfaction in human interaction, followed by the growth gained from the courses, while satisfaction with the environment is lower, at level 3. Social Support, Participation Motivation and Learning Satisfaction at Different Background Variables. Gender, age, and education level produce no differences in social support, participation motivation or learning satisfaction. Senior learners living in the eastern region are more satisfied with "tool support" than senior learners living in the southern region. In participation motivation, senior learners participating in senior university for the second time score higher in "physiological motivation" than first-time participants. Senior learners living in central regions are


higher than senior learners who live in northern regions in "social motivation". In learning satisfaction, senior learners living in the eastern region score higher than those living in the southern region. Social Support, Participation Motivation and Learning Satisfaction. Senior learners' social support has a positive impact on participation motivation and on learning satisfaction, and their participation motivation has a positive effect on learning satisfaction.

4.2 Future Research Directions

From the research it can be found that social support positively influences participation motivation. It is therefore recommended that existing senior learners encourage those who have not yet participated to take part in school courses, so that people who cannot obtain information can still join senior university. As senior university courses are set up according to each university's conditions, it is recommended that all regions exchange course outlines and share results to reduce the gap between urban and rural areas and achieve the goal of successful aging across Taiwan. In addition to adjusting courses and introducing senior learners, and in order to allow people of many ages to participate in senior university, data mining will be performed in the future to analyze rules according to senior learners' different background variables.

A Health Information Exchange Based on Block Chain and Cryptography

Wei-Chen Wu1 and Yu-Chih Wei2

1 Hsin Sheng Junior College of Medical Care and Management, Taoyuan, Taiwan
[email protected]
2 Department of Finance and Information, National Kaohsiung University of Science and Technology, Kaohsiung, Taiwan
[email protected]

Abstract. This study proposes a HealthCoin scheme that allows hospitals, doctors and patients to exchange health information with each other. The HealthCoin also serves as a transaction certificate for health information exchange. The scheme is based on blockchain and cryptography, with three roles (hospitals, doctors and patients) acting as nodes on the proposed blockchain. However, medical records concern patient privacy, so medical institutions must not only operate an electronic health information exchange mechanism but also attend to network security and the protection of patients' data. A third-party fair association would be needed to integrate the health information exchange mechanism, yet many associations want to be that third party, so many hospitals and medical institutions do not know whom to integrate or exchange with. The project therefore uses blockchain technology and cryptography to solve the problem of third-party fairness and to exchange health information using HealthCoin.

Keywords: HealthCoin · Block chain · Electronic medical record exchange

1 Introduction

The first aim is to improve EMR exchange frequency. Currently, many hospitals operate EMR (Electronic Medical Record) exchange independently, so the exchange frequency is very low. For this reason, the study proposes the HealthCoin as a valuable token, not a currency, used only within the medical community; HealthCoin can encourage medical institutions to share healthcare information resources. Another purpose is seamless medical care. For example, a patient undergoes an operation in hospital A today and, for some reason, will undergo another operation in hospital B next week. If hospital B has the patient's medical records from hospital A, the patient does not have to repeat the same diagnosis; it is as if the patient were treated in one hospital, which we call seamless medical care. The benefit is a reduction in the waste of medical resources. Therefore, the EMR exchange frequency will increase in the future. One EMR may

© Springer Nature Singapore Pte Ltd. 2019 J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1985–1990, 2019. https://doi.org/10.1007/978-981-13-3648-5_254

1986

W.-C. Wu and Y.-C. Wei

be stored in many hospitals' databases. Medical information security is therefore very important, and improving it is a hot topic; blockchain in health care can address these issues. This study proposes a Healthcoin scheme that allows hospitals, doctors and patients to exchange health information with each other; the Healthcoin also serves as a transaction certificate for health information exchange. The scheme is based on blockchain and cryptography, with three roles (hospitals, doctors and patients) acting as nodes on the proposed blockchain. However, medical records concern patient privacy, so medical institutions must not only operate an electronic health information exchange mechanism but also attend to network security and the protection of patients' data. A third-party fair association would be needed to integrate the health information exchange mechanism, yet many associations want to be that third party, so many hospitals and medical institutions do not know whom to integrate or exchange with. The study therefore uses blockchain technology and cryptography to solve the problem of third-party fairness and to exchange health information using Healthcoin.

2 Our Scheme

Our proposed scheme is based on Merkle trees, which the blockchain uses to summarize all the transactions in a block, producing an overall digital fingerprint of the entire set of transactions and providing a very efficient way to verify whether a transaction is included in a block. The Merkle tree is constructed bottom-up: each transaction is hashed and the hash stored in a leaf node; two leaf nodes are then summarized in a parent node by concatenating the two hashes and hashing them together. The process continues until only one node remains at the top, known as the Merkle root. That hash is stored in the block header and summarizes all the transactions. The study proposes three types of blockchains.
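The bottom-up construction just described can be sketched in a few lines of Python. The helper names below are ours, and the sketch applies SHA-256 once per node, whereas Bitcoin applies it twice; it is an illustration of the technique, not the paper's implementation.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions: list[bytes]) -> bytes:
    """Hash each transaction into a leaf, then repeatedly pair and
    re-hash nodes until a single root summarizes the whole set."""
    if not transactions:
        raise ValueError("at least one transaction is required")
    level = [sha256(tx) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2 == 1:        # odd level: duplicate the last node
            level.append(level[-1])
        level = [sha256(left + right)  # concatenate two hashes, hash again
                 for left, right in zip(level[::2], level[1::2])]
    return level[0]

# A block containing three transactions is summarized by one fingerprint.
root = merkle_root([b"tx-0", b"tx-1", b"tx-2"])
print(root.hex())
```

Because only the root is stored in the block header, verifying that one transaction belongs to a block needs only a logarithmic number of sibling hashes rather than the full transaction list.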

2.1 Patient-to-Hospital (P2H) Blockchain

A patient X goes to a hospital A and gets a diagnosis record from the doctor J. The diagnosis record is a transaction stored in the patient's block. Next, when patient X goes to another hospital (or the same one), a new block is generated to store the new diagnosis record from a different doctor. Thus one patient has exactly one P2H blockchain, regardless of whether the hospital and doctor are the same (see Fig. 1).

2.2 Doctor-to-Patient (D2P) Blockchain

A doctor J treats a patient X. Doctor J can get a copy of patient X's P2H blockchain (#0 ← #1 ← #2). When patient X is treated by doctor J, patient X generates a new block (#3) in his or her own P2H blockchain; at the same time, doctor J holds the same chain as a D2P blockchain (#0 ← #1 ← #2 ← #3). Thus one doctor has as many blockchains as patients (see Fig. 2).
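The chain structure of Figs. 1 and 2 can be sketched as a hash-linked list. In this illustrative Python sketch the Block class, the SHA-256 hash, and its truncation to eight hex digits (echoing the short 0x… labels in the figures) are our assumptions, not part of the paper:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Block:
    prev_hash: str   # hash of the previous block, forming the chain link
    record: str      # e.g. "Diagnosis Record from A and Dr.J"

    @property
    def hash(self) -> str:
        data = (self.prev_hash + self.record).encode()
        return hashlib.sha256(data).hexdigest()[:8]  # shortened for display

# Patient X's P2H chain: a genesis block plus one block per diagnosis.
genesis = Block("0x000", "Generated by Our System")
b1 = Block(genesis.hash, "Diagnosis Record from A and Dr.J")
b2 = Block(b1.hash, "Diagnosis Record from B and Dr.K")
p2h = [genesis, b1, b2]

# Doctor J treats patient X: copy X's P2H chain (#0 <- #1 <- #2),
# then both sides append the new diagnosis as block #3.
d2p = p2h + [Block(p2h[-1].hash, "Diagnosis Record from A and Dr.J")]

# Each block points at its predecessor's hash, so tampering breaks the chain.
assert all(d2p[i].prev_hash == d2p[i - 1].hash for i in range(1, len(d2p)))
print(len(d2p))
```

Linking each block to the hash of its predecessor is what lets the doctor's copied chain and the patient's own chain be compared for consistency block by block.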

Fig. 1. P2H blockchain

Fig. 2. D2P blockchain

2.3 Healthcoin Blockchain

This blockchain is meant to improve the EMR exchange frequency and reduce the waste of medical resources. The mechanism promotes competition among doctors: it encourages a doctor to convince patients to upload and share their EMRs with other hospitals. Our health cloud stores all P2H, D2P, and Healthcoin blockchains. The four main tables have the following schemas:

• blockchainTbl (Patient ID, Hash, preHash, Diagnosis ID, Doctor ID, Hospital ID)
• patientTbl (Patient ID, Doctor ID, Hospital ID, Healthcoin)
• doctorTbl (Doctor ID, Hospital ID, Healthcoin)


• diagnosiTbl (Diagnosis ID, Doctor ID, Diagnosis Record)

For the patient, before he/she uploads a new EMR using our software application (and gains a Healthcoin), the software creates a new block to store the EMR. Simultaneously, the new block is stored in both the client's and the server's database, and on the patient side the app can read the patient's complete blockchain. The patient's user interface of the proposed system offers the following functions:

• Upload EMR
• Check patient's historical diagnosis record
• Check P2H Blockchain
• Check Healthcoin
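The four main tables listed above can be sketched with an in-memory SQLite database for illustration. The column types, primary keys, and the spelling diagnosisTbl (for the paper's "diagnosiTbl") are our assumptions:

```python
import sqlite3

# In-memory sketch of the four main tables described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE blockchainTbl (
    patient_id   TEXT,
    hash         TEXT,
    pre_hash     TEXT,
    diagnosis_id TEXT,
    doctor_id    TEXT,
    hospital_id  TEXT
);
CREATE TABLE patientTbl (
    patient_id  TEXT PRIMARY KEY,
    doctor_id   TEXT,
    hospital_id TEXT,
    healthcoin  INTEGER DEFAULT 0   -- tokens earned for sharing EMRs
);
CREATE TABLE doctorTbl (
    doctor_id   TEXT PRIMARY KEY,
    hospital_id TEXT,
    healthcoin  INTEGER DEFAULT 0
);
CREATE TABLE diagnosisTbl (
    diagnosis_id     TEXT PRIMARY KEY,
    doctor_id        TEXT,
    diagnosis_record TEXT
);
""")

# Patient X of hospital A, treated by Dr. J, has earned one Healthcoin.
conn.execute("INSERT INTO patientTbl VALUES ('X', 'J', 'A', 1)")
coins = conn.execute(
    "SELECT healthcoin FROM patientTbl WHERE patient_id = 'X'"
).fetchone()[0]
print(coins)
```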

For the doctor, a patient goes to a hospital and is treated by a doctor. The doctor creates a new block for the new diagnosis and then copies the patient's blockchain using our software app. On the doctor side, the app can read many patients' complete blockchains. The doctor's user interface of the proposed system offers the following functions:

• Create new block for new diagnosis
• Copy the blockchain of new patient
• Check patient's historical diagnosis record
• Check D2P Blockchain
• Check Healthcoin

3 Use Scenario

3.1 Patient

Before a patient uploads his/her new EMR using our software app (and gains a Healthcoin), the software creates a new block to store the EMR. Simultaneously, the new block is stored in both the client's and the server's database, and on the patient side the app can read the patient's complete blockchain.

3.2 Doctor

A patient goes to a hospital and is treated by a doctor. The doctor creates a new block for the new diagnosis and then copies the patient's blockchain using our software app. On the doctor side, the app can read many patients' complete blockchains.

3.3 User Interface

• Patient
  1. Upload EMR
  2. Check patient's historical diagnosis record
  3. Check P2H Blockchain
  4. Check Healthcoin
• Doctor
  1. Create new block for new diagnosis
  2. Copy the blockchain of new patient
  3. Check patient's historical diagnosis record
  4. Check D2P Blockchain
  5. Check Healthcoin

• Our system
  1. Check patient's historical diagnosis record
  2. Check P2H Blockchain
  3. Check D2P Blockchain
  4. Check Healthcoin Blockchain
  5. Trace all Blockchain

4 Conclusion

Through the dashboard, the proposed scheme can check a patient's historical diagnosis record, the P2H blockchain, the D2P blockchain, and the Healthcoin blockchain, and trace all blockchains. The proposal can also improve the EMR exchange frequency, which is still insufficient today, and reduce the waste of medical resources: many patients currently repeat the same diagnosis or treatment in different hospitals even though it does not need to be done again. The owner of the medical information is the patient, not the doctor and not the medical institution, so if patients agree to exchange their own EMRs, they should receive feedback or a reward. The proposed Healthcoin achieves that.


The Relationship of Oral Hygiene Behavior and Knowledge

Cheng youeh Tsai1,2, Frica Chai3, Ming-Sung Hsu1,4, and Wei-Ming Ou5

1 Hsin Sheng Junior College of Medical Care and Management, Taoyuan, Taiwan
[email protected]
2 Kaohsiung Medical University, Kaohsiung, Taiwan
3 Far East University, Tainan, Taiwan
[email protected]
4 Shu-Zen Junior College of Medicine and Management, Kaohsiung, Taiwan
5 National Taitung Junior College, Taitung, Taiwan

Abstract. In our country, there are no formal oral hygiene courses in senior high school, yet oral hygiene is a serious topic for students. This study surveys the oral hygiene knowledge and oral hygiene behavior of college students; its aim is to discuss the relationship between oral hygiene knowledge and behavior across different schools and departments. The study sample was taken from senior students in two different colleges, and data were collected by questionnaire and analyzed to discuss oral hygiene knowledge and oral hygiene behavior. The scores of oral hygiene knowledge differ significantly by gender and department. Department has the strongest effect on oral hygiene behavior; school and the caregiver's education are secondary effects. The most important factor for both oral hygiene knowledge and behavior is department, and the relationship between oral hygiene knowledge and behavior is not strong.

Keywords: Oral hygiene knowledge · Oral hygiene behavior · College student

1 Introduction

The study of oral hygiene is important worldwide; the mouth is the organ through which a person obtains nutrition. Al et al. (2016) surveyed the use of public oral health services and found that employed people use them at a lower rate than the unemployed, even though the need is more pressing for the employed. Martins et al. (2015) sampled students in public and private elementary schools to investigate oral condition together with students' social and family backgrounds, aiming to discuss the relationship between personal social and family factors and oral condition; the study found that the (public/private) school, the caregiver's education, and family income affect students' oral condition. Sharda (2008) studied the oral hygiene knowledge, attitude and behavior of senior and primary students. The conclusion is that the scores of senior students in oral

© Springer Nature Singapore Pte Ltd. 2019
J. C. Hung et al. (Eds.): FC 2018, LNEE 542, pp. 1991–1995, 2019. https://doi.org/10.1007/978-981-13-3648-5_255

1992

C. y. Tsai et al.

hygiene knowledge, oral hygiene attitude and oral hygiene behavior are better than those of primary students, while none of the three scores differ by gender. Sharda (2010) compared medical students, practice students and non-medical students on oral hygiene knowledge, attitude and behavior. The knowledge scores from high to low were medical, practice, and non-medical students; the attitude scores from high to low were practice, medical, and non-medical students; and the behavior scores of practice and medical students were higher than those of non-medical students, with females scoring higher than males on all three. All these past studies focus on foreign populations, however, and may not fit our country, so the aim of this study is to discuss the relationship among oral hygiene knowledge, attitude and behavior in our country.

2 Methods

The study sample was taken from the senior students of two different colleges; each school selected 200 students, 100 medical and 100 non-medical. Data were collected by questionnaire, which consists of 15 questions on oral hygiene knowledge, 10 on oral hygiene attitude, 10 on oral hygiene behavior, and 10 on demographics. The data were analyzed to discuss oral hygiene knowledge, attitude, behavior and related factors. SPSS 20.0 was used for the analysis; the statistical tools are descriptive statistics, frequency tables, the chi-square test, and the t test.
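The chi-square test named above can be illustrated with the standard shortcut formula for a 2 × 2 contingency table. The sketch below is our own helper, not the study's SPSS procedure; it is applied to the "living with family" counts reported in Table 1 (Hsin 181 yes / 19 no vs. Shu 156 yes / 44 no):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 contingency table
    [[a, b], [c, d]]: n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    if denom == 0:
        raise ValueError("a marginal total is zero")
    return n * (a * d - b * c) ** 2 / denom

# "Living with family" counts from Table 1: Hsin 181 yes / 19 no,
# Shu 156 yes / 44 no.
stat = chi_square_2x2(181, 19, 156, 44)
print(round(stat, 3))  # 11.775, above the df = 1 cutoff of 3.84 for p < 0.05
```

A statistic this large is consistent with the Results section, which reports "living with family" as statistically significant between the two schools.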

3 Results

The demographic variables are shown in Table 1: the department variable has 50% nursing and 50% non-nursing students, and 14% of the sample is male and 86% female. The variable "living with family" is statistically significant, with 84.3% answering "Yes" and 15.7% "No". For family status, 73% live with parents, 23.5% live alone, and 3.5% live with grandparents. For caregiver education, the percentages of high school and low school are 51.5% and 48.5%. Table 2 shows the scores of oral hygiene knowledge; "department" and "gender" are statistically significant. The means (standard error) of oral hygiene knowledge are 8.23 (2.068) in Hsin and 7.94 (2.355) in Shu; 8.66 (1.914) in nursing and 7.52 (2.355) in non-nursing; 7.25 (2.752) in males and 8.22 (2.092) in females; 8.29 (2.112) for high school caregiver education and 7.87 (2.312) for low school; and 8.01 (2.221) for income above 80,000/month and 8.23 (2.140) for income below 80,000/month. Table 3 shows the relationship of oral hygiene behavior and knowledge; many oral hygiene behaviors affect oral hygiene knowledge. These oral behaviors are statistical

Table 1. Demographics data

Variable             Level          Total N (%)    Hsin (N = 200) N (%)   Shu (N = 200) N (%)   P-value
Department           Nursing        200 (50.0)     100 (50.0)             100 (50.0)
                     Non-nursing    200 (50.0)     100 (50.0)             100 (50.0)
Gender               Male           56 (14.0)      25 (12.5)              31 (15.5)
                     Female         344 (86.0)     175 (87.5)             169 (84.5)
Living with family   Yes            337 (84.3)     181 (90.5)             156 (78.0)
                     No             63 (15.7)      19 (9.5)               44 (22.0)
Family status        Parents        292 (73.0)     147 (73.5)             145 (72.5)            0.972
                     Single         94 (23.5)      46 (23.0)              48 (24.0)
                     Grandparents   14 (3.5)       7 (3.5)                7 (3.5)
Caregiver education  High school    206 (51.5)     99 (49.5)              107 (53.5)            0.424
                     Low school     194 (48.5)     101 (50.5)             93 (46.5)
Income 80,000                       263 (65.7)     121 (60.5)             142 (71.0)            0.270
                                    137 (34.3)     79 (39.5)              58 (29.0)
Statistic method: Chi-square test. *P-value < 0.05; +P-value < 0.01