
Manufacturing Systems and Industry Applications

Edited by Yanwen Wu

Manufacturing Systems and Industry Applications

Selected, peer reviewed papers from the 2011 International Conference on Materials Engineering for Advanced Technologies (ICMEAT2011) May 5-6, 2011, Singapore, Singapore

Edited by

Yanwen Wu

Copyright © 2011 Trans Tech Publications Ltd, Switzerland
All rights reserved. No part of the contents of this publication may be reproduced or transmitted in any form or by any means without the written permission of the publisher.
Trans Tech Publications Ltd, Kreuzstrasse 10, CH-8635 Durnten-Zurich, Switzerland, http://www.ttp.net
Volume 267 of Advanced Materials Research, ISSN 1022-6680
Full text available online at http://www.scientific.net

Distributed worldwide by
Trans Tech Publications Ltd, Kreuzstrasse 10, CH-8635 Durnten-Zurich, Switzerland
Fax: +41 (44) 922 10 33
e-mail: [email protected]

and in the Americas by
Trans Tech Publications Inc., PO Box 699, May Street, Enfield, NH 03748, USA
Phone: +1 (603) 632-7377
Fax: +1 (603) 632-5611
e-mail: [email protected]

Preface

I am delighted to announce that the 2011 International Conference on Materials Engineering for Advanced Technologies (ICMEAT 2011) will be held on May 5-6, 2011, in Singapore. The objective of this conference is to bring together researchers from academia and industry, as well as practitioners, to share ideas, problems and solutions relating to the multifaceted aspects of Materials Engineering for Advanced Technologies. We are pleased to invite authors to submit their papers to ICMEAT 2011, addressing issues that serve the present and future development of the field. All participants will have a chance to hear the keynote speeches from our experts. ICMEAT 2011 is co-sponsored by the National University of Singapore and the Asia Pacific Human-Computer Interaction Research Center. We would like to thank the organization staff, the program chairs, and the members of the program committees for their hard work. Special thanks go to TTP Publisher. Welcome to ICMEAT 2011; welcome to Singapore. Singapore, officially the Republic of Singapore, is a Southeast Asian city-state off the southern tip of the Malay Peninsula, 137 kilometres (85 mi) north of the equator. An island country made up of 63 islands, it is separated from Malaysia by the Straits of Johor to its north and from Indonesia's Riau Islands by the Singapore Strait to its south. The country is highly urbanised with very little primary rainforest remaining, although more land is being created for development through land reclamation. Though the conference itself is short, its significance is far-reaching. Let's get together again next year.

Yanwen Wu, Huazhong Normal University, China

ICMEAT 2011 Organizing Committee

Honorary Conference Chairs
Donald Lee, Missouri University of Science & Technology, USA
Chin-Chen Chang, Feng Chia University, Taiwan
Jun Wang, The Chinese University of Hong Kong, Hong Kong

General Chairs
Yuanzhi Wang, Intelligent Information Technology Application Research Association, Hong Kong
Dehuai Zeng, Shenzhen University, China

Publication Chair
Yanwen Wu, Huazhong Normal University, China

International Program Committees
Ziheng Liu, Huazhong Normal University, China
Ying Zhang, Wuhan University, China
Yang Xiaohong, Nanjing University, China
Yi Zhang, Beijing Normal University, China
Peide Liu, Shandong Economic University, China
Dariusz Krol, Wroclaw University of Technology, Poland
Qihai Zhou, Southwestern University of Finance and Economics, China
Man Cao, Hubei Normal University, China
Carsten Felden, Howard University, USA
Wei Li, Asia Pacific Human-Computer Interaction Research Center, Hong Kong
Adam Marks, University of Bari, Italy

Table of Contents

Preface and Organizing Committees

Manufacturing Systems and Industry Application
Lifelong E-Learning and Individual Characteristics: The Role of Gender, Age, Career and Prior Experience (H.L. Liao, S.H. Liu and Y.J. Chou) ... 1
Balanced Link Mapping in Multicast Service Overlay Networks Design (N. Qi, B.Q. Wang and B.J. Wang) ... 7
Information Visualization for Decision Support Systems on Commerce (S.D. Yu, H. Li and C. Wang) ... 13
An Optimizing Design Approach for the Fiber Manufacturing Based on the Immune Genetic Algorithm-Optimized Neural Network (H.Z. Zhu, Y.S. Ding, X. Liang, K.R. Hao and H.P. Wang) ... 19
Scene Shortest Path Solutions Based on the Breadth First Search (J.H. Wen, H.L. Jiang, M. Zhang and J.L. Song) ... 25
Analysis on Information Construction of University Personnel Archives (S.H. Han) ... 30
The Analysis of Accounting Information Activities Blending SOA (Y.F. Niu) ... 35
Study of Assessment of Computer Aided Color Design (H.S. Zhang, D.M. Zhuang and D. Ma) ... 39
Rough Set Application in Customer Classification (J. Li, W.B. Xu, W.Y. Tu, X. Wang, W. Zhang and J. Wen) ... 46
Scan Conversion for Straight Line Controlled by Residuals (L.Q. Niu, Z.Y. Huang and X. Chen) ... 50
Chest Card Recognition System Design (L.J. Ma, Y.J. Liang, S.Y. Li and Y. Huang) ... 56
The Design and Implementation of GRPS DTU Based on Rabbit3000 (W. Wang, J.H. Ren and J.G. Ren) ... 60
A Research of CIM Model in Education Fields (W. Cui, B.G. Zhao, Y. Liu, S.Y. Qian, J. Ye, H.F. Yang and Y.J. Li) ... 64
Library Reference System and its Development Trend Analysis in China and Abroad (C. Zhang) ... 70
The Research and Implementation of E-Commerce Secure Payment Protocol (X.J. Ding, X.D. Jiang and Y.Z. Zheng) ... 74
Attribute-Based Access Control of Collaborative Design Systems (T.R. Fan, H.Y. Guo and Y.J. Li) ... 80
The Investment and Management Pattern of Labs in Newly-Built Academies (P.R. Wang) ... 86
A Quick Algorithm for Attribute Reduction Based on Divide and Conquer Method (F. Hu, X. Chen, X.Y. Wang and C.J. Luo) ... 92
Design of a Random Test Platform for DSP Serials Used in Embedded Systems (C.P. Wei, Z.L. Li, H. Liu and Z.X. Chen) ... 98
Far Field Noise Suppression Method in McWiLL Intercom Based on Double UniDirectional Microphone (Y.H. Zhang, L.M. Jia and Z. Li) ... 104
Product Information Model of MEMS (X.T. Yan) ... 109
Research on the Fourth-Party Mobile Payment Model (Y. Xu and X.T. Li) ... 114
Research on Technology of Medical Image Database and its Connection with HIS Database (X.L. Zhang, W.L. Wang and X.Q. Lv) ... 119


The Study on Package Decoration Art Design (C.M. Li and X.Y. Huang) ... 124
Train Mode Research of Software Outsourcing Talents (W. Cui, Y. Liu, Y. Lin, S.Y. Qian, J. Ye, H.F. Yang and Y.J. Li) ... 130
The Study on Metaphor and Interest of Graphic Design (L. Kang and C.M. Li) ... 138
Research on Tourism United Marketing in Turpan Area, Xinjiang, China (Y. Li and H.J. Sun) ... 144
The Research on Color and Text Usage in the Graphic Design (J. Liu and C.M. Li) ... 149
Forest Fire Monitoring System Based on Image Analysis (X.L. Li) ... 155
A Software Project Management Method Based on Trust and Knowledge Sharing (R.F. Tang and X.Y. Huang) ... 160
Application of 3D Animation Technology in Movie Art Design (M. Li, X.Y. Huang and T.J. Zheng) ... 164
Neusoft Software Talent Train Mode Research (W. Cui, S.Y. Qian, Y. Lin, J. Ye, Y. Liu, H.F. Yang and Y.J. Li) ... 170
Japan Software Project Risk Management (W. Cui, J. Ye, Y. Lin, S.Y. Qian, Y. Liu, H.F. Yang and Y.J. Li) ... 175
A Study on Pen-Based Input Operation and Tilt Angle of Tablet (D.X. Bao, X.M. Li, Y.Z. Xin and X.S. Ren) ... 179
Study on Security Management-Oriented Business Process Model (Z.W. Yu and Z.Y. Ji) ... 183
Research into Modeling Methods Basing on Product Presentation Information Base (Z. Wang, S. Hao and H.H. Shi) ... 189
The Application of Requirement Engineering Model in Large Software Development Process (R.F. Tang and X.Y. Huang) ... 193
A Revised BMM and RMM Algorithm of Chinese Automatic Words Segmentation (H.Y. Qu and W. Zhao) ... 199
A Convergent Algorithm for Generalized Linear Complementarity Problem in Engineering Modeling (H.C. Sun) ... 205
Train Optimal Control Strategy on Continuous Change Gradient Steep Downgrades (P. Zhou, H.Z. Xu and M.N. Zhang) ... 211
Developing of Three Degree of Freedoms SCARA Robot (J.T. Shi, D.X. Sun and H.Z. Zhang) ... 217
A Character Experiential Learning System: An Animated Vignette Creating Tool (H.H. Kuo, S.W. Yang and Y.C. Kuo) ... 221
The Role of the Cities in the Western Economic Development (G.J. Zhao, L.J. Jia and T.F. Ma) ... 227
An Analysis of the Strategy of Product Platform (A.H. Wu and T.F. Li) ... 230
Study of Enhancement Technology of Color Image Based on Adaptation and Nonlinear (T.L. Peng, Y.D. Ding and C.J. Zhu) ... 234
A High Precision Large Area Scanner for Ancient Painting and Calligraphy (X.H. Chen and X.F. Shi) ... 241
Application of Community Discovery in SNS Scientific Paper Management Platform (R.X. Ma, G.S. Deng and X. Wang) ... 247
Comparative Analysis of the Major Ontology Library (R.J. Bai, X.Y. Wang and X.F. Yu) ... 253
Interactive Technology Application Program of Experience Learning for Children with Developmental Disabilities (C.Y. Lin, H.H. Lin, Y.H. Jen, L.C. Wang and L.W. Chang) ... 259
The Study on the Quality Management of Supply Chain Production in Operations (L.N. Wang) ... 265


Circuits and Intelligent Systems
Main Converter Fault Diagnosis for Power Locomotive Based on PSO-BP Neural Networks (H.S. Su) ... 271
Research on Software Trustworthiness Level Evaluating Model Based on Layered Idea and its Application (J. Zhang, Y.Q. Yan, Y.C. Sun, G.X. Zhao and J.F. Liu) ... 277
Research on Face Recognition Based on Pulse Coupled Neural Network (X.C. Wang, Y.M. Liu, K.H. Yue and M. Cheng) ... 283
Study on Coordinate Information Generation Method of Interested Area in IMRT Inverse Planning System (Z. Chen and G.L. Li) ... 289
Topology Optimization in the Conceptual Design: Take the Frame of a Bender as Example (Y. Wang, G.N. Zhu and B.Y. Sun) ... 297
An Efficient DoS Attacks Detection Method Based on Data Mining Scheme (X. Chen) ... 302
An Effective Intrusion Detection Model Based on Random Forest and Neural Networks (S.H. Zhong, H.J. Huang and A.B. Chen) ... 308
The Application of Cloud Storage in the Library (Z.X. Wang) ... 314
Path Planning Based on Warehousing Intelligent Inspection Robot in Internet of Things (L.S. Wei, Y. Guo and X.F. Dai) ... 318
A Partial Unit Delaunay Graph with Planar and Spanner for Ad Hoc Wireless Networks (P.F. Xu, Z.G. Chen, X.H. Deng and J.P. Yu) ... 322
Increment Update Algorithms Basing on Semantic Similarity Degree for K-Anonymized Dataset (L.M. Huang, J.W. Liu, Y. Qian, X.S. Liu and J.L. Song) ... 328
Minus Domination Numbers of Directed Graphs (W.S. Li and H.M. Xing) ... 334
Smart Grid Equipments Condition Monitoring Based on Rough Set (X.Y. Zhang, W. Wang and H. Wang) ... 338
A New Type of Algorithm for the Variational Inequalities on Supply Chain Economic Equilibrium Model (L. Wang) ... 344
Error Estimation for a Economic Equilibrium Modeling (L. Wang) ... 350
Application of UKF Algorithm in Airborne Single Observer Passive Location (L.B. Qiu, H.Y. Su, H. Hu, S.L. Huang, J. Wang and T.J. Li) ... 356
Study on the Virtual Signal Production Based on the PCI (T.J. Li, J.C. Ren, Y.M. Liu, H. Zhang and X.J. Zhang) ... 363
Improved Subdivision Based Halftoning Algorithm (D.X. Wang and S. Chen) ... 368
Dynamic Modeling and Simulation on Ordering Strategy of Five-Stage Supply Chain (Y.Z. Li and X.D. Zhang) ... 372
An ETL Framework Based on Data Reorganization for the Chinese Style Cross-Tab (S.D. Zhang, Q.H. Zhang and J.B. Yang) ... 377
Slip Pridiction Based Path Planning for Planetary Rovers (L.F. Zhou) ... 382
A Document Feature Extraction Method Based on Concept-Word List (Z.Y. Zhu, J. He, S.J. Dong and C.L. Yu) ... 386
An Overview of P2P Search Algorithms (J.F. Yan and S.H. Tao) ... 393
Application and Research of Data Acquisition Based on Database Technology of LabVIEW (B. Hu, X.J. Liu and S. Li) ... 398
The Filling Algorithm for Scanning Based on the Chain Structure (W.Q. Wang) ... 404


A Computer Software on Diffusion in Solid (S.Y. Luo, W.C. Xu, L.X. Huo, X.L. Zhang and J.Y. Zhang) ... 410
An Algorithm to Eliminate Stochastic Jump Measurements of Ultrasonic Flow-Meter with Time Difference Method (Q. Liu, R.D. Wang, Y. Zhu and C.T. Du) ... 414
Numerical Method of Hybrid Stochastic Functional Differential Equations with the Local Lipschitz Coefficients (H. Yang, F. Jiang and J.H. Hu) ... 422
A Discrete Data Fitting Models Fusing Genetic Algorithm (T.R. Fan, Y.B. Zhao and L. Wang) ... 427
A Novel Method to Calculate Frequency Control Word of Direct Digital Synthesizer (J. Guo, J. Zhu, J. Liu and L. Zhou) ... 433
Stability Analysis for Local Transductive Regression Algorithms (W. Gao, Y.G. Zhang and L. Liang) ... 438
Data Stream Clustering Algorithm Based on Affinity Propagation and Density (Y. Li and B.H. Tan) ... 444
New Criteria on Impulsive Stabilization and Synchronization of Delayed Unified Chaotic Systems with Uncertainty (Y.Q. Chen) ... 450
Generalization Bounds for Certain Class of Ranking Algorithm (W. Gao and Y.G. Zhang) ... 456
A New Method to Detect P-Wave Based on Quadratic Function (N.Q. Zhou) ... 462
Stock Price Index Prediction Based on Improved SVM (J.Y. Shi, X. Li and Y.X. Li) ... 468
Application of Teleoperation for Rehabilitation Training Robot (X.B. Guo and Y. Zhai) ... 472
Collision Detection Based on Surface Simplification and Particle Swam Optimization (W. Zhao and F. Li) ... 476
Application of Wireless Sensor Network for M2M in Precision Fruits (L.G. Fang, Z.B. Liu, H.L. Li, C.D. Gu and M.L. Dai) ... 482
Computational Method Research of the Salvo Catching Probability for Wake-Homing Torpedo Based on Compendium Preparation (T. Jiang, D.X. Wu and L. Rui) ... 488
Error Estimates of H1-Galerkin Expanded Mixed Finite Element Methods for Heat Problems (H.T. Che, M.X. Li and L.J. Liu) ... 493
Anonoymizing Methods against Republication of Incremental Numerical Sensitive Data (X.L. Zhang, J. Yu, Y.S. Tan and L.X. Liu) ... 499
Error Estimates of H1-Galerkin Mixed Finite Element Methods for Nonlinear Parabolic Problem (H.T. Che) ... 504
A Conceptual Framework for Animation Design Based on E-Learning System (C. Gang and X.Y. Huang) ... 510
Modeling Microfibril Angle of Larch Using Linear Mixed-Effects Models (Y.X. Li and L.C. Jiang) ... 516
Using UML as Front-End for PLC Program Design (C.M. Zhang, Z.H. Fang, C.M. Wang and J.F. Ni) ... 521
M & A Impact in China and its Norms (W. Cui, Y. Liu, Y. Chen, S.Y. Qian, J. Ye, H.F. Yang and Y.J. Li) ... 525
A Detection Method for Overlapped Spikes (J. Qi, M. Dai, G. Zheng and T.T. Liu) ... 530
The Design of Motor Parameter Test System Based on WAP (K.Z. Tong, Y.F. Zheng and T.R. Fan) ... 536
An Evaluated Priority Based P2P Streaming Media Scheduling Algorithm (Y.J. Song, R.C. Tang and G. Xu) ... 543
A Flexible Query Answering Approach for Autonomous Web Databases (X.F. Meng, X.Y. Zhang and X.X. Li) ... 549


Study on the Interference Ratio of Right-Turning Vehicles at Signalized Intersection under Mixed Traffic Environment (S.S. Lee, D.L. Qian, D.M. Lin and Z.Y. Peng) ... 555
A New Acoustic Emission Source Location Method Based on the Linear Layout of Sensors (Z.N. Zhang and J. Tian) ... 561
Comfort and Energy-Saving Intelligent Shutter (S. Chen and D.X. Wang) ... 565
Image Registration Based on MI and PSO Algorithm (J.H. Xu, J. Li, Y.W. Wang, H. Liang, D.C. Tian, N. Zhang, Z.Y. Wang and W. Cong) ... 569
Obstacle Avoidance Optimization of Joint Robot Based on Partial PSO Algorithm (J.W. Zhao, Y.Q. Li and G.Q. Chen) ... 574
Earthwork Actuarial Software System Design and Development (Q.M. Su, J.M. Wang and J.J. Guo) ... 578
Application of Multi-Fractal Spectrum to Analysis the Vibration Signal of Power Transformer for the Detection of Winding Deformation (F.H. Wang, J. Zhang, Z.J. Jin and Q. Li) ... 584
The Networking Integrated Character Education (NICE) Project: An Experimental Study (H.H. Kuo, S.W. Yang and Y.C. Kuo) ... 590
Discuss the Design Method of Topology Discovery System Based on SNMP (S.J. Xue) ... 594
Consistent Extension of Dynamic Knowledge Updating in the Multi-Agent System (M.H. Wu) ... 599
Upgrading Water Distribution System Based on GA-RBF Neural Network Model (H.X. Wang and W.X. Guo) ... 605
Application of Artificial Neural Network Model Based on Improved PSO in Water Supply Systems (H.X. Wang and W.X. Guo) ... 609
Adaptive Control of Vertical Stage of a Robot Arm for Wafer Handling (C. Zhang, G.Z. Zhao and F.P.M. Dullens) ... 614
Research on Computer Virtual Experiment System (S.C. Ding) ... 620
Synthesis and Analysis of the Handheld Computer Power Consumption (A. Mahmoudi, K. Monfaredi, H.F. Baghtash and A. Bahrami) ... 626
An Improved Differential Evolution and its Application in Function Optimization Problem (J.F. Yan and C.F. Guo) ... 632
Network Security Situation Awareness Model-Inspired by Immune (Y.X. Luo, M.H. Zhao, Q.Y. Zhang and A. Zou) ... 635
Metabolic Algorithm for Software Requirement Engineering (V. Pavanasam and C. Subramaniam) ... 639
An Effective Heuristic DSR (R.X. Ma, G.S. Deng and X. Wang) ... 645
The Application of Trellis Coded Modulation in Cooperative Communication Systems (G.C. Ren) ... 652
The Application of Data Mining in GIS (C.C. Fu and N. Zhang) ... 658
The Aumr Tiger's Individual Identification Based on the Tiger Fur's Texture Characteristic (D.W. Qi, Y.Y. Zhou and X. Yang) ... 662
A Video Streaming Application Layer Multicast Protocol for WLAN (Y. Cheng, L. Wen, L. Yang, W. Wang, Y. Zhou and F. Wang) ... 668
Takeoff Analysis and Simulation of a Small Scaled UAV with a Rocket Booster (B. Liu, Z. Fang, P. Li and C.C. Hao) ... 674
Study on the Integrable Properties of Two Coupled KdV Equations (L.Y. Zhang, L.M. Cheng, W. Yuan and R.X. Yao) ... 683
Research of Task Scheduling Algorithm Based on Parallel Computing (Y.J. Liu, X.M. He, D. Feng and Y. Fang) ... 693
Research of Digital Watermark Algorithm Based on Vector Graphics (Y.J. Liu, X.M. He, D. Feng and Y. Fang) ... 699


Application Research of an Ultrasonic Ranging System Based on Educational Robot (B.B. Shi, M.L. Dai and Z.F. Hu) ... 704
Application Research of Automatic Guided Vehicle System Based on LIN Bus (B.B. Shi) ... 710
Piezoelectrically Transduced Resonators with Robust Resonance Frequency (Z.Z. Wu) ... 715
The Study of Invasion Examination Algorithm Based on Improvement Fuzzy C Means (K. Chen and W.D. Ke) ... 720
Measurement Study of IPv6 Users on Private BT (N.X. Ao and C.J. Chen) ... 726
Research on Decision Tree Algorithm Based on Information Entropy (M. Du, S.M. Wang and G. Gong) ... 732
Topological Optimization Based on Topologically-Critical Nodes in Unstructured P2P Network (W. Fan, D.F. Ye, M.X. Yang and L. Zhang) ... 738
An Algorithm Based on SURF for Surveillance Video Mosaicing (K.Y. Guo, S. Ye, H.M. Jiang, C.Y. Zhang and K. Han) ... 746
Optimization for Order-Picking Path of Carousel in AS/RS Based on Improving Particle Swarm Optimization Approach (W. Yang, X.L. Li, H.G. Wang and Y.X. Du) ... 752
Modeling of Proportional Integral Derivative Neural Networks Based on Quantum Computation (D.X. Nan, Y.S. Zhang and X.Q. Sun) ... 757
Study on Multichannel Speech Enhancement Technology in Voice Human-Computer Interaction (J.X. Lu, P. Wang, H.Z. Shi and X. Wang) ... 762
A New Orthogonal Projected Natural Gradient BSS Algorithm with a Dynamically Changing Source Number under Over-Determined Mode (P. Wang, J.X. Lu, X. Wang and H.Z. Shi) ... 768
The Research of WebGIS-Based Data Mining (C.C. Fu) ... 774
The License Plate Recognition Technology Based on Digital Image Processing (J.H. Zhu, A. Wu and J.F. Zhu) ... 778
A Modified TopDisc Algorithm in WSN (Z.G. Du and D.H. Hu) ... 783
A Collaborative Filtering Recommendation Algorithm Based on User Clustering in E-Commerce Personalized Systems (G.H. Cheng) ... 789
Study on Contact Characteristics of Free Rolling Radial Tire (G. Cheng and W.D. Wang) ... 794
Response Surface Methodology for the Optimization of Spicy Black Beans (Y.J. Sun, H.Y. Gao, Y. Wang and L. Sun) ... 800
A Model of Semantic Development Chain for Multidisciplinary Complex Electronic Equipments (H.G. Zhou, W.C. Tang, X.W. Jing and X.J. Zhao) ... 805
Research and Implementation of XML Keyword Search Algorithm Based on Semantic Relatives (M.Y. Shen, X. Li and X.F. Meng) ... 811
Empirical Study of Web-Based Instruction in Public P.E. Courses of Chinese Universities (M.L. Li, H.J. Peng and X.K. Zhang) ... 816
A Semantics Based Routing Scheme for Content-Based Networking (G.D. Zheng and M. Chen) ... 821
Fast Multi-Layer 3D Reconstruction Algorithm (S.G. Liang, O.Y. Yi and H. Wang) ... 827
Digital Signature Technology Research of Distance Education Network Security Authentication (W.S. Wei, X.X. Meng and H.H. Li) ... 831


Research on Three-Dimensional Methods to Psychology Skills Training of Badminton Players in College (Y. Yang) ... 837
Fuzzy Comprehensive Evaluation of the Garment CAD (X.X. Yang, X.P. Zhang and M. Yan) ... 843
Rescue Robot Navigation in Grid Computing Environment (W. Wang, H.Y. Wang, S.J. Jia and S.M. Wei) ... 848
Application of Fuzzy Logical in IDS (P.F. Wang, S. Meng and J.C. Wang) ... 852
Fast Algorithm of the Traveltime Calculation Based on Binomial Heap Sorts (J. Wang) ... 857
Vehicle Access Intelligent Monitoring and Network Management System Based on RFID Technology (Q.S. Zhu) ... 862
A Visual-Thermal Image Sequence Registration Method Based on Motion Status Statistic Feature Multi-Resolution Analysis (X.W. Zhang, Y.N. Zhang and J. Zhao) ... 867
An Image Mosaic Algorithm Taking into Account Speed and Robustness (L.J. Zhang, J.H. Yang and X.K. Wang) ... 873
Design of an E-Learning Based on Service Software Bus (S.X. Chen and M.L. Pen) ... 879
The Visual Quality Recognition of Nonwovens Using a Novel Wavelet Based Contourlet Transform (J.L. Liu and B.Q. Zuo) ... 884
Study and Implementation of the Vehicle Control Instruments (W.H. Li) ... 890
Design and Realization of Excellent Course Release Platform Based on Template Engines Technology (C.Y. Zhou and L.J. Huang) ... 895
ZPD Incidence Development Strategy for Demand of Internet in Business-Teaching of CRHEIs (H. Liu) ... 900
Object Manipulation Control Strategy Analysis Center of Humanoid Robot (Q.J. Du, L.P. Li and B. Dai) ... 904
An Electronic Commerce Recommender System Based on Product Character (S.B. Chen) ... 909
Study of Personalized Information Filtering System Based on Multi-Agent (S.J. Gong) ... 913
ZPD Incidence Development Strategy for Demand of Internet in Life-Science Teaching of Comprehensive Regional Higher Education Institutes (Y.M. Li) ... 918
Development Strategy for Demand of ICT in SMEs of PRC (Y.M. Li) ... 922
ZPD Incidence Development Strategy for Demand of Information Technology in Engineering Teaching: Comparing Teachers Belong to Different University (Y.H. Chen) ... 926
A New Incremental Updating Algorithm for Core Based on Simplified Discernibility Matrix (C.S. Zhang) ... 931
Wireless Sensor Network Summary (B.J. Zhao and C.J. Shi) ... 937
The Design and Simulation of Electric-Vehicle Charger Control Strategy (H.B. Sun and G. Zhao) ... 941
Local Search Optimization Immune Algorithm Based on Job Insert Method for HFSS (Z.F. Liu, P. Qiao, W.T. Yang and J.H. Wang) ... 947
A New Content Management System Based on ASP.NET (L.J. Huang, C.Y. Zhou and H.Z. Wu) ... 953
Information Fake in Supply Chains (J. Hong) ... 958


Modeling of SVM Diode Clamping Three-Level Inverter Connected to Grid (Y.G. Guo, P. Zeng, J.Q. Zhu, L.J. Li, W.L. Deng and F. Blaabjerg) ... 963
Fast Restoration Algorithm for Rotational Motion Blurred Images (J.Y. Zhou, Y.T. Yang and Y.L. Wu) ... 969
Index Tracking Method Based on the Neural Networks and its Empirical Study (J.F. Li and Y.S. Su) ... 974
Research on the Evolution of Software Development Ideas and Development Method (Y. Gao, X.Q. Yao and T.J. Li) ... 979
An Artificial Neural Network Approach for Short-Term Electric Prices Forecasting (M.T. Tsai and C.H. Chen) ... 985
Prediction of Lignin Content of Manchurian Walnut by BP Neural Network and Near-Infrared Spectroscopy (Z.H. Qu and L.H. Wang) ... 991
Hill Climbing Algorithm for License Plate Recognition (Y.C. Yu, S.C.D. You and D.R. Tsai) ... 995
The Design of Electronic Code Lock (S.T. Sun, A.G. Tian and D.C. Zhuang) ... 1001
Study on Piezoelectric Stacked Generator (W. Lin, Z. Li, W. Chen and J. Zhou) ... 1005
Study on Image Search Engine Based on Color Feature Algorithm (X.Y. Huang and W.W. Chen) ... 1010
Context-Aware Vertical Handover Management Architecture with QoS Provision in Heterogeneous Wireless Networks (C.H. Yu, W.Q. Xu, Y.M. Wang, L.G. Liu and Y.H. Zhang) ... 1014
Study of Rural Small Hydropower Development and Reform Strategy (X.J. Al and L. Bai) ... 1020
The Data Acquisition, Process and Reconstruction for a Reverse Engineering Study (D.M. Yu, B.Z. Qu, Z.H. Gao and D. Wang) ... 1027
Enhanced Multimedia IPTV Packet Transmission over IPv6 Based Wireless Network (S.G. Kim and B.J. Park) ... 1032
Efficient Mobile Anchor Point Messaging Assumption in IPv6 Based Wireless Mobile Networks (B.J. Park, F.A. Alisherov and B.Y. Chang) ... 1038
Office System Design and Implementation Based on B/S (J. Li, W.Y. Tu, J. Zhou, X. Wang and W. Zhang) ... 1044
Application of an Improved Attribute Reduction Algorithm in Diabetic Complication Diagnosis (J. Li, B. Zhou, X. Wang, W. Zhang and J. Wen) ... 1048
Application of Rough Set Feature Selection in the Hazard Assessment of Debris Flow (J. Li, J.Y. Chang, X. Wang and X.B. Wu) ... 1051
A Semantics Based Routing Scheme for Cloud Computing (W.C. Tang and J. Xie) ... 1054
A Passive Circuit Based RF Optimization Methodology for Wireless Sensor Network Nodes (L.Q. Zheng, A. Mathewson, B. O'Flynn, M. Hayes and C. O'Mathuna) ... 1059
A Co-Training Based Semi-Supervised Human Action Recognition Algorithm (H.J. Yuan, C.R. Wang and J. Liu) ... 1065
Intelligent Expert System of Garment Workstage Synchronizing (X.P. Zhang and M.H. Zhao) ... 1071
The Application of Fractal Theory in Graphics Design (L. Kang and C.M. Li) ... 1075

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.1

Lifelong E-Learning and Individual Characteristics: The Role of Gender, Age, Career and Prior Experience

Hsiu-Li Liao a, Su-Houn Liu b, You-Jie Chou c

Department of Information Management, Chung Yuan Christian University, Taoyuan 32023, Taiwan R.O.C.

a [email protected], b [email protected], c [email protected]

Key words: Lifelong learning; TAM; TPB; gender; prior experience

Abstract. Lifelong learning is a term recognizing that learning is not confined to childhood or the classroom, but takes place throughout life and in a range of situations. Compared to the continuous growth of the e-learning market for the lifelong learning of adults, relatively few studies are available on the learning behaviors of these learners on e-learning websites. In this study, TAM and TPB theory were integrated and employed to examine the relationships between course or system constructs and perception constructs. The degree of learners' perceived interaction with others does not influence perceived course flexibility, ease of use, or behavioral control. Learners' gender and career do not influence any construct. Younger learners perceived that they can interact with teachers and other learners, and learn more, on the e-learning website. The e-learning experience of learners is significantly associated with system functionality, system response, and perceived behavioral control.

Introduction

Lifelong learning is a term recognizing that learning is not confined to childhood or the classroom, but takes place throughout life and in a range of situations. One of the most convenient delivery formats for lifelong learning is e-learning [10]. The US market for self-paced e-learning is projected to grow to $23.8 billion by 2014, according to a new report by Ambient Insight; the demand for online education products in America is growing by 7.4% [1]. Business spending on e-learning is expected to reach approximately $19.6 billion by 2010, according to IDC [11]. However, compared to the continuous growth of the e-learning market for the lifelong learning of adults, relatively few studies are available on the learning behaviors of these learners on e-learning websites. A rich body of research has presented e-learning's potential for increasing learning effectiveness and learner satisfaction [8,13]. Previous research done under different task environments has suggested a variety of factors affecting user satisfaction with e-learning. The factors affecting students' satisfaction with e-learning can be categorized into six dimensions: student, teacher, course, technology, system design, and environment [7,12]. In a preliminary study, after interviewing 40 lifelong learners, the researchers identified that the student, teacher, course, and technology dimensions, including course flexibility, course quality, perceived interaction with others, system functionality, system response, perceived usefulness, perceived ease of use and


perceived behavioral control, are the major factors that influence the continued use of e-learning for these learners [3]. In research on technology adoption, the technology acceptance model (TAM) has received considerable attention. This model proposes two key beliefs in the adoption of technology: perceived usefulness (PU) and perceived ease of use (PEU). Perceived ease of use and perceived usefulness have played important roles in e-learning adoption decisions [14]. Some influential theories from other areas, such as the theory of planned behavior (TPB), have been borrowed into studies of technology adoption. TPB is an extension of the theory of reasoned action, made necessary by the original model's limitations in dealing with behaviors over which people have incomplete volitional control [5]. Perceived behavioral control (PBC) refers to one's perceived ease of performing a behavior, taking into account personal resources (abilities, skills and knowledge) and situational variables (obstacles and opportunities) to predict behavior directly [6]. For example, when conducting lifelong learning, learners may need not only more resources (time, information, etc.), but also more self-confidence in making a proper decision. Therefore, this study integrates PBC into the TAM model to predict lifelong learners' continued use of an e-learning website. The hypotheses addressing lifelong e-learning behavior are stated below.

H1: Learners of different genders differ significantly in the course, system, and perception constructs.
H2: Learners of different ages differ significantly in the course, system, and perception constructs.
H3: Learners with different e-learning experiences differ significantly in the course, system, and perception constructs.
H4: Learners with different careers differ significantly in the course, system, and perception constructs.

Method

Characteristics of the Sample and Study Context. To test the research model, users from the SME Online University (http://www.smelearning.org.tw) in Taiwan were chosen as representative e-learning users for lifelong learning. The SME Online University has been recognized as the first e-learning website developed for small and medium enterprises (SME) in Asia and one of the biggest in the world. There are now more than 800 free online courses in five major categories. Today, the SME Online University has served over 300,000 SME employers and employees around the world. 500 subjects were randomly selected from the members of SME Online University. A total of 178 surveys were voluntarily completed, resulting in a response rate of 35.6%. The age range of the sample was 20-50 years old (Table 1). Of the 178 respondents, 98 were males (55%) and 80 were females (45%).
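As a concrete illustration of how group-difference hypotheses such as H1-H4 can be tested, the sketch below runs an independent-samples t-test for gender and a one-way ANOVA for the three age groups on each construct score. This is a minimal sketch, not the authors' actual analysis; the file name and the column names (`gender`, `age_group`, and the construct score columns) are hypothetical.

```python
# Minimal sketch of the H1-H4 group-difference tests (hypothetical column names).
import pandas as pd
from scipy import stats

df = pd.read_csv("survey.csv")  # one row per respondent (hypothetical file)
constructs = ["CF", "CQ", "PIO", "SF", "SR", "PU", "PE", "PBC", "INT"]

for c in constructs:
    # H1 (gender, two groups): independent-samples t-test.
    male = df.loc[df["gender"] == "M", c]
    female = df.loc[df["gender"] == "F", c]
    t, p_gender = stats.ttest_ind(male, female, equal_var=False)

    # H2 (age, three groups): one-way ANOVA across the age bands.
    groups = [g[c].values for _, g in df.groupby("age_group")]
    f, p_age = stats.f_oneway(*groups)

    print(f"{c}: gender p={p_gender:.3f}, age p={p_age:.3f}")
```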


Table 1. Subject demographics (n = 178)

Measure and items             Frequency   Percentage
Gender
  Male                        98          55%
  Female                      80          45%
Age
  20-30                       66          37%
  31-40                       69          39%
  41-50                       43          24%
Prior e-learning experience
  < 1 year                    109         61%
  1~2 years                   31          17%
  2~3 years                   17          10%
  3~4 years                   12          7%
  > 4 years                   9           5%

Analysis and Results

The alpha level of the sample indicates a reasonable level of reliability (α > 0.70) [9], revealing adequate internal consistency (Table 2). Table 3 shows each variable's square root of AVE and the intercorrelations. Convergent validity of the instrument is appropriate when the constructs have an average variance extracted (AVE) of at least 0.5 [2]. All item loadings on each construct are larger than that construct's cross-loadings with all other constructs in the model. Hence, the convergent validity and discriminant validity of the research model were adequate.

Table 2. Construct Means, Standard Deviations, and Reliabilities

Construct                                    Number of Items   Mean    Standard Deviation   Cronbach Alpha   AVE
1. Course Flexibility (CF)                   3                 5.742   1.589                0.794            0.708
2. Course Quality (CQ)                       3                 5.146   1.624                0.819            0.734
3. Perceived Interaction with Others (PIO)   2                 3.905   2.284                0.879            0.891
4. System Functionality (SF)                 3                 5.670   1.438                0.872            0.798
5. System Response (SR)                      3                 4.959   1.766                0.868            0.865
6. Perceived Usefulness (PU)                 4                 5.650   1.409                0.948            0.851
7. Perceived Ease of Use (PE)                4                 5.772   1.191                0.940            0.801
8. Perceived Behavioral Control (PBC)        3                 5.749   1.367                0.929            0.863
9. Intention of Continued Use (INT)          3                 5.908   1.323                0.862            0.759
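For reference, the two reliability statistics reported in Table 2 can be computed as follows. This is a minimal sketch assuming the item responses for one construct form the columns of a matrix, and, for AVE, that standardized factor loadings are already available; the example data are hypothetical.

```python
# Minimal sketch: Cronbach's alpha from item scores, AVE from factor loadings.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of scores for one construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return k / (k - 1) * (1 - item_vars / total_var)

def ave(loadings: np.ndarray) -> float:
    """loadings: standardized factor loadings of the construct's items."""
    return float(np.mean(loadings ** 2))

# Hypothetical example: four respondents, three items of one construct.
scores = np.array([[5, 6, 5], [4, 4, 5], [6, 7, 6], [3, 4, 3]], dtype=float)
print(cronbach_alpha(scores), ave(np.array([0.84, 0.86, 0.82])))
```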


Table 3. Correlations and Average Variance Extracted (AVE)

       CF       CQ       PIO      SF       SR       PU       PE       PBC      INT
CF     0.841
CQ     0.581**  0.857
PIO    0.095    0.160*   0.944
SF     0.596**  0.794**  0.078    0.893
SR     0.558**  0.555**  0.232**  0.589**  0.930
PU     0.543**  0.758**  0.191*   0.779**  0.633**  0.922
PE     0.611**  0.563**  0.145    0.603**  0.651**  0.650**  0.895
PBC    0.554**  0.588**  0.074    0.635**  0.568**  0.663**  0.805**  0.929
INT    0.589**  0.651**  0.272**  0.618**  0.463**  0.669**  0.664**  0.621*   0.871

Diagonal bolded elements are the square root of AVE. ** p < 0.01, * p < 0.05.

3.  if d_l > max(D^m) or b_l < b^m
4.      then G^p ← G^p − l   // select candidate resources
5.  end for
6.  Decompose the MSON construction requirement G^m into the set of path requirements R^m = Σ_{j=1..m} (s, t_j, R_j^m)
7.  for each path requirement (s, t_j, R_j^m) do
8.      use the Shortest Path Algorithm to calculate the minimum delay path P_{s,t_j}^{min d} from s to t_j
9.      if d(P_{s,t_j}^{min d}) > D_{t_j}^m
10.         do G^s ← NULL and jump to Step 19
11.     else do P_{s,t_j}^{min δ} ← NULL, (δ)_{P_{s,t_j}^{min δ}} ← ∞, (ω_l) ← 0
12.     find a set of unmarked paths, denoted S, between s and t_j satisfying the delay and bandwidth restrictions; if there is no such path, jump to Step 19
13.     for each path P_{s,t_j} ∈ S
14.         do map P_{s,t_j} onto G^p and calculate (δ)_{P_{s,t_j}} = Σ_{l ∈ P_{s,t_j}} [c_l b_l^m + β(1 − U_l / (U_l^max + ε))]
15.         if (δ)_{P_{s,t_j}} < (δ)_{P_{s,t_j}^{min δ}}
16.         do P_{s,t_j}^{min δ} ← P_{s,t_j}, (δ)_{P_{s,t_j}^{min δ}} ← (δ)_{P_{s,t_j}}, and mark P_{s,t_j}
        end for
        if P_{s,t_j}^{min δ} != NULL
        do G^s ← G^s + P_{s,t_j}^{min δ} and update the remaining capacity of the links on path P_{s,t_j}^{min δ}
17.     else do G^s ← NULL, jump to Step 19
18. end for
19. if G^s != NULL return G^s, else return FAIL


The greedy algorithm works as follows. Several underlay links of the physical network cannot fulfill the demands of the MSON construction requirement, e.g., the link delay is greater than the delay restriction or the link bandwidth is smaller than the bandwidth demand. Therefore, in steps 2 to 5, we first delete these links before executing the MSON construction operations, in order to reduce the complexity of the algorithm. In steps 8 to 10, we use the shortest path algorithm to calculate the minimal delay from the source node s to each target node t_j, to estimate whether a delay-bounded multicast tree exists; if not, the algorithm fails to find such an MSON topology. We then find the path mapping that minimizes (δ)_{P_{s,t_j}} in step 13. Over steps 7 to 18, we find the most suitable nodes and links, from the source node to the target nodes, and add them to G^s.
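To make the per-target selection rule of steps 13-16 concrete, the sketch below scores each candidate path by a cost that combines link cost with a load-balance penalty and keeps the cheapest feasible path. It is a minimal sketch under stated assumptions, not the authors' implementation: the exact form of the cost (δ) is reconstructed from the garbled listing, the edge attribute names (`delay`, `bw`, `cost`, `util`, `util_max`) are hypothetical, the candidate set S is approximated by the k shortest simple paths, and β and ε are the tuning constants from the paper.

```python
# Sketch of the path selection in steps 13-16, assuming a networkx graph whose
# edges carry 'delay', 'bw' (residual bandwidth), 'cost', 'util' and 'util_max'.
import itertools
import networkx as nx

def pick_path(G, s, t, bw_demand, delay_bound, beta=1.0, eps=1e-6, k=10):
    best_path, best_score = None, float("inf")
    # Enumerate candidate paths in order of increasing delay (at most k of them).
    candidates = itertools.islice(
        nx.shortest_simple_paths(G, s, t, weight="delay"), k)
    for path in candidates:
        edges = list(zip(path, path[1:]))
        if sum(G[u][v]["delay"] for u, v in edges) > delay_bound:
            break  # later candidates only have larger delay
        if any(G[u][v]["bw"] < bw_demand for u, v in edges):
            continue  # bandwidth restriction violated
        # Link cost plus a load-balance penalty term (reconstruction of (δ)).
        score = sum(G[u][v]["cost"] * bw_demand +
                    beta * (1 - G[u][v]["util"] / (G[u][v]["util_max"] + eps))
                    for u, v in edges)
        if score < best_score:
            best_path, best_score = path, score
    return best_path
```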

SIMULATION STUDY

In this section, we provide experimental results to evaluate the efficiency of the heuristic greedy MSON construction algorithm. The experiments are conducted on an IBM System x3200 server with one 2.4 GHz Intel Xeon X3430 CPU and 2 GB RAM. We use a tool called BRITE [6] to generate a physical network topology randomly. The number of nodes in the network is 100, and the connectivity probability of each pair of nodes is 0.02, 0.04 and 0.06 for the three scenarios. The delay of each link is taken as the Euclidean distance between the two nodes of the link. The bandwidth of each link is uniformly distributed between 50 and 100. The arrival of MSON construction requests follows a Poisson process whose time unit is 100 and λ = 5. Each MSON lifetime follows the exponential distribution with θ = 100. For each MSON request, we select the source node and the set of target nodes randomly; the number of target nodes is uniformly distributed between 2 and 10, and the bandwidth requirement is uniformly distributed between 10 and 50. We compare the number of congestion links of BLMH and k-MST [2]. A congestion link is defined as a link whose residual bandwidth cannot meet the MSON bandwidth demand.
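The request model just described (Poisson arrivals with λ = 5 per 100 time units, exponentially distributed lifetimes with mean θ = 100, and uniformly distributed target counts and bandwidth demands) can be reproduced in a few lines. This is a sketch of the workload generator only, not of the BRITE topology generation or the full emulation.

```python
# Sketch of the MSON request workload used in the simulation section.
import random

random.seed(0)
LAMBDA = 5 / 100.0  # arrival rate per time unit (5 requests per 100 units)
THETA = 100.0       # mean MSON lifetime

def generate_requests(horizon, nodes):
    t, requests = 0.0, []
    while t < horizon:
        t += random.expovariate(LAMBDA)           # Poisson arrival process
        lifetime = random.expovariate(1 / THETA)  # exponential lifetime
        source = random.choice(nodes)
        targets = random.sample([n for n in nodes if n != source],
                                random.randint(2, 10))
        bandwidth = random.uniform(10, 50)
        requests.append((t, t + lifetime, source, targets, bandwidth))
    return requests

reqs = generate_requests(2000, list(range(100)))  # 100-node topology
```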

Figure 3: Contrast of congestion link number over time for BLMH and k-MST; panels (a), (b) and (c) correspond to number of neighbors N = 2, 4 and 6.

From the simulation results we can see that, as the emulation proceeds, existing MSON topologies release network resources when their lifetimes expire, so the congestion link number reaches a steady state in the long run. In particular, when network resources are scarce, as depicted in Fig. 3(a) with N = 2, BLMH utilizes resources more efficiently: its congestion link number is much lower than that of k-MST. Intuitively, we conclude that the overall performance of BLMH is better than k-MST because we sufficiently consider the load balance of the physical network and decrease the probability of generating bottleneck links.


CONCLUSIONS

In this paper we have formally defined the MSON construction problem and set up an ILP model whose objective is to find a virtual topology on top of the physical network fulfilling all restrictions, such that the construction cost is minimized while the residual physical network is most balanced. To address the problem efficiently, we developed a greedy algorithm, BLMH. The efficiency of the construction method is verified by emulation experiments, measured by the congestion link number in different scenarios. Based on our simulation and analysis results, we draw the following conclusion for MSON design: the overall performance of BLMH is better than that of k-MST because we sufficiently consider the balance of the physical network.

ACKNOWLEDGEMENT

We would like to thank the anonymous reviewers for their comments and suggestions. This work is supported by the National High Technology Research and Development Program (863 Program) of China under grants No. 2008AA01A323, No. 2008AA01A326 and No. 2009AA01A334. We would like to thank our colleagues in both projects for many fruitful discussions.

REFERENCES

[1] Z. Duan, Z.-L. Zhang and Y.T. Hou: Service overlay networks: SLAs, QoS, and bandwidth provisioning. IEEE/ACM Transactions on Networking, 11(6): 870-883 (2003).
[2] L. Lao, J.-H. Cui and M. Gerla: Multicast service overlay design. In Proceedings of the International Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS'05), Philadelphia, PA, USA (2005).
[3] S. Shi and J. Turner: Multicast routing and bandwidth dimensioning in overlay networks. IEEE Journal on Selected Areas in Communications, 20(8): 1444-1455 (2002).
[4] L. Lao, J.-H. Cui and M. Gerla: TOMA: A viable solution for large-scale multicast service support. In Proceedings of IFIP Networking (2005).
[5] Y. Zhu, B. Li and K. Pu: Dynamic multicast in overlay networks with linear capacity constraints. IEEE Transactions on Parallel and Distributed Systems, 20: 925-939 (2009).
[6] A. Medina, A. Lakhina, I. Matta and J. Byers: BRITE: Universal topology generation from a user's perspective. Tech. Rep. 2001-003, Computer Science Department, Boston University (2001).

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.13

Information Visualization for Decision Support Systems on Commerce

Shidong Yu a, Hang Li b and Chen Wang c

College of Software, Shenyang Normal University, China

a [email protected], b [email protected], c [email protected]

Key words: Information visualization, User interface, CDSS, Knowledge base, Fuzzy query

Abstract. Many users benefit from decision support systems (DSS), but sometimes they cannot readily comprehend the nature or meaning of the outcome from a DSS. In general, interpretation of data is much more intuitive if the results from the DSS are translated into charts, maps, and other graphical displays, because visualization exploits our natural ability to recognize and understand visual patterns. In this paper we discuss the concept of a visualization user interface (VUI) for decision support systems on commerce (CDSS). An information visualization model for CDSS is proposed, which consists of three elements. In addition, a visualized information retrieval engine based on fuzzy control is proposed.

Introduction

In modern decision support systems (DSS), an increasingly large percentage of the total design effort is devoted to the user interface, that portion of the software system concerned with providing the means for a human user to interact with the system's application software. The structure of the user interface therefore has a major impact on the quality of the whole system [1-4]. To work efficiently with DSS, most users benefit from a "representation conversion", i.e., translating the specific DSS alphanumeric results into the universal language of the visual. In general, interpretation of data is much more intuitive if the results from the DSS are translated into charts, maps, and other graphical displays, because visualization exploits our natural ability to quickly recognize and understand visual patterns. For instance, macro-economic DSS users have a better grasp of the momentum of the national industrial structure after seeing a bar chart moving up or down dynamically. Visualization divides roughly into two areas, depending on whether physical data is involved. Scientific visualization focuses primarily on physical data such as the human body, the earth, molecules, and so on. Information visualization focuses on abstract, nonphysical data such as text, hierarchies, and statistical data. Information visualization enhances the human ability of knowledge acquisition and cognition [5-8]. This can also be applied to decision support systems on commerce (CDSS). Information visualization exploits the natural human ability to recognize and understand visual patterns.

Visualized User Interface (VUI) for CDSS

In the mid 1980s, the user interface for CDSS consisted primarily of a mouse-driven, multi-window interface such as that found in Apple's Macintosh computers, Microsoft's Windows system and X Window based systems. The Visualized User Interface for CDSS (VUI-CDSS) is the next step in the evolution of CDSS user interfaces. The goal of information visualization was mainly to provide suitable methods and instruments to explore and depict data and information through graphical representation. Information visualization takes advantage of the fact that visual representations can serve as powerful "vehicles of thinking" that help us extract useful information from complex and/or voluminous data sets [9]. It also provides processes for manipulating the data set and seeing what may have previously been invisible, thereby enriching existing investigation methods. However, the concept of information visualization is no


longer limited to the graphical display of data but now encompasses a much broader spectrum, including the design of the graphical interface used to input and access that data, in addition to the creation of standard and novel data presentation formats. With the overwhelming amount of information that is generated and received through OLTP, well-designed vehicles for facilitating data capture, coupled with creative and powerful means for clearly, accurately and concisely conveying meaningful information, are essential to effective CDSS implementation. The CDSS designer should offer users effective solutions for accomplishing these tasks. Information visualization is not a goal in itself, but an integral part of the overall process of presenting abstract information or scientific data. Information visualization functions to support CDSS therefore have to be embedded into the application system that deals with all aspects of the data visualization problem.

Fig 1. Reference model for visualizing data. Human interaction changes the parameters of each process.

When users draw charts from raw data, they usually execute three processes, namely data transformation, visual mapping, and view transformation. Fig. 1 shows the basic reference model for visualizing data [10]. According to the model, the data table that is needed when drawing a chart is first generated by the selection and aggregation of raw data in the data transformation process. The data table is then transformed into a visual structure that combines spatial substrates, marks, and graphical properties in the visual mapping process. Finally, a view of the visual structure is created by specifying graphical parameters such as the position, scaling, and clipping in the view transformation process. The user interacts with each process to control its parameters by, for example, restricting the view to certain data ranges or changing the nature of the transformation. We propose a model of VUI that extracts drawing parameters from a series of user requirements, and redraws the chart interactively according to the change of the user's viewpoint. VUI for CDSS addresses the mapping stage of the visualization process and is essentially a data- and knowledge-based graphical software shell that automatically translates CDSS outcome data into charts, maps and animations.

The Model of VUI for CDSS

Basic Elements. The model consists of three elements: (1) a processing engine (an executable module); (2) a set of knowledge base files, including script files and keyword definition files (standard ASCII text files); (3) a set of data base files (see Fig. 2).

Processing Engine. The processing engine is an executable program. During run time it reads knowledge base (KB) and data base (DB) files. There are three types of KB and DB files: (1) script files; (2) keyword definition files; (3) data files. These KB and DB files make up a specific application. The purpose of the processing engine is to determine the user's needs from his/her query and to translate the operators in the KB and DB files into executable functions and routines for displaying a specific dialogue box, loading certain sections of the data, displaying a specific chart, or running an animation


sequence. The processing engine gets its commands from the KB and DB files through the user’s selection or his/her questions. They precisely tell the processing engine how the data are organized and what it should do with them.

Fig 2. Main elements of the VUI model

Script Files. The script files make up the core of the model. They are linked to menu or dialogue box items and precisely define what the processing engine should do in response to the user clicking on that specific menu or dialogue box item. For instance, the script files define which columns/rows the processing engine should read from a data file. One can specify coherent blocks of data or very complex non-rectangular data structures. One can also use the data description to select columns/rows in a data file that should be used for animation, etc. Besides this simple (but quite powerful) data description language, the script file has sections which define the type and layout of the screen display. The developer can specify various kinds of general graphs (e.g. bar, line, pie, area, scatter plot, maps, etc.) or special expressions (e.g. Parallel Coordinates, Table Lens, etc.).

Data Files. The data files can be organized in many formats. One format is a space-separated standard ASCII file, which makes it easy to prepare the data for the graphical database. In the first section there can be a data header, such as the title of the specific data set, that readily identifies what it is about. The next section is the data section, which is organized into rows and columns. A more advanced data structure could be envisioned than this simple, spreadsheet-type arrangement, but we think that this scheme has four big advantages. First, it interacts easily with the application portion (the preparing stage) of the software system: the specific outcome from the CDSS can easily be processed into this scheme by a simple data preparation program. Second, it is intuitive: most computer-literate individuals can immediately work with data files that stick to the column/row concept. Third, it is simple: it does not require understanding of more advanced record-oriented data structures. Fourth, one can inspect the data files directly: since a file is plain ASCII, any editor can be used to check the data, and it is not necessary to run special software just to look into the data file. A sketch of such a file appears after this section.

Keyword Definition Files. The keyword definition file is for knowledge-based information query. Some definite concepts of the special application domain for a CDSS are defined as keywords for the indexes of the information query.
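A data file of the kind just described might look like the following; the content is hypothetical, but it shows the header section followed by space-separated rows and columns, and the short reader shows how simply such a file can be consumed, which is the point of the scheme.

```python
# Sketch: the space-separated ASCII data file scheme and a minimal reader.
SAMPLE = """\
# Shopping traffic by hour (hypothetical data header)
hour monday tuesday
09   120    95
10   180    140
11   240    210
"""

def read_data_file(text):
    lines = [l for l in text.splitlines() if l and not l.startswith("#")]
    header = lines[0].split()
    rows = [[int(x) for x in l.split()] for l in lines[1:]]
    return header, rows

columns, data = read_data_file(SAMPLE)
print(columns, data)
```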

16

Manufacturing Systems and Industry Application

Knowledge-Based Information Query. The development of a software tool to support the user interface design of a DSS is not sufficient by itself. In addition, we need to enhance the human operator's ability to use these tools. For example, although information can be displayed graphically, users may not be able to understand all of what is being displayed, or may be overwhelmed if too many graphical displays are presented on the same screen. To minimize these shortcomings, a fuzzy control engine is proposed that supports a fuzzy query based on specific keywords in the application domains (see Fig. 3). As we know, the input of a computer application system consists of a given set of symbols. The output is a set of symbols that is readily understandable by the user. A good system, therefore, should support users in finding meaningful information in a simple way, such as through inputting certain keywords. Many decision support problems have data that lack obvious structures to provide information visualization with a base. We formulate Y = ^F(X), where Y is the set of symbols representing the user's need (also called the satisfied solution set), X is the input, such as keywords specified in our system, F is a transformation function that transforms X into F(X), and ^ is the "satisfaction" operator, which transforms F(X) into Y following the user's requirement.
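The transformation Y = ^F(X) can be sketched as a scoring pipeline: F maps the input keywords to scored candidate items of information, and the satisfaction operator ^ keeps the items whose fuzzy membership exceeds a threshold. The membership function, catalog and threshold below are illustrative assumptions, not the paper's actual engine.

```python
# Minimal sketch of the fuzzy keyword query Y = ^F(X) (illustrative only).
def fuzzy_match(keyword, item_keywords):
    """Crude membership degree: fraction of item keywords containing the query."""
    hits = sum(1 for k in item_keywords if keyword.lower() in k.lower())
    return hits / len(item_keywords)

def satisfied(items, query, threshold=0.3):
    # F(X): score every item of information against each keyword in the query;
    # ^: keep only items whose aggregate membership satisfies the threshold.
    scored = {title: max(fuzzy_match(q, kws) for q in query)
              for title, kws in items.items()}
    return [t for t, s in sorted(scored.items(), key=lambda x: -x[1])
            if s >= threshold]

catalog = {"Comparative sales report": ["sales", "turnover", "comparison"],
           "Shopping traffic by channel": ["traffic", "customers", "channel"]}
print(satisfied(catalog, ["traffic"]))
```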

Fig 3. Fuzzy retrieval engine and query in the VUI model

The term "satisfaction" is a fuzzy concept and can be represented by a fuzzy set. Concretely, one or more keywords that are related to the application domains are allowed to be entered into the system, which define the scope of information retrieval. The visualized information that is related to the keywords is provided by the fuzzy retrieval engine (see Fig. 3). The fuzzy retrieval engine retrieves the information through interaction with the user. The 'item of information' in Fig. 3 is a classified set of the information about the CDSS application domains: it is the title of the content of the information, supported by one or more keywords about the CDSS application domains.

The Model of VUI for CDSS: A Case Study

In this section, we use a real-world example to demonstrate the method described. Shopping traffic is an important factor and an important piece of information in the commercial operation of malls. Strong traffic, on the one hand, brings advertising effects; on the other hand, it is a precondition for achieving a large turnover. If we know accurately what turnover is formed under a certain shopping traffic, we can analyse some deep relationships between shopping traffic and turnover. Thus, the collection and analysis of shopping traffic information has important practical significance. Next, we demonstrate the application of the VUI model in displaying shopping traffic by using two diagrams.


Fig 4. Example of single-axis simple diagram

Fig. 4 is a single-axis simple diagram. This kind of diagram is single-axis, simple and clear, and suits reports such as comparative customer reports and comparative sales reports. Fig. 4 shows the shopping traffic in selected times and channels. The report compares the shopping traffic in selected time slots on different days; in the figure, different colors represent different days. The form of the report can be bar, 3D bar, line, scatter plot, or spreadsheet, as selected by the user.
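Read against the reference model of Fig. 1, even this simple report exercises all three processes. The sketch below (with hypothetical traffic counts) aggregates raw data into a data table, maps time slots to bar positions, counts to heights and days to colors, and then sets the view parameters.

```python
# Sketch: the three reference-model stages behind a Fig. 4 style traffic report.
import numpy as np
import matplotlib.pyplot as plt

# 1. Data transformation: aggregate raw visit records into a data table
#    (hypothetical counts per time slot and day).
slots = ["09-10", "10-11", "11-12"]
traffic = {"Mon": [120, 180, 160], "Tue": [95, 140, 210]}

# 2. Visual mapping: slots to x positions, counts to bar heights,
#    days to colors ("different colors represent different days").
x = np.arange(len(slots))
width = 0.35
fig, ax = plt.subplots()
for i, (day, counts) in enumerate(traffic.items()):
    ax.bar(x + i * width, counts, width, label=day)

# 3. View transformation: position, scaling and clipping of the view.
ax.set_xticks(x + width / 2)
ax.set_xticklabels(slots)
ax.set_ylim(0, 250)
ax.set_ylabel("shopping traffic")
ax.legend()
plt.show()
```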

Fig 5. Example of joint multi-axis diagram

Fig. 5 is a joint multi-axis diagram. This kind of diagram is complex and is mainly used to map different objects to different vertical axes. This chart compares the data of customers and cars. As the number of customers is usually significantly greater than that of cars, the two are mapped to different axes for comparison. The red curve represents the customer data and the blue curve represents the car data at the site. The data of each day is displayed in a sub-chart; every sub-chart has two curves, for customers and cars, mapped respectively to the left and right vertical axes.
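The joint multi-axis mapping corresponds to the common dual-axis plotting idiom. A minimal matplotlib sketch with hypothetical customer and car counts is shown below; `twinx()` creates the second vertical axis sharing the same horizontal axis.

```python
# Sketch of the joint multi-axis diagram: two series on separate vertical axes.
import matplotlib.pyplot as plt

hours = list(range(9, 21))
customers = [120, 180, 240, 310, 280, 260, 300, 340, 390, 360, 290, 210]  # hypothetical
cars = [14, 20, 26, 33, 30, 28, 31, 36, 41, 38, 30, 22]                   # hypothetical

fig, ax_left = plt.subplots()
ax_left.plot(hours, customers, color="red", label="customers")
ax_left.set_ylabel("customers", color="red")

ax_right = ax_left.twinx()  # second vertical axis, same x axis
ax_right.plot(hours, cars, color="blue", label="cars")
ax_right.set_ylabel("cars", color="blue")

ax_left.set_xlabel("hour of day")
plt.show()
```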


Summary

Information visualization is an approach that can assist CDSS users in gaining insight into quantitative data so that better decisions can eventually be reached. In this paper, we have discussed the role that information visualization plays in CDSS and introduced the model of VUI for CDSS. Information visualization is a powerful tool with tremendous potential for supporting complex decision support problems and their problem-solving processes. However, information visualization is still in its growth stage and requires advanced study of techniques and applications. Future work will consist of refining the model with users and improving its functions for broader application.

References

[1] L.A. Jesse and J.K. Kalita: Knowledge-Based Systems, Vol. 10 (1997), p. 119.
[2] P. Zhang: Decision Support Systems, Vol. 23 (1998), p. 371.
[3] V.G. Kurakin, A.V. Koltsov and P.V. Kurakin: Proceedings of the Particle Accelerator Conference (Chicago, Illinois, June 18-22, 2001), Vol. 2, p. 1192.
[4] M. Jern, D. Ricknas, F. Stam and R. Treloar: Proceedings of the 7th International Conference on Information Visualization (London, UK, July 16-18, 2003), Vol. 1, p. 17.
[5] S. Havre, E. Hetzler, K. Perrine, E. Jurrus and N. Miller: Proceedings of the IEEE Symposium on Information Visualization (Washington, DC, USA, October 22-23, 2001), Vol. 1, p. 105.
[6] D.H. Zhu and A.L. Porter: Technological Forecasting & Social Change, Vol. 69 (2002), p. 495.
[7] T. Keller, P. Gerjets, K. Scheiter and B. Garsoffky: Computers in Human Behavior, Vol. 22 (2006), p. 43.
[8] V. Gonzalez and A. Kobsa: Proceedings of the 7th International Conference on Information Visualization (London, UK, July 16-18, 2003), Vol. 1, p. 331.
[9] R.H. McKim: Experiences in Visual Thinking (Brooks/Cole Publishing Company, Boston 1980).
[10] S.K. Card, J.D. Mackinlay and B. Shneiderman: Readings in Information Visualization: Using Vision to Think (Morgan Kaufmann Publishers, San Francisco 1999).

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.19

An Optimizing Design Approach for the Fiber Manufacturing Based on the Immune Genetic Algorithm-Optimized Neural Network

Hui-Zhong Zhu 1, Yong-Sheng Ding 1,3,a, Xiao Liang 1, Kuang-Rong Hao 1,3, and Hua-Ping Wang 2

1 College of Information Sciences and Technology, Donghua University, Shanghai 201620, P. R. China
2 College of Material Science and Technology, Donghua University, Shanghai 201620, P. R. China
3 Engineering Research Center of Digitized Textile & Fashion Technology, Ministry of Education, Donghua University, Shanghai 201620, P. R. China

a Email: [email protected]

Key words: Filament spinning; Neural network; Immune genetic network; Optimization; Production line

Abstract. A novel neural network-based approach with an immune genetic algorithm is proposed to conduct the optimizing design for an industrial filament manufacturing system. A new model is proposed in this paper to acquire better filament quality during this process. The proposed model is a combination of two components: a traditional neural network, which is used to simulate the process, and an immune genetic algorithm-based part, which improves the performance of the neural network component. Simulation results demonstrate that the proposed method can efficiently model the spinning process of filament and predict the filament quality with the production parameters as input data. Meanwhile, the proposed method achieves faster speed and higher accuracy compared with traditional methods.

Introduction

Currently, the improvement methods for the fiber production process are mostly based on a particular aspect of the production line, or derived from production experience with subtle adjustments. The former includes the update and improvement of drafting equipment and coagulation, improved drafting methods and drivers, improvements to the center air blow and cross air blow, etc. [2,3]. The latter includes changes to the transformation ratio of the spinning solution, changes of the drawing rate, and changes in all aspects of the production environment (such as switching air to high-temperature steam), and so on. The equipment-side improvements mainly exploit shortcomings that the production line exposes after long running, or apply a new industrial technology gradually adopted in the spinning industry; the fine-tuning of the process relies mainly on the experience of the production staff, optimizing the content of a specific production line. These methods belong to local process optimization: there is no single, definitive guidance for the system configuration, and the effects of such optimization are limited and hard to achieve. The way to solve this technical problem is to provide an intelligent, integrated approach to the spinning process optimization design problems mentioned in the background above [4-6]. A model is constructed by immune genetic optimization of a neural network [7,8]. Data are processed and analyzed by the model, and finally reasonable parameters for the spinning production line configuration can be obtained. According to the main quality indicators, the optimal design of the spinning process described in this paper refers to the optimal parameters in the various aspects of production after adjustment by the process optimization method.


The Melt Spinning Process
A classical filament spinning process is shown briefly in Fig. 1. As the figure shows, the whole spinning process can be roughly divided into the four sections listed in Table 1. Each section has its specific role, and wrong parameters in any section will prevent the final filament from meeting the desired quality requirements. Analysis of the spinning process should therefore take a macroscopic point of view. Although the parameters of each module have a large effect, parameters representing the performance of each module can be obtained by analyzing the historical data. The related parameters are listed in Table 2.

Table 1. Sections of the filament melt spinning process
Section              Effect of the module
Slurry preparation   Making raw materials into slurry
Melting process      The slurry is processed and transported to the spinneret
Spinneret            The processed slurry is sprayed into filament
Spinning             Improving the physical properties of the fiber

The whole spinning process can be simulated once the relationship between the fiber properties and the parameters of the spinning process in Table 2 is found. As is known, the production parameters are inextricably linked with the final product, but this relationship is implicit and nonlinear, so a nonlinear function is needed to describe it. As production runs accumulate, a large amount of input and output data becomes available, from which the hidden nonlinear function can be identified. For this purpose, this paper chooses a neural network optimized by an immune genetic algorithm.

The Neural Network with the Immune Genetic Algorithm
Neural network. The artificial neural network (ANN) is a mathematical model for distributed parallel information processing whose capacity depends on the complexity of the target plant. By adjusting the connection relationships between its nodes, it achieves the purpose of processing information. An ANN has self-learning and adaptive ability: given a number of matched input-output data pairs, the network can analyze the potential laws among the data and, using these laws, forecast the output for new input data. This process of study and analysis is called "training". The working mechanism of a classical neural network is shown in Fig. 2. The black box is the core: solving the neural network means solving the structure of this black box, and once the parameters of the structure are found, the characteristics of the black box are determined. In theory, a neural network can approximate a nonlinear model with very small deviation, but this model also has the following weaknesses:
-- The system is nonlinear, and training easily falls into a local minimum.
-- Convergence is too slow. In the learning process the descent speed is slow and learning is slow, so the process easily forms wide flat regions of the error surface.
-- The network structure is complex and not easy to determine.
Many ways have been proposed to overcome these problems; two classic improvements are adding a momentum term and adapting the step length. Although these classic methods can improve the performance of the network, the effect is not significant. This article therefore adopts the immune genetic algorithm to improve the network.

Fig. 1 Spinning process

Table 2. Spinning conditions and parameters (covering the sections slurry preparation, melting process, spinneret and spinning)
Symbol    Note
IV_0      Characteristic viscosity
G         Amount of pump
TL        Spinning temperature
VL        Spinning speed
DR        The ratio of stretch
Td        Transforming temperature
EYS1.5    Half-times elongation (quality index)

Immune genetic algorithm. The immune genetic algorithm derives from the genetic algorithm, adding an immune-algorithm factor to it. Because of the efficiency of its search mode, it searches quickly and does not easily fall into premature convergence. Its flowchart is shown in Fig. 3.

Fig. 2 Prototype of neural network

Simulation of Spinning Process
First, from Table 2, six key parameters of the production process are taken as the input vector, and one important quality index of the material is taken as the output. These data serve as the essential parameters of the simulation. Once they are defined, the next step is to determine their exact values; after that, the relationship between input data and output data can be identified. In this project the neural network has three layers. The first is the input layer with 6 neurons. The second is the hidden layer, in which 13 neurons are used according to tests with different numbers of hidden neurons. The last is the output layer, which contains one neuron. The sigmoid function is taken as the activation function:

f(x) = \frac{1}{1 + e^{-x}} ,    (3)
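As an illustration (not the authors' code), the 6-13-1 forward pass with this activation can be sketched as follows; the weight values are placeholders that the IGA-based training described below would set, and the type and function names are our own.

// Sketch of the 6-13-1 forward pass with the sigmoid activation of Eq. (3).
#include <array>
#include <cmath>

const int IN = 6, HID = 13;                 // input and hidden layer sizes

double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }   // Eq. (3)

struct Net {
    double w1[HID][IN], b1[HID];            // input -> hidden weights and biases
    double w2[HID], b2;                     // hidden -> output weights and bias

    double forward(const std::array<double, IN>& x) const {
        double h[HID];
        for (int j = 0; j < HID; ++j) {     // hidden layer
            double s = b1[j];
            for (int i = 0; i < IN; ++i) s += w1[j][i] * x[i];
            h[j] = sigmoid(s);
        }
        double s = b2;                      // single output neuron
        for (int j = 0; j < HID; ++j) s += w2[j] * h[j];
        return sigmoid(s);                  // predicted quality index, scaled to (0,1)
    }
};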

After constructing the structure of the neural network, we set the network's internal parameters. First of all, we regard the problem to be solved as the antigen and a solution as an antibody. When the IGA receives an antigen, it randomly generates antibodies; the network then uses a function to compute the fitness of each antibody, after which the antibodies cross and mutate. Because concentration and affinity affect the death rate of an antibody, new generations must satisfy both criteria. The algorithm ends when the solution meets the termination condition. The steps are as follows:


1) Encoding of antibody genes. All parameters are of floating-point type; with binary encoding, although encoding is fast, the code length would be too long and the precision inadequate. This paper therefore uses real-number encoding. The specific encoding is shown in Fig. 4.

Fig. 4 Encoding of antibody genes

2) The function of antibody fitness. To evaluate the performance of each antibody, a function is used to calculate the antibody's fitness:

F(i) = \frac{1 + C}{e^{E_i} + C} ,    (4)

where F(i) is the fitness of antibody i, C is a constant and E_i is the error. According to this function, the fitness of each antibody i is computed. A large fitness means the antibody suits the antigen; otherwise the antibody is discarded.
3) Concentration. To measure the difference between antibodies, this paper defines the antibody concentration and uses it to gauge the similarity between two antibodies. Based on the spatial relationship between antibodies, we first define the distance between them:

d(i,j) = \sum_{k=1}^{l} (x_{ik} - x_{jk})^2 ,    (5)

The distance can be understood as follows: similar antibodies lie close to each other, while dissimilar ones lie far apart. From the distance we can then define the antibody concentration:

A_i = \frac{1}{1 + \sum_{j} d(i,j)} ,    (6)

where A_i denotes the concentration of antibody i. A large A_i means there are many antibodies similar to antibody i; the larger the concentration, the lower the reproduction rate.
4) Cross. According to the crossover rate p_c, parent antibodies are randomly selected and offspring are produced with a linear (arithmetic) strategy:

x_1' = r x_1 + (1 - r) x_2 ,  \quad  x_2' = (1 - r) x_1 + r x_2 ,    (7)

where x_1', x_2' are the offspring, x_1, x_2 are the parents and r is a constant.
5) Mutation. The mutation strategy is similar to crossover; its function is:

x' = \begin{cases} x + \Delta(g_t,\, r(k) - x) \\ x - \Delta(g_t,\, x - r(k)) \end{cases} , \quad \Delta(g_t, y) = y\, r \left(1 - \frac{g_t}{T}\right) ,    (8)

where y is the bound on x (r(k) in Eq. 8), g_t is the current evolution generation, and T is the maximum number of generations. According to the above steps, the network structure parameters can be determined; a compact sketch of these operators follows.
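The sketch below is our illustration of steps 1)-5) under Eqs. (4)-(8), not the authors' implementation; the constant C, the crossover constant r and the gene bounds lo/hi are placeholders (cf. Table 3).

// Immune-genetic operators of steps 1)-5), written out under Eqs. (4)-(8).
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

using Antibody = std::vector<double>;   // step 1): real-number encoding of all weights
static std::mt19937 rng{42};

double fitness(double Ei, double C = 1.0) {               // Eq. (4)
    return (1.0 + C) / (std::exp(Ei) + C);
}

double dist(const Antibody& a, const Antibody& b) {       // Eq. (5)
    double d = 0.0;
    for (std::size_t k = 0; k < a.size(); ++k) d += (a[k] - b[k]) * (a[k] - b[k]);
    return d;
}

double concentration(const Antibody& a, const std::vector<Antibody>& pop) {  // Eq. (6)
    double s = 0.0;
    for (const Antibody& b : pop) s += dist(a, b);
    return 1.0 / (1.0 + s);
}

void cross(Antibody& x1, Antibody& x2, double r) {        // Eq. (7): arithmetic crossover
    for (std::size_t k = 0; k < x1.size(); ++k) {
        double a = x1[k], b = x2[k];
        x1[k] = r * a + (1.0 - r) * b;
        x2[k] = (1.0 - r) * a + r * b;
    }
}

void mutate(Antibody& x, double lo, double hi, int gt, int T) {  // Eq. (8)
    std::uniform_real_distribution<double> U(0.0, 1.0);
    for (double& g : x) {
        double shrink = U(rng) * (1.0 - double(gt) / T);  // r * (1 - gt/T)
        if (U(rng) < 0.5) g += (hi - g) * shrink;         // toward the upper bound r(k)
        else              g -= (g - lo) * shrink;         // toward the lower bound
    }
}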


Result and Discussion
To solve the fiber-spinning network, this paper uses three layers with 6, 13 and 1 neurons respectively, and Eq. 3 as the activation function. To compare performance, the IGA-optimized network and the traditional network each computed the solution; the training parameters are listed in Table 3. Both methods were used to simulate 20 groups of original data, and the simulation results are shown in Fig. 5. In this figure the three curves are very similar and the fitting degree is very high. The simulation deviations, plotted in Fig. 6, lie in the range 0.00001 to 0.0004, which meets the required deviation. Therefore both the IGA-optimized and the traditional network can simulate the process and are up to the task. Comparing the two methods, however, reveals some differences. For both methods, the deviations between the simulation results and the original data over training are shown in Figs. 7 and 8. The convergence of the IGA is very fast: after 50 generations its deviation is already very small and meets the default tolerance, so the IGA converges. Meanwhile, the deviation of the traditional network is still large, oscillating and decreasing. The difference in convergence speed is only part of the picture; further performance parameters are listed in Table 4. A conclusion can be drawn: compared with the traditional network, the IGA has better performance; it greatly improves the search capability of the traditional network and overcomes its premature-convergence defects. After the improvement, convergence is faster and precision is higher than before.

Fig.5 Simulation of the two algorithms

Fig.6 The simulation deviation of the two algorithms

Fig. 7 The deviation of the traditional network

In Table 3, the superscript a denotes the rate of cross and the superscript b the rate of mutation.

Table 3. Training parameters
Evolution generation   p_c^a   p_m^b   Training accuracy
1000                   0.1     0.6     0.004

Fig. 8 The deviation of the IGA

Table 4. Comparison of relative performance parameters
Methods               Training time [s]   Generation of convergence
Traditional network   15                  140
IGA                   6                   50


Summary
To avoid the problems of slow search speed and premature convergence, a novel neural network approach combined with an immune genetic algorithm is proposed in this paper. The proposed model combines two components: a traditional neural network used for simulation, and an immune genetic algorithm-based part that improves the performance of the neural network component. Compared with the traditional neural network, the novel network has more powerful global and local search capabilities. As the example shows, both simulation speed and accuracy are improved by this model; the nonlinear function between the input and output sample data can therefore be effectively captured by the novel neural network.

Acknowledgement
This work was supported in part by the National Natural Science Foundation of China (Nos. 60975059, 60775052), the Specialized Research Fund for the Doctoral Program of Higher Education from the Ministry of Education of China (No. 20090075110002), and Projects of the Shanghai Committee of Science and Technology (Nos. 10JC1400200, 10DZ0506500, 09JC1400900).

References
[1] C.T. Kiang and J.A. Cuculo: Journal of Applied Polymer Science, Vol. 46 (1992), p. 55.
[2] K.-Z. Chen and Y. Leung: Neural Computing and Applications, Vol. 11 (2002), p. 103.
[3] J. Timmis, M. Neal and J. Hunt: Biosystems, Vol. 55 (2000), p. 143.
[4] N.Y. Nikolaev and H. Iba: IEEE Transactions on Neural Networks, Vol. 14 (2003), p. 337.
[5] T.-C. Chen and P.-S. You: Computers in Industry, Vol. 56 (2005), p. 195.
[6] C.-H. Hou, Y.-S. Ding and X.-Y. Zeng: Mathematics and Computers in Simulation, Vol. 77 (2008), p. 540.
[7] Z.-H. Hu, Y.-S. Ding and Q. Shao: Expert Systems with Applications, Vol. 36 (2009), p. 5248.
[8] L.P. Khoo and T.D. Situmdrang: International Journal of Production Research, Vol. 41 (2003), p. 3419.
[9] R.L. King, S.H. Russ, A.B. Lambert and D.S. Reese: Future Generation Computer Systems, Vol. 17 (2001), p. 335.
[10] J.E. Hunt and D.E. Cooke: Journal of Network and Computer Applications, Vol. 19 (1996), p. 189.
[11] K.K. Kumar and J. Neidhoffer: Expert Systems with Applications, Vol. 13 (1997), p. 201.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.25

Scene Shortest Path Solutions Based on the Breadth First Search

Wen Jing-hua a, Jiang He-ling b, Zhang Mei c, Song Jun-ling d
School of Information, Guizhou Financial Institute, Guiyang, 550004, China
a [email protected], b [email protected], c [email protected], d [email protected]

Key words: City map; Shortest path; Queues; Breadth first search; Adjacency matrix.

Abstract: Breadth first search is one of the most convenient methods for traversing a graph, and the shortest path problem on a city map is a traditional data structure problem. This paper studies and analyzes the breadth first search traversal algorithm using the adjacency matrix as its storage structure. In the VC++ 6.0 environment, the shortest path problem on a city map is solved with a queue-based breadth first search of the graph, and the complete program is implemented on the machine. Experimental results show that using breadth first search to solve the city shortest path problem is an easy-to-understand and feasible scheme, so it has a certain popularization value.

Introduction
With the continuous development of computer science, graph applications have penetrated into linguistics, logic, physics, chemistry, electronics, communication, mathematics and many other disciplines; with the rapid development of urban traffic networks in particular, graph traversal is becoming more and more important. Graph traversal algorithms also underlie solutions to graph connectivity, topological sorting and critical path problems [1]. Urban roads are important urban infrastructure, and their operation directly influences urban planning, construction and management. For the city of the future, keeping roads running effectively is undoubtedly an important aspect of urban development. With increasing urbanization, built-up areas gradually expand, population and industry concentrate in cities, and the frequency of people's activities in every part of the city increases, so the urban traffic network plays an ever more important role in modern urban life and receives more attention from the competent city departments [2]. While actively developing transportation infrastructure and improving related services, cities at home and abroad tend to research and develop intelligent urban transportation network systems. This paper studies and analyzes the breadth first search algorithm based on the adjacency matrix and the queue and, in the VC++ 6.0 development environment, uses it to solve the city shortest path problem, so as to provide theoretical and algorithmic support for intelligent urban transportation network systems.

Algorithm Base of Breadth First Search
Basic Principles: The principle of the breadth first search algorithm is as follows. Suppose a vertex v0 in a graph G is the source node. Starting from v0, after this vertex has been visited, the not-yet-visited adjacent points of v0 are visited in turn; then, in the order in which those vertices were visited, their adjacent points are visited in turn, until all vertices that have a path to v0 in G have been visited.


If there are still unvisited vertices in the graph, another unvisited vertex is selected as a new starting point, and the process is repeated until all vertices in the graph have been visited [3]. While completing the search, this algorithm generates a breadth first spanning tree rooted at v0 that includes all vertices reachable from v0. The algorithm first searches all nodes at distance k from v0, and only then searches reachable nodes ve at distance k + 1, so the path from v0 to ve in the breadth first spanning tree is the shortest path from v0 to ve in the graph G; every node in the breadth first spanning tree has at most one father node.
Adjacency Matrix Storage of a Graph: A directed graph with n vertices can be represented by an n × n matrix [4]. Let the matrix be M; then M[i][j] = 1 when <vi, vj> is an arc of the directed graph, and M[i][j] = 0 otherwise. For an undirected graph the edges are symmetric, so if an edge exists, M[i][j] = M[j][i] = 1.
Design of the Breadth First Traversal Algorithm
Basic Data Structures: (1) In a breadth first traversal, the adjacent points of the vertex visited first must also be visited first; therefore the visit of each vertex must be recorded so that the adjacent points of each vertex can be visited in this order. A queue structure [5] can record the visiting order: every visited vertex enters the queue and then leaves the queue in turn. (2) During the traversal, to avoid visiting a vertex repeatedly, a one-dimensional array visited[n] (n is the number of vertices) is created to record whether each vertex has already been visited.
Breadth First Search Algorithm Framework Based on the Adjacency Matrix: Along with the adjacency matrix, an auxiliary array visited[n] is set up; the program initializes it to 0, meaning a vertex is unvisited, and once a vertex vi is visited, visited[i] is set to 1. The algorithm framework is as follows:
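(The paper's framework listing is given as a figure and is not reproduced here; the following is a minimal sketch consistent with the description above, using the matrix M and the visited[] array from the text plus an array-based queue of our own naming.)

// Minimal BFS framework over an adjacency matrix with an array-based queue.
#include <cstdio>

const int N = 8;                       // number of vertices n

void BFS(int M[N][N], int v0) {
    int visited[N] = {0};              // all vertices initially unvisited
    int queue[N], qh = 0, qe = 0;      // simple array queue: head and end pointers

    queue[qe++] = v0;                  // visit the source vertex first
    visited[v0] = 1;
    while (qh < qe) {                  // until the queue is empty
        int v = queue[qh++];           // dequeue the current vertex
        printf("visit %d\n", v);
        for (int w = 0; w < N; ++w)    // enqueue unvisited adjacent points
            if (M[v][w] == 1 && visited[w] == 0) {
                visited[w] = 1;
                queue[qe++] = w;
            }
    }
}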

Application Example of the Breadth First Search Algorithm
Problem Description: The map of a city is known (shown in Figure 1). There are eight different scenes A, B, C, D, E, F, G, H, numbered 1, 2, ..., 8 in turn. Find a path from one scene to another that passes through the fewest scenes.


Figure 1 City traffic map

Problem Analysis: Figure 1 shows the traffic map from scene A to scene H. As can be seen in the figure, going from scene A to scene H passes through a number of scenes, and we need to find a route through the fewest scenes and output that line. Breadth first search of a graph is similar to level-order traversal of a tree: it searches layer by layer, so it finds the most direct path from one node to another as soon as possible. The breadth first algorithm is therefore suitable for this problem. The adjacency matrix of the graph in Figure 1 is expressed in Table 1, where 1 means one can go directly from one scene to the other and 0 means one cannot.

Table 1 The adjacency matrix corresponding to Figure 1 (an 8 × 8 symmetric 0/1 matrix over the scenes A-H; from the recoverable rows, A adjoins B, C, D and F, and H adjoins E, F and G).

The concrete search progress is as follows:
1) Scene A (number 1) enters the queue; the head pointer qh of the queue is set to 0 and the end pointer to 1.
2) All scenes directly reachable from the scene at the head of the queue enter the queue (a scene that has already appeared in the queue is not entered again); then that scene leaves the queue and the head pointer qh is increased by 1 to obtain the new head scene. These steps are repeated until scene H has entered the queue; once scene H has been found, the search ends.
3) The route passing through the fewest scenes is output.
Implementation and Result of the Algorithm: Design of the data structures:
1) A two-dimensional array jz[][] is the storage space of the adjacency matrix.
2) An array sq[] is the storage space of the live-node queue.
3) Each node of the queue has two members: sq[i].sceneNm records the number of the scene entering the queue, and sq[i].preNm records the index of its precursor scene in the queue, so the shortest line can be traced back inversely via sq[i].preNm.


4) The array visited[] records the scenes that have already been searched.
The complete source code of the algorithm is as follows:
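(The paper's full listing likewise appears as a figure; the sketch below is a reconstruction from the data-structure design above, using the stated names jz[][], sq[] with sceneNm/preNm, and visited[]. The map data in jz would be filled in from Table 1 and are not reproduced.)

// BFS shortest path over the scene map, tracing the route back via preNm.
#include <cstdio>

const int N = 8;                      // scenes A..H, numbered 0..7 here

struct Node { int sceneNm, preNm; };  // scene number and index of its precursor in sq[]

void shortestPath(int jz[N][N], int src, int dst) {
    Node sq[N];
    int visited[N] = {0}, qh = 0, qe = 0;
    sq[qe].sceneNm = src; sq[qe].preNm = -1; ++qe;   // step 1): source enters the queue
    visited[src] = 1;
    while (qh < qe) {
        int cur = qh++;                              // dequeue the head scene
        int v = sq[cur].sceneNm;
        if (v == dst) {                              // step 3): trace the line back
            printf("shortest route (reversed): ");
            for (int i = cur; i != -1; i = sq[i].preNm)
                printf("%c ", 'A' + sq[i].sceneNm);
            printf("\n");
            return;
        }
        for (int w = 0; w < N; ++w)                  // step 2): enqueue reachable scenes
            if (jz[v][w] == 1 && visited[w] == 0) {
                visited[w] = 1;
                sq[qe].sceneNm = w; sq[qe].preNm = cur; ++qe;
            }
    }
}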


The running result is shown in Figure 2.

Figure 2 The shortest road path

Summary
This paper studied the application of the breadth first search algorithm, based on the queue and the adjacency matrix, to the shortest path problem on a city map, and the algorithm was programmed and realized on the VC++ 6.0 development platform. It provides a new method for studying urban traffic network problems and has important meaning for the application and development of urban intelligent transportation systems.

Acknowledgment
This work was supported by the Important Research Fund of Guizhou Provincial Education Department ([2007046]) and by the Special Research Fund of the Guizhou Provincial Governor ([2007]41).

References
[1] Kuang Guijuan: The Application of the Breadth-First Search Algorithm in Interconnection Network Communication, master's thesis, Qingdao University (2005).
[2] Wang Xingfeng and Jia Ling: Research on Shortest Path Searching for Urban Traffic Networks Based on GIS, Jisuanji yu Xiandaihua, No. 3 (2005).
[3] Yang Zhiming: Analysis and Implementation of the Breadth-First Search Traversal Algorithm of a Graph, Agriculture Network Information, No. 12 (2009), p. 136.
[4] Davis (U.S.), translated by Feng Shunxi: Data Structures and Algorithm Analysis: C Language Description (Mechanical Industry Press, 2004).
[5] Yan Wei-min and Wu Wei-min: Data Structure (C-language version) (Tsinghua University Press, 2007).
[6] Wang Xiaodong: Design and Analysis of Computer Algorithms, 3rd edition (Electronic Industry Press, 2007).

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.30

Analysis on Information Construction of University Personnel Archives

Han Shuhua
Hebei Normal University of Science & Technology, Qinhuangdao, Hebei, China
[email protected]

Key words: University Personnel Archives, Informatization Construction, Existing Problems, Corresponding Strategies

Abstract. This paper begins with the significance of the informatization construction of university personnel archives, then describes its concept and content, discusses in particular the main existing problems, and puts forward corresponding strategies for the listed issues.

Introduction
The Outlines for Informatization Construction of Whole Country Archives was issued by the State Archives Administration of the People's Republic of China on 25 November 2002. It is a special plan for national archives development within the "tenth five-year" plan and the first national plan for archives informatization construction. It is a concrete measure for adapting to the requirements of national informatization construction and archives business development, implementing the informatization-driven strategy and driving all aspects of archives work through archives informatization. With its issuing and implementation, the personnel archives departments of universities, standing at the informatization leading edge, have faced strong pressure from the international information explosion and from the information technology revolution in archives administration within the country. University archives information departments must conform to the development of the times and change from passive to active: traditional, backward archives management patterns must be changed, the informatization level of personnel archives work must be promoted gradually, the informatization construction of university personnel archives must be facilitated continuously, and a new situation for the informatization management of university personnel archives must be created.

Concept and Content of Informatization Construction of University Personnel Archives
Concept of Informatization Construction of University Personnel Archives. The informatization construction of university personnel archives mainly refers to the process in which, in carrying out personnel archives management and construction work in universities, the management mode transforms from custody of archival entities toward digitization, networking and socially oriented service of archives information, with continuous promotion of management efficiency and service level.
Main Content of Informatization Construction of University Personnel Archives
Informatization of Important and Common Entity Archives. The entity archives of university staff include information resources on different carriers, such as paper materials, photos, discs, images and video, as well as personal information such as recorded date of birth, educational background, working years, titles, and records of going abroad and crossing the border. These pieces of information play an irreplaceable role in title appraisal, salary promotion, cadre promotion and retirement management. For entity archives that are easily damaged after long use, their service life can be prolonged by digitizing them with equipment such as computers; accelerating the informatization construction of archives therefore becomes critically important.


Informatization of Receiving, Delivering, Storing and Offering for Utilization. With the development of national education, new ways of running schools have appeared, such as jointly run schools within the country and Sino-foreign cooperative schools. Cross-regional school running brings disadvantages such as time and space restrictions on document delivery; together with the environment of social informatization, considerable numbers of electronic documents pass between different campuses, so strengthening informatization construction is a necessary requirement.
Construction of Several Informatization Databases. Universities have a long development history, considerable numbers of personnel and complicated levels of educational background; huge quantities of archive material and inconvenience of use are their obvious characteristics. To realize accurate, standard, convenient and rapid information extraction, database construction must be strengthened, and standard, easy-to-use electronic databases must be built for personnel archives information, file lists, consulting and lending registration, and data statistics. This is the necessary trend of social informatization development and has long been the common wish of both archives managers and archives users.

Main Problems Existing in Current Informatization Construction of University Personnel Archives
Insufficient Understanding of the Significance of Informatization Construction. With the arrival of the age of social informatization, the important function of the informatization construction of personnel archives in university development has gained wider acceptance. However, a few universities still lack adequate understanding of its importance, mainly displayed in an attitude toward informatization construction of "important when speaking, abandoned when doing"; in addition, archives managers influenced by the traditional concept that "paper library collections come first, while digital archives information resources are unimportant" do not innovate and stick to convention, so the management of university personnel archives falls behind what people require of university archives information in the new situation.
Insufficient Capital and Equipment. The traditional archives management pattern uses manual operation for archives filing, ordering, cataloguing, searching, consulting, lending and statistics, which wastes both time and energy. Archives informatization construction requires archives workers to use modern equipment such as computers, scanners and digital imaging in order to digitize personnel archives and to network routine management work such as consulting, lending, filing and statistics. It can thus be seen that the informatization construction of university archives requires sufficient capital and advanced equipment.
The writer learned from the statistical data of the 2010 examination bulletin of university archives in Hebei Province that the archives management of many universities still remains in the traditional manual management phase, and the extreme insufficiency of capital and equipment is the main reason for this backwardness.
Lack of Complete and Perfect Regulations for Personnel Archives Informatization Construction. Nothing can be accomplished without norms or standards. Compiling specifications and standards for archives work is a main content of the informatization construction of university personnel archives. However, because of historical and social factors, archives management specifications are not complete and perfect, and some universities do not even have any regulations and systems for the informatization construction of personnel archives, so the construction becomes arbitrary and lacks normalized, scientific management.
Safety Problems of Archives Information. The country issued and implemented the Electronic Signature Law on 1 April 2005, which frees many people from the restraints of entity archives and allows many more to engage in daily activities such as e-commerce and e-government affairs through the network.


However, since electronic document data is easily attacked by computer viruses, phenomena such as computers infected by viruses and server information resources attacked by hackers frequently appear during the digitization and informatization of archives; in addition, links such as data loss during storage-system updating can cause the disclosure of secrets. The information safety problem in electronic archives management therefore becomes critically important.
Quality Problems of Personnel Archives Managers. The key factor in realizing the informatization management of university personnel archives lies in the management personnel. Only with archives managers of high political quality, strong business ability and a high level of computer-based management can the informatization management of university personnel archives be promoted. Currently, in actual personnel archives management work, professional archives management talent is extremely insufficient in many universities; the current archives managers often did not graduate in archives majors and have not received professional knowledge and business training, so their comprehensive quality is low and their business ability poor, and this situation greatly hinders the development of the informatization construction of university personnel archives.

Resolution of Existing Problems in Informatization Management of University Personnel Archives and the Countermeasures
Deepening Understanding, Changing Concepts and Building a Satisfactory Atmosphere for Informatization Management. Restrained by traditional archives management concepts and insufficient understanding of the importance of university archives informatization management, the informatization construction of personnel archives in some universities has become slow, and archives management still remains dominated by manual operation. To change this lagging situation, publicity about the significance of the informatization construction of university archives must be strengthened, and the backward traditional concept that "paper library collections come first, while digital archives information resources are unimportant" must be abandoned, in order to construct a new archives management mode that relies on the paper library collection but is dominated by digital archives information resources.
Strengthening Hardware Input to Provide a Material Guarantee for the Informatization Construction of Personnel Archives. Informatization of university personnel archives is an important component of university informatization construction; it is a requirement of the development of the times and the direction of innovation for university personnel archives management.
To realize the informatization management of university personnel archives, great attention and vigorous support from university leaders and relevant departments are required; sufficient manpower, material and financial resources must be provided, and modern equipment such as servers, computers, scanners and digital imaging equipment must be added. Only with ample financial resources and advanced equipment does the informatization construction of university archives have a strong material foundation: characters, pictures and even image information can be transformed into data through computers, scanners and printers; digitization and informatization of personnel archives can be realized; and daily work such as consulting, lending, filing and statistics of archives information can be done by computer and network. Moreover, once someone's personal situation changes, the latest information can be entered into the computer accurately at any time, guaranteeing timeliness, reality and accuracy.
Construction and Perfection of Management Systems to Safeguard Personnel Archives Information. Constructing and perfecting the management systems of personnel archives information is the guarantee for realizing the informatization construction of university personnel archives. The service nature of personnel archives management determines that personnel archives departments should develop the sharing of personnel archives information resources as much and as soon as possible. However, sharing and safety form a contradiction that is not easy to reconcile.


While informatization brings convenience to people, it also brings potential safety hazards. The relationships between development and safekeeping, and between utilization and safety, must be handled correctly. On the one hand, archives management systems must be constructed, and strict regulations for university personnel archives must be formulated in combination with the actual situation of the personnel archives management work of the unit: for example, a lending system for personnel archives information, a system for the utilization of personnel archives information resources, a document safekeeping system, and so on. Because electronic information is easy to copy and alter, access and utilization permissions should be determined according to the different security classifications, the scope of licensed access should be controlled, and feasible safety precaution measures and methods should be adopted. On the other hand, laws and regulations must be continuously perfected, the confidentiality regulations of the Party and the country must be executed strictly, the safety of personnel archives must be protected, and the legal rights of the school and its technical personnel must be protected, so that development and utilization proceed according to law. In the process of informatization, high conformity in content between the digital copy and the original document must be maintained; therefore, when using the electronic information in a database, the traditional paper document should remain the authoritative basis, so that the informatization construction of university personnel archives can be conducted safely and in order.
Construction of Databases for Personnel Archives Informatization Management. The construction of information databases for university personnel archives is an important component of the informatization construction of personnel archives. Such databases involve the different business activities of personnel archives management, and corresponding databases should be constructed for the different links, including file-list storage of personnel archives, a full-text database of important personnel information, a storage management database, archives consulting and lending, an archives utilization database, a data statistics database, etc. Through the construction and maintenance of these databases, users' requirements for archives information resources can be met conveniently and rapidly, facilitating the realization of archives informatization.
Strengthening Team Building and Promoting the Comprehensive Quality of Managers. The personnel archives manager is the key factor in the informatization construction of personnel archives. The diversification and complexity of archives management tools mean that informatization management requires more compound talents. Therefore, to cultivate an educated and professional archives management cadre team, universities should uphold the principle of "first change people, then change objects".
On the one hand, compound archives talents with sound political thinking who are familiar with both archives professional knowledge and computer management technology should be introduced and recruited into the university personnel archives management team; on the other hand, the continuing education of current staff in university archives departments should be strengthened, and learning and business-ability training in the relevant knowledge of archives management should be organized to help them become knowledgeable, professional talents, so as to make new contributions to the informatization construction of university personnel archives.

Conclusion
The informatization construction of university personnel archives is a symbol of the modernized management of personnel archives. It brings new content to archives work and is also a necessary requirement for adapting to the reform of the university personnel system. Hence, managers of university personnel archives must change their thinking, grasp the opportunity, and innovate and forge ahead, so as to facilitate the informatization construction of university archives comprehensively; only in this way can the informatization management level of university archives be promoted continuously and the information resources of university personnel archives serve as an important cornerstone in the reform and development of the university.



© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.35

The Analysis of Accounting Information Activities Blending SOA

Yan-fang Niu
Shandong Financial Institute, Jinan, Shandong, China
[email protected]

Key words: accounting value chain, accounting information activities, SOA

Abstract. The accounting value chain can be considered as composed of five accounting information activities: capturing economic events, accounting business process integration, real-time financial reports, accounting real-time control, and accounting information knowledge management. How to fully realize the added value of these activities is the key to producing real-time, accurate information. This paper illustrates these accounting information activities in the light of the emerging SOA.

Introduction
Hutton [1] presented the accounting value chain, based on Porter's value chain and Elliott's information value chain, to illuminate how accountants can add value to business organizations by blending information and communication technology. It included five parts: capturing economic events, processing economic events, disseminating business knowledge, developing externalities, and providing assurance. With the development of IT integration, information systems are becoming more interconnected and integrated. In particular, Service-Oriented Architecture (SOA) has emerged and is maturing. SOA is a software architecture standard based on XML (Extensible Markup Language) that achieves interconnection between information systems through Web services (specifically through SOAP, WSDL and UDDI, which are based on XML and used to describe Web services). Accounting information activities on an SOA platform show some new characteristics. Following Hutton's ideas and blending in SOA, this paper divides the accounting value chain into five accounting information activities: capturing economic events, accounting business process integration, real-time financial reports, accounting real-time control, and accounting information knowledge management. A brief introduction follows.

Capturing Economic Events
Identifying, measuring, and recording accounting transactions is the foundation of the accounting value chain and lies at the low-value end of the spectrum. AIS need to collect the original, semantic information of business events into the database rather than record the debit-credit entries that are only useful for balance sheets; this is also called the event-driven design method. The most important thing in the event-driven method is to define a large number of data views (also called rules) to satisfy different accounting standards. The services of SOA can fully realize the event-driven principle. In SOA, defining different data views over events can be seen as defining services of different granularity, so it is also called the service-driven design method. Services at different levels complete events at different layers. The more fine-grained the services, the more they are involved in how the services are realized; the finest services are the classes, functions and objects in programming. Accountants are expected to take part in defining the middle- and coarse-grained services. They need to identify event resources, participants and locations; to distinguish the characteristics and attributes relevant to events; to identify and record the direct relations among resources, events and locations; to identify the control mechanisms of events; and to identify the trigger mechanism of every event, including the trigger, the triggering time, the response after triggering, etc. Once these rules are defined, IT staff can use services of different granularity to carry them out. AIS built on SOA can thus not only collect the original, semantic information, but also satisfy multiple accounting standards by assembling services of different granularity.
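As a purely illustrative sketch (none of these names come from the paper), a coarse-grained event-capture service might be declared as follows, recording the semantic event attributes the accountants are asked to identify rather than debit-credit entries:

// Hypothetical coarse-grained service interface for capturing economic events.
#include <string>
#include <vector>

struct EconomicEvent {                 // original, semantic business event
    std::string eventType;             // e.g. "GoodsShipped"
    std::string resource;              // the resource exchanged
    std::string participant;           // the participant involved
    std::string location;              // where the event happened
    std::string triggeredBy;           // trigger mechanism of the event
    double amount;                     // measured quantity
};

// One operation per business event; finer-grained services (validation,
// posting rules per accounting standard) would sit behind this interface.
class EventCaptureService {
public:
    virtual ~EventCaptureService() = default;
    virtual void capture(const EconomicEvent& e) = 0;              // record once
    virtual std::vector<EconomicEvent> query(const std::string& type) = 0;
};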


Accounting Business Process Integration
Before interpreting accounting business process integration, let us consider the meanings of business process integration (BPI) in the IT domain and the business domain respectively. In the IT domain, BPI means that IT staff coordinate legacy systems by integrating business processes, with the emphasis on technical realization. Since the popularization of SOA, BPI has taken on a special meaning: specific business functions are mapped to one or a group of services that realize the processes, and specialized business process description languages have emerged, such as BPEL (Business Process Execution Language) and BPML (Business Process Modeling Language). In the business domain, BPI means integrating the decentralized staff of different departments to complete sequences of work by re-designing processes, including strategy processes, operating processes, supporting processes, etc., so that the steps of each task can be minimized and organizational performance improved to a great extent. In an age of fierce competition, enterprises need flexible business processes to support information processing activities. Owing to the encapsulation and flexibility of services, systems built on SOA can adapt to diversified and individualized process changes by reassembling Web service processes, rather than fixing the process as ERP does. Hence, accounting business process integration can also be understood from these two domains. In the business domain: in content, it realizes all-around integration of the flow of funds, the flow of material and the information flow; in function, it transfers from after-the-fact recording to active managerial control; in boundary, it expands from within an organization to inter-organizational scope as information systems expand. In the IT domain, accounting business processes are totally embedded in service processes in SOA; the true value of SOA is integrating the information system process and the business process together through services. Take the order-to-payment process for instance (as shown in Fig. 1). Hutton [1] pointed out that the focus is how to leverage information technology to develop and integrate innovative business process models. Owing to the clear semantic characteristics of services, IT personnel, accounting personnel and business personnel can interact better to construct the information system process and meet the requirements of real-time enterprises. Real-time information transmission and sharing can then be attained.

Fig. 1 Illustration of accounting business process integration

Real-time Financial Reports
Real-time financial reports are the output of accounting business process integration and lie at the high-value end of the spectrum. AIS built on SOA can collect original data with the service-driven method and achieve full business process integration, which ensures timely accounting information processing, so implementing real-time financial reports is not a difficult task. A service itself is based on the common standard language XML; with advanced XML presentation languages such as XSL and XHTML, almost any format of report can be generated, requiring only some financial reporting services to change one format into another. Users can choose their preferred real-time financial reports according to their diverse needs.

Yanwen Wu

37

More importantly, these services allow remote authorized users to access real-time financial reports through the Internet, which is very important for a value-chain alliance interchanging financial information. Attention should be paid to the difference between financial reports based on the popular extended language XBRL and real-time financial reports in SOA. XBRL is an XML-based standard for handling corporate financial information and for simplifying the flow of financial statements, performance reports, accounting records and other financial information between software programs. Financial reports based on XBRL have been widely used in securities markets around the world. The survey by Premuroso and Bhattacharya [2] on the American stock exchange found that large-scale enterprises with good governance and liquidity were the first to adopt a voluntary XBRL filing strategy. For AIS built on SOA, realizing financial reports in XBRL is much easier, requiring only some services that change them into XBRL format. However, stakeholders inside and outside enterprises badly need more diverse, real-time financial reports and should not be restricted by the classification standards of XBRL.

Accounting Real-time Control
Yan and Zhang [3] brought forward the concept of accounting real-time control: accountants utilize time, physical and monetary information and modern technologies to compare and analyze business and organizational operations in real time, and intervene in business operations through direction, regulation and promotion. Accounting real-time control affects all the activities, from the low to the high end of the accounting value chain. Nowadays, much accounting research combines IT governance frameworks, such as COBIT, ITIL and ISO 17799, to probe accounting control problems. But the governance of SOA is not the same as these common IT frameworks. SOA brings new challenges with respect to the assurance of service quality, consistency, performance, predictability and, perhaps most fundamentally, trust between the providers and consumers of services. Services relevant to AIS involve sensitive data; if an unauthorized third party gets the chance to access important services, it may cause divulgence or unauthorized falsification of data. IBM therefore put forward the concept of SOA governance to help organizations establish proper controls. SOA governance is an extension of IT governance. It focuses on establishing a framework for assuring service quality and engendering trust between service providers and consumers as both individual services, and the service network as a whole, progress through their lifecycles. Here, this paper deals only briefly with business process control. Business process control in SOA must be process-oriented, not data-oriented. We recommend the business process control model suggested by Ratliff et al. [4], which categorizes five controls: feedforward controls, initiation controls, positive process controls, protective process controls and feedback controls, covering the full spectrum of controls. It provides a framework for comprehensively assessing business process risks and for evaluating and improving business process controls, and is appropriate for business, production and accounting processes. Although the model was not based on SOA, SOA can strengthen its role through service deployment.
Moreover, owing to the encapsulation of services, the technical details of the systems' bottom layers can be screened off. Accountants, business managers and IT staff can together set up the five service controls to assess business process risks and to evaluate and improve business process controls. Flowerday et al. [5] proposed that "real-time information integrity = system integrity + data integrity + continuous assurance". By establishing this model, systems built on SOA can realize continuous assurance by monitoring information processing in real time and simultaneously saving the corresponding audit trails, including the operating data, operating time, actors, and mirrored data before and after operations. Thus the completeness, correctness and usefulness of accounting information can be ensured.


Accounting Information Knowledge Management
As suggested by Elliott [6], leveraging knowledge is the most important and distinguishing competency of professional accountants. In the accounting information activities mentioned above, accountants must make full use of knowledge in order to attain the greatest value from the accounting value chain, so accounting information knowledge management needs to be taken seriously; it is the highest end of the accounting value chain. Knowledge management focuses on solving a variety of unstructured or semi-structured problems and on integrating the solutions into structured information systems. Systems based on SOA can construct many tools to discover, store, share and manage knowledge, which can also promote the sharing and exchange of accounting knowledge in AIS, but little research pays close attention to this area. For reasons of space, this paper does not explain it in detail.

Summary
AIS are indispensable and intimately connected with the other business sub-systems in integrated enterprise systems. How to fully realize the added value of accounting information activities is the key to producing real-time, accurate information. SOA is the information system architecture standard with the greatest potential at present and for the future, and AIS based on SOA can effectively add value to the accounting information activities discussed above. This paper has briefly described these activities in terms of Web service attributes, though many details could not be elaborated in this limited space. Facing rapid technological development, I hope that more scholars will pay close attention to research in this field.

References
[1] J.E. Hunton: Accounting Horizons, Vol. 16 (2002), p. 55.
[2] R.F. Premuroso and S. Bhattacharya: International Journal of Accounting Information Systems, Vol. 9 (2008), p. 1.
[3] Yan and Zhang: Accounting Research, No. 4 (2003), p. 3.
[4] R.L. Ratliff, K.F. Reding and R.R. Fullmer: Managerial Auditing Journal, Vol. 13 (1998), p. 101.
[5] S. Flowerday and R. von Solms: Computers & Security, Vol. 24 (2005), p. 604.
[6] Elliott: Accounting Horizons, Vol. 15 (2001), p. 359.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.39

Study of Assessment of Computer Aided Color Design

ZHANG HUISHU 1,a, ZHUANG DAMIN 1,b and MA DING 2,c
1 School of Aeronautics Science and Engineering, Beijing University of Aeronautics and Astronautics, Beijing 100191, China
2 School of Engineering and Art, Beijing Union University, Beijing 100023, China
a [email protected], b [email protected], c [email protected]

Supported by the National Program on Key Basic Research Projects (973 Program) (Grant No. 2010CB734104)

Key words: computer aided design; color matching design; objective assessment; primary flight display

Abstract. At present the capability for computer aided color design (CACD), both at home and abroad, is relatively weak. This paper studies one aspect of it: the objective assessment of computer aided color design. It takes color matching schemes for cockpit information displays as the example for the assessment experiment, which mainly deals with assessing the color matching schemes of the primary flight display. Four-dimensional assessment indicators are used to evaluate how quickly pilots recognize information, from which the ranking of the color matching schemes is obtained. Based on the findings of the experiment, the article proposes a generalized model framework for the assessment of CACD. The framework has four parts: color design of the human-computer interface of the prototype product; application of computer software to the design of color schemes; computer aided assessment of color design, including experiment design, application of computer programming to carry out the simulated experiment based on the prototype, and application of computer software to the statistical analysis of the collected data; and objective assessment of the color design schemes. This framework gives an objective description of color design and can be applied to various products, although different products require differently designed experiments. The framework provides new references for study in the field of CACD.

Introduction
CAID (computer aided industrial design) is a research field attracting the attention of researchers both at home and abroad, and CACD (computer aided color design) is an important component of that field. At present the capability for CACD at home and abroad is relatively weak, mainly because a sufficiently objective description of the evaluation in CACD is lacking. Evaluations such as the evaluation of the color harmony aesthetic measure and the evaluation of color matching design are mainly carried out subjectively. How to give a more accurate objective description of these evaluations in a CACD system is the problem that researchers are committed to solving. The purpose of this study is to make an objective assessment of computer aided color matching design by way of an experiment and to determine an evaluation framework for CACD, so as to provide references for a new evaluation of CACD. The article takes a product of industrial design, the color matching design of a cockpit display interface, as its example [1-6]. The evaluation framework of CACD is illustrated in Fig. 1.


Fig. 1 The evaluation framework of CACD

Computer Aided Color Coding Design of a Cockpit Information Display Interface

Color coding of display interface design. Color coding refers to visual coding that takes basic properties of color, such as hue, saturation and value, as codes. A person with trichromatic vision can generally recognize 9 colors, while the number of colors used in color coding is no more than 5 or 6 due to the restrictions of lighting conditions. In this paper the number of colors used in the color coding of the primary flight interface is kept to no more than 6 in order to optimize the color coding design of the PFD (primary flight display). The design of the characters (font, size and color), the color matching design of the AHRSA (attitude and heading reference system apparatus) and the background color matching design of the PFD are made to meet human factors requirements and are then assessed. 47 design schemes are put forward according to the principles and methods of industrial design, semantics and psychology as well as the principles of ergonomics; in these designs, the colors are applied through a computer-aided software interface. Some schemes are illustrated in Table 1.

Table 1 Some color schemes

Color matching design of AHRSA. With other factors on the PFD remaining unchanged, 35 schemes are given for the color matching design of the AHRSA. From the psychological point of view, color has three-level semantic meanings, that is, sensation, association and symbolism. Color as a sign has two functions, practical and aesthetic, as illustrated in Fig. 2. From Fig. 2 we know that these functions are achieved by the message compiled from the signs.


Fig. 2 Symbolic function

The message has intention, and for addressees it achieves its indicative function through the compiled message. Color orientation needs to be fully considered in the color matching design of the AHRSA: when the simulated horizon is taken as the boundary, the display can be divided into two parts, sky and earth. The color semantic signs that the sky conveys to people are blue tones, while those the earth conveys are yellow tones, sometimes green tones. Adopting the principles of semiotics, the design sets out 24 schemes with color deixis and 11 schemes without color deixis. The background color of the PFD is black, and each color scheme has two sets of RGB values for its corresponding colors. Based on these values, one can determine the brightness of a color, by which the matching of color systems can be divided as well. It is illustrated in Table 2.

Table 2 Categories of color schemes of AHRSA
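The paper does not state which formula is used to derive brightness from the two sets of RGB values; as one common possibility (an illustrative assumption, not the authors' method), perceived brightness can be estimated with the ITU-R BT.601 luma weights:

def luma_bt601(r, g, b):
    """Approximate perceived brightness of an RGB color with 0-255 channels,
    using the ITU-R BT.601 luma weights (an assumed formula, see above)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Example with the AHRSA blue and brown given later in the paper:
print(luma_bt601(32, 126, 214))   # about 107.9 (the blue)
print(luma_bt601(73, 55, 55))     # about 60.4 (the brown)

Colors whose luma values are far apart would then fall into different brightness categories when the color systems are divided as in Table 2.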

Under the condition that other factors on the primary flight display interface remain unchanged, we set out 12 schemes for changes of the background color. In using colors, the color of the graph and that of the background should not be similar in hue, brightness or saturation: similar colors are apt to be assimilated, which makes the graph vague. When the background area is large, dark, deep and thick colors should be used. When the color scheme for the AHRSA is blue and brown, we design the following four color systems for the background color: purple tone (pale purple, purple, rose violet, blue purple); yellow tone (yellow); green tone (bottle green, blue green, yellow green); and plain grey tone (dark grey and light grey). Because the yellow tone has the highest saturation, there is sharp contrast between the hue of the background and that of the AHRSA; with the purple tone, there is relatively weak contrast between the hue of the background and that of the graph; the green tone has a medium degree of saturation and value, so there is weak contrast between them [7][8]. Color design is carried out mainly with such software as Photoshop and CorelDRAW, and the various sizes of the color matching graph are kept in strict accordance with the original sizes in the aircraft, including scale spacing, the size of the graph and so on; the only thing changed is the color matching. Primary optimization is carried out by sequencing the color matching results. Besides, software is used to determine the RGB values of the matching colors so as to determine the degrees of color value and color saturation.

Evaluation Experiment Design of CACD

Subjects. 22 students (12 male and 10 female) from Beijing Union University, majoring in Industrial Design, aged 19 to 20, with no color blindness or color amblyopia, are the subjects of the experiment; they have a good grasp of the knowledge concerning the cockpit display system and control system of an aircraft, and their corrected eyesight is above 1.0. The subjects had taken a 60-hour ergonomics course and made a thorough study of the cockpit display system, the control system and the corresponding theories before the experiment.

Equipment and settings. The equipment includes 8 HP xw8600 workstations; the prototype for the simulated flight is a Boeing 777, which has 5 rockers (throttle lever, joystick, rudder,


elevator and flight power system) that can carry out the basic controls such as speedup, slowdown, pitching, yawing and rolling; one acoustic equipment; and one flight simulation system. On the basis of the Boeing 777 flight simulation, a corresponding experiment programme is developed and an evaluation of the color design is made. The flight simulation model of the cockpit display interface of the Boeing 777 (Fig. 3) is displayed on the 19-inch screen of an HP workstation, with a resolution of 1280×1024 pixels. For human-computer interaction, rocker, keyboard and mouse are adopted. The sitting postures of the subjects are not taken into consideration. The distance between the subject and the screen is 60 cm.

Fig. 3 Flight simulation model of cockpit display interface

The tasks of the experiment. The first experiment requires setting up takeoff scenarios in the simulation system. The airport of departure is San Francisco International Airport, runway 10L. It is noon in summer and the weather is fine; the aircraft takes off along the runway and starts to climb, and when the flight altitude reaches 9000 feet (2743.2 m) it switches to auto-flight. The indoor illumination is 872 lx, with no glare. The second experiment requires setting up landing scenarios in the simulation system; in order to reduce the experiment time, the flight altitude is set at 2000 feet (609.6 m). It starts at noon, the weather is fine and the aircraft lands on the designated runway. The third experiment has the same settings and tasks as the first one, only with a change of time: it starts at midnight. The fourth experiment has the same settings and tasks as the second one, only with a change of time: it starts at midnight.

Experiment methods. In order to avoid practice effects, the subjects take three weeks of simulated flight training (including takeoff, climb, navigation and landing) and spend one week getting familiar with the experiment procedure (including experimental operation, requirements and data recording). In order to avoid fatigue effects, the subjects take turns doing the experiment, with a five-minute break for each. The computer records data every second; the contents recorded include longitude, latitude, flight altitude, rolling angle, pitching angle, course angle and vertical speed, and when stall speed occurs an alarm is raised and recorded. Altogether nine indicators are recorded. In the takeoff experiment, when the aircraft climbs to 9000 feet it switches to auto-flight and the task is completed; the angle of elevation is kept at 13 degrees while the aircraft climbs. The procedure is: start up the Boeing 777 simulated flight system; before data recording at takeoff and landing, start up the pilot programme; then start logging, setting the file name of the log file in the pop-up dialogue box and clicking "Logging Enabled" to start recording the data.

The Results and Analysis of the CACD Experiment

The data collected from the experiment are sorted and analyzed in four dimensions. One-dimensional indicators include time, flight altitude, rolling angle, pitching angle, course angle, airspeed and stall speed alarm; this article mainly analyses the time indicator and the airspeed indicator. Two-dimensional indicators include climb and descent: while the aircraft climbs, the values of the dimensional indicators are 18, 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000 and 9000 feet; while it descends, the values are 1500, 1000, 500, 100 and 50 feet. Three-dimensional indicators include daytime and nighttime. Four-dimensional indicators refer to takeoff and landing. Continuity and stability are checked in the four-dimensional analysis. In order to make the data consistent and effective, reliability and validity


tests for the collected data are carried out in order to remove invalid data. Besides, scores are marked according to aviation flight standards.

The impact of different color matching schemes on target identification. The background color of the PFD is black, and the color matching schemes of the AHRSA are varied in the experiment. The experiment shows that different color matching schemes exert different impacts on visual attention (p=0.013) and different impacts on takeoff and landing (p=0.027). Through post-hoc multiple comparison tests of the simple main effects of target color matching, part of the findings of the data analysis are shown in Table 3.

Table 3 Post-hoc multiple comparison test of simple main effects of AHRSA color matching

Notes: mean difference refers to the mean difference of scores; p refers to modified p-values.

According to the mean differences shown in Table 3, the order of excellence of the different color matching schemes is given in Table 4.

Table 4 The order of excellence of color matching
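The statistics behind Tables 3 to 5 can be sketched as follows. This is a hedged illustration, not the authors' code: the file name, column names and the Bonferroni correction are my assumptions, standing in for a repeated-measures ANOVA followed by post-hoc pairwise comparisons.

import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multitest import multipletests

# Long-format data, one score per subject per color matching scheme
# (hypothetical file and columns: subject, scheme, score).
df = pd.read_csv("pfd_scores.csv")

# Repeated-measures ANOVA: does the scheme affect the score?
print(AnovaRM(df, depvar="score", subject="subject", within=["scheme"]).fit())

# Post-hoc pairwise paired t-tests with a multiple-comparison correction.
schemes = df["scheme"].unique()
pairs, pvals = [], []
for i, a in enumerate(schemes):
    for b in schemes[i + 1:]:
        sa = df[df.scheme == a].sort_values("subject")["score"].values
        sb = df[df.scheme == b].sort_values("subject")["score"].values
        pairs.append((a, b))
        pvals.append(stats.ttest_rel(sa, sb).pvalue)

reject, p_adj, _, _ = multipletests(pvals, method="bonferroni")
for (a, b), p, r in zip(pairs, p_adj, reject):
    print(a, b, round(p, 3), "significant" if r else "")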

Whether at daytime or at night, three kinds of color combination, namely the combination of a dark color tone with a neutral color tone, the combination of two neutral color tones, and the combination of a light color tone with a neutral color tone, are good color combinations, while the combination of two light color tones or of two dark color tones sits at the rear of the order of excellence, because such schemes contrast sharply with the daytime and nighttime conditions and trigger many alarm calls when the aircraft takes off and descends. Comprehensive comparison of the color matching schemes across the different flight tasks shows that neutral color tones work well for speed recognition and flight stability, but the degree of saturation and the hue also need to be taken into consideration. Color matching schemes with sharp contrast in hues, with high saturation, or with the same tones are in the rear positions, for example the matching of blue and yellow, the matching of cyanine blue and lemon yellow, and the matching of yellow and orange, which have high saturation. The experiment shows that the original color matching scheme for the AHRSA on the PFD of the Boeing 777 ranks in the top 10 among the 35 schemes. The experiment also shows that color matching schemes with color orientation are identified better than those without, which means schemes with color orientation are easily identified when carrying out different flight tasks; that is to say, the integrated use of color coding and semantic coding makes visual information output more efficient. The better color matching schemes are as follows: the matching of blue and yellowish, blue and toffee, blue and kelly, blue and black, and cyanine and toffee.

Impacts that the matching of background color and target color exerts on target recognition. The colors of the AHRSA (blue, RGB(32, 126, 214), and brown, RGB(73, 55, 55)), the font color and the colors of the other items on the PFD remain the same as in the original color matching; only the background color matching of the PFD is changed. Repeated-measures ANOVA then gives p=0.000, which shows that different background colors have significant impacts on target recognition. Post-hoc multiple comparison tests of the main effects are illustrated in Table 5, and the order of excellence of


background color matching is illustrated in Table 6. From Table 6 it can be drawn that the original background color, black, is not the best choice, at least when all the other factors remain unchanged; if the target color matching of the AHRSA changes, the order of excellence of the background color matching of the PFD needs to be checked again. Besides, from that order it can be drawn that a light color tone with a high degree of color value is not suitable for background color matching, and that color saturation should not be high; dark color tones and non-color tones are more appropriate for background color matching.

Table 5 Post-hoc multiple comparison tests of main effects of background color matching of the PFD

Notes: p refers to modified p-values.

A set A whose Hausdorff dimension Dim(A) strictly exceeds its topological dimension dim(A) is known as a fractal set. The other kind is graphs that, to a certain extent, are composed of self-similar graphs. In general, Dim(A) is not an integer but a fraction. However, theory and practical tests show that these two definitions do not capture all the information: there is no precise definition of a fractal, and it can only be characterized by given features. (1) A fractal set has details at arbitrarily small scales, that is, it has a fine structure. (2) A fractal set cannot be described by traditional geometry: it is not the locus of points satisfying some simple condition, nor the solution set of some given equations. (3) To a certain extent, a fractal has a specific form of self-similarity, for instance approximate self-similarity or statistical self-similarity. (4) The fractal dimension of a fractal set is always strictly larger than its topological dimension. (5) In many cases, fractal sets can be defined by simple methods or iterations. Some fractals meet all the requirements mentioned above, while others satisfy only most of them. It should be noted that most fractals in nature and the other applied sciences are approximate.

The Characteristics of Fractal Art

Art works that have a degree of aesthetics and are generated by computer or by hand using fractal principles are known as fractal art works. Because fractal art shows fractal visualization and artistry, it has the fractal's most essential and important characteristics: fractal dimension, iteration and self-similarity. The mathematicians Hausdorff and Besicovitch proposed the concept of continuously varying spatial dimension, now known as the Hausdorff dimension D_f, and Mandelbrot regarded the Hausdorff dimension as the fractal dimension. For a fractal object, if every side length expands L times and the object is thereby enlarged K times, the equation L^D_f = K is established, from which it is easy to get D_f = ln K / ln L. Let us take the Cantor ternary set as an example. The Cantor ternary set is created by repeatedly deleting the open middle


thirds of a set of line segments. One starts by deleting the open middle third (1/3, 2/3) from the interval [0, 1], leaving two line segments: [0, 1/3] ∪ [2/3, 1]. Next, the open middle third of each of these remaining segments is deleted, leaving four line segments: [0, 1/9] ∪ [2/9, 1/3] ∪ [2/3, 7/9] ∪ [8/9, 1]. This process continues to infinity. In this case, the Hausdorff dimension formula D_f = ln K / ln L (with K = 2, L = 3) gives the fractal dimension

D_f = log K / log L = log 2 / log 3 ≈ 0.6309

The fractal dimension is both a basic feature of a fractal and an important parameter for analysing the complexity of fractal art works; because fractal objects differ, fractal dimensions vary. The iteration count is another essential parameter for analysing a fractal and measuring its complexity: the more iterations, the greater the complexity. In other words, as the number of iterations increases, the details and characteristics of the fractal object become finer. In the case of the Cantor ternary set, the fractal dimension is 0.6309: after the first iteration there are two line segments whose lengths are one third of the original length; after the second iteration there are four line segments whose lengths are one ninth of the original length; and in this way, after the nth iteration we get 2^n line segments whose lengths are (1/3)^n of the original length. Combined with fractal art, the structure number of a whole fractal art work can easily be obtained: the nth iteration means that the fractal art work has n layers of similar structure. Self-similarity is the third main feature of fractal and fractal art. It is classified into absolute similarity, as for instance in Fig. 1, Fig. 2, Fig. 3 and Fig. 4, and the traditional similarity, whose feature is that the part is similar to the whole.
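As a small illustration of my own (not from the paper), the middle-thirds construction and the dimension formula D_f = ln K / ln L can be checked in a few lines of Python:

import math

def cantor(intervals, depth):
    """Return the closed intervals left after `depth` middle-third deletions."""
    for _ in range(depth):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3.0
            nxt.append((a, a + third))      # keep the left third
            nxt.append((b - third, b))      # keep the right third
        intervals = nxt
    return intervals

segments = cantor([(0.0, 1.0)], depth=3)    # 2**3 = 8 segments of length 1/27
print(len(segments), segments[0])
print(math.log(2) / math.log(3))            # fractal dimension, about 0.6309

Each iteration multiplies the number of segments by K = 2 while shrinking the scale by L = 3, which is exactly the counting behind D_f = ln 2 / ln 3.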

The Common Fractal Art Graphs

A fractal art graph is a visual and aesthetic mathematical graph that meets the three characteristics of fractals; in other words, graphs that are iterative, have fractal dimension and self-similarity, and show a degree of aesthetics are fractal art graphs. By the definition of fractal, fractal types vary and their graphs differ, for example mountains, clouds, tree branches, sunflowers, sea horses and so forth. Most of these are irregular fractal graphs, which are not easily expressed mathematically. Some common fractals and their graphs are introduced below.

The Peano Curve and the Sierpinski Space

The Italian mathematician Peano (1858-1932) created the space-filling Peano Curve (Fig. 1) in 1890. The Polish mathematician W. Sierpinski created the sponge-like Sierpinski Space (Fig. 2).

Fig.1 The Peano Curve


Fig. 2 The Sierpinski Space

The Koch Curve and the Koch Snowflake

The Koch Curve is built from a line segment and has a self-similar structure. Its construction is as follows: create a line segment L0, trisect it and keep the two end pieces; then transform the middle piece into two line segments of equal length meeting at a 60° angle, each of length L0/3. This is the first operation, n = 1 (the first iteration). The second operation trisects each of the 4 segments of length L0/3 obtained above, giving pieces of length L0/9, and transforms their middle pieces into two equal-length segments meeting at the 60° angle. Iterating without limit, the curve with this self-similar structure is known as the Koch Curve. Its fractal dimension is

D_s = log N / log(1/β) = log 4 / log 3 ≈ 1.2618

as shown in Fig. 3. The Koch Snowflake is very similar to the Koch Curve: it has the same generation method, but its initial figure (meta-fractal) is a triangle. From the original polygon we get a hexagram after the first operation; applying the generation method of the Koch Curve again, we get a 48-gon after the second operation. The graph generated by applying this method infinitely is called the Koch Snowflake.

Fig. 3 The Koch Curve

Fig. 4 The Koch Snowflake
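A minimal sketch (my own illustration, not the paper's code) of the Koch construction: each pass replaces every segment with N = 4 segments scaled by β = 1/3, consistent with D_s = log 4 / log 3.

import cmath, math

def koch_step(points):
    """Apply one Koch iteration to a polyline given as complex points."""
    out = [points[0]]
    for p, q in zip(points, points[1:]):
        d = (q - p) / 3.0
        peak = p + d + d * cmath.exp(1j * math.pi / 3)   # 60-degree bump
        out.extend([p + d, peak, p + 2 * d, q])
    return out

pts = [0 + 0j, 1 + 0j]
for _ in range(4):                 # n = 4 iterations -> 4**4 = 256 segments
    pts = koch_step(pts)
print(len(pts) - 1, math.log(4) / math.log(3))

Running the same step on the three sides of a triangle, instead of a single segment, yields the Koch Snowflake described above.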


The Mandelbrot Set and the Julia Set

When Mandelbrot studied z → z^2 + c in the 1980s, he found the Mandelbrot set, or the M set. Define z_0 = 0 as the initial value and C as a complex constant. Iterating the equation Z_{n+1} = Z_n^2 + C, if Z_n remains bounded as n tends to infinity, then C belongs to the Mandelbrot set (Fig. 5). The Mandelbrot set and the Julia set can be considered twins: the Mandelbrot set ranges over all constants C in the complex plane, while the Julia set specializes in one given constant and examines every starting value in the complex plane. For the same iteration formula Z_{n+1} = Z_n^2 + C with a given constant C, if Z_n remains bounded as n tends to infinity, then Z_0 belongs to the Julia set. Graphs of the Mandelbrot set and the Julia set are created by the same method, but their initial conditions, boundary conditions and iteration variables differ. The Julia set is shown in Fig. 6.

Fig. 5 The Mandelbrot Set

Fig. 6 The Julia Set
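The escape-time test described above can be sketched as follows; this is an illustrative implementation, with the iteration cap and the escape radius of 2 being standard choices rather than values taken from the paper.

def in_mandelbrot(c, max_iter=100):
    """c is in the Mandelbrot set if the orbit of z_0 = 0 stays bounded."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:           # orbit escaped; c is outside the set
            return False
    return True

def in_julia(z0, c, max_iter=100):
    """For a fixed constant c, z0 belongs to the filled Julia set
    if its orbit under z -> z*z + c stays bounded."""
    z = z0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return False
    return True

print(in_mandelbrot(-1 + 0j), in_mandelbrot(1 + 0j))   # True False
print(in_julia(0j, -0.8 + 0.156j))

Scanning c over a grid of the complex plane with in_mandelbrot, or z0 over a grid with a fixed c in in_julia, produces the familiar images of Fig. 5 and Fig. 6.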

The Generation of Fractal Art Graphs

Fractal art means art works created by fractal principles and fractal theories. Works generated in this way are very common, including fractal graphs directly generated by mathematical functions, graphs created by fractal theories (by computer, by hand-painting or even by number arrangement), product designs, environmental art designs, architectural designs, video works and sculptures.


The algorithms of fractal art divide into two-dimensional and three-dimensional algorithms. A two-dimensional fractal can be obtained by transforming a one-dimensional figure; the general algorithm is the one-dimensional midpoint algorithm with random translation. First, create a line segment with ends A and B and find its midpoint C. Then move C randomly a small distance, obtaining the two line segments AC and CB. Repeating these steps enough times generates a desirable two-dimensional fractal (see the code sketch at the end of this section). A three-dimensional fractal can be created from the two-dimensional plane in the same spirit. In practice, what is generated in the process of creating fractal art works cannot be fully predicted, especially when irregular fractal functions are used, so to a certain extent fractal art is non-controllable. Accordingly, fractal-generated graphs may fail to have aesthetic features, to reflect the real world or to be accepted by the public, and the designer must try parameters extensively to create a fine fractal art work. This iterative, unceasing creative process both spurs and perfects the designer's thinking, and extraordinary fractal art works are generated only by constant exploration. Because of their limited background in mathematics and computer programming, teachers and students engaged in art design have often been discouraged by fractal art. Fortunately, many fractal enthusiasts have created a variety of simple and well-developed programs such as Ultra Fractal, Iterations and Fractint 20.0. These programs greatly compensate for designers' lack of mathematics and programming skills and stimulate the development of fractal art.

The Applications of Fractal Art for the Package Design

The Applications of Security Graphs for the Package Design. At present, counterfeiting, piracy and forgery threaten the security of many normal financial activities; counterfeit products account for about five percent of total world trade volume. According to experts, one cause is that the producers of famous brands and high-quality products do not protect themselves sufficiently. Current statistics show that, because anti-counterfeiting technologies are not universally popular and cannot meet marketing needs, fewer than 10 percent of famous-brand, high-quality products are equipped with anti-counterfeiting techniques. The general anti-counterfeiting methods are the special grating graph design, the quickly-reflecting background design, the crystal template design and the path definition design. Judging from the methods and principles of these graph designs, fractal art achieves the best anti-counterfeiting effect, for the following reasons. First, because of its iteration and complexity, fractal art can make anti-counterfeiting images extremely complex: with different iteration counts, the same graphs have diverse layers, whose structures, principles and parameters cannot be decoded by conventional approaches. Second, thanks to fractal aesthetics, fractal anti-counterfeiting images provide a degree of visual aesthetic feeling and improve the package decoration. If fractal graphs are combined with such popular designs as the special grating graph design, the quickly-reflecting background design and the path definition design, counterfeiters will be daunted by the anti-counterfeiting images on the packages.
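Returning to the one-dimensional midpoint-displacement algorithm described at the start of this section, the following is a hedged Python sketch; the function name, the displacement scale and the decay factor are my own illustrative choices.

import random

def midpoint_displace(points, rounds, scale=0.25, decay=0.5):
    """points: list of (x, y) vertices; returns the displaced polyline."""
    for _ in range(rounds):
        out = [points[0]]
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
            my += random.uniform(-scale, scale)   # random nudge of midpoint C
            out.extend([(mx, my), (x1, y1)])      # segments AC and CB survive
        points = out
        scale *= decay                            # smaller nudges each round
    return points

profile = midpoint_displace([(0.0, 0.0), (1.0, 0.0)], rounds=8)
print(len(profile))                               # 2**8 + 1 = 257 points

Shrinking the displacement each round keeps the curve continuous while leaving detail at every scale, which is what gives the result its fractal character.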
The Applications of Fractal Art for the Package Decoration Design

Fractal art works are applied in a variety of fields, including the packaging of cosmetics, alcohol, tobacco and food, and the decoration of finery, gift boxes and jewelry; see for instance Fig. 7 and Fig. 8.


Fig. 7 A Food Package

Fig. 8 A Jewelry Box

Conclusion

Fractal art works display homeostasis, in which every part of the graph constrains the others in the changing process; mathematical harmony, in which each figure and hue transforms smoothly; and echo, which is neither the symmetry of the symmetrical with the asymmetrical nor of the up with the down, but of the part with the whole. Among fractal art works, branching, twisting, irregular edges and diverse changes are very common, bringing a pursuit of the beauty of wildness and an interest in undeveloped nature.


© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.130

Train Mode Research of Software Outsourcing Talents

CUI Wei 1,a, LIU Yang 1,b, LIN Yan 1, QIAN Si-yu 1, YE Jia 1, YANG Hai-feng 2, LI Ya-jun 2

1 Transportation Management College, Dalian Maritime University, Dalian, Liaoning, P.R. China

2 Development Planning Department, Chongqing Electric Power Corp., Chongqing, P.R. China

a [email protected], b [email protected]

Key words: Software outsourcing; Talent; Train

This article has been supported by the Chinese National Natural Science Fund (2010), "Strategy research of using supply chain partner's knowledge in enterprise knowledge creation process", project approval code 71072124. It was also supported by the Fund of Liaoning Province reform of higher education "Research of service outsourcing personnel training mode" (2007), the Fund of Dalian Science and Technology Plan "Research of ways expanding Dalian outsourcing service industry market" (2008), the Fund of Dalian Science and Technology Plan "Research of Dalian comprehensive prediction system of electric power and energy" (2009), the Fund of Chongqing Electric Power Corporation "Research of Chongqing comprehensive prediction system of electric power and energy" (2008) and the Project of Dalian Maritime University reform of graduate education and teaching "Construction of teaching content system of Management Science and Engineering based on the platform of motion and internet of things" (2010).

Abstract. This article analyzes the characteristics of Chinese talent demand and the present training problems in software outsourcing. Detailed training modes are discussed. The work introduces the specifics and the relative advantages and shortcomings of several different software talent training modes, and concludes in favor of practical training based on realistic enterprise cases and composite language study.

Introduction

With global software outsourcing trends in the ascendant, China faces a historic choice and opportunity. On the one hand, China's software outsourcing industry is still full of opportunities; on the other hand, it already faces a huge threat: gathering clouds and a thorny road ahead. Despite policy support, hardware environment building and soft environment development, and although quite a few software parks and software export bases have emerged and the government, enterprises, colleges and universities have tried their best, the software industry is still facing an "energy crisis" of this "green industry". If the software outsourcing industry is to become bigger and stronger [1], scale and talent are the two key issues; the two complement each other, but in the final analysis both come down to talent. A large number of software outsourcing companies need software engineers who understand the basic knowledge of primary outsourcing; mid-level technical and managerial personnel who have combat experience in outsourcing projects and can lead a team; and high-level talents familiar with customers' language and cultural background, proficient in the rules of the international outsourcing industry, with the ability to open up foreign markets. The lack of software outsourcing talent shows the market demand for software outsourcing, but it also reflects that college and computer education and training cannot suit the real needs of the software outsourcing business. Software outsourcing is not low-threshold work; it needs export-oriented, composite-type talents who master professional knowledge and language skills and have an international background. What, then, should the personnel training model for software outsourcing be? This question is still worth consideration.


Characteristics of software outsourcing demand

The personnel structure of software outsourcing needs. In China the most urgent need now is the development of the software industry. For the software industry, staff demand is more specialized than commonly understood, and it especially does not match our existing model of software education. Under the current education model, the trained software talent structure is "spindle-shaped". But as in traditional industries such as manufacturing, the real need of the software industry is a pyramid structure: the team must have a reasonable structure of software talents, like a "pyramid" [2], in which the top 5% are architect engineers; 35% are high-level management, technology and product people, senior software engineers, that is, what we often call project managers, CTOs or technical directors; and the bottom 60% should be software coders, called programmers. In terms of the levels of outsourcing human resources, the domestic and foreign outsourcing markets are not short of talent at one level only, but face a full-range supply shortage. The domestic software outsourcing industry lacks not only primary software engineers who master the basic knowledge of outsourcing, but also mid-level technical and managerial personnel with combat experience in outsourcing projects who can lead teams, and even more the high-level talents familiar with customers' languages and cultural backgrounds, versed in the rules of the international outsourcing industry and able to open up foreign markets. Therefore all three levels of talent currently have supply problems.

Software outsourcing needs of human capacity. Given the diversity of outsourcing services and the rapid development of the industry and the international market, software outsourcing differs significantly from other traditional service industries, and the required skills and qualities of outsourcing talents have their own characteristics. From the analysis of personnel requirements, software outsourcing is not low-threshold work; it needs export-oriented, composite-type talents who master professional knowledge and language skills and have an international background. On the whole, integrative abilities such as language communication skills, industry expertise, experience in project management, collaboration capability and a comprehensive understanding of culture are necessary for engaging in the software outsourcing industry.

Domestic software outsourcing personnel training modes

Being unable to find the right talent has become a universal problem when software outsourcing companies recruit. Outsourcing human resources training should be comprehensive and multi-channel, with a wide range of enterprises involved in a variety of training models, in order to fundamentally solve the industry's shortage of qualified personnel [3]. At the current level of training, domestic software outsourcing enterprises draw talent from several sources: professionals moving within the industry, college graduates, international talent, talents cultivated by the businesses themselves, talents from training institutions and so on.

Educational channels: software colleges and technical training colleges.
At present, institutions of higher learning are the main channel of personnel training in China; these educational institutions have unparalleled teaching facilities and teachers, and every year they deliver a large number of computer professionals and graduates. These graduates have mastered the software basics and the English reading and writing skills needed in the software outsourcing industry. However, as a legacy of the long period of planned economy, schools lack channels for learning what kind of graduates the market needs; in addition, the pace at which colleges and universities update teaching materials lags significantly behind the pace of outsourced software development technology. The institutions alone therefore cannot train fully suitable software outsourcing talent: these graduates lack experience of outsourcing projects, of large-scale projects and of communication and team training. Even so, they are the industry's largest pool of potential outsourcing resources, with strong plasticity and cultivability.

Social self-school training and certification bodies. Social self-school training and certification bodies usually aim at a certificate or at job skills; they focus on learning that is relevant and


practical. However, due to the lack of good teachers, the over-emphasis on training for profit, the uneven foundations of the students and the lack of rigorous examination and evaluation, a number of institutions greatly reduce the learning effect for their students.

Customized and joint training by software outsourcing enterprises and agencies. Thanks to the participation of the software outsourcing enterprise, this training model is more targeted and integrates more closely with actual outsourcing projects, and trainees must pass a formal assessment before entering the company, so this way of cultivating talent has a higher success rate. However, because software outsourcing enterprises and training institutions often cooperate only loosely, enterprises lack monitoring and tracking of the process, leaving course content, progress and learning out of line with actual requirements.

Software outsourcing business in-house training. Because of the difficulty of recruiting suitable outsourcing personnel, and because students from social training institutions cannot meet companies' expectations, many software companies decide to strengthen in-house staff training. Its greatest advantage is that the training staff are familiar with the outsourcing business and have rich experience of outsourcing projects; by taking part in outsourcing projects, trainees gain direct access to the most important work skills, workflows and ways of communicating. However, due to the high cost of in-house training and the losses incurred when trained staff resign, software outsourcing in-house training is much constrained.

True enterprise case training and mixed-language teaching training model

The various training channels all have their pros and cons. Facing the huge talent gap in outsourcing, is there a "silver bullet" for the shortage of qualified software outsourcing personnel? After comprehensive analysis and comparison of the various outsourcing training methods, professional training in which software outsourcing enterprises participate fully may be the good medicine for the problem. The "enterprise training mode" is not a short-lived vision: in fact IBM, HP and other large software companies have already implemented it in our country and achieved good results. China's leading software outsourcing company Neusoft has also researched this model and tried to promote it. After five years of summarizing and tempering, Neusoft has officially launched its own branded software engineer training and certification system and is actively promoting it as one of the best modes for training software outsourcing professionals domestically.

Successful experience in training talents for software outsourcing to Japan. Through studying Neusoft, we find that in recent years Neusoft has consistently achieved rapid growth in the number of its software outsourcing personnel. In the final analysis, this is because its Japan-oriented outsourcing human resources (that is, programmers) are cultivated from three sources: first, colleges and universities with a customized training model; second, Neusoft's own training center, with its certification system and project-based curriculum; and third, social recruitment followed by in-house training.
In their training modes, curricula and teaching methods, all three sources use the unique software outsourcing training of "true enterprise case training with mixed-language teaching". Case-based teaching and mixed-language learning are the key factors enabling Neusoft to achieve both quantity and quality in outsourcing talent in the short term.

Successful experience in training talents for software outsourcing to Europe and the United States. Software outsourcing talents for Europe and the United States pay more attention to foreign language training, especially English-speaking ability. As Neusoft speeds up the internationalization of its business, English has become a necessary tool for its staff, and the English ability of the whole staff has become a key factor in the company's international competition. As a result, besides using the "true enterprise case teaching" capacity-building model of the Japanese software outsourcing training, in order to help employees adopt a reasonable approach and improve their English proficiency effectively and as soon as possible, Dalian Neusoft Education Services


Limited and other Neusoft departments jointly launched, integrating their resources, the "mixed-language learning (Blending Learning)" training model for different learning needs. The training mode is characterized by "online-offline interaction, face-to-face plus online, with enhanced autonomy". With this "mixed-language learning" training model, Neusoft takes full account of the diversity of staff training needs, considering not only geographical and time differences but also individual training needs and existing differences. For in-service staff training, the biggest problem is that it is difficult to gather staff at a unified time and place for long face-to-face training, so E-learning, face-to-face learning and foreign-language telephone counseling are combined in the "mixed-language learning" training model: learning is mainly online, the questions that arise in practice are accumulated, and the key issues are explained after being summarized. This has advantages such as requiring only short periods of concentration and being more targeted, and it complements online training well, so better and desired learning effects can be achieved. Based on computer networks, this work designed a personalized training solution that integrates evaluation into a personalized learning program, that is: proficiency test → capability analysis → study proposal → online learning + face-to-face language training + foreign-language telephone counseling → re-test of proficiency. The program serves employees' specific learning needs at different levels through a phased approach to training lasting at least three months. It mainly focuses on English study, and can also be carried out in other languages, such as Japanese. The training mode has been applied in Neusoft's internal software outsourcing project teams for Europe and the United States, and has achieved good results.

Summary

Training with the "true enterprise case + mixed-language teaching" model turns software outsourcing training into systematic, professional, position-oriented training. According to demand, it can launch reverse-guided evaluation, divide study into the needed knowledge points and a detailed curriculum, and then, through whole actual project cases and hybrid-language teaching methods, impart the essential knowledge to the students participating in the training. After they pass the online standardized proficiency test of the examination system, they can be delivered to the enterprise. This training model has proved effective through the practice of Neusoft.

References

[1] CCID Consulting: Annual research report for software outsourcing market 2008-2009.
[2] Huo Tai-wen: A brief introduction to Indian outsourcing development. Programming, (2006).
[3] Li Nai-xiang: Innovating IT education and accelerating IT service outsourcing talent training. Computer Technology and Development, (2009).

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.138

The Study on Metaphor and Interest of Graphic Design

Kang Lin, Li Chengmao

Guangxi University

Key words: Character, Modeling, Aesthetic, Concept.

Abstract. All in all, graphic design is widely used in newspapers, magazines, books, symbols, advertisements, packaging and many other media. It improves the functional and aesthetic identification of information and embodies the aesthetic characteristic of brevity, which is a rare element in modern design. When Mr. Lei Guiyuan discussed the image of Taichi in his book The First Exploration of Chinese Design Method, he put forward an important viewpoint: a design is beautiful not only because it has beauty in essence, but also because it has beauty in form.

Introduction

Character is not just a form for transmitting information but also plays an important role in modern graphic design. General characters should be recreated imaginatively, and the graphic design of characters is one of the expressive ways to implement this process. It makes the perceivability and visualization of patterns much stronger; at the same time it makes information transmission more imaginable, interesting and artistic, and it helps to enrich the spiritual connotation and attraction of the design. The graphic design of characters takes the shape of characters as a picture and their meaning as the foundation of originality. These aspects, constituted together, speed up the expression of information with new ideas and bring more reasonable collocations of the characters' strokes. All this gives interesting figures to information and transfers characters into visual graphic languages.

The Beauty of Succinctness

Succinctness, the basic rule for the graphic design of characters, is very important. Every character and figure in a design work should have its focal point, characterized by accuracy, formality and objectivity. To attract people's attention quickly in the process of information spreading, we must attach high importance to the shape of the design work and give the work the greatest degree of succinctness. We also need to sum up the beauty of concision, the art of abstraction and the accomplishment of succinctness to reach the highest realm of the beauty of succinctness. In modeling, the beauty of succinctness in graphic design is embodied in the rational factors conveyed by the information on goods' packaging: many excellent packaging designs have won a great reputation through their exquisite and beautiful characters, from whose visual figures consumers not only get accurate product information but also an aesthetic experience. The beauty of succinctness in modeling is also embodied in the application of graphic characters in the titles of poster advertisements. The title is the main theme of a poster advertisement; it aims to attract attention and put the related contents and information in an eye-catching position through succinct writing. The visual image of graphic characters can increase the striking force of information transfer, strengthen the expressive force and intrinsic interest of the title characters and, by strengthening the graphic meanings of the characters, deepen consumers' impressions of the advertisements. The beauty of design succinctness in graphic characters is mainly embodied in logo design. In shape, English characters are marked by their succinct strokes, and a logo expresses its rich connotation through very succinct shapes.
Their succinctness lets English characters be applied broadly in logo design after graphic treatment. English characters are succinct in figure and structure, easy to standardize, and specialized in sending information speedily. For instance, the famous master of the British Pentagram


design company, Mr. Herman, holds that the simple is more impressive than complex figures: it is more striking than the complex and better satisfies the appreciation of modern life. To keep the logo succinct and usable, the Americans omitted the characters in the brand Nike's logo, retaining just the "OK" hook. The logo of the 18-year-olds' rite of passage is composed of the Arabic number "18"; this graphic character is just like a bird flying high. As all the above shows, succinctness is the specific reflection of legibility, applicability and aesthetic feeling.

The Feature of Association in Graphic Design of Characters

Association is the situation in which a person, a thing or a concept makes people think of another person, thing or concept. The association of the graphic design of characters is a kind of connection with human experience and memory through our sensory receptors: it is an activity of thought caused by visual sights and stored in the mind, finally producing a new sense. Some graphic characters expound their conclusion directly, while other parts leave room for people to complement and replenish.

The Expansion from Concept to Image

The character itself is also a kind of image whose key point is the concept. A concept is an impression of thinking that reflects an object's unique attributes in the way people come to know the world; it is a rational summary of the object's peculiar attributes by means of language and letters [1]. Most concepts originated from figures; they reflect figures and sum them up, and image and concept are connected essentially and naturally. In the transmission from concept to image, we need the character as a matrix to train our visual associative ability, which means our association should expand from one image to many other images while keeping the main features and signals of the original situation. In the specific manifestation of the visual image caused by concept expansion, or in the continuing visual mode of thinking that contains one meaning with various figures or one figure with various meanings, the visual transmission between systematic concept and visual image places the concept, as a perceptive image, into the thinking space of the visual images expanded by the concepts. It makes use of concept association and graphic visual research to turn the concepts of graphic texts into the training foundation of visual image expansion, and this process gives the design a very promising image formulation. Ruth Strauss Gaynor and Elaine Napier Cohan hold the same idea: the relationship between art's signs and visual reality is very obvious and direct, so it is more powerful than linguistic signs. Actually, written letters originated from direct hieroglyphs, and they are imaginative and general. Chinese is among the world's scripts that relate most closely to painting and calligraphy; Chinese characters take meaningful characters as their symbols [2]. The process of the graphic design of characters is to gradate the characters' concepts and then transmit them into imaginative visual sights, making use of the words' visual experience and the figures' imagination in designing. For example, Mr. Ding Bangyin discussed this in the article The Meaning of the Character Shan (山).
The character "Shan" (山), which means mountain in English, can take its common meaning, an amplified meaning or an implicational meaning. The development of the concept aims at the development of the image; in the age of information explosion, we need to develop designing ideas and intelligence. No matter whether the image activates the concept or the reverse, and no matter whether it is image and concept that activate our intelligence or not, we must develop a new way of designing for the expansion from concept to image. Figure 1 illustrates the symbol of the Chinese welfare lottery (designer: Chen Nan). The character "中" shows the activity's national sense; the italic characters embody a vivid contemporary atmosphere; and CP consists of the initial letters of the lottery's pinyin words. In the middle, the transversely placed rectangle expresses the amount and the justice of the lottery, taking the Chinese meaning that the union of three can be called a grand union. This logo connects the concept of the lottery with the character "中", and this connection becomes the concept of the Chinese welfare lottery. The meaning of this concept expands


beyond being a representative of China or of the lottery itself, into a totally new concept. This logo appropriately applies the graphic design of characters to the expansion of the character's concept and finally creates a new visual image.

Figure 1

The Association of the Graphic Design of Characters

To understand the feature of association in the graphic design of characters, we must first gain a good understanding of characters' association. The visual image of Chinese characters always contains not just the literal content but also an inner spiritual essence; it brings people senses of association, excitement, memory and other feelings.

Figure 2 An example of association in characters' meaning and image

Firstly, the associative thinking of characters' image and meaning radiates outward. The image of characters is expansible and associative. The image of this association is unviewable; that is, the figure in the meaning of characters is very abstract, and the expression of the meaning carries a hallucinatory sense of inaptness, unreality and instability. Take Figure 2 for example: we can associate wood with forest and connect wood with board and other images. Take another example, slow: we may rapidly think of the image of something or somebody, an athlete's jogging, the turtle, the snail and so on. All these characters belong to the concept of using meaning to annotate the figure and relating the figure to the meaning. The broader the inner connotation becomes, the more abstract the associated image is. The content of the association can be one single character, double figures or plural ones: for example, the noun Beijing may remind us of Tiananmen Square, the Great Wall, roast duck and so on. This way of association actually uses the association of letters and characters with figures to visualize the characters. The structure and combination of Chinese characters make people associate more and fully appreciate the inner figure. We should advocate using characters to annotate pictures, appropriately apply the interactions between characters and images, and bravely learn from others. Secondly, grapheme association concerns characters' shapes. It is human feeling, both physiological and psychological, as well as the association of human thinking under visual sensation, mainly embodied in the association of hieroglyphs. For instance, the word "Xin" (心) in Chinese represents the shape of the human heart, and the word "Xiao" (孝) shows a son carrying his parent. If a word doesn't resemble anything, then every effort is made to instruct or suggest on the basis of the pictographic character, using a concrete image to represent an abstract conception. The association of English letters avails itself of the horizontal and vertical association produced by the exterior aesthetic feeling of shape. For example, the letter "A" is like a pyramid with the glory on the top, meaning that the peak is up, and the letter "K" stands for striding forward and marching fearlessly onward. English letters can also be associated with animals, which gives the imagination images of both letters and animals: the letter "B" can make people associate it with a penguin, "S" with a snake, "L" with dark and so on.


The advantage of image association is that thinking can keep direct relations with the object; meanwhile, the aspect of thinking unfolds its pattern in surface shape, that is to say, the thinking of image association has broad aspects and can handle the parts and moments of the object comprehensively [1]. We should reinforce observation, memory, expressive force and imagination of the image, break traditional thinking habits and create a new world of imagination. Thirdly, the association of graphic design embodies itself on the basis of truth, goodness and beauty. Pursuing the new and looking for beauty in life, graphic design raises the association between letters and the common shapes of life to the height of art to innovate their visual shape, thus living up to the association analysis of letter meaning and shape meaning and to the discussion of the combination of letter and shape. Objective things communicate with each other in all kinds of ways; this communication is just the bridge of association, through which we can find the interior interaction of things far from each other, or even of two irrelevant things. People can sense the contents carried by the artists through the visual image of the graphic design. For example, Figure 3 was designed for the "Lahti International Organ Festival" by the Finnish designer Kyösti Varis. The designer has graphically treated the characters and makes people associate them with the black and white keyboard of the organ. Not only does the placard embody the theme vividly, but it is also full of fun, as it enlightens people to think in their own styles.

Figure 3 The poster of the "Lahti International Organ Festival"

All in all, association is the key to innovation and the basis of the formation of design thinking. We can create a new world of innovative thinking through association, transforming abstract thinking into concrete pictures and creating new images. Colorful association has opened a large thinking space for graphic design innovation and brings limitless possibilities for its presentation.

The Metaphor of Graphic Design

Metaphor uses one thing to demonstrate another in order to make its meaning more vivid. The semanteme of a picture creates a visual language in the form of the things associated by simile and metaphor. A picture can imply different meanings related to each other; different forms and different images may carry different implications, and so may different targets [3]. Metaphor can make the common characteristics of a certain deep structure be communicated through visualized language, which not only makes people feel compatible and close to nature but also goes beyond the audience's expectation. The metaphorical sense of graphic design embodies itself in the understanding of the metaphorical language of the character-picture art. It can extend the exhibiting dimensions of picture innovation and realize the innovation of language, making the creative conception adapt to society in a new aspect and keep in line with the cultural standard of the whole society. Besides, the metaphor of graphic design can make the visual image coordinate with the meaning of the letters, so that people can sense the contents carried by the artist just through these visual languages. In the field of visual art, the metaphor of graphic design has a profound enlightening effect on graphic creation. Many graphic designs have been shaped to be abstract, beyond experience and top-class. They include the honesty and blur which point to the aesthetic feeling and intention

142

Manufacturing Systems and Industry Application

exclusive to the mental world, and extend the meaning of the letters, which makes the visual images of the pictures more comprehensive. The perception linguist Lakethe thinks that metaphor is the process of recognizing pictures in the conception system. A metaphor has stepped over the graphic recognition between two conceptions. It comes from the practical experience of the designers and has decisions of logical conception, which provides a wider space for semantic and an easy-to-understand vivid picture image. For instance, figure 4 is the posters designed by the Poland designer Hoaxes Lewinsky for the “Salon” Judaic art exhibition and the “subject” cinema. The metaphor of graphic-design has extended the meaning of the letter, which makes the visual image more comprehensive and profound.

Figure 4 Posters for the "Salon" Judaic art exhibition and the "subject" cinema

The Interest of Graphic Design
The so-called interest specifically refers to humans' aesthetic ability. Such ability is called beauty if it is interior, while we call it interest if it is exteriorized. [4] Interest is something that makes people feel pleasant, interested and attracted. Graphic design full of amusement can deepen the work and improve its artistic spice and taste. It can arouse people's attention immediately, intensify the audience's perception of dynamics and extend the perception time. Thus, people may pay special attention to something, improve their understanding and memorizing of the information, and the designed work finally achieves its goal. In Chinese, most characters have only one meaning. A morpheme cannot represent a certain "interest" independently; only when it has formed special relations can it have artistic interest and implication. Through strokes and shapes, graphic design can form new words, sometimes indirectly. When you conjecture the intention, you will gain a deeper understanding of its meaning and sense its interest, thus exciting your aesthetic feeling. For instance, when converting the character "fu (福)", we extend its intention and artistic capacity and provide a wider space for the audience, making the characters more attractive. Graphic design also combines a picture with special meaning and the purpose of information communication with the shape of the word itself, giving the work visualized characteristics. All the concrete techniques, such as metaphor, exaggeration, symbolization, association, contrast, omission and inversion, create amusement. For example, a character of constant strokes has a fixed structure, and if we treat certain strokes or structures specially, we can certainly add some amusement. Randomness of the character can create new visual dynamics. We can also affiliate a certain concrete or abstract image before or after a character, as all these images are the key points of the designer's thinking as well as the centres of the amusement. After a character has been graphically treated, it can become an image with some special meaning, form the amusement centre, transmit information, make the picture more vivid and lively, and also intensify the work's attraction and infectivity, producing a better attention effect. At present, there is a phenomenon that many designers ignore the accuracy of the design in pursuit of the interest of graphic design. The result is that a multitude of visualized but confusing works have been born. Hence, special attention must be paid to the accuracy and reasonable interest of graphic design.


Conclusion
Designs are full of overtone, and graphic design has beauty both in form and in essence. As long as the visual image of graphic design appeals to people's needs both psychologically and physiologically, it has the characteristics of beauty.

References
[1] Yi Dingbang. "Graphics and Meaning". Hunan Science and Technology Publishing House, 2001: 153-192
[2] [Am.] Elaine Piere Kehan. "Art, the Study of Another Language", translated by Yin Shaochun. Hunan Art Photography Press, 1992: 2
[3] Huang Youzhu. "The Analysis about the Metaphor Language of the Graphic Art". Art Observation, 2003(8): 104
[4] Fan Yujie. "The Vicissitudes of Aesthetic Sentiment". Peking University Publishing House, 2006: 203
[5] Lei Guiyuan. "The First Exploration about the Method of Chinese Graphics". Shanghai People's Art Publishing House, 1979: 43

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.144

Research on Tourism United Marketing in Turpan Area, Xinjiang, China

Yong Li1,a, Haojie Sun2

1 School of Geography Science and Tourism, Xinjiang Normal University, Urumqi, Xinjiang, China
2 The Xinjiang Laboratory of Lake Environment and Resources in Arid Zone, Urumqi, Xinjiang, China
a [email protected]

Key words: United Marketing; Tourism; Turpan

Abstract. Endowed by nature, Turpan in Xinjiang, China enjoys rich tourism resources. Since the united marketing company for the Turpan area was founded in 2007, tourism has been greatly improved. Proceeding from the tourism resources of Turpan and the background of united marketing, combined with the development conditions of recent years, this text analyzes the main problems of Turpan united marketing and proposes countermeasures, aiming to provide theoretical and practical guidance for sustainable tourism development.

Introduction
As a branch of marketing, tourism marketing shares the connotation of marketing. It means the pricing, designing, distribution and promotion process carried out by tourism individuals or organizations for tourism products, services and ideas, so as to achieve their goals. With rich tourism resources, Xinjiang is advantageous in tourism development. After over 30 years of development, tourism has become a backbone of the national economy in Xinjiang. Turpan was the first to develop tourism in Xinjiang, with its unique characteristics: it is the confluence of the world's four great cultures, the living fossil of Chinese civilization, the museum of the Silk Road and a paradise where man and nature harmoniously coexist. In 2007, it innovated the operation mechanism and optimized the industrial pattern, and thus a united marketing company led by the government with enterprise participation emerged. This text analyzes the united marketing and summarizes the united marketing model. Against the problems existing in united marketing, solutions for sustainable development are proposed.

An Overview of Turpan Tourism Resources
Turpan is located in western China at the southern foot of Mount Tianshan. Covering an area of 70,000 square kilometers, it accounts for 4.2% of Xinjiang's territory. Its geographical coordinates lie between 41°12′-43°40′N and 87°16′-91°55′E. With a warm temperate continental climate, and surrounded by mountains, Turpan has long sunshine duration, high temperature, great temperature differences between day and night, low precipitation and strong wind. It is the lowest, hottest, driest and sweetest place in China, being a typical representative of China's unique natural and ecological environment and oasis culture. The rich tourism resources in Turpan can be classified into the following types according to the General Survey Standards of Tourism Resources in China of 2003 (Table 1).

Background of Turpan United Marketing
In the development of Turpan tourism, vicious competition once occurred, which led to an overall poor economic efficiency. This attracted great attention from Turpan governments and various tourism departments. In 2007, in order to optimize the tourism pattern, Turpan innovated its tourism operation mechanism and established the regional united tourism marketing company serving the following 17 scenic spots (Table 2).


Table 1: List of the main Turpan tourism resources

Type: Representative tourism resources
A Landscape: oasis, desert, Gobi, snow, forest, meadow, Burning Mountain, Kumtag desert, China inland zero-altitude mark
B Water scene: Lake Aydingkol, Mutou River, Yaernaizi River, Lianmuqin Valley, Qianlei spring
C Biological scene: Desert Botanic Gardens, Wild Camel National Nature Reserve
D Celestial and climatic: high temperature, mirage, foehn
E Historic relics: the ancient cities of Gaochang and Jiaohe, Bezeklik Thousand Buddha Caves, Tuyu Valley Holy Tombs, Tuyu Valley Thousand Buddha Caves
F Architecture and facilities: Su Gong Pagoda, the Grape Valley, karez, Ancient Tombs at Astana, Museum, Prince Jun Office, Palace of Ten Thousand Buddhas, Uyghur ancient village, picture of oilfield, wine hotel of the western region, drying room
G Tourism products: grape and raisin and their series products; grapevine and its series products
H Humanistic activity: the Grape Festival, the Dawaz, cockfighting, Uyghur songs, dances and clothes, the Avanti legend

Table 2: United marketing scenic spots

No. Name of the scenic spot: Level/quality/specification
1 The Grape Valley: National AAAAA level
2 Kumtag desert: National AAAA level
3 Karez paradise: National AAA level
4 Karez folklore garden: National AAA level
5 Burning Mountain: National AAA level
6 Desert Botanic Gardens: National AAA level
7 Palace of Ten Thousand Buddhas: National AAA level
8 The ancient city of Gaochang: National key cultural relic site
9 The ancient city of Jiaohe: National key cultural relic site
10 Bezeklik Thousand Buddha Caves: National key cultural relic site
11 Su Gong Pagoda: National key cultural relic site
12 Ancient Tombs at Astana: National key cultural relic site
13 Tuyu Valley: National key cultural relic site
14 Prince Jun Office: Historic culture and folk custom
15 Uyghur village: Historic culture and folk custom
16 Lake Aydingkol: Geographical mark of the lowest altitude, -154 meters
17 Turpan Museum: The second largest museum in Xinjiang

The Turpan united marketing model is led by the government, participated in by enterprises and operated as a company. The scenic spots were organized into a united marketing company in line with their capital contributions. The various scenic spots are operated under one brand, with united management, packaging and promotion. Through the information platform, resources can be shared, and each scenic spot receives benefits according to its capital contribution. However, the scope of the model's applicability is limited. First, the scenic spots must be within one administrative area; in the above case, the 17 scenic spots are all governed by Turpan. Second, the scenic spots must be close to each other, so that various travel routes can be arranged. Third, marketing in the various scenic spots must be standardized so that Turpan tourism can develop. Since May 2007, when Turpan carried out the united marketing, the various travel routes and scenic spots have all been clearly marked, which combats vicious competition, prevents deceptive practices, decreases complaints from tourists and, as a result, builds up a favorable image for tourism, increases the revenue of the scenic spots, balances the benefits of each party and guarantees the steady growth of national taxation. By the end of 2007, the Turpan united marketing company had altogether received 4.06 million tourists, up 32 percent over the same period of the previous year, with ticket sales reaching over 92 million Yuan, up 88 percent over the same period of the previous year. Despite the rapid development after the united marketing, some problems still exist.

The Main Problems Existing in Turpan United Marketing
False Understanding of the United Marketing and Inflexible Combination of Tourism Products. By now, most people think united marketing is mechanism innovation, namely providing offers within the scope stipulated by price control departments; other people believe united marketing is a means of coercion and is thus a step backward. Still others consider the united marketing a price alliance, and packaging the 17 scenic spots together actually burdens the tourists. At present, tourists can learn about tourism destinations in a more diversified way and to a fuller extent. Thus, they hope for a flexible combination of tourism products. However, the travel routes under Turpan united marketing are not arranged in line with themes; with sightseeing as the mainstay, they lack diversity.
Incomplete Tourism Facilities Suppress the Enthusiasm for Re-investment. Inadequate tourism investment directly leads to the slow development of Turpan tourism and poor overall efficiency. Various supportive facilities are not available; for example, some scenic spots are short of toilets, e-guides and lighting facilities. The entrance guard system in some scenic spots is but an empty shell, without tourist statistics, weather forecasts or scenic spot introductions at all. After the united marketing, such issues as scenic spot upgrading and investment in new scenic spots were not clearly settled, which stifles the enthusiasm for re-investment. The marketing mechanism lacks an efficient competition system, with broad access and limited exit. Some scenic spots within the marketing system are unwilling to construct scenic spots, invest in them, or exploit and upgrade their products.
Lack of an Overall Image for the Brand and a United Propaganda Slogan. The tourism products for united marketing have not formed an overall image and have no united marketing symbol. Besides, a united propaganda slogan and a carrier vividly showing the image of local tourism have not been developed. The popularization is not powerful enough, and publicity among higher learning institutes, research institutes and government institutes is especially limited. The yearlong ticket and travel passports are not fully enforced.
United Marketing for Travel Souvenirs is Still not Available. Quite a few employees are engaged in distributing travel souvenirs; however, they are not innovative. Souvenirs with distinct features are few, with most being similar to those of inland China, and the development of the souvenir industry is backward. The prices are too high, or the souvenirs lack cultural content. In short, souvenirs with originality and characteristics are not available.

Countermeasures for Turpan United Marketing
Enhance Publicity and Optimize the Product Combination. The united marketing will adversely influence the benefits of some units and individuals. Consequently, we need to face the existing problems and gradually refine the plan into an operative one.
The propaganda slogan for the Turpan area should not only comply with the characteristics of local tourism resources, but also attract the market. Thus, it should emphasize the features of "being the hottest, the sweetest, the lowest and the driest", give full play to Turpan's antiquity and uniqueness and focus on its climate features of "being windy, dry and hot", which will attract tourists. The united marketing emphasizes more contact with tourists, through which the image can be clearly conveyed. Establishing a good image of a tourism destination is the most direct and efficient way of promotion. Through business operation, planning and implementation, we can launch a big party reproducing the history of the Silk Road or the local customs, make it a series product and develop it into a new tourism attraction to carry forward the local culture and ethnic folklore, with a complete marketing plan from the very beginning. With winter and spring travel as the breakthrough, four-season travel can be promoted. Moreover, we can provide more participation and personal experience for tourists, with hiking and self-drive travel as the mainstay and "shed vegetable travel", "winter cultural travel", "desert hiking", "desert adventure", "visit Turpan in spring" and "Xinjiang people travel to Turpan" as supplements. Developing exclusive tourism products with a complete theme and innovation has great significance for domestic and international tourists; examples include desert ecological travel, folk custom travel, and western region culture and art travel, covering Turpan studies, oasis studies, religion, music and dancing, and Muqam. Activities with more involvement of tourists should be added, such as taking part in an imitative Uyghur wedding ceremony, visiting handicraft workshops, joining ethnic songs and dances and visiting Uyghur residences, which will be exclusive after innovation.
Standardize the Operation and Try to Go Public on the Small- and Medium-sized Enterprises Board. At present, the united marketing company is organized, and each scenic spot receives benefits, in line with capital contribution, without considering the tourism resources of each scenic spot. Meanwhile, such issues as scenic spot upgrading and investment in new scenic spots are not clearly settled, which stifles the enthusiasm for re-investment. In the future, the united marketing company should convert the resources of each scenic spot into shares after evaluation, and then re-estimate the share of each scenic spot, which will solve the above two problems, promote the overall tourism value, attract investment and achieve the maximum value of the company. When the company is mature, we can reorganize and transform it and apply to list it on the Small- and Medium-sized Enterprises Board. After going public, we can promote the company's reputation, attract more investment and management personnel and promote the development of Turpan tourism.
Integrate Various Media and Promote the United Image and Propaganda Slogan. Various media should be integrated, including the Turpan travel map, brochures, TV broadcasts, outdoor advertisements and network media, to convey the "Turpan·China" image. Festivals and public relations activities should also serve the image. We need to analyze the existing tourism resources and make full use of the "Turpan·China" image. According to the image positioning, we will introduce the Turpan image project and standardize the design in various travel service organizations; for example, office paper, folders, uniforms and other supplies should all be designed in line with the united signs, patterns and letters of the CI system. Besides, the urban identification system should be improved. Travel maps, road names and signs, introductions to tourism attractions and signs for stations, hotels and commercial institutes should be unified. Public places such as parking lots and toilets should adopt internationally used signs. Through theme and image creation, people will come to associate the name of the tourism area with a direct image through simple slogans and palatable language.
Develop Tourism Souvenirs with Distinct Regional Features. The Turpan marketing company should emphasize research and mobilize social forces to develop tourism souvenirs with distinct regional features.
Tourism souvenirs should display the features of the tourism destination at temporal and spatial levels and should be included in the united management system. Besides, tourism souvenirs should also reflect the features of different scenic spots, such as the grapevine series, while the Burning Mountain travel series enjoys ethnic uniqueness and is easy to carry, convenient and applicable. We can also mark the travel sign, pattern or slogan on the souvenirs. Individual peddling or foisting of goods should be forbidden and punished.
Strengthen the Management of Scenic Spots and Working Staff. We will improve the facilities and expand the use of the entrance guard system, offering weather forecasts and statistics and coordinating the working staff. Starting from the small details and enhancing training, we aim to improve the overall quality of the employees. Travel agencies should be launched as quickly as possible, and guide team construction should be sped up. Hotels should also train service staff in professional knowledge and ethnic folklore to make them propagandists of Turpan tourism.



© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.149

The Research on Color and Text Usage in the Graphic Design

Liu Jun1, Li Chengmao2

1 Guilin College of Aerospace Technology
2 Guilin University of Electronic Technology

Key words: Graphic Design, Color, Text, Image

Abstract. Graphic design is an art with a specific purpose; it is the process of describing a business or product with art. In this process, color, graphics and text play very important roles. They are like three actors undertaking a project; how to coordinate the relationship among them and balance their roles is something designers should think about in detail and pursue. In any case, they have a long way to go on the road of graphic design.

Design is planning with a purpose, and graphic design is one of the forms such planning takes. In graphic design you use visual elements to spread your ideas and plans, using color, text and image to convey information to the audience so that people, through these visual elements, can understand your ideas and plans. Emphasis on the application of visual elements in graphic design and on the combination of visual elements and design ideas will achieve the perfection of those ideas, attract the viewer's eye, and accomplish the broader objectives.

The use of color
In today's world, the first thing to be noticed is the effect of color, whether in people's actual daily lives or in virtual networks and websites. Color is always the focus of attention. [1] A designer, Mr. Cai Qiren, said, "a designed work generally consists of three elements: color, image, text. Among these three elements color is the most important one." He explained that people are normally very sensitive to color. When they first encounter a designed work, they first feel its color, followed by the image, and last the text. This shows that color in visual communication is superior to image and text and can give a strong visual impression. Therefore, designers often use color to express their design ideas.

Color matching
The relationship among colors in the main screen. When there are several color areas on a screen, it is important to choose a dominant color whose size, brightness and location outweigh the others, giving strong visual stimulation and attraction while the screen remains a color whole, so that no chaos appears.
Add more colors in common. In the design process, the reasonable, repeated use of one or several colors will make the color design more harmonious and make the visual effects echo each other.
The use of neutral colors. Generally, neutral colors refer to colors without any tendency towards particular hues and without physical or psychological emotion. Neutral colors in common use are black, white, gray, gold and silver, which are characterized by combining harmoniously with any color in the visual experience. [2] In the practice of graphic design, the designer should not only be good at using colors, but also at using gold, silver, black, white and gray to ease and neutralize, and to set off the theme by contrast. As shown in Fig.1, neutral colors are well used.


Figure 1. An example of using neutral colors

Different shapes and different colors. In human visual habits and visual experience, different colors are matched with different shapes, for example: the corresponding color of the square is red; the corresponding color of the triangle is yellow, and so on. So when we design with colors we should note this issue, and strive to make colors and shapes complement rather than confuse each other. In addition, pay attention to the distance among colors; the purpose of this is to separate primary from secondary and obtain a sense of clarity.
Color balance. Pay attention to the weight of hot and cold colors. In general, red, yellow and orange are visually hot colors, while blue, green and other cold colors give people a light feeling. In accordance with this principle of vision, we should pay attention to color balance issues; for example, balance can be coordinated through area, or improved through lightness and purity. A large area of color of lower purity or brightness and a distinctive small color area also need to be balanced. Colors cannot be biased towards one side, otherwise the composition will feel weightless. If a large color area is in the center, it must be surrounded by some small ones. If the objects on the left side have a certain degree of lightness, the right side cannot be completely dark or blank, and the right amount of prescribed color is also needed.
Tones. Tone is the main color of the screen. The personality or mood we would like to express comes out on the page, such as depression with cold colors and warmth with hot colors. [3] To express the tone we observe, we should exaggerate, refine, emphasize and summarize the colors.
Single tone. This refers to using only one color, adjusting only the lightness and purity, sometimes with neutral colors. In this way, there is a strong personal inclination. We must note that the neutral tones must be carefully graded, with the lightness range opened up, so that we can achieve the desired effect.
Reconciling tones. Use neighboring colors. [4] This method uses nearby colors in a standard color sequence for the color match. Although it is monotonous, we should note the contrast between lightness and purity and pay attention to using small color areas in order to achieve a coordinated color effect.
Contrasting tones. Contrast colors with each other. This method uses contrasting tones in a standard color sequence for color contrast, such as matching red with green or blue, or yellow with purple, and so on. It can easily cause disharmony, so neutral colors should be used. We should pay attention to the size, location and layout of the colors to balance the design. In addition, we should reconcile the inter-colors with black, white, gray and other neutral colors.


The use of text
Text is an important component of human culture. In any visual medium, text and picture are the two basic elements. The quality of the combination of text will directly influence the effect of visual communication. Text not only spreads its message to the audience through the words themselves; through aesthetic processing, text as a specific graphic also displays its own charm. Therefore, text treatment is an important technique for enhancing visual effects, strengthening the appeal of the work, and giving the layout aesthetic value.

The change of text
Slight changes in the text. The shape of the text is not changed; only through stretching, compression, and changing the size, weight and colors of the text do we increase the sense of thickness or depth to make it eye-catching. [5] Compression and elongation change the proportions of the text and, as a result, affect people's reading of it, so they cannot appear in normal body text. In titles or text ads, this kind of treatment can attract people's attention.
The three-dimensional effects of text. In the plane, software can be used to make text look very three-dimensional, but a three-dimensional effect on the screen is very strong visually and sometimes snatches attention from the main objects, as if it were isolated; so even though much advertising uses three-dimensional text, the effect should not be too realistic. For example:
(1) Highlight the thickness of the text (as shown in fig.2). Design software can be used to make fonts stereoscopic and increase the aesthetic form of the font; if we then apply some special material in good time, we will achieve an even more beautiful visual effect, the icing on the cake.

Figure 2. Highlighting the thickness of the text
(2) Increase the text's floating projection (as shown in figure 3). In designing projections, graphic designers often use this method to increase the spatial distance between the font and the background; the dark gray projection of the text itself provides a very good contrast, immediately appearing subdued and quiet so as to give a heavy visual effect. It is the most commonly used method of dealing with text effects in graphic design.

Figure 3. The effect of floating projection of text
Radical change of part of the text. Such text is normally changed in part (such as horizontal, vertical, dot or hook strokes); the designers draw their own special characters or graphics to make the text more beautiful, for the purpose of deep memory and to make it eye-catching. As shown in fig.4, part of the text has been changed, and it gives us a deep impression when we see it.


Figure 4. An example of changing part of the text
Change the shape of strokes of the text. The strokes are given a new design and decoration, which is relatively novel, but if the design is not good enough it will weaken legibility. [6] An example of decorating the strokes of the text is shown in fig.5.

Figure 5. An example of decorating the strokes of the text
In addition, special typefaces (such as European handwriting or the traditional Chinese calligraphy effect) are relatively novel in the visual experience and spark people's visual interest, as shown in fig.6 and fig.7.

Figure 6. The traditional Chinese calligraphy effect

Figure 7. European handwriting effect
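
The floating-projection effect described earlier is simple to reproduce programmatically. The following is a minimal sketch, not part of the paper, using the Python Pillow library; the font file name, canvas size, offsets and colors are illustrative assumptions:

```python
from PIL import Image, ImageDraw, ImageFilter, ImageFont

def text_with_projection(message, font_path="DejaVuSans.ttf"):
    # Canvas size, font size, offsets and colors are assumed values.
    font = ImageFont.truetype(font_path, 96)

    # Draw the dark gray projection on a white canvas, offset down-right,
    # then soften it so it reads as a floating shadow behind the text.
    img = Image.new("RGB", (900, 240), "white")
    ImageDraw.Draw(img).text((36, 66), message, font=font, fill=(90, 90, 90))
    img = img.filter(ImageFilter.GaussianBlur(radius=4))

    # Draw the crisp text slightly up-left of the projection, so the
    # increased distance between font and background becomes visible.
    ImageDraw.Draw(img).text((24, 54), message, font=font, fill=(20, 20, 20))
    return img

if __name__ == "__main__":
    text_with_projection("Graphic Design").save("projection_demo.png")
```

Increasing the blur radius or the offset exaggerates the perceived distance between the text and the background, which is exactly the "heavy" floating effect the technique aims for.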


The text area
This means laying out the text area in accordance with the number of units and the content of the text; through flexible point, line and surface layouts of the text, we can create a compact presentation, an effect of ease, and so on. It reduces the reader's burden and increases interest in reading, and the graphic design format (especially advertising) can also gain the structure of rhythm, melody and visual impact, as shown in Fig.8.

Figure 8. An example of advertisement design

Text layout
Improving the readability of text. The primary visual function of language is to convey the author's intentions and all kinds of information in public communication. To achieve this objective, it is necessary to take the demands of the text's overall effect into account and give a clear visual impression. Therefore, the designed text should avoid messiness, should be easy to identify and understand, and should not be designed merely for the sake of design. Bear in mind that the fundamental purpose of text design is to convey the author's intention and to express the design theme and conception better and more effectively.
The location of the text and the overall requirements. In arranging text on the screen, take global factors into account and avoid visual conflict. Otherwise, it is easy to cause confusion in the visual order, with no distinction between primary and secondary on the screen, and the meaning and atmosphere of the whole may be destroyed. This is a very delicate issue that needs to be understood. Do not expect the computer to arrange things for you; it will sometimes let you down. Details have to be attended to; a single pixel can sometimes change the flavor of the whole work.
The sense of visual beauty. In the process of visual communication, the image of the text, as one element of the screen, has the function of conveying emotions, so it must have a visual sense of beauty that gives the feeling of beauty. A well-designed font and a clever combination of words make people feel happy and leave a good impression, so the work can elicit a good psychological reaction. If instead people are unhappy, it is difficult for them to have a visually aesthetic feeling, and they may even refuse to look, making it hard to convey the author's purpose and ideas.

The use of graphics
The unique performance of graphics displays a unique visual appeal in graphic design. Graphical elements are important material in forming character and enhancing visual attention in print ads. Graphics can subconsciously control the spreading effect of advertising. Graphics occupy an important part of the page, sometimes even all of the layout. Graphics can often catch people's attention and stimulate their interest in reading. [7] Graphics give a better visual impression than text. Graphic symbols should be used in a reasonable manner.


References
[1] Chen Xiaoying. "Discussion on Individualization in Modern Packaging Design", Packaging Engineering, 2008(08): 150-152
[2] Liu Hong. "Analysis of the Language of Color in Graphic Design", Art & Design, 2003(04): 93
[3] Huang Yifan. "The Application of Color in Ads Design", Journal of Hunan Mass Media Vocational Technical College, 2009(03): 57-58
[4] Wang Youjiang. "Basics of Graphic Design", Beijing: China Textile Press, 2004
[5] Liu Xisheng. "Significance of Font Design in Graphic Design", Packaging Engineering, 2007(10): 233-235
[6] Liang Siping. "Designing the Originality of Fonts", Computer Knowledge and Technology, 2009(3): 2251, 2272
[7] Liu Baocheng, Wang Yu. "Discussion on Sketch Language", Packaging Engineering, 2006(04): 255-257
[8] Samara. "Design Elements: Graphic Design Style", Nanning: Guangxi Arts Press, 2008 (in Chinese)

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.155

Forest Fire Monitoring System Based on Image Analysis

Li Xiaoling

School of Information Science and Technology, Chengdu University, Chengdu, 610106, China
Email: [email protected]

Key words: embedded video monitoring system; image processing; dynamic monitoring; real-time; the Internet

Abstract: Large forest fires are very hazardous and often cause great losses. However, due to the vast coverage of forests and the complexity of the environment, monitoring that relies on direct-contact sensor detection can hardly achieve its monitoring function. In this paper, the author proposes that long-range images first be acquired, together with images of a larger dynamic range of the observed region, and that an embedded video monitoring system then be formed to realize dynamic monitoring. With techniques of dynamic image analysis and processing, dynamic image alarms and control signals can be obtained, and the video is transmitted over the Internet in real time. Experiments show that this system responds to fires quickly, in less than 20 ms, and that fire detection remains accurate for a dynamic image area of less than 1% of the background area.

Introduction
As large forest fires are hazardous and result in substantial losses, timely monitoring of forest fires is highly significant. Because forests cover a vast territory with complex environments, traditional manual monitoring, with its low efficiency and long delays, has caused many difficulties in fire extinguishing, especially in cases where, because of the time delay, the fire becomes terribly uncontrollable. Direct-contact sensor detection for monitoring forest fires is very difficult in terms of installation and network layout; it almost cannot be achieved. Currently, video monitoring is widely used in such areas as electric power, transportation, public security and fire control and has achieved outstanding results [1]. However, most of its applications are in small areas, even for remote monitoring; up until now there has been no research material on monitoring forest fires. The system introduced in this paper consists of multiple remote monitoring terminals and thus achieves monitoring of large-scale forest fires. The monitoring system uses an embedded monitor built around the ARM2410 and makes use of techniques of dynamic image analysis and processing in order to monitor dynamic images [2]. It is applied to detect fire and smoke so as to generate fire alarms and control signals. The acquired information is then transmitted over long distances via a wireless or wired network [3]. This system can be conveniently used in forest fire control and can monitor fires on a large scale, at long range and in real time.


Layout of the System Platform
The system is constituted by a remote monitor and a server for the management center. Consisting of embedded devices and video capture modules, the remote monitor completes image acquisition, encoding, playback, identification, hard disk storage/analysis and alerting, while the management center server supports multiple independent devices on the network platform and is responsible for managing the network platform, for remote decoding and playback, and for dynamically changing the configuration of the local control terminals. The block diagram of the system hardware is shown in Fig.1.

Fig.1 The block diagram of the hardware (several remote monitors, each consisting of a tripod head and an auto-focus camera, connect to the management centre)

With its daily recording function, the system records critical information such as operations and alarms; with a log alarm search function, it can quickly find the information needed when an alarm is given. The site monitor modules consist of image acquisition devices, a tripod head, embedded development boards, network interface devices and storage devices; see Fig.2. With a wealth of hardware interfaces, the ARM2410-S can very conveniently be expanded with peripheral devices. A USB camera and a wireless network card are connected to the embedded development board through the USB Host interfaces. Network transmission adopts 802.11b WLAN access and sends images and remote control signals through the Internet.

Fig.2 The site monitor modules (the ARM2410-S connects the auto-focus camera and the wireless network card through USB HOST1/HOST2 and a hard disk through the Enhanced IDE port)

Thanks to the tripod head, the camera covers 270 degrees horizontally and 90 degrees vertically, a broad monitoring range. Auto-focus can capture a clear image 3 km away, which enables the system to monitor a large area of forest. Meanwhile, low-light capability lets the camera monitor forest fires at night.

Identifying Dynamic Images
Images are obtained by remote cameras and processed by image software modules, which include image processing modules, network communication modules, background comparing and updating algorithm modules, and image storage modules. All the application programs use the Linux kernel driver for the video equipment. The detection system provides the functions of background preview, automatic detection, and detection of size and gray-level threshold. When the monitored scene changes, the background preview module re-captures the background, ensuring that the system changes with the monitored objects. The gray threshold is responsible for adjusting the effect and accuracy of the monitor.

Detection of area can set the moving object's proportion of the background image. When the moving object is below this proportion, no warning signal is sent [4]. Appropriately selected lower limits of gray level and alarm-area parameters allow forest fires to be detected under different brightness; for example, different parameters are set for the different brightness of day and night. In order to get better detection during the night, the camera should have low-light capability.

Background comparing is commonly used for detecting moving objects [5,6]; it uses the calculus of differences of an image sequence. The absolute values of the brightness differences of two pictures reflect the motion characteristics of the image sequence. If the absolute value is less than a certain threshold, there is no movement; otherwise, there is movement. The basic concept of the calculus of differences of an image sequence is as follows: if a fixed part of a reference image is defined as the background image, and an identical scene generated later which includes a moving object is defined as the moving image, then the moving image and the background image are compared to remove the background from the moving image, and the remaining non-zero terms corresponding to the non-fixed portion are the difference between the two pictures. The simplest description of the algorithm is defined as Eq.1:

$$\Delta f = f(x, y, t_i) - f(x, y, t_j), \qquad d_{ij}(x, y) = \begin{cases} 1, & |\Delta f| > T \\ 0, & |\Delta f| \le T \end{cases} \tag{1}$$

In this formula, $f(x, y, t_i)$ and $f(x, y, t_j)$ refer to the gray levels at time $t_i$ and time $t_j$ respectively. According to this formula, the calculated image can well reflect the motion characteristics of the images: the greater the difference between the images, the larger $\Delta f$ is, and vice versa. This method is fast to compute and can easily be done with a computer [7,8]. However, since the gray-level threshold determines the sensitivity of alarm detection, the selection of the gray level is the key to the success of this method. In the gray-scale processing of a forest image, as edges and other sharp jumps have a great impact on the high frequencies of the image, smoothing the image is necessary. Images are filtered with a Butterworth Low-Pass Filter (BLPF), whose transfer function is shown in Eq.2, where $D(u, v)$ is the distance between point $(u, v)$ and the origin of the frequency-domain plane:

$$H(u, v) = \frac{1}{1 + [D(u, v)/D_0]^{2n}} \tag{2}$$

Here $D(u, v) = \sqrt{u^2 + v^2}$. For convenience of calculation and without affecting the results, Eq.2 is modified into Eq.3:

$$H(u, v) = \frac{1}{1 + [\sqrt{2} - 1][D(u, v)/D_0]^{2n}} \tag{3}$$
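
The paper gives no source code; the following is a minimal NumPy sketch of the detection step as described above: Butterworth low-pass smoothing (Eq.3) followed by frame differencing and thresholding (Eq.1), with the 1% area-ratio alarm mentioned in the abstract. The threshold `t`, cutoff `d0` and order `n` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def butterworth_lowpass(shape, d0=30.0, n=2):
    # H(u, v) from Eq.3 on the (unshifted) frequency plane,
    # with D(u, v) = sqrt(u^2 + v^2) measured from the frequency origin.
    rows, cols = shape
    u = np.fft.fftfreq(rows)[:, None] * rows
    v = np.fft.fftfreq(cols)[None, :] * cols
    d = np.sqrt(u ** 2 + v ** 2)
    return 1.0 / (1.0 + (np.sqrt(2) - 1.0) * (d / d0) ** (2 * n))

def smooth(gray, d0=30.0, n=2):
    # Frequency-domain BLPF smoothing applied before differencing.
    spectrum = np.fft.fft2(gray.astype(float))
    return np.real(np.fft.ifft2(spectrum * butterworth_lowpass(gray.shape, d0, n)))

def detect_motion(background, frame, t=25.0, area_ratio=0.01):
    """Eq.1 on smoothed images; alarm when the moving area exceeds
    area_ratio (e.g. 1%) of the background area."""
    delta = np.abs(smooth(frame) - smooth(background))
    mask = (delta > t).astype(np.uint8)   # d_ij(x, y)
    return mask.mean() > area_ratio, mask
```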

The auto-focus system is mainly used to capture images of the motion area once the monitor finds that there are moving images. The system automatically sets parameters for focusing on the second observed region, adjusts the angles of the tripod head appropriately and regulates the focus of the camera automatically. Meanwhile, in order to get clear and effective images the second time, the focusing time should be calculated and the time parameters for image acquisition set accordingly. Although the inter-frame difference method for dynamic scenes does not extract all the pixels of the characteristics of moving objects, it is suitable for occasions where the targets move fast and the demand for image segmentation accuracy is not high, which matches the features of forest fire monitoring well.
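
As a usage illustration (not from the paper), a hypothetical capture-and-alarm loop might drive the `detect_motion` sketch above. OpenCV's `VideoCapture` is used here merely as a stand-in for the system's embedded camera driver; the device index and the background-refresh policy are assumptions:

```python
import cv2  # assumed stand-in for the embedded camera interface

def monitor_loop(device_index=0):
    cap = cv2.VideoCapture(device_index)
    ok, background = cap.read()
    background = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        alarm, mask = detect_motion(background, gray)
        if alarm:
            print("fire/smoke alarm: moving area exceeds threshold")
        else:
            # No motion detected: refresh the background, mirroring the
            # paper's background preview / updating module.
            background = gray
    cap.release()
```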

Fig.3 Fire test (a: background; b: fire in the test; c: test results)

Fig.4 Smoke test (a: background; b: smoke in the test; c: test results)

Experimental results
As shown in Fig.3, which gives the results of tests on the detection of moving objects, the monitoring system can detect smoke and flame appearing on the screen in time, mark them with a red box, and set off a fire alarm. Under test, the system accurately detected moving objects within 20 ms and sent out an alarm in real time. Fig.3 shows the results of the fire test; Fig.4 shows the results of the smoke test.

Conclusion
Compared with existing network video monitoring systems, this video surveillance system has enhanced image processing capabilities and intelligence and can provide users with advanced video analysis, thus improving the capability of video monitoring systems. Built on embedded systems, it can acquire images, detect motion pictures, set alarms on-site and at the server, and store pictures. It is especially applicable to remote monitoring of the wild, such as forest fire monitoring. Experiments show that this system has a wide range of applications thanks to its high reliability, robustness, low cost, small size, flexibility of installation and wide monitoring range.


References
[1] Ou Yang, Fu Baochuan. Research and Design of Network Intelligence Video Surveillance Terminal Based on Embedded Technology. Control & Automation. Vol.11(2005), p.55-57.
[2] Rafael C. Gonzalez, Richard E. Woods. Digital Image Processing. Electronic Industry Press (2003).
[3] Hu Hao, Wang Xing. Automatic Fire Alarm System Based On Smoke Detection. Sensor World. Vol.10(2004), p.32-34.
[4] Zhang Bianli, Chang Shengjiang, Li Jiangwei. Intelligent control of video monitoring system based on the color histogram analysis. Acta Physica Sinica. Vol.55(2006), p.6399-6403.
[5] A.G. Bors, I. Pitas. Prediction and Tracking of Moving Objects in Image Sequences. IEEE Trans. on Image Processing. Vol.9(2000), p.1441-1445.
[6] Fu Desheng, Wang Haibin, Sun Wenjing. Check-up of Mobile Object in Digital Video Supervising. Microelectronics & Computer. Vol.22(2005), p.118-121.
[7] Yang Weik, Zhang Tianwen. A new method for the detection of moving targets in complex scenes. Computer Research & Development. Vol.35(1998), p.724-728.
[8] Ding Zhongxiao. Survey on Moving Object Detection Methods for Video Surveillance Images. Video Engineering. Vol.32(2008), p.72-76.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.160

A Software Project Management Method based on Trust and Knowledge Sharing

Tang Rongfa, Huang Xiaoyu

Guilin University of Electronic Technology

Key words: software project performance management; trust; knowledge sharing

Abstract. Trust is the most important issue in creating value-making relationships in knowledge sharing. Knowledge itself cannot lead to success; knowledge sharing, or the flow of knowledge, is of prime importance in an organization. Knowledge sharing depends on trust between trusting and trusted members in a specific knowledge context and a specific time slot. The trust level between members has a high impact on software project performance. Future work could include defining the role and measurement of trust and knowledge sharing in software project performance.

Introduction
Several domain experts in the field of software development and project management have commented on the failure rate of software engineering and project management, e.g.:
• Various failure rates for software development projects run up to 85% [1].
• 50% of all software projects are total failures and another 40% are partial failures, according to widely quoted surveys in the UK, USA and Norway [2].
• Approximately 31% of corporate software development projects are cancelled before completion, and 53% are challenged and cost 180% above their original estimate, according to the Standish Group in 1994 [3].
• 46% of software projects have cost or time overruns or do not fully meet users' requirements, and 19% are outright failures, according to the Standish Group in 2007 [4].
This shows that the project failure rate is high. A lot of money has been wasted on failed software projects. According to the Standish Group International, roughly 15% of projects never deliver a final product, costing $67 billion per year [4]. Stories of software failure attract public attention. Additionally, Cerpa and Verner [5] believe that software quality is not improving but getting worse. Thus the successful management of software projects is critical. It is vital to understand what is important to complete a software project on time, within budget, and meeting user requirements. Much of the literature [5-11] presents causes of project failure. However, project failure still persists. In this paper we give an overview of software development failure in section 2. Then we present the two key variables in software project performance management in section 3. We discuss and conclude the paper in section 4.

OVERVIEW OF SOFTWARE DEVELOPMENT FAILURE
Teamwork. Teamwork issues refer to issues related to team member development, communication between members, and team management. Team members also include customers, users, and stakeholders. The reason most cited for project failure is ineffective communication and coordination among project teams. Other factors include an inexperienced project manager, lack of specialized skills, low confidence in team members, insufficient support from managers, and inadequate training of team members. DeMarco and Lister argued that the aspect of the skills and interactions of the software team is the most critical and hardest to overcome [11].
Project Management. Project management issues refer to issues related to project plan and schedule, budget, assessment, control, and quality assurance. This includes uncertainty of project milestones, change management, progress reporting, and project management methodology.
Technical Aspects. Technical aspects refer to issues related to software process activities, including requirements engineering, design, implementation, testing, validation and verification. Failures can be caused by ambiguous system requirements, incorrect system requirements, wrong development strategies, inappropriate software design, inadequate testing, lack of reusable data, code, components, documents, etc. However, McCreery and Moranta believe that project challenges are more behavioral and interpersonal than technical [9]; issues related to communication, collaboration, and team connectedness are more critical.
Project Complexity. Project complexity issues refer to issues related to the complexity of project requirements. This includes projects utilizing cutting-edge technology and projects requiring a high level of technical complexity.

KEY VARIABLES IN PERFORMANCE MANAGEMENT
From the above overview of software development failure, we identify two key variables, i.e. trust and knowledge sharing, as critical influence factors in software development.
Trust. The concept of trust is related to various fields including philosophy, sociology, business, and computing. There are a number of definitions of trust. Mayer et al. define trust as "the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party" [12]. Moe and Smite define trust as "the shared perception by the majority of team members that individuals in the team will perform particular actions important to its members and that the individuals will recognize and protect the rights and interests of all the team members engaged in their joint endeavour" [10]. Jarvenpaa et al. believe that trust has a direct positive effect on cooperation and performance and that an increase in trust is likely to have a direct, positive impact on team members' attitudes and perceived outcomes [13]. Giddens [14] sees trust from a different view and says that there would be no need for trust if activities were clearly visible and easy to understand. Hence, from his view, the prime condition for lack of trust is lack of full information or ambiguous information. As a result, trust requires good knowledge sharing. Trust can be founded in different ways; the most common way is a direct relationship. In the vertical view, trust is important to leadership, while in the horizontal view, trust is important for knowledge sharing and teamwork. In relation to teamwork, the two most important dimensions of trust to focus on are benevolence and competency. Benevolence is related to willingness within teamwork, based on the idea that members will not intentionally harm one another when given the opportunity to do so. This kind of trust can be positive or negative: members of the team may believe in others' willingness to share knowledge, and the trust level can be at its highest; on the other hand, they may doubt others' willingness, and trust can be negative. The second dimension of trust is competency. This kind of trust refers to the trusting agent's belief in the trusted agent's competency. It describes a relationship in which a member believes that another member is knowledgeable about a given subject area. Competence-based trust can also be negative or positive: members can believe in others' ability, or they can completely doubt others' ability in a given subject area.
Knowledge Sharing. Wang and Yang define knowledge sharing as the action by which people spread relevant information to others across the organization [11]. Melnik and Maurer divide knowledge sharing into two perspectives, i.e. the codification approach and the personalization approach [15]. The codification approach is based on the notion of knowledge as an object [16-19], which can be created, collected, stored, and reused [15]. The personalization approach is based on the notion of knowledge as a relationship [20-23], which is uncertain, soft, and embedded in work practices and social relationships [15]. Knowledge sharing in software development can be defined as activities among team members in spreading project data, information and agreements. As seen in Figure 1, knowledge sharing includes communication, updates, advice, problem solving, decision making, issue raising, discussion, etc. over project data, information and agreements.


Figure 1. Knowledge sharing definition (team members exchange communication, updates, advice, problem solving, decision making, issue raising, discussion, etc. with other team members)

Knowledge sharing in a software development situation enables team members to enhance their competency and mutually generate new knowledge [15]. Knowledge sharing can be considered in terms of knowledge complexity and knowledge transferability. Complex knowledge and/or a long knowledge transfer chain suffer from information distortion and loss, which can lead to inefficient knowledge sharing.

CONCLUSION
A developed framework is required to measure embedded trust in teamwork. Additionally, the framework should be developed to measure software project performance in a dynamic environment, as knowledge and trust are dynamic entities.

References
[1] Jorgensen, M. and K. Molokken-Ostvold, How large are software cost overruns? A review of the 1994 CHAOS report, in Information and Software Technology 48, 2006, p. 297-301.
[2] Gilb, T. No Cure No Pay: How to Contract for Software Services. 2006 [accessed 24 May 2006]; Available from: http://roots.dnd.no/repository/05_Gilb_Tom_No_Cure_No_Pay.pdf.
[3] Standish Group International. CHAOS: Project failure and success report. 1994; Available from: http://www.pm2go.com/sample_research/chaos_1994_2.asp.
[4] Rubenstein, D. Standish group report: There's less development chaos today. SD Times [cited Mar. 1, 2007]; Available from: http://www.sdtimes.com/article/story-20070301-01.
[5] Cerpa, N. and J.M. Verner, Why Did Your Project Fail? Communications of the ACM, 2009. 52(12): p. 130-134.
[6] Linberg, K.R., Software developer perceptions about software project failure: a case study. Journal of Systems and Software, 1999. 49(2-3): p. 177-192.
[7] Chen, J.-C. and S.-J. Huang, An empirical analysis of the impact of software development problem factors on software maintainability. Journal of Systems and Software, 2009. 82(6): p. 981-992.
[8] Yong, H., et al. A Neural Network Approach for Software Risk Analysis. in Sixth IEEE International Conference on Data Mining - Workshops (ICDMW'06). 2006. Hong Kong, China: IEEE.


[9] McCreery, J. and V. Moranta. Mapping Team Collaboration in Software Development Projects. in Portland International Conference on Management of Engineering and Technology (PICMET'09). 2009. Portland, Oregon, USA.
[10] Moe, N.B. and D. Smite, Understanding a lack of trust in Global Software Teams: a multiple-case study. 2007, Springer-Verlag Berlin Heidelberg.
[11] Juan-Ru, W. and Y. Jin. Study on Knowledge Sharing Behavior in Software Development Team. in Fourth International Conference on Wireless Communications, Networking and Mobile Computing (WiCom'08). 2008. Dalian, China.
[12] Mayer, R.C., J.H. Davis, and F.D. Schoorman, An Integrative Model of Organizational Trust. Academy of Management Review, 1995. 20(3): p. 709-734.
[13] Jarvenpaa, S.L., T.R. Shaw, and D.S. Staples, Toward contextualized theories of trust: The role of trust in global virtual teams. Information Systems Research, 2004. 15(3): p. 250-267.
[14] Giddens, A., The Consequences of Modernity. 1990: Stanford University Press.
[15] Melnik, G. and F. Maurer. Direct Verbal Communication as a Catalyst of Agile Knowledge Sharing. in Proceedings of the Agile Development Conference (ADC'04). 2004. Salt Lake City, UT, USA.
[16] Alavi, M. and D. Leidner, Knowledge Management Systems: Issues, Challenges, and Benefits. Communications of the AIS, 1999. 1(7): p. 2-36.
[17] Hansen, M. and M. Haas, Competing for Attention in Knowledge Markets: Electronic Document Dissemination in a Management Consulting Company. Administrative Science Quarterly, 2001. 46(1): p. 1-28.
[18] Szulanski, G., The Process of Knowledge Transfer: A Diachronic Analysis of Stickiness. Organizational Behavior and Human Decision Processes, 2000. 82(1): p. 9-27.
[19] Zack, M., Managing Codified Knowledge. Sloan Management Review, 1999. 40(4): p. 45-58.
[20] Boland, R. and R. Tenkasi, Perspective Making and Perspective Taking in Communities of Knowing. Organization Science, 1995. 6(4): p. 350-372.
[21] Brown, J. and P. Duguid, The Social Life of Information. 2000, Boston, MA: Harvard Business School Press.
[22] Nidumolu, S., M. Subramani, and A. Aldrich, Situated Learning and the Situated Knowledge Web. Journal of Management Information Systems, 2001. 18(1): p. 115-150.
[23] Nonaka, I. and N. Konno, The Concept of "Ba": Building a Foundation for Knowledge Creation. California Management Review, 1998. 40(3): p. 40-55.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.164

Application of 3D Animation Technology in Movie Art Design
Li Meng a, Huang Xinyuan b and Zheng Tiejun c
College of Information Science and Technology, Beijing Forestry University, Beijing, 100083

[email protected], [email protected], [email protected]

Key words: 3D Animation; Movie Art Design; Sacrifice.

Abstract. Since the 1970s, digital technology has been introduced into the movie-making industry, bringing new elements to the industry and expanding its expressive space into a new world. Against this background, this article researches the function of 3D animation technology in the art design of movies and draws conclusions about the development trend of movie art design.

Foreword
In the overall art of filmmaking, the movie art department is responsible for the design and construction work that shapes the movie image. It generally covers the design and execution of scenes, characters and props, takes the script and the director's vision as its guideline, and provides the physical basis of every movie scene. The state of development of the movie art field is therefore critically important. In recent years, the fast development and popularity of three-dimensional animation has opened new creative space for movie art: it is used not only in post-production, but also, more and more widely, in the early stages of movie art design. The application of three-dimensional animation technology in movie art design offers this prospect: what the traditional way can do, it may do better; what the traditional way cannot do, it may do perfectly. Three-dimensional animation technology is thus changing the traditional way movie art is designed.

Application of 3D Technology in Movie Art Design
Brief Introduction to the Current Situation of Movie Art Design. When movies first appeared there was no independent movie art department, but once movie art became an independent department and played its role, a traditional working process of art design took shape: art designers first draw the plan of the set area, then produce the design drawings and structure drawings of every scene, and then draw atmosphere sketches from different angles. At the same time, model makers build small-scale paper or wood models to show the design plan and the building structure of the scene.

Fig. 1 Paper Model Making

When all the tasks above are completed, the real scene is built. This kind of real-scene design and construction has been used for the past several decades and has played a significant role in the development of domestic movie art. However, movie art design is closely tied to the technology of its time, and movie art design connected with computer technology has developed fast. The first and most influential breakthrough of movie art in the new period began with movie scene design: both early-stage scene design plans and the building of real sets became related to digital technology, which opened a new page of modern movie art.


Promotion Function of 3D Animation Technology to the Development of Movie Art Design
With the arrival of the digital age, three-dimensional animation technology and movies have combined naturally, and a new mode of movie art design has emerged. In terms of artistic expression and artistic value, it accomplishes audio-visual effects and movie-making tasks that traditional methods could not. Three-dimensional animation technology thus greatly promotes the development of movie art design, which can be seen in the following four aspects:
(1) Scene plan drawings, design drawings, construction drawings and atmosphere drawings in electronic form, stored as bytes on storage media and computer hardware, are easy to amend, save and forward, which is undoubtedly very meaningful for movie art design.
(2) Static effect drawings completed with three-dimensional animation technology not only present the artist's design plan conveniently, but also give clear direction to the construction workers who build the set; the realism of a static three-dimensional effect drawing most directly shows the artist's design idea and the final desired effect.
(3) A three-dimensional scene model clearly shows the structure of each building in the movie scene and lets people view the whole scene, or any part of it, directly from any required angle.
(4) Travel animation with a three-dimensional virtual camera allows advance planning and exact judgment of the photographing work, including the director's scene blocking, the arrangement of the lighting environment, the laying of camera track, and the selection of performance areas for the actors.
In all, the combination of three-dimensional animation technology and movie art not only increases the working efficiency of movie art design, but also, to a larger extent, saves unnecessary expenditure and wasted material cost.

Primary Research on the Application of 3D Animation Technology in the Art Design of the Movie "Sacrifice"
The movie "Sacrifice", another major work that director Chen Kaige labored over for a long time and released recently, originates from the Chinese classical drama "Sacrifice". It tells the story of Mr. Zhao, a grand master of the Jin state in the Spring and Autumn period, who was framed by treacherous officials; his relatives were sentenced to death, after which the doctor Cheng Ying raised the orphan of the Zhao family, who avenged Mr. Zhao when he grew up. Since the plot is complicated, many scenes are involved, including not only main buildings such as the Xanadu in the peach blossom palace, the Zhuang's mansion, Zhao Dun's mansion, Cheng Ying's residence and Gong Sun's residence, but also small scenes of the period such as taverns, schools, hotels and markets. To achieve a perfect image and embody grand power, director Chen Kaige required the design of an entire warring city. With the theory expounded above, the author explains the application of three-dimensional animation technology in movie art design through the example of the scene art design of "Sacrifice", in which the author personally participated.

Explanation of the Art Design Theory of the Movie "Sacrifice"
To meet the requirements of the movie's positioning and the director's script, the art director of this movie, Liu Qing, summarized its art design conception as: truth, dignifiedness and beauty. This conception is explained in detail as follows.
Undoubtedly, a good movie truly praised by audiences is grounded in "truth", which is by no means a rote, mechanical repetition of real-life scenes, but is established on reality and on a grasp of natural law. The most important factor in making a movie look real is the real atmosphere created by art design. This story takes place in the Spring and Autumn period, so the first task of art design was to build a convincing scene of that time. To achieve this goal, the art designers in the art department consulted the available books and documents on the Spring and Autumn period, finally designed the warring city, completed the project in Xiangshan, Ningbo, and established a real set that is, so far, the biggest Spring-and-Autumn-period movie-making city in China. Dignifiedness concerns the building style of the Spring and Autumn period in the scene, because the buildings of that time were famous for their inornate appearance and grand power: the layout of the city is regular; palaces and mausoleums form large groups, their main parts are high block-type terraced buildings, and the roofs are big with remarkable curves. The drum towers of the warring city belong to the memorial style of monomer building with symmetrical cross axes, completely embodying the style and character of rusticity, dignifiedness and gentilesse.

Fig. 2 Drum Tower Buildings of the Spring and Autumn Period

The design idea of "beauty" has always been director Chen Kaige's movie-making style. As is well known, his works are famous for fine and delicate images, which is also why they are so popular: his movies let audiences enjoy visual beauty and visual impact.

3D Scene Making in the Middle Period of "Sacrifice"
Making Construction Drawings with CAD Software. The design drawings, structure drawings and construction drawings of each building in the warring city were completed in AutoCAD, which can not only create basic entities such as straight lines, circles, ellipses, polygons and spline curves, but also offers powerful editing functions to easily move, copy, rotate, array, extend, trim or zoom drawings. Although manually drawn construction drawings have not been completely eliminated, the easy editing and reusability of AutoCAD greatly increase the efficiency of scene design. Owing to this software, and to the hard work of the art directors, the design plan and construction drawings of a warring city covering an area of 152 acres, with a building area of 43,000 square meters, were completed in only two months.

Creating the Three-Dimensional Scene Model in SketchUp. The finished plan drawing of the warring city is imported into SketchUp; the basic layout and walls of the buildings are created according to the plan, and then the model of every single building is created according to its detailed structure drawing. Note that, given the building style of the Spring and Autumn period and the immediacy and changeability of a three-dimensional model, it is very easy to design several different versions of the whole scene layout and then select the best option by comparing the models against the requirements of the plot, which makes the scene art design much better. Additionally, SketchUp can use a virtual camera in place of human eyes to walk at will through the created three-dimensional scene, viewing not only the scale of the whole scene directly but also the shape of every building up close, down to the component structure of a very small building.

Fig. 3 Creation of the Three-Dimensional Model


After the model is created in SketchUp, it needs to be brought into 3ds Max to add texture maps to the scene model and a lighting environment to the scene. Because of compatibility issues between the two programs, the export process differs with the 3ds Max version: if the version is 3ds Max 2010 or earlier, the model should be exported from SketchUp in 3ds or obj format; if the version is 3ds Max 2011, no export is necessary, because 3ds Max 2011 has better compatibility with SketchUp and can read SketchUp files directly.

3D Scene Mapping and Improvement of the Lighting Environment in 3ds Max
The completed scene model file is brought into 3ds Max to create the texture maps and add the lighting environment, and then to complete static atmosphere drawings and dynamic three-dimensional demonstration animation. In making texture maps for the model, the key techniques are texture mapping based on real photographs and the application of materials, including diffuse maps, bump maps, mix maps, procedural maps and so on. The texture maps are made in Photoshop, so this work places high demands on the designer's modeling ability and artistic quality. Because all the buildings in the scene belong to the Spring and Autumn period, all materials used must match the character of that time; but it is hard to find photographs of building surfaces from that era, so the work relies on the artist. For example, the main building material textures of the Zhuang's mansion in the warring city present a princess's courtyard of that time and make a great impression on audiences. Each material type is applied to reflect the real texture of its material, and texture maps with realistic surface quality and tactile feeling create the greatest possible sense of beauty.
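As a minimal illustration of the version rule just described, the decision could be sketched as follows; the helper name is hypothetical, since neither program exposes such a function, and real pipelines involve more than a format choice:

```python
# Hypothetical helper sketching the SketchUp-to-3ds Max transfer rule
# described above.
def sketchup_transfer_format(max_version: int):
    """Return the export format to use, or None when no export is needed."""
    if max_version <= 2010:
        return "3ds"   # "obj" is the alternative accepted by older versions
    return None        # 3ds Max 2011 reads SketchUp files directly
```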

Fig. 4 Three-Dimensional Scene Effect Drawing of the Zhuang's Mansion

Additionally, as is well known, modeling should make the best use of light and color: if the scene is the frame of the modeling, light is its spirit and the main means of showing it. In the art design of "Sacrifice", the art directors made full use of the lighting system in 3ds Max, employing many light types such as floodlights, direct lights and spotlights, and simulated both the natural lighting of all outdoor scenes and the indoor lighting required by the script. This created a real, dignified and beautiful atmosphere across the whole screen, met the requirements of the general art director, Mr. Liu Qing, and was highly praised by audiences. After the materials and lights are set for the scene model, a single static effect drawing can be rendered; for better results, larger effect drawings are normally chosen, but there is no fixed standard of size: it is decided by the composition of the frame.


Fig. 5 Three-Dimensional Scene Effect Drawing of the Xanadu in the Peach Blossom Palace

The passages above explain how to make a static three-dimensional effect drawing. The same scene, with its materials and lighting completed, can then be used to produce a dynamic three-dimensional demonstration animation: one or more virtual cameras are added to the 3ds Max scene, a fixed route is set for each camera according to the director's requirements, and the camera travels along that route, which enables the director to see the established scene clearly. Additionally, virtual characters can be placed in the scene to simulate shooting, enabling the director to feel the atmosphere of the future set in advance and thereby plan the shoot better.

Achievement of the Scene Setup of "Sacrifice". The general conception of art design is not just words: all the design, planning, effect drawings and three-dimensional travel animation done by the art directors in the early stage aim at a final movie set, completed through the teamwork of many set builders, the art director, the producer and the director, together with the most important condition, sufficient capital. After four months of hard work, the Xiangshan Warring City of the Spring and Autumn Period, currently the grandest movie-making city built specially for a single movie, with the largest area and the most complicated technique, a total investment of 120 million RMB, an area of 152 acres and a total building area of 43,000 square meters, was finally completed.

Fig. 6 Photo of the Real Warring City of the Spring and Autumn Period

Conclusion
From the discussion of how three-dimensional technology combines with modern movie art design, together with the real case of the art design of "Sacrifice", it can be concluded that three-dimensional animation technology has greatly accelerated the development of movie art design. It will be a key technique of future movie making, undertaking the work of art creation and art configuration, and will finally step into interactive virtual reality.



© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.170

NEUSOFT SOFTWARE TALENT TRAINING MODE RESEARCH
CUI Wei 1,a, QIAN Si-yu 1, LIN Yan 1, YE Jia 1, LIU Yang 1, YANG Hai-feng 2, LI Ya-jun 2
1 Transportation Management College, Dalian Maritime University, Dalian, Liaoning, P.R. China
2 Development Planning Department, Chongqing Electric Power Corp., Chongqing, P.R. China

[email protected]

Key words: Software service; Business practical training; Complex training

This article has been supported by the Chinese National Natural Science Fund in 2010, "Strategy research of using supply chain partner's knowledge in enterprise knowledge creation process", project approval code: 71072124. It was also supported by the Fund of Liaoning Province reform of higher education "Research of service outsourcing personnel training mode" in 2007, the Fund of Dalian Science and Technology Plan "Research of ways expanding Dalian outsourcing service industry market" in 2008, the Fund of Dalian Science and Technology Plan "Research of Dalian comprehensive prediction system of electric power and energy" in 2009, the Fund of Chongqing Electric Power Corporation "Research of Chongqing comprehensive prediction system of electric power and energy" in 2008 and the Project of Dalian Maritime University reform of graduate education and teaching "Construction of teaching content system of Management Science and Engineering based on the platform of motion and internet of things" in 2010.

Abstract. The software industry needs a large number of software professionals. However, most of what students learn in school is theory, so every software company faces the issue of getting new hires to master work skills as soon as possible to meet the needs of the job. Other issues include the shortage of high-level developers, the general shortage of talent, and the difficulty of updating developers' knowledge. Neusoft has a very good model of personnel training: for college students, it uses a Business Training Center to improve practical skills; for working staff, it takes full account of their existing foundation and uses a composite training system that combines face-to-face teaching, e-learning and a variety of other methods. It thus matches autonomy with strength and meets the staff's diverse training needs at lower cost.

Introduction
In the world today, the information technology industry has become an important point of economic growth, with growth rates as high as 30% to 40% per year. There is no doubt that the IT industry plays a very important role in stimulating world economic growth. Its development plays a huge role in speeding up informationization across all areas of the national economy, overcoming blind and repetitive construction, raising the skill level of other industries and of the entire national economy, improving the quality of workers and labor productivity, transforming traditional industries, upgrading the industrial structure, changing the extensive growth mode, improving the overall quality of the national economy and improving the country's terms of trade.


As an important element of the IT industry, the software industry has been developing rapidly in China in recent years, especially in software and outsourcing services. Both fields have a strong demand for software personnel and require a large number of software talents. At present the universities can supply a large number of talented people, but study in school is too theoretical, and practical experience and knowledge are too thin. With the rapid development of technology and the software industry, school knowledge cannot keep up with the knowledge of the time [1]. Therefore, every software company faces the human resources issue of how to get as many professional talents as possible and how new graduates can master the needed skills and knowledge as soon as possible. This decides whether the company can grow rapidly, grab more resources and seize the market to form a snowball effect. To deal with this problem, after years of practice, Neusoft has formed its own system.

Current Issues
Shortage of high-level developers. At present there are few developers who can undertake high-level design and shape a whole system, not to mention put forward innovative concepts. At the same time, many technically capable staff have problems communicating with others. So it is hard to be entrusted with high-end work from the start when taking overseas commissions.

Shortage of talents. After several years of enrollment expansion in colleges and universities, the number of new graduates has increased rapidly. But because study in school is too theoretical, and most of the knowledge is several years old, it is hard for graduates to start work, and it takes them a long time to learn and practice. This endangers the demand for large numbers of personnel: the huge demand cannot be met quickly.

Difficulty of updating developers' knowledge. Knowledge in the software industry updates fast, so developers need to update their learning and knowledge systems constantly. However, under schedule pressure, it is difficult for the company to provide systematic training and re-learning opportunities to staff. This slows the improvement of staff and hinders the development of high-end capability.

Neusoft's Student Business Training Center
Because talent is in short supply, Neusoft started a specialized software college several years ago. It teaches college students the latest software development theories and knowledge, increases opportunities for practice, and trains graduates to keep pace with the knowledge the industry needs, so that students can start working quickly. At the same time, Neusoft runs customized training classes with many other colleges and universities: students in their junior and senior years can enter a customized training class, and Neusoft arranges the corresponding courses of study and practice according to the needs and latest developments of the industry, so that students keep up with industry developments. By these two measures the company solves the shortage of high-level developers, which ensures the company's rapid development. The software college was founded in 2001 to teach students the latest theoretical knowledge of software. At the same time, the "Student Office & Venture Office" (SOVO) was established, providing college students with technical training, practice and venture guidance. It is the base for college students' teaching and practice, the place for students to learn enterprise operations management and work flow and to accumulate experience, and their window for contact with society.


SOVO is one of the four "Dalian software professionals training bases" of the Dalian government and an important part of Neusoft's education system. Based on the company's years of teaching experience and rich educational resources in academic IT education, face-to-face training and online education, and on numbers of the company's software projects, SOVO aims to provide enterprise development project training to IT students who are still in college and lack work experience, so that students can complete the transformation of "University to Enterprise, Knowledge to Skill". In accordance with the skill requirements of different posts in software enterprises, the center trains the applied talents that IT enterprises urgently need, in areas such as JAVA software development, embedded software development, system management and multimedia design. Relying on the company's standing in the software industry, SOVO cooperates with enterprises and provides them with experienced students and low-cost outsourced projects, so that students can participate in the real work of enterprises before they graduate. SOVO provides students with a broad platform for employment at different levels, realizing a seamless connection between the training of college students and the requirements of enterprises, and opening a barrier-free green channel to employment. At the same time, SOVO forms virtual companies, organized like a real company's structure, for enthusiastic students. It carries out market-oriented virtual operation and employs experts from different levels of the enterprises in the virtual management teams to train and guide the teams regularly. Driven by enterprise market demand, and under the leadership of teachers and company personnel, a team can carry out the research, development and implementation of a project, and can also develop real products. Student teams with creativity are assisted to start a business; when the conditions are ripe, the virtual companies are registered, directly becoming corporate entities. Virtual companies of SOVO are supported by the software company with projects and by the venture capital funds of Dalian Software Park. At present there are more than 10 virtual companies in SOVO, covering students of all majors in all colleges, and through this incubator 3 student virtual companies have successfully become real entities, realizing students' dreams of entrepreneurship.

Fig. 1 Practical training model with pioneering work


Neusoft's Training Complex Education System
For the difficult issue of updating the knowledge of talented people, the company arranges systematic learning and training courses. Because of the company's strength in numbers of personnel, learners' study time is guaranteed, and staff treat such an opportunity for improvement as important, which forms a virtuous cycle. The company has its own postgraduate education system; languages and other subjects can be studied systematically or in spare time, rapidly building up the company's echelon of talent and its development of high-end talent. Take the English training of the company's employees as an example. With the company's accelerating internationalization and the specific requirements of the business, foreign languages have become a necessary tool for staff, and the overall foreign-language ability of employees has become a key factor in the company's success in international competition. The company kept increasing its efforts and inputs in foreign-language training in order to speed up the cultivation of international personnel and enhance staff competitiveness. The implementation makes full use of three organizations' outstanding teaching resources and advanced teaching methods, realizing diverse teaching modes of "online-offline interactivity, face-to-face combined with online learning, self-study strengthened by cooperation". Because the company's internal resources are fully used, a cost-effective outcome has been achieved. The company's foreign-language training program must fully consider the diverse training needs of the staff: geographical and time differences, as well as individual training needs and existing differences in level. For the training of working staff, the biggest problem is that it is difficult for staff to learn face-to-face at the same time and place for a long period, so learning that is mainly E-learning-based is suggested, with face-to-face learning and telephone counseling as assistance. The questions that are commonly fed back are the focus: after the questions are collected, learning methods and key problems are explained. The gathering time of staff is short, the learning is targeted, and the explanation of questions fully complements the online training, so the desired effects are achieved. Based on computer networks, a personalized training solution was designed that integrates evaluation into a personalized learning program: "Proficiency test → Detailed ability analysis → Learning plan and suggestions → Online learning + face-to-face language training + foreign-language telephone counseling → Proficiency re-test". The program mainly trains staff with specific learning needs at different levels and stages. The training lasts at least three months and mainly covers English, but the program is also suitable for other languages, such as Japanese. At first, the training program targeted 100 of the company's backbone staff who needed to learn English. The implementation of the entire training program is divided into four stages: start-up, allocation of learning resources, training with follow-up and management, and effect assessment.


Conclusion
Finally, because of the effective implementation of the two measures, the number of complex high-end talents grows every year, laying a good foundation for the enterprise to develop the high-end market. Although these measures are independent of each other, they are closely related: on the one hand, each small improvement contributes to the solution of other problems and brings good results and a better foundation; on the other hand, human resources management policy has been implemented to keep the brain drain below 7%. In the software industry, this is also a very great success.

References
[1] Haiyan Nie: The world's top companies' employing methods. Administrative and Personnel, Vol. 2 (2000).

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.175

Japan Software Project Risk Management
CUI Wei 1,a, YE Jia 1,b, LIN Yan 1, QIAN Si-yu 1, LIU Yang 1, YANG Hai-feng 2, LI Ya-jun 2
1 Transportation Management College, Dalian Maritime University, Dalian, Liaoning, P.R. China
2 Development Planning Department, Chongqing Electric Power Corp., Chongqing, P.R. China
[email protected], [email protected]

Key words: Demand for change; Japan; Software outsourcing; Risk management

This article has been supported by the Chinese National Natural Science Fund in 2010, "Strategy research of using supply chain partner's knowledge in enterprise knowledge creation process", project approval code: 71072124. It was also supported by the Fund of Liaoning Province reform of higher education "Research of service outsourcing personnel training mode" in 2007, the Fund of Dalian Science and Technology Plan "Research of ways expanding Dalian outsourcing service industry market" in 2008, the Fund of Dalian Science and Technology Plan "Research of Dalian comprehensive prediction system of electric power and energy" in 2009, the Fund of Chongqing Electric Power Corporation "Research of Chongqing comprehensive prediction system of electric power and energy" in 2008 and the Project of Dalian Maritime University reform of graduate education and teaching "Construction of teaching content system of Management Science and Engineering based on the platform of motion and internet of things" in 2010.

Abstract. With the rise of the domestic software outsourcing business with Japan, large numbers of outsourcing projects placed by big Japanese companies such as NEC, Hitachi and Toshiba have not only promoted the growth of the domestic economy but also raised the overall level of the domestic software business. In Dalian, the municipal party committee and government attach great importance to the development of the software industry, and the software industry has become one of the important strategies for revitalizing the old industrial base and accelerating Dalian's economic development. However, winning an order is easy; delivering it is hard, and really doing Japanese outsourcing well is not so simple. How can a project be made successful? What problems will be encountered in the course of the project, and what is their impact? These questions trouble every project manager. They are the risks of the project, and how to manage those risks effectively so that the project succeeds is the theme this article explores.

Introduction
Software outsourcing has become a global trend in the IT industry. Dalian in China enjoys geo-cultural and historical advantages among many others, and with the great importance the Dalian municipal party committee and government attach to the software industry, the industry has become one of the important strategies for revitalizing the old industrial base and accelerating Dalian's economic development. Dalian is also fast becoming a software outsourcing base for Japanese companies. High-speed development is bound to bring many problems, particularly in the management of the software development process. Risk management is an important element of software development; because it does not involve the actual production operation, it is often neglected. However, whether a project succeeds, whether its process is smooth, and whether it sinks into a quagmire are often decided by risk management. The main purpose of this article is to discuss the important role of risk management in projects and how to manage risk effectively.


Risk identification and analysis
Risks can be divided into internal project risks and external risks affecting the project. Internal risks include technical issues, changes in demand, the construction schedule and so on; external influences include the operating system environment, changes of customers or staff, and other sources of uncertainty. Risk identification in project management mainly answers the following questions: which risks should be considered, what are the main factors causing these risks, and how serious are the consequences if the risks occur. Software project risk falls broadly into the following areas: demand, technology, cost, quality and progress. For different projects, the harm done by a given risk, its impact on the project, and its probability of occurrence all differ; specific issues require specific analysis, and the premise of risk management is risk identification and analysis.

Table 1 Risk identification information
No. | Account of the risk | Impact classification | Degree of harm | Probability | Priority | Life cycle | Recognition date | Follow-up frequency
1 | AA | Progress | Fatal | High | A | Before the end of the UT | ××× | Every week

In our current development projects for Japan, risk management is part of the project plan: at the beginning of a project, risks are identified and listed using the Delphi method, marking each specific risk event, its probability of occurrence, and its impact on the project (progress, quality, cost, etc.). The situation after risk identification is shown in Table 1. Risk priority, decided by the damage a risk would cause and its probability of occurrence, is divided into a four-point scale, A/B/C/D, giving the order of priority. In addition to this analysis, each risk should be tracked, and how often it is checked must be fixed to keep it controllable; hence a tracking frequency is also identified for each risk, so that its status and the implementation of countermeasures can be reviewed regularly at that frequency.

Risk control
Risk control is the use of certain techniques, such as prototyping, software automation, software psychology, reliability engineering and project management, as well as methods to circumvent risk or seek to transfer it. The risk identification and analysis done earlier provides a reasonable basis for risk control, and effective risk control builds on actual project management experience. Risk control aims to prevent a risk from occurring and to reduce the impact when it does occur. In one past project we used a new technology that nobody in the project team had mastered. The project manager effectively recognized this risk before the project started, but in later operation did not track the risk well or implement the countermeasures, so a major integration error occurred in the latter part of the project and the project was delayed. This is a real case of failed risk-management control. It shows that valid identification and analysis of risks alone are not enough; the occurrence and impact of risks must also be controlled.
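As an illustration of the A/B/C/D scale, one simple way to combine the degree of harm and the probability of occurrence into a priority is sketched below; the exact mapping is an assumption for illustration and is not given in the paper:

```python
# Illustrative A/B/C/D prioritisation: priority is decided by the harm a
# risk would cause and its probability of occurrence (mapping assumed).
HARM = ["Slight", "Serious", "Fatal"]
PROBABILITY = ["Low", "Medium", "High"]

def priority(harm: str, probability: str) -> str:
    score = HARM.index(harm) + PROBABILITY.index(probability)  # 0..4
    return "DDCBA"[score]  # lowest combined score -> D, highest -> A

# The Table 1 example: a fatal risk with high probability gets priority A.
assert priority("Fatal", "High") == "A"
```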


So how should effective risk control be conducted, so that risk management is done well? Risk management in project implementation is a cycle: as the project proceeds, new risks are continually generated, while some risks are effectively avoided and disappear [1]. In each cycle, effective risk control requires the following:
1) Identify the risks and analyze their attributes (probability, harm, priority, etc.).
2) According to each risk's characteristics, develop corresponding aversion measures for it.
3) Develop emergency measures that minimize the harm once a risk does occur.
4) Define each risk's life cycle and assign a responsible person to follow the risk up regularly.
5) Check the implementation of the above countermeasures regularly and keep an accurate grasp of each risk's situation.
6) Identify new project risks regularly and start the next cycle.

Table 2 Steps for risk aversion
Risk-aversion measures | Responsible | Start time | End time | Emergency measures
××× | ×× | ××× | ××× | ×××

After years of practicing the above process and measures in development projects for Japan, combined with the CMMI system, we have formed our own risk-management methods and norms, and practice has proved the method effective. However, to guarantee the reasonableness of the plan and measures in risk-management planning, professional assessment is needed to ensure that all the work is right. To facilitate the recording and tracking of risk data, the risk identification table above is extended with the risk-aversion measures shown in Table 2. The aversion measures describe what must be done to avoid a risk or to reduce the harm when it occurs. To control a risk effectively, it is best that each risk has a dedicated responsible person in charge of tracking and resolving it. In addition, how to minimize the impact when a risk does occur also needs to be considered, and this is what the "emergency measures" describe.

Table 3 Risk tracking information
State | Follow-up report on risk | Close time
No | ××× |

As identified in the risk analysis, each risk has a tracking frequency; to record the information from each tracking pass effectively, the table is expanded with the risk follow-up information shown in Table 3. In the table, "State" describes the current state of the risk at that point in time (occurred, not occurred, or disappeared), and the follow-up report describes, for that period, the status of the risk, the implementation of its countermeasures, and the status quo. If the risk has ended or can no longer occur, it is closed. Tables 1, 2 and 3 together form a complete template for the process of risk identification, analysis, control and tracking; in this way the entire life cycle of a risk can be controlled intuitively and effectively. Of course, this is just one way of expressing it; similar results can be achieved in other ways.
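To show how the three tables fit together, here is a minimal sketch of a single risk record; the field and method names are illustrative assumptions, not the authors' template:

```python
from dataclasses import dataclass, field

# One record combining Tables 1-3: identification attributes, aversion
# and emergency measures, and the periodic tracking log.
@dataclass
class Risk:
    number: int
    account: str                 # description of the risk, e.g. "AA"
    impact: str                  # progress / quality / cost, ...
    harm: str                    # e.g. "Fatal"
    probability: str             # e.g. "High"
    priority: str                # "A".."D"
    life_cycle: str              # e.g. "before the end of the UT"
    follow_up_frequency: str     # e.g. "every week"
    aversion_measures: list = field(default_factory=list)   # Table 2
    emergency_measures: list = field(default_factory=list)  # Table 2
    tracking_log: list = field(default_factory=list)        # Table 3
    closed: bool = False

    def follow_up(self, state: str, report: str) -> None:
        """Record one periodic check; close the risk once it has ended
        or can no longer occur."""
        self.tracking_log.append((state, report))
        if state in ("occurred and handled", "disappeared"):
            self.closed = True
```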


Summary
Risks and benefits always go hand in hand. In the course of project development, a successful set of risk-management practices can prevent problems and reduce their potential impact on the project; it is an effective prescription for dealing with crises, since handling problems after the fact is never as effective as prevention, and risk management lets us do the prevention work in advance [2]. Over a project's life cycle, a good project manager should balance risk response and risk prevention: when no risk has yet occurred, risk management helps analyze and reduce risks or their probability of occurrence through scientific methods, or transfer them, so that the losses risks would cause are avoided; when risks do occur, risk management helps adopt well-considered solutions and respond quickly, thereby reducing the impact of the risk on the entire project.

References
[1] Guo Peng and Zhu Yu-Ming: A Theoretical Framework of Project Risk Management: A High-tech Project as an Example, Economics and Management, Vol. 3 (2005).
[2] Chen Zhong: Software Project Risk Management, Economic and Social Development, Vol. 12 (2004).

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.179

A Study on Pen-based Input Operation and Tilt Angle of Tablet
Bao Dongxing 1,a, Li Xiaoming 2,b, Xin Yizhong 3,c and Ren Xiangshi 4,d
1 School of Electronic Engineering, Heilongjiang University, China
2 Microelectronics Center, Harbin Institute of Technology, China
3 School of Information Engineering, Shenyang University of Technology, China
4 School of Information, Kochi University of Technology, Japan
[email protected], [email protected]

Key words: Fitts’ law, Steering law, tilt angle of tablet, pen-based input operation

Abstract: In this paper, we designed two experiments to investigate: 1) whether pen-based input operation is affected by the tilt angle of the tablet; 2) at which tilt angle of the tablet users can perform pen-based input with comfort, quickness and accuracy. The experiments include pointing tasks (1-dimensional and 2-dimensional) based on Fitts' law and stroking tasks (linear and circular) based on the Steering law. Each task is performed at 5 tilt angles of the tablet: 0, 15, 30, 45, and 60 degrees.

Introduction
The digital pen is mainly used as a direct input device for tablets. Studies of the pen mainly focus on the modalities of input, such as pointing, stroking, pressing and tilting. G. Ramos et al. [1] investigated the human ability to use stylus pressure to perform discrete target acquisition tasks. X. Bi et al. [2] investigated users' ability to control intentional pen rolling and presented an exploration of the design space of rolling-based interaction techniques. F. Tian et al. [3] utilized pen tilt for interaction: matching cursor shape to tilt angles could improve performance by enhancing stimulus-response compatibility. M. Oshita [4] presented a pen-based intuitive interface to control a virtual human figure interactively; in this design, the position, pressure and tilt of the pen were used to make the figure perform various motions. This review indicates that while there is a rich literature on the pen itself, there has been no investigation into the effect of environmental parameters, such as the tilt angle of the tablet, on pen-based input operation. Thus, this area is ripe for research. When information from the pen is used in an interface, performance can be affected by many factors, and it is necessary to ensure through design that good performance is retained when environmental parameters change. The tilt angle of the tablet is the main such factor affecting pen-based input, because it can change easily, especially during input operation, so further study of its effect is necessary. This study focuses on how pen-based input operation is affected as the tilt angle of the tablet changes. We designed two experiments: 1-dimensional and 2-dimensional pointing tasks based on Fitts' law, and linear and circular stroking tasks based on the Steering law. Each task is performed at 5 tilt angles of the tablet: 0, 15, 30, 45, and 60 degrees. From the results of the experiments and the questionnaire (execution time, error rate, fatigue, and ease of use), conclusions can be obtained about how pen-based input operation is affected by tilting the tablet and about which tilt angle makes pen input easiest for users.


Experimental theories
The experiments in this paper were designed based on Fitts' law and the Steering law. Fitts' law is often used as a model for pointing actions in user interfaces, for example in interface design and input device evaluation. Fitts' law [5] predicts that the farther away and the narrower the target is, the more time is needed to select it. Fitts' law is commonly expressed in the following form [6]:

MT = a + b log2(A/W + 1)    (1)

where MT is the movement time, A is the amplitude between the centers of two targets, W is the width of the target, and a and b are empirically determined constants. The index of difficulty (ID) of the movement is defined as:

ID = log2(A/W + 1)    (2)

Fitts' law is the principal model of pointing, but a pen is also used for actions such as drawing and writing. As a model of such actions, Accot et al. proposed the steering model [7, 8]. The Steering-law formulas for the linear and circular actions are:

ID_l = A/W    (3)

ID_c = 2πr/W    (4)

where, for the linear case, A is the length of the tunnel and W is the width of the tunnel, and, for the circular case, r is the radius of the circular tunnel.
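Expressed in code, the four indices are straightforward. A minimal sketch directly following Eqs. (1)-(4), with a and b as the empirically fitted constants of Fitts' law:

```python
import math

def fitts_id(A: float, W: float) -> float:
    return math.log2(A / W + 1)                 # Eq. (2)

def fitts_mt(a: float, b: float, A: float, W: float) -> float:
    return a + b * fitts_id(A, W)               # Eq. (1)

def steering_id_linear(A: float, W: float) -> float:
    return A / W                                # Eq. (3)

def steering_id_circular(r: float, W: float) -> float:
    return 2 * math.pi * r / W                  # Eq. (4)

# Example: the hardest 1-dimensional condition used below (A = 400,
# W = 20 pixels) has ID = log2(21), roughly 4.39 bits.
```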

Experiments
The experiments designed in this paper include two parts: 1-dimensional and 2-dimensional pointing tasks based on Fitts' law, and linear and circular stroking tasks based on the Steering law. Each task is conducted at 5 tilt angles of the tablet (0, 15, 30, 45, and 60 degrees).

Participants
10 participants take part in the experiments. All of them should have normal or corrected-to-normal vision and be right-handed, and none of them should have formal experience of doing input work with a pen and tablet.

Apparatus
The tablet used for the experiments was a WACOM Cintiq 21UX interactive LCD graphics display tablet. The tablet can tilt from 10 to 90 degrees by adjusting its stand, and the wireless pen used for input came with the LCD tablet. Because a 0-degree condition was needed in the experiments and could not be set with the stand, a base had to be placed under the tablet to obtain 0 degrees. The experimental interface was developed in Java and runs on a PC with the Windows XP operating system.

Experimental design
The tilt angle of the tablet, from 0 to 60 degrees, had 5 levels: 0, 15, 30, 45, and 60 degrees. A protractor with a precision of 1 degree was used to adjust the tilt angle between the tablet and the (horizontal) surface of the table.

For the 1-dimensional left-right pointing task (linear operation), 2 vertical strips (see Fig. 1) were displayed on the screen, one white and one black. The white strip was the target to be pointed at; once it was pointed at, the white and black strips swapped positions. Participants were required to point at the current white strip with the pen as quickly and accurately as possible. In this task, the width (W) of the strips and the amplitude (A) between the centers of the two strips were set at W = 20, 30, 40 pixels and A = 200, 300, 400 pixels, and the combinations were randomized. 10 trials were presented for each A and W combination. Each combination was done at the 5 tilt angles of the tablet, balanced by a Latin square. In total, the task consisted of: 10 participants × 3 widths × 3 amplitudes × 5 tilt angles × 10 repetitions = 4500 trials.

For the 2-dimensional vertical and diagonal pointing task (surface operation), the state of the target on the screen is shown in Fig. 2. The white circle on the screen was the target to be pointed at. At the beginning, the white circle appeared in the center of the screen; once pointed at, it turned black, and a white circle appeared in one of the eight compass directions (North, Northeast, East, etc.) around the central circle. After that circle was pointed at, it disappeared and the central circle became white again as the next target, and so on. Participants were asked to keep pointing at the white circle as quickly and accurately as possible. In this task, the width (i.e. diameter, W) of the targets and the amplitude (A) between target centers were set at W = 20, 30, 40 pixels and A = 150, 250 pixels, and the combinations were randomized. The target could appear at eight locations around the center. In total, the task consisted of: 10 participants × 3 widths × 2 amplitudes × 5 tilt angles × 8 locations = 2400 trials.

Fig. 1 1-dimensional pointing task

Fig. 2 2-dimensional pointing task
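A minimal sketch (not the authors' software, which was written in Java) of the bookkeeping implied above: randomized width-amplitude combinations within a block, with the order of the five tilt angles balanced across participants by a cyclic Latin square:

```python
import itertools
import random

TILT_ANGLES = [0, 15, 30, 45, 60]   # degrees
WIDTHS_1D = [20, 30, 40]            # pixels
AMPLITUDES_1D = [200, 300, 400]     # pixels
REPETITIONS = 10

def tilt_order(participant: int) -> list:
    """One row of a cyclic Latin square: each angle appears exactly once."""
    n = len(TILT_ANGLES)
    return [TILT_ANGLES[(participant + i) % n] for i in range(n)]

def block_trials() -> list:
    """All W x A combinations in random order, 10 trials per combination."""
    combos = list(itertools.product(WIDTHS_1D, AMPLITUDES_1D))
    random.shuffle(combos)
    return [c for c in combos for _ in range(REPETITIONS)]

# 10 participants x 5 angles x 9 combinations x 10 repetitions = 4500
# trials, matching the 1-dimensional task total given above.
```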

For the linear stroking task, a straight tunnel was shown on the screen (see Fig. 3), with a start line on one side and an end line on the other. Participants started behind the start line and stroked over the end line. In this task, the width (W) and the amplitude (A) of the tunnel were set at W = 30, 60 pixels and A = 200, 400 pixels, the combinations were randomized, and the pen could move in two directions. In total, the task consisted of: 10 participants × 2 widths × 2 amplitudes × 2 directions × 5 tilt angles × 10 repetitions = 4000 trials.

For the circular stroking task, a circular tunnel was shown on the screen (see Fig. 4). Participants started from the cross mark and stroked along the circular tunnel in the direction given by the arrow (clockwise or anticlockwise); starting from the cross mark, stroking along the tunnel and returning to the cross mark counted as one trial. In this task, the width (W) and the amplitude (A) of the tunnel were set at W = 30, 50 pixels and A = 300, 600 pixels, the combinations were randomized, and the pen could move in two directions. In total, the task consisted of: 10 participants × 2 widths × 2 amplitudes × 2 directions × 5 tilt angles × 10 repetitions = 4000 trials.

Fig. 3 Straight tunnel (start to goal)

Fig. 4 Circle tunnel

In each task, after the tilt angle of the tablet was set to a certain value, participants were told to sit down at the table and to adjust the chair to a height suitable for completing the task.


Participants would be introduced to the whole experimental process, and warm-up trials would be performed until they felt that the experiment could be started.

Evaluation criterion
Execution time and error rate would be recorded in the experimental records. For the pointing tasks, execution time refers to the time needed to move from one target to the other; if a pointing lands outside the target area, an error is recorded, and that trial is not included in computing execution time. For the stroking tasks, execution time refers to the time taken to stroke along the tunnel from start to end; if the pen strokes off the tunnel, or off the tablet, after crossing the start line, an error is recorded. After the experiments, participants are asked to rate the fatigue of the hand used and the ease of use of each tilt angle of the tablet on a questionnaire with a seven-point scale (1 for worst, 7 for best).

Summary
From the execution time, error rate, fatigue and ease-of-use ratings of each pointing and stroking task at each tilt angle of the tablet, the experiments designed in this paper can be used to verify the effect of the tilt angle of the tablet on pen-based input, and to find a suitable tilt angle that makes users more comfortable and accurate during pen-based interface operation. The experiments can also yield conclusions about the operating characteristics of a pen-based interface as the tilt angle of the tablet changes. These conclusions can guide software design to make pen-and-tablet devices easier to use, and can serve as a reference for setting this environmental parameter in related experiment designs.

Acknowledgment
The study was supported by the Open Fund of the Key Laboratory of Electronic Engineering, College of Heilongjiang Province (Heilongjiang University), P. R. China (DZZD20100033). Li Xiaoming is the corresponding author of this paper.

References
[1] G. Ramos, M. Boulos and R. Balakrishnan: Pressure Widgets. Conference on Human Factors in Computing Systems (2004), p. 487-494.
[2] X. Bi, T. Moscovich, et al.: An Exploration of Pen Rolling for Pen-based Interaction. Symposium on User Interface Software and Technology (2008), p. 191-200.
[3] F. Tian, X. Ao, et al.: The tilt cursor: enhancing stimulus-response compatibility by providing 3D orientation cue of pen. Conference on Human Factors in Computing Systems (2007), p. 303-306.
[4] M. Oshita: Pen-to-mime: A Pen-Based Interface for Interactive Control of a Human Figure. EUROGRAPHICS Workshop on Sketch-Based Interfaces and Modeling (2004), p. 43-52.
[5] P. M. Fitts: The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology (1954), 47: p. 381-391.
[6] I. S. MacKenzie: Fitts' law as a research and design tool in human-computer interaction. Human-Computer Interaction (1992), 7: p. 91-139.
[7] J. Accot and S. Zhai: Beyond Fitts' Law: Models for Trajectory-Based HCI Tasks. Conference on Human Factors in Computing Systems (1997), p. 295-302.
[8] J. Accot and S. Zhai: Performance Evaluation of Input Devices in Trajectory-based Tasks: An Application of Steering Law. ACM CHI (1999), p. 466-472.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.183

Study on Security Management-oriented Business Process Model
Yu Zhiwei 1,a, Ji Zhongyuan 2,b
1 Ningbo Institute of Technology, Zhejiang University, Ningbo, China
2 Lishui University, Lishui, China
a Email: [email protected], b Email: [email protected]

Key words: Security Management, Information Systems, Risk assessment, Business Process model

Abstract. Information systems security is important for the day-to-day operation of business processes. Current business process models cannot describe all the factors that security management is concerned with. A tri-layer business process model, consisting of an agent layer, an activity layer and an asset layer, is presented in this paper. This tri-layer model, which is based on the business process and is convenient for security requirements analysis, expresses the relationships between activities, agents and assets. It captures the essence that information systems are used to support the business process in fulfilling organizational functions. The elements security management cares about, such as assets, personnel and business activities, are included in the model, which forms the basis and communication platform of security management. Finally, the pragmatic value of the tri-layer model is validated through a case: the security management project of a manufacturing enterprise in China.

Introduction
Information systems security has become a key issue for the development of a global information society and is on the list of critical success factors of most major organizations today [1,2]. Information systems security essentially ensures the safety of the business processes supported by the information systems [3]. In this digital era, as organizations use automated information technology (IT) systems to process their information in support of their missions, risk management plays a critical role in protecting an organization's information assets and information mission [4]. Risk assessment is an effective way to assist the decision-making and management of risk, and it is the core of information system security management [5]. Risk assessment identifies and valuates the risks faced by the information systems so as to adjust the methods of security control [5]. Information security determines what needs to be protected and why, what it needs to be protected from, and how to protect it for as long as it exists [6]. Identifying and understanding the objects to be protected is basic and key to security management. A security management-oriented business process model provides an important shared perspective for security experts, enterprise managers and users of information systems. The identification and evaluation of management objects in information system security management mainly rely on subjective methods, such as checklists, brainstorming and questionnaire surveys. Because different levels and different personnel hold different views of the systems and assets, this paper puts forward the security management-oriented business process model to establish a communication platform between the expert group and enterprise personnel, helping to identify and valuate specific objects.


The requirements and principles of a security management-oriented model of information systems are first presented in the paper. In the second part, a detailed description of the model is given and its establishing process is studied. The features of the model are then analyzed, and a case study is provided to demonstrate the role of the method as a communication platform between the description of the business process and risk assessment.

Characteristics of Information Systems and the Principles of Security Management-oriented Modeling
A system is a whole consisting of related, interacting components that fulfill specific functions for an organization. As the outcome of informatization, information systems, which have become the necessary tool to support day-to-day operations, have the following characteristics:
(1) Integrality. Information systems are composed of different kinds of assets which interact in coordination to support the operations of an organization. They form an integrated whole that fulfills predefined functions in an orderly way; information systems are more than a collection of separate assets.
(2) Correlativity. The components of information systems are correlated and interdependent. There must be internal rules that associate those components.
(3) Objective. Information systems are constructed to assist the operation of production, management, finance, sales and so on. The assets are arranged according to the purpose of the information systems.
(4) Adaptability. The function, structure, and operation of information systems differ between organizations. Therefore, besides the internal relationships between components, the organizational environment on which the information systems depend should also be considered.
Information systems are becoming the important support of day-to-day operations. They implement the business process to fulfill the functions of information systems, and contain the related assets that support business process operations. In view of these aspects, the assets should be described in an ordered way from the integrality and the objective of the information systems in a specific organization. The principles of security management-oriented modeling can then be put forward as follows:
(1) The model should reflect the main functions of the information systems. The functions of information systems are decomposed through the operations of the business process; after an analysis of the business process, the function list can be obtained.
(2) The model should express the process by which the function of the information systems is realized. Only when we know how the information systems run can we be clear about the security status and potential risks of the information systems.
(3) The model should embody the structure and internal relationships. Information systems are composed of different assets which are used in coordination to support the organizational objectives and mission, so there must be internal relationships among the activities in the business process. In current risk assessment, each asset is analyzed separately; it is an asset-driven analysis.
(4) The model should be easy to understand. The model is established to express the information systems, to enhance the comprehensibility of the business process and the environment of the information systems, and to form a communication platform between security experts and the organization's agents.


In a word, this model should tell us "who" "does" "something" by "what". It should embody the process by which the information systems fulfill their predefined functions, and reflect the essence that the assets are used in coordination to support the business process.

Security Management-oriented Business Process Model and Its Establishing Process
In order to express the characteristics and the essence of information systems, we established a tri-layer model according to the above principles, as shown in Fig. 1. This tri-layer model contains an agent layer, an activity layer and an asset layer.

Fig.1, The Tri-layer Business Process Model (agent layer of agents 1, 2, 3, …; activity layer of activities connected by logical operators such as "and", "or", "xor", with information flows between activities; asset layer of application systems, workstations and servers with network connections, the physical environment of the assets, and support relations from assets to activities)

This tri-layer business process model includes:
(1) Agent layer. An agent is a person or machine that performs a business process activity. The agent layer is the aggregate of people and machines; we express it as {Agent = agent1, agent2, …, agentM}. More than 80% of successful intrusions into information systems are due to the ignorance, carelessness or laziness of people [7], so the security of the agent layer is the most important part of information systems security.
(2) Activity layer. The activity layer is the aggregate of ordered activities which perform the fixed tasks of the business process; we express it as {Activity = activity1, activity2, …, activityN}. The agent layer and the activity layer reflect "who" performs "something". They present the aim of the information systems and the process of information systems engineering, and embody their nature. On this basis, further analysis of the necessary information system assets can be conducted, and the logical view of the information systems as an integrated whole then forms.
(3) Asset layer. Traditional risk assessment counts among the assets not only the software and hardware of computers, databases, network assets and the physical environment supporting the business process, but also personnel. In this paper the assets refer only to the software and hardware of computers, databases, network assets and the physical environment; personnel are separated out as a special element. We express the asset aggregate as {Asset = asset1, asset2, …, assetK}. Because the asset layer is complex, it can be described as several sub-layers, such as an application system sub-layer, a workstation sub-layer, and a server sub-layer.
This security management-oriented model, constructed from the agent layer, the activity layer and the asset layer, tells us "who" performs "something" with the support of "assets". The model establishes the mapping relationships among agents, assets and activities and describes the complicated interdependent relationships between assets. It tells us the structure, the function, and the process of fulfilling the function of the information systems, and describes the objectives of security management clearly in three layers. To establish this tri-layer model, one should first understand the business process and construct the activity layer, and then obtain the mapping relationships between {Agent} and {Activity} and between {Asset} and {Activity}. The complicated asset layer can be constructed by analyzing the dependency relationships between assets step by step. The guidance of the modeling process is shown in Fig. 2.

Fig.2, The Modeling Process of the Tri-layer Business Process Model (identification and analysis of the function of the IS → sub-function and business process analysis → modeling the activity layer → mapping the agent layer and the activity layer → mapping the asset layer and the activity layer → establishing the tri-layer model)

(1) Modeling the activity layer. After identification of the functions and the business process of the information systems, the activity layer can be modeled, and the relationships between activities can be identified.
(2) Mapping the agent layer and the activity layer. In information systems, every activity is performed by an agent with a predefined role. This step is to find out who performs each activity.
(3) Mapping the asset layer and the activity layer. It is difficult to describe the complicated assets in information systems directly, but it becomes easy to dig them out by mapping the asset layer onto the activity layer. Generally, an activity is a process of dealing with data, carried out on a software system, so the software application systems can be dug out by analyzing what supports the activities. Next come the hardware environment of those application systems, the databases, the servers, the network environment, the physical environment, and so on.

The Advantages of This Model
This security management-oriented business process model has several advantages:
(1) Understandability. There are many modeling methods, such as DFD (Data Flow Diagram), Entity-Relationship (E-R) diagrams, and the IDEF series. These methods are not difficult for experts in information systems modeling, but they are for the general staff of an organization. The security management-oriented business process model can be used as a communication platform for security management because it can be understood not only by security experts but also by the general staff who carry out the business process.
(2) Expression of the internal relations of assets in information systems. Different from asset identification in conventional risk management methods, this model identifies the assets by deriving, step by step, the assets which support the activities of the business process.
(3) Convenience for risk analysis. The model analyzes the structure of the information systems from the business process perspective and then maps the agent layer and the asset layer onto it. It includes all the elements that security management is concerned with and expresses the relationships between them. With this model it is convenient to analyze the transmission of risk within the information systems and its impacts on the business process. A minimal data-structure sketch of such a tri-layer mapping is given after the case study below.


Case Study
This business process model was used in a security management project of a large-scale manufacturing enterprise in China. In this paper we take its CAD system as an example; the model we established is shown in Fig. 3. The main tasks in this CAD system are: (1) the draftsmen draw drafts at the CAD clients; (2) the draftsmen submit the drafts to censors; (3a) if a draft is rejected, the censors feed it back to the draftsmen; (3b) if it passes, the censors submit it to (4) the trustees, who pigeonhole it, and (5) the drawings-printing employees print the drawings. After analyzing these key activities, we can dig out the personnel performing them and the assets supporting their operation.

Fig.3, The Security Management-oriented Model of the CAD System (agent layer: draftsmen, censors, trustees, drawings-printing employees; activity layer: draw → submit → check → pigeonhole → print, with rejected drafts fed back to drawing; asset layer: CAD clients, drawing/checking/pigeonholing/printing workstations, CAD server, file server, printer)
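To make the mappings concrete, here is a minimal Python sketch of the tri-layer model for this CAD case (not from the original paper; the structure and names are illustrative). It answers the kind of question the model is meant to support: which activities and agents are affected when a given asset is at risk.

```python
# A minimal sketch of the tri-layer business process model (illustrative only).
# Agents, activities and assets follow the CAD case study in Fig. 3.

AGENT_OF = {  # activity -> agent performing it ("who" does "something")
    "draw": "draftsman", "check": "censor",
    "pigeonhole": "trustee", "print": "drawings-printing employee",
}

ASSETS_OF = {  # activity -> assets supporting it ("by what")
    "draw": ["CAD client", "drawing workstation", "CAD server"],
    "check": ["CAD client", "checking workstation", "CAD server"],
    "pigeonhole": ["CAD client", "pigeonholing workstation", "file server"],
    "print": ["CAD client", "printing workstation", "file server", "printer"],
}

def impacted_by(asset: str):
    """Trace risk transmission: which activities and agents depend on an asset."""
    activities = [a for a, assets in ASSETS_OF.items() if asset in assets]
    agents = sorted({AGENT_OF[a] for a in activities})
    return activities, agents

if __name__ == "__main__":
    acts, agents = impacted_by("CAD server")
    print("Activities at risk:", acts)   # ['draw', 'check']
    print("Agents affected:", agents)    # ['censor', 'draftsman']
```

In an actual assessment the same traversal would also run in the other direction, assigning activity-level risks down to the mapped agents and assets, as the paper describes.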

This model expresses the purpose of the CAD system, which is mainly used to produce drawings; it describes the CAD system as a whole fulfilling the design process, and reflects the interdependent relationships between the assets that support CAD design. The model tells us what the CAD system is made up of and what its main function is, and it can be easily understood both by the people in the security management group and by the organizational people. The use of this model in the project shows that the model and its modeling process are a good way to understand information systems. As the communication platform, the model was agreed upon and accepted by the staff of the organization. Based on this model, the risks of activities can be analyzed and then assigned to agents and assets, and the impacts of asset risks on the business process can be analyzed and expressed clearly through this security management-oriented modeling.

Summary
Business process modeling is an objective tool to describe systems abstractly. It also provides a communication perspective for the general study of information systems. Addressing the specific requirements of security management and risk assessment of information systems, this paper has presented the security management-oriented modeling and its modeling process. The model describes information systems in terms of their function, their structure, and the business processes they support. Concise lines and symbols are used to establish an understandable business process model, starting from the analysis of the functions and the business process of the information systems, then digging out the activities and the agents performing them, and finally the assets supporting the activities. The advantages of this model were analyzed, and its feasibility and pragmatic value were validated through a case from a security management project of a manufacturing enterprise.

References
[1] S.A. Kokolakis, A.J. Demopoulos, E.A. Kiountouzis: The use of business process modelling in information systems security analysis and design. Information Management & Computer Security, Vol. 8, Iss. 3:107 (2000)
[2] H. Sharon, B. K. Solms, Rossouw von: A business approach to effective information technology risk analysis and management. Information Management & Computer Security, Vol. 4, Iss. 1:19-31 (1996)
[3] YU Zhiwei, TANG Renzhong: Analysis on security objectives of business process elements. Journal of Zhejiang University (Engineering Science), Vol. 41, No. 8:1244-1248, 1270 (2007)
[4] G. Stoneburner, A. Goguen, A. Feringa: Risk Management Guide for Information Technology Systems. NIST Special Publication 800-30, http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf (2010-12-18)
[5] M. Wright: Third Generation Risk Management Practices. Computer Fraud & Security, February 1999: 9-12
[6] C. Alberts, A. Dorofee: Managing Information Security Risks: The OCTAVE Approach. Pearson Education, Inc. (2003)
[7] HE Dequan: No Absolute Security. Netinfo Security, Vol. 10:23 (2002)

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.189

Research into Modeling Methods Based on a Product Presentation Information Base

WANG Zhenya a, HAO Song b, SHI Huihui c
School of Mechanical Engineering, Shandong University, Jinan, Shandong, China
a [email protected], b [email protected], c [email protected]

Key words: product design, database, modeling, product presentation.

Abstract. Traditional industrial design methods are facing unprecedented challenges; shortening the product development cycle and improving design efficiency have become important issues for designers, and various new design methods have emerged, among which the professional design knowledgebase is a very effective one. The goal of this research is to collect a large quantity of product presentation information to establish a professional knowledge base, the Product Presentation Database, through which we explore new thinking about industrial design configuration on the one hand, while on the other hand designers can refer to this database for a wealth of product design information and thereby greatly improve design efficiency.

Introduction
With the development of the information age, traditional industrial design methods are facing unprecedented challenges. Shortening the product development cycle and improving design efficiency have become important issues for designers, and various new design methods have emerged, among which the professional design knowledgebase is a very effective one. The object of this research is to construct a product presentation design database and knowledge database with which designers can shorten their design period and improve their designs effectively. The research also brings forward a design method based on the product presentation database and investigates its application in depth.

Overview of industrial design modeling
Modeling is to mold the particular presentation of an object or to create the presentation of an object. Product modeling design processes the form, material, structure, color and texture of a product and makes it aesthetic by scientific and artistic means in order to pursue a perfect product shape [1]. Thus the theory of modeling also has a great influence on the product modeling design of industrial design. Industrial design modeling means using engineering and artistic methods to model the presentation of a product, taking factors such as function, structure, technology and market relations into account, so as finally to realize harmony and unification of "human-computer-environment". The industrial design modeling method researched here is based on the knowledgebase. After comparing three basic industrial design modeling methods, we decided to adopt the modeling decomposing method. The idea of the modeling decomposing method is to decompose form according to all kinds of needs in order to obtain the best modeling; the decomposing here means decomposing the changeable parts of a form. By decomposing the familiar changeable parts of the forms of existing products we can obtain a knowledgebase integrating design knowledge and cases, called the Product Presentation Database (PPD). The PPD mainly contains three parts: samples of product presentation, sketches of product presentation samples, and descriptions of product presentation samples. Samples of product presentation include key-presses, buttons, screens, etc. Descriptions of product presentation samples comprise descriptions of form, function, material, etc.


Connotation of product presentation
Presentation is the form of things that comes into being in our brain; it is a token of objects or things which possess a vivid presentation [2]. Presentation re-organizes and processes feeling and consciousness; it comes into being under the cooperative action of the perceptual organs. It is a general sensible presentation built on the basis of feeling and consciousness, and is the high-grade form of sensible understanding. Product presentation is the overall impression that the attributes of a product make on our brain. It includes not only outward manifestations such as form, color, material and taste, but also the product structure, technology, etc., and even comprises quality, function and human-product interactions. Product presentation is a systematic concept with a wide extension. Based on the characteristics of presentation, we can divide product presentation into three levels, as follows:
Intuitionistic product presentation. The product presentation of this level refers to those visible, touchable, sensible external attributes such as form, texture and taste.
Recapitulative product presentation. The material aspects of a product such as structure, technology and material cannot deliver direct visual feelings the way intuitionistic product presentation does, so it is hard for most consumers to pay close attention to recapitulative product presentation. The product presentation of this level is expressed indirectly through product appearance and use, forming information that gives the user a recapitulative product presentation.
Exercisable product presentation. Exercisable product presentation is mainly embodied in function, human-product interactions, the relation between product and circumstance, etc. It is a relationship of operating and being operated between human and product. In this relation, the function of the product is the base, and the product also has to accord with ergonomics in order to be used conveniently, comfortably and rationally and to satisfy the material and spiritual needs of humans. Exercisable product presentation usually comprises "convenient", "comfortable", "reliable", "safe" and "efficient".

Factors of product presentation information
Product presentation information is the information that a product impresses on us through all kinds of presentation. Its content is very broad: it comprises not only the exterior look such as form, color, material and taste, but also the structure and technology, and even the quality, function and human-product interactions. We will study seven key kinds of presentation information that play a vital role in modeling design: form, material quality, color, function, structure, technology and human-product interaction.
Form. Form is one of the most powerful factors of product visual presentation, and it is the foundation of product modeling design. By means of the dimensions, shape and proportions of a product and their compositional relations, form can build a certain presentational atmosphere which makes people feel different moods such as hyperbole, implicitness, interest, delight, relaxation or mystery, and gives users certain psychological experiences. Fundamentally, the form of a product expresses the characteristics of the product's functions, brings the material and structural characteristics into play, and shows the rationality of the internal technology. The form information of product presentation has instruction, identification, operability and appeal: it expresses the exterior presentation, how the product is operated, and what functional information it carries [3].
Color. Humans get 90% of their information through vision, and form, color and material all have something to do with vision. Like form, color has a function similar to language and can express information to people. The color information of product presentation comprises hue, lightness, purity and their mutual organizational relations. The functions of color in product presentation design are mainly embodied in four aspects: dividing the form areas of a product; emphasizing particular attributes of product presentation; assisting designers with the form design of product presentation; and expressing product information.


Material. Material is the substantial base of which a product is composed. Product design is a creative activity of turning material into something pragmatic, economical and beautiful, and materials have close relationships with form in product design [4]. Material selection for product presentation mainly depends on the feelings initiated in people by applying different materials such as metal, plastic and wood. The sense of material is divided into visual sense and touching sense. The luster, transparency and surface texture of a product's material give rise to different visual senses, and different material surface textures give rise to different touch feelings.
Structure. The most important objective of a product is function, and structure is the key to guaranteeing that the product function can be realized; it must also support the modeling design of the product. Structure is the foundation of product design and the basis of form, color and material. From the psychological point of view, structure should give people a feeling of being "safe, reliable, strong, stable", and the quality of the structure leaves users with a deep impression.
Function. Function is the carrier nature of a product and the final objective of product design. With function analysis, designers can pin down users' needs and the content and level of each function, improve product competitiveness, realize essential functions, eliminate superfluous functions and consummate defective functions. According to the demand characteristics of users, the functions in product presentation can be divided into material functions and spiritual functions.
Human-product interactions. Design is a tool to serve people, so we should balance the human-product relation between product presentation and people. The relation between human and machine is a mutual relation of operating and being operated; the systematic consideration and realization of this human-machine relation is called human-computer interface design [5]. An excellent human-computer interface design can enhance function, form fine human-product interactions, and bring the characters of human and product into play to improve efficiency, safety and reliability of use.

Product Presentation Database
The modeling decomposing method is one of the basic industrial design modeling methods: the changeable parts of forms are decomposed to obtain a knowledgebase of design knowledge and cases, which is the Product Presentation Database (PPD). The objective of building the PPD is to conclude, integrate and classify product presentations rationally, and to establish a large database which provides convenience and saves time when designers search and consult it. The PPD is divided into three parts: product presentation samples, sketches of product presentation samples, and descriptions of product presentation samples. The PPD is an aggregation used for storing series of parameters of product presentation information; its structure must favor storing and searching, and the knowledge in the PPD must be easy to revise and edit while it is in use. Since the structure of the PPD is similar to a data-storing spreadsheet, Microsoft Excel is chosen as the storage and application platform for the PPD. The PPD is an open and extendible database whose samples have to be renewed and expanded continually, which is a dynamic expanding process. The PPD itself shows only part of the cases and information concretely, while the expanding of the PPD makes it comprehensive and able to meet the demands of a changing and progressing industrial design. The expanding directions mainly comprise extent expansion and depth expansion of functions. Figure 1 shows the specific content of the application and extension model of the product presentation database.
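As a rough illustration of how such a database could be searched and extended, here is a minimal Python sketch (not the authors' implementation; the record fields and CSV storage are assumptions standing in for the Excel workbook described above):

```python
import csv

# Minimal sketch of a Product Presentation Database (PPD) store.
# Each row pairs a presentation sample with its sketch reference and
# description fields, mirroring the three parts of the PPD described above.
# Assumes "ppd.csv" already exists with a header row.

PPD_FILE = "ppd.csv"
FIELDS = ["sample", "sketch", "form", "material", "function"]

def search(keyword: str) -> list:
    """Search step: return samples whose description mentions the keyword."""
    with open(PPD_FILE, newline="", encoding="utf-8") as f:
        return [row for row in csv.DictReader(f)
                if keyword in " ".join(row[k] for k in ("form", "material", "function"))]

def exists(sample_name: str) -> bool:
    """Existence check used before extending the database."""
    with open(PPD_FILE, newline="", encoding="utf-8") as f:
        return any(row["sample"] == sample_name for row in csv.DictReader(f))

def add_sample(record: dict) -> None:
    """Extension step: put a new sample into storage if it is not there yet."""
    if not exists(record["sample"]):
        with open(PPD_FILE, "a", newline="", encoding="utf-8") as f:
            csv.DictWriter(f, fieldnames=FIELDS).writerow(record)

# Example: look for button designs described as plastic.
# hits = search("plastic")
```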


Fig.1 The application and extension model of the product presentation database (design task → enter the database → search for info → if the needed image info exists, pick it up, edit it and apply it to the product modeling design; if not, add the image info and put it in storage, thereby extending the database and renewing the product design)

Conclusion
Based on the research into industrial design modeling methods above, we put forward a modeling method based on the PPD and built a PPD. Certain principles, which can be divided into three steps, should be obeyed when the PPD is used: search the product presentation information samples on the basis of the actual problem and demand; analyze, compare, filter and refine the samples retrieved from the PPD; and apply the presentation information to product modeling by means of decomposing, re-organizing, integrating and other ways. This research brought forward a new modeling method, built a product presentation database, and validated the feasibility and validity of the PPD. The industrial design modeling method based on a knowledgebase, as a new modeling method, is still in its starting phase and needs to be consummated. With more and more in-depth research on product presentation information, the functions of the PPD will be expanded and the industrial modeling method will be perfected.

References
[1] Tom Djajadiningrat, Stephan Wensveen, Joep Frens and Kees Overbeeke: Tangible Products: Redressing the Balance between Appearance and Action. Ubiquitous Computing, Vol. 8 (2004), p. 294-309
[2] Chen Yinghe: The Development of Cognition Psychology (The People's Press in Zhejiang, China 1996)
[3] Michael Tovey: Styling and Design: Intuition and Analysis in Industrial Design. Design Studies, Vol. 18 (1997), p. 5-31
[4] Jim Liske: Industrial Design Material and Treating Handbook (Chinese Water Conservancy and Water and Electricity Press, China 2005)
[5] Brown J, Cunningham S: Programming the User Interface (John Wiley & Sons Inc, USA 1998)

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.193

The Application of Requirement Engineering Model in Large Software Development Process

Tang Rongfa, Huang Xiaoyu
Guilin University of Electronic Technology

Key words: requirement analysis, requirement development, requirement management.

Abstract. Requirements engineering is the initial phase of the software engineering process, in which user requirements are collected, understood, and specified for developing quality software products. The requirements engineering process deserves stronger attention in industrial practice. In this paper, we propose an effective requirements engineering process model that can be used in software development processes to produce a quality product.

Introduction
Requirements are attributes of something which we discover before building products: a condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed documents [1]. A well-formed requirement is a statement of system functionality that satisfies customer needs. There exists a reciprocal interrelationship between human beings and machines in requirements gathering that can assist in producing quality products [2]. Requirements are commonly classified as functional and non-functional [1]. A functional requirement specifies an action performed by a system without considering physical constraints. A non-functional requirement specifies system properties such as environmental and implementation constraints, performance, platform dependencies, maintainability, extensibility, reliability, etc. [1]. Requirements engineering is generally accepted to be the most critical and complex process within the development of socio-technical systems [3, 4, 5]; it plays a multidisciplinary role and shapes the patterns of social interaction. The main reason is that the requirements engineering process has the most dominant impact on the capabilities of the resulting product. Furthermore, requirements engineering is a process in which the most diverse set of product demands comes from the most diverse set of stakeholders. These two reasons make requirements engineering complex as well as critical. Requirements engineering is a systematic approach through which the software engineer collects requirements from different sources and implements them in the software development process; its activities cover the entire system and software development life cycle. The requirements engineering process is iterative, which also means that requirements management is understood as an aspect of the requirements engineering process [9, 10, 11]. Traditionally, requirements engineering is performed at the beginning of the system development life cycle. However, in large and complex systems development, developing an accurate set of requirements that would remain stable throughout the months or years of development has proved impossible in practice. Therefore, requirements engineering is an incremental and iterative process, performed in parallel with other system development activities such as design and coding. Requirements engineering contains a set of activities for discovering, analysing, documenting, validating and maintaining a set of requirements for a system [6]. It is divided into two main groups of activities: requirements development and requirements management. Requirements development covers activities related to discovering, analysing, documenting and validating requirements, whereas requirements management includes activities related to traceability and change management of requirements.


Requirements verification consists of those activities that confirm that the product of a system development process meets its technical specifications. Requirements validation consists of activities that confirm that the behavior of a developed system meets its user needs [7].

REQUIREMENT ENGINEERING PROCESS
The main objective of requirements engineering is to discover quality requirements that can be implemented in software development. The identified requirements must be clear, consistent, modifiable and traceable in order to produce a quality product. In this paper, we propose an effective requirements engineering process model, which is shown in Figure 1. It consists of four main phases: requirements elicitation and development, documentation of requirements, validation and verification of requirements, and requirements management and planning. The requirements elicitation and development phase includes requirements analysis and the allocation and flow-down of requirements. Documentation of requirements includes the identification of requirements and the software and system requirements specifications. The validation and verification phase is concerned with confirming the documented requirements.


Figure. 1: Requirement Engineering Process Model (business, customer, user, information and security requirements, constraints and standards feed requirement elicitation and development; requirements are analyzed, allocated and flowed down, documented in software and system requirement specifications, validated and verified, and managed under requirement change through to the software development phases)

The requirements management and planning phase controls the continuously changing requirements. Each of these activities is further detailed in the following subsections. The proposed process describes requirements engineering for software development systems, in which requirements engineering must be a part of the software development process.


Requirements Elicitation and Development
The requirements elicitation and development phase mainly focuses on examining and gathering desired requirements and objectives for the system from different viewpoints (e.g., customer, users, constraints, the system's operating environment, trade, marketing and standards). The elicitation phase begins with identifying the stakeholders of the system and collecting raw requirements from the various viewpoints. Raw requirements are requirements that have not been analyzed and have not yet been written down in a well-formed requirement notation. The elicitation phase aims to collect different viewpoints such as business requirements, customer requirements, user requirements, constraints, security requirements, information requirements, standards, etc. Typically, the specification of system requirements starts with observing and interviewing people [15].


Figure. 2: Development of Requirements (raw requirements collected from customers and users are analyzed against technical feedback, environment constraints and technicality to yield the representation of the requirements for development)

Furthermore, user requirements are often misunderstood because the system analyst may misinterpret the user's needs. In addition to requirements gathering, standards and constraints play an important role in systems development. The development of requirements may be contextual, as shown in Figure 2. Requirements engineering collects requirements from the customer and the environment in a systematic manner: the system analyst collects raw requirements, performs a detailed analysis, and receives feedback. Thereafter, these outcomes are compared with the technicality of the system, producing the good and necessary requirements for software development [3].
1) Requirements Analysis: The development and gathering of good-quality requirements is the basic activity of any organization that develops quality software products. These requirements are rigorously analyzed within the context of the business requirements. The identified raw requirements may be conflicting, so negotiation, agreement, communication and prioritization of the raw requirements become important activities of requirements analysis. The analyzed requirements need to be documented to enable communication with stakeholders and future maintenance of the requirements and the system. Requirements analysis also refines the software allocation and builds models of the process, data, and behavioral domains that are treated by the software. Prioritizing the software requirements is also part of software requirements analysis.
2) Allocation and Flow-down of Requirements: The purpose of requirements allocation and flow-down is to make sure that all system requirements are fulfilled by a subsystem or by a set of subsystems working together to achieve the objectives. Top-level system requirements need to be organized hierarchically, which helps to view and manage information at different levels of abstraction. The requirements are decomposed down to the level at which they can be designed and tested; thus, allocation and flow-down may be performed over several hierarchical levels, and the level of detail increases as the work proceeds down the hierarchy.


That is, system-level requirements are general in nature, while requirements at low levels in the hierarchy are very specific. The top-level system requirements defined in the system requirements development phase are the main input for the requirements allocation and flow-down phase. As the system-level requirements are developed, the elements to be defined in the hierarchy should also be considered. Allocation and flow-down of requirements include:
a) Allocation of Requirements: Allocation of requirements is an architectural task carried out in order to design the structure of the system and to distribute the top-level system requirements to subsystems. Architectural models provide the context for defining the interactions between applications and subsystems needed to meet the requirements of the system. The goal of architectural modeling is to define a robust framework within which applications and component subsystems may be developed [15]. Each system-level requirement is allocated to one or more elements at the next level. Allocation also includes allocating the non-functional requirements to system elements; each system element will need an apportionment of the non-functional requirements (e.g., performance requirements) [10]. When the functional and non-functional requirements of the system have been allocated, the system engineer can create a model that represents the interrelationships between system elements and sets a foundation for later requirements analysis and design steps.
b) Flow-down of Requirements: Flow-down consists of writing requirements for the lower-level elements in response to the allocation. When a system requirement is allocated to a subsystem, the subsystem must have at least one requirement that responds to the allocation. The lower-level requirements may closely resemble the higher-level ones, or may be very different if the system engineers recognize a capability that the lower-level element must have in order to meet the higher-level requirements. The lower-level requirements are often referred to as derived requirements: requirements that must be imposed on the subsystem(s), derived from the system decomposition process. There are two subclasses of derived requirements, namely subsystem requirements and interface requirements. Subsystem requirements must be imposed on the subsystems themselves but do not necessarily provide a direct benefit to the end user. Interface requirements arise when subsystems need to communicate with one another to accomplish an overall result, such as sharing data, power or a useful computing algorithm. In the allocation and flow-down phase, requirements identification and traceability have to be ensured both to higher-level requirements and between requirements on the same level. The rationale behind design decisions should be recorded in order to ensure that there is enough information for the verification and validation of the next phase's work products and for change management. In theory, this produces a system in which all elements are completely balanced or optimized; in the real world, complete balance is seldom achieved, due to fiscal, schedule, and technological constraints [11]. Allocation and flow-down start as a multi-disciplinary activity, i.e., subsystems may contain hardware, software, and mechanics; initially they are considered as one subsystem, but the different disciplines are considered separately in later iterations. A minimal sketch of such an allocation hierarchy with traceability links follows.
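The following Python sketch (identifiers and requirement texts are invented examples, not taken from any cited standard) keeps allocation and parent links so that every derived requirement can be traced back to the system level:

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative sketch of requirements allocation and flow-down with traceability.
# All identifiers and texts are invented examples.

@dataclass
class Requirement:
    req_id: str
    text: str
    level: int                        # 0 = system level; deeper = more specific
    parent: Optional[str] = None      # traceability link to the higher-level requirement
    allocated_to: list = field(default_factory=list)  # subsystems receiving the allocation

reqs = {
    "SYS-1": Requirement("SYS-1", "Answer a user query within 2 s", 0,
                         allocated_to=["ui-subsystem", "db-subsystem"]),
    # Derived (flow-down) requirements imposed on the subsystems:
    "DB-1":  Requirement("DB-1", "Execute any single query in under 1 s", 1,
                         parent="SYS-1", allocated_to=["db-subsystem"]),
    "IFC-1": Requirement("IFC-1", "UI and DB exchange results over one interface", 1,
                         parent="SYS-1", allocated_to=["ui-subsystem", "db-subsystem"]),
}

def trace_to_root(req_id: str) -> list:
    """Follow parent links upward: each derived requirement must reach system level."""
    chain = [req_id]
    while reqs[chain[-1]].parent is not None:
        chain.append(reqs[chain[-1]].parent)
    return chain

print(trace_to_root("DB-1"))   # ['DB-1', 'SYS-1']
```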
Documentation of Requirements
A formal document is prepared after collecting the requirements; it contains a complete description of the external behavior of the software system. The requirements development process determines which functionality of the system will be performed by software. Non-functional requirements are combined with the functional requirements into the software requirements specification with the help of flow-down, allocation, and derivation. Establishing a software requirements specification for each software subsystem, software configuration item, or component is part of this phase [11]. The documentation of requirements includes requirements identification and requirements specification.
1) Requirements Identification: Requirements identification practices focus on the assignment of a unique identifier to each requirement [6]. These unique identifiers are used to refer to requirements during product development and management. The requirements identification process consists of three sub-activities: the basic numbering activity includes significant numbering and non-significant numbering, whereas the identification activity includes labeling, structure-based identification and symbolic identification.


The last group of techniques supports and automates the management of items, and includes dynamic renumbering, database record identification and baselining of requirements [6].
2) Requirements Specification: The requirements specification document is produced after the successful identification of requirements. The document describes the product to be delivered rather than the process of its development. The software requirements specification (SRS) is an effective tool for requirements specification; it is a complete description of the behavior of the system or software to be developed. It includes a set of use cases that describe all the interactions that users will have with the system or software. In addition to use cases, the SRS also contains non-functional (or supplementary) requirements, which impose constraints on the design or implementation. The SRS is a comprehensive description of the intended purpose and environment of the software under development: it fully describes what the software will do and how it will be expected to perform. An SRS minimizes the time and effort required by developers to achieve the desired goals and also minimizes the development cost. A good SRS defines how an application will interact with system hardware, other programs and users in a wide variety of real-world situations. Parameters such as operating speed, response time, availability, portability, maintainability, footprint, security and speed of recovery from adverse events are evaluated in the SRS.

Requirements Verification and Validation
When all the requirements are described and specified in the SRS, the different parties involved have to agree upon its nature. One should ascertain that the correct requirements are stated (validation) and that these requirements are stated correctly (verification). Validation and verification activities include validating the system requirements against the raw requirements and verifying the correctness of the system requirements documentation. The most common techniques for validating requirements are requirements reviews with the stakeholders, and prototyping. Software requirements need to be validated against the system-level requirements, and the SRS needs to be verified; verification of the SRS covers the correctness, consistency, unambiguousness and understandability of the requirements. In requirements verification and validation, a requirements traceability mechanism can generate an audit trail between the software requirements and the finally tested code. Traceability should be maintained to system-level requirements, between software requirements, and to later phases, e.g., architectural work products. The outcome of the software requirements development phase is a formal document including a baseline of the agreed software requirements.

DISCUSSION
The proposed requirements engineering process is more effective in producing quality requirements. Other requirements engineering processes cover only a few dimensions of requirements engineering, such as requirements elicitation, requirements specification, and requirements verification and validation [10]. The proposed model introduces all the important and hidden aspects of the requirements engineering process. The existing requirements engineering process models are unable to connect their phases with the software development process in the right manner [10].
We relate all the important aspects of the requirements engineering process to the software development process in order to find good requirements from various sources that can be implemented in the software development process for producing quality software products. We have also related the requirements management and planning phase to the software development phases in our model, because requirements can change over time and during the software development process, which can lead to bad outcomes. Therefore, it is necessary to manage continuously changing requirements through requirements management and planning.


CONCLUSION
Requirements engineering is a very important activity, which can affect the entire software development project. It is one of the most important tools for gathering requirements and is concerned with analyzing and documenting them [8]. We have proposed an effective model of the requirements engineering process for software development, discussed in detail with its various phases above, together with a comparative discussion of the proposed model against existing models. Concluding remarks and future research work complete the paper.

References
[1] P. Jalote: An Integrated Approach to Software Engineering, 3rd edition. Narosa Publishing House, India, 2005.
[2] G. Ropohl: "Philosophy of Socio-technical Systems". In Society for Philosophy and Technology, 1999, pp. 59-71.
[3] D. Pandey, U. Suman, A. K. Ramani: "Social-Organizational Participation Difficulties in Requirement Engineering Process - A Study". National Conference on Emerging Trends in Software Engineering and Information Technology, Gwalior Engineering College, Gwalior, 2009.
[4] N. Juristo, A. M. Moreno, A. Silva: "Is the European Industry Moving Toward Solving Requirements Engineering Problems?" IEEE Software, 2002, pp. 70-77.
[5] S. Komi-Sirvio, M. Tihinen: "Great Challenges and Opportunities of Distributed Software Development - An Industrial Survey". Fifteenth International Conference on Software Engineering and Knowledge Engineering, SEKE2003, San Francisco, 1-3 July 2003.
[6] J. Siddiqi: "Requirement Engineering: The Emerging Wisdom". IEEE Software, 1996, pp. 15-19.
[7] I. Sommerville, P. Sawyer: Requirements Engineering: A Good Practice Guide. John Wiley & Sons, 1997.
[8] D. Pandey, U. Suman, A. K. Ramani: "Impact of Requirement Engineering Practices in Software Development Processes for Designing Quality Software Products". National Conference on NCAFIS, DAVV, Indore, 2008.
[9] R. Stevens, P. Brook, K. Jackson, S. Arnold: Systems Engineering - Coping with Complexity. Prentice Hall, London, 1998.
[10] G. Kotonya, I. Sommerville: Requirements Engineering: Processes and Techniques. John Wiley & Sons, 1998.
[11] J. D. Sailor: System Engineering: An Introduction. IEEE System and Software Requirements Engineering, IEEE Computer Society Press Tutorial, 1990.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.199

A Revised BMM and RMM Algorithm of Chinese Automatic Words Segmentation

Qu Huiyan a, Zhao Wei (Corresponding author) b
Institute of Information Technology, Jilin Agricultural University, ChangChun 130118, China
a [email protected], b [email protected]

Key words: Maximum Matching Method; Bound Maximum Matching Method; Reverse Maximum Matching Method; First Matching the Maximum Word-Length.

Abstract. The principle of the Maximum Matching Method (MM) is "First Matching the Maximum Word-Length". At present, however, implementations of the Maximum Matching Method do not embody this principle well. In order to embody it fully, a revised BMM and RMM algorithm for Chinese automatic word segmentation is put forward, and its algorithm is given.

Introduction
Chinese word segmentation is a prerequisite of Chinese natural language understanding, the most basic step of machine translation within natural language understanding, and one of the remaining difficulties in Chinese information processing. Chinese word segmentation has been studied for over twenty years, during which some two dozen automatic segmentation methods have been proposed [1]. The segmentation methods can be summarized into two categories: one combines dictionaries with statistical mechanisms [6, 7, 8, 9]; the other is based on rule-driven expert systems [4, 5]. Either type of method will produce wrong segmentations, but if the algorithm used in the word segmentation is good, the number of wrongly segmented sentences is significantly reduced, and an appropriate disambiguation strategy will further improve the segmentation accuracy. Mechanical segmentation methods of the first category have the advantage of being easy to implement but the disadvantage of limited accuracy; the rule-based methods of the second category have the advantage of high accuracy but are not easy to implement. It is therefore worth studying whether the mechanical segmentation method can be improved so that it both satisfies a certain degree of precision and remains easy to implement. According to current surveys of Chinese word segmentation, mechanical segmentation methods can be divided by text scanning order into forward scanning, reverse scanning and two-way scanning, and by matching principle into maximum matching, minimum matching, word matching and best matching [2]. Chinese word segmentation should not use minimum matching, because in modern written Chinese almost every character can be a word (or be used as an independent morpheme); with minimum matching, almost every field would be cut into single characters in each round of matching, which is obviously insufficient [3]. The principle of the Maximum Matching Method is "First Matching the Maximum Word-Length"; at present, however, implementations of the method do not embody this principle well. This paper improves the method on the basis of the original BMM and RMM, so that it can find the longest words over the entire range of the sentence, fully embodying the "longest-word priority" principle, hence reducing the number of wrongly segmented sentences and further improving the accuracy.


Design of the sub-dictionary
The design of this Chinese word segmentation algorithm is based on dictionary word segmentation. The sub-dictionary is a fundamental component of the system, from which the automatic segmentation system obtains all types of information. For dictionary-based segmentation, the mainstream approach, the segmentation accuracy depends on how effectively the dictionary supports accurate and unambiguous segmentation, while the segmentation speed depends on the design of the dictionary structure. This dictionary consists of two sub-sections: the first-word Hash index table and the dictionary text (word list).
The first-word Hash index table. The Chinese standard GB2312 encodes 6763 Chinese characters, whose frequency of use reaches 99%. When designing the first-word Hash index, each character in the GB2312 encoding table is given a unique mapping into the first-word Hash index table: the offset of each character determines the position of the character in the Hash index table, and its value lies in the range [0, 6767]. The character offset is calculated as follows:
offset = (c1 - 176) * 94 + (c2 - 161) (1)
where offset represents the offset of the character, c1 is the first byte of the character, and c2 is the second byte.
Each entry of the first-word Hash index table has the following structure:
C | MaxWordLen | Start | WordNum
Among them, C is a single Chinese character; MaxWordLen is the maximum length among all words led by the character C; Start is the starting position, in the dictionary body, of the words whose first character is C; and WordNum is the number of words led by the character C.
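As a rough illustration (not the authors' code; the toy word list and function names are invented), the following Python sketch builds such an index using formula (1) and restricts lookup to the Word[Start .. Start + WordNum - 1] slice:

```python
# Sketch of the first-word Hash index over a GB2312-encoded word list.

def gb2312_offset(ch: str) -> int:
    """Formula (1): offset = (c1 - 176) * 94 + (c2 - 161)."""
    c1, c2 = ch.encode("gb2312")
    return (c1 - 176) * 94 + (c2 - 161)

# Toy word list, sorted so that words sharing a first character are contiguous.
WORDS = sorted(["中国", "中国人", "中文", "分词", "分析"])

# Build the index: offset -> (MaxWordLen, Start, WordNum).
index = {}
for pos, w in enumerate(WORDS):
    off = gb2312_offset(w[0])
    max_len, start, num = index.get(off, (0, pos, 0))
    index[off] = (max(max_len, len(w)), start, num + 1)

def words_led_by(ch: str):
    """Look up only the slice WORDS[Start : Start + WordNum], avoiding useless search."""
    max_len, start, num = index.get(gb2312_offset(ch), (0, 0, 0))
    return max_len, WORDS[start:start + num]

print(words_led_by("中"))   # (3, ['中国', '中国人', '中文'])
```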

Dictionary text (word list). This work uses 48,513 words as the contents of the sub-dictionary; every first character appears in the GB2312 code table. To find a particular word, first calculate the offset of its first character and locate that character's entry in the first-word Hash index table; then read out the Start and WordNum field values, compute the position in the dictionary body of the last word prefixed by that character, and search for the word in the word table between Word[Start] and Word[Start + WordNum - 1]. This avoids a great deal of useless searching and improves the efficiency of word lookup.

Improved Algorithm of BMM and RMM
Definition: Let S = C1C2…Cn be the sentence to be segmented (a continuous stream of Chinese characters), where n ∈ N, N is the set of natural numbers, Ci (i = 1, 2, …, n) is the i-th character of the sentence, and SLength is the length of the sentence S to be segmented.
Improved Bound Maximum Matching Method (BMM). Algorithm design. For the Chinese sentence to be segmented, first use the first-word Hash index table to find, for each character, the maximum length of the words it leads, and take the maximum of these values by comparison. If this maximum is greater than SLength, the length of the sentence, take SLength as the longest candidate word length i; otherwise take the maximum itself. Then, going from the first character of sentence S towards the back, the first character, the second character, …, up to the (SLength - i + 1)-th character are treated in turn as described below.


values MaxWordLen of the first word in Hash index table with I, if the value is less than i, then the character has no words of length of i, and deal with the next adjacent character after it; If the value is greater than i, to intercept a string of length i backward from the the beginning of this character, make it match the term in the table, if you can not find a word in the table to match current positive interception string, then deal with the next character adjacent to current; otherwise put this string out as a word segmentation from the sentence, take the original sentence in the left and right parts of this string as two new sentence, the recursive call in this process. If all the matches are not successful, then it illustrates that there is no word of length i in the sentence, then start looking for a length of i-1 of the word, repeat the process until the entire sentence to be cut points. Algorithm is as follows. 1.Read sentences S to be sub-word in, calculate the length as SLength of the sentence S, if SLength = 0, switch11; or switch 2; 2.Find the number of length maximum of the word of each character in the first word Hash index table in S, that is MaxWordLen field values; 3.Compare and remove the maximum MaxValue= Max {MaxWordLen i}; (i = 1,2, ... ..., n) 4.If SLength> MaxValue, then i = MaxValue, turn 6; otherwise i = SLength, switch 6; 5.Then re-assigned i by the result of i-1, switch 6; 6.From the start word of the sentence to the back, if the first SLength-i +1 characters of the sentence S have been processed, switch 5; otherwise, a positive turn with the next character, switch 7; 7.Read the current Chinese characters of the sentence, find out the index table field values MaxWordLen of the word in the first word Hash, if MaxWordLen MaxValue, then i = MaxValue, turn 6; otherwise i = SLength, switch 6; 5.Then re-assigned i by the result of i-1, switch 6; 6.From the start word of the sentence to the back, if the first SLength-i +1 characters of the sentence S have been processed, switch 5; otherwise, a reverse turn with the next character, switch 7; 7.Read the current Chinese characters of the sentence, find out the index table field values MaxWordLen of the word in the first word Hash, if MaxWordLen 0 such that dist ( x, P) ≤ η1‖ [ D1x − d‖‖ ] ∀x ∈ Rn. 1 + ( D2 x − d 2 )+‖ We also give the definition of projection operator and some relate properties([16]). For nonempty closed convex set Ω ⊂ R n and any vector x ∈ R n , the orthogonal projection of x onto Ω , i.e., argmin{‖y − x‖| y ∈Ω} , is denoted by PΩ ( x) . Lemma 2.1 For any u, v ∈ R n , then ‖PΩ (u ) − PΩ (v‖ ) ≤‖u − v‖. Theorem 2.1 For compact set Ω , there exist a constant η1 > 0 such that dist( x, X * ) ≤ η2{‖V ( Nx + q‖‖ ) + B(Mx + p‖ ) + r ( x)}, ∀x ∈Ω. where r ( x ) = min { A( Mx + p ) , U ( Nx + q )} . Proof Since Ω is bounded, using the proof of Corollary 3.2 in [17] to (1), there exists constants ρ > 0 such that dist ( x, X * ) ≤ ρ r ( x), ∀x ∈Ω ∩ Ω, (2) where Ω = {x ∈ R n | V ( Nx + q) = 0, B(Mx + p) = 0} . For any x∈Ω , we only need to first project x to Ω , i.e., there exists a vector x ∈Ω such that‖x − x‖= dist ( x, Ω) . Since ‖r ( x ) − r ( x ‖ ) =‖min{ AF ( x),UG ( x)} − min{ AF ( x ),UG ( x )}‖ =‖[ AF ( x) − PR+ ( AF ( x) − UG( x))] − [ AF ( x ) − PR+ ( AF ( x ) − UG( x ))]‖

≤ ‖AF(x) − AF(x̄)‖ + ‖P_{R₊}(AF(x) − UG(x)) − P_{R₊}(AF(x̄) − UG(x̄))‖
≤ ‖AF(x) − AF(x̄)‖ + ‖(AF(x) − UG(x)) − (AF(x̄) − UG(x̄))‖
≤ 2‖AF(x) − AF(x̄)‖ + ‖UG(x) − UG(x̄)‖
≤ 2‖AM‖‖x − x̄‖ + ‖UN‖‖x − x̄‖ = (2‖AM‖ + ‖UN‖)‖x − x̄‖ = (2‖AM‖ + ‖UN‖)·dist(x, Ω̄),


where the second inequality is by the non-expansiveness of the projection operator (Lemma 2.1). Combining these, we have
‖r(x̄)‖ ≤ ‖r(x)‖ + (2‖AM‖ + ‖UN‖)·dist(x, Ω̄).   (3)
Combining (2) with (3), we have
dist(x, X*) ≤ dist(x, Ω̄) + dist(x̄, X*)
≤ dist(x, Ω̄) + ρ‖r(x̄)‖
≤ dist(x, Ω̄) + ρ(‖r(x)‖ + (2‖AM‖ + ‖UN‖)·dist(x, Ω̄))
= ρ‖r(x)‖ + [ρ(2‖AM‖ + ‖UN‖) + 1]·dist(x, Ω̄)
≤ η₂(‖r(x)‖ + ‖B(Mx + p)‖ + ‖V(Nx + q)‖),

where the last inequality is by Proposition 2.1 and η₂ = ρ(2‖AM‖ + ‖UN‖) + 1. The error bound in Theorem 2.1 extends Theorem 2.1 in [13] and Corollary 3.2 in [17].
In the following we establish another type of error bound on a compact set via the Fischer function ([18]) φ: R² → R¹ defined by
φ(a, b) = √(a² + b²) − a − b, ∀a, b ∈ R.
Besides the basic property
φ(a, b) = 0 ⇔ a ≥ 0, b ≥ 0, ab = 0, for any (a, b) ∈ R²,
it also holds that ([19])
(2 − √2)|min(a, b)| ≤ |φ(a, b)| ≤ (2 + √2)|min(a, b)|.   (4)
For arbitrary vectors a, b ∈ Rⁿ we define a vector-valued function Φ(a, b) with Φ_i(a, b) = φ(a_i, b_i) for 1 ≤ i ≤ n. Based on this mapping we can build a system of equations via the vector-valued function Ψ: Rⁿ → R^{s+m+t} given by
Ψ(x) := [Φ(AF(x), UG(x)); V(Nx + q); B(Mx + p)].   (5)
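For concreteness, here is a minimal numpy sketch of evaluating the residual (5); the identifications F(x) = Mx + p and G(x) = Nx + q are inferred from the proof of Theorem 2.1, and all names are our own assumptions:

```python
import numpy as np

def fischer(a, b):
    # Componentwise Fischer function: phi(a, b) = sqrt(a^2 + b^2) - a - b.
    return np.sqrt(a**2 + b**2) - a - b

def psi(x, A, U, B, V, M, N, p, q):
    """Residual Psi(x) of (5); Psi(x) = 0 iff x solves the GLCP (Theorem 2.2)."""
    F = M @ x + p
    G = N @ x + q
    return np.concatenate([fischer(A @ F, U @ G), V @ G, B @ F])
```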

Certainly, we have the following conclusion.
Theorem 2.2 x* ∈ Rⁿ is a solution of the GLCP if and only if Ψ(x*) = 0.
Clearly, using (4), (5) and Theorem 2.1, we have the following easily verified result.
Theorem 2.3 For a compact set Ω ⊂ Rⁿ there exists a constant η₃ > 0 such that
dist(x, X*) ≤ η₃‖Ψ(x)‖, ∀x ∈ Ω.
Clearly, this bound extends Theorem 2.1 in [13], Lemma 1 in [20] and Corollary 3.2 in [17].

Algorithm and Convergence
In this section we show, based on the error bound results of the previous section, that the L-M method for the GLCP also has a quadratic rate of convergence; the method was first introduced for the GLCP by Wang ([2]), but this result was not given there. The system Ψ(x) = 0 is nonsmooth; thus let φ_τ: R² → R denote the smoothed Fischer-Burmeister function
φ_τ(a, b) = √(a² + b² + 2τ²) − a − b,
where τ > 0 is a smoothing parameter. To make this precise, write Θ_τ(y, z) = (φ_τ(y₁, z₁), φ_τ(y₂, z₂), …, φ_τ(yₙ, zₙ)) ∈ Rⁿ and define the mapping F: Rⁿ × (0, ∞) → R^{s+m+t} × (0, ∞) by
F(x, τ) := [Θ_τ(AF(x), UG(x)); V(Nx + q); B(Mx + p); τ],   (6)

and we define the real-valued function f: Rⁿ × (0, ∞) → R,
f(x, τ) := F(x, τ)ᵀF(x, τ) = ‖F(x, τ)‖².   (7)
Obviously, x* ∈ X* ⇔ (x*, 0) solves F(x, τ) = 0. In this paper we are interested in developing a method for F(x, τ) = 0; let Ω* be the solution set of F(x, τ) = 0. We now give some properties of the function φ_τ(a, b) from [21] and [22].


Lemma 3.1 The function φ_τ(a, b) has the following properties:
(i) φ_τ(a, b) is continuously differentiable on R² × (0, +∞) and strongly semismooth at any (a, b, τ) ∈ R² × [0, +∞), i.e.,
φ_{τ+Δτ}(a + Δa, b + Δb) − φ_τ(a, b) − Vᵀ(Δa, Δb, Δτ) = O(‖(Δa, Δb, Δτ)‖²),
where V ∈ ∂φ_{τ+Δτ}(a + Δa, b + Δb), (Δa, Δb, Δτ) → 0, and ∂φ is the generalized gradient of φ in the sense of Clarke.
(ii) |φ₀(a, b) − φ_τ(a, b)| ≤ √2·τ, ∀(a, b, τ) ∈ R² × (0, +∞).
Applying Lemma 3.1 to (6), we have the following result.
Theorem 3.1 The function F(x, τ) has the following properties:
(i) F(x, τ) is continuously differentiable on Rⁿ × (0, ∞), and is locally Lipschitz continuous and strongly semismooth on Rⁿ × (0, ∞), i.e., for any (x, τ) ∈ Rⁿ × (0, ∞) there exist L₁ > 0, L₂ > 0 and b₁ > 0 such that
‖F(x + Δx, τ + Δτ) − F(x, τ)‖ ≤ L₁‖(Δx, Δτ)‖,   (8)
‖F(x + Δx, τ + Δτ) − F(x, τ) − Vᵀ(Δx, Δτ)‖ ≤ L₂‖(Δx, Δτ)‖²,   (9)
for all (Δx, Δτ) ∈ N(0, b₁) := {(Δx, Δτ) | ‖(Δx, Δτ)‖ ≤ b₁, τ + Δτ ≥ 0} and all V ∈ ∂F(x + Δx, τ + Δτ), where ∂F(x, τ) is the generalized gradient in the sense of Clarke.
(ii) By Theorem 2.3, for any solution (x*, 0) ∈ Ω* there exist a neighborhood N((x*, 0), b₂) := {(x, τ) | ‖(x, τ) − (x*, 0)‖ ≤ b₂, τ ≥ 0} of (x*, 0) and a constant c₁ > 0 such that, for any (x, τ) ∈ N((x*, 0), b₂),
dist((x, τ), Ω*) ≤ c₁‖F(x, τ)‖.
Proof (i) This is a direct consequence of Lemma 3.1.
(ii) By Theorem 2.3 there exists a constant 1 > b₂ > 0 such that dist(x, X*) ≤ η₃‖Ψ(x)‖ for all x ∈ N(x*, b₂) := {x ∈ Rⁿ | ‖x − x*‖ ≤ b₂}, where dist(x, X*) = ‖x − x̄‖ with x̄ ∈ X*. From Lemma 3.1(ii) we get
‖Θ₀(AF(x), UG(x))‖ − ‖Θ_τ(AF(x), UG(x))‖ ≤ ‖Θ_τ(AF(x), UG(x)) − Θ₀(AF(x), UG(x))‖
= (Σ_{i=1}^{s} (φ_τ((AF(x))_i, (UG(x))_i) − φ₀((AF(x))_i, (UG(x))_i))²)^{1/2} ≤ √(2s)·τ;

combining this, for any (x, τ) ∈ N((x*, 0), b₂) we have
dist(x, X*) = ‖x − x̄‖ ≤ ‖(x, τ) − (x̄, 0)‖ ≤ ‖x − x̄‖ + τ ≤ η₃‖Ψ(x)‖ + τ
= η₃(‖Θ₀(AF(x), UG(x))‖₁ + ‖V(Nx + q)‖₁ + ‖B(Mx + p)‖₁) + τ
≤ η₃(√s·‖Θ₀(AF(x), UG(x))‖ + ‖V(Nx + q)‖₁ + ‖B(Mx + p)‖₁) + τ
≤ η₃(√s·‖Θ_τ(AF(x), UG(x))‖ + ‖B(Mx + p)‖₁ + ‖V(Nx + q)‖₁) + (√(2s)·η₃ + 1)τ
≤ η₃(‖Θ_τ(AF(x), UG(x))‖₁ + ‖V(Nx + q)‖₁ + ‖B(Mx + p)‖₁) + (√(2s)·η₃ + 1)τ
≤ (√(2s)·η₃ + 1)(‖Θ_τ(AF(x), UG(x))‖₁ + ‖V(Nx + q)‖₁ + ‖B(Mx + p)‖₁ + τ)
≤ √(s + m + t + 1)·(√(2s)·η₃ + 1)·‖F(x, τ)‖.
In the following, a smoothing Levenberg-Marquardt method for solving the GLCP is outlined; it is similar to that in [2,8].
Algorithm 3.1
Step 1: Choose any point x⁰ ∈ Rⁿ and τ⁰ > 0, and parameters ε ≥ 0, σ > 0. Let k = 0.
Step 2: If ‖∇f(xᵏ, τᵏ)‖ ≤ ε, stop; otherwise go to Step 3.
Step 3: Choose Hᵏ as the Jacobian of F(xᵏ, τᵏ) and set µᵏ = ‖F(xᵏ, τᵏ)‖². Let dᵏ = (Δxᵏ, Δτᵏ) ∈ R^{n+1} be the solution of the following strictly convex quadratic program:


min ‖F(xᵏ, τᵏ) + Hᵏd‖² + µᵏ‖d‖²
s.t. |Δτ| ≤ τᵏ/(1 + µᵏ),
‖xᵏ + Δx‖ ≤ σ.
Step 4: Set xᵏ⁺¹ := xᵏ + Δxᵏ, τᵏ⁺¹ := τᵏ + Δτᵏ, k := k + 1, and go to Step 2.
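To make Step 3 concrete, the following sketch performs one unconstrained Levenberg-Marquardt step; minimizing ‖F + Hd‖² + µ‖d‖² through its normal equations is a standard reformulation (the two constraints of Algorithm 3.1 are omitted here), and eval_F and jac_F are assumed to be supplied by the user:

```python
import numpy as np

def lm_step(x, tau, eval_F, jac_F):
    """One unconstrained step of Algorithm 3.1: with mu = ||F||^2, the
    minimizer of ||F + H d||^2 + mu ||d||^2 solves (H^T H + mu I) d = -H^T F."""
    z = np.append(x, tau)
    F = eval_F(z)                      # F(x, tau) from (6)
    H = jac_F(z)                       # Jacobian of F at (x, tau)
    mu = float(F @ F)                  # mu_k = ||F(x^k, tau^k)||^2
    d = np.linalg.solve(H.T @ H + mu * np.eye(z.size), -H.T @ F)
    return z[:-1] + d[:-1], z[-1] + d[-1]
```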

In the following convergence analysis we assume that Algorithm 3.1 generates an infinite sequence. Using (8)-(10) and combining (6)-(7) with the proof of Theorem 2.1 in [8], we obtain the convergence, indeed quadratic convergence, of Algorithm 3.1.
Theorem 3.2 Let {(xᵏ, τᵏ)} be the sequence generated by Algorithm 3.1, and assume that the initial point (x⁰, τ⁰) is chosen sufficiently close to (x*, 0). Then dist((xᵏ, τᵏ), Ω*) converges to 0 quadratically; moreover, the sequence {(xᵏ, τᵏ)} converges locally quadratically to a solution (x̂, 0) ∈ Ω* ∩ N((x*, 0), b₂/2).
Theorem 3.2 shows that the L-M method has a quadratic rate of convergence without requiring the existence of a nondegenerate solution; this extends the convergence result in [2] and is a new result for the GLCP.

Conclusions
In this paper we established global error bounds for the generalized linear complementarity problem arising in engineering modeling, extending those known for the classical linear complementarity problem. The error bound estimation can be used to establish a fast convergence rate of the L-M method for solving the GLCP in place of a nonsingularity assumption, just as was done for nonlinear equations; this is a topic for future research.

Acknowledgment
This work was supported by the Natural Science Foundation of China (Grant No. 10771120), the Shandong Provincial Natural Science Foundation (Grants No. Y2008A27 and ZR2010AL005), and the Shandong Provincial Education Department Science and Technology Planning Project (Grant No. J08L157). The authors wish to give their sincere thanks to the editor and the anonymous referees for their valuable suggestions and helpful comments, which improved the presentation of the paper.

References
[1] R. Andreani, A. Friedlander and S.A. Santos, On the resolution of the generalized nonlinear complementarity problem, SIAM J. Optim., 12 (2001), p. 303-321.
[2] Y.J. Wang, F.M. Ma and J.Z. Zhang, A nonsmooth L-M method for solving the generalized nonlinear complementarity problem over a polyhedral cone, Appl. Math. Optim., 52(1) (2005), p. 73-92.
[3] M.C. Ferris and J.S. Pang, Engineering and economic applications of complementarity problems, SIAM Review, 39(4) (1997), p. 669-713.
[4] F. Facchinei and J.S. Pang, Finite-Dimensional Variational Inequality and Complementarity Problems, Springer, New York, 2003.
[5] G.T. Habetler and A.L. Price, Existence theory for generalized nonlinear complementarity problems, J. Optim. Theory Appl., 7 (1971), p. 223-239.
[6] S. Karamardian, Generalized complementarity problem, J. Optim. Theory Appl., 8(1) (1971), p. 161-167.


[7] C. Kanzow and M. Fukushima, Equivalence of the generalized complementarity problem to differentiable unconstrained minimization, J. Optim. Theory Appl., 90(3) (1996), p. 581-603.
[8] C. Kanzow, N. Yamashita and M. Fukushima, Levenberg-Marquardt methods for constrained nonlinear equations with strong local convergence properties, http://citeseer.ist.psu.edu/596808.html, 2002.
[9] N. Yamashita and M. Fukushima, On the rate of convergence of the Levenberg-Marquardt method, Computing [Suppl.], 15 (2001), p. 239-249.
[10] J.S. Pang, Error bounds in mathematical programming, Math. Programming, 79 (1997), p. 299-332.
[11] O.L. Mangasarian and T.H. Shiau, Error bounds for monotone linear complementarity problems, Math. Programming, 36(1) (1986), p. 81-89.
[12] R. Mathias and J.S. Pang, Error bounds for the linear complementarity problem with a P-matrix, Linear Algebra and its Applications, 132 (1990), p. 123-136.
[13] O.L. Mangasarian and J. Ren, New improved error bounds for the linear complementarity problem, Math. Programming, 66 (1994), p. 241-255.
[14] H.C. Sun, Y.J. Wang, L.Q. Qi, Global error bound for the generalized linear complementarity problem over a polyhedral cone, J. Optim. Theory Appl., 142 (2009), p. 417-429.
[15] A.J. Hoffman, On approximate solutions of systems of linear inequalities, J. Res. National Bureau of Standards, 49 (1952), p. 263-265.
[16] Y.J. Wang, N.H. Xiu, Theory and Algorithms for Nonlinear Programming, Shanxi Science and Technology Press, p. 170-171 (2004) (in Chinese).
[17] N.H. Xiu and J.Z. Zhang, Global projection-type error bounds for general variational inequalities, J. Optim. Theory Appl., 112(1) (2002), p. 213-228.
[18] A. Fischer, A special Newton-type optimization method, Optimization, 24 (1992), p. 269-284.
[19] P. Tseng, Growth behavior of a class of merit functions for the nonlinear complementarity problem, J. Optim. Theory Appl., 89 (1996), p. 17-37.
[20] J.S. Pang, Inexact Newton methods for the nonlinear complementarity problem, Math. Programming, 36 (1986), p. 54-71.
[21] S. Engelke, C. Kanzow, Improved smoothing-type methods for the solution of linear programs, Preprint, Institute of Applied Mathematics, University of Hamburg, Hamburg, March 2000.
[22] L. Qi, D. Sun, Smoothing functions and smoothing Newton method for complementarity problems and variational inequality problems, Report, University of New South Wales, Sydney, Australia, 1998.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.211

Train Optimal Control Strategy on Continuous Change Gradient Steep Downgrades
Peng Zhou, Hongze Xu and Mengnan Zhang
State Key Laboratory of Rail Traffic Control Safety, Beijing Jiaotong University, Beijing, China

[email protected], [email protected], [email protected]

Key words: energy saving; control strategy; energy-feedback; optimization method

Abstract. Reducing traction energy consumption plays an important role in railway energy saving. In the existing research the models are all based on trains without energy feedback, and the line condition is a fixed steep down or steep up grade. In contrast, this paper proposes an energy control strategy for a train on continuously changing steep downgrades with energy feedback. The energy-saving advantage of the strategy is proved in theory through traction calculation. On that basis, an optimization method is applied to obtain the optimal strategy balancing operation time and energy consumption. Experiments comparing the traditional control strategy with the optimal control strategy show that the overall target index combining operation time and energy consumption is much better for the optimal strategy.

Introduction
Reducing railway traction energy consumption plays an important role in railway energy saving. The key factors in railway traction energy consumption are: (1) energy consumed by the train running against resistance; (2) kinetic energy lost in train braking. By research angle, train energy-saving operation research can be divided into three classes. The first is energy-saving optimization of train operation based on a fixed train structure and fixed operation environment, where the research object is a train without energy feedback [1-6]. The second is energy-saving dispatch, i.e., the influence of train formation on energy saving in train operation [7-12]. The third concerns the train dynamics system, such as energy storage equipment; this line of research, which takes the train dynamics system as its object, started in the 1960s, and by now some of the technology is relatively mature and abundant [13,14]. Different from the present achievements, in which the models are all based on trains without energy feedback and the line condition is a fixed steep down or steep up grade, this paper proposes a train energy control strategy on continuously changing steep downgrades with energy feedback. The energy-saving advantage of the strategy is proved in theory through traction calculation. On that basis, an optimization method is applied to obtain the optimal strategy balancing operation time and energy consumption. Experiments comparing the traditional control strategy with the optimal control strategy show that the overall target index combining operation time and energy consumption is much better for the optimal strategy.

Train Optimal Control Strategy for Energy-saving
Suppose the train travels on a piecewise constant gradient comprising a sequence of gradients that are non-steep at the hold speed V, followed by a sequence of gradients that are steep downhill at speed V, then a final sequence of gradients that are non-steep at speed V.



Fig.1 Vertical section map of continuous change gradient steep downgrades

As shown in Fig.1, let: (1) p₁, …, pₙ be the locations in the interval where the gradient changes; (2) v₁, …, vₙ be the velocities of the train at the points p₁, …, pₙ; (3) γ₀, γ₁, γ₂, …, γ_{n−1}, γₙ be the respective gradient accelerations of the segments (−∞, p₁), [p₁, p₂), …, [pₙ, ∞); (4) p_r and p_s be the starting point and ending point of the steep part; (5) a be the point where coasting starts and b the point where speedholding resumes; (6) p_{g−1} ≤ a ≤ p_g, where p_g is the first gradient change point during the coast phase; (7) p_h ≤ b ≤ p_{h+1}, where p_h is the last gradient change point during the coast phase.
Lemma 1: Let J(v) = ∫_{p}^{q} [ψ(V)/v + r(v) − ϕ′(V)] dx. Then
J(v) = ϕ(V)·[(t_q − t_p) − (q − p)/V] + ∫_{p}^{q} [r(v) − r(V)] dx   (1)
and
J(v) ≥ 0,   (2)
where p and t_p are the starting point and starting time, q and t_q are the ending point and ending time, r(v) is the basic resistance of the train when it runs at speed v, ϕ(v) = v·r(v), and ψ(v) = v²·r′(v). Lemma 1 is based on the assumption that all the renewable energy is wasted. Let S be the renewable energy produced by the train; if this energy is made full use of, then from Lemma 1 we get
J(v) = ∫_{p}^{q} [ψ(V)/v + r(v) − ϕ′(V)] dx − S = ϕ(V)·[(t_q − t_p) − (q − p)/V] + ∫_{p}^{q} [r(v) − r(V)] dx − S   (3)
and
J(v) ≥ −S.   (4)
In (3), (t_q − t_p) − (q − p)/V is the time difference between the train running at varying speed on [p, q] and the train running at constant speed V on [p, q]; ∫_{p}^{q} [r(v) − r(V)] dx is the energy-consumption difference between the train running at varying speed on [p, q] and the train running at constant speed V on [p, q].
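As an illustration of how (3) can be evaluated for a given speed profile, the following sketch integrates the two terms numerically; the Davis-type resistance law and the sampled profile are purely illustrative assumptions:

```python
import numpy as np

def J_value(x, v, V, r, t_p, t_q, S=0.0):
    """Evaluate (3): J = phi(V) * [(t_q - t_p) - (q - p)/V]
    + integral of [r(v) - r(V)] dx - S, with phi(V) = V * r(V)."""
    time_term = (t_q - t_p) - (x[-1] - x[0]) / V
    energy_term = np.trapz(r(v) - r(V), x)
    return V * r(V) * time_term + energy_term - S

r = lambda v: 0.9 + 0.005 * v + 0.00035 * v**2   # assumed resistance law
x = np.linspace(0.0, 1000.0, 201)                # positions on [p, q]
v = 20.0 + 2.0 * np.sin(x / 300.0)               # an assumed speed profile
t_q = np.trapz(1.0 / v, x)                       # running time from p to q
print(J_value(x, v, V=20.0, r=r, t_p=0.0, t_q=t_q))
```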


So (3) is a comprehensive evaluation of the train's running time and energy consumption.
Definition 1 [2]: strategy A:
V_A(x) = V (x ∈ [p₁, a] ∪ [b, pₙ]), u_A(x) = 0 (x ∈ (a, b)),
where p₁ < p₂ < … < p_{g−1} < a < p_g < … < p_r < … < p_s < … < p_h < b < p_{h+1} < … < p_{n−1} < pₙ, [p₁, p_r] ∪ [p_s, p_h] is the continuously changing non-steep downgrade and [p_r, p_s] is the continuously changing steep downgrade.
Definition 2: strategy B:
V_B(x) = V (x ∈ [p₁, a′] ∪ [b, pₙ]),
u_B(x) = q (x ∈ [a′, a]),
u_B(x) = 0 (x ∈ (a, b′]),
u_B(x) = p′ (p′ ∈ (0, P], x ∈ (b′, b]),
where p_{g′−1} ≤ a′ ≤ p_{g′} < p_{h′} ≤ b′ ≤ p_{h′+1}, as shown in Fig.2.


Fig.2 Train energy-saving control strategy B with energy-feedback on continuous change gradient steep downgrades

Theorem 1: Let η be the efficiency with which the train produces renewable energy, Q_A the energy consumption when the train runs from p₁ to pₙ under strategy A, and Q_B the energy consumption when the train runs from p₁ to pₙ under strategy B. Then the necessary and sufficient condition for Q_B < Q_A is
η > { p′(b − b′) − [r(V) − γ_{g′−1}](p_{g′} − a′) − [r(V) − γ_{g−1}](a − p_{g−1}) − Σ_{i=g′}^{g−2} [r(V) − γ_i](p_{i+1} − p_i) } / [q(a − a′)].

Proof: Let h be the altitude difference between p₁ and pₙ. Under strategy A,
Q_A = Q_{p₁a} + Q_{ab} + Q_{bpₙ} + mgh.   (5)
The train runs at constant speed V between p₁ and a under strategy A, so
Q_{p₁a} = Σ_{i=1}^{g−2} [r(V) − γ_i](p_{i+1} − p_i) + [r(V) − γ_{g−1}](a − p_{g−1}).   (6)


Similarly,
Q_A = mgh + Σ_{i=1}^{g−2} [r(V) − γ_i](p_{i+1} − p_i) + [r(V) − γ_{g−1}](a − p_{g−1}) + [r(V) − γ_h](p_{h+1} − b) + Σ_{i=h+1}^{n−1} [r(V) − γ_i](p_{i+1} − p_i).   (7)
Under strategy B,
Q_B = Σ_{i=1}^{g′−2} [r(V) − γ_i](p_{i+1} − p_i) + [r(V) − γ_{g′−1}](a′ − p_{g′−1}) − ηq(a − a′) + p′(b − b′) + [r(V) − γ_h](p_{h+1} − b) + Σ_{i=h+1}^{n−1} [r(V) − γ_i](p_{i+1} − p_i).   (8)
According to (7) and (8), Q_B < Q_A is equivalent to
η > { p′(b − b′) − [r(V) − γ_{g′−1}](p_{g′} − a′) − [r(V) − γ_{g−1}](a − p_{g−1}) − Σ_{i=g′}^{g−2} [r(V) − γ_i](p_{i+1} − p_i) } / [q(a − a′)].   (9)
We now investigate the key positions and speeds on the line at which J(v) of Lemma 1 attains its minimum, so as to obtain the optimal strategy balancing operation time and energy consumption. The target function is
J = ∫_{a′}^{b} [ψ(V)/v + r(v) − ϕ′(V)] dx − S.   (10)



The equation of motion for the train is
v·(dv/dx) = −r(v) + p + q + γ_j.   (11)

According to (11),
p_i − p_{i−1} = ∫_{v_{i−1}}^{v_i} v dv / (−r(v) + γ_{i−1} − q), (g′ < i < g − 2).   (12)
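Equation (12) is straightforward to evaluate numerically; a short sketch with an assumed resistance law:

```python
import numpy as np

def segment_length(v0, v1, gamma, q, r, n=400):
    """Distance run while the speed changes from v0 to v1 under control q on a
    gradient with acceleration gamma: integral of v dv / (-r(v) + gamma - q)."""
    v = np.linspace(v0, v1, n)
    return np.trapz(v / (-r(v) + gamma - q), v)

r = lambda v: 0.9 + 0.005 * v + 0.00035 * v**2   # assumed resistance law
print(segment_length(20.0, 21.5, gamma=2.0, q=0.2, r=r))
```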

Similarly, the cases g < i < h′ − 1 and h′ + 1 < i < h − 1 can be analyzed, and
∫_{a′}^{b} dx/v = ∫_{V}^{v_{g′}} dv/(−r(v) + γ_{g′−1} − q) + ∫_{v_{g−1}}^{v_a} dv/(−r(v) + γ_{g−1} − q) + Σ_{i=g′}^{g−2} ∫_{v_i}^{v_{i+1}} dv/(−r(v) + γ_i − q)
+ ∫_{v_{b′}}^{v_{h′+1}} dv/(−r(v) + γ_{h′} + p′) + ∫_{v_h}^{V} dv/(−r(v) + γ_h + p′) + Σ_{i=h′+1}^{h−1} ∫_{v_i}^{v_{i+1}} dv/(−r(v) + γ_i + p′).   (13)

According to (12) and (13),
b − a′ = p_h − p_{g′} + ∫_{V}^{v_{g′}} v dv/(−r(v) + γ_{g′−1} − q) + ∫_{v_h}^{V} v dv/(−r(v) + γ_h + p′).   (14)

According to the theorem of kinetic energy,
∫_{a′}^{b} r(v) dx = (γ_{g′−1} − q)∫_{V}^{v_{g′}} v dv/(−r(v) + γ_{g′−1} − q) + (γ_{g−1} − q)∫_{v_{g−1}}^{v_a} v dv/(−r(v) + γ_{g−1} − q) + Σ_{i=g′}^{g−2} (γ_i − q)(p_{i+1} − p_i)
+ (γ_{h′} + p′)∫_{v_{b′}}^{v_{h′+1}} v dv/(−r(v) + γ_{h′} + p′) + (γ_h + p′)∫_{v_h}^{V} v dv/(−r(v) + γ_h + p′) + Σ_{i=h′+1}^{h−1} (γ_i + p′)(p_{i+1} − p_i).   (15)

According to (10),
J = ψ(V)∫_{a′}^{b} dx/v + ∫_{a′}^{b} r(v) dx − ϕ′(V)(b − a′) − S.   (16)
Substituting (13)-(15) into (16), the Lagrangian function is defined as


W(v_{g′}, v_{g′+1}, …, v_h, v_a, v_{b′}, q, p′) = J(v_{g′}, v_{g′+1}, …, v_h, v_a, v_{b′}, q, p′)
+ Σ_{i=g′}^{g−2} λ_i [p_{i+1} − p_i − ∫_{v_i}^{v_{i+1}} v dv/(−r(v) + γ_i − q)]
+ Σ_{i=h′+1}^{h−1} λ_i [p_{i+1} − p_i − ∫_{v_i}^{v_{i+1}} v dv/(−r(v) + γ_i + p′)],   (17)
where λ_i is the Lagrangian multiplier, λ_i > 0. The Karush-Kuhn-Tucker conditions are applied:
∂W/∂v_i = 0.   (18)
When i = g′,
∂W/∂v_{g′} = [ψ(V) + (γ_{g′−1} − q − ϕ′(V) + 1)v_{g′}] / [−r(v_{g′}) + γ_{g′−1} − q] − [ψ(V) − (λ_{g′} − 1)v_{g′}] / [−r(v_{g′}) + γ_{g′} − q] = 0.   (19)
Similarly, the cases i ∈ (g′, g−2), i = g−2, i = g−1, i = g, i ∈ (g, h′−1), i = h′−1, i = h′, i = h′+1, i ∈ (h′+1, h−1) and i = h−1 are analyzed. Let
f = [ψ(V) − λ_r v_{r+1}] / [−r(v_{r+1}) + γ_r − q] + [−ψ(V) + λ_{r+1} v_{r+1}] / [−r(v_{r+1}) + γ_{r+1} − q].
From these conditions, the algorithm flow chart for the train energy-saving control strategy with energy-feedback on continuous change gradient steep downgrades is obtained, as shown in Fig.3.
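In the experiments below the key speeds are those at which f vanishes; a minimal root-finding sketch using bisection (the numeric data instantiating f are purely illustrative assumptions, and the authors' actual solution scheme is not specified):

```python
def solve_f_zero(f, lo, hi, tol=1e-6):
    """Find v with f(v) ~ 0 by bisection, assuming f changes sign on [lo, hi]."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# f instantiated with assumed psi(V), multipliers, gradients and control q:
r = lambda v: 0.9 + 0.005 * v + 0.00035 * v**2
psi_V, lam_r, lam_r1, g_r, g_r1, q = 150.0, 8.0, 6.0, -0.3, -0.5, 0.2
f = lambda v: (psi_V - lam_r * v) / (-r(v) + g_r - q) \
    + (-psi_V + lam_r1 * v) / (-r(v) + g_r1 - q)
print(solve_f_zero(f, 5.0, 40.0))
```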


Fig.3 Algorithm flow chart for train optimal control strategy with energy-feedback

Experiments
As shown in Fig.2, when x_{g′} = 32.53, x_g = 95.62 and x_{h′+1} = 616.47, if strategy A is applied the speed of the train becomes too high, which does not meet the safety rules, and the minimum of J_A is 63.8301756492. When strategy B is applied (Fig.4): when v_{g′} = 21.64, f ≈ 0; when v_g = 20.69, f₂ ≈ 0; when v_{h′+1} = 21.73, f₃ ≈ 0; and the minimum of J_B is J_{B1} + J_{B2} + J_{B3} = 41.3937468924, which is far less than the minimum of J_A. Thus the optimal strategy balancing operation time and energy consumption, which is much better than the traditional strategy, is obtained.

Fig.4 Speed and position chart for train optimal control strategy


Summary
A train energy control strategy on continuously changing steep downgrades with energy feedback has been proposed. The energy-saving advantage of the strategy is proved in theory through traction calculation. On that basis, an optimization method is applied to obtain the optimal strategy balancing operation time and energy consumption. Experiments comparing the traditional control strategy with the optimal control strategy show that the overall target index combining operation time and energy consumption is much better for the optimal strategy.

References
[1] ZHU Jinling, "Optimization analysis on the energy saving control for trains," China Railway Science, vol. 29, 104-108 (2008).
[2] P.G. Howlett, P.J. Pudney, "Local energy minimization in optimal train control," Automatica, vol. 45, 2692-2698 (2009).
[3] Wong K.K., Ho T.K., "Dwell-time and run-time control for DC mass rapid transit railways," Electric Power Applications, vol. 1, 956-966 (2007).
[4] YU Jin, QIAN Qing-quan, HE Zheng-you, "Research on application of two-degree fuzzy neural network in ATO of high speed train," China Railway Society, vol. 30, 2-12 (2006).
[5] Liu R.F., Golovitcher I.M., "Energy-efficient operation of rail vehicles," Transportation Research Part A, vol. 37, 917-932 (2003).
[6] Khmelnitsky E., "On an optimal control problem of train operation," Automatic Control, vol. 45, 1257-1266 (2000).
[7] Goossens J., Van Hoesel S., Kroon L., "A branch-and-cut approach for solving railway line-planning problems," Transportation Science, vol. 80, 193-210 (2004).
[8] Chang Y.H., Yeh C.H., "A multiobjective model for passenger train services planning: application to Taiwan's high-speed rail line," Transportation Research B, vol. 34, 91-106 (2000).
[9] Higgins A., Kozan E., "Optimal scheduling of trains on a single line track," Transportation Research B, vol. 30, 147-161 (1996).
[10] Ghoseiri K., Szidarovszky F., Asgharpour M.J., "A multi-objective train scheduling model and solution," Transportation Research Part B, vol. 38, 927-952 (2004).
[11] Zhou Jianbin, "Utilization of train's regenerative energy in metro system," Urban Mass Transit, vol. 7, 33-35 (2004).
[12] ZHANG Wen-li, LI Qun-zhan, LIU Wei, "Simulation research on energy saving scheme of metro vehicle regenerative braking," Converter Technology & Electric Traction, vol. 3, 41-44 (2008).
[13] M. Ogasa, "Energy saving and environmental measures in railway technologies: example with hybrid electric railway vehicles," Electrical and Electronic Engineering, vol. 3, 15-20 (2008).
[14] Y. Sekijima, S. Toda, "Test run of onboard energy storage system using EDLC," Industry Applications Society, 201-202 (2008).

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.217

Developing a Three-Degree-of-Freedom SCARA Robot
Jiangtian Shi, Dexin Sun and Hongzhuang Zhang
Base Department, Changchun Institute of Engineering Technology, Changchun 130117, China

[email protected], [email protected], [email protected]

Key words: SCARA robot; PMAC multitude axis motion controller; opening structure.

Abstract. The mechanical structure of the three-degree-of-freedom SCARA robot adopts horizontal joints, and an open PMAC multi-axis motion controller based on a PC is used as the kernel of the control system. Because the hardware and software structure is open, its functions can conveniently be enlarged according to need, so it has very good expansibility. Its three-dimensional solid model and virtual assembly are carried out using the CATIA application, so that interference between components can be checked. Through validation, the feasibility of the robot is proved.

Introduction
SCARA stands for Selective Compliance Assembly Robot Arm. The structure of the three-degree-of-freedom SCARA is simple and its mechanical characteristics are good, so it is widely used in assembling and packing small parts in industrial production. It integrates machine design, computer control, sensor detection and information communication technology, and is a synthesis of sensing, decision, action and interaction. PMAC (programmable multi-axis controller) is the multi-axis controller developed by the American Delta Tau company in the 1990s; it provides the fundamental functions of motion control, discrete control, and numerical interaction with the host. PMAC, which has interactive executive software based on the Windows operating system, can conveniently operate the three axes of the three-degree-of-freedom SCARA robot, and it provides a PID filter that lets the robot work steadily toward the accurate position.

Robot Body Structure Design
The three-degree-of-freedom SCARA developed here adopts a planar articulated structure with first and second rotary joints and a third vertical-motion joint. The rotary joints are driven by two DC servo motors through reducers, giving a simple structure, while the vertical-motion joint adopts a ball-screw-and-nut transmission. Such a transmission chain is simple and has high transmission precision, which guarantees that the robot arm is compliant in the horizontal direction and has larger rigidity in the vertical direction. The first-joint motor and circuit board are placed inside the robot base, and the second-joint motor is placed on top of the upper arm. In order to reduce the mass and inertia of the moving parts and benefit dynamic control, the pedestal, upper arm, lower arm and most moving parts adopt light, high-strength alloy material, and, where performance permits, the wall thickness of the moving parts is reduced. In order to relieve resonance, the pedestal is designed with a shorter size, and it can be blocked up and fixed according to the actual situation. The robot's mechanical structure is shown in Figure 1; the mechanical system is composed of the first and second rotary joints and the third vertical-motion joint.


Figure 1 Mechanical design of the three-degree-of-freedom SCARA robot: 1. pedestal; 2. first-joint DC servo motor; 3. first-joint reducer; 4. bearing; 5. bearing cover; 6. upper arm; 7. second-joint DC servo motor; 8. second-joint reducer; 9. lower arm; 10. stepping motor with reducer; 11. ball screw; 12. nut transmission; 13. housing; 14. pen clamp fixture

Three kinds of driving motors are in general use: stepping motors, AC servo motors and DC servo motors. A stepping motor is usually controlled in open loop, which is easy to operate, but its power is small, so it is generally used in low-precision, low-power robot systems. A DC servo motor has many merits, such as fast response, high precision and frequency, and good control characteristics; however, its brushes wear easily and can form sparks. With the rapid progress of technology, AC servo motors are tending to take the place of DC servo motors, but their size and price are comparatively larger, so they are mostly used in industrial robots. Weighing all these factors, especially economy, DC servo motors are taken as the driving motors of the first and second joints.

Three-Axis SCARA Robot 3D Solid Modeling
In order to guarantee the normal course of production and assembly and to find problems occurring in the design procedure in time, solid modeling and computer-simulated assembly of the three-axis SCARA robot were done using the French 3D design software CATIA, by which interference between components can be checked. As in Figure 2, the base size is 120×400, and the sizes of the large arm and the small arm are both 200×90×25. After the modeling simulation, the mechanism design is proved to be reasonable, the design size of each component is correct, and no interference between components occurs; thus the design requirements are met.


Figure 2 Three-axis SCARA robot CATIA 3D solid model

Hardware Structure of the Robot Control System
The robot control system is a typical multi-axis real-time motion control system. Traditional robot control systems were developed through each designer's own independent construction for a specific production goal, adopting a closed architecture with a special-purpose computer, special-purpose robot language, special-purpose operating system and special-purpose microprocessor. A controller of this kind has a series of shortcomings, such as high manufacturing and use cost, long development cycle, difficult updating and substitution, and inability to add new system functions [1]. The robot developed in this article selects the open motion controller PMAC2 PCI Lite, which connects to the PC through a card in a standard PCI slot. A PMAC2 PCI Lite controller can operate 8 axes simultaneously and can connect a PWM control card; its single-axis servo update time is approximately 20 µs (40 MHz DSP), it has 18-bit DAC output resolution, and it provides the three axes of the three-axis SCARA robot with high-speed communication with the PC [2]. The overall structure of the control system adopts a dual-computer master-slave control method and a modular software design. The upper CPU uses a general PC and mainly processes non-real-time tasks in robot control, such as system administration, path planning and robot language translation; the lower CPU uses the PMAC2 PCI motion controller and mainly carries out real-time kinematics computation, path planning, interpolation computation and servo control, thus realizing each joint's position servo control and the coordinated control of multiple joints.

Robot System Software Design
The software of an open general robot control system should be developed under a standard language environment so that it is easy to port, revise and expand, and it should provide public user interfaces and routine interfaces. The control system software rests on this open guiding principle, using an object-oriented and modular engineering design method. The upper PC takes Windows NT as the operating system and uses Visual C++ programming, which makes it easy to realize a modular software architecture and a good man-machine interface; the development language of the lower PMAC2 PCI Lite controller is C. This kind of control system is an open system, and the user can conveniently expand and improve its performance.


The functions of the various modules are as follows:
(1) Initialization module: inspects the current state of the controller and robot, homes the robot, and establishes the control working environment.
(2) Executive module: completes the reproduction of the robot's movement; it calls the interpretation module to form instructions the robot can understand and calls the lower-level software according to the different instructions to drive the robot to complete the task.
(3) Monitoring module: monitors the robot's work and displays the robot's active status.
(4) Document management module: manages documents, including invocation, renaming, duplication and deletion.
(5) Parameter-setting module: sets adjustable parameters, such as those of the robot controller and the control system I/O.
(6) Path planning module: completes the robot's kinematics and inverse-kinematics operations, path planning and the interpolation algorithm.
(7) Servo-control module: completes the digital servo control of the motors.

Conclusion
This article developed a three-degree-of-freedom SCARA robot with an open control system. According to the CATIA simulated assembly, the robot's mechanical design is reasonable, and the control system's hardware and software have high openness, which helps users carry out re-development based on it; the robot has broad market prospects when used in industrial abrasive machining and material transporting. In addition, the modular software design has qualities such as portability, extensibility and openness, and the layered system architecture raises the system's efficiency; therefore it is extremely easy to expand and transplant to other types of robots.

References
[1] W. Tianran, Q. Daokui: ROBOT. 24 (2002), p. 256
[2] T. Shizhe, M. Zhiqian: ROBOT. 24 (2002), p. 134

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.221

A Character Experiential Learning System: An Animated Vignette Creating Tool
Hsin-Hung Kuo, Szu-Wei Yang and Yu-Chin Kuo
National Taichung University of Education, Taichung City, Taiwan

[email protected], [email protected]

Keywords: Animation, Character Education, Character Story, Multimedia, Vignette.

Abstract. This research integrates character education, the Internet and animated vignettes into the Character Animated Vignette Electronic (CAVE) system, in which children carry out task-oriented activities to enhance their character development. Based on Lev S. Vygotsky's Social Development Theory, Albert Bandura's Social Learning Theory and Allan Paivio's Dual-Coding Theory, this tool provides students with an authentic, meaningful and task-based learning environment. The CAVE system is designed as an instructional multimedia system for children to construct animated vignettes expressing their genuine experiences on campus. Meanwhile, this platform gives users more opportunities to engage collaboratively in Web-based editing, revising, sharing and reflecting in class. In total, 104 fifth graders participated in this study. The questionnaire, open-ended questions and classroom observations revealed that via the CAVE system, students actively concentrated on building their own animated vignettes in a fun learning process. Results showed that participants were able to construct their group-made animated vignettes and that the system was easy to use and joyful to learn with. Overall, students rated favorably the system's design of the vignette-making activity for facilitating their character comprehension and development in elementary school settings.

Introduction
Web-assisted learning systems have become more and more popular in modern teaching and learning contexts. The Internet learning mode creates new e-approaches and improves on traditional teaching with its static character. With its great usefulness and convenience, Web technology possesses an excellent power not only to deal with huge amounts of data quickly but also to deliver the latest information worldwide. Children in such a digital era usually like to interact more with computers, and teachers are supposed to provide networked courseware to meet kids' demands [1]. In addition, character education in elementary school has become an increasingly important subject, especially in today's schools with serious bullying problems. Children's character concepts are fostered by campus experiences that engage them in reflection and in real-life problem-solving [2]. However, there is scarcely any Web-based character instruction connecting technology with character education, or animated vignettes appropriate to the interests and demands of elementary school students. Many of learners' most wonderful experiences come when they are engaged in designing or creating things meaningful to them [1]. Thus, the purpose of this study is to integrate the Internet, animated vignettes and character education into the Character Animated Vignette Electronic (CAVE) system. To build an authentic, meaningful and reflective surrounding, the CAVE system enables children to collaboratively create animated vignettes showing their genuine experiences on campus. By making, discussing, sharing and demonstrating their own vivid vignettes with their peers, children focus on their work and are expected to acquire better character awareness and social skills.
Goals. To establish goals for a useful character vignette system, we worked with elementary school teachers and several educators who excel at character or technology education. In addition, we designed and constructed several teaching animated vignettes as instructional materials using web-assisted tools. Our goals are as follows:


(a) To employ appropriate theoretical foundations for the CAVE system. (b) To develop a student-oriented character platform fit for fifth graders. (c) To deliver character as well as technology education through the processes of learners' planning, constructing and developing their animated vignettes.

Theoretical Foundations
Our study integrated web-based technology and character education into teacher instruction and student learning activities. The framework of the CAVE system is based mainly on three principles.
Social Development Theory. Lev S. Vygotsky's (1896-1934) Social Development Theory has become the foundation of educational curricula, instructional materials, and learning strategies and skills over the past decades. Vygotsky highlighted the element of social interaction in the development of cognition [3], and he viewed interaction with more capable peers as well as adults as improving children's development [3]. Therefore, within children's zones of proximal development (ZPD), teachers or instructors should offer the necessary guidance or helpful assistance to enhance kids' development. To develop basic skills for kids effectively within their ZPD, the CAVE system builds an experiential learning environment that engages children in the social context of school life. In the task of generating a character animated vignette in CAVE, students are requested to design the plots, characters and settings. During the process, students need to figure out which parts of the ideas and contents are important and worth expressing and constructing; then they need to make the vignette in their mind come true and organize it logically. It is worthy of note that the CAVE activities of planning, design, discussion, organization and elaboration should be beneficial to children's cognitive development.
Social Learning Theory. Albert Bandura's Social Learning Theory is widely known around the world. Behavior is the outcome of mutual social interaction among environmental, behavioral and individual influences [4]. People often learn by observation or imitation through reciprocal interaction, especially from the successful behaviors of their trustworthy models [5]; that is, people often model themselves after good examples. Because a lot of learning takes place through observation, the CAVE system provides learning opportunities for kids to observe, not only via the instructional materials concerning character education but also via the animated vignette-making activity. Teachers guide children to do their job, to learn how to make the vignette, to reflect on what they have done, and to share what they have learned within the learning process.
Dual-Coding Theory. Allan Paivio's Dual-Coding Theory advocates two different information-processing subsystems: a verbal and a nonverbal system [6]. The verbal system processes linguistic input and output, while the nonverbal system serves an imagery function for nonverbal events [6,7]. Imagery representations are retained better than verbal ones in human memory [7-10]: information is easier for people to memorize when presented in pictures rather than in words [7,9]. Multimedia learning is more useful than text-based learning [10-12], and one of the most exciting, fun and attractive forms of pictorial presentation is animation [12]. Moreover, animation, with its vivid dual-coded character, sits deeper in human long-term memory than static illustration [13,14], and it may facilitate the encoding and retrieval processes better than static graphics [14]. In the CAVE learning environment, the animated vignette-making activity is designed for kids to involve themselves in creating a real character story they actually encounter on campus. In this meaningful, creative, problem-solving environment, the CAVE system encourages students to develop their verbal and nonverbal information-processing mechanisms.


The CAVE System
The CAVE system was designed with two main components: (1) the Teacher's Instruction Area and (2) the Students' Work Area.
Teacher's Instruction Area. This area delivers the instruction for both character education and technology education.
Character Education. The main purpose of this study is to promote children's character development, so this area is the major content of the CAVE system. The teacher offers many self-developed animated vignettes concerning character education for kids to view and learn from by observation or imitation, through social modeling.
Technology Education. This area provides students with the basic and necessary web-based technology skills; that is, before students make and complete their animated vignettes, the teacher trains them in the related computer and Internet skills.
Students' Work Area. This area has two major functions: Campus Reports and Animated Vignettes.
Campus Report. After grouping according to students' different talents, the students are requested to make their group-made character animated vignette. At the initial stage, the little campus reporters have to brainstorm in groups, search for their themes and then plan what they really want to express and present; the campus report is the first work each group has to complete.
Animated Vignette. After deciding the topic, the group members are requested to write a narrative and design the plots and characters. They then create their group-made animated vignette by integrating all its elements. Finally, students have to present their work on screen; most importantly, the participants are requested to reflect on what they have done and to share what they have learned. The teacher constructed and developed this area in collaboration with the students.

Methodology
Subjects. The subjects in the study were three fifth-grade classes of a public elementary school, taught by the same teacher; the total student population was 104.
Instruments. The instrument was the Feedback Sheet of the CAVE Users developed by the authors. The questionnaire consists of fourteen items and one open-ended question; it was administered to the students, who completed it individually in class.
Procedures. Students participated in the study for six consecutive weeks. There were two phases comprising eight steps, shown in Table 1.

Table 1. Procedures of the CAVE System
Phase: Teacher's Instruction
  Step 1: Grouping depending on students' talents
  Step 2: Introduction of core values
  Step 3: Value clarification in teaching instructional animated vignettes
  Step 4: Instruction of computer-assisted animated movies
Phase: Students' Work
  Step 5: Little campus reporters search for their topic or theme
  Step 6: Deciding the characters, designing the plots, writing a narrative, sketching graphics and scenes collaboratively
  Step 7: Integrating the animated vignette
  Step 8: Presenting the work and sharing reflection

Results and Discussion
Analysis. The Feedback Sheet of the CAVE Users in Table 2 used a five-point Likert scale, where 1 represents "strongly disagree" and 5 represents "strongly agree".

Table 2. The Feedback Sheet of the CAVE Users
No. | Contents | Mean | S.D.
1 | I understand all the functions of the CAVE system. | 4.27 | .947
2 | The CAVE system was easy for me to use. | 4.17 | .875
3 | The CAVE system was easy to select the proper scene. | 4.37 | .935
4 | The CAVE system inspires me to create stories. | 4.31 | .871
5 | I could teach someone else how to use the CAVE system. | 4.20 | .959
6 | The content of the CAVE system is appropriate for me. | 4.18 | .983
7 | Constructing character stories within the CAVE system is more fun than using paper style. | 4.37 | .882
8 | I learn moral and social skills via the CAVE system. | 4.35 | .868
9 | I learn technology skills via the CAVE system. | 4.36 | .847
10 | I use imagination when making the animated vignette. | 4.38 | .851
11 | The CAVE system facilitates my learning on character. | 4.39 | .818
12 | I better my bad manners via the CAVE system. | 4.54 | .812
13 | Constructing the character animated vignette within the CAVE system can express my ideas well. | 4.28 | .960
14 | I hope to participate in the CAVE activities hereafter. | 4.28 | .929

In terms of usefulness, most of the participants agreed that the CAVE system was helpful in promoting their character learning. For instance, most responded that "I learn moral and social skills via the CAVE system" (M = 4.35, SD = .868). Relevant feedback can also be found in other questionnaire items, such as "The CAVE system facilitates my learning on character" (M = 4.39, SD = .818) and "I better my bad manners via the CAVE system" (M = 4.54, SD = .812). In the analysis of the classroom observations, we found that the students often tried to follow the guidance of the instructional system and hoped to learn well. Their feedback revealed "I learn technology skills via the CAVE system" (M = 4.36, SD = .847); relevant feedback can also be found in items such as "I understand all the functions of the CAVE system" (M = 4.27, SD = .947), "Constructing the character animated vignette within the CAVE system can express my ideas well" (M = 4.28, SD = .960), and "The CAVE system was easy for me to use" (M = 4.17, SD = .875). In terms of fondness, most of the participants felt that "I hope to participate in the CAVE activities hereafter" (M = 4.28, SD = .929), which implies that they wanted to engage themselves in this system to promote their capabilities further in the future. Responses to the open-ended question of the questionnaire were as follows:
- I deeply appreciate my English teacher for making this great character curriculum for us.
- The CAVE system, with many touching stories, is wonderful.
- I like the CAVE system very much and I hope the teacher can employ this tool again.
- I think that the CAVE system is useful and has no shortcomings.
- The CAVE system should have more animated vignettes, and the password had better not be too long.
- I suggest that the CAVE system have more games to play.
- I found that the CAVE system is very practical and complete, with nothing missing.
Students do not learn well unless they construct their knowledge actively and it is meaningful to them, so the best way to learn is to engage learners in problem-solving activities [15]. Overall, the CAVE system provides learning opportunities for kids to observe and to engage themselves in the


animated vignette-making activity. Most of them had great learning experiences when they were actively involved in designing and creating their animated vignettes in groups.
The Work of the Animated Vignettes. Sample pages of the students' animated vignettes are illustrated in Fig. 1. From classroom observations, we found that the students always tried their best to learn the technology skills and presented their work on screen. The results revealed that most of the participants were able to use our tool and to learn successfully.

Fig. 1 Samples of the animated vignettes created by two fifth graders via the CAVE system

Discussion. The findings agree with Li and Grabowski's research, which indicated that an animation-learning scenario rich in multimedia can help learners gain deeper meanings and construct their own knowledge more easily than the traditional one [14]. Additionally, classroom observations showed great participation by users in the character animated vignette-creating activity. From the literature we reviewed, Bandura's Social Learning Theory [4,5] may explain the helpful and beneficial influence of modeling on participants' positive attitudes. The result that this animated vignette-creating activity is a feasible multimedia platform also agrees with Mayer and Moreno's research, which indicated that animation is a great tool for learning [12]. In brief, the findings of the study imply that the CAVE system is effective in promoting learning in character education.

Conclusion
In this paper we presented the Character Animated Vignette Electronic (CAVE) system, a platform for creating animated vignettes through group collaboration. Based on Lev S. Vygotsky's Social Development Theory, Albert Bandura's Social Learning Theory and Allan Paivio's Dual-Coding Theory, this student-centered tool provides learners with an authentic, meaningful and task-based environment. From the questionnaire, the open-ended question of the survey and classroom observations, it was found that most of the participants had positive attitudes and were highly interested in learning via the CAVE system. This study contributes by putting the CAVE instructional design into action; our system provides a basis for Taiwanese children to collaboratively perform authentic tasks of character animated vignettes. Through the CAVE system, the kids have opportunities to understand what they have learned and to reflect on what they have done; they not only acquire technology, moral and social skills within the meaningful activity, but also put creative ideas into action. The group-made animated vignette activity may be a nice way for kids to think, reflect and learn about character issues that really happened to them. From the students' feedback to the questionnaire and classroom observations, we drew several suggestions for future studies. In the future we plan to investigate more learners for widespread use of the CAVE system in classroom settings; moreover, the effectiveness of the CAVE system with respect to students' character performance should be investigated by comparing the CAVE system with traditional instruction.


References
[1] Resnick, M. (2002). Rethinking learning in the digital age. In Kirkman, G. S., Cornelius, P. K., Sachs, J. D., & Schwab, K. (Eds.), The global information technology report: Readiness for the networked world (pp. 32-37). Oxford: Oxford University Press.
[2] Nucci, L. (1997). Moral development and character formation. In H. J. Walberg & G. D. Haertel (Eds.), Psychology and educational practice (pp. 127-157). Berkeley: MacCarchan.
[3] Vygotsky, L. S. (1978). Mind in society. Cambridge: Harvard University Press.
[4] Bandura, A. (1977). Social learning theory. Englewood Cliffs: Prentice-Hall.
[5] Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs: Prentice-Hall.
[6] Paivio, A. (1986). Mental representations: A dual coding approach. Oxford: Oxford University Press.
[7] Paivio, A. (2007). Mind and its evolution: A dual coding theoretical approach. Mahwah: Lawrence Erlbaum Associates.
[8] Clark, J. M., & Paivio, A. (1991). Dual coding theory and education. Educational Psychology Review, 3(3), 149-210.
[9] Paivio, A. (1971). Imagery and verbal processes. New York: Holt, Rinehart and Winston.
[10] Yang, J. C., Huang, Y. T., Tsai, C. C., Chung, C. I., & Wu, Y. C. (2009). An automatic multimedia content summarization system for video recommendation. Educational Technology & Society, 12(1), 49-61.
[11] Mackey, T. P., & Ho, J. (2008). Exploring the relationships between web usability and students' perceived learning in web-based multimedia (WBMM) tutorials. Computers & Education, 50(1), 386-409.
[12] Mayer, R. E., & Moreno, R. (2002). Animation as an aid to multimedia learning. Educational Psychology Review, 14(1), 87-99.
[13] Lin, C. L. (2001). The effect of varied enhancements to animated instruction on tests measuring different educational objectives. Unpublished doctoral dissertation, The Pennsylvania State University.
[14] Li, Z., & Grabowski, B. L. (2006). Web-based animation or static graphics: Is the extra cost of animation worth it? Journal of Educational Multimedia and Hypermedia, 15(3), 329-347.
[15] Hawi, N. S. (2010). The exploration of student-centered approaches for the improvement of learning programming in higher education. US-China Education Review, 7(9), 47-57.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.227

The Role of the Cities in the Western Economic Development
Zhao Guojie, Jia Lijie and Ma Tiefeng
School of Management, Tianjin University, Tianjin, China
Statistics School, Southwestern University of Finance and Economics, Chengdu, China

[email protected], [email protected], [email protected]

Key words: Economic development of city; Path dependence; Spatial econometric models; System GMM

Abstract. Based on an annual panel of western Chinese cities and the system GMM method, this paper builds spatial econometric models to analyze whether the same factors have different impacts on the economic development of the capital cities and the prefecture-level cities. The results show that the capital cities rely more on their own development path and depend less on other cities of the same level, while for the prefecture-level cities the opposite holds.

Introduction
China's urbanization process in recent years has strengthened the role of cities as the carrier of industrialization in the progress of the national economy. The west of China includes nine provinces and one municipality. The western areas lag behind the general level of land transport because of their special geographical conditions, and they also lack export-oriented sea ports. There is a big gap between the western region and the eastern region for historical reasons. The different speeds of development in the east and the west are also a result of the market segmentation artificially created by provincial governments [1]; another cause is the export-oriented development of China's economy. The western development policy seeks to reduce this gap and provides many preferential policies for investors, while many scholars worry that the western area may become the "black hole phenomenon" pointed out by spatial economics [2]. Cities play an important and twofold role for the surrounding area: diffusion and backwash [3,4]. The number of prefecture-level cities in the western region increased rapidly from 68 in 1989 to 84 in 2008 during urbanization, which reduced the geographic distance between the radiuses of the cities. The central cities have become the development engines of the western region and can also drive the progress of the adjacent areas; on the other hand, cities, even small ones, can in theory also affect the surrounding area.

Econometric Model
In order to reveal the relationship between the cities and the surrounding areas in the economic development of the western region, this paper estimates an econometric model based on a dynamic panel, which gives a more accurate description of the complexity of the relationships between the variables, and solves the model by the system GMM method; the literature [5,6,7] has made this approach increasingly popular. The model is given as follows:
Y_it = β·Y_{i,t−1} + η·X_it + µ·(ω·Y)_it + γ·(ω·Y)_{i,t−1} + δµ_i + ε_it.   (1)
We regard Y_it, the GDP of region i at time t, as the dependent variable. Among the explanatory variables we add the first-order lag of the dependent variable, which means that, for a city, last year's GDP can affect this year's GDP. X_it are the main explanatory variables of the econometric model. ω is the spatial weight matrix, which reflects the degree of interaction between different regions. (ω·Y)_it and (ω·Y)_{i,t−1} separately capture the strategic-interaction impact of the same-period and lagged values of the other regions.
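A minimal sketch of how the spatial-lag terms (ω·Y)_it can be computed; the row-standardized inverse-distance weighting is our illustrative assumption, since the paper does not specify its ω at this point:

```python
import numpy as np

def spatial_lag(W, Y):
    """(omega . Y)_it for all units and years: Y has shape (n_units, n_years),
    and row i of W weights the other units' values within the same year."""
    return W @ Y

def inverse_distance_weights(coords):
    # Row-standardized inverse-distance weights with a zero diagonal (assumed).
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    W = 1.0 / d
    return W / W.sum(axis=1, keepdims=True)
```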

228

Manufacturing Systems and Industry Application
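To make the spatial terms in Eq. 1 concrete, the following minimal sketch (ours, not the authors' estimation code; `W_raw` and `Y` are illustrative inputs) builds a row-standardized weight matrix and the contemporaneous and lagged spatial lags; the system GMM estimation itself would then be run on these regressors with specialized software:

```python
import numpy as np

# Sketch: build the spatial-lag regressors of Eq. 1.
# W_raw: (n x n) pairwise proximities between cities (e.g., inverse distances);
# Y:     (n x T) panel of city GDP (log-transformed, as in the paper).
def spatial_lags(W_raw, Y):
    W = W_raw.astype(float)
    np.fill_diagonal(W, 0.0)                   # a city is not its own neighbor
    W /= W.sum(axis=1, keepdims=True)          # row-standardize omega
    WY = W @ Y                                 # (omega . Y)_it, contemporaneous
    WY_lag = np.full_like(WY, np.nan)
    WY_lag[:, 1:] = WY[:, :-1]                 # (omega . Y)_{i,t-1}, one-year lag
    return WY, WY_lag
```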

$\mu_i$ is the fixed effect of the $i$-th region, and $\varepsilon_{it}$ is the random disturbance term, assumed i.i.d. with $\varepsilon_{it} \sim N(0, \sigma_\varepsilon^2)$. This paper considers two different spatial matrices $\omega$ in order to analyze the city's role in the regional economy: one captures the spatial impact coming from the prefecture-level cities, the other from the provincial capitals.

The components of $X$ include: the level of urbanization, measured by the ratio of non-agricultural population to total population, denoted czh; the ability of a city to attract investment, measured by the ratio of local foreign direct investment (FDI) to the total for the western region, denoted fdi; investment in fixed assets, measured by the ratio of local fixed assets to the western-region total, denoted gdtz; transportation conditions, measured by the ratio of local mileage to total mileage of the western region, denoted ys; and the dependence of local finance on land-sale revenue, measured by the ratio of local land revenue to local budgetary revenue, denoted tdsr. Land is a basic resource for economic development. Literature [8] pointed out that the ratio of land revenue to extra-budgetary revenue exceeded 60% in some cities on the east coast and reached 70% in some small cities in the western region; the total national land revenue was 5500 billion [8], equal to 39.25% of the total budget revenue [9]. The explanatory variables also include government input in technology and education, measured by the ratio of local input in these fields to the total input, denoted kjtr, and local retail trade sales, measured by the ratio of local retail sales to total retail sales, denoted ls. The Tibet autonomous region is removed from the data pool because of missing data. The data cover 2004-2008 and come from the China City Statistical Yearbook and the China Statistical Yearbook.

The analysis of the model and the results

To make the data meet the Gaussian assumption, all data are log-transformed. From the results we can conclude that cities at different levels follow different dependent paths. The provincial capitals rely more on their own past development paths and keep to the same direction, and the same indicator of the other provincial capitals has no effect on the studied city, whether lagged or current. The behavior of the prefecture-level cities is completely different: their direction of development can be affected by development information from cities of the same level, while they also rely on their own dependent paths and keep to the same direction. The western areas of China show a large gap in economic level between provinces, and this imbalance can also be found among the prefecture-level cities. Another reason is that the distance between cities, including the prefecture-level cities, is larger than in other parts of China. Table 1 reports the regression results.

Table 1: Regression results (coefficients, with t statistics in parentheses; for the AR and Hansen tests the second value is the p-value)

Variable      Model 1 (prefecture-level city)   Model 2 (provincial city)
Lagy          -0.3548133 (-0.74)                 0.8906181** (2.02)
Fdi           -0.1263583 (-1.47)                -0.0168989 (-0.45)
hy             0.193934*** (3.213)              -0.2309412* (-1.918)
kjtr          -0.049774 (-0.08)                  0.0118832 (0.35)
wy             1.874562*** (4.12)                0.7382577 (0.62)
czh            0.2862267 (0.89)                  0.6059761** (2.01)
gdtz          -0.2523607** (-2.248)              0.3085634* (1.93)
tdsr          -0.830314** (-2.316)              -0.0810459 (-1.15)
ls             0.6070876* (2.1326)               0.0583722 (0.28)
lagwy          1.969219*** (5.47)               -0.4096443 (-0.37)
µi            14.65552*** (5.10)                 1.704491 (0.49)
AR(1)          2.274** (0.024)                   2.557** (0.017)
AR(2)         -1.56 (0.12)                       1.53 (0.14)
Hansen test    6.4958 (0.63)                     4.2071 (0.48)
Observations   221                               42


That the provincial capitals have complete social and economic structures may be one reason why they rely on their own lagged development paths in the same direction. By contrast, the number of prefecture-level cities has increased, and cities at this lower administrative level change their ideas under the influence of other cities of the same level. Another reason may be the championship-style competition among government officials, which is a main impulse behind the cities' economic development: the best strategy for a prefecture-level city is not to be the last one in any item of competition, including economic development.

Another result of the analysis is that FDI and local government input in technology and education have no significant impact on the development of the local economy. Over the past thirty years most scholars have believed that FDI played an important role in changing the mode of development and stimulating the speed of reform [10]. The different policies offered by local governments with different resource endowments and levels of economic development give FDI many choices, and the western region is only one of them [11]. Input in technology and education is important to the western region, which consists of traditional agricultural areas, yet the analysis shows no significant relationship between development and this input; the industrial base is at a low level and value creation is concentrated in labor-intensive industries.

Summary

Based on the spatial econometric model, the capital cities rely more on their own development paths and depend less on other cities of the same level, while the prefecture-level cities show the opposite pattern. The effects of the independent variables also differ: foreign investment and local government input in technology and education show no significant effect on local economic development, and retail trade is more significant for the prefecture-level cities than for the provincial capitals. The reasons can be found in the social and economic structures. The provincial capitals in the western region should improve their industrial structure and provide a stage for technology and education.

References
[1] A. Young: Quarterly Journal of Economics Vol. 115 (2000), p. 1091
[2] Y.S. Huang: Selling China: Foreign Direct Investment During the Reform Era (Cambridge University Press, Cambridge, UK 2003)
[3] M. Fujita: The Spatial Economy: Cities, Regions and International Trade (China Renmin University Press, Beijing 2005)
[4] S.Z. Ke: Economic Research Journal Vol. 8 (2009), p. 85
[5] M. Arellano and S. Bond: Review of Economic Studies Vol. 58 (1991), p. 277
[6] M. Arellano and O. Bover: Journal of Econometrics Vol. 68 (1995), p. 29
[7] R. Blundell and S. Bond: Journal of Econometrics Vol. 87 (1998), p. 115
[8] S.S. Jiang, S.Y. Liu and Q. Li: Management World Vol. 9 (2007), p. 1
[9] E. Borensztein, J. De Gregorio and J.W. Lee: Journal of International Economics Vol. 45 (1998), p. 115
[11] K.N. Xu and J. Chen: Economic Research Journal Vol. 3 (2008), p. 138-149

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.230

An Analysis of the Strategy of Product Platform

Aihua Wu 1,a, Tianfu Li 2,b

1 School of Business, Ludong University, Yantai, 264025, Shandong, China
2 Technology Center, Dongfang Electronics Co., Ltd., Yantai, 264025, Shandong, China

a [email protected], b [email protected]

Key words: strategy; product platform; core technology; core product

Abstract. A product platform is a set of subsystems and interfaces that form a common structure from which a stream of related products can be efficiently developed and produced. Core capability is a key driver of successful product platform development; core technology is the basis of all competitive strategies for high-tech enterprises and their products, and the core products it supports are the physical embodiment of core competence.

Introduction

Firms in many industries face an increasing need to offer greater product variety to closely match the diverse needs of customers in global markets. As a consequence of this trend, several research papers on product platform theory were published during the 1990s. A common approach in these papers is to make product development more effective by building a large common product platform that is reused across a set of products in a product family. A product family is a group of related products that share common features, components and subsystems, and satisfy a variety of market niches. A product platform is a set of parts, subsystems, interfaces and manufacturing processes that are shared among a set of products [1]. A product family comprises a set of variables, features or components that remain constant in a product platform and from product to product. The design of platform-based product families has been recognized as an efficient and effective means to realize sufficient product variety to satisfy a range of customer demands in support of mass customization [2]. The product platform strategy is a new product development strategy: an enterprise first develops a product platform with advanced technology and good performance scalability, develops the product family on that basis, and, in accordance with market demand, continuously improves and extends the products. New product development based on the platform no longer focuses on a single product or single market, but on a series of technology products that form the product family. C.K. Prahalad and Gary Hamel considered core competence to be accumulated knowledge in the company, especially the complex harmonization of individual technologies and production skills [3]. They made a very visual analogy comparing the corporation to a tree: the core products are its trunk and boughs, the business units its sprays, the end products its leaves, flowers and fruits, and the core competence the roots, which provide nourishment, maintain the life of the tree and reinforce the trunk. Meyer and Utterback added core capability as a key driver of successful product platform development [4]. They claim that core capabilities cannot be separated from the products that the company produces, and they define a product family as products that share a common platform but have specific features and functionality. This approach enables companies to create products for different market segments from a common product platform. Core technology and competence are therefore the core of the product platform strategy; that is to say, core competencies derive from core technologies and core products. Core technology is the essential condition for attaining core competency, and the core product is the physical embodiment and carrier of core competencies. Based on the analysis of a Chinese video surveillance enterprise, this article attempts to reach a better answer by giving insight into core technologies and core products.


Fig. 1 The corporation tree

Literature Review

There are various approaches and strategies for designing families of products and mass-customized goods reported in the literature. These techniques appear in varied disciplines such as operations research [5], computer science [6], marketing and management science [7], and engineering design [8]. Much work in strategic management and marketing research seeks to categorize or map the evolution and development of product families. Sanderson (1991) introduces the notion of a "virtual design" that evolves into product families. Wheelwright and Clark (1992) suggest designing "platform projects", and Rothwell and Gardiner (1990) advocate "robust designs" as a means to generate series of different products within a single product family. After reviewing prior work, we found that several quantitative frameworks have been proposed for product family design; they provide valuable managerial guidelines for implementing platform-based product family development [9]. There are generally two approaches to product family design. One is the top-down approach, which adopts platform-based product family design; the other is the bottom-up approach, which implements family-based product design through redesign or modification of the product's constituent components. The most important characteristics stressed in the literature for designing product families are modularity, commonality and reusability, and standardization. The concept of functional modularity should incorporate the requirements of product families from the product life-cycle perspective. The research and development work is mainly in the realm of academics and does not provide support for core competence-based processes.

Before core competencies can be discussed, the term competency needs to be understood. A competency is essentially a construct [10]. As Pedhazur and Schmelkin (1991) and Nunnally and Bernstein (1994) explained, a construct involves inferring the existence of a concept that is not directly measurable or observable (i.e., a construct) from related information that is measurable or observable (i.e., indicators of the construct). For example, anxiety is a construct related to an individual, and organizational commitment is a construct related to an individual's relationship with an organization. While these individual and organizational concepts are generally accepted, it is only possible to truly observe or measure indicators of these constructs. A core competency is merely a derivative of a competency; the term core means that it is a key factor, or foundation. As with a competency, defining a core competency is also challenging, since more than one type of core competency can be defined within an organization. These multiple definitions contribute to the lack of clarity regarding the use of the term in business and industry. A cursory review of business and industry literature illustrates this lack of clarity by revealing a range


of terms, from core technical and core marketing competencies [11] to market-specific and function-specific core competencies. Generally speaking, core technologies and products are the outward expression of core competence.

When developing new products, most firms use cross-functional teams. Of the various available team structures, 97% of firms choose cross-functional development teams [12], which consist of members who belong to different departments but share the common objective of developing a new product. We argue that because performance benefits derive from competence diversity, they cannot be predicted effectively on the basis of functional diversity [13]. Competence diversity provides two benefits. First, it positively influences the information and knowledge that members bring to the team. Second, it improves the processing of this information in two respects: deeper thinking and a broader range of perspectives considered in making decisions [14]. These complementary effects induce a higher degree of instrumental use of available information; instrumentally using information means employing it to resolve a specific problem. In the specific context of new product development, the three main types of success-related information concern customers, competitors and technology.

Core Technologies and Products

Core technology comprises the systematic activities that increase the amount of knowledge and create new applications of that knowledge. Prahalad and Hamel emphasized that core technology is very important: core competence is the collective learning in the organization, especially the capacity to coordinate diverse production skills and integrate streams of technologies [3]. Companies must identify core competencies, which provide potential access to a wide variety of markets, contribute to the customer benefits of the product, and are difficult for competitors to imitate. Core competencies are based on the platform of technology and products. Core technology is a concept matched with the components of products: it is the design and manufacturing technology of critical components. It has four characteristics: (a) owned by few corporations; (b) high added value; (c) high cost to learn and imitate; (d) impossible to replace. For example, compressor manufacturing technology is the core technology for air conditioners and refrigerators, picture tube and signal-processing integrated circuit technology is the core technology for color televisions, and engine manufacturing technology is the core technology for motorcycles and cars. That is to say, core technology determines the level of critical components and the functions of products. Owning the core technology and its intellectual property rights is the most important aspect of building core competency.

Core products are the company's products most directly related to its core competencies, so core products are often the physical embodiment of one or more core competencies. In developing and utilizing a core product, from the technology perspective we should focus on the material object and the value and functions it carries; from the management perspective, on the concept of the core product and the business mode. The business mode of a core product is the coupling of its concept form and its material object form. What a corporation depends on to survive and improve in its industry and market are products with continuous advantages.
They are formed in the process of manufacturing and marketing through the chain: core technology → technology platform → product platform → core products → end products; thus the core product is the bridge between a product platform and the platform's products. Corporations can keep and display the competitive advantage coming from core technology not only over time but also across geography, by developing core products and end products one after another or simultaneously on the basis of product platforms. The competitive advantage of products is a comprehensive expression of core competencies. The innovation capability of a corporation is based on its product innovation platform, and the key to improving innovation capability continuously is to update the product innovation platform in a timely manner. The core competencies of the corporation are derived from its advanced core technology, integrated into core products, and embodied in end products.
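As a toy illustration of this chain (ours; all class, technology and product names are hypothetical, not from the paper's case company), a product family can be modeled as products that reuse one platform object:

```python
from dataclasses import dataclass

# Toy model of the chain: core technology -> technology platform ->
# product platform -> core products -> end products.
@dataclass
class CoreTechnology:
    name: str                     # e.g., "compressor manufacturing"

@dataclass
class ProductPlatform:
    technologies: list            # shared core technologies
    subsystems: list              # common parts, interfaces, processes

@dataclass
class Product:
    platform: ProductPlatform     # reused common structure
    variant_features: list        # what differentiates this family member

# A product family = products sharing one platform but differing in features.
platform = ProductPlatform(
    technologies=[CoreTechnology("compressor manufacturing")],
    subsystems=["chassis", "control board"],
)
family = [
    Product(platform, ["1.5 kW, basic"]),
    Product(platform, ["2.5 kW, inverter"]),
]
```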


Conclusions

To sum up, continuous development is the only way to form a unique accumulation of technology and knowledge and to obtain technical advantages that are difficult for competitors to imitate. Based on advanced technologies, a corporation can develop core products and then spawn a series of different products. By keeping such competitive advantage continuously, a corporation can cultivate its core competencies, and these core competencies in turn give the corporation fresh impetus to progress and grow.

References
[1] J.P. MacDuffie, K. Sethuraman and M.L. Fisher: Product variety and manufacturing performance: Evidence from the international automobile industry. Management Science, Vol. 42, No. 3 (1996), p. 350-369
[2] M.H. Meyer and A.P. Lehnerd: The Power of Product Platforms. The Free Press, New York (1997)
[3] M.M. Tseng and J.X. Jiao: Product family modeling for mass customization. Computers in Industry, Vol. 35, No. 3-4 (1998), p. 495-498
[4] C.K. Prahalad and G. Hamel: The core competence of the corporation. Harvard Business Review (1990), May-June, p. 79-91
[5] N. Gaither: Production and Operations Management: A Problem-Solving and Decision-Making Approach. The Dryden Press, New York (1980)
[6] G.J. Nutt: Open Systems. Prentice Hall, Englewood Cliffs, NJ (1992)
[7] M.H. Meyer and J.M. Utterback: The product family and the dynamics of core capability. Sloan Management Review, No. 1 (1993), p. 29-47
[8] K. Fujita, H. Sakaguchi and S. Akagi: Product variety deployment and its optimization under modular architecture and module commonalization. Proceedings of the 1999 ASME Design Engineering Technical Conferences, Paper No. DETC99/DFM-8923, ASME (1999)
[9] X.F. Zha and R.D. Sriram: Platform-based product design and development: A knowledge-intensive support approach. Knowledge-Based Systems, Vol. 19 (2006), p. 524-543
[10] R.K. Lahti: Identifying and integrating individual level and organizational level core competencies. Journal of Business and Psychology, Vol. 14, No. 1, Fall (1999)
[11] M.R. Gallon, H.M. Stillman and D. Coates: Putting core competency thinking into practice. Research Technology Management, Vol. 38, No. 3 (1995), p. 20-28
[12] E.F. McDonough: Investigation of factors contributing to the success of cross-functional teams. Journal of Product Innovation Management, Vol. 17, No. 3 (2000), p. 221-235
[13] C. Haon, D. Gotteland and M. Fornerino: Familiarity and competence diversity in new product development teams: Effects on new product performance. Marketing Letters, No. 20 (2009), p. 75-89
[14] K.B. Dahlin, L.R. Weingart and P.J. Hinds: Team diversity and information use. Academy of Management Journal, Vol. 48, No. 6 (2005), p. 1107-1123

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.234

Study of Enhancement Technology of Color Image Based on Adaptation and Nonlinearity

Taile Peng 1,a, Youdong Ding 2,b, Changjie Zhu 1,c

1 School of Computer Science and Technology, Huaibei Normal University, Huaibei, Anhui 235000, China
2 School of Film and TV Arts & Technology, Shanghai University, Shanghai 200072, China

a [email protected], b [email protected], c [email protected]

Key words: Sliding Window, Adaptive Enhancement, Neighborhood Information, Adaptive Object Segmentation.

Abstract. Image features are lost in regions of a noisy image that are too bright or too dark. An adaptive, nonlinear algorithm for color image enhancement is proposed in this paper. It consists of five stages: the color image is converted to a grayscale image; a threshold is ascertained by the HVS and the positions of noise are located by a sliding-window method; according to the local characteristics of each noise position, median filtering is executed with a weighted template; the image is then separated into parts; finally, the image is adaptively enhanced according to the Retinex idea and the color of the image is restored. Experiments show that the algorithm effectively improves de-noising ability and image contrast.

Introduction

Digital images are often degraded by noise introduced by various factors. As image quality declines, subsequent image-processing work becomes much more difficult. By using image enhancement technology, better visual effects can be obtained. Common image enhancement algorithms include gamma correction, histogram equalization, additional analysis, and bionic and multiresolution wavelet-based methods [1,2]; the histogram equalization algorithm has also been improved in some literature. In the frequency domain, since noise is found chiefly in the high-frequency band of the spectrum, the algorithms primarily enhance high-frequency information such as image edges. The purpose of color image enhancement is to enhance the image detail and make the image more vivid and colorful without causing distortion, color casts and so on. In recent years, color image enhancement algorithms can be divided into two categories: (1) algorithms that maintain color constancy [3], researched in color space conversion, where researchers apply appropriate weights to the luminance and saturation components; (2) algorithms based on human visual perception, with E. Land's Retinex theory as the main representative [4]. Researchers have also proposed many new color image enhancement methods [5,6,7]. A new algorithm based on linear filtering, nonlinear filtering and Retinex theory is proposed in this paper. First, the noise points of the color image are determined. Second, according to the statistical properties of the positions of noisy pixels, a weighted template is used to


remove the noise, and the image is adaptively divided into small pieces. Finally, a better visual effect is obtained by increasing the contrast of each image area based on Retinex theory. The algorithm includes the following three steps: (1) determine the mutation points (noise points) and remove the noise; (2) partition the image into blocks adaptively and enhance the contrast of each local area based on Retinex theory; (3) restore the color.

Identify mutation points

Determining the mutation points is the first step of the whole process and directly affects the final enhancement result. There are many ways to determine mutated points. For example, a mutated point can be determined with a fixed window [8]: the difference between the average gray of all pixels within the window and the gray value of the center pixel is computed, and if the difference is greater than a threshold the center pixel is considered a noise point, otherwise a non-noise point. There are two problems with this algorithm: (1) the threshold is often arbitrary, and selecting an appropriate threshold, which directly affects the filtering result, is very difficult; (2) a fixed threshold cannot truly reflect the noise sensitivity of the local area of the image. In some literature a k×k window is moved over the image and the maximum and minimum gray values of the window are found; if the center pixel equals the maximum or minimum it is treated as a noise point, otherwise as a non-noise point. The drawback of this algorithm is that treating the local maximum and minimum as the judgment standard for noise points does embody adaptability, but if the pixel with the maximum or minimum gray value is not actually a noise point, treating it as such gives a wrong result. Based on the characteristics of the above two methods, we use the following algorithm to determine the noise points.

The noise-point determination algorithm is as follows. (1) For a given M × N image, pixel (1,1) is regarded as the first point; a sub-block of size W × H is defined and moved along the width and height directions with steps a and b. Calculate the average gray of the image, AVER(M, N), together with MAX(M, N) and MIN(M, N):

$$\mathrm{AVER}(M,N)=\frac{1}{M\times N}\sum_{m=1}^{M}\sum_{n=1}^{N}f(m,n) \qquad (1)$$

Based on computer vision theory, the human eye is most sensitive to luminance changes. A color image can be converted to a YUV image, and the HVS (human visual system) response of the gray image is then calculated. An image with noise can be viewed as a superposition of a strong background and the noise; as long as the contrast of the noise is below the HVS threshold, the HVS cannot sense the presence of the noise. Therefore the noise is detected with a noise-sensitivity factor as the threshold. In this paper the matrix of noise-sensitivity coefficients is represented by the JND [9] (formula (3)). (2) Move the sub-block to the right by step a and calculate the average gray AVER(W, H) of the sub-block image according to formula (2). If the sub-block has not yet reached the image boundary, repeat step (2); otherwise move it back to the left margin and go to the next step.

$$\mathrm{AVER}(W,H)=\frac{1}{W\times H}\sum_{w=1}^{W}\sum_{h=1}^{H}f(w,h) \qquad (2)$$


$$\mathrm{JND}(i,j)=\frac{1}{W\times H}\sum_{m=-W/2}^{W/2}\;\sum_{n=-H/2}^{H/2}\left[f(i+m,\,j+n)-\mathrm{AVER}(W,H)\right]^{2} \qquad (3)$$

Explanation: JND(i, j) is the noise-sensitivity factor of the window's center pixel. (3) Move the sub-block down by a length b and calculate the average gray AVER(W, H) of the sub-block image according to formula (4). If the sub-block does not exceed the image boundary, repeat step (3); otherwise go to the next step.

$$\mathrm{AVER}(W,H)=\frac{1}{W\times H}\sum_{w=a}^{W+a}\;\sum_{h=1}^{H+b}f(w,h) \qquad (4)$$

(4) Let the gray of the sub-block's center pixel be GRAY(i,j). If GRAY(i,j) equals MAX(W,H), or GRAY(i,j) equals MIN(W,H), or |GRAY(i,j) − AVER(W,H)| is greater than JND(i,j), then the pixel is determined to be a noise point; otherwise it is a non-noise point.

De-noising smoothing algorithm

In this paper an improved detail-preserving de-noising algorithm is proposed. Owing to the correlation within an image, the gray values of an undisturbed image do not change greatly within a neighborhood, so their variance should be small; that is, the neighborhood has the minimum gray-scale variance. We choose the neighborhood, convolve it with the corresponding weighted template to obtain a gray value, and treat that value as the gray value of the pixel to be processed; this reduces noise interference while better preserving the edge details of the image. To improve image quality, a local smoothing algorithm that retains edge details is proposed, whose basic idea is to average the pixels with different weights according to their characteristics. Five weighted templates are defined in this paper; according to the statistical characteristics of the region around the noise point, one of the templates shown in Figure 1 is selected:

(a) (1/16) × [1 2 1; 2 4 2; 1 2 1]
(b) (1/14) × [1 2 1; 1 4 1; 1 2 1]
(c) (1/14) × [1 1 2; 1 4 1; 2 1 1]
(d) (1/14) × [2 1 1; 1 4 1; 1 1 2]
(e) (1/14) × [1 1 1; 2 4 2; 1 1 1]

Figure 1: Weighted templates
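A minimal sketch of the noise test in steps (1)-(4) and the weighted-template smoothing might look as follows (our illustration, not the authors' implementation; window handling at image borders is omitted):

```python
import numpy as np

# `gray` is a 2-D uint8 array; (i, j) must lie at least one pixel inside
# the border for the 3x3 operations below.
TEMPLATE_A = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]]) / 16.0      # template (a) of Figure 1

def is_noise(gray, i, j, w=3):
    half = w // 2
    block = gray[i - half:i + half + 1, j - half:j + half + 1].astype(float)
    aver = block.mean()
    jnd = ((block - aver) ** 2).mean()         # Eq. 3: per-window JND
    g = float(gray[i, j])
    return g == block.max() or g == block.min() or abs(g - aver) > jnd

def smooth_pixel(gray, i, j, template=TEMPLATE_A):
    block = gray[i - 1:i + 2, j - 1:j + 2].astype(float)
    return float((block * template).sum())     # weighted-template convolution
```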

Retinex theory

The retinal cortex theory (Retinex theory) was proposed by E. Land in 1964 and is the most influential computational theory of color constancy. E. Land held that color constancy is not susceptible to changes in environmental lighting, and is related to the way the visual system perceives objects. According to Retinex theory, the brightness that the human eye perceives for an object depends on the environmental illumination and the reflection of the incident light. This can be expressed as:

$$L(x,y)=E(x,y)\cdot R(x,y) \qquad (5)$$

Explanation: E is the illumination function, R is the reflectance function, and L represents the brightness at point (x, y).


E. Land's experiments show that the human eye's perception of brightness is exponential, so the relationship between light and dark pixels can be processed in the log domain, where complex products become simple additions and subtractions. Taking the logarithm of equation (5) gives equation (6):

$$\log(L(x,y))=\log(E(x,y))+\log(R(x,y)) \qquad (6)$$

From equation (6):

$$\log(R(x,y))=\log(L(x,y))-\log(E(x,y)) \qquad (7)$$

The E(x, y) function can then be written as the convolution of a surround function with the light intensity in the channel, which gives the basic form of the Retinex algorithm:

$$\log(R(x,y))=\log(L(x,y))-\log(F_{k}(x,y)*L(x,y)) \qquad (8)$$

Explanation: L(x, y) is the brightness value of each image pixel, in the range [0, 255] (the smaller the value, the darker the pixel). F_k(x, y) is the weighting function, also known as the convolution kernel. log(R(x, y)) is the output quantity, denoted by γ(x, y). According to the literature [10], the expression of the output image is:

$$\gamma_{out}(x,y)=250\cdot\frac{\gamma(x,y)-\min(\min(R),\min(G),\min(B))}{\max(\max(R),\max(G),\max(B))-\min(\min(R),\min(G),\min(B))} \qquad (9)$$

According to the literature [11], an overlapping-block method is used to maintain the brightness of each image sub-block: the image is split into batches and each sub-block is enhanced with the Retinex method.
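For illustration, a single-scale version of Eq. 8 with a Gaussian surround function (an assumption on our part; the paper does not fix F_k) can be sketched as:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Minimal single-scale Retinex sketch of Eq. 8. `L` is a 2-D brightness
# array; `eps` avoids log(0); `sigma` sets the surround scale.
def retinex_log_output(L, sigma=80.0, eps=1.0):
    L = L.astype(float) + eps
    surround = gaussian_filter(L, sigma)       # F_k(x, y) * L(x, y)
    return np.log(L) - np.log(surround)        # gamma(x, y) = log R(x, y)

def stretch_to_display(gamma):
    # Map the log-domain output to the display range, in the spirit of Eq. 9.
    return 250.0 * (gamma - gamma.min()) / (gamma.max() - gamma.min())
```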

Color compensation for noise points

Since the human eye's brightness perception is exponential, better enhancement results can be obtained by an adaptive exponential method. In general, good enhancement of color images can be achieved by Retinex-based algorithms, but there are some deficiencies. For example, when an image has large color variation, averaging the R, G, B components tends to equalize them, and the processed image loses a lot of color information; other measures are then needed to restore it. Under normal circumstances the R, G, B components are processed separately by the Retinex algorithm, which may unbalance the three colors and lead to color distortion, and it also requires more processing time. The contrast of the image can be effectively improved by applying the Retinex algorithm to local areas of the image, and the details of dark areas are highlighted. At the sub-block boundaries of the output image, block effects may occur; a block-effect removing filter (BERF) can be used to eliminate them [12]. This gives the whole enhancement process some flexibility.

Color restoration

After adjusting the brightness and enhancing the contrast of the local image by the method described above, a brightness-enhanced image is obtained. According to the color information of the original image, a simple linear operation can restore the color information of the enhanced image. Because the operation is linear, the ratios among the R, G, B components of the restored image remain the same, so the color information of the original image is well preserved. The recovery of the image color information is described as follows.


The image color is restored using formula (10):

$$I'_{j}(x,y)=a(x,y)\,I_{j}(x,y),\qquad j=r,g,b \qquad (10)$$

Explanation: a(x, y) is the enhancement ratio factor at point (x, y); I_j(x, y) (j = r, g, b) are the R, G, B color components of the original image; I'_j(x, y) (j = r, g, b) are the three color components of the enhanced color image.
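A compact sketch of Eq. 10 (ours; the array names are illustrative, and the per-pixel gain is derived here from the gray-level images before and after enhancement):

```python
import numpy as np

# Every channel of the original image is scaled by the same gain a(x, y),
# so the R:G:B ratios, and hence the original hue, are preserved.
def restore_color(orig_rgb, orig_gray, enhanced_gray, eps=1.0):
    a = (enhanced_gray.astype(float) + eps) / (orig_gray.astype(float) + eps)
    return np.clip(orig_rgb.astype(float) * a[..., None], 0, 255)  # I'_j = a I_j
```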

Experimental results and analysis

To verify the proposed algorithm, we chose 20 poor-contrast images with noise for the experiments. Under interference with different noise levels, we compare the filtering algorithm proposed in this paper with the median filter and the neighborhood-average filter, in terms of de-noising and detail protection, using the PSNR introduced in the literature [13]. PSNR is defined as:

$$PSNR=10\log_{10}\frac{L^{2}}{MSE} \qquad (11)$$

Explanation: L is the maximum gray value of the image, MSE is the mean square error of the image, and PSNR is the peak signal-to-noise ratio. For pictures with 2%, 5% and 10% salt-and-pepper noise, de-noising experiments were carried out with the filtering algorithm proposed in this paper, the median filter and the neighborhood-average filter. The results are shown in Figure 2 and Figure 3. From the experimental data and graphs, the following conclusions can be drawn. The PSNR obtained by our algorithm is larger than the PSNR obtained by the 3 × 3 mean filter and the 3 × 3 median filter, which shows that our algorithm is better than the mean filter and the median filter in both de-noising and detail protection. As the salt-and-pepper noise increases, the PSNR values of the three methods still differ markedly; the new algorithm removes almost all the noise, protects details well, and yields higher image resolution. The data in Figure 2 and Figure 3 show that, among the three methods, the new method has excellent filtering performance.

Figure 2: Experimental results with 2% salt-and-pepper noise. Figure 3: Experimental results with 5% salt-and-pepper noise.
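For reference, Eq. 11 can be computed directly (a minimal sketch; `reference` is the clean image and `processed` the filtered one):

```python
import numpy as np

# PSNR of Eq. 11, with L the maximum gray value (255 for 8-bit images).
def psnr(reference, processed, L=255.0):
    mse = np.mean((reference.astype(float) - processed.astype(float)) ** 2)
    return 10.0 * np.log10(L ** 2 / mse)
```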


For an objective evaluation of changes in image brightness and contrast, Jobson [14] proposed a comparison method based on the image mean and the mean local variance:

$$C=\frac{\mathrm{Var}(I_{out}(x,y))-\mathrm{Var}(I_{in}(x,y))}{\mathrm{Var}(I_{in}(x,y))} \qquad (12)$$

$$L=\frac{\mathrm{Mean}(I_{out}(x,y))-\mathrm{Mean}(I_{in}(x,y))}{\mathrm{Mean}(I_{in}(x,y))} \qquad (13)$$

Figure 4: Experimental results of the change in contrast ratio. Figure 5: Experimental results of the change in contrast ratio.

From Figure 4 and Figure 5, the multi-scale Retinex algorithm (MSR) has a good effect on brightness enhancement but not on contrast. With the new algorithm, the change in brightness is small, the contrast is significantly enhanced, and the color of the image has better fidelity.

Conclusion

According to the global and local adaptive characteristics of human visual perception, a color image enhancement algorithm based on adaptive image splitting is proposed in this paper. In the algorithm, color image enhancement is realized by global dimming, local contrast enhancement and color restoration. Experiments show that color images can be effectively enhanced by the algorithm. The enhanced image is more prominent in its details, especially in dark areas, and is more vivid and realistic.

Acknowledgement

This work is supported by the University Science Research Project of Anhui, China (No. KJ2010A304).


References
[1] Liu Guo-jun, Tang Xiang-long, Huang Jian-hua and Liu Jia-feng: An image contrast enhancement approach based on fuzzy wavelet. Acta Electronica Sinica, 2005, 33(4): 643-646.
[2] Wang Shou-jue, Ding Xing-hao, Liao Ying-hao and Guo Dong-hui: A novel bio-inspired algorithm for color image enhancement. Acta Electronica Sinica, 2008, 36(10): 1970-1973.
[3] Kokkeong T. and Oakley J.P.: Enhancement of color images in poor visibility conditions. International Conference on Image Processing, Vancouver, BC, Canada: IEEE, 2000: 788-791.
[4] Xia Siyu, Li Jiuxian and Xia Liangzheng: Improved color image enhancement algorithm based on color constancy. Journal of Nanjing University of Aeronautics & Astronautics, 2006, 38(suppl): 54-57.
[5] Huang Kai-qi, Wang Qiao and Wu Zhen-yang: Multi-scale color image enhancement algorithm based on color space and human visual system (HVS). Acta Electronica Sinica, 2004, 32(4): 673-676.
[6] Kimmel R., Elad M. and Shaked D.: A variational framework for Retinex. International Journal of Computer Vision, 2003, 52(1): 7-23.
[7] Funt B., Ciurea F. and McCann J.: Retinex in MATLAB. Journal of Electronic Imaging, 2004, 13(1): 48-57.
[8] Xu Caijun, Wang Hua, Wang Jianglin and Ce Linlin: Directionally dependent adaptive SIGMA median filter. Geomatics and Information Science of Wuhan University, 2005, 30(10): 28-31.
[9] Pan Meisen and Yi Ming: An adaptive mean filter algorithm based on HVS. Computer Engineering and Applications, 2006, 42(10): 62-83.
[10] Jiang Xing-fang, Wang Ge and Shen Wei-min: A method of color image enhancement using color advanced Retinex. Journal of Optoelectronics. Laser, 2008, 19(10): 1402-1404.
[11] Jiang Ju-lang, Zhang You-sheng, Xue Feng and Hu Min: Local histogram equalization with brightness preservation. Acta Electronica Sinica, 2006, 34(5): 861-866.
[12] S.D. Chen and A.R. Ramli: Contrast enhancement using recursive mean-separate histogram equalization for scalable brightness preservation. IEEE Transactions on Consumer Electronics, 2003, 49(4): 1301-1309.
[13] Lei Chao-yang, Liao Hai-zhou and Zhou Xun-bin: Remove noises based on a new filter method. Microelectronics & Computer, 2009, 26(1): 162-165.
[14] Jobson D.J., Rahman Z.U. and Woodell G.A.: The statistics of visual representation. Proceedings of SPIE Visual Information Processing XI, Washington: SPIE Press, 2002: 25-35.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.241

A High Precision Large Area Scanner for Ancient Painting and Calligraphy

Xianghua Chen a and Xifan Shi b

Zhijiang College, Zhejiang University of Technology, 310024, Hangzhou, Zhejiang, China

a [email protected], b [email protected]

Key words: Image Acquisition, Digital photography, Image Processing, Cultural Heritage Conservation.

Abstract. It is of great significance to digitize ancient paintings and calligraphy. A typical way to acquire them is a linear-CCD-based large-area table scanner, but this kind of solution has great drawbacks in precision as well as scanning range, which prohibit its use in museums and libraries. Our lab has recently developed new equipment to overcome these drawbacks, and hopefully it will shed new light on the documentation of ancient paintings and calligraphy. This paper discusses a feasible way to improve image sharpness in theory, including the determination of the theoretical optimal aperture, the maximum optical resolution, and the advantage of using a large image sensor. The acquisition experiment shows that the method and the scanning hardware achieve satisfactory results.

Introduction

It is of great significance to digitize ancient paintings and calligraphy, especially for a country with five thousand years of history and rich cultural heritage. The advantages show in at least two aspects. It is a fact universally acknowledged that millions of paintings and calligraphy works are threatened by various forms of deterioration and even natural disasters, and the recent earthquake has reminded us of the urgency and necessity of heritage conservation. Every creature and object, no matter how carefully protected, is doomed to return to dust, and cultural heritage is no exception. It goes without saying that the only way to eternalize it is to store its digital replica in computers, which many museums and libraries have planned or started to do. Apart from this direct conservation, a digital replica of high fidelity and precision can be disseminated across the world through the World Wide Web, which greatly facilitates research and exhibition; otherwise the precious heritage probably still sleeps on a shelf in a warehouse, which, needless to say, is a great waste. In addition, a digital replica of high fidelity and precision can be reprinted for commercial use and earn profit for the preservation of heritage.

Currently, all the devices for acquiring ancient paintings and calligraphy can be classified into two categories: linear-CCD-based table scanners such as the Cruse CS 285ST, and photographing-based scanners [5]. The performance of a scanner can be evaluated in six aspects: precision, scanning range [3], cost, flexibility, fidelity and maintenance [4]; the latter category has great advantages over the former [4]. In 1991 (the research started in the late 1980s), Martinez [7] and Hamber [8] proposed a photographing-based scanner in the VASARI project to digitize paintings. Later the scanner was improved [9] and put into operation at the National Gallery [10,11]. Apart from the VASARI scanner, there are others [12,13,14]. In addition to the hardware, research has focused on multispectral imaging [2,9,15], color accuracy [6,10,12,16], mosaicing [1,12,16] and geometric accuracy [4,5], but few have considered the clarity of the final digital replica. Initially this aspect was neglected, but our customers sometimes complain that some part of a scanned painting is rather blurred. It is worth remembering the saying "the barrel is only as good as the shortest plank in the barrel": the blurred part of the image is, no doubt, the shortest plank and should be researched and improved. In this paper, aside from the determination of the theoretical optimal aperture, the maximum optical resolution and the advantage of using a large image sensor are also presented.


Optics Based Theoretical Analysis

Theoretical Optimal Aperture. The confusion of a photo comes from two sources: depth confusion and diffraction confusion. Combining them, the overall confusion under various apertures (F numbers) is as follows [4]:

$$C=\frac{f}{u-f}\left(\frac{f}{uF}\,\Delta u+2.44\,\frac{\lambda}{f}\,Fu\right), \qquad (1)$$

where f is the focal length, u the object distance, F the aperture, Δu the ruggedness of the painting (say 5 mm), and λ is taken as 0.66 micrometer because the wavelength of more than 90% of visible light is no larger than 0.66 micrometer. Since recent lenses have no aperture rings, the selectable F numbers are discrete (3.5, 4.0, 4.5, 5.0, etc.); each selectable F number can be substituted into Eq. 1, and the theoretical optimal aperture is the F number that makes Eq. 1 smallest [4]. From the minimum value of Eq. 1, which makes the photo sharpest (at least in theory), a maximum optical resolution can be computed, as discussed in section 2.2. Also, the F number that minimizes Eq. 1 mathematically may not exist in practice (for example, the mathematically sharpest F number may be less than 1.0, and such a lens is difficult to find). When using a DSLR, typically with a relatively large sensor, this F number can easily be satisfied; this issue is discussed in detail in section 2.3.

The Maximum Optical Resolution. If the object distance is u, the image distance v can be computed by the thin-lens formula:

$$v=\frac{uf}{u-f}. \qquad (2)$$

Suppose the magnification is m and the reciprocal of m is k; then:

$$\frac{1}{k}=m=\frac{v}{u}. \qquad (3)$$

Substituting Eq. 2 into the above yields:

$$\frac{1}{k}=m=\frac{uf}{(u-f)\,u}=\frac{f}{u-f}, \qquad (4)$$

which is equivalent to

$$u=(k+1)f. \qquad (5)$$

Suppose the desired resolution is P (PPI, pixels per inch), the focal length multiplier is s, and the length and resolution of the image sensor are L (for a 35 mm full-frame DSLR such as the Nikon D3X, 36 millimeters) and d (for the D3X, d is 6048), respectively. (The focal length multiplier equals the diagonal of 35 mm film, 43.3 mm, divided by the diagonal of the sensor: for medium- and large-format cameras s is less than 1.0; for 135 full-frame DSLRs s is 1.0; for APS cameras from Nikon, Sony and Pentax s is 1.5; for Canon and Sigma APS-C cameras s is 1.6 and 1.7, respectively; for Olympus and Panasonic s is 2.0; and for digital compact cameras, except the Sigma DP series and some discontinued cameras such as the Sony R1, s is generally no less than 4.0.) Then, by the definition of magnification, the following equation holds:

$$\frac{d}{L}\,s=k\,\frac{P}{25.4}. \qquad (6)$$

Then k is:

$$k=\frac{25.4\,ds}{LP}=\frac{127\,ds}{5LP}. \qquad (7)$$
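Putting Eq. 1 to work, the optimal-aperture search over a lens's discrete stops can be sketched as follows (our illustration; the focal length, object distance and stop list are assumed values, and all lengths are in mm with λ = 0.66e-3 mm):

```python
# Evaluate the overall confusion C(F) of Eq. 1 on the discrete F-numbers
# a lens actually offers, and pick the one that minimizes it.
def best_aperture(f=50.0, u=1000.0, du=4.0, lam=0.66e-3,
                  stops=(2.8, 3.5, 4.0, 4.5, 5.0, 5.6, 6.3, 7.1, 8.0)):
    def confusion(F):                                   # Eq. 1
        return f / (u - f) * (f * du / (u * F) + 2.44 * lam * F * u / f)
    return min(stops, key=confusion)
```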

The minimum of Eq. 1 is

$$\frac{f}{u-f}\,2\sqrt{2.44\,\lambda\,\Delta u}. \qquad (8)$$

Substituting Eq. 4 into the above gives:

$$\frac{1}{k}\,2\sqrt{2.44\,\lambda\,\Delta u}. \qquad (9)$$

The pixel pitch should not be less than the value of Eq. 9, thus:

$$\frac{1}{k}\,2\sqrt{2.44\,\lambda\,\Delta u}\le\frac{L}{ds}. \qquad (10)$$

Substituting Eq. 7 into the above yields:

$$\frac{5LP}{127\,ds}\,2\sqrt{2.44\,\lambda\,\Delta u}\le\frac{L}{ds}. \qquad (11)$$

Simplification gives:

$$\frac{5P}{127}\,2\sqrt{2.44\,\lambda\,\Delta u}\le 1, \qquad (12)$$

which means the maximum optical resolution is:

$$P\le\frac{12.7}{\sqrt{2.44\,\lambda\,\Delta u}}, \qquad (13)$$

where λ is 0.66 micrometer because the wavelength of more than 90% of visible light is no larger than 0.66 micrometer. Consequently,

$$P\le\frac{12.7}{\sqrt{2.44\times 0.66\times 10^{-3}\,\Delta u}}=\frac{316}{\sqrt{\Delta u}}. \qquad (14)$$
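As a quick numeric check of Eq. 14 (a minimal sketch under the same λ = 0.66 µm assumption, with Δu in millimeters):

```python
# Maximum optical resolution of Eq. 14 for a given painting ruggedness du.
def max_optical_ppi(du, lam=0.66e-3):
    return 12.7 / (2.44 * lam * du) ** 0.5   # = 316 / sqrt(du) for this lambda

print(max_optical_ppi(4.0))        # ~158 PPI before the Nyquist doubling below
print(2 * max_optical_ppi(4.0))    # ~316 PPI, matching Eq. 16
```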

According to Nyquist's law, the sampling should be doubled, which gives:

$$P_{max}=2\times\frac{316}{\sqrt{\Delta u}}=\frac{632}{\sqrt{\Delta u}}. \qquad (15)$$

Suppose the ruggedness of a painting is within 4 millimeters; then:

$$P_{max}=\frac{632}{\sqrt{\Delta u}}=\frac{632}{\sqrt{4}}=316. \qquad (16)$$

Thus, the maximum PPI is 316. Of course, most current sensors are Bayer-type; as a result, the painting can be photographed at 632 PPI and down-sampled to 316 PPI.

The Advantage of a Large Image Sensor. Eq. 1 is minimized if and only if the following equation holds:

$$\frac{f}{uF}\,\Delta u=2.44\,\frac{\lambda}{f}\,Fu. \qquad (17)$$

Then:


$$F=\frac{f}{u}\sqrt{\frac{\Delta u}{2.44\,\lambda}}. \qquad (18)$$

Substituting Eq. 5 into the above yields:

$$F=\frac{f}{(k+1)f}\sqrt{\frac{\Delta u}{2.44\,\lambda}}=\frac{1}{k+1}\sqrt{\frac{\Delta u}{2.44\,\lambda}}. \qquad (19)$$

Suppose the ruggedness of a painting is still within 4 millimeters and λ is 0.66 micrometer; then:

$$F=\frac{1}{k+1}\sqrt{\frac{4}{2.44\times 0.66\times 10^{-3}}}=\frac{50}{k+1}. \qquad (20)$$

Suppose the resolution is 600 PPI; then the typical values for the cameras are as follows. For full frame, d is 6048 for the Nikon D3X and Sony A900/A850. From Eq. 7, k is:

$$k=\frac{127\times 6048\times 1.0}{5\times 36\times 600}=7.1. \qquad (21)$$

For an APS-C camera, d is 5184 for the Canon EOS 7D. From Eq. 7, k is:

$$k=\frac{127\times 5184\times 1.6}{5\times 36\times 600}=9.8. \qquad (22)$$

For a 4/3 camera, d is 4032 and the length of the image sensor (L/s) is 17.3 mm for the Olympus E-30. From Eq. 7, k is:

$$k=\frac{127\times 4032}{5\times 17.3\times 600}=9.9. \qquad (23)$$

For a compact camera, d is 4416 for the Canon G10. From Eq. 7, k is (the exact size of the sensor is unavailable, but the sensor is 1/1.72" and the size of a 1/1.5" sensor is 8.8 × 6.6 mm):

$$k=\frac{127\times 4416}{5\times 8.8\times\frac{1.5}{1.72}\times 600}=24. \qquad (24)$$

From Table 1, even at a resolution of 600 PPI, the clearest F number of a DC is 2.1, an aperture that few digital compact cameras have. We can safely draw the conclusion that small-sensor digital compact cameras are not suitable for painting digitization; later, d will become larger and the optimal aperture even wider, so the digital compact camera is eliminated for good. For DSLRs, the larger the sensor (the smaller the focal length multiplier), the narrower the optimal aperture. As a result, a large sensor plays a crucial role in improving digitization quality.
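The k values above and Table 1 below can be approximately reproduced with the following sketch (ours; small differences against the table come from rounding):

```python
# L_sensor is the physical sensor length in mm (i.e., L/s in the text);
# d is the pixel count along that length; du in mm, lambda in mm.
def clearest_F(d, L_sensor, P, du=4.0, lam=0.66e-3):
    k = 127.0 * d / (5.0 * L_sensor * P)             # Eq. 7
    return (du / (2.44 * lam)) ** 0.5 / (k + 1.0)    # Eq. 19, ~ 50 / (k + 1)

for name, d, L in [("D3X", 6048, 36.0), ("7D", 5184, 36.0 / 1.6),
                   ("E-30", 4032, 17.3), ("G10", 4416, 8.8 * 1.5 / 1.72)]:
    print(name, [round(clearest_F(d, L, P), 1) for P in (300, 400, 500, 600)])
```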

Table 1. The clearest aperture (F number)

          135FF (D3X)   APS-C (7D)   4/3 (E-30)   DC (G10)
300 PPI   3.3           2.4          2.4          1.0
400 PPI   4.3           3.2          3.2          1.4
500 PPI   5.3           3.9          3.9          1.7
600 PPI   6.2           4.6          4.6          2.1


Deployment and Results

The large area scanner, measuring 4 m × 8 m, has been deployed in the book scanning center of the library of Zhejiang University, located on the Zijingang campus. With it, an ancient painting and calligraphy work has been successfully digitized (see Fig. 1). The painting measures about 0.3 meter × 0.7 meter. It was scanned at 300 PPI and 24-bit color depth; the total resolution is 3658 × 7927, resulting in a TIFF file of 83 MB.

Fig. 1 An acquired ancient painting and calligraphy work (full view and magnified view)

All the movement is driven by a computer. After the motors are calibrated, if we want to move the camera to a position, say (1300 mm, 1400 mm), the computer first converts the coordinates into numbers of pulses and sends them to the PLC (programmable logic controller); the PLC then outputs exactly these numbers of pulses and the camera is moved to the desired position. After stopping a few seconds for stabilization, a photo is taken, and the camera and lighting system are moved to the next position. In this way, the painting is photographed block by block automatically.

Conclusions

In this paper, a high precision large area scanner for ancient paintings and calligraphy is proposed. The paper also presents a feasible way to improve image sharpness in theory, including the determination of the theoretical optimal aperture, the maximum optical resolution and the advantage of using a large image sensor. The test experiment shows that the large area scanner achieves satisfactory results. It has been put into operation and has produced important digital replicas of great precision and high fidelity.


References
[1] G.M. Cortelazzo and L. Lucchese: A new method of image mosaicking and its application to cultural heritage representation. Eurographics 99
[2] Y. Miyake, Y. Yokoyama, N. Tsumura, H. Haneishi, K. Miyata and J. Hayashi: Development of multiband color imaging systems for recording of art paintings. Part of the IS&T/SPIE Conference on Color Imaging: Device-Independent Color, Color Hardcopy, and Graphic Arts IV, San Jose, California (1999)
[3] Xifan Shi, Dongming Lu and Changyu Diao: An ultra large area scanner for ancient painting and calligraphy, in: Proc. Pacific-Rim Conference on Multimedia (2008), p. 846-849
[4] Xifan Shi, Dongming Lu and Changyu Diao: Blurring and lens distortion free scanning for large area painting and calligraphy. Journal of Information and Computational Science, October (2009), p. 2121-2128
[5] Xifan Shi, Changyu Diao and Dongming Lu: Photo vignetting and camera orientation correction for high precision acquisition, in: Proc. Pacific-Rim Conference on Multimedia (2009), p. 155-166
[6] K. Martinez and A. Hamber: Towards a colorimetric digital image archive for the visual arts, in: Proceedings of the Society of Photo-Optical Instrumentation Engineers, Vol. 1073, January (1989)
[7] K. Martinez: High resolution digital imaging of paintings: The VASARI project. Microcomputers for Information Management 8(4) (1991), p. 277-283
[8] A. Hamber and J. Hemsley: VASARI, a European approach to exploring the use of very high quality imaging technology to painting conservation and art history education. Hypermedia & Interactivity in Museums, Proceedings of an International Conference (1991), p. 276-288
[9] K. Martinez: High quality digital imaging of art in Europe. Proceedings of SPIE Vol. 2663, Very High Resolution and Quality Imaging (1996), p. 69-75
[10] K. Martinez, J. Cupitt, D. Saunders and R. Pillay: Ten years of art imaging research. Proceedings of the IEEE, 90(1) (2002), p. 28-41
[11] D. Saunders, J. Cupitt, C. White and S. Holt: The MARC II camera and the scanning initiative at the National Gallery. The National Gallery Technical Bulletin, Vol. 23, No. 1, February (2002), p. 76-82
[12] R. Fontana, M.C. Gambino, M. Greco, L. Marras, E.M. Pampaloni, A. Pelagotti, L. Pezzati and P. Poggi: 2D imaging and 3D sensing data acquisition and mutual registration for painting conservation. Proc. SPIE, Vol. 5665 (2005), p. 51-58
[13] L.W. MacDonald: A robotic system for digital photography. Digital Photography II, edited by N. Sampat, J.M. DiCarlo and R.A. Martin, Proceedings of the SPIE, Vol. 6069 (2006), p. 160-171
[14] G. Voyatzis, G. Angelopoulos, A. Bors and I. Pitas: A system for capturing high resolution images. Conference on Technology and Automatics, Thessaloniki, Greece (1998), p. 238-242
[15] P. Carcagnì, A. Della Patria, R. Fontana, M. Greco, M. Mastroianni, M. Materazzi, E. Pampaloni and L. Pezzati: Multispectral imaging of paintings by optical scanning. Optics and Lasers in Engineering 45(3) (2007), p. 360-367
[16] F. Bartolini, V. Cappellini, A. Del Mastio and A. Piva: Applications of image processing technologies to fine arts. Optical Metrology for Arts and Multimedia, edited by R. Salimbeni, Proceedings of the SPIE, Vol. 5146 (2003), p. 12-23

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.247

Application of Community Discovery in SNS Scientific Paper Management Platform

Ma Ruixin 1,a, Deng Guishi 1,b, Wang Xiao 1,c

1 Dalian University of Technology, Dalian, China

a [email protected], b [email protected], c [email protected]

Key words: academic research; SNS scientific paper management platform; virtual community; relation structures.

Abstract. SNS provides us with a brand-new platform to communicate, interact and share. To better meet scholars' need for more authoritative and more satisfactory information about academic research, we construct an SNS scientific paper management platform. In this platform, scholars are divided into different virtual communities according to their research fields and their collaborative relationships with others. Ideas from collaborative filtering (CF) are applied in the community-division procedure, which helps us find accurate relation structures. At the end of this paper, we compare the running results of a normal platform and of the SNS platform to illustrate its usefulness.

Introduction

SNS is short for social network service or social network software; it stresses the importance of "user-centered design". In recent years SNS has received attention not only from e-commerce but also from researchers in psychology, ecology, sociology and many other fields [1][2]. E-commerce uses SNS to enable users to be both the participants and the designers of shopping malls on the internet; politicians use SNS to publicize their policies and ask voters for support; sociologists use SNS to better research and analyze the general mood of society and human psychology, to name but a few. A scientific paper management system provides academic researchers with unified access to the latest scientific research, to the most advanced developments of high technology and to an advanced store of knowledge. However, the existing paper management systems are not able to present users with the papers they want, not to mention customized recommendation. To meet the requests of scholars from various spheres, we constructed this SNS paper management platform.

The rest of this paper is organized as follows. Section 2 introduces the functions of the SNS platform. Section 3 illustrates the design and construction of the virtual communities and then interprets the implementation of personalized recommendation. Section 4 compares the running results of the SNS SPMS and a normal SPMS. Finally, we conclude our achievements.

SNS Scientific Paper Management Platform

Social network software takes the user as the center of service; it uses the users' interests, historical behavior and preferences to provide every user with distinctive information. Recent research shows that personalized recommendation increases websites' cohesion and gives users more satisfaction than they expect.


The functions of the SNS scientific paper management platform can be summarized as follows. First, it provides a seamless, safe and effective space for academic communication and study. Second, it constructs a hierarchical system to judge the authority of users. Third, it accumulates knowledge in every aspect: even if former users no longer use the platform, their work can still serve the remaining users. Finally, it gives scholars a place to discuss, communicate and cooperate. The platform fully realizes the goal of knowledge transmission and creation; to some extent, it helps scientific research transform into production capability. From an overall perspective, the platform constructs a higher virtual knowledge space based on users' true personal information. It combines information about people, papers and co-workers, and divides users into different groups, which realizes thorough information communication and teamwork. The platform's framework is shown in figure 1.

Figure 1. Framework of SNS scientific paper management platform

Realization of Customized Service
A. Different kinds of relations in the SNS SP management platform
There are two kinds of relations in this system, explicit and implied. Explicit relations come from information dug directly from the internet, such as teacher-student relationships, workplace relationships and so on; this information is narrow and limited, and from the explicit relations alone we cannot build correct relation models for users. Implied relations come from our analysis of the papers in the system; a co-author relation suggests that the authors may work on the same research frontier or in the same institute.


In this paper, we use a relation tree to illustrate the explicit relations in the SNS platform.

Figure 2. The relation tree constructed according to the explicit relations we obtained. In this figure, node 8 has edges from both node 4 and node c, which means that node 8 received degrees from different tutors.

B. Discover communities in the SNS SP management platform
We focus on detecting implied relations and constructing virtual communities. In earlier work we proposed a way to find the elite [3] among large numbers of users. However, simply accumulating scores for users in the system is not enough: a user may have a high score but few papers, or a low score but many papers. So in this paper we propose a new method to find both the authorities within one research frontier and the contacts between different fields of research.

There are three kinds of people in an SNS platform: authorities, liaisons and normal users [4]. One obstacle to realizing personalized recommendation in an SNS platform is how to identify these kinds of people exactly. Fortunately, they have distinctive attributes: authorities are the best experts in each research field, while liaisons are the ones who connect different communities into the entire network. In the SNS scientific paper management system, we use the average score to represent a user's authority and a relation matrix to show the connections between users.

Here we introduce two matrices, Matrix_score and Matrix_relation. Both are m × n matrices, where m is the number of authors and n is the number of papers. S_{u,j} signifies the score that paper j gives to author u, and R_{u,j} signifies the relation between paper j and author u.

According to an author's position in each paper he or she has published, we accumulate "scores": the first author earns 1 point, the second 0.8, the third 0.3, the fourth 0.2 and the fifth 0.1. This grading standard reflects the fact that, compared with the first and second authors, the other authors contribute less to the paper. This gives Matrix_score.

\[
Matrix_{score} =
\begin{bmatrix}
0.8 & \cdots & 0.3 & 0 & 0 & 0.1 & 0 \\
0 & \cdots & 0.2 & 0 & 0.1 & 0 & 0.3 \\
0.2 & 1 & \cdots & 0 & 0 & 0.1 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots \\
0 & 0.1 & 0 & 0 & 1 & 0 & 0.8 \\
1 & 1 & \cdots & 0.2 & 0.8 & 0.3 & \cdots \\
0.8 & 1 & 0 & 0 & 0 & 0.2 & \cdots
\end{bmatrix}
\]

We use Sum(row_vector) to calculate each author's total score in the entire system; the higher the score, the higher the user's authority. However, Matrix_score alone has a problem: a user with no authoritative knowledge but many relations with other authors, that is, many papers that each give him a small score such as 0.1 or 0.2 per article, will also accumulate a high total score. If we only use Matrix_score, it is easy to mix up authorities and liaisons. To avoid this situation, we propose another matrix, Matrix_relation, which is in accordance with Matrix_score: if the entry S_{u,j} (at the u-th row and j-th column) of Matrix_score is 0, then R_{u,j} in Matrix_relation is 0; otherwise R_{u,j} is 1. The Matrix_relation corresponding to the Matrix_score above is shown below.

\[
Matrix_{relation} =
\begin{bmatrix}
1 & 0 & 1 & \cdots & 1 & 0 & 0 & 1 \\
1 & 1 & 0 & \cdots & 1 & 0 & 1 & 0 \\
1 & 1 & 1 & \cdots & 0 & 0 & 1 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots \\
0 & 1 & 1 & \cdots & 1 & 1 & 0 & 0 \\
0 & 0 & 1 & \cdots & 1 & 1 & 0 & 1
\end{bmatrix}
\]
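As a concrete illustration of this binarization (a sketch of ours, not part of the original paper), Matrix_relation can be derived from Matrix_score in a single step; the sample scores below are arbitrary.

```python
# A minimal sketch, assuming NumPy; the sample scores are arbitrary values.
import numpy as np

# Rows are authors (m), columns are papers (n); entries are position scores.
matrix_score = np.array([
    [0.8, 0.0, 0.3, 0.0, 1.0],
    [0.0, 0.2, 0.0, 0.1, 0.0],
    [1.0, 0.0, 0.1, 0.0, 0.8],
])

# Matrix_relation has a 1 wherever the score entry is non-zero, 0 elsewhere.
matrix_relation = (matrix_score > 0).astype(int)
print(matrix_relation)
```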

Applied to Matrix_relation, Sum(row_vector) represents the number of papers a user appears in. In this paper, we give the system a parameter δ, the least number of papers an authoritative person should have published, and we set a minimum threshold ζ to distinguish authorities from liaisons. The steps to divide users into different virtual communities are described below.

1) Calculate the number of papers each user has published, as formula (1) shows:

N_i = \sum_{j=1}^{n} R_{i,j}    (1)

where N_i is the number of papers that user i has published and R_{i,j} is the relation between user i and paper j: if i appears in j, R_{i,j} = 1; otherwise R_{i,j} = 0. We pick out the authors who have published more than δ papers.

2) Calculate the score of every user in the SNS platform, as formula (2) shows:

S_i = \sum_{j=1}^{n} s_{i,j}    (2)

where S_i is the score of user i and s_{i,j} is the score that paper j gives to user i.

3) Calculate the average score of every candidate authority:

A_i = S_i / N_i    (3)
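Putting formulas (1)-(3) and the threshold rule described next together, the classification can be sketched as follows; this is an illustration of ours, not the authors' implementation, with δ and ζ left as free parameters.

```python
# A minimal sketch of steps 1)-3), assuming NumPy; delta and zeta are the
# paper-count and authority thresholds introduced above.
import numpy as np

def classify_users(matrix_score, delta, zeta):
    """Split users into authorities and liaisons from their score matrix."""
    matrix_relation = (matrix_score > 0).astype(int)
    N = matrix_relation.sum(axis=1)   # formula (1): number of papers per user
    S = matrix_score.sum(axis=1)      # formula (2): total score per user
    authorities, liaisons = [], []
    for i in range(matrix_score.shape[0]):
        if N[i] <= delta:             # too few papers to be a candidate
            continue
        if S[i] / N[i] > zeta:        # formula (3): average score A_i
            authorities.append(i)
        else:
            liaisons.append(i)
    return authorities, liaisons
```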

If A_i > ζ, user i is an authority in his or her field; otherwise, i is a liaison between different communities. As is well known, the social structure of a large complex network is very clear, and there are distinctive differences between authorities and liaisons, so it is easy to set an appropriate threshold to differentiate them.

Community seeds [5] are the central nodes of communities. In this platform, the authorities are the users with the potential to become community seeds. The steps to construct the virtual communities are as follows.
Step one: put the authority users into a list named L_auth in decreasing order of authority value.
Step two: the set of community seeds S and the set of existing communities {SC} are initially set to empty.
Step three: users are checked in turn from the beginning to the end of L_auth. If a user i has no connection with the existing communities {SC}, it becomes a new seed and is added to S; if not, calculate the similarity between i and SC, and if similarity(i, SC) > δ, i becomes a member of SC.
Step four: the non-authorities are put into another list named L_score, arrayed in decreasing order of their total scores, and we check which community each of them belongs to: if similarity(i, SC) > δ, i becomes a member of SC.
Step five: label the liaisons between the different communities.

Design of Personalized Service
For each community, we construct a unique user-interest model [6]. A user-interest model represents the community's features and research fields; no two are the same. In this SNS scientific paper management platform, we also design five modules for the user's space: system recommended papers, system recommended users, dynamic updates, congress news and masters' recommendations.

Comparison of Results
We e-mailed a questionnaire to 800 users of our SNS scientific paper management system and got 531 back. Here we present the users' evaluation of the system. We set six levels, with a highest score of 100 and a lowest score of 0, for each module. We calculated the average score of each module and got the results shown in figure 3.

Figure 3. Users' ratings for the different modules of the system; the abbreviations represent the modules mentioned in part 3.

First, note that a normal SPMS (scientific paper management system) has no system-recommended-users or masters'-recommendation parts, which explains why its scores for SRU and MR are 0. Besides, there is one very special point in this figure: users' rating for congress news in the SNS SPMS nearly reaches 100, because the SNS SPMS removes congress news from the dynamic updates and puts it in an individual part, which lets users get congress news more conveniently and in a more timely manner. Generally speaking, the SNS SPMS provides users with more satisfaction than the normal SPMS.

Conclusion
We introduce a new method to find the authorities and liaisons in an SNS scientific paper management system, using both Matrix_score and Matrix_relation to find its virtual communities. In this system, community seeds guide the other users to the best place in a community, which speeds up the convergence of our algorithm. We will keep optimizing our SNS SPMS to make it better and better.


References
[1] A. Kobsa, J. Koenemann and W. Pohl: Personalized hypermedia presentation techniques for improving online customer relationships, The Knowledge Engineering Review, pp. 111-155, 2001
[2] Yan Xing, Chang Yaping: A review on the research of social network service, Journal of Intelligence, 2010, 29(11): 44-47
[3] X. Li: Adaptively choosing neighbourhood bests using species in a particle swarm optimizer for multimodal function optimization, Proceedings of the Genetic and Evolutionary Computation Conference, pp. 105-116, 2004
[4] Wang Xian, Xie Chi, Rong Xue, Fan Wen: The research of operation status and future tendency of SNS website, People's Net (2010)
[5] Ruixin Ma, Xiao Wang: Research of Community Structure Discovery Algorithm based on Multimodal Function, ISIA, Guangzhou, pp. 67-74, November 2010
[6] H.-C. Lee, S.-J. Lee, Y.-J. Chung: A Study on the Improved Collaborative Filtering Algorithm for Recommender System, in: Fifth International Conference on Software Engineering Research, Management and Applications, 2007, pp. 297-304

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.253

Comparative Analysis of the Major Ontology Library

Bai Rujiang a, Wang Xiaoyue b and Yu Xiaofan c

Institute of Scientific & Technical Information, Shandong University of Technology, Zibo 255049, China

a [email protected], b [email protected], c [email protected]

Key words: Ontology Library, WordNet, DBpedia, Enterprise Ontology.

Abstract. This paper introduces the major general domestic and foreign ontology libraries, WordNet, DBpedia, Cyc and HowNet, as well as the more successful professional domain ontology libraries, Biomedical Ontology and Enterprise Ontology. It then compares and analyzes them from five aspects: description language, storage mode, query language, platform construction and application. We hope to assist domestic and foreign scholars in the study of ontology libraries and their application.

Introduction
The concept of ontology stems originally from the field of philosophy [1]; as a semantic foundation, it is widely applied in information retrieval, artificial intelligence, semantic networks, software engineering, natural language processing, e-business, knowledge management, etc. Driven by the needs of business and academia, a variety of general common ontology library systems have been developed, such as WordNet, DBpedia, Cyc, HowNet, Frame Ontology and DublinCore, and there are plenty of domain ontology library systems. Domain ontology library systems face two problems. On the one hand, different fields actively develop their own ontologies, such as biological and medical ontologies, financial ontologies, legal knowledge ontologies, e-government ontologies, news ontologies, tourism ontologies, and biological and gene ontologies. On the other hand, within the same field there are also two kinds of cases: first, because of regional differences, the same knowledge category appears in different versions of the ontology and ontology model; second, because a field's concept structure is huge and its logical structure complex, multiple interconnected ontologies are produced, and these ontologies jointly express some domain knowledge category.

The reason ontology is so widely applied is that it provides knowledge sharing and common cognition for specific domains, in order to realize communication between humans and machine application systems. Using ontology technology to construct a domain knowledge library not only clearly describes the concepts and their relationships in the field, but also realizes domain knowledge sharing and reuse, and benefits the management and maintenance of the domain knowledge library. There are many ontology research projects and rich research results abroad, and many open-source ontology knowledge library systems have been established and put into use; in contrast, domestic research on this topic is very limited, with a large gap compared to the overseas research. A literature survey shows that at present there are very few papers comparing and analyzing ontology libraries at home and abroad, so this paper selects four major, mature ontology library systems, WordNet, DBpedia, Cyc and HowNet, and two professional domain ontology libraries, and compares and analyzes them from five aspects: description language, storage mode, query language, platform construction and application. We hope to assist the study of natural language processing and the choice and application of ontology libraries.


The Major Ontology Libraries
WordNet. WordNet (http://wordnet.princeton.edu/) is a large on-line lexical database of English, developed under the direction of George A. Miller since 1985. WordNet is a proposal for a more effective combination of traditional lexicographic information and modern high-speed computation [2]. At present, related research on WordNet already covers German, French and many other languages, and it is considered one of the most important resources researchers can obtain for computational semantics, text classification and related domains [3].

WordNet organizes information by synonym sets (synsets), so query results conform to human patterns of thinking. A synset is a set of synonyms that are interchangeable in a specific context. WordNet's biggest difference from an ordinary dictionary is that it organizes glossary information according to word meaning, not word morphology. WordNet cares about the relations among words: it holds that a word's significance lies in its differences from and relations to other words, and the organization of the vocabulary demonstrates the differences and connections among word concepts. WordNet holds that a word's part of speech reflects the concept category the vocabulary contains, so it divides the lexicon into five categories: nouns, verbs, adjectives, adverbs and function words. In fact, WordNet contains only nouns, verbs, adjectives and adverbs, neglecting function words, which are few in English and serve as syntactic components. WordNet uses synsets as language symbols, selectively analyzes the semantic relations of nouns, verbs, adjectives and adverbs, and constructs relation systems, such as the level system, the N-space relation system and the inclusion relation, expecting to capture language significance through these relational attributes. All versions of WordNet are freely and publicly available for download from Princeton University (http://wordnet.princeton.edu/wordnet/). Table 1 shows the WordNet 3.0 database lexicon statistics.

Table 1. WordNet 3.0 database statistics (data source: http://wordnet.princeton.edu/wordnet/man/wnstats.7WN.html)

| Statistic | Noun | Verb | Adjective | Adverb | Total |
| Unique strings | 117798 | 11529 | 21479 | 4481 | 155287 |
| Synsets | 82115 | 13767 | 18156 | 3621 | 117659 |
| Word-sense pairs | 146312 | 25047 | 30002 | 5580 | 206941 |
| Monosemous words and senses | 101863 | 6277 | 16503 | 3748 | 128391 |
| Polysemous words | 15935 | 5252 | 4976 | 733 | 26896 |
| Polysemous senses | 44449 | 18770 | 14399 | 1832 | 79450 |
| Average polysemy including monosemous words | 1.24 | 2.17 | 1.40 | 1.25 | |
| Average polysemy excluding monosemous words | 2.79 | 3.57 | 2.71 | 2.50 | |


To clearly understand the use of WordNet, turn to the WordNet browser interface; because WordNet 3.0 has high installation requirements, we use WordNet 2.1 here. Figure 1 shows the concept-related information for the word "mouse" as entered in the browser by the author. From the figure we can see that the word mouse has both noun and verb senses. Clicking the Noun option shows its "Synonyms", "Coordinate Terms", "Hypernyms", "Hyponyms" (brief and full), "Holonyms", "Meronyms", "Derivationally related forms" and "Familiarity". Clicking the Verb option shows its "Synonyms", "Coordinate Terms", "Hypernyms", "Derivationally related forms", "Sentence frames" and "Familiarity".

Figure 1. The WordNet 2.1 browser interface
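The same lookup can also be reproduced programmatically; the following sketch uses the NLTK interface to the WordNet database (an illustration of ours; the paper itself only uses the graphical browser).

```python
# A minimal sketch, assuming the nltk package with the wordnet corpus
# downloaded beforehand via nltk.download('wordnet').
from nltk.corpus import wordnet as wn

for synset in wn.synsets("mouse"):
    print(synset.name(), synset.pos(), "-", synset.definition())
    print("  synonyms:", [lemma.name() for lemma in synset.lemmas()])
    print("  hypernyms:", [h.name() for h in synset.hypernyms()])
```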

DBpedia. Knowledge bases are playing an increasingly important role in enhancing the intelligence of Web and enterprise search and in supporting information integration. At the same time, Wikipedia has grown into one of the central knowledge sources of mankind, maintained by thousands of contributors. DBpedia (http://dbpedia.org/About) is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia, and to link other data sets on the Web to Wikipedia data [5]. The DBpedia knowledge base currently describes more than 3.5 million things, of which 1.67 million are classified in a consistent ontology, including 364,000 persons, 462,000 places, 99,000 music albums, 54,000 films, 17,000 video games, 148,000 organisations, 169,000 species and 5,200 diseases. The DBpedia data set features labels and abstracts for these 3.5 million things in up to 97 different languages; 1,850,000 links to images and 5,900,000 links to external web pages; 6,500,000 external links into other RDF datasets; 633,000 Wikipedia categories; and 2,900,000 YAGO categories. The DBpedia knowledge base altogether consists of over 672 million pieces of information (RDF triples), of which 286 million were extracted from the English edition of Wikipedia and 386 million from other language editions (data source: http://wiki.dbpedia.org/Datasets). Figure 2 demonstrates DBpedia's formidable linked data.

Figure 2. DBpedia linked data resources (picture source: http://richard.cyganiak.de/2007/10/lod/, last updated 2010-09-22)
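As an illustration of such structured queries (ours, not from the paper), the public endpoint can be queried from Python with the third-party SPARQLWrapper package, for example for persons born in Singapore:

```python
# A minimal sketch, assuming the SPARQLWrapper package and the public
# DBpedia endpoint; the query itself is an arbitrary example.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?person WHERE {
        ?person a dbo:Person ;
                dbo:birthPlace dbr:Singapore .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["person"]["value"])
```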


The DBpedia knowledge base has several advantages over existing knowledge bases: it covers many domains; it represents real community agreement; it automatically evolves as Wikipedia changes; and it is truly multilingual. The DBpedia project has demonstrated a rich corpus with more than one type of knowledge, the result of large-scale cooperation among people devoted to establishing a structured knowledge library. The DBpedia knowledge base covers a series of different domains and the entity relations among them; it represents the consensus of thousands of Wikipedia contributors on the concepts, and evolves as the concepts change.

The Comparative Analysis of the Ontology Libraries
The comparative analysis of general ontology libraries. Description language. The WordNet database is in an ASCII format that is human- and machine-readable and easily accessible to those who wish to use it with their own applications. The Grinder is a multi-pass compiler coded in C. The Grinder utility compiles the lexicographers' files: it verifies the syntax of the files, resolves the relational pointers, and then generates the WordNet database used with the retrieval software and other research tools. The Grinder is also used as a verification tool to ensure the syntactic integrity of the lexicographers' files when they are returned to the archive system with the restore command. The description language of DBpedia is RDF; at present there are two different methods of drawing out the semantic relations: (1) mapping the relations in the relational database to RDF; (2) extracting information directly from article versions and article infoboxes. CycL, the Cyc representation language, is a large and extraordinarily flexible knowledge representation language. It is essentially an augmentation of first-order predicate calculus (FOPC), with extensions to handle equality, default reasoning, skolemization, and some second-order features. CycL uses a form of circumscription, includes the unique names assumption, and can make use of the closed world assumption where appropriate.


The description language of HowNet is KDML (the Knowledge Dictionary Mark-up Language), a new set of descriptors for knowledge systems. More than 80,000 descriptions of Chinese and English language concepts have proved that it offers: (1) strong descriptive capacity; (2) convenient computation of meaning; (3) intuitiveness and good readability. To date, KDML comprises the following components: (1) approximately 1500 features and event roles; (2) pointers and punctuation; (3) word order [17].

Table 3. The description languages of the ontology libraries

| Name | WordNet | DBpedia | Cyc | HowNet |
| Description language | C (the Grinder) | RDF | CycL | KDML |

Storage mode. The lexicographers' source files are maintained in an archive system based on the Unix Revision Control System (RCS) for managing multiple revisions of text files. The archive system has been established for several reasons: to allow the reconstruction of any version of the WordNet database, to keep a history of all changes to the lexicographers' files, to prevent people from making conflicting changes to the same file, and to ensure that it is always possible to produce an up-to-date version of the WordNet database. The programs in the archive system are Unix shell scripts that wrap RCS commands in a manner that maintains the desired control over the lexicographers' source files and provides a user-friendly interface for the lexicographers.

The storage format of DBpedia is RDF triples. At present the main DBpedia site uses Virtuoso and MySQL as the storage backends.

The core of the Cyc system is the Cyc knowledge-based reasoning program (usually called just "Cyc"), contained in two files: the world file and the Cyc executable. The world file contains a copy of the knowledge in the KB translated into a compact, efficiently loaded binary format called CFASL. The Cyc executable file contains the compiled object (machine-level) code for the inference engine and the Cyc agenda. The inference engine allows a running Cyc image to derive new conclusions from the facts and rules stored in the KB; the agenda drives the processing of queued KB update operations (resulting, for example, from edits submitted by a user). The executable file also contains the compiled code for the functional interface (FI) and the network socket connections that support the Java API, and for the HTML generation procedures that implement Cyc's CGI-based web browser user interface.

The HowNet knowledge dictionary is the base file of the system. In this file, the concept and description of each word form a record. Each language part of every record contains four main items, each consisting of two parts separated by "=": the left of each "=" is the data domain and the right is the data value. Their arrangement is: W_X = word; G_X = characteristic or property of the word; E_X = word example; DEF = concept definition.

Table 4. The storage modes of the ontology libraries

| Name | WordNet | DBpedia | Cyc | HowNet |
| Storage mode | RCS archive files | RDF triples | CFASL and HTML | Concept and description records |

Conclusion
Today's ontology research must face the difficult problems of natural language understanding and of multiple languages. WordNet and HowNet can be regarded as early prototype systems of ontology development; DBpedia is a large, species-rich corpus; Cyc not only has complete development tools and markup languages but also a large knowledge base of its own, and is a basis for domain ontology concept development, so it is the most complete reasoning ontology library system.

Because ontology research originated in the field of artificial intelligence, building a professional domain ontology needs not only artificial intelligence engineers but also the participation and cooperation of experts in specialized fields in structuring, organizing and improving the knowledge. Owing to different professional backgrounds and research purposes, unified collaboration between the two has some difficulties, and experts in the same field may not hold consistent views. So the construction of a professional ontology requires coherence of the professional knowledge and expert consensus on the ontology system's functions.

Both general ontology library systems and professional domain ontology library systems are widely valued and used online knowledge repositories in natural language processing. They have been used in various fields of natural language processing, such as syntactic disambiguation, semantic disambiguation, information retrieval and machine translation. The ontology libraries described above each have their own irreplaceable advantages and their own stable user groups. Ontology library researchers and developers are trying to perfect them and hope to provide users with friendlier interfaces and fuller functions. Of course, the ontology libraries are still less than satisfactory. To truly resolve these problems, a standardized tool has yet to be developed; it needs certain characteristics, such as a degree of openness; a unified concept system and common knowledge base; a uniform markup-language format for input and output that is a Web-standard markup language; support for multiple languages and the Unicode character set; wide application in the fields of AI and knowledge representation; and recognition by both domain experts and IT specialists.

Acknowledgement
This work was supported by the Development of Young Teachers Support Program of Shandong University of Technology.

References
[1] Xiulan Zhang, Ling Jiang: Review of Research about Ontology Conception [J], Scientific and Technical Information, 2007, 26(4): 527-531
[2] George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross and Katherine Miller: Introduction to WordNet: An On-line Lexical Database [EB/OL], (1993-08)
[3] [2010-9-1]. http://wordnet.princeton.edu/
[4] WordNet: A lexical database for English [EB/OL], [2010-9-1]
[5] http://wordnet.princeton.edu/wordnet/
[6] Xiaolin Zhang: Application and Research of the Metadata [M], first edition, Beijing: Beijing Library Publishing House, 2002, pp. 204-205
[7] Christian Bizer, Jens Lehmann, Georgi Kobilarov, Sören Auer, Christian Becker, Richard Cyganiak, Sebastian Hellmann: DBpedia - A crystallization point for the Web of Data [C], in: Web Semantics: Science, Services and Agents on the World Wide Web 7, 2009: 154-165
[8] Jing Li: Study on the theory and practice of ontology and ontology-based agricultural document retrieval system [D], The Chinese Academy of Sciences, 2005
[9] Cycorp, Inc.: About Cycorp [EB/OL], [2010-9-1]
[10] http://www.cyc.com/cyc/company/about

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.259

Interactive Technology Application Program of Experience Learning for Children with Developmental Disabilities

LIN Chien-Yu 1,2,a, LIN Ho-Hsiu 2,3, JEN Yen-Huai 2,4, WANG Li-Chih 2 and CHANG Ling-Wei 2

1 Graduate Institute of Assistive Technology, National University of Tainan, Tainan, Taiwan
2 Department of Special Education, National University of Tainan, Tainan, Taiwan
3 Tainan Municipal Shengli Elementary School, Tainan, Taiwan
4 Department of Early Childhood, TransWorld University, Yunlin, Taiwan

a [email protected]

Key words: infrared; interactive; assistive technology; teaching materials; children; developmental disabilities

Abstract. This research focuses on designing low-cost teaching aids to help children with developmental disabilities. Using Flash software and PowerPoint as the interface design, with the assistance of an interactive device, teaching materials for children can be developed. When detected by a Wii remote and an infrared (IR) emitter device, corresponding information appears on a screen, increasing the interactivity of assistive technology aimed at children through an enhanced learning process. There are two cases in this study, both actually applied to children with developmental disabilities, and the overall equipment cost is approximately US$50. In case 1, a child wears a hat with reflective stickers, facing a Wii remote with an infrared emitter board. When we performed the initial probing of the interactive physical activity, the child was asked to wear the hat with reflective stickers while the teachers asked him to perform continual actions of standing and squatting. Sensors were set up next to the child, so that when the child performed the movements they were picked up by the sensor; the signals were sent to the computer, and the corresponding synchronized screen was displayed immediately. To put it simply, we used cheap reflective stickers as a mouse, so we can design different teaching module courses. Case 2 is a kind of custom-designed teaching material; the participant is also a child with developmental disabilities. In this case, the researchers took pictures of the subject and edited the images in Flash. The participating children wore reflective stickers on their hands; when they moved their hands, their images on the screen moved accordingly. In this case we dressed up the children differently, and when the children saw themselves displayed on the screen, with the cursor movements controlled by their limbs, they felt very happy. The devices in this research rely on user-centered design, reducing the learning load of the teaching materials and enhancing the children's learning motivation and interest.

Introduction
Some disabilities prevent people from using standard computer control devices, and custom-made alternative devices can be more expensive; one solution is to explore the application of devices used in contemporary gaming technology, such as the Nintendo Wii [1]. IR cameras are generally used in tracking systems, which often leads to unaffordable costs; in particular, Wii remotes can be used as IR cameras [2]. This study combines a Wii remote and an infrared emitter to create a low-cost interactive whiteboard [3], so that teachers can design teaching materials that enhance learning interest for children with developmental disabilities. Through the demonstration of multimedia design, teachers have the ability to produce custom-made learning materials that support children in absorbing knowledge [4], and the teaching interaction procedure is a systematic form


of teaching that teachers use to describe their behavior [5]. The interactive technology consists of a Wii remote, an infrared emitter, a laptop and a projector. Combining low-cost gear creates a cheaper device: the laptop can be controlled by the infrared emitter, which functions much like a mouse [6]. The purpose of this research is to give children with developmental disabilities a chance to learn with more interest. Assistive technology is a helpful method for learning, with a prominent influence on helping teachers explain difficult concepts, giving access to a huge range of examples and resources, and inducing pupils to engage in learning easily [7]. Computer-mediated communication facilitates the understanding of communication patterns, forms, functions and subtexts, which can in turn engender an understanding of how to derive meanings within such contexts [8]. The application of an infrared light emitter is similar to a mouse; the learning interfaces adopt an interactive design, while the teaching materials are built with Flash software and PowerPoint, which can produce interesting displays that raise children's interest in learning. In addition, children with developmental disabilities not only get to taste a new teaching method but are also impressed by it.

Method
This research appreciates the many experts who have created programs for low-cost interactive whiteboards using the Wii remote and shared them freely on their websites [9]. The Wii remote is a handheld device resembling a television remote, with a high-resolution, high-speed IR camera and wireless Bluetooth connectivity. The Wii remote camera is sensitive only to bright sources of infrared light, so tracked objects must emit a significant amount of near-infrared light to be detected [10]. With an infrared emitter, the Wii remote can create an effect just like a PC mouse: a low-cost, custom-made device [11]. Therefore, the study is able to integrate the feedback of Flash software and PowerPoint to develop teaching materials for children with developmental disabilities. An interactive teaching material is a display interface that connects a laptop, a projector and an infrared light emitter [12]. A projector-based display that children with developmental disabilities control through dedicated devices exists commercially, but it is expensive and not portable, so it is not convenient for teachers to use in different places. The principle this study applies is to connect a computer and a Wii remote via Bluetooth: by means of the infrared light emitter, the Wii remote can track the accurate location of a reflective sticker, which turns it into an interactive teaching material. A projector shows images on the wall, and the reflective stickers reflect infrared light, transferring position information to the Wii remote's camera sensor. Based on the children's requirements, the device a child holds weighs only a few grams. This display mode is an intuitive learning tool: the operator does not need a PC mouse, so children's interest and motivation rise and their frustration during the learning process decreases. Since the Wii remote can track sources of infrared light, the research applies this technology to make a low-cost device; reflective stickers on fingers or the body are good examples that benefit from minimizing the tracking instrumentation.
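As a rough idea of what such a tracking loop looks like (a sketch of ours under stated assumptions, not the authors' code), on a Linux host the cwiid library can read the IR camera of a Bluetooth-paired Wii remote:

```python
# A minimal sketch, assuming a Linux host with the cwiid library and a
# Bluetooth-paired Wii remote; the normalization constants are the nominal
# resolution of the remote's IR camera.
import time
import cwiid

print("Press 1+2 on the Wii remote to pair...")
wiimote = cwiid.Wiimote()        # blocks until the remote connects
wiimote.rpt_mode = cwiid.RPT_IR  # ask the remote to report IR camera data

CAM_W, CAM_H = 1024, 768

while True:
    for src in wiimote.state.get('ir_src', []):
        if src is not None:
            x, y = src['pos']
            # A real setup would map this point to screen coordinates,
            # e.g. with a four-point calibration, and move the cursor.
            print("sticker at %.2f, %.2f" % (x / CAM_W, y / CAM_H))
    time.sleep(0.05)
```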
According to the requirements and preferences of the children, the participating teachers devised and adjusted the design of the infrared light emitter.

Case Study
The learning materials are custom-made for the children themselves, so as to improve their learning interest and motivation. This study starts from assistive teaching, which imposes a lower load. Using the infrared light emitter and reflective stickers to link to corresponding information increases the attraction and intimacy of the teaching materials. Here are the two cases of this study and their explanations.

Case 1. The participant in case 1 is a boy with intellectual deficit and speech disability, classified as moderately multiply disabled. He wears a hat with reflective stickers, with a Wii remote and an infrared emitter board in front, so that the reflective sticker acts as a PC mouse. When we performed the initial probing of his interactive physical activity, the child was asked to wear the hat with reflective stickers while the teachers asked him to perform continual actions of standing and squatting; the researchers put


the reflective sticker on the hat, and when he stood up and squatted down repeatedly, he could see the reflections on the wall. Besides, the projector showed the next picture on the wall along with his movements. Fig. 1 shows the experiment process. This case focuses on using low-cost assistive technology consisting of a Wii remote and an infrared light emitter: children are offered a digital presentation of design and learning concepts that is easier to operate, and the interactive design enables children to grasp counting ability. Sensors were set up next to the child, so that when the child performed the movements they were picked up by the sensor; the signals were sent to the computer, and the corresponding synchronized screen was displayed immediately. To put it simply, we used the cheap reflective stickers as a mouse, so we can design different teaching module courses. The main purpose is to instruct children in the application of counting ability. This case is an activity-based technological teaching material.

Fig. 1. The participant used a sticker as a mouse (to protect the child, a mosaic effect has been applied to the faces in all pictures)

Fig. 2. The participant used a reflective hand link as a sensor (to protect the child, a mosaic effect has been applied to the faces in all pictures)


The case improves the children's desire for activity, because the real-time feedback attracts their willingness to do such physical training. Furthermore, because the images are designed in PowerPoint, the software is very easy, and teachers can change the pictures for different courses. Fig. 2 shows the application to the shape unit: for a child with developmental disabilities, the researcher put a reflective hand link on her wrist; when she stretched out her arm, the Wii remote received the message and changed the image on the wall in real time. In this teaching and learning activity, the teacher first introduces and demonstrates how to operate, and then the child operates it on her own.

Case 2. The participant in case 2 is a girl with intellectual deficit and speech disability, also classified as moderately multiply disabled. Case 2 is a kind of custom-made teaching material: the researcher asked the resource teacher to provide some pictures of the child and edited the child's images in Flash software, so that the child's image acts just like a mouse cursor. The child wore a glove with a reflective ring on her wrist; when she moved her hand, her image on the screen moved accordingly. In case 2 we dressed up the children differently; when the child saw her picture displayed on the screen, with the cursor movements controlled by her finger, just as if she were flying on the screen, she felt very happy. This is an interactive design for the child: because of the focus on custom making, the child looking at herself on the screen displayed a different mood and said, "Look, it's me." She wanted to share the experience with others. In case 2, the teacher provided some children's pictures and asked the researchers to design different images for the children, so we designed different figures, such as Superman and Batman, for different children.

Fig. 3. Custom-made design for children (to protect the children, a mosaic effect has been applied to the faces in all pictures)


Conclusion
Digital interactive teaching materials will be popular in the future, because the interface focuses on easy learning and easy use, and the device provides real-time feedback; however, the price of such devices is usually high. This study is therefore designed around a Wii remote, an infrared emitter, a laptop and a projector, aiming to create an interactive interface from low-cost devices. Children with developmental disabilities have limitations due to difficulties in developing sufficient physical, emotional or intellectual capacities. Developmental disabilities include physical disorders such as cerebral palsy and limited vision, as well as language and speech disorders, and so on. Especially for children with developmental disabilities, who exhibit different levels of understanding and emotional reaction, teaching materials need more feedback and multiple stimuli, which increase their motivation to learn. In this research, the interactive teaching materials are more attractive than a regular textbook; according to the feedback and the processes, the children liked to participate in the research activities. Using this low-cost combination, interactive teaching materials can truly be designed more extensively. Teachers can design the teaching materials according to the needs of students in different courses, and can put their effort into designing the contents, because simply applying Flash software does not increase their load; the real-time feedback the materials display is special, so the teachers view this research positively. This research used low-cost devices, and the teaching material is presented through different courses, which makes the contents more multi-developmental and adds an interactive interface to the process of learning. This research combines assistive technology, information design, communication design and teaching material design. It not only gives children with developmental disabilities a chance to experience the interactive interface in the resource class, but also offers teachers another method to modify and redesign their teaching materials. With the continuous progress of information media, interface functions become more complicated day by day, making them harder for children to use, especially children with developmental disabilities, who have moderate difficulties operating computers. The purpose of this research is to create teaching materials suited to children with developmental disabilities. Because intuitive operation is the feature of the Wii remote combined with the infrared emitter, operators need no complicated input methods, such as a mouse or keyboard. Therefore, it is easier for children with developmental disabilities to use the interactive interface, to have more real feelings, and moreover, to experience a new way of interacting.

Acknowledgement
This work was partially supported by the National Science Council, Taiwan, under Grants No. 98-2410-H-024-018 and 98-2515-S-024-001.


References
[1] P.J. Standen, C. Camm, S. Battersby, D.J. Brown, M. Harrison: An evaluation of the Wii Nunchuk as an alternative assistive device for people with intellectual and physical disabilities using switch controlled software, Computers & Education Vol. 56 (2011), p. 2-10
[2] S. De Amici, A. Sanna, F. Lamberti, B. Pralio: A Wii remote-based infrared-optical tracking system, Entertainment Computing Vol. 1 (2010), p. 119-124
[3] J.C. Lee: Hacking the Nintendo Wii Remote, PERVASIVE computing (2008), p. 39-45
[4] C.Y. Lin, P.H. Hung, J.Y. Lin, H.C. Lun: Augmented reality-based assistive technology for handicapped children, Key Engineering Materials Vol. 439-440 (2010), p. 1253-1258
[5] J.B. Leaf, W.H. Dotson, M.L. Oppeneheim, J.B. Sheldon, J.A. Sherman: The effectiveness of a group teaching interaction procedure for teaching social skills to young children with a pervasive developmental disorder, Research in Autism Spectrum Disorders Vol. 4 (2010), p. 186-198
[6] C.Y. Lin, C.C. Lin, T.H. Chen, M.L. Hung, Y.L. Liu: Application infrared emitter as interactive interface on teaching material design for children, Advanced Materials Research, 2011 (accepted)
[7] S.J. Waite, S. Wheeler, C. Bromfield: Our flexible friend: The implications of individual differences for information technology teaching, Computers & Education Vol. 48 (2007), p. 80-99
[8] M. Bower, J.G. Hedberg: A quantitative multimodal discourse analysis of teaching and learning in a web-conferencing environment - the efficacy of student-centred learning designs, Computers & Education Vol. 54 (2010), p. 462-478
[9] S. De Amici, A. Sanna, F. Lamberti, B. Pralio: A Wii remote-based infrared-optical tracking system, Entertainment Computing Vol. 1 (2010), p. 119-124
[10] P.J. Standen, C. Camm, S. Battersby, D.J. Brown, M. Harrison: An evaluation of the Wii Nunchuk as an alternative assistive device for people with intellectual and physical disabilities using switch controlled software, Computers & Education Vol. 56 (2011), p. 2-10
[11] C.Y. Lin, F.G. Wu, T.H. Chen, Y.J. Wu, K. Huang, C.P. Liu, S.Y. Chou: Using interface design with low-cost interactive whiteboard technology to enhance learning for children, HCII 2011 (accepted)
[12] H.J. Smith, S. Higgins, K. Wall, J. Miller: Interactive whiteboards: boon or bandwagon? A critical review of the literature, Journal of Computer Assisted Learning Vol. 21 (2005), p. 91-101

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.265

The Study on the Quality Management of Supply Chain Production in Operations

Lina Wang

Tangshan Teachers' College, Department of Economy Management, Hebei, China

E-mail: [email protected]

Keywords: asymmetric information; quality control; coordination; principal-agent

Abstract. Considering the different effort levels suppliers devote to product quality management, this paper establishes a principal-agent model under asymmetric information. It also studies coordinated quality control in supply chains and the optimal revenue of the supply chain. Finally, a numerical example analysis of the model is carried out.

Introduction
Under the market competition pattern of economic globalization, competition in the quality management of supply chain production has increasingly become the main arena of competition, and cost, quality, service and speed have become the key factors of supply chain competitive strategy. The quality management model has changed from a single-enterprise quality management model to a multi-enterprise collaborative quality management model, that is, the supply chain quality management model. Guaranteed product quality is the important basis for establishing and maintaining long-term, stable cooperation between supply chain enterprises. Based on a supply chain system that includes one buyer and one supplier under an inspection regime, this paper considers the information asymmetry between the two parties and studies their collaborative quality control problems when the supplier's quality effort and inspection level information are concealed.

The Collaborative Quality Control under Asymmetric Information
As the suppliers and buyers each seek to maximize their income and the inside information of the two sides is difficult to observe, both of them may conceal their true information in order to increase their earnings. When the supplier's quality effort level information is concealed, the supplier has the information advantage; the buyer is then considered the principal and the supplier the agent, and the supplier has a moral hazard problem. The expected return of the buyer is

u(c | q_i, θ_Sj, θ_Bk) = r·q_i - l·e_ijk - c + (c + g_S)p_Sij + (c + g_B)p_Bijk - I_Bk    (1)

The expected return of the supplier is

v(c | q_i, θ_Sj, θ_Bk) = c - (c + g_S)p_Sij - (c + g_B)p_Bijk - S_i - I_Sj    (2)

If the buyer's external loss cost is large enough, then no matter how the supplier chooses its quality effort level and inspection level, the buyer's expected return under high inspection is greater than under low inspection, so the buyer must select high inspection. Whether the supplier chooses high or low inspection, the buyer must prompt the supplier toward high-quality rather than low-quality work in order to achieve the goal of quality improvement. When the supplier implements low inspection, the buyer's problem is

max_{c, q_i}  u(c | q_i, θ_SL, θ_BH)
s.t.  v(c | q_i, θ_SL, θ_BH) ≥ 0    (3)
      v(c | q_i, θ_SL, θ_BH) ≥ v(c | q_k, θ_SL, θ_BH), i ≠ k    (4)


The optimal solution of the model is obtained as follows. When

S_L[1 - (1 - q_H)θ_BH] - S_H[1 - (1 - q_L)θ_BH] + g_B(q_H - q_L)θ_BH > 0,

the optimal solution of the principal-agent model is

c* = [S_H + g_B(1 - q_H)θ_BH] / [1 - (1 - q_H)θ_BH]
u* = r·q_H - l(1 - q_H)(1 - θ_BH) - S_H - I_BH
v* = 0

When S_L[1 - (1 - q_H)θ_BH] - S_H[1 - (1 - q_L)θ_BH] + g_B(q_H - q_L)θ_BH ≤ 0, the optimal solution of the principal-agent model is

c** = (S_H - S_L) / [(q_H - q_L)θ_BH] - g_B
v** = [(S_H - S_L) / ((q_H - q_L)θ_BH) - g_B][1 - (1 - q_H)θ_BH] - g_B(1 - q_H)θ_BH - S_H
u** = r·q_H - l(1 - q_H)(1 - θ_BH) - S_H - I_BH - v**
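To make the case distinction concrete, the closed-form solution above can be evaluated directly; the following sketch is ours, and since the external loss cost l and the fine g_B are only given symbolically at this point, they are left as free parameters.

```python
# A minimal sketch that evaluates the closed-form solution above; l and
# g_B are free parameters because the paper introduces them only symbolically.
def optimal_contract(r, l, q_H, q_L, S_H, S_L, g_B, theta_BH, I_BH):
    """Return (c, v, u) for the quality-effort concealment model."""
    cond = (S_L * (1 - (1 - q_H) * theta_BH)
            - S_H * (1 - (1 - q_L) * theta_BH)
            + g_B * (q_H - q_L) * theta_BH)
    if cond > 0:
        # Supplier earns only its reservation income (v* = 0).
        c = (S_H + g_B * (1 - q_H) * theta_BH) / (1 - (1 - q_H) * theta_BH)
        v = 0.0
    else:
        # Incentive constraint binds: the supplier keeps an agency rent.
        c = (S_H - S_L) / ((q_H - q_L) * theta_BH) - g_B
        v = c * (1 - (1 - q_H) * theta_BH) - g_B * (1 - q_H) * theta_BH - S_H
    u = r * q_H - l * (1 - q_H) * (1 - theta_BH) - S_H - I_BH - v
    return c, v, u

# Parameters from the numerical example section; l and g_B are assumed.
print(optimal_contract(r=100, l=170, q_H=0.90, q_L=0.65,
                       S_H=20, S_L=10, g_B=5, theta_BH=0.87, I_BH=1))
```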

At this time, the buyer needs to pay an agency cost to the supplier so that, with the quality effort level concealed, the supplier cannot increase its own income by choosing low quality effort. When the supplier implements high inspection, the buyer's problem is

max_{c, q_i}  u(c | q_i, θ_SH, θ_BH)
s.t.  v(c | q_i, θ_SH, θ_BH) ≥ 0    (5)
      v(c | q_i, θ_SH, θ_BH) ≥ v(c | q_k, θ_SH, θ_BH), i ≠ k    (6)

The optimal solution of the model can be obtained as follows. When

(S_H + I_SH)(p_SLH + p_BLHH - 1) + [p_SLH(1 - p_BHHH) - p_SHH(1 - p_BLHH)]g_S + [p_BLHH(1 - p_SHH) - p_BHHH(1 - p_SLH)]g_B + S_L(1 - p_SHH - p_BHHH) > 0,

the optimal solution of the model is

c* = [S_H + I_SH + g_S·p_SHH + g_B·p_BHHH] / [1 - p_SHH - p_BHHH]
u* = r·q_H - l(1 - q_H)(1 - θ_SH)(1 - θ_BH) - S_H - I_SH - I_BH
v* = 0

When (S_H + I_SH)(p_SLH + p_BLHH - 1) + [p_SLH(1 - p_BHHH) - p_SHH(1 - p_BLHH)]g_S + [p_BLHH(1 - p_SHH) - p_BHHH(1 - p_SLH)]g_B + S_L(1 - p_SHH - p_BHHH) ≤ 0, the optimal solution of the model is

c** = [S_H - S_L + I_SH + g_S(p_SHH - p_SLH) + g_B(p_BHHH - p_BLHH)] / [p_SLH + p_BLHH - p_SHH - p_BHHH]
v** = c**(1 - p_SHH - p_BHHH) - g_S·p_SHH - g_B·p_BHHH - S_H - I_SH
u** = r·q_H - l(1 - q_H)(1 - θ_SH)(1 - θ_BH) - S_H - I_SH - I_BH - v**

Similarly, as the supplier can save quality costs by exerting low quality effort, it is more willing to choose the low quality effort level, and the buyer must increase the unit price to induce the supplier to select the high quality effort level, which leaves the supplier holding the agency cost. High inspection implemented by the supplier increases the supply chain's inspection cost, thereby reducing its total expected profit. On the other hand, the supplier's high inspection usually reduces the probability of the buyer's rejection, thereby saving the fines caused by product quality defects and increasing the expected return. Therefore, the buyer needs to transfer part of its proceeds to the supplier in order to ensure that the supplier's earnings are not less than those earned under high inspection; the supplier is then motivated to cancel its inspection.

This paper considers that the buyer can compensate the supplier's earning loss by increasing the trading price so that the supplier's expected return reaches the level attained under high inspection. Let Dv be the expected-return deviation between the case where the supplier does not implement inspection and the case where it does. Then

Dv = (c′ - c)[1 - (1 - q_H)θ_BH]

and the transaction price c′ between the supplier and the buyer after compensation is

c′ = Dv / [1 - (1 - q_H)θ_BH] + c
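A one-line evaluation of this compensation rule follows (a sketch of ours; the Dv value in the example call is an arbitrary placeholder, while c is the θ_BH = 0.87 price from Table 1 below).

```python
# A minimal sketch of the compensation rule above; Dv is an arbitrary
# placeholder and c is taken from Table 1 below.
def compensated_price(c, Dv, q_H, theta_BH):
    """Raise the base price c so the supplier recovers the return gap Dv."""
    return Dv / (1 - (1 - q_H) * theta_BH) + c

print(compensated_price(c=25.98, Dv=3.0, q_H=0.90, theta_BH=0.87))
```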

When the supplier's inspection level information is concealed, the supplier has the information advantage; the buyer is then considered the principal and the supplier the agent, and the supplier has a moral hazard problem. Here the buyer's problem is to determine the appropriate transaction price so that the supplier, pursuing its own profit maximization, selects low inspection rather than high inspection. The buyer's problem is

max_{c, θ_Sj}  u(c | q_i, θ_Sj, θ_BH)
s.t.  v(c | q_i, θ_Sj, θ_BH) ≥ 0    (7)
      v(c | q_i, θ_Sj, θ_BH) ≥ v(c | q_i, θ_Sm, θ_BH), j ≠ m    (8)

When the supplier implements high quality effort, the optimal solution of the model is

c* = [S_H + g_B(1 - q_H)θ_BH] / [1 - (1 - q_H)θ_BH]
u* = r·q_H - l(1 - q_H)(1 - θ_BH) - S_H - I_BH
v* = 0

When the supplier implements low quality effort, the optimal solution of the model is

c** = [S_L + g_B(1 - q_L)θ_BH] / [1 - (1 - q_L)θ_BH]
u** = r·q_L - l(1 - q_L)(1 - θ_BH) - [1 - (1 - q_L)θ_BH]c** + g_B(1 - q_L)θ_BH - I_BH
v** = 0

The supplier's implementation of high quality effort inevitably brings greater benefits to the supply chain, but at this point the supplier only gains its retained earnings. To ensure that the supplier implements high rather than low quality effort, the buyer should transfer part of the proceeds to the supplier by increasing the transaction price. However, after the transfer the buyer's income should not be lower than its expected return when the supplier implements the low quality effort level, so that the buyer retains an incentive to improve quality in collaboration with the supplier. Let Du be the deviation of the buyer's expected return between the supplier's high and low quality effort levels. Then

Du = c′[1 - (1 - q_H)θ_BH] - g_B(1 - q_H)θ_BH - S_H

and the transaction price between the two sides is

c′ = [Du + g_B(1 - q_H)θ_BH + S_H] / [1 - (1 - q_H)θ_BH]

When both the quality effort level and the inspection level information of the supplier are concealed, the supplier has moral hazard problems. The buyer's problem is

max_{c, q_i, θ_Sj}  u(c | q_i, θ_Sj, θ_BH)    (9)
s.t.  v(c | q_i, θ_Sj, θ_BH) ≥ 0    (10)
      v(c | q_i, θ_Sj, θ_BH) ≥ v(c | q_k, θ_Sj, θ_BH), i ≠ k    (11)
      v(c | q_i, θ_Sj, θ_BH) ≥ v(c | q_i, θ_Sm, θ_BH), j ≠ m    (12)

When S_H(1 - p_SLH - p_BLHH) - (I_SH + S_L + g_B·p_BLHH + g_S·p_SLH)(1 - p_BHLH) + g_B·p_BHLH(1 - p_SLH - p_BLHH) > 0, the optimal solution of the model is

c* = [S_H - S_L - g_B·θ_BH(q_H - q_L) + g_B·θ_BH(1 - q_L)θ_SH - g_S(1 - q_L)θ_SH - I_SH] / [(q_H - q_L)θ_BH + (1 - q_L)(1 - θ_BH)θ_SH]
v* = c*(1 - p_SHL - p_BHLH) - g_S·p_SHL - g_B·p_BHLH - S_H
u* = r·q_H - l(1 - q_H)(1 - θ_BH) - S_H - I_BH - v*

Then the supplier holds the agency cost. As the buyer faces a large external loss, the supplier is expected to implement high quality effort. However, with the development of inspection techniques and the increase of inspection efficiency, the supplier can find that choosing low quality with high inspection is more favorable than high quality with low inspection; moreover, the greater the inspection efficiency, the better low quality with high inspection becomes for the supplier. The buyer then needs to raise the unit price to prompt the supplier to increase its quality effort, which leaves the supplier holding the agency cost. With the development of inspection techniques and the increase of inspection efficiency, the agency cost provided by the buyer also grows. When S_H(1 - p_SLH - p_BLHH) - (I_SH + S_L + g_B·p_BLHH + g_S·p_SLH)(1 - p_BHLH) + g_B·p_BHLH(1 - p_SLH - p_BLHH) ≤ 0, the optimal solution of the model is

c** = [S_H + g_B(1 - q_H)θ_BH] / [1 - (1 - q_H)θ_BH]
u** = r·q_H - l(1 - q_H)(1 - θ_BH) - S_H - I_BH
v** = 0

Numerical Example Analysis The suppliers can choose two kinds of effort levels that are high quality and low quality to produce the products, in which qH = 0.90 , qL = 0.65 . The corresponding quality costs are S H = 20 , S L = 10 . The suppliers can directly ship the products to the buyers. And the products can be also firstly self-tested. If they are qualified, then they will be shipped to the buyers. The inspection cost is I SH = 1 . The buyers inspect the products provided by the suppliers and the inspection cost is I BH = 1 . The inspection levels of the buyers and the suppliers are same, θ BH = θ SH = 0.87 . The price that the buyers sell to the end-customers is r = 100 . The concealing of quality effort information of the suppliers. When the suppliers directly ship the products to the buyers without their own inspection, the buyers will inspect them. Through the calculation, we can know that when the quality effort level of the suppliers changes from small to large, the maximum expected return obtained by buyers will be larger and the total expected profit of supply chain is also increased significantly. Through the calculation, the data has showed (Table 1) that the total expected profit of supply chain has little changes and the expected return of buyers is significantly increased with the development of inspection level of buyers. It has also proved that the improvement of the inspection level of buyers has increased the probability of rejecting the suppliers’ products. On the one hand, the fine that the suppliers pay to buyers is increased. On the other hand, the external loss cost of buyers is also reduced so that the expected return of the buyers is significantly increased. Table1The income under different inspection level when supplier choose low-inspection

θ_BH    c       v      u       Z
0.81    29.38   5.38   63.24   68.62
0.83    28.19   4.19   64.47   68.66
0.85    27.06   3.06   65.64   68.70
0.87    25.98   1.98   66.76   68.74
0.89    24.94   0.94   67.84   68.78

Table 2 The income under different quality effort when the supplier chooses inspection

q_H     c       v       u       Z
0.82    54.67   22.02   37.51   59.53
0.84    47.74   17.48   44.10   61.58
0.86    42.13   13.75   49.83   63.58
0.88    37.50   10.76   54.93   65.69
0.90    33.61    8.21   59.53   67.74


When the suppliers, like the buyers, inspect the products themselves, Table 2 shows that although the total expected profit of the supply chain also increases with the supplier's quality effort level, it is still smaller than that in Table 1. The supplier's income after the adjustment of price is compared in Fig. 1.

Fig. 1 The comparison of the supplier's income after the adjustment of price
Fig. 2 The comparison of the supply chain's income under different quality effort levels

The concealing of the inspection level information of suppliers. The supply chain's total income under different quality effort levels is compared in Fig. 2. In this case, in order to encourage the suppliers to choose the high quality effort level, the buyers may set a higher trading price in the contract so that the suppliers can get a higher expected return. However, the buyers' income should not fall below their expected return when the suppliers implement the low quality effort after the improvement of the trading price; otherwise, the buyers have no incentive to improve the product quality cooperatively. When the buyers obtain the maximum profits, the trading price and the earnings of both sides are listed in Table 3.

Table 3 The income after the adjustment of price

θ_BH    c'      v'      u'      Z
0.81    41.28   15.95   52.67   68.62
0.83    41.17   15.85   52.81   68.66
0.85    41.06   15.75   52.95   68.70
0.87    40.95   15.65   53.09   68.74
0.89    40.84   15.55   53.23   68.78

The concealing of quality effort level and inspection level of the suppliers. When the quality effort level and the inspection level of the suppliers are both concealed, the earnings of the buyers and the suppliers under different quality efforts are shown in Table 4 and Table 5.

Table 4 The income under different high-quality effort levels

q_H     c       v       u       Z
0.82    44.25   14.19   46.34   60.53
0.84    38.79   10.61   51.97   62.58
0.86    34.19    7.59   57.05   64.64
0.88    30.25    5.00   61.69   66.69
0.90    26.85    2.77   65.97   68.74

Table 5 The income under different low-quality effort levels

q_L     c       v       u       Z
0.65    26.85    2.77   65.97   68.74
0.67    30.00    5.65   63.09   68.74
0.69    33.71    9.04   59.70   68.74
0.71    38.17   13.11   55.63   68.74
0.73    43.60   18.07   50.67   68.74

When the high quality effort of the suppliers is close to the low quality effort, the buyers' income is reduced and the agency costs held by the suppliers increase significantly. When the difference between the high and the low quality effort is very small, the suppliers find that implementing low quality with high inspection is more favorable than high quality with low inspection, so the buyers need to raise the trading price and pay more agency costs to prevent the suppliers from selecting low quality and high inspection. The earning changes of the buyers and the suppliers as the inspection level changes are listed in Table 6. With the improvement of the inspection level, the maximum income of the buyers is reduced and the agency cost obtained by the suppliers increases. This shows that as the inspection efficiency gradually increases, the suppliers are more willing to choose low quality and high inspection, so the buyers must increase the transaction price to let the suppliers hold the agency cost and thereby prevent them from implementing low quality and high inspection; the expected return of the buyers is then reduced. With the improvement of inspection efficiency, the agency costs keep increasing.

Table 6 The income under different inspection levels

θ_BH, θ_SH    c       v      u       Z
0.81          26.16   2.42   66.20   68.62
0.83          26.34   2.49   66.17   68.66
0.85          26.57   2.61   66.09   68.70
0.87          26.85   2.77   65.97   68.74
0.89          27.18   2.98   65.80   68.78

Conclusion

This paper has established a principal-agent model for the case where the quality effort level and inspection level information of the suppliers are concealed, and the quality improvement of products is considered in the model. When the suppliers improve their quality effort, the total income of the supply chain increases significantly. When the quality effort level and the inspection level of the suppliers are both concealed, the maximum income of the buyers is reduced and the agency costs obtained by the suppliers increase with the improvement of the inspection level. This shows that as inspection technology develops, the buyers need to offer higher agency costs to keep the suppliers from reducing their quality effort level in pursuit of their own incomes.

Acknowledgment

Scientific and technological research and development guidance plan of Tangshan City in 2010, Project Number: 10140221c.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.271

Main Converter Fault Diagnosis for Power Locomotive Based on PSO-BP Neural Networks

Hongsheng SU
School of Automation and Electrical Engineering, Lanzhou Jiaotong University, Lanzhou 730070, P.R. China
[email protected]

Key words: Converter; Fault diagnosis; PSO-BP neural network.

Abstract. To address the flaws of the conventional BP learning algorithm in main converter fault diagnosis systems for power locomotives, namely low convergence speed and easy trapping in local extrema, this paper proposes a novel learning algorithm, PSO-BP neural networks, based on particle swarm optimization (PSO) and BP neural networks. The algorithm has two phases: first, PSO is applied to optimize the weight values of the neural network on training samples; second, the BP algorithm is applied for further optimization on verifying samples until the best weight values are achieved. A practical example indicates that the proposed algorithm has quick convergence speed and high accuracy, and is an ideal pattern classifier.

Introduction

The converter is a key device for energy conversion in a power locomotive, composed of large-power semiconductor rectifying tubes, thyristors, and other related elements; whether it works normally has a direct influence on the safe operation of railway locomotives and vehicles. Hence, investigating locomotive converter faults is of great significance. In conventional methods, BP neural networks or other improved algorithms are adopted to implement fault diagnosis, but these methods cannot achieve satisfying results [1-3]. To change this situation and achieve better results, this paper presents a new learning algorithm called PSO-BP to optimize the weight values of neural networks, and thereby acquires satisfying results.

Locomotive Main Converter

The AC-DC locomotive circuit made in China is composed of a pantograph, main transformer, rectifying equipment, traction motors, and the related HV devices, which convert the electric energy from the catenary (25 kV, 50 Hz) into mechanical energy for the locomotive [1]. As shown in Fig.1, the majority of main converters in power locomotives adopt the unequal tri-segment semi-controllable-bridge phase-control rectifier with booster and resistance brake. The structure of this circuit is simple and convenient to control, so it is broadly applied in diverse AC-DC power locomotives.


Fig.1 Circuit diagram of main converter


PSO-BP Neural Network Principle

BP Neural Network. BP belongs to the δ class of algorithms and is a supervised learning algorithm [4]. Its main idea is to use the errors between the practical outputs of the network and the target vector to modify the weight values so that the mean square error of the output neurons reaches a minimum. In each adjustment, the changes of the weight values and biases are proportional to the error, and the error influence is propagated backward to each former layer. The BP learning algorithm comprises two parts: forward propagation of information and backward propagation of error. During forward propagation, the input information is propagated to the output layer after calculation in each layer, and the neuron states of each layer only influence those of the next layer. If the desired values have not been obtained in the output layer, the error changes are worked out and the algorithm shifts to the backward propagation process. Through backward propagation, the error signals in the output layer are propagated backward to each former layer to adjust the weight values of the neurons until the expected aim is achieved. Fig.2 shows a two-layer neural network structure; its learning algorithm can be found in [5].

Fig.2 BP network structure

PSO Algorithm. Particle Swarm Optimization (PSO) is a global random optimization algorithm. Its basic idea comes from the intelligent behavior of a swarm; specifically, it emulates the migration and gathering of birds seeking food. The algorithm generates swarm intelligence to optimize the seeking aim through cooperation and competition among the particles. PSO not only retains the swarm-based global scouting strategy, with a comparatively simple displacement-speed operation model that is easy to program, but also holds optimization properties such as fast operation speed and a relatively simple structure. PSO is a highly efficient parallel searching algorithm and performs prominently in tackling non-linear optimization problems. A basic PSO algorithm is described below.

Set a swarm composed of n particles in a D-dimensional space. The ith particle can be expressed as a D-dimensional vector x_i = (x_i1, x_i2, ..., x_iD), i = 1, 2, ..., n; that is, the position of the ith particle in the D-dimensional space is x_i, and each such position is named a potential solution. The fitness value of x_i is calculated by substituting it into the aim function f(x_i); according to this value, x_i can be judged good or bad. The flight speed of the ith particle is also a D-dimensional vector, written as v_i = (v_i1, v_i2, ..., v_iD). Let the optimal position sought so far by the ith particle be p_i = (p_i1, p_i2, ..., p_iD), and the optimal position sought by the overall particle swarm be p_g = (p_g1, p_g2, ..., p_gD). Then the position and speed of particle i evolve according to the following equations (1) and (2):

v_id^(k+1) = ω·v_id^k + c1·r1·(p_id^k − x_id^k) + c2·r2·(p_gd^k − x_id^k)    (1)

x_id^(k+1) = x_id^k + v_id^(k+1)    (2)


In (1), ω is the inertia weight and indicates the influence of the particle's present speed on the next generation; a suitable ω keeps the particle's exploration abilities balanced. Parameters c1 and c2 are non-negative learning factors whose values are usually limited to the range of one to two: if they are too small, the particle stays far from the optimal aim area; if too large, the particle may suddenly fly over the aim area. r1 and r2 are random variables in the range of zero to one.

The key to the success of the PSO algorithm is the selection and adjustment of its parameters, including the swarm size P, the particle dimension d, the learning factors c1 and c2, the inertia weight ω, the largest speed v_max, the largest iteration count T_max, and the calculating precision ε, where T_max and ε are the termination conditions, confirmed by optimization quality and seeking efficiency. In general, P is in the range of 10 to 40; d is the dimension of the solution space, determined by the concrete problem; v_max is the largest flying speed of the particles: if it is too large, the particles may fly over better solutions, and if too small, they easily fall into a local optimum, so it directly influences the global exploration ability. To determine the influence of ω on the algorithm, Eberhart and Shi analyzed a large quantity of experimental data and gave the following conclusions: if v_max ≤ 2, then ω close to 1 is preferred; if v_max ≥ 3, then ω close to 0.8 is better; and when ω ∈ (0.9, 1.2), ideal results may be attained. The inertia weight ω is usually taken ≤ 1.4 so that the particles hold locomotion inertia and can break into new exploration space, or it is set to decrease linearly with the iteration count, the so-called LDW strategy [6], that is,

ω = ω_max − ((ω_max − ω_min)/G) × g    (3)

where G expresses the gross iteration count and g the current iteration count. The learning factors c1 and c2 represent the statistical acceleration weights of each particle towards the p_best and g_best positions. Compatible c1 and c2 can speed convergence without falling into local minima; c1 + c2 may well be approximately 4, and generally c1 = c2 ≈ 2.05.
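As a concrete illustration of the update rules (1)-(3), the following minimal sketch implements one PSO step and the LDW inertia weight. The list-based particle representation and the velocity clamp at v_max are choices of this sketch, not prescriptions of the paper.

```python
import random

def pso_step(x, v, p_best, g_best, w, c1=2.05, c2=2.05, v_max=0.5):
    """One PSO update per Eqs.(1)-(2): new velocity, then new position."""
    new_x, new_v = [], []
    for xi, vi, pi, pg in zip(x, v, p_best, g_best):
        r1, r2 = random.random(), random.random()
        vel = w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (pg - xi)
        vel = max(-v_max, min(v_max, vel))  # limit to the largest speed
        new_v.append(vel)
        new_x.append(xi + vel)
    return new_x, new_v

def ldw(w_max, w_min, G, g):
    """Linearly decreasing inertia weight, Eq.(3)."""
    return w_max - (w_max - w_min) / G * g
```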

PSO-BP Learning Algorithm. The typical BP algorithm easily falls into local minima, which leads to a low convergence speed. In addition, network learning is very sensitive to the initial weight values, whose slight changes can cause oscillation of the network; fixing these parameters requires constant training, but excessive training leads to over-fitting. Thanks to its quick convergence speed and better global exploration capability, the PSO algorithm is applied to optimize the weights of the neural network so as to overcome the flaws of the BP algorithm. Thus the generalization ability of the neural network is expanded, and its learning ability and convergence speed are improved dramatically. Two key points must be observed when PSO is applied to optimize the network weights:

1. Establish a mapping between the particle dimensions and the network weights. Each dimension of a particle in the swarm corresponds to a connecting weight in the neural network; in other words, the number of weights in the network should equal the dimension of the PSO particles.

2. Select the MSE of the network as the fitness function of the PSO algorithm. Let there be d neurons in the input layer, m in the hidden layer, and n in the output layer; the network therefore possesses d×m + m×n + m + n weights and thresholds in all. Correspondingly, the dimension of each PSO particle should also be d×m + m×n + m + n. Let the network possess N training samples; the mean square error (MSE) is then expressed by

MSE_T = (1/N) Σ_{i=1}^{N} [ Σ_{j=1}^{n} (t_ij − y_ij)² ]    (4)


Equation (4) may serve as the fitness function in the PSO algorithm, where t_ij is the desired output and y_ij is the practical output of the network. In PSO-BP, all weights and thresholds are first coded as a real-number vector that expresses an individual in the colony. The colony of these vectors is generated stochastically; during evolution, each newly generated individual is reverted to the weights of the network, and with MSE as the fitness function, learning becomes an optimization problem: seeking the group of optimal weights that minimizes the MSE. If the MSE falls below the precision given beforehand, the training process stops; otherwise, iteration continues until the largest iteration count is reached. At that moment, the achieved parameters are quite close to the best combination; on this basis, the BP algorithm is used to optimize the parameters of the network further until the best parameters are obtained, that is, until the misjudgment rate of the testing sample group is minimal.

Example

The diagnosis decision table of the main converter of one power locomotive is shown in Table 1 [3].

Table 1 Diagnosis decision table

Serial num.  Malfunction code  Malfunction elements
1    000 0000   Normal
2    001 0001   VD1
3    001 0011   VD2
4    001 0111   VT1
5    001 0110   VT2
6    001 0100   VT3
7    001 1100   VT4
8    001 1000   VD3 or VT6
9    001 1001   VD4 or VT5
10   010 0001   VD1 & VD2
11   010 0011   VD3 & VD4 (or VT5 & VT6)
12   010 0111   VT1 & VT2
13   010 0110   VT3 & VT4
14   011 0001   VD1 & VT1
15   011 0011   VD1 & VT2
16   011 0111   VD1 & VT3 (or VT2 & VT3)
17   011 0110   VD1 & VT4
18   011 0100   VD2 & VT1
19   011 1100   VD2 & VT2
20   011 1000   VD2 & VT3
21   011 1001   VD2 & VT4 (or VT1 & VT4)
22   011 1011   VT1 & VT3
23   011 1010   VT2 & VT4
24   100 0001   VD3 & VT5 (or VD3 & VT6)
25   100 0011   VD3 & VT5
26   100 0111   VD4 & VT5
27   100 0110   VD4 & VT4 (or VT4 & VT5)
28   101 0001   VD3 & VD1 (or VT6 & VD1)
29   101 0011   VD3 & VD2 (or VT6 & VT2)
30   101 0111   VD3 & VT1 (or VT6 & VD1)
31   101 0110   VD3 & VT2 (or VT2 & VT6)
32   101 0100   VD3 & VT3 (or VT3 & VT6)
33   101 1100   VD3 & VT4 (or VT4 & VT6)
34   110 0001   VD4 & VD1 (or VT5 & VD2)
35   110 0011   VD4 & VD2 (or VT5 & VD2)
36   110 0111   VD4 & VT1 (or VT1 & VT5)
37   110 0110   VD4 & VT2 (or VT2 & VT5)
38   110 0100   VD4 & VT3 (or VT3 & VT5)

In Table 1, fault patterns are coded with the 7-bit code X7X6X5X4X3X2X1, Xi = 0 or 1, where the higher 3 bits X7X6X5 represent the primary classification and the lower 4 bits X4X3X2X1 express the sub-classification of the fault patterns, based on a circulation code. According to [3], we select the network structure as 8-10-7. Thus, there are 8×10 + 10×7 + 10 + 7 = 167 weights and thresholds to be optimized in all, so the dimension of the particles in the PSO algorithm should also be 167. Let the gross number of particles be 50 and the learning factors c1 and c2 be 2.05; the initial positions of the particles are given according to 2·rand() − 1, and the initial velocities according to rand()×0.02 − 0.01. Let ω decrease linearly with the iteration count in the range of 0.45 to 0.95, and V_max = 0.5. The transfer function of the neurons in the middle layer is the S-type tangent function tansig, and that of the output layer is the S-type logarithm function logsig; the outputs of these functions lie within the range of 0 to 1, which just satisfies the output requirements of the network. The learning function is the trainlm function. Let the learning count T_max of the network be 500, the training aim ε be 0.01, and the learning rate equal 1. We then get the learning process of the network shown in Fig.3.


Fig.3 Error graph of PSO-BP neural network
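For readers who want to reproduce the setup, the following minimal numpy sketch shows the particle-to-weight mapping and the MSE fitness of Eq.(4) for the 8-10-7 network. The tansig/logsig pair follows the text; the array layout and names are assumptions of this sketch, and the PSO loop sketched earlier would minimize mse_fitness before BP fine-tuning.

```python
import numpy as np

D_IN, D_HID, D_OUT = 8, 10, 7          # 8-10-7 structure, 167 parameters

def unpack(p):
    """Map one 167-dimensional particle onto the network weights/biases."""
    i = 0
    w1 = p[i:i + D_IN * D_HID].reshape(D_IN, D_HID); i += D_IN * D_HID
    w2 = p[i:i + D_HID * D_OUT].reshape(D_HID, D_OUT); i += D_HID * D_OUT
    b1 = p[i:i + D_HID]; i += D_HID
    b2 = p[i:i + D_OUT]
    return w1, b1, w2, b2

def forward(p, x):
    w1, b1, w2, b2 = unpack(p)
    h = np.tanh(x @ w1 + b1)                      # tansig hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))  # logsig output layer

def mse_fitness(p, X, T):
    """Eq.(4): mean over samples of the summed squared output error."""
    Y = forward(p, X)
    return np.mean(np.sum((T - Y) ** 2, axis=1))
```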

Now let the input variables of the network be those in Table 2, whose expected outputs are VD1 malfunction (0010001), VD2 malfunction (0010011), VT1 malfunction (0010111), VT2 malfunction (0010110), VT3 malfunction (0010100), VT1 & VT2 malfunction (0100111), VT3 & VT4 malfunction (0100110), VT1 & VT5 malfunction (1100111), VD2 & VT3 malfunction (0111000), VD3 & VD1 malfunction (1010001), and normal (0000000). Below we apply the data in Table 2 as the inputs of the trained PSO-BP network; the diagnosis results are shown in Table 3.

Table 2 Test samples

Serial  x1      x2      x3      x4      x5      x6      x7      x8
1       1.4928  1.0277  0.3476  0.1384  0.1107  0.0620  0.1199  0.5000
2       1.5486  0.9585  0.2951  0.1564  0.1169  0.1265  0.0689  0.5000
3       1.3836  0.8143  0.2299  0.1586  0.0566  0.0955  0.0623  1.0000
4       1.4036  0.7968  0.2301  0.1458  0.1314  0.0648  0.1005  0.0000
5       1.4674  0.8593  0.1711  0.1054  0.1382  0.0969  0.0954  0.0000
6       1.1494  0.7028  0.2112  0.1335  0.1517  0.0477  0.0750  0.5000
7       1.2433  0.6857  0.1502  0.1383  0.1180  0.0635  0.0878  1.0000
8       0.9421  0.6364  0.2085  0.0794  0.0943  0.1128  0.0801  0.0000
9       1.4370  0.7725  0.2656  0.1647  0.1190  0.0964  0.0520  0.0000
10      1.1361  0.7913  0.2842  0.1221  0.0998  0.0479  0.0576  1.0000
11      1.5331  0.9138  0.2615  0.1227  0.1210  0.1092  0.0745  0.5000

Table 3 Diagnosis results

Num  Export data                                       Malfunction
1    0.0000 0.0000 0.9989 0.0000 0.0000 0.0000 1.0000  VD1
2    0.0000 0.0000 0.9993 0.0000 0.0000 0.9992 1.0000  VD2
3    0.0000 0.0040 1.0000 0.0000 1.0000 1.0000 1.0000  VT1
4    0.0000 0.0000 1.0000 0.0000 0.9996 1.0000 0.0000  VT2
5    0.0000 0.0000 1.0000 0.0000 1.0000 0.0002 0.0000  VT3
6    0.0000 1.0000 0.0023 0.0000 1.0000 1.0000 1.0000  VT1 & VT2
7    0.0000 1.0000 0.0011 0.0001 0.9915 1.0000 0.0000  VT3 & VT4
8    1.0000 1.0000 0.0000 0.0000 1.0000 1.0000 1.0000  VT1 & VT5
9    0.0000 1.0000 1.0000 0.9999 0.0000 0.0001 0.0001  VD2 & VT3
10   1.0000 0.0000 1.0000 0.0001 0.0000 0.0000 1.0000  VD3 & VD1
11   0.0000 0.0000 0.0042 0.0000 0.0001 0.0002 0.0002  Normal
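Reading Table 3 amounts to thresholding the seven outputs and looking the resulting bit pattern X7..X1 up in Table 1. A hypothetical helper for this step, with only an excerpt of the fault table, might look as follows.

```python
# Hypothetical post-processing: threshold the 7 outputs and decode the
# bit pattern X7..X1 against Table 1 (only an excerpt is listed here).
FAULTS = {"0000000": "Normal", "0010001": "VD1", "0010011": "VD2",
          "0010111": "VT1", "0100111": "VT1 & VT2", "1100111": "VT1 & VT5"}

def decode(outputs, threshold=0.5):
    bits = "".join("1" if o > threshold else "0" for o in outputs)
    return FAULTS.get(bits, "unknown pattern " + bits)

print(decode([0.0000, 0.0000, 0.9989, 0.0000, 0.0000, 0.0000, 1.0000]))  # VD1
```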


After checking, the diagnosis results in Table 3 are fully consistent with practice; the correct rate is 100%. In [3], by contrast, sample 7 is diagnosed as VT1 & VT3 by mistake, giving a correct rate of 90.91%. Clearly, the algorithm presented in this paper achieves higher accuracy.

Summary

The PSO-BP learning algorithm proposed in this paper effectively tackles the flaws of BP neural networks in main converter fault diagnosis for power locomotives, and is an effective global optimization algorithm. Moreover, PSO-BP also improves on plain PSO in terms of convergence speed and classification precision. Many improved models exist for both PSO and BP, but how to match the two models better and make them work more effectively remains a laborious job.

References

[1] Q.L. Li and Z.T. He: Railway Locomotives & Vehicles Vol. 4 (2009), p. 29
[2] S.B. Liu, X.H. Jiang and T.F. Chen: Electric Drive for Locomotives No. 5 (2005), p. 57
[3] Z.L. Wei and H.S. Su: Electronics Quality No. 12 (2009), p. 18
[4] S. Cong: Theory and Application of Neural Networks (USTC Press, China 2003)
[5] H.S. Su and H.Y. Dong: WSEAS Trans. on Circuits and Systems Vol. 8 (2010), p. 136
[6] Y. Shi and R.C. Eberhart: Institute of Electrical and Electronics Engineers No. 5 (1998), p. 69

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.277

Research on Software Trustworthiness Level Evaluating Model Based on Layered Idea and Its Application

ZHANG Jin1,a, YAN Yong-quan3, SUN Yun-chuan1, ZHAO Guo-xing1, LIU Jun-fei1
1 College of Information Science and Technology, Beijing Normal University, Beijing 100875, China
2 College of Computer, Beijing Institute of Technology, Beijing 100081, China
3 Information Technology Department, Shanxi Professional College of Finance, Taiyuan, Shanxi 030008, China
a [email protected]

Key words: trustworthy software, trustworthy level evaluation model, layered idea, analytic hierarchy process (AHP)

Abstract. To evaluate the software trustworthy level quantitatively, the key is how to effectively integrate the numerous evidence items when analyzing software trustworthiness. Combining with the analytic hierarchy process (AHP), a software trustworthy level evaluation model based on the layered idea is proposed, and the model is applied to evaluate the trustworthy level of a kind of software. Based on the improved trustworthy software triangle model, each single evidence item is analyzed to decide whether it contributes to one single trustworthy attribute or to multiple trustworthy attributes. All evidences are then grouped into evidence groups according to their different contributions to the trustworthy attributes, and the evidence group is used as an individual layer between the evidence level and the trustworthy attribute level. Finally, the quantitative evaluation of the software trustworthy level is done from the evidence layer at the bottom to the evaluation level at the top, with AHP used to set the weights. The proposed evaluation model effectively balances theoretical complexity and operational convenience, and the layered AHP is easy to realize in the model. To analyze the model performance, a software system is used as the object of simulating experiments; the experimental results show that the proposed model can evaluate the software system effectively.

Introduction

With the degree of global informationization increasing, software, one of the key foundations of the information society, has penetrated into every corner of society and dominates human activities. However, with the demand for software systems rising sharply, software systems are becoming increasingly large and complicated, which introduces many potential factors that can disable the software. Software systems running in unexpected ways bring a sense of mistrust to users and result in the issue of "software trustworthiness" [1]. In research, trustworthy software is defined as follows: the operation and results of the software always follow people's anticipation, and the software can provide continuous service even when attacked or disturbed. To satisfy this requirement, how to evaluate software trustworthiness has to be resolved, but evaluating software trustworthiness is very complicated. In Ref. [1], the software trustworthiness problem is studied through software trustworthy attributes, which include six aspects (reliability, usability, maintainability, survivability, security and real time); software trustworthiness is evaluated based on the trustworthiness of the different trustworthy attributes. In Ref. [2], whether software is trustworthy is decided by the products of the trustworthy attributes and their weights, so how to decide the corresponding weights is one of the keys. From another perspective,


software trustworthiness is related not only to the trustworthy attributes but also to the different stages of the software lifecycle, so a synthetic method is needed to evaluate it. Aiming at the software trustworthiness evaluating problem, different methods for different objects have been proposed from different perspectives. In Ref. [3], a framework supporting software resource trustworthiness evaluation is proposed and a corresponding evaluating system for a software resource database is designed. In Ref. [4], the trustworthiness of open source components is researched. In Ref. [5], a trustworthy evidence model based on validation is proposed. In Ref. [6], software trustworthiness based on components is discussed. In Ref. [7], a software trustworthiness concept model for the Internet is proposed. In Ref. [8], for network services, a network trustworthy service strategy based on decision theory and ontology theory is proposed. In Ref. [9], for the software trustworthiness evaluating criterion, a software trustworthiness evaluating method using a software grade model is proposed. In Ref. [10], a software trustworthy evaluating method based on the OWG operator is proposed. Building on these researches, a software trustworthy evaluating model based on the layered idea is proposed here. The model uses the improved software trustworthy triangle model as its base: the evidence is analyzed and its characteristics are obtained; similar evidences are then clustered into groups, each forming one layer element of the model, where all the evidences in one group belong either to one trustworthy attribute or to all trustworthy attributes; finally, AHP is used to calculate the weights of the elements in each layer of the model. In the experiment, one software system is used as the testing object, and the experimental results show that the proposed model can evaluate software trustworthiness effectively.

Background

Improved Trustworthy Software Triangle Model [11]. Based on the software trustworthy model proposed by the Trustie group, an improved trustworthy software triangle model, in which business trustworthiness is the core, is proposed in Ref. [11] in combination with the business characteristics of a particular industry. The model is shown in Fig.1. The center of the model is "industry trustworthy software level definition - industry trustworthy software evidence model - industry trustworthy software evaluating model". In the industry trustworthy software level definition, the software technologies and business characteristics are emphasized; in the industry trustworthy software evidence model, the evidences of the core business are emphasized; and in the industry trustworthy software evaluating model, the importance of business to quantitative evaluation is emphasized.

Fig.1 The business-oriented trustworthy software triangle model [11]

The improved trustworthy software triangle model is composed of three parts. The software trustworthy level definition aims at the users' expectations for software trustworthy attributes and the submitted trustworthy evidence types; it grades the software trustworthy level and gives the detailed definition of each grade, which provides the basis for trustworthy evaluation. There are six levels of software trustworthiness, from level 0 to level 5, and different levels denote different trustworthiness.


Based on the trustworthy definition, the software trustworthy evidence model gives the expected evidence set needed to satisfy software of some trustworthy level. The evidences come from different stages of the software lifecycle, such as the software developing stage, the software submitting stage and the software using stage. The software trustworthy level evaluation can then be done based on the definition and the evidences.

Analytic Hierarchy Process. AHP, proposed by Thomas Saaty in the 1970s, is a systematic and hierarchical method. Combining quantitative and qualitative ideas, AHP is a practical and effective method for dealing with complex decision-making problems and has been widely used in many fields, such as economic planning and management, energy policy and distribution, behavioral science, military command, transport, agriculture, education, human resources, health care and the environment. Generally speaking, there are four steps in using AHP to solve problems. Step One: establish the hierarchical structure model. Step Two: construct the comparison matrixes. Step Three: calculate the weight vector and do the consistency test. Step Four: calculate the combination weight vectors and do the combination consistency check.

Software Trustworthiness Level Evaluating Model Based on Layered Idea

Model Summarization. The basic idea of the proposed software trustworthiness level evaluating model based on the layered idea is shown in Fig.2. The whole model is composed of the evaluating layer, the attribute layer, the grouping layer and the evidence layer. The evidence in the evidence layer comes from different stages of the software lifecycle and different aspects of the business, and each evidence can be given a value. After the single evidences are analyzed, each group in the grouping layer belongs either to one trustworthy attribute or to all trustworthy attributes. There are six elements in the attribute layer, each reflecting a different aspect of software trustworthiness. The evaluating layer evaluates the trustworthy level of the software based on the calculated results of the attribute layer; the trustworthy level is decided based on the calculated value and the rules. There are three key points, as follows.

(1) How to realize the quantitative process in the model. The quantitative process is done from the bottom to the top. First, each evidence in the evidence layer is given a value in [0,1]. Second, each element in the evidence layer is multiplied by its weight (θij), and all the products are added to get the value of a group in the grouping layer. Third, each group in the grouping layer is multiplied by its weight (βi or γij), and all the products are added to get the value of a trustworthy attribute in the attribute layer. Finally, the values of all trustworthy attributes are multiplied by the corresponding weights (αi) to get the value of the software system. Based on the thresholds of the different trustworthy levels, the trustworthy level of the software can be decided. (A sketch of this bottom-up computation is given after Table.1 below.)

(2) How to set the corresponding weight of each element in the model. In the model, each element in a given layer (evidence layer, grouping layer and attribute layer) has a weight used to calculate the value of the element in the upper layer. The weights are decided based on AHP. For example, for some software system, a comparison matrix is constructed first when the weights of the trustworthy attributes are decided; to reduce the error, the average matrix is calculated from several matrixes given by different experts. Then the consistency is checked and the weights are obtained through matrix operations.
(3) How to realize the grouping layer in the model. According to the improved trustworthy software triangle model proposed in Ref. [11], the evidence in the evidence layer relates to software technology and business. Grouping the evidence is based on the characteristics of the trustworthy attributes and of the evidences. The evidence set corresponding to the usability of a power production management system is listed in Table.1. There are two kinds of evidence, single evidence (SE) and integrative evidence (IE): single evidence contributes only to usability, while integrative evidence contributes to all trustworthy attributes.


Fig.2 Software trustworthiness evaluating model based on layered idea

Table.1 Grouping of part of the evidences

Group     Evidence set                 Name of evidence
SE group  Usability                    Working ticket; operating ticket; ……
IE group  Previous system (PS)         Consistency between previous system and the type of current electric power plant; consistency between previous system and the scope of current electric power plant; ……
          Requirement analyse (RA)     Covering rate joining requirement analyzing process; number of managers joining requirement analyzing process; ……
          Software design (SD)         Extent to follow the design process managing rules; capability of software designers; ……
          Using effect (UE)            System effect on working efficiency of the electric power plant; ratio of manual assistant management; ……
          Third party evaluation (TE)  Evaluating results of third-party software evaluating institution; evaluating results of third-party power informationization evaluating institution
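The bottom-up computation of point (1) can be sketched as follows. The scores and weights here are illustrative placeholders rather than values from the paper, and only two groups are shown.

```python
# Minimal sketch of the bottom-up scoring in the layered model: evidence
# scores in [0,1] are combined with AHP-derived weights into group scores,
# and group scores into an attribute score. All numbers are hypothetical.
def weighted_sum(scores, weights):
    assert len(scores) == len(weights)
    return sum(s * w for s, w in zip(scores, weights))

evidence_scores = {"SE": [0.8, 0.9], "SD": [0.7, 0.6]}   # assumed values
theta = {"SE": [0.5, 0.5], "SD": [0.6, 0.4]}             # evidence weights
group_scores = {g: weighted_sum(evidence_scores[g], theta[g])
                for g in evidence_scores}

beta = {"SE": 0.45, "SD": 0.55}                          # group weights
usability = weighted_sum([group_scores[g] for g in beta],
                         [beta[g] for g in beta])
print(f"usability score: {usability:.3f}")
```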

Simulating Experiment. To demonstrate the maneuverability of the proposed model, usability is used as an example to explain how to construct the model and how to calculate.

Modeling process. First of all, according to the proposed evaluation model, there are several groups in the grouping layer contributing to usability, including the SE group, PS, RA, SD, UE and TE listed in Table.1. A 6×6 comparison matrix is constructed, and each element in the matrix is set from 1 to 9 based on relative importance; if the importance ratio between elements i and j is a_ij, then the importance ratio between elements j and i is a_ji = 1/a_ij. The consistency index (C.I.) is then calculated by the formula

C.I. = (λ_max − n)/(n − 1)

The corresponding average random consistency index (R.I.) is found from Table.2; the R.I. values for matrixes whose order ranges from 1 to 15 were obtained by averaging 1000 calculations. The consistency ratio (C.R.) is then calculated by the formula

C.R. = C.I./R.I.

When C.R. < 0.1, the matrix is acceptable and the elements of the next layer can be input; otherwise the matrix should be adjusted suitably, and the input is refused until a satisfactory matrix is given.


Table.2 Average random consistency index R.I.

Order of matrix   1     2     3     4     5     6     7     8
R.I.              0     0     0.52  0.89  1.12  1.26  1.36  1.41

Order of matrix   9     10    11    12    13    14    15
R.I.              1.46  1.49  1.52  1.54  1.56  1.58  1.59
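A minimal numpy sketch of this consistency check, using the R.I. values of Table.2 indexed by matrix order, might read:

```python
import numpy as np

# R.I. values from Table.2, indexed by matrix order (index 0 unused).
RI = [0, 0, 0, 0.52, 0.89, 1.12, 1.26, 1.36, 1.41,
      1.46, 1.49, 1.52, 1.54, 1.56, 1.58, 1.59]

def ahp_weights(A, threshold=0.1):
    """Principal-eigenvector weights plus the C.I./C.R. consistency test."""
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    lam_max = eigvals[k].real
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                              # normalized weight vector
    ci = (lam_max - n) / (n - 1)              # consistency index
    cr = ci / RI[n] if RI[n] else 0.0         # consistency ratio
    return w, cr, cr < threshold
```

Applied to the judging matrix of Table.3 below, it would return the six attribute weights together with the consistency verdict.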

Finally, the value of usability is calculated from bottom to top. First, the eigenvector W is normalized. Assuming the scores of the elements (S1, S2, S3, S4, S5 and S6) in the grouping layer have been calculated from the elements in the evidence layer, each Si (i = 1…6) is multiplied by its weight in W, and all the products are added to get the score of the attribute.

Calculating process. For the power production management system, the calculating process for assessing it with the trustworthiness evaluation model can be described as follows. First of all, the mapping matrix from the attribute layer to the evaluating layer is given. The element values of the mapping matrix, given by power industry experts, are listed in Table.3; after calculation, the matrix satisfies the AHP judging-matrix consistency requirement.

Table.3 The values of elements in the second-layer judging matrix

Trustworthiness   Usability  Reliability  Maintainability  Survivability  Security  Real time
Usability         1          3            5                7              8         9
Reliability       0.3333     1            3                5              7         8
Maintainability   0.2        0.3333       1                3              5         7
Survivability     0.1429     0.20         0.3333           1              3         5
Security          0.125      0.1429       0.2              0.3333         1         3
Real time         0.1111     0.125        0.1429           0.2            0.3333    1

Secondly, the mapping matrixes between the attribute layer and the grouping layer are determined; there are several such matrixes, for example the matrix between usability and its corresponding groups in the grouping layer. Finally, the mapping matrixes between the grouping layer and the evidence layer are determined, such as the judging matrix between software design and its corresponding evidences. After the data are prepared, according to the parameters of the judging matrixes and the scores of the evidences in the evidence layer, the score of the software system can be calculated from bottom to top. By comparing the score with the thresholds, the trustworthy level of the software system can be judged. The trustworthiness level of a power production management system is shown in Table.4.

Table.4 Trustworthy results of an application

Evaluating level   Required score  Evaluated score  Passed
Available level    0.323           0.332            Yes
Confirmed level    0.532           0.5405           Yes
Practical level    0.798           0.7545           No

Acknowledgement

Part of this work was supported by the National Natural Science Foundation of China (Grant No. 60901080), the National High-tech R&D Program of China (863 Program, Grant No. 2009AA010314) and the China Postdoctoral Science Foundation Funded Project (Grant No. 20100480219).


Conclusion

For the software trustworthy level evaluating problem, a software trustworthy level evaluating model based on the layered idea is proposed. The model has four layers: the evaluating layer, the attribute layer, the grouping layer and the evidence layer. How to realize the quantitative process, how to set the weights of the elements and how to group the evidences are described in detail: the quantitative process is done from the bottom to the top, the weights are set based on AHP, and grouping is done according to the characteristics of the evidences and the attributes. To explain the maneuverability of the model, the operating process is shown on a software system. The experimental results show that the proposed model is easy to realize and can evaluate the software trustworthy level effectively.

References

[1] Trustie research group: Trustie Series Technology Standard (V2.0). http://www.Trustie.net, 2009.9
[2] V. Jeffrey: Trusted Software's Holy Grail. Software Quality Journal, Vol. 11(1): 9-17 (2003)
[3] Cai Sibo, Zou Yanzhen, Shao Lingshuang, Xie Bing, Shao Weizhong: Framework supporting software assets evaluation on trustworthiness. Journal of Software, Vol. 21(2): 359-372 (2010) (in Chinese)
[4] Immonen, M. Palviainen: Trustworthiness evaluation and testing of open source components. In: Proc. of the 7th International Conference on Quality Software, Vol. 1: 316-321 (2007)
[5] Ding XL, Wang HM, Wang YY: Verification oriented trustworthiness evidence and trustworthiness evaluation of software. Journal of Frontiers of Computer Science and Technology, Vol. 4(1): 46-48 (2010)
[6] Bill Councill, George T. Heineman: Component-Based Software Engineering and the Issue of Trust. In: Proc. of the 22nd International Conference on Software Engineering, Limerick, Ireland (2000)
[7] Wang HM, Tang YB, Yin G: Trustworthiness of Internet-based software. Science in China Series F: Information Sciences, Vol. 49(6): 759-773 (2006)
[8] E. Michael Maximilien, Munindar P. Singh: Toward Autonomic Web Services Trust and Selection. In: Proc. of ICSOC'04, Vol. 1: 212-221 (2004)
[9] Lang Bo, Liu Xudong, Wang Huaimin, Xie Bing, Mao Xiaoguang: A classification model for software trustworthiness. Journal of Frontiers of Computer Science and Technology, Vol. 4(3): 231-239 (2010)
[10] Shuai Ding, Shanlin Yang: Research on Evaluation Index System of Trusted Software. In: Proc. of the 4th International Conference on Wireless Communications, Networking and Mobile Computing, Vol. 1: 1-4 (2008)
[11] Zhang Jin, Liu Jun-fei, Jiao Hai-xing: Industry Software Trustworthiness Criterion Research Based on Business Trustworthiness. In: Proc. of the International Conference on Power and Energy Systems, Vol. 1 (2010)

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.283

Research on Face Recognition Based on Pulse Coupled Neural Network

Wang Xincun a, Liu Yumin b, Yue Kaihua c, Cheng Man d
Department of Physics and Electronics, Chuxiong Normal University, Chuxiong, China

[email protected], [email protected], [email protected], [email protected]

Key words: Pulse Coupled Neural Network (PCNN); face recognition; time series icon (TSI); Euclidean distance.

Abstract. Based on the conception of the time series icon derived from neural impulse oscillation in the Pulse Coupled Neural Network (PCNN), face recognition is realized using the average time series icon and the average Euclidean distance. Simulations show that the approach is effective and achieves good recognition for various faces and complicated expressions.

Introduction

Face recognition is a current focus of research in artificial intelligence, with wide applications in national security, civil and economic fields, family entertainment, and so on. In recent years many methods for face recognition have appeared, and neural networks are one of the hot topics, with various networks enabling various lines of research. In past research on face recognition with neural networks, back propagation networks and radial basis function networks have been used most [1, 2]. The PCNN applied in this paper, however, is a new kind of neural network, different from traditional artificial neural networks. PCNN, originating directly from research on mammalian visual cortex neural cells, is a single-layer artificial neural network based mainly on an iterative algorithm. It is a self-learning and self-monitoring network that needs no advance training, and it possesses incomparable superiority over traditional neural-network face recognition methods [3]. PCNN is mostly applied in the image processing field [4], such as image fusion [5], image segmentation [6], object recognition [7], image enhancement [8], and so on.

Model of PCNN

The standard neuron model of PCNN is shown in Fig.1.

Fig.1 Standard neuron model of PCNN


Its mathematical expressions [9] are:

F_ij[n] = e^(−α_F)·F_ij[n−1] + V_F·Σ M_ijkl·Y_kl[n−1] + I_ij    (1)

L_ij[n] = e^(−α_L)·L_ij[n−1] + V_L·Σ W_ijkl·Y_kl[n−1]    (2)

U_ij[n] = F_ij[n]·(1 + β·L_ij[n])    (3)

Y_ij[n] = 1 if U_ij[n] > θ_ij[n−1], and 0 otherwise    (4)

θ_ij[n] = e^(−α_E)·θ_ij[n−1] + V_E·Σ Y_kl[n−1]    (5)

In Formula (1), the feedback input of the (i, j)th neuron is F_ij[n]; the amplification factor and decay time constant of the feedback input domain are V_F and α_F respectively, and the input stimulus signal is the image pixel gray value I_ij. In Formula (2), the coupling connection input is L_ij[n]; the amplification factor and decay time constant in the coupling connection domain are V_L and α_L respectively. In Formula (3), the connection factor of the internal activity item U_ij[n] is β. In Formula (5), the amplification factor and decay time constant of the dynamic threshold θ_ij[n] are V_E and α_E respectively. The weight matrixes M_ijkl and W_ijkl are the connection matrixes of the feedback input domain and the coupling connection domain respectively.
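A minimal numpy sketch of one pass of Eqs.(1)-(5) follows, with the parameters of Table 1 below as defaults. The zero-padded 3×3 neighborhood sum and the per-neuron threshold recharge are simplifying choices of this sketch, and the returned ignition totals anticipate the time series g[n] used in the next section.

```python
import numpy as np

K = np.array([[0.5, 1, 0.5], [1, 0, 1], [0.5, 1, 0.5]])  # M = W, Table 1

def neighbor_sum(Y):
    """3x3 weighted sum of the last firings, zero-padded at the border."""
    P = np.pad(Y, 1)
    return sum(K[i, j] * P[i:i + Y.shape[0], j:j + Y.shape[1]]
               for i in range(3) for j in range(3))

def pcnn(I, n_iter=40, beta=0.1, aF=0.1, aL=1.0, aE=1.0,
         VF=0.5, VL=0.2, VE=20.0):
    F = np.zeros_like(I, dtype=float); L = np.zeros_like(F)
    theta = np.ones_like(F); Y = np.zeros_like(F)
    g = []
    for _ in range(n_iter):
        S = neighbor_sum(Y)
        F = np.exp(-aF) * F + VF * S + I        # Eq.(1)
        L = np.exp(-aL) * L + VL * S            # Eq.(2)
        U = F * (1 + beta * L)                  # Eq.(3)
        Y = (U > theta).astype(float)           # Eq.(4)
        theta = np.exp(-aE) * theta + VE * Y    # Eq.(5), per-neuron form
        g.append(int(Y.sum()))                  # ignition total, Eq.(6)
    return np.array(g)
```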

In Formula (6) Yij[n] is the output of ignition neuron Nij at time n. g[n] is the statistic total number of neurons issued by PCNN at time n, which denotes total number of ignition neurons [10] in each image during each iterations. Under the circumstance of parameters are unchanged in PCNN a unique time series icon image reflecting the feature of inputted face image is available, which is taken as PCNN parameters of face image. When it is used to store a M×N image compared with M×N pixels, it can save more storage capacity. Compared with the traditional face recognition approaches, time series icon as characteristic parameters of face image can save storage capacity and recognition computation. Establish face template library PCNN by using time series icon of face samples, and match the template by using average Euclidean distance E to determine recognition results of face image. N

∑ ( g [i] − g [i]) 1

E=

2

0

i =1

N

(7)

In Formula (7), N is the number of iterations; g1[i] is the total number of ignition neurons at time I; g0[i] is the average number of ignition neurons of n sample images at time i, which is determined by the following.

Yanwen Wu

285

n

g 0 [i ] =

∑g

k

(i )

k =1

(8)

n
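The template matching of Eqs.(6)-(8) can be sketched on top of the pcnn function above; the acceptance threshold Δ = 6 follows the experiments below, while the function names are this sketch's own.

```python
import numpy as np

def average_tsi(sample_images, n_iter=40):
    """g0[i] of Eq.(8): the mean time series icon of the sample images."""
    return np.mean([pcnn(img, n_iter) for img in sample_images], axis=0)

def avg_distance(g1, g0):
    """E of Eq.(7)."""
    return np.sum((g1 - g0) ** 2) / len(g1)

def same_face(test_image, template_tsi, delta=6.0, n_iter=40):
    return avg_distance(pcnn(test_image, n_iter), template_tsi) < delta
```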

The algorithm flow is shown in Fig.2: the face sample images and the untested face image are both fed through the standard PCNN model to obtain the average TSI g_0(i) of the samples and the average TSI g_1(i) of the untested image; the average Euclidean distance E between them is computed and compared with Δ, giving the recognition result "same face" if E < Δ and "different face" otherwise.

Fig.2 Algorithm flow

Simulation and Result Analysis

The face images in the experiments come from the universal face database of the Olivetti Research Laboratory (ORL). The database contains 40 persons, with 10 images of size 92×112 and 256 gray levels per person. The sample set consists of the first n images of each of the first 20 persons in the ORL database, 20×n samples in total. Test set 1 consists of the remaining images of those 20 persons, 200 − 20×n samples in total, and is used to test the recognition rate of the PCNN for known face images. Test set 2 consists of the 10 images of each of the other 20 persons in the ORL database, 200 samples in total, and is used to test the rejection rate of the PCNN for unknown face images. The model parameter values used in the experiments are shown in Table 1.

Table 1 PCNN model parameters

Parameter  β    α_L  α_E  α_F  V_F  V_L  V_E  M = W
Value      0.1  1.0  1.0  0.1  0.5  0.2  20   [0.5 1 0.5; 1 0 1; 0.5 1 0.5]

Fig.3 shows some of the face images of subjects s4 and s8 in the ORL database, which exhibit large differences.

(a)-(e) s4_1 to s4_5; (f)-(j) s8_1 to s8_5

Fig.3 Face gray image of different expressions from 2 persons

(a)-(e) TSI of s4_1 to s4_5; (f)-(j) TSI of s8_1 to s8_5; (m) average TSI with s4_1 to s4_4 as samples; (n) average TSI with s8_1 to s8_4 as samples

Fig.4 PCNN time series icon (TSI) corresponding to gray images in Fig.3


Fig.4 shows the PCNN time series icons corresponding to the gray images in Fig.3, together with the average time series icons obtained when the first 4 images (n = 4) of each person are taken as samples. In Fig.4 the abscissa denotes the number of iterations and the ordinate the total number of ignition neurons per iteration. As (a)-(e) and (f)-(j) in Fig.4 show, the time series icons of different expressions of one person are similar. Fig.5 plots the Euclidean distance curves for the face images of all expressions of s4 and s8 in the ORL database; the abscissa is the picture number and the ordinate is the average Euclidean distance between the average time series icon of each image and the average time series icon of the samples (n = 4). The thick solid line denotes the base value Δ = 6. As Fig.5 shows, when the test image comes from the same person as the template, i.e., a different expression of one face, the average Euclidean distance is less than the reference value and the recognition rate is high; on the contrary, when the images come from different persons, the average Euclidean distance exceeds the reference value and the correct rejection rate is 100%.

(a), (c) recognition rate curves of s4; (b), (d) recognition rate curves of s8

Fig.5 Recognition rate curves

In the simulation experiments, n takes different values; the results are shown in Table 2.

Table 2 Recognition rate when n takes different values

n   Number of samples   Recognition rate of samples (%)   Recognition rate of Set 1 (%)   Rejection rate of Set 2 (%)
3   60                  100                               87.24                           100
4   80                  100                               89.08                           100
5   100                 100                               92.47                           100
6   120                 100                               96.32                           100


Because the sample set in Table 2 consists of the first several images of each person, while the 10 images of each person in the ORL database are quite different, some erroneous identifications occur. If the sample set is enlarged, the average time series becomes more stable and the recognition rate increases.

Conclusion

This paper adopts the PCNN to realize face recognition, taking the TSI as the characteristic parameter of the face images. The TSI only counts the number of "bright" pixels, that is, the number of ignition neurons per iteration; therefore, in terms of efficiency, the operation speed is faster than that of other methods, and the recognition ability is higher. If this method is further combined with a classification module, such as traditional pattern classification or other types of neural networks (BP, CP, RBF networks, etc.), the performance of the system can be improved further. The search for a quick and efficient identification method is thus a practical direction.

References

[1] Gan Jun-yin, Zhang You-wei: Face Recognition Based on BP Neural Network. Systems Engineering and Electronics, 2003, 25(1): 113-115
[2] Er M.J., Wu Shi-qian, Lu Jun-wei: Face recognition using radial basis function (RBF) neural networks. Decision and Control, 1999, 3: 2162-2167
[3] Liu Kun, Jin Wen-biao: Speech Recognition on Isolated Digits Based on PCNN and RBF. Computer Engineering and Design, 2008, 29(24): 6298-6301
[4] Johnson J.L., Padgett M.L.: PCNN models and applications. IEEE Transactions on Neural Networks, 1999, 10(3): 480-498
[5] Yu Rui-xin, Zhu Bin, Zhang Ke: New Image Fusion Algorithm Based on PCNN. Opto-Electronic Engineering, 2008, 35(1): 126-130
[6] Nie Ren-can, Zhou Dong-ming, Zhao Dong-feng: Image Segmentation New Methods Using Unit-Linking PCNN and Image's Entropy. Journal of System Simulation, 2008, 20(1): 222-227
[7] Broussard R.P., Rogers S.K., Oxley M.E., et al.: Physiologically motivated image fusion for object detection using a pulse coupled neural network. IEEE Trans. on Neural Networks, 1999, 10(3): 564-573
[8] Wu Er-wei, Zhou Dong-ming, Zhao Dong-feng, Nie Ren-can: Multilevel gray image contrast enhancement approach using double level PCNN. Journal of Yunnan University (Natural Sciences Edition), 2007, 29(5): 459-464
[9] Wang Ke-Jun, Zhang Yan, Tang Mo, Xu Jing: The research of PCNN used in image processing. Journal of Harbin Engineering University, 2006, 27: 182-188
[10] Ma Yi-de, Yuan Min, Qi Chun-liang, Liu Yue, Liu Ying-jie: Research of Feature Extraction from Spectrogram Based on Pulse Coupled Neural Network in Speaker Recognition. Computer Engineering and Applications, 2005, 41(20): 81-8

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.289

Study on Coordinate Information Generation Method of Interested Area in IMRT Inverse Planning System

Zhen Chen1,a, Guoli Li2,b
1 School of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China
2 School of Information Engineering, Zhejiang University of Technology, Hangzhou, China
a [email protected], b [email protected]

Key words: IMRT, inverse planning, interested area, Visualization Toolkit, Bezier curve algorithm, Watershed algorithm.

Abstract. In the inverse planning system of Intensity Modulated Radiation Therapy (IMRT), the coordinates of the interested areas (target, organs at risk) are necessary information for the optimization algorithm. This paper first analyzes several classical region filling algorithms. Based on the VTK toolkit, the Bezier curve algorithm and an improved Watershed algorithm, a method is realized that uses an interactive delineation tool to save the coordinate information of the interested area; the information in the delineated area is accurately extracted by the method.

Introduction

In the inverse planning system of Intensity Modulated Radiation Therapy (IMRT), the optimization algorithm needs the coordinate information of the interested areas to obtain the statistics of the dose distribution, such as the planning target volume, the organs at risk, etc. Distinguishing the edge of the target area is highly complex because the tumor cells and the normal tissue often conglutinate tightly together; the target region cannot be identified precisely by feature extraction methods, so experienced physicians are needed to delineate the border of the interested area manually [1]. Thus, the first step of interested-area data generation is to fill the delineated interested areas. The classical area filling algorithms are the seed filling algorithm, the scanning beam filling algorithm, the boundary filling algorithm and the edge flag algorithm. In the seed filling algorithm, it is very difficult to find the seed, and pushing and popping the seeds also reduces efficiency. The scanning beam filling algorithm must intersect each scan line with the contour line and sort the intersection points, and a polygon area with CQX vertices cannot be filled properly. The advantage of the boundary filling algorithm is its simplicity, independent of the line order; its shortcoming is that each pixel is visited several times, and the intersection computation between each scan line and the contour line is still needed. The edge flag algorithm needs neither the intersection computation nor the sort operation and visits each pixel only once, so it has higher efficiency, but it was proposed for idealized images; due to the diversity of human organs, many problems are encountered in practical medical applications, such as the vertex problem, the level line problem, the scan conversion problem for the polygon border, and so on [2, 3, 4, 5, 6]. This paper uses the point selection tool provided by the VTK toolkit and the Bezier curve generation algorithm to select a series of control points with the mouse and to generate the Bezier curve (the outline of the interested area) through those control points, and finally implements the coordinate data generation in the interested area using an improved Watershed algorithm.


The Data Generation Method for the Region of Interest

Bezier Curve Algorithm. In numerical analysis, the Bezier curve is a very important parametric curve in computer graphics; its higher-dimensional generalization is called the Bezier surface [7]. In applications of Bezier curves, the needed curve is obtained by interactively identifying a set of polygon points. The curve is expressed in the parametric form

$x = x(t), \quad y = y(t)$.                  (1)

Assume $P_i\ (i = 0, 1, \ldots, n)$ are points in the plane, $n + 1$ points in total. Let

$B_{i,n}(t) = C_{i,n}\, t^i (1 - t)^{n-i}, \quad i = 0, 1, \ldots, n$,

where $C_{i,n} = n! / (i!\,(n-i)!)$; the $B_{i,n}(t)$ are the $n + 1$ basis functions of the $n$th-degree Bernstein polynomial with $0 \le t \le 1$. Then $P(t) = \sum_{i=0}^{n} P_i B_{i,n}(t)$ is the $n$th-degree Bezier curve, and the $P_i\ (i = 0, 1, \ldots, n)$ are its control points. The polygon formed by the control points is called the control polygon [8].
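As an illustration of the Bernstein form above, here is a small Python sketch (an example of ours, not the system's VTK-based code) that samples an nth-degree Bezier curve from a set of control points:

# Evaluate P(t) = sum_i P_i * B_{i,n}(t) on a uniform grid of t in [0, 1].
from math import comb
import numpy as np

def bezier_curve(control_points, num_samples=100):
    pts = np.asarray(control_points, dtype=float)   # (n+1) x 2 control polygon
    n = len(pts) - 1
    t = np.linspace(0.0, 1.0, num_samples)
    # Bernstein basis B_{i,n}(t) = C(n, i) * t^i * (1 - t)^(n - i)
    basis = np.stack([comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)])
    return basis.T @ pts                            # num_samples x 2 curve points

# Example: a cubic curve through a hypothetical control polygon.
curve = bezier_curve([(0, 0), (1, 2), (3, 2), (4, 0)])

In the delineation tool itself, the control points would come from the VTK point selection tool rather than from a hard-coded list.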

Traditional Watershed Algorithm. The Watershed algorithm originally comes from geography; its basic idea is to regard the image as a topographic surface. The gray value of each pixel expresses the altitude of that point, each local minimum and its affected area form a catchment basin, and the boundaries of the basins form the watershed. The concept and formation of the watershed can be illustrated by simulating an immersion process: a small hole is cut through each local minimum of the surface and the whole model is submerged slowly; as the immersion deepens, the affected area of each local minimum expands outward, and a dam forms where these catchments meet, which is the watershed [9]. Different understandings of the Watershed algorithm correspond to different implementations; at present, the two best-known and most widely used ones are the bottom-up simulated flooding algorithm [10] and the top-down simulated rainfall algorithm [11]. In recent years, the morphological segmentation method based on the Watershed algorithm has received great attention because of its rapid calculation speed; it can pinpoint the edges of the image accurately, but it suffers from serious over-segmentation, and how to overcome that problem has been a research focus.

Improved Watershed Algorithm. Aiming at the shortcomings of the traditional Watershed algorithm, such as noise sensitivity and over-segmentation, and combined with the needs of the IMRT inverse planning system, this paper proposes an improved Watershed algorithm that combines preprocessing of the original image with the control marker method. Using the vtkLinearExtrusionFilter class provided by the VTK toolkit, the closed curve generated by interactive delineation is first stretched along the coordinates of the delineated image. An intersection operation is then performed between the stretched contour of the region of interest and the original image, changing the pixel values at the intersection of the original image and the contour line. The local minima are marked directly in the gradient image, and threshold processing is applied to the marked values. The modified marked values are imposed as the local minima of the original gradient image, so that the noise and the false local minima produced by fine picture texture are shielded; the Watershed algorithm then segments the modified gradient image and extracts the data of the region of interest. The flow chart of the algorithm is shown in Fig. 1.


In outline (Fig. 1): the pre-processed image is obtained by an intersection operation between the original image and the contour line of the region of interest; the gradient image $\nabla I$ is computed and its minima are marked as $\nabla I_{mark}$; the maximum entropy process is applied to $\nabla I_{mark}$; the modified marks are imposed as the local minima of $\nabla I$ to give the modified gradient image $\nabla I_R^{mark}$, which is segmented by the Watershed algorithm; finally the coordinate information of the region of interest is extracted.

Figure 1 Flowchart of the improved Watershed algorithm

1) Get the Gradient Image $\nabla I$

The gradient image $\nabla I$ is obtained by a morphological gradient algorithm based on mathematical morphology.

a) Single-scale morphological gradient algorithm. If $I(x, y)$ is the input image and $B$ is the structure element with size $N$ [12], the gradient image $\nabla I$ can be expressed as Eq. 2:

$\nabla(I)(x, y) = \delta_1(I)(x, y) - \varepsilon_1(I)(x, y)$.                  (2)

Here, $\delta_1(I)$ denotes the dilation of image $I$ by the structure element, and $\varepsilon_1(I)$ denotes the erosion of image $I$ by the structure element. The result depends on the shape and the size of the structure element $B$: if the structure element is too large, it seriously blurs the edge, and the maximum gradient becomes inconsistent with the edge.

b) Multi-scale morphological gradient algorithm [13].

$MG(f) = \frac{1}{n} \sum_{i=1}^{n} \left[ \left( (f \oplus B_i) - (f \ominus B_i) \right) \ominus B_{i-1} \right]$.                  (3)

Here, $B_i\ (0 \le i \le n)$ is a group of circular structure elements; each term means that the single-scale morphological gradient is applied to the image, and the radius of $B_i$ is $2i + 1$, $i = 0, 1, \ldots, n$. The operation $((f \oplus B_i) - (f \ominus B_i)) \ominus B_{i-1}$ yields a line two pixels wide coinciding with the edge. Through the gradually iterative process over the structure elements, the merging of complex regions can be avoided effectively, while it becomes easier to distinguish the gradient of object edge pixels from that of pixels within flat areas.

2) Mark the Minimum Values $\nabla I_{mark}$

The constrained watershed marking method floods from pre-designated areas, so the region of interest can be accurately extracted by marking the associated objects. Theoretically, the marks related to the objects can be extracted from the image by feature detection methods. Considering the gray level and the connectivity of adjacent pixels in the gradient image of the original image, this paper extracts the local minima related to each object as the marks and constructs the labeled image $\nabla I_{mark}$.

3) The Maximum Entropy Process for $\nabla I_{mark}$

The two-dimensional maximum entropy algorithm is applied to the local minima of the gradient image to obtain the threshold automatically and to remove, from the labeled image, the pseudo-minimum zones that are inconsistent with the objects of the image itself. The maximum entropy threshold segmentation algorithm uses the statistics of the gray histogram to construct a decision function and obtains the best segmentation threshold at the extreme values of that function. The gray level $G$ of the gradient image, obtained by applying the multi-scale operator to the image, can be $1, 2, \ldots, L$; $f_1, f_2, \ldots, f_L$ represent the appearance frequency of each gray level in the image, and the neighborhood average gray level of the gradient image also has $L$ levels. The total number of pixels is $N$, and the two-dimensional histogram is $h(i, j) = p_{ij}\ (0 \le i, j \le L-1)$, where $i$ is the gray level of a pixel and $j$ is its neighborhood average gray level. If $f_{ij}$ is the number of pixels with gray level $i$ and neighborhood average gray level $j$ in the gradient image, then $p_{ij}$ can be obtained by the following equation [14]:

$p_{ij} = \frac{f_{ij}}{N}, \quad \left( \sum_{i=0}^{L-1} \sum_{j=0}^{L-1} p_{ij} = 1 \right)$.                  (4)

The cumulative probability distributions of the target and the background are:

$p_1 = \sum_{i=0}^{s} \sum_{j=0}^{t} p_{ij}, \qquad p_2 = \sum_{i=s+1}^{L-1} \sum_{j=t+1}^{L-1} p_{ij}$.                  (5)

The two-dimensional entropies of the corresponding target and background can be defined respectively as:

$H_1 = -\sum_{i=0}^{s} \sum_{j=0}^{t} p_{ij} \log p_{ij}, \qquad H_2 = -\sum_{i=s+1}^{L-1} \sum_{j=t+1}^{L-1} p_{ij} \log p_{ij}$.                  (6)


The discriminant function of the total entropy of the gradient image can be defined as:

$H(s, t) = H_1 + H_2 = \log[p_1 (1 - p_1)] + \frac{H_1}{p_1} + \frac{H_L - H_1}{1 - p_1}$.                  (7)

The optimal threshold $(s^*, t^*)$ should satisfy:

$H(s^*, t^*) = \max \{ H(s, t) \}$.                  (8)

Here, $H_L = -\sum_{i=0}^{L-1} \sum_{j=0}^{L-1} p_{ij} \log p_{ij}$.
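A brute-force Python sketch of this search is given below; it is our reading of Eqs. 4-8, with an illustrative gray-level count L, a 3x3 neighborhood average and an assumed non-constant input image, not the system's implementation. SciPy is assumed for the averaging filter.

import numpy as np
from scipy.ndimage import uniform_filter

def max_entropy_threshold(image, L=64):
    img = (image.astype(float) / image.max() * (L - 1)).astype(int)
    avg = uniform_filter(img.astype(float), size=3).astype(int)   # neighborhood average
    p = np.histogram2d(img.ravel(), avg.ravel(),
                       bins=L, range=[[0, L], [0, L]])[0]
    p /= p.sum()                                    # Eq. 4: the p_ij sum to 1
    plogp = p * np.log(p + 1e-12)
    H_L = -plogp.sum()
    P1 = p.cumsum(0).cumsum(1)                      # p1 for every (s, t), Eq. 5
    Hc = -plogp.cumsum(0).cumsum(1)                 # H1 for every (s, t), Eq. 6
    best, s_t = -np.inf, (0, 0)
    for s in range(L - 1):
        for t in range(L - 1):
            p1, H1 = P1[s, t], Hc[s, t]
            if p1 <= 0 or p1 >= 1:
                continue
            H = np.log(p1 * (1 - p1)) + H1 / p1 + (H_L - H1) / (1 - p1)   # Eq. 7
            if H > best:
                best, s_t = H, (s, t)               # Eq. 8: maximize H(s, t)
    return s_t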

Using the threshold obtained automatically by the maximum entropy algorithm to modify the marks, the impact of pseudo-minima on the segmentation result is shielded effectively, and a reasonably satisfactory marked image is finally obtained.

4) Segment $\nabla I_R^{mark}$ by the Watershed Algorithm

After step 3, the modified marks are imposed as the local minima of the gradient image, and the modified gradient image $\nabla I_R^{mark}$ is

$\nabla I_R^{mark} = MIN(\nabla I \mid \nabla I_{mark})$.                  (9)

Here, $MIN(\cdot)$ denotes the minima imposition operation. Finally, the Watershed segmentation algorithm is applied to $\nabla I_R^{mark}$ to get the needed result:

$wat = watershed(\nabla I_R^{mark})$.                  (10)

5) Extract the Coordinate Information of the Region of Interest

The Watershed algorithm is highly sensitive to changes in pixel values, so it detects not only the contour changes of the region of interest but also low-contrast changes inside uniform areas, which results in over-segmentation: the correct contour is overwhelmed by a large number of irrelevant contours. A selective region merging method is used to overcome this problem, as follows:

a. Establish the region adjacency graph (RAG) to represent the neighboring relations between regions, calculate the dissimilarity value between adjacent regions, and sort them with a heap data structure;
b. Merge the adjacent regions with the minimum dissimilarity value, and modify the RAG and the heap;
c. If the number of remaining regions is less than the pre-defined number, stop; otherwise, continue.

By pre-setting the number of remaining regions, the non-interesting areas are merged into the background, and the region of interest and its coordinate information are finally obtained.
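The whole marker-controlled pipeline can be sketched in a few lines of Python with scikit-image and SciPy; this is an illustrative reconstruction under stated assumptions, not the authors' VTK-based system, and Otsu's threshold is used here as a simple stand-in for the two-dimensional maximum entropy threshold described above.

import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.filters import threshold_otsu      # stand-in for the entropy threshold

def segment_roi(image, roi_mask):
    img = image.astype(float) * roi_mask        # intersect with the delineated contour
    # single-scale morphological gradient: dilation minus erosion (Eq. 2)
    grad = ndi.grey_dilation(img, size=3) - ndi.grey_erosion(img, size=3)
    # mark candidate minima and suppress pseudo-minima by thresholding
    markers, _ = ndi.label(grad < threshold_otsu(grad))
    # the markers act as the imposed minima; watershed then segments (Eqs. 9-10)
    return watershed(grad, markers, mask=roi_mask.astype(bool))

The pre-set number of remaining regions and the region merging of steps a-c would then be applied to the returned label image.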


Experimental Results

Fig. 2 shows the interactive delineation process of this method, and Fig. 3 shows the closed curve after interactive delineation.

Figure 2 Interactive Delineation Process

Figure 3 Closed Curve

The closed curve shown in Fig. 3 is stretched along the coordinates of the original image by the vtkLinearExtrusionFilter class provided by the VTK toolkit; the result is shown in Fig. 4.

Figure 4 Closed Curve after Stretching

The intersection operation is performed between the stretched curve and the original image; the result is shown in Fig. 5.

Figure 5 Intersection Between the Stretched Curve and the Original Image

The local minima are marked in the gradient image, and threshold processing is applied to the marked values. The modified marked values are imposed as the local minima of the gradient image, and the Watershed algorithm is used to segment the modified gradient image and extract the data of the region of interest; the result is shown in Fig. 6.


Figure 6 The Image of the Region of Interest

In order to record the data of the region of interest completely while avoiding redundant information, a ternary array is used: each entry expresses the two-dimensional coordinates and the property of a pixel ("1" expresses the target volume, "2" expresses an organ at risk). The position of the currently delineated image and the bounding rectangle coordinates of the delineated image are given at the beginning of the output file, as shown in Fig. 7.

Figure 7 Coordinate Information of the Region of Interest

Summary

This paper proposes a method based on the VTK toolkit to implement the data generation in the IMRT inverse planning system. It not only overcomes the limitations of the classical filling algorithms in the IMRT inverse planning system and implements quick and accurate extraction of the coordinates of the region of interest from medical images, but also makes it possible to obtain the segmented image of the region of interest.

Acknowledgment

The work is supported by the National 973 Planned Project (2006CB708307), the National Natural Science Foundation of China (60872112 and 10805012), the Natural Science Foundation of Zhejiang Province (Z207588), and the College Science Research Projects of Anhui Province (KJ2008B268).


References

[1] Zhuo Chen. "Relevant Research on Visualization Based on VTK and Its Application in TPS", Hefei University of Technology (2004).
[2] La-sheng Yu, De-yao Shen. "A Refinement of the Scan Line Seed Fill Algorithm", Computer Engineering, vol. 29, 2003, p. 70-74.
[3] Xi-yao Chen, Wei Chen, Li-fang Tong. "The Existing Problem and Solution for Arithmetic of Scan-Line Filling", Journal of Northeast Dianli University Natural Science Edition, vol. 26, 2006, p. 52-56.
[4] Xiao-song Hao. "The common problems and solutions of boundary-labeling method in the course of realization", Journal of Xi'an University of Engineering Science and Technology, vol. 20, 2006, p. 215-220.
[5] B.D. Ackland, N.H. Weste. "The edge flag algorithm - A fill method for raster scan display", IEEE Transactions on Computers, vol. 30, 1981, p. 41-48.
[6] M.R. Dunlavey. "Efficient polygon-filling algorithms for raster displays", ACM Transactions on Graphics, 1983.
[7] Shi-liang Xu. "The computer algorithm", Tsinghua University Press, Beijing, 1992.
[8] Hua Ma, Feng Liu, Chun-li Ren. "Computer aided drawing of the Bezier curve", Journal of Xidian University, vol. 29, 2002, p. 566-571.
[9] Nian Cai, Xiao-yan Tang, Shao-rui Xu, Fang-zhen Li. "Segmentation of MELK images based on watershed algorithm", Application Research of Computers, vol. 26, 2009, p. 3175-3191.
[10] Hong-xia You, Wen-bo Xu. "Iris Segmentation Based on Watersheds Algorithm", Micro-Computer Information, vol. 21, 2005, p. 175-181.
[11] E.N. Mortensen, W.A. Barrett. "Toboggan-based intelligent scissors with a four-parameter edge model", Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Washington DC: IEEE Computer Society, 1999, p. 452-458.
[12] V. Vapnik. "The nature of statistical learning theory", NY: Springer, 1995.
[13] D. Wang. "Unsupervised video segmentation based on watersheds and temporal tracking", IEEE Trans. on Circuits and Systems for Video Technology, vol. 8, 1998, p. 539-546.
[14] Xin-ping Guan, Na Huang, Ying-gan Tang. "New watershed segmentation algorithm via marker threshold", Systems Engineering and Electronics, vol. 31, 2009, p. 972-975.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.297

Topology Optimization in the Conceptual Design: Take the Frame of a Bender as Example

WANG Yong a, ZHU Guo-niu b, SUN Bo-yu

School of Mechanical and Automobile Engineering, Hefei University of Technology, No. 193, Tunxi Road, Hefei, 230009, China

a email:[email protected], b email:[email protected]

Key words: Topology Optimization, Design Process, Conceptual Design, SIMP, Bender

Abstract. The paper is concerned with topology optimization in the mechanical design process. The disadvantages of the current mechanical design process are discussed, and a new design process based on structural topology optimization is presented. The design process with structural topology optimization is illustrated by the example of the frame of a bender. According to the whole structure and the working characteristics of the machine, a static analysis of the original model is made first and the stress and deformation distributions are obtained; topology optimization is then carried out. On the basis of the topology optimization, the layout of the initial design proposal is obtained, and the weight of the frame is substantially reduced while its performance is enhanced. The application of the method demonstrates that, through innovative use of topology optimization techniques, conceptual proposals can be obtained and the overall mechanical design process can be improved substantially in a cost-effective manner.

Introduction

The motivation of exploiting the available limited resources in a manner that maximizes output or profit is an everlasting topic of human technological activities. The topology optimization method solves the problem of distributing a given amount of material in a design domain subject to load and support conditions, such that the stiffness of the structure is maximized. Since its first introduction in 1988 by Bendsøe and Kikuchi, the method has gained widespread popularity in academia and industry and is now being applied to the design of automotive and airplane structures as well as in materials, mechanisms and Micro Electro Mechanical Systems (MEMS) [1, 2].

The design process followed during a typical industrial development can be broken into the following distinct phases: conceptual design, detailed design, physical model, test and, finally, production. Normally, the feedback from the simulations indicates only changes in the detailed design (CAD). These loops are rather cheap compared with the situation in which changes in the conceptual design are enforced; then the whole development process may be relocated to its conceptual phase, which is usually expensive in both time and cost and threatens to delay the whole schedule. Due to the fundamental role of the conceptual design phase, topology optimization has become a valuable computational tool for the basic layout [3].

Structural Optimization in Mechanical Design

The traditional development process is a manual process of iterative design. Engineers carry out the mechanical design with the aid of CAD, then submit it to the factory for manufacturing, and finally perform a full-scale test on the physical model. If the results cannot satisfy the functional requirements or the model fails, it is necessary to modify the mechanical design, or even redesign it, repeatedly until the physical model meets all requirements in the physical test. The traditional design process depends mainly on engineers' experience; it is relatively expensive and time-consuming and can no longer satisfy the requirements of modern mechanical design. Fig. 1 illustrates the traditional mechanical design process.


Fig. 1 Traditional mechanical design process

With the development of computer technologies, various numerical methods, such as the finite element method, the boundary element method and multi-body dynamics techniques, have been widely utilized in mechanical design. After accomplishing the preliminary design, engineers may carry out a virtual test (CAE) on the CAD model to verify the working stress, movement, life span and other performances of the product. The design may immediately be returned to the engineers for modification or redesign if the results cannot satisfy the functional requirements, which greatly reduces the time and cost of physical testing. Fig. 2 illustrates the CAE mechanical design process.

Fig. 2 CAE mechanical design process

However, there is a limitation of the CAE techniques as widely used in enterprises: they are still only applied in the later stage of mechanical design, for design verification. Engineers do not have sufficient degrees of freedom to make a comprehensive improvement to the structure; if a problem appears at this stage, they can only make a local adjustment and hope that it will not cause other problems. The origin of the problem is that in the early phase of the design, the conceptual design stage with the largest design freedom, engineers can only rely on their experience or imagination, which makes it very difficult to consider all the properties of the product accurately at the same time and, all too often, due to the limitation of experience, fails to produce innovative designs. CAE technology has rarely been effectively utilized at this stage. Because CAE simulations begin after a CAD draft has been completed, information from the simulation usually comes so late that the design phase of a part can hardly be influenced, which has a negative impact on the efficiency of the component in question [4].

At present, structural optimization technologies such as sizing optimization, shape optimization and topology optimization have reached a level of maturity and have been successfully used in mechanical design, changing the traditional design process thanks to their unique advantages. Structural optimization technologies take all the necessary properties of the product into consideration, find the optimum design ideas within a given design space in the conceptual design phase, and give improvement schemes directly rather than just checking the product, which really helps engineers to design innovative and reliable products in the virtual test phase. Topology optimization is brought into play at this point in the process chain: already in the design phase, information is provided regarding loads and the optimal weight of the component geometry in the available design space. Fig. 3 illustrates the optimization-oriented mechanical design process.

design space -> topology optimization -> conceptual model -> CAD -> virtual test -> physical model -> test -> produce

Fig. 3 Optimization-oriented mechanical design process

The Topology Optimization Problem

In the literature, one can find a multitude of approaches for solving topology optimization problems. The power-law approach, also called SIMP (Solid Isotropic Material with Penalization), has gained general acceptance in recent years due to its conceptual simplicity and computational efficiency. In the SIMP method, material properties are assumed constant within each element used to discretize the design domain, and the design variables are the element relative densities.


The material properties are modeled as the relative material densities raised to some power times the properties of the solid material [5]. A topology optimization problem to minimize the compliance of the structure based on the power-law approach can be written as:

$\min_{\mathbf{x}} : \; c(\mathbf{x}) = \mathbf{U}^{T}\mathbf{K}\mathbf{U} = \sum_{e=1}^{N} (x_e)^p \, \mathbf{u}_e^{T} \mathbf{k}_e \mathbf{u}_e$

$\text{subject to} : \; \frac{V(\mathbf{x})}{V_0} = f, \quad \mathbf{K}\mathbf{U} = \mathbf{F}, \quad 0 < x_{\min} \le \mathbf{x} \le 1$                  (1)

where $\mathbf{x}$ is the vector of design variables, $c(\mathbf{x})$ is the compliance of the structure, $\mathbf{U}$ and $\mathbf{u}_e$ are the global and element displacement vectors, respectively, $\mathbf{K}$ and $\mathbf{k}_e$ are the global and element stiffness matrices, respectively, $\mathbf{F}$ is the load vector, $V(\mathbf{x})$ and $V_0$ are the material volume and the design domain volume, respectively, $f$ (volfrac) is the volume fraction, $x_{\min}$ is a vector of minimum relative densities, $N$ is the number of elements used to discretize the design domain, and $p$ is the penalization power. Bendsøe and Sigmund proved that the power-law approach is perfectly valid when $p$ is sufficiently large (to obtain true '0/1' designs, $p \ge 3$ is usually required). To ensure the existence of solutions, the SIMP approach must be combined with a perimeter constraint, a gradient constraint or filtering techniques [6-8].
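For illustration, the sketch below shows the SIMP quantities of Eq. 1 in Python, under the loud assumption that a finite element solver elsewhere supplies the element displacement vectors; it evaluates the compliance and its density sensitivities and performs one optimality-criteria style density update. It is a schematic example, not the OptiStruct workflow used later in this paper.

import numpy as np

def simp_compliance_and_sensitivity(x, u_elems, k_e, p=3):
    # strain energy u_e^T k_e u_e per element (u_elems: N x d, k_e: d x d)
    w = np.einsum('ei,ij,ej->e', u_elems, k_e, u_elems)
    c = np.sum(x**p * w)                        # compliance c(x) of Eq. 1
    dc = -p * x**(p - 1) * w                    # sensitivity dc/dx_e
    return c, dc

def oc_update(x, dc, volfrac, x_min=1e-3, move=0.2):
    # bisection on the Lagrange multiplier of the volume constraint V(x)/V0 = f
    lo, hi = 1e-9, 1e9
    while hi - lo > 1e-8 * (lo + hi):
        lam = 0.5 * (lo + hi)
        x_new = np.clip(x * np.sqrt(-dc / lam),
                        np.maximum(x - move, x_min),
                        np.minimum(x + move, 1.0))
        if x_new.mean() > volfrac:
            lo = lam                            # too much material: raise the multiplier
        else:
            hi = lam
    return x_new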

Example: Topology Optimization of the Frame of a Bender

A numerical control bender mainly consists of a right-side frame, a left-side frame identical to the right one, a slider, an upper crossbeam, a lower crossbeam and the vertical board. In this section we illustrate the proposed approach by improving the performance of the frame, which is a key part of the bender. The initial layout of the frame is displayed in Fig. 4. The frame bears an opposing vertical force of nominally 550 kN at the C-grooves from the slider and the workpiece. According to the requirements of the working characteristics, and taking the surrounding connections into account, only the dark part in Fig. 4 is the design domain.

Fig. 4 CAD model of the frame

On the basis of the whole structure and the working characteristics of the machine, we first make a static analysis of the frame and obtain its stress and deformation distributions, as displayed in Fig. 5 and Fig. 6. From the figures, we know that the maximum stress and the maximum deformation of the frame are 115 MPa and 0.798 mm, respectively. The maximum stress occurs at the C-grooves due to stress concentration, and the maximum deformation occurs at the upper right of the frame.


Fig. 5 Stress distribution of the frame

Fig. 6 Deformation distribution of the frame

On the basis of the static analysis, we carry out topology optimization of the frame. Volume and displacement are taken as the objective and the constraint, respectively. Fig. 7 illustrates the topology optimization raster of the frame model after 50 design iterations using the OptiStruct commercial topology optimization software. As a result, the weight is reduced by 47% and the first modal frequency increases by 19.8%, while the maximum deformation and stress remain allowable.

Fig. 7 Layout of topology optimization of the frame


Summary

The importance of topology optimization and its integration into the design process has been shown. The application of topology optimization in conceptual design was discussed through the example of the frame of a numerical control bender. On the basis of topology optimization, the weight is reduced by 47% and the first modal frequency increases by 19.8%, while the maximum deformation and stress remain allowable. The application of the method demonstrates that, through innovative use of topology optimization techniques, initial conceptual proposals can be obtained and the overall mechanical design process can be improved substantially in a cost-effective manner. Engineers may use the topology optimization approach to generate innovative concept design proposals and to obtain an optimal design proposal based on the design space, design targets and manufacturing process parameters [9]. However, this is only a conceptual design proposal, and there is still a lot to do to obtain the CAD model; putting it into engineering practice while considering manufacturing feasibility and other related problems is an urgent task for engineers.

References

[1] M. P. Bendsøe and N. Kikuchi: Comp. Meth. Appl. Mech. Eng. Vol. 71 (1988), pp. 197-224
[2] Information on http://www.topopt.dtu.dk
[3] Ramon Stainko: Advanced Multilevel Techniques to Topology Optimization. (Johannes Kepler University Linz, Austria 2006)
[4] Information on http://www.fe-design.com
[5] M. P. Bendsøe: Struct. Optim. Vol. 1 (1989), pp. 193-202
[6] O. Sigmund: Struct. Multidisc. Optim. Vol. 21 (2001), pp. 120-127
[7] M. P. Bendsøe and O. Sigmund: Arch. Appl. Mech. Vol. 69 (1999), pp. 635-654
[8] O. Sigmund and J. Petersson: Struct. Optim. Vol. 16 (1998), pp. 68-75
[9] Information on http://www.altairhyperworks.com

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.302

An Efficient DoS Attacks Detection Method Based on Data Mining Scheme

Xiang Chen

Guangxi Vocational & Technical Institute of Industry, Guangxi, 530001, China

[email protected]

Key words: DoS detection, Quality of Service (QoS), TCM-KNN, Web server

Abstract. To defend against DoS attacks and ensure the QoS of web servers, we first propose an efficient network anomaly detection method based on the TCM-KNN (Transductive Confidence Machines for K-Nearest Neighbors) algorithm. Secondly, we integrate a set of objective and efficient DoS impact metrics, taken from the perception of the end users, into the TCM-KNN algorithm to build a robust anomaly detection mechanism. Finally, a Genetic Algorithm (GA) based instance selection method is introduced to boost the real-time detection performance of our method.

Introduction

The web server is a critical and necessary component of Internet applications, and web applications dominate most network traffic nowadays, while suffering from a great number of attacks, especially Denial-of-Service (DoS) attacks. DoS attacks significantly degrade the service quality experienced by legitimate users. The key point of DoS defense is to detect the attack as soon as possible and neutralize its effect, thereby quickly and fully restoring the quality of the various services to levels acceptable to the users. Current evaluation methodologies measure DoS damage superficially and partially, by measuring a single traffic parameter, such as duration, loss or throughput, and showing its divergence during the attack from the baseline case. These measures do not consider the quality-of-service requirements of different applications and how they map into specific thresholds for various traffic parameters; they thus fail to measure the service quality experienced by the end users.

Mirkovic et al. proposed a novel measurement of DoS impact on web applications from the perspective of end users in [1]. In essence, the new measurement can evaluate the impact of DoS more accurately, since "they measure DoS impact as a percentage of transactions that have not met their Quality-of-Service (QoS) requirements and aggregate this measure into several metrics that expose the level of service denial". However, the authors of [1] did not give any effective method or algorithm to substantially use this measurement for DoS detection; to adopt it in real applications, it should be further improved, and the first and most important problem to be addressed, in our opinion, is how to integrate all the measured parameters into a reasonable DoS detection algorithm.

Therefore, we first put forward an effective anomaly detection method based on the TCM-KNN algorithm to ensure the QoS of web servers. It detects anomalies with a higher detection rate, higher confidence and fewer false positives than traditional methods, because it combines "strangeness" with "p-values" to evaluate the network traffic, in contrast to conventional ad-hoc threshold based detection and particular definition based detection (e.g., density-based) [2]. Secondly, we take the new measurement proposed in [1] as the input feature space of TCM-KNN to detect DoS attacks against the web server. The preliminary experimental results demonstrate that it is effective for anomaly detection for web servers and can be further optimized for real applications in our future work.


Introduction to the TCM-KNN Algorithm

Transductive Confidence Machines (TCM) introduced the computation of confidence using algorithmic randomness theory [3]. Unlike traditional methods in data mining, transduction can offer measures of reliability for individual points, and it uses very broad assumptions apart from the iid assumption (the training points as well as the new, unlabelled points are independently and identically distributed). There exists a universal method of finding regularities in data sequences. The p-value serves as a measure of how well the data supports or contradicts a null hypothesis (that the point belongs to a certain class): the smaller the p-value, the greater the evidence against the null hypothesis (i.e., the point is an outlier with respect to the currently available classes) [4]. Users of transduction as a test of confidence have approximated a universal test for randomness (which, in its general form, is non-computable) by a p-value function called the strangeness measure. The general idea is that the strangeness measure corresponds to the uncertainty of the point being measured with respect to all the other labeled points of a class: it defines the strangeness of the point in relation to the rest of the points. In our case the strangeness measure for a point $i$ with label $y$ is defined as Eq. 1:

$\alpha_i^y = \sum_{j=1}^{k} D_{ij}^y$                  (1)

where $D_i^y$ denotes the sorted sequence (in ascending order) of the distances of point $i$ from the other points with the same classification $y$; thus $D_{ij}^y$ stands for the $j$th shortest distance in this sequence. Provided with the definition of strangeness, we use Eq. 2 to compute the p-value as follows:

$p(\alpha_t) = \frac{\#\{i : \alpha_i \ge \alpha_t\}}{n + 1}$                  (2)

In Eq. 2, $\#$ denotes the cardinality of the set, computed as the number of elements in the finite set, and $\alpha_t$ is the strangeness value of the test point.

Detecting DoS Attacks Based on TCM-KNN

For each web transaction, as in [1], we measure five parameters: (1) one-way delay, (2) request/response delay, (3) packet loss, (4) overall transaction duration and (5) delay variation (jitter). Jointly, these parameters capture a variety of application QoS requirements. The first step in detecting DoS attacks from these measures is to map them into a feature vector. Using these feature vectors, we can build a pattern model of the normal behavior experienced by the end users of the web application, and thereby acquire a set of points (feature vectors) as a training dataset for TCM-KNN. Because TCM-KNN does not need to build a classifier, its only input parameters are $k$, the number of nearest neighbors, and $\tau$, the detection threshold; they were empirically set to 300 and 0.05, respectively, in our experiments. The detailed algorithm is depicted in Fig. 1 [2].


Parameters: k (the number of nearest neighbors), m (size of the training dataset),
            τ (preset threshold), r (instance to be diagnosed)

for i = 1 to m {
    calculate D_i^y for each point in the training dataset and store it;
    calculate the strangeness α according to Eq. 1 for each point and store it;
}
calculate the strangeness for r according to Eq. 1;
calculate the p-value for r according to Eq. 2;
if (p ≤ τ)
    determine r as an anomaly with confidence 1 − τ and return;
else
    claim r is normal with confidence 1 − τ and return;

Fig. 1. TCM-KNN algorithm for anomaly detection

In more detail, in the training phase we collect a large number of points formed by the above five parameters. These points represent the normal web server status (i.e., the QoS experienced by the end users); therefore, when a DoS attack occurs, the web server status changes and the corresponding points differ from the normal ones. With our TCM-KNN algorithm, these abnormal points can be diagnosed successfully in the detection phase, so we may claim that the web server is under a DoS attack and take the corresponding countermeasures; the effect of the attack on the web server and its service is thereby effectively alleviated, which greatly helps to ensure the QoS of the web server.
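The detection step of Fig. 1 can be written compactly in Python; the sketch below is our reading of Eqs. 1 and 2, and the synthetic five-dimensional QoS vectors and the small k in the example are illustrative, not the experimental settings.

import numpy as np

def tcm_knn_detect(train, r, k=300, tau=0.05):
    # strangeness of every training point (Eq. 1): sum of its k smallest
    # distances to the other points of the same (normal) class
    d_train = np.linalg.norm(train[:, None, :] - train[None, :, :], axis=2)
    np.fill_diagonal(d_train, np.inf)           # exclude self-distances
    alphas = np.sort(d_train, axis=1)[:, :k].sum(axis=1)
    # strangeness of the instance under test
    alpha_t = np.sort(np.linalg.norm(train - r, axis=1))[:k].sum()
    # p-value (Eq. 2), counting the test point itself among the n + 1
    p = (np.sum(alphas >= alpha_t) + 1) / (len(train) + 1)
    return ('anomaly' if p <= tau else 'normal'), p

# Example with hypothetical 5-dimensional QoS feature vectors.
rng = np.random.default_rng(0)
normal_points = rng.normal(0.0, 1.0, size=(500, 5))
label, p_value = tcm_knn_detect(normal_points, rng.normal(5.0, 1.0, size=5), k=20)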


Relevant Optimization

Since the detection method requires finding the k nearest neighbors in each class, we need O(n) distance computations per point to be diagnosed, where n is the number of data points in the normal dataset; hence, to diagnose s points, the complexity is O(ns). Moreover, to find the k nearest neighbors within the normal dataset itself, we require O(n²) comparisons. We observe that this step is done off-line and only once, before the detection of anomalies starts. However, if n is very large, the off-line computation may still be too costly. Therefore, we adopt a genetic algorithm to select instances for the detection method, aiming to reduce the computational cost.

Genetic Algorithms (GA) are optimization algorithms based on natural genetics and selection mechanisms [5]. The basic concept of GA is to simulate the processes in natural systems necessary for evolution, specifically those that follow the principle of survival of the fittest first laid down by Charles Darwin. To apply a genetic algorithm to a particular problem, the problem has to be decomposed into atomic units that correspond to genes. The genetic information (chromosome) is represented by a bit string, and sets of bits encode the solution; the bit string may be of variable length. Individuals are then built in correspondence with a finite string of genes, and a set of individuals is called a population. A criterion needs to be defined: a fitness function F which, for every individual in a population, gives F(x), a value expressing the quality of the individual with regard to the problem we want to solve.

To apply GA to instance selection for the training dataset of TCM-KNN, two important issues should be addressed: the specification of the representation of the solutions and the definition of the fitness function.

1) Representation: For TCM-KNN, the training dataset is denoted as TR, with m instances. The search space associated with instance selection from TR is constituted by all the subsets of TR; hence, the chromosomes should represent subsets of TR. This is accomplished by a binary representation: a chromosome consists of genes (one for each instance in TR) with two possible states, 0 and 1. If a gene is 1, its associated instance is included in the subset of TR represented by the chromosome; if it is 0, it is not. After running the GA, the selected chromosome gives the reduced training dataset for TCM-KNN.

2) Fitness Function: Let S be a subset of instances of TR to evaluate, coded by a chromosome. For the TCM-KNN based anomaly detection task, three measures should be considered: the detection rate, the false positive rate and the percentage of training dataset reduction. Therefore, we define a fitness function that combines the three values: the detection rate (detect_rate) associated with S, the false positive rate (fal_rate), and the percentage of reduction (reduce_rate) of the instances of S with regard to TR. The TCM-KNN classifier is used for measuring the detection rate and the false positive rate:

F(x) = C · (detect_rate − fal_rate) + (1 − C) · reduce_rate                  (3)

where reduce_rate is defined as

reduce_rate = (|TR| − |S|) / |TR| × 100                  (4)

and |TR| and |S| stand for the sizes of the original training dataset and of the reduced training dataset obtained with the GA. C is an adjustment constant, always set by experience; in our experiments we set C = 0.6 empirically, as per a previous experiment in which we found this value gave the best tradeoff between precision, false positives and reduction. The objective of the GA is to maximize the fitness function, i.e., to maximize the detection rate while minimizing the number of retained instances as well as the false positive rate. We use TCM-KNN to evaluate the fitness of a chromosome.

Besides the fitness function and the representation problem, the three operators (namely recombination, selection and mutation) should also be considered when applying GA to anomaly detection. In the published literature, a great number of search strategies for GA have been proposed, including various implementations of the three operators. Four well-known models are used in this paper: the first two are GGA (Generational Genetic Algorithm) and SGA (Steady-State Genetic Algorithm), the third is CHC (heterogeneous recombination and cataclysmic mutation) adaptive search, and the last is PBIL (Population-Based Incremental Learning); their principles can be found in [6]. Table 1 illustrates the optimization results of GA based instance selection for TCM-KNN.

Experimental Results

To test the effectiveness of the method presented in this paper, we built an experimental environment in our lab, which contains a server (on the Linux platform, running the Apache web server), two normal user clients (on the Linux platform) and ten malicious user clients (on the Linux platform). During operation, the normal user clients send raw packets (web server accessing packets) towards the server to compute the five parameters discussed before and then construct the relevant normal feature vectors; the detection engine also resides on this machine. To detect DoS attacks, it uses the statistical results as input parameters and feeds them to the TCM-KNN algorithm to get the detection results. Moreover, after the training phase, we use the ten malicious user clients to launch a series of DoS attacks toward the web server. When the web server is under attack and the attack affects the QoS of the web server, our detection engine based on the TCM-KNN algorithm can successfully detect it and report.


We collected points representing 72 hours of web server status as our training points; with a sampling time slot of 5 seconds, this gives 51,840 points, all normal. In the following hour (our detection phase), we collected the corresponding points every 5 seconds, acquiring 720 points, which are normal or abnormal depending on the launched DoS attacks. We found that we could determine the abnormal points with an accuracy of 100% and only 3.18% false positives. These ideal results are attributable to two factors: one is the adoption of the end users' experience to evaluate the effect of DoS attacks on the QoS of the web server; the other is the employment of the effective TCM-KNN anomaly detection algorithm to detect anomalies for the web server.

Moreover, to verify the effectiveness of instance selection for TCM-KNN, we compared the detection performance and computational cost (consumed time) of the original TCM-KNN and the optimized TCM-KNN. In Table 1, the building time denotes the time for calculating the strangeness and p-values of the training data, and the detection time represents the time for diagnosing all the test data. By optimizing TCM-KNN, we greatly reduce both the building time and the detection time; although the reduction is substantial, the TP (true positive) rate stays high and the FP (false positive) rate remains manageable. Hence, these results substantially evidence that our TCM-KNN algorithm can be effectively optimized into a lightweight anomaly detection method.

Table 1. Results on the original and the new dataset after instance selection

Algorithm              Building time    Detection time   TP       FP
TCM-KNN (original)     22218.62 Sec.    299.81 Sec.      100%     3.18%
TCM-KNN (optimized)    363.86 Sec.      100.58 Sec.      99.38%   3.87%

Conclusions and Future Work

This paper presents our work on how to effectively detect anomalies, especially DoS attacks, against web servers based on the TCM-KNN algorithm, and hence ensure the QoS of the web server. In future work, we will attempt to detect anomalies for web servers by combining active methods with passive detection methods. The former are based on the users' perception of the web server status, as discussed in this paper; the latter rely on sniffing the incoming/outgoing network traffic of the web server and computing a number of statistics. A data fusion mechanism would then combine the results of the two detection methods to determine whether the web server is under attack. We believe such comprehensive, data fusion based detection is more robust and effective than a single active or passive detection, especially in reducing unnecessary false positives. In this way, we hope to detect anomalies for the web server more accurately and reasonably, and hence ensure the QoS.


References

[1] J. Mirkovic, P. Reiher, S. Fahmy, and R. Thomas, "Measuring Denial of Service," in Proc. ACM QoP'06, pp. 53-58, 2006.
[2] Y. Li, B.X. Fang, L. Guo, and Y. Chen, "Network Anomaly Detection Based on TCM-KNN Algorithm," in Proc. ACM ASIACCS 2007, pp. 13-19.
[3] M. Li and P. Vitanyi, Introduction to Kolmogorov Complexity and its Applications. Springer Verlag, 1997.
[4] K. Proedrou, I. Nouretdinov, V. Vovk, and A. Gammerman, "Transductive confidence machines for pattern recognition," in Proc. 13th European Conference on Machine Learning, pp. 381-390, 2002.
[5] L. Jourdan, C. Dhaenens, and E.-G. Talbi, "A Genetic Algorithm for Feature Selection in Data-Mining for Genetics," in Proc. of the 4th Metaheuristics International Conference, pp. 29-33, 2001.
[6] J.R. Cano, F. Herrera, and M. Lozano, "Using Evolutionary Algorithms as Instance Selection for Data Reduction in KDD: An Experimental Study," IEEE Transactions on Evolutionary Computation, pp. 561-575, 2006.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.308

An Effective Intrusion Detection Model Based on Random Forest and Neural Networks

Shaohong Zhong a, Huajun Huang and Aibin Chen

Computer and Information Engineering College, Central South University of Forestry & Technology, Changsha, Hunan, 410004, China

a [email protected]

Key words: Intrusion detection, Neural networks, Random forest

Abstract. Intrusion detection is a very important research domain in network security. Current intrusion detection systems (IDS), especially NIDS (Network Intrusion Detection Systems), examine all data features to detect intrusions, and many machine learning and data mining methods are utilized to fulfill intrusion detection tasks. This paper proposes an intrusion detection model that is computationally efficient and effective, based on a Random Forest feature selection approach and a Neural Network (NN) model. We first utilize the random forest method to select the most important features; eliminating the insignificant and/or useless inputs simplifies the problem and leads to faster and more accurate detection. Secondly, a classic NN model is used to learn and detect intrusions using the selected important features. Experimental results on the well-known KDD 1999 dataset demonstrate that the proposed hybrid model is effective.

Introduction

The Intrusion Detection System (IDS) plays the vital role of detecting various kinds of attacks. The main purpose of an IDS is to find intrusions among normal audit data, which can be considered a classification problem. The two basic detection methods are signature based and anomaly based [1]. The signature-based method, also known as misuse detection, looks for a specific signature to match, signaling an intrusion. Such systems can detect many or all known attack patterns, but they are of little use against as yet unknown attack methods; most popular intrusion detection systems fall into this category. The other approach, anomaly detection, is computationally expensive because of the overhead of keeping track of, and possibly updating, several system profile metrics.

Many IDSs have been developed during the past three decades, and most commercial and freeware IDS tools are signature based. As new attacks appear and the amount of audit data increases, IDSs should counteract them. In addition, as networks become faster, there is an emerging need for security analysis techniques able to keep up with the increased network throughput [2]. Therefore, an IDS itself should be lightweight (i.e., of relatively low computational cost) while guaranteeing high detection rates. One of the main problems with IDSs is the overhead [3]; detecting intrusions in real time is therefore a difficult task.

In this paper, we propose an effective intrusion detection model, focusing mainly on how to detect intrusions effectively in a real-time network environment. Firstly, we extract several necessary and very important features (called core features) from the KDD 1999 dataset by means of an effective feature selection method, Random Forest. Secondly, we adopt a neural network model to learn and form a classifier that detects attacks based on the selected features. The results of experiments on the KDD 1999 dataset indicate the feasibility of our model.


Our Intrusion Detection Model

The overall model of our approach is divided into two important phases. In the training process, network traffic data is preprocessed (data packets are labeled with classes such as normal and abnormal) and passed to our feature-selection engine using the random forest approach; the resulting dataset is then used to build the neural network based intrusion detection model using the selected features. In the testing process, network traffic data is sent directly to our intrusion detection model for detection. The main advantage of our lightweight model is that, by means of feature selection, it greatly reduces the redundant and least important features, and therefore the computational cost of intrusion detection. Moreover, the neural network model is proved to be a good classifier when provided with suitable input features, and it is very effective in the field of intrusion detection.

Feature Selection

In terms of feature selection, several studies have proposed identifying important intrusion features through wrapper, filter and hybrid approaches. The wrapper method exploits a machine learning algorithm to evaluate the goodness of a feature or feature set. The filter method does not use any machine learning algorithm to filter out irrelevant and redundant features; rather, it uses the underlying characteristics of the training data to evaluate the relevance of a feature or feature set by some independent measure, such as a distance, correlation or consistency measure [4], [5]. The hybrid method combines the wrapper and filter approaches. Even though a number of feature selection techniques have been utilized in the fields of web and text mining and speech recognition, there are very few analogous studies in the intrusion detection field. A typical hybrid algorithm [6] (shown in Fig. 1) makes use of both an independent measure and a learning algorithm to evaluate feature subsets: it uses the independent measure to decide the best subset for a given cardinality, and uses the learning algorithm to select the final best subset among the best subsets across different cardinalities. The quality of the results from the learning algorithm provides a natural stopping criterion in the hybrid model.

Fig.1 Hybrid algorithm


The overall flow of Random Forest (RF) is depicted in Fig. 2 [7]. The network audit data consists of a training set and a testing set; the training set is separated into a learning set and a validation set, and the testing set contains additional attacks that are not included in the training set. Although RF is robust against the over-fitting problem [8], the n-fold cross validation method was used to minimize generalization errors [9]. The learning set is used to train classifiers based on RF and to figure out the importance of each feature of the network audit data; these classifiers can be considered detection models in an IDS. The validation set is used to compute classification rates by estimating the OOB (out-of-bag) errors in RF, which are the detection rates in an IDS. Feature importance ranking is then performed according to the feature importance values obtained in the previous step: irrelevant features are eliminated and only the important features survive. In the next phase, only the important features are used to build detection models, which are evaluated on the testing set in terms of detection rates. If the detection rates satisfy the design requirement, the overall procedure ends; otherwise, it iterates.

Fig.2 Overall flow of Random Forest
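As a sketch of this ranking step, the snippet below uses the scikit-learn random forest as a stand-in for the RF setup described above (the estimator count and the cutoff of 12 features are illustrative assumptions):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rank_features(X, y, feature_names, keep=12):
    rf = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0)
    rf.fit(X, y)                                 # OOB error doubles as validation
    order = np.argsort(rf.feature_importances_)[::-1]
    return [feature_names[i] for i in order[:keep]], rf.oob_score_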

Intrusion Detection Based on a Neural Network Model

An artificial neural network consists of a collection of processing elements that are highly interconnected and transform a set of inputs into a set of desired outputs. The result of the transformation is determined by the characteristics of the elements and the weights associated with the interconnections among them; by modifying the connections between the nodes, the network is able to adapt to the desired outputs. The greatest strength of neural networks (NN) lies in non-linear solutions to ill-defined problems [10].

The typical back-propagation network has an input layer, an output layer, and at least one hidden layer [11]. There is no theoretical limit on the number of hidden layers, but typically there are just one or two; some work indicates that a maximum of five layers (one input layer, three hidden layers and an output layer) is sufficient to solve problems of any complexity. Each layer is fully connected to the succeeding layer. As noted above, the training process normally uses some variant of the Delta Rule, which starts from the calculated difference between the actual outputs and the desired outputs; using this error, connection weights are increased in proportion to the error times a scaling factor for global accuracy. Doing this for an individual node means that the inputs, the output, and the desired output all have to be present at the same processing element. The complex part of this learning mechanism is for the system to determine which input contributed the most to an incorrect output and how that element should be changed to correct the error; an inactive node does not contribute to the error and has no need to change its weights. To solve this problem, training inputs are applied to the input layer of the network, and desired outputs are compared at the output layer.


During the learning process, a forward sweep is made through the network, and the output of each element is computed layer by layer. The difference between the output of the final layer and the desired output is back-propagated to the previous layer(s), usually modified by the derivative of the transfer function, and the connection weights are normally adjusted using the Delta Rule; this process proceeds through the previous layer(s) until the input layer is reached.

The most important reason that makes us adopt NN for intrusion detection is that it can effectively solve problems with the following distinct characteristics [12]: (1) a large amount of input (training data, including "normal" and "abnormal") / output (various attack types and the normal type) data is available, but we are not sure how to relate it to the output; (2) the problem appears to have overwhelming complexity, but there is clearly a solution; (3) it is easy to create a number of examples of the correct behavior.

Experiments and Evaluations

In our preliminary work, we select the KDD 1999 dataset to test the performance of our NN based approach, because the dataset is still a common benchmark for evaluating techniques in IDS. Moreover, considering that our approach is independent of the realistic dataset, it is reasonable to select it as the benchmark.

Experimental Environment and Dataset

All experiments were performed on a Windows machine with an Intel(R) Pentium(R) 4 at 3 GHz and 2 GB RAM; the operating system platform is Microsoft Windows XP Professional. We have used an open source machine learning framework, Weka [13] (the latest Windows version is Weka 3.6). It is a collection of machine learning algorithms for data mining tasks, and it contains tools for data pre-processing, classification, regression, clustering, association rules, and visualization. For feature selection, we randomly selected a subset from the KDD 1999 dataset (it contains 536,872 records with 41 features and a class label; its classes include normal, DoS, Probe, U2R and R2L; approximately 20% of the patterns are normal, and the remaining 80% are attacks belonging to the four classes) and performed the random forest approach on it to acquire the most relevant and necessary features. To identify attacks, we adopted 10-fold cross validation to verify the feature selection results, and we also adopted 10-fold cross validation to evaluate our NN based lightweight detection model.

Results and Evaluations

From Table 1, we can clearly see that feature selection with the random forest method yields the 12 features below as the most important ones, which demonstrates that the results are reasonable and largely independent of the particular feature selection algorithm.

Table 1. Feature selection results based on random forest

Rank   Feature
1      src_bytes
2      dst_host_rerror_rate
3      dst_byte
4      dst_host_srv_rerror_rate
5      hot
6      num_compromised
7      srv_count
8      count
9      dst_host_srv_diff_host_rate
10     srv_rerror_rate
11     rerror_rate
12     service
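A minimal sketch of the corresponding NN training step (assuming the scikit-learn back-propagation MLP as a stand-in for the Weka model actually used, with an illustrative hidden layer size) is:

from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X_sel: records restricted to the 12 features of Table 1; y: class labels
# (normal, DoS, Probe, U2R, R2L). Both are assumed to be prepared upstream.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0),
)
# clf.fit(X_sel, y)  # accuracy then estimated with 10-fold cross validation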


It must be stated that the results in Table 3 were acquired using the top 12 features selected in Table 1 as the input learning features for the NN model. The results in Table 3 are remarkably good (especially for the detection of DoS attacks, whose accuracy is 100%) and better than those in Table 2, and they demonstrate two important facts: a) the selected features play the same important role in intrusion detection; b) the computational cost can be greatly reduced, without any loss of effectiveness, when we use the selected features instead of all 41 features. Therefore, they can be used in a real-time lightweight intrusion detection environment.

Table 2. Detection results on all 41 features

Class    Testing Time (Sec)   Accuracy (%)
Normal   30.3                 99.4
Probe    32.8                 99.3
DoS      31.9                 100
U2R      15.4                 99.6
R2L      16.2                 99.6

Table 3. Detection results on selected features

Class    Testing Time (Sec)   Accuracy (%)
Normal   20.7                 99.7
Probe    21.2                 100
DoS      21.03                100
U2R      10.77                99.8
R2L      10.7                 99.7

Conclusions

In this paper, we proposed a new lightweight intrusion detection model. First, feature selection based on the random forest approach is performed on the KDD 1999 training set, which is widely used by machine learning researchers. Second, using the NN model, the selected features are learned and used in intrusion detection. Experimental results on the KDD 1999 dataset demonstrate that the results are good and the model is reasonable. Moreover, its computational cost is relatively low thanks to the adoption of feature selection and NN, so it can be applied in a real-time intrusion detection environment. In future work, we will apply our model in a realistic environment to verify its real-time performance and effectiveness.

Acknowledgement

This paper is supported by the Youth Scientific Research Foundation of Central South University of Forestry & Technology (No. 2009048B) and the Planned Science and Technology Project of Hunan Province, China (No. 2010FJ3139).


References
[1] M. Bykova, S. Ostermann and B. Tjaden, "Detecting network intrusions via a statistical analysis of network packet characteristics", in Proc. of the 33rd Southeastern Symposium on System Theory, Athens, OH, IEEE, 2001.
[2] C. Kruegel and F. Valeur, "Stateful Intrusion Detection for High-Speed Networks", in Proc. of the IEEE Symposium on Research on Security and Privacy, pp. 285-293, 2002.
[3] T. Bass, "Intrusion detection systems and multisensor data fusion", Communications of the ACM, 43(4), pp. 99-105, 2000.
[4] M. Dash, H. Liu and H. Motoda, "Consistency based feature selection", in Proc. of the Fourth PAKDD, Kyoto, Japan, pp. 98-109, 2000.
[5] H. Almuallim and T.G. Dietterich, "Learning Boolean Concepts in the Presence of Many Irrelevant Features", Artificial Intelligence, vol. 69, nos. 1-2, pp. 279-305, 1994.
[6] H. Liu and L. Yu, "Towards integrating feature selection algorithms for classification and clustering", IEEE Transactions on Knowledge and Data Engineering, 17(3), pp. 1-12, 2005.
[7] D.S. Kim, S.M. Lee and J.S. Park, "Building Lightweight Intrusion Detection System Based on Random Forest", ISNN 2006, LNCS 3973, pp. 224-230, 2006.
[8] L. Breiman, "Random Forests", Machine Learning, 45(1), pp. 5-32, 2001.
[9] R.O. Duda, P.E. Hart and D.G. Stork, Pattern Classification, 2nd edn., John Wiley & Sons, Inc., 2001.
[10] A. Hyvärinen, J. Karhunen and E. Oja, Independent Component Analysis, John Wiley, New York, 2001.
[11] Introduction to Backpropagation Neural Networks. http://cortex.snowcron.com/neural_networks.htm
[12] D. Dasgupta and F. Gonzalez, "An immunity-based technique to characterize intrusions in computer networks", IEEE Transactions on Evolutionary Computation, 6(3), pp. 281-291, 2002.
[13] Weka Machine Learning Project. http://www.cs.waikato.ac.nz/~ml/

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.314

The Application of Cloud Storage in the Library

Zunxin Wang
Shandong University of Technology, Zibo, Shandong, China
[email protected]

Key words: Storage Technology, Cloud Storage, Library

Abstract. The development of digital libraries puts forward higher requirements for storage: traditional storage technology has become increasingly unable to meet the needs of library data storage, and the emergence of cloud storage technology provides new solutions. This paper analyzes the advantages of cloud storage technology, the different storage needs of different kinds of library data, and the problems of applying cloud storage in the library.

Introduction
The most direct pressure on storage comes from the growing amount of data. In recent years, with the continued development of digital library construction, the emergence of new electronic resources and service products (networked databases, database mirrors, CDs accompanying books, online video and so on) has directly led to geometric growth of data. The surge in the amount of library data makes data storage an increasing concern of the library community: libraries own more and more storage devices, and the investment in their management and maintenance keeps growing. To ensure data security and business continuity, libraries need to establish appropriate data backup and disaster recovery systems. In addition, the status of storage devices must be regularly monitored and maintained, and hardware and software must be updated and upgraded, which requires professional technical personnel and increases the library's maintenance, upgrade and management costs. Faced with this situation, traditional storage solutions have become increasingly inadequate and expensive. The emergence of cloud storage technology provides the library with a new kind of storage solution.

Cloud storage overview
Cloud storage is a very attractive storage technology: it fully virtualizes storage, which greatly simplifies the application link, saves construction costs for customers, and provides more storage and sharing capacity. All the devices in the storage cloud are completely transparent to users: any authorized user, anywhere, can connect to the cloud storage and access its space and data. Users do not need to care about storage device types, quantity, network architecture, storage protocols or application interfaces; the application is simple and transparent.

Cloud storage is a new concept extended and developed from the concept of cloud computing. It refers to a system that, through cluster applications, grid techniques or distributed file systems, gathers a large number of different types of storage devices in the network to work together through application software, and jointly provides data storage and business access functions externally. Therefore, cloud storage is a cloud computing system whose core is data storage and management.


Cloud storage is, at its core, the combination of application software with storage devices: the application software turns the storage devices into a storage service. To the user, cloud storage is not a specific device, but an aggregate of a great many storage devices and servers. A cloud storage user does not use a particular storage device, but a data access service provided by the whole cloud storage system. Strictly speaking, then, cloud storage is not storage but a service. Compared with traditional storage, a cloud storage system is not just hardware, but a complex system composed of network equipment, storage devices, servers, application software, public access interfaces, access networks and client programs. Each part takes the storage devices as the core and provides data storage and business access services externally through the application software.

The advantages of cloud storage technology
Compared with traditional storage, cloud storage has the following technical advantages.

Saving funds. With cloud storage, data is stored over the network in the online storage space provided by a storage service provider (SSP). Users who need storage services no longer have to build their own data centers; they only apply to the SSP for the service, avoiding duplicated storage platforms and saving expensive hardware and software infrastructure.

Convenient and easy to manage. Traditionally, libraries need professional IT staff for system maintenance. Traditional storage management is very complex: different manufacturers have different storage management interfaces, and data center staff often face a variety of storage products, so understanding the usage (capacity, load, etc.) of each storage unit becomes very complicated. With cloud storage, no matter how many storage servers there are, they appear to the administrator as a single storage unit, and the usage of every storage server can be seen in one management interface. Using cloud storage is also very convenient: because the data is no longer stored locally, most of the storage- and maintenance-related burden is removed, and once the system is online the user needs almost no maintenance operations.

Hardware redundancy and automatic failover. With traditional storage devices, hardware damage stops the service. Managers can prepare alternatives, such as a fully redundant environment (power, network, disk arrays, etc.), but such work is very costly and complicated. Cloud storage copies files to different servers, which solves the problem of potential hardware damage: the system knows where every file is stored, and if hardware is damaged it automatically redirects read and write commands to a copy of the file on another storage server, so the service continues.

Equipment upgrades do not interrupt the service. Upgrading a traditional storage system requires stopping the old storage, backing up its files, and putting a new storage device online, which stops the service. Cloud storage does not depend on a single storage server: updating or upgrading the hardware of one storage server does not affect the provision of the storage service; the system moves the files on the old server to other storage servers, and after the new server comes online the files are moved back.

Good scalability. Adding storage space in a cloud environment is very convenient, and expanding cloud storage is very simple; the storage capacity assigned to each project can even exceed the actual physical capacity. Traditional storage uses serial expansion: no matter how many extension boxes are added, there is always a limit. The cloud storage architecture expands in parallel: when capacity is not enough, purchasing a new storage server immediately increases the capacity, with almost no limit.


Load balancing. When there is more than one storage device, the workload is inevitably distributed unevenly: some storage units sit idle while others are overloaded, which becomes the bottleneck of overall storage performance. Cloud storage distributes the workload evenly across the storage servers, avoiding the bottlenecks caused by individual overloaded servers and maximizing the performance of the storage system.

Better security. Once cloud storage has been properly configured, it is as secure as data stored locally. Storing the data in a remote cloud avoids local disasters such as fire or flood, as well as malicious deletion of data. Improved security requires considering all possible situations: cloud storage vendors provide secure communication processes, for example requiring the SSL protocol to encrypt data during storage, while users must ensure that the passwords and access rights for their cloud storage are properly managed.

Library data types
Different types of data stored in the library have different storage demands. From the point of view of the library's daily operations and reader service model, there are mainly the following three categories.

Library business data. This includes the catalog data of the paper books collected over the years, the acquisition data of paper books over the years, and the circulation data of paper books over the years. The volume of such business data is relatively large, but the data structure is simple and the annual increment is basically stable, so this type of data demands high reliability, stability, security and access performance from the storage, while not asking much of the storage capacity.

E-resource data. This includes the networked databases, database mirrors, electronic books, electronic journals and multimedia resources introduced over the years. With the development of the digital library and network technology, electronic resources have become an important component of university library resources, and their share in university library information services keeps increasing. Universities introduce electronic resources frequently and with huge financial input, so the amount of e-resource data and its annual increment are very large, requiring huge storage capacity, and the library needs to constantly expand its storage devices to adapt to the increasing amount of data.

Library characteristic database data. This includes the library's self-built characteristic databases and the data of the CD-ROMs accompanying books. The volume of such data is medium, but the data are the library's characteristic collections or are exclusively owned by it, and the loss would be great if the data were lost, so they require storage with higher security.

Problems to be resolved in the library
The mind-set problem. In our country, libraries are more accustomed to paying for hardware rather than for services: the library always puts a lot of money into buying servers and storage products and building its own data center. With cloud storage as the infrastructure, libraries can "outsource" their own hardware and software to a piece of "cloud storage" and pay as they go, just like paying for electricity. Once this pattern is established, it will bring huge changes to library IT management: no need to purchase various servers and mass storage equipment, since cloud storage is used on demand and paid per use; no need for so many IT maintenance personnel; and freedom from the constraints of the automation system supplier, with the ability to readily switch to a system with better service and price.

How to choose cloud storage. Currently, many manufacturers have launched cloud storage products. Among the more famous is Atmos, EMC's cloud storage infrastructure solution: Atmos is a policy-based management system that enables service providers to build different types of cloud storage capacity. On the IBM side, IBM offers XIV, a new generation of cloud storage products.


XIV uses grid technology and greatly improves data reliability, capacity scalability and system manageability. HP's ExDS9100 (StorageWorks 9100 Extreme Data Storage) cloud storage product is a scalable mass-content document storage system. In addition, some domestic manufacturers have also launched cloud storage products. Faced with these products, how to choose according to the library's specific circumstances is something we must consider seriously.

Security considerations. For libraries, data security remains the primary consideration. Since cloud storage transmits data through ordinary transmission bandwidth, the security of data transmission must be ensured.

Conclusion
As a new technology, the application of cloud storage in the library still faces many problems. At present, cloud storage applications are still in the exploratory stage, but the existence of cloud storage provides a new choice for the storage systems of digital libraries. I believe that, with the library community's attention and the maturing of cloud storage technology, the application of cloud storage in the library is just around the corner.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.318

Path Planning Based on Warehousing Intelligent Inspection Robot in Internet of Things

Wei Liansuo a, Guo Yuan b, Dai Xuefeng c
College of Computer and Control Engineering, Qiqihar University, Heilongjiang 161006, China
[email protected], [email protected], [email protected]

Key words: Internet of Things, vague set, mobile robot, path planning

Abstract. A path planning approach for mobile robots in structured environments is presented, combining the Internet of Things with vague-set multi-objective decision-making. The information of environment constraints and path length is integrated into a fitness function, and candidate points are sorted by their function values to realize the path planning of the mobile robot. Finally, computer simulations prove that the algorithm is rational and can be used in real-time path planning of mobile robots.

Introduction
By using RFID (Radio Frequency Identification Devices) and wireless data communication, the Internet of Things builds a network that covers nearly all the things in the world. Through this net, things (goods) can "communicate" with each other without people's control. The technique is based on RFID and realizes auto-identification of things (goods) and of their information through the internet. When constructing the Internet of Things, RFID labels store normal and mutual information; by wireless data communication, this information is collected into the central information system, and thus the identification of things (goods) and transparent management can be realized through open computer network information exchange and sharing [1,2].

The utilization of RFID in intelligent warehousing management efficiently improves the information management of goods transfer. It not only increases the amount of goods that can be handled in a single day, but also supervises all the information about them. Today, the store management of big factories and supermarkets is very important to their logistics. However, the level of storage management in our country is relatively low compared to developed countries: most stores still rely on manual labor, which costs a lot of money to register the information of goods, vehicles and containers, wastes human resources, and carries a high risk of mistakes. Through the Internet of Things, warehouse management becomes efficient and accurate and tremendously saves human resources. In some big advanced warehouses, complete automation can be reached. This requires the mobile robot to rapidly find the object's information and arrive at the destination smoothly. Therefore, the mobile robot needs to automatically create a direct collision-free path according to its position.

Problem Description
According to the position of the goods, the problem can be described as follows: a visual robot with a reader, in a two-dimensional plane bounded by Edge and containing obstacles (goods), moves from an arbitrary position Ps to the position of the object Pm, and then to the final destination Pd, without colliding with any obstacle (other goods). The robot has no memory and has no sense of the global environment. However, it knows the position of Pd, and through its vision it can "see" the information around it within the radius rv; the boundary of this area is denoted ∂s, and the area is called the robot's visual field. The robot can identify the obstacles in its visual field but cannot see the information behind them. The points which the robot can observe through its vision are called measurable or visible. In the coordinate system, every point is represented by a horizontal and a vertical coordinate. The number of obstacles is finite, denoted Ob1, Ob2, …, Obn. The obstacles are also expanded


according to the size of the robot, so that the robot can be treated as a point. Here we only require that the obstacles are convex, do not intersect each other, and do not intersect the Edge. Define d(Pi,Pj) as the distance between Pi(xi,yi) and Pj(xj,yj) in the plane; then d(Pi,Pj) = √((xi − xj)² + (yi − yj)²). Under the condition that the environment is unknown and the robot has no memory, path planning based on floating windows means that the mobile robot decides its advancing direction according to the information from its limited visual field. The path created by such rules may not be the best one, but it realizes partial (local) optimization as the robot receives feedback from the environment. Since the final goal is to let the robot arrive at the destination Pd, the only premise is that the robot must pass the obstacles smoothly.

Vague Theory
Here we treat all the goods as an object set D = {D1, D2, …, Dm} and a constraint condition set C = {C1, C2, …, Cn} according to the space angle. Suppose the characteristics of object Di under the constraint conditions C, represented as a Vague table, are as follows [3-5]:

Di = {[C1, ti1, 1−fi1], [C2, ti2, 1−fi2], …, [Cn, tin, 1−fin]}   (1)

Here tij represents the degree to which object Di satisfies constraint condition Cj, and fij represents the degree to which Di dissatisfies Cj, with tij, 1−fij ∈ [0,1], tij + fij ≤ 1, 1 ≤ j ≤ n and 1 ≤ i ≤ m. The robot needs to select a plan from the object set that satisfies the constraint conditions Cj, Ck, …, Cp or the condition Cs; the requirement of the robot can be represented as: Cj AND Ck AND … AND Cp OR Cs. The degree to which object Di satisfies the robot's requirement is assessed by the function:

E(Di) = ([tij, 1−fij] ∧ [tik, 1−fik] ∧ … ∧ [tip, 1−fip]) ∨ [tis, 1−fis]
      = [min(tij, tik, …, tip), min(1−fij, 1−fik, …, 1−fip)] ∨ [tis, 1−fis]
      = [max(min(tij, tik, …, tip), tis), max(min(1−fij, 1−fik, …, 1−fip), 1−fis)]   (2)

where "∧" and "∨" respectively denote the "intersection" and "union" of Vague values in formula (1), and E(Di) is a Vague value. It has the characteristics of a three-dimensional model: the supportive degree tDi = max(min(tij, tik, …, tip), tis), the opposite degree fDi = 1 − max(min(1−fij, 1−fik, …, 1−fip), 1−fis), and the neutral degree mDi = 1 − tDi − fDi, as shown below:

Fig. 1 Three-dimensional model figure of Vague values

Fig. 2 The description of the points in robot path planning

All the Vague values lie on the plane ABC. Therefore, according to the degree to which E(Di) satisfies the requirement of the robot, the following score function is introduced:

N(E(Di)) = tDi − fDi − mDi · (1 − tDi + fDi)/2   (3)

where N(E(Di)) ∈ [−1,1]. The plan with the maximum value of N(E(Di)) is the best plan. At any moment, the partial environment information observed by the robot is represented as a set Psub; Fig. 2 shows an example where Psub = {A, B, C, D}. The shadow represents the obstacles, and the robot and the destination are also shown; the visual field of the robot is represented as a circle in the figure. The area behind the obstacles is immeasurable. Suppose there is an obstacle on the robot's way to Pd; the robot can only move along the part of the obstacle's edge that lies in its visual field. Because the area behind the obstacle is unknown, the choice among A, B, C, D is a result of partial optimization, which is reasonable and efficient.
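As a small illustration of formulas (2) and (3), the following Python sketch evaluates candidate plans given as vague values and picks the best one (the candidate data and the layout of the AND-ed and OR-ed constraints are hypothetical):

```python
# Sketch: evaluate vague values per Eq.(2) and score them per Eq.(3).
# Each candidate Di is given as a list of (t_ij, f_ij) pairs for the AND-ed
# constraints plus one (t_is, f_is) pair for the OR-ed constraint (assumed layout).

def evaluate(and_pairs, or_pair):
    t_and = min(t for t, f in and_pairs)
    u_and = min(1 - f for t, f in and_pairs)   # upper bound 1 - f
    t_or, f_or = or_pair
    t = max(t_and, t_or)                       # supportive degree t_Di
    u = max(u_and, 1 - f_or)                   # 1 - f_Di
    f = 1 - u                                  # opposite degree f_Di
    m = 1 - t - f                              # neutral degree m_Di
    return t - f - m * (1 - t + f) / 2         # score N(E(Di)), Eq.(3)

candidates = {                                 # hypothetical vague data
    "A": ([(0.8, 0.1), (0.7, 0.2)], (0.5, 0.4)),
    "B": ([(0.6, 0.3), (0.9, 0.05)], (0.4, 0.5)),
}
best = max(candidates, key=lambda k: evaluate(*candidates[k]))
print("best plan:", best)
```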


The Algorithm
After the starting point, the destination, the visual field and the step length of the robot are initialized, we proceed as below:
Step 1: Take Pm and the terminal point Pd as the initial subgoals; the direction from the initial point Ps to Pm and the direction from Pm to the terminal point Pd are taken as the initial sub-directions Dsm and Dst.
Step 2: If Pm can be seen, the robot moves towards Pm; if Pd can be seen from Pm, the robot moves towards Pd until it reaches the destination.
Step 3: Refresh the information in the visual field and get the candidate set Psub.
Step 4: Find the best point Pst in Psub as the sub-goal according to the Vague multi-objective and Vague constraint assessment function.
Step 5: Advance one step along this direction and make it the new sub-direction Dst. Calculate the deviation between the actual position and the expected position, adjust it in the next optimization, and repeat from Step 2.
Based on the floating window, the total goal Pd is divided into many sub-goals, so we need to make choices from the sub-goal set Psub. According to the satisficing optimization method, several assessment indices are given, represented by objective functions and constraint functions. For a point Psk in Psub, the direction of the line through Pc and Psk is denoted Dck, and that through Pc and Pd is denoted Dcd. The indices involved can be set manually; they are as follows:
a) The angle θkd between Dck and Dcd: this objective reflects the will of the robot to access the destination by the best path.
b) The angle θkt between Dck and the sub-direction Dct: this objective reflects that the robot should change direction as little as possible while moving.
Using θ(D1, D2) to represent the angle between D1 and D2, the optimization problem is: Find Dck

s.t. min(θ(Dck, Dcd)), min(θ(Dck, Dct))   (4)

Choose the sub-functions for these objectives:

μ(θ) = 1 (θ ≤ π/2);  μ(θ) = 1 + cos(θ) (θ > π/2)   (5)

Avoiding the obstacles is the constraint in the process, and it also has sub-constraints. Here the constraints are the number of obstacles along Dck and the distance to Pc, represented as fi = 1/(1 + d(Pc, Pso)), where d(Pc, Pso) is the distance between Pc and the intersection point of the sight direction with the obstacle. The number of constraint terms fi is also related to the number of obstacles. So the whole optimization problem is concluded as:

Find Dck
s.t. max(μ(θ(Dck, Dcd))), max(μ(θ(Dck, Dct))), min(fi(Dck))   (6)

Referring to the Vague-set multi-objective optimization assessment function, problem (6) is turned into an assessment function: the relevant objective functions and constraint functions are combined with weights that account for the different factors, in order to find the best Pst:

E = Σ(i=1..2) αi μi − b (Σ(j=1..m) cj fj)   (7)
s.t. Psk ∈ Psub

The parameter b is adjustable; it reflects the balance between the objectives and the constraints, and the result is optimal under this balance.
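A minimal sketch of this selection step in Python (the geometry helpers, weights and candidate data are illustrative, not the authors' code; α1 = 2 and α2 = 1 match the simulation settings below):

```python
import math

def mu(theta):
    # Sub-function of Eq.(5): full score up to pi/2, then decaying.
    return 1.0 if theta <= math.pi / 2 else 1.0 + math.cos(theta)

def angle(p_from, p_to, direction):
    # Angle between the direction p_from->p_to and a reference unit direction.
    vx, vy = p_to[0] - p_from[0], p_to[1] - p_from[1]
    n = math.hypot(vx, vy)
    dot = (vx * direction[0] + vy * direction[1]) / n
    return math.acos(max(-1.0, min(1.0, dot)))

def score(pc, psk, pd, d_ct, obstacle_dists, alpha=(2.0, 1.0), b=1.0, c=1.0):
    # Eq.(7): weighted objectives minus weighted constraint terms.
    to_pd = (pd[0] - pc[0], pd[1] - pc[1])
    n = math.hypot(*to_pd)
    d_cd = (to_pd[0] / n, to_pd[1] / n)
    objectives = alpha[0] * mu(angle(pc, psk, d_cd)) + alpha[1] * mu(angle(pc, psk, d_ct))
    constraints = sum(c / (1.0 + d) for d in obstacle_dists)   # f_i = 1/(1+d)
    return objectives - b * constraints

# Pick the best sub-goal from the visible candidate set Psub (toy data).
pc, pd, d_ct = (0.0, 0.0), (10.0, 10.0), (1.0, 0.0)
psub = {(1.0, 0.5): [4.0], (0.5, 1.0): [2.0, 3.0]}   # candidate -> obstacle distances
best = max(psub, key=lambda p: score(pc, p, pd, d_ct, psub[p]))
print("next sub-goal:", best)
```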


Emulation Experiment
According to the algorithm in this paper, the program was tested under MATLAB R2007a. The test was processed in a simulated warehouse environment, and the results are shown below: the dotted line represents the path of the robot and the circles represent the obstacles (goods). The results indicate that the robot can avoid the obstacles and arrive at the destination smoothly.

Fig. 3 α1=2, α2=1, cj=1, 60 steps

Fig. 4 α1=2, α2=1, cj=1, 49 steps

Conclusion
The warehousing intelligent inspection algorithm presented in this paper is an exploration of robotic path planning based on the Internet of Things. The algorithm is easy to implement: by using the multi-objective vague decision function under vague set theory and by ranking the function values, the path planning of the robot is realized. Computer simulation tests show that the algorithm finds a smooth path from the starting point to the destination, so it can be applied to the path planning of warehouse inspection robots. The next research goal will focus on combining the robot's positioning system with RFID-based warehouse management software.

References
[1] D. Ferguson and A. Stentz, "Using Interpolation to Improve Path Planning: The Field D* Algorithm", Journal of Field Robotics, 23(2), pp. 79-101, 2006.
[2] S.S. Ge and Y.J. Cui, "Dynamic Motion Planning for Mobile Robots Using Potential Field Method", Autonomous Robots, 13(3), pp. 207-222, 2002.
[3] S.M. Chen, "Fuzzy system reliability analysis based on vague set theory", in Proc. of the 1997 IEEE International Conference on Computational Cybernetics and Simulation, Orlando: IEEE Press, pp. 1650-1655, 1997.
[4] O. Castillo and P. Melin, "A new method for fuzzy inference in intuitionistic fuzzy systems", in Proc. of the Artificial Neural Networks in Engineering Conference, St. Louis, MO: American Society of Engineers Press, pp. 20-25, 2003.
[5] J. Knowles and D. Corne, "Properties of an adaptive archiving algorithm for storing nondominated vectors", IEEE Transactions on Evolutionary Computation, 7(2), pp. 100-116, 2003.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.322

A Partial Unit Delaunay Graph with Planar and Spanner for Ad hoc Wireless Networks

Pengfei XU 1,2,a, Zhigang CHEN 1,b, Xiaoheng DENG 1,c and Jianping YU 1,d
1 College of Information and Engineering, Central South University, ChangSha 410083, China
2 College of Mathematics and Computer Science, Hunan Normal University, ChangSha 410081, China
a [email protected], b [email protected], c [email protected], d [email protected]

Key words: Planar, Spanner, Voronoi Diagram, Delaunay Triangulation.

Abstract. This paper proposes a new geometric structure, namely the Partial unit Delaunay graph (PuDel), to serve as the underlying network topology of Ad hoc wireless networks. PuDel has the following attractive properties: (1) PuDel is a connected subgraph of the unit Delaunay triangulation; (2) PuDel is a planar spanner of the Unit Disk Graph (UDG) whose length stretch factor is at most 1.21π; and (3) PuDel can be constructed locally, based only on the position information of 1-hop neighbors, and requires no message exchange between nodes beyond that needed to maintain UDG.

Introduction
In this paper, an Ad hoc wireless network is considered as a set S of n wireless nodes randomly distributed in the plane ℜ2, where no two nodes overlap and all nodes have the same transmission range (denoted by R). Assume that each node knows its own position and gathers the position information of its 1-hop neighbors (i.e. all nodes within its transmission range) by message exchange. Consequently, an Ad hoc wireless network can be defined as a Unit Disk Graph (UDG) in which there is an undirected edge u↔k between u and k if and only if u and k are 1-hop neighbors of each other. Hereafter, UDG is assumed to be connected.

Many applications in Ad hoc wireless networks involve finding a proximity graph, obtained by removing some edges from UDG, to serve as the underlying network topology [1,2,3,4]. Generally, the proximity graph should have properties such as connectivity, sparseness, planarity and being a spanner [1,2]. The unit Delaunay triangulation (uDel) is a proximity graph with all of these properties, but it cannot be constructed locally [2]. Fortunately, there are several replacement structures, such as the Restricted Delaunay graph (RDG) [1], the Planar Localized Delaunay triangulation (PLDel) [2], the Partial Delaunay Triangulation (PDT) [3], the Partial Unit Delaunay Triangulation (PUDT) [4] and so on. During local construction, in addition to maintaining UDG, RDG, PLDel and PUDT require message exchange between nodes, while PDT requires none; however, PDT has not been proved to be a spanner.

This paper proposes a new geometric structure, namely the Partial unit Delaunay graph (PuDel). The contributions include: (1) PuDel is a connected subgraph of uDel; (2) PuDel is a planar spanner of UDG; and (3) PuDel can be constructed based on the position information of 1-hop neighbors, and requires no message exchange between nodes beyond maintaining UDG.

Preliminaries
For convenience, let ||uk|| be the Euclidean distance between u and k. Let N(u) be the node set consisting of u and all its 1-hop neighbors, that is, k∈N(u) if and only if ||uk||≤R. Let L(u,k) be the


perpendicular bisector line of u and k, and H(u,k) be the half-plane bounded by the line L(u,k) and containing u. Let B(u,r) be the circle centered at u with radius r, and disk(u,k) be the circle with diameter uk. Let ∏G(u,k) be the shortest path from u to k in the Euclidean graph G, and ||∏G(u,k)|| be the length of ∏G(u,k). The following begins with definitions of the Voronoi diagram and the Delaunay triangulation [5,6]. The Voronoi region of u∈S, denoted by V(S,u), consists of all points at least as close to u as to any other node,

V(S,u) = { p ∈ ℜ2 : ||up|| ≤ ||px||, ∀x ∈ S, x ≠ u }   (1)

For any point p∈ℜ2, p∈H(u,x) if and only if ||up||≤||px||, so V(S,u) can be written as:

V(S,u) = ∩(x∈S, x≠u) H(u,x)   (2)

The Voronoi diagram for S, denoted by Vor(S), is the union of all Voronoi regions V(S,u), where u∈S. Assuming that no four nodes of S are co-circular, the Delaunay triangulation for S, denoted by Del(S), is the straight-line dual of Vor(S): there is an undirected edge u↔k between u and k in Del(S) if and only if V(S,u) and V(S,k) share a common boundary, see Fig.1(a) for an illustration. The undirected edge u↔k in Del(S) is called the Del edge. The common boundary between V(S,u) and V(S,k), which is the intersection of the perpendicular bisector line L(u,k) with V(S,u) or V(S,k), is called the dual Voronoi edge of the Del edge u↔k. The unit Delaunay triangulation for S, denoted by uDel(S), is the graph obtained by removing all edges of Del(S) that are longer than R [2]; that is, if the Del edge u↔k is a uDel edge then ||uk||≤R. Given the node k∈N(u) with u≠k (i.e. ||uk||≤R), the directed edge u→k from u to k is called the local Delaunay (LDel) edge if and only if V(N(u),u) and V(N(u),k) share a common boundary [2], see Fig.1(b) for an illustration. The common boundary between V(N(u),u) and V(N(u),k) is called the dual local Voronoi edge of the LDel edge u→k.

Fig.1 Voronoi diagram and LDel edge: (a) Vor(S) and Del(S); (b) LDel edge

Then there are the following lemmas.
Lemma 1: For any point p∈V(S,u), no nodes of S are inside the circle B(p,||pu||).
Lemma 2: For any point p∉H(u,x), ||up||>||ux||/2.
Lemma 3: For any two nodes u and k of S, ||∏uDel(u,k)||≤2.42||∏UDG(u,k)||, that is, uDel is a spanner of UDG whose length stretch factor is 2.42. (See [2], Theorem 5)
Lemma 4: The dual Voronoi edge of the Del edge u↔k is V(S,u)∩L(u,k) or V(S,k)∩L(k,u); that is, there is the Del edge u↔k if and only if V(S,u)∩L(u,k)≠φ or V(S,k)∩L(k,u)≠φ.
Lemma 5: The dual local Voronoi edge of the LDel edge u→k is V(N(u),u)∩L(u,k); that is, there is the LDel edge u→k if and only if ||uk||≤R (i.e. k∈N(u)) and V(N(u),u)∩L(u,k)≠φ.
Lemma 6: For any node u∈S, V(S,u)⊆V(N(u),u). (See [7], Lemma 2)
Proof: Since u∈N(u) and N(u)⊆S, Eq.2 can be written as:

V(S,u) = (∩(x∈N(u), x≠u) H(u,x)) ∩ (∩(x∈(S−N(u))) H(u,x)) = V(N(u),u) ∩ (∩(x∈(S−N(u))) H(u,x))   (3)

Eq.3 implies that V(S,u)⊆V(N(u),u). ■
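For intuition, a centralized (non-local) construction of uDel(S) can be sketched in a few lines; this only illustrates the definition, assuming SciPy's Delaunay triangulation, and is not the local algorithm developed in this paper:

```python
import numpy as np
from scipy.spatial import Delaunay

def unit_delaunay_edges(points, R):
    # uDel(S): Delaunay edges of length at most R (Del minus the long edges).
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:            # each triangle contributes 3 edges
        for a in range(3):
            i, j = sorted((simplex[a], simplex[(a + 1) % 3]))
            if np.linalg.norm(points[i] - points[j]) <= R:
                edges.add((i, j))
    return edges

pts = np.random.rand(30, 2)                  # 30 random nodes in the unit square
print(len(unit_delaunay_edges(pts, R=0.35)), "uDel edges")
```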


Direct Delaunay Triangulation path
Given two nodes u and k in Del(S), let b0=u, b1, …, bm−1, bm=k be the nodes corresponding to the sequence of Voronoi regions traversed by walking from u to k along the segment uk, see Fig.2 for an illustration. The direct Delaunay Triangulation path from u to k in Del(S) [8], denoted by DT(u,k), is the path formed by concatenating all Del edges bi↔bi+1, where i=0,…,m−1. Then DT(u,k) has the following lemma.
Lemma 7: For all i, 0≤i<m, the segment uk intersects the dual Voronoi edge of the Del edge bi↔bi+1.
Definition 1. The LDel edge u→k is called the Local unit Delaunay (LuDel) edge if there exists one point p on the dual local Voronoi edge of u→k such that ||up||≤R/2; the Partial unit Delaunay graph for S, denoted by PuDel(S), is the graph formed by all LuDel edges.
Corollary 1. If there is the LuDel edge u→k, then there exists one point p on the dual Voronoi edge of the uDel edge u↔k such that ||up||≤R/2.
Proof: Let p be one point on the dual local Voronoi edge of the LDel edge u→k such that ||up||≤R/2; then p∈(V(N(u),u)∩L(u,k)) (by Lemma 5). Assume that p∉V(S,u); then there exists some node x∈S, x≠u, such that p∉H(u,x), and x∈(S−N(u)) because p∈V(N(u),u). From p∉H(u,x), ||up||>||ux||/2 (by Lemma 2). Since x∈(S−N(u)), x∉N(u) and ||ux||>R. That is, ||up||>R/2, which contradicts the assumption that ||up||≤R/2. Thus, it is only possible that p∈V(S,u). Note that p∈V(S,u) and p∈L(u,k), i.e. p∈(V(S,u)∩L(u,k)); then there is the Del edge u↔k whose dual Voronoi edge is V(S,u)∩L(u,k) (by Lemma 4). Note that ||uk||≤R (by the LDel edge u→k); then the Del edge u↔k is also a uDel edge. Therefore, p is one point on the dual Voronoi edge of the uDel edge u↔k such that ||up||≤R/2. Then the corollary follows. ■
Theorem 1. PuDel is a symmetric graph.
Proof: Given any LuDel edge u→k, from Corollary 1, let p be one point on the dual Voronoi edge of the uDel edge u↔k such that ||up||≤R/2. From Lemma 4, p∈(V(S,k)∩L(k,u)). From Lemma 6, V(S,k)⊆V(N(k),k), then p∈(V(N(k),k)∩L(k,u)) and p∈L(k,u). Note that p∈L(k,u), then ||kp||=||up||≤R/2. Note that ||ku||≤R (by the uDel edge u↔k) and p∈(V(N(k),k)∩L(k,u)), then there is the LDel edge k→u whose dual local Voronoi edge is V(N(k),k)∩L(k,u) (by Lemma 5). Thus, p is one point on the dual local Voronoi edge of the LDel edge k→u such that ||kp||≤R/2, and there is the LuDel edge k→u. Therefore, there is a symmetric edge between u and k in PuDel(S). Then the theorem follows. ■
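To make the R/2 criterion concrete, here is a small centralized sketch (illustration only, assuming SciPy and global knowledge, not the paper's local construction). By Corollary 1 above and its converse below, a uDel edge u↔k is a PuDel edge exactly when its dual Voronoi edge (for interior edges, the segment between the circumcenters of the two adjacent Delaunay triangles) comes within R/2 of u; hull edges, whose dual edge is a ray, are skipped for simplicity:

```python
import numpy as np
from scipy.spatial import Delaunay

def circumcenter(a, b, c):
    # Circumcenter of triangle abc (intersection of perpendicular bisectors).
    d = 2 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
    sa, sb, sc = np.dot(a, a), np.dot(b, b), np.dot(c, c)
    return np.array([(sa*(b[1]-c[1]) + sb*(c[1]-a[1]) + sc*(a[1]-b[1])) / d,
                     (sa*(c[0]-b[0]) + sb*(a[0]-c[0]) + sc*(b[0]-a[0])) / d])

def point_segment_dist(p, a, b):
    ab, t = b - a, 0.0
    if np.dot(ab, ab) > 0:
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def pudel_edges(points, R):
    tri = Delaunay(points)
    adj = {}                                    # Delaunay edge -> adjacent circumcenters
    for s in tri.simplices:
        cc = circumcenter(points[s[0]], points[s[1]], points[s[2]])
        for a in range(3):
            e = tuple(sorted((s[a], s[(a+1) % 3])))
            adj.setdefault(e, []).append(cc)
    out = set()
    for (i, j), ccs in adj.items():
        if np.linalg.norm(points[i] - points[j]) > R or len(ccs) != 2:
            continue                            # not a uDel edge, or hull edge skipped
        if point_segment_dist(points[i], ccs[0], ccs[1]) <= R / 2:
            out.add((i, j))                     # some p on the dual edge has ||up|| <= R/2
    return out

pts = np.random.rand(40, 2)
print(len(pudel_edges(pts, R=0.35)), "PuDel edges")
```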



Definition 2. If there are LuDel edges u→k and k→u, then the directed edges u→k and k→u are viewed as one undirected edge u↔k in PuDel(S), which is called the PuDel edge.
Theorem 2. During the local construction of PuDel, there is no message exchange between nodes in addition to maintaining the position information of UDG by message exchange.
Proof: In order to build the local Voronoi edges and the corresponding LDel edges with the algorithm in [9], each node must gather the position information of its 1-hop neighbors by message exchange. During the choice of LuDel edges, there is no message exchange between nodes because PuDel is a symmetric graph (by Theorem 1). Then the theorem follows. ■
Theorem 3. PuDel is a planar sparse subgraph of uDel.
Proof: Given any PuDel edge u↔k, there are LuDel edges u→k and k→u. From Corollary 1, there is the uDel edge u↔k; that is, every PuDel edge is a uDel edge, which implies that PuDel is a subgraph of uDel. On the other hand, uDel is a planar sparse graph [2]. Then the theorem follows. ■
Corollary 2. If there exists one point p on the dual Voronoi edge of the uDel edge u↔k such that ||up||≤R/2, then there exists the PuDel edge u↔k.
Proof: Obviously, ||uk||≤R (by the uDel edge u↔k). Let p be one point on the dual Voronoi edge of the uDel edge u↔k such that ||up||≤R/2. From Lemma 4, p∈(V(S,u)∩L(u,k)). From Lemma 6, V(S,u)⊆V(N(u),u), then p∈(V(N(u),u)∩L(u,k)). Thus, there is the LDel edge u→k whose dual local Voronoi edge is V(N(u),u)∩L(u,k) (by Lemma 5); that is, p is one point on the dual local Voronoi edge of the LDel edge u→k such that ||up||≤R/2. Then there is the LuDel edge u→k. From Theorem 1, there are LuDel edges u→k and k→u. Again from Definition 2, there is the PuDel edge u↔k. Then the corollary follows. ■
Corollary 3. Given the uDel edge u↔k, DT(u,k) is a connected path from u to k in PuDel(S).
Proof: Assume DT(u,k)=b0b1…bm−1bm, where b0=u and bm=k. According to Lemma 7, let zi be the intersection point of the segment uk with the dual Voronoi edge of the Del edge bi↔bi+1, where 0≤i<m. Since zi lies on L(bi,bi+1) and zi∈V(S,bi), ||zibi||=||zibi+1||≤min(||ziu||,||zik||)≤||uk||/2≤R/2. Hence ||bibi+1||≤||bizi||+||zibi+1||≤R, so the Del edge bi↔bi+1 is a uDel edge, and zi is one point on its dual Voronoi edge with ||bizi||≤R/2, so there is the PuDel edge bi↔bi+1 (by Corollary 2). Then the corollary follows. ■
Now assume that the uDel edge u↔k is not a PuDel edge; then, by Corollary 2, any point q on the dual Voronoi edge of u↔k satisfies ||uq||>R/2 (by hypothesis). From Lemma 4, q∈(V(S,u)∩L(u,k)), i.e. q∈V(S,u) and q∈L(u,k). Let m be the midpoint of the segment uk, then ||um||=||uk||/2. Note that ||uk||≤R (by the uDel edge u↔k), then ||um||≤R/2

F^fc = Σ(c=1..n) xc · Fc^fc   (1)

where xc is the compositional ratio of material c of the new terrain and Fc^fc is the friction coefficient of material c. The evaluation of the slippage is mainly made according to F^fc_max, the biggest friction coefficient of the planetary terrain: when the slippage value F of the analyzed area surpasses F^fc_max, the region is thought to belong to the non-traversable zone; otherwise it belongs to the traversable zone. The solution of F needs to consider two factors: first, Fpatch, the slippage with which the analysis window (i.e. the body size of the planetary rover) covers the slipping region; second, the slippage with which each triangle in the analysis window covers the slipping region. If the maximum slipping value of all triangles in the defined analysis window is max_Ftriangle, then:

F = max(Fpatch, max_Ftriangle)   (2)

Similarly, to reflect the traversability of the planetary rover over terrain with different slippage in the path planning model, this article introduces a sectional slipping cost function, defined as:

f_fc(F) = +∞, if F > F^fc_max;
f_fc(F) = c^pitch_max, if k_f2·F^fc_max ≤ F < F^fc_max;
f_fc(F) = 255 × F / F^fc_max, if k_f1·F^fc_max ≤ F < k_f2·F^fc_max;
f_fc(F) = c^pitch_min, if F < k_f1·F^fc_max   (3)

In the formula, 0 < k_f1 < k_f2 < 1. The partition evaluation of f_fc(F) is treated the same as that of f_pitch(θ). As a result of the complexity of the planetary surface environment, terrain features usually demonstrate different combinations of each kind of cost. Therefore, when analyzing the traversability of a triangle, we need to evaluate the analysis results of every kind of cost. If the traversable cost function of the analyzed triangle is defined as f^trav_triangle, then:

f^trav_triangle = max(f_pitch(θ), f_roughness(D), f_fc(F), f_step(RF))   (4)

For the detailed contents, refer to the literature [9].
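The cost functions (2)-(4) are cheap to compute; the following Python sketch shows the shape of the computation, with illustrative placeholder values for F^fc_max, k_f1, k_f2 and the other per-triangle costs:

```python
import math

F_MAX, K_F1, K_F2 = 0.8, 0.3, 0.7        # illustrative values for F^fc_max, k_f1, k_f2
C_MAX, C_MIN = 254.0, 0.0                # placeholder bounds c^pitch_max, c^pitch_min

def slip_cost(F):
    # Sectional slipping cost, Eq.(3).
    if F > F_MAX:
        return math.inf                   # non-traversable region
    if F >= K_F2 * F_MAX:
        return C_MAX
    if F >= K_F1 * F_MAX:
        return 255.0 * F / F_MAX
    return C_MIN

def traversable_cost(pitch_c, rough_c, F, step_c):
    # Combined traversable cost of a triangle, Eq.(4): the worst single cost wins.
    return max(pitch_c, rough_c, slip_cost(F), step_c)

F = max(0.35, max([0.2, 0.4, 0.35]))      # Eq.(2): window vs. per-triangle slippage
print(traversable_cost(10.0, 5.0, F, 2.0))
```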

Path Planning Algorithm
The proposed path planning method. The following are the specific steps of the path planning based on the genetic algorithm presented in this paper:
Step 1. Environment modeling. Do environment modeling using the grid method proposed by Simon and Yang.
Step 2. Coding and initialization. Encode the possible paths, set the evolution generation counter t←0, set the maximum number of generations T, and randomly generate n individuals as the initial population P(0).


Step 3. Individual evaluation. Use the equation L(aₙ) = Σ(i=1..N)(dᵢ + fᵢ) + βᵢC to calculate each individual's fitness value in population P(t).
Step 4. Determine whether the environment changes. If the environment changes, run Step 5; otherwise, go to Step 6.
Step 5. Re-evaluate population P(t).
Step 6. Tournament selection and elitist selection. To retain the best individual of the parent population, run tournament selection on the parent groups.
Step 7. Crossover operation. Apply the single-point adaptive crossover operator to the group.
Step 8. Mutation operation. Apply the adaptive mutation operator to the group.
Step 9. Other operations. Apply the other operators to group P(t). After these selections, we finally get the next generation P(t+1).
Step 10. Determine whether the environment is dynamic. If it is, go to Step 4; otherwise, run Step 11.
Step 11. Check whether the termination condition is met. If t≤T, set t←t+1 and go to Step 3; if t>T, take the individual with the minimum fitness value as the optimal solution, output it and stop.
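A compressed, self-contained toy version of this loop in Python (the grid map, penalties and parameters are illustrative stand-ins for the fitness L(aₙ) and the adaptive operators):

```python
import random

SIZE, POP, T = 24, 40, 200
OBSTACLES = {(5, y) for y in range(0, 18)} | {(15, y) for y in range(6, 24)}  # toy map
START, GOAL = (0, 0), (23, 23)

def random_path(n=30):
    # An individual is a list of moves; each move is a step in x and/or y.
    return [(random.choice([-1, 0, 1]), random.choice([-1, 0, 1])) for _ in range(n)]

def walk(moves):
    # Turn a move list into the visited cells, clipped to the grid.
    x, y = START
    cells = [START]
    for dx, dy in moves:
        x = min(max(x + dx, 0), SIZE - 1)
        y = min(max(y + dy, 0), SIZE - 1)
        cells.append((x, y))
    return cells

def fitness(moves):
    # Toy stand-in for L(a_n): path length + distance-to-goal + collision penalty.
    cells = walk(moves)
    dist = abs(cells[-1][0] - GOAL[0]) + abs(cells[-1][1] - GOAL[1])
    collisions = sum(c in OBSTACLES for c in cells)
    return len(cells) + 10 * dist + 100 * collisions

def tournament(pop, k=2):
    return min(random.sample(pop, k), key=fitness)

def crossover(a, b):
    cut = random.randrange(1, len(a))          # single-point crossover
    return a[:cut] + b[cut:]

def mutate(moves, rate=0.05):
    return [(random.choice([-1, 0, 1]), random.choice([-1, 0, 1]))
            if random.random() < rate else m for m in moves]

pop = [random_path() for _ in range(POP)]
for t in range(T):
    nxt = [min(pop, key=fitness)]              # elitism
    while len(nxt) < POP:
        nxt.append(mutate(crossover(tournament(pop), tournament(pop))))
    pop = nxt
print("best cost:", fitness(min(pop, key=fitness)))
```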

Simulation Results
To demonstrate the effectiveness of the algorithm, we carried out simulation tests on a PC with the following main experimental parameters: the environment is a 24×24 grid, the population size M=40, k1=0.8, k2=0.05, and the maximum number of generations T=500. From Fig. 1 we can conclude that path planning in a planetary environment can be achieved using our method.

Fig. 1 Simulation results

Conclusion
The navigation of a mobile robot in a planetary environment is a very important issue. In this paper, an autonomous navigation algorithm for a planetary rover based on slip prediction is presented. The method integrates directional slip prediction into the path planning algorithm, resolving the issue of emerging higher-level behaviors such as planning a path with switch-backs up a slope. The simulation results demonstrate that this method is effective.


References
[1] O. Hachour, "The proposed hybrid intelligent system for path planning of Intelligent Autonomous Systems", International Journal of Mathematics and Computers in Simulation, 3(3), pp. 133-145, 2009.
[2] M.W. Maimone and J.J. Biesiadecki, "The Mars Exploration Rover Surface Mobility Flight Software: Driving Ambition", IEEE Aerospace Conference, Big Sky, Montana, March 2006.
[3] G. Ishigami, A. Miwa, K. Nagatani and K. Yoshida, "Terramechanics-Based Analysis on Slope Traversability for a Planetary Exploration Rover", The 25th International Symposium on Space Technology and Science (ISTS 2006), pp. 1025-1030, 2006.
[4] I. Halatci, C. Brooks and K. Iagnemma, "A study of visual and tactile terrain classification and classifier fusion for planetary exploration rovers", Robotica, 26(6), pp. 767-779, 2008.
[5] M. Bajracharya, A. Howard, L. Matthies, B. Tang and M. Turmon, "Autonomous Off-Road Navigation with End-to-End Learning for the LAGR Program", Journal of Field Robotics, 26(1), pp. 3-25, January 2009.
[6] T. Howard, C. Green, A. Kelly and D. Ferguson, "State space sampling of feasible motions for high-performance mobile robot navigation in complex environments", Journal of Field Robotics, 25(6-7), pp. 325-345, 2008.
[7] D. Kim, J. Kim, J. Lee, H. Jeong and I. Kweon, "Utilizing Visual Information for Path Planning of Autonomous Mobile Robot", Proceedings of the World Congress on Engineering and Computer Science 2009 (WCECS 2009), Vol. II, San Francisco, USA, October 20-22, 2009.
[8] A. Angelova, L. Matthies, D. Helmick and P. Perona, "Learning and prediction of slip from visual information", Journal of Field Robotics, 24(3), pp. 205-231, 2007.
[9] L.F. Zhou, Research on Lunar Rover Intelligent Navigation in Virtual Environment, Ph.D. thesis, Harbin Institute of Technology, 2007.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.386

A Document Feature Extraction Method Based on Concept-word List

Zheng-yu ZHU a, Jie HE b, Shu-jia DONG, Chun-lei YU
College of Computer Science, Chongqing University, Chongqing 400044, China
a [email protected], b [email protected]

Key words: concept-word, HowNet, Vector Space Model, document feature extraction.

Abstract. When describing a document in the Vector Space Model (VSM), it is often assumed that there is no semantic relationship between words, i.e. that they are orthogonal to each other. To improve this inaccurate document description, a new description method is proposed in this paper by introducing concept-words, based on calculating the semantic similarity between words with the HowNet ontology database. Comparative experiments show that the new method can not only effectively improve the document feature description in VSM, but also significantly reduce the dimension of a document vector. The research is very useful for document clustering, query word expansion in Web information retrieval, and personalized service in e-business applications.

Introduction
VSM is a very useful model for describing a document. It assumes that there is no semantic association among the components of a vector; however, in an actual document there often exist semantic relationships among its terms. Literatures [1,2] proposed a word clustering method that gathers words with a similar probability distribution into a new term. Literature [3] suggested a feature extraction method based on HowNet concepts and used it in a Chinese text filtering system. These methods can solve the inaccurate description of VSM to a certain extent.

A new document feature extraction method based on a concept-word list is presented in this paper. Firstly, based on the semantic description of words in HowNet (http://www.keenage.com), a method for calculating the semantic similarity between words, using the lexical similarity calculation of literature [4], is introduced. Then, a concept-word list is generated by analyzing the semantic similarity among the words contained in all the given documents. After that, the words of a document which have close relationships with the concept-words in the list are replaced. Finally, the TF*IDF weights of all the concept-words in the document are calculated, and the first n concept-words of the document are extracted as its final feature terms.

Pretreatment of Documents
Before generating the concept-word list, we need to do some pretreatment on all the given documents, including mainly the following work.

Word Segmentation. This paper uses the ICTCLAS tool of the Chinese Academy of Sciences [5] to segment a Chinese document into a set of words and to apply Part-Of-Speech (POS) tagging to these words.

Stop Word Deletion. Much analysis shows [6] that the content of a document is mainly expressed by its content words, such as verbs, nouns and adjectives, not by its function words or by high-frequency words which appear frequently in all kinds of documents. Stop words refer to extremely common words that are of little value in helping to select documents matching a user's need. In this paper, they include prepositions, articles, conjunctions, high-frequency words, etc. In the POS tagging results of the ICTCLAS word segmentation, the abbreviations of some POS tags used in this paper are: u - auxiliary word, c - conjunction, e - interjection, y - modal


particle, w - punctuation. So we can firstly delete all the words with the above tags from the POS tagging result of a document, and then use the stop word list from the website [7] to do a second round of stop word deletion on the resulting document.

Words Statistics. After the deletion of stop words, we remove all POS tags from the resulting document, so that the whole text consists only of its content words, which are called the document's terms. After that, we count the frequency of each term as its term frequency in the document, and output the result with each term in the format <term, term frequency> to express the original document. In this way we always get a set of words without duplication for each document. With the word sets of all documents in the given document set D, we can finally generate a word set R without duplicate items, which is the input of the lexical similarity calculation.

Lexical Similarity Calculation. According to the description in literature [4], we use the following formula (1) to calculate the (semantic) concept similarity between two words:

(1)

Here, S1i refers to the concept list of term W1 and S 2 j is the concept list of term W2 . The concept similarity of terms W1 and W2 equals to the maximum of the various concept similarities between the two concept lists S1i and S 2 j . We can now calculate all the concept similarities between any two words in the word set R. After that, we can find all the word-pair (word1, word2) tuples such that their similarity is bigger than a given threshold η and finally generate a word-pair tuple list. In this paper, we think that only those word-tuples that their concept similarities are larger than η are useful to generate the concept-words. Therefore, one key point in the algorithm is how to determine the value of η. As pointed out in literature [4], if two words belong to the same concept, their concept similarity value is 1.0. For example, “computer” and “PC” belong to the same concept and their concept similarity is 1.0. Synonym pairs often belong to the same concept, so their concept similarity value often is 1. However, if two words do not belong to the same concept, we need to use formula (1) to calculate their concept similarity. The significance to determine the value of η is to identify mainly the near-synonym pairs and the similar words on semantic conception. According to our experiments on a large amounts of data (some of them are listed in Table 1), η is set to be 0.9 in this paper.

Table 1 Words and their concept similarities Similar word tuple (format as ) "at present" "now" 1 "ok" "victory" 0.942 "highly" "famous" 0.846 "at present" "recent" 1 "badness" "optimum" 0.877 "concrete" "actually" 0.94 "most" "recent" 0.94 "important" "core" 1 "recent" "brand new" 0.94 "freshness" "fair" 0.846 "hard" "deepness" 0.9888 "following" "future" 0.9 "new" "brand new" 0.94 "important" "major" 1 "recent" "fair" 0.846 "China" "Chinese" 0.9 "problem" "risk" 0.94 "recent" "effective" 0.846 "Chinese" "America" 0.9 "Chinese" "country" 0.828 "good" "positive" 0.988 "freshness" "effective" 0.846 "long time" "without day" 0.9 "following" "from now on" 0.9 "China" "Shanghai" 0.988 "badness" "complete" 0.877 …

Note: all the words in the table are translated from Chinese.

Concept-word List In this paper, a concept-word is used to refer a special set of words such that the concept similarity between any two words among them is larger than the threshold η. If sim(w1, w2) is greater than η, we call that the two words w1 and w2 are similar, and represent it with w1 ~ w2. So each concept-word is composed of a set of words such that any two words among them are similar.


There are three advantages to represent a document in VSM based on the concept-words: 1) Clustering each set of words with close similarities into a concept-word can reduce the dimension of a document vector; 2) Representing a document with the concept-words can make the document description in VSM more accurate; 3) Keeping all the original words, which have been gathered together into a concept-word, in the concept-word list can make it possible to replace back these words when an application needs. Generation of Concept-word List. In HowNet, a word, especially a Chinese word, may belong to many concepts and a concept is composed of several original meanings to explain its different semantic meanings. The relationship between the words and the concepts is many-to-many. The concept similarity between two words depends on the maximum value of the various concept similarities between any one concept of one word and any one concept of another word. Obviously, there is no transitivity among these similar word pairs. According to the definition of concept-word, any two words in a concept-word are similar. Namely, if words A, B and C belong to a concept-word, we must have A ∼B, B ∼C and C ∼A. If we represent each word in the word set R with a vertex and draw one edge for each possible pair of vertices when they are similar, then we can get an undirected graph GR. Since any two words in a concept-word are similar, the concept-word can be changed into a biggest undirected complete sub-graph of GR. Therefore, the process to generate the concept-word list is actually the work to find out all the biggest undirected complete sub-graphs in GR. The process can be described as below: 1) Firstly find out a biggest complete sub-graph in GR, mark it as Gm, and then output all the vertices of the sub-graph Gm as a concept-word; 2) Delete all the edges of Gm from GR and let the remaining graph be the GR'; 3) Repeat processes 1) and 2) to find out another biggest complete sub-graph in GR' until there is no biggest complete sub-graph can be found in the remaining graph. Commonly, if there are more than three words such that they are similar with each other, there must exist some concept-word, which included these words. Although one word may belong to two or more concepts, there exist not two similar word-pairs between two different concepts. For example, “virus” belongs to two different concepts: computer science and biology. Although it is similar to “germ” in biology and similar to “Trojan” in computer science, “germ” and “Trojan” are not similar. Namely, even if “virus”∼“germ”, which means they belong to a concept-word, and “virus”∼“Trojan”, which means they belong to another concept-word, we cannot say “germ”∼“Trojan”, namely they belong to two different concept-words. This is a non-transitive example. Now we give an example to show the generation process of concept-word list. Assume that there are totally seven different words: R0={A, B, C, D, E, F, G} in a given document set D0; and their similar relationships are: A∼B, B∼C, A∼D, D∼E, B∼D and F∼G. So we have the following undirected graph GR0:

Fig. 1 An undirected graph GR0

According to the generation process, we get the biggest subgraphs {ABD}, {DE}, {BC} and {FG} one by one, so the process returns the concept-word list {{ABD}, {DE}, {BC}, {FG}}. Obviously, in this concept-word list some words belong to more than one concept-word; for example, both {ABD} and {DE} contain the word D. Therefore, we cannot recognize a concept-word by only one of the words included in it. To determine a concept-word uniquely and avoid confusion, we introduce a unique identifier for each concept-word, using a simple numeric index.
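This generation process maps directly onto repeated maximal-clique extraction; the following Python sketch using networkx (an illustrative implementation, not the authors' code) reproduces the example:

```python
import networkx as nx

# Similar word pairs of the example: A~B, B~C, A~D, D~E, B~D, F~G.
G = nx.Graph([("A","B"), ("B","C"), ("A","D"), ("D","E"), ("B","D"), ("F","G")])

concept_words = []
while G.number_of_edges() > 0:
    clique = max(nx.find_cliques(G), key=len)   # a biggest complete subgraph Gm
    concept_words.append(set(clique))           # output its vertices as a concept-word
    G.remove_edges_from((u, v) for u in clique for v in clique if u != v)

print(concept_words)   # e.g. [{'A','B','D'}, {'D','E'}, {'B','C'}, {'F','G'}]
```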


Represent a Concept-word in a Document. Each concept-word will be used to replace all the words in a document that belong to it. After the replacement, the concept-word is processed just like the other terms in the document. To distinguish concept-words from other terms, and to be able to replace the concept-words back with the original words, we need to mark each concept-word carefully. For this purpose, we use special symbols that do not appear in any of the documents: in this paper, we choose the symbol @ as a start mark and use the form "@<unique identifier of the concept-word>" to stand for a concept-word. Since a concept-word contains several similar words, we also need to record which of these words a concept-word occurrence represents. For each concept-word, we take the original order of its words as the word index, starting from zero, and use the form "#<word position index>" to mark the word. For example, assume a concept-word is "@1001 computer PC microcomputer", which includes three similar words and has the unique identifier 1001. If the two terms "PC" and "microcomputer" appear in a document Di, they will be replaced by "@1001#1" and "@1001#2" respectively. To recover the two terms, we first find the concept-word by its unique identifier 1001 in the concept-word list, and then find the two words by their indexes 1 and 2 (representing positions 2 and 3 respectively) in the concept-word.

Since the concept-word list is generated from the whole document set D rather than from a single document, a document does not always contain all the words of a concept-word; moreover, some words may belong to two or more concept-words. In this paper, we adopt the replacing principle: "If a document includes two or more words of a concept-word, replace all of these words with the concept-word and merge their symbols." For example, for the document Di above, both "PC" and "microcomputer" are in the concept-word with unique identifier 1001, so they are replaced with "@1001#1" and "@1001#2" and merged into "@1001#1,2".

TF*IDF Value Calculation of a Concept-word
This paper uses the TF*IDF method to calculate the term weight of a concept-word. The formula is:

TF*IDF Value Calculation of a Concept-word. This paper uses the TF*IDF method to calculate the term weight of a concept-word. The formula is

$$W(t,d)=\frac{tf(t,d)\times\log(N/n_t+0.01)}{\sqrt{\sum_{t\in d}\big[tf(t,d)\times\log(N/n_t+0.01)\big]^{2}}}. \qquad (2)$$

Here t is a term (or a concept-word) and d is a document in the document set D; W(t,d) is the weight of t in d and tf(t,d) is the term frequency of t in d; N is the number of documents in D, $n_t$ is the document frequency of t in D, and $\log(N/n_t+0.01)$ is the inverse document frequency, in which 0.01 is a correction value. The denominator normalizes the term weights. For the document Di above, the words "PC" and "microcomputer" have been replaced by "@1001#1" and "@1001#2" and finally merged into "@1001#1,2". In the sub-section "Words Statistics" we calculated the term frequencies of the two words, so the term frequency of the concept-word "@1001#1,2" is the sum of the term frequencies of the two words. The document frequency of the concept-word, however, is computed differently: since different documents may contain different subsets of its words, we must ignore the difference between "@1001#1" and "@1001#2" and, when counting the document frequency of the concept-word, regard all symbols that start with "@1001" as the same term. Finally, with the term frequency and document frequency of the concept-word, its term weight is calculated according to formula (2).
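A compact Python sketch of formula (2) (our own illustration; tf is a per-document term-frequency dict and df a document-frequency dict, with marks sharing an "@id" prefix already collapsed into one key):

    import math

    def tfidf_weights(tf, df, n_docs):
        """Normalized TF*IDF weights of formula (2) for one document."""
        raw = {t: f * math.log(n_docs / df[t] + 0.01) for t, f in tf.items()}
        norm = math.sqrt(sum(w * w for w in raw.values()))
        return {t: w / norm for t, w in raw.items()}

    # "@1001#1,2" carries the summed term frequency of "PC" and "microcomputer"
    tf = {"@1001#1,2": 3, "database": 5}
    df = {"@1001#1,2": 40, "database": 120}   # any "@1001..." mark counts once
    print(tfidf_weights(tf, df, n_docs=500))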


Comparative Experiments

The experiments were done on the corpus TanCorpMin, a subset of the TanCorp corpus available on the Internet [8]. The corpus contains 951 documents divided into 10 categories. We randomly extracted 500 documents from the corpus as the training set and 50 documents as the test set.

Experiment Data and Methods. We generated the concept-word list from the training set and then used the test set to complete two comparative experiments: Way 1 versus Way 2, and Way 2 versus Way 3. In these experiments we extracted the term features of the test-set documents in three different ways, to show the improvement obtained with the original concept-word list and with a more precise concept-word list. In Way 1 we used the traditional TF*IDF method directly, without concept-words. In Way 2 we used the TF*IDF method after replacing the words in the documents according to the original concept-word list. In Way 3 we first filtered all nouns denoting place names or human names out of the original concept-word list to generate a new concept-word list, and then proceeded as in Way 2 but with the new list. In all three ways we calculated each term's TF*IDF weight in each test-set document by formula (2) and then took its top n terms as the document feature for the two comparisons.

Comparative Experiment Results and Analysis. Here we take four documents (D1, D2, D3 and D4) at random from the comparative experiment between Way 1 and Way 2. For each document we list only its top 20 terms as its document feature. The result is shown in Table 2.

Table 2. Comparative experiment results between Way 1 and Way 2 (all words are translated from Chinese)

D1, Way 1 (without concept-words): "Fossett" "fly" "plane" "hot balloon" "Adventure" "Fuel" "Driving" "Global" "Virgin" "Atlantic" "Travel around" "world" "Fusai" "Rutan" "Earth" "Record" "Intermittent" "speech" "weather" "Aviation"
D1, Way 2 (with the original concept-word list): "Fossett" "fly" "plane" "hot balloon" "Adventure" "Fuel" "Driving" "Global" "Virgin" {"reach" "finish"} "world" "Atlantic" {"man" "human"} "Travel around" {"design" "plan"} "Record" "Fusai" "Rutan" "Earth" {"do" "create"}

D2, Way 1: "HuaLin" "Group" "comparison over the same period" "total amount" "Sun Shu-hua" "Loan" "reporter" "production" "increasement" "wife" "achievement" "property" "department" "ZhouKou City" "HuaiYang County" "longitude and latitude" "tubing material" "Han Feng-long" "He Bai-song" "property" "debt ratio"
D2, Way 2: "HuaLin" "Group" {"reach" "achievement"} {"Capital" "property"} "comparison over the same period" "total amount" "Sun Shu-hua" {"home" "dragon"} {"management" "Business"} "Loan" "production" "increasement" "reporter" "wife" {"Development" "Income"} "department" "ZhouKou City" "HuaiYang County" "longitude and latitude" "tubing material"

D3, Way 1: "Exchange rate" "Euro" "week" "wave" "boundary" "day" "shake" "line" "support" "Central Bank" "Japan" "go up" "Rebound" "clean up" "down" "level" "Annual rate" "expectation" "into" "region"
D3, Way 2: "Exchange rate" {"China" "Japan"} "week" "Euro" "wave" {"world" "area"} {"appear" "go up"} "boundary" {"Economy" "ratio"} {"increase" "rise"} "line" {"expect" "expectation"} "shake" {"below" "fall"} "support" "Central Bank" "Rebound" "clean up" "into" "level"

D4, Way 1: "Aucma" "stationed in China" "Home Appliances" "embassy" "Freezer" "Products" "Energy conservation" "Kitchen" "sanitation" "Environmental Protection" "Cabinet" "Innovation" "Technology" "Counselor" "Dai Wei" "constant temperature" "green" "Reliance" "Commercial" "International"
D4, Way 2: "Aucma" "stationed in China" "Home Appliances" "embassy" "Freezer" "Products" "Technology" "Energy conservation" "Kitchen" "sanitation" {"turn into" "appear to be"} "Environmental Protection" "Innovation" "Cabinet" {"United Kingdom" "Germany"} {"gain" "reach"} "Industry" "management" "Counselor" "Dai Wei"


In Table 2 the terms of each document are sorted by their TF*IDF weights, and each set of words enclosed in { } represents a concept-word. From Table 2 we can see that the use of concept-words not only decreases the dimension of each document but also changes the weights of the terms, and especially their order in the document. The importance of mutually similar words is enhanced after they are merged into one concept-word. For example, the words "man" and "human" are both outside the top 20 in the ordered term list of D1 in Way 1, but they reach the 13th position after being replaced by their concept-word {"man" "human"} in Way 2. "Property" is at the 12th position in the ordered term list of D2 in Way 1, but the concept-word {"Capital" "property"} reaches the 4th position after "property" is merged with "capital" in Way 2. We can also see that for D3 the concept-word {"expect" "expectation"} gains slightly in rank compared with "expectation", and that for D4 the concept-word {"gain" "reach"} enters the top 20 terms. Of course, some word sets are wrongly determined to be a concept-word; for D2, for instance, {"home" "dragon"} is not a correct one, yet it has entered the top 20 terms. Since the threshold η has been set to 0.9, such cases rarely occur.

Table 2 also reveals a problem. The words "China" and "Japan" in D3 and the words "United Kingdom" and "Germany" in D4 have each been merged into one concept-word, because they belong to the single concept "country" in HowNet. Since their HowNet-based similarities, calculated by formula (1), are 1.0, both word pairs are gathered into concept-words according to our definition. However, different countries, and more generally nouns denoting place names, should not be treated as one concept-word when describing a document; nouns denoting human names should not generate concept-words either. During word segmentation, place-name nouns are labeled "/ns" and human-name nouns are labeled "/nr", so a place or human name can easily be recognized in a document by its POS tag. In this paper, to improve the accuracy of the concept-words, we deleted all concept-words consisting of place names or human names from the original concept-word list and thus obtained a new concept-word list. The comparative experiment between Way 2 and Way 3 was done to show the improvement of the new concept-word list over the original one. Some data are listed in Table 3, which shows that the new term features of the documents are more reasonable and accurate.

Table 3. Comparative experiment results between Way 2 and Way 3 (all words are translated from Chinese)

D1, Way 2 (original concept-word list): "Fossett" "fly" "plane" "hot balloon" "Adventure" "Fuel" "Driving" "Global" "Virgin" {"reach" "finish"} "world" "Atlantic" {"man" "human"} "Travel around" {"design" "plan"} "Record" "Fusai" "Rutan" "Earth" {"do" "create"}
D1, Way 3 (new concept-word list): "Fossett" "fly" "plane" "hot balloon" "Adventure" "Fuel" "Driving" "Global" "Virgin" {"reach" "finish"} {"man" "human"} "world" {"creation" "create"} "Atlantic" "Travel around" {"design" "plan"} "Record" "Fusai" "Rutan" "Earth"

D2, Way 2: "HuaLin" "Group" {"reach" "achievement"} {"Capital" "property"} "comparison over the same period" "total amount" "Sun Shu-hua" {"home" "dragon"} {"management" "Business"} "Loan" "production" "increasement" "reporter" "wife" {"Development" "Income"} "department" "ZhouKou City" "HuaiYang County" "longitude and latitude" "tubing material"
D2, Way 3: "HuaLin" {"reach" "achievement"} "Group" {"Capital" "property"} "comparison over the same period" "total amount" "Sun Shu-hua" {"home" "dragon"} {"management" "Business"} "Loan" "increasement" {"Development" "Income"} "production" "reporter" "wife" "department" "ZhouKou City" "HuaiYang County" "longitude and latitude" "tubing material"

D3, Way 2: "Exchange rate" {"China" "Japan"} "week" "Euro" "wave" {"world" "area"} {"appear" "go up"} "boundary" {"Economy" "ratio"} {"increase" "rise"} "line" {"expect" "expectation"} "shake" {"below" "fall"} "support" "Central Bank" "Rebound" "clean up" "into" "level"
D3, Way 3: "Exchange rate" "week" "Euro" {"below" "fall"} "wave" "boundary" {"appear" "go up"} {"Economy" "ratio"} "line" {"increase" "rise"} "day" {"expect" "expectation"} "shake" "support" "Central Bank" "Japan" "Rebound" "clean up" "into" "level"

D4, Way 2: "Aucma" "stationed in China" "Home Appliances" "embassy" "Freezer" "Products" "Technology" "Energy conservation" "Kitchen" "sanitation" {"turn into" "appear to be"} "Environmental Protection" "Innovation" "Cabinet" {"United Kingdom" "Germany"} {"gain" "reach"} "Industry" "management" "Counselor" "Dai Wei"
D4, Way 3: "Aucma" "stationed in China" "Home Appliances" "embassy" "Freezer" "Products" "Technology" "Energy conservation" "Kitchen" "sanitation" "Environmental Protection" "Innovation" "Cabinet" {"gain" "reach"} {"lead in" "more than"} "Industry" "Counselor" "Dai Wei" "Constant temperature" {"hold" "do"}


Conclusion

The research in this paper starts from the basic requirement of VSM that its terms should be orthogonal to each other, whereas in an actual document the terms often have semantic relationships. To improve the accuracy of the description of a document in VSM, we have introduced, on the basis of the semantic descriptions of words given by HowNet, a new notion into the description of a document: the concept-word, which represents a set of words with a mutual similarity relationship. We have discussed how to generate the concept-word list and carried out comparative experiments. The experiments show that by replacing the words of a document with concept-words we can not only improve the accuracy of the document feature description in VSM but also significantly reduce the dimension of the document vector. The document representation method based on the concept-word list presented in this paper is useful in many application fields, such as document clustering, query expansion in web information retrieval, and personalized e-business services.

Acknowledgements

This research was supported by the Chinese National Key Technology R&D Program (2007BAH08B04).

Reference

[1] Z.L. Jiang, X.K. Xu and S. Li: Feature extraction of text classification based on word clustering. Journal of Harbin Engineering University, Vol. 11 (2008), p. 1205-1209.
[2] Y. Wang, M. Zhang and L. Ma: Text categorization based on word aggregation and decision tree. Journal of Hebei University (Natural Science Edition), Vol. 03 (2005), p. 338-342.
[3] L. Zhao, T. Hu, X.J. Huang et al.: HowNet-based conceptual feature selection method. Journal of China Institute of Communications, Vol. 07 (2004), p. 46-54.
[4] Q. Liu and S.J. Li: Word similarity computing based on HowNet. Computational Linguistics and Chinese Language Processing, Vol. 7 (2002), p. 59-76.
[5] Information on http://www.ictclas.org/
[6] J.J. Sun and Y. Cheng: Technology of information retrieval. Science Press (2004), p. 166-167.
[7] Information on http://download.csdn.net/source/1987618
[8] Information on http://www.pudn.com/downloads91/sourcecode/chinese/detail348916.html

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.393

An Overview of P2P Search Algorithms

YAN Jingfeng a, TAO Shaohua b

School of Computer Science and Technology, Xuchang University, Xuchang, Henan, 461000, P.R. China

a [email protected], b [email protected]

Key words: P2P; breadth-first; flooding search

Abstract: Since the nodes of a P2P network join and leave dynamically, P2P search technology is much more complicated than traditional search technologies, and P2P resource search algorithms are currently a research focus. Through an analysis and comparison of flooding search, breadth-first search (BFS), the iterative depth method, directed breadth-first search (DBFS), random breadth-first search (RBFS) and other forwarding-based search methods, this paper identifies the advantages, disadvantages and range of application of each algorithm. The findings aim to lay a technical foundation for future high-performance P2P search algorithms.

Flooding Search

In present P2P networks, search is mostly based on an unstructured flooding broadcast mechanism. A node sends query messages to all of its neighboring nodes, which in turn forward the query messages to all of their own neighboring nodes, and in this way the query messages are forwarded continually. To limit the scope of the search, an initial TTL (Time To Live) value is set for the messages [1]; the TTL value is reduced by 1 each time a message passes a node, and the search terminates when the TTL value reaches 0. Flooding search has the following features [2]: (1) high coverage of peer nodes: 95% of the nodes in a Gnutella network (TTL = 7) can be reached, because the number of nodes covered increases exponentially as messages are forwarded; (2) good robustness: the failure or exit of a peer node has almost no effect on the other nodes, because each node sends messages to all of its neighbors and query messages can travel along all possible paths between nodes; (3) fast response time: nodes send query messages to their neighbors in parallel. Despite the fast response time and easy implementation, flooding search tends to produce a large number of redundant messages, especially when the network is large and the degree of connectivity between nodes is high; in addition, the TTL setting restricts flooding search to a small area. In a real P2P network, redundant messages not only increase the processing burden on peer nodes but also take up large amounts of network bandwidth, and peer nodes with poor processing performance may even crash under the flood of forwarded messages.

Breadth-first Search (BFS)

Breadth-first search is a typical graph traversal algorithm [3]. Its basic idea is as follows: starting from a node v0 of the graph, first visit v0, then visit all nodes adjacent to v0, namely v1, v2, ..., vi, according to the adjacency list of v0; next visit all unvisited nodes adjacent to v1, v2, ..., vi in sequence; and continue in this manner from the newly visited nodes until all nodes in the graph have been visited. As in flooding search, an initial TTL value can be set to limit the scope of the search.


Table 1 compares flooding search and breadth-first search in terms of the number of message forwardings in the same simulated P2P network structure as the number of peer nodes N and the average node degree D vary, with the TTL value fixed at 4 [3,4]. Table 1 indicates that breadth-first search reduces the number of message forwardings to a certain extent compared with flooding search. Breadth-first search thus reduces the number of redundant messages, but the TTL control mechanism limits the scope of the search and the resources that can be found: if the initial value is set too low, the resources needed may not be found; if it is set too high, the response time may be long. Besides, when the degree of connectivity between nodes is very high, peer nodes still receive large numbers of duplicate messages. In view of this, breadth-first search needs to be improved.

Table 1. Comparison between flooding search and breadth-first search (TTL = 4)

Number of peer nodes | Average node degree | Flooding | BFS | Difference
N = 24               | D = 14              | 925      | 39  | 886
N = 24               | D = 16              | 4524     | 80  | 4444
N = 24               | D = 18              | 4524     | 80  | 4444
N = 26               | D = 14              | 1639     | 54  | 1585
N = 26               | D = 16              | 4524     | 80  | 4444
N = 26               | D = 18              | 5636     | 90  | 5546
N = 28               | D = 14              | 4524     | 80  | 4444
N = 28               | D = 16              | 4524     | 80  | 4444
N = 28               | D = 18              | 13457    | 123 | 13334

Iterative Depth Method

The iterative depth method is widely used in many fields, such as state-space search in artificial intelligence. In this method, several rounds of breadth-first search are conducted with an increasing depth limit (i.e., an increasing initial TTL value), and the process terminates when the query results meet the requirements or the maximum depth limit is reached. Therefore, if satisfactory results can be achieved at a depth less than D, the querying of many nodes is avoided and enormous resource consumption is saved compared with breadth-first search directly at depth D. In iterative depth search, an iteration strategy must first be chosen to specify the depth of each iteration. Under the iteration strategy P = {a, b, c}, a query request with TTL = a is first sent by the source node S to its neighboring nodes to launch a breadth-first search at depth a, and a node at depth a stores the query message temporarily after receiving and processing it instead of discarding it [5]. As a result, queries are "frozen" at all nodes that are a hops away from the source node S. The source node S receives the results returned from the nodes that processed the query; after a waiting time W, if the query results are satisfactory the search terminates; if not, another iteration starts a breadth-first search at depth b. In the ideal case, the iterative depth method can considerably reduce the number of nodes queried and the resources consumed by the search. In a less than ideal situation, however, large numbers of "resend" messages are sent in the P2P network for the iteration at depth c, which takes up more network bandwidth than a breadth-first search directly at depth c and is more likely to increase the processing burden of the peer nodes.

Directed Breadth-first Search (DBFS)

In directed breadth-first search (Directed BFS), instead of sending query requests to all neighboring nodes, a peer node selects the neighboring nodes that have responded favorably to messages, thus reducing the resource consumption of the query while maintaining the quality of the query results. This method is based on the hypothesis that neighboring nodes which gave favorable responses in the past will also give favorable responses in the future. To select the most effective neighboring nodes, a peer node collects information about its neighbors, such as the number of results they returned in past queries and their past response times or online times. After analyzing these statistics, the best-performing neighbors are selected as the recipients of query messages, for instance: (1) the neighbors that returned the most results in past queries; (2) the neighbors that responded fastest in past queries; (3) the neighbors with the longest online time; (4) the neighbors with a high degree of connectivity. As query requests are sent to only some of the neighboring nodes, directed breadth-first search reduces the number of nodes queried and the number of redundant messages. However, the selected neighbors are those with a good past record; if the needed resources happen to be located at neighbors with a poor past record, directed breadth-first search will find nothing, and this is its disadvantage.

Random Breadth-first Search (RBFS)

Similar to directed breadth-first search, in random breadth-first search (Random BFS) a peer node sends messages to a randomly chosen subset of its neighbors rather than to all of them, a mechanism that can reduce the extra resource consumption by an order of magnitude while increasing the response time by an order of magnitude. Two termination mechanisms can be adopted for random breadth-first search: TTL and verification. With the former, the search terminates automatically when the search depth reaches a certain number of hops; with the latter, the source node is asked at each hop of depth whether forwarding should continue or stop. The steps of the random breadth-first search algorithm are as follows (a simulation sketch is given after Table 2):
(1) The source node creates a query request message from the query content, and an initial TTL value is set if the TTL mechanism is adopted.
(2) The source node randomly selects one neighboring node, or a subset of its neighboring nodes, from its P2P adjacency list as the recipients of the query messages (and the TTL value).
(3) A neighboring node that receives the query messages searches its own resources and returns query results if it has resources matching the query.
(4) If the verification mechanism is adopted, the node that received the query messages asks the source node whether to forward them and, if so, randomly selects some neighbors to which the messages have not yet been forwarded as the recipients; if the TTL mechanism is adopted, it randomly selects such neighbors as the recipients of the forwarded messages together with the TTL value decremented by 1.
(5) Steps (3) and (4) are repeated until "no forwarding" is verified or the TTL value reaches 0.
In the experimental simulation, with the number of randomly selected neighbors set to 2 and TTL adopted as the termination mechanism of random breadth-first search, the data shown in Table 2 were obtained.
The results show that random breadth-first search requires fewer message forwardings than breadth-first search, thus reducing the cost of the query to a certain extent, while searching somewhat fewer nodes. If we define the search efficiency as e = N/T, where T is the number of message forwardings and N is the number of peer nodes searched, random breadth-first search has the higher search efficiency even though it reaches somewhat fewer peer nodes than breadth-first search. Like directed breadth-first search, random breadth-first search aims to reduce the number of query messages and visited nodes while still obtaining satisfactory query results [6]. Its disadvantage, however, is instability: the success rate of the search depends on the network topology and on the random selection of neighboring nodes.


Table 2. Comparison of the number of message forwardings and the number of searched peer nodes (TTL = 4, N = 28)

Average node degree | Forwardings (BFS) | Forwardings (RBFS) | Nodes searched (BFS) | Nodes searched (RBFS)
D = 4               | 6                 | 2                  | 6                    | 2
D = 6               | 13                | 13                 | 6                    | 6
D = 8               | 18                | 18                 | 8                    | 8
D = 10              | 39                | 35                 | 11                   | 11
D = 12              | 41                | 37                 | 11                   | 11
D = 14              | 80                | 43                 | 16                   | 14
D = 16              | 80                | 43                 | 16                   | 14
D = 18              | 123               | 63                 | 19                   | 18
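The following Python sketch (our own illustration, not code from the paper) simulates TTL-limited BFS-style flooding and random breadth-first search on a random graph and counts message forwardings and reached nodes, mirroring the comparisons in Tables 1 and 2; fanout=2 corresponds to the two randomly selected neighbors used above.

    import random
    import networkx as nx

    def search(g, source, ttl, fanout=None, seed=0):
        """TTL-limited search; fanout=None forwards to every neighbor
        (BFS-style flooding), fanout=k forwards to k random neighbors (RBFS)."""
        rng = random.Random(seed)
        frontier, visited, forwarded = {source}, {source}, 0
        for _ in range(ttl):
            nxt = set()
            for node in frontier:
                nbrs = list(g.neighbors(node))
                if fanout is not None:
                    nbrs = rng.sample(nbrs, min(fanout, len(nbrs)))
                forwarded += len(nbrs)        # every transmission counts
                nxt.update(n for n in nbrs if n not in visited)
            visited |= nxt
            frontier = nxt
        return forwarded, len(visited)

    g = nx.gnm_random_graph(28, 28 * 14 // 2, seed=1)  # N=28, average degree ~14
    print("BFS :", search(g, 0, ttl=4))
    print("RBFS:", search(g, 0, ttl=4, fanout=2))

The search efficiency e = N/T of the two variants can then be compared directly from the returned pairs.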

Other Search Techniques Based on a Forwarding Mechanism

Flooding search, breadth-first search, the iterative depth method, directed breadth-first search and random breadth-first search are all search techniques based on a message-forwarding mechanism: they search the P2P network and obtain resources by forwarding query request messages. The techniques below are also based on a forwarding mechanism.

(1) Intelligent breadth-first search (Intelligent BFS). To reduce the number of query messages and the number of peer nodes visited, intelligent BFS sends queries to the neighboring nodes from which results are most likely to be returned, placing the emphasis on locating targets more effectively. When a peer node receives a query message, it sends the document results to the requesting node if it can provide them; if not, it forwards the query messages to the neighbors from which results are most likely to be obtained. To prevent query messages from spreading endlessly in the P2P network, a maximum query depth is specified. As query results are returned to the node that sent the query request, the peer nodes on the return path record the results and the peer node that provided them. To decide which neighboring peer nodes should receive request messages, a peer node grades its neighbors according to the query messages; each peer node maintains a description of every neighbor that records the results last received and the neighbors that received them.

(2) Adaptive probabilistic search (APS). APS selects the next forwarding node selectively and probabilistically according to the history of the neighboring nodes. Under this mechanism, peer nodes can guide the search effectively according to feedback from past searches, while each peer node holds only part of the information about its neighbors. APS has high accuracy, low bandwidth consumption, good robustness in a dynamic environment and the ability to discover more targets, features that come from its learning mechanism (a toy version of such an index-update rule is sketched after this list).

(3) Routing indices. In this method each node maintains indices of its neighboring nodes and forwards query requests to its best neighbors according to these indices. As a data structure, the indices make it possible to judge, when a query is received, which neighbors are good. An index can be the total number of documents that a neighbor can return, in which case the documents are classified by the indices, or the number of documents of the requested type that a neighbor can return. The disadvantage of this method is that it is not suitable for rapidly and dynamically changing peer-to-peer networks.
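A minimal Python sketch of an APS-style learning rule (our own illustration; the reward and penalty constants are assumptions, not values from the APS paper):

    import random

    class ApsPeer:
        """Keep a relative index per neighbor; pick neighbors with probability
        proportional to the index, then reward hits and penalize misses."""
        def __init__(self, neighbors, init=10):
            self.index = {n: init for n in neighbors}

        def choose(self):
            total = sum(self.index.values())
            r, acc = random.uniform(0, total), 0.0
            for n, w in self.index.items():
                acc += w
                if r <= acc:
                    return n

        def feedback(self, neighbor, hit):
            if hit:
                self.index[neighbor] += 5                             # assumed reward
            else:
                self.index[neighbor] = max(1, self.index[neighbor] - 2)  # assumed penalty

    peer = ApsPeer(["n1", "n2", "n3"])
    n = peer.choose()
    peer.feedback(n, hit=False)   # an unsuccessful walk lowers n's index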


Conclusion

This paper first introduces flooding search and analyzes its advantages and disadvantages; it then describes breadth-first search (BFS), obtained by applying the breadth-first traversal algorithm of graph theory to P2P networks, and compares BFS with flooding search in terms of redundant messages using experimental data gathered from simulation. It then considers three improvements of BFS, describing and analyzing the iterative depth method, directed breadth-first search (DBFS) and random breadth-first search (RBFS), and finally gives a brief introduction to several other search techniques based on a forwarding mechanism.

References

[1] B. Yang: Improving Search in Peer-to-Peer Networks. Proc. 22nd Int'l Conf. Distributed Computing Systems, IEEE CS Press, p. 5-15 (2002).
[2] V. Kalogeraki: A Local Search Mechanism for Peer-to-Peer Networks. Proc. 11th Int'l Conf. Information and Knowledge Management, ACM Press (2002).
[3] F.S. Annexstein, K.A. Berman, M.A. Jovanovic, et al.: Indexing Techniques for File Sharing in Scalable Peer-to-Peer Networks. Proceedings IEEE ICCCN 2002 (2003).
[4] M. Harren, J.M. Hellerstein, R. Huebsch, et al.: Complex queries in DHT-based peer-to-peer networks. Proceedings of IPTPS02, Cambridge, USA (2002).
[5] D. Tsoumakos, N. Roussopoulos: Adaptive Probabilistic Search (APS) for Peer-to-Peer Networks. Technical Report CS-TR-4451, University of Maryland (2003).
[6] A. Crespo, H. Garcia-Molina: Routing Indices for Peer-to-Peer Systems. ICDCS, p. 440-442 (2002).

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.398

Application and research of data acquisition based on database technology of LabVIEW

Hu Bing a, Liu Xijun b, Li Shan c

School of Electric and Information Engineering, XiHua University, ChengDu, China

a [email protected], b [email protected], c [email protected]

Key words: LabVIEW; LabSQL; Access database; data acquisition

Abstract. To meet the data-management requirements of automated testing systems, this paper discusses how to access an Access database from the LabVIEW programming environment with the LabSQL database access tool, and gives a concrete example of database management in a data acquisition application. The system communicates over a fieldbus through the VISA programming interface and acquires multi-channel sensor signals. Database technology is then used to realize query, backup and import functions, so that the large amount of data collected by the detection system can be managed conveniently and effectively. The system has been applied in an automated testing system.

Introduction

With the continuous development of technology, automated testing tends to produce massive test data, so a fully functional database management system is needed for unified management; query management of test data is a problem that urgently needs to be solved in the automated testing field. LabVIEW offers a simple graphical interface, intuitive programming, modular data processing, built-in measurement functions and data communication functions, and it has been widely applied to the design of data collection systems. LabVIEW can handle small amounts of data with its text-file read/write functions, but for extensive test data a database is needed to realize the corresponding management functions [1]. This paper discusses the development of a database management system using LabVIEW.

Data Acquisition System Structure

Figure 1 shows a typical multi-channel data acquisition management system. The master-computer management program, designed in LabVIEW, controls the lower-level data acquisition computer over RS485. The system has 16 acquisition channels; a multi-channel analog switch selects the corresponding analog signal input, and an instrumentation amplifier circuit performs the front-end amplification of the input signal. The microprocessor collects data according to the acquisition control commands and uploads the real-time data to the master computer, which completes the database management and operation functions.


Figure 1. Multi-channel data acquisition system

Software Overall Design of the Data Acquisition Management System

The master computer implements the interface of the multi-channel data acquisition management system, designed in LabVIEW. The overall design structure is shown in Figure 2.

Figure 2. Overall design of the data-collection management system

The data acquisition management system includes a data collection part and a database management section. The data collection interface mainly completes the configuration of the physical parameters and the choice of acquisition channel, and collects and displays the zero voltage and channel voltage in real time. The database management interface mainly completes the query and display of matched data; users can also back up, restore and clear the collected data and modify the system password. The flowchart of the data acquisition management system program is shown in Figure 3.


Figure 3. Flowchart of the data acquisition management system program

The program mainly uses the VISA library functions to configure the instrument parameters, sends acquisition instructions to the multi-channel data acquisition system and receives the collected voltage values; it also realizes the I/O interface control of the various instruments in the interface design of the data acquisition management system. The collected data can be given the corresponding processing, and the gathered information is stored in the database through a LabSQL connection. The whole system is simple, intelligent and convenient, and takes security and confidentiality into account.

Design of the Database Management Program of the Data Acquisition Management System

LabVIEW accesses the relevant database through a data source name, so a data source for the acquisition management system must be established through ODBC before the connection is made. Because programming directly against the ODBC API is rather complex, we use the ADO database object model to connect to the database [2,3]. The structure of the layers through which ADO and ODBC access the database of the data acquisition management system is shown in Figure 4.

Figure 4. ODBC access database structure hierarchy


When the data acquisition management system connects to the Access database, the data source is identified by a DSN (data source name). The DSN is set up through the ODBC administrator in the Windows control panel and designates the Access database to be accessed. By setting the ConnectionString, the system establishes a connection between ADO and the DSN in order to visit the database of the multi-channel data acquisition management system [4]. Through the access interface provided by LabSQL, SQL statements can then be executed directly to enter, display, query and modify the collected data. The program design flow chart is shown in Figure 5.

Figure 5. Flow chart of the database management program design

The system establishes a connection with ADO by calling "ADO Connection Create.vi", opens the data source of the acquisition management system with "ADO Connection Open.vi" (the data source being specified by the DSN through "ConnectionString"), and uses "SQL Execute.vi" to execute the operations and commands of the database [5-7]. For example, insert into collect ("CollectID", "ChanelID", "status", "Zero", "Value", "delta") values ('…') writes the collection number, acquisition channel number, collecting state, zero voltage value, channel voltage value and their difference into the corresponding columns of the database. Finally, the system uses "ADO Connection Close.vi" to close the connection with the system database, completing the database operation management of the data acquisition management system. The working interface of the multi-channel data acquisition management system is shown in Figure 6. A Python equivalent of this connect-execute-close sequence is sketched below.
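For readers without LabVIEW at hand, the same sequence can be sketched in Python with the pyodbc module (our own illustration; the DSN name "collectdb" and the sample column values are assumptions, and the collect table follows the example above):

    import pyodbc

    # open the data source registered in the Windows ODBC administrator
    conn = pyodbc.connect("DSN=collectdb")   # like ADO Connection Create/Open
    cur = conn.cursor()

    # the same INSERT as executed by "SQL Execute.vi" in the LabVIEW program
    cur.execute(
        "INSERT INTO collect "
        "(CollectID, ChanelID, status, Zero, Value, delta) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        (1, 3, "ok", 0.002, 1.374, 1.372),
    )
    conn.commit()

    # query the stored records back for display
    for row in cur.execute("SELECT CollectID, ChanelID, Value FROM collect"):
        print(row)

    conn.close()                             # like ADO Connection Close.vi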


Figure 6. Working interface of the multi-channel data acquisition management system

Conclusion

This paper has shown how to call the ADO controls through LabVIEW's external ActiveX program interface to connect to ODBC, and how to use the SQL language to realize database access for a multi-channel acquisition management system. Within the acquisition and management system, the user can not only configure the acquisition channels and collect and display the selected test data, but also query, delete and print the historical data. The whole system is stable, has good real-time performance and satisfies all the requirements of the data management design. It has wide application prospects in the automated testing field.

Acknowledgment

The work is supported by the Key Laboratory of Signal and the key scientific research project of Xihua University (No. zg0720901).


Summary

Hu Bing: Ph.D., currently an associate professor in the School of Electrical and Information Engineering of Xihua University, China. His research interests are modern measurement and instrumentation, signal processing and embedded systems. Email: [email protected]. Telephone: 13088075258.

Liu Xijun: currently working toward the M.A. degree as a graduate student in the School of Electrical and Information Engineering of Xihua University, China. His special fields of interest include modern measurement and control technology. Email: [email protected]. Telephone: 15202851423.

Li Shan: an M.S. student at Xihua University, Sichuan. Her research interests include communications and control technology. Email: [email protected]. Telephone: 15108308323.

References

[1] Chen Xihui, Zhang Yinhong: LabVIEW 8.20 Programming: From Approaches to Mastery. Beijing: Qinghua University Press (2007).
[2] Hu Jinhua, Zhang Wei, Yao Dongming: The Application of Database Technology Based on LabVIEW in an Automatic Test System. Chinese Journal of Scientific Instrument, 2008, 29(4), 361-364.
[3] Hou Guobing, Wang Kun, Ye Jixin: LabVIEW 7.1 Programming and Virtual Instrument Design. Beijing: Qinghua University Press (2005).
[4] Robert H. Bishop: The LabVIEW 6i Practical Tutorial (Chinese translation). Beijing: Electronic Industry Press (2003).
[5] Wen Hao, Dong Xiaorui, Ma Yucheng, Nan Jinrui: The Research of Database Connection Methods in LabVIEW Based on ADO. Computer Application and System Modeling (ICCASM), 2010 International Conference on, 2010, 229-233.
[6] Jing Junfeng, Nie Luhua, Wang Bo, Li Jiakun: Remote Laboratory Data Management System Based on LabVIEW. Measuring Technology and Mechatronics Automation (ICMTMA), 2010 International Conference on, 2010, 1016-1019.
[7] Xue Deqing, Yao Shifeng, Zhang Yanbin, Cai Jijun: A Study of the Mathematical Model of Venture Capital Syndication. Science Technology and Engineering, 2005, 5(20), 1567-1569.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.404

The Filling Algorithm for Scanning Based on the Chain Structure

Wang Weiqing

Department of Information Management, Southwest University Rongchang Campus, Chongqing 402460, China

Keywords: scanning lines; data structures; algorithm; scanning chain

Abstract: Using the idea of the Bresenham line-drawing algorithm, the proposed method generates the basic graphics and, at the same time, creates the regional points of the scanning lines; the region can then be filled directly according to these regional points, without judging or computing the other pixels inside the region. The time complexity of the algorithm is thus markedly improved. Moreover, during the filling of a polygon only the coordinates of the boundary points and of the regional points are stored, not the other pixels inside the region, so only a small storage space is needed and the space complexity is improved as well.

Introduction

Much work has been done on region-filling algorithms; the commonly used ones are the ordered-edge-table filling algorithm, the seed-filling algorithm, the polygon-flag filling algorithm [1-6] and the scan-line filling algorithm [7]. Seed-filling algorithms, whether the simple seed-filling algorithm or the scan-line seed-filling algorithm, repeatedly have to re-examine pixels that have already been filled; the seed-filling algorithm based on chain codes has to judge the boundary chain code, needs backtracking during the filling process, and cannot finish the search in one pass. The scan-line filling algorithm intersects the scanning lines with the filled area and must build an ordered edge table and an active edge table, so its data structures are complex and its computational efficiency is low. In view of the shortcomings of the above filling algorithms, and combining them with the basic graphics-generation algorithm, we propose a new scan-line filling algorithm based on the chain structure (LCFA). While generating the basic graphics, the algorithm also generates the regional points of the scanning lines and can fill the area directly, so it needs less storage space.

The Theory of the Chain-based Scanning-filling Algorithm

The basic idea of the algorithm is to start from the boundary of the drawing area and obtain the coordinates x and y of each boundary point, where y is the scan-line row number of the boundary and x is the regional point on that scanning line. The x value of the point is stored in the singly linked list of the scanning line containing y. Finally, the drawing area is filled according to the data in the linked list of each scanning line.

Related Definitions

Definition 1 (Boundary point): an intersection point of a scanning line with the closed polygon is called a boundary point; these points determine the filling area on the scanning line.

Definition 2 (Scanning chain): the singly linked list composed of the boundary points of one scanning line is called a scanning chain.

Definition 3 (Scanning head pointer): the head-node pointer pointing to a given scanning chain is called the scanning head pointer of that chain.

The Data Structures Used by the Algorithm

Different polygonal regions have different numbers of vertices. To improve the utilization of storage space, the vertices are stored in a singly linked list whose nodes have three fields: the x and y fields store the x and y coordinates of a vertex, and the right field links to the next vertex. The structure of such a node is defined below (nodes 2 and 3 in Fig. 1). To record the number of polygon vertices, a head node with two fields is used: the num field holds the number of nodes of the polygon, and the right field points to the first vertex of the polygon. The structure of the head node of the polygon list is defined below (node 1 in Fig. 1).


To improve the filling efficiency, the intersections of a scanning line with the polygon boundary are also stored in a singly linked list. Each node has two fields: the x field holds the x coordinate of the intersection of the scanning line with the polygon boundary, and the right field links to the next boundary point of the scanning line; the structure of this node, qjdian, is defined below (nodes 2 and 3 in Fig. 2). At the same time, since the number of scanning lines differs between graphic areas, another list head node is created to store each scanning line and its number of nodes. These head nodes also have two fields: num is the number of nodes in the scanning line and right points to the first intersection in the scanning line. The structure of a scanning-head node is defined below (node 1 in Fig. 2):

    struct ddian  { int x; int y; ddian *right; };   // vertex node: coordinates and link
    struct qjdian { int x; qjdian *right; };         // boundary-point node of a scanning line
    struct nddian { int num; ddian *right; };        // head node of the vertex list
    struct nqjdian{ int num; qjdian *right; };       // head node of a scanning chain

Fig. 1 Single-chain structure of a vertex (head node holding num, followed by nodes holding x1 y1, x2 y2, ...)

Fig. 2 Single-chain structure of a boundary point (head node holding num, followed by nodes holding x1, x2, ...)

Chain-scanning-filling Algorithm and Its Complexity Analysis

The Theory of the Algorithm. As an example, consider the filling process of the polygon shown in Fig. 2. With the singly-linked-list storage structure above, a polygon can be filled by the following steps:
① Calculate the maximum and minimum vertical ordinates maxy and miny of the vertices P1P2...P6 of the polygon.
② Create a pointer array of maxy−miny+1 scanning-head nodes; the array points to the maxy−miny+1 scanning chains.
③ Call the Bresenham line-drawing algorithm to calculate the coordinates of each point (x, y) of the edge P1P2 excluding the endpoints, create a boundary node whose data field is x, and append the boundary node to the end of scanning chain y−miny.
④ Create the boundary points for the endpoint of each edge of the polygon; the data field is x, and the boundary node is appended to the end of scanning chain y−miny. The number of boundary points created for an endpoint is determined as follows: if the two edges meeting at the endpoint both lie below it, the endpoint creates no boundary point; if both edges lie above it, the endpoint creates two boundary points; if one edge lies above and the other below, the endpoint creates exactly one boundary point.
⑤ Scan the boundary points of each scanning chain and fill with the filling color the span from each odd boundary point to the following even boundary point; do not fill the span from an even boundary point to the following odd one.

Accordingly, the LCFA algorithm is as follows:
1) n=maxy-miny+1;  // determine the number of scanning chains
2) for(int i=0;i<n;i++)
3) { smx[i]->num=0; smx[i]->right=NULL; }  // smx[i] is a scanning-head node nqjdian
4) p=nd->right; p1=p->right;  // nd is the linked list of polygon vertices
5) BresenhamQujie(p->x,p->y,p1->x,p1->y,smx,miny);  // create the boundary nodes of the first edge with the Bresenham algorithm
6) while(p1->right != NULL)
7) { p=p->right; p1=p1->right; BresenhamQujie(p->x,p->y,p1->x,p1->y,smx,miny); }  // create the boundary nodes of the remaining edges
8) p=p1; p1=nd->right;


9) BresenhamQujie(p->x,p->y,p1->x,p1->y,smx,miny);  // create the boundary nodes of the closing edge with the Bresenham algorithm
10) for(i=miny;i<=maxy;i++)
11) Tianchong(smx[i-miny],i,color);  // fill each scanning chain

Within BresenhamQujie, each boundary point produced by the Bresenham iteration is stored in a new node dd (dd->right=NULL; dd->x=x;), appended to its scanning chain by Adddian(smx[y-miny],dd), and the decision variable is updated by dk=dk-2*dx; when y0>y1, the endpoint x1 is appended to chain y1-miny in the same way (dd->right=NULL; dd->x=x1; Adddian(smx[y1-miny],dd);).

The filling function Tianchong scans the boundary points of one scanning chain and applies the odd-even rule of step ⑤:

    void Tianchong(nqjdian *nd, int y, int color)
    { qjdian *p; int i=1, j, x;
      if(nd->num!=0)
      { p=nd->right;
        while(p!=NULL)
        { if(i%2==1) { x=p->x; }                               // an odd boundary point opens a span
          else { for(j=x; j<p->x; j++) SetPixel(j,y,color); }  // an even one closes it: fill between the odd x and the even x
          p=p->right; i++;
        }
      }
    }

Time and Space Complexity Analysis. The work of the LCFA algorithm is mainly concentrated in the key function BresenhamQujie. Suppose the horizontal ordinates of the beginning and end points of an edge are x1 and x2, so that the number of pixels on the edge is n = x2 − x1; then the time needed to create the boundary points of the edge is O(n). If the polygon has m edges, the time needed to create all boundary points is $O\big(\sum_{i=1}^{m} n_i\big)$, where $n_i$ is the number of points on edge i. The other key function of LCFA is Tianchong: if the maximum and minimum x values of the boundary on one scanning line are max and min and k = max − min, the time complexity of Tianchong is O(k). Let maxy and miny be the maximum and minimum ordinates of the boundary points of the polygon, so that the number of scanning lines is ky = maxy − miny. From steps 2)-3) of the LCFA algorithm, the time to initialize the scanning chains is T1 = ky; from steps 5)-9) and the complexity analysis of BresenhamQujie above, the time to create the boundary points is $T_2 = O\big(\sum_{i=1}^{m} n_i\big)$; from steps 10)-11) and the complexity analysis of Tianchong, the filling time of the polygon is T3 = ky × O(k). Let maxx and minx be the maximum and minimum x coordinates of the polygon and kx = maxx − minx; then T3 = ky × kx.

From the above analysis, the total computing time of algorithm LCFA is $T = T_1 + T_2 + T_3 = k_y + O\big(\sum_{i=1}^{m} n_i\big) + k_y \times k_x$, so the time complexity of algorithm LCFA is $O\big(k_y \cdot k_x + \sum_{i=1}^{m} n_i\big)$.

During the filling process, algorithm LCFA only needs to store the vertices and the boundary points of the polygon. From the data structure definitions above, the data structure of a polygon vertex contains two data fields and one pointer field, and the data structure of a boundary point contains one data field and one pointer field. From the time-complexity analysis above, the number of polygon edges is m, so the number of vertices is m + 1, and the number of boundary points is $\sum_{i=1}^{m} n_i$. Therefore the storage space of algorithm LCFA is $3(m+1) + \sum_{i=1}^{m} n_i$, and its space complexity is $O\big(m + \sum_{i=1}^{m} n_i\big)$.
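A compact Python sketch of the whole LCFA pipeline (our own illustration: the scanning chains are built from the polygon edges with a simple edge walk, where the paper uses an integer Bresenham walk, and each chain is then filled by the odd-even rule; set_pixel stands in for SetPixel):

    def build_chains(verts):
        """For each scan line y, collect the x of every edge crossing, using
        the half-open rule y in [min(y0,y1), max(y0,y1)) so that vertices are
        counted the right number of times (cf. step 4 of the algorithm)."""
        ys = [y for _, y in verts]
        miny, maxy = min(ys), max(ys)
        chains = [[] for _ in range(maxy - miny + 1)]
        n = len(verts)
        for i in range(n):
            (x0, y0), (x1, y1) = verts[i], verts[(i + 1) % n]
            if y0 == y1:
                continue                    # horizontal edges add no crossings
            if y0 > y1:
                x0, y0, x1, y1 = x1, y1, x0, y0
            for y in range(y0, y1):         # excludes the upper endpoint
                x = x0 + (x1 - x0) * (y - y0) / (y1 - y0)
                chains[y - miny].append(int(round(x)))
        return miny, chains

    def fill(verts, set_pixel):
        miny, chains = build_chains(verts)
        for dy, chain in enumerate(chains):
            chain.sort()
            for xa, xb in zip(chain[0::2], chain[1::2]):   # odd-even rule
                for x in range(xa, xb):
                    set_pixel(x, miny + dy)

    pixels = set()
    fill([(0, 0), (8, 0), (8, 5), (0, 5)], lambda x, y: pixels.add((x, y)))
    print(len(pixels))   # 8 * 5 = 40 pixels for the axis-aligned rectangle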

The Simulation Experiment and Its Analysis

We experiment with a two-dimensional random sparse matrix model to compare the New Seed Fill Algorithm [1] with the algorithm LCFA. In order to produce polygon filling areas with practical features, we draw the polygons on the screen with the mouse; meanwhile, to compare the computing time of the two algorithms, we compile statistics of the number of pixels in the filling field on each scanning line. Suppose the x values on scanning line i are x1, x2, x3, ..., xn (where x1 …

… let the step size ∆ satisfy ∆ = τ/m for some integer m, and let M be an integer with M∆ > T. The explicit discrete EM approximate solution y(k∆), k ≥ −m, is then defined as follows:

$$y(k\Delta)=\xi(k\Delta),\quad -m\le k\le 0,$$
$$y((k+1)\Delta)=y(k\Delta)+f(y_{k\Delta},r_k^{\Delta})\Delta+g(y_{k\Delta},r_k^{\Delta})\Delta w_k,\quad 0\le k<M,$$

where $\Delta w_k=w((k+1)\Delta)-w(k\Delta)$ and $y_{k\Delta}=\{y_{k\Delta}(\theta):-\tau\le\theta\le 0\}$ is a $C([-\tau,0];R^m)$-valued random variable defined by

$$y_{k\Delta}(\theta)=\frac{\Delta-(\theta-i\Delta)}{\Delta}\,y((k+i)\Delta)+\frac{\theta-i\Delta}{\Delta}\,y((k+i+1)\Delta),\quad i\Delta\le\theta\le(i+1)\Delta,\ i=-m,-m+1,\dots,-1.$$

We hence have $|y_{k\Delta}(\theta)|\le|y((k+i)\Delta)|\vee|y((k+i+1)\Delta)|$. We therefore obtain

$$\|y_{k\Delta}\|=\max_{-m\le i\le 0}|y((k+i)\Delta)|\quad\text{for any } k=-1,0,1,2,\dots,M-1.$$

In our analysis it will be more convenient to use continuous-time approximations. We hence introduce the $C([-\tau,0];R^m)$-valued step processes

$$\bar y_t=\sum_{k=0}^{M-2}y_{k\Delta}\,1_{[k\Delta,(k+1)\Delta)}(t)+y_{(M-1)\Delta}\,1_{[(M-1)\Delta,M\Delta)}(t),\qquad \bar r_t=\sum_{k=0}^{M-1}r_k^{\Delta}\,1_{[k\Delta,(k+1)\Delta)}(t),$$

and we define the continuous EM approximate solution as follows:

$$y(t)=\begin{cases}\xi(t), & -\tau\le t\le 0,\\ \xi(0)+\int_0^t f(\bar y_s,\bar r(s))\,ds+\int_0^t g(\bar y_s,\bar r(s))\,dw(s), & 0\le t\le T.\end{cases}\qquad(3)$$

Clearly $y(k\Delta)=\bar y(k\Delta)$; that is, the discrete and continuous EM approximate solutions coincide at the grid points. It is then obvious that $\|y_{k\Delta}\|\le\|\bar y_{k\Delta}\|$ for any $k=0,1,2,\dots,M-1$; moreover, for any $t\in[0,T]$, $\sup_{0\le t\le T}\|\bar y_t\|\le\sup_{-\tau\le s\le T}|y(s)|$. This property will be used frequently in what follows without further explanation. In this paper we impose the following hypotheses:

(H1) (Linear Growth Condition). There is a constant $K>0$ such that $|f(\varphi,i)|^2\vee|g(\varphi,i)|^2\le K(1+\|\varphi\|^2)$ for $\varphi\in C([-\tau,0];R^m)$ and $i\in S$.

(H2). $\xi\in L^p_{F_0}([-\tau,0];R^m)$ for any $p\ge2$, and there exists a nondecreasing function $\alpha(\cdot)$ with $\alpha(s)\to0$ as $s\to0$ such that $E\big(\sup_{-\tau\le s\le t\le 0}|\xi(t)-\xi(s)|^2\big)\le\alpha(t-s)$.

From Mao [9] we can easily show the boundedness of the solutions.

Theorem 1. Under (H1) and (H2), for any $p\ge2$ and any $T>0$,
$$E\Big(\sup_{-\tau\le t\le T}|x(t)|^p\Big)\vee E\Big(\sup_{-\tau\le t\le T}|y(t)|^p\Big)\le H,$$
where H is a constant dependent on ξ, p, K and T but independent of ∆.

We therefore have E ( sup || y k ∆ − y ( k −1) ∆ ||2 ) ≤ E ( sup | y (k ∆) − y ((k − 1)∆) |2 ). 0≤ k < M

− m≤ k < M

When − m ≤ k ≤ 0 , by (H2), we have E ( sup | y (k ∆) − y ((k − 1)∆) |2 ) ≤ α (∆). When 1 ≤ k < M , we − m≤ k ≤0

obtain E ( sup | y (k ∆) − y ((k − 1)∆) |2 ) ≤ 2∆ 2 E ( sup | f ( y ( k −1) ∆ , rk∆ ) |2 ) + 2 E ( sup | g ( y ( k −1) ∆ , rk∆ )∆wk |2 ) . 1≤ k < M

1≤ k < M

1≤ k < M

By (H1) and Theorem 1, we computer E ( sup | f ( y ( k −1) ∆ , rk∆ ) |2 ) ≤ KE ( sup (1+ || y ( k −1) ∆ ||2 )) ≤ K (1 + H ) . 1≤ k < M

1≤ k < M

By the Holder inequality, Theorem 1 and E | ∆wk |2 r = (2r − 1)!!∆ r , for any integer r > 1 ,

E ( sup | g ( y ( k −1) ∆ , rk∆ )∆wk |2 ) ≤ [ E ( sup | g ( y ( k −1) ∆ , rk∆ ) |2 r /( r −1) )]( r −1)/ r [ E ( sup | ∆wk |2 r )]1/ r 1≤ k < M

1≤ k < M

0≤ k < M M −1

≤ [ E ( sup | K (1+ || y k ∆ ||2 ))( r −1)/ r )]r /( r −1) [ E ( ∑ | ∆wk |2 r )]1/ r 0≤ k < M

:= D(r )∆

( r −1)/ r

k =0

,

Yanwen Wu

425

where D(r ) = [21/( r −1) K r /(1− r ) (1 + H )]( r −1/ r ) ((2r − 1)!!)1/ r . Therefore, E ( sup || y k ∆ − y ( k −1) ∆ ||2 ) ≤ E ( sup | y (k ∆) − y ((k − 1)∆) |2 ) + E ( sup | y ( k ∆) − y (( k − 1) ∆) |2 ) 0≤ k < M

− m≤k ≤0

1≤ k < M

( r −1)/ r

≤ α (∆) + [2 K (1 + H )∆ + 2 D(r )]∆ :=β (∆). This proof is therefore complete. From Lemma1 and [8], we can similarly prove the following result and we omit the proof here. Lemma 2. Under (H1) and (H2), then for any integer r > 1, 2

E ( sup || y s − y ( s ) ||2 ) ≤ ζα (2∆) + ζ (r )∆ ( r −1)/ r := γ (∆) 0 ≤ s ≤T

where ζ is a constant independent of r and ∆, and ζ (r ) is constant dependent on but independent of ∆ . From [10], we can obtain the following result. Lemma 3. Under (H1), then there is a constant C , which is independent of ∆ such that T

T

0

0

E ∫ | f ( y s , r ( s )) − f ( y s , r ( s )) |2 ds ≤ C ∆ and E ∫ | g ( y s , r ( s )) − g ( y s , r ( s )) |2 ds ≤ C ∆. Convergence under the local Lipschitz condition In this section we shall show the strong convergence of the EM scheme on the HSFDEs (2) under the following non--Lipschitz condition: (H3)(Local Lipschitz Condition). For each integer j ≥ 1 and i ∈ S , there exists a positive constant L j such that | f (ϕ , i ) − f (ψ , i ) |2 ∨ | g (ϕ , i ) − g (ψ , i ) |2 ≤ L j || ϕ −ψ ||2 for ϕ ,ψ ∈ C ([−τ , 0]; R m ) with || ϕ || ∨ || ψ ||≤ j.

Theorem 2. Under (H1), (H2) and (H3), lim E (sup || x(t ) − y ||2 ) = 0, T > 0. ∆→0

0 ≤ t ≤T

Proof. Let j be a sufficient large integer. Define the stopping times u j := inf{t ≤ 0 :|| xt ||≥ j}, v j := inf{t ≤ 0 :|| yt ||≥ j}, ρ j := u j ∧ v j , where we set inf Φ = ∞ as usual. Let e(t ) = x(t ) − y (t ). For any δ > 0, E ( sup | e(t ) |2 ) = E ( sup | e(t ) |2 I{u j >T ,v j >T } ) + E (sup | e(t ) |2 I{u j ≤T or v j ≤T } ) 0 ≤ t ≤T

0 ≤t ≤T

0 ≤t ≤T

≤ E ( sup | e(t ) |2 I{ ρ j >T } ) + 0 ≤t ≤T

Now P(u j ≤ T ) ≤ E ( I{u j ≤T }

2δ p−2 E ( sup | e(t ) | p + 2/( p − 2) P{u j ≤ T or v j ≤ T }). p pδ 0 ≤t ≤T

|| xt || p H H ) ≤ p . Similarly, we have P (v j ≤ T ) ≤ p . Consequently, we have p j j j

E ( sup | e(t ) |2 ) ≤ E ( sup | e(t ∧ ρ j ) |2 ) + 0 ≤ t ≤T

0 ≤ t ≤T

2 p +1δ H 2( p − 2) H + . p pδ 2/( p − 2) j p

Hence, for any 0 ≤ t1 ≤ T ,

E (sup | e(t ∧ ρ j ) |2 ) ≤ 2TE ∫

t1 ∧ ρ j

0

0 ≤t ≤t1

| f ( xs , r ( s )) − f ( y s , r ( s )) |2 ds

+ 2 E (sup ∫

t1 ∧ ρ j

0 ≤t ≤t1 0

| g ( xs , r ( s )) − g ( y s , r ( s )) |2 dw( s )).

Since x(t ) = y (t ) = ξ (t ) when t ∈ [−τ , 0] , we have E (sup | xt ∧ ρ j − yt ∧ ρ j |2 ) ≤ E (sup | e(t ∧ ρ j ) |2 ). 0 ≤t ≤t1

0 ≤t ≤t1

By (H3), lemmas 3.2 and 3.3, we obtain

E∫

t1 ∧ ρ j

0

| f ( xs , r (s )) − f ( y s , r (s )) |2 ds ≤ E ∫

t1 ∧ ρ j

0

| f ( xs , r ( s)) − f ( y s , r ( s)) + f ( y s , r ( s)) − f ( y s , r ( s)) |2 ds t1

≤ 4 L j E ∫ sup | e(r ∧ ρ j ) |2 ds + 4 L jT γ (∆) + 2C ∆. 0 −τ ≤ r ≤ s

426

Manufacturing Systems and Industry Application

By the Doob martingale inequality, we compute
\[ E\Bigl(\sup_{0\le t\le t_1}\Bigl|\int_0^{t\wedge\rho_j}[g(x_s,r(s))-g(\bar y_s,\bar r(s))]\,dw(s)\Bigr|^2\Bigr) \le 16L_j\,E\int_0^{t_1}\sup_{0\le r\le s}|e(r\wedge\rho_j)|^2\,ds + 16L_j T\,\gamma(\Delta) + 8C\Delta. \]
Then we have
\[ E\Bigl(\sup_{0\le t\le t_1}|e(t\wedge\rho_j)|^2\Bigr) \le 8L_j(T+4)\,E\int_0^{t_1}\sup_{0\le r\le s}|e(r\wedge\rho_j)|^2\,ds + 8L_j T(T+4)\,\gamma(\Delta) + 4C\Delta(T+4). \]

The Gronwall inequality yields
\[ E\Bigl(\sup_{0\le t\le T}|e(t)|^2\Bigr) \le \bigl(8L_j T(T+4)\gamma(\Delta)+4C\Delta(T+4)\bigr)e^{8L_j T(T+4)} + \frac{2^{p+1}\delta H}{p} + \frac{2(p-2)H}{p\,\delta^{2/(p-2)}\,j^p}. \]
Given any \(\varepsilon>0\), we can now choose \(\delta\) sufficiently small such that \(2^{p+1}\delta H/p < \varepsilon/3\), then choose \(j\) sufficiently large such that \(2(p-2)H/(p\,\delta^{2/(p-2)}j^p) < \varepsilon/3\), and finally choose \(\Delta\) so small that \(\bigl(8L_j T(T+4)\gamma(\Delta)+4C\Delta(T+4)\bigr)e^{8L_j T(T+4)} < \varepsilon/3\). Thus \(E(\sup_{0\le t\le T}|e(t)|^2) < \varepsilon\), as required. The proof is therefore complete.
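To make the approximation concrete, the following minimal sketch simulates one Euler-Maruyama path of a hybrid SDE with Markovian switching. It is not the authors' code: the drift f, diffusion g and generator Q are hypothetical illustrations, and the delay/functional dependence of the HSFDEs (2) is dropped for brevity.

```python
# Minimal sketch (assumptions labelled above): EM scheme for
# dx = f(x, r) dt + g(x, r) dw with a two-state Markov chain r(t).
import numpy as np

rng = np.random.default_rng(0)
T, dt = 1.0, 1e-3
steps = int(T / dt)
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])    # hypothetical generator of r(t)
P = np.eye(2) + Q * dt                      # one-step transitions, first order in dt

f = lambda x, i: -x if i == 0 else 0.5 * x  # regime-dependent drift (illustrative)
g = lambda x, i: 0.2 if i == 0 else 0.4     # regime-dependent diffusion (illustrative)

x, r = 1.0, 0
for _ in range(steps):
    r = rng.choice(2, p=P[r])               # simulate the switching state r(k*dt)
    dw = rng.normal(0.0, np.sqrt(dt))       # Brownian increment with variance dt
    x += f(x, r) * dt + g(x, r) * dw        # EM update: y((k+1)*dt)
print(x)
```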

Acknowledgements
The work was supported by the Foundation of Wuhan Polytechnic University under grants 2010Q23 and XM2010025.

References
[1] M. Mariton: Jump Linear Systems in Automatic Control (Marcel Dekker, New York 1990).
[2] C.T.H. Baker, E. Buckwar: Numerical analysis of explicit one-step methods for stochastic delay differential equations, LMS J. Comput. Math., Vol. 3 (2000), p. 315-335.
[3] Y. Hu, S.E.A. Mohammed, F. Yan: Discrete-time approximations of stochastic delay equations: the Milstein scheme, Ann. Probab., Vol. 32 (2004), p. 265-314.
[4] X. Mao: Numerical solutions of stochastic functional differential equations, LMS J. Comput. Math., Vol. 6 (2003), p. 141-161.
[5] X. Mao: Stochastic Differential Equations and Applications (2nd Edition, Horwood 2007).
[6] X. Mao, S. Sabanis: Numerical solutions of SDDEs under local Lipschitz condition, J. Comput. Appl. Math., Vol. 151 (2003), p. 215-227.
[7] Y. Shen, Q. Luo, X. Mao: The improved LaSalle-type theorems for stochastic functional differential equations, J. Math. Anal. Appl., Vol. 318 (2006), p. 134-154.
[8] F. Wu, X. Mao: Numerical solutions of neutral stochastic functional differential equations, SIAM J. Numer. Anal., Vol. 46 (2008), p. 1821-1841.
[9] X. Mao, C. Yuan: Stochastic Differential Equations with Markovian Switching (Imperial College Press, UK 2006).
[10] C. Yuan, W. Glover: Approximate solutions of stochastic differential delay equations with Markovian switching, J. Comput. Appl. Math., Vol. 194 (2006), p. 207-226.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.427

A Discrete Data Fitting Model Fusing Genetic Algorithm
Tongrang Fan 1,a, Yongbin Zhao 1,b and Lan Wang 1,c
1 School of Information Science and Technology, Shijiazhuang Tiedao University, China
a [email protected], b [email protected], c [email protected]
Keywords: discrete data; data fitting; Least Median of Squares Regression; Genetic Algorithm

Abstract. To address the problems of the least squares method (LSM) when fitting curves in application domains, this paper builds a new model that uses Least Median of Squares (LMS) regression to analyze error points, pretreats the dynamic measuring errors, and then obtains the fitting curves of the test data. The model is applied to electromotor parameter testing, which includes load testing and no-load testing. Experiments show that the model removes the influence of outlier points, improves the quality of the fitted curve so that it better reflects the characteristics of the motor, and, compared with the least squares method, provides a more accurate fitting curve for small samples with a discrete distribution.

Introduction
Fitting performance curves to discrete data is important in test data processing in many research fields, such as electromotors, sensors and materials. Such curves are used to design and improve products and to reflect product design, related material quality and processing technology. However, the relations among discrete data variables are nondeterministic. The conventional method is to select an appropriate fitting function based on the continuous graph of the discrete data and then compute the fitting coefficients with the Least Squares Method (LSM), but the curve fitted this way is susceptible to noise interference.

In this paper, we propose a discrete data fitting model fusing a Genetic Algorithm (GA). We use Least Median of Squares (LMS) regression fused with a GA to analyze and process the discrete data samples derived from tests, which can detect and delete anomalous points, and then obtain the final fitting curve by applying LMS to the processed data. Finally, the proposed model is applied to small-sample data processing in motor type tests, and the simulation results show that it achieves the desired fitting effect.

Related works
LSM is the most common data fitting model in current testing technology, but it has poor robustness, which results from its minimization of the residual sum of squares: Rousseeuw [1] pointed out that a single disturbance point can alter the fitting results, and the curve can deviate seriously when the disturbance point is far away from the others. Rousseeuw [2] suggested Least Median of Squares Regression, which minimizes the median of the squared residuals instead of their sum. The method is effective for detecting anomalous points; the literature [3] indicates that the fitting results remain unaffected even when up to 50% of the points are anomalous. Although it has better fault tolerance, LMS still has some deficiencies. Firstly, it slows down noticeably on problems with large amounts of data or many coefficients; secondly, its accuracy suffers because of the approximate estimation it relies on; thirdly, it is difficult to apply to nonlinear problems, because it was initially studied in

two-dimensional linear space, and its application is limited to linear space [4][5]. In this paper, with reference to the relevant fusion research [6], we fuse a GA, based on the natural genetic mechanism, into the LMS curve fitting process. The GA's fast searching ability is used to find the constant parameters that LMS requires; it makes up for the deficiencies of LMS and improves the accuracy and efficiency of discrete data fitting.

Model details
Least median of squares regression. Least Median of Squares Regression uses the median of the squared residuals instead of their sum, and its objective is to minimize that median. The method is effective for detecting anomalous points. The regression model underlying LMS is
\[ y_i = x_{i1}\theta_1 + \dots + x_{ip}\theta_p + e_i \quad (i=1,\dots,n). \]  (1)
Fitting a straight line with LSM minimizes the residual sum of squares:
\[ \operatorname*{minimize}_{\theta}\ \sum_{i=1}^{n} r_i^2, \]  (2)
where \(r_i = y_i - \hat y_i\), \(y_i\) is the observed value and \(\hat y_i\) is the estimated value. Fitting a straight line with LMS minimizes the residual median of squares:
\[ \operatorname*{minimize}_{\theta}\ \operatorname*{med}_{i}\, r_i^2. \]  (3)
The fitting effects of the two methods are shown in Fig 1.

Fig 1. The comparison of fitting curves with LSM and LMS.
Comparing (a) with (b), a single disturbance point has altered the trend of the curve fitted with LSM, while the curve fitted with LMS is not affected. Studies indicate that LMS tolerates up to 50% anomalous points without the fitting curve being affected. For this robustness, we choose LMS as the fitting method of our model, and to address the deficiencies mentioned in the Related works section we fuse a GA into it.
Fusing GA. By the definition of the fitness function in a GA, the function value should increase continuously during chromosomal evolution until it reaches its maximum, while LMS minimizes the residual median of squares. Thus, the minimization problem must be converted into a maximization problem when solving the LMS problem with a GA, so the fitness function is
\[ f = C - \operatorname{med}\,(y_i - y_i')^2, \]  (4)
where C is a large positive number, \(y_i\) is the measured value, and \(y_i'\) is the theoretical curve value corresponding to \(y_i\).

The discrete data fitting model fusing GA fits the N test points; the algorithm steps are as follows (a code sketch follows this list):
Step 1: Determine the fitness function.
Step 2: Set the population size, determine an initial population with M individuals and set the number of cycle steps N.
Step 3: Calculate the fitness function value of the current population and check the number of remaining cycle steps. If it is 0, delete the anomalous points, re-fit the curve and end the algorithm.
Step 4: Evolve the individuals, performing replication, crossover and mutation, and finally generate a new population.
Step 5: Decrease the step number N by 1 and return to Step 3.
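As a concrete illustration of Steps 1-5, the sketch below fits polynomial coefficients by maximizing the fitness of Eq. (4). It is a minimal sketch, not the authors' implementation: the selection scheme, crossover/mutation operators, the constant C and the parameter range are illustrative assumptions.

```python
# Sketch of GA-fused LMS fitting: chromosomes are polynomial coefficients,
# fitness is Eq. (4). Operators and parameter ranges are illustrative.
import random
import statistics

def fitness(coeffs, xs, ys, C=1e6):
    # Eq. (4): f = C - med (y_i - y_i')^2, y_i' being the polynomial value at x_i
    res = [(y - sum(c * x**k for k, c in enumerate(coeffs)))**2
           for x, y in zip(xs, ys)]
    return C - statistics.median(res)

def ga_lms_fit(xs, ys, degree=3, pop=300, gens=500, pc=0.6, pm=0.04, span=10.0):
    P = [[random.uniform(-span, span) for _ in range(degree + 1)]
         for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda c: fitness(c, xs, ys), reverse=True)
        nxt = P[:pop // 2]                    # replication: keep the fitter half
        while len(nxt) < pop:
            a, b = random.sample(P[:pop // 2], 2)
            child = [(u + v) / 2 if random.random() < pc else u
                     for u, v in zip(a, b)]   # arithmetic crossover
            child = [c + random.gauss(0, 0.1) if random.random() < pm else c
                     for c in child]          # mutation
            nxt.append(child)
        P = nxt
    return max(P, key=lambda c: fitness(c, xs, ys))
```

After a run, points whose LMS residuals are much larger than the rest are deleted and the curve is re-fitted, as in Step 3.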

Experiment
The motor type test is the classical method to determine motor parameters. The main tests are given in GB/T 1032-2005 and IEEE Std 112-1991 [7, 8], including the cold resistance test, no-load test, locked rotor test, load test, temperature rise test and so on, among which the no-load test and load test are two important ones. Therefore, in this paper we choose these two tests as examples to verify the actual effect of the proposed model on motor test data fitting. The quantities to be measured are voltage, current, temperature, rotating speed, torque and so on.
No-load test data fitting. The parameters must be tested under different voltages in the no-load test, while the data under the rated voltage are difficult to measure. We choose the actual data of a three-phase asynchronous motor in a no-load test to check the model. The basic information of the motor is: type F2-315L2-6 0050 120 EFF2 D102 SKF, rated power 132KW, rated voltage 400V, test frequency 50HZ; its no-load test data are shown in Table 1.

Table 1 The data of motor no-load test
No  U0      I0      P0     R0
1   121.34  16.23   963    0.0209
2   156.58  26.56   1084   0.0209
3   183.23  32.62   1236   0.0209
4   278.48  37.4    1962   0.0209
5   359.74  64.75   3320   0.0209
6   400.31  97.8    5160   0.0209
7   438.15  148.9   8735   0.0209
8   458.17  186.9   11925  0.0209

No measurement is available at exactly the rated voltage, and the corresponding parameters are missing, since the applied voltages float between 108.33V and 480.94V. Because the third and fifth points of the no-load test data are anomalous, the voltage-current data are fitted with both the traditional LSM and our proposed model to compare their effects. Equation (5) is the voltage-current characteristic curve obtained with LSM:
\[ y = 0.000015300065x^3 - 0.011787080348x^2 + 3.044823387720x - 206.66653918837. \]  (5)
Using our proposed model, we set the variable values as follows: initial population M=300, iteration times N=500, crossover probability 0.6, mutation probability 0.04. Equation (6) is the voltage-current characteristic curve obtained with our model:
\[ y = 0.000013297600x^3 - 0.009407295400x^2 + 2.238685199600x - 143.251345011000. \]  (6)
The fitting effects of the two methods are shown in Fig 2.

Fig 2. The fitting curves of no-load tests for error analysis.
According to Fig 2, the residuals of the motor no-load test data fitting are shown in Table 2.

Table 2 The residuals of motor no-load test data fitting
No  U0      I0      Residual of LSM  Residual of LMS
1   121.34  16.23   0.350614         -2.589962
2   156.58  26.56   13.280265        1.128933
3   183.23  72.62   -22.993702       -39.708547
4   278.48  37.4    20.181549        0.412235
5   359.74  94.75   -19.177325       -31.012236
6   400.31  97.8    7.032128         0.642166
7   438.15  148.9   2.643281         1.273809
8   458.17  186.9   -0.281077        -0.281077

From Table 2 we can determine that the third and fifth points are the anomalous ones, because their LMS residuals are much larger; they could not be identified from the LSM residuals, which are quite uniform. The curve re-fitted with LMS after deleting the anomalous points is shown in Fig 3(a).

Fig 3. The model's fitting effect validation of no-load tests.
After the anomalous points are preprocessed, the characteristic curve is no longer distorted, and its fitting effect is similar to that of (b), which contains no anomalous points. From the U0-I0 characteristic curve, the current under the rated voltage in the no-load test is 98.10187, which is less than the given value and accords with the motor design requirement.
Load test data fitting. The parameters must be tested under different powers in the load test, while the data under the rated output power are difficult to measure. We choose the actual data of a three-phase asynchronous motor in a load test to check the model. The basic information of the motor is: type F2-132M1-2 0370, rated power 7.5KW, rated voltage 518V, test frequency 60HZ; its load test data are shown in Table 3.

Table 3 The data of motor load tests
No  U1      I1     P1     Pi
1   518.68  3.87   1920   1470.100331
2   518.28  4.68   2940   2457.313102
3   518.78  6.06   4380   3825.379599
4   517.75  8.28   6440   5729.817131
5   518.04  10.24  8160   7275.512514
6   518.7   12.98  10470  9270.244620
7   517.54  15.03  12120  10641.346876

Using our proposed model, we set the variable values as follows: initial population M=300, iteration times N=300, crossover probability 0.6. Equation (7) is the output-input power characteristic curve obtained with our model:
\[ y = 1.133720x + 0.000082. \]  (7)
Equation (8) is the output-input power characteristic curve obtained with LSM:
\[ y = 1.104105x + 646.622565. \]  (8)
The fitting effects of the two methods are shown in Fig 4.

Fig 4. The fitting curves of load tests for error analysis.
According to Fig 4, the residuals of the motor load test data fitting are shown in Table 4.

Table 4 The residuals of motor load test data fitting
No  P1     Pi            Residual of LSM  Residual of LMS
1   1920   1470.100331   349.767954       -253.317205
2   2940   2457.313102   -580.245311      1154.093963
3   4380   3825.379599   490.243991       -43.089090
4   6440   5729.817131   532.943334       56.010559
5   8160   7275.512514   -1480.446388     -1911.603078
6   10470  9270.244620   411.947659       39.865369
7   12120  10641.346876  275.788760       55.688055

From Table 4 we can determine that the second and fifth points are the anomalous ones, because their LMS residuals are much larger; they could not be identified from the LSM residuals, which are quite uniform. The curve re-fitted with LMS after deleting the anomalous points is shown in Fig 5(a).

Fig 5. The model's fitting effect validation of load tests.
Comparing (a) to (b), the characteristic curve obtained after preprocessing the anomalous points is not distorted, and its fitting effect is similar to that of (b), which contains no anomalous points. From the Pi-P1 characteristic curve, the output power Pi is less than the input power P1, that is, less than the given value, which accords with the motor design requirement.

Conclusion and Future Work
In this paper, studying the discrete data fitting problem on the plane, we propose a new data fitting model. The model first performs error analysis with LMS fused with a genetic algorithm, and then re-fits the processed data. In practical application the model is effective, especially for data in a small sample space. It is verified with actual motor test data, and the experimental results show that it has higher precision, lower computational complexity and a better fitting effect than the LSM-based approach. However, further research and improvement are needed, for example on the initial boundary values and the order used in the genetic algorithm.

Acknowledgements
This work is supported by the Hebei Natural Science Foundation (No. F2009000927) and the Project of the Network Platform for Measuring the Motor Parameters (No. 08108141A). The authors would like to thank the reviewers of this paper for their useful comments and suggestions.

References
[1] P.J. Rousseeuw and A.M. Leroy: Robust Regression and Outlier Detection (John Wiley & Sons, New York 1987).
[2] A. Leroy and P.J. Rousseeuw: Technical Report No. 201, Center for Statistics and O.R., University of Brussels, Belgium, 1984.
[3] P. Huang, W. Wang and J.B. Li: Computer Applications and Software, 2001, 18(4), pp. 22-27.
[4] L.M. Desire and L. Kaufman: Analytica Chimica Acta, 1986, 187, pp. 171-179.
[5] H.G. Liu, Y. Luo and Y.C. Liu: Journal of Biomedical Engineering, 2006, 23(4), pp. 873-877.
[6] C.L. Karr, B. Weck, D.L. Massart and P. Vankeerberghen: Engng. Applic. Artif. Intell., 1995, 8(2), pp. 177-189.
[7] GB/T 1032-2005.
[8] IEEE Std 112-1991.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.433

A novel method to calculate the frequency control word of a direct digital synthesizer
Guo Jian a, Zhu Jie b, Liu Jun c, Zhou Li d
Information School, Beijing Wuzi University, Beijing 101149, China
a [email protected], b [email protected], c [email protected], d [email protected]
Key words: Direct Digital Synthesis; Frequency Control Word; Signal Generator

Abstract. When designing a signal generator with adjustable frequency based on direct digital synthesis (DDS) technology, the calculation of the frequency control word is very important. To address the shortcomings of traditional calculation methods, namely large computation, low frequency-setting efficiency, large storage space or significant cumulative frequency-setting errors, a novel way of calculating the frequency control word is studied in this paper. The new method requires neither large computation nor much storage space, and at the same time it decreases the error greatly, which plays an important role in improving the frequency-setting speed and precision of a signal generator.

Introduction
With the extensive application of digital technology in instrumentation and communication systems, a digital method of generating various frequencies from a reference frequency source, namely Direct Digital Synthesis (DDS) technology, emerged. DDS is a new, all-digital frequency synthesis technology that synthesizes the required waveform based on the concept of phase [1]. DDS has been applied in a variety of modern electronic measuring instruments, since it offers high frequency resolution, multiple output frequencies, fast frequency switching with phase continuity, low output phase noise, arbitrary waveform generation and so on [2]. When synthesizing a multi-frequency signal, the algorithm for calculating the frequency control word is very important because it influences performance measures such as the range of output frequencies and the frequency-setting speed.

The Basic Principle of DDS
A DDS chip consists of a clock, a phase increment register, a phase accumulator, a waveform memory, a D/A converter and a low-pass filter [3]. The working principle of DDS is shown in Fig 1.
Fig 1. The diagram of the working principle of DDS

A high-precision clock is adopted as the reference clock source in the DDS chip. A microcontroller is employed as the input controller, and the phase increment register stores the frequency control word corresponding to the output waveform, so as to control the waveform of the signal generator. When a clock pulse arrives, the frequency control word is added to the data stored in the phase accumulator. The latched output of the phase accumulator is used as an address into the waveform memory, and the contents of that address unit give the amplitude of one point of the synthesized waveform. The amplitude value is converted by the D/A converter and filtered by the low-pass filter, and the analog signal meeting the requirement is obtained. When the next clock pulse arrives, the output of the phase accumulator is updated again by adding the frequency control word, which moves the waveform-memory address to the next amplitude point of the synthesized waveform. In this way the phase accumulator retrieves enough points to constitute the entire waveform [4-6].
If the reference clock frequency is fc and the phase accumulator has N bits, the output frequency of the DDS is
\[ f_{out} = \frac{M \cdot f_c}{2^N}. \]  (1)
Here, M is the frequency control word, and its value is preset by the external control circuit. When the reference clock frequency fc and the number of phase accumulator bits are fixed, the output frequency fout is decided by M. From (1), the frequency control word M is obtained as
\[ M = \frac{2^N \cdot f_{out}}{f_c}. \]  (2)
When M is equal to 1, the lowest frequency of the synthesized signal is
\[ f_{out} = \frac{f_c}{2^N}. \]  (3)
The result of (3) is the frequency resolution of the DDS. The highest output frequency is determined by the Nyquist sampling theorem and is fc/2.
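As a quick numeric illustration of (1)-(3), the following two-line check uses assumed example values of fc and N (not taken from any specific device):

```python
# Numeric illustration of Eqs. (1)-(3); fc and N are assumed example values.
fc, N = 2_000_000, 28          # reference clock 2 MHz, 28-bit accumulator
M = 134_218                    # an example frequency control word
f_out = M * fc / 2**N          # Eq. (1): about 1000.0 Hz
resolution = fc / 2**N         # Eq. (3): about 0.00745 Hz (the M = 1 case)
print(f_out, resolution)
```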

The Usual Methods of Calculating the Frequency Control Word M
Suppose the reference clock frequency fc equals 2f Hz, the phase accumulator has N bits, the output frequency range is a Hz to f Hz and the frequency resolution is t Hz; we analyze the calculation of the frequency control word M under these conditions. There are (f−a)/t + 1 optional output frequencies, namely a Hz, (a+t) Hz, (a+2t) Hz, ..., (f−t) Hz, f Hz. Taking a=0.1Hz, f=1MHz, t=0.1Hz as an example, the output frequencies range from 0.1Hz to 1MHz, namely 0.1Hz, 0.2Hz, ..., 999.9998KHz, 999.9999KHz, 1000.0000KHz, so there are about ten million frequency points in total. Here the lowest frequency should also be an integral multiple of the frequency resolution t.

One of the Usual Methods to Compute the Frequency Control Word. First, the value of the frequency control word M corresponding to each of the (f−a)/t + 1 settable frequencies is calculated by hand, and these (f−a)/t + 1 numbers, each represented with N bits, are stored in system memory. After the frequency setting value is accepted by the signal generator, the matching value is looked up among the (f−a)/t + 1 stored numbers and written to the frequency register.
The advantage of this method is its high frequency-setting accuracy, which can reach the frequency resolution fc/2^N of the DDS chip. However, when the system is designed, the amount of data that needs to

be computed is large, the frequency-setting speed is low, and more storage space is required.
Another Usual Method to Compute the Frequency Control Word. With (2), the frequency control words are calculated by programming. First, the frequency control word M1 corresponding to fout = t Hz is computed, and then M1 is truncated to obtain M1'. For a given frequency, the value M written into the frequency register is M = B·M1', where B (= 1, 2, 3, ...) is the quotient of that frequency divided by t Hz, since t is the frequency resolution. Taking t=0.1Hz as an example, when the frequency is 35.4Hz, the value written to the frequency register is M = 354·M1'. This algorithm has advantages such as simple programming, little hand calculation and little storage space, but the frequency-setting error accumulates: as the frequency increases, the absolute frequency error Δ increases too, and its value is
\[ \Delta = \frac{(M_1 - M_1')\,B\,t}{M_1}\ \mathrm{Hz}. \]
For example, if fc = 2MHz, N=28, t=0.1Hz and the output frequency is 1KHz, the absolute error is about 31Hz. Such a large error is not allowed in instrument calibration applications.
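This figure can be verified directly with a small arithmetic sketch under the stated values:

```python
# Checking the ~31 Hz error of the truncation method for fc=2MHz, N=28, t=0.1Hz.
fc, N, t = 2_000_000, 28, 0.1
M1 = t * 2**N / fc             # exact control word for t Hz: 13.4217728
M1t = int(M1)                  # truncated M1' = 13
B = int(1000 / t)              # B = 10000 for a 1 kHz output
delta = (M1 - M1t) * B * fc / 2**N
print(delta)                   # ~31.4 Hz, matching the text
```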

A New Method to Compute the Frequency Control Word
To solve the problems of the two usual methods above, a new algorithm to compute the frequency control word is put forward. Without reducing the frequency-setting speed, the new method saves a large amount of storage space and clearly decreases the absolute frequency-setting error. First, we calculate the truncated values of the frequency control word M corresponding to the output frequencies M1{t, 2t, ..., 9t Hz}, M2{10t, 20t, ..., 90t Hz}, M3{100t, 200t, ..., 900t Hz}, ..., Mi{10^(i-1)·t, 10^(i-1)·2t, ..., 10^(i-1)·9t Hz}, ..., Mq{10^(q-1)·t, 10^(q-1)·2t, ..., 10^(q-1)·9t Hz}, where q = ⌈log10(f/t)⌉. All these values are saved into q arrays M1, M2, ..., Mi, ..., Mq. We can then get the value of the frequency control word M corresponding to a frequency of m Hz from the following steps.


Step 1: Calculate the multiple of m to t: B = m/t. The maximum of B is then f/t (f is the largest output frequency).
Step 2: Express B as B = n1 + n2×10 + n3×10^2 + ... + ni×10^(i-1) + ... + np×10^(p-1), with 1 ≤ i < p, 0 ≤ ni ≤ 9 and np ≠ 0.
Step 3: For a set frequency of m Hz, extract the value M1[n1] from M1, M2[n2] from M2, ..., Mi[ni] from Mi, ..., and Mp[np] from Mp, and then add all these values, so that M = M1[n1] + M2[n2] + M3[n3] + ... + Mi[ni] + ... + Mp[np]. Here, when ni = 0, no data is extracted from Mi.
Taking a=0.1Hz, f=1MHz, t=0.1Hz as an example, when the set output frequency m equals 7893.4Hz, we get the value of the frequency control word M according to the following three steps.
Step 1: Calculate the multiple of m to t: B = m/t = 7893.4/0.1 = 78934.
Step 2: Express B as B = 4 + 3×10 + 9×10^2 + 8×10^3 + 7×10^4.
Step 3: Extract M1[4] from M1, M2[3] from M2, M3[9] from M3, M4[8] from M4 and M5[7] from M5, and add all these values: M = M1[4] + M2[3] + M3[9] + M4[8] + M5[7].
Since each array is calculated by hand, the maximum truncation error of each stored value of the frequency control word is 0.5, and the corresponding frequency error is
\[ \Delta \le \frac{0.5\,f_c}{2^N}\ \mathrm{Hz} = \frac{0.5\,t}{M_1'}\ \mathrm{Hz}. \]
Therefore, the maximum absolute error of this algorithm is
\[ q \times \Delta = \frac{0.5\,t\,q}{M_1'}\ \mathrm{Hz}. \]

Summing up the above, the advantages of the new method are as follows. (1) The amount of data stored in memory is small: there are only q groups, that is, 9×q N-bit binary numbers. (2) Compared with the first usual method, the frequency-setting speed is much faster, since the value is retrieved from only q groups instead of from (f−a)/t + 1 numbers. (3) The new method significantly reduces error accumulation, since the value of q is small, which leads to a smaller absolute frequency error and higher frequency-setting accuracy. A code sketch of the method is given below.
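The following is a minimal sketch of the digit-decomposition lookup method; the table sizing and the use of rounding to model the hand-calculated "within 0.5" arrays are our reading of the text, and all names are hypothetical.

```python
# Sketch of the digit-decomposition method; fc, N, t and f are the example
# values from the text, and round() models arrays accurate to within 0.5.
FC, N, T, F_MAX = 2_000_000, 28, 0.1, 1_000_000

def exact_word(fout):                    # Eq. (2), before truncation
    return fout * 2**N / FC

# q arrays M_i, with M_i[n] the stored word for n * 10**(i-1) * t Hz
# (one extra digit so that the top frequency f is also covered).
q = len(str(round(F_MAX / T)))
tables = [[round(exact_word(n * 10**i * T)) for n in range(10)] for i in range(q)]

def control_word(m_hz):
    B = round(m_hz / T)                  # Step 1: multiple of the resolution t
    M, i = 0, 0
    while B > 0:                         # Steps 2-3: decimal digits index arrays
        B, n = divmod(B, 10)
        M += tables[i][n]                # a zero digit contributes nothing
        i += 1
    return M

print(control_word(7893.4))             # the worked example from the text
```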

Conclusions
As the third generation of frequency synthesis, DDS was developed following direct and indirect frequency synthesis. It is widely used in a variety of instruments because of its superiority in a series of performance measures such as bandwidth, frequency conversion time, frequency resolution, phase continuity and integration. To address the disadvantages of traditional methods for calculating the frequency control word, namely large computation, low frequency-setting efficiency, large storage space or significant cumulative frequency-setting errors, a new calculation algorithm for the frequency control word is developed in this paper. The new method requires neither large computation nor much storage space, and at the same time it decreases the error greatly, which plays an important role in improving the frequency-setting speed and precision of a signal generator.


Acknowledgment
The paper is supported by the Funding Project for Academic Human Resources Development in Institutions of Higher Learning Under the Jurisdiction of Beijing Municipality (PHR201007145, PHR201108311), the Funding Project for Base Construction of Scientific Research of Beijing Municipal Commission of Education (WYJD200902) and the Funding Project for Beijing Excellent Talents (2010D005009000002).

References
[1] P. Yang, D.H. Wu, L.G. Yang: Application of DDS Technology in Sine Wave Function Generator. Computer Measurement & Control, Vol. 16 (2008), p. 1738-1740.
[2] M.C. Zhang, K. Liu: Design of Frequency Modulation Continuous Signal Generator Based on DDS. Guidance & Fuze, Vol. 31 (2010), p. 14-18.
[3] J. Chen: Design of High Frequency Signal Generator Based on AT89S51 and AD9850. Industrial Control Computer, Vol. 23 (2010), p. 118-120.
[4] J.P. Cui, M. Zhao, F. Jiang: Virtual Arbitrary Waveform Generator Based on DDS Technology. Computer Measurement & Control, Vol. 11 (2003), p. 553-555.
[5] S. Marcello, M. Arianna, B. Stefano, D.G. Domenico, S. Adelio, et al.: High Spectral Purity Digital Direct Synthesizer Implementation by Means of a Fuzzy Approximator. Applied Soft Computing, Vol. 4 (2004), p. 241-257.
[6] Q. Sun, Q. Song: Portable Signal Generator Based on Direct Digital Synthesis. Instrument Technique and Sensor, Vol. 4 (2009), p. 67-70.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.438

Stability Analysis for Local Transductive Regression Algorithms
Wei Gao 1,2,a, Yungang Zhang 1,3,b and Li Liang 1,c
1 Department of Information, Yunnan Normal University, Kunming 650092, Yunnan, China
2 Department of Mathematics, Soochow University, Suzhou 215006, Jiangsu, China
3 Department of Computer Science, University of Liverpool, Liverpool L69 3BX, United Kingdom
a [email protected], b [email protected], c [email protected]
Key words: algorithmic stability; transductive learning; cost stability; local transductive regression algorithm; reproducing kernel Hilbert space; pseudo-target.

Abstract. In this paper, the stability of local transductive regression algorithms is studied by adopting a strategy that adjusts the sample set by removing one or two elements from it. A sufficient condition for uniform stability is given. The result of our work shows that if a local transductive regression algorithm uses the square loss, and if the kernel function value K(x,x) has a finite upper bound for any x, then the local transductive regression algorithm that minimizes the standard objective has good uniform stability.

Introduction
The problem of transductive inference [1] was first introduced by Vapnik in 1982. The transductive inference problem can formulate many learning problems in natural language processing, information extraction, computational biology and other fields. In the transductive setting, the learning algorithm receives not only a labeled training set, as in the standard inductive setting, but also a set of unlabeled test points. The goal is to predict the labels of these test points; no other test points will ever be considered. This setting arises in a variety of applications. Often there are orders of magnitude more unlabeled points than labeled ones, the former not having been assigned a label due to the prohibitive cost of labeling. This motivates the use of transductive algorithms, which leverage the unlabeled data during training to improve learning performance. Some recent studies can be found in [2]-[7].

The notion of algorithmic stability can be used to derive bounds that are tailored to specific learning algorithms and exploit their particular properties. A transductive regression algorithm is called stable if, for a mild change of the samples, the regression function does not change too much. [8] introduced and examined a very general family of transductive algorithms, that of local transductive regression (LTR) algorithms, a generalization of the algorithm of [3]; it gave general bounds for the stability coefficients of LTR algorithms and used them to derive stability-based learning bounds for these algorithms. The stability analysis in [8] was based on the notion of cost stability and on convexity arguments. As a continuation of [8], the contribution of this paper is the following: we define uniform cost stability and uniform score stability with respect to S\i' ("leave-one-out") and S\i',j' ("leave-two-out"), i.e., removing the i'-th element from the sample set S and removing the i'-th and j'-th elements from S, and we obtain a sufficient condition under which a local transductive regression algorithm has uniform cost stability. The notions mainly follow [8], and the proofs of our results use techniques similar to those in [8].

Setting
Let X denote the input space and Y a measurable subset of R. In the transductive learning setting, the algorithm receives a labeled training set S = ((x1,y1),...,(xm,ym)) ⊆ X×Y of size m and an unlabeled test set T = (x(m+1),...,x(m+u)) ⊆ X of size u, with X = S∪T. The transductive learning problem consists of accurately predicting the labels y(m+1),...,y(m+u) of the test examples. The specific problem where the labels are real-valued numbers, as in the case studied in this paper, is that of transductive regression. There are two differences compared with standard inductive regression: 1) the learning algorithm is given the unlabeled test examples beforehand; 2) it can possibly exploit the information from the test examples to improve its performance.

The cost of an error of a hypothesis h on a point x labeled with y(x) is denoted by c(h,x). The cost function commonly used in regression is the square loss c(h,x) = [h(x)−y(x)]^2. We shall assume the square loss for the remainder of this paper, but our results generalize to other convex cost functions. The training error \(\hat R(h)\) and test error \(R(h)\) of a hypothesis h are defined as follows:
\[ \hat R(h) = \frac{1}{m}\sum_{k=1}^{m} c(h,x_k), \qquad R(h) = \frac{1}{u}\sum_{k=1}^{u} c(h,x_{m+k}). \]
For any \(i',j'\in\{1,\dots,m\}\), we use \(S^{\setminus i'}\) to denote the sequence obtained from S by removing \(x_{i'}\), and \(S^{\setminus i',j'}\) to denote the sequence obtained from S by removing \(x_{i'}\) and \(x_{j'}\). We shall use the following notions of stability in our analysis.

Definition 1 (Cost stability for "leave one out"): Let L be a transductive learning algorithm, let h denote the hypothesis returned by L for X=(S,T) and h' the hypothesis returned for X=(S',T'), where S' is the sequence obtained from S by removing one point. L is said to be uniformly \(\beta_1\)-stable with respect to the cost function c if there exists \(\beta_1\ge 0\) such that for all \(x\in X\),
\[ |c(h',x)-c(h,x)| \le \beta_1. \]  (1)
Definition 2 (Score stability for "leave one out"): Let L be a transductive learning algorithm, let h denote the hypothesis returned by L for X=(S,T) and h' the hypothesis returned for X=(S',T'), where S' is the sequence obtained from S by removing one point. L is said to be uniformly \(\beta_2\)-stable with respect to its output scores if there exists \(\beta_2\ge 0\) such that for all \(x\in X\),
\[ |h(x)-h'(x)| \le \beta_2. \]  (2)
Definition 3 (Cost stability for "leave two out"): Let L be a transductive learning algorithm, let h denote the hypothesis returned by L for X=(S,T) and h'' the hypothesis returned for X=(S'',T''), where S'' is the sequence obtained from S by removing two points. L is said to be uniformly \(\beta_3\)-stable with respect to the cost function c if there exists \(\beta_3\ge 0\) such that for all \(x\in X\),
\[ |c(h'',x)-c(h,x)| \le \beta_3. \]  (3)
Definition 4 (Score stability for "leave two out"): Let L be a transductive learning algorithm, let h denote the hypothesis returned by L for X=(S,T) and h'' the hypothesis returned for X=(S'',T''), where S'' is the sequence obtained from S by removing two points. L is said to be uniformly \(\beta_4\)-stable with respect to its output scores if there exists \(\beta_4\ge 0\) such that for all \(x\in X\),
\[ |h(x)-h''(x)| \le \beta_4. \]  (4)

We say that a hypothesis set H is bounded by B>0 when \(|h(x)-y(x)|\le B\) for all \(x\in X\) and \(h\in H\). It is easy to see that \(\beta_i\)-score stability (i=2 or 4) implies \(2B\beta_i\)-cost stability for H bounded by B and the square loss.

The LTR algorithm can be viewed as a generalization of the so-called kernel-regularization-based learning algorithms to the transductive setting. The objective function that is minimized is of the form
\[ F(f,S) = \|f\|_K^2 + \frac{C}{m}\sum_{k=1}^{m} c(f,x_k) + \frac{C'}{u}\sum_{k=1}^{u} \tilde c(f,x_{m+k}), \]  (5)
where \(\|\cdot\|_K\) is the norm in the reproducing kernel Hilbert space (RKHS) with associated kernel K, \(C\ge 0\) and \(C'\ge 0\) are trade-off parameters, f is the hypothesis and \(\tilde c(f,x)=(f(x)-\tilde y(x))^2\) is the error of f on the unlabeled point x with respect to a pseudo-target \(\tilde y\). Pseudo-targets are obtained from neighborhood labels y(x) by a local weighted average or by other regression algorithms applied locally. Neighborhoods can be defined as a ball of radius r around each point in the feature space. We denote by \(\beta_{loc}^2\) and \(\beta_{loc}^4\) the score-stability coefficients corresponding to Definition 2 and Definition 4.

Main result and proof
In this section we use the bounded-labels assumption, that is, we shall assume that \(|y(x)|\le M\) for all \(x\in S\) and some M>0. We also assume that \(K(x,x)\le\kappa^2\) for any \(x\in X\). We will use the following bound, based on the reproducing property and the Cauchy-Schwarz inequality, valid for any hypothesis \(h\in H\) and all \(x\in X\):
\[ |h(x)| = |\langle h, K(x,\cdot)\rangle| \le \|h\|_K\sqrt{K(x,x)} \le \kappa\|h\|_K. \]  (6)
Lemma 1 [8]: Let h be the hypothesis minimizing (5). Assume that \(K(x,x)\le\kappa^2\) for any \(x\in X\). Then, for any \(x\in X\), \(|h(x)|\le\kappa M\sqrt{C+C'}\).

Let h be a hypothesis obtained by training on S, h' by training on S' (which removes one point from S), and h'' by training on S'' (which removes two points from S). To determine the cost-stability coefficient \(\beta\), we should upper-bound \(|c(h,x)-c(h',x)|\). Let \(\Delta_1 h = h - h'\). Then, for all \(x\in X\),
\[ |c(h,x)-c(h',x)| = |\Delta_1 h(x)|\,\bigl|(h(x)-y(x))+(h'(x)-y(x))\bigr| \le 2M(1+\kappa\sqrt{C+C'})\,|\Delta_1 h(x)|. \]
As in inequality (6), \(|\Delta_1 h(x)|\le\kappa\|\Delta_1 h\|_K\) for all \(x\in X\); thus, for all \(x\in X\),
\[ |c(h,x)-c(h',x)| \le 2M(1+\kappa\sqrt{C+C'})\,\kappa\|\Delta_1 h\|_K. \]  (7)
Similarly, let \(\Delta_2 h = h - h''\). Then, for all \(x\in X\),
\[ |c(h,x)-c(h'',x)| \le 2M(1+\kappa\sqrt{C+C'})\,\kappa\|\Delta_2 h\|_K. \]  (8)

In the case of \(\tilde c\), the pseudo-targets may depend on the training set S. This dependency matters when we wish to apply convexity arguments to h, h' and h''. For convenience, for any three such fixed hypotheses h, h' and h'', we extend the definition of \(\tilde c\) as follows. For all \(t\in[0,1]\),
\[ \tilde c(th+(1-t)h',x) = \bigl((th+(1-t)h')(x) - (t\tilde y+(1-t)\tilde y')\bigr)^2, \]  (9)
\[ \tilde c(th+(1-t)h'',x) = \bigl((th+(1-t)h'')(x) - (t\tilde y+(1-t)\tilde y'')\bigr)^2. \]  (10)
We use the same convexity property for \(\tilde c\) as for c for any three fixed hypotheses h, h' and h'', as verified by the following lemma.
Lemma 2: Let h be a hypothesis obtained by training on S, h' by training on S' (remove one point from S) and h'' by training on S'' (remove two points from S). Then, for all \(t\in[0,1]\),
\[ t\,\tilde c(h,x)+(1-t)\,\tilde c(h',x) \ge \tilde c(th+(1-t)h',x), \]  (11)
\[ t\,\tilde c(h,x)+(1-t)\,\tilde c(h'',x) \ge \tilde c(th+(1-t)h'',x). \]  (12)
Proof: The result follows by applying the trick used in Lemma 10 in [8].
Lemma 3: Assume that \(|\tilde y(x)|\le M\) for all \(x\in X\). Let S, S' and S'' be three sample sets such that S' is obtained by removing one point from S and S'' by removing two points from S. Let h be the hypothesis returned by the algorithm minimizing the objective function F(f,S), h' the hypothesis obtained by minimizing F(f,S') and h'' the hypothesis obtained by minimizing F(f,S''). Let \(\tilde y\), \(\tilde y'\) and \(\tilde y''\) be the corresponding pseudo-targets. Then for all \(i\in[1,m+u]\),
\[ \frac{C}{m}\bigl[c(h',x_i)-c(h,x_i)\bigr] + \frac{C'}{u}\bigl[\tilde c(h',x_i)-\tilde c(h,x_i)\bigr] \le 2AM\Bigl(\kappa\|\Delta_1 h\|_K\Bigl(\frac{C}{m}+\frac{C'}{u}\Bigr) + \beta_{loc}^2\,\frac{C'}{u}\Bigr), \]  (13)
\[ \frac{C}{m}\bigl[c(h'',x_i)-c(h,x_i)\bigr] + \frac{C'}{u}\bigl[\tilde c(h'',x_i)-\tilde c(h,x_i)\bigr] \le 2AM\Bigl(\kappa\|\Delta_2 h\|_K\Bigl(\frac{C}{m}+\frac{C'}{u}\Bigr) + \beta_{loc}^4\,\frac{C'}{u}\Bigr), \]  (14)
where \(\Delta_1 h = h'-h\), \(\Delta_2 h = h''-h\) and \(A = 1+\kappa\sqrt{C+C'}\).
Proof: The result follows by applying the technique used in Lemma 11 in [8].


Lemma 4: Assume that \(|\tilde y(x)|\le M\) for all \(x\in X\). Let S, S' and S'' be three sample sets such that S' is obtained by removing one point from S and S'' by removing two points from S. Let h be the hypothesis returned by the algorithm minimizing the objective function F(f,S), h' the hypothesis obtained by minimizing F(f,S') and h'' the hypothesis obtained by minimizing F(f,S''). Let \(\tilde y\), \(\tilde y'\) and \(\tilde y''\) be the corresponding pseudo-targets. Then
\[ \|\Delta_1 h\|_K^2 \le AM\Bigl(\kappa\|\Delta_1 h\|_K\Bigl(\frac{C}{m}+\frac{C'}{u}\Bigr) + \beta_{loc}^2\,\frac{C'}{u}\Bigr), \]  (15)
\[ \|\Delta_2 h\|_K^2 \le 2AM\Bigl(\kappa\|\Delta_2 h\|_K\Bigl(\frac{C}{m}+\frac{C'}{u}\Bigr) + \beta_{loc}^4\,\frac{C'}{u}\Bigr), \]  (16)
where \(\Delta_1 h = h'-h\), \(\Delta_2 h = h''-h\) and \(A = 1+\kappa\sqrt{C+C'}\).

Proof: 1) By the definition of h and h', we have
\[ h = \operatorname*{arg\,min}_{f\in H} F(f,S) \quad\text{and}\quad h' = \operatorname*{arg\,min}_{f\in H} F(f,S'). \]
Let \(t\in[0,1]\). Then \(h+t\Delta_1 h\) and \(h'-t\Delta_1 h\) satisfy
\[ F(h,S) - F(h+t\Delta_1 h,S) \le 0, \]  (17)
\[ F(h',S') - F(h'-t\Delta_1 h,S') \le 0. \]  (18)
For notational ease, let \(h_{t\Delta_1} = h+t\Delta_1 h\) and \(h'_{t\Delta_1} = h'-t\Delta_1 h\). Adding the two inequalities (17) and (18) yields
\[ \frac{C}{m}\sum_{k=1}^{m}\bigl[c(h,x_k)-c(h_{t\Delta_1},x_k)\bigr] + \frac{C}{m}\sum_{k=1,\,k\ne i'}^{m}\bigl[c(h',x_k)-c(h'_{t\Delta_1},x_k)\bigr] + \frac{C'}{u}\sum_{k=1}^{u}\bigl[\tilde c(h,x_{m+k})-\tilde c(h_{t\Delta_1},x_{m+k})\bigr] \]
\[ +\;\frac{C'}{u}\sum_{k=1}^{u}\bigl[\tilde c(h',x_{m+k})-\tilde c(h'_{t\Delta_1},x_{m+k})\bigr] + \frac{C'}{u}\bigl[\tilde c(h',x_{i'})-\tilde c(h'_{t\Delta_1},x_{i'})\bigr] + \|h\|_K^2-\|h_{t\Delta_1}\|_K^2 + \|h'\|_K^2-\|h'_{t\Delta_1}\|_K^2 \;\le\; 0. \]
By the convexity of \(c(h,\cdot)\) in h, it follows that for all \(k\in[1,m+u]\),
\[ c(h,x_k)-c(h_{t\Delta_1},x_k) \ge t\bigl[c(h,x_k)-c(h+\Delta_1 h,x_k)\bigr], \]  (19)
and
\[ c(h',x_k)-c(h'_{t\Delta_1},x_k) \ge t\bigl[c(h',x_k)-c(h'-\Delta_1 h,x_k)\bigr]. \]  (20)
By Lemma 2, similar inequalities hold for \(\tilde c\). It is also not hard to show that
\[ \|h\|_K^2-\|h_{t\Delta_1}\|_K^2 + \|h'\|_K^2-\|h'_{t\Delta_1}\|_K^2 = 2t(1-t)\|\Delta_1 h\|_K^2. \]  (21)
Combining these observations and simplifying the previous inequality leads to
\[ 2t(1-t)\|\Delta_1 h\|_K^2 \le \frac{Ct}{m}\bigl[c(h',x_{i'})-c(h,x_{i'})\bigr] + \frac{C't}{u}\bigl[\tilde c(h',x_{i'})-\tilde c(h,x_{i'})\bigr]. \]  (22)
Let \(A = 1+\kappa\sqrt{C+C'}\). By Lemma 3, it follows that
\[ (1-t)\|\Delta_1 h\|_K^2 \le AM\Bigl(\kappa\|\Delta_1 h\|_K\Bigl(\frac{C}{m}+\frac{C'}{u}\Bigr) + \beta_{loc}^2\,\frac{C'}{u}\Bigr). \]
Taking the limit as \(t\to 0\) yields the statement of (15).

2) For h and h'', inequality (22) becomes
\[ 2t(1-t)\|\Delta_2 h\|_K^2 \le \frac{Ct}{m}\bigl[c(h'',x_{i'})-c(h,x_{i'})\bigr] + \frac{C't}{u}\bigl[\tilde c(h'',x_{i'})-\tilde c(h,x_{i'})\bigr] + \frac{Ct}{m}\bigl[c(h'',x_{j'})-c(h,x_{j'})\bigr] + \frac{C't}{u}\bigl[\tilde c(h'',x_{j'})-\tilde c(h,x_{j'})\bigr]. \]  (23)
Using Lemma 3 twice, (23) becomes
\[ (1-t)\|\Delta_2 h\|_K^2 \le 2AM\Bigl(\kappa\|\Delta_2 h\|_K\Bigl(\frac{C}{m}+\frac{C'}{u}\Bigr) + \beta_{loc}^4\,\frac{C'}{u}\Bigr). \]
Taking the limit as \(t\to 0\) yields the statement of (16).

Now we give the main result of this paper.
Theorem 1: Assume that \(|\tilde y(x)|\le M\) for all \(x\in X\) and that there exists \(\kappa\) such that \(K(x,x)\le\kappa^2\) for all \(x\in X\). Let \(A = 1+\kappa\sqrt{C+C'}\).
1) If the local estimator has score stability \(\beta_{loc}^2\), then LTR is uniformly \(\beta_1\)-cost-stable with
\[ \beta_1 \le (AM)^2\kappa^2\Biggl[\frac{C}{m}+\frac{C'}{u}+\sqrt{\Bigl(\frac{C}{m}+\frac{C'}{u}\Bigr)^2+\frac{4C'\beta_{loc}^2}{AM\kappa^2 u}}\Biggr]. \]
2) If the local estimator has score stability \(\beta_{loc}^4\), then LTR is uniformly \(\beta_3\)-cost-stable with
\[ \beta_3 \le 2(AM)^2\kappa^2\Biggl[\frac{C}{m}+\frac{C'}{u}+\sqrt{\Bigl(\frac{C}{m}+\frac{C'}{u}\Bigr)^2+\frac{2C'\beta_{loc}^4}{AM\kappa^2 u}}\Biggr]. \]
Proof: 1) From Lemma 4, we know that \(\|\Delta_1 h\|_K^2 \le AM(\kappa\|\Delta_1 h\|_K(C/m+C'/u)+\beta_{loc}^2\,C'/u)\), where \(\Delta_1 h = h'-h\) and \(A = 1+\kappa\sqrt{C+C'}\). This implies that \(\|\Delta_1 h\|_K\) is bounded by the non-negative root of the corresponding second-degree polynomial, which gives
\[ \|\Delta_1 h\|_K \le \frac{1}{2}AM\kappa\Biggl[\frac{C}{m}+\frac{C'}{u}+\sqrt{\Bigl(\frac{C}{m}+\frac{C'}{u}\Bigr)^2+\frac{4C'\beta_{loc}^2}{AM\kappa^2 u}}\Biggr]. \]  (24)
Using the bound (24) on \(\|\Delta_1 h\|_K\) in Equation (7) yields the desired bound on the stability coefficient \(\beta_1\) of LTR.
2) Similarly to 1) above, we can bound \(\|\Delta_2 h\|_K\) by the non-negative root of the corresponding second-degree polynomial, which gives
\[ \|\Delta_2 h\|_K \le AM\kappa\Biggl[\frac{C}{m}+\frac{C'}{u}+\sqrt{\Bigl(\frac{C}{m}+\frac{C'}{u}\Bigr)^2+\frac{2C'\beta_{loc}^4}{AM\kappa^2 u}}\Biggr]. \]  (25)
Using the bound (25) on \(\|\Delta_2 h\|_K\) in Equation (8) yields the desired bound on the stability coefficient \(\beta_3\) of LTR and completes the proof.


Yanwen Wu

443

References [1] V.N. Vapnik: Estimation of Dependences Based on Empirical Data(Springer Publications, Berlin 1982). [2] O. Chapelle, V. Vapnik, and J. Weston, Transductive inference for estimating values of functions. In Neural Information Processing Systems, pp421-427. MIT Press, (1999). [3] C. Cortes and M. Mohri, On transductive regression, In Advances in Neural Information Processing Systems, pp305-312. MIT Press, (2007). [4] P. Derbeko, R. EI-Yaniv, and R. Meir, Explicit learning curves for transduction and application to clustering and compression algorithms, J.Artif.Intell. Res.(JAIR), Vol 22:117-142, (2004). [5] R. EI-Yaniv, and D. Pechyony, Stable transductive learning, In Conference on Learning Theory, pp. 35-49, Springer, (2006). [6] R. EI-Yaniv, and D. Pechyony, Transductive redemacher complexity and its applicatons, In Conference on Learning Theory, pp. 35-49, Springer, (2007). [7] M. Wu and B. Sch o lkopf, Transductive classification via local learning regularization, In Artificial Intelligence and Statistics, (2007). [8] C. Cortes, M. Mohri, D. Pechyony, and A. Rastogi, Stability Analysis and Learning Bounds for Transductive Regression Algorithms, Submitted.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.444

Data Stream Clustering Algorithm Based on Affinity Propagation and Density LI Yang a, TAN Baihong b College of Economics and Management, Tianjin University of Science and Technology, Tianjin dagunanlu 1038, China a

[email protected], [email protected]

Key words: data stream; density based clustering; Affinity Propagation.

Abstract. Data stream clustering is an important issue in data steam mining. In the field of data stream analysis, conventional methods seem not quite efficient. Because neither they can adapt to the dynamic environment of data stream, nor the mining models and result s can meet users’ needs. An affinity propagation and grid based clustering method is proposed to effectively address the problem. The algorithm applies AP clustering on each partition of the data stream to generate reference point set, and subsequently density based clustering is applied to these reference points to get the clustering result of each periods. Theoretic analysis and experimental results show it is effective and efficient. Introduction Cluster analysis is an important issue of data mining. Clustering algorithm is used to divide data object into proper cluster automatically according to the rule of higher similarity among cluster and lower similarity between clusters. Lots of cluster algorithms have been proposed. Data stream was brought forward in 1998, which has become a research focus in the field of data mining. It is a sequence of digitally encoded coherent signals used to transmit or receive information that is in the process of being transmitted [1]. Data stream is phenomenon driven, the velocity and sequence of data item arriving can not be control, which has the following characteristics: 1) data item arrives continuously online and changes with time; 2) data stream can be considered infinite; 3) the arriving time of data items are dependent; 4) data stream changes fast. According to the above characteristics, there would be some problems if applying traditional cluster algorithms such as K-means, EM into data stream analysis directly. Firstly, memory can not meet data flow process requirements. Due to limited memory and infinite data flow, cluster analysis can not be carried out after all data stream have been stored in memory. It can be improved by storing only a summary data structure that including features of the current data stream before cluster analysis. Secondly, data stream analysis need high requirements of real time for algorithms. Data stream arrives fast and continuously, algorithm should response as quickly as possible. So the algorithm can not be too complex to carry with the velocity of data stream. Thirdly, due to limited times of sequential scanning of data stream system can not adjust the arriving time of data item, who arrives dependently and continuously. So it’s difficult to improve the algorithm by adjusting the sequence of data item. The data item of data stream will be cast away after processing. Data stream can be accessed only once in sequence with consideration of the higher cost for random access. So for purposes of data stream clustering is to find a simple and efficient algorithm that use limited memory space effectively. Clustering data stream requirements: 1. Algorithm can be run in a small space, which is the primary problem of data stream clustering: the amount of stream data grow over time, not all data are stored to do cluster, which requires algorithm can do cluster with limited storage space for "unlimited" data stream. 2. Algorithm scans the data only once or a few times. Over time, there have been new data added to the data stream, the data will update the original memory in the temporary stream data, data stream clustering algorithm which requires only a small amount of data once or several times scanning.

Yanwen Wu

445

Currently, under the premise of meeting the above requirements of data stream clustering, most clustering algorithms sacrifice the accuracy in exchange for high efficiency of space and time. Therefore, the data streaming algorithms generally have similar characteristics of approximation and adaptability, approximation refers that the results obtained is not theoretically optimal but similar solution. Due to higher requirements for the responsiveness, therefore the algorithm has to sacrifice accuracy, but good algorithm should be able to ensure proper approximation. Adaptability refers the algorithm can automatically adapt to the changing speed of data arriving and the data itself. In addition, for different applications, the robustness to noise, the ability to handle different data types, etc. are also taken into account. AP Algorithm Affinity Propagation Clustering (AP) [2] is based on nearest neighbor information propagation proposed by Frey et al. The algorithm is fast, effective, and applied in face images clustering, "exon" discovering, optimal route searching and other aspects. AP algorithm is fast and effective to deal with large data sets clustering, such as pictures of thousands of handwritten ZIP codes, AP algorithm took only 5min to find a few pictures of various types which can interpret those pictures accurately, and K-means algorithm to achieve the same accuracy would spend 500 million years [3]. Affinity Propagation Clustering (AP) aims to find the most optimal set of representative points (each one representative point is corresponding to a data point in actual data set, exemplar), so that make maximum similarity of all data points to the nearest representative point. If the similarity of data points is the negative Euclidean distance of data points, then the objective function for AP algorithm are same with that for classical K-center clustering. But the principles are much different. AP algorithm takes each data point as a node in a graph, through the propagation of information to find the optimal set of representative points. While K-center algorithm is based on the principle of achieving most optimal center through Minimum replacement. In addition, AP algorithm and K-center algorithm take different method to determine the initial representative points: AP algorithm takes each data point as the candidate representative point, by which avoiding the clustering results are limited by the initial class representative points choice. K-center algorithm randomly select several points as the initial representative point, by which resulting in the initial class clustering result is very sensitive to the choice of representative points. Compared with the general clustering algorithm, the biggest advantage of AP algorithm lies it has no requirements for symmetry of similarity matrix, which also expands the scope of application of AP algorithm. Affinity Propagation Clustering has advantages in fast operation when dealing with large number of classes [3]. AP algorithm takes an input function of similarities, s(i,j),where s(i,j) reflects how well suited data point j is to be the exemplar of data point i. AP aims to maximize the similarity s(i,j) for every data point i and its chosen exemplar j, therefore an application requiring a minimization (e.g. Euclidean distance) should have a negative similarity function. Each node i also has a self-similarity, s(i,j), which influences the number of exemplars that are identified. 
Individual data points that are initialized with a larger self-similarity are more likely to become exemplars. If all the data points are initialized with the same constant self-similarity, then all data points are equally likely to become exemplars. By increasing and decreasing this common self-similarity input, the number of clusters produced is increased and decreased respectively. There are two types of messages passed in this technique. The responsibility, r(i,j), is sent from i to candidate exemplar j and indicates how well suited j is to be i’s exemplar, taking into account competing potential exemplars. The availability, a(i,j), is sent from candidate exemplar j back to i, and indicates j’s desire to be an exemplar for i based on supporting feedback from other data points. The self-responsibility, r(i,i) and self-availability, a(i,i), both reflect accumulated evidence that i is an exemplar. The update formulas for responsibility and availability are stated below:

446

Manufacturing Systems and Industry Application

r (i, j ) ← s(i, j ) − max {a(i, j ') + s(i, j ')}

(1)

  a(i, j ) ← min 0, r ( j , j ) + ∑ max {0, r (i ', j )} ∀i '∉{i , j }  

(2)

CH i = arg max{a(i, j ) + r (i, j )}

(3)

j ' s .t . j '≠ j

j

APStream The goal of clustering is to group the streaming data into meaningful classes.A data stream is a set of points from data space S that continuously arrives. We assume that data arrives in chunks D1, D2, ..., Dn, ..., at time stamps t1, t2, ..., tn, ..., Each of these chunks fits in main memory. Suppose that each chunk contains m points, and the current time stamp is t. In this paper, we assume that the input data has d dimensions, and each input data record is defined as the space X=X1×X2×…×Xd, a data element generated at the jth turn is denoted by ej=,eij, 1≤i≤d. At first all the data is divided into p partitions as initial clusters by affinity propagation clustering, we denote it as S = S1 ∪ S 2 ∪ ∪ S p and p initial core objects are denoted as c1 ,..., ci ,...c p . For each Si , i = 1,… , p is recorded as Si (ci , si , d , ni ) . si = ∑ dist (ci , x j ) ; x j is the data elements belong to ci; dist(ci,xj) is the Euclidean Distance from j

ci to xi; d is the farthest distance from all data elements to ci; ni is the number of data elements belong to ci. When a new data element et is generated at the tth turn in a data stream D, all the data elements that have ever been generated so far are denoted by the current data stream Dt={e1,e2,…,et}. The total number of data elements generated in the current data stream Dt is denoted by | Dt |. The algorithm applies affinity propagation clustering on each partition of the data stream Dt to generate k clusters and k reference points. When every reference point is generated, we recorded the amount number of data elements belong to current reference point as weight, the sum of distance between data elements to current reference point and the farthest distance from all data elements to current reference point. Definition 1 For any data stream Dt at time t, applies affinity propagation clustering, generate k clusters, and the reference points are c1,…,ci,…,ck. Dt can be recorded as Dk(ci,si,d,ni). Definition 2 For any two reference points (c1,s1,d,n1) and (c2,s2,d,n2), which satisfies the following two conditions is a group of adjacent dense unit cells(Fig.1). Condition 1: n1

n2

i =1

j =1

1 − E ≤ ∑ dist (c1 , xi ) / ∑ dist (c2 , x j ) ≤ 1 + E

(4)

E is the predefined threshold value. Condition 2: dist (c1 , c2 ) ≤ d1 + d 2

(5)

Yanwen Wu

(c1 , s1 , d , n1 )

447

(c1 , s1 , d , n1 )

(c2 , s2 , d , n2 )

(c2 , s2 , d , n2 ) (c3 , s3 , d , n3 )

Fig. 1

Example of distance relations

This paper presents an AP algorithm based on partition, the algorithm divide the data in the data stream into regions, a region is composed of s data points in order, the algorithm is based on hierarchical clustering, described as follows: (1) First, input s data points, according to AP clustering method to find out the cluster centers, the weight of each point is the number of data points belonging to the mean points. Repeat this step until s reference points have been generated in the memory, these s points are corresponding to the result of level 1 with hierarchical clustering algorithm for clustering. (2) Find out center points according to the s reference points, update these mean weights of points at the same time; continue to read into the s data points for the next level of clustering; (3) Generally, reference points of (i+1)- level will be generated according to the s reference points of i-level generated by AP clustering algorithm, and the weight of these mean points should be updated correspondingly. (4) When the data have been traversed, all the current central points should be carried out density clustering to generate k-clusters. Analysis of Algorithm This part is an analysis of complexity and quality of the algorithm. Computational algorithm consists of two parts: divide the data stream, m data points in order constitute a division, generate clustering overhead with AP clustering algorithm for m data points within each division; do density clustering for the m- cluster centers. Ap algorithm is fast and high efficient, measured in terms of square error between the pros and cons of algorithms, AP cluster square error than other methods and should be low. Algorithm complexity is O (n * n * logn), where n is the number of data points. Cluster for each division, as the number n of data points for each division is rather small, the algorithm has high efficiency. As the clustering quality, pre-processing of data streams is carried out with the algorithm which uses the features of simples and efficiency to generate sample data, which contains a subset of the full distribution of information in the original data set. As density clustering is adapt to any shape and strong anti-noise, it can be efficient to do density cluster of the sample for data stream having irregular data distribution or with noise (or higher dimensional data streams). Experimental Results We evaluate the quality and efficiency of the APStream and compare it with CluStream[6], clustering purity [15] is chosen as the measure. Clustering purity is one of the ways of measuring c the quality of a clustering solution. Let j class =i denote number of items of class i assigned to cluster j, Purity of this cluster is given by

Experimental Results

We evaluate the quality and efficiency of APStream and compare it with CluStream [6]; clustering purity [15] is chosen as the measure. Clustering purity is one way of measuring the quality of a clustering solution. Let $|C_j|_{\text{class}=i}$ denote the number of items of class i assigned to cluster j. The purity of this cluster is given by
$$\mathrm{purity}(C_j) = \frac{1}{|C_j|} \max_i \left( |C_j|_{\text{class}=i} \right) \qquad (6)$$
The overall purity of a clustering solution can be expressed as a weighted sum of the individual cluster purities:
$$\mathrm{purity} = \sum_{j=1}^{k} \frac{|C_j|}{|D|}\, \mathrm{purity}(C_j) \qquad (7)$$
We have implemented the algorithm in VC++ 6.0. Experiments were performed on an Intel(R) Core(TM)2 Duo CPU E8300 @ 2.83 GHz processor. The testing data set is the real data set used by KDD CUP-99; it contains network intrusion detection stream data collected by the MIT Lincoln Laboratory [6]. This data set contains a total of five clusters, and each connection record contains 42 attributes. As in [6], all 34 continuous attributes are used for clustering.
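As a small illustration of Eqs. (6) and (7), the sketch below computes the overall purity from predicted cluster labels and true class labels; the toy labels are illustrative.

import numpy as np

def overall_purity(clusters, classes):
    """Eqs. (6)-(7): weighted sum of per-cluster purities over all clusters."""
    clusters, classes = np.asarray(clusters), np.asarray(classes)
    dominant = 0
    for j in np.unique(clusters):
        members = classes[clusters == j]
        dominant += np.bincount(members).max()   # max_i |C_j|_{class=i}
    return dominant / len(classes)               # = sum_j (|C_j|/|D|) purity(C_j)

print(overall_purity([0, 0, 1, 1, 1], [1, 1, 0, 0, 1]))  # 0.8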

Fig. 2 Effect comparison of CluStream and APStream (clustering purity, %, versus number of data points)

The efficiency of APStream is higher than that of CluStream because affinity propagation clustering is a fast and effective algorithm. Compared with general clustering algorithms, the biggest advantage of the AP algorithm is that it places no symmetry requirement on the similarity matrix, which widens its scope of application. Moreover, APStream resolves another shortcoming: the inability to cluster arbitrary shapes and to form clusters in periodic data.

Conclusions

In this paper, we propose APStream, a new algorithm for clustering stream data. Most existing algorithms adopt the k-medians (k-means) method, which is not suitable for clustering high-dimensional or abnormally distributed data streams. An affinity propagation and grid based clustering method is proposed to address this problem effectively. The algorithm applies AP clustering to each partition of the data stream to generate a reference point set, and subsequently density-based clustering is applied to these reference points to obtain the clustering result of each period. Theoretic analysis and experimental results show that it is effective and efficient.


References

[1] Henzinger M R, Raghavan P, Rajagopalan S. Computing on data streams [DE/OL]. http://gatekeeper.research.compaq.com/pub/DEC/SRC/technical-notes/abstracts/src-tn-1998-011, 1998.

[2] Frey B J, Dueck D. Clustering by passing messages between data points. Science [EB/OL]. (2007-02). http://www.psi.toronto.edu/affinitypropagation/FreyDueckScience07.pdf.

[3] Kelly K. Affinity program slashes computing times [EB/OL]. (2007-02-15). http://www.news.utoronto.ca/bin6/070215-2952.asp.

[4] Guha S, Mishra N, Motwani R. Clustering data streams [C]// Proceedings of the Annual Symposium on Foundations of Computer Science, 2000: 359-366.

[5] LIU Min-juan, CHAI Yu-mei, ZHANG Xi-zhi. Similarity-based grid clustering algorithm. Computer Engineering and Applications, 2007, 43(7): 198-201.

[6] Aggarwal C C, Han J, Wang J, Yu P S. A framework for clustering evolving data streams. In: Proc. VLDB, 2003: 81-92.

[7] Park N H, Lee W S. Statistical grid-based clustering over data streams. SIGMOD Record, 2004, 33(1): 32-37.

[8] GAO Yong-Mei, HUANG Ya-Lou. A grid and density based clustering algorithm for processing data stream. Computer Science, 2008, 35(2): 134-137.

[9] NI Wei-wei, LU Jie-ping, CHEN Geng, SUN Zhi-hui. Efficient data stream clustering algorithm based on k-means partitioning and density. Journal of Chinese Computer Systems, 2007, 28(1): 83-87.

[10] WANG Kai-jun, LI Jian, ZHANG Jun-ying, TU Chong-yang. Semi-supervised affinity propagation clustering. Computer Engineering, 2007, 33(23): 197-198, 201.

[11] XIAO Yu, YU Jian. Semi-supervised clustering based on affinity propagation algorithm. Journal of Software, 2008, 19(11): 2803-2813.

[12] Park N H, Lee W S. Statistical grid-based clustering over data streams. SIGMOD Record, 2004, 33(1): 32-37.

[13] Liu Y B, Cai J R, Yin J, et al. Clustering text data streams. Journal of Computer Science and Technology, 2008, 23(1): 112-128.

[14] SUN Yu-fen, LU Yan-sheng. An overview of stream data mining. Computer Science, 2007, 34(1): 1-11.

[15] YAN Xiao-long, SHEN Hong. Subspace clustering method for high dimensional data stream. Computer Applications, 2007, 27(7): 1680-1710.

[16] WANG Kai-Jun, ZHANG Jun-Ying, LI Dan, ZHANG Xin-Na, GUO Tao. Adaptive affinity propagation clustering. Acta Automatica Sinica, 2007, 33(12): 1242-1246.

[17] Feng Yu, Damalie Oyana, Wen-Chi Hou, Michael Wainer. Approximate clustering on data streams using discrete cosine transform. Journal of Information Processing Systems, 2010, 6(1): 67-78.

[18] Liu Y B, Cai J R, Yin J, et al. Clustering text data streams. Journal of Computer Science and Technology, 2008, 23(1): 112-128.

[19] YU Xiang, YIN Gui-sheng. An incremental irregular grid algorithm for clustering data streams. Journal of Harbin Engineering University, 2008, 29(8): 846-850.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.450

New Criteria on Impulsive Stabilization and Synchronization of Delayed Unified Chaotic Systems with Uncertainty

Yuanqiang Chen

National Minorities College of Guizhou, Guiyang, China

e-mail: [email protected]

Key words: Impulsive control; Synchronous control; Chaotic system

Abstract. This paper investigates stability and synchronization problems for a delayed unified chaotic system with parameter uncertainty by employing the Lyapunov function method and matrix inequality techniques. Sufficient conditions for asymptotical stability and synchronization are developed under given impulsive controllers. Finally, the validity of the obtained results is shown by a numerical example and its simulation.

Introduction

The study of chaotic systems has been an active research area since the discovery of the Lorenz chaotic attractor in 1963. New chaotic attractors have been discovered, for example those reported in [1]. In particular, the family of chaotic systems obtained in [2] and [3] covers those reported in [1] as special cases. This unified chaotic system is described by
$$\dot{x}(t) = A x(t) + g(x(t)), \qquad (1)$$
where
$$A = \begin{pmatrix} -(25a+10) & 25a+10 & 0 \\ 28-35a & 29a-1 & 0 \\ 0 & 0 & -\dfrac{a+8}{3} \end{pmatrix}, \quad x(t) = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}, \quad g(x(t)) = \begin{pmatrix} 0 \\ -x_1 x_3 \\ x_1 x_2 \end{pmatrix},$$
and $a \in [0, 1]$.

For a practical system, the design of a control law which stabilizes and synchronizes the controlled system is fundamentally important. In the literature [3]-[4], various control and synchronization methods have been proposed for the respective chaotic systems. The impulsive control method proposed in [5]-[7] has attracted considerable attention because impulsive control laws have fast response time, low energy consumption, good robustness and resistance to disturbance. They have been used to stabilize and synchronize classes of unified chaotic systems with parameter uncertainty in [6] and [7]. However, time delay inevitably occurs in practical systems, for example communication systems. Since delays and uncertainty can affect the dynamical behavior of a system, it is necessary to investigate both delay and uncertainty effects on the stability and synchronization of the unified chaotic system. Yet little work has been dedicated to the stability and synchronization of the delayed unified chaotic system with parameter uncertainty under impulsive control.

In this paper, we deal with a class of delayed unified chaotic systems with parameter uncertainty and derive some algebraic sufficient conditions. By utilizing the Lyapunov stability theory and matrix inequality techniques, we establish a sufficient condition for asymptotical stability of the delayed unified chaotic system with parameter uncertainty under an impulsive controller. In Section II, the delayed unified chaotic system with parameter uncertainty is introduced and some preliminary lemmas are presented. In Section III, based on the Lyapunov stability theory and matrix inequality techniques, asymptotical stability and synchronization criteria are derived. A numerical example is presented in Section IV. Section V concludes the paper.


Preliminaries

Let $P > 0$ denote a positive definite symmetric matrix $P$, and let $\lambda_M(P)$ be the largest eigenvalue of $P$. $K$ denotes the set of continuous functions $\upsilon : R_+ \to R_+$ such that $\upsilon(s)$ is increasing and $\upsilon(0) = 0$. $PC(R_+, R_+)$ is the set of all piecewise continuous functions $p : R_+ \to R_+$ such that $p(t)$ is continuous on $R_+$ except at the time points in the set $\{\tau_k\}$, and is left-continuous with a right limit at each $\tau_k$. We first introduce some preliminary concepts which will be useful in the paper. Consider the following impulsive control system:

$$\dot{x}(t) = f(t, x(t)), \quad t \ne \tau_k, \qquad \Delta x(t) = I_k(t, x(t)), \quad t = \tau_k. \qquad (2)$$

Definition 1. For each $\rho > 0$, define $S_\rho = \{ x \in R^3 : \|x(t)\| < \rho \}$, and for $(t, x) \in (\tau_{k-1}, \tau_k] \times R^n$, $k = 1, 2, \ldots$, let
$$D^+ V(t, x) = \limsup_{h \to 0^+} \frac{1}{h} \left[ V(t+h,\, x + h f(t, x)) - V(t, x) \right].$$

Definition 2. Let $V_1$ be the set containing all functions $V(t, x) : [-r, \infty) \times S_\rho \to R_+$ which are continuous on $[-r, \tau_1) \times S_\rho$ and on $[\tau_k, \tau_{k+1}) \times S_\rho$ $(k = 1, 2, \ldots)$, and which satisfy the following two conditions:

1) for each $x \in S_\rho$ and $k = 1, 2, \ldots$, $\lim_{(t, y) \to (\tau_k^-, x)} V(t, y) = V(\tau_k^-, x)$ exists;

2) $V(t, x)$ is locally Lipschitz in $x$.

The following lemma gives sufficient conditions for asymptotic stability of system (2).

Lemma 1. [8] Assume that there exist $\alpha, \beta, c, \mu \in K$, $p \in PC(R_+, R_+)$ and $V(t, x) \in V_1$ such that the following conditions are satisfied:

1) $\beta(\|x\|) \le V(t, x) \le \alpha(\|x\|)$, $\forall (t, x) \in [-r, \infty) \times S_\rho$;

2) $V(\tau_k, \varphi(0) + I_k(\tau_k, \varphi)) \le \mu(V(\tau_k^-, \varphi(0)))$, $\forall (\tau_k, \varphi) \in R_+ \times PC([-r, 0], S_{\rho_1})$, where $\rho_1 \in (0, \rho)$ and $\varphi(0^-) = \varphi(0)$;

3) $D^+ V(t, \varphi(0)) \le p(t)\, c(V(t, \varphi(0)))$ for $t \ne \tau_k$ and $\varphi \in PC([-r, 0), S_\rho)$ whenever $V(t, \varphi(0)) \le \mu(V(t+s, \varphi(s)))$, $\forall s \in [-r, 0)$;

4) $G_2 = \inf_{q > 0} \int_{\mu(q)}^{q} \frac{ds}{c(s)} > G_1 = \sup_{t \ge 0} \int_{t}^{t+\tau} p(s)\, ds$, where $\tau = \sup_k \{\Delta\tau_k = \tau_{k+1} - \tau_k\} < \infty$.

Then system (2) is asymptotically stable.

Lemma 2. [9] Let $H$, $S$, $F$ be real matrices of appropriate dimensions with $F^T F \le I$. Then, for any scalar $\delta > 0$, the following inequality holds:
$$HFS + S^T F^T H^T \le \delta^{-1} H H^T + \delta S^T S.$$


Lemma 3. [10] Let $P > 0$ and $Q$ be real symmetric matrices of appropriate dimensions. Then, for any $x(t) \in R^3$, the following inequality holds:
$$\lambda_{\min}(P^{-1}Q)\, x^T(t) P x(t) \le x^T(t) Q x(t) \le \lambda_{\max}(P^{-1}Q)\, x^T(t) P x(t).$$

Main Results

We first consider the following delayed unified chaotic system with parameter uncertainty:
$$\dot{x}(t) = (A + \Delta A) x(t) + B x(t-d) + (I + \Delta I) g(x(t)) + C g(x(t-d)), \qquad (3)$$
$$x(t) = \phi_1(t), \quad t \in [-d, 0],$$
where $B$, $C$ are 3-dimensional real matrices, $[\Delta A, \Delta I] = EF[H_1, H_2]$ with $E$, $F$, $H_1$, $H_2$ real matrices of appropriate dimensions and $F^T F \le I$, $I$ is the 3-dimensional unit matrix, $\phi_1(t) : R_+ \to R^3$ is a continuous function, and $d$ denotes the delay of the system.

Under the impulsive controller $\{\tau_k, u_k(t, x)\}$, where $u_k(t, x(t)) = -\xi \Delta\tau_k\, x(t)$ and $\xi \in R_+$, we have the following theorem for the asymptotical stability of system (3).

Theorem 1. Let $P > 0$. System (3) under the impulsive controller $\{\tau_k, u_k(t, x)\}$ is asymptotically stable if there exist positive scalars $\alpha$, $\varepsilon_i$ $(i = 1, 2, 3, 4, 5)$ such that the following inequalities hold:

1) $$\|x(t-d)\|^2 \le \frac{\lambda_{\min}(P)}{\alpha\, \lambda_{\max}(P)\,(1 - \xi\Delta\tau_k)}\, \|x(t)\|^2; \qquad (4)$$

2) $$\frac{-2\ln(1 - \xi\Delta\tau_k)}{\tau} \ge \lambda_{\max}(P^{-1}M_2) + \frac{1}{\alpha(1 - \xi\Delta\tau_k)}\, \lambda_{\max}(P^{-1}M_1), \qquad (5)$$

where $M_0 = \varepsilon_3 I + \varepsilon_4 H_2^T H_2$, $M_1 = \varepsilon_2 B^T P^2 B + \varepsilon_5 m_1^2 I$, $m_1 = \sup_{t \ge 0}\{|x_1(t)|\}$, $\bar{m}_1 = \sup_{t \ge -d}\{|x_1(t)|\}$, and
$$M_2 = A^T P + PA + \varepsilon_1 H_1^T H_1 + P\left[ \frac{\varepsilon_1 + \varepsilon_4}{\varepsilon_1 \varepsilon_4} E E^T + \frac{1}{\varepsilon_2} C C^T + \frac{1}{\varepsilon_5} I \right] P + \left( \frac{1}{\varepsilon_3} + \bar{m}_1^2 \lambda_{\max}(M_0) \right) I.$$

Proof. Consider the following Lyapunov function candidate:
$$V(t, x) = x^T(t) P x(t). \qquad (6)$$
Clearly, it satisfies condition 1) of Lemma 1. For any $(\tau_k, \varphi) \in R_+ \times PC([-d, 0], S_\rho)$, we have
$$V(\tau_k, u_k(\tau_k, \varphi) + \varphi(0)) = (1 - \xi\Delta\tau_k)^2\, V(\tau_k^-, \varphi(0)),$$
so condition 2) of Lemma 1 is satisfied with $\mu(s) = (1 - \xi\Delta\tau_k)^2 s$. Now, taking the upper Dini derivative of the Lyapunov function (6) along the trajectory of system (3), we obtain
$$D^+ V(t, \varphi(0)) \le \left( \lambda_{\max}(P^{-1}M_2) + \frac{1}{\alpha(1 - \xi\Delta\tau_k)}\, \lambda_{\max}(P^{-1}M_1) \right) V(x(t)). \qquad (7)$$
Thus, condition 3) of Lemma 1 is satisfied with
$$p(t) = \lambda_{\max}(P^{-1}M_2) + \frac{1}{\alpha(1 - \xi\Delta\tau_k)}\, \lambda_{\max}(P^{-1}M_1) \quad \text{and} \quad c(s) = s. \qquad (8)$$

By (5), the inequality $G_1 < G_2$ holds, which shows that condition 4) of Lemma 1 is satisfied. Now all the conditions of Lemma 1 are satisfied; therefore, by virtue of Lemma 1, system (3) under the impulsive controller $\{\tau_k, u_k(t, x)\}$ is asymptotically stable. This completes the proof.


Now we consider the synchronization control of system (3). Let the driven system of system (3) be
$$\dot{y}(t) = (A + \Delta A) y(t) + B y(t-d) + (I + \Delta I) g(y(t)) + C g(y(t-d)), \quad t \ne \tau_k,$$
$$\Delta y(t) = y(t^+) - y(t^-) = u_k(t, y(t)), \quad t = \tau_k, \qquad (9)$$
$$y(t) = \phi_2(t), \quad t \in [-d, 0], \quad k = 1, 2, \ldots,$$
and their error system:
$$\dot{e}(t) = (A + \Delta A) e(t) + B e(t-d) + (I + \Delta I) \psi(x(t), y(t)) + C \psi(x(t-d), y(t-d)), \quad t \ne \tau_k,$$
$$\Delta e(t) = u_k(t, x(t), y(t)), \quad t = \tau_k, \qquad (10)$$
$$e(t) = \phi(t), \quad t \in [-d, 0], \quad k = 1, 2, \ldots,$$
where $e(t) = x(t) - y(t)$, $\psi(x(t), y(t)) = g(x(t)) - g(y(t))$, $u_k(t, x(t), y(t)) = u_k(t, x(t)) - u_k(t, y(t))$, and $\phi(t) = \phi_1(t) - \phi_2(t)$. We have the following result for the asymptotical stability of the error system (10) under the impulsive controller $u_k(t, x(t), y(t)) = -\xi \Delta\tau_k\, e(t)$.

Theorem 2. Let $P > 0$. System (10) under the impulsive controller $u_k(t, x(t), y(t)) = -\xi \Delta\tau_k\, e(t)$ is asymptotically stable if there exist positive scalars $\alpha$, $\varepsilon_i$ $(i = 1, 2, 3, 4, 5)$ such that the following inequalities hold:

1) $$\|e(t-d)\|^2 \le \frac{\lambda_{\min}(P)}{\alpha\, \lambda_{\max}(P)\,(1 - \xi\Delta\tau_k)}\, \|e(t)\|^2; \qquad (11)$$

2) $$\frac{-2\ln(1 - \xi\Delta\tau_k)}{\tau} \ge \lambda_{\max}(P^{-1}M_2) + \frac{1}{\alpha(1 - \xi\Delta\tau_k)}\, \lambda_{\max}(P^{-1}M_1), \qquad (12)$$

where $M_0 = \varepsilon_3 I + \varepsilon_4 H_2^T H_2$, $M_1 = \varepsilon_2 B^T P^2 B + \varepsilon_5 M$, $m_i = \sup_{t \ge 0}\{|x_i(t)|\}$, $\bar{m}_i = \sup_{t \ge -d}\{|x_i(t)|\}$,
$$M_2 = A^T P + PA + \varepsilon_1 H_1^T H_1 + P\left[ \frac{\varepsilon_1 + \varepsilon_4}{\varepsilon_1 \varepsilon_4} E E^T + \frac{1}{\varepsilon_2} C C^T + \frac{1}{\varepsilon_5} I \right] P + \frac{1}{\varepsilon_3} I + \lambda_{\max}(M_0)\, \bar{M},$$
and
$$M = \begin{pmatrix} m_2^2 + m_3^2 & m_1 m_2 & m_1 m_3 \\ * & m_1^2 & 0 \\ * & * & m_1^2 \end{pmatrix}, \qquad \bar{M} = \begin{pmatrix} \bar{m}_2^2 + \bar{m}_3^2 & \bar{m}_1 \bar{m}_2 & \bar{m}_1 \bar{m}_3 \\ * & \bar{m}_1^2 & 0 \\ * & * & \bar{m}_1^2 \end{pmatrix}.$$

The proof of Theorem 2 is similar to that of Theorem 1 and is omitted.


Numerical Examples

In this section, we consider an example to illustrate the results obtained in Section 3. Consider the unified chaotic system (3) with the following data:
$$a = 0.5, \quad B = C = I, \quad E = H_1 = H_2 = I, \quad F = \mathrm{diag}(\sin t, \sin t, \sin t),$$
$d = 1$ and $\phi_1(t) = (t-9,\, 6-t,\, t-4)^T$. The time sequence for system (3) is shown in Fig. 1 below. Through simulations, we obtain $m_1 = \sup_{t \ge 0}\{|x_1(t)|\} = 14$ and $\bar{m}_1 = \sup_{t \ge -d}\{|x_1(t)|\} = 14$.

Let the impulsive controller of system (3) be $\{\tau_k, u_k(t, x(t))\}$, where $u_k(t, x(t)) = -\xi\Delta\tau_k\, x(t)$, $\xi = 8000/22$, $\tau_0 = 4$, $P = I$, $\varepsilon_i = 1$ $(i = 1, 2, 3, 4, 5)$, and $\Delta\tau_k = 0.0022$, $k = 1, 2, \ldots$. Obviously, $M_0 = 2I$, $M_2 = A + A^T + 397 I$, $M_1 = 197 I$, $\lambda_{\max}(P^{-1}M_1) = 197$, and $\lambda_{\max}(P^{-1}M_2) = 436.8365$.

Then, when $\alpha = 1$, we have $\|x(t-1)\|^2 \le 5 \|x(t)\|^2$ and
$$\frac{-2\ln(1 - \xi\Delta\tau_k)}{\tau} = 1463.1 > 1421.8365 = \lambda_{\max}(P^{-1}M_2) + \frac{1}{\alpha(1 - \xi\Delta\tau_k)}\, \lambda_{\max}(P^{-1}M_1).$$

Consequently, it follows from Theorem 1 that the state of system (3) is asymptotically stable under the impulsive controller $\{\tau_k, u_k(t, x(t))\}$. The time sequence of the controlled system (3) is shown in Fig. 2.

Fig. 1 The time sequence chart for system (3) (states x(1), x(2), x(3))

Fig. 2 The time sequence chart for the controlled system (3) (states x(1), x(2), x(3))
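For readers who wish to reproduce the flavor of this example, the following is a simplified sketch that integrates the nominal unified system (1) under the impulsive controller $u_k(t, x) = -\xi\Delta\tau_k\, x(t)$ with the parameter values above, taking $\phi_1(0)$ as the initial state; the delay and uncertainty terms of system (3) are omitted for brevity, so the sketch only illustrates the control mechanism rather than verifying the full theorem.

import numpy as np

def unified_rhs(x, a=0.5):
    """Nominal unified chaotic system (1): dx/dt = A x + g(x)."""
    A = np.array([[-(25*a + 10), 25*a + 10, 0.0],
                  [28 - 35*a,    29*a - 1,  0.0],
                  [0.0,          0.0,      -(a + 8)/3]])
    g = np.array([0.0, -x[0]*x[2], x[0]*x[1]])
    return A @ x + g

def simulate(x0, t_end=0.5, h=1e-4, dtau=0.0022, xi=8000/22):
    x, t = np.array(x0, dtype=float), 0.0
    next_imp, norms = dtau, []
    while t < t_end:
        # Classical 4th-order Runge-Kutta step for the continuous dynamics.
        k1 = unified_rhs(x)
        k2 = unified_rhs(x + 0.5*h*k1)
        k3 = unified_rhs(x + 0.5*h*k2)
        k4 = unified_rhs(x + h*k3)
        x = x + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        t += h
        if t >= next_imp:            # impulsive jump at t = tau_k:
            x = (1 - xi*dtau) * x    # x -> x + u_k = (1 - 0.8) x = 0.2 x
            next_imp += dtau
        norms.append(np.linalg.norm(x))
    return norms

# phi_1(0) = (-9, 6, -4) from the example's initial function.
norms = simulate([-9.0, 6.0, -4.0])
print("state norm after t = 0.5:", norms[-1])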

Conclusion

In this paper, we proposed a design method for an impulsive controller that realizes asymptotical stability and synchronization. From the numerical example solved using this design method, we see that the impulsive controller is effective for this class of delayed unified chaotic systems with parameter uncertainty.

Acknowledgement

This work was supported by the Machine-intelligence Talents Foundation of Guizhou under Grant 2010-1.


References

[1] Lü J, Chen G. A new chaotic attractor coined. Int. J. Bifurcation Chaos, 2002, 12(3): 659-661.

[2] Lu J A, Tao C H. Parameter identification and tracking of a unified system. Chin. Phys. Lett., 2002, 19(5): 632-635.

[3] Ju H P. Stability criterion for synchronization of linearly coupled unified chaotic systems. Chaos, Solitons & Fractals, 2005, 23: 1319-1325.

[4] Min F H, Wang Z S. Combine synchronization of unified chaotic systems. Physica, 2005, 54(9): 4026-4030.

[5] Wang Y W, Guan Z H, Wang H. Impulsive control and synchronization of unified chaotic systems. Atomic Energy Science and Technology, 2004, 38(3): 256-260.

[6] Ma T D, Zhang H G. Impulsive control of unified chaotic systems with parameter uncertainty. Journal of Northeastern University, 2007, 28: 917-927.

[7] Luo R Z. Impulsive control and synchronization of a new chaotic system. Physica, 2007, 56: 5655-5660.

[8] Liu X, Ballinger G. Uniform asymptotic stability of impulsive delay differential equations. Computers and Mathematics with Applications, 2001, 41: 903-915.

[9] Xu H, Liu X, Teo K L. Robust stabilization with definite attendance of uncertain impulsive switched systems. ANZIAM Journal, 2005, 46: 471-484.

[10] Xu H, Liu X, Teo K L. Delay independent stability criteria of impulsive switched systems with time-invariant delays. Mathematical and Computer Modelling, 2008, 47: 372-379.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.456

Generalization Bounds for Certain Class of Ranking Algorithm

Wei Gao1,2,a and Yungang Zhang1,3,b

1 Department of Information, Yunnan Normal University, Kunming 650092, Yunnan, China
2 Department of Mathematics, Soochow University, Suzhou 215006, Jiangsu, China
3 Department of Computer Science, University of Liverpool, Liverpool L69 3BX, United Kingdom

a [email protected], b [email protected]

Key words: ranking; algorithmic stability; generalization bounds; truth function; strong stability; weak stability.

Abstract. The quality of ranking determines the success or failure of information retrieval, and the goal of ranking is to learn a real-valued ranking function that induces a ranking or ordering over an instance space. We focus on a ranking setting which uses a truth function to label each pair of instances, where the ranking preferences are given randomly from some distribution on the set of possible undirected edge sets of a graph. The contribution of this paper is a set of generalization bounds for such ranking algorithms via strong and weak stability. These stabilities make weaker demands than uniform stability and thus fit more real applications.

Introduction

A key issue in information retrieval is to return useful items according to a user's request, and the items are ranked by a certain ranking function. The ranking algorithm is therefore the most important component of a search engine, because it determines the quality of the list presented to the user. The ranking problem is formulated as learning a scoring function with small ranking error from given labeled samples. Well-known ranking algorithms include RankBoost (see [1]), gradient descent ranking (see [2]), margin-based ranking (see [3]), P-Norm Push ranking (see [4]) and so on.

The generalization properties of ranking algorithms are a central focus of their research. Most generalization bounds for learning algorithms are based on measures of the complexity of the hypothesis class, such as the VC-dimension, covering numbers, or Rademacher complexity. However, the notion of algorithmic stability can be used to derive bounds that are tailored to specific learning algorithms and exploit their particular properties. A ranking algorithm is called stable if, for a mild change of the sample, the ranking function does not change too much. [5] studied a special class of ranking problems, and [6] studied generalization bounds for an extension of this ranking setting via uniform stability. However, uniform stability is too restrictive for many learning algorithms (see [7]), and in many applications the demand on stability should be lowered. This paper, continuing the work of the above two papers, considers two classes of "almost-everywhere" stability, namely strong and weak stability, for the extended ranking setting raised by [6], and gives generalization bounds for such ranking algorithms.

The organization of this paper is as follows: we describe the setting of the ranking problem in the next section, and then define the notions of strong and weak stability. Using these notions, we derive generalization bounds for stable ranking algorithms.

Setting

In Rudin's setting (see [5]), let X be an input space called the instance space; instances in X are drawn randomly and independently according to some (unknown) distribution D. The training sample given to the learner is a finite number of instances S = {x1, ..., xm} together with the corresponding ranking preferences π(xi, xj) for i, j ∈ {1, ..., m}, i ≠ j. Here, π : X × X → {0, 1} is called the truth function, which assigns a binary ranking preference to each pair of instances (xi, xj): π(xi, xj) = 1 ⇔ π(xj, xi) = 0 ⇔ xi is ranked higher than xj.


The goal of a ranking algorithm is to obtain a score function f : X → R, which assigns a score to each instance and ranks all instances according to their scores. [6] extended this setting in two aspects: 1) π does not only take binary values but real-valued preferences, i.e., π : X × X → [-M, M] with π(x, x') = -π(x', x) for all x, x' ∈ X; 2) the ranking preferences are given randomly (independently of S) from some distribution E_m on the set of possible undirected edge sets for a graph on m vertices {1, ..., m}. For instance, E could be given by a random graph with each edge determined by a certain fixed probability p. In this extended setting, the goal of a ranking algorithm is to obtain a score function f : X → R from the training sample T = (S, E, π_(S,E)), where π_(S,E) is the restriction of π to {(xi, xj) | (i, j) ∈ E}.

The ranking loss function l is used to punish the inconsistent situation in which sgn(f(x) - f(x')) does not coincide with sgn(π(x, x')). It is always assumed that l(f, x, x', r) is a non-negative real number for any pair of instances (x, x') and r ∈ [-M, M]. The quality of the ranking function is measured by the expected l-error:
$$R_l(f) = E_{(X, X') \sim D \times D}\{ l(f, X, X', \pi(X, X')) \}.$$
However, it cannot be estimated directly since the distribution D is unknown. Instead, we use the empirical l-error to measure the ranking algorithm:
$$R_l(f; T) = \frac{1}{|E|} \sum_{(i, j) \in E} l(f, x_i, x_j, \pi(x_i, x_j)).$$
For any i ∈ {1, ..., m} and x'_i ∈ X, we use S^i to denote the sequence obtained from S by replacing x_i with x'_i, and T^i = (S^i, E, π_(S^i, E)). Also, S^{i,j} denotes the sequence obtained from S by replacing x_i, x_j with x'_i, x'_j respectively, and T^{i,j} = (S^{i,j}, E, π_(S^{i,j}, E)).
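As a small illustration of the empirical l-error above, the following sketch evaluates R_l(f; T) for a toy sample; the particular hinge-style loss and the toy preference function are illustrative choices, since the paper only assumes l(f, x, x', r) ≥ 0.

import numpy as np

def ranking_loss(fx, fxp, r):
    """Illustrative loss: penalize when sign(f(x)-f(x')) disagrees with sign(r)."""
    return max(0.0, -np.sign(r) * (fx - fxp))

def empirical_l_error(f, S, E, pi):
    """R_l(f; T) = (1/|E|) * sum of l(f, x_i, x_j, pi(x_i, x_j)) over (i, j) in E."""
    total = sum(ranking_loss(f(S[i]), f(S[j]), pi(S[i], S[j])) for (i, j) in E)
    return total / len(E)

# Toy usage: 1-D instances, real-valued preferences from a hidden utility.
S = list(np.linspace(0.0, 1.0, 10))
E = [(i, j) for i in range(10) for j in range(i + 1, 10)]   # full edge set
pi = lambda x, xp: float(np.clip(x - xp, -1.0, 1.0))        # in [-M, M], M = 1
print(empirical_l_error(lambda x: 2.0 * x, S, E, pi))       # 0.0: f ranks perfectly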

Definitions

To use the notions defined above, we define strong and weak stability, which are also good measures of how robust a ranking algorithm is. We assume $0 < \delta_1, \delta_2$ ... there exist $c_k > 0$ and $b > 0$ such that
$$\sup_{x_1, \ldots, x_N \in C \setminus B,\; x'_k \in C} \big| \phi(x_1, \ldots, x_N) - \phi(x_1, \ldots, x_{k-1}, x'_k, x_{k+1}, \ldots, x_N) \big| \le c_k,$$
$$\sup_{x_1, \ldots, x_N \in C,\; x'_k \in C} \big| \phi(x_1, \ldots, x_N) - \phi(x_1, \ldots, x_{k-1}, x'_k, x_{k+1}, \ldots, x_N) \big| \le b.$$
Then for any $\varepsilon > 0$,
$$P\big\{ \big| \phi(X_1, \ldots, X_N) - E\{\phi(X_1, \ldots, X_N)\} \big| \ge \varepsilon \big\} \le 2\left( e^{-\varepsilon^2 / \left(8 \sum_{k=1}^{N} c_k^2\right)} + \frac{2 b N \delta}{\sum_{k=1}^{N} c_k} \right).$$

For the proof of the second part of Theorem 1, we use the following simplified version for the weak case.

Lemma 2 [8]: Let $X_1, \ldots, X_N$ be independent random variables, each taking values in a set C. Let $\phi : C^N \to R$ be such that, for each $k \in \{1, \ldots, N\}$, the two condition inequalities in Lemma 3 are satisfied with $\lambda_k$ substituted for $c_k$ and $e^{-KN}$ substituted for $\delta$. If $0 < \varepsilon \le \min_k T(b, \lambda_k, K)$ and $N \ge \max_k \Delta(b, \lambda_k, K, \varepsilon)$, then
$$P\big\{ \big| \phi(X_1, \ldots, X_N) - E\{\phi(X_1, \ldots, X_N)\} \big| \ge \varepsilon \big\} \le 4\, e^{-\varepsilon^2 N^2 / \left(40 \sum_{i=1}^{N} \lambda_i^2\right)}.$$
The bounds T and $\Delta$ are:
$$T(b, \lambda_k, K) = \min\left\{ \frac{15 \lambda_k}{2},\; \frac{4 \lambda_k K}{b},\; \frac{\lambda_k^2 K}{b} \right\},$$
$$\Delta(b, \lambda_k, K, \varepsilon) = \max\left\{ \frac{24}{\lambda_k K},\; \lambda_k \sqrt{40},\; 3\left( \frac{24}{K} + 3 \right) \ln\left( \frac{24}{K} + 3 \right),\; \frac{1}{\varepsilon} \right\}.$$

We are now ready to give our main result, which bounds the expected l-error of a ranking function learned by an algorithm with good strong (weak) stability in terms of its empirical l-error on the training sample. The techniques for proving Theorem 1 mainly follow the proof of Theorem 19 in [6]. Two things are important in the proof: 1) the extended McDiarmid inequality is used to obtain a bound conditioned on an edge set E with $|E| \ge ms$, and the unconditional bound can be given by a bound on the probability that $|E| < ms$: there exist $s \ge 1$ and a sequence $(\delta_m)$ satisfying

can be given by the bound on the probability that E 1 and a sequence ( δ m ) satisfying

δ m ≥ 0, lim m →∞ δ m =0, such that for all m, PE ∼ E (|E|80%

100

1.0

1.7

3.5

7.2

9.1

300

2.1

3.6

5.5

9.8

15.6

500

3.2

7.6

10.2

16.6

22.5

800

5.3

11.3

13.9

33.2

54.1

1000

8.6

15.5

31.1

72.2

110.9

Table.1 Scene 1 of the five groups under the sampling rate is relatively collision detection (the average of ten times) Fig.1 Scene 1: Two collision models (Each contains 1000 triangular facets)

The experiment shows that the efficiency of the algorithm depends not only on the number of samples but also on the particle group size. Because the number of iterations and the number of particles in each group are jointly constrained, a smaller group requires more iterations, and for a larger search space a small group makes the search very slow. Larger groups, on the other hand, make each generation expensive, reducing the number of iterations possible within a fixed period of time and making particle convergence difficult. We therefore chose a medium group size, between 30 and 60, which gives better optimization for most problems.

Experiment 2. Time complexity analysis: to reflect the complexity of object motion in a virtual environment, a dynamic virtual scene was set up containing 46 moving objects with more than 68,000 triangular patches in total; the following experiments analyze the performance of the algorithm on this scene. To validate performance, we compare the classical algorithm I-COLLIDE, the triangle intersection based algorithm BCDA (Base Collision Detection Algorithm), the bounding sphere intersection algorithm SPHERE, the serial intersection collision detection algorithm SCDA (Serial Collision Detection Algorithm), and this paper's algorithm SS-PSO, based on mesh simplification and particle swarm optimization, in terms of time complexity and frame rate when running 1600 time steps. We show the details of the experimental data in Table 2 and Fig. 2.

Algorithm     Time complexity    Frame rate (frames/s)    Time for 1600 steps (ms)
BCDA          O(n^2)             3.73                     5132
I-COLLIDE     O(n log n)         4.81                     4352
SPHERE        O(n log n)         4.15                     3241
SCDA          O(n log n)         7.53                     181
SS-PSO        O(n log n)         17.13                    85

Table 2 Time complexity analysis of the five algorithms

Fig. 2 Comparison of experimental data for BCDA, I-COLLIDE, SPHERE, SCDA and SS-PSO running 1600 steps


The experimental results show that, compared with the other four algorithms, the time complexity of the proposed algorithm, O(n log n), offers no obvious advantage over most of them (though it is better than that of BCDA), but the algorithm has a clear advantage in frame rate and running speed: a refresh rate of 17.13 frames per second is unmatched by the other algorithms. The running time also shows a distinct advantage: running the same 1600 steps, the running time of the algorithm is about 1/120 that of BCDA and 1/4 that of SCDA.

Conclusions

In this paper, we present an efficient stochastic collision detection algorithm based on surface simplification and particle swarm optimization. By transforming the three-dimensional virtual space into a two-dimensional discrete space, the algorithm not only controls quality but also increases performance, and it can handle arbitrary polygonal objects. It cannot detect all collisions; however, for real-time simulation systems with high speed requirements and comparatively relaxed accuracy requirements, the overall performance of the algorithm is better than that of traditional algorithms.

Acknowledgement

This work was supported by the Science Foundation of Jilin Province under grants 20100214, 20101521, 20100155 and 20100149, and by the Doctor Foundation of Jilin Agricultural University under grant 201022.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.482

Application of Wireless Sensor Network for M2M in Precision Fruits

FANG Ligang1,a, LIU Zhaobin2,b, LI Hongli1,c, GU Caidong2,c, DAI Minli1,c

1 Department of Computer Engineering, Suzhou Vocational University, Suzhou, China
2 Jiangsu Province Support Software Engineering R&D Center for Modern Information Technology Application in Enterprise, Suzhou 215104, China

a [email protected], b [email protected], c [email protected]

Key words: wireless sensor network; M2M; precision fruits; typical application

Abstract. This study first reviews in detail the state of wireless sensor network technology in precision agriculture at home and abroad. Applications of wireless sensor networks are widespread in foreign precision agriculture but are still in their beginning stage in domestic agriculture. The function of domestic systems based on wireless sensor networks is usually limited to positioned measurement and processing of agricultural elements, which does not meet the final requirements of precision agriculture. The study designs a universal wireless sensor network for M2M that combines intelligent communication technology with large agricultural machinery. The key technologies of this wireless sensor network for M2M include a development plan based on ISA SP100.11a, spectrum technology based on DSSS, network technology based on net routing, and low-power radio frequency design, which together meet the real-time, reliability, robustness and low-energy-consumption requirements of wireless communication in precision fruits. Moreover, the study presents several typical applications in precision fruits (including farming machines, water-saving irrigation machines and picking machines). With the development of precision fruits in breadth and depth, integrated applications of wireless sensor networks will have a wide prospect in the future.

Introduction

Precision fruits belong to precision agriculture, a concept introduced by American agronomists in the 1990s. Precision agriculture makes use of GPS, RS and GIS (3S) technologies, variable-rate processing equipment and decision support systems, and realizes real-time macro and micro monitoring of crops, land and soil in the agricultural production process. It can regularly obtain information on crop growth, pests, the state of water and fertilizer, and environmental conditions, and it supports dynamic analysis and precision management of field production based on an integrated GPS and GIS system. Precision agriculture can improve the income of farmers and the social and economic benefits of farming, and it reduces the waste of resources and harmful effects on the environment, which makes it an important development orientation of modern agriculture in the 21st century.

In foreign and domestic precision agriculture, combine harvesters, seeding machines and fertilizer applicators based on GPS are popular on modern farms. These smart machines precisely measure and record crop yield per unit of land, soil nutrients and soil moisture using GPS positioning technology; however, some defects of GPS technology limit their application [1]. In addition, only a few smart machines can communicate through RS-232 and GPIB interfaces; the vast majority of machines and sensors have no local or long-distance communication and networking abilities. The emergence of M2M (Machine to Machine) technology has made communication between machines possible: M2M technology can connect communication networks with intelligent sensor networks and realize information interchange. However, long-distance communication with movable terminal machines, together with data acquisition and transmission, is difficult to accomplish over cable networks, including the Internet; cable networks are suitable only for short distances and cannot support long-distance monitoring.

Wireless sensor networks integrate sensor technology, micro-electromechanical systems technology, wireless communication technology, embedded computing technology and distributed


information processing technology; such networks sense and acquire various kinds of information from monitored objects through the cooperation of integrated miniature sensors. They have many advantages, such as simple deployment, cost-effectiveness and easy adjustment, and can acquire environmental science data in a convenient way. Wireless sensor networks have been widely applied in meteorology, geography, and natural and man-made disaster monitoring [2-8], and have become a hot multidisciplinary research topic due to their broad application future. In 2003, MIT's Technology Review listed wireless sensor networks as one of ten advanced technologies that will change the world [9].

Current Research Situation of Wireless Sensor Networks in Agriculture

Wireless technology has been widely applied in agriculture; at present, however, most applications are based on star-structured LANs rather than wireless sensor networks. In agricultural applications, a number of wireless sensor network nodes form a monitoring network that acquires information and exchanges it in real time with a data service center through the self-organization, mutual information exchange and multi-hop routing of the network. This realizes the automation, intelligence and tele-control of precision agriculture and helps farmers find problems and locate them. As a result, the producing pattern of agriculture is gradually transferring from being manpower-centered to being information- and software-centered, which leads to the extensive use of automated, remotely controlled mechanical equipment.

Research Advances in Foreign Agriculture. The Intel Company set up the first wireless vineyard in Oregon in 2002. Sensor nodes were distributed in every corner of the vineyard, and soil moisture, soil temperature and the quantity of hazardous substances were detected every minute, ensuring that the grapes flourished and yielded a plentiful harvest. In Australia, the CSIRO ICT Centre put wireless sensor nodes on animals to monitor their physiology (pulse and blood pressure) and external environment, developing complete animal grassland models [8]. Kim Y and his colleagues [10] designed and built an automatic irrigation and monitoring system based on wireless sensor network technology, which could acquire soil temperature, moisture and meteorological data and send them directly to a local monitoring center via Bluetooth wireless communication; the monitoring software then processes the acquired data, makes irrigation decisions and forwards commands to the irrigation machine. G Vellidis and his colleagues [11] developed a prototype real-time smart sensor array for measuring soil moisture. Integration of the sensors with precision irrigation technologies provides a closed-loop irrigation system in which inputs from the smart sensor array determine the timing and amounts for real-time, site-specific irrigation applications. J Balendonck and his colleagues [12] developed a low-power wireless sensor network system named FLOW-AID, whose objective is to develop and test an irrigation management system that can be used under deficit conditions.

Research Advances in Domestic Agriculture. Applications of wireless sensor networks in precision agriculture have been developed in domestic science parks. Considering that artificial irrigation sometimes gives soil excessive or deficient water, or does not irrigate on time, He Qingbao et al. [13] described how the combination of a wireless sensor network and a single-chip microcomputer can be applied to automatic irrigation in agriculture and gardens, and introduced an irrigation system designed by the authors. The system can irrigate fields, lawns and gardens automatically without supervision; when it rains or the soil has enough water, the system ceases irrigating automatically, and it can also be set manually in particular situations. Liu Hui et al. [6] developed an in-field soil moisture and temperature monitoring system that meets the application requirements of the farmland environment. This system consists of a soil-monitoring wireless sensor network and a remote data center. In the wireless sensor network, the sensor node was developed using the JN5121 module, an IEEE 802.15.4/ZigBee wireless microcontroller. The sink nodes for aggregating and delivering network data were based on an ARM9 processor platform in order to meet high-performance requirements. A GPRS module was
484

Manufacturing Systems and Industry Application

integrated into the sink node for long distance communication. In the remote data center, the management software running on the host computer was developed for real-time data receiving and logging based on database management method. It also used ArcEngine, an embedded GIS developer kit to realized on-line spatial analysis of in-field data. This monitoring system may provide an effective research tool for spatial analysis and for irrigation decision making in precision agriculture. Li Zhen and his colleagues [14] developed and tested a wireless sensor network system. The system was composed of ten sensor nodes, one central node to collect data from the sensor nodes and one base node connected to a PC to retrieve, store, and present the data. TinyOS and ZigBee were applied as operation system and communication protocol, respectively. EC-5 low-power and low-cost soil moisture sensor was applied. Solar powering module met the energy requirements of both sensor and central nodes. Packet delivery rate (PDR) experiment results indicated that, overall, a stable data transmission was achieved since 7 out of 10 sensor nodes’ PDR were higher than 90% and another one was 89.2%. Due to manufacturing imperfection, two sensor nodes’ PDR was lower than 70%. This problem was fixed by replacing powering circuits of the two nodes. In addition, there still are some studies about wireless sensor network in agriculture. For example, in South China Agricultural University, a wireless sensor network was established for monitoring real-time moisture content and water height of field soil and smart water-saving irrigation system was designed based on the network [15]. Researches of Changshu Institute of Technology proposed an agricultural environment real-time monitor and control system based on 6LoWPAN sensor networks. The system can reduce the power consumption and shorten the delay time [16]. All in all, applications of wireless sensor networks are universal in foreign precision agriculture and in its beginning stage in domestic agriculture, and there is a certain gap in technology and application level. The function of system based on wireless sensor networks is very single and usually positioning measurement and processing of agriculture elements (such as soil moisture, nutrient and electric conductivity etc.), which does not meet final requirement of precision agriculture. The study designs a universal wireless sensor network for M2M combined intelligent communication technology and big agriculture machinery, which meet the real-time character, reliability, robustness and low energy consumption requirement of wireless communication in precision agriculture. Key Technology of Wireless Sensor Network for M2M Development Plan of ISA SP100.11a. The communication technology of the study was based on ISA SP100.11a, which is integrated in the CC2430 chip of TI. For the development of ISA SP100.11a communication technology, effective cross-layer network architecture was adopted, including fusion of data link layer and network layer, fusion of network layer and transport layer. At the same time, application layer was simplified correspondingly, which can minimize the platform of hardware. Spectrum Technology based on DSSS (Direct Sequence Spread Spectrum). ISA SP100.11a communication is usually based on DSSS technology of IEEE 802.15.4, however, in the course of industrial production, communication is often under the threat of accidental electromagnetism disturbs from the bad electromagnetism environment. 
Normal electromagnetic interference includes WiFi, motor start, large-size device start, microwave and radio frequency device, and Bluetooth. During electromagnetism disturbs, FHSS technology can guarantee safety of communication channel by the greatest extent. Network Technology based on Net Routing. Low power network is often disturbed by moving objects and people in the course of industrial production, with tremendous influence on network communication. In the ordinary course of events, the disturbing of people can result in the attenuation of 10-15 dB. Therefore, dynamic net routing technology can discovery new router, and then the new router will be put into effect, under the disturbing of objects and people. Design of Low Power Radio Frequency and Interface. The device in the study adopted the most advanced low power chip with a self-developed energy management system. The wireless device can adjust dynamically emission power while guaranteeing security and reliability of communication.


External Flash is added beside the chip system, according to the data storage requirements of industrial production. In the interface design, the terminal units reserve a serial communication interface, a digital signal interface, an analog signal interface and an ADC interface. Through these interfaces, the terminal units can be seamlessly connected to equipment (such as sensors and large machinery) on industrial sites.

[Block diagram: an 802.15.4 radio + MCU with USART, ADC, SPI/PWM and GPIO interfaces, external Flash, a TX/RX combiner, PA, T/R switch and crystal oscillator]

Fig. 1 Basic principle of the wireless device in this study

Typical Applications in Precision Fruits

The study can be applied to the smart machinery, sensors and other devices of an orchard by adding wireless network modules to the existing equipment. An advanced orchard wireless sensor network can then be set up with minimal investment, which overcomes the shortcoming of high cost and promotes widespread adoption. Some typical applications in precision fruits are as follows.

Farming Machine. When the study is integrated with a farming machine, position and other information can be returned to the computer center through the wireless sensor equipment. The computer center analyzes soil temperature, chemical constituents and other characteristics, and calculates an optimal planting schedule including the quantities of seed, fertilizer and pesticide. Through this combination, the tillage machine works faster while using less seed, fertilizer and pesticide, which raises production efficiency and economizes production cost.

Water-saving Irrigation Machine. A wireless sensor module is installed on the orchard irrigation machine. A hygrograph reads the moisture of plant leaves at intervals, and the position and moisture information is returned to the computer center in real time. The computer center analyzes the moisture data of soil and air in different plots of the orchard; when the plants need water, the computer center issues irrigation commands and the irrigation system irrigates the cropland in time.

Picking Machine. Wireless sensor networks can be integrated with picking machines. For example, French scientists developed an intelligent apple-picking robot which can recognize mature apples; the time to pick an apple is only 6 s, half the hand-picking time. An American corporation invented an intelligent mushroom-picking robot for which the picking diameter of the mushroom can be set; the average time to pick a mushroom is 6 s, without damaging it.

Conclusion and Prospect

Nowadays, precision irrigation and fertilization have achieved important progress, and precision agriculture devices and automatic agricultural chemicals are also used in agricultural production. Wireless sensor networks have become a hot multidisciplinary research topic due to their broad application future, and they provide new information acquisition and processing technology. Based on wireless sensor networks, the producing pattern of agriculture will gradually transfer from being manpower-centered to being information- and software-centered; however, applications of wireless sensor networks in domestic


agriculture are still at the stage of initial attempts, and a few key problems press for solution. In addition, the price of sensors for agricultural monitoring is expensive, which limits the extensive use of wireless sensor networks in agricultural production. In recent years, customers' demand for the safety of crop products has been increasing, and product traceability systems have been developed; precision fruits and farming devices are a necessary part of a food safety management system. We can predict that the research and application of the precision fruits technology system is an advanced subject of sustainable agricultural development in the 21st century, which will propel the modernization of agriculture. With the development of precision fruits in breadth and depth, integrated applications of wireless sensor networks will have a wide prospect in the future.

Acknowledgment

The research is supported by the Innovation Project of Suzhou Vocational University, the Advanced Research Project of Suzhou Vocational University (2010SZDYY02) and the Opening Project of the Jiangsu Province Support Software Engineering R&D Center for Modern Information Technology Application in Enterprise (No. SX200901, SX201002).

References

[1] Han Gaolou. The advantages and disadvantages of GPS positioning technology [J]. Shanxi Building, 2010(2): 56-57. (in Chinese)

[2] Wang Shu, Yan Yujie, Hu Fuping, et al. Theory and Application of Wireless Sensor Networks [M]. Beijing: Press of Beihang University, 2007. (in Chinese)

[3] Ji Jinshui. The application of ZigBee wireless sensor networking to an industrial automatic monitor system [J]. Industrial Instrumentation and Automation, 2007(3): 71-76. (in Chinese with English abstract)

[4] Guo Shifu, Ma Shuyuan, Wu Pingdong, et al. Pulse wave measurement system based on ZigBee wireless sensor network [J]. Application Research of Computers, 2007, 24(4): 258-260. (in Chinese with English abstract)

[5] Sun Zeyu, Li Meng. Improved location algorithm of coal mine monitoring system based on wireless sensor network [J]. Computer Measurement & Control, 2010, 18(9): 2008-2011. (in Chinese with English abstract)

[6] Liu Hui, Wang Maohua, Wang Yuexuan, et al. Development of farmland soil moisture and temperature monitoring system based on wireless sensor network [J]. Journal of Jilin University: Engineering and Technology Edition, 2008, 38(3): 604-608. (in Chinese with English abstract)

[7] Fu Hailong, Jia Mingchun, Peng Guichu. Research on continuous environmental radiation monitoring system for NPP based on wireless sensor network [J]. Nuclear Electronics & Detection Technology, 2010, 30(6): 839-842. (in Chinese with English abstract)

[8] Qiao Xiaojun, Zhang Xin, Wang Cheng, et al. Application of the wireless sensor networks in agriculture [J]. Transactions of the CSAE, 2005, 21(Supp 2): 232-234. (in Chinese with English abstract)

[9] Estrin D, Govindan R, Heidemann J S, et al. Next century challenges: scalable coordination in sensor networks [A]. In: Proc. 5th ACM/IEEE International Conference on Mobile Computing and Networking [C]. 1999: 263-270.

[10] Kim Y, Evans R G, Iversen W, et al. Instrumentation and control for wireless sensor network for automated irrigation [C]. ASABE Paper No. 061105. St. Joseph, 2006.


[11] Vellidis G, Tucker M, Perry C, et al. A real-time wireless smart sensor array for scheduling irrigation [J]. Computers and Electronics in Agriculture, 2008, 61(1): 44-50.

[12] Balendonck J, Hemming J, van Tuijl B A J, et al. Sensors and wireless sensor networks for irrigation management under deficit conditions (FLOW-AID) [C]// Proceedings of the International Conference on Agricultural Engineering / Agricultural & Biosystems Engineering for a Sustainable World. EurAgEng, Hersonissos, Crete, 2008-06-23/25.

[13] He Qingbao, Zhou Dequan. Automatic irrigating system based on wireless sensor network [J]. Journal of Agricultural Mechanization Research, 2010, 11: 38-41. (in Chinese with English abstract)

[14] Li Zhen, Wang Ning, Hong Tiansheng, et al. Design of wireless sensor network system based on in-field soil water content monitoring [J]. Transactions of the CSAE, 2010, 26(2): 212-217. (in Chinese with English abstract)

[15] Xiao Kehui, Xiao Deqin, Luo Xiwen. Smart water-saving irrigation system in precision agriculture based on wireless sensor network [J]. Transactions of the CSAE, 2010, 26(11): 170-175. (in English with Chinese abstract)

[16] Wang Xiaonan, Yin Xudong. Agricultural environment real-time monitor and control system based on 6LoWPAN sensor networks [J]. Transactions of the CSAE, 2010, 26(10): 224-228. (in Chinese with English abstract)

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.488

Computational method research of the salvo catching probability for wake-homing torpedo based on compendium preparation

JIANG Tao1,a, WU Dixiao1,b and RUI Li1,c

1 Department of Weaponry Engineering, Naval University of Engineering, Wuhan, China

a [email protected], b [email protected], c [email protected]

Keywords: wake-homing torpedo, fixed lead angle, compendium preparation, catching probability

Abstract. Under the condition of compendium preparation of shooting data, and based on the correlation between the torpedoes catching the target during a salvo, a computational method for the catching probability of a wake-homing torpedo salvo is proposed, and a computational model of the catching probability for a wake-homing torpedo double salvo is established. Using existing data, the simulation results show that the model proposed in this paper can support optimization of the fixed lead angle for the wake-homing torpedo salvo.

Introduction

Under the condition of compendium preparation of shooting data, a wake-homing torpedo usually adopts the double-salvo method with a fixed lead angle to attack a ship target. The catching probability has a direct effect on the efficiency of the torpedo salvo. During a double-torpedo salvo, since the positions of the two torpedoes are correlated, the events in which the two torpedoes catch the target are not mutually independent, so the computational method for the catching probability of a double-torpedo salvo is completely different from that for two torpedoes launched separately. Based on the compendium preparation of the torpedo shooting data and a double-salvo attack on the target, a computational method for the salvo catching probability of the wake-homing torpedo is proposed in this paper.

Primary computational model

As shown in Fig. 1, the ship target is at point M and the fire platform is at point S at the moment of the torpedo salvo. The velocity and course angle of the ship target are $V_c$ and $q_c$ respectively, the torpedo velocity is $V_T$, and $D_c$ is the initial distance of the ship target. The effective region length of the ship target wake is $L_q$. The expansion distance and the expansion voyage of the torpedoes are $d_T$ and $D_s$, and the torpedoes meet the target course at points $M_1$ and $M_2$ respectively. $\varphi$ is the fixed lead angle of the salvo [1,5].

For the right-side torpedo, the voyage distance from S to $M_1$ is
$$S_{T1} = \frac{D_c \sin q_c}{\sin(q_c + \varphi)} + D_s - \sqrt{D_s^2 - 0.25 d_T^2} - 0.5 d_T \cot(q_c + \varphi), \qquad (1)$$
and the elapsed time is
$$t_1 = \frac{1}{V_T} \left[ \frac{D_c \sin q_c}{\sin(q_c + \varphi)} + D_s - \sqrt{D_s^2 - 0.25 d_T^2} - 0.5 d_T \cot(q_c + \varphi) \right]. \qquad (2)$$
During this period, the voyage distance of the ship target from M is
$$S_{11} = V_c t_1. \qquad (3)$$

The distance between the effective region of ship target wake and M is


$$S_{12} = V_c t_1 - L_q = V_c t_1 - k_q V_c. \qquad (4)$$
Then the distance between the torpedo and M can be expressed as
$$S_{10} = \frac{D_c \sin\varphi}{\sin(q_c + \varphi)} + 0.5 d_T \csc(q_c + \varphi). \qquad (5)$$
Similarly, for the left-side torpedo, the following can be obtained:
$$t_2 = \frac{1}{V_T} \left[ \frac{D_c \sin q_c}{\sin(q_c + \varphi)} + D_s - \sqrt{D_s^2 - 0.25 d_T^2} + 0.5 d_T \cot(q_c + \varphi) \right]. \qquad (6)$$

Fig. 1 The salvo method of the wake-homing torpedo (target M with wake length $L_q$ and course angle $q_c$, aim points $M_1$ and $M_2$, initial distance $D_c$, expansion distance $d_T$, lead angle $\varphi$, fire platform S)

Define $S_{21} = V_c t_2$, $S_{22} = (t_2 - k_q) V_c$ and
$$S_{20} = \frac{D_c \sin\varphi}{\sin(q_c + \varphi)} - 0.5 d_T \csc(q_c + \varphi).$$

During a double-torpedo salvo, for at least one torpedo to fall within the effective region of the ship target wake, the following condition should be satisfied [3,4]:
$$S_{12} \le S_{10} \le S_{11} \qquad (7)$$
or
$$S_{22} \le S_{20} \le S_{21}. \qquad (8)$$
Under the condition of compendium preparation of the salvo parameters, the velocity and course angle of the ship target are unknown, so whether the above condition is satisfied cannot be judged directly. However, under the assumption that the course angle of the ship target is constant, the spans of target velocity that satisfy equations (7) and (8) respectively can be defined. These velocity scopes can be expressed as sets, i.e.
$$V_{11} = \{ V_c \mid S_{12} \le S_{10} \le S_{11} \}, \qquad (9)$$
$$V_{21} = \{ V_c \mid S_{22} \le S_{20} \le S_{21} \}. \qquad (10)$$
During the double salvo of a wake-homing torpedo, in order to guarantee the hit probability, certain requirements are placed on the velocity scope of the attacked target. Even under the condition of compendium preparation of shooting data, it is not necessary to determine the target velocity precisely, but the


fire commander should give an approximate judgment of the target velocity. Moreover, the torpedo salvo can proceed only when this judgment meets the demands of the prescribed velocity; therefore, the influence of this factor should be considered when computing the catching probability. The prescribed velocity is expressed as a set:
$$V_0 = \{ V_c \mid V_{c\min} \le V_c \le V_{c\max} \}. \qquad (11)$$
Mark the event in which the right-side torpedo falls within the effective region of the ship target wake as event A, the event in which the left-side torpedo falls within the effective region of the ship target wake as event B, and the event in which a torpedo catches the target as event C. Obviously, under the condition that the course angle of the ship target is constant, the target velocity set when event A happens is
$$V_1 = V_{11} \cap V_0, \qquad (12)$$
and the target velocity set when event B happens is
$$V_2 = V_{21} \cap V_0. \qquad (13)$$
The target velocity set when events A and B happen at the same time is
$$V_3 = V_1 \cap V_2. \qquad (14)$$
Let

V1 min = min{Vc | Vc ∈ V1 }

(15) V1max = max{Vc | Vc ∈ V1 } Then the probability of which the right side torpedo falls in with the effective region of ship target wake is P(A | q c ) =

$$P(A \mid q_c) = \frac{1}{V_{c\max}-V_{c\min}} \int_{V_{1\min}}^{V_{1\max}} f(V_c)\,dV_c, \quad (16)$$

where f(Vc) is the distribution density function of the target velocity. Similarly,

$$P(B \mid q_c) = \frac{1}{V_{c\max}-V_{c\min}} \int_{V_{2\min}}^{V_{2\max}} f(V_c)\,dV_c, \quad (17)$$

$$P(AB \mid q_c) = \frac{1}{V_{c\max}-V_{c\min}} \int_{V_{3\min}}^{V_{3\max}} f(V_c)\,dV_c. \quad (18)$$

Therefore

$$P(A+B \mid q_c) = \frac{1}{V_{c\max}-V_{c\min}} \sum_{i=1}^{3} \int_{V_{i\min}}^{V_{i\max}} f(V_c)\,dV_c. \quad (19)$$

When a torpedo falls into the ship target wake, there are still many other factors which influence whether the target is captured, especially the angle at which the torpedo enters the effective region of the wake, which can be expressed as qc + φ. If the catching probability for a torpedo entering the effective region of the wake at a given angle is expressed as pb(qc + φ), then

$$P(C \mid q_c) = P(A+B \mid q_c)\,p_b(q_c+\varphi) = \frac{p_b(q_c+\varphi)}{V_{c\max}-V_{c\min}} \sum_{i=1}^{3} \int_{V_{i\min}}^{V_{i\max}} f(V_c)\,dV_c. \quad (20)$$


If the span of the course angle of the ship target is $[q_{c\min}, q_{c\max}]$ with distribution density $f(q_c)$, the catching probability can be expressed as

$$P(C) = P_b = \frac{1}{V_{c\max}-V_{c\min}} \int_{q_{c\min}}^{q_{c\max}} \left[\sum_{i=1}^{3} \int_{V_{i\min}}^{V_{i\max}} f(V_c)\,dV_c\right] p_b(q_c+\varphi)\,f(q_c)\,dq_c. \quad (21)$$
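For concreteness, the following is a minimal numerical sketch of evaluating Eq. (21) under uniform distributions for Vc and qc. The functions v1_set and v2_set stand in for the geometric velocity sets V1 and V2 of Eqs. (12)-(13); they are placeholders, not the paper's salvo geometry, and the third set is handled by inclusion-exclusion.

```python
import numpy as np

def catching_probability(v1_set, v2_set, vc_min, vc_max,
                         qc_min, qc_max, pb=lambda a: 1.0, phi=0.0, n_q=200):
    """Numerically evaluate Eq. (21) for uniform f(Vc) and f(qc).

    v1_set(qc) and v2_set(qc) must return the velocity interval (lo, hi)
    of the sets V1 and V2 in Eqs. (12)-(13); the geometry that defines
    them is problem-specific and is NOT encoded here."""
    total = 0.0
    for qc in np.linspace(qc_min, qc_max, n_q):
        lo1, hi1 = v1_set(qc)
        lo2, hi2 = v2_set(qc)
        lo3, hi3 = max(lo1, lo2), min(hi1, hi2)   # V3 = V1 ∩ V2, Eq. (14)
        m1, m2 = max(hi1 - lo1, 0.0), max(hi2 - lo2, 0.0)
        m3 = max(hi3 - lo3, 0.0)
        # P(A+B | qc) over the three velocity sets, cf. Eq. (19); the third
        # set enters with a minus sign here (inclusion-exclusion)
        total += pb(qc + phi) * (m1 + m2 - m3) / (vc_max - vc_min)
    return total / n_q        # uniform f(qc): average over [qc_min, qc_max]

# toy usage with made-up velocity intervals (illustration only)
Pb = catching_probability(lambda q: (8.0, 14.0), lambda q: (10.0, 16.0),
                          vc_min=6.0, vc_max=18.0,
                          qc_min=np.radians(30.0), qc_max=np.radians(150.0))
print(round(Pb, 3))
```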

Calculation example and simulation experiment
Suppose that the target velocity and the target course angle are both uniformly distributed within their own spans [2] and that pb(α) = 1. Based on the model proposed in this paper, the catching probability of a type of wake-homing torpedo is calculated and simulated. The calculation and simulation results of two practical examples are shown in Fig. 2 and Fig. 3.

Fig. 2 The curve of the catching probability versus the initial distance (catching probability over normalized initial distance)

As seen from Fig. 2, different initial distances of the target correspond to different catching probabilities, so for a particular initial distance, in order to improve the catching probability of the torpedo, it is necessary to select a reasonable fixed lead angle or to adjust the shooting data, such as the torpedo expansion distance. Meanwhile, according to the tactical and technical performance of the wake-homing torpedo, the maximal initial distance of the double-torpedo salvo can be ascertained under the condition that the catching probability is not less than the prescribed catching probability.
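The dependence on the fixed lead angle examined next can be explored with a simple scan over φ, reusing the catching_probability sketch above; the φ-dependence of the intervals below is invented purely to make the scan runnable and is not the paper's geometry.

```python
import numpy as np

# Locating an optimal fixed lead angle by scanning phi; the interval
# functions are hypothetical stand-ins for the salvo geometry.
def v1_of(phi):
    return lambda q: (8.0 + 2.0 * np.sin(phi), 14.0)

def v2_of(phi):
    return lambda q: (10.0, 16.0 - 2.0 * np.sin(phi))

phis = np.radians(np.linspace(5.0, 45.0, 41))
probs = [catching_probability(v1_of(ph), v2_of(ph), 6.0, 18.0,
                              np.radians(30.0), np.radians(150.0), phi=ph)
         for ph in phis]
best = phis[int(np.argmax(probs))]
print("optimal fixed lead angle:", round(float(np.degrees(best)), 1), "deg")
```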

Fig. 3 The curve of the catching probability versus the fixed lead angle (catching probability over normalized fixed lead angle)


As seen from Fig. 3, the catching probability changes with the fixed lead angle, and there exists an optimal fixed lead angle which maximizes the catching probability. So, based on the model proposed in this paper, an optimizing model of the salvo fixed lead angle for a wake-homing torpedo can be established. Meanwhile, as seen from Fig. 2 and Fig. 3, the simulated results coincide with the theoretical results and show the validity of the proposed model.

Summary
Under the condition of compendium preparation of shooting data, and based on the correlation between the torpedoes catching the target during a salvo, a computational model of the catching probability of a wake-homing torpedo's double salvo is established in this paper. The proposed model not only can compute the catching probability for the target, but also lays a foundation for the optimization of salvo elements such as the fixed lead angle. In the modeling in this paper, only the primary search of the torpedo is considered. The main reason the re-search of the torpedo is not considered is that different kinds of wake-homing torpedoes have different re-search strategies and modes. The proposed model can be complemented for a particular torpedo model according to the characteristics of its re-search if necessary.

References
[1] Jiang Xingzhou, Chen Xi and Jiang Tao: Design Principle of Torpedo Guidance (Naval University of Engineering Publishing, Wuhan 2001).
[2] P. G. Bergman: Physics Basics of Hydroacoustics (Science Publishing House, Beijing 1958).
[3] Paul C. Etter: Underwater Acoustic Modeling: Principles, Techniques and Applications (Elsevier Applied Science Publishing, London and New York 1991).
[4] Chen Chunyu: Optimization of Key Parameters and Calculation of Operating Range in Acoustic Wake Homing System, Torpedo Technology, Vol. 11 (2003), p. 2.
[5] Andrei Tyurin, Alexander Stashkevich and Emmanuel Taranov: Basics of Acoustics (Ship Press Publishing, Moscow 1966).

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.493

Error estimates of H1-Galerkin expanded mixed finite element methods for heat problems

Che Haitao1,a, Li Meixia1,b and Liu Lijuan2,c
1 College of Mathematics and Information Science, Weifang University, Weifang 261061, China
2 No.1 Middle School of Weifang, Weifang 261061, China
a [email protected], b [email protected], c [email protected]

Key words: H1-Galerkin expanded mixed finite element method; error estimates; weak formulation; heat problems.

Abstract. An H1-Galerkin expanded mixed finite element method is discussed for a class of second-order heat equations. The method possesses the advantages of mixed finite element methods while avoiding directly inverting the permeability tensor, which is important especially in a low-permeability zone. The H1-Galerkin expanded mixed finite element method for heat equations is described, and an optimal-order error estimate for the method is obtained.

Introduction
In this paper, we consider the following initial-boundary value problem for a heat system:

$$\begin{cases} u_{tt} + u_t - \nabla\cdot\bigl(a(x,t)\nabla u_t + a(x,t)\nabla u\bigr) = f(x,t), & (x,t)\in\Omega\times J,\\ u(x,t)=0, & (x,t)\in\partial\Omega\times J,\\ u(x,0)=u_0,\quad u_t(x,0)=u_1, & x\in\Omega, \end{cases} \quad (1)$$

where Ω is a convex polygonal domain in R^d (d = 1, 2, 3) with Lipschitz continuous boundary ∂Ω, and J = (0,T] is the time interval with 0 < T < ∞. The main result for the resulting H1-Galerkin expanded mixed scheme is the following.

Theorem 1. There exists a constant c > 0, independent of h and t, such that

$$\|u-u_h\|_{H^1} \le c\,h^{\min(k+1,m)}, \qquad \|\nabla\cdot(\sigma-\sigma_h)\| \le c\,h^{k_1}, \qquad \|u-u_h\| + \|p-p_h\| + \|\sigma-\sigma_h\| \le c\,h^{\min(k+1,m+1)},$$

where c depends on the norms $\|\sigma\|_{k+1}$, $\|\sigma_t\|_{k+1}$, $\|\sigma_{tt}\|_{k+1}$, $\|p\|_{k_1+1}$, $\|p_t\|_{k_1+1}$, $\|p_{tt}\|_{k_1+1}$, $\|p\|_{k+1}$, $\|p_t\|_{k+1}$ and $\|p_{tt}\|_{k+1}$. Here k ≥ 1 and m > 1 for d = 2, 3; the index k can be relaxed to include the case k = 0 for d = 1.

Proof. Since estimates of θ, η and α can be found from (5) and (7), it suffices to estimate ξ, ζ and β. Choosing $v_h = \zeta$ in (12), we have

$$\|\zeta\|^2 \le c\left(\|\theta\|^2 + \|\xi\|^2\right). \quad (13)$$

Further, set $w_h = \xi_t$ in (12) and $q_h = \zeta$ in (10) and subtract the resulting equations to obtain

$$(\xi_t, a\xi) + (\nabla\cdot\zeta, \nabla\cdot\zeta) = (\eta, \xi_t) - (\theta_t, \zeta) - (a\theta, \xi). \quad (14)$$


Eq. (14) can be rewritten as

$$\frac{1}{2}\frac{d}{dt}(\xi, a\xi) + (\nabla\cdot\zeta, \nabla\cdot\zeta) = (\eta, \xi_t) + (\xi, a_t\xi) - (\theta_t, \zeta) - (a\theta, \xi). \quad (15)$$

Integrating this equation with respect to time from 0 to t and using the Cauchy-Schwarz inequality yields

$$\|\xi\|^2 + \int_0^t \|\nabla\cdot\zeta\|^2\,ds \le c\int_0^t \left(\|\eta\|^2 + \|\theta_t\|^2 + \|\zeta\|^2 + \|\xi\|^2 + \|\xi_t\|^2\right)ds. \quad (16)$$

In order to estimate $\|\xi\|^2$, we need an estimate of $\|\xi_t\|^2$. Differentiating the equations in (10) and (12) with respect to t gives

$$(\xi_{tt}, q_h) - (\nabla\cdot\zeta_t, \nabla\cdot q_h) = -(\theta_{tt}, \nabla\cdot q_h), \quad \forall q_h \in H_h, \quad (17)$$

$$(\zeta_t, w_h) = (a_t\xi, w_h) + (a\xi_t, w_h) + (a_t\theta, w_h) + (a\theta_t, w_h) - (\eta_t, w_h), \quad \forall w_h \in H_h. \quad (18)$$

We set $w_h = \xi_{tt}$ in (18) and $q_h = \zeta_t$ in (17) and subtract the resulting equations to obtain

(20)

t

+ ∫ (2(atθt , ξt ) + (aθt t , ξt ) + (at tθ , ξt ) − (ηt t , ξt )) ds 0

1 + ∫ ( (atξ , ξ ) + (attξ , ξt ) + (atξt , ξt ) − (θ tt , ζ t ))ds, 0 2 We use Cauchy-Schwartz and Young inequalities to bound the right side term by term t

2

t

(21)

ξt + ∫ ∇ ⋅ ζ t ds ≤ c( ηt + θ + θt + ξ ) 2

2

2

2

2

0

t

+ c( ∫ ( ηtt

2

0

Since there is ξt

2

+ ξt

2



2

+ θt

2

+ θtt

2

2

2

+ η + ζ t )ds )

(22) 2

in the right side term of (22), so we need get the estimate ξt . Set wh = ζ t in

(18) to obtain (ζ t , ζ t ) = (atξ , ζ t ) + (aξt , ζ t ) + (atθ , ζ t ) + (aθ t , ζ t ) − (ηt , ζ t ) . For(23), apply Young inequalities to have 2 2 2 2 2 2 ζ t ≤ c( ξ + ξ t + θ + θ t + ηt ) .

(23) (24)

Use (24) and apply Gronwall inequality to (22) to get 2

t

ξt + ∫ ∇ ⋅ ζ t ds ≤ c( ηt + θ + θt + ξ ) 2

2

2

2

2

0

t

+ c( ∫ ( ηtt

2

+ ξt

0

2

+ θtt

2

2

+η + ξ

2



2

+ θt

2

+ θtt

2

2

+ ηt )ds )

(25)

Combine (25) and (13) and apply Gronwall inequality to (16) to obtain t

ξ ≤ c( ∫ ( ηtt + θtt + η + θ + θt + θtt + ηt )ds) . 2

2

2

2

2

2

2

(26)

0
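For reference, the integral form of the Gronwall lemma invoked in (25), (26) and below is the standard one; the statement is supplied here for the reader and is not reproduced from the paper.

```latex
% Integral form of the Gronwall lemma (standard statement)
\[
  \varphi(t) \le C + \int_0^t g(s)\,\varphi(s)\,ds, \quad g \ge 0
  \;\Longrightarrow\;
  \varphi(t) \le C \exp\!\Big(\int_0^t g(s)\,ds\Big).
\]
```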

Then (13) and (25) can be rewritten respectively as

$$\|\zeta\|^2 \le c\left(\|\eta\|^2 + \|\theta\|^2 + \int_0^t \left(\|\eta_{tt}\|^2 + \|\theta_{tt}\|^2 + \|\eta\|^2 + \|\theta\|^2 + \|\theta_t\|^2 + \|\theta_{tt}\|^2 + \|\eta_t\|^2\right)ds\right). \quad (27)$$


Setting $q_h = \zeta$ in (10) gives

$$(\xi_t, \zeta) - (\nabla\cdot\zeta, \nabla\cdot\zeta) = -(\theta_t, \zeta). \quad (28)$$

Using the Cauchy-Schwarz inequality together with (27) and (28), we bound $\|\nabla\cdot\zeta\|$:

$$\|\nabla\cdot\zeta\|^2 \le c\left(\|\xi_t\|^2 + \|\zeta\|^2 + \|\theta_t\|^2\right) \le c\left(\|\eta_t\|^2 + \|\eta\|^2 + \|\theta_t\|^2 + \|\theta\|^2\right) + c\int_0^t \left(\|\eta_{tt}\|^2 + \|\theta_{tt}\|^2 + \|\eta\|^2 + \|\theta\|^2 + \|\theta_t\|^2 + \|\theta_{tt}\|^2 + \|\eta_t\|^2\right)ds. \quad (29)$$

Choosing $v_h = \beta$ in (11) gives

$$(\xi, \nabla\beta) = (\nabla\beta_t + \nabla\beta, \nabla\beta) - (\theta, \nabla\beta). \quad (30)$$

Noting that $\frac{d}{dt}(\nabla\beta, \nabla\beta) = 2(\nabla\beta_t, \nabla\beta)$, (30) can be rewritten as

$$\frac{1}{2}\frac{d}{dt}(\nabla\beta, \nabla\beta) + (\nabla\beta, \nabla\beta) = (\xi, \nabla\beta) + (\theta, \nabla\beta). \quad (31)$$

Integrating this equation with respect to time from 0 to t and using the Cauchy-Schwarz inequality,

$$\|\nabla\beta\|^2 \le c\int_0^t \left(\|\xi\|^2 + \|\theta\|^2 + \|\nabla\beta\|^2\right)ds.$$

Applying the Gronwall inequality to the above inequality and using (26), we get

$$\|\nabla\beta\|^2 \le c\int_0^t \left(\|\xi\|^2 + \|\theta\|^2 + \|\nabla\beta\|^2\right)ds \le c\int_0^t \left(\|\eta_{tt}\|^2 + \|\theta_{tt}\|^2 + \|\eta\|^2 + \|\theta\|^2 + \|\theta_t\|^2 + \|\theta_{tt}\|^2 + \|\eta_t\|^2\right)ds. \quad (32)$$

Since $V_h \subset H_0^1$, we have $\|\beta\| \le \|\nabla\beta\|$, so the estimate of β follows. We use (26), (27), (30), (31), (5), (7), (8) and the triangle inequality to complete the proof of the theorem.

Remark 1. Under the conditions of Theorem 1, the following L∞ estimate holds for d = 1 and 2:

$$\|u - u_h\|_{L^\infty} \le c\,\lvert\ln h\rvert^{d-1}\, h^{\min(k+1,m+1)}.$$

Summary
In this paper, an H1-Galerkin mixed finite element method combined with an expanded mixed element method is discussed for a class of second-order heat equations. The method possesses the advantages of mixed finite elements while avoiding directly inverting the permeability tensor, which is important especially in a low-permeability zone. Important issues remain to be addressed in this area; for example, nonlinear heat equations can be considered, and it is important and challenging to solve some practical problems.


References
[1] J. Nagumo, S. Arimoto and S. Yoshizawa: An active pulse transmission line simulating nerve axon, Proc. IRE 50 (1962), 91-102.
[2] C. V. Pao: A mixed initial boundary value problem arising in neurophysiology, J. Math. Anal. Appl. 52 (1975), 105-119.
[3] R. Arima and Y. Hasegawa: On global solutions to a class of mixed problems of a semi-linear differential equation, Proc. Jpn. Acad. 39 (1963), 721-725.
[4] G. Ponce: Global existence of small solutions to a class of nonlinear evolution equations, Nonlinear Anal. 9 (1985), 399-418.
[5] Weiming Wan and Yacheng Liu: Long time behavior of solutions for initial boundary value problem of pseudo-hyperbolic equations, Acta Math. Appl. Sin. 22(2) (1999), 311-355.
[6] Hui Guo and Hongxing Rui: Least-squares Galerkin procedure for pseudo-hyperbolic equations, Appl. Math. Comput. 189 (2007), 425-439.
[7] Jinchao Xu and Aihui Zhou: Local and parallel finite element algorithms based on two-grid discretizations, Math. Comp. 69 (1999), 881-909.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.499

Anonymizing Methods against Republication of Incremental Numerical Sensitive Data

Xiaolin Zhang, Jie Yu, Yuesheng Tan, Lixin Liu
Department of Information and Engineering, Inner Mongolia University of Science and Technology, Baotou 014010, China
zhangxl@imust.cn, [email protected]

Key words: incremental data; numerical sensitive data; privacy protection

Abstract: Privacy protection for numerical sensitive data has become a serious concern in many applications. Current privacy protection for numerical sensitive data is based on static datasets. However, most real-world data sources are dynamic, and the direct application of existing static-dataset privacy-preserving techniques often causes unexpected disclosure of private information. This paper analyzes various leakage risks of republication of incremental numerical sensitive data and proposes an efficient anonymizing method against republication of incremental numerical sensitive data. The experiments show that this method protects privacy adequately.

Introduction
Along with social progress and the constant development of computer technology, people pay more and more attention to privacy protection technology. Existing anonymization technology is based entirely on static datasets; in other words, most anonymized datasets do not support data insertion, deletion or update operations. Therefore, dynamic republication of numerical sensitive data is an urgent problem at present. For numerical sensitive data, we propose an effective anonymous republication method for incremental numerical sensitive data. On the premise of good privacy protection, we realize dynamic republication of incremental numerical sensitive data.

Inadequacy of Known Generalization Principles
Current privacy-preserving techniques are divided into two branches: one deals with categorical sensitive data, the other with numerical sensitive data. For categorical sensitive data: in k-anonymity [1,2], privacy is guaranteed by ensuring that any record in a released dataset is indistinguishable (with respect to a set of attributes, called the quasi-identifier) from at least (k-1) other records in the dataset. A more effective principle, l-diversity, is proposed by Machanavajjhala et al. [3]; they consider the relationship between the QI-attribute group and the sensitive attribute, requiring each QI-group to contain at least l 'well-represented' sensitive values. Xiao first studied anonymous republication of dynamic datasets and proposed the new principle of m-invariance [4] to support both insertions and deletions. For numerical sensitive data: variance control [5] specifies a threshold t and demands that in every QI-group the variance of the sensitive values be at least t. Unfortunately, no matter how large the variance is, the QI-group may still suffer from proximity breach. Two more effective principles, (k,e)-anonymity [6] and t-closeness [7], have been proposed, but they cannot solve proximity breach well for numerical sensitive attributes. Li et al. propose the principle of (ε,m)-anonymity [8]: given a QI-group G, for every sensitive value x in G, at most 1/m of the tuples in G can have sensitive values 'similar' to x. The (ε,m)-anonymity model solves proximity breach well for numerical sensitive attributes, but it concentrates on static datasets, which need only a 'one time' release.


Current privacy protection methods for numerical sensitive data thus have shortcomings. On this basis, we propose an anonymizing method to solve republication of incremental numerical sensitive data. This method solves proximity breach well for numerical sensitive attributes and provides stronger protection for privacy.

Anonymizing Method for Incremental Numerical Sensitive Data
Basic Concepts. Definition 1, Quasi-identifier: given a database table T(A1, A2, ..., An), a quasi-identifier is the minimal attribute set that can be related to external information to disclose personal identity, QT = {A1, A2, ..., Ai} ⊆ {A1, A2, ..., An}. Definition 2, Sensitive Attribute: an attribute containing personal privacy data, such as illness, salary and so on. Definition 3, Generalization [9]: generalization is a popular methodology of privacy preservation; it divides the tuples into QI-groups and then transforms the QI values in every group into a uniform format. Definition 4, Consistent generalization [10]: if Gf(T)(t) ⊆ Gf1(T∪ΔT)(t), a group of tuples and the tuples t (t ⊆ T) are generalized together in T and T∪ΔT.

Analysis of the privacy leaks of incremental data. In order to eliminate proximity breach of numerical sensitive data, we use the idea of the (ε,m)-anonymity method to ensure that proximity breach does not occur. But when a new record is inserted and the existing data are directly anonymized together with it, privacy disclosure inevitably follows. In the original data of Table 1, name is the identifier attribute, age and zip code are the quasi-identifier attributes, and salary is the sensitive data.

Table 1 The original staff data

Name    Age  Zip  Salary
Lark    17   12k  1000
Patty   19   13k  1010
Anson   24   16k  5000
Nancy   29   21k  16000
Lindy   34   24k  31000
Anna    39   36k  33000
Alice   45   39k  24000

Table 2 Anonymous publishing table

Group ID  Age      Zip        Salary
1         [17,24]  [12k,16k]  1000
1         [17,24]  [12k,16k]  1010
1         [17,24]  [12k,16k]  5000
2         [29,34]  [21k,24k]  16000
2         [29,34]  [21k,24k]  31000
3         [39,45]  [36k,39k]  33000
3         [39,45]  [36k,39k]  24000

Table 3 Inserted update records

Name   Age  Zip  Salary
Gerek  18   15k  1200
Mark   35   27k  17000
Jack   40   37k  39000


Table 4 Updated anonymous publishing table

Group ID  Age      Zip        Salary
1         [17,24]  [12k,16k]  1000
1         [17,24]  [12k,16k]  1200
1         [17,24]  [12k,16k]  1010
1         [17,24]  [12k,16k]  5000
2         [29,34]  [21k,24k]  16000
2         [29,34]  [21k,24k]  31000
3         [35,39]  [27k,36k]  17000
3         [35,39]  [27k,36k]  33000
4         [40,45]  [37k,39k]  39000
4         [40,45]  [37k,39k]  24000

Table 5 The extrapolation table based on Table 2 and Table 4

Group ID  Age      Zip        Salary
1         [17,24]  [12k,16k]  1000
1         [17,24]  [12k,16k]  1200
1         [17,24]  [12k,16k]  1010
1         [17,24]  [12k,16k]  5000
2         [29,34]  [21k,24k]  16000
2         [29,34]  [21k,24k]  31000
3         [35,39]  [27k,36k]  17000
3         39       36k        33000
4         [40,45]  [37k,39k]  39000
4         [40,45]  [37k,39k]  24000

In the anonymization process, the identifier attribute of the original table is hidden and replaced by a Group ID. Table 2 is the anonymous publication table for Table 1, and Table 3 is a table of newly inserted records. If the records of Table 3 are directly inserted into Table 1 (the original record table), we obtain the updated anonymous publication Table 4. By comparing Table 2 and Table 4, an attacker can derive the deduction Table 5. Based on the original background knowledge, the attacker can easily recover the exact quasi-identifiers of records in group 3 from Table 5, so personal privacy is disclosed. The above analysis shows that if the incremental data and the original data are anonymized together directly, the attacker can learn personal privacy by combining the first published table with the updated one, so personal privacy is leaked.

Anonymization method. We have analyzed how, under incremental updating, an attacker deduces personal privacy from background knowledge. Here we introduce a new anonymization method, Anonymizing Incremental Numerical data, which eliminates proximity breach in publishing numerical sensitive attributes and solves the republication of incremental numerical sensitive data. This paper uses the (ε,m)-anonymity idea to eliminate proximity breach of numerical sensitive data. We only study absolute (ε,m)-anonymity, where ε is a non-negative value. The value m is given by the customer, and the upper bound of the proximity-breach probability is 1/m. The emax algorithm obtains max{e1, e2}, which is contrasted with the sensitive values of the original records to ultimately determine the values of e1 and e2. For newly inserted records, we use the principle of consistent generalization: when inserting new records, we anonymize them independently and release them together with the original anonymous data. Our algorithm mainly contains the following steps (a sketch follows the list):
(1) When adding a series of records, for each QI-dimension Ai (1 ≤ i ≤ d) we invoke the quick-selection algorithm [11] to obtain the median Ai-value v in G. Then, with a single scan of G, we assign each tuple in G to G1 or G2 by comparing its Ai-value with v.
(2) Sort the tuples in G1 and G2 in ascending order of their sensitive values.
(3) Check whether G1 and G2 satisfy the anonymity condition; if they do, G1 and G2 are directly generalized and published with the original anonymous data.


(4) If G1 or G2 violates the anonymity condition, first obtain the maxsize value using the emax algorithm. Then partition the violating set into g smaller subsets G1, G2, ..., Gg, where g equals maxsize(G1/G2). Specifically, we scan the tuples of G1 or G2 in ascending order of their sensitive data and assign the i-th (1 ≤ i ≤ |G1/G2|) tuple to Gj, where j = (i mod g) + 1. All of the resulting subsets G1, G2, ..., Gg are guaranteed to obey the anonymity condition and are directly generalized and published with the original anonymous data.
Through the above algorithm, on the premise of protecting personal privacy well, we realize the republication of incremental numerical sensitive data.
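The following is a minimal sketch of the partitioning steps (2)-(4). The anonymity test and the choice g = m are simplified, assumed stand-ins for the paper's (ε,m) condition and maxsize computation; only the sort and the round-robin assignment j = (i mod g) + 1 follow the text directly.

```python
def partition_group(group, m, eps):
    """Sketch of steps (2)-(4) above for one QI-group.

    `group` is a list of (tuple_id, sensitive_value) pairs. The anonymity
    check is a simplified absolute (eps, m)-style test, and g = m is an
    assumed stand-in for the paper's maxsize value."""
    group = sorted(group, key=lambda t: t[1])      # step (2): ascending sensitive values

    def violates(g_tuples):
        # at most 1/m of the tuples may be eps-close to any sensitive value x
        limit = max(1, len(g_tuples) // m)
        for _, x in g_tuples:
            close = sum(1 for _, y in g_tuples if abs(y - x) <= eps)
            if close > limit:
                return True
        return False

    if not violates(group):                        # step (3): publish directly
        return [group]
    g = m                                          # assumed maxsize(G1/G2)
    subsets = [[] for _ in range(g)]
    for i, t in enumerate(group, start=1):         # step (4): tuple i -> G_{(i mod g)+1}
        subsets[i % g].append(t)
    return [s for s in subsets if s]

# toy usage: close salaries get spread across the resulting subsets
print(partition_group([(1, 1000), (2, 1010), (3, 1200), (4, 5000)], m=2, eps=300))
```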

Experimental performance analysis
Operating system: Windows XP; development language: Java; database: SQL Server 2000; test data: the DeerHunter dataset. In our experiments, DeerHunter includes three attributes: Age, Zipcode and Salary. We treat Age and Zipcode as the quasi-identifier and Salary as the sensitive attribute. In the experiments, our method is compared with anonymizing the current data using existing anonymizing methods. Figure 1 shows the ratio of safe tuples with respect to the size of the initial dataset, where m is set to 2 and the incremental update size is kept at 30% of the initial dataset; the ratio of safe tuples in our method is always 100%.

Fig. 1 The group privacy intensity as the original database size changes (ratio of safe tuples vs. initial table size, 1000-4000 tuples; our method vs. anonymizing the current data)

In Figure 2, we varied the incremental update size from 10% to 70% of the initial dataset size (i.e., from 1000 tuples to 5000 tuples), where m is set to 2. As the incremental update size becomes large, our method remains effective and efficient.

Fig. 2 The group privacy intensity as the update size changes (ratio of safe tuples vs. update proportion, 10%-70%)

Figure 3 tests the running-time performance as the update size changes. For all incremental update sizes, the running time of our method is much shorter than that of anonymizing the whole dataset.

Fig. 3 The running time as the update size changes (running time in ms vs. update proportion, 10%-70%)


Conclusion
This article presents a relatively new method for privacy protection under incremental updates. It analyzes the privacy-disclosure situations of incremental-update release, uses an efficient method to eliminate the many privacy leaks of incremental-update release, and ensures that the inference table leaks no privacy. Theoretical analysis and experimental results on a real dataset show that the method is effective and practicable. For future work, we will add deletion and update operations for sensitive data.

Acknowledgements
This work is supported by the 'Chunhui Plan' fund of the Ministry of Education under Grant Z2009-1-01024.

References
[1] L. Sweeney: k-anonymity: a model for protecting privacy, International Journal on Uncertainty, Fuzziness, and Knowledge-based Systems, World Scientific Publishing, Singapore, 2002, p. 557-570.
[2] L. Sweeney: Achieving k-anonymity privacy protection using generalization and suppression, International Journal on Uncertainty, Fuzziness, and Knowledge-based Systems, World Scientific Publishing, Singapore, 2002, p. 571-588.
[3] A. Machanavajjhala, J. Gehrke and D. Kifer: l-diversity: privacy beyond k-anonymity, in Proc. of the International Conference on Data Engineering, IEEE Computer Society, Piscataway, 2006, p. 24-36.
[4] X. Xiao and Y. Tao: m-invariance: towards privacy preserving re-publication of dynamic datasets, in Proc. of the ACM SIGMOD Conference on Management of Data, ACM, New York, 2007, p. 689-700.
[5] K. LeFevre, D. J. DeWitt and R. Ramakrishnan: Workload-aware anonymization, in Proc. of ACM Knowledge Discovery and Data Mining, ACM, New York, 2006, p. 277-286.
[6] Q. Zhang, N. Koudas, D. Srivastava and T. Yu: Aggregate query answering on anonymized tables, in Proc. of the International Conference on Data Engineering, IEEE Computer Society, Piscataway, 2007, p. 116-125.
[7] N. Li, T. Li and S. Venkatasubramanian: t-closeness: privacy beyond k-anonymity and l-diversity, in Proc. of the International Conference on Data Engineering, IEEE Computer Society, Piscataway, 2007, p. 106-115.
[8] Jiexing Li, Yufei Tao and Xiaokui Xiao: Preservation of proximity privacy in publishing numerical sensitive data, in Proc. of ACM Knowledge Discovery and Data Mining, ACM, New York, 2008, p. 473-486.
[9] P. Samarati and L. Sweeney: Generalizing data to provide anonymity when disclosing information, in Proc. of the ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, ACM, New York, 1998, p. 188.
[10] J. Pei, J. Xu, Z. Wang, W. Wang and K. Wang: Maintaining k-anonymity against incremental updates, in Proc. of the International Conference on Scientific and Statistical Database Management, 2007, p. 1-10.
[11] R. Floyd and R. Rivest: Expected time bounds for selection, Communications of the ACM (CACM), Vol. 18 (1975), p. 165-172.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.504

Error estimates of H1-Galerkin mixed finite element methods for a nonlinear parabolic problem

Che Haitao
College of Mathematics and Information Science, Weifang University, Weifang 261061, China
[email protected]

Key words: H1-Galerkin mixed finite element method; error estimates; weak formulation; nonlinear parabolic problems.

Abstract. In this paper, an H1-Galerkin mixed finite element method is proposed to simulate a nonlinear parabolic problem. The problem is considered in one-dimensional space. Optimal L2 and H1 error estimates are established. In particular, the method can simultaneously approximate the scalar unknown and the vector flux effectively, without requiring the LBB consistency condition.

Introduction
H1-Galerkin methods involving C1-finite element spaces are discussed for parabolic problems in [1,2]. Apart from the difficulties in constructing such spaces, the computationally more attractive and widely used piecewise linear elements are excluded from this class of finite-dimensional spaces. In order to relax the C1-smoothness requirement and use C0-elements, we first split the problem into a first-order system and then propose a nonsymmetric version of a least-squares method that is an H1-Galerkin procedure for the solution and its flux. In recent years, many researchers have studied mixed finite element methods for partial differential equations. This method was initially introduced by engineers in the 1960s [4-6] for solving problems in solid continua. Since then, it has been applied to many areas such as solid and fluid mechanics [7,8]. However, this procedure has to satisfy the LBB condition on the approximating subspaces, which restricts the choice of finite element spaces. Pani [3] (in 1998) proposed a new mixed finite element method, called the H1-Galerkin mixed finite element procedure, which is applied to a mixed system in p and its flux u. Compared to standard mixed methods, the proposed methods have several attractive features: they are not subject to the LBB condition; the finite element spaces Vh (for approximating p) and Wh (for approximating the flux u) may be of differing polynomial degrees; and the L2 and H1 error estimates do not require the finite element mesh to be quasi-uniform. Although extra regularity is required on the solution, a better order of convergence for the flux in the L2 norm is obtained. Throughout this paper, c denotes a generic positive constant which does not depend on h and t.

Parabolic problem in one space dimension
We consider the following one-dimensional parabolic partial differential equation:

$$p_t - (a(x)p_x)_x = f(p), \quad x \in [0,1],\ t \in (0,T], \quad (1)$$

with Dirichlet boundary condition

$$p(0,t) = p(1,t) = 0, \quad t \in (0,T], \quad (2)$$

and initial condition

$$p(x,0) = p_0(x), \quad x \in I. \quad (3)$$

With $0 < a_1 \le a(x) < a_2$, we assume that

$$\left|\frac{\partial f}{\partial p}\right| + \left|\frac{\partial^2 f}{\partial p^2}\right| + \left|\frac{\partial^2 f}{\partial p\,\partial x}\right| + \left|\frac{\partial^2 f}{\partial p\,\partial t}\right| + \left|\frac{\partial^3 f}{\partial^2 p\,\partial x}\right| \le M.$$


For the H1-Galerkin mixed finite element procedure, we first split the parabolic equation into a first-order system. Introduce $u = a p_x$ and set $\alpha = \frac{1}{a}$. Then (1) can be written as the first-order system

$$p_t - u_x = f(p), \qquad a(x)p_x = u. \quad (4)$$

Denote the natural inner product in $L^2(I)$ by $(\cdot,\cdot)$, and let $H_0^1 = \{v \in H^1(I) : v(0) = v(1) = 0\}$. Further, we use the classical Sobolev spaces $W^{m,p}(I)$, $1 \le p \le \infty$, written as $W^{m,p}$. The norm on $W^{m,p}$ is denoted by $\|\cdot\|_{m,p}$; when p = 2, we write $W^{m,2}$ as $H^m$ and denote the norm by $\|\cdot\|_m$.

For the formulation of the H1-Galerkin mixed finite element method for the system (1)-(3), we consider the following weak formulation: find $\{p,u\} : [0,T] \to H_0^1 \times H^1$ such that

$$(p_x, v_x) = (\alpha u, v_x), \quad v \in H_0^1, \quad (5)$$

$$(\alpha u_t, w) - (u_x, w) = (f(p), w), \quad w \in H^1. \quad (6)$$

To analyze the H1-Galerkin mixed finite element approximation to the system (5)-(6), let Vh and Wh be finite-dimensional subspaces of $H_0^1$ and $H^1$ respectively, associated with the grid $\Delta_h$ and satisfying the following approximation properties: for $1 < p < \infty$ and positive integers k, r,

$$\inf_{v_h \in V_h}\left\{\|v - v_h\|_{L^p} + h\|v - v_h\|_{W^{1,p}}\right\} \le c\,h^{k+1}\|v\|_{W^{k+1,p}}, \quad v \in H_0^1 \cap W^{k+1,p},$$

$$\inf_{w_h \in W_h}\left\{\|w - w_h\|_{L^p} + h\|w - w_h\|_{W^{1,p}}\right\} \le c\,h^{r+1}\|w\|_{W^{r+1,p}}, \quad w \in W^{r+1,p}.$$

The semidiscrete H1-Galerkin mixed finite element method for the system (4) consists in determining $\{p_h, u_h\} : [0,T] \to V_h \times W_h$ such that

$$(p_{hx}, v_{hx}) = (\alpha u_h, v_{hx}), \quad v_h \in V_h, \quad (7)$$

$$(\alpha u_{ht}, w_h) - (u_{hx}, w_h) = (f(p_h), w_h), \quad w_h \in W_h, \quad (8)$$

with given $u_h(0)$ and $p_h(0)$. Following Wheeler [9], we now define the elliptic projection $(\bar u_h, \bar p_h) \in W_h \times V_h$ by

$$(p_x - \bar p_{hx}, v_{hx}) = 0, \quad v_h \in V_h, \quad (9)$$

$$A(u - \bar u_h, w_h) = 0, \quad w_h \in W_h, \quad (10)$$

where $A(q,w) = (q_x, w_x) + \lambda(q,w)$. Here λ is chosen so that A is H1-coercive, that is,

$$A(w,w) \ge \aleph_0 \|w\|_1^2, \quad w \in H^1, \quad (11)$$

where $\aleph_0$ is a positive constant. Moreover, it is easy to show that $A(\cdot,\cdot)$ is bounded.

Let $\eta = p - \bar p_h$ and $\rho = u - \bar u_h$. The following estimates for ρ and η are well known:

$$\|\rho\|_j + \|\rho_t\|_j + \|\rho_{tt}\|_j \le c\,h^{k+1-j}\left(\left\|\frac{\partial u}{\partial t}\right\|_{k+1} + \|u\|_{k+1}\right) \quad (12)$$

and

$$\|\eta\|_j + \|\eta_t\|_j \le c\,h^{r+1-j}\left(\|p\|_{r+1} + \|p_t\|_{r+1}\right). \quad (13)$$

Further, for j = 0, 1 and $1 \le p \le \infty$, we have

$$\|\eta\|_{W^{j,p}} \le c\,h^{k+1-j}\|p\|_{W^{j,p}} \quad (14)$$

and


$$\|\rho\|_{W^{j,p}} \le c\,h^{r+1-j}\|u\|_{W^{j,p}}. \quad (15)$$

Note that for p = ∞ we require a quasi-uniformity condition on the finite element mesh.

To analyze the temporal discretization on a time interval I, let N > 0, $\Delta t = \frac{1}{N}$, and $t_n = n\Delta t$. Define $\varphi^n = \varphi(\cdot, t_n)$ and $\partial_t \varphi^n = \frac{1}{\Delta t}(\varphi(t_n) - \varphi(t_{n-1}))$. Approximating $\frac{\partial \varphi^n}{\partial t}$ by finite differences gives

$$\frac{\partial \varphi^n}{\partial t} + \delta^n = \frac{1}{\Delta t}\left(\varphi(t_n) - \varphi(t_{n-1})\right),$$

where $\delta^n$ is the truncation error. A family of discrete-time H1-Galerkin mixed finite element procedures can be defined as follows: find $(u_h^n, p_h^n) \in W_h \times V_h$ such that

$$(p_{hx}^n, v_{hx}) = (\alpha u_h^n, v_{hx}), \quad v_h \in V_h, \quad (16)$$

$$(\alpha \partial_t u_h^n, w_h) - (u_{hx}^n, w_h) = (f(p_h^n), w_h), \quad w_h \in W_h. \quad (17)$$
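A minimal runnable sketch of the scheme (16)-(17) is given below, assuming piecewise-linear elements on a uniform grid, a(x) = 1 (so α = 1), and a nonlinearity lagged at the previous time level to make each step a linear solve. These choices, and f itself, are illustrative assumptions, not the paper's implementation, which evaluates f at $p_h^n$.

```python
import numpy as np

def solve_parabolic(f, p0, T=1.0, N=100, nx=32):
    """Backward-Euler time marching for (16)-(17) with linear hats, a(x)=1."""
    h, dt = 1.0 / nx, T / N
    x = np.linspace(0.0, 1.0, nx + 1)

    # mass matrix M_ij = (psi_j, psi_i) and D_ij = (psi'_j, psi_i), all nodes
    M = np.zeros((nx + 1, nx + 1))
    D = np.zeros((nx + 1, nx + 1))
    for e in range(nx):                      # element-by-element assembly
        Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
        De = 0.5 * np.array([[-1.0, 1.0], [-1.0, 1.0]])
        idx = [e, e + 1]
        M[np.ix_(idx, idx)] += Me
        D[np.ix_(idx, idx)] += De

    p = p0(x); p[0] = p[-1] = 0.0
    u = np.gradient(p, x)                    # u(0) = a p_x, approximated at nodes

    # interior stiffness K and coupling B for Eq. (16): K p = B u
    K = (1.0 / h) * (2.0 * np.eye(nx - 1) - np.eye(nx - 1, k=1) - np.eye(nx - 1, k=-1))
    B = np.zeros((nx - 1, nx + 1))
    for i in range(1, nx):                   # (u, phi'_i) = (u_{i-1} - u_{i+1}) / 2
        B[i - 1, i - 1], B[i - 1, i + 1] = 0.5, -0.5

    A = M / dt - D                           # Eq. (17): (d_t u, w) - (u_x, w) = (f, w)
    for _ in range(N):
        rhs = M @ (u / dt) + M @ f(p)        # f lagged at the old p (an assumption)
        u = np.linalg.solve(A, rhs)
        p_int = np.linalg.solve(K, B @ u)    # Eq. (16) recovers p from u
        p = np.concatenate(([0.0], p_int, [0.0]))
    return x, p, u

# toy usage with an assumed nonlinearity
x, p, u = solve_parabolic(f=lambda p: p - p**3, p0=lambda x: np.sin(np.pi * x))
```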

Error estimates for the H1-Galerkin mixed finite element method
In this section we discuss error estimates for the discrete Galerkin procedure. Let

$$u - u_h = (u - \bar u_h) + (\bar u_h - u_h) = \rho + \xi, \qquad p - p_h = (p - \bar p_h) + (\bar p_h - p_h) = \eta + \zeta.$$

Using (5)-(6), (16)-(17) and the auxiliary projections (9)-(10), we obtain the error equations in ξ and ζ:

$$(\zeta_x^n, v_{hx}) = (\alpha\rho^n, v_{hx}) + (\alpha\xi^n, v_{hx}), \quad v_h \in V_h, \quad (18)$$

$$(\alpha\partial_t\xi^n, w_h) + A(\xi^n, w_h) = (f(p_h^n) - f(p^n), w_h) + \lambda(\xi^n + \rho^n, w_h) - (\alpha\partial_t\rho^n, w_h) + (\alpha\delta^n, w_h), \quad w_h \in W_h. \quad (19)$$

THEOREM 1. With $\bar u(0) = a p_{0x}$ and assuming $u_h(0) = \bar u_h(0)$, there exists a constant c > 0 independent of h and t such that

$$\|u^n - u_h^n\| + \|p^n - p_h^n\| \le c\left(h^{\min(r+1,k+1)} + \Delta t\right).$$

Proof. Since estimates of ρ and η follow from (12) and (13), it suffices to estimate ξ and ζ. Substituting $v_h = \zeta^n$ and $w_h = \xi^n$ in (18) and (19) respectively, we obtain

$$(\zeta_x^n, \zeta_x^n) = (\alpha\rho^n, \zeta_x^n) + (\alpha\xi^n, \zeta_x^n), \quad (20)$$

$$(\alpha\partial_t\xi^n, \xi^n) + A(\xi^n, \xi^n) = (f(p_h^n) - f(p^n), \xi_x^n) + \lambda(\xi^n + \rho^n, \xi^n) - (\alpha\partial_t\rho^n, \xi^n) + (\alpha\delta^n, \xi^n). \quad (21)$$

Using (20), we have

$$\|\zeta_x^n\|^2 \le c\left(\|\rho^n\|^2 + \|\xi^n\|^2\right). \quad (22)$$

At any point x ∈ I, by the Taylor theorem, $f(p_h^n) - f(p^n) = f'(\tilde p_h)(p_h^n - p^n)$ for some value $\tilde p_h$. Therefore

$$(f(p_h^n) - f(p^n), \xi_x^n) = (f'(\tilde p_h)(p_h^n - p^n), \xi_x^n) = -(f'(\tilde p_h)(\xi^n + \eta^n), \xi_x^n). \quad (23)$$


Consequently, from equation (21),

$$(\alpha\partial_t\xi^n, \xi^n) + A(\xi^n, \xi^n) = -(f'(\tilde p_h)(\xi^n + \eta^n), \xi_x^n) + \lambda(\xi^n + \rho^n, \xi^n) - (\alpha\partial_t\rho^n, \xi^n) + (\alpha\delta^n, \xi^n). \quad (24)$$

By the inequality $(\xi^n, \xi^{n-1}) \le \frac{1}{2}\left((\xi^n,\xi^n) + (\xi^{n-1},\xi^{n-1})\right)$ and the Young inequality,

$$\frac{1}{2\Delta t}\left((\alpha\xi^n, \xi^n) - (\alpha\xi^{n-1}, \xi^{n-1})\right) + (\aleph_0 - \varepsilon)\|\xi^n\|_1^2 \le c\left(\|\xi^n\|^2 + \|\eta^n\|^2 + \|\rho^n\|^2 + \|\partial_t\rho^n\|^2 + \|\delta^n\|^2\right) \quad (25)$$

is obtained, where we also used (22). On substitution and summing over n = 1, 2, ···, J, the resulting inequality becomes

$$(1 - c\Delta t)\|\xi^J\|^2 \le c\Delta t\sum_{i=1}^{J}\left(\|\eta^i\|^2 + \|\rho^i\|^2\right) + \int_0^{t_J}\|\rho_t\|^2\,ds + (\Delta t)^2\int_0^{t_J}\|u_{tt}\|^2\,ds + c\Delta t\sum_{i=1}^{J-1}\|\xi^i\|^2.$$

Assume that $1 - c\Delta t > 0$. Then an application of the Gronwall lemma with the triangle inequality completes the estimate of $\|p^n - p_h^n\|$. Since $\zeta^n \in H_0^1$, we have $\|\zeta^n\|^2 \le \|\zeta_x^n\|^2 \le c\left(\|\rho\|^2 + \|\xi\|^2\right)$; using (22) and the triangle inequality completes the estimate of $\|u^n - u_h^n\|$.

THEOREM 2. With $\bar u(0) = a p_{0x}$ and assuming $u_h(0) = \bar u_h(0)$, there exists a constant c > 0 independent of h and t such that

$$\|u^n - u_h^n\|_1 \le c\left(h^{\min(r+1,k)} + \Delta t\right), \qquad \|p^n - p_h^n\|_1 \le c\left(h^{\min(r,k+1)} + \Delta t\right).$$

Further, for $1 \le p \le \infty$,

$$\|u^n - u_h^n\|_{L^p} + \|p^n - p_h^n\|_{L^p} \le c\left(h^{\min(r+1,k+1)} + \Delta t\right).$$

Proof. Since by Theorem 1 we have a superconvergence result for $\zeta_x$, it is sufficient to obtain a superconvergence estimate for ξ in the H1-norm. Using (22) and the triangle inequality completes the error estimate for $\|p^n - p_h^n\|_1$:

$$\|p^n - p_h^n\|_1 \le \|\eta^n\|_1 + \|\zeta^n\|_1 \le c\left(h^{\min(r,k+1)} + \Delta t\right).$$

Choose $w_h = \frac{\xi^n - \xi^{n-1}}{\Delta t}$ in (19) to obtain

$$\left(\alpha\partial_t\xi^n, \frac{\xi^n - \xi^{n-1}}{\Delta t}\right) + A\left(\xi^n, \frac{\xi^n - \xi^{n-1}}{\Delta t}\right) = \left(f(p_h^n) - f(p^n), \frac{\xi_x^n - \xi_x^{n-1}}{\Delta t}\right) + \lambda\left(\xi^n + \rho^n, \frac{\xi^n - \xi^{n-1}}{\Delta t}\right) - \left(\alpha\partial_t\rho^n, \frac{\xi^n - \xi^{n-1}}{\Delta t}\right) + \left(\alpha\delta^n, \frac{\xi^n - \xi^{n-1}}{\Delta t}\right)$$
$$= -\partial_t\left(f'(p^n)\eta^n, \xi_x^n\right) + \partial_t\left(f'(p^n)\eta^n, \xi_x^{n-1}\right) + \left(f'(p^n)\zeta_x^n, \xi_x^n\right) + \left(f'(p^n)\zeta^n, \xi_x^{n-1}\right) + \lambda\left(\xi^n + \rho^n, \frac{\xi^n - \xi^{n-1}}{\Delta t}\right) - \left(\alpha\partial_t\rho^n, \frac{\xi^n - \xi^{n-1}}{\Delta t}\right) + \left(\alpha\delta^n, \frac{\xi^n - \xi^{n-1}}{\Delta t}\right) = \sum_{i=1}^{8} I_i. \quad (26)$$

As to the left-hand side of relation (26), we note that

$$\left(\alpha\partial_t\xi^n, \frac{\xi^n - \xi^{n-1}}{\Delta t}\right) = \left\|\alpha^{\frac{1}{2}}\,\frac{\xi^n - \xi^{n-1}}{\Delta t}\right\|^2 \quad (27)$$

and

$$A\left(\xi^n, \frac{\xi^n - \xi^{n-1}}{\Delta t}\right) = \frac{1}{2}\partial_t A(\xi^n, \xi^n) + \frac{\Delta t}{2} A(\partial_t\xi^n, \partial_t\xi^n). \quad (28)$$

We estimate the right-hand terms of (26):

$$\sum_{i=1}^{8} I_i \le c\left(\int_{t_{n-1}}^{t_n}\|\eta_t\|^2\,ds + \int_{t_{n-1}}^{t_n}\|\rho_t\|^2\,ds\right) + c\left(\|\zeta_x^n\|^2 + \|\zeta^n\|^2 + \|\rho^n\|^2 + \|\xi^n\|^2 + \|\xi^{n-1}\|^2\right) - \partial_t\left(f'(p^n)\eta^n, \xi_x^n\right) + 6\varepsilon\left\|\frac{\xi^n - \xi^{n-1}}{\Delta t}\right\|^2 + c\Delta t\int_{t_{n-1}}^{t_n}\|u_{tt}\|^2\,ds. \quad (29)$$

Combining (27)-(29), we have

$$\left(\frac{1}{a_1} - 6\varepsilon\right)\left\|\frac{\xi^n - \xi^{n-1}}{\Delta t}\right\|^2 + \frac{1}{2}\partial_t A(\xi^n, \xi^n) + \frac{\Delta t}{2} A(\partial_t\xi^n, \partial_t\xi^n) \le c\left(\int_{t_{n-1}}^{t_n}\|\eta_t\|^2\,ds + \int_{t_{n-1}}^{t_n}\|\rho_t\|^2\,ds\right) + c\left(\|\zeta_x^n\|^2 + \|\zeta^n\|^2 + \|\rho^n\|^2 + \|\xi^n\|^2 + \|\xi^{n-1}\|^2\right) - \partial_t\left(f'(p^n)\eta^n, \xi_x^n\right) + c\Delta t\int_{t_{n-1}}^{t_n}\|u_{tt}\|^2\,ds. \quad (30)$$

Multiplying both sides of (30) by 2Δt and summing over n = 1, 2, ···, n, the resulting inequality becomes

$$2\Delta t\sum_{n}\left(\frac{1}{a_1} - 6\varepsilon\right)\left\|\frac{\xi^n - \xi^{n-1}}{\Delta t}\right\|^2 + (1 - c\Delta t)\|\xi^n\|_1^2 \le c\left(\int_{t_0}^{t_n}\|\eta_t\|^2\,ds + \int_{t_0}^{t_n}\|\rho_t\|^2\,ds\right) + c\Delta t\sum_{i=1}^{n}\left(\|\zeta_x^i\|^2 + \|\zeta^i\|^2 + \|\rho^i\|^2 + \|\xi^i\|^2 + \|\xi^{i-1}\|^2\right) + \|\eta^n\|^2 + c\Delta t\sum_{i=1}^{n}\|\xi^{i-1}\|_1^2 + c(\Delta t)^2\int_{t_0}^{t_n}\|u_{tt}\|^2\,ds.$$

Assume that $1 - c\Delta t > 0$. Then an application of the Gronwall lemma with the triangle inequality completes the estimate of $\|p^n - p_h^n\|_1$. Since $\zeta^n \in H_0^1$, we have $\|\zeta^n\|_{L^p}^2 \le \|\zeta_x^n\|^2 \le c\left(\|\rho\|^2 + \|\xi\|^2\right)$; using (22) and the triangle inequality completes the estimate of $\|u^n - u_h^n\|_1$.

Summary
In this paper, an H1-Galerkin mixed finite element method is proposed to simulate a nonlinear parabolic problem. The problem is considered in one-dimensional space, and optimal L2 and H1 error estimates are established. In particular, the method can simultaneously approximate the scalar unknown and the vector flux effectively, without requiring the LBB consistency condition.
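A hedged way to check the rates predicted by Theorems 1-2 in practice is to compute the observed order between successively refined meshes. The error values below are placeholders for illustration, not results reported in the paper.

```python
import math

def observed_orders(errors, hs):
    """order_i = log(e_i / e_{i+1}) / log(h_i / h_{i+1})."""
    return [math.log(errors[i] / errors[i + 1]) / math.log(hs[i] / hs[i + 1])
            for i in range(len(errors) - 1)]

# with k = r = 1 and dt ~ h^2, the L^p estimate predicts order min(r+1, k+1) = 2
print(observed_orders([4.1e-2, 1.0e-2, 2.6e-3], [1/8, 1/16, 1/32]))  # ≈ [2.0, 1.9]
```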


References
[1] J. Douglas, Jr., T. F. Dupont and M. F. Wheeler: H1-Galerkin methods for the Laplace and heat equations, in: Mathematical Aspects of Finite Elements in Partial Differential Equations, Academic Press, New York (1975), 383-415.
[2] A. K. Pani and P. C. Das: An H1-Galerkin method for quasilinear parabolic differential equations, in: C. A. Micchelli, D. V. Pai, B. V. Limaye (Eds.), Methods of Functional Analysis in Approximation Theory, ISNM 76, Birkhäuser-Verlag, Basel (1986), 357-370.
[3] A. K. Pani: An H1-Galerkin mixed finite element method for parabolic partial differential equations, SIAM J. Numer. Anal. 35 (1998), 712-727.
[4] B. Fraeijs de Veubeke: Displacement and equilibrium models in the finite element method, in: O. C. Zienkiewicz and G. S. Holister (Eds.), Stress Analysis, John Wiley and Sons Ltd., London (1965), 145-197.
[5] K. Hellan: An analysis of elastic plates in flexure by a simplified finite element method, Acta Polytech. Scand. Ci. Ser. 46 (1967).
[6] L. Herrmann: Finite element bending analysis for plates, J. Engng Mech. Div., ASCE 93 (1967), 13-26.
[7] J. Douglas, R. Ewing and M. Wheeler: A time-discretization procedure for a mixed finite element approximation of miscible displacement in porous media, RAIRO Anal. Numér. 17 (1983), 249-265.
[8] J. Douglas, R. Ewing and M. Wheeler: The approximation of the pressure by a mixed method in the simulation of miscible displacement, RAIRO Anal. Numér. 17 (1983), 17-33.
[9] M. F. Wheeler: A priori error estimates for Galerkin approximations to parabolic partial differential equations, SIAM J. Numer. Anal. 10 (1973), 723-749.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.510

A Conceptual Framework for Animation Design based on an E-learning System

Chai Gang, Huang Xiaoyu
Guilin University of Electronic Technology

Key words: e-learning system; project-based learning; collaboration.

Abstract. A conceptual framework for the design of a project-based integrated learning environment (PILE) for animation design is sketched out. PILE for animation design takes the concept of a project as its principal axis and focuses on interaction, collaboration, communication and critical thinking. Its three main modules (VLMS, PCLP and PFS) run synergistically through the cooperation of six types of instruction technologies, and the application of these elements constitutes the important aspect of PILE for animation design. In order to center on improving learning, the learning model is changed from a unilateral, closed model to a multilateral, open one. Accordingly, the framework of PILE for animation design provides an implementation of several learning theories, including interactionism, Hill's learning theory and the project-based learning model.

Introduction
Originality is directly derived from talent, and talent is the core competence of the animation industry. As the most direct, efficient and basic way of enlarging the talent pool, higher education in animation design carries important duties and responsibilities for promoting the animation industry. In many countries, a variety of higher-education forms are being actively encouraged. A big problem is that present education in animation design follows the traditional, unilateral, 'teacher-led' model, and this model is simply inadequate [1,2]. Especially in China, students of animation design typically study merely in a classroom environment and work alone on simple assignments that emphasize short-term content memorization, and rarely engage in collaboration, communication and presentations. In such a learning environment, students of animation design cannot acquire essential skills, such as joint literature search or cooperatively learning from, or even challenging, any source of arts and technologies. This results in a lack of the abilities of interaction, collaboration and communication that students need in order to show their versatility in animation studios, and it urges the growing demand for new pedagogical models in higher education for animation design.

Some studies [3-6] showed that the project-based learning model can help students take on long-term challenges and develop the compound abilities needed to resolve the complex interdisciplinary problems involved in real-life work. In fact, every animated film produced at present is actually a hybrid of arts and technologies and is based on a systematically planned project. There is a common consensus that students should learn the essential skills and abilities from a series of animation projects. Therefore, the project-based learning approach is considered for students of animation design. Many new technologies are being developed and are becoming increasingly popular in animation design today. In particular, information technologies (IT), such as the internet and intranets, are attracting more and more attention in the field of higher education [7,8]. Many colleges have become equipped to modernize teaching and learning processes from 'classroom-based' environments to 'web-based' environments. There remains another central problem: how to utilize this advanced equipment effectively to create new teaching and learning systems, such as e-learning systems.

Motivated by these pedagogical models and IT technologies, and based on the experience from the teaching research project 'Principles of pattern design' supported by Hubei University of Technology, a conceptual framework for an e-learning system is established and named the project-based integrated learning environment (PILE) for animation design. This framework is based on three innovative paradigms, i.e.,


interactionism, Hill's learning theory and the project-based learning model. Six types of IT technologies are taken as the supportive footstones of PILE for animation design, and PILE for animation design targets making students of animation design acquire the compound abilities of interactivity, collaboration and communication. The rest of this paper is organized as follows: the project-based integrated learning environment for animation design is examined in Section 2, followed by conclusions and future work in Section 3.

PROJECT-BASED INTEGRATED LEARNING ENVIRONMENT FOR ANIMATION DESIGN
This section contains three parts that describe the conceptual framework of PILE for animation design.

Three Bases of PILE
A basic problem of PILE for animation design is how to offer a unified framework approach that makes full use of existing technologies and equipment to create a comprehensive, highly flexible and integrated learning environment for animation design. Such a structured approach should rest on three important bases for designing the framework of PILE. First, PILE for animation design should take innovative philosophical, pedagogical and technical theories and models as sound foundations for redesigning and adjusting the traditional teaching and learning system. Second, a variety of instruction technologies, such as web-based information technologies and traditional classroom-based instruction technologies, should be wrapped up as a whole to enhance the effectiveness, efficiency and benefit of the new pedagogical paradigm, rather than favoring one technology and discriminating against the others. Third, as the principal axes of the pedagogical paradigm of PILE for animation design, projects should be deliberately planned according to the classic animation fundamentals covered in the teaching materials of animation design, so as to play critical roles in transferring essential knowledge, individualized skills and compound abilities to students. Figure 1 illustrates the three bases of PILE.

Learning theories

System design

PILE for animation design

Instruction technologies

Well- planned nrojects Pedagogical paradigm

Fig. 1 Three basics of projected-based integrated learning environment Learning Theories for PILE Interactions, derived from American pragmatism, is proposed by George Herbert Mead in 1930's. Mead argued that personal selves are social products, but that these selves are also goal-directed and inspired. Herbert Blumer [9], who founded symbolic interactions, specifies three basic principles of this theory. First, human beings can think. Second, the capacity of thought can be molded by social interaction, and conversely social interaction allows people to improve their ability to think. Third, people's actions and interactions are carried on by the meaning of the symbols can be used to interpret their situation. Fourth, due to the interactions among human beings, their action patter can be changed and bring groups and societies into being.

512

Manufacturing Systems and Industry Application

From the nature and nurture development points of view, Hill's learning theory [10], directly inspired by the interactions, takes into consideration the processes of development and treats motivation and reinforcement as two critical aspects of the learning process. This theory pays close attention to the compound ability of symbolic learning and problem solving. It argues that the flexibility of behavior and cognition is resulted in the interactions within groups and processes, and is influenced by earlier on later learning. The core idea of project-based learning [11] is that real- world projects attract students' interest and provoke critical thinking when they apply their acquired knowledge in a pilot-project context. According to problems of animation design students are experiencing, the teacher acts as facilitator and partner, and design real- world project and meaningful tasks that accelerate students' knowledge and skill development, and evaluate compound abilities of every student. Some researcher argued that project-based learning helps students cultivate the thinking and collaboration skills required in the workplace. Due to the characteristics of animation design, therefore, is logically valid for creating integrated learning environment for animation design. The three learning paradigm play important roles in the construction of PILE for animation design. Targets of PILE for Animation Design Teachers launch each learning unit by putting students into a real-life or virtualized project that engages their interest and pass on animation design fundamentals they should grasp. These projects are designed to deal with multifaceted subjects requiring versatilities. The PILE targets are as follows: ·To encourage collaboration in project team ·To learn communication skills in and between project teams ·To cultivate critical thinking from multifaceted subjects ·To develop literature search skills in exploring sources of arts and technologies for animation production ·To achieve the essential arts and technologies for animation design ·To practice the careers of animation design within virtualized internships. Through the interactive and collaborative platform of PILE for animation design, students grasp the fundamentals of classic and modern animation design and realize that projects of animation courses are not only the matter of animation media and technologies, but the essential form of learning animation design that fully clarify what and how they should learn. In order to assure construction of PILE for animation design, therefore, project should be taken as the core component to coordinate all function modules consisted in PILE. Projects are carefully decomposed into a series of uncomplicated web-quest or computer-aid research tasks. And it should be quite simple to evaluate these tasks by using several clear and definite criteria. Through these well-planned projects, students are anticipated to collaboratively and synthetically exercise fundamentals, arts and technologies of animation design in effective and efficient ways to help them assess and present what they had learned. Conceptual Structure of PILE PILE for animation design is made up of three main function modules that work together. The function modules include a Virtualized Learning Management System (VLMS), a Project-based Course Learning Platform (PCLP) and a Project Files System (PFS). The operations of the three modules are relatively independent and grounded on projects. 
The projects used in PILE are detailed and complicated in design and planning according to classical animation fundamentals. The development of VLMS, PCLP and PFS, also straightforwardly influenced by the three theories aforementioned, is gradually being widely implemented in campus wide. Correspondingly, a series of information technologies will be adopted in the construction of PILE to facilitate the sharing of resource within teams and to boost animation design learning in this environment.

Yanwen Wu

513

According to the strategies of PILE, these technologies utilized are categorized into the following six types: ·Collaborative technologies ·Communicative technologies ·Thinking technologies ·Literature search technologies ·Project files management technologies ·Computer-aid assessment technologies Collaborative technologies, including web-based conferencing, server trusteeship, and so on, will be used to throw students into several planned course project teams. Communicative technologies, including discussion forums, chartrooms, bulletin board system (BBS), and so on, will be used to encourage students to communicate with their academic peers, experts and teachers. Thinking technologies, including blog, micro-blog, and so on, will be used to facilitate the presentation of every student's visions and ideas. Literature search technologies, including online information solution providers, intra- campus literature search services, and so on, to help students acquire handy animation arts and technologies, make own critical decisions, improve the productivity and outcomes of their animation works. Project files management technologies, including large-scale databases, on-line analytical processing (OLAP), visualization application programming interface (API), and so on, will be used to automatically collect and combine the files produced within the progress of project, such as texts, information, or sets of discussions. Computer-aid assessment technologies, including intelligent discrimination systems, computer-aid score systems, and so on, to integrate objective and subjective scores into comprehensive rank scores as standards for assessing the performance of every team and the quality of its animation works. VLMS, PCLP and PFS are created synthetically by using these technologies to form the basic component of PILE and to provide a way of achieving an interactive, collaborative, reliable learning environment for animation design. Figure 2 illustrates a conceptual framework of PILE for animation design. And Table I lists several facets of PILE for animation design. Teacher

Student

Portal

VLMS

PCLP Animation projects

Web

PFS

Fig. 2 A conceptual framework of PILE for animation design

514

Manufacturing Systems and Industry Application

Table1 SEVERAL FACETS OF THREE MODULES WITHIN PILE FOR ANIMATION DESIGN Facets

VLMS

Technologics



"just-in-time" classroom



server trusteeship



OLAP



web-based conferencing



discussion forums



Visualization API



course forum



chartrooms



server trusteeship



BBS



intelligent discrimination systems



backstage database

• online information solution providers •

intra-campus literature search services

PCLP

• Platform

Roles teachers

Roles students

of

of

PFS

computer-aid score systems



traditional classroom

• virtualized animation project process



virtual classroom



intra-campus network



intra-campus network



internet



Leader in learning environment



Independent third party



system manager



System manager



Pilot



rule regulator



Information promulgator



Project process manager



junior partner



senior partner



files manager

• information receiver

• bilateral communicator •

In-between referee

CONCLUSIONS AND FUTURE WORK
Two more complicated problems remain to be tackled. The first is to make the proposed framework of PILE for animation design more specific and to implement it. The second is whether students using PILE for animation design actually acquire the targeted competencies, such as social interactivity, team collaboration and communication. Therefore, future studies should be not only theory-oriented but also application-oriented.


References
[1] O. Rivka: Think-maps: teaching design thinking in design education, Design Studies, vol. 25, pp. 63-91, 2004.
[2] D. Udall and A. Mednick: Journeys through Our Classrooms, Dubuque, IA: Kendall/Hunt, 1996.
[3] B. F. Jones, C. M. Rasmussen and M. C. Moffitt: Real-life Problem Solving: A Collaborative Approach to Interdisciplinary Learning, Washington, DC: American Psychological Association, 1997.
[4] W. R. Penuel and B. Means: Designing a performance assessment to measure students' communication skills in multimedia-supported, project-based learning, in the Annual Meeting of the American Educational Research Association, 2000.
[5] J. W. Thomas and J. R. Mergendoller: Managing project-based learning: principles from the field, in the Annual Meeting of the American Educational Research Association, 2000.
[6] J. W. Thomas, J. R. Mergendoller and A. Michaelson: Project-based Learning: A Handbook for Middle and High School Teachers, Novato, CA: The Buck Institute for Education, 1999.
[7] J. Ismail: The design of an e-learning system: beyond the hype, The Internet and Higher Education, vol. 4, pp. 329-336, 2001.
[8] K. A. Pituch and Y. Lee: The influence of system characteristics on e-learning use, Computers & Education, vol. 47, pp. 222-244, 2006.
[9] H. Blumer: Symbolic Interactionism: Perspective and Method, New Jersey: Prentice-Hall Inc., 1986.
[10] W. Hill: Learning: A Survey of Psychological Interpretations, 7th ed., London: Methuen, 2001.
[11] J. Larmer, D. Ross and J. R. Mergendoller: PBL Starter Kit: To-the-Point Advice, Tools and Tips for Your First Project, Buck Institute for Education, 2009.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.516

Modeling Microfibril Angle of Larch Using Linear Mixed-Effects Models

Yaoxiang Li1,a, Lichun Jiang2,b*
1 College of Engineering and Technology, Northeast Forestry University, Harbin, Heilongjiang, P. R. China
2 College of Forestry, Northeast Forestry University, Harbin, Heilongjiang, P. R. China
a [email protected], b* [email protected] (*corresponding author)

This work was supported by a grant from the NSFC (30972363), the Special Fund for Forestry-Scientific Research in the Public Interest (201004026), the Fundamental Research Funds for the Central Universities (DL09CB06, DL10CA06), and the Science Foundation for the Youth Scholars of the Ministry of Education of China (200802251024).

Key words: linear mixed-effects models; microfibril angle; dahurian larch

Abstract. Earlywood microfibril angle (MFA) was determined at each growth ring from disks at breast height (1.3 m) from 6 dahurian larch (Larix gmelinii Rupr.) trees grown in northeastern China. Significant variation in microfibril angle was observed among growth rings. MFA at breast height varied from 7.5° to 21.5° between growth rings and showed a decreasing trend from pith to bark for each tree. A second-order polynomial equation with linear mixed effects was used for modeling earlywood MFA. The LME procedure in S-Plus was used to fit the mixed-effects models for the MFA data. The results showed that the polynomial model with three random parameters could significantly improve the model performance. The fitted mixed-effects model was also evaluated using a separate dataset. The mixed model was found to predict MFA better than the original model fitted using ordinary least squares, based on absolute and relative errors.

Introduction
Dahurian larch is the most widely planted and important commercial species in northeastern China. This species is utilized for lumber, plywood, oriented strand board, parallel stranded beams, and pulpwood products. The microfibril angle (MFA) is the winding angle between the cellulose microfibrils in the dominating S2 layer of the secondary cell wall of tracheids and the long axis of the cell [1]. MFA is highly negatively correlated with specific gravity (SG), modulus of elasticity, modulus of rupture, and tangential shrinkage, and positively correlated with the longitudinal shrinkage of wood. For example, a large angle causes longitudinal shrinkage of the tracheid and consequently longitudinal shrinkage of the wood. In addition, wood stiffness and bending strength are negatively affected by large microfibril angles [2-5]. Because of these relationships, MFA has become an important indicator of wood quality for the forest products industry. Linear mixed-effects models have recently received a great deal of attention in the statistical literature because they are flexible models for unbalanced repeated-measures data where the observations are ordered by time or position in space [6-8]. Repeated-measures data arise in numerous forestry-related experiments in which multiple measurements are taken on each of several subjects, such as repeated MFA measurements within disks from different trees [9]. A modern approach to repeated-measures analysis utilizes linear mixed-effects models because their flexible variance-covariance structure allows fixed and random parameters to be estimated simultaneously. Mixed models also provide consistent estimates of the fixed parameters and their standard errors. Furthermore, the inclusion of random parameters captures more variation among and within groups. The objectives of this study were to develop MFA models for dahurian larch in northeastern China based on a linear mixed-effects modeling approach and to evaluate the predictive ability of the mixed-effects model on a separate dataset.


Materials and Methods
MFA Samples. Six trees with good form were selected for destructive sampling from dahurian larch plantations located in the Qitaihe forest bureau in Heilongjiang Province, northeastern China. Each sample tree was felled at ground level and discs (about 5 cm thick) were collected at breast height (1.3 m) from each tree. The discs at breast height from all sample trees were used for microfibril angle measurements. A radial strip 1.5 centimeters square in cross-section, extending from pith to bark, was cut from each disk, and ten tangential samples for each growth ring were measured. There were 1860 samples for MFA measurements in total. A plot of annual-ring MFA for each wood disk is presented in Figure 1. It can be seen that the MFAs follow the same general pattern and vary within each growth ring from pith to bark. Figure 1 also indicates that MFA is large near the pith and decreases rapidly from pith to bark.

Fig. 1. Plot of microfibril angle by different wood disk (six panels, one per tree, of microfibril angle in degrees against age in years).

Linear Mixed-Effects Models (LME). For a single level of grouping, Laird and Ware [10] write the $n_i$-dimensional response vector $y_i$ for the $i$th experimental unit as

$$y_i = X_i \beta + Z_i b_i + \varepsilon_i, \quad b_i \sim N(0, D), \quad \varepsilon_i \sim N(0, \sigma^2 I), \quad (1)$$

where $\beta$ is a $p \times 1$ vector of fixed-effects parameters, $b_i$ is a $q \times 1$ vector of random-effects parameters, $p$ is the number of fixed parameters in the model, $q$ is the number of random parameters in the model, $D$ is the variance-covariance matrix for the random effects, $X_i$ (of size $n_i \times p$) and $Z_i$ (of size $n_i \times q$) are known fixed-effects and random-effects regressor matrices, and $\varepsilon_i$ is the $n_i$-dimensional within-tree error vector with a spherical Gaussian distribution. The assumption $\mathrm{Var}(\varepsilon_i) = \sigma^2 I$ can be relaxed using additional arguments, such as a correlation matrix, in the model fitting. A second-order polynomial equation was selected to model the microfibril angle of larch. The form of the model is

$$y = \beta_1 + \beta_2 x + \beta_3 x^2, \quad (2)$$

where $y$ is microfibril angle (degrees), $x$ is age (years), and $\beta_1, \beta_2, \beta_3$ are regression coefficients.
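To make the model concrete, the following is a minimal sketch of fitting Eq. 2 with all three coefficients treated as mixed effects grouped by tree. The paper fits the model with the LME procedure in S-Plus; this sketch uses Python's statsmodels instead, and the data file and column names (mfa_rings.csv; tree, age, mfa) are hypothetical stand-ins.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical ring-level data: one row per (tree, growth ring),
# with columns tree, age (years), mfa (degrees).
df = pd.read_csv("mfa_rings.csv")

# Eq. 2 as the fixed part: mfa = b1 + b2*age + b3*age^2.
# re_formula requests random effects for all three terms, giving
# one random-effects vector b_i per tree (the grouping unit of Eq. 1).
model = smf.mixedlm(
    "mfa ~ age + I(age ** 2)",
    data=df,
    groups=df["tree"],
    re_formula="~age + I(age ** 2)",
)
fit = model.fit()
print(fit.summary())  # fixed effects beta and variance components D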


Results and Discussion
An approach for determining parameter effects is to obtain separate fits for each individual (tree) and assess the variability of the estimated parameters by considering the individual confidence intervals. Parameters with high variability and little overlap in confidence intervals across trees should be considered mixed effects [7]. Confidence intervals were obtained for the parameters of the second-order polynomial model based on individual fits. Fig. 2 gives the approximate 95% confidence intervals for the three parameters for each tree. It indicates that the confidence intervals of all three parameters varied considerably from tree to tree. Therefore, the three parameters were considered mixed effects. To avoid over-parameterization problems, we compared the mixed model (all three parameters as mixed effects) to several reduced models (some parameters purely fixed) using likelihood ratio tests (LRTs). The LRTs were significant at the 0.0001 level for all comparisons and further confirmed that the three random parameters were needed in the model. The boxplot of residuals by tree was also constructed for visual comparison (Fig. 3). The boxplot shows that the residuals of the mixed-effects model are approximately centered at zero, but that the variability changes with tree. In this case, the mixed model was assumed to have a constant variance, though several outlying observations and large residuals were observed.

Fig. 2. Ninety-five percent confidence intervals on the model parameters (intercept, age, age²) for each tree (trees 1-6).

Fig. 3. Boxplots of residuals for the mixed model.
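The LRT comparison of the full mixed model against a reduced model can be sketched as follows, continuing the hypothetical statsmodels setup above. For a valid LRT both models are fit by maximum likelihood rather than REML; testing variance components with a chi-square reference is conservative at the boundary, but the sketch mirrors the comparison described in the text.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("mfa_rings.csv")  # hypothetical data file, as above

def fit_ml(re_formula):
    """Fit the Eq. 2 polynomial with the given random-effects part by ML."""
    return smf.mixedlm("mfa ~ age + I(age ** 2)", df,
                       groups=df["tree"], re_formula=re_formula).fit(reml=False)

full = fit_ml("~age + I(age ** 2)")  # all three parameters random
reduced = fit_ml("~1")               # random intercept only

# The full 3x3 random-effects covariance has 6 parameters vs. 1 in the
# reduced model, so the chi-square reference has 5 degrees of freedom.
lr = 2 * (full.llf - reduced.llf)
print("LRT =", lr, "p =", stats.chi2.sf(lr, df=5))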

The performance of the mixed and fixed models was visualized by displaying the fitted and observed values for the same tree (Fig. 4). Both the fixed-effects model (with random effects set to zero) and the mixed-effects model are compared. The mixed-effects model more closely followed the trend of the actual values for most trees, indicating that the mixed-effects model described the microfibril angle of larch well. To demonstrate the predictive ability of the mixed and fixed models, we conducted a validation using the separate dataset. Absolute error (AE) and relative error (RE) were used for comparison, as follows:

$$\mathrm{AE} = |x - \hat{x}|, \quad (3)$$

$$\mathrm{RE} = \mathrm{AE} / x. \quad (4)$$

For the test dataset, AE and RE were calculated for each sample (Table 1). The ranges of absolute errors were 0.065-1.144 degrees for the mixed model and 0.812-2.808 degrees for the fixed model. The mixed model also showed lower relative errors than the fixed model for all test samples.
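Eqs. 3 and 4 amount to the following small computation; the sample observations and predictions below are made-up values for illustration only.

```python
import numpy as np

def ae_re(actual, predicted):
    """Absolute and relative error per Eqs. 3-4: AE = |x - x_hat|, RE = AE / x."""
    actual = np.asarray(actual, dtype=float)
    ae = np.abs(actual - np.asarray(predicted, dtype=float))
    return ae, ae / actual

# Made-up MFA observations and model predictions, in degrees.
ae, re = ae_re([17.8, 12.4], [17.54, 13.1])
print(ae, re)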


Fig. 4. Comparison of fixed-effects and mixed-effects predictions of MFA against age for each tree (circles are actual values, solid lines are values from the mixed-effects model, dotted lines are values from the fixed-effects model).

Table 1. Absolute and relative errors for fixed and mixed models using the test dataset

Sample    Mixed model            Fixed model
Number    AE/degrees    RE       AE/degrees    RE
1         0.2581        0.0145   2.0714        0.1164
2         0.1322        0.0076   1.9328        0.1104
3         0.0838        0.0048   1.7036        0.0985
4         0.3101        0.0188   2.0838        0.1263
5         0.4139        0.0259   2.1734        0.1358
6         0.1724        0.0106   1.5724        0.0971
7         0.4511        0.0297   2.1808        0.1435
8         0.9846        0.0688   2.6986        0.1887
9         0.4279        0.0295   2.1258        0.1466
10        0.5811        0.0415   2.2624        0.1616
11        1.1441        0.0873   2.8084        0.2144
12        0.2171        0.0158   1.8638        0.1360
13        0.7000        0.0490   0.9286        0.0649
14        0.0927        0.0070   1.7028        0.1290
15        0.1047        0.0080   1.4864        0.1135
16        0.2078        0.0166   1.7794        0.1424
17        0.4302        0.0358   1.9818        0.1652
18        0.1624        0.0135   1.6936        0.1411
19        0.0954        0.0080   1.4148        0.1179
20        0.1434        0.0121   1.3454        0.1140
21        0.0815        0.0071   1.3854        0.1205
22        0.4720        0.0450   1.8936        0.1803
23        0.0651        0.0062   1.4394        0.1371
24        0.2024        0.0195   1.1228        0.1080
25        0.1303        0.0130   1.1438        0.1144
26        0.0813        0.0086   1.3024        0.1371
27        0.1675        0.0176   0.9986        0.1051
28        0.1232        0.0137   1.2324        0.1369
29        0.0748        0.0083   0.9756        0.1081
30        0.1769        0.0197   0.8128        0.0903
31        0.3324        0.0396   1.2594        0.1499
32        0.6813        0.0852   1.5436        0.1930


Conclusions
In this study, a linear mixed-effects microfibril angle model was developed for dahurian larch in northeastern China. Linear mixed-effects modeling techniques were used to estimate fixed- and random-effects parameters for a second-order polynomial model. The results showed that the second-order polynomial model with three random parameters was best in terms of goodness-of-fit criteria. The mixed-effects model provided better model fitting and more precise microfibril angle estimations than the fixed-effects model. The fitted mixed-effects model was also tested using a separate dataset. The mixed model was found to predict MFA better than the original model fitted using ordinary least squares, based on absolute and relative errors.

References
[1] E. MacDonald and J. Hubert. 2002. A review of the effects of silviculture on the timber quality of Sitka spruce. Forestry, 75(2): 107-138
[2] I.D. Cave and J.C.F. Walker. 1994. Stiffness of wood in fast-grown plantation softwoods: the influence of microfibril angle. Forest Products Journal, 44(5): 43-48
[3] C. Lundgren. 2004. Microfibril angle and density patterns of fertilized and irrigated Norway spruce. Silva Fennica, 38(1): 107-117
[4] S. Fang. 2004. Variation of microfibril angle and its correlation to wood properties in poplars. Journal of Forestry Research, 15(4): 261-267
[5] L. Jordan, R.F. Daniels, A. Clark and R. He. 2005. Multilevel nonlinear mixed effects models for the modeling of earlywood and latewood microfibril angle. Forest Science, 51(4): 357-371
[6] M.J. Lindstrom and D.M. Bates. 1990. Nonlinear mixed effects models for repeated measures data. Biometrics, 46: 673-687
[7] J.C. Pinheiro and D.M. Bates. 1998. Model building for nonlinear mixed effects models. Tech rep, Dep of Stat, Univ of Wisconsin
[8] J.C. Pinheiro and D.M. Bates. 2000. Mixed-effects models in S and S-PLUS. Springer, New York, 528 pp
[9] L. Jordan, R. He, D.B. Hall, A. Clark and R.F. Daniels. 2007. Variation in loblolly pine microfibril angle in the southeastern United States. Wood and Fiber Science, 39(2): 352-363
[10] N.M. Laird and J.H. Ware. 1982. Random-effects models for longitudinal data. Biometrics, 38: 963-974

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.521

Using UML as Front-end for PLC Program Design
Chongming Zhang a, Zuhua Fang b,*, Chunmei Wang c, Jifeng Ni d
College of Information, Mechanical and Electronic Engineering, Shanghai Normal University, Shanghai 201418, China
a [email protected], b [email protected], c [email protected], d [email protected]
* Corresponding Author

Key words: programmable logic controller, object-oriented, UML, state machine

Abstract. To minimize the influence of experiential factors and guarantee software quality from the design phase, an object-oriented design method for PLC programs is presented with the aid of the unified modeling language (UML). With UML as the design tool, the class diagram and the state machine diagram are chosen to describe, respectively, the static structure and the dynamic behavior of the PLC based control system, and the PLC ladder diagram is subsequently derived from the state machine diagram. By combining object-oriented technology, UML, and classic PLC design technology, the software reliability of the PLC based control system is improved, and the application area of object-oriented technology and UML is extended.

Introduction
The programmable logic controller (PLC) is a kind of embedded system widely used in industrial fields. Although IEC 61131-3 [1] has defined five standard programming languages, the ladder diagram is still the most popular of the five in engineering practice. Traditional design methods for ladder diagrams partly depend on error-prone human experience, and PLC designers usually need to spend a lot of time debugging. In the field of software engineering, the technological hierarchy for object-oriented analysis and design is well established. In an object-oriented software system, the unified modeling language (UML) [2] is widely used to model artifacts of the physical and virtual worlds. With the proper use of UML, the quality of a specific software system is guaranteed from the very beginning of its life cycle. Although some UML based software development frameworks for PLC are proposed in [3-5], they are not yet mature enough for practical applications. In this paper, we propose a practical design method that uses UML as the design front-end for the PLC based control system. Our focus is on the mapping from real-world entities to objects.

Proposed design method
In object-oriented methodology, a design is made up of a number of objects that may represent real-world entities. Each object comprises two parts: attributes and operations. Attributes are data items associated with the object, and operations are pieces of functionality; attributes describe something about the object, and operations are things it can do. If we apply object-oriented methodology to the design of PLC programs, the trick is to break down the PLC based system into suitable objects. Attributes of the objects correspond to the states of PLC input devices, while operations correspond to the state updates or actions of PLC output devices. As the mainstream modeling tool in the object-oriented world, UML has more than ten diagram types representing two categories of views of a system model. Static views, like the class diagram, describe the static structure of the system using objects, attributes, operations, and relationships. Dynamic views, like the sequence diagram, activity diagram, and state machine diagram, emphasize the dynamic behavior of the system by showing collaborations among objects and changes to the internal states of objects. In the proposed method, the class diagram and the state machine diagram are used, respectively, to describe the static structure and dynamic behavior of the PLC based system.


Case study for UML based design As is seen in Fig. 1(a), we use a simplified liquid mixer as a demo case for the proposed UML based design. The operation sequence of the mixer is as follows [6]: Normally open start and normally closed stop push buttons are used to start and stop the process; When the start button is pressed, solenoid A energizes to start filling the tank; As the tank fills, the empty level sensor switch closes; When the tank is full, the full level sensor switch closes, solenoid A is de-energized, and the agitate motor starts running; After running for 3 minutes, the agitate motor stops and solenoid B is energized to empty the tank; When the tank is completely empty, the empty sensor switch opens to de-energize solenoid B; The start button is pressed to repeat the sequence.

(Fig. 1(a): sketch of the mixer. Fig. 1(b), class diagram residue:
class Mixer
  + EMPTY: LevelSensor
  + FULL: LevelSensor
  + START: Button
  + STOP: Button
  - TIMER: Timer
  + Motor_Run(): void
  + Solenoid_A_Energized(): void
  + Solenoid_B_Energized(): void
  + TIMER_Timing(): void)

Fig. 1 Mixer and its class diagram representation

The whole mixer can be modeled as one object. The states of input devices, like level sensors and push buttons, are represented as attributes of the object. The state updates or actions of output devices, like solenoids and motors, are represented as operations of the object. PLC internal programming components, like timers and counters, should be regarded as virtual input and/or output devices according to their functions. Fig. 1(b) shows the class diagram for the mixer object. The state of the timer (TIMER) is regarded as an attribute, which indicates the timer's internal state, such as time-out/done, timing, or idle; the action of starting timing is modeled as an operation. The possible values for the attributes of the mixer object are as follows: level sensor ON/OFF, button ON/OFF, timer Timing/Done. According to the sequence of the mixer's operation and the combinations of different attribute values, the operation process of the mixer can be divided into four states. Fig. 2 shows the state machine diagram for the mixer, which clearly shows the four states, the operations in each state, and the guard conditions for the state transitions. The next step is to translate the state machine diagram into a PLC ladder diagram. We choose the Omron style ladder diagram as a demo realization. The I/O addressing for each device is tagged in Fig. 2. The ladder diagram in Omron style grammar is shown in Fig. 3, in which SET and RSET (reset) instructions are used to switch the states. In summary, the basic steps of the proposed UML based design method are as follows: first, identify suitable objects in the PLC based system; second, define attributes and operations for each object and obtain the class diagram; third, for each object, determine each state according to the control logic and a suitable combination of attribute values, then draw the state machine diagram; last, translate the state machine diagram into the PLC program. With the adoption of object-oriented methodology and UML, the design process for the PLC based system becomes more reliable.
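As an illustration of this mapping, the following sketch simulates the mixer object and its four-state machine in Python. It is a behavioral model for clarity, not the generated PLC program; the state and method names loosely follow Figs. 1-3, and the comments note the corresponding internal relay addresses as an assumed correspondence.

```python
from enum import Enum, auto

class MixerState(Enum):
    IDLE = auto()       # waiting for START (state bit 20000 in Fig. 3)
    FILLING = auto()    # do / Solenoid_A_Energized()  (20001)
    AGITATING = auto()  # do / Motor_Run(), TIMER_Timing()  (20002)
    EMPTYING = auto()   # do / Solenoid_B_Energized()  (20003)

class Mixer:
    """Mixer as one object: attributes mirror PLC input devices,
    operations mirror output actions (cf. the class diagram, Fig. 1(b))."""

    def __init__(self):
        self.state = MixerState.IDLE

    def scan(self, start, stop_pressed, empty_closed, full_closed, timer_done):
        """One PLC-like scan: apply the guard conditions of Fig. 2."""
        if stop_pressed:                          # STOP resets every state
            self.state = MixerState.IDLE
        elif self.state is MixerState.IDLE and start:
            self.state = MixerState.FILLING
        elif self.state is MixerState.FILLING and empty_closed and full_closed:
            self.state = MixerState.AGITATING     # tank full: motor + 3 min timer
        elif self.state is MixerState.AGITATING and timer_done:
            self.state = MixerState.EMPTYING
        elif self.state is MixerState.EMPTYING and not empty_closed:
            self.state = MixerState.IDLE          # tank completely empty again
        return self.state

# Example: pressing START while idle begins filling.
m = Mixer()
print(m.scan(start=True, stop_pressed=False,
             empty_closed=False, full_closed=False, timer_done=False))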


(State machine diagram residue, Fig. 2: four states tagged with internal relays 20000-20003. From state 20000, [START is pushed, 00000 ON] leads to State1 (20001), with do / Solenoid_A_Energized(), 01001 ON; [EMPTY closes, 00002 ON, and FULL closes, 00003 ON] leads to State2 (20002), with do / Motor_Run(), 01003 ON, and do / TIMER_Timing(), TIM00 #3min enabled; [TIMER has run for 3 minutes, TIM00 done] leads to State3 (20003), with do / Solenoid_B_Energized(), 01002 ON; [EMPTY opens, 00002 OFF] returns to state 20000; [STOP is pushed, 00001 OFF] exits each state.)

Fig. 2 State machine diagram for the operation process

(Ladder diagram residue, Fig. 3, Omron style: state-switching rungs using SET/RSET, e.g., 20000 AND 00000 -> SET 20001; 20001 AND 00002 AND 00003 -> RSET 20000, SET 20002; 20002 AND TIM00 -> RSET 20001, SET 20003; 20003 AND 00002 -> RSET 20002, SET 20000; 00001 -> RSET 20001, 20002, 20003; P_First_Cycle initializes the first state. Output rungs: 20001 -> 01001; 20002 -> 01003 and TIM00 #3min; 20003 -> 01002.)

Fig. 3 Ladder diagram program

Issues on complicated systems
Owing to the page limitation of this short paper, the above-mentioned PLC based mixer is very simple and can be modeled as one object. In real applications, PLC is often used in complicated control systems, and such a system should be modeled as several objects. The traditional state machine is not suitable for modeling a complicated system: due to the phenomenon known as state and transition explosion, the complexity of a traditional state machine tends to grow much faster than the complexity of the system it describes. Hierarchically nested states should be used to model complicated systems.


Fig. 4 shows an application scenario in which hierarchically nested states are useful. The PLC based flexible manufacturing system produces two types of metal parts on the same production line. The state machine diagram for the system should have two main states that represent the production processes for the two parts, respectively. According to the state of some input device, the system switches between these two main states.

(Figure residue: two top-level nested states, Processing Part 1 and Processing Part 2.)

Fig. 4 Hierarchically nested states

Summary and final remarks
Using object-oriented methodology and UML as the front-end for PLC program design decreases the debugging time and improves the reliability of the PLC based system. The quality of the PLC program is guaranteed from the design phase. Currently we translate the UML diagrams to PLC programs manually. In the future, software tools should become mature enough to automatically perform the mapping between UML diagrams and the IEC 61131-3 programming languages. Designers of PLC based control systems should always keep an eye on developments in the software engineering community.

Acknowledgements
This work was partially funded by the Leading Academic Discipline Project of Shanghai Normal University of China under grant No. DZL810.

References
[1] K.-H. John, M. Tiegelkamp: IEC 61131-3: Programming Industrial Automation Systems (Springer, 2010).
[2] B.P. Douglass: Real Time UML: Advances in the UML for Real-time Systems (Addison-Wesley, 2004).
[3] B. Vogel-Heuser, D. Witsch, U. Katzke, in: Proceedings of the International Conference on Control and Automation (ICCA '05), 2005, vol. 2, pp. 1034-1039.
[4] K. Han, J. Park, in: Computer and Information Science 2009, Studies in Computational Intelligence 208, edited by R. Lee, et al., Springer, 2009, pp. 33-45.
[5] D. Witsch, B. Vogel-Heuser, in: Proceedings of the IEEE Conference on Emerging Technologies & Factory Automation (ETFA '09), 2009, pp. 1-6.
[6] F.D. Petruzella: Programmable Logic Controllers (McGraw-Hill Higher Education, 2005).

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.525

M & A Impact in China and Its Norms
CUI Wei 1,a, LIU Yang 1,b, CHEN Yan 1, QIAN Si-yu 1, YE Jia 1, YANG Hai-feng 2, LI Ya-jun 2
1 Transportation Management College, Dalian Maritime University, Dalian, Liaoning, P.R. China
2 Development Planning Department, Chongqing Electric Power Corp., Chongqing, P.R. China
a [email protected], b [email protected]

Key words: Foreign M & A; Impact of foreign capital; Strategic adjustment

This article has been supported by the Chinese National Natural Science Fund in 2010, "Strategy research of using supply chain partner's knowledge in enterprise knowledge creation process", project approval code 71072124. It was also supported by the Fund of Liaoning Province Reform of Higher Education "Research of service outsourcing personnel training mode" in 2007, the Fund of the Dalian Science and Technology Plan "Research of ways expanding Dalian outsourcing service industry market" in 2008, the Fund of the Dalian Science and Technology Plan "Research of Dalian comprehensive prediction system of electric power and energy" in 2009, the Fund of Chongqing Electric Power Corporation "Research of Chongqing comprehensive prediction system of electric power and energy" in 2008, and the Project of Dalian Maritime University Reform of Graduate Education and Teaching "Construction of teaching content system of Management Science and Engineering based on the platform of motion and internet of things" in 2010.

Abstract. Foreign acquisitions of domestic enterprises in Chinese industries have become a focus of attention. An analysis of the development of foreign mergers and acquisitions in China shows that foreign mergers and acquisitions are beneficial in some areas. However, many issues have also arisen, giving foreign investors monopoly power in China's market. An irregular merger-and-acquisition process and the lack of a legal basis have resulted in serious losses of state-owned assets, and distortions in foreign investment and preferential policies lead to competitive inequality for domestic enterprises. Finally, this work proposes basic principles for establishing and improving a system to attract foreign investment: higher prices for better quality, industrial safety and economic security, and fair taxation, so as to promote fair competition and create a favorable investment environment for mergers and acquisitions.

Introduction
Foreign M & A, also known as cross-border mergers and acquisitions, is the main form of foreign investment in the international community. It covers foreign companies, enterprises, economic organizations, and individuals merging with and acquiring businesses in the territory, directly or through the purchase of shares or assets. Some years ago, foreign investment deals such as U.S. Carlyle's acquisition of Xugong and France's Cyber (SEB) acquisition of SUPOR [1] attracted wide attention in our community. Some people worry about whether foreign investors acquiring Chinese businesses will monopolize the market, and whether this endangers industrial security and economic security. Today, the Chinese Government and even the whole society seem to be re-examining the role of foreign investment in the Chinese economy.

526

Manufacturing Systems and Industry Application

China's foreign investment process
Why should China attract foreign investment? Initially there were two objectives: supplementing the domestic shortage of funds, and introducing advanced technology and management. Unlike in other countries and regions, because of imperfect capital markets, China's foreign investment was for a long time mainly greenfield, that is, investment in new enterprises [2]. In recent years, foreign investment in China by way of mergers and acquisitions has increased, but it has consisted mainly of small-scale acquisitions. At present, with China's rapid economic development, the savings rate is high, foreign exchange reserves grow too fast, and the banking system's liquidity is in surplus; many enterprises that introduce foreign capital do not use the acquired foreign exchange to buy foreign equipment and other products. In that case, can foreign capital still bring advanced technology and improve the level of management? In fact, it may not. Giving national treatment, rather than super-national treatment, to foreign investment is the fundamental core of re-examining China's foreign investment policy. Today, China is no longer merely a low-cost production base for foreign-funded enterprises. On the contrary, China has become the largest target market for foreign investment; if the super-national preferential policies for foreign investment continue, they will do great damage to domestic-funded enterprises, distorting the overall economic environment and harming long-term economic development.

Foreign M & A's impact on China
The impact of foreign M & A on China is multi-faceted. It mainly plays an active role in introducing advanced technology and mature management, enhancing enterprise competitiveness, reducing excessive competition within industries, and achieving economies of scale.

Positive impact of foreign M & A. Foreign M & A investment is a new trend in current international capital flows, and in China it is also a main form of attracting foreign investment; it has played an active role in mobilizing the stock of domestic assets, optimizing the industrial structure, and promoting technological progress. Foreign M & A brings new opportunities to the transformation of China's economy.

It is an important way to reorganize and transform state-owned enterprises. Summing up China's experience in attracting foreign investment and promoting state-owned enterprises, the 16th CPC National Congress advanced that the use of foreign direct investment is also an important means of speeding up the reform of state-owned enterprises, and pointed out that medium- and long-term foreign investment should be used in several ways, combining the use of foreign capital with domestic and foreign economic restructuring and the reorganization and transformation of state-owned enterprises. The introduction of capital, technology, and advanced management ideas and experience is bound to raise the competitiveness of the industry as a whole, activates the market to a certain extent, and has a demonstration effect on the entire industry. Of course, economic structural adjustment does not rely only on the use of foreign capital, and foreign M & A is not the only way to choose, but it cannot be denied that it is one of the important ways.

It helps to further improve the asset allocation function of China's securities market. Foreign M & A is conducive to rationalizing the ownership structure of acquired listed companies. A number of multinational companies have the strength to enter listed companies; they can also bring more advanced manufacturing technology and management methods to the listed companies and enhance the global competitiveness of the related listed companies. Foreign M & A will also help to further reduce the concentration of ownership of A-share listed companies in the territory. A report by the Hua Baoxingye fund industry said that as stock prices go down enough to attract industrial capital, the market will shift to an industrial-investment perspective to judge the intrinsic value of an enterprise. In addition, Changjiang Securities also suggested that foreign M & A can carry out a strategic layout, including in integration and resource sectors; it will also take the largest share of the huge revenue and assets brought about by the long-term rapid growth of China's economy and gain the double income of industrial and financial capital.


Negative impact of foreign M & A. However, the ultimate goals of foreign M & A are to maximize market share and profits and to serve the global strategy of multinational companies; for the development of China's national industry, the optimization of the industrial structure, and the achievement of economic take-off, it can also have negative impacts.

It may enable hostile takeovers of Chinese enterprises and monopoly of the Chinese market. The largest negative effect of controlling foreign M & A is that it can lead to monopoly. After transnational corporations take control of Chinese businesses through capital operations, they gradually occupy a larger market share with their strength. Control of China's strategic industries in particular can amount to monopolizing, or attempting to monopolize, parts of domestic industry [3]. The share of gross industrial output contributed by foreign-funded enterprises increased from 2.28% in 1990 to more than 35% now. In light industry, the chemical industry, medicine, machinery, and electronics, the products of foreign-funded enterprises have occupied more than 1/3 of the domestic market share. With their technology, economies of scale, and brand advantages, transnational corporations build higher barriers to entering an industry; prices can then be raised above the perfectly competitive level to gain huge monopoly profits. If foreign M & A forms a monopoly, foreign capital not only controls the domestic market, carving it up through monopolistic development and pricing strategies, undermining the order of market competition, and impairing the interests of consumers, but also constrains the growth and technological progress of domestic enterprises and restricts the development of domestic infant industries.

It could lead to the loss of state assets. Irregular foreign M & A and the lack of a legal basis result in serious losses of state assets. In foreign acquisitions of state-owned enterprises, two problems are prevalent [4]: first, in most cases intangible assets such as trademarks, patents, and goodwill are not recorded in the total enterprise value, and this part of the state assets is lost; second, the relatively high technical content of the labor force of the former state-owned enterprise is not recorded in the total enterprise value, which likewise constitutes a loss of state assets.

The basic principles for treating foreign M & A at the present stage
Because understanding of cross-border M & A investment and its procedures has not been comprehensive enough, some people regard foreign M & A as a "great scourge." Long Yongtu, Secretary-General of the Boao Forum for Asia, believes that foreign M & A is a form of FDI absorption used worldwide and an effective means of the market. M & A does not necessarily constitute a monopoly of an industry, and M & A itself will not endanger industrial safety and economic security; the key is to strengthen review and supervision. The head of the Foreign Investment Department of the Ministry of Commerce also said that, in today's economic globalization, it is necessary to look at global foreign M & A with a normal state of mind: we should neither demonize it nor hold it in contempt. We should strengthen the study of mergers and acquisitions, improve M & A laws and regulations, and improve the early-warning mechanisms for industrial safety and anti-trust; we should promote fair competition and create a conducive investment environment for M & A development.

First of all, foreign M & A should reflect the principle of higher prices for better quality, not cheap buying [5]. Under the opening-up policy, to attract foreign investment, foreign M & A cannot be avoided; its arrival is an objective necessity. Since we cannot stay out of economic globalization, rejecting all foreign M & A is ill-advised. We should allow foreign M & A, but we cannot let matters drift. The most basic principle is fair and just pricing: higher prices for better quality, not acquisitions on the cheap. In fact, in joint ventures and foreign acquisitions, many state-owned enterprises have suffered losses of state assets; high-quality state-owned enterprises did not obtain favorable prices. Belgium's InBev Group paying nearly 5.9 billion to buy a 100% stake in Fujian Sedrin at a premium of more than 8 times net assets is, I am afraid, a rare case. Most deals are undervalued, bought cheaply, or even give away brand intangible assets for free. When better-quality state-owned enterprises do not fetch better prices, there is a suspicion of corruption involving state-owned assets.


Secondly, foreign M & A should face a threshold based on the principle of not endangering industrial safety and economic security [6]. Opening to the outside world is limited and cannot accept any offer: we should not allow garbage companies to move in, and not all industries are open to the outside world. Every country restricts foreign M & A, while foreign M & A in China previously met almost no defense. Therefore, China should follow international practice and announce clearly that enterprises bearing major national equipment development and production, especially those related to national security, cannot be controlled through acquisition by foreign capital. With the State Council's "Several opinions on speeding up the revitalization of the equipment manufacturing industry" issued, foreign M & A cases such as the Carlyle acquisition of Xugong and the Cyber (SEB) acquisition of Supor should be judged not only on whether the purchase was cheap, but also on whether the M & A endangers industrial safety and economic security.

Thirdly, foreign M & A should serve China's own needs, and foreign capital should be prevented from regarding China as a tax paradise. The open policy exists in order to introduce funds, technology, and advanced management experience. After opening to the outside world for more than three decades, what have we gained? Statistics show that in recent years foreign-related national tax revenue has increased rapidly, at an annual rate of about 20%; in 2005 foreign-related taxation revenue reached 634.849 billion yuan, accounting for 20.57% of national tax revenue [7]. Widely circulated data put tax avoidance by foreign enterprises at 300 billion yuan, while according to estimates of some Chinese local tax officials the figure may be 127 billion yuan. In other words, a number of foreign-funded enterprises regard China as a tax haven [8,9,10]; that is to say, the Chinese government has not received the tax receivable from foreign-funded enterprises. Differentiated corporate income tax and concessions to foreign investment "cannot bring tangible benefits to the state or the people" and are "equal to simply handing over the Chinese market."

Summary
Of course, we should recognize that foreign M & A is not only an economic issue but also a social and political problem; it is very complex. In addition to establishing a sound legal environment, there should be a good market environment, including a good capital market, a complete intermediary services market system, a property rights trading system, and a monitoring system; only by improving the all-round environment in this way can the rapid success of foreign M & A be promoted. China should seize the favorable opportunity of current international capital flows and cross-border M & A while safeguarding national sovereignty, upholding control and development initiative in areas key to the people's livelihood and key industries, and guarding against financial risks; it should create the necessary policy framework and a sound investment environment for the effective use of global cross-border M & A investment, and reasonably guide and regulate the conduct of foreign mergers and acquisitions so as to promote the restructuring and reform of China's state-owned enterprises and the rapid, all-round, coordinated development of China's economy.


References
[1] Ji Shupeng: Use chopsticks to eat Western M & A: Xugong. China M & A Comments, Vol. 4, (2007).
[2] Lu Liangbiao: Lawyer Lu Liangbiao interprets the foreign M & A events of 2007. Corporation, Vol. 12, (2007).
[3] Li Sujuan: The economic causes of foreign M & A. Chinese Foreign Investment, (2006).
[4] Gu Lieming: What is the bottom line of foreign M & A. Chinese Foreign Investment, (2006).
[5] Han Caizhen: The problems and the policy directions of foreign acquisitions of domestic enterprises. Chinese Foreign Investment, (2006).
[6] Yu Hui: Trade associations and their development in China in transition, (2002).
[7] Zhang Yang and Huang Ping: Analysis of four hot industries of foreign M & A. China's Foreign Trade Press, Vol. 9, (2002).
[8] Zhuo Xiangyu: Foreign M & A and China's securities market. China Securities News, (2002).
[9] Cai Hong: China's legislative research on the foreign M & A regulatory system. The Exploration of International Trade, Vol. 2, (2002).
[10] J. Fred Weston: Merger, Restructuring and Corporate Control (Economic Science Press, China 1998).

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.530

A Detection Method for Overlapped Spikes
Jia Qi 1,a, Min Dai 1,b, Gang Zheng 2,c, Tong-Tong Liu 2,d
1 Tianjin Key Lab of Intelligent Computing & Novel Software Technology, Tianjin University of Technology, 300384 Tianjin, China
2 Key Laboratory of Computer Vision and System, Ministry of Education, Tianjin University of Technology, 300384 Tianjin, China
a [email protected], b [email protected], c [email protected], d [email protected]

Key words: spike sorting; overlapped spikes; window detection method; nonlinear energy operator

Abstract. A new spike detection method is proposed to detect overlapped spikes. To avoid missing overlapped spikes, the method adds threshold detection on top of the window detection method. Moreover, a nonlinear energy operator is introduced to make the method robust even in low signal-to-noise ratio situations. In addition, the method solves the repeated-detection problem by estimating slopes. Experiments show that the method performs well under both low signal-to-noise ratio and baseline wander. Especially for overlapped spike detection, it has a much lower false negative rate than other traditional detection methods.

Introduction
Most neurons in the brain communicate by firing spikes. Extracellular recordings with low-impedance electrodes are capable of recording such activity from several neurons near the tip of the electrode. Neuroscientists often need to know which spikes come from which neurons in order to understand the neuronal circuitry. To that end, the signals of the firing neurons are acquired via multi-channel electrodes, and the spikes are then classified into different clusters corresponding to putative neurons. This whole procedure is called "spike sorting". Before the classification step, however, every single spike should be extracted out of the signal mixed with unknown background noise, which is known as the "detection" step. Since multi-channel extracellular recordings contain the spike streams of several neurons adjacent to each electrode, spikes from different neurons may overlap temporally and produce novel waveforms, and noise inevitably distorts spike waveforms; it is therefore necessary to decipher from the recordings the number of neurons contributing to each electrode, their characteristic waveforms (templates), and the spike temporal sequence of each neuron. Neurons usually generate action potentials with a characteristic shape. For many neurons, the most prominent feature of the spike shape is its amplitude, or the height of the spike. Based on that, the threshold detection method [1] and the window detection method [2-4] are the most widely used. There are also many other methods that introduce mathematical theory into spike detection, such as a method based on a Bayesian model [5] and a method based on the wavelet transform [6]. Generally, the window detection method has an advantage in detecting baseline-wandered spikes. Massive numbers of overlapped spikes, however, are ignored by this method, which results in a high false negative rate (FNR). To solve this problem, a new detection method is proposed in this paper, as an improvement of the window detection method. In each window whose max-min amplitude difference is higher than the threshold, a new threshold is applied to avoid ignoring overlapped spikes. All the extreme points detected are then filtered again with a nonlinear energy operator (NEO) to overcome the problem of low signal-to-noise ratio (SNR). Meanwhile, whether an extreme point near the window boundary is a spike is decided by calculating its slopes, thereby solving the repeated-detection problem and avoiding missed detection of spikes between adjacent windows.


Problems of the Window Detection Method for Overlapped Spikes
Nowadays, the multi-electrode array (MEA) is widely used as the data acquisition equipment, which can extracellularly collect the electrical signals fired by neurons. Many neurons can be found around each electrode tip, so when these neurons discharge simultaneously, a superposition of their signals is recorded [2]. So far, spike detection methods have been studied in different ways; unfortunately, each has its own limitations, and for overlapped spikes in particular a higher FNR results. The window detection method was first proposed by Chiappalone et al. [3]; with its help, high FNR and false positive rate (FPR) can be reduced when the raw data exhibits baseline wander. As Fig.1 shows, a window of fixed width slides from left to right along the time axis. The max-amplitude point in a window is considered a spike time if the max-min amplitude difference in that window is larger than a predetermined threshold.

Fig.1 Window detection method for solving the baseline wander problem

The threshold is defined by Eq. 1:

threshold = std(X(n)) * b,  (1)

where X(n) represents all original data points and b is a constant, usually in the range of 3 to 5 [2]. Because the window detection method takes into consideration how sharply the data changes (both slope and amplitude), it shows better results than the threshold detection method, especially when the baseline drifts. However, this method has two drawbacks: computational burden and a higher FPR caused by repeated detection [2]. As Fig.2 shows, the same spike will be detected in both window A and window B if the max-min amplitude differences in those two windows are both higher than the threshold. The problem of heavy computation was solved by Yao Shun at Huazhong University of Science and Technology [2,4]: the running time is greatly shortened because the standard deviation of all the max-min amplitude differences in the windows, instead of the raw data, is chosen as the threshold. However, when massive overlapped spikes are recorded, more than one spike may exist either within a window or between adjacent windows. As Fig.3 shows, the extra spike can easily be ignored, resulting in high FNRs.
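A minimal sketch of the window detection pass of Eq. 1 follows. It steps through non-overlapping windows for simplicity; the window length and b are assumptions to be tuned per recording, and the paper's sliding-window variant may differ in step size.

```python
import numpy as np

def window_detect(x, win_len=64, b=4.0):
    """Report the max-amplitude sample of every window whose
    max-min difference exceeds threshold = std(X(n)) * b (Eq. 1)."""
    x = np.asarray(x, dtype=float)
    threshold = np.std(x) * b  # b typically in the range 3..5 [2]
    spike_times = []
    for start in range(0, len(x) - win_len + 1, win_len):
        w = x[start:start + win_len]
        if w.max() - w.min() > threshold:
            spike_times.append(start + int(np.argmax(w)))
    return spike_times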

Fig.2 Repeated detection on window boundary
Fig.3 Overlapped spikes detection using the window detection method: (a) overlapped spikes in a window; (b) overlapped spikes between windows


A Method for Detecting Overlapped Spikes
According to the unsolved problems mentioned above, negligence of overlapped spikes and repeated detection on window boundaries, two new methods are proposed below to solve them respectively.

A. Detecting Overlapped Spikes
To deal with the problem of massive overlapped spikes being ignored by the window detection method, after finding all the above-threshold windows, a new threshold is applied within each of these windows. The new threshold can be set as a constant fraction (ranging from 0.5 to 1) of the max-min amplitude difference in each window. All the extreme points larger than that threshold are picked out, so the overlapped spikes in the windows are not missed and the FNR is reduced greatly. Because the electrode tip is not small enough to be inserted into a cell, the SNR of extracellular recordings can be very low depending on the recording environment, and this cannot be avoided in some applications such as neural prostheses [7]. As a result, an extreme point picked out in this way may not be a spike, even though it has a large amplitude. So, taking the case of low SNR into consideration, those points are filtered again by the nonlinear energy operator (NEO) to clear away the noise. The points that pass this screening are finally determined to be spikes. For a discrete-time sequence x(n), the NEO is given as Eq. 2:

ψd[x(n)] = x(n)² − x(n+1)·x(n−1),  (2)

where x(n) stands for the amplitude of the raw data in our work [8]. To sum up, the flow chart of our method is shown in Fig.4.

Fig.4 Flow chart of the method for overlapped spikes detection
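Eq. 2 is a simple three-point operator; a direct vectorized sketch:

```python
import numpy as np

def neo(x):
    """Nonlinear energy operator (Eq. 2):
    psi_d[x(n)] = x(n)^2 - x(n+1) * x(n-1).
    Border samples are left at zero since they lack a neighbour."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[2:] * x[:-2]
    return psi

# A spike-like bump yields a large psi at its peak; slow drift yields small psi.
print(neo([0.0, 0.1, 2.5, 0.1, 0.0]))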


In Fig.4, the parameter value of winThrsh(i) is calculated by Eq. 3:

winThrsh(i) = b * diff(i),  (3)

where b is a constant ranging from 0.5 to 1, and diff(i) stores the max-min amplitude difference of window i. Therefore, winThrsh varies with diff in each window.

B. Avoiding Repeated Detection
In addition, to solve the problems of repeated detection on window boundaries and of missing overlapped spikes between adjacent windows, a new algorithm based on the method above is proposed. Whether an extreme point on a window boundary is a spike is determined by the slopes on both of its sides (a code sketch follows; the flow chart of this algorithm is shown in Fig.5 below).
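The following sketch combines the per-window threshold of Eq. 3 with the border slope rule that Fig.5 and the next paragraph describe. Assumptions: the window is indexed within the full signal so the neighbour just outside the window is visible, and the picks would still be screened by the NEO filter afterwards.

```python
import numpy as np

def detect_in_window(x, start, stop, b=0.8):
    """Keep every local maximum in window [start, stop) exceeding
    winThrsh = b * diff (Eq. 3, b in 0.5..1, diff = max - min of
    the window). At the window borders the neighbouring sample
    outside the window decides, which avoids both repeated
    detection and missing overlapped spikes between windows."""
    x = np.asarray(x, dtype=float)
    w = x[start:stop]
    win_thrsh = b * (w.max() - w.min())
    picks = []
    for n in range(start, stop):
        higher_than_left = n == 0 or x[n] > x[n - 1]
        higher_than_right = n == len(x) - 1 or x[n] > x[n + 1]
        if higher_than_left and higher_than_right and x[n] > win_thrsh:
            picks.append(n)
    return picks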

Fig.5 Flow chart of the method for avoiding repeated detection

As the figure above shows, when the qualifying point appears as the first data point in the current window, it is determined to be a spike if the value of the preceding data point is smaller than it; similarly, when the qualifying point appears as the last data point in the current window, it is determined to be a spike if the value of the following data point is smaller than it. Since the extreme points that cause the repeated-detection problem all lie on the borders of the windows, the problems of repeated detection and of missing overlapped spikes between windows are both solved at the same time.

Experiments and Results
The experimental data used in our work is the simulation data from the University of Leicester. The raw data, including 507 spikes from 3 different neurons, is stored in the file test.mat, and the spike times and templates used for confirming the experimental results are stored in the file test_spikes.mat. The experiment is constructed to compare the detection performance of our method with the threshold detection method, the window detection method, and the improved window detection method in the cases of low SNR, baseline wander, and many overlapped spikes, respectively. Detection performance covers three aspects: FNR, FPR, and running time.

A. Low SNRs
Since the spikes can be extracted from the raw data using the spike times and templates, test data with different SNRs can be constructed. In the experiment, the data is processed as follows: first extract the noise from the raw data, then scale the noise signal according to Eq. 4, as in [9], to synthesize experimental data with different SNRs. The spike times of the new test data are the same as those of the raw data.


$$\mathrm{SNR} = \left( \frac{\sqrt{\sum_{i=1}^{M} \mathrm{spike}(i)^2}}{\sqrt{\sum_{i=1}^{N} \mathrm{noise}(i)^2}} \right)^{2}, \quad (4)$$

where spike(i) (i = 1, 2, ..., M) denotes the amplitude of spike i, and noise(i) (i = 1, 2, ..., N) stands for the amplitude of the noise. The experimental results for SNRs of 10 and 5, respectively, are shown in Table 1.

Table 1 Experiment results under different SNRs

SNR  Method                            FPR      FNR      Running Time [s]
10   Threshold Detection Method        0.14003  0.14398  0.031
10   Window Detection Method           0.23865  0.23668  0.062
10   Improved Window Detection Method  0.19921  0.12228  0.078
10   Our Method                        0.13215  0.11242  0.062
5    Threshold Detection Method        0.44773  0.56410  0.031
5    Window Detection Method           0.54832  0.55424  0.063
5    Improved Window Detection Method  0.50690  0.53057  0.062
5    Our Method                        0.22485  0.53057  0.062

As shown in Table 1, the method proposed in this paper performs better when the SNR is low. Its FPR and FNR are both lower than those of the other methods when the SNR is 10. When the SNR drops to 5, the method in this paper achieves a large reduction in FPR, because high-amplitude noise has been sifted out by the energy filter.

B. Baseline Wander
To simulate data with baseline wander, a 0.2 Hz sinusoidal wave was added to the raw data. The comparison between the different methods is shown in Table 2.

Table 2 Experiment results with baseline wander

Method                            FPR      FNR      Running Time [s]
Threshold Detection Method        0.46351  0.46351  0.016
Window Detection Method           0.25049  0.22288  0.078
Improved Window Detection Method  0.19329  0.22288  0.062
Our Method                        0.12820  0.15976  0.047

As shown in Table 2, the FPR and FNR of the threshold detection method are both very high even though its running time is the shortest. Because the repeated-detection problem is solved, the improved window detection method has a lower FPR than the window detection method. The method proposed in this paper not only has better accuracy than the other methods, but also a short running time, second only to the threshold detection method.

C. Massive Overlapped Spikes
To simulate signals with massive overlapped spikes, the raw data is shifted 10 points to the left and added to the initial data. The comparison between the different methods is shown in Table 3.

Table 3 Experiment results for overlapped spikes

Method                            FPR      FNR      Running Time [s]
Threshold Detection Method        0.27317  0.27218  0.031
Window Detection Method           0.29783  0.57199  0.062
Improved Window Detection Method  0.14201  0.58579  0.063
Our Method                        0.14201  0.27120  0.063


As can be seen in Table 3, the FNR and FPR are both very high for the window detection method and its improved version, because they ignore the overlapped spikes within windows and between two adjacent windows. The method proposed in this paper has an FNR as low as that of the threshold detection method, but a much lower FPR owing to its three filters.

Summary and Discussion
To ensure the accuracy of spike sorting, spikes should first be detected precisely. To address the high FNR caused by ignoring overlapped spikes in a window or between adjacent windows, a new detection method was proposed in this paper: based on the window detection method, threshold detection and energy detection are applied within qualifying windows to determine spikes, and whether a point on the border of a window is a spike is judged by the slopes on both of its sides. Experiments show that, without affecting the running time, the accuracy of our method is better than that of other traditional detection methods under both low SNR and baseline wander. Especially when numerous overlapped spikes exist, the FNR of our method is greatly reduced compared with the window detection method. There is also a problem with our method: some parameters, such as the threshold used in the window or the energy threshold, need to be set manually, which is a time-consuming task. These values should differ across testing situations, but the proposed method uses compromise empirical values. Future study should therefore focus on unsupervised methods for determining the parameters.

Acknowledgment
The project is supported by the Tianjin Natural Science Foundation under Grant No. 10JCYBJC00700 and the foundation of the Tianjin Municipal Education Commission under Grant No. SB20080052.

References
[1] Wei-Dong Ding, Jing-Qi Yuan and Pei-Ji Liang: Study on the detection and sorting of multi-electrode neural spikes. Chinese Journal of Scientific Instrument, Vol. 27 (2006), p. 1636-1640.
[2] Shun Yao: The Study of Spike Detection and Burst Analysis of Hippocampal Neural Network. Huazhong University of Science and Technology, Wuhan, P.R. China, 2005.
[3] Chiappalone M and Vato A: Networks of neurons coupled to microelectrode arrays: a neuronal sensory system for pharmacological applications. Biosensors and Bioelectronics, Vol. 18 (2003), p. 627-634.
[4] Shun Yao, Hai-Long Liu, Chuan-Ping Chen and Xiang-Ning Li: Neural Network Spike Detection under Effect of Local Field Potential and Derivative. Computer and Digital Engineering, Vol. 33 (2005), p. 19-23.
[5] Mz Song and Hb Wang: A spike sorting framework using nonparametric detection and incremental clustering. Neurocomputing, Vol. 69 (2006), p. 1380-1384.
[6] Zoran Nenadic and Joel W. Burdick: Spike detection using the continuous wavelet transform. IEEE Transactions on Biomedical Engineering, Vol. 52 (2005), p. 74-87.
[7] Choi JH, Jungh K and Kim T: A new action potential detector using the MTEO and its effects on spike sorting systems at low signal-to-noise ratios. IEEE Trans Biomed Eng, Vol. 53 (2006), p. 738-746.
[8] Kaiser JF: On a simple algorithm to calculate the 'energy' of a signal. 1990 International Conference on Acoustics, Speech, and Signal Processing. Albuquerque, NM: IEEE Signal Processing Society (1990), p. 381-384.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.536

The Design of a Motor Parameter Test System Based on WAP

Tong Kuanzhang1,a, Zheng Yingfeng2,b, Fan Tongrang1,c

1 School of Information Science and Technology, Shijiazhuang Tiedao University, Shijiazhuang 050043, China

2 Modern Education Technology Center, Huanghe Science and Technology College, Zhengzhou 450063, China

a [email protected], b [email protected], c [email protected]

Key words: Motor parameter test, mobile working, enterprise WAP system, J2EE.

Abstract. In order to enable enterprise employees to access motor parameters anytime and anywhere and to understand the working state of motors, we apply WAP-related technologies to design a WAP-based motor parameter test system. We present the function module design, the software architecture and the key technologies of the system realization, as well as methods for solving several implementation problems.

Introduction

With the development of economic globalization and the expansion of company markets and scale, information products are widely adopted by enterprises to replace manual labor, and working efficiency has thereby improved greatly. But owing to the restraints of online equipment, time and place, previous information systems cannot meet the demands of flexible working models: they are hard to carry and lack timeliness. If a staff member of the enterprise is on a business trip and cannot get online on time, some important work will be delayed or even left out, which may cause unexpected results. In recent years, with the rapid development of mobile communications, the wide use of 3G mobile phones in commerce and the expanding number of mobile phone users, the trend of modern commerce migrating to the mobile Internet platform has become more and more obvious. As a result, establishing a mobile information system in the enterprise becomes more urgent and important. Through an enterprise mobile information system, staff members can complete urgent work at hand at any time and in any place to improve their working efficiency, so constructing a wireless motor test system in the enterprise has great significance.

Based on the online motor parameter test system, and comprehensively using mobile computing, Internet and WAP technologies, we combine JSP and WML and adopt the MVC model to design and develop a WAP-based motor test system that accords with the motor staff's mobile working demands. Through mobile phone terminals, test operators and monitors can connect to the system at any time, obtain related data and information rapidly, test the quality of motors, master the working condition of motors in time, and respond rapidly to motor problems, thereby greatly reducing costs, enhancing the quality of motor products and increasing enterprise economic benefits.


System Development Technology

WAP (Wireless Application Protocol) is an open global standard for communication between digital mobile telephones, personal digital assistants (PDAs) or other wireless devices and the Internet [1]. It is formed by a series of protocols and builds a bridge between the existing wireless communication networks and the Internet, linking the mobile network closely with the Internet and corporate networks so that operators can provide mobile value-added services independent of the terminal equipment and network type. WAP technology includes not only a set of protocols for wireless Internet access but also a wireless application programming model and language rules; WAP-based programs can be written in the XML-compliant markup languages WML and XHTML. The aim of WAP is to bring the large amount of Internet information and various services to mobile phones, PALM devices and other wireless terminals. With a WAP system built on this technology, customers can use WAP mobile phones anytime, anywhere, and enjoy online information and resources without restriction.

In 2008, with the telecom restructuring and the issuance of 3G licenses, the WAP services market faced enormous opportunities and challenges. The 2009 China WAP industry research and consulting report [2] shows that China then had 117 million mobile Internet users; mobile data services were accelerating and WAP traffic was exploding. With the increase of wireless network bandwidth, the growth of the number of mobile terminal users and the changes in customer psychology, WAP can be expected to be the main choice of future wireless consumers.

WML (Wireless Markup Language) is an XML-based markup language designed by the WAP Forum for narrow-band communication equipment; the micro browser built into mobile equipment can interpret this markup language. It is mainly used to mark up Internet information sent and received by WAP mobile terminals. For developers, it not only opens up a new application field with great potential but also helps them take full advantage of the user interface. In addition, XHTML MP is another WAP application development language. XHTML MP can also be viewed as XML, i.e. a data file conforming to XML standards. The advantage of XHTML MP is that a site developed with it can be used both on the Web and on the wireless Internet, but the handset coverage of XHTML MP is lower than that of WML, so at present WML is still the more widely used of the two. WML provides browsing support, data input, hyperlinks, and text and image features. Currently, WAP gateways are not mature enough in the mutual transformation between HTML and WML; although a number of conversion tools have appeared, many problems remain, so WAP sites still need to be "tailor-made" in WML for different situations.

Development environment

The WAP-based motor parameter test system belongs to the WAP service access mode. Since in practice the realization and operation of the system involve different departments, a simulated mobile-terminal browser is used to test the running effectiveness. The application system is developed on the Windows platform. The server runs on Windows operating systems such as Windows XP / Windows 2003 Server; the database server uses SQL Server 2000; the WAP server is built with Tomcat 5.5; and the development tools include Eclipse 3.4.2, Dreamweaver 8 and JDK 1.6. The study uses Chinese WAP emulators to run the client application; after careful screening, we decided to use the Opera 7.06 Simplified Chinese WAP browser and the M3gate simulation browser.


Opera 7.06 connects quickly and is a tool for browsing WAP sites on a computer. Compared with other simulators, the biggest feature of the M3gate browser is its stricter testing: programs that pass testing in the M3gate browser can generally be accessed through other browsers. When a program is opened in the simulated browser, it can be connected to a PC, so the data on the simulated browser and on the PC stay in step.

Function Design

The WAP-based motor parameter test system is designed to meet the demand for mobile office work in motor enterprises, bringing the advantages of the wireless network into the internal departments of motor enterprises, providing a platform for employees to work in their spare time, and improving staff job performance. Considering the application of mobile technology, the commonly used functions of the WAP-based motor parameter test system are designed in detail. The function modules of the system are shown in Fig. 1.

[Fig. 1 diagram: the WAP-based motor test system comprises an Information Module (motor information input/update, test data input/update, personal information view/update), a Test Module (Cold Insulation Resistance test, Hot Insulation Resistance test, and further tests), a Users and Equipment Identity Module, and a User Management Module (add users, view users, set user rights).]
Fig. 1: Function modules of the WAP-based motor parameter test system

Users and equipment identity module: integrating the WAP-based motor parameter test system well with the online motor parameter test system requires a separate "user and device identity module" to identify and authenticate the user's device type. The specific functions are as follows: when a user visits the motor parameter test system, the system automatically identifies the user's device type (PC or mobile terminal device) and selects the appropriate system entry (WEB or WAP portal) to show the corresponding front page. When receiving the user's identity information, the system checks it; if the user is authenticated, the system automatically grants the user the appropriate permissions and provides the corresponding operational functions.

Information module: this module lets the user operate on the motor's basic information and test data, including examining all the relevant basic information and modifying or deleting existing records. After the report number is set, a fuzzy inquiry function is provided according to the report number, power, model, nine-character code, electromagnetic design number, and design point number. The current user can also update personal information through this module.

Test module: the test module covers the fifteen tests involved for a motor; the user can enter an individual test and input the test information. After inputting the Insulation Resistance and Cold Resistance


test data, the results of these two tests can be calculated. The specific test calculations shall be conducted in the following order: Insulation Resistance, Cold Resistance, Heat Load test, Load test, Temperature Resistance test (Additional Temperature Resistance), No Load test, Temperature Rise test (Additional Temperature Rise), Stall test, Star Connection Stall test, Noise test, and Vibration test.

User management module: this module includes user group maintenance, user group permissions, user authorization, and a user view/add function. Group maintenance is used to add, delete and modify user groups; user group permissions are used to set the permissions of each group; the user authorization feature provides modification of user information and permissions, as well as IP binding; the view/add function is used to view and add users in each group.

Software architecture

Object-oriented technology and Java are developing rapidly. J2EE, a Java-based platform, is independent of the underlying system, and its related technologies EJB, JSP and Java Servlet have also developed rapidly, so J2EE has become an ideal platform standard for enterprises. The system to be developed is an enterprise application; this study uses Eclipse 3.4.2 + Tomcat 5.5 as the integrated development environment and develops with the Struts framework, which is in strict accordance with the MVC pattern. The architecture of the WAP-based motor parameter test system is shown in Fig. 2.

[Fig. 2 diagram: three layers. Presentation layer: View (JSP, ActionForms) and Control (ActionServlet, which instantiates ActionMappings from struts-config.xml); business logic layer: Model (JavaBeans); data persistence layer: database connection pool and JDBC in front of the database.]
Fig. 2: Architecture of the WAP-based motor parameter test system

The entire system architecture consists of three layers, namely the presentation layer, the business logic layer and the data persistence layer. The presentation layer uses the Struts framework recommended for medium and large multi-tier Java applications. Struts provides an MVC framework for Java Web applications: the model (M) represents the application state and business logic; the view (V) provides an interactive interface that displays the model data to the client; the controller (C) responds to customers' requests, manipulating the model according to the requests and showing the results to the customer through the view. In the presentation layer, the ActionServlet acts as the controller and JSP + WML is used as the view. The business logic layer (model) provides business logic to the presentation layer; the user information, motor information and test information in the system are packed into corresponding JavaBean classes or EJB components. Each class has a corresponding Action class; through a unified interface, the servlet invokes these JavaBeans through the unified business logic, accepts the object returned by a JavaBean, extracts the data of the returned object, and passes it to the view for display on the terminal.
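To make this request flow concrete, the following is a minimal, hypothetical Action sketch in the Struts 1.x style described above; the class name, parameter name and forward name are illustrative assumptions, not the authors' actual code.

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.struts.action.Action;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;

// Hypothetical controller action: receives a motor query from a WAP terminal
// and forwards to a JSP+WML view named in struts-config.xml.
public class MotorQueryAction extends Action {
    public ActionForward execute(ActionMapping mapping, ActionForm form,
                                 HttpServletRequest request, HttpServletResponse response) {
        // Read the report number submitted by the terminal (illustrative parameter).
        String reportNo = request.getParameter("reportNo");
        // The real system would call a business-logic JavaBean here;
        // this sketch simply exposes the value to the view.
        request.setAttribute("reportNo", reportNo);
        // "success" must be declared as a forward of this action mapping.
        return mapping.findForward("success");
    }
}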


The data persistence layer uses a JDBC database connection pool to achieve object persistence, so database access is completely transparent to the view layer; as a result, code reuse of the components is also greatly enhanced.

In detail, a request from the terminal is first sent to the central controller, the ActionServlet of the Struts framework, for processing. The ActionServlet holds a set of ActionMapping objects built from the configuration; each ActionMapping maps a specific request to a specific Action object of the model. The ActionServlet accepts the client's request, reads the struts-config.xml configuration file, and determines from the requested URL which Action object to call. At the same time, all the data in the request form are put into the corresponding ActionForm, and the instantiated ActionForm is passed to the Action object. The Action uses the ActionForm data, calls the business logic of the model, saves the handling result into an ActionForward object, and returns to the central controller the page to jump to. The ActionServlet parses the information in the ActionForward and jumps to the corresponding JSP page, which sends the dynamically generated page to the terminal browser through the response.

To make the organizational structure clear and easy to understand, the components are packed into packages according to the functions they realize, namely the com.electromotortest package, the user.manage package and the data.connection package. The system uses web.xml and struts-config.xml to configure its settings. The configuration information in web.xml includes the servlet name and mapping, the session settings, the welcome page and error handling. When the system server starts, the configuration information is read from web.xml, and requests are then handled according to the mapping. struts-config.xml defines the form-beans and action-mappings: form-beans contains several form-bean elements used to configure the ActionForm beans (in the entire Struts framework an ActionForm bean can be referenced by a globally unique identifier), while action-mappings defines the set of actions and describes the mapping from specific request paths to the appropriate Action classes, including the submission path of the form, the internal jump pages and so on.

The WAP-based motor parameter test system is designed around the enterprise's internal characteristics and the employees' flexible working needs. Although its functionality is smaller than that of the network version of the motor parameter test system, it complements the original system in flexibility, timeliness and mobility. The system supports massive, dynamic, anytime-anywhere use, achieving simultaneous real-time access, information sharing and test operation at any time and in any place. The current prototype system is designed for the enterprise LAN environment; its hardware requirements are relatively low, which gives it a certain practicality.

System development and testing

The system development completed all the contents of the module design, and the system platform was tested using the simulators; the test results are satisfactory. Fig. 3 and Fig. 4 show the basic motor information tested with the Opera WAP browser and the M3gate browser, while Fig. 5 and Fig. 6 show the results of the Cold Insulation Resistance test.

Fig. 3: Motor information query interface
Fig. 4: Motor information query
Fig. 5: Cold Insulation Resistance test
Fig. 6: Cold Insulation Resistance test

System implementation issues and solutions

WAP pages cannot pass Chinese parameters directly, so variables cannot contain Chinese characters or symbols. If Chinese data needs to be transferred from one page to another, the Chinese value must be converted to the utf-8 encoding in the sending page, and in the receiving page the utf-8 characters must be converted back to the gb2312 encoding before they can be displayed and used in database queries and other operations. In the receiving page, the following code converts the encoding of the Chinese characters:

// create the conversion helper object
ToUTF to = new ToUTF();
// the return value is converted to the Chinese gb2312 encoding
String pa = to.U2GB(request.getParameter("p"));

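The ToUTF helper above is part of the authors' code and is not listed in the paper. A minimal sketch of what its U2GB method might look like is given below, assuming the servlet container decoded the parameter as ISO-8859-1 (a common default at the time); this is an illustrative reconstruction, not the original class.

import java.io.UnsupportedEncodingException;

// Hypothetical reconstruction of the conversion helper used above.
public class ToUTF {
    // Re-decode a parameter that the container read as ISO-8859-1 into gb2312 text.
    public String U2GB(String s) {
        if (s == null) {
            return null;
        }
        try {
            return new String(s.getBytes("ISO-8859-1"), "GB2312");
        } catch (UnsupportedEncodingException e) {
            // Fall back to the raw value if the encoding is unavailable.
            return s;
        }
    }
}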

If a Chinese value is passed by a form to a page with paging functionality, and the received Chinese data is used to query the database, you need to determine whether the data was received from the form page or via HTTP, because the paging links pass the value within the page itself via HTTP (for example, the Home link). An HTTP-passed value must first be decoded with ISO-8859-1 and then output as utf-8 before the query operation, while a Chinese value passed from a form can be obtained directly with String keys = String.valueOf(request.getParameter("key")). For example, the keyword query code is as follows:

[Search form markup (submitting the keyword parameter "key") was lost in extraction.]

The receiving part of the Result.jsp page is coded as follows:

// get the value passed by the form
String keys = String.valueOf(request.getParameter("key"));
// decoded value in case it was passed via HTTP
String keys1 = new String(keys.getBytes("ISO-8859-1"), "utf-8");
if (request.getParameter("pages") == null) {
    // if the value was passed by the form, receive it directly
    keys = request.getParameter("key").trim();
} else {
    // if the value was transferred via HTTP, the encoding has to be converted
    keys = keys1;
}

Conclusion

The rapid development of mobile communication technology and multimedia technology provides good technical support for enterprise information applications using wireless systems. This study, based on WAP technology and the J2EE standard, constructs a mobile office system that serves enterprise employees anywhere, anytime. Employees access the WAP motor test system in real time via hand-held terminals; they can obtain data and information whenever they need it, even in their spare time, and test the motor quality. Therefore, work efficiency can be improved and better profit can be earned by the enterprise.

References

[1] WAP Protocol [EB/OL]. http://baike.eccn.com/eewiki/index.php/WAP%E5%8D%8F%E8%AE%AE.
[2] Zhongyan Puhua Corporation: 2009 China WAP Industry Research and Consulting Report [R/OL]. 2009-4. http://www.chinairn.com/yanjiubaogao/8625wap.html.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.543

An Evaluated Priority Based P2P Streaming Media Scheduling Algorithm

Song You-jua, Tang Rui-chunb, Xu Guangc

College of Information, Ocean University of China, Qingdao 266100, China

a [email protected], b [email protected], c [email protected]

Key words: P2P; streaming media; evaluated priority; EPBSMSA

Abstract. This paper studies P2P streaming media scheduling algorithms. Aiming at ensuring that the data blocks with smaller sequence numbers arrive first, we propose an evaluated priority based P2P streaming media scheduling algorithm (EPBSMSA). The algorithm calculates the priority of a data block according to its rarity, urgency and transmission time. Simulation proves that the algorithm is effective.

Introduction

The streaming media scheduling algorithm is an important research area in streaming media systems. Scheduling algorithms are divided into tree-topology-based algorithms and mesh-topology-based algorithms. The current streaming media applications deployed on the Internet include PPLive, QQLive and so on. The earliest mesh-topology-based large-scale P2P streaming media scheduling system is CoolStreaming, which has gained wide application. Mesh-topology-based scheduling algorithms mainly include the Random algorithm and the RF algorithm. However, the RF algorithm does not take the transmission sequence of data blocks into account. An effective way to solve this problem is to request the data blocks with smaller sequence numbers first, so that enough data can be buffered to ensure smooth playback in the absence of sufficient bandwidth.

This paper is structured as follows. The second part is related work, the third part is the evaluated priority based P2P streaming media scheduling algorithm, the fourth part is the algorithm analysis, and the fifth part is the conclusion.

Related work

Research environment. As shown in Fig. 1, the research environment is based on a P2P architecture topology; each node in the network has the same function and status. For each streaming media data block, how much of the data block the neighbor nodes hold is defined as the data block rarity, and how much of the data block the requester node holds is defined as the data block urgency. When a client requests a required streaming media data block, it collaborates with its neighbor nodes in the P2P manner: the client sends a request to the neighbor nodes, the neighbor nodes respond to the request and return the data block rarity, urgency and transmission time, the client confirms the information, calculates the priority of the data block and notifies the provider, and the provider provides the data block to the client.


Fig. 1: P2P streaming media scheduling environment

Problem Statement. The paper [1] set a deadline for each data block and calculated its priority according to urgency; this urgency-based algorithm makes full use of the network resources and decreases the server load. The paper [2] proposed a node-capability-based scheduling algorithm integrating the RF algorithm and the CoolStreaming algorithm, which ensures load balance among nodes and real-time transmission of data blocks. The paper [3] considered the number of data block providers and the urgency of the data block and proposed a combination-factor-based P2P streaming media data scheduling algorithm to enhance the efficiency of data transmission. However, these algorithms do not take into account how to ensure that the data blocks with smaller sequence numbers arrive first. The paper [4] took the data block rarity and urgency into account to determine the request sequence and proposed an adaptive data scheduling algorithm that can automatically adjust the provider node. The paper [5] proposed a least-priority scheduling algorithm which first obtains the data frames with smaller sequence numbers to minimize the start-up delay. However, these algorithms do not consider how to calculate the priority of a data block from its own factors, including rarity, urgency and transmission time. This paper considers how to guarantee that the data blocks with smaller sequence numbers arrive first and proposes an evaluated priority based scheduling algorithm.

EPBSMSA Description

Priority Calculation. Suppose N_i is the number of neighbor nodes which hold the data block R_i, offset_i^k is the cached size of R_i on the k-th neighbor node, and offset_i^R is the cached size of R_i on the requester.

rarity_i = 1 − ∏_{k=1}^{N_i} (offset_i^k / size_i),  0 < offset_i^k ≤ min(B, size_i)    (1)

In formula (1), if the k-th neighbor node does not hold R_i, the cached size offset_i^k is 0. The larger offset_i^k is, the lower the rarity of R_i is and the lower its priority is.

urg_i = 1 − offset_i^R / size_i,  0 ≤ offset_i^R < min(B, size_i)    (2)

In formula (2), the larger offset_i^R is, the lower the urgency of R_i is and the lower its priority is.

time_i = 1 − (time_i^start + (size_i − offset_i^R)/w) / deadline_i    (3)


In formula (3), deadline_i is the life period of R_i and time_i^start is the start transmission time of R_i. If time_i < 1, then R_i can arrive before its deadline. The smaller time_i is, the lower the priority is.

priority_i = time_i × rarity_i × urg_i    (4)

In formula (4), the larger priority_i is, the earlier the data block R_i should be requested.

EPBSMSA. Suppose the network is dynamic but stable and each node's capability is similar. EPBSMSA is described below; a sketch of the priority computation in code follows the step list.

(1) Let Request-now be the set of data blocks the client currently requests;
(2) Sort the data blocks by sequence number from small to big and mark them respectively; obtain their sizes size_i and their neighbor node numbers N_i in sequence;
(3) If the number of neighbor nodes holding a data block is less than or equal to a threshold value M, turn to (4), otherwise turn to (5); remove these data blocks from Request-now;
(4) Add the data blocks that meet the condition into Request-Collection and judge whether it is full; if full, turn to (6), if not, turn to (5);
(5) Add the data blocks into Request-Collection according to their neighbor node numbers from small to big;
(6) Process the request queue Request-Collection in sequence and turn to (7), until all the data blocks in Request-Collection have been processed;
(7) Obtain the detailed information of the neighbor nodes, including the node IP and the cached size offset_i^k;
(8) Calculate the priority of each data block according to formula (4); for the high-priority data block, choose the most suitable provider according to the IP matching principle, denoted by P, and turn to (9);
(9) Feedback mechanism: the provider P sends a message with the sequence number of the data block to the requester R. The requester checks whether a data block with a larger sequence number has already arrived; if so, it feeds back an "inefficient" message and the provider gives up sending the data block; if not, it feeds back an "efficient" message and the provider provides the data block to the requester.
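As a concrete reading of formulas (1)-(4), the following is a minimal sketch of the priority computation; the class and variable names are illustrative assumptions, and formula (3) is used in the form reconstructed above (w is the transmission bandwidth to the requester).

// Sketch of the evaluated-priority computation of formulas (1)-(4).
public class BlockPriority {
    // formula (1): product over the neighbor nodes that hold the block
    static double rarity(double[] neighborOffsets, double size) {
        double prod = 1.0;
        for (double offset : neighborOffsets) {
            prod *= offset / size;
        }
        return 1.0 - prod;
    }

    // formula (2): how much of the block the requester already caches
    static double urgency(double requesterOffset, double size) {
        return 1.0 - requesterOffset / size;
    }

    // formula (3), as reconstructed above
    static double time(double size, double requesterOffset, double startTime,
                       double w, double deadline) {
        return 1.0 - (startTime + (size - requesterOffset) / w) / deadline;
    }

    // formula (4): overall evaluated priority
    static double priority(double[] neighborOffsets, double requesterOffset, double size,
                           double startTime, double w, double deadline) {
        return time(size, requesterOffset, startTime, w, deadline)
                * rarity(neighborOffsets, size)
                * urgency(requesterOffset, size);
    }
}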


Algorithm Proof

Lemma 1. The algorithm guarantees that the data blocks with smaller sequence numbers arrive first: for R_i, R_j in Request-Collection, if R_i < R_j, then R_i arrives before R_j.

Proof: Assume R_j arrives before R_i, that is, before the arrival of R_j, R_i has not yet arrived. According to step (9) of the algorithm, when R_j arrives there is no data block with a smaller sequence number that arrived earlier than R_j. In addition, since R_i arrives later than R_j, we would have R_i > R_j, contradicting the precondition. The assumption therefore does not hold, so R_i arrives before R_j.

Lemma 2. The algorithm can choose the provider P among multiple providers such that P sends the data block to the requester in the shortest time.

Proof: Suppose there are providers P and P′. According to steps (1)-(5) of the algorithm, the data blocks in Request-Collection are sorted from small to big, and according to step (6) the data blocks with smaller sequence numbers are processed first, ensuring that they are requested first.

priority_i = rarity_i × urg_i × time_i (time_i ≤ 1) for provider P, and priority_i′ = rarity_i′ × urg_i′ × time_i′ (time_i′ ≤ 1) for provider P′.

∵ priority_i > priority_i′, with rarity_i = rarity_i′ and urg_i = urg_i′,
∴ time_i > time_i′.

∵ time_i = 1 − (time_i^start + (size_i − offset_i^R)/w_{p,R}) / deadline_i, and
   time_i′ = 1 − (time_i^start + (size_i − offset_i^R)/w_{p′,R}) / deadline_i,
∴ w_{p,R} > w_{p′,R}, and therefore t_{p,R} < t_{p′,R}. The conclusion is proved.

Theorem. The algorithm selects the provider P, and the data block with the smaller sequence number is provided by P first.

Proof: According to Lemma 2, assume the provider for data block R_i is P. According to Lemma 1, as R_i < R_j, R_i arrives before R_j. As P is the provider for R_i, P provides R_i to the requester in the shortest time. Consider the following three situations: (1) if P has the data block R_j and P is the provider of R_j, P provides R_i and R_j to the requester in sequence; (2) if P has the data block R_j but P is not the provider of R_j, then P only provides R_i to the requester and does not provide R_j, but R_i < R_j, so R_i arrives before R_j; (3) if P does not have the data block R_j, another provider provides R_j to the requester, but R_i < R_j, so R_i arrives before R_j. The theorem is proved completely.


Algorithm Analysis

The time complexity of the algorithm is O(m), where m is the number of requested data blocks. The simulation environment is Microsoft Visual C++ 6.0, and the EPBSMSA algorithm is compared with the RF algorithm. The simulation results show that the algorithm can effectively reduce the data block scheduling time and thus decrease the user delay.

Figure 2: Client delay comparison between EPBSMSA and RF

With the increase of the number of clients, the client delay of both EPBSMSA and RF decreases; at the same time, the client delay of EPBSMSA is less than that of RF.

Figure 3: Buffer space utilization comparison between EPBSMSA and RF

With the increase of the number of clients, the buffer space utilization of both EPBSMSA and RF increases; at the same time, the buffer space utilization of EPBSMSA is higher than that of RF.

Conclusion

This paper studies P2P streaming media scheduling algorithms. Considering how to schedule the data blocks with smaller sequence numbers, we propose an evaluated priority based P2P streaming media scheduling algorithm. The algorithm can effectively reduce the data block scheduling time and decrease the user delay.


References

[1] Wang Fu-chen, Jin Hai, Cheng Bin, Liao Xiao-fei: submitted to the Journal of Huazhong University of Science and Technology (2006).
[2] Zhou Zhi-yong, Zhang Guo-qing, Zhang Guo-qiang: submitted to Computer Engineering & Design (2008).
[3] Zheng Xiao-le, Zheng Quan, Li Jun: submitted to Computer System Application (2010).
[4] He Ke, Fang Min: submitted to Systems Engineering and Electronics (2009).
[5] Sun Ming-song, Zhou Hong-min, Tang Liang: submitted to Computer Application (2008).

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.549

A Flexible Query Answering Approach for Autonomous Web Databases

Xiangfu Meng1,a, Xiaoyan Zhang1,b and Xiaoxi Li2,c

1 College of Electronic and Information Engineering, Liaoning Technical University, China

2 College of Mechanical and Electronic Engineering, Xidian University, China

a [email protected], b [email protected], c [email protected]

Key words: Web database, flexible query, attribute weight, query relaxation, ranking.

Abstract. Users often have imprecise ideas when searching autonomous Web databases and thus may not know how to precisely formulate queries that lead to satisfactory answers. This paper proposes a novel flexible query answering approach that uses a query relaxation mechanism to present relevant answers to the users. Based on the user's initial query and the data distribution, we first speculate how much the user cares about each attribute and assign a corresponding weight to it. Then, the initial query is relaxed by adding the most similar attribute values into the query criteria range. The relaxation order of the attributes specified by the query and the degree of relaxation on each specified attribute vary with the attribute weights; the first attribute to be relaxed is the least important one. The relevant result tuples are finally ranked according to their satisfaction of the initial query. The efficiency of our approach is demonstrated by experimental results.

Introduction

With the rapid expansion of the World Wide Web, more and more Web databases are available online for lay users. Existing Web database query processing models have usually assumed that users know what they want, and they support only a strict query matching model. But with the increasing scope and complexity of E-Commerce Web databases, it is unrealistic to expect users to know the database structure and contents, and their query intentions are usually vague and imprecise as well. Often, such users may not know how to precisely express their needs and may formulate queries that lead to unsatisfactory results. Therefore, providing some flexibility for the initial query can help users improve their interactions with the system and possibly retrieve more relevant answers.

Several approaches have been proposed to handle the flexibility of queries in database systems for presenting more information relevant to the initial precise query. These approaches can be classified into two main categories. The first one is based on fuzzy set theory [1]. Tahani [2] first advocated the use of fuzzy sets for querying conventional databases. In recent years, the approaches proposed in [3] and [4] relax the query criteria by using membership functions, domain knowledge and the α-cut operation of fuzzy numbers. However, it should be noted that the flexibility approaches based on fuzzy sets are highly dependent on domain knowledge when constructing the membership functions. The second category focuses on relaxing the query criteria range (query rewriting), such as [5], [6] and [7], which handle flexibility based on distance notions, linguistic preferences, and so on. These approaches do not need to construct membership functions a priori, but in most cases they are not fully automatic and require the database designers or users to provide distance metrics; unfortunately, such information is hard to elicit from users.

This paper proposes a novel flexible query answering approach, FQAA, which can find the result tuples most relevant to the initial query and does not require any user feedback. The approach uses attribute weights, data statistics and query history statistics to assist the relaxation process. Furthermore, for the relevant result tuples returned by a flexible query, FQAA ranks them according to their satisfaction of the initial query.

The rest of this paper is organized as follows. Section 2 gives the definition of the flexible query and overviews the solution. Section 3 proposes the method of attribute weight assignment. Section 4 presents the query relaxation method, while Section 5 presents the similarity estimation method. The experimental results are presented in Section 6. The paper is concluded in Section 7.


Problem Definition and Solution Overview

This section gives the definition of the flexible query and then overviews the solution to this problem.

Problem Definition. Consider a Web database relation D with n tuples {t1, ..., tn} and a schema consisting of m mixed categorical and numerical attributes, D(A1, ..., Am). Let Dom(Ai) denote the active domain of attribute Ai. Given a selection query Q over D with a conjunctive selection condition of the form Q = ∧_{i ∈ {1,...,k}} Ci, where each Ci takes the form Ai = ai, k ≤ m, each Ai in the query condition is an attribute from A and ai is a value in its domain. The objective is to find all tuples of D whose similarity to Q is above a threshold α ∈ (0, 1]; specifically, Ans(Q, D) = {t | t ∈ D, Similarity(Q, t) > α}, where the threshold α is given by the user or the system designer. The constraints are: (i) D only supports the boolean query processing model (i.e. a tuple either satisfies or does not satisfy the given query); (ii) the answers to Q must be determined without altering the data model or requiring additional guidance from users.

Solution Overview. The basic idea of answering flexible queries over an autonomous Web database is as follows. First, based on the user's original query and the data distribution, we speculate how much the user cares about each attribute and assign a corresponding weight to it. The original query is then relaxed into an approximate query by adding the most similar attribute values into the query criteria range. The relaxation order of the specified attributes and the degree of relaxation on each specified attribute vary with the attribute weights; the first attribute to be relaxed is the least important one. Finally, the relevant query results are ranked according to their satisfaction of the initial query.

Attribute Assignment

This section first introduces the Kullback-Leibler (KL) distance and then describes the attribute weight measuring method based on the KL distance.

Kullback-Leibler Divergence. In the real world, different users have different preferences, so it is necessary to surmise each user's preference when making recommendations. To address this problem, we start from the query the user submitted: we assume that the user's preference is reflected in the submitted query and hence use the query as a hint for assigning weights to attributes. A measure of the difference between the distribution of Ai in the database D and in a base set R is the Kullback-Leibler distance [8]. Suppose Ai is a categorical attribute with value set {ai1, ai2, ..., aim}. Then the KL-distance of Ai from D to R is

D_KL(D || R) = Σ_{j=1}^{m} P(Ai = aij | D) · log [ P(Ai = aij | D) / P(Ai = aij | R) ]    (1)

in which P(Ai = aij | D) refers to the probability that Ai = aij in D and P(Ai = aij | R) refers to the probability that Ai = aij in R. Note that if Ai is a numerical attribute, its value range is first discretized into a few value sets, where each set refers to a category, and then the KL distance of Ai can be calculated as in (1).

Attribute Weight Measuring. To calculate the KL-distance in Equation (1) we need the distribution of attribute values over D. In this paper, we propose a probing-and-count based method, adapted from the algorithms proposed in [9] and [10], to build a histogram for an attribute over an E-Commerce Web database. By issuing a set of selected queries to D and extracting the number of query results, we obtain the attribute value distribution of Ai. An equi-depth histogram [11] is used to represent the attribute value distribution, from which we get the probabilities required in Equation (1). Algorithm 1 shows the procedure for building a histogram for attribute Ai over D.


Algorithm 1. Histogram construction algorithm
Input: attribute Ai and its value range, database D and its total number of tuples |D|, minimum bucket number n
Output: a single-attribute histogram HDi for attribute Ai
1. If Ai is a categorical attribute
2.   For each distinct value aij of Ai
3.     Query D using "Ai = aij" and get its occurrence count c
4.     Add a bucket (aij, c) into HDi
5. If Ai is a numerical attribute with domain range [alow, aup)
6.   λ = |D|/n   /* λ is a threshold on the number of tuples in a bucket */
7.   low = alow, up = aup
8.   Do
9.     Query D using "low ≤ Ai ..."

[The remainder of Algorithm 1 and the beginning of the query relaxation procedure are missing from the source; the surviving relaxation steps continue below.]

5.   ... > ψi
6.     Add v into the query range of Ci
7.   End If
8. End If
9. If Ai is a numerical attribute
10.  Replace the query range of Ci with [q − h·(1 − ψi)/ψi, q + h·(1 − ψi)/ψi]
11. End If
12. Q = Q ∪ Ci
13. End For
14. Return Q
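To illustrate how formula (1) would be computed from two probed value-count histograms, here is a minimal sketch; the method and variable names are assumptions, and add-one smoothing is used purely as a simplification to avoid zero probabilities:

import java.util.HashMap;
import java.util.Map;

// Sketch: KL distance between the value distribution of an attribute in D and in a base set R.
public class KlWeight {
    static double klDistance(Map<String, Integer> countsD, Map<String, Integer> countsR) {
        double totalD = countsD.values().stream().mapToInt(Integer::intValue).sum();
        double totalR = countsR.values().stream().mapToInt(Integer::intValue).sum();
        double kl = 0.0;
        for (Map.Entry<String, Integer> e : countsD.entrySet()) {
            double pD = e.getValue() / totalD;
            // Add-one smoothing so values absent from R do not divide by zero (a simplification).
            double pR = (countsR.getOrDefault(e.getKey(), 0) + 1) / (totalR + countsR.size() + 1);
            kl += pD * Math.log(pD / pR);   // formula (1)
        }
        return kl;
    }

    public static void main(String[] args) {
        Map<String, Integer> d = new HashMap<>();
        d.put("red", 50); d.put("blue", 30); d.put("green", 20);
        Map<String, Integer> r = new HashMap<>();
        r.put("red", 5); r.put("blue", 15);
        System.out.println(klDistance(d, r));
    }
}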

Similarity Estimation

This section first gives the query-tuple similarity estimation and then describes the similarity estimation between categorical values.

Similarity Definition. Let Q be a conjunctive query over an autonomous Web database D and let t be a relevant result tuple for Q. The similarity of t to Q is defined as

Similarity(t, Q) = Σ_{i=1}^{m} Wimp(Ai) × si,  where
si = Sim(Q.Ai, t.Ai)             if Dom(Ai) is categorical
si = 1 − |Q.Ai − t.Ai| / Q.Ai    if Dom(Ai) is numerical    (6)

where m is the total number of attributes in D, Wimp(Ai) is the weight of attribute Ai, and Sim(Q.Ai, t.Ai) measures the similarity between categorical values as explained below.

Categorical Values Similarity Estimation. Unlike numerical values, categorical values are discrete, and it is thus difficult to measure their similarity. In this paper we discuss an approach for deriving the similarity coefficient of categorical values by using the history log of past user queries on the database. The query history reflects the frequency with which database attributes and values were requested by past users and thus may be interesting to new users. The intuition is that if certain pairs of values often "occur together" in the query history, they are similar. Let f(u, v) be the frequency with which the values u and v of categorical attribute A occur together in an IN clause in the query history. Also let f(u) be the frequency of occurrence of the value u of


categorical attribute A in an IN clause in the query history, and f(v) the frequency of occurrence of the value v of categorical attribute A in an IN clause in the query history. Then we measure the similarity coefficient between u and v by the following Equation (7):

Sim(u, v) = (f(u, v) + 1) / (max(f(u), f(v)) + 1)    (7)
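As a concrete illustration, formulas (6) and (7) can be evaluated as in the following sketch; the counts and values are made-up examples, not data from the paper:

// Sketch of the categorical similarity of formula (7) and the per-attribute terms of formula (6).
public class QuerySimilarity {
    // formula (7): co-occurrence based similarity of two categorical values
    static double simCategorical(int fUV, int fU, int fV) {
        return (fUV + 1.0) / (Math.max(fU, fV) + 1.0);
    }

    // numerical branch of formula (6)
    static double simNumerical(double queryValue, double tupleValue) {
        return 1.0 - Math.abs(queryValue - tupleValue) / queryValue;
    }

    public static void main(String[] args) {
        // e.g. two values of Make with co-occurrence counts taken from a query log (made-up numbers)
        System.out.println(simCategorical(12, 40, 25));
        // e.g. Price: query value 10000 versus tuple value 11500
        System.out.println(simNumerical(10000, 11500));
    }
}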

Equation (7) indicates that the more frequently a pair of attribute values occurs together, the larger their similarity coefficient is. Note that with this formula Sim(u, v) is not zero even if the pair of values is never referenced in the query history.

Experiments

Experimental Setup. For our evaluation, we set up a used car database CarDB (Make, Model, Year, Color, Engine, Price, Mileage) containing 100,000 tuples extracted from Yahoo! Autos. The attributes Make, Model, Year, Color and Engine are categorical, and the attributes Price and Mileage are numerical. We used the Microsoft SQL Server 2005 RDBMS on a P4 3.2-GHz PC with 1 GB of RAM, implemented all algorithms in C#, and connected to the RDBMS through ADO.

Efficiency of Query Relaxation. To verify the efficiency of the query relaxation and result ranking methods of FQAA, we asked 15 subjects to behave as different kinds of buyers, such as rich people, clerks, students, women, etc., and each subject was asked to submit one query to CarDB according to their preference. For each test query Qi, a set Hi of 30 tuples, likely to contain a good mix of relevant and irrelevant tuples, was generated. Finally, we presented the queries along with their corresponding Hi's to each user in our study. Each subject's responsibility was to mark the tuples in Hi relevant to the query Qi and to mark the top-10 tuples that they preferred most. We then use the Recall metric to evaluate the efficiency of our query relaxation method. Recall is the ratio of the number of relevant tuples retrieved to the total number of relevant tuples. The recall of answers for different relaxation thresholds is shown in Figure 1.

[Figure 1 plot: Recall (y-axis) versus queries 1-15 (x-axis) for relaxation thresholds Tsim = 0.6, 0.7 and 0.8.]

Figure 1: Recall of answers for different relaxation thresholds

It can be seen that the recall of answers differs for different relaxation thresholds: the lower the threshold, the higher the recall. Moreover, this experiment also showed that the recall of answers reaches a high level (0.9 on average) when the threshold is 0.6, which can serve as a reference value of the threshold for system designers or users.

Accuracy of Results Ranking. Following the experimental setup described above, we use precision to estimate the accuracy of the flexible query results ranking. Precision is measured by the overlap of the top-10 tuples returned by FQAA and the top-10 tuples the user marked; the higher the overlap, the higher the accuracy of FQAA. The precision of the ranked answers is shown in Figure 2; the average precision of ranked answers for FQAA is 0.72.

[Figure 2 plot: Precision of FQAA (y-axis) versus queries 1-15 (x-axis).]

Figure 2: Precision of ranked answers for FQAA

Summary

In this paper we presented FQAA, an approach for presenting ranked relevant answers to the user. FQAA's contributions include (1) a method for automatically estimating attribute importance for different queries and user preferences and (2) an efficient query relaxation approach for obtaining the relevant answer tuples. The efficiency of FQAA has been demonstrated by the experimental results. For future work, it is interesting to investigate how to expeditiously provide the top-k answer tuples without computing the ranking score of every tuple in the query results.

Acknowledgments. This work is supported by the National Science Foundation for Young Scientists of China (No. 61003162).

References

[1] L. A. Zadeh. Fuzzy sets. Information and Control, 1965, 8(3): 338-353.
[2] V. Tahani. A conceptual framework for fuzzy query processing: a step toward very intelligent database systems. Information Processing & Management, 1977, 13: 289-303.
[3] N. Hachani, H. Ounelli. A knowledge-based approach for database flexible querying. Proceedings of the DEXA Conference, 2006, 420-424.
[4] Z. M. Ma, X. F. Meng. A knowledge-based approach for answering database fuzzy queries. Proceedings of the KES Conference, 2008, 5178: 623-630.
[5] W. Kiessling. Foundations of preferences in database systems. Proceedings of the VLDB Conference, 2002, 311-322.
[6] X. F. Meng and Z. M. Ma. Providing flexible queries over web databases. Proceedings of the KES Conference, 2008, 5178: 601-606.
[7] F. Rabitti. Retrieval of multimedia documents by imprecise query specification. Proceedings of the EDBT Conference, 1990, 416: 202-218.
[8] R. O. Duda, P. E. Hart, D. G. Stork. Pattern Classification. John Wiley & Sons, USA, 2001.
[9] U. Nambiar and S. Kambhampati. Answering imprecise queries over autonomous web databases. Proceedings of the ICDE Conference, 2006, 45-54.
[10] W. Su, J. Wang, and Q. Huang. Query result ranking over e-commerce web databases. Proceedings of the CIKM Conference, 2006, 575-584.
[11] G. Piatetsky-Shapiro, C. Connell. Accurate estimation of the number of tuples satisfying a condition. Proceedings of the SIGMOD Conference, 1984, 256-276.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.555

Study on the Interference Ratio of Right-Turning Vehicles at Signalized Intersections under a Mixed Traffic Environment

Shanshan Lee1,2,a, Dalin Qian1,2,b, Dongmei Lin1,c, Zhaoyong Peng1,d

1 School of Traffic and Transportation, Beijing Jiaotong University, Beijing 100044, China

2 MOE Key Laboratory for Urban Transportation Complex Systems Theory and Technology, Beijing Jiaotong University, Beijing 100044, China

a [email protected], b [email protected], c [email protected], d [email protected]

Key words: traffic engineering, vehicle-bicycle conflict, delay, gap theory, traffic wave

Abstract. The objective of this paper is to describe the interference degree between motor vehicles and bicycles at signalized intersections, expressed as conflict delay. By analyzing the microscopic actions of motor vehicles crossing through the bicycle flow at a typical two-phase signalized intersection, a conflict delay model of the right-turning vehicle is proposed, applying gap acceptance theory and traffic wave theory. The model is verified and compared with the existing conflict delay models. The results show that the proposed model is stable and suitable for calculating the conflict delay of right-turning vehicles under unsaturated conditions. The sensitivity of the conflict delay model with respect to the bicycle flow rate and the width of the bicycle lane is analyzed; it shows that increasing the width of the bicycle lane within limits can effectively reduce the right-turning vehicles' conflict delay when the motor vehicle flow rate is high and the bicycle flow rate varies in a certain range.

Introduction

Mixed traffic is a typical characteristic of urban traffic in China, especially at road intersections, and vehicle-bicycle conflict has a great impact on the intersection level of service. As defined by Yang, the interference ratio can be regarded as one of the intersection level-of-service evaluation indices under mixed traffic, providing a quantitative description of the degree of interference: it is defined as the ratio of the delay induced by vehicle-bicycle interference to the free travel time in the process of driving through a signalized intersection [1]. A protected left-turn signal phase is used at most intersections in domestic cities in order to separate the left-turn traffic flow from the other traffic flows; therefore this paper studies the interference ratio model of right-turning motor vehicles.

This research has drawn much attention from domestic and international scholars, and some achievements have been obtained. Brilon developed delay models of bicycles and pedestrians at two-way stop-controlled intersections based on the conflict technique [2]. Walsh proposed delay models of bicycles and pedestrians with different signal timing plans under different mixed traffic conditions [3]. Liang established a travel time model of right-turning motor vehicles based on queuing theory and gap acceptance theory [4], and also a travel time model of right-turning motor vehicles based on a binary regression model, using the right-turning vehicle flow rate and the straight-moving bicycle flow rate as independent variables. The studies mentioned above have theoretical and practical significance; however, the vehicle flow is regarded as a whole research object in the existing studies, so it is difficult to obtain the average conflict delay of each vehicle. Therefore, on the basis of the existing studies, this paper develops an interference ratio model of right-turning motor vehicles based on gap acceptance theory and traffic wave theory. The effectiveness and stability of the proposed model are validated, and a sensitivity analysis of the model is given at the end of this paper.


Right-turn vehicle conflict analysis

There are two types of conflicts between a right-turning motor vehicle and a conflicting through-moving bicycle: (1) a right-turning motor vehicle conflicting with a moving bicycle from the same approach, on an adjacent lane, during a green/yellow phase; and (2) a right-turning motor vehicle conflicting with an opposing moving bicycle during a red phase. When a right-turning motor vehicle conflicts with a moving bicycle, the motor vehicle needs to wait for an acceptable gap, because the through bicycle has the right of way over the right-turning motor vehicle during the green phase; this causes conflict delay. This paper focuses on the former type and develops a right-turning motor vehicle interference ratio model. When a vehicle goes through the intersection without interruption, its free travel time approaches a definite value which can be obtained through field investigation; therefore, the interference ratio is defined in terms of conflict delay in this paper. The conflict delay defined here is the difference between the travel time actually experienced and the free travel time.

When motor vehicles arrive at the conflict zone, a right-turning motor vehicle adopts different strategies according to the bicycle traffic conditions in the conflict zone on the major street: (1) if there is no bicycle in the conflict zone, the motor vehicle passes through the conflict zone without interruption; (2) if bicycles pass through the conflict zone at the saturation flow rate, the motor vehicle decelerates or stops to wait for the bicycles to clear the conflict zone; and (3) if bicycles arrive at the conflict zone randomly, the motor vehicle decelerates or stops to select a gap in the conflicting bicycle flow and pass through it.

Right-turn motor vehicle conflict delay model

Taking a signal cycle as the unit, the average conflict delay of right-turning motor vehicles can be calculated from the total delay caused during this period by the vehicle-bicycle crossing behavior and the total number of motor vehicles passing through the intersection. The arrival of right-turning motor vehicles during the red phase is not interrupted by bicycles; thus the number of motor vehicles passing through the intersection in the red phase is NR, and they incur no additional time. According to the analysis of the strategies of right-turning motor vehicles passing through the conflict zone, the green phase is divided into three periods:

(1) Period I: bicycles usually preoccupy the conflict zone at the start of the green time, since bicycles start quickly and have priority. Motor vehicles must decelerate or stop until the initial queue of bicycles clears. In this period, the total delay caused by bicycles occupying the conflict zone is T1, and no motor vehicle passes through the conflict zone.

(2) Period II: the initial bicycle queue formed in period I has been cleared, and bicycles then arrive randomly. If no bicycle arrives in the residual green time, the number of motor vehicles passing through the intersection without interruption by the bicycle flow is N2; these motor vehicles incur no additional delay.

(3) Period III: if bicycles arrive randomly in the residual green time, the driver must decelerate or stop to determine both when a gap in the major stream (the bicycle flow) is large enough to permit safe entry and when it is the driver's turn to enter, on the basis of the relative priority of the competing traffic streams. The total additional conflict delay to motor vehicles is T3, and the number of motor vehicles passing through the intersection in period III is N3.

Period I. Bicycles pass through the conflict zone at the saturation flow rate in period I, and right-turning motor vehicles must decelerate or stop to wait for the initial queue of bicycles to pass through the intersection. The driver of the first vehicle in the queue must observe until the initial bicycle queue clears and react to the change by releasing the brake and accelerating through the intersection. The second vehicle in the queue follows a similar process, except that its reaction and acceleration can occur while the first vehicle is beginning to move; the third and fourth vehicles follow similarly. After four vehicles, the effect of the start-up reaction and acceleration has dissipated, and successive vehicles move past the stop line at a steady speed until the last vehicle of the original queue caused by the interfering bicycle flow has passed. The conflict delay in period I is calculated via the following formula.


Writing t_I = v_f·k_0·t_r / (v_m·k_s − v_f·k_0) for the time needed for the initial bicycle queue to clear the conflict zone,

T1 = λ_v·t_I² − (λ_v·t_I)·(λ_v·t_I − 1)/(2λ_v) + N_0·V/(2a)    (1)

where v_m is the high-flow speed of bicycles, m/s; k_s is the maximum density of bicycles at the start of the green time, v/m²; v_f is the free-flow speed of the bicycle flow, m/s; k_0 is the density of the bicycle flow on the roadway, v/m²; λ_v is the right-turning motor vehicle arrival rate; N_0 is 2~3; V is the expected velocity of right-turning motor vehicles; and a is the starting acceleration of a motor vehicle.

Period II. The number of right-turning motor vehicles passing through the intersection in the green time without interference from through bicycles is N2. Right-turning motor vehicles passing through the conflict zone in period II are not interrupted by the through-bicycle flow, so the total conflict delay T2 in this period is 0.

N2 = λ_v·t_g·e^{−λ_B·(t_g − t_I)}    (2)

where λ_B is the bicycle arrival rate.

Period III. The distribution of the time intervals between the arrivals of successive bicycles obeys the negative exponential distribution. The expected number of right-turning motor vehicles passing through one gap in the bicycle stream is given by

N_exp = e^{−λ_B·τ} / (1 − e^{−λ_B·h})

where τ is the critical gap in the bicycle stream that allows the entry of one right-turning motor vehicle, s, and h is the time interval between right-turning motor vehicles using one bicycle gap, s. The probability P_tr that a right-turning motor vehicle passes through the conflict zone is the probability that there is a bicycle arrival and an acceptable gap exists in the bicycle flow. It can be expressed by the following equation:

P_tr = (1 − e^{−[λ_v·t_g/(t_g − t_I)]·(t_g − t_I − t_2)})·e^{−λ_B·τ}

where t_2 = e^{−λ_B·(t_g − t_SB)}·(t_g − t_SB) is the duration of period II, and t_SB is the end time of period I (the clearing time of the initial bicycle queue, so that t_SB = t_I).

The frequency N_τt3 of right-turning vehicles passing through the conflict zone in the analysis period is λ_B·(t_g − t_SB − t_2)·P_tr. So the total number N3 of right-turning motor vehicles passing through the intersection in period III is

N3 = N_exp·N_τt3 = λ_B·(t_g − t_I − t_2)·(1 − e^{−λ_v·t_g·(t_g − t_SB − t_2)/(t_g − t_SB)})·e^{−λ_B·τ} / (1 − e^{−λ_B·h})    (3)

The average delay of a right-turning motor vehicle in period III is calculated using Eq. 4 [5]:

D = D_min·(1 + (γ + ε·x)/(1 − x))    (4)

where γ and ε are constants; since right-turning motor vehicles arrive randomly, γ is approximately 0 and ε approximately 1. x is the v/c ratio, given by x = λ_v·t_g/[(t_g − t_SB)·q_m], where the capacity can be calculated via q_m = λ_B·e^{−λ_B·τ}/(1 − e^{−λ_B·h}). D_min is the Adams delay:

D_min = (1/λ_B)·(e^{λ_B·τ} − λ_B·τ − 1)

The average conflict delay of right-turning motor vehicles in period III is calculated using Eq. 5:


D̄ = (1/λB)·(e^(λB·τ) − λB·τ − 1)·1/{1 − λv·tg/[(tg − vf·k0/(vm·ks − vf·k0)·tr)·qm]}    (5)

The total conflict delay of right-turn motor vehicles in period III is calculated using the formula

T3 = D̄ × N3    (6)

The average delay Td of right-turning motor vehicles is calculated by combining the analyses above. This is done using Eq. 7.

Td = (T1 + T2 + T3)/(N2 + N3 + NR)    (7)
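The period-III delay chain (qm → Dmin → x → D in Eq. 4) is straightforward to evaluate. A minimal sketch using the Table 1 values, with the same hypothetical arrival rates as before and with tSB assumed equal to the initial bicycle-queue clearing time (an assumption, as the paper does not state tSB's value):

```python
import math

# Table 1 values; arrival rates are hypothetical (not from the paper).
vf, k0, vm, ks, tr, tg, tau, h = 4.0, 0.016, 1.5, 0.3, 83.0, 45.0, 2.8, 2.6
lam_v, lam_B = 400 / 3600.0, 200 / 3600.0

t_q = vf * k0 / (vm * ks - vf * k0) * tr   # bicycle-queue clearing time
t_SB = t_q                                 # assumption: tSB taken as this clearing time

# q_m and the Adams delay D_min, then Eq. 4 with gamma = 0, eps = 1.
q_m = lam_B * math.exp(-lam_B * tau) / (1 - math.exp(-lam_B * h))
D_min = (math.exp(lam_B * tau) - lam_B * tau - 1) / lam_B
x = lam_v * tg / (tg - t_SB) / q_m
D = D_min * (1 + x / (1 - x))              # average period-III delay, Eq. 4

print(f"q_m = {q_m:.3f} veh/s, D_min = {D_min:.3f} s, x = {x:.3f}, D = {D:.3f} s")
```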

Model validation and sensitivity analysis
Numerical example condition. Take the southbound approach of the Fuchengmen intersection in Beijing as an example. There is a pedestrian overpass at the signalized intersection, so the vehicles do not suffer interference from pedestrians. The control signal is three-phase with a protected left turn, and there is a bicycle signal consistent with the vehicle signal control. The right-of-way of bicycles is assigned only together with the right-turning motor vehicle movement. The cycle length is 130 s and the amber time is 2 s.
Model validation. For the given conditions, the microscopic behaviors of right-turn motor vehicles and through bicycles from the south approach are reproduced using VISSIM. The traffic volumes of right-turning motor vehicles implemented in this numerical example are varied from 540 veh/h to 860 veh/h with an interval of 40 veh/h. For each right-turn motor vehicle volume, the average conflict delay of right-turn motor vehicles is calculated with the through-bicycle volume varied from 500 unit/h to 1100 unit/h. Data samples which do not meet the basic assumptions are removed. Combining the simulation results, the effectiveness of the proposed model is validated by calculating the average relative error between the proposed model results and the existing model results. Furthermore, the intersection geometry is changed by changing the width of the bicycle lane; meanwhile, the stability of the proposed model is validated by calculating the average relative error and the standard deviation of the delay under different traffic conditions. The parameter values of the theoretical model are shown in Table 1.

Table 1 Parameter values of theoretical model

| Parameter | Value | Parameter | Value | Parameter | Value |
| tr/s | 83 | B/m | 6 | h/s | 2.6 |
| tg/s | 45 | vf (m/s) | 4 | ks (v/m²) | 0.3 |
| a (m/s²) | 0.7 | k0 (v/m²) | 0.016 | τ/s | 2.8 |
| V (m/s) | 2.78 | vm (m/s) | 1.5 |  |  |

Liang introduced two travel time models focusing on right-turn motor vehicles at the intersection [4]. The average relative errors of right-turn motor vehicle conflict delay were computed for the model based on queuing theory, the binary regression model, and the model proposed in this paper. For each flow sample, the average relative errors between the motor-vehicle average conflict delay calculated by the models mentioned above and the VISSIM simulation data are 37.41%, 7.93% and 18.08% respectively. Comparing the simulated and theoretical values of right-turn vehicle conflict delay, the following results are obtained: (1) the conflict delay calculated through the proposed model is less than the VISSIM simulation result; (2) the proposed model fits well overall, and the accuracy requirement is met. In order to validate the stability of the proposed model (model I) and the binary regression model (model II), the bicycle lane width is varied from 3.5 m to 7.5 m in this


paper to change the traffic condition. For each kind of intersection, the mean relative error, the average of the mean relative errors, and the standard deviation for right-turning motor vehicles are calculated. The result is shown in Table 2.

Table 2 Model stability comparison (mean relative error at each bicycle lane width)

| Model | 3.5 m | 4.5 m | 5.5 m | 6.5 m | 7.5 m | Mean value (E) | Standard deviation (σ) |
| Model I | 20.93% | 13.36% | 15.64% | 12.28% | 17.33% | 15.91% | 0.034 |
| Model II | 45.33% | 58.17% | 34.18% | 11.74% | 21.71% | 34.23% | 0.184 |
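The mean and standard deviation columns of Table 2 follow directly from the five per-width errors; a minimal check (using the sample standard deviation of the errors expressed as fractions):

```python
import statistics

# Mean relative errors (%) at bicycle lane widths 3.5-7.5 m, from Table 2.
model_I  = [20.93, 13.36, 15.64, 12.28, 17.33]
model_II = [45.33, 58.17, 34.18, 11.74, 21.71]

for name, errs in [("Model I", model_I), ("Model II", model_II)]:
    mean = statistics.mean(errs)                       # -> 15.91 and 34.23
    sigma = statistics.stdev([e / 100 for e in errs])  # -> 0.034 and 0.184
    print(f"{name}: E = {mean:.2f}%, sigma = {sigma:.3f}")
```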

The mean relative errors calculated using model I and model II are 15.91% and 34.23%, and the standard deviations are 0.034 and 0.184 respectively. The results show that: (1) The average mean relative error obtained using model I is less than that of model II, which reflects that model I is more universally applicable. The reason is that the calculation of conflict delay based on the binary regression model is abstract and vague: although the binary regression model can achieve better prediction accuracy under a certain traffic condition, its stability is poor. In contrast, model I is developed by analyzing the conflict between the right-turning vehicles and the through bicycles at the signalized intersection and by considering the impact of traffic conditions, such as bicycle lane width and signal timing plan, on the motor-vehicle conflict delay; therefore model I has broader applicability. (2) The standard deviation measures the spread of the data about the mean value and is useful for comparing data sets which may have the same mean but a different range. Table 2 shows that the standard deviation of model I is less than that of model II, which indicates that model I is more stable. Sensitivity analysis. The sensitivity analysis of the proposed model is implemented by changing the bicycle volume q and the bicycle lane width B. Relationship between conflict delay and bicycle volume. Fig. 1 illustrates the average conflict delay under different bicycle flows, to analyze the impact of bicycle volume on the conflict delay. The results show that: (1) As the bicycle volume increases, the vehicle conflict delay increases and the slope of the delay curve grows steeper. (2) When the bicycle volume is low, the vehicle volume has less impact on the conflict delay; when the bicycle volume is high, the vehicle volume has a greater impact on the conflict delay.

Fig. 1 Sensitivity of right-turning vehicular conflict delay to through-bicycle volume

Fig. 2 illustrates how the bicycle lane width affects the average vehicle conflict delay when the right-turning motor vehicle volume is low, medium, and high. The results show that: (1) With the increase of vehicle volume, the slope of the curve for a given bicycle lane width increases. The bicycle lane width has less impact on conflict delay when vehicle volume is low; the conflict delay becomes more sensitive to the bicycle lane width, with increasing bicycle volume, when vehicle volume is high. (2) In Fig. 2(b), when q lies between 750 unit/h and 950 unit/h, increasing the bicycle lane width from 3.5 m to 4.5 m decreases the conflict delay effectively, whereas once the bicycle lane width is larger than 4.5 m a further increase in lane width has little impact on the conflict delay. In this paper, points with the characteristics mentioned above are defined as (q = 750~950, B = 4.5). The other such points observed in Fig. 2(b) are (q = 950~1100, B = 5.5), (q = 1100~1200, B = 6.5), (q = 1200~1300, B = 7.5).


(a) λv = 340 veh/h (b) λv = 540 veh/h (c) λv = 740 veh/h
Fig. 2 Relationship between the conflict delay and lane width for a series of motor-vehicle flows

Summary
(1) An alternative model for calculating motor-vehicle conflict delay is established by analyzing the conflict between the right-turning vehicles and the through bicycles at the signalized intersection and by considering the impact of traffic conditions, such as bicycle lane width and signal timing plan, on the motor-vehicle conflict delay. The calculation of the conflict ratio at a signalized intersection under mixed traffic conditions is given.
(2) At a typical two-phase signalized intersection, the larger the through-bicycle volume, the greater the conflict ratio; and the higher the vehicle volume, the greater the growth rate of the conflict ratio. When the vehicle volume is high, the conflict ratio of the intersection increases significantly as the vehicle volume increases; when the vehicle volume is low, the conflict ratio shows a much smaller increase with increasing vehicle volume.
(3) When the right-turning motor vehicle volume is low, increasing the bicycle lane width has little impact on the conflict ratio in general. When the vehicle volume is high, for a fixed vehicle volume and a bicycle volume within a certain range, a limited increase of the bicycle lane width can reduce the conflict ratio effectively; with a further increase of the bicycle lane width, however, the lane width has less impact. So there are only certain conditions under which widening the bicycle lane decreases the conflict ratio.

Acknowledgments
This research was supported by the National Natural Science Foundation of China (No. 70972041).

References
[1] Yang Lu-ping. Study on signalized intersection service level classification in mixed traffic conditions[D]. Beijing Jiaotong University, 2008.
[2] Brilon W, Miltner T. Capacity and delays at intersections without traffic signals[C]. The 84th Annual Meeting of the Transportation Research Board (CD-ROM), Washington, D.C., 2005.
[3] Smith Jr R L, Walsh T. Safety impacts of bicycle lanes[J]. Transportation Research Record, 1988, 1168: 49-56.
[4] Liang Chun-yan, Wang Chun-guang, Shen Zhan, et al. Calculation method of travel time of right-turning vehicle at motor- and nonmotor-vehicle mixed traffic intersection[J]. Journal of Jilin University (Engineering and Technology Edition), 2007, 37(5): 1053-1057.
[5] Wang Dian-hai. Traffic Flow Theory[M]. Beijing: China Communications Press, 2002.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.561

A New Acoustic Emission Source Location Method Based on the Linear Layout of Sensors

Zhong-ning ZHANG1,2,a, Jian TIAN1,b*

1 School of Mechanical Engineering, Shenyang University of Chemical Technology, Shenyang 110142, China
2 School of Mechanic Engineering and Automation, Northeast University, Shenyang 110004, China

a [email protected], b [email protected]

Key words: time difference method; acoustic emission; location method.

Abstract. For the problem of acoustic emission source location, finding a simple and convenient layout of sensors has always been an important issue. This paper presents a new time difference method for locating an acoustic emission source in a plate, in which the locating sensors are arranged in a straight line and which does not require pre-determination of the acoustic wave propagation velocity. The method simplifies the acoustic emission source locating task.

Introduction
Acoustic emission may be defined as the stress or pressure waves generated during dynamic processes in materials. Acoustic emission sources may be formed by defects or dislocations which generate, develop, and propagate in the surrounding area, releasing energy as elastic waves within solid materials. Since the 1940s, acoustic emission technology has been widely applied to studies of cracking processes in laboratory specimens and structures, fracture studies, fatigue studies, diagnosis of machinery faults, analysis of grease lubrication, source location problems and so on [1-9]. Many acoustic emission source location methods have been developed for the different acoustic emission signal types. Two location methods are commonly used in engineering for sudden acoustic emission signals: the region location method and the time difference location method. Generally speaking, the time difference location method requires pre-determination of the acoustic wave velocity using an analogous acoustic emission source. Because the analogous acoustic emission source cannot be identical to the actual acoustic emission source, the pre-determined acoustic wave velocity cannot equal the actual wave velocity, and this results in errors in the calculated source locations. In this paper a method for locating a plane acoustic emission source is proposed in which the sensors are set in a straight line and the pre-determination of the acoustic wave velocity is not needed. The method not only simplifies the acoustic emission source locating task but also eliminates the velocity error caused by pre-determining the wave velocity with an analogous source.

Presentation of the Proposed Method
The details of the proposed method are as follows. First of all, four positioning sensors are set up linearly in the neighborhood of the acoustic emission source location in the plate to be inspected, and the distances among the four sensors are recorded. In order to finally determine the direction of the acoustic emission source location, a directional sensor is arranged in the vicinity of any of the four positioning sensors. Secondly, the acoustic emission signals at these sensors are monitored and the time differences of the signals between the sensors are recorded. Then, a two-dimensional Cartesian coordinate system is established with its origin at the acoustic emission source location, and the corresponding formulas are derived based on this system.


Finally, the corresponding parameters of the above formulas are replaced with the measured distances and time differences, and the coordinates of the acoustic emission source are calculated. According to the time difference information of the directional sensor, the exact location of the acoustic emission source is then determined. In the sensor layout of Fig. 1, the origin of the Cartesian coordinate system is the acoustic emission source location, the Y axis is parallel to line AD, and the four positioning acoustic emission sensors are arranged on line AD. The coordinates of the four sensors are A(x, y), B(x, y + l21), C(x, y + l31) and D(x, y + l41). Let OA = r, OB = r + r21, OC = r + r31 and OD = r + r41. The time differences of the signals travelling from the acoustic emission source O(0, 0), the origin of the coordinate system, to positioning sensors B, C and D relative to positioning sensor A are t21, t31 and t41 respectively. If the acoustic emission signal propagation velocity is v, then r21 = v·t21, r31 = v·t31 and r41 = v·t41. Let M(x, 0) be the intersection of line AD and the X axis; then triangles MOA, MOB, MOC and MOD are all right triangles. Therefore

x² + y² = r²,    (1)
x² + (y + l21)² = (r + v·t21)²,    (2)
x² + (y + l31)² = (r + v·t31)²,    (3)
x² + (y + l41)² = (r + v·t41)².    (4)

Subtraction of Eq. (1) from Eq. (2), Eq. (3) and Eq. (4) respectively yields

2l21·y + l21² = 2r·v·t21 + t21²·v²,    (5)
2l31·y + l31² = 2r·v·t31 + t31²·v²,    (6)
2l41·y + l41² = 2r·v·t41 + t41²·v².    (7)

From Eq. (5), we get

2rv = (2l21·y + l21² − t21²·v²)/t21,    (8)

Substituting Eq. (8) into Eq. (6) and Eq. (7) yields

2l31·y + l31² = (t31/t21)·(2l21·y + l21²) − t21·t31·v² + t31²·v²,    (9)
2l41·y + l41² = (t41/t21)·(2l21·y + l21²) − t21·t41·v² + t41²·v².    (10)

From Eq. (9), we get

v² = [t21·(2l31·y + l31²) − t31·(2l21·y + l21²)]/[t21·(t31² − t21·t31)],    (11)

Substituting Eq. (11) into Eq. (10) yields

y = [t41·(t41 − t21)·(t31·l21² − t21·l31²) − t31·(t31 − t21)·(t41·l21² − t21·l41²)] / {2·[t41·(t41 − t21)·(t21·l31 − t31·l21) + t31·(t31 − t21)·(t41·l21 − t21·l41)]},    (12)


Let

ξ = t41·(t41 − t21)·(t31·l21² − t21·l31²) − t31·(t31 − t21)·(t41·l21² − t21·l41²),    (13)
η = t41·(t41 − t21)·(t21·l31 − t31·l21) + t31·(t31 − t21)·(t41·l21 − t21·l41),    (14)

Then

y = ξ/(2η).    (15)

Substituting Eq. (15) into Eq. (11) yields

v² = [(t21·l31 − t31·l21)·ξ + (t21·l31² − t31·l21²)·η]/[t21·t31·(t31 − t21)·η],    (16)

v = {[(t21·l31 − t31·l21)·ξ + (t21·l31² − t31·l21²)·η]/[t21·t31·(t31 − t21)·η]}^(1/2).    (17)

Substituting Eq. (17) and Eq. (15) into Eq. (8) yields

r = [(ξ + l21·η)·l21 − η·t21²·v²]/(2η·t21·v),    (18)

Substituting Eq. (18) and Eq. (15) into Eq. (1) yields

x = ±{[(ξ + l21·η)·l21 − η·t21²·v²]² − t21²·ξ²·v²}^(1/2)/(2·t21·η·v).    (19)

(Figure: positioning sensors A(x, y), B(x, y + l21), C(x, y + l31) and D(x, y + l41) on line AD, directional sensor A1(x1, y) beside A, source O(0, 0) at the origin, and M(x, 0) on the X axis; OA = r.)

Fig. 1 The sensor layout

The acoustic emission source location can then be obtained by using any two of the three quantities x, y and r. One question remains unsolved: the relative position of the acoustic emission source with respect to line AD, since the source could be on either side of the line. With the help of the sign of the x value, this relative position problem can easily be solved. The sign of the x value (positive or negative) is determined from the time difference information of the directional sensor, which is placed to one side of any of the four positioning sensors, at an adequate distance and with the same Y coordinate. For example, add a directional sensor at point A1(x1, y). If the sensor at A1 receives the signal from the acoustic emission source earlier than the one at A does, the sign of the x value is negative, which means the acoustic emission source is closer to point A1, i.e., the source lies on the A1 side of line AD, mirrored about the line with respect to the computed origin point. If the directional sensor at A1 receives the signal later than the one at A, the sign of the x value is positive, and the computed origin point is the acoustic emission source location.
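The closed-form solution of Eqs. (13)-(19) is straightforward to implement. The sketch below follows those formulas directly (|x| is taken from Eq. (1) as x² = r² − y², which is equivalent to Eq. (19)); the sign of x must still be resolved with the directional sensor as described above:

```python
import math

def locate_ae_source(l21, l31, l41, t21, t31, t41):
    """Locate a planar AE source from a linear four-sensor array.

    l21, l31, l41: distances (m) from sensor A to sensors B, C, D on line AD.
    t21, t31, t41: arrival-time differences (s) of B, C, D relative to A.
    Returns (|x|, y, v, r): coordinates of sensor A relative to the source,
    the wave velocity v and the source-to-A distance r, per Eqs. (13)-(19).
    """
    # Eq. (13) and Eq. (14)
    xi = t41 * (t41 - t21) * (t31 * l21**2 - t21 * l31**2) \
       - t31 * (t31 - t21) * (t41 * l21**2 - t21 * l41**2)
    eta = t41 * (t41 - t21) * (t21 * l31 - t31 * l21) \
        + t31 * (t31 - t21) * (t41 * l21 - t21 * l41)
    y = xi / (2 * eta)                                   # Eq. (15)
    # Eq. (17): wave velocity, obtained with no pre-calibration
    v = math.sqrt(((t21 * l31 - t31 * l21) * xi
                   + (t21 * l31**2 - t31 * l21**2) * eta)
                  / (t21 * t31 * (t31 - t21) * eta))
    # Eq. (18): distance r from the source to sensor A
    r = ((xi + l21 * eta) * l21 - eta * t21**2 * v**2) / (2 * eta * t21 * v)
    x = math.sqrt(max(r**2 - y**2, 0.0))                 # |x| from Eq. (1)
    return x, y, v, r
```

As a quick sanity check, one can generate synthetic arrival-time differences from a known source position and wave velocity and verify that the function recovers x, y and v.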


Conclusions
The merits of the proposed location method are as follows: 1. The geometry of the sensor arrangement is simple. 2. The formulas for solving the acoustic emission source location are concise, which means the acoustic emission source locating task can be simplified. 3. The method does not need pre-determination of the acoustic wave velocity with an analogous acoustic emission source.

Acknowledgements
Address correspondence to Jian TIAN, School of Mechanical Engineering, Shenyang University of Chemical Technology, Shenyang 110142, China, email: [email protected].

References
[1] J. Baram, M. Rosen, Fatigue life prediction by distribution analysis of acoustic emission signals, J. Mater. Sci. Eng. 41(1), 1979, pp. 25-30.
[2] H.B. Teoh, K. Ono, Fracture induced acoustic emission during slow bend tests of A533B steel, J. Acoust. Emiss. 6(1), 1987, pp. 1-12.
[3] C.R.L. Murthy, B. Dattaguru, A.K. Rao, Application of pattern recognition concepts to acoustic emission signal analysis, J. Acoust. Emiss. 6(1), 1987, pp. 19-28.
[4] D.J. Buttle, C.B. Scruby, Characterization of fatigue of aluminum alloys by acoustic emission, Part I – identification of source mechanism, J. Acoust. Emiss. 9(1990), pp. 243-254.
[5] D.J. Buttle, C.B. Scruby, Characterization of fatigue of aluminum alloys by acoustic emission, Part II – discrimination between primary and other emissions, J. Acoust. Emiss. 9(1990), pp. 255-270.
[6] M.H. El Ghamry, R.L. Reuben, J.A. Steel, The development of automated pattern recognition and statistical feature isolation techniques for the diagnosis of reciprocating machinery faults using acoustic emission, J. Mech. Syst. Signal Process. 17(4), 2003, pp. 805-823.
[7] T.M. Roberts, M. Talebzadeh, Acoustic emission monitoring of fatigue crack propagation, J. Constr. Steel Res. 59(6), 2003, pp. 695-712.
[8] L.D. Hall, D. Mba, Acoustic emissions diagnosis of rotor-stator rubs using the KS statistic, J. Mech. Syst. Signal Process. 18(4), 2004, pp. 849-868.
[9] P. Nivesrangsan, J.A. Steel, R.L. Reuben, Source location of acoustic emission in diesel engines, J. Mech. Syst. Signal Process. 21(2), 2007, pp. 1103-1114.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.565

Comfort and Energy-Saving Intelligent Shutter

Su CHENa, Dongxing WANGb

School of Mechatronics and Automobile Engineering, Yantai University, Yantai 264005, China

a [email protected], b [email protected]

Key words: shutter, intelligent, energy-saving, comfort, photosensitive resistor

Abstract. Most currently used shutters are manually operated. The design of an intelligent shutter is proposed. The intelligent shutter can be powered by a solar battery. Photosensitive resistors are used to determine whether it is daytime or nighttime, whether it is sunny, and whether the indoor light is turned on or off. Digital temperature sensors are used to detect the indoor temperature and the outside temperature; they are also used to determine the current season. The intelligent shutter is automatically controlled according to the above information. It is turned off at night and is set in sleep mode to save energy. It is turned on partially on sunny days in summer. On rainy days, the shutter is turned off while the indoor light is on. The intelligent shutter can also be controlled using a wireless remote controller, which makes it very user-friendly. The intelligent shutter is comfortable and energy-saving. Experiments have demonstrated the applicability of the design.

Introduction
With the development of technology, more and more intelligent household appliances have been developed, and life is becoming more and more comfortable and convenient. At present, most shutters are manually operated. This is very inconvenient, especially for large shutters such as those used in villas, compound apartments and office buildings, which are both heavy and large; operating them is extremely laborious [1, 2]. There are already some electrically controlled shutters with simple control functions [3], and some of them can measure temperature and time [4]. However, they are simply turned off in the daytime and turned on at night, regardless of weather and season [5, 6]; they are neither user-friendly enough nor intelligent enough. In addition, currently used shutters are powered by alternating current and are not energy-saving [6]. The design of an intelligent shutter controlled by a high-performance microprocessor is proposed, which can solve the aforementioned problems.

Design of an Intelligent Shutter
The Structure of the Intelligent Shutter. The shutter is powered by solar energy, which is energy-saving and environmentally friendly. In an emergency, it can also be powered by alternating current. It distinguishes between day and night automatically. In the evening, the shutter is turned on automatically and set in sleep mode to save energy. In the daytime, the shutter needs to recognize whether it is sunny. When it is not sunny, the shutter needs to recognize whether the indoor light is on or off: if the light is turned on, the shutter is turned on automatically; otherwise, the shutter is turned off to keep the room bright. On a sunny day in summer the shutter needs to measure temperature: half of the shutter is turned on to block the light when the room temperature is too high, thus decreasing the room temperature. The shutter is turned off completely in other seasons. A remote control circuit is designed, which makes manual control of the shutter possible in some special circumstances and increases the practicability of the shutter. To realize these functions, the intelligent shutter should be controlled by a microprocessor. As shown in Fig. 1, the intelligent shutter mainly consists of the following functional modules: power module, main single-chip microcomputer module, keyboard and display module, light intensity measurement module, clock module, temperature measurement module, motor control module, and infrared remote control module.


Fig. 1. Structure of the intelligent shutter

The Main Single-Chip Microcomputer Module. The core of the module is a T89C51CC01, which has many on-chip resources, can be programmed in-system (ISP), and is a capable single-chip microcomputer with a high performance-price ratio. The watchdog circuit is a MAX813, which resets the system when it is just powered up or when program execution is abnormal.
The Keyboard and Display Module. A ZLG7289 is adopted in the keyboard and display circuit of the shutter. The chip can drive 7-segment numeric LED displays of up to 8 digits and 64 keys. It communicates with the main single-chip microcomputer over a serial peripheral interface, which takes only three I/O pins of the computer. There are eight 7-segment numeric LED displays and six keys. The numeric LEDs display the status of the shutter, including its state, the times for turning on and turning off, and the temperature of the environment; the six function keys are used for configuring the intelligent shutter.
The Temperature Measurement Module. The temperature sensor in the module is a DS18B20, a high-performance 1-wire digital temperature sensor which requires only one port pin for communication. Each sensor has a unique 64-bit serial code stored in an onboard ROM. The power supply range is 3.0 V to 5.5 V, and the resolution is user-selectable from 9 to 12 bits. The DS18B20 in this system is powered by an external supply, the resolution is 9 bits, and its data line is connected to P1.2 of the microcomputer.
The Clock Module. A trickle-charge real-time clock chip, DS1302, is used in the module; it contains a real-time clock/calendar and 31 bytes of static RAM and communicates with a microprocessor via a simple serial interface. The real-time clock/calendar provides seconds, minutes, hours, day, date, month, and year; the end-of-month date is automatically adjusted for months with fewer than 31 days, including corrections for leap years. Interfacing the DS1302 with a microprocessor uses simple synchronous serial communication, and data can be transferred to and from the clock/RAM one byte at a time or in a burst of up to 31 bytes.
The Infrared Remote Control Module. Infrared remote control is an effective and universally employed method for remotely controlling household electric appliances, with merits such as fine stability, high reliability, good directionality, and no interference with other household electric appliances. So, if there are many shutters in one house, only one remote controller is needed. There are six function keys on the remote controller: (1) turn the system on/off, (2) automatic/manual, (3) turn on the shutter, (4) turn off the shutter, (5) rotate the shutter boards clockwise, (6) rotate the shutter boards counter-clockwise.
The Motor Control Module. In accordance with the requirements of the shutter, four motors are used. Two motors play the role of turning the shutter on/off and rotating the shutter boards clockwise/counter-clockwise; the two other motors adjust the orientation of the solar battery in both the horizontal and the vertical direction. Turning the shutter on or off is driven by DC motors with reduction gears, which are controlled by the dedicated IC TA8050P. The solar battery is driven by two step motors. The circuit for driving a step motor is realized with a full-bridge power amplifier LMD18245, a dedicated chip for driving a brushed DC motor or a bipolar step motor. The LMD18245 has a 4-bit D/A converter to control the motor current, so the step motor can run in one-phase, two-phase, or micro-step mode by programming instead of changing the hardware, which gives the driving circuit good extensibility.


The Light Intensity Measurement Module. As shown in Fig. 2, the module consists of an A/D converter and a resistor array. RP1 to RP6 are photosensitive resistors, which have different resistance values under different light intensities: when the light intensity is high, the resistance is low; at low light intensity, the resistance increases. In the light intensity measurement module, fixed resistors and photosensitive resistors are used to divide a fixed voltage. RT1 to RT6 are variable resistors with a maximum value of 51 kΩ; they are connected with the photosensitive resistors in cascade to obtain a voltage proportional to the light intensity. The LM324 contains four integrated operational amplifiers, connected as voltage followers to realize impedance matching. The TLC2543 is an analog-to-digital converter for converting the analog voltage signal to a digital signal. The module measures both the indoor light intensity and the outside light intensity: RP1 measures the indoor light intensity, and RP2 to RP6 measure the outside light intensity and the direction of the sunlight. According to the detected light intensity values, the system determines whether it is daytime or night, whether it is sunny, and whether the indoor light is on. The microprocessor reads the light intensity every minute. If the light intensities detected by RP2 to RP6 are all at the lowest level, it is night; otherwise, it is daytime. If the light intensities detected by RP2 to RP6 are equal to one another in the daytime, the weather is regarded as cloudy; if they are unequal to each other, the resistor with the smallest resistance indicates the direction of the sunlight. When it is cloudy, the indoor light intensity is measured to determine whether the indoor light is on or off: if the indoor light intensity suddenly becomes high, the light has been turned on.

Fig. 2. Schematic of the light intensity measurement module

The Power Module. The shutter is mainly powered by a solar battery, but when it is cloudy for a relatively long period the solar panels cannot work as usual, so the shutter must also be capable of being powered by alternating current; the alternating current power is only a standby supply. A 12 V DC power supply is produced to power the DC motors and step motors, and a DC-DC converter converts the 12 V DC supply to a 5 V DC supply to power the digital system.

Program Design
According to the working process of the shutter, the program is designed in a modular structure. The program consists of the following modules: power system management, remote control, keyboard and display, light intensity measurement, and motor control. The flow chart of the program is shown in Fig. 3. The working process is as follows. First, the battery energy is detected. If the voltage is lower than the threshold value, the alternating current power supply is switched in to power the shutter and a low-battery warning is signaled by a flashing LED. If the supply voltage is normal, the program checks whether it is in manual operation mode. In manual mode, the system recognizes remote key strokes and executes the corresponding commands. In automatic mode, the control system acquires the light intensity information, analyzes it, and then controls the shutter to perform the corresponding operation.
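The decision logic described above (day/night from RP2–RP6, sun direction from the brightest channel, cloudy-day light detection from RP1) reduces to simple threshold tests. A minimal Python sketch of that logic; the threshold constants and the reading format are illustrative assumptions, not values from the paper:

```python
NIGHT_LEVEL = 50        # hypothetical ADC counts below which a channel reads "dark"
EQUAL_TOLERANCE = 30    # hypothetical tolerance for "all outdoor channels equal"

def classify(outdoor, indoor_now, indoor_prev):
    """Classify the lighting situation from ADC readings.

    outdoor: list of five readings from RP2..RP6; indoor_*: readings from RP1.
    Returns 'night', 'sunny', 'cloudy_light_on' or 'cloudy_light_off'.
    """
    if all(v <= NIGHT_LEVEL for v in outdoor):
        return "night"                               # all outdoor channels dark
    if max(outdoor) - min(outdoor) > EQUAL_TOLERANCE:
        return "sunny"                               # brightest channel gives sun direction
    # Outdoor channels roughly equal -> cloudy; check for a sudden indoor jump.
    if indoor_now - indoor_prev > EQUAL_TOLERANCE:
        return "cloudy_light_on"
    return "cloudy_light_off"
```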


Fig. 3. Flow chart of the program

Experiments and Conclusions
An experimental intelligent shutter has been developed, and experiments with various operations have demonstrated the applicability of the design. The intelligent shutter based on the proposed design can be powered by a solar battery. Photosensitive resistors are used to determine whether it is daytime or nighttime, whether it is sunny, and whether the indoor light is turned on or off. Digital temperature sensors detect the indoor and outside temperatures and are also used to determine the current season. The intelligent shutter is automatically controlled according to the above information: it is turned off at night and set in sleep mode to save energy; it is turned on partially on sunny days in summer; on rainy days, the shutter is turned off while the indoor light is on. The intelligent shutter can also be controlled using a wireless remote controller, which makes it very user-friendly. The intelligent shutter is comfortable and energy-saving.

References
[1] F. B. WANG, K. J. WU, Z. JIANG: Industrial Instrumentation & Automation, Vol. 4 (2009), p. 31.
[2] L. M. CHEN, F. F. PENG: Neijiang Technology, Vol. 8 (2008), p. 112.
[3] C. L. GUO: Shanxi Electric Technology, Vol. 6 (2006), p. 52.
[4] Y. SUN, W. Y. YANG, Y. X. ZHAO: Microcomputer & Its Applications, Vol. 13 (2010).
[5] C. W. WANG, C. L. LIU, W. L. JIANG, C. Y. LIU: Journal of Jilin Normal University (Natural Science Edition), Vol. 2 (2010), p. 93.
[6] L. CHEN, Y. WANG, W. W. ZHANG: Science and Technology Information Development and Economy, Vol. 20 (2007).

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.569

Image Registration Based on MI and PSO Algorithm

Junhong Xu1,a,2, Jin Li1,b, Yanwei Wang1,c, Hong Liang1,d, Dongchao Tian1,e, Nan Zhang1,f, Zhiyuan Wang1,g, Wang Cong1,h

1 College of Automation, Harbin Engineering University, Harbin, Heilongjiang, China
2 North China Institute of Water Conservancy and Hydroelectric Power, Zhengzhou, Henan, China

a [email protected], b [email protected], c [email protected], d lh@hrbeu.edu.cn, e [email protected], f [email protected], g 358875409@qq.com, h [email protected]

Key words: PSO; POWELL; Mutual Information; Image Registration

ABSTRACT. To improve the performance of image registration technology, a new method based on mutual information and PSO (particle swarm optimization) is proposed in this paper. The MI (mutual information) criterion is applied to image registration, and the PSO algorithm is used to find the maximum MI. Comparing the PSO and POWELL algorithms, the results show that the PSO algorithm performs fairly well compared with the traditional algorithm.

Introduction
Image registration is the process of overlaying two or more images of the same scene taken at different times, from different viewpoints, and by different sensors. Image registration is a crucial step in all image analysis tasks in which the final information is gained from the combination of various data sources, as in image fusion, change detection, and multi-channel image restoration. Applying maximum mutual information to image registration has recently been a hot topic, because it requires no assumption about the relationship between the gray levels of images from different modalities, and requires no preprocessing or image segmentation. However, image registration based on mutual information has some problems: the computation is slow, the processing time is long, and the results may be wrong to some extent. Therefore, in this paper we propose an image registration method based on PSO and MI to improve the accuracy of image registration processing. Experimental results show that the proposed algorithm is effective and compares favorably with existing techniques. Once the registration measure between two images is defined, the image registration problem is transformed into an optimization problem. The optimization should consider two aspects: one is global optimality, the other is optimization speed. In this field, LIU Li and SU Min from Sichuan University proposed a new method based on wavelet transformation and mutual information [1]. Sangeetha Somayajula, Evren Asma and Richard M. Leahy propose a non-parametric method for incorporating information from co-registered anatomical images into PET image reconstruction through priors based on mutual information [2]. Kohei Yamasaki and Tomoaki Ohtsuki study MI used in wireless sensor networks [3].

Image Registration based on mutual information
MI is the shortened form of mutual information, a basic concept from information theory that measures the amount of information one image contains about the other. Entropy, proposed by Shannon, is used to measure the useful information of a source. Assume that source A sends N messages comprising n different messages, and that message i (i = 1, 2, …, n) is repeated hi times; then hi/N is the repetition frequency of each output message, so with probability pi in place of hi/N, the average information, namely the entropy, is:


H(A) = −Σ(i=1..n) pi·log pi    (1)

Therefore the entropy represents the complexity or uncertainty of the system. Let the N feature vectors extracted from the functional and anatomical images be represented as Ai and Bi, respectively, for i = 1 to N. These can be considered as realizations of the random feature vectors A and B. Mutual information I(A,B) is defined as:

I(A,B) = H(A) + H(B) − H(A,B) = H(A) − H(A|B) = H(B) − H(B|A)    (2)

where H(A) and H(B) are the entropies of the random variables A and B, H(A,B) is their joint entropy, and H(A|B) and H(B|A) are the conditional entropies of A given B and of B given A. H(A|B) is the following:

H(A|B) = −Σ(a,b) PAB(a,b)·log PAB(a|b)    (3)

The joint entropy H(A,B) is the mutual statistical quantity of the random variables A and B, where PA(a) and PB(b) are the probability distributions of A and B and PAB(a,b) is the joint probability distribution. The joint entropy H(A,B) is obtained as:

H(A,B) = −Σ(a,b) PAB(a,b)·log PAB(a,b)    (4)

Starting from a reference image A and a floating image B, with intensities a and b, the mutual information I(A,B) is calculated from the joint and marginal probabilities:

I(A,B) = Σ(a,b) PAB(a,b)·log[PAB(a,b)/(PA(a)·PB(b))]    (5)

Because the mutual information is sensitive to the variability of the image overlap, mutual information image registration seeks the spatial transform under which the mutual information of the two images reaches its maximum.
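Eq. (5) can be estimated directly from a joint gray-level histogram of the two images. A minimal NumPy sketch (the bin count is an arbitrary choice):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Estimate I(A,B) of Eq. (5) from the joint histogram of two images."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = hist / hist.sum()                  # joint probability P_AB(a, b)
    p_a = p_ab.sum(axis=1, keepdims=True)     # marginal P_A(a)
    p_b = p_ab.sum(axis=0, keepdims=True)     # marginal P_B(b)
    nz = p_ab > 0                             # skip zero cells to avoid log(0)
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))
```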

PSO Algorithm
MI (mutual information) is widely used as a registration measure, but maximizing it takes a long time and a large amount of memory. In practice we are mainly concerned with the processing time and the accuracy of the results; therefore it is worthwhile to apply PSO to improve them. Many researchers have studied this method: Zeng Guanghui and Jiang Yuewen researched a modified PSO algorithm [4]; Elloumi W and Rokbani N studied ants supervised by PSO [5]; Wei Jing and Hai Zhao produced an optimized particle filter based on the PSO algorithm [6].
The principle of PSO. In computer science, particle swarm optimization (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. PSO is inspired by studies of various animal groups and has been proven to be a powerful competitor to other evolutionary algorithms such as genetic algorithms [7]. The PSO algorithm is a population-based stochastic recursion procedure, which simulates the social behavior of a flock of birds or a school of fish. A basic variant of the PSO algorithm works with a population (called a swarm) of candidate solutions (called particles); it simulates social behavior among individuals (particles) "flying" through a multidimensional search space, where each particle represents a point at the intersection


of all search dimensions. These particles are moved around in the search space according to a few simple formulae. The movements of the particles are guided by their own best known position in the search space as well as the entire swarm's best position, and the particles use those memories to adjust their own velocities and positions. When improved positions are discovered, these then come to guide the movements of the swarm. The process is repeated, and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered. Motivated by this, a second-order model is proposed to represent the traditional dynamics of particles.
PSO Algorithm. Let m denote the size of the swarm in a D-dimensional search space, where D is the dimension of the solution space. The current position of particle i is denoted by zi (i = 1, …, m). The value of zi is computed according to the given fitness function, which evaluates the quality of the particle's position. vi is the velocity of particle i, pi is the best position found so far by particle i, and pg is the best position among all particles. Each particle is updated by Eqs. 6 and 7 in every iteration:

v_id^(k+1) = v_id^k + c1·r1·(p_id − z_id^k) + c2·r2·(p_gd − z_id^k)    (6)
z_id^(k+1) = z_id^k + v_id^(k+1)    (7)

where i = 1, 2, …, m, d = 1, 2, …, D, k is the iteration index, and r1 and r2 are random numbers uniformly distributed between 0 and 1; these two parameters maintain the diversity of the swarm. The learning factors c1 and c2, also called acceleration factors, give the particle the capability of self-learning and of learning from the best particles. If they are adjusted properly, the local-minimum problem is reduced and a better convergence speed is obtained.
POWELL Algorithm. The POWELL algorithm develops conjugate directions using only one-dimensional searches at each iteration. If x1 and x2 are two points generated by one-dimensional searches along the same vector V, then the difference x1 − x2 is conjugate to V. The standard POWELL algorithm keeps a set of linearly independent vectors (from x1 to xn) and minimizes the objective function min(x) along them. All in all, POWELL is good at local optimization but poor at global optimization.
Image Registration Algorithm based on MI and PSO. PSO is used with the MI criterion to find the maximum MI among the particles, and image registration is accomplished by this method. Fig. 1 describes the flow of the algorithm (input images I and II; variable initialization; transform image I; compute mutual information; optimize; loop until the maximum MI is reached). The first step is to input two images: one is the reference image, the other is the floating image. A uniform coordinate series is then assigned to define the image space transformation. As Fig. 1 shows, initialization is very important for PSO in order to restrain the maximum velocity, because there is no other mechanism to restrain the particle speed. Let vmax denote the maximum velocity and vmin the minimum velocity, let the positions zi range from zmin to zmax, let the threshold value be ε, and let the maximum number of iterations be Nmax. Every fitness value Di^k is tested and compared with the historical optimum; the best optimum value Dg(k) is stored together with the corresponding position Pg(k). If Dg(k) > ε, the loop continues; otherwise it exits. vi^k and zi^k are updated and k = k + 1; if k > Nmax, the result is output.

Fig. 1 Algorithm Flow Chart
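A global-best PSO search over the registration parameters (dx, dy, angle) can be sketched as follows, reusing the mutual_information helper from the earlier sketch. An inertia weight w is included, as is common in PSO variants; the swarm size, search bounds and coefficients are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import rotate, shift

def register_pso(ref, flt, n_particles=20, iters=50,
                 w=0.73, c1=1.5, c2=1.5, seed=0):
    """Search (dx, dy, angle) maximizing MI(ref, transformed flt) with PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array([-20, -20, -60.0]), np.array([20, 20, 60.0])
    z = rng.uniform(lo, hi, (n_particles, 3))          # particle positions
    v = np.zeros_like(z)                               # particle velocities

    def fitness(p):
        moved = shift(rotate(flt, p[2], reshape=False), p[:2])
        return mutual_information(ref, moved)          # helper from the MI sketch

    pbest = z.copy()
    pbest_f = np.array([fitness(p) for p in z])
    g = pbest[pbest_f.argmax()].copy()                 # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
        v = w * v + c1 * r1 * (pbest - z) + c2 * r2 * (g - z)   # Eq. (6)
        z = np.clip(z + v, lo, hi)                               # Eq. (7)
        f = np.array([fitness(p) for p in z])
        better = f > pbest_f
        pbest[better], pbest_f[better] = z[better], f[better]
        g = pbest[pbest_f.argmax()].copy()
    return g, pbest_f.max()
```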


Experiments and Conclusion
We tested several groups of images. The two images in the first group are 107×107 pixels: Fig. 2a is the reference image, and Fig. 2b, a rotated version, is the floating image. The POWELL algorithm finds the peak point at x = 20, y = 14 with an angle of +61, and the MI value is 1.1304, while the PSO algorithm finds the peak point (20, 20) with an angle of +49 and an MI value of 1.1022; this value is the average of 26 tests.

(a) Input Image I (b) Input Image II (c) PSO (d) POWELL
Fig. 2 The results of the first group

In the second group, the two images are 190×190 pixels: Fig. 3a is the reference image, and Fig. 3b, a distorted version, is the floating image.

(a) Input Image I (b) Input Image II (c) PSO (d) POWELL
Fig. 3 The results of the second group

The POWELL algorithm finds the peak point (1, −14) with an angle of 3 and an MI value of 0.7655, and the PSO algorithm finds the peak point (7, −12) with an angle of 20 and an MI value of 0.7706; this value is the average of 32 tests.

(a) Input Image I (b) Input Image II (c) PSO (d) POWELL
Fig. 4 The results of the third group

The two images in the third group are 150×150 pixels: Fig. 4a is the reference image, and Fig. 4b, a deformed version, is the floating image. The POWELL algorithm finds the peak point (0, 6) with an angle of 0 and an MI value of 0.5918, and the PSO algorithm finds the peak point (9, 12) with an angle of 0 and an MI value of 0.5933; this value is the average of 36 tests.

Table 1 Values of PSO and POWELL algorithm

| Test group | MI: PSO | MI: POWELL | MI: best | Elapsed time PSO [s] | Elapsed time POWELL [s] |
| Group 1 | 1.1022 | 1.1304 | 1.10222 | 23.31 | 16.05 |
| Group 2 | 0.77065 | 0.76555 | 0.770645 | 50.23 | 36.75 |
| Group 3 | 0.5933 | 0.5918 | 0.593298 | 36.50 | 26.16 |
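For the POWELL baseline, a library routine can stand in; a sketch using SciPy's Powell method to minimize the negative MI (an assumed test harness, not the authors' implementation):

```python
import time
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import rotate, shift

def neg_mi(p, ref, flt):
    moved = shift(rotate(flt, p[2], reshape=False), p[:2])
    return -mutual_information(ref, moved)    # Powell minimizes, so negate MI

def register_powell(ref, flt, x0=(0.0, 0.0, 0.0)):
    t0 = time.time()
    res = minimize(neg_mi, np.asarray(x0), args=(ref, flt), method="Powell")
    return res.x, -res.fun, time.time() - t0  # (dx, dy, angle), MI, elapsed s
```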


From Table 1, we can see that both the PSO and the POWELL algorithm used with MI have advantages and disadvantages. In elapsed time, the POWELL algorithm is better than PSO; but in MI accuracy, the PSO algorithm is better than POWELL, because POWELL is liable to get trapped in local optima, so its result is not the best. In order to verify the validity of the results, all results are average values from several tests. The pictures and Table 1 show that the PSO algorithm is better than the POWELL algorithm in accuracy and stability.

Summary
In this paper, we proposed image registration based on MI and PSO. The PSO algorithm utilizes global exploration abilities that are better than those of the POWELL algorithm. We tested groups of images under rotation, distortion and condensation conditions using the two optimization algorithms, and the proposed approach has been tested and examined to demonstrate its effectiveness and robustness.

Acknowledgment
This paper is supported by the International Exchange Program of Harbin Engineering University for Innovation-oriented Talents Cultivation, the national 863 project (No. 2008A12AA218-51), the Harbin Discipline Leaders Fund and the Postgraduate Culture Fund of Harbin Engineering University.

Reference
[1] LIU Li, SU Min: Medical Image Registration Based on Wavelet Transformation and Mutual Information. Journal of Image and Graphics, Vol. 13 (2008), p. 1317 (in Chinese).
[2] Sangeetha Somayajula, Evren Asma, Richard M. Leahy: PET Image Reconstruction using Anatomical Information through Mutual Information Based Priors. 4th IEEE International Symposium on Biomedical Imaging (ISBI), Arlington (2007).
[3] Kohei Yamasaki, Tomoaki Ohtsuki: Design of Energy-Efficient Wireless Sensor Networks with Censoring, On-off, and Censoring and On-off Sensors Based on Mutual Information. IEEE 61st VTC (2005).
[4] Xin Chen, Yangmin Li: A Modified PSO Structure Resulting in High Exploration Ability with Convergence Guaranteed. IEEE Transactions on Systems, Man, and Cybernetics—Part B, Vol. 37 (2007), p. 1271.
[5] Elloumi W, Rokbani N, Alimi A. M.: Ant Supervised by PSO. Computational Intelligence and Intelligent Informatics, Vol. 4 (2009), p. 161.
[6] Wei Jing, Hai Zhao, Chunhe Song, Dan Liu: An Optimized Particle Filter Based on PSO Algorithm. Future BioMedical Information Engineering, Vol. 12 (2009), p. 122.
[7] J. Kennedy, R. C. Eberhart: Particle Swarm Optimization. IEEE Int. Conf. on Neural Networks, Australia (1995).

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.574

Obstacle Avoidance Optimization of Joint Robot Based on Partial PSO Algorithm

Junwei ZHAOa, Yanqin LIb, Guoqiang CHENc

School of Mechanical & Power Engineering, Henan Polytechnic University, Jiaozuo, 454003, China

a [email protected], b [email protected], c [email protected]

Key words: joint robot, obstacle avoidance, grid method, partial PSO algorithm.

Abstract. Aiming at joint robot path planning in an unknown environment, this paper adopts a method of obstacle avoidance in the X-Y plane. The obstacles existing in Cartesian space are transformed into joint blind regions in the joint (C) space through geometrical principles and inverse kinematics. A simulation using the partial particle swarm optimization (PSO) algorithm is utilized to seek the angles that can avoid the obstacles. Finally, the path in Cartesian space is obtained by transforming the angles. The method is verified to be simple and effective.

Introduction
With the fast development of robot technology in recent years, joint robots are widely applied in outer space, automatic production lines, offices, family services and so on. As the controlling foundation of the robot, path planning is becoming more and more important; planning the robot path reasonably can directly raise production and bring economic benefits. The joint robot is different from other robots (such as mobile robots): the mapping between its Cartesian space and its C space is nonlinear, so path planning for a joint robot is difficult [1]. Besides, in some situations there are obstacles in the operating region of the joint robot. Therefore, the spatial transformation and the processing of obstacles are vital steps in path planning research for joint robots.

System Description
Fig. 1 shows the flow of the robot capturing parts (P0 to P5) in an automatic production line. This robot has four joints. The angle ranges of joints 1 and 2 are [-100°, 100°]; the longitudinal motion of joint 3 and the rotation of joint 4 are unimportant for obstacle avoidance. So planning the angles of joints 1 and 2 reasonably is the key to obstacle avoidance. In order to simplify the motion modeling, the paper supposes that the robot moves in two-dimensional space.

Fig. 1 The flow of robot capture parts

A vertical view of the environment is shown in Fig. 2. There are a circular obstacle 1 and a triangular obstacle 2 in the space. The regions surrounded by curves 1 and 2 represent the inflated obstacles. OA1, OA2, A1B1 and A2B2 express, respectively, the two limit positions of joints 1 and 2. The circle with radius OA1 expresses the motion scope of joint 1, the circle with radius OB1 expresses the robot's inner circle scope, the circle with radius OA1 + OB1 expresses the robot's addendum circle scope, and the region surrounded by line 3 is the robot's free motion space.


Fig.2 Joint robotic end trajectory in Cartesian space

The system plan is shown in Fig. 3. The obstacles existing in Cartesian space are transformed into C space; then the grid method is utilized for C space environment modeling, and a simulation using the partial PSO algorithm is used to seek the angles which can avoid the obstacles successfully.

Fig.3 System plan

Environment Modeling
Because of the particularity of the joint robot and the validity of the partial PSO algorithm, the grid method is utilized to build the environment model. Two key problems need to be solved in the environment modeling. One is grid realization and marking. The angle ranges of both joint 1 and joint 2 are [-100°, 100°]. If the segmentation precision is 5°, each joint coordinate range can be divided into 40 strips; the parallel lines divide the C space plane into 1600 boxes. Meanwhile, the grids are marked by the rectangular coordinates of their left bottom corners. The other problem is the equivalent transformation between obstacles in Cartesian space and joint blind regions in C space. As shown in Fig. 2, OCF and ODG are the limit positions at which the joint collides with obstacle 1. The joint 1 angles at these positions — the blind region limit values of joint 1 — are calculated through inverse kinematics. Moreover, the blind spot increment of joint 2 is 15° according to the geometrical principle. The other limit values in C space, for the enveloped obstacle 2 and the inflated obstacles, are calculated with the same method. Results are shown in Fig. 5 [2,3,4].

Algorithm Design
In the optimization process of obstacle avoidance in an unknown environment, the original state is that the angle increases along the polar coordinate radius. Before the angle changes each time, it is necessary to judge whether there are joint blind spots between the current angle and the next angle. If no blind spots exist, the angle approaches the target point directly. If blind spots exist, the partial PSO algorithm is used to transform the coordinates, and the angle increases along the new polar radius until the blind spot is avoided. Afterwards, the polar coordinate is transformed once more, and the angle continues to approach the target point along the newer polar radius.
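The joint blind regions that this search must detect can be generated from the Cartesian obstacles by checking, for each cell of the 40×40 C-space grid, whether the corresponding arm configuration intersects an obstacle. The sketch below does this for a planar two-link arm; the link lengths and the circular obstacle are illustrative assumptions, not values from the paper:

```python
import numpy as np

L1, L2 = 0.5, 0.4                      # assumed link lengths (m)
obstacle = (0.6, 0.3, 0.15)            # assumed circular obstacle (cx, cy, radius)

def arm_hits_circle(th1, th2, cx, cy, r, samples=20):
    """Check whether either link of a planar 2-link arm crosses a circle."""
    elbow = np.array([L1 * np.cos(th1), L1 * np.sin(th1)])
    tip = elbow + np.array([L2 * np.cos(th1 + th2), L2 * np.sin(th1 + th2)])
    for p0, p1 in [(np.zeros(2), elbow), (elbow, tip)]:
        pts = p0 + np.linspace(0, 1, samples)[:, None] * (p1 - p0)
        if np.any(np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) < r):
            return True
    return False

# Mark the 40x40 C-space grid over [-100 deg, 100 deg] x [-100 deg, 100 deg].
angles = np.radians(np.linspace(-100, 100, 40))
blind = np.array([[arm_hits_circle(t1, t2, *obstacle)
                   for t2 in angles] for t1 in angles])
print("blind cells:", int(blind.sum()), "of", blind.size)
```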


The partial PSO algorithm is mature. The problem's search space is analogous to the flight space of a flock of birds, and each bird represents a candidate solution of the problem; searching for the optimal solution is equated to seeking food [5]. Compared with the overall (global) PSO algorithm, the convergence rate of the partial PSO algorithm is slower, but the partial PSO algorithm can avoid falling into locally optimal solutions. A general obstacle avoidance mechanism of the partial PSO algorithm was discussed in [6]. The particle positions and velocities are initialized stochastically:

Xi = [xi,1, xi,2, …, xi,d],  Vi = [vi,1, vi,2, …, vi,d]

By evaluating each particle's fitness function, the best position and optimum value among the neighborhood particles at time t are determined:

(pbest)  Pi = [pi,1, pi,2, …, pi,d],  (nbest)  Pn = [pn,1, pn,2, …, pn,d]

The speed and position of each particle are renewed according to Eq. 1 and Eq. 2:

vi,j(t+1) = w·vi,j(t) + c1·r1·[pi,j − xi,j(t)] + c2·r2·[pn,j − xi,j(t)]    (1)
xi,j(t+1) = xi,j(t) + vi,j(t+1),  j = 1, …, d    (2)

where w is the inertia factor, c1 and c2 are acceleration constants, and r1 and r2 are uniformly distributed random numbers on the interval (0, 1). The entire flow chart is shown in Fig. 4. The fitness function of the algorithm is the sum of the distances from the current spot to the initial spot and from the current spot to the target spot:

Fitness = fitw1·a + fitw2·b    (3)
fitw1 = fitw2 = 1    (4)

where a is the distance between the current spot and the initial spot, and b is the distance between the current spot and the target spot.

Fig.4 Algorithm design flow chart
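A ring-neighborhood ("partial") PSO corresponding to Eqs. 1-4 can be sketched as follows: each particle learns from the best position found in its own ring neighborhood (nbest) instead of a single global best, and the fitness is the Eq. 3 sum of distances. Blind-spot checking is omitted for brevity, and the ±100° grid bounds follow the joint ranges above:

```python
import numpy as np

def partial_pso(start, goal, w=0.7298, c1=1.4962, c2=1.4962,
                n=30, iters=1000, seed=0):
    """Local-best (ring topology) PSO minimizing the Eq. 3 fitness in 2-D C space."""
    rng = np.random.default_rng(seed)
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    x = rng.uniform(-100, 100, (n, 2))     # joint angles (deg)
    v = np.zeros((n, 2))
    fit = lambda p: np.linalg.norm(p - start) + np.linalg.norm(p - goal)
    pbest = x.copy()
    pf = np.apply_along_axis(fit, 1, x)
    for _ in range(iters):
        # nbest: best of each particle's ring neighborhood {i-1, i, i+1}
        ring = np.stack([np.roll(pf, 1), pf, np.roll(pf, -1)])
        idx = (np.argmin(ring, axis=0) - 1 + np.arange(n)) % n
        nbest = pbest[idx]
        r1, r2 = rng.random((n, 1)), rng.random((n, 1))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (nbest - x)  # Eq. (1)
        x = np.clip(x + v, -100, 100)                              # Eq. (2)
        f = np.apply_along_axis(fit, 1, x)
        improved = f < pf
        pbest[improved], pf[improved] = x[improved], f[improved]
    return pbest[np.argmin(pf)]
```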

Simulation Results and Analysis
With the PSO initial parameters c1 = 1.4962, c2 = 1.4962, w = 0.7298, particle scale N = 30, and iteration count MaxDT = 1000, the simulation figures of the joint angles are shown in Fig. 5 and Fig. 6. The initial point S is the 1st box (left bottom), the target point G is the 1600th box (right top), the regions surrounded by circles are joint blind regions, and the black curves are the angle orbits.


Fig.5 The result of joint angle obstacle 1 avoidance


Fig.6 The result of joint angle obstacle 1,2 avoidance

In Fig. 5, the angle orbit and the enveloping circle intersect around the second inflection point, which does not affect the result. The figures indicate that the joint robot can avoid the joint blind regions within the angle ranges using the designed algorithm. In Fig. 5 and Fig. 6, the reason why the initial point does not connect directly with the second inflection point is that the robot path planning process takes place in an unknown environment: before the angles change each time, it is necessary to judge whether there are blind spots, so the robot only detects that the obstacles exist when it is at the inflection point rather than at the initial point. The above simulated angles are used to produce the terminal curve of the robot end — curve 4 in Fig. 2 — through forward kinematics. It is evident that the obstacles are avoided, so the method is verified to be effective.

Conclusions
Obstacles in Cartesian space are transformed equivalently into blind regions in C space, and then the partial PSO algorithm is used to seek the joint angles which avoid the obstacles successfully. This method is verified to be able to realize obstacle avoidance of a joint robot in two-dimensional space. Because the characteristics of the partial PSO algorithm are imperfect [7], the terminal path is not the best, which needs further improvement.

References
[1] Yin Jiying, He Guangping: Joint robot. Beijing: Chemical Industry Press, No.1 (2003), p. 68-82.
[2] Information on http://blog.china.alibaba.com/blog/adtechcn/article/b0-i14032115.html
[3] Ding Fuqiang, Fei Yanqiong, Han Weijun and Zhao Xifang: Real-time collision-free motion planning of the dual-arm Scarates robot. Journal of Shanghai Jiaotong University, Vol. 37, No. 11 (2003), p. 1690-1693.
[4] Qian Donghai, Ma Yixiao, Zhao Xifang: Dual-arm algorithm quickly in C space. Mechanical Science and Technology, Vol. 18, No. 1 (1999), p. 65-68.
[5] Wang Ling, Liu Bo: Particle swarm optimization and dispatch algorithm. Beijing: Tsinghua University Press, No.1 (2008), p. 1-2.
[6] Lu Dan: PSO algorithm applied and researched in mobile robot path planning. Wuhan Science and Technology University Press (2009), p. 25-33.
[7] Dominik H, Christian W, Heinz W: On-line path planning with optimal C-space discretization. Proc. 1998 IEEE/RSJ Int. Conf. on Robots and Systems, Victoria, Canada, IEEE (1998), p. 1479-1484.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.578

Earthwork Actuarial Software System Design and Development
Su Qiaomei a, Wang Jianmin b, Guo Jiaojiao c
Department of Surveying Science and Technology, Taiyuan University of Technology, Taiyuan, Shanxi, China

[email protected], [email protected], [email protected]

Key words: earthwork; volume accuracy; triangle net; 3D model

Abstract. Earthwork is an important part of engineering construction. This paper explores a scientific method of earthwork calculation and designs and develops corresponding software that calculates the earthwork accurately and effectively reduces engineering investment and budget deviation. It mainly studies building a three-dimensional model of a complex form from two periods of observations. The modelling methodology mutually interpolates the elevations of the two periods of observation data to establish a superimposed triangulated network of the two periods, from which a method of calculating the volume of the tri-prisms formed by the triangles is obtained. The volume calculation principle of this method is simple and of high precision, and the dedicated software system is designed on the basis of the above method. At present the software system is applied in engineering, and it has the advantages of fast computation, convenience, and accurate and reliable results.

Introduction
Earthwork is an important part of engineering construction, but it involves a large and tedious workload in engineering design and construction, and the calculation process is complex. The traditional calculation methods are the section method, the grid (pane-net) method and the scattered multi-design-dot method. Based on these methods, some foreign and domestic companies have developed comparatively mature earthwork calculation software. For example, Hu Zhenqi, Gao Yongguang et al. apply DEM theory combined with the square-net method, using the ERDAS IMAGINE remote sensing image software, to calculate earthwork [1]. Ke Xiaoshan, Zhang Wei et al. calculate it by utilizing existing basic materials such as topographic map contours and elevation dots, using irregular triangulation interpolation to create a digital elevation model, and they have shown that this method is highly feasible [2]. Zhou Yuexuan, Liu Xuejun et al. use a DEM model to calculate earthwork and analyse its accuracy; the conclusions they obtained have guiding significance for actual engineering. Wu Yang, Liu Weiguo et al., using GIS principles, propose new ideas of applying the grid method to the automatic calculation and balancing of earthwork volume; at the same time they discuss the automatic computation of slope earthwork, a problem not covered by existing earthwork calculation software, which has certain forward-looking value [3]. HTCAD is an earthwork calculation and drawing package based on the AutoCAD platform; there is also the Southern CASS software, etc. However, these packages use traditional methods for general volume calculations, so their precision is low and it is difficult to calculate the volume of a complex form. Therefore, this paper adopts two periods of observation data to create a complex 3D model for volume calculation, thereby computing earthwork precisely and cutting down the engineering cost and budget deviation.

Method of Earthwork Actuary
Obtaining Two Period Observation Data. In order to build the complex 3D model and determine the earthwork volume, we need to measure the surface feature dots once before the excavation and again after the excavation. These feature dots, with plane coordinates and elevation values in a particular coordinate system, are called the two-period observation data (as shown in Fig. 1), and the body enclosed by the surfaces measured before and after is the irregular form.

Fig. 1 Irregular Form Profile (panels A and B)


Building Complex Form Model. If we want to calculate the accumulation volume conveniently, quickly and precisely, we must establish a reasonable model algorithm. A TIN (triangulated irregular network) is constructed on each form surface: the one before the excavation is called the first-period triangulated network and the one after the excavation the second-period one. The common edges of the two periods' triangulated networks form the boundary of the form. In measuring the second-period dots, the dots marked on the boundary are called boundary dots, and these boundary dots form a closed polygon in clockwise or anti-clockwise order. The coordinates, elevations and numbers of dots differ between the two periods of experimental data, so the triangulated networks arranged on the bottom and top surfaces are different. When arranging a triangulated network, we construct it only within the boundary, namely a triangulated network constrained by the boundary; this paper uses the algorithm of [4] to construct it (as shown in Fig. 2). To make the volume calculation convenient, the two periods of data are superimposed to constitute a common triangulated network. In the process of superposition, the first-period data are used to interpolate elevation values for the second-period dots and, similarly, the second-period data are used to interpolate elevation values for the first-period dots, following the method of [5]. Thus each dot in the superimposed triangulated network has two elevation values, each superposition triangle generates a tri-prism, and the sum of all tri-prism volumes is the volume of the irregular form.

A. After Excavation   B. Before Excavation   C. Superimposed triangulated network
Fig. 2 Structure of TIN

Calculating Complex Form Volume. The core of the earthwork calculation method is the volume calculation for each pair of triangles between the first and the second period. Its fill-and-excavation volume calculation involves no more than four relations, as Fig. 3 shows [6].

Fig. 3 Tri-prism Geometric Schemes (cases A–D)

In Fig. 3, cases B and C result from one dot of the first-period △A1B1C1 having the same elevation as two dots of the second-period △A2B2C2; they are special cases of A. △ABC is the projection triangle of △A1B1C1 and △A2B2C2 on the horizontal plane. In D, △A1B1C1 and △A2B2C2 cross, and P1P2 is the crossing edge. Therefore, in cases A, B and C of Fig. 3, the volume calculation equation is [6][7]

V = s·(h1 + h2 + h3)/3   (1)


In this equation, s is the area of △ABC, and h1, h2 and h3 respectively represent the height differences of the corresponding dots between the two periods' triangles. In cases A, B and C of Fig. 3, one triangle lies entirely above or below the other: if △A1B1C1 is above, the volume is marked "+", meaning excavation; if △A1B1C1 is below, the volume is marked "−", meaning fill. Case D contains both excavation and fill, and its volume is calculated as follows: for partial dig and fill, the tri-prism can be divided into two parts, the wedge P1P2-B1C1C2B2 and the triangular pyramid A1-A2P1P2. The wedge volume V is

(2)

and the volume V1 of the triangular pyramid A1-A2P1P2 is

(3)

In Eq.2 and Eq.3, S2, S3 and S4 respectively represent the areas of △C1P1P2, △P2B1C1 and △AP1P2, and h1, h2 and h3 are the height differences of the corresponding dots. The triangle areas can be calculated with Heron's formula

S = √(L(L − a)(L − b)(L − c))   (4)

where a, b and c are the three side lengths of the triangle and L is the half-perimeter.
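The volume computation for the all-cut or all-fill cases can be sketched as follows, combining Eq.1 with Heron's formula of Eq.4. This is a minimal illustration, not the paper's program; the Dot structure holding both periods' elevations and the sign convention (first-period minus second-period elevation, positive read as excavation) are assumptions made for this example.

// Illustrative tri-prism volume for cases A-C of Fig. 3.
#include <cmath>

struct Dot { double x, y, h1, h2; };           // TIN vertex with two elevations

// Side length of the horizontal projection triangle ABC.
static double side(const Dot& p, const Dot& q) {
    return std::hypot(p.x - q.x, p.y - q.y);
}

// Heron's formula, Eq.4: S = sqrt(L(L-a)(L-b)(L-c)), L the half-perimeter.
static double heron(double a, double b, double c) {
    double L = 0.5 * (a + b + c);
    return std::sqrt(L * (L - a) * (L - b) * (L - c));
}

// Eq.1: V = s(h1 + h2 + h3)/3; V > 0 read as excavation, V < 0 as fill.
double triPrismVolume(const Dot& A, const Dot& B, const Dot& C) {
    double s  = heron(side(A, B), side(B, C), side(C, A));
    double h1 = A.h1 - A.h2, h2 = B.h1 - B.h2, h3 = C.h1 - C.h2;
    return s * (h1 + h2 + h3) / 3.0;
}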

In calculating the triangles' side lengths, the three-dimensional structure must be considered, i.e. the elevations enter the calculation. The tri-prism volume equations derive from the volume formulas of solid geometry, so the equations and the calculated results are closely accurate; at the same time the method is flexible and simple to operate and gives high work efficiency. Although the calculation is tedious, it is rapid and accurate when programmed on the programmable calculators and microcomputers widely used today. Because of these characteristics, the method has achieved good results in production practice.

Software System Design
Database Design. Two key data tables of the earthwork actuarial software system are designed as follows; all measured data are stored in the database, see Table 1 and Table 2.

Table 1 Measured Data Table
Field       type     instruction
Zone        text     Actual measured region
Attribute   byte     1 for first-period dots, 2 for second-period dots, 3 for boundary dots
DotNo       text     Dot number
X           double   Dot coordinate
Y           double   Dot coordinate
H           double   Dot elevation

Table 2 Result Data Table
Field   type     instruction
Zone    text     Actual measured region name
Vol1    double   Excavation volume
Vol2    double   Fill volume


Main Data Structure. The data collected in the field are discrete dots, and the dot structure and the data-processing classes are designed with an object-oriented method:

typedef struct _struDot {
    char   strDotName[255];  // dot number
    double dDotX;            // X coordinate
    double dDotY;            // Y coordinate
    double dDotH;            // elevation
    byte   iAttrb;           // 1: first period, 2: second period, 3: boundary dot
} struDot;

class CTin : public CObject  // TIN class
{
    UINT   m_dotCount;                                    // number of dots
    UINT   m_Count;                                       // number of triangles
    double m_V1;                                          // excavation volume
    double m_V2;                                          // fill volume
    bool   structTin(long iDotCount, struDot* pDotArray); // build constrained TIN
    void   computerH(long iDotCount, struDot* pDotArray); // interpolate second elevation
    ......
};
CTypedPtrArray<CObArray, struDot*> m_DotArray;            // array of measured dots

System Design. We design and develop a dedicated software system on the VS 2005 platform according to the aforesaid method. The operational process (Fig. 4) is: read the two-period observation data; connect to the database; structure the irregular TIN model and build the complex form model; draw the graphics; calculate the fill volume and the excavation volume; write the results into the result data tables; output the results.

Fig. 4 Software System Operation Process

Data processing goes through the following steps: import into the database the ground feature dot coordinates, with i the number of ground dots before excavation; while reading the data, number the dots in turn and store them into dot arrays; interpolate the elevation values corresponding to the two periods' dots; construct the boundary-constrained irregular triangulated network; calculate each tri-prism volume; write the results into the result data tables.
Software System Evaluation. The system can display not only the triangulation net but also the form's 3D structure by constructing the net and the model, as in Fig. 5 and Fig. 6. As a TIN surface is a very flexible terrain model, the system uses flat, non-overlapping, irregularly shaped triangles to approximate the ground. It can adapt to changes of the ground and can contain some special dots and


lines. Approximating the ground on this basis can be simplified to a ground triangulation problem. The system introduces the constrained boundary into the earthwork calculation, which keeps the precision of the original data. It has the following advantages: the described ground model is lifelike, the form selection is flexible, and the calculation accuracy is raised.

Fig. 5 Irregular Form's 3D Model

Fig. 6 Two Period Triangulation Net with Boundary


Summary
As the widely used earthwork calculation methods are not accurate enough, this paper designs the software system described above. It superimposes the two periods of data and obtains the elevations of the corresponding dots, thus forming tri-prisms; it calculates every tri-prism volume and finally finishes, precisely and automatically, the computation of the fill and excavation earthwork volumes. The software system can display not only the triangulation net but also the form's three-dimensional graph by contrasting the net and the model. At present the software system is applied in engineering and has the advantages of being fast and convenient, with correct and reliable results.

References
[1] Hu Zhenqi, Gao Yongguang and Li Jiangxin: Application of Erdas for Calculating Earthwork in Land Consolidation Projects. China Land Science (Chinese version with English abstract), Vol. 20 (2006), p. 50-54
[2] Ke Xiaoshan, Zhang Wei and Wang Rongjin: Application of Triangulated Irregular Network interpolation to calculation of earthwork in the prophase of land consolidation projects. Transactions of the Chinese Society of Agricultural Engineering (Chinese version with English abstract), Vol. 30 (2004), p. 243-247
[3] Wu Yang, Liu Weiguo and Hu Shenjun: Research of the Software about the Auto-calculation of the Amount and Distribution of the Earthwork Based on GIS. Journal of Hunan University of Arts and Science (Natural Science) (Chinese version with English abstract), Vol. 16 (2004), p. 62-63
[4] Wang Jianmin: "Delaunay Triangulation Based on Nesting Islands with Holes". The 2010 International Conference on Computer Application and System Modeling, p. V14-224, October 2010
[5] Liu Shaohua, Wu Dongsheng and Luo Xiaolong: Research on Algorithm of Delaunay Triangulation Net Interpolating Polygon. Journal of Geomatics Science and Technology (Chinese version with English abstract), Vol. 24 (2007), p. 136-138
[6] Zeng Jianping, Liu Faquan: A three-dimensional model based on VBA triangle earthwork excavation. Journal of Shaanxi Institute of Technology (Chinese version with English abstract), Vol. 21 (2005), p. 31-33
[7] Wang Shaoyun: The Application of the Irregular Triangulation Network Method in the Calculation of Earthwork Engineering. Beijing Surveying and Mapping (Chinese version with English abstract), Vol. 2 (2007), p. 51-53

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.584

Application of Multi-fractal Spectrum to Analyze the Vibration Signal of Power Transformer for the Detection of Winding Deformation
Feng-hua Wang 1,a, Jun Zhang 1,b, Zhi-jian Jin 1,c and Qing Li 2,d
1 Key Laboratory of Control of Power Transmission and Transformation, Ministry of Education, Department of Electrical Engineering, Shanghai Jiaotong University, Shanghai 200240, China
2 Henan Electric Power Research Institute, Zhengzhou 450052, Henan Province, China
a [email protected], b [email protected], c [email protected], d [email protected]

Key words: power transformer, vibration signal, multi-fractal spectrum, winding deformation

Abstract. Monitoring tank vibration is becoming an effective method of detecting the winding deformation of power transformers, which matters for the reliable and secure operation of the power system. In order to further identify the relationship between the winding deformation fault of a power transformer and the measured tank vibration signals, a short-circuit test is performed on a 220kV transformer with the developed vibration measurement system, and the vibration frequency response curves for several pre-set winding deformation faults are obtained. The multi-fractal spectrum is applied to analyze the obtained vibration signals when the winding conditions of the power transformer are changed. It is shown that the multi-fractal spectrum can effectively indicate the geometric structure features of the vibration signals. The parameter variations of the multi-fractal spectrum agree well with the preset winding faults, which provides another method of vibration-signal feature extraction for detecting the winding deformation of power transformers.

Introduction
Statistics show that most failures of power transformers are provoked by outlet short-circuit faults [1]. The large short-circuit currents result in loosening or deformation of the transformer winding, with the consequence of weakened mechanical strength and degraded insulation between turns. Furthermore, a change in the distances among conductors implies a variation in the series and shunt capacitances, so that the voltage distribution in case of lightning or switching over-voltage differs from the design value, increasing the failure risk of the transformer. Therefore, it is necessary to detect, analyze and identify the winding deformation of power transformers timely and accurately, to prolong the service life of the transformer and ensure the secure and reliable operation of the electric grid. Since the tank vibrations are closely related to variations of the internal structure of the power transformer, monitoring the tank vibration draws more and more attention for the detection of winding deformation, and some interesting conclusions have been obtained [2,3]. However, the vibration characteristics of a power transformer are very complicated, influenced by many factors such as variations of the precompression force exerted on the winding, the nonlinear property of the blocks, etc. Meanwhile, the tank vibration signals are always non-stationary and highly time-varying, and it is sometimes difficult to extract signal features with common signal analysis methods such as the Fourier transform and wavelet analysis, which are based on stationary or piecewise-stationary signals [4], especially when the transformer winding is in short-circuit conditions. In the past several years, work on vibration analysis methods to detect the winding deformation of power transformers has been done by the research team of Shanghai Jiaotong University [5,6]. In order to further investigate the relationship between the winding deformation fault of a power transformer and the measured tank vibration signals, the short-circuit test is done on a 220kV transformer with the developed vibration measurement system and the vibration sweep frequency response (VSFR) curves are obtained. The multi-fractal spectrum and its related parameters are calculated and applied to analyze the vibration signals.
The results could lay a certain basis for the early detection of winding deformation or winding looseness, which is important for the condition monitoring of power transformers.


Theory of Vibration Frequency Response Method
The axial forced vibration equation of the multi-degree-of-freedom transformer winding is written as

[M][Ẍ] + [C][Ẋ] + [K][X] = [f(t)]   (1)

where [M], [C] and [K] are the mass, damping and stiffness matrices of the transformer winding, respectively, and [f(t)] is the exciting force vector. Transforming Eq.1 via the FFT, the accelerance frequency response function of the transformer winding can be defined and obtained through appropriate rearrangement of the vibration equation:

Ha(jω) = ([M] − j[C]/ω − [K]/ω²)⁻¹   (2)

It is seen that the accelerance frequency response function changes when the mass, damping and stiffness of the winding vary. Vibrations in a transformer are mainly generated by the different forces appearing in the core and the winding during normal operation. Core vibration is caused by magnetostriction and magnetic forces and is proportional to the square of the voltage, based on the relationship between applied voltage and magnetic induction. Winding vibrations are caused by electrodynamic forces resulting from the interaction of the winding current with the leakage flux, and are proportional to the square of the current. Therefore, core vibration can be detected through the no-load test of the transformer, and winding vibration through the short-circuit test, by means of accelerometers adhered to the transformer tank. If the windings of the power transformer in the short-circuit test are regarded as the vibration source and the whole transformer, including the core and tank, composes the vibration system, the frequency response curve can be obtained when a sinusoidal excitation source with variable frequency and constant current is input to the vibration system. When the frequency is swept over a certain range, the VSFR curve of the transformer winding for given monitoring points can be plotted. Through comparison of the VSFR curves of transformers with different degrees of winding deformation, the relationship between winding deformation and the VSFR curve can be obtained. This is the basic principle of the VSFR method.
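To make Eq.2 concrete, the sketch below evaluates the accelerance of a single-degree-of-freedom reduction of the winding model, where the matrices [M], [C] and [K] collapse to scalars m, c and k (an assumption of this illustration; the m, c, k values in main are invented). The peak of |Ha| moves when m, c or k change, which is what the VSFR comparison exploits.

// Accelerance of a 1-DOF reduction of Eq.2: Ha(jw) = (m - j*c/w - k/w^2)^(-1).
#include <complex>
#include <cstdio>

std::complex<double> accelerance(double m, double c, double k, double w) {
    const std::complex<double> j(0.0, 1.0);
    return 1.0 / (m - j * c / w - k / (w * w));
}

int main() {
    // Sweep 115-310 Hz as in the experiment; m, c, k are made-up values whose
    // resonance (sqrt(k/m)/(2*pi) ~ 225 Hz) falls inside the sweep range.
    for (double f = 115.0; f <= 310.0; f += 5.0) {
        double w = 2.0 * 3.14159265358979 * f;
        std::printf("%6.1f Hz  |Ha| = %g\n", f,
                    std::abs(accelerance(1.0, 50.0, 2.0e6, w)));
    }
    return 0;
}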

Calculation of Multi-fractal Spectrum
When calculating the multi-fractal spectrum, it is necessary to compute the distribution of a probability measure for some physical quantity, with the box-counting method adopted [7]. Suppose the vibration signal is divided into several one-dimensional small boxes of size ε (ε < 1) along the time axis. For the ith small box of size ε, Si(ε) is the sum of the amplitudes of the vibration signal within it. Then the probability measure is expressed by

Pi(ε) = Si(ε) / Σi Si(ε)   (3)

According to fractal theory, the following scaling relation holds for the probability measure Pi(ε) in the scale-free, self-similar region:

Pi(ε) ∝ ε^α   (4)

where α is the singularity scaling exponent, which describes the local singularity intensity of the probability measure. Over the fractal curve, α takes values within a limited range, that is, α ∈ [αmin, αmax].


Suppose the number of boxes with the same value of α is N(ε). A scaling relationship between N(ε) and ε also exists in the scale-free region:

N(ε) ∝ ε^(−f(α))  (ε → 0)   (5)

where f(α) is the fractal dimension of the subset with the same value of α. For any α ∈ [αmin, αmax], f(α) is a smooth unimodal function, usually called the multi-fractal spectrum. Since it is difficult to measure α and f(α) directly in an experiment, the partition function χq(ε) is defined as follows.

χq(ε) = Σi Pi(ε)^q   (6)

where q is the weight factor. χq(ε) is another distribution form of the probability measure, with a scaling relation in the scale-free region:

χq(ε) ∝ ε^τ(q)   (7)

where τ(q) is the mass exponent. As important indexes describing the same physical object, α, f(α) and τ(q) are linked with each other through the Legendre transformation listed in Eq.8 [8]; the multi-fractal spectrum of a given fractal structure can then be computed. Consequently, three important indexes can be obtained: the width of the multi-fractal spectrum ∆α = αmax − αmin, the difference between the fractal dimensions of the maximum- and minimum-probability subsets ∆f = f(αmax) − f(αmin), and the maximum of the multi-fractal spectrum fmax(α).

α = dτ(q)/dq,   f(α) = q·α(q) − τ(q)   (8)
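The box-counting computation of Eqs.3–8 can be sketched as below. This is an illustrative implementation, not the authors' program: the choice of box sizes, the least-squares log–log fit for τ(q) and the central difference for dτ/dq are assumptions of the sketch.

// Illustrative box-counting multi-fractal spectrum of a sampled signal s[n].
#include <cmath>
#include <cstddef>
#include <vector>

// Partition function chi_q(eps) of Eq.6, with boxes of eps samples; the
// probability measure of each box follows Eq.3.
double chiQ(const std::vector<double>& s, std::size_t eps, double q) {
    double total = 0.0;
    for (double v : s) total += std::fabs(v);
    double chi = 0.0;
    for (std::size_t i = 0; i + eps <= s.size(); i += eps) {
        double Si = 0.0;                      // amplitude sum in the i-th box
        for (std::size_t k = 0; k < eps; ++k) Si += std::fabs(s[i + k]);
        if (Si > 0.0) chi += std::pow(Si / total, q);   // P_i(eps)^q
    }
    return chi;
}

// tau(q): least-squares slope of log(chi_q) against log(eps) over the chosen
// scale-free range of box sizes (Eq.7).
double tau(const std::vector<double>& s,
           const std::vector<std::size_t>& sizes, double q) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (std::size_t eps : sizes) {
        double x = std::log((double)eps / (double)s.size());  // relative box size
        double y = std::log(chiQ(s, eps, q));
        sx += x; sy += y; sxx += x * x; sxy += x * y;
    }
    double n = (double)sizes.size();
    return (n * sxy - sx * sy) / (n * sxx - sx * sx);
}

// One point (alpha, f(alpha)) of the spectrum via Eq.8, with dtau/dq
// approximated by a central difference of step dq.
void spectrumPoint(const std::vector<double>& s,
                   const std::vector<std::size_t>& sizes,
                   double q, double dq, double& alpha, double& f) {
    alpha = (tau(s, sizes, q + dq) - tau(s, sizes, q - dq)) / (2.0 * dq);
    f     = q * alpha - tau(s, sizes, q);
}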

Experiment Descriptions
Fig.1 is the schematic diagram of the vibration signal measurement system. It includes a computer with signal control software, a signal source, a power amplifier, accelerometers and the DH5920 signal acquisition unit. The test transformer is a 120MVA, 220kV transformer with star-grounded winding connections. The low-voltage side of the test transformer is short-circuited in the experiment. The signal frequency is swept from 115Hz to 310Hz. Eleven accelerometers are placed on the surface of the middle part and the bottom part of the tank, as illustrated in Fig.2. In order to investigate the influence of the variation of the precompression force on the axial vibration of the transformer winding, two compression nails of the A-phase winding are loosened to simulate the variation of the precompression force. The experimental steps are as follows. Step 1: connect the low-voltage winding in short-circuit conditions through the copper bus-bars. Step 2: place the accelerometers on the tank and feed the excitation signal to the three-phase high-voltage winding. Step 3: select and set the sweep frequency range in the control software on the computer and obtain the corresponding VSFR curve. Step 4: suspend the core of the tested transformer and manually preset the first winding fault. Step 5: put the core of the tested transformer back in its original position and repeat Step 3. Step 6: suspend the core again and manually preset the second winding fault. Step 7: repeat Step 5.


Fig.1 Schematic diagram of measurement system


Fig.2 Position of accelerometer in the tank

Results and Discussions
Fig.3 shows the VSFR curves of accelerometers 6 and 7 of the test transformer. Here, the winding is in healthy condition in State 0; the winding is loosened to a certain degree in State 1 and State 2, and the looseness in State 2 is more severe than in State 1. It is seen that the amplitudes in parts of the frequency range are slightly increased in the VSFR curve after the windings are deformed artificially, and the frequency corresponding to the peak point is moved distinctly.

(a) No.6 accelerometer

(b) No.7 accelerometer

Fig.3 VSFR curve of B-phase winding of the test transformer
Fig.4 and Fig.5 are the f(α) ~ α curves of the VSFR signals corresponding to the healthy condition and the fault condition of the transformer winding, respectively. It is seen that the f(α) ~ α curve is smoother for the winding in healthy condition than in fault condition. The reason is that the probability distribution of the vibration signal is relatively even when the winding is in good condition; when the winding is loosened or the precompression force is decreased, the singularity of the vibration signal is strengthened. Furthermore, the multi-fractal spectrum differs for the different positions where the accelerometers are placed, which results from the different transfer paths of the vibration signal from the transformer winding to the tank.


(a) f(α) ~ α curve in State 0   (b) f(α) ~ α curve in State 3
Fig.4 f(α) ~ α curves of No.6 accelerometer

(a) f(α) ~ α curve in State 0   (b) f(α) ~ α curve in State 3
Fig.5 f(α) ~ α curves of No.7 accelerometer

Table 1 gives the values of the three multi-fractal spectrum parameters ∆α, ∆f(α) and fmax(α). It is seen that when the winding is loosened, ∆α increases clearly, ∆f(α) decreases and fmax(α) increases. Since the value of ∆α describes the non-uniformity of the probability distribution over the whole fractal structure, and consequently the fluctuation degree of the measured vibration signal, its increase indicates a strengthening of the non-uniformity of the winding vibration. The value of ∆f(α) gives the relative frequency of the subset elements occurring at the maximum and the minimum: when ∆f(α) is positive, the number of subsets with large probability is larger than that with small probability, and vice versa. That is to say, the value of ∆f(α) indicates the proportions of the maximum and the minimum in the whole measured vibration signal. Moreover, the decrease of ∆f(α) implies that the proportion of the minimum in the whole vibration signal is larger for the loosened winding than for the healthy winding, which indicates the aggravation of the winding vibration. Clearly, the larger the value of fmax(α), the greater the vibration of the transformer winding. Obviously, the variation tendencies of the multi-fractal spectrum parameters agree well with the preset winding faults and consequently with the vibration characteristics of the transformer winding. Therefore, the multi-fractal spectrum can be applied as an effective method of extracting vibration features for the detection of winding deformation of power transformers.


Table 1 Results of ∆α, ∆f(α) and fmax(α) of the multi-fractal spectrum parameters

                  ∆α                 ∆f(α)               fmax(α)
winding condition normal    fault    normal    fault     normal   fault
No.6              0.7491    7.1022   1.7155    0.1077    5        5.5
No.7              0.2684    7.8996   0.4266    0.3176    5        5.5

Conclusions
Based on the developed vibration measurement system, VSFR curves for the winding in healthy condition and in fault condition are obtained from the short-circuit test made on a 220kV transformer. The multi-fractal spectrum of the obtained vibration signals is calculated, together with the three important parameters ∆α, ∆f(α) and fmax(α). It is seen that the multi-fractal spectrum indicates the geometric structure features of the vibration signals well. The variations of the multi-fractal spectrum agree well with the preset winding faults; that is to say, ∆α increases clearly, ∆f(α) decreases and fmax(α) increases when the transformer winding is loosened. Therefore, the multi-fractal spectrum and the corresponding parameters can be applied as another method for the detection of winding deformation and the condition monitoring of power transformers. However, the determination of evaluation indexes of the multi-fractal spectrum still needs further investigation; this is our next work.

Acknowledgement
This work is supported by the research project of Science and Technology of Shanghai Municipality (09dz1205900), P. R. China.

References
[1] Y. M. Jiang: Transformer, Vol.42 (2005), p.34-38
[2] B. Garcia, J. C. Burgos and A. M. Alonso: IEEE Trans. Power Delivery, Vol.21 (2006), p.157-163
[3] B. Garcia, J. C. Burgos and A. M. Alonso: IEEE Trans. Power Delivery, Vol.21 (2006), p.164-169
[4] W. H. Xiong, R. S. Ji, in: Proceedings of the 6th World Congress on Intelligent Control and Automation (2006), p.5494-5496
[5] F. H. Wang, J. Xu and Z. J. Jin et al., in: Proceedings of the 2010 IEEE PES Transmission and Distribution Conference and Exposition (2010), p.1-6
[6] P. A. Xie: Ph.D. dissertation, Dept. E. E., Shanghai Jiaotong University, 2008
[7] Y. Yuan, B. L. Li and J. S. Shang et al., in: Proceedings of the 2010 Second International Conference on Future Networks (2010), p.164-167
[8] Information on http://www.paper.edu.cn

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.590

The Networking Integrated Character Education (NICE) Project: An Experimental Study
Hsin-Hung Kuo 1,a, Szu-Wei Yang 2,b and Yu-Chin Kuo 3,c
National Taichung University of Education, Taichung City, Taiwan
a [email protected], b [email protected]

Key words: Blog, Blog-assisted Instruction, Character Instruction, Technology Enhanced Learning, Video.

Abstract. There are few studies connecting character education with a Blog-assisted online learning environment in elementary school scenarios. Therefore, the Networking Integrated Character Education (NICE) project has been planned, designed, developed and implemented in this study. The NICE system focuses on interaction and communication to enhance children's awareness and behavior regarding character. Sixty-nine fifth graders of a public elementary school in Taiwan were involved in this study. Online questionnaires were given to find out the feasibility of the project, and the data were analyzed by ANCOVA. The experimental results showed significant effects on both core values, Thanksgiving and Helping Others. Based on the online survey, this Blog-based NICE project could help teachers or instructors facilitate students' learning and would be a very useful and valuable instructional platform for young learners to implement and make progress on character development.

Introduction
Scholars from many countries have found that the world's new crises stem from moral degeneration and the neglect of character education [1]. Surprisingly, the Taiwanese educational reform of 2004 cancelled "ethics courses" at elementary schools just when the varied moral challenges of the 21st century arrived [1]. Fostering well-behaved citizens is much more important than before [2], and it is imperative to regenerate morals in our society now [1,3]. In addition, the rise of technology has considerably changed the landscape of education [4]; the convenient Internet makes possible a digital revolution in learning and teaching [2]. However, there is a big problem for young learners who lack the computer skills to engage effectively in modern learning. Therefore, the authors considered different kinds of technical software and media. Blogs have been applied to educational scenarios because of their widespread use; moreover, Blogs can greatly lessen the technology obstructions to web publishing, making it freer and more effective [5]. Most importantly, Blogs are just the right tool for children to meet their technology demands and can help them give learning feedback on the Internet. Hence, the authors decided to select Blogs as the instructional platform to teach young learners. Basically, character education curricula for today's children are very important; however, very few learning websites or Blogs have the function of enhancing learners' character traits and behaviors. Additionally, there is a need to construct a more interactive as well as friendly learning environment around the usage of technology [6]. Furthermore, an easy but effective way is needed to check students' learning progress and to send frequent feedback, facilitating students' involvement with the teacher throughout the learning process [7]. Therefore, the Networking Integrated Character Education (NICE) project has been launched, constructed and developed in order to achieve the study goal.

Research Question. The focus of this study is to compare the effects of two methods on character cognitive performances in an elementary school context. Besides, this study also aims to investigate the impacts of the NICE project. The research questions are: (a) Is there a difference in Taiwanese children's character cognitive performances between the NICE system and traditional instruction? (b) What are the effects of the NICE project?


Significance of the Study. This study introduces the design and implementation of the NICE system, which focuses on the understanding as well as the implementation of character core values and provides a basis for experimental forms of teaching and studies. To enhance students' character learning, the system puts emphasis on interaction and communication to improve students' character comprehension and performance. In brief, this study makes a practical contribution by putting the novel NICE instructional design into action.

The NICE Online Learning Environment
The NICE system aims to facilitate mutual communication between teachers and learners for character and social learning. This platform is designed specifically for kids to participate in character activities. This online tool integrates character education with technology, writing and social curricula. It provides various animated activities and allows teachers or students to keep a journal or to publish. The NICE system is fully functional and consists of four parts: (a) MSN Area, (b) Online Questionnaires, (c) Video Area, and (d) Response Area. The NICE online learning environment is illustrated in Fig. 1.

(a) MSN Area   (b) Online Questionnaire   (c) Video Area   (d) Response Area
Fig. 1 The NICE Learning Environment

Experiment
The effects of character education instruction on the fifth graders via the NICE project were investigated. In order to evaluate the effectiveness of the innovative system, a quasi-experiment was conducted. Analysis of covariance (ANCOVA) was carried out to verify the effects on the core values of Thanksgiving and Helping Others in learning application. All students were asked to take the pre- and post-test at the beginning and the end of the experiment.


Table 1. The Experimental Design
Group               Pre-test   Treatment   Post-test
Experimental Group  O1         X           O3
Control Group       O2         C           O4

O1, O2: pre-test grades; O3, O4: post-test grades; X: treatment for the experimental group; C: treatment for the control group.

Participants. Two classes of a public elementary school in an urban area participated in the study; in total, sixty-nine fifth graders took part. After receiving fundamental technology knowledge in class, the participants were divided into a control group (n = 34) and an experimental group (n = 35).

Instruments. The questionnaires were developed to obtain the students' data and perceived degree of learning as well as learning application in taking an online course. There were two online questionnaires: the Scale of Character Education for both groups, and the Feedback Sheet Regarding the NICE Use for the experimental group only. These instruments were developed by the authors.

Experimental Results
Reliability. To verify the internal consistency reliability of the Scale of Character Education, coefficient alphas were calculated. For the student behavior items, the alphas of the subscales are .916 for Thanksgiving and .932 for Helping Others; the coefficient alpha of the whole scale was .949.

Table 2. Reliability of the Scale of Character Education
Scale           Test Items   Cronbach's alpha
Thanksgiving    1~10         .916
Helping Others  11~21        .932
Total           1~21         .949
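For reference, the coefficient alpha values reported in Table 2 can be computed with the standard formula α = k/(k−1) · (1 − Σ s_i² / s_total²), where k is the number of items, s_i² the variance of item i and s_total² the variance of the total score. The following sketch is illustrative only; the study's item-level data are not reproduced, and the item-by-student layout is an assumption of the example.

// Cronbach's alpha from an item-by-student score matrix (illustrative).
#include <cstddef>
#include <vector>

static double variance(const std::vector<double>& x) {
    double m = 0.0;
    for (double v : x) m += v;
    m /= x.size();
    double s = 0.0;
    for (double v : x) s += (v - m) * (v - m);
    return s / (x.size() - 1);                 // sample variance
}

// items[i][s] is the score of student s on item i.
double cronbachAlpha(const std::vector<std::vector<double>>& items) {
    std::size_t k = items.size(), n = items[0].size();
    std::vector<double> total(n, 0.0);
    double sumItemVar = 0.0;
    for (const auto& item : items) {
        sumItemVar += variance(item);          // sum of item variances
        for (std::size_t s = 0; s < n; ++s) total[s] += item[s];
    }
    return (k / (double)(k - 1)) * (1.0 - sumItemVar / variance(total));
}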

Analysis. It is well recognized that an essential assumption of ANCOVA is the homogeneity of regression. To measure the different variables of the two instructions, ANCOVA was performed. Before analyzing the ANCOVA results of the study, it is essential to know whether the distributions satisfied the assumptions. The F-value is .064 (p > 0.05), which supports the usage of ANCOVA tests; in other words, the samples have homogeneous variances and normally distributed data.
The NICE Effectiveness. One unique finding from this study is that the post-test scores of the experimental group were significantly higher than those of the control group on both dimensions, Thanksgiving and Helping Others; that is, the students in the experimental group performed better than those in the control group. The original means and standard deviations of each group are shown in Table 3. The mean and standard deviation of the post-test were 94.24 and 13.678 for the experimental group, and 83.80 and 13.497 for the control group, which indicates that the students receiving the NICE treatment had a higher mean. Moreover, there is a significant difference (F = 13.525, p < .05) between the groups. ANCOVA results revealed that students in the treatment condition scored higher than those in the comparison context; in other words, the NICE project could be effective for students receiving the NICE instruction.


Conclusion
This study explores the effects of different types of instruction on children's character performances. First, the authors described the learning environment of the NICE project, which focuses on reciprocal interaction between instructors and users. Then, the research investigated whether the NICE system can facilitate children's character learning. ANCOVA results showed that the students of the experimental group scored higher on the post-test achievement than those of the control group; in other words, there was a significant impact on the two core values for students in the experimental group using the new system. Through the implementation, the NICE project not only provides users with an alternative approach to solving problems they really encounter on campus, but also enables more reciprocal interaction with users via Internet access. In agreement with Lim & Kim's viewpoint [7], the NICE system puts emphasis on mutual interaction between teachers and students by checking students' learning progress, sending frequent feedback, and encouraging unskilled learners in need, so as to increase students' cognitive as well as emotional involvement during the learning process. Based on the survey, the Blog-based course seemed to meet the course goal of achieving significant learning benefits through instruction with the innovative system. The experimental findings imply that the impacts of the NICE project were active, positive and effective. In brief, this innovative Blog-based character project could help teachers facilitate children's awareness and behavior regarding character, and would be a valuable tool for implementing character education in practical classroom scenarios.

Acknowledgements
The authors would like to thank all of the students who participated in this course and completed the online survey. In addition, we are grateful to the anonymous reviewers.

References
[1] Cheng, C. S. (2007). Character education and character-trait development: An enrichment for college students. Retrieved December 10, 2010, from www.kyu.edu.tw/93/96paper/96%B9q%A4l%C0%C9/96-163.pdf
[2] Resnick, M. (2002). Rethinking learning in the digital age. In Kirkman, G. S., Cornelius, P. K., Sachs, J. D., & Schwab, K. (Eds.), The global information technology report: Readiness for the networked world (pp. 32-37). Oxford: Oxford University Press.
[3] Maritain, J. (1965). The education of man: The educational philosophy of virtues. NY: Basic Books.
[4] Spire Research and Consulting Pte Ltd (2010). Brave new world: The changing landscape of education and technology. Retrieved December 20, 2010, from www.spireresearch.com.
[5] Gupta, V. K., & Meglich, P. (2008). Weblogs to support learning in business education: Teaching the virtual generation. Retrieved January 12, 2011, from www.midwestacademy.org/Proceedings/2008/papers/Gupta&Meglich_13.pdf
[6] Hawi, N. S. (2010). The exploration of student-centered approaches for the improvement of learning programming in higher education. US-China Education Review, 7(9), 47-57.
[7] Lim, D. H. & Kim, H. J. (2003). Motivation and learner characteristics affecting online learning and learning application. Journal of Educational Technology Systems, 31(4), 423-439.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.594

The design method of a topology discovery system based on SNMP
Xue Sujing
North China University of Water Resources and Electric Power
[email protected]
Key words: SNMP, ASN.1, topology discovery

Abstract. Since the birth of the network, network management has been a material factor in the development of computer networks. Adopting reasonable network topology technology has become the foundation of all network management, and for heterogeneous, diverse and changeable networks the importance of network topology discovery keeps growing. Studying highly effective network topology discovery methods is of vital significance and value in guaranteeing effective and safe network operation.

Introduction
With the swift development of computer and communication technology, the computer network has become the infrastructure of the information society and has entered every field. Since the birth of the network, network management has been a material factor in the development of computer networks: whether network management succeeds often decides the success or failure of network operation. According to the definition of the open systems interconnection reference model (OSI), there are five functional areas in network management at present: 1. fault management (also called failure management); 2. configuration management; 3. performance management; 4. security management; 5. accounting management [1]. Configuration management is the foundation. Its major functions are to discover the network topology and to monitor and manage the configuration of the network equipment; since the foundation of monitoring and managing network equipment is the network topology, adopting reasonable network topology technology has become the foundation of all network management. Moreover, for heterogeneous, diverse and changeable networks, the importance of network topology discovery keeps growing. Now, with the rapid development of network technology, the scale of networks is getting bigger and bigger, their structure more and more complex, and their functions stronger; real-time network monitoring and control urgently need effective tools, so that administrators can understand the network operation situation accurately and, at the same time, acquire management information, locate breakdowns and so on. Thus network topology discovery, which occupies a very important position in the development of present-day network management systems, has become the core of configuration management and the foundation of fault management [2]. Network topology discovery can reveal the existence of network equipment as well as their connections, help the network administrator rapidly grasp the network topology, locate where a breakdown occurs and determine its influence scope; it can also discover network equipment for other management function modules, locate network


breakdowns, find network bottlenecks, understand the current network condition and so on, and thus optimize and manage the entire network well [3]. Network topology discovery covers all elements in the network and also has an information collection function. Topology discovery is an important standard by which to measure a commercial network management system. Moreover, current network topology discovery technology has widespread application prospects and very high research value in network characteristic research, network simulation, network optimization and so on [4]. Therefore, studying highly effective network topology discovery methods is of vital significance and value in guaranteeing effective and safe network operation.

The research outline of network topology discovery
This system design mainly uses SNMP, ASN.1, VC++ and related technologies; with these technologies, many algorithms have been proposed at home and abroad to realize network topology discovery.

System design technology introduction. 1. SNMP. SNMP (Simple Network Management Protocol) is the most widespread network management protocol in present TCP/IP networks. SNMP's superiority lies in its simplicity. At present, the basic functions of SNMP mainly include deploying network equipment, supervising network performance, and examining and analyzing network faults. The SNMP network management model is mainly composed of three parts: the management information base (MIB), the structure of management information (SMI), and SNMP itself. In order to manage network resources, each resource is expressed as an object; the set of objects is the management information base (MIB). The MIB is the access point through which the management station reaches the agent; it gives a data structure of all possible managed objects. Figure 1 is an example of part of a management information base, which is called the object naming tree.

Figure 1 Object Naming Tree

2. ASN.1. ASN.1 (Abstract Syntax Notation One) is a method and standard for describing digital objects, used for defining protocol data units and a data description called the "abstract syntax". At present, many encoding methods are based on this standard.
3. OID (Object Identifier). An Object Identifier is a basic data type of ASN.1, used for marking an information object uniquely.


The outline of network topology discovery research. Network topology discovery refers to determining the interconnection relations between network elements through certain technologies. A network element here refers, in the general sense, to interconnection equipment (routers, bridges, switches and so on) and subnets [5]. A main task in network management is to survey and discover the network topology. At present, at home and abroad, methods of network topology discovery mainly concentrate on gaining the routing information of the routing equipment in the network according to various search algorithms and related protocols, and then constructing the network topology chart from the information obtained.

Commonly used network topology discovery technology. At present, network topology information is mainly obtained by protocols and tools such as SNMP, ARP, DNS, ICMP and RIP, by tools such as Ping and Traceroute, and by some manufacturers' private protocols, for instance Cisco's CDP; some new algorithms have also been proposed.
1. SNMP: The basic philosophy of SNMP is that every network device maintains a MIB, i.e. a management database, preserving the information related to all running processes, and responds suitably to queries from the management workstation. The SNMP protocol describes a method of obtaining information from the MIB; the only requirements on a device are that it supports SNMP and that its MIB is rich in information [6].
2. ARP: As is well known, all network equipment supporting ARP maintains an ARP table which records the correspondence between the IP addresses and MAC addresses of the equipment connected to it on the Ethernet [7]. Through this characteristic of the ARP table, starting from the ARP table of a known router or switch, we can discover the other network equipment connected to the same Ethernet, then pick out the routers, switches and so on among the newly discovered equipment, and continue the discovery from those devices' ARP tables. Proceeding like this, the entire network topology can be obtained.
3. DNS: DNS was developed to spare people from remembering machines' hard-to-memorize IP addresses. DNS relates a name to an IP address and can also retain other information about a computer. Through the zone transfer capability of DNS we can obtain the list of computers with the same domain suffix; the "zone transfer" operation may thus help us discover the hosts and routers in the domain [8]. DNS can also discover other equipment fast, from which further surveys and discovery can be carried out.
4. ICMP: Three kinds of ICMP messages are mainly used in topology discovery: the echo request message, the echo reply message and the time-exceeded message. The common Ping tool uses the echo messages to test network connectivity, while Traceroute uses the time-exceeded message to discover the path between two nodes [9].
5. RIP: The RIP protocol may help the user obtain the routing information of routing equipment; using this information we can reason step by step, discover new routing equipment, and judge the link information between the devices, thus obtaining the entire network topology [9].
6. Ping/Broadcast Ping: The Ping command is one of the most ancient tools in IP. Its leading role is to monitor whether a node is available and what the round-trip latency to it is. Usually a Ping only involves the source node and the destination node, and neglects the network details [6].


7. Traceroute: Traceroute is another ancient TCP/IP tool. It can discover all the network entities on the route from one node, usually called the source host, to another node, called the destination host, and at the same time it gives the round-trip delay from the source host to any node on the path [6].
8. Cisco Discovery Protocol: CDP (Cisco Discovery Protocol) is a private protocol of Cisco Corporation. It mainly runs on the switch and router product series. Its function is to discover directly connected network equipment and to preserve some basic information about that equipment.
9. The algorithm based on SNMP and Ping: There are many algorithms for network topology discovery; several are introduced here, the first being the algorithm based on SNMP and Ping. Its merits are that it obtains the topology information in real time, is simple to realize and is very fast; however, it needs the authority to query the MIB, and the network equipment must have SNMP enabled [9].
10. The algorithm based on Traceroute and DNS zone transfer: This algorithm is very widely used, which is its principal advantage, but its shortcomings are also obvious: first, it executes slowly and easily causes a big network load; second, it also requires DNS zone transfer to be permitted [9].
11. The algorithm based on SNMP and ARP: This algorithm is widespread, simple to realize and very fast, which are its principal advantages; its deficiency is that it also needs the authority to query the MIB and requires the equipment to have the SNMP service enabled [9].

System design
General method of surveying network topology. Surveying the network topology may be completed by gaining information from the data link layer or the network layer of the network equipment, or by combining the two. Ordinarily, one starts from some node in the network, gains and records the information of its neighboring equipment with various techniques, and then gains the neighboring equipment of that equipment in turn; proceeding like this, the entire network topology information can finally be gained. Then, how is the information of neighboring equipment gained? Because no standard international protocol has been released for surveying a network device's neighbors, general-purpose commercial network management software usually gains the neighbor information in indirect ways. The key to realizing topology discovery is how to gain topology information. According to the system design requirements, the basic idea is to start from some device, bring back that device's CDP neighbor information by SNMP operations, process and preserve the useful information, and then gain further neighbors' CDP information according to the neighbors' IP addresses, repeating until the entire network has been searched (a minimal sketch of this SNMP query step is given below).
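The sketch below illustrates this step using the open-source net-snmp library, which is an assumption of this illustration; the paper's VC++ implementation is not reproduced. The seed address, community string and the cdpCacheEntry OID prefix are placeholders to adapt; the loop issues GETNEXT requests until the walk leaves the CDP subtree.

// Walk one device's CDP cache over SNMP (illustrative, using net-snmp).
#include <cstring>
#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>

int main() {
    init_snmp("topo-discovery");

    netsnmp_session session, *ss;
    snmp_sess_init(&session);
    session.peername      = (char*)"192.168.1.1";   // seed device (placeholder)
    session.version       = SNMP_VERSION_2c;
    session.community     = (u_char*)"public";      // read community (placeholder)
    session.community_len = 6;
    if (!(ss = snmp_open(&session))) return 1;

    // cdpCacheEntry subtree of the Cisco CDP MIB (OID prefix assumed here to
    // be .1.3.6.1.4.1.9.9.23.1.2.1.1); walked with repeated GETNEXT requests.
    oid root[MAX_OID_LEN]; size_t rootLen = MAX_OID_LEN;
    read_objid(".1.3.6.1.4.1.9.9.23.1.2.1.1", root, &rootLen);

    oid cur[MAX_OID_LEN]; size_t curLen = rootLen;
    memcpy(cur, root, rootLen * sizeof(oid));

    for (;;) {
        netsnmp_pdu* pdu = snmp_pdu_create(SNMP_MSG_GETNEXT);
        snmp_add_null_var(pdu, cur, curLen);
        netsnmp_pdu* resp = NULL;
        if (snmp_synch_response(ss, pdu, &resp) != STAT_SUCCESS || !resp) break;
        netsnmp_variable_list* var = resp->variables;
        if (var->name_length < rootLen ||
            memcmp(var->name, root, rootLen * sizeof(oid)) != 0) {
            snmp_free_pdu(resp);               // the walk left the CDP subtree
            break;
        }
        print_variable(var->name, var->name_length, var);  // one neighbor attribute
        memcpy(cur, var->name, var->name_length * sizeof(oid));
        curLen = var->name_length;
        snmp_free_pdu(resp);
    }
    snmp_close(ss);
    return 0;
}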
System major function design. According to the system requirements, the system has three main parts.
1. Acquisition of topology connection information. Starting from some specific device, its CDP entries are obtained through SNMP and analyzed one by one to find its level-1 neighbor equipment; according to the level-1 neighbors' IP addresses, the level-2 neighbor information is searched next, and so on until the pre-established condition is satisfied.


2. Topology graph drawing module. The nodes of a tree control are used to express network equipment; once the information is prepared in the tree control, the graph can be drawn from it. Two points need attention: (1) choosing the network equipment controls, i.e. which controls are chosen to represent the network nodes — this is the first question to solve; (2) the method of drawing the graph — in the program design, drawing is carried out in a separate view and divides into two steps, distributing the equipment controls and drawing the segments between the equipment.
3. Graph revision module. The data gained by processing the CDP protocol information may be incomplete, so the system should provide a function to change the information. There are two goals: one is to add nodes, the other is to merge nodes. When adding a node, the father node of the new node and the physical link information must be indicated; then a sub-node is added in the tree control under the assigned node.

Summary
In network management, network topology discovery is the foundation of all other management functions; only by grasping the correct and complete topology can management operations be carried out correctly. The network topology chart provides the network administrator with an intuitive means of understanding the global network connections. In order to produce the network topology graph, the various pieces of information from which it is constructed must be collected first, so designing and realizing a highly effective network topology discovery system has become an important part of the network management domain. This paper carries out its topic selection against precisely this background, uses the protocol principles of SNMP and of Cisco's CDP, and completes the automatic drawing of a large-scale local area network topology graph. At present, network topology discovery in China is still a challenging domain; although a system has been designed that basically realizes the functions discussed, much work remains to be done.

References
[1] William Stallings: SNMP Network Management. Beijing: China Electric Power Press, 2001.
[2] Luo Xiapu, Guo Chengcheng, Yan Puliu: The algorithm for automatic topology discovery in heterogeneous IP networks. Journal of Wuhan University, 2001(3): 364-368.
[3] Liu Jie: The study of multi-level topology discovery technology. Sichuan: Sichuan University, 2004.
[4] Wang Juanjuan: The study of topology discovery system algorithms. Hubei: Wuhan University of Science and Technology, 2008.
[5] Yang Jiahai, Ren Xiankun, Wang Peiyu: Principles and Implementation of Network Management Technology. Beijing: Tsinghua University Press.
[6] Zhao Kai: Brief Probe into Network Topology Discovery. Science & Technology Information, 2007(28): 203.
[7] Wang Zhigang, Wang Ruchuan, Wang Shaoli: Research on network topology discovery algorithms. Journal of China Institute of Communications, 2004, 25(8): 36-43.
[8] Hwa-Chun Lin, Hsin-Liang Lai, Shou-Chuan Lai: Automatic Link Layer Topology Discovery of IP Networks. IEICE, 1999.
[9] Wu Yuan: The study and implementation of a topology discovery system based on SNMP. Zhengzhou University, 2006.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.599

Consistent Extension of Dynamic Knowledge Updating in the Multi-Agent System
Meihong Wu
Department of Psychology, Peking University, Beijing, P.R. China
[email protected]
Key words: Dynamic epistemic logic; Incompatible knowledge; Consistent

Abstract. In this paper we explore the use of dynamic epistemic default logic to offer a natural way of specifying communication policies for the management of inter-agent exchanges in multi-agent systems. We first focus on the acquisition of the extension in a dynamic epistemic theory based multi-agent system whose background set constantly absorbs new information; we then add constrained default sets to restrict the agents' inference behavior and obtain the extension of the constrained epistemic default logic theory via default reasoning. We also discuss the characteristics of dynamic updating when an agent meets incompatible knowledge in the logical framework of multi-agent systems, and finally we prove the related theorem for knowledge updating.

Introduction
A multi-agent system (MAS) is a system composed of multiple interacting intelligent agents. MAS require coordination mechanisms to facilitate dynamic collaboration of the intelligent components, with the goal of meeting local and/or global objectives. Although each agent may be deployed to perform independent missions, in a multi-agent system the agents may need to exchange local information from time to time in order to coordinate more efficiently and achieve better performance as a group. In the case of MAS, the coordination structure should provide communication protocols to link agents having inter-related objectives, and it should facilitate mediation and integration of exchanged knowledge. However, exchanging information may incur a cost associated with the risk of revealing it to competing agents, and assuming that communication may not be reliable adds another dimension of complexity to the problem. Therefore, how to gather veridical information from the vast amount of information provided by a constantly changing world becomes extremely important.

Knowledge reasoning in MAS includes knowledge about facts, but also higher-order information about the information that other agents have. The ability to reason about higher-order information distinguishes epistemic logic from many other approaches to information. Epistemic logic is one of the promising approaches to dealing with what agents consider possible given their current information, and dynamic epistemic logic is an umbrella term for a number of extensions of epistemic logic with dynamic operators that enable us to formalize reasoning about information change. One characteristic of communication in MAS is that it does not change the bare facts of the world, but only the information that agents may have. In default reasoning, the default rules simplify knowledge representation: instead of describing all possibilities, one constructs rules and exceptions, and default reasoning allows agents operating in different environments and with different sets of facts to construct different knowledge bases from the same set of rules. We therefore apply the dynamic epistemic default reasoning methodology to describe an agent's knowledge, and use an extension of default logic to construct beliefs for a single agent. Furthermore, we put forward some new concepts for constrained default theory by introducing modality words that transform the default rules and differentiate knowledge from belief, so that the agent's mental state can be portrayed more faithfully. Since epistemic logic deals with what agents consider possible given their current information, we add constrained default sets to obtain the extension of the epistemic default theory and restrict the agent's inference behavior, which gives the theory two important properties: existence of extensions and semi-monotonicity.


In this paper we propose a new frame of multi-agent system based on dynamic epistemic logic, which works as an inference tool for describing distributed multi-agent systems. We then provide a logical approach to change of information; the approach is logical in the sense that we are interested in reasoning about change of information and in distinguishing valid from invalid reasoning. This paper is organized as follows: in the second section we introduce the communication rules for agents in this new frame of multi-agent system based on dynamic epistemic logic; in the third section we show the consistent extension of information acquisition in the logical framework and reason about the information change in this dynamic process; in the last section we discuss the characteristics of dynamic knowledge updating and summarize the whole paper.

The Logical Framework of Multi-Agent System
The system includes a group of n agents, and in this logical framework Kiφ (i ∈ {1, …, n}) represents "agent i knows φ"; each i = 1, …, n represents a single agent. The agents operate within a chain of command subject to security restrictions, and there is a linear order on the agents' private knowledge.

Definition 1. The multi-agent system frame ⟨Σ, Δ1, …, Δn⟩ consists of a set of agents, where Σ represents the n agents, Σ = {1, …, n}. According to the inference rules and axioms of the logical language LK[](Σ, P) [2], where Σ is a finite set of agents and P denotes a countable set of atoms of LK[](Σ, P), the three axioms

1) Kiφ → φ,  2) KiKjφ → Kiφ (i ≠ j),  3) ¬Ki¬φ → Ki¬Ki¬φ,

where φ refers to any formula of LK[](Σ, P), hold in this system.

A Kripke interpretation is a model pair ⟨Mc, w⟩, where w ∈ W, W denotes the set of possible worlds and Mc is a Kripke structure. That is, we give a possible-worlds interpretation of belief, so it is appropriate to give a semantic treatment of belief revision in terms of possible worlds. Given any formula φ in LK[](Σ, P), (Mc, w) |= φ denotes the truth definition of belief.

Definition 2. A default theory [1] Γ is a pair Γ = ⟨D, W⟩, where W is a set of consistent first-order formulas (the facts) and D is a set of default rules used to complete W. A default is a rule of the form α : β1, …, βn / γ (n ≥ 1), where α, β, γ are first-order formulas; α is the prerequisite of the default, β1, …, βn are its justifications and γ is its consequent. A distinguishing feature of default logics is their appeal to consistency for handling absent information; constrained default logic is an alternative approach to default logic that relies on a more "global" notion of consistency.

Definition 3. A constrained epistemic default logic theory is a triple ⟨D, W, C⟩, where W is a set of consistent formulas (the facts) of the form Kα, D is a set of default rules used to complete W, and C is called the constraint set. A default is a rule of the form Kα : K̂β1, …, K̂βn / Bγ (n ≥ 1), where K and B are modality words meaning "know" and "believe" respectively, K̂ = ¬K¬, and α, β, γ are formulas in LK[](Σ, P); α is the prerequisite, β1, …, βn the justifications, and γ the consequent of the default.

In this multi-agent system frame, Δi = ⟨Di, Wi, Ci⟩ represents the constrained epistemic default theory of the i-th agent (1 ≤ i ≤ n), where a default inference rule authorizes an inference to a conclusion that is compatible with all the premises, even when one of the premises may have exceptions. Furthermore, the extension Ei of Δi = ⟨Di, Wi, Ci⟩ is regarded as the belief set of the i-th agent about the world, and the agent's knowledge of the world is characterized by this extension.

Definition 4. Let Δ = ⟨D, W, C⟩ be a constrained default theory and E be the extension of Δ on F; then the applicable default set D∆(E, F, Δ) of E in Δ is defined as:

D∆(E, F, Δ) = { Kα : K̂β1, …, K̂βn / Bγ ∈ D | Kα ∈ E, ¬(K̂β ∧ Bγ) ∉ (E ∪ F) }   (1)

Definition 5. Kiφ (φ ∈ LK[](Σ, P), i ∈ N) is called actual knowledge of the extension E of the constrained epistemic default theory Δ = ⟨D, W, C⟩ on F, where F is the supporting set of E, if and only if Kiφ ∉ E, ¬Kiφ ∉ E and F ∪ {Kiφ} ⊬ ⊥.

Corollary 1. Let Kiφ be actual knowledge of the extension E on F of the default theory Δ = ⟨D, W, C⟩; then Kiφ is actual knowledge of W.
Proof. Suppose Kiφ is not actual knowledge of W; then we have (Mc, w) |= Kiφ or (Mc, w) |= ¬Kiφ. According to the definition of extension, either Kiφ ∈ E or ¬Kiφ ∈ E, which contradicts the fact that Kiφ is actual knowledge of the extension E. Therefore Kiφ is actual knowledge of W.

Definition 6. Kiφ (φ ∈ LK[](Σ, P), i ∈ N) is called incompatible knowledge in the extension E of the constrained epistemic default theory Δ = ⟨D, W, C⟩ if and only if Ki¬φ ∈ E, Kiφ ∈ E and there exists a Kripke model M such that M |= Ki¬φ.

When Kiφ is incompatible knowledge in the extension E, we can further divide the situation into two cases: if Kiφ ∈ E and W → φ, then agent i meets the incompatible knowledge Kiφ in W; if Kiφ ∈ E and W ↛ φ, then agent i meets the incompatible knowledge Kiφ in E. Let Khφ be the knowledge that agent h transfers to agent i. According to the axioms in Definition 1, we have KiKhφ → Kiφ, that is, agent i obtains the knowledge Kiφ. These definitions provide an appropriate characterization of the agents' knowledge in a multi-agent system. In the following we prove the maximal applicable default sets when Δi = ⟨Di, Wi, Ci⟩ receives new knowledge from other agents in the MAS and the new knowledge is also actual knowledge.

Epistemic Extension of Incompatible Knowledge
Communication, the process of sharing information, is an obvious source of change in one's information state. As information is communicated in this dynamic process, knowledge and belief are by no means static; in this cognitive process agents update their knowledge on condition that the rules are satisfied with respect to a given set of constraints. In the following we discuss the behaviour of the applicable default sets D∆(Ei, Fi, Δ) and the extension Ei when Δi = ⟨Di, Wi, Ci⟩ obtains new actual knowledge. Because the formulas of W form the agent's fact set, which reflects the facts of the model world, contradictions continually arise with the new facts; we then have W ⊬ ¬Kφ, and therefore W ∪ {Kφ} is consistent. In a Kripke model M, the formula set EM(Kφ) which is consistent with Kφ in the extension corresponding to the agent is computed as EM(Kφ) = { Hi | Hi ∈ W, M |= Hi, M |= Kφ }, where M is ideal if and only if there does not exist another Kripke model M′ such that EM(Kφ) ⊂ EM′(Kφ); thus EM(Kφ) is maximal.

Lemma 1. Let Ei be the extension of Δi = ⟨Di, Wi, Ci⟩ and Kiφ be the actual knowledge that agent i receives from other agents; then there exists an extension E′i of Δ′i = ⟨D∆(Ei, Fi, Δi), Wi ∪ {Kiφ}, Ci⟩ on F′i such that Ei ⊆ E′i, Fi = F′i and D∆(E′i, F′i, Δ′i) = D∆(Ei, F′i, Δi) hold.

According to Lemma 1, the applicable default set of Δi = ⟨Di, Wi, Ci⟩ is still applicable in Δ′i = ⟨D∆(Ei, Fi, Δi), Wi ∪ {Kiφ}, Ci⟩, and Ei is a subset of the extension of Δ′i.

Corollary 2. Let Ei be the extension of Δi = ⟨Di, Wi, Ci⟩ on Fi and Kiφ be the new actual knowledge that agent i receives from any other agent; then there exists an extension E′i of Δ′i = ⟨Di, Wi ∪ {Kiφ}, Ci⟩ such that Ei ⊆ E′i and Fi ⊆ F′i hold, and we have D∆(Ei, Fi, Δi) ⊆ D∆(E′i, F′i, Δ′i).


Definition 7. Let E be the extension of Δ = ⟨D, W, C⟩ on F; the previous fact set of agent i is defined as E_Ki = { Kα | Kα ∈ E }.

Corollary 3. Let Ei be the extension of Δi = ⟨Di, Wi, Ci⟩ and Kiφ be the effectual knowledge that agent i receives from other agents, and let E′i be the extension of Δ′i = ⟨Di, E_Ki ∪ {Kφ}, Ci⟩ on Fi; then we have D∆(Ei, Fi, Δi) ⊆ D∆(E′i, F′i, Δ′i).

The extensions of both ⟨Di, Wi ∪ {Kiφ}, Ci⟩ and ⟨Di, E_Ki ∪ {Kiφ}, Ci⟩ are supersets of Ei when agent i receives new actual knowledge from other agents; therefore we obtain the following theorem.

Theorem. Let E be the extension of Δ = ⟨Dh, Wh, Ch⟩ on F and Khφ be the effectual knowledge that agent h receives; then the extension E′ on F′ must also be the extension of Δ′ = ⟨Dh, E_Kh ∪ {Khφ}, Ch⟩ as well as that of Δ″ = ⟨Dh, Wh ∪ {Khφ}, Ch⟩.

Proof. Suppose E″0 = W ∪ {Kφ} and F″0 = C. For any h ≥ 0 we can obtain the following formulas according to the above definitions:

E″h+1 = Th(E″h) ∪ { Bhγ | Khα : K̂hβ1, …, K̂hβn / Bhγ ∈ Dh, Khα ∈ E″h, ¬(K̂h(β1, …, βn) ∧ Bhγ) ∉ (E′ ∪ F′) }   (2)

F″h+1 = F″h ∪ { K̂hβ1, …, K̂hβn, Bhγ | Khα : K̂hβ1, …, K̂hβn / Bhγ ∈ Dh, Khα ∈ E″h, ¬(K̂h(β1, …, βn) ∧ Bhγ) ∉ (E′ ∪ F′) }   (3)

We prove E′ = ∪_{h=0}^{∞} E″h and F′ = ∪_{h=0}^{∞} F″h in two steps. First we prove E′ ⊇ ∪_{h=0}^{∞} E″h and F′ = ∪_{h=0}^{∞} F″h as follows:
i. Obviously we have E″0 = W ∪ {Kφ} ⊆ E_K ∪ {Kφ} = E′0 and F″0 = C = F′0.
ii. Suppose E″h ⊆ E′h, F″h = F′h, and let ψ ∈ E″h+1; in order to prove E″h+1 ⊆ E′h+1 we only need to verify ψ ∈ E′h+1. If ψ ∈ Th(E″h), then since E″h ⊆ E′h we have ψ ∈ Th(E′h) ⊆ E′h+1 and F″h+1 = F′h+1 = F′h. If Khα : K̂hβ1, …, K̂hβn / Bhγ ∈ Dh with Khα ∈ E″h and ¬(K̂h(β1, …, βn) ∧ Bhγ) ∉ (E′ ∪ F′), then ψ = Bhγ; since E″h ⊆ E′h we have Khα ∈ E′h and ¬(K̂h(β1, …, βn) ∧ Bhγ) ∉ (E′ ∪ F′), so ψ ∈ E′h+1 and F″h ∪ {K̂h(β1, …, βn), Bhγ} = F′h+1. Therefore E′ ⊇ ∪_{h=0}^{∞} E″h and F′ = ∪_{h=0}^{∞} F″h hold.

Second, we prove E′ ⊆ ∪_{h=0}^{∞} E″h and F′ = ∪_{h=0}^{∞} F″h. We first prove E ⊆ ∪_{h=0}^{∞} E″h and F′ ⊆ ∪_{h=0}^{∞} F″h:
i. Obviously E0 = W ⊆ W ∪ {Kφ} = E″0 and F″0 = C = F0 hold.
ii. Suppose Eh ⊆ E″h, Fh ⊆ F″h and ψ ∈ Eh+1; to obtain Eh+1 ⊆ E″h+1 we only need to verify ψ ∈ E″h+1. If ψ ∈ Th(Eh), then since Eh ⊆ E″h we have ψ ∈ Th(Eh) ⊆ E″h+1 and Fh+1 = Fh ⊆ F″h = F″h+1. If Khα : K̂hβ1, …, K̂hβn / Bhγ ∈ Dh with Khα ∈ Eh and ¬(K̂h(β1, …, βn) ∧ Bhγ) ∉ (E ∪ F), then ψ = Bhγ and {K̂h(β1, …, βn), Bhγ} ⊆ F. According to Eh ⊆ E″h we have Khα ∈ E″h. Because Eh ⊆ E″h and K̂h(β1, …, βn) ∧ Bhγ ∈ E ⊆ E′, if ¬(K̂h(β1, …, βn) ∧ Bhγ) ∈ E′ then E′ would be inconsistent, which is a contradiction. Therefore ¬(K̂h(β1, …, βn) ∧ Bhγ) ∉ E′ and Khα ∈ Eh ⊆ E″h, so ψ ∈ E″h+1. According to Corollary 2, E ⊆ E′, F ⊆ F′ and E′ ∪ F′ is consistent, so E′h ∪ F′h ∪ {K̂h(β1, …, βn), Bhγ} is consistent; thus ψ ∈ E″h+1 and Fh+1 = Fh ∪ {K̂h(β1, …, βn), Bhγ} ⊆ F″h ∪ {K̂h(β1, …, βn), Bhγ} = F″h+1.
According to the above proof, E ⊆ ∪_{h=0}^{∞} E″h holds.

Next we prove E′ ⊆ ∪_{h=0}^{∞} E″h:
i. As E ⊆ ∪_{h=0}^{∞} E″h, there exists some integer m > 0 such that E ⊆ E″m. According to E″0 = W ∪ {Kφ}, we can obtain E′0 = E_K ∪ {Kφ} ⊆ E″m ∪ {Kφ} = E″m.
ii. Suppose E′h ⊆ E″h+m and ψ ∈ E′h+1; in order to prove E′h+1 ⊆ E″h+m+1, we only need to verify ψ ∈ E″h+m+1. If ψ ∈ Th(E′h), then since E′h ⊆ E″h+m we have ψ ∈ Th(E″h+m), so ψ ∈ E″h+m+1. If Khα : K̂hβ1, …, K̂hβn / Bhγ ∈ Dh with Khα ∈ E′h and ¬(K̂h(β1, …, βn) ∧ Bhγ) ∉ (E′ ∪ F′), then since E′h ⊆ E″h+m we have Khα ∈ E″h+m and ¬(K̂h(β1, …, βn) ∧ Bhγ) ∉ (E′ ∪ F′); therefore ψ ∈ E″h+m+1. Consequently, for any h ≥ 0 there exists some integer m > 0 such that E′h ⊆ E″h+m.

Therefore E′ = ∪_{h=0}^{∞} E″h holds, and E′ is the extension of Δ″ = ⟨Di, Wi ∪ {Kiφ}, C′i⟩.
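As an illustration of the fixed-point construction in Eqs. (2) and (3), the following Java sketch computes a naive extension for a heavily simplified propositional setting: formulas are plain strings, the closure operator Th(·) and the modal operators K and B are omitted, and consistency of a justification is approximated by checking that its negation tag "!f" is absent. It conveys only the iterative flavour of extension building under these stated simplifications, not the full constrained epistemic construction.

import java.util.*;

class DefaultRule {
    String prerequisite; List<String> justifications; String consequent;
    DefaultRule(String p, List<String> j, String c) { prerequisite = p; justifications = j; consequent = c; }
}

public class ExtensionSketch {
    // Iterate E''_{h+1} until no default adds anything new (a fixed point is reached).
    static Set<String> extension(Set<String> facts, List<DefaultRule> defaults) {
        Set<String> e = new HashSet<>(facts);
        boolean changed = true;
        while (changed) {
            changed = false;
            for (DefaultRule d : defaults) {
                // crude consistency check: no justification is explicitly negated in E
                boolean consistent = d.justifications.stream().noneMatch(j -> e.contains("!" + j));
                if (e.contains(d.prerequisite) && consistent && e.add(d.consequent)) changed = true;
            }
        }
        return e;
    }

    public static void main(String[] args) {
        Set<String> w = new HashSet<>(Set.of("bird"));
        List<DefaultRule> d = List.of(new DefaultRule("bird", List.of("flies"), "flies"));
        System.out.println(extension(w, d)); // prints a set containing bird and flies
    }
}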

The extension of the constrained default theory ⟨Dh, Wh ∪ {Khφ}, Ch⟩ can thus be obtained from ⟨Dh, E_Kh ∪ {Khφ}, Ch⟩ when agent h obtains actual knowledge. Therefore the agent obtains the unique intuitive result by communicating with other agents and constantly receiving new actual knowledge. The information an agent acquires from another agent constrains the set of possible worlds according to the acquired information. However, since an agent perceives the possibility that other agents may be unreliable, it will not blindly believe all acquired information; the set of possible worlds according to information acquired from a particular agent may therefore differ from the set associated with its own belief state. When the agent's own belief set is unable to make an accurate subjective or objective judgment on φ, namely ¬Kφ ∈ E, and the agent obtains new knowledge K¬φ, that is, K¬φ is effectual for the agent, then the agent absorbs the new consistent knowledge K¬φ and updates its own belief set at the same time, so that K¬φ ∈ E.

In the following we discuss the variation law of the applicable default sets G∆(Ei, Fi, Δ) and the extension Ei when Δi = ⟨Di, Wi, Ci⟩ obtains incompatible knowledge. Because the formulas of W form the agent's fact set, which reflects the facts of the model world, contradictions continually arise with the new facts; we then have W ⊬ ¬Kφ, and therefore W ∪ {Kiφ} is consistent. The process of obtaining the applicable default set [6] with respect to Wi ∪ {Ki¬φ} is as follows:

i. Calculate the formula set Del_M(Ki¬φ)(Ei, Ki¬φ) of formulas in Ei that are inconsistent with Ki¬φ: Del_M(Ki¬φ)(Ei, Ki¬φ) = Ei − E_M(Ki¬φ);
ii. Calculate the set CONTRA of default rules that are no longer applicable: CONTRA = { d | d ∈ G∆(Ei, Fi, Δi), Ji(d) ∩ Del_M(¬φ)(Ei, Ki¬φ) ≠ ∅ };
iii. Calculate the set BLOCK of default rules in G∆(Ei, Fi, Δi) that become inapplicable through the deletion of the set Del_M(¬A)(Ei, Kiφ); then let D0 = ∅, and for any h ≥ 0 conclude that Dh+1 = { d | d ∈ G∆(Ei, Fi, Δi) − CONTRA − BLOCK, Ph(d) ∈ Th(Wi ∪ {Ki¬φ} ∪ Ci(D0 ∪ … ∪ Dh)) }.

Let Ei be the extension of Δi = ⟨Di, Wi, Ci⟩ on Fi and Kiφ be the effectual knowledge in Wi, and let Si be the extension of ⟨G∆(Ei, Fi, Δi), Wi ∪ {Kφ}⟩ on Ui; then there exists an extension E′i of Δ′i = ⟨Di, Wi ∪ {Kiφ}, Ci⟩ on F′i such that Si ⊆ E′i and Ui ⊆ F′i. According to the proof of the theorem, the extension of the constrained default theory ⟨Dh, Wh ∪ {Kiφ}, Ch⟩ can be deduced from ⟨Dh, E_Kh ∪ {Khφ}, Ch⟩ when agent h obtains actual knowledge. Therefore we can obtain the maximal applicable default rule set in G∆(Ei, Fi, Δi) on Wi ∪ {Kiφ} when agent i meets effectual knowledge. After obtaining the extension S of ⟨G∆(Ei, Fi, Δi), Wi ∪ {Kiφ}⟩ on U, we can get the extension E′i of ⟨Di, S_K ∪ {Kiφ}⟩ on F′, and consequently E′i must be the extension of ⟨Di, Wi ∪ {Kiφ}⟩ on F′i. Therefore agent i can communicate with any other agents and constantly receive effectual knowledge, constructing a cognitive process which accelerates the dynamic extension of knowledge.

Conclusion
A central aim of the field of Artificial Intelligence is to create computational agents that replicate the activities of the human mind, and multi-agent systems can be used to solve problems which are difficult or impossible for an individual agent or a monolithic system to solve. In this paper we explored the use of dynamic epistemic default logic to offer a natural way of specifying communication policies for the management of inter-agent exchanges in multi-agent systems. Default reasoning is one of the leading formalisms in Artificial Intelligence for non-monotonic reasoning, and modal logics of knowledge have been proposed as a formal tool for specifying and reasoning about multi-agent systems in a number of disciplines; both are means of representing structured information, and much research on, for instance, non-monotonic reasoning was motivated by the kind of issues involved in revising a knowledge set. We focused on obtaining the extension in a dynamic epistemic theory based multi-agent system whose background set constantly absorbs new information, then added constrained default sets to restrict the agents' inference behavior and obtained the extension of the constrained epistemic default logic theory via default reasoning. Constrained default logic is a default logic approach which enforces joint consistency, guarantees the existence of extensions, and enjoys, among others, the formal properties of semi-monotonicity and strong regularity. We also discussed the characteristics of dynamic updating when an agent meets incompatible knowledge in the logical framework of multi-agent systems and proved the related theorem for knowledge updating. The new method shows the usefulness of logical tools in the dynamic process of information acquisition.

Acknowledgment
This work is supported by the National Science Foundation for Post-doctoral Scientists of China (No. 20100480151).

References

[1] R. Reiter, A logic for default reasoning, Artificial Intelligence, vol. 13 (1980), p. 81-132.
[2] H. van Ditmarsch, W. van der Hoek and B. Kooi, Dynamic Epistemic Logic (Synthese Library), Springer Press (2006).
[3] Kaile S., Constraints on extensions of a default theory, Journal of Computer Science and Technology, vol. 16 (2001), p. 329-340.
[4] Antoniou, G., On the dynamics of default reasoning, International Journal of Intelligent Systems, vol. 17 (2002), p. 1143-1155.
[5] Johan van Benthem, Jan van Eijck and Barteld Kooi, Logics of communication and change, Information and Computation, vol. 204-11 (2006), p. 1620-1662.
[6] Antoniou, G., A tutorial on default reasoning, The Knowledge Engineering Review, vol. 13(3) (1998), p. 225-246.
[7] Wu, M., Zhou, C.L., et al., Reasoning on constrained epistemic default logic, Journal of Information and Computational Science, vol. 6(1) (2009), p. 227-233.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.605

Upgrading Water Distribution System based on GA-RBF Neural Network Model
Hongxiang Wang a, Wenxian Guo b
North China University of Water Resources and Electric Power, Zhengzhou, China
a [email protected], b [email protected]
Key words: Water distribution system; Calibration; Genetic algorithm; RBF neural network

Abstract. The hydraulic network calibration model minimizes the sum of the squares of the differences between the calibrated and initial pipe roughness estimates, under a set of constraints determined from a sensitivity matrix. The upgrading problem of the water distribution system is formulated once a satisfactory network model has been obtained. A Radial Basis Function (RBF) neural network based on a genetic algorithm (GA) is proposed to solve the model: the genetic algorithm is applied to optimize the parameters of the neural network and to overcome the over-fitting problem. The case study shows that good results are obtained with the GA-based RBF neural network.

Introduction
Urban water supply network renovation and expansion projects directly affect the investment cost of water supply, energy consumption, safety of supply and other issues. At present, most water companies have set up pressure monitoring points in the pipe network; water metering increasingly meets precision requirements, file management is becoming standardized, and pipe network maps and user meter-reading statistics are correspondingly complete. The network therefore already meets the conditions for establishing a microscopic model, which creates the opportunity to optimize the renovation and expansion of the water supply network [1]. Establishing a sufficiently accurate network model is the microscopic foundation of renovation and expansion. Artificial neural networks (ANN) have also been applied to parameter identification and classification, and significant research achievements have been obtained without complicated mathematical analysis. The Radial Basis Function (RBF) neural network, an important branch of neural networks, has the best approximation capability and the best overall performance. In this paper, an RBF neural network based on GA is introduced for upgrading water distribution systems [2,3].

Optimization Method
RBF neural network. Artificial neural networks are loosely based on the neural structure of the brain, which provides the ability to learn from the input data they are given and then apply this to unknown data; in effect they can generalize and associate unknown data. Many types of ANN have been proposed. The neural network model with a multi-hierarchic structure based on the Radial Basis Function (RBF), the most widely used ANN in hydrologic modeling, is used in this study [2]. RBF is a three-layer neural network. The input layer is made up of the signal source nodes; the second tier is the hidden layer, whose size depends on the needs of the problem; the third tier is the output layer, which responds to the role of the input patterns. The transformation from the input space to the hidden space is non-linear, while the transformation from the hidden layer to the output layer is linear. The transformation function of the hidden units is a radial basis function, and the network achieves the following mapping between input and output:

y_i = Σ_{j=1}^{q} w_{j,i} φ(‖x − c_j‖),  i = 1, 2, …, m   (1)


where x = (x1, x2, …, xm)ᵀ is the input vector, y_i is the output value of the i-th output unit, w_{j,i} is the weight from the j-th hidden unit to the i-th output unit, ‖·‖ is the Euclidean norm, φ(·) is the radial basis function, and c_j is the center vector of the j-th hidden unit. A radial basis function is a radially symmetric, non-negative, non-linear attenuation function with a local distribution center; the Gaussian function is the most common:

φ(v) = exp(−v²/2δ²)  (δ > 0, v ≥ 0)   (2)

Theory shows that in RBF networks the choice of the radial basis function φ(·) does not significantly affect network performance. RBF network design and training therefore focus on determining the network structure. The numbers of input and output units are determined by the samples; the parameters still to be determined include the number of hidden units q, the center vectors c, the width parameter δ (the Gaussian function is used as the radial basis function in this paper) and the network connection weights w_{j,i}. Determining these parameters is the key issue of the RBF network and must be accomplished through learning and training.

RBF neural network based on GA. Evolutionary computing has two major applications in neural network optimization: (1) optimizing the connection weights of each layer of the network; (2) optimizing the network topology. In this article GA is used to optimize the connection weights of each layer of the RBF network, and the structure is chosen by a gradual-increase method, which starts from a simple network and gradually increases the number of neurons in the hidden layer until the goal is fulfilled. The GA-RBF neural network proceeds in the following steps: (1) select the training samples; (2) design a near-optimal network architecture from the given input and output specimens; (3) encode the network; (4) generate an initial population of networks at random; (5) define the fitness function; (6) run an evolution process, which mainly consists of three steps: selection, crossover and mutation; (7) calculate the value of the fitness function for each individual; (8) if the best network found is acceptable or the maximum number of generations has been reached, stop the evolutionary process; otherwise, go to step (6) [4,5].

Upgrading Water Distribution System Model
The network model parameters must be calibrated before the microscopic model is established. The friction coefficients cannot be measured directly, so the calibration process is difficult; a non-linear optimization algorithm is used to adjust the roughness coefficients until the model predictions match the measurements closely enough. The objective function minimizes the difference between the unknown decision variables ε and the initial estimates ε*, and is expressed as follows [1]:

min f(ε) = Σ_{i=1}^{N} (ε_i − ε_i*)²   (3)

where N is the number of pipes in the network and i indexes the pipe sections, subject to

|p_mj − p_cj| ≤ Θ_j,  ∀j   (4)

where p_mj is the measurement of sensor j, p_cj is the calculated value at sensor j, and Θ_j is the maximum tolerable error. Assuming the pressure values at M points can be measured under K different operating states, and the model is built from initial rough estimates, the calculated nodal pressures are p_jk. Repeating this process over the N pipes yields a matrix of nodal pressure sensitivities to pipe roughness, called the pipe-roughness nodal pressure sensitivity matrix. Each matrix element a_{i,M(k−1)+j} is

a_{i,M(k−1)+j} = ∂p_jk / ∂ε_i* ≈ (p′_jk − p_jk) / Δε_i*   (5)

where i indexes the pipes, N is the total number of pipe segments, j indexes the sensors, M is the number of sensors, k indexes the operating states, p_jk is the pressure calculated with the assumed roughness ε_i*, p′_jk is the pressure calculated after the roughness change, and Δε_i* is the roughness change, whose value should be small enough.
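As a sketch of how the sensitivity matrix of Eq. (5) could be assembled, the following Java fragment approximates each entry by a forward finite difference of the simulated sensor pressures with respect to one pipe's roughness. The HydraulicSolver interface is a hypothetical stand-in for a hydraulic simulation (for example an EPANET-style solver); it is not part of the paper's code.

public class SensitivityMatrix {
    // Hypothetical solver: returns p[j][k], the pressure at sensor j under operating state k.
    interface HydraulicSolver { double[][] solvePressure(double[] roughness); }

    static double[][] build(double[] eps, int M, int K, double dEps, HydraulicSolver solver) {
        int N = eps.length;
        double[][] a = new double[N][M * K];
        double[][] base = solver.solvePressure(eps);          // p_jk at the nominal roughness
        for (int i = 0; i < N; i++) {
            double[] perturbed = eps.clone();
            perturbed[i] += dEps;                             // small change in pipe i only
            double[][] p = solver.solvePressure(perturbed);   // p'_jk after the change
            for (int k = 0; k < K; k++)
                for (int j = 0; j < M; j++)
                    a[i][M * k + j] = (p[j][k] - base[j][k]) / dEps;  // Eq. (5)
        }
        return a;
    }
}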

Yanwen Wu

607

Case Study
Figure 1 shows the layout of the pipe network area to be expanded; the initial water levels of the two towers in the figure are set to 26 m. Pipe head loss is calculated using the Hazen-Williams formula. According to the Beijing construction budget manual, the pipe costs are given in Table 1. The case considers the highest water use, with the fire-flow condition used for verification; the required service pressure is set to 15 m. Pipes [1], [5] and [10] are existing pipes considered for rehabilitation; pipes [13], [14], [15], [16] and [17] are pipelines to be built.

Table 1 Pipe options for the case study design
Diameter (m):  0.2     0.25    0.3     0.35    0.4     0.45    0.5     0.6
Fee (Yuan/m):  283.16  389.28  538.49  626.57  729.69  832.42  958.5   1214.1

Figure 1 Small pipe network for the case study (legend: [pipe ID] - diameter, pipe length - node flow, coefficient - ground elevation)

The GA-RBF programs are compiled in MATLAB 7.0. After many experiments and comparisons, the GA-RBF parameters are as follows: the structure of the RBF neural network is 6-16-1; the population size is 25; the number of generations is 500; the mutation rate is 0.05 and the crossover rate is 0.75. The results are given in Tables 2 and 3.

Table 2 Operation data for the optimum scheme
Pipe ID  Q (L/s)   Node ID  Node pressure (m)
[1]      210.76    2        32.09
[2]      65.09     3        29.51
[3]      17.56     4        25.30
[4]      2.49      5        26.78
[5]      -64.54    7        25.77
[6]      2.37      8        28.24
[7]      5.53      9        25.68
[8]      6.35      10       22.39
[9]      -13.40    11       25.08
[10]     52.82     12       29.36
[11]     18.68
[12]     8.62
[13]     32.25
[14]     50.85
[15]     14.60
[16]     -17.15
[17]     -38.15

Table 3 The best scheme
Total fees: 8951004
Pipe 1: change to 600 mm; Pipe 5: stay; Pipe 10: stay; Pipe 13: 250 mm; Pipe 14: 300 mm; Pipe 15: 200 mm; Pipe 16: 200 mm; Pipe 17: 250 mm


Conclusion
Upgrading a water network is an optimization problem; when the size of the network is large, high computing speed is required. In this paper a GA-RBF model was proposed to solve the upgrading problem of the water distribution system, with GA applied to optimize the RBF neural network's initial weights, which avoids blind search and achieves global optimization as fast as possible. The case study results show that the forecasting accuracy and convergence speed of this model are greatly improved; moreover, it can meet the needs of large-scale renovation and expansion calculations for pipe network optimization.

Acknowledgment
This work was financially supported by North China University of Water Resources and Electric Power funded projects (200910) and the Scientific Research Fund for Returned Overseas Researchers.

References
[1] R. Demoyer and L. B. Horwitz, Macroscopic distribution-system modeling, Journal of American Water Works Association, pp. 377-380 (1975)
[2] M. Cittorio, Genetic evolution of the topology and weight distribution of networks, IEEE Trans. on Neural Networks, Vol. 5 (1994), p. 39-53
[3] J. Kennedy, R.C. Eberhart and Y.H. Shi, Swarm Intelligence (Morgan Kaufmann Publishers, San Francisco 2001)
[4] C. Neely, P. Weller and R. Dittmer, Is technical analysis in the foreign exchange market profitable? A genetic programming approach, J. Financial Quant. Anal., Vol. 32 (1997), p. 405-426
[5] D. Z. Xia, X. F. Wang, L. Zhou, et al., Short-term load forecasting using radial basis function networks and expert system, Journal of Xi'an Jiaotong University, Vol. 35 (2001), p. 331-334

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.609

Application of Artificial Neural Network Model based on Improved PSO in Water Supply Systems
Hongxiang Wang a, Wenxian Guo b
North China University of Water Resources and Electric Power, Zhengzhou, China
a [email protected], b [email protected]
Key words: Artificial Neural Network; Water Distribution System; Microscopic Model; Macroscopic Model

Abstract. Parameter calibration, data collection and simulation of control elements are used to improve the accuracy of the microscopic model. To overcome the shortcomings of the macroscopic model, a theoretical-empirical equation is adopted. An artificial neural network based on the PSO method is introduced to improve the simulation ability of the water distribution system model from both the microscopic and the macroscopic model. There are two hidden layers with a maximum of 64 nodes per layer in the model, and the Particle Swarm Optimization (PSO) algorithm is implemented to optimize the node numbers of the hidden layers. The study indicates that the artificial neural network combined with the improved PSO method is an attractive alternative to the conventional regression analysis method for modeling water distribution systems.

Introduction
A computerized network model that uses as little flow and pressure data and network topology as possible to infer the status of the remaining nodes is a prerequisite for optimal scheduling. How to rapidly and accurately simulate the working conditions of water distribution systems using a limited number of known parameters has been a major research topic in the water industry. Due to the lack of available data on domestic water distribution systems, the macroscopic model is used to model the water distribution system in this study. Based on parallel processing, artificial neural networks deliver computational efficiency and have been widely applied to problems involving prediction and simulation [1]. In this study, a neural network is used to model a water distribution system, and a key step of the architecture design is choosing the optimal number of nodes for the hidden layers. The improved Particle Swarm Optimization (PSO) algorithm is an evolutionary method [2], and it is utilized here to search for the node numbers in the hidden layers of the neural network. The study indicates that the artificial neural network combined with the improved PSO method is an attractive alternative to the conventional regression analysis method for modeling water distribution systems.

The Improved Particle Swarm Optimization
The Particle Swarm Optimization (PSO) algorithm [2] is similar to the Genetic Algorithm but has no selection or crossover operators. It is initialized with a population of random individuals called particles. Each particle i is randomly positioned in the D-dimensional search space and updates its position according to two extreme values: one is the best position it has visited so far, referred to as pBest_i, and the other is the best position found by the entire population, referred to as gBest. The standard PSO was originally developed for continuous function optimization; however, the location of monitoring stations is a combinatorial optimization problem, in which it is difficult to update the particle's velocity. A modified PSO incorporating the crossover and mutation operations of the Genetic Algorithm, called improved PSO, is therefore proposed for the problem, using the same coding method proposed in [3]. Considering that each particle i updates its velocity and position by tracking the two extreme values gBest and pBest_i in every iteration, we introduce the crossover and mutation operations of GA into PSO, forming a hybrid of PSO and GA: the current position of each particle i is crossed over with gBest and with pBest_i successively (called global crossover and local crossover, respectively), and after crossover the newly produced position is mutated randomly to avoid local optimal solutions. A minimal sketch of one such update step follows.
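The Java sketch below illustrates one update step of the improved PSO as described above, with integer-coded positions, one-point crossover against gBest and pBest, and random mutation. The coding, fitness evaluation and repair of infeasible positions are problem-specific and omitted; this is an illustration of the operators, not the authors' implementation.

import java.util.Random;

public class ImprovedPso {
    static final Random RNG = new Random(42);

    // One improved-PSO step: global crossover, then local crossover, then mutation.
    // Assumes positions have dimension >= 2 and entries drawn from {0, ..., nValues-1}.
    static int[] step(int[] position, int[] pBest, int[] gBest, double mutationRate, int nValues) {
        int[] x = crossover(position, gBest);     // global crossover with the swarm best
        x = crossover(x, pBest);                  // local crossover with the particle's own best
        for (int i = 0; i < x.length; i++)        // random mutation to escape local optima
            if (RNG.nextDouble() < mutationRate) x[i] = RNG.nextInt(nValues);
        return x;
    }

    static int[] crossover(int[] a, int[] b) {    // GA-style one-point crossover
        int cut = 1 + RNG.nextInt(a.length - 1);
        int[] child = a.clone();
        System.arraycopy(b, cut, child, cut, a.length - cut);
        return child;
    }
}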


Artificial Neural Network Model based on Improved PSO
Artificial neural networks are loosely based on the neural structure of the brain, which provides the ability to learn from the input data they are given and then apply this to unknown data; in effect they can generalize and associate unknown data. Many types of ANN have been proposed, and it is reported that a neural network with appropriately selected inputs improves the simulation of real time series when significant non-linearity is present in the system being modeled [3]. A three-layer neural network with two hidden layers is used [4]. The input layer is made up of the signal source nodes; the second tier is the hidden layer, whose size depends on the needs of the problem; the third tier is the output layer, which responds to the role of the input patterns. The transformation from the input space to the hidden space is non-linear, while the transformation from the hidden layer to the output layer is linear. The transformation function of the hidden units is a radial basis function, and the network achieves the following mapping between input and output:

y_i = Σ_{j=1}^{q} w_{j,i} φ(‖x − c_j‖),  i = 1, 2, …, m   (1)

where x = (x1, x2, …, xm)ᵀ is the input vector, y_i is the output value of the i-th output unit, w_{j,i} is the weight from the j-th hidden unit to the i-th output unit, ‖·‖ is the Euclidean norm, φ(·) is the radial basis function, and c_j is the center vector of the j-th hidden unit. A radial basis function is a radially symmetric, non-negative, non-linear attenuation function with a local distribution center; the Gaussian function is the most common:

φ(v) = exp(−v²/2δ²)  (δ > 0, v ≥ 0)   (2)

Theory shows that the choice of the radial basis function φ(·) does not significantly affect network performance; network design and training therefore focus on determining the network structure. The numbers of input and output units are determined by the samples; the parameters still to be determined include the number of hidden units q, the center vectors c, the width parameter δ (the Gaussian function is used in this paper) and the network connection weights w_{j,i}. Determining these parameters is the key issue of the neural network and must be accomplished through learning and training.

Macroscopic Model
Theoretical-empirical model. The macroscopic model is generally applicable under proportional loading. The theoretical-empirical model [1] can better overcome the shortcomings of both the macroscopic and microscopic models. For a given water supply system with known nodal flows, the nodal water pressure is a function only of the amounts supplied: in the water network, the head loss h of a pipe is related to its flow Q by h = SQ², and the nodal pressure equals the pressure at the reference point minus the head loss along the path from the reference point to the node. Therefore the nodal water pressure can be expressed empirically as a function of the supplies as follows:

H_i = C_i + α (Σ_{j=1}^{n_s} C_{i,j} Q_j)² = C_i + (Σ_{j=1}^{n_s} α_i C_{i,j} Q_j)²   (3)

where C_i, α and C_{i,j} are undetermined coefficients, Q_j are the water supplies at the supply points, n_s is the number of supply points, and H_i is the nodal water pressure. A small sketch of evaluating Eq. (3) is given below.
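A minimal Java sketch of evaluating Eq. (3), assuming the coefficients C_i, α_i and C_{i,j} have already been fitted by regression on measured data (the names c0, alpha, c and q are illustrative, not from the paper):

public class MacroModel {
    // Eq. (3): H_i = C_i + (sum_j alpha_i * C_ij * Q_j)^2
    static double nodalHead(double c0, double alpha, double[] c, double[] q) {
        double s = 0.0;
        for (int j = 0; j < q.length; j++) s += alpha * c[j] * q[j]; // weighted supply flows
        return c0 + s * s;
    }
}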


Instability analysis based on neural network. Predicting the flow and pressure of some key nodes is still a complex task, and sometimes the prediction error reaches 50% or more, which cannot meet the needs of condition simulation. The many uncertainties in the pipe network lead to inaccurate model predictions; determining the instability of the network model results is therefore an important issue, which the following paragraphs elaborate. The mathematical model of water distribution uses the laws of conservation of mass and energy. The model relates the network nodal pressures and pipe flows and can be expressed as the following equation [4]:

z = g(x) + ω   (4)

where z is the measurement vector, g(x) is the non-linear function describing the system, x is the state vector, and ω is an unknown vector. Because ω is unknown but cannot be ignored, the above equation becomes the problem of minimizing the difference between the actual and calculated values:

min_{x̂} E(x̂) = ½ (z − g(x̂))ᵀ W (z − g(x̂))   (5)

where x̂ is the estimate of the state vector, E(·) is the objective function, and W is the measurement weight matrix. A three-layer neural network can quickly solve the following linearized network equation:

Δz(k) = J(k) Δx̂(k) + r   (6)

The corresponding optimization problem is as follows [5]:

min_{Δx̂(k)} E(Δx̂(k)) = ½ (Δz(k) − J(k) Δx̂(k))ᵀ W (Δz(k) − J(k) Δx̂(k))   (7)

Δz(k) = z − g(x̂(k))   (8)

where J(k) is the Jacobian matrix at the estimate x̂(k), r is the residual vector, k is the step of the estimation process, and Δx̂(k) is the correction vector of the k-th estimation step. A simple neural network can be used to build this system. The data-driven neural network model contains one-day and one-week delay components, since analysis of the measurement data shows that these two lagged effects are obvious; other lagged pressure or flow values that exhibit a high degree of linkage with the model inputs need not be considered. Experiments show that adding too many inputs not only adds to the burden of the neural network but also increases its simulation error [5,6].
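As a sketch of the correction step implied by Eqs. (6) and (7), the following Java fragment assembles and solves the weighted normal equations (JᵀWJ)Δx̂ = JᵀWΔz, taking W as a diagonal weight vector and using naive Gaussian elimination. A practical estimator would add pivoting or regularization and iterate until Δz is small; this is an assumption-laden illustration of the least-squares step, not the neural-network solution used in the paper.

public class WlsStep {
    // jac is m x n, w holds the m diagonal weights, dz is the m-vector z - g(x_hat).
    static double[] correction(double[][] jac, double[] w, double[] dz) {
        int n = jac[0].length, m = jac.length;
        double[][] a = new double[n][n + 1];              // augmented normal equations
        for (int r = 0; r < m; r++)
            for (int i = 0; i < n; i++) {
                for (int k = 0; k < n; k++) a[i][k] += jac[r][i] * w[r] * jac[r][k];
                a[i][n] += jac[r][i] * w[r] * dz[r];
            }
        for (int p = 0; p < n; p++)                       // forward elimination (no pivoting)
            for (int r = p + 1; r < n; r++) {
                double f = a[r][p] / a[p][p];
                for (int k = p; k <= n; k++) a[r][k] -= f * a[p][k];
            }
        double[] dx = new double[n];                      // back substitution
        for (int i = n - 1; i >= 0; i--) {
            double s = a[i][n];
            for (int k = i + 1; k < n; k++) s -= a[i][k] * dx[k];
            dx[i] = s / a[i][i];
        }
        return dx;
    }
}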

Results and Conclusion
The network used in this article is shown in Figure 1; the friction coefficient and length of each pipe, the flow of each node, the ground elevations, the time variation coefficients of the nodes and the pipe diameters are listed in the software. The pump curve is given by the formula Head = 106.67 − 0.001185 · (Flow)².


Figure 1. Water distribution network model

In this study, the present pump discharges, as well as the first regular and first seasonal lags of the hourly pressure series, are taken as inputs of the artificial neural network. As mentioned previously, 48 data groups are kept for testing. Conventional regression analysis is also carried out in order to compare against the predictions of the artificial neural network model. In Fig. 2, the nodal pressure values predicted by the artificial neural network model based on improved PSO are compared with the actual pressure values; the results from the ANN based on improved PSO are denoted 'N' and the results from the ANN model are denoted 'R'. As can be seen from the results, the ANN based on improved PSO provides more accurate pressure values than the ANN model.

Figure 2. Actual pressure versus modeled values at nodes 2 (a) and 7 (b): pressure (MPa) against time (hour) over 48 hours for the actual data, Model (N) and Model (R)


Acknowledgment
This work was financially supported by North China University of Water Resources and Electric Power funded projects (200910) and the Scientific Research Fund for Returned Overseas Researchers.

References
[1] R. Demoyer and L. B. Horwitz, Macroscopic distribution-system modeling, Journal of American Water Works Association, (1975), p. 377-380
[2] M. Lv, Multi-objective Mixed Directly-optimal Dispatch on Large Scale Water Supply System (Harbin Institute of Technology, PhD Thesis, 1998)
[3] B. Gabrys and A. Bargiela, Neural networks based decision support in presence of uncertainties, Journal of Water Resources Planning and Management, Vol. 125 (1999), p. 272-280
[4] J. J. Shi, Reducing prediction error by transforming input data for neural networks, Journal of Computing in Civil Engineering, Vol. 14 (2000), p. 109-116
[5] G. Lachtermacher and J. D. Fuller, Back propagation in time series forecasting, Journal of Forecasting, Vol. 14 (1995), p. 381-393
[6] Q. Wang and M. Heller, Hybrid Box-Jenkins and neural network forecasting of potable water demand, in Proceedings of the Artificial Neural Networks in Engineering, Missouri (1996)

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.614

Adaptive Control of Vertical Stage of A Robot Arm for Wafer Handling
Chi Zhang 1,a, Guangzhou Zhao 1,b and F.P.M. Dullens 2
1 Ningbo Institute of Technology, Zhejiang University, China
2 Department of Mechanical Engineering, Technische Universiteit Eindhoven, Holland
a [email protected], b [email protected]
Key words: Adaptive Control; Vertical Axis Control; Robot Arm; Wafer Handling

Abstract. In this paper, a 4 degree-of-freedom robot arm for wafer handling is proposed. The dynamic model of the vertical axis, which includes a simplified friction model, is derived; the friction model is fitted to measured friction data. An adaptive controller is designed for this model, and the influence of the control parameters as well as the learning rate is studied. In the experimental stage, the adaptive control parameter and learning rate are tuned. Finally the performance of the adaptive controller is compared to a PD controller, and it is shown that the adaptive controller performs better.

Introduction
In the semiconductor industry, front-end processing consists of a multiple-step sequence of photographic and chemical processing steps during which electronic circuits are gradually created on a wafer made of pure semiconducting material. These processing environments usually involve high temperatures and toxic chemicals, in which humans cannot stay and carry out operations; a robot is thus a better substitute for moving the wafer from one production process to another. Many wafer handling robots, such as frog-leg robots, SCARA (Selective Compliant Articulated Robot Arm) robots and four-bar linkage robots, are available on the market [1]. A SCARA robot can transfer wafers in a narrow space, and its trajectory planning is easy and flexible, which has resulted in its wide application in the semiconductor industry [2-4]. A four-DoF robot arm with 3 horizontal links and 1 vertical axis was proposed in [5]; the robust decoupling control of the 3 links was discussed, but the control of the vertical axis was not addressed. A traditional PD controller can be used to control the vertical axis; however, the friction changes with velocity, and a PD controller cannot adapt to these changes to obtain optimal performance. Furthermore, when the robot arm picks up and puts down wafers, the parameters of the wrist have to be updated during operation. Adaptive control can be used to estimate the mass of the wafer on the vertical stage. Many adaptive control methods can be found in the literature; two well-known concepts are Model Reference Adaptive Control (MRAC) and self-tuning control [6,7]. Both concepts use a servo controller which contains the adjustable parameters. MRAC updates the model parameters based on the error between the reference position and velocity and the measured position and velocity; the objective of the adaptation is to make the tracking error converge to zero. Self-tuning control works as follows: the parameter estimator collects the input and output of the plant, and an input-output fit is made based on a least-squares method. Since self-tuning control is not able to estimate the mass quickly and accurately, MRAC is chosen in this paper.

Mechanical Structure and Vertical Axis Modeling
Mechanical Structure. Fig. 1 shows the mechanical structure of the proposed wafer handling SCARA robot. The SCARA has 3 links, named shoulder, elbow and wrist respectively. The wafer is put on the panel of the end-effector, which works as the wrist link; the wafer adheres to the panel by the air pressure generated by pumping air through the holes drilled in the panel. Three permanent magnet brushless DC motors are installed at the joints of the links and rotate the links to


pick up a wafer and transfer it from one magazine to another. The SCARA can be moved in the vertical direction by the rotational movement of an AC motor, with the motion transmitted to the vertical axis via a belt and a ball screw. The vertical axis has a working range of 0.88 m and an encoder resolution of 0.444 × 10⁻⁶ m.

Fig. 1 Mechanical structure of the robot arm: (a) shoulder, elbow and wrist; (b) vertical axis

Vertical Axis Dynamic Modeling. The model of the vertical axis is assumed to be of the form:

m z̈ = u − mg − F_f   (1)

where g is the gravitational acceleration, F_f is the friction force and u is the input force of the motor. The relation between digital counts sent to the driver and the generated force is determined with the system in closed loop and not moving: the generated force then compensates the gravity force, so m·g = 20.5 · 9.81 = k_dac2force · DAC, which results in k_dac2force = 1.

Fig. 2 Experimental friction data (blue) and simplified model (dotted)

The friction parameters are determined by moving the vertical axis at constant velocities and calculating the average DAC output. The obtained friction curve and the fitted model are shown in Fig. 2. The peak in the friction force at low velocities is due to the Stribeck effect. A fit is made using a simplified friction model:

F_f = f_c · sign(ż) + f_v · ż   (2)

where f_c and f_v are the Coulomb and viscous friction coefficients respectively; a least-squares fit of this form is sketched below.
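A minimal Java sketch of fitting the two friction parameters of Eq. (2) by ordinary least squares from constant-velocity measurements (v_i, F_i), where the data arrays are assumed to come from the runs described above; as in the text, the Stribeck peak is deliberately not modeled.

public class FrictionFit {
    // Fits F = fc*sign(v) + fv*v by solving the 2x2 normal equations in closed form.
    static double[] fit(double[] v, double[] f) {
        double sss = 0, ssv = 0, svv = 0, bs = 0, bv = 0;
        for (int i = 0; i < v.length; i++) {
            double s = Math.signum(v[i]);
            sss += s * s;  ssv += s * v[i];  svv += v[i] * v[i];
            bs  += s * f[i];  bv += v[i] * f[i];
        }
        double det = sss * svv - ssv * ssv;        // determinant of the normal equations
        double fc = (bs * svv - ssv * bv) / det;   // Coulomb term
        double fv = (sss * bv - ssv * bs) / det;   // viscous term
        return new double[] { fc, fv };
    }
}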


Adaptive Control Design
The model is assumed to be of the form of Eq. 1 and Eq. 2. If the mass and friction were known exactly, the following feedback control could be used to achieve perfect tracking:

u = m(z̈_ref + g − 2λė − λ²e) + f_c sign(ż) + f_v ż   (3)

where λ is a positive control parameter. Using this controller leads to exponentially convergent error dynamics:

ë + 2λė + λ²e = 0   (4)

where e = z − z_ref. However, the mass m and the friction parameters f_c and f_v are not known exactly; therefore m̂, f̂_c and f̂_v are used instead, which leads to the error dynamics:

m ṡ + λ m s = φ₁v₁ + φ₂v₂ + φ₃v₃   (5)

where s is the combined tracking error, v_i are signal quantities and φ_i are the differences between the estimated parameters and the true values:

s = ė + λe   (6)

v = (v₁, v₂, v₃)ᵀ = (z̈_ref + g − 2λė − λ²e, sign(ż), ż)ᵀ   (7)

φ = (m̂ − m, f̂_c − f_c, f̂_v − f_v)ᵀ   (8)

The update law of the parameters is chosen to be:

φ̂̇ = −γ v s   (9)

where γ = [γ₁ γ₂ γ₃]ᵀ is called the learning rate. Together with this update law, stability can be proven by the Lyapunov theorem. The positive definite Lyapunov function is:

V = ½ m s² + (1/2γ) φᵀφ   (10)

and its derivative becomes:

V̇ = m s ṡ + (1/γ) φᵀφ̇ = m s ṡ + (1/γ) φᵀ(−γ v s) = m s ṡ − φ₁v₁s − φ₂v₂s − φ₃v₃s = m s ṡ − (m ṡ + λ m s)s = −λ m s² ≤ 0   (11)

where the true parameters are constant, so φ̇ = φ̂̇; if γ₂ = γ₃ = 0, i.e. no adaptation of the friction parameters, the remaining term cancels in the same way. Since V̇ is smaller than or equal to zero, stability is proven. In order to prove that s converges to zero, Barbalat's lemma is used; the second derivative of the Lyapunov function is:

V̈ = −2λ s (φᵀv − λ m s)   (12)

Since s and φ were shown to be bounded earlier and V̈ is bounded, V̇ is continuous in time, and s goes to zero as t goes to infinity. Since s converges to zero, e and ė both converge to zero. Next, parameter convergence is discussed. From Eq. 9 it follows that φ̂̇ becomes zero and thus φ̂ becomes constant. Substituting s = ṡ = 0 into Eq. 5 for t → ∞ finally gives 0 = φᵀv, so convergence of the parameter estimates is guaranteed only if v is persistently exciting. The schematic structure of the controller is given in Fig. 3 below; before the simulation study, a discrete-time sketch of the control and update laws is given.
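The following Java sketch shows one discrete-time pass through Eqs. (3), (6), (7) and (9), with the parameter update Euler-integrated over a sample time dt. The numeric values mirror those mentioned in the paper (λ, γ, the initial mass estimate and the friction constants), but dt is an assumed sample time, and the signal filtering, saturation and DAC scaling of the real setup are omitted; this illustrates the adaptive law rather than the experimental controller.

public class AdaptiveVerticalAxis {
    double lambda = 200, g = 9.81, dt = 0.001;   // dt is an assumed sample time
    double[] gamma = {200, 0, 0};                // friction adaptation off (gamma2 = gamma3 = 0)
    double mHat = 21.0, fcHat = 100, fvHat = 1200;

    // e, eDot: position/velocity tracking error; zDot: measured velocity; zRefDdot: ref. acceleration
    double control(double e, double eDot, double zDot, double zRefDdot) {
        double s = eDot + lambda * e;                                     // Eq. (6)
        double[] v = { zRefDdot + g - 2*lambda*eDot - lambda*lambda*e,    // Eq. (7)
                       Math.signum(zDot), zDot };
        double u = mHat * v[0] + fcHat * v[1] + fvHat * v[2];             // Eq. (3) with estimates
        mHat  -= gamma[0] * v[0] * s * dt;                                // Eq. (9), Euler-integrated
        fcHat -= gamma[1] * v[1] * s * dt;
        fvHat -= gamma[2] * v[2] * s * dt;
        return u;
    }
}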

Fig. 3 Adaptive control diagram

Simulation Results
Several simulations are done to study the adaptive control. The maximal tracking error can be increased or decreased by adjusting the control parameter λ. Increasing the learning rate γ results in faster adaptation and therefore smaller tracking errors; however, the estimation also becomes more sensitive to noise. For a control parameter λ = 10 and γ = 30, the maximal error is 0.006 m for a step size of 0.01 m, and the estimated mass converges to the real value of 20.5 kg. For λ = 200 the mass is estimated wrongly, since the tracking error is kept too small: the system's mass of 20.5 kg is estimated as 21.3 kg, and in the case of a payload of 0.2 kg the estimate is 21.5 kg. This means that the mass of the payload can still be estimated accurately. It can be concluded that tuning the parameters γ and λ is a trade-off between a small tracking error and fast, accurate estimation. To study the influence of friction, further simulations are done. In Fig. 4 the tracking error and the mass estimation are shown for three simulations with different friction constants; the friction constants of the system are f_c = 100 and f_v = 1200, and the reference trajectory is a second-order step response of 0.01 m. One can see that in the case of wrong or absent friction compensation it takes long before the estimated mass converges to a fixed value; furthermore, the mass does not converge to the real value in the case of very large Coulomb friction parameter errors. With a wrong viscous friction parameter, the estimated mass converges to the real value only when the velocity has become small. Obviously, faster convergence is obtained for a better fit of the model. Next, tuning of the control parameters is done on the real system.


Fig. 4 Position error and mass estimation for a step response of 0.01 m

Experimental Results
The algorithm is implemented and the controller is tested. We start with a low adaptation rate, and the control gain λ is increased until resonances can be heard during the motion; the final value is λ = 400. The adaptive controller can move with a maximal velocity of 180 mm/s and a maximal acceleration of 1800 mm/s². For fast and accurate adaptation the error may not be too small, so the control gain is decreased until the system still follows the reference trajectory well. A fast adaptation rate is needed to use the adaptation online when picking up a wafer, but when the adaptation gain is too large the estimation becomes very sensitive to disturbances. Several tests are performed to check whether the mass can be estimated accurately for different values of γ; finally γ = 200 and λ = 200 are used. The results are shown in Table 1. The mass is not estimated with any repeatability: the average of the estimated payload is 2.2 kg. The large variations in the estimation are caused by the friction model not being accurate enough. Including the Stribeck effect in the friction model, for instance through the LuGre friction model, would improve the mass estimation. However, since the friction is not constant at different positions, the mass of a wafer cannot be estimated accurately enough; the mass of a magazine might be estimated roughly, so that the decoupling of the three coupled links can be improved. For now the adaptation rate is set to zero with an initial estimate of 21 kg and the control parameter λ = 400.

Table 1 Mass estimation (kg) with and without payload
Motion   No payload   With 2.3 kg payload
1        16.5         18.1
2        16.4         20.4
3        19.1         19.8
4        18.0         19.8
5        18.4         16.8
6        16.8         22
7        16.6         20.1
Mean     17.4         19.6

In order to compare the performance of the adaptive controller, a PD controller is designed. As shown in Fig. 5, the performance of the adaptive controller is better than that of the PD controller, since both the dynamic error and the static error are smaller. The PD controller can perform a motion with a maximal velocity of 90 mm/s and a maximal acceleration of 450 mm/s². The static error of the PD controller can be reduced to zero by enabling the static integrator.


Fig. 5 Position error comparison between PD and adaptive control during the motion

Conclusions
In this paper, the dynamic model of the vertical axis, which includes a simplified friction model, is derived. The friction model is fitted to the measured friction data. An adaptive controller is designed for this model, and the influence of the control parameters as well as the learning rate is studied. A higher control gain results in a smaller error but in a worse and slower estimation of the mass. Increasing the learning rate leads to faster adaptation, but the estimation becomes more sensitive to noise and other disturbances. In the experimental stage, the control parameter and the learning rate are tuned. Finally, the performance of the adaptive controller is compared to a PD controller, and it is shown that the adaptive controller performs better.

References
[1] M. Cong and D. Cui, "Wafer Handling Robot and Applications", Recent Patents on Engineering, Vol. 3, No. 3, pp. 170-177, 2009.
[2] M. W. Spong, S. Hutchinson and M. Vidyasagar, Robot Modeling and Control, New Jersey: John Wiley & Sons, Inc., 2006.
[3] H. A. ElMaraghy and B. Johns, "An Investigation Into the Compliance of SCARA Robots. Part I: Analytical Model", Journal of Dynamic Systems, Measurement, and Control, Vol. 110, No. 1, pp. 18-23, March 1988.
[4] Meng Joo Er, Moo Teng Lim and Hui Song Lim, "Real-time hybrid adaptive fuzzy control of a SCARA robot", Microprocessors and Microsystems, Vol. 25, 2001.
[5] C. Zhang, G. Zhao, Y. Xiao and X. Yang, "Decoupling Robust Control of Three-Link Direct Drive Robot Arm", Proceedings of the 2nd International Conference on Intellectual Technique in Industrial Practice, Sept. 2010.
[6] C. Canudas de Wit and P. Lischinsky, "Adaptive Friction Compensation with Partially Known Dynamic Friction Model", Int. J. of Adaptive Control and Signal Processing, Vol. 11, No. 1, pp. 65-80, 1997.
[7] D. S. Bayard, "A Forward Method for Optimal Stochastic Nonlinear and Adaptive Control", IEEE Trans. on Automatic Control, Vol. 36, No. 9, pp. 1046-1053, 1991.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.620

Research on Computer Virtual Experiment System
Shoucheng Ding 1, 2, a
1 College of Electric and Information Engineering, Lanzhou University of Technology, Lanzhou, Gansu, China
2 Key Laboratory of Gansu Advanced Control for Industrial Processes, Lanzhou, Gansu, China
a email: [email protected]

Key words: Virtual experiment; Computer; Java; Database

Abstract. This paper studies the application of JSP technology to a computer virtual laboratory. The background programs were written in the Java language, and the system website uses an ORACLE 10g database. The system has three modules: administrator, teacher and student. The administrator mainly manages the personnel changes of students and teachers. The teacher is primarily responsible for producing and uploading virtual experiments. Students can visit the system, carry out a virtual experiment according to the teacher's requirements, and then use the site to send the test report to the teacher by mail free of charge. Practice shows that the virtual experiment system offers high-speed stable operation, low cost, reliability, adaptability, ease of operation, interactivity and other advantages.

Introduction
A virtual experiment teaching website is not a general portal: its pages and database must interact very strongly. Therefore, this system mainly uses JSP (Java Server Pages) technology to build a website with relatively high interactive performance for virtual experiments. The network makes both the experiments and laboratory management very convenient, so that students can conduct virtual experiments without leaving home, and the teacher can manage a variety of experiments from the office. Using Java's JSP, the system mainly realizes the management of users and experiments.

Virtual Laboratory Website
The virtual system website uses JSP technology. The development tool is MyEclipse 7.1, and the Java program uses an ORACLE 10g database that integrates seamlessly with Java. The experiments themselves are produced with Flash technology; thanks to its powerful interactive performance and ease of online publishing, the client only needs to allow loading of the ActiveX control to do experiments on the web easily. The main function of the virtual system is to upload the Flash experiments to the Internet; students then visit the website, conduct the experiments, and send the electronic experiment report to the teacher through the website's e-mail. The users of the system are in general students and teachers, and all accounts are created by the system administrator. The virtual system consists of three modules: the administrator module, the teacher module and the student module.

SSH Framework
The Struts2, Spring 2.0 and Hibernate 3.1 (SSH) framework is used to build the system website.
JSP technology. JSP is a standard dynamic web technology. JSP embeds Java program segments and JSP tags into a traditional HTML web page file (*.htm, *.html) to form a JSP file (*.jsp). Web applications developed with JSP can run under Linux as well as on other operating systems.


JSP technology uses XML-like tags and scriptlets written in the Java programming language to encapsulate the processing logic that generates dynamic web pages. Tags and scriptlets can also access server-side resources through the application logic. JSP separates page logic from web page design and display, supports reusable component-based design, and thus makes web application development rapid and easy. JSP and Java Servlets execute on the server and usually return plain HTML text to the client, so any client browser can visit the pages. A JSP page mixes HTML code and Java code: after a client request, the Java code is processed on the server, and the generated HTML page is returned to the client browser. Java Servlet technology is the foundation of JSP, and large-scale web application development requires both Java Servlets and JSP. JSP is platform-independent and secure, and is mainly used for the Internet.
Struts2 technology. Compared with Struts1, Struts2 contains a lot of revolutionary improvements. It was developed on the basis of WebWork and did not inherit the characteristics of Struts1, but rather those of WebWork; in other words, Struts2 derives from WebWork, not from Struts1. Struts2 is essentially an upgrade of WebWork that absorbs the advantages of both Struts1 and WebWork, giving a framework with better stability.
Spring technology. Spring's main objective is to make enterprise-class Java easier to use and to promote good programming practice. It achieves this through a POJO-based programming model applicable in a wide range of environments. Spring is committed to providing a unified and powerful programming model; for example, without involving the underlying transaction coordination in the business code, it still provides an abstraction layer over any transaction management strategy. It is easy to use and makes code easier to test.
Hibernate 3.1 technology. Hibernate 3.1 is an open-source object-relational mapping framework. It provides a lightweight object encapsulation of JDBC, so that Java programmers can manipulate the database with ordinary object-oriented thinking. Hibernate can be applied wherever JDBC is used, either in Java client applications or in Servlet/JSP web applications. Most importantly, Hibernate can replace CMP in the EJB-J2EE architecture to complete the important task of data persistence. Hibernate 3.1 has five core interfaces: Session, SessionFactory, Transaction, Query and Configuration. These five core interfaces are used in any development; through them, persistent objects can be accessed and transactions controlled.

Factory Design Pattern
The factory pattern creates new instance objects, generating an instance according to its class. Take the class Sample, for example; an instance can be created with: Sample sample = new Sample(); In reality, however, creating a Sample instance usually requires querying the database and other assignments, so a constructor with parameters would first be used: Sample sample = new Sample(parameters); In this case, the work of creating an instance needs to be separated from the work of using it, and the factory pattern generates the object instead of the simple new Sample(parameters) above. Also, if there is a subclass of Sample such as MySample then, to program to an interface, Sample needs to be abstracted into an interface.
Now Sample is an interface with two implementing classes, MySample and HisSample, instantiated as follows:
Sample mysample = new MySample();
Sample hissample = new HisSample();
Factory Method. A specialized factory produces Sample instances:

    public class Factory {
        public static Sample creator(int which) {
            // the concrete Sample class is usually obtained via dynamic class loading
            if (which == 1) return new SampleA();
            else if (which == 2) return new SampleB();
            return null;
        }
    }

Then, in the program, to instantiate a Sample:
Sample sampleA = Factory.creator(1);
When using the factory method, a product interface is defined (Sample above), with implementation classes under it such as SampleA; secondly, there is a factory class used to generate the product Sample.
Abstract Factory. There are two factory modes: factory method and abstract factory; the difference between them lies in the complexity of the objects that need to be created. Assume Sample has two concrete classes SampleA and SampleB, and Sample2 has two concrete classes Sample2A and Sample2B. In that case, Factory above becomes an abstract class: the common parts are encapsulated in the abstract class, and the differing parts are implemented by subclasses. The Factory of the previous example is expanded into an abstract factory as follows:

    public abstract class Factory {
        public abstract Sample creator();
        public abstract Sample2 creator(String name);
    }

    public class SimpleFactory extends Factory {
        public Sample creator() {
            // ...
            return new SampleA();
        }
        public Sample2 creator(String name) {
            // ...
            return new Sample2A();
        }
    }

    public class BombFactory extends Factory {
        public Sample creator() {
            // ...
            return new SampleB();
        }
        public Sample2 creator(String name) {
            // ...
            return new Sample2B();
        }
    }

As can be seen above, each of the two factories produces both a Sample and a Sample2. In SimpleFactory there is a certain connection between the production methods of Sample and Sample2, so the two methods are tied together in one class. Thus, the factory method provides a very flexible and powerful mechanism for dynamic extension of the system architecture: as long as the concrete factory methods are replaced, major changes in system functionality are possible without changing other parts of the system.


System Database
ORACLE 10g Database. The ORACLE database product is a typical representative of current database technology. ORACLE 10g provides flexible data partitioning: a partition can be a large table or an easy-to-manage small piece of an index, partitioned according to data values. ORACLE 10g effectively improves system operating capacity and data availability and reduces I/O bottlenecks. ORACLE 10g enhances parallel processing capabilities: parallel processing technology is introduced into operations such as bitmap indexing, queries, sorting, access and general index scans to improve the parallelism of a single query, and the parallel server improves system availability. ORACLE 10g provides automatic backup and recovery and improved distributed operating system support, such as strengthened parallelism of SQL replication operations. To help customers effectively manage the entire database and application system, ORACLE also provides a business management system: database administrators can manage the ORACLE environment by drag and drop from a centralized console with a graphical user interface. Through a secure server, ORACLE 10g enhances security services and strengthens user authentication and user management of the ORACLE Web Server.
Databases. The system designs four tables: the user management table sys_users, the sector management table sys_dept, the professional (major) management table sys_special, and the experiment management table sys_exper. The database tables and the relationships between their fields are shown in Fig. 1. The Id field of each table is its primary key; the primary keys are 32-character number strings automatically generated by Hibernate that never repeat, so that operations keep data redundancy to a minimum.

The four tables and their fields (Fig. 1):
Basic user (sys_users): user Id, login name, user password, user department, user professional, user ID, user role
Experiment management (sys_exper): experiment Id, experiment name, experiment description, experiment storage path
Sector management (sys_dept): department Id, department name, department description
Professional management (sys_special): professional Id, professional name, subordinate department, professional description

Fig. 1 Database tables and relationships

Since the experiment management table is a separate table with no relationship to the other tables, its role in the current system is only to record all of the experiments; students can then visit and do the experiments. If there are no experiments in the table, the teachers have not uploaded any experiments yet.
Login Design. The system has only one login screen; a background process judges the user and jumps to the corresponding page, so that one login screen adapts to every user. The login process is shown in Fig. 2. When the user name and password are entered into the text boxes, the two data items are forwarded by the Struts2 framework to the login method of the login action class, which finds the login method of the sys_users DAO class in the configuration file. The login method queries the sys_users database table through Hibernate. If the user is not in the table, the system jumps back to the login page; if the user exists, the user's role is determined and the system jumps to a different page according to the role.


Student Module. Students can view the current experiment status and do the related experiments. When a student user enters the system, experimental instructions are shown; after reading them and clicking the start button, the student enters the list of all experiments, and clicking an experiment links to it. After the experiment, the test report can be submitted to the teacher through the website e-mail.
Hardware Configuration. First, the server configuration:

Fig. 2 Login flowchart: enter the URL, enter the user name and password, submit to the background and query the database; if there is no matching personnel record, return to the login page, otherwise check the user's role and jump to Manage.jsp (administrator), Teacher.jsp (teacher) or Index.jsp (student)

Processor: Intel Xeon Quad Core E5504 80W 2.00GHz
Memory: 4G DDR3-1033MHz
HDD: 500G SATA
Power: 450W Great Wall
Motherboard: Asus
LAN: dual Gigabit Ethernet
Drive: CD-RW/DVD combo drive
Rack: 2U rack

Second, the recommended client configuration:
Processor: Pentium 4 2.4GHz
Memory: 512M
HDD: 40G
Monitor: 15" and above
Browser: IE6.0 and above

Software Configuration. Configure Tomcat 6.0 or above; configure the JDK; configure MySQL; install Navicat for MySQL; install MyEclipse 6.5 or above; install Macromedia Dreamweaver 8 or above; use an IE6.0 or above browser. The system uses the SSH framework, so as long as the database and the server are stable, system operation is relatively stable. With the Tomcat web server architecture the server is stable while the number of users is relatively small, but under heavy multi-user concurrent operation it may crash; using another server such as JBoss or WebLogic solves this problem.


Another destabilizing factor is that the virtual experiments are made with Flash, and relatively few people master the Flash software. On the other hand, Flash runs readily online and does not require installing any server software or plug-ins, so with the trend of its development this is not a threat to stability.

Summary
The virtual laboratory makes up for the deficiencies of the traditional laboratory and makes practical teaching more lively and the design of experiments more flexible. In the innovation and reform of experimental teaching, the virtual laboratory reduces the dependence on hardware devices, and it points in the direction of the development of experimental teaching reform. This topic was a Lanzhou University of Technology teaching reform project. Results are deposited into the computer in a timely manner, experimental analysis is also done on the computer, and reports can be submitted to the system from the experimental platform. The virtual system provides teachers and students of electrical and electronic experiments with more advanced and scientific experimental methods and better meets the requirements of electrical and electronic experiment teaching. The system includes an experiment report subsystem for uploading experimental reports. The development and use of the system will not only help resolve the contradiction between the current scale and quality of education, but is also conducive to solving the problem of limited experimental resources.

References
[1] Qin Shuren. Intelligent virtual controls: a new concept of virtual instrument. Proceedings of ISIST'2002, 2nd International Symposium on Instrumentation Science and Technology. Harbin: Harbin Institute of Technology Press (2002).
[2] Shoucheng Ding, Wenhui Li, Jianhai Li, Wanqiang Lu. Design and Implement of the Electric and Electronic Virtual Experiment System. International Conference on Multimedia Information Networking and Security (2010).
[3] Nadia Thalmann. Virtual Reality and Education. IEEE Virtual Reality Conference (2007).
[4] Guimares E. G., Maffeis A. T., Pinto R. P. REAL: a virtual laboratory built from software components. Proceedings of the IEEE (2003).
[5] Guo Ming-qing, Qin Shu-ren. Manage information system of virtual instrument experiment. China Measurement & Test, 35, 5 (2009).
[6] Christian J. Carleton, Randy A. Dahlgren and Kenneth W. Tate. A relational database for the monitoring and analysis of watershed hydrologic functions: II. Data manipulation and retrieval programs. Computers and Geosciences, 31, 4 (2005).
[7] Ko C. C., Chen B. M., Hu S. Y., et al. A web-based virtual laboratory on a frequency modulation experiment. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 31, 3 (2001).
[8] Baoping Tang, Fabin Cheng, Shuren Qin, et al. Study on virtual instrument developing system based on intelligent virtual control. J. Phys. Conf. Ser. 13 (2005).

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.626

Synthesis and Analysis of the Handheld Computer Power Consumption
Amir Mahmoudi 1, a, Khalil Monfaredi 1, b, Hassan Faraji Baghtash 2, c and Ali Bahrami 3, d
1 Islamic Azad University-Miyandoab Branch, Miyandoab, Iran
2 Electronics Research Center, Electrical and Electronic Engineering Department, Iran University of Science and Technology (IUST)
3 Photonics and Nanocrystals Research Lab. (PNRL), Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz 51664, Iran
a [email protected], b [email protected], c [email protected], d [email protected]

Key words: Portable, Handheld PC, Power Consumption, Battery Life

Abstract. In this work we present experiments based on a measurement set-up to evaluate the power consumption of a VAIO SZ series laptop while running various programs. The power consumption of the laptop is measured while running an MP3 player and other programs such as Matlab and graphical games, and the effect of display brightness on the power consumption is also studied. The maximum power consumption, 375 W, is measured for graphical games; running the MP3 player at half brightness consumes the least power, 150 W. The battery life time is inversely proportional to the power consumption.

Introduction
Energy consumption is considered one of the main challenges both for manufacturers and for users of portable devices. Pocket PCs, being among the most versatile pieces of equipment, are of interest to many ordinary users as well as professional researchers, so it is important for them to gain insight into their energy consumption. Most laptops, cell phones and handheld PCs use batteries that take approximately 1.5 to 4 hours to charge but can run on this charge for only a few hours [1]. For marketing reasons the cost of handheld computers is steadily decreasing, and the short battery life puts a strain on the quality of the provided equipment. Investigation shows that different types of applications, software and hardware have different effects on the power consumption of a computer. Relevant fundamental design aspects of the various types of power consumption have been examined from the point of view of the software-power and hardware-power interrelationships [2]. Suitable low-power design techniques and efficient power management strategies require a thorough understanding of the main causes of the power consumption breakdown, and an analysis of the various components that build up a handheld computer. Power management in battery-powered portable devices aims at prolonging battery life by deliberately controlling the energy-guzzling parts of the device [2]. In [3], the power usage of the main parts of an IBM ThinkPad laptop was studied, concluding that the total system power consumption varies greatly based on the operation type; the authors also observed a huge variation in the power consumption of its different components, such as CPU power, display power and wireless card power. The power breakdown of the ITSY handheld computer, developed at Compaq's Western Research Lab to provide researchers with a flexible platform for pocket computing research, was examined in [4]. The effect of GUIs (Graphical User Interfaces) on the energy consumption of handheld computers is studied in [2], [5], which provide insight into the energy consumption of GUIs implemented on different platforms. The energy consumption of network interfaces in PDAs is analyzed in [6]. In [7], an IBM ThinkPad 560X laptop running Linux was configured as an Odyssey client communicating with a server over a 2 Mb/s WaveLAN network. In [8], the authors attempted to reduce energy consumption by trying to increase laptop disk idle times.


In this paper we present some experiments based on a measurement set up to evaluate the power consumption of VAIO SZ series while running various programs. The power consumption of the laptop is measured while running MP3 and some other programs such as Matlab and graphical games. Also, the effect of display brightness on the power consumption of the computer has been studied. The maximum power consumption is related to graphical games which are measured to be 375W. Running the MP3 with half brightness application consumed the least power of 150W. The battery life time is inversely proportional to the power consumption. This paper is organized as following: section II describes the power consumption measurement set up. Measurement Method is discussed in section III and the results are provided in IV. Section V concludes the paper. Power Consumption Measurement Set Up The power consumption measurement set up utilized for this experiment includes a high sensitive digital multi-meter, isolator electronic circuit in order to protect the computer under test and a digital oscilloscope which measures the current flown through the resistor included in the mesh. The digital oscilloscope has two channels with 500 MHz bandwidth and a maximum sample rate of 4 GS/s. This deliberately arranged measurement set up is shown in Fig. 1. The oscilloscope and the multi-meter both measure the signal simultaneously in order to ensure the correctness of the measured results to prevent any unwanted errors which may arise either from the equipment imbalance or from human mistakes. The value measured by digital oscilloscope plus the power supply voltage must be equal to the value shown by digital multi-meter. Averaging the input signal will remove the uncorrelated noise and improve measurement accuracy substantially. Measurement Method The equation 1 shows the common steady state power (P) consumption formula used to calculate the power of the pocket computers. P (Watt) = V (Volt) × I (Ampere)

(1)

where 'V' is the power supply voltage and 'I' is the current flown from the computer. As mentioned in previous section in this work, the multi-meter is obliged to measure the series voltage fallen across both the power terminals and the incorporated resistance. In order to prevent the thermal noise to interfere with the precise value which should be achieved we selected the resistor not to be very large. On the other hand, in order to keep the current flown through the resistor with in permitted values, it must be relatively large. So, there is a trade off in selecting the resistor for test purposes. The isolation circuit is included in order to keep the under test computer secured. A suitable resistor depicted as 'R' in Fig. 1 is located in series with the under test equipment to measure its current consumption. With the aid of the voltage values measured using the digital oscilloscope across the resistor, VR, and using I=VR/R the current flown from the computer is calculated and stored synchronously in the memory. The measurement situation is changed smoothly to follow and study the variation nature of the achieved results. This data is used to compute the real time instantaneous power consumption by multiplying this current with the measured supply voltage. This is similar to the approach utilized in [2].


Fig. 1. Experimental set-up to perform the power measurement

The configuration of the oscilloscope is initialized such that a predefined number N of voltage measurements is taken across the power supply, simply by adjusting the sampling time; the number of power consumption values obtained is thus equal to N. In this paper N is selected as 1000, and the final value of a test is extracted by averaging the acquired values according to equation (2):

$P_{Avg} = \frac{1}{N} \sum_{i=1}^{N} P(i)$   (2)
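In code, equations (1) and (2) amount to scaling the sampled sense-resistor voltages and averaging; a minimal Python sketch, with all names and numbers invented for the example:

    # Eqs. (1)-(2): current from the sense-resistor voltage (I = V_R / R),
    # instantaneous power P = V * I, averaged over the N samples.
    def average_power(vr_samples, v_supply, r_sense):
        powers = [v_supply * (vr / r_sense) for vr in vr_samples]
        return sum(powers) / len(powers)

    # hypothetical oscilloscope readings across R, in volts
    vr_samples = [0.95, 1.02, 0.98, 1.01]
    print(average_power(vr_samples, v_supply=19.0, r_sense=0.1))  # watts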

Results
This section discusses the experimental results for the power consumption obtained with the designed set-up under different circumstances. The average power consumed by the VAIO SZ series while running various programs, including the MP3 player, is shown in Fig. 2, where the effect of varying the display brightness level on the consumed power can also be seen clearly. According to this figure, reducing the brightness by 50% while the MP3 player is running decreases the average power consumption by almost 40 percent; LCD brightness is the main lever for decreasing the power consumption of the measured samples. The power consumption of graphical programs such as games is higher than in the other experimental situations; in addition, running mathematical programs such as Matlab consumes more power than the MP3 case. These results, which show the increase in power consumption of the VAIO laptop while running different software and indicate the power requirement and battery life time, are shown schematically in Figures 3 and 4. As stated, the LCD brightness while running different programs has a significant effect on the power consumption and battery lifetime of the laptop; by reducing the brightness, more battery life time is available to users when there is no access to an electricity source. Finally, it can be concluded that the laptop consumes the most power when running graphical workloads.


Conclusion
In this work we presented experiments based on a measurement set-up to evaluate the power consumption of a VAIO SZ series laptop while running various programs. The power consumption was measured while running an MP3 player and other programs such as Matlab and graphical games, and the effect of display brightness on the power consumption was also studied. The maximum power consumption, 375 W, is measured for graphical games; running the MP3 player at half brightness consumes the least power, 150 W. The battery life time is inversely proportional to the power consumption, as can be seen by comparing Fig. 4 with Fig. 3.

Fig. 2. Power consumption increment for different experimental series (measured values: 152 W, 270 W, 310 W, 335 W and 375 W)

Fig. 3. Average power consumption of the VAIO SZ series (plotted values between 105 W and 375 W) for: 1. MP3 player (full brightness); 2. MP3 player (half brightness); 3. running Matlab; 4. running Microsoft Office Word; 5. running a game


Fig. 4. Battery life time reduction for different experimental series (cases 1-5 as in Fig. 3; plotted battery life times: 330, 267, 210, 169, 150 and 128 minutes)

Further Works
This work could be extended to different models of laptops and PCs, under different circumstances such as running different programs individually or in parallel and at different operating frequencies, while also considering the consumption of external peripherals at the same time.

Acknowledgment
The authors gratefully acknowledge Mr. Mohammad Hossein Zoualfaghari from the University of Birmingham for his collaboration in stabilizing the set-up equipment and performing the experiment for this project.

References
[1] R. Rao, S. Vrudhula and D. Rakhmatov: IEEE Computer Vol. 36 (2003) No. 12.
[2] A. Sagahyroon, in: IEEE Asia Pacific Conference on Circuits and Systems (APCCAS 2006) (2006).
[3] A. Mahesri and V. Vardhan, in: Workshop on Power Aware Computing Systems, part of the 37th International Symposium on Microarchitecture (2004).
[4] J. Bartlett et al.: The Itsy Pocket Computer, WRL Research Report (2000).
[5] L. Zhong and N. Jha, in: Proceedings of the International Conference on Compilers, Architecture and Synthesis for Embedded Systems (2003).
[6] M. Stemm and R. Katz: IEICE Transactions on Communications Vol. E80-B (1997), p. 1125.
[7] J. Flinn and M. Satyanarayanan, in: Proceedings of the 17th ACM Symposium on Operating Systems Principles (1999).
[8] T. Heath et al.: IEEE Transactions on Computers Vol. 53 (2004) No. 8.


Amir Mahmoudi was born in Naghadeh, Azarbayjane Gharbi, Iran, in 1988. He is currently pursuing his B.Sc. degree in Computer Engineering at Islamic Azad University, Miyandoab Branch.

Khalil Monfaredi was born in Miyandoab, Azarbayjane Gharbi, Iran, in 1979. He received the B.Sc. and M.Sc. degrees from Tabriz University in 2001 and Iran University of Science and Technology in 2003, respectively. He has been with the Electronic Research Center Group since 2001 and has been an academic staff member of Islamic Azad University, Miyandoab Branch, since 2006. He has served as the Research and Educational Assistant of Miyandoab Sama College since 2009. He is the author or coauthor of more than twenty national and international papers and has collaborated in several research projects. He was the chairman of the 2010 Electronic and Computer Scientific Conference (ECSC2010) held at Islamic Azad University, Miyandoab Branch. He is currently pursuing his Ph.D. degree at the Iran University of Science and Technology, Electrical and Electronic Engineering Department. His current research interests include current-mode integrated circuit design, low-voltage, low-power circuits and systems, analog microelectronics and data converters.

Hassan Faraji Baghtash was born in Miyandoab, Iran, in 1985. He received the B.Sc. and M.Sc. degrees from Urmia University in 2007 and Iran University of Science and Technology in 2009, respectively. He is the author or coauthor of more than ten national and international papers and has collaborated in several research projects. He has been with the Electronic Research Center Group since 2007 and has cooperated with Islamic Azad University (Miyandoab Branch and West Tehran Branch) and with Miyandoab Sama College since 2008. He was a reviewer of the 2010 Electronic and Computer Scientific Conference (ECSC2010) held at Islamic Azad University, Miyandoab Branch. He is currently pursuing his Ph.D. degree at the Iran University of Science and Technology, Electrical and Electronic Engineering Department. His current research interests include current-mode integrated circuit design, low-voltage, low-power circuits and systems, analog microelectronics and RF design.

Ali Bahrami was born in Naghadeh, Iran, in 1984. He received the B.Sc. degree in electrical engineering from Urmia University, Urmia, Iran, in 2006 and the M.Sc. degree in electronics engineering from the University of Tabriz, Tabriz, Iran, in 2009. He is currently pursuing his Ph.D. degree at the Iran University of Science and Technology (IUST), Electrical and Electronics Engineering Department, where his research interests include semiconductor optoelectronic devices and optical integrated circuits.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.632

An Improved Differential Evolution and Its Application in Function Optimization Problem
YAN Jingfeng a, GUO Chaofeng b
School of Computer Science and Technology, Xuchang University, Xuchang, Henan, 461000, P.R. China
a [email protected], b [email protected]

Key words: Differential evolution; Evolutionary Algorithm; crossover operator; Function Optimization;

Abstract. An Improved Differential Evolution (IDE) is proposed in this paper. It has two new features: 1) a multi-parent search strategy and a stochastic ranking strategy to maintain the diversity of the population; 2) a novel convex mutation to accelerate the convergence rate of the classical DE algorithm. The algorithm is tested on 13 benchmark optimization problems with linear and/or nonlinear constraints and compared with other evolutionary algorithms. The experimental results demonstrate that IDE outperforms DE in terms of the quality of the final solution and in stability.

Introduction to Differential Evolution
Differential Evolution (DE) [1] is a simple yet powerful population-based direct search algorithm with a generate-and-test feature for global optimization problems with real-valued parameters. DE uses the distance and direction information of the current population to guide further search. It won third place at the first International Contest on Evolutionary Computation on a real-valued function test suite [2]. Among DE's advantages are its simple structure, ease of use, speed and robustness. Price and Storn [3] gave the working principle of DE with a single scheme; later they suggested ten different schemes of DE [3,4]. However, DE has been shown to have certain weaknesses, especially when the global optimum has to be located with a limited number of fitness function evaluations. In addition, DE is good at exploring the search space and locating the region of the global minimum, but it is slow at exploiting the solution [5]. In this paper, a novel hybrid self-adaptive crossover operator is introduced to enhance the global search ability of the algorithm and the search of non-convex areas; meanwhile, a multi-parent search strategy and a stochastic ranking strategy that maintain the diversity of the population are used to improve the algorithm's ability to deal with constrained problems. The superiority of IDE is verified with tests on 13 benchmark functions.

Related Research Work
As mentioned above, DE explores the search space well but exploits solutions slowly [5]. Recently, many researchers have worked on improving DE by hybridizing it with other methods. Reference [6] adopts a homomorphic mapping method to deal with constraints by mapping the search space into the feasible space; this makes the original problem simpler, but on many occasions it cannot obtain the optimal solution. Reference [7] adopts a self-adaptive penalty function method to deal with the constraint functions, together with a separating selection strategy based on feasible solutions; however, this algorithm does not perform well when solving constrained optimization problems with multi-modal functions.


Yang et al. [6,7] proposed a neighborhood-search-based DE; experimental results showed that DE with neighborhood search has significant advantages over other existing algorithms on a broad range of benchmark functions [6]. Wang et al. [5] proposed a dynamic clustering-based DE for global optimization, in which a hierarchical clustering method is dynamically incorporated into DE and validated in experiments on 13 benchmarks.

Improved Differential Evolution
In order to enhance the convergence speed and the ability to find the global optimal solution while keeping the diversity of the population, the following improvements are proposed.
1) Heuristic self-adaptive crossover operator. Two mutation operators are designed as in formulas (1) and (2):

$P_1 = \sum_{i=1}^{M} a_i X_i$   (1)

$P_2 = X_{r_1} + F (X_{r_2} - X_{r_3}) + (1 - F)(X_{r_4} - X_{r_5})$   (2)

where $\sum_{i=1}^{M} a_i = 1$, M stands for the number of parent individuals selected, and each $a_i$ is a random real number with $a_i \in [-0.5, 1.5]$. The multi-parent combination is adopted from reference [4]; randomly selecting $a_i$ from the interval [-0.5, 1.5] may enhance the search ability in non-convex domains.
2) Multi-parent search strategy and stochastic ranking strategy. The parameter μ controls the sorting process, NP represents the population size, and M individuals are chosen by sorting as the multi-parent population. The process is as follows:
1. for i := 1 to NP do
2.   for j := 1 to NP-1 do
3.     t = random(0,1)
4.     if f(X_j) > f(X_{j+1}) and t < μ then swap X_j and X_{j+1}
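As an illustration of the two mutation operators of Eqs. (1) and (2), here is a minimal Python sketch; the normalization used to satisfy the sum-to-one constraint and all parameter values are our own assumptions:

    import numpy as np

    # Sketch of the mutation operators of Eqs. (1) and (2).
    # pop: current population as an NP x D array (NP >= 5 assumed).
    def mutate(pop, F=0.5, M=3, rng=np.random.default_rng(0)):
        NP = len(pop)
        # Eq. (1): multi-parent combination, a_i in [-0.5, 1.5], sum(a_i) = 1
        idx = rng.choice(NP, size=M, replace=False)
        a = rng.uniform(-0.5, 1.5, size=M)
        a /= a.sum()  # one simple way to enforce the sum-to-one constraint
        p1 = (a[:, None] * pop[idx]).sum(axis=0)
        # Eq. (2): base vector plus two weighted difference vectors
        r = rng.choice(NP, size=5, replace=False)
        p2 = pop[r[0]] + F * (pop[r[1]] - pop[r[2]]) + (1 - F) * (pop[r[3]] - pop[r[4]])
        return p1, p2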

Fuzzy C-Means, with weighting exponent m > 1, is realized by iteratively optimizing the objective function; $u_{ij}$ and $v_j$ are given by the following formulas:

$u_{ij} = 1 \bigg/ \sum_{t=1}^{c} \left( \frac{\| x_i - v_j \|}{\| x_i - v_t \|} \right)^{\frac{2}{m-1}}$   (3)

$v_j = \frac{\sum_{i=1}^{n} u_{ij}^{m} x_i}{\sum_{i=1}^{n} u_{ij}^{m}}$   (4)

The process starts from randomly selected cluster centers and, by seeking the minimum of the objective function, continuously adjusts the cluster centers and the fuzzy membership of each sample until the best assignment of the samples is determined.
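The update formulas (3) and (4) translate directly into a few lines of array code; a minimal Python sketch (function and variable names are ours):

    import numpy as np

    # One iteration of the FCM updates of Eqs. (3) and (4).
    # X: n x d samples, V: c x d current centers, m > 1 the weighting exponent.
    def fcm_step(X, V, m=2.0, eps=1e-9):
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + eps  # n x c distances
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))     # d_ij / d_it terms
        U = 1.0 / ratio.sum(axis=2)                                      # Eq. (3)
        W = U ** m
        V_new = (W.T @ X) / W.sum(axis=0)[:, None]                       # Eq. (4)
        return U, V_new

Iterating fcm_step until the centers stop moving realizes the adjustment loop described above.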

Improved Fuzzy C-Means Algorithm
The Mountain Clustering Method. The mountain clustering method first divides the sample space into a finite grid of N nodes, with all grid intersections serving as candidate cluster centers. A mountain function value is calculated at each intersection; the intersection with the largest mountain function value is taken as a cluster center, and the mountain function is then repeatedly revised [2]. The algorithm steps are as follows:
(1) According to the density of the samples, construct the grid over the data in the sample space.
(2) Construct the peak (mountain) function as a Gaussian density indicator; the value of the peak function at a grid point is:

$m(v_j) = \sum_{i=1}^{N} \exp\left( -\frac{\| v_j - x_i \|^2}{2\sigma^2} \right)$   (5)

(3) From the candidate cluster center set V, select the point with the maximum value of the peak function as the first cluster center (if there is more than one maximum point, choose one), that is,

$m(c_k) = \max_{j} \left( m(v_j) \right)$   (6)

(4) "Slash" the cluster center obtained in step (3): modify the mountain function to eliminate the influence of the cluster centers already identified, i.e., subtract a proportional Gaussian function centered there when constructing the new mountain function:

$\hat{m}(v_j) = m(v_j) - m(c_k) \exp\left( -\frac{\| v_j - c_k \|^2}{2\beta^2} \right)$   (7)

(5) After the subtraction, select the point of V with the largest value of the new mountain function as the second cluster center. This process of slashing the mountain function and finding the next cluster center is repeated until there are enough cluster centers. Using the mountain (peak) algorithm together with the fuzzy C-means algorithm gives an improved fuzzy C-means algorithm, referred to as AMCA-FCM.
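A compact Python sketch of the center-selection loop of Eqs. (5)-(7), with grid construction omitted and names and defaults our own:

    import numpy as np

    # Mountain clustering center selection, Eqs. (5)-(7).
    # X: n x d samples, grid: g x d candidate points, k: number of centers.
    def mountain_centers(X, grid, k, sigma=1.0, beta=1.0):
        d2 = ((grid[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
        m = np.exp(-d2 / (2 * sigma ** 2)).sum(axis=1)       # Eq. (5)
        centers = []
        for _ in range(k):
            j = int(np.argmax(m))                            # Eq. (6): current peak
            centers.append(grid[j])
            dc2 = ((grid - grid[j]) ** 2).sum(axis=1)
            m = m - m[j] * np.exp(-dc2 / (2 * beta ** 2))    # Eq. (7): slash the peak
        return np.array(centers)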


The AMCA-FCM algorithm is as follows:
(1) Pre-process the sample data and determine the values of L, μ and c.
(2) Select the pre-cluster centers for the data set using the improved mountain peak method.
(3) Use the cluster centers selected in step (2) as the initial cluster centers of FCM, and then run the FCM algorithm to obtain the optimal partition [3].

AMCA-FCM-based Intrusion Detection Algorithm
The purpose of data mining is to mine useful information from large amounts of data; noisy or useless data is often ignored. Many data mining algorithms try to reduce or eliminate the impact of outliers. Some clustering algorithms such as DBSCAN, BIRCH, STING and ROCK have some capability for handling exceptions, but because their main goal is to produce meaningful clusters rather than to detect anomalies, exceptions are usually treated as noise and ignored or tolerated during processing. This paper presents an intrusion detection algorithm based on the improved clustering algorithm AMCA-FCM that can efficiently identify abnormal, aggressive behavior and provides a strong basis for intrusion detection and defense [4].
The Algorithm Principle. The AMCA-FCM-based intrusion detection algorithm works in two stages: first, the AMCA-FCM clustering algorithm is used to cluster the data set; second, an anomaly detection method is used to determine unusual behavior and produce an evaluation.
Intrusion Detection Process. Applying the intrusion detection algorithm involves a data preparation stage, selection of the initial cluster centers, data clustering, determination of abnormal behavior, and evaluation of the algorithm. The detection process is described as follows:
Data preparation phase. The data set used for intrusion detection is not readily available; network data flows first need to be collected from network transmissions, and these data need to be cleaned. After pretreatment, the data becomes the standard data set used in the experiments. The pretreatment process is divided into: making discrete data continuous, data standardization, and data normalization, described in detail below.
(1) Continuous encoding of discrete data. The original experimental data contains symbol-based discrete attributes as well as continuous numeric attributes. For the discrete attributes, each value denotes one state with no ordering relation between them, so the discrete attribute values must be converted into continuous feature variables.
(2) Data standardization. In order to eliminate the influence of different measurement scales on clustering, the attribute values are standardized as follows:

$x'_{ij} = \frac{x_{ij} - \bar{x}_j}{S_j}, \quad (i = 1, 2, \ldots, n;\ j = 1, 2, \ldots, m)$   (8)

$\bar{x}_j = \frac{1}{n} \sum_{i=1}^{n} x_{ij}$   (9)

$S_j = \sqrt{ \frac{1}{n-1} \sum_{i=1}^{n} \left( x_{ij} - \bar{x}_j \right)^2 }$   (10)


where $\bar{x}_j$ and $S_j$ are the mean and standard deviation of the j-th feature of the data. After standardization, the data set is converted to a standard unit space.
(3) Data normalization. After standardization the data lie in a unit space, but may not fall in the interval [0, 1]; normalization maps the range of each feature attribute into [0, 1]. The transformation formula is as follows:

$x''_{ij} = \frac{x_{ij} - \min\limits_{1 \le i \le n}(x_{ij})}{\max\limits_{1 \le i \le n}(x_{ij}) - \min\limits_{1 \le i \le n}(x_{ij})}, \quad (i = 1, 2, \ldots, n;\ j = 1, 2, \ldots, m)$   (11)
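Equations (8)-(11) amount to per-feature z-scoring followed by min-max scaling; a minimal Python sketch (our own naming):

    import numpy as np

    # Preprocessing of Eqs. (8)-(11): z-score standardization per feature
    # (sample standard deviation, n-1), then min-max scaling into [0, 1].
    def preprocess(X):
        mean = X.mean(axis=0)                  # Eq. (9)
        std = X.std(axis=0, ddof=1)            # Eq. (10)
        Xs = (X - mean) / std                  # Eq. (8)
        lo, hi = Xs.min(axis=0), Xs.max(axis=0)
        return (Xs - lo) / (hi - lo)           # Eq. (11)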

Initialization of the cluster center selection. The adaptive mountain peak clustering algorithm is used on the experimental data sets to choose the initial cluster centers, obtaining near-optimal cluster centers. This not only reduces the number of FCM iterations and improves the clustering precision, but is also very effective in terms of complexity.
Data clustering. The chosen initial cluster centers and the number of clusters are given as input to FCM, which produces the clustering result.
Determining abnormal behavior. The network records in the experimental data have the following two characteristics: under some measure, similar objects or objects of the same type gather together, and normal data and abnormal data gather in different classes.
Algorithm evaluation. The most widely used measures for evaluating an intrusion detection algorithm are the detection rate and the false (error) detection rate, which are the main performance indicators of an intrusion detection system. Based on the method for determining abnormal behavior, each record is judged to be abnormal or normal and labeled accordingly, and the detection rate and false positive rate of the improved algorithm are compared with those of the original algorithm [5].

Simulation Experiments
Description of the experimental data. The sample data used in the experiments is the authoritative intrusion detection test data KDD CUP 1999, a test data set built from U.S. Air Force LAN network traffic simulated by the MIT Lincoln Laboratory. It provides approximately 4,900,000 records; each record has 41 attributes plus a label, with both continuous and discrete attributes among the 41 dimensions. The data set contains 24 attack types in four categories: DOS (denial of service attacks), U2R (User to Root: a non-privileged user illegally obtains local superuser privileges), R2L (Remote to Local: a remote host gains unauthorized access to a local host), and PROBING (detection of system vulnerabilities).
Experimental results and analysis. Four groups of data sets were randomly selected from the KDD data set: data set 1 consists of 12,000 records, of which 1,100 are DOS attacks; data set 2 consists of 5,000 records, of which 480 are randomly selected PROBE attacks; data set 3 consists of 530 records, of which 52 are U2R attacks; data set 4 consists of 3,500 records, of which 320 are R2L attacks. The classes of each group were identified, and the detection rate and false detection rate were calculated for each group; the results are shown in Tables 1-4.
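The two evaluation metrics used in the tables below reduce to simple ratios over confusion-matrix counts; a small Python sketch with invented counts:

    # Detection rate = detected attacks / all attacks;
    # error (false) detection rate = normal records flagged / all normal records.
    def rates(tp, fn, fp, tn):
        return tp / (tp + fn), fp / (fp + tn)

    print(rates(tp=85, fn=15, fp=2, tn=198))  # (0.85, 0.01), illustrative counts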


Table 1 Attack test results of DOS (1st and 2nd runs)
Attack      Det. rate 1st (%)  Err. rate 1st (%)  Det. rate 2nd (%)  Err. rate 2nd (%)
Back        78.41              0.07               77.96              0.11
Neptune     48.34              4.35               49.16              4.23
Pod         77.67              0.13               77.67              0.13
Smurf       98.78              0.03               99.32              0.02
Teardrop    84.34              0.21               85.56              1.21

Table 2 Attack test results of PROBE (1st and 2nd runs)
Attack      Det. rate 1st (%)  Err. rate 1st (%)  Det. rate 2nd (%)  Err. rate 2nd (%)
Ipsweep     54.67              3.70               63.75              4.13
Nmap        52.79              1.03               53.01              0.71
Postsweep   96.30              0.03               95.70              0.03
Satan       85.63              0.67               84.64              0.79

Table 3 Attack test results of U2R (1st and 2nd runs)
Attack           Det. rate 1st (%)  Err. rate 1st (%)  Det. rate 2nd (%)  Err. rate 2nd (%)
Buffer_overflow  99.10              0.00               90.62              0.00
Loadmodule       34.78              7.82               32.67              8.54
Perl             45.60              3.40               45.63              3.34
Rootkit          76.00              2.45               79.98              1.31

Table 4 Attack test results of R2L (1st and 2nd runs)
Attack          Det. rate 1st (%)  Err. rate 1st (%)  Det. rate 2nd (%)  Err. rate 2nd (%)
ftp_write       12.00              5.89               8.78               5.12
Guess_password  100.00             0.34               100.00             0.00
Phf             99.56              0.00               80.00              2.20
Spy             100.00             0.00               100.00             0.00
Multihop        30.57              6.34               73.12              9.54
Warezmaster     21.00              10.23              71.34              7.64
Imap            90.67              0.23               45.67              4.31
Warezclient     100.00             0.12               78.32              2.12


To verify the superiority of the AMCA-FCM-based intrusion detection algorithm over the FCM-based intrusion detection algorithm, the results on the same experimental data sets are compared. In terms of both detection rate and false detection rate, the former proves significantly better than the traditional FCM-based intrusion detection algorithm; the comparison results are shown in Table 5.

Table 5 Total attack test results
Attack   FCM det. rate (%)  FCM err. rate (%)  AMCA-FCM det. rate (%)  AMCA-FCM err. rate (%)
DOS      78.82              2.1                79.96                   0.07
PROBE    75.23              4.62               85.22                   0.56
U2R      60.43              7.72               70.28                   0.23
R2L      48.12              6.45               77.69                   0.16

Summary
This paper describes intrusion detection based on the improved FCM algorithm: the principle of the algorithm is described in detail, the intrusion detection process is presented, and finally the improved algorithm is simulated on the KDD CUP data sets. The results show that, compared with the ordinary method, the improved FCM algorithm increases the detection rate for all four attack categories while the false detection rate drops significantly, demonstrating that the improved algorithm is indeed superior to the traditional algorithm.

References
[1] Dunn J. C. A Fuzzy Relative of the ISODATA Process and Its Use in Detecting Compact Well-Separated Clusters. Journal of Cybernetics (1973), p. 32-57.
[2] Chen Xiaoyun, Zheng Liangren, et al. A fast mountain clustering algorithm. Computer Studies, 2008, 25(7): p. 2044-2045.
[3] Lankewicz L., Benard M. Real-time Anomaly Detection Using a Nonparametric Pattern Recognition Approach. Proceedings of the Seventh Computer Security Applications Conference, San Antonio, TX, 1991: p. 124-128.
[4] Lankewicz L. A Non-parametric Pattern Recognition Approach to Anomaly Detection. Tulane University, Dept. of Computer Science, 1992: p. 230-236.
[5] Xiangyang Li. Clustering and Classification Algorithm for Computer Intrusion Detection. Arizona State University, 2001: p. 150-156.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.726

Measurement Study of IPv6 Users on Private BT
Naixiang Ao a, Changjia Chen b
School of Electronic Information Engineering, Beijing Jiaotong University, China
a [email protected], b [email protected]

Key words: IPv6 users, evolution trend, private BT

Abstract. The Internet continually evolves in scope and complexity, with the wide deployment of higher-capacity backbone links and IPv6, the emergence of streaming applications, and the rapidly changing nature of existing applications. These changes lead us to ask what the evolution trend and the present situation of IPv6 users are on the most popular Internet applications (such as BitTorrent). To tackle this issue, we conducted a measurement study on a private BitTorrent system deployed over a campus network serving more than 25,000 users, with three external links connecting to Telecom, CERNET, and CERNET2 (IPv6). Using packet traces collected from the external links and ten months of log files from the private tracker, we first report the amount and composition of IPv6 traffic, compared with IPv4 traffic, and then present an in-depth measurement and analysis of the users group evolution, each group's status, and its level of activity.

Introduction
Internet Protocol version 6 (IPv6) [4] was designed to replace the current primary protocol (IPv4), mainly because of the rapid depletion of IPv4 addresses. In China, network equipment supporting IPv6 has been widely deployed over the networks served by the main ISPs such as Telecom and CERNET2 (IPv6 only), and a large-scale IPv6 topology has been built up. But how to promote IPv6 among Internet users who have been accustomed to IPv4 for many years is still a real question. Although a zero-cost policy for IPv6 traffic seems to be a good promotion strategy, the reality that very few resources can so far be accessed over the IPv6 network may seriously frustrate IPv6 users. Servers of many popular Internet applications (BT, IPTV) located in certain campus networks have provided service over the IPv6 network for several months; users who formerly had no access to these servers over IPv4 can experience these applications after enabling the IPv6 protocol on their PCs. Among all applications, BT contributes the largest proportion of Internet traffic. Nowadays more and more private BT sites (BT darknets [1]) exist on the Internet, including several on campus networks served by CERNET (and, at present, CERNET2 as well). According to our analysis of packet traces collected from the external links of our campus network, most of the traffic (both IPv4 and IPv6) is generated by BT, similar to the situation on the whole Internet. We therefore conducted a broad and deep measurement study on a famous private BT system deployed over our campus network; with ten months of log files from the private tracker, we investigate the scale of the membership that uses IPv6.
Our main contributions are as follows. First, we focus on the most popular Internet application, BT, and reveal the evolution trend and present situation of its IPv6 users group through the 10-month log file of a private tracker serving a famous BT darknet. Although there have been a few studies on BT darknets [1,2] and private trackers [3], to the best of our knowledge this paper is the first study to focus on the characteristics of IPv6 users of BT. Second, we pay attention to the IPv6 hotspot in China (CERNET2) and reveal the present situation of IPv6 traffic by investigating realistic packet traces collected from the external links (a mixture of IPv6 and IPv4) of a campus network.
In Section 2 we describe the datasets and illustrate our analysis methodology, and also reveal the situation of IPv6 traffic over the campus network; Section 3 shows the evolution trend of the IPv6 users group on a famous BT darknet over a 10-month period; we discuss our study and summarize related work in Section 4; the conclusion is drawn in Section 5 with a summary of future work.


Datasets Description and Analysis Methodology
Dataset 1: packet traces. We captured packets at the interface of the export link of a student department building from Jun. 7, 2010 to Jun. 8, 2010. The capture start times were 9:00, 12:00, 14:00, 17:00, 20:00 and 23:00 on each day, and every capture lasted 20 minutes. We used CoralReef as the traffic statistics tool, differentiated IPv4 and IPv6 traffic by the contents of the IP address field in the packet header, and distinguished the flows of the various Internet applications by their common ports.
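The per-packet bookkeeping just described can be sketched as follows in Python; the record layout and the BitTorrent port list are illustrative assumptions (the actual statistics were produced with CoralReef):

    # Split byte counts by IP version and attribute IPv6 bytes to BT by port.
    BT_PORTS = {6881, 6882, 6883, 6884, 6885}   # example well-known BT ports

    def tally(packets, stats):
        for p in packets:  # p: dict with 'version', 'sport', 'dport', 'length'
            key = "ipv6" if p["version"] == 6 else "ipv4"
            stats[key] = stats.get(key, 0) + p["length"]
            if key == "ipv6" and (p["sport"] in BT_PORTS or p["dport"] in BT_PORTS):
                stats["ipv6_bt"] = stats.get("ipv6_bt", 0) + p["length"]
        return stats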

Table 1 Traffic statistics
2010-7-7
Time                        9:00    12:00   14:00   17:00   20:00   23:00
Throughput (Mbit/s)         265.43  304.49  342.75  351.24  388.23  834.53
IPv6 traffic (% of total)   43.21   40.07   40.22   50.09   49.32   36.36
BT traffic (% of IPv6)      88.12   63.78   79.16   73.85   70.04   78.30
2010-7-8
Time                        9:00    12:00   14:00   17:00   20:00   23:00
Throughput (Mbit/s)         277.5   314.02  430.54  430.4   487.05  637.54
IPv6 traffic (% of total)   41.92   55.93   39.17   42.59   41.01   40.56
BT traffic (% of IPv6)      90.72   82.91   89.74   71.77   85.96   79.68

Table 1 shows that IPv6 traffic is commensurate with IPv4 traffic, and sometimes IPv6 traffic accounts for a larger proportion than IPv4 does; this means that many students have become IPv6 users. Besides, we find that BT accounts for a surprisingly large proportion (more than 90% during certain periods) of the total IPv6 traffic (the same feature is also found in the packet traces collected from the external IPv6 link of the campus network).
Dataset 2: log files of the private tracker (from Oct. 1, 2009 to Jun. 30, 2010). Since most IPv6 traffic is generated by BT, BT is an appropriate Internet application for investigating the evolution of IPv6. We therefore conducted a measurement study on a private BT system with more than 40,000 registered accounts, 110,000 torrents and 25,000 active torrents. This BT darknet is deployed over the same campus network referred to above, from which we collected the packet traces. Users of this BT community can be divided into three users groups according to their network environments and IP protocols:
IPv6 users group: members outside the campus network on which the BT darknet is deployed, who can visit the BT website, communicate with the tracker and exchange file pieces only over IPv6;
IPv4 users group: members inside the campus network who have not enabled the IPv6 protocol on their PCs; they can visit the BT website, communicate with the tracker and exchange file pieces only over IPv4;
Mixed users group: members inside the campus network who have enabled the IPv6 protocol; they visit the BT website and communicate with the tracker over IPv4, while both IPv6 and IPv4 can be used to exchange data.
With the 10-month log files collected from the tracker server, we analyze many aspects. For instance, from the observation that the sizes of the three users groups evolve over time, we reveal the evolution of the scale of the IPv6 users group; the number of "completed" events per day and the downloaded bytes per day of the three users groups reflect the present status of IPv6 in the BT darknet.


Evolution trend of the IPv6 users group on the private BT
The proportions of the three types of users change over time, which intuitively reflects the adoption trend of IPv6. For the BT system, the number of active peers in each users group (counted either as unique accounts or as unique IP addresses) best represents each group's share. In addition, the total download bytes per day and the number of completions per day of the three users groups reflect, to some extent, their respective status in the BT darknet. Before presenting measurement results on the proportions of the three groups, we illustrate our statistics methodology with a brief explanation of the log-file records. Every record in the tracker's log file begins with an IP-address field. If the content of this field is an IPv6 address, the request comes from an IPv6 user (because only members of the IPv6 users group communicate with the tracker over IPv6). To determine which users group a request beginning with an IPv4 address comes from, we use another field, the available-IPv6-address field: if a record beginning with an IPv4 address includes a correctly formed available-IPv6-address field, the request comes from the Mixed users group; otherwise it comes from the IPv4 users group.
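This rule maps directly onto code. A minimal Python sketch of the group assignment follows; the record layout (whitespace-separated fields with an optional "ipv6=<address>" field) is a hypothetical stand-in for the tracker's actual log format:

import ipaddress

def classify_record(record: str) -> str:
    fields = record.split()
    if ipaddress.ip_address(fields[0]).version == 6:
        return "IPv6 users group"        # only outside members reach the tracker over IPv6
    for field in fields[1:]:
        if field.startswith("ipv6="):
            try:
                if ipaddress.ip_address(field[5:]).version == 6:
                    return "Mixed users group"   # IPv4 request announcing a valid IPv6 address
            except ValueError:
                pass                             # malformed field: ignored
    return "IPv4 users group"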

Figure 1 Number of active peers (per day) of the three users groups, by month (2009.9 to 2010.6): (a) unique passkeys (accounts); (b) unique IPs


Figure 1 shows the evolution (by month) of the proportions of the three users groups, counted as the average number of active peers per day in each month. Whether measured by accounts or by IPs, the scale of the IPv6 users group grows larger and larger. Since the private BT limits new account registration while allowing users to log into the same account from different IP addresses simultaneously, the number of unique IPs reflects the scale of a users group more accurately. From Figure 1, the IPv6 users group has become the one with the largest scale, which proves that more and more outside users have joined the BT darknet over the IPv6 network. Turning to inside users (the IPv4 and Mixed users groups), Figure 1 shows that the Mixed users group is larger than the IPv4 users group in terms of both unique accounts and unique IPs. Moreover, the Mixed users group grows over time, while the IPv4 users group shrinks. This phenomenon indicates that most Internet users inside the campus network prefer to enable IPv6 on their PCs. Under the premise of a limited IP address space, this preference leads to a smaller IPv4 users group whose members access the Internet (naturally including BT) over the pure IPv4 network. We discuss the main reason for this preference, together with a comparison of the three groups' downloading performance, in another ongoing study.

Figure 2 Average number of each users group's completions (per day), by month (2009.9 to 2010.6)

Figure 3 Average download bytes (per day, x 10^12) of each users group, by month (2009.9 to 2010.6)


On the other hand, the download bytes and the number of completions reflect the status of each type of user in the BT darknet. From each day's log file we extract records that include "event=completed". These records are classified into three categories (IPv4 users group's, IPv6 users group's, and Mixed users group's) according to the criteria above; the statistics of each group's download bytes are obtained by a similar approach. Figure 2 shows the average number of completions (per day) in each of the 10 months, while Figure 3 shows the corresponding download bytes of each users group. First, we consider each users group individually. Both the number of completions and the download bytes of the IPv4 users group decreased over the 10-month period, while the other two groups kept both volumes increasing. Next, we compare the three groups. At the beginning of the interval, both volumes of the Mixed users group were smaller than those of the IPv4 users group, and both volumes of the IPv6 users group were much smaller still. By the end, the Mixed users group's volumes were several times larger than the IPv4 users group's, while the volumes of the IPv6 users group had caught up with those of the IPv4 users group. Combined with the information in Figure 1, we find that although the IPv6 users group is now the largest of the three, its number of completions and download bytes are comparatively small, which shows that the overall level of download activity of IPv6 users is lower than that of members of the other two groups. Despite this low level of activity, outside users who join swarms over the IPv6 network still play an important role in the BT darknet because of the rapid expansion of their group's scale. Meanwhile, with more users inside the campus enabling IPv6 on their PCs, the Mixed users group has taken the most critical position in the BT darknet (it has the most users with a high level of activity and contributes the largest proportion of traffic). All of this reveals that IPv6 is spreading rapidly on the Internet, or at least within the BT application.

Related work and discussion
Two bodies of work are related to this study. First, BT darknets and private trackers have been studied by means of measurement [1][3] and theoretical analysis [2]. Second, with the spreading deployment of IPv6 topology, the features of BT traffic over IPv6 have also attracted attention [5]. Although there have been many studies of BT and of IPv6, this paper is the first to focus on the scale and performance of a private BT system's IPv6 users group. Providing access to popular BT-like Internet applications over IPv6, combined with a zero-cost policy for IPv6 traffic, is believed to be a good strategy for IPv6 promotion.

Conclusion
In this paper, traffic analysis is performed on real packet traces collected from the external links of a campus network. IPv6 traffic is found to account for a considerable portion of the total, and flows generated by BT occupy more than half of the IPv6 traffic. From 10 months of tracker log files of a popular BT darknet deployed in the same campus network, the number of members who use IPv6 is shown to keep increasing gradually, which suggests that providing access to popular Internet applications over the IPv6 network can be a good strategy for IPv6 promotion.
A performance (download time) gap among the three users groups is examined in our other ongoing study. A natural next step is to model the private tracker's operating mechanism and the features of the three users groups, investigate the factors that can affect users' performance, and identify the major ones.


Acknowledgement
We thank the support of the China National Science and Technology Plan 973: 2007CB307101-1, and we thank the reviewers for their valuable comments.

References
[1] Z. Liu, P. Dhungel, D. Wu, C. Zhang and K.W. Ross: BitTorrent darknets, in IEEE INFOCOM, 2010.
[2] Z. Liu, P. Dhungel, D. Wu, C. Zhang and K.W. Ross: Understanding and Improving Incentives in Private P2P Communities, in ICDCS, 2010.
[3] X. Chen, Y. Jiang, X. Chu: Measurements, Analysis and Modeling of Private Trackers, in IEEE Tenth International Conference on Peer-to-Peer Computing, 2010.
[4] S. Deering, R. Hinden: Internet Protocol, Version 6 (IPv6) Specification, RFC 1883, IETF, 1995.
[5] C. Ciflikli, A. Gezer, A.T. Ozsahin and O. Ozaksap: BitTorrent packet traffic features over IPv6 and IPv4, Simulation Modelling Practice and Theory, Oct. 2010.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.732

Research on Decision Tree Algorithm Based on Information Entropy

DU Ming(a), WANG Shu-mei(b), GONG Gu(c)
College of Computer Science and Technology, Xuzhou Normal University, Xuzhou 221116, China
(a) [email protected], (b) [email protected], (c) [email protected]

Key words: decision tree; classification algorithm; entropy; information gain; gain ratio

Abstract. The decision tree is an important learning method in machine learning and data mining. This paper discusses the method of choosing the best attribute based on information entropy. It analyzes the process and characteristics of classification and knowledge discovery based on decision trees in data mining. Through an instance, the paper shows the procedure of selecting the decision attribute in detail, and finally it points out the developing trends of decision trees.

Introduction
Decision tree technology is an effective method by which computers imitate human decision-making; it is used widely in medical service, the insurance industry, telecommunications, manufacturing, image recognition, robot navigation, and so on. The decision tree method uses information theory to seek the attribute field with the greatest information content in the database, establishing a decision tree node and branches according to the different values of that attribute field, down to the leaf nodes. This makes the regularities in the data visible, needs no long model-building process, produces output that is easy to understand, and achieves high precision; therefore decision trees are applied broadly in knowledge discovery systems. Many algorithms follow this approach, such as ID3, C4.5, C5.0, CHAID, CART, and SLIQ.

Information Entropy Theory
Information theory was proposed by Shannon (C.E. Shannon) in 1948. In it, the information content and the entropy are defined respectively as Information = -log2(Pi) and Entropy = -Σ Pi log2(Pi). In fact, the entropy is the weighted average of the system's information, namely the average information of the system. The famous ID3 algorithm was proposed by J. Ross Quinlan in the paper "Induction of Decision Trees", published in the Machine Learning journal in 1986; the ID3 algorithm is based on Shannon's information theory.
Decision tree learning is an inductive learning method. The training set T is a group of analyzed data elements from the database. A unit of the training set T is called a training sample, and each training sample has a category mark. A concrete sample takes the form (V1, V2, ..., Vn; C), where Vi is an attribute value and C denotes the category. Suppose the training data set T (of s samples) has m distinct values of the class-marking attribute, defining m different classes Ci (i = 1, 2, ..., m). Si is the number of samples of Ci, and Pi, the probability that a sample belongs to Ci, is calculated as Si/s. Suppose attribute A has v different values {a1, a2, ..., av}, which divide the training set into v subsets {T1, T2, ..., Tv}, where Tj contains the samples of the training set whose value of A is aj. Let s_ij be the number of samples of class Ci in subset Tj, and let P_ij = s_ij / |Tj| be the probability that a sample in Tj belongs to Ci. The term (s_1j + ... + s_mj)/s acts as the weight of the jth subset, equal to the number of samples of the subset (those with A = aj) divided by the total number of samples in the training data set.
Definition 1. If attribute A has v different values, then the entropy (in bits) of A relative to the classification over the v values is defined as:

E(A) = Σ_{j=1..v} ((s_1j + ... + s_mj) / s) · I(s_1j, s_2j, ..., s_mj)    (1)


In (1), s_1j + ... + s_mj is the number of samples in subset Tj and s is the number of samples in the training set T. I(s_1j, s_2j, ..., s_mj) is the expected information (in bits) needed to classify a sample, expressed as:

I(s_1j, s_2j, ..., s_mj) = -Σ_{i=1..m} Pi log2(Pi)    (2)

Definition 2. When attribute a takes v values, the information gain (in bits) relative to the sample set T can be expressed as:

Gain(T, a) = I(T) - E(a)    (3)

where I(T) is the expected classification information of the whole training set T (computed as in (2) over the class counts of T) and E(a) is given by (1).

To create a decision tree using information gain, the information gain of every attribute is inspected first, the attribute with the biggest information gain becomes the root node, and the decision tree is then generated recursively: after each division, the attribute with the maximum information gain among the remaining attributes is chosen for each branch. This idea is used in the ID3 algorithm. The recursion is applied at each tree node until all samples of a node belong to a single category, so a top-down greedy search is used to create a decision tree over the whole space of possibilities. In ID3, all attributes are discrete or categorical.
ID3 was extended to C4.5 by Quinlan; in the C4.5 algorithm the gain ratio is the attribute selection metric. A decision tree algorithm of this kind must satisfy key requirements on the attribute description: each attribute value is discrete or continuous, and the attributes used to describe the samples must be mutually distinct. For a discrete attribute, T is divided into subsets according to its discrete values, and the information gain ratio of the attribute is calculated directly. For a continuous attribute, the attribute values in T are first sorted. Assuming the sorted values are v1, v2, ..., vn, then for i from 1 to n-1, the threshold v = (v_i + v_{i+1})/2 divides T into two subsets Tv1 = {vj | vj ≤ v} and Tv2 = {vj | vj > v}; the information gain of each candidate is calculated, and the threshold v' with the largest gain is taken as the local split point. Finally, the information gain ratio is calculated according to this division of T and used as the gain ratio of the attribute [2,7,8].
In C4.5, the attribute with the largest information gain ratio is chosen as the splitting attribute of a node. If the splitting attribute is continuous, a sequential search over the samples finds the attribute value not exceeding the local split point v', which is used as the node's split point. If T is divided into subsets T1, T2, ..., Ts by the splitting attribute, node N generates s child nodes; if the splitting attribute is continuous, s = 2, and if it is discrete, s is the number of its distinct values. Then, for each subset Ti (i in [1, s]): if Ti is empty, node Ni is set to be a leaf node and marked with the majority category of its parent node; if Ti is not empty, the same process is applied to node Ni. A decision tree is thus created recursively.
Suppose the training set T is divided into T1, T2, ..., Tn according to the n different values of a discrete attribute a. The information gain ratio of the classification by a is expressed as:

Gain_ratio(a) = Gain(a) / Split_Info(a)    (4)

where Split_Info(a) is expressed as:

Split_Info(a) = -Σ_{i=1..n} (|Ti| / |T|) · log2(|Ti| / |T|)    (5)


In (5), |Ti| is the number of samples in Ti and |T| is the total number of samples in the training set T. Using the gain ratio to create a decision tree is more robust than using Gain(a); we can use the Gain_ratio(a) criterion to build the tree, selecting as the test attribute the one with the biggest ratio.
The C5.0 algorithm was developed on the basis of C4.5; in C5.0, the information gain ratio of an attribute is likewise used to select attributes. Computing information entropy takes the most time in creating a decision tree; some scholars believe that the many continuous-valued operations occupy 70% of the tree-building time. The advantages of C5.0 are that it generates understandable rules, that the computational load of handling continuous and discrete attributes is comparatively small, and that the decision tree clearly shows which attributes are more important. Its shortcomings are that predicting continuous attributes is more difficult and that, when the data set has too many classes, the error may grow quickly; under a single classification attribute the result is not globally optimal, but combining C5.0 with its predecessor algorithms yields better results.
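The threshold search for a continuous attribute described above can be sketched in a few lines of Python; the function names are illustrative, and the routine assumes in-memory lists of values and class labels:

from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def best_threshold(values, labels):
    """Return (largest information gain, threshold v') for a continuous attribute."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    base = entropy([c for _, c in pairs])
    best_gain, best_v = -1.0, None
    for i in range(n - 1):
        if pairs[i][0] == pairs[i + 1][0]:
            continue                                # no boundary between equal values
        v = (pairs[i][0] + pairs[i + 1][0]) / 2     # candidate midpoint v = (v_i + v_{i+1}) / 2
        left = [c for _, c in pairs[: i + 1]]
        right = [c for _, c in pairs[i + 1:]]
        gain = base - len(left) / n * entropy(left) - len(right) / n * entropy(right)
        if gain > best_gain:
            best_gain, best_v = gain, v
    return best_gain, best_v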

Algorithm validation
A decision tree algorithm based on information theory follows the principle of minimizing the computation needed to classify a sample in the database: in choosing an attribute, the classification on that attribute should minimize the information needed for the resulting sample classification. Generally speaking, the complexity of the decision tree is closely related to the information content of the attribute values. We use an information system from the literature [3] as the data set for the information entropy analysis, shown in Table 1; the data express information related to weather and sports.

Table 1 Training set T (an information system)

 U   Outlook(a1)   Temperature(a2)   Humidity(a3)   Windy(a4)   Class(C)
 1   sunny         hot               high           false       N
 2   sunny         hot               high           true        N
 3   overcast      hot               high           false       P
 4   rain          mild              high           false       P
 5   rain          cool              normal         false       P
 6   rain          cool              normal         true        N
 7   overcast      cool              normal         true        P
 8   sunny         mild              high           false       N
 9   sunny         cool              normal         false       P
10   rain          mild              normal         false       P
11   sunny         mild              normal         true        P
12   overcast      mild              high           true        P
13   overcast      hot               normal         false       P
14   rain          mild              high           true        N

(Attributes a1 to a4 are the condition attributes C; Class is the decision attribute Y.)


Nine samples of the training set belong to class P and five samples belong to class N, so the entropy before partitioning is calculated from formula (2) as:

I(S1, S2) = I(9, 5) = -(9/14) log2(9/14) - (5/14) log2(5/14) = 0.940 bit

Attribute Outlook (a1) separates the data set T into three subsets {sunny, overcast, rain}. The value sunny covers five samples, of which two belong to class P and three to class N; overcast covers four samples, all four belonging to class P and none to class N; rain covers five samples, of which three belong to class P and two to class N. The test information on a1 is then calculated from formula (1) as:

E(a1) = (5/14) I(2, 3) + (4/14) I(4, 0) + (5/14) I(3, 2) = (5/14)(0.971) + 0 + (5/14)(0.971) = 0.694 bit

The information gain of attribute Outlook follows from formula (3):

Gain(a1) = I(S1, S2) - E(a1) = 0.940 - 0.694 = 0.246 bit

Similarly, the information gain of Temperature is Gain(a2) = I(S1, S2) - E(a2) = 0.029 bit; the information gain of Humidity is Gain(a3) = I(S1, S2) - E(a3) = 0.151 bit; and the information gain of Windy is Gain(a4) = I(S1, S2) - E(a4) = 0.048 bit. From these results, attribute Outlook obtains the biggest information gain and becomes the decision tree root; its values divide the samples into three branches, and the information gain of each branch is calculated again and compared for the next division.
Next, the information gain ratios are calculated from the attribute information in the training set. According to formula (5),

Split_Info(a1) = -(5/14) log2(5/14) - (4/14) log2(4/14) - (5/14) log2(5/14) = 1.577 bit

and from formula (4):

Gain_ratio(a1) = Gain(a1) / Split_Info(a1) = 0.246 / 1.577 = 0.156

Similarly, Gain_ratio(a2) = 0.019, Gain_ratio(a3) = 0.151, and Gain_ratio(a4) = 0.049.

Since Gain_ratio(a1) is the largest, by the information gain ratio criterion the decision tree algorithm chooses a1 as the initial test for partitioning the database T, the same choice as under the information gain criterion above. After the initial partition (shown in Figure 1), each node contains a subset of the samples of T; the verification and optimization process is then repeated on each node, ultimately generating a complete decision tree.
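The hand calculation above can be checked mechanically. The following minimal Python sketch over the Table 1 data reproduces, up to rounding in the last digit, Gain(a1) = 0.246, Split_Info(a1) = 1.577, and Gain_ratio(a1) = 0.156, with Outlook the winner under both criteria:

from collections import Counter
from math import log2

# rows: (Outlook, Temperature, Humidity, Windy, Class), exactly as in Table 1
T = [("sunny", "hot", "high", "false", "N"), ("sunny", "hot", "high", "true", "N"),
     ("overcast", "hot", "high", "false", "P"), ("rain", "mild", "high", "false", "P"),
     ("rain", "cool", "normal", "false", "P"), ("rain", "cool", "normal", "true", "N"),
     ("overcast", "cool", "normal", "true", "P"), ("sunny", "mild", "high", "false", "N"),
     ("sunny", "cool", "normal", "false", "P"), ("rain", "mild", "normal", "false", "P"),
     ("sunny", "mild", "normal", "true", "P"), ("overcast", "mild", "high", "true", "P"),
     ("overcast", "hot", "normal", "false", "P"), ("rain", "mild", "high", "true", "N")]

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def gain_and_ratio(a):
    subsets = {}
    for row in T:
        subsets.setdefault(row[a], []).append(row[-1])
    e_a = sum(len(s) / len(T) * entropy(s) for s in subsets.values())
    gain = entropy([row[-1] for row in T]) - e_a
    split_info = entropy([row[a] for row in T])   # Split_Info is the entropy of the partition
    return gain, gain / split_info

for a, name in enumerate(("Outlook", "Temperature", "Humidity", "Windy")):
    g, r = gain_and_ratio(a)
    print(f"{name}: Gain = {g:.3f} bit, Gain_ratio = {r:.3f}")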


Fig. 1 The results of the first training-set segmentation after the information gain calculation (Outlook as root; branch tables for sunny, overcast, and rain listing each sample's remaining attributes a2 to a4 and class)

Decision tree creation is a process of segmenting the training sample set. The branches of the decision tree are established gradually as the data are segmented. When further segmentation of a data branch is no longer meaningful, the corresponding branch of the decision tree stops growing; when segmentation of all data branches is no longer meaningful, the tree-building process ends and a complete decision tree has formed. Figure 2 shows the division after the second segmentation, and Figure 3 shows the ultimate decision tree.

Fig. 2 The division after the second segmentation (the sunny branch is split on Humidity and the rain branch on Windy)

Fig. 3 The ultimate decision tree (Outlook at the root: sunny leads to Humidity, with high = N and normal = P; overcast = P; rain leads to Windy, with true = N and false = P)

After a complete decision tree has been formed, it is generally not used immediately for classification or prediction of new data, because the complete tree is not the best one for analyzing new data objects at this point. The major reason is that the complete decision tree describes the training samples "too precisely" and is thus unable to analyze new data objects reasonably. In general, the created decision tree is pruned and otherwise post-processed until a healthy tree is obtained, so that the resulting decisions and rules are useful for policy-makers.

CONCLUSION
In this paper, the decision tree algorithm is used to create data classification and prediction models, clearly reflecting which attributes are more important in the classification of the database and generating understandable classification rules; the decision tree algorithm based on computing information entropy can provide very valuable information and knowledge for policy-makers. Future research on decision tree classification algorithms will focus on the integration of algorithmic improvements with data mining.


References
[1] Mehmed Kantardzic, translated by Shan S.Q., Chen Y., Cheng Y.: Data Mining: Concepts, Models, Methods, and Algorithms [M]. 2003(8): 120-143.
[2] Li Tianchi, Zhang Dezheng, Wang Zongjie: Based on C4.5 analysis on measuring error for Digital Gas Field [J]. Control & Automation, 2006, 5-3: 10-12.
[3] Quinlan J.R.: Induction of Decision Trees [J]. Machine Learning, 1986, 1(1): 81-106.
[4] Quinlan J.R.: C4.5: Programs for Machine Learning [M]. San Mateo, California: Morgan Kaufmann, 1993.
[5] Quinlan J.R.: Simplifying decision trees [J]. Int. J. of Man-Machine Studies, 1987, 27(3): 221-234.
[6] Quinlan J.R.: Bagging, Boosting and C4.5, in Proc. of the 13th National Conference on Artificial Intelligence, Portland, Oct. 1996: 725-730.
[7] Quinlan J.R.: Decision trees and decision-making [J]. IEEE Transactions on Systems, Man, and Cybernetics, 1990, 20(2): 339-346.
[8] Swere E., Mulvaney D.J.: Robot navigation using decision trees [R]. Loughborough, UK: Electronic Systems and Control Division Research, 2003.
[9] Ruggieri S.: Efficient C4.5 [J]. IEEE Transactions on Knowledge and Data Engineering, 2002, 14(2): 438-444.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.738

Topological Optimization Based on Topologically-Critical Nodes in Unstructured P2P Networks

Wei FAN(a), Dong-fen YE(b), Ming-xia YANG(c), Lu ZHANG(d)
College of Electrical and Information Engineering, Quzhou University, Quzhou, Zhejiang, China
(a) [email protected], (b) [email protected], (c) [email protected], (d) [email protected]

Key words: Topological Optimization, P2P Network, Topologically-Critical Nodes

Abstract. In order to ensure that every node remains connected with the others in a P2P network and to strengthen the invulnerability of the network topology, this paper studies topological optimization in unstructured P2P networks. Starting from the weakest part of the network, the topologically-critical node, the existing algorithm for searching for and eliminating topologically-critical nodes is analyzed and improved. The experimental results show that the improved CAM algorithm (the ECAM algorithm) can greatly reduce network consumption and communication costs and improve discovery efficiency while preserving discovery accuracy.

0 Introduction
Distributed, self-organized P2P networks have grown to enormous scale with surprising speed in the past few years and are widely applied in many fields, such as resource sharing [1-5], real-time information [6], cooperative work [7], distributed computing [8], and so on. A P2P network is a carrier of various applications, and the topology of its overlay network is one of the most important factors affecting the quality of application service. Based on the requirements and features of the applications, the topology of the overlay network can be adjusted and optimized to effectively improve application performance. However, with the growing amount of information in the network, how to enhance network scalability, address the problem of bandwidth exhaustion, and improve the recall and precision of information search through efficient message routing in P2P networks is the key issue of P2P research. Whether these problems are solved well has a direct impact on the effectiveness of P2P applications and on the further development of P2P technology. Moreover, the large size and strong dynamics of P2P networks pose further challenges for topological optimization.
Network connectivity is the premise of optimizing the topology of a P2P network. To ensure that every node stays connected with the others and to strengthen the invulnerability of the network topology, this paper studies the topology of unstructured P2P networks and observes that certain nodes, which are the only channel between two or more otherwise separate subnets, have decisive influence on the topology: the failure of such a node is likely to cause the network to split. This kind of special node is called a "topologically-critical node". If the topologically-critical nodes in an unstructured P2P network can be effectively detected and reasonably eliminated through a distributed approach, the network's resistance to division is essentially enhanced and the fault tolerance of the system is significantly improved.
In this paper, topological optimization in unstructured P2P networks is studied. Starting from the weakest part of the network, the topologically-critical node, the existing algorithm for searching for and eliminating topologically-critical nodes is analyzed and improved. The experimental results show that the algorithm is feasible.


1 Related work
S. Saroiu et al. [9] studied the fault tolerance of the unstructured P2P network Gnutella through careful measurements. Their results showed that in Gnutella the connectivity of most nodes is extremely low and only a few nodes have very high connectivity. More specifically, Gnutella follows a power-law distribution with α = 2.3, so it has high fault tolerance for random node failures but poor ability to resist malicious attacks.
Liu et al. [10] studied the division of unstructured P2P networks from the perspective of topologically-critical nodes for the first time. They designed a distributed algorithm called CAM (Connection Adjacency Matrix) to detect topologically-critical nodes. The CAM algorithm can effectively detect the reachability relation of nodes and avoid repeated search, so its communication overhead is very low. However, CAM cannot accurately detect cut vertices unless there is no limit on the TTL (hop count) of message routing.
Ren Hao [11] defined the topologically-critical node in terms of graph theory: the P2P network is considered an undirected graph, and a topologically-critical node is a cut vertex of that graph. In a connected undirected graph, if a cut vertex is removed, the connectivity of the graph is damaged and some nodes become unreachable. He presented an adaptive distributed algorithm for finding cut vertices, by which a node can independently determine whether it is a cut vertex, and further proposed an active cut-vertex search algorithm to verify and improve the adaptive algorithm. The algorithm has high discovery accuracy, but its realization is rather complicated.

2 Improved algorithm (ECAM) based on the CAM algorithm to search for topologically-critical nodes
Definition 1 (Reachability relation): In an unstructured P2P network, if node A is able to reach node B and node B is able to reach node C, then node A and node C are reachable, denoted A→→C. The reachability relation is transitive: if P→→Q and Q→→R, then P→→R. The reachability relation is generally asymmetric (unless the network can be viewed as an undirected graph).
Definition 2 (Topologically-critical node, TN): In an unstructured P2P network, node C is called a topologically-critical node (TN) if, after node C is deleted, the set of its neighbor nodes is divided into two or more subsets S1, S2, S3, ..., Sn (n > 1) whose shortest paths are mutually unreachable.
In Figure 1, if node C is deleted, the set of its neighbor nodes is divided into three subsets S1, S2, and S3 whose shortest paths are mutually unreachable. Therefore, node C is a TN.

Figure 1 TN node C

2.1 Detection algorithm CAM for TN
Based on the above definition of TN, the main process of detecting a TN is as follows: for a candidate node C, first test whether the shortest paths between its neighbor nodes remain reachable after node C is deleted; then divide the set of neighbor nodes into subsets whose shortest paths are mutually unreachable; finally determine whether node C is a TN according to the number of subsets. If only one subset is obtained, node C is not a TN; otherwise, node C is determined to be a TN.
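Definition 2 can be checked centrally on a known topology before turning to the distributed procedures. The following Python sketch is an offline reference check (not the CAM or ECAM algorithm of this paper): delete the candidate node and test with a BFS whether its former neighbors still reach one another:

from collections import deque

def is_topologically_critical(adj, c):
    """adj: dict mapping each node to its set of neighbors (undirected); c: candidate node."""
    neighbors = adj[c]
    if len(neighbors) < 2:
        return False
    start = next(iter(neighbors))
    seen, queue = {start}, deque([start])
    while queue:                                 # BFS in the graph with c removed
        u = queue.popleft()
        for v in adj[u]:
            if v != c and v not in seen:
                seen.add(v)
                queue.append(v)
    return not neighbors <= seen                 # some neighbor unreachable -> c is a TN

# A ring has no TN; removing the middle node of a chain disconnects it:
# is_topologically_critical({1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}, 1) -> False
# is_topologically_critical({1: {2}, 2: {1, 3}, 3: {2}}, 2) -> True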


The CAM algorithm is the classic search algorithm for topologically-critical nodes. Based on flooding message transmission under TTL control, the algorithm centers on the detected (candidate) node and explores the local network topology to determine effectively whether the candidate node is topologically critical. But with flooding-based message transmission, the number of messages sent grows exponentially, so the network consumption is very large. If all nodes launch detection concurrently at a certain moment, the flood of detection messages can cause network congestion or even, in severe cases, network collapse. Moreover, in the CAM algorithm the detections of the candidate nodes are completely independent of each other, with no information interaction, so a candidate node obtains its full detection result only after the entire detection has finished. In fact, if a mechanism of information interaction is established among the detections of all candidate nodes, the full result can be obtained at some intermediate step rather than only after the detection completes. This greatly decreases the frequency of flooding, improves discovery efficiency, and reduces network consumption. Therefore, this paper proposes Efficient CAM (ECAM), an improved CAM in which each candidate node maintains a CAM table itself: for a detection launched by a candidate node, a table is created in the cache of that node and of every node involved, called the ECAM table. Its structure is shown in Table 1:

Table 1 ECAM Table

Target Node   Connected Nodes List
C             N1, N3
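One possible in-memory realization of this table is a map from each Target Node to its Connected Nodes List; the update corresponds to step (4) of the procedure below. A minimal sketch with illustrative names:

class EcamTable:
    """Per-node ECAM table: Target Node -> Connected Nodes List."""
    def __init__(self):
        self.rows = {}                            # candidate node -> set of neighbor numbers

    def record_probe(self, target, neighbor_no):
        """Record that a probe for `target` arrived via neighbor `neighbor_no`.
        Returns the Connected Nodes List if `target` was already known,
        so the caller can answer with Msg_Arrival at once; otherwise None."""
        if target in self.rows:
            self.rows[target].add(neighbor_no)
            return sorted(self.rows[target])
        self.rows[target] = {neighbor_no}
        return None

table = EcamTable()
table.record_probe("C", "N1")             # new record: keep forwarding the probe
print(table.record_probe("C", "N3"))      # ['N1', 'N3']: N1 and N3 are mutually reachable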

The Target Node entry indicates that the nodes in the Connected Nodes List and the Target Node are mutually reachable; the ECAM table can thus be defined as a table describing the mutual reachability relation of nodes. For example, the data in Table 1 state that node C, node N1, and node N3 are mutually reachable. For neighbor nodes, the ECAM table is maintained in the same way as the CAM table: Target Node records candidate nodes, and the Connected Nodes List records the new neighbor numbers received in Msg_Probe messages. A candidate node and its neighbors are trivially reachable. The steps of Efficient CAM (ECAM) are described below:
(1) Network initialization: the candidate node C sends a message Msg_Init to each neighbor N1, N2, ..., Nn to start a detection. A neighbor that receives Msg_Init must reply with a message Msg_Response containing all of its ECAM data. If node C receives no reply from a neighbor Ni, node Ni is considered failed and is removed from the candidate node's table of neighbor nodes.
(2) Collection of reachability relations: the candidate node C collects the Msg_Response messages sent by its neighbors and computes the reachability relations of its neighbors from the ECAM data. If the reachability relations of all the neighbors of node C are already complete, the detection finishes and step (6) is entered directly to divide the subsets; otherwise step (3) starts the detection.
(3) Initiation of detection: the candidate node C sends a detection message Msg_Probe to each neighbor that responded. This message contains the address of node C, the hop count limit TTL, and the neighbor number Ni.
(4) Detection of reachability relations: a node M receiving Msg_Probe first traverses its ECAM table; if there is already
a record of the candidate node C, node M directly returns to node C a message Msg_Arrival containing all neighbor numbers of the candidate node C found in the Connected Nodes List, and does not forward the probe to its own neighbors. If the candidate node C in Msg_Probe is new, node M adds a record to its ECAM table with Target Node C and adds the neighbor number Ni from the message to the Connected Nodes List. If a record with Target Node C already exists but the neighbor number Ni in Msg_Probe is not present in its Connected Nodes List, node M adds Ni to the Connected Nodes List of that record. Finally, if the TTL of the message is not zero, node M forwards the message to each of its neighbors (except the sender).
(5) Collection of reachability relations: if node M's ECAM record for node C holds more than one neighbor number in its Connected Nodes List, a message Msg_Arrival is sent to the candidate node C containing all those neighbor numbers, which in effect informs node C which of its neighbors are mutually reachable. The arrival scopes of the detection messages that a candidate node sends to different neighbors do not overlap (only at an intersection is Msg_Arrival sent back to report the reachability of neighbors), which avoids repeated reachability detection.
(6) Division of subsets: the candidate node C keeps collecting Msg_Arrival messages to compute the reachability relations of its neighbors. All mutually reachable nodes are put in the same subset (which is in fact a computation of equivalence classes; see the sketch below). Given the dynamic nature of P2P networks, a few Msg_Arrival messages may be lost, so a timeout value can be set for node C; after the timeout, the division is considered final.
(7) Determination of TN: after the division of subsets, node C determines whether it is a topologically-critical node according to the number of subsets: if all the neighbor nodes are in the same subset, node C is not a topologically-critical node; otherwise it is.

3 Elimination algorithm for TN
The process of eliminating a topologically-critical node converts it into an ordinary node. Two mature algorithms have been proposed: the linear connection elimination algorithm and the chordal ring elimination algorithm. Assuming that the neighbors of the node are divided into subsets S1, S2, ..., Sn, the node Mi most suitable to be connected is selected from each subset Si as the "representative node" of that subset, and the representatives are then connected in some way.
Liu et al. [10] proposed the linear connection elimination algorithm for topologically-critical nodes. It works as follows: if the candidate node is a topologically-critical node, there is more than one block in the partition of its neighbors; arrange these sets as S1, S2, ..., Sn, then randomly choose two nodes from S1 and S2 respectively and connect them, then two nodes from S3 and S4 respectively, and so on, finally connecting two nodes chosen from Sn-1 and Sn.
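A minimal Python sketch of the subset division of step (6), the TN test of step (7), and the linear connection of representatives follows; the names and the report format are illustrative, not a protocol specification:

def divide_subsets(neighbors, arrival_reports):
    """Merge neighbors reported together in one Msg_Arrival into equivalence classes."""
    parent = {n: n for n in neighbors}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]         # path halving
            x = parent[x]
        return x
    for report in arrival_reports:                # each report: neighbor numbers seen together
        for other in report[1:]:
            parent[find(other)] = find(report[0])
    groups = {}
    for n in neighbors:
        groups.setdefault(find(n), []).append(n)
    return list(groups.values())

subsets = divide_subsets(["N1", "N2", "N3", "N4"], [["N1", "N3"]])
is_tn = len(subsets) > 1                          # step (7): more than one subset means TN
representatives = [s[0] for s in subsets]         # one representative per subset
new_links = list(zip(representatives, representatives[1:]))   # connect them in a chain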

Figure 2 Schematic diagram of linear connection elimination algorithm


As shown in Figure 2, node C is a topologically-critical node of the network: its failure would divide the network into three subnets S1, S2, and S3. To eliminate the topologically-critical node, three representative nodes M1, M2, and M3 are selected from the three subnets and linearly connected one by one. All the representative nodes form a chain that joins the mutually unreachable subsets. The linear connection elimination algorithm is the simplest and most basic algorithm, on which a variety of improved algorithms can be built. This paper adopts this elimination algorithm to prevent and eliminate TNs.

4 Algorithm Analysis
4.1 Accuracy analysis of the discovery algorithm for topology key points
The accuracy of a discovery algorithm for topology key points is the percentage of true topology key points among the identified ones. For both the ECAM and the CAM algorithm, since the discovery of topology key points is restricted by the TTL of the query messages, some non-key points may be misidentified as key points and some key points as non-key points. The CAM algorithm judges from CAM data exchanged between neighbor nodes and collected in real time; the ECAM algorithm judges from two parts, the CAM data collected in real time and the ECAM data stored in the nodes. The accuracy of ECAM therefore depends on the accuracy of the stored ECAM data. Since ECAM has at least the accuracy of CAM, the algorithm efficiency improves as the proportion of stored ECAM data grows; in particular, when that proportion reaches 100%, the discovery efficiency of the algorithm is highest and its accuracy depends directly on the accuracy of the ECAM data. By the data maintenance mechanism of ECAM, the accuracy of the ECAM data depends on how frequently the maintenance mechanism is started. Let p be the proportion of ECAM data and f the maintenance frequency of the ECAM data; the discovery accuracy of the ECAM algorithm is then:

E_ECAM = (1 - p) E_CAM + p Q_f

where Q_f is the accuracy coefficient of the ECAM data. To keep E_ECAM at the same discovery accuracy as CAM, the lower bound of E_ECAM should stay within -5% of it, that is, Q_f ≥ 0.95 E_CAM. The formula is validated by the simulation results, which show that the ECAM algorithm maintains the discovery accuracy of the CAM algorithm unchanged.

4.2 Communication cost analysis
The network communication cost is a function of the network bandwidth consumed by node communication and other related parameters. It is the parameter network administrators care about most, for its major impact on network service performance; significant communication costs can severely limit the scalability of P2P systems. For the ECAM algorithm, in the best case the candidate node acquires all the information and finishes probing with only one network initialization, and the communication cost of probing all nodes is O(2 n_c) (with probability p1(t), where t refers to the time). If the node acquires all the information and finishes probing after k probes, the communication cost is O(n_c (k + 2)) (with probability p2(t)).
The worst case is a complete run of the CAM algorithm, with communication cost O(n_c (T + 2)) (with probability 1 - p1(t) - p2(t)). Therefore the communication cost of the ECAM algorithm, S_ECAM, is:

S_ECAM = O(n_c [2 p1 + (k + 2) p2 + (T + 2)(1 - p1 - p2)])


Simplifying,

S_ECAM = O(n_c (T + 2 - p1 T + p2 k - p2 T))

Since k ≤ T,

S_ECAM = O(n_c (T + 2 - p1 T + p2 k - p2 T)) ≤ O(n_c (T + 2 - p1 T)) ≤ O(n_c (T + 2)) = S_CAM

that is, S_ECAM ≤ S_CAM. This proves that the communication costs of the ECAM algorithm are less than those of the CAM algorithm, and the greater p1(t) and p2(t), the smaller the costs. The formula is tested in the subsequent simulation.

4.3 Analysis of simulation results
In this paper, the simulation platform PeerSim is used to simulate the discovery algorithm for topology key points, with the simulation program written in Java and the unstructured P2P networks generated by the WireScaleFreeDM algorithm [12]. There are 5000 experimental network nodes and the average node degree is 5. First, the accuracy of the discovery algorithm is verified. The accuracy of the ECAM algorithm depends on the data maintenance frequency f: the greater f, the more frequently the ECAM data stored in the nodes are maintained and the greater the accuracy of the ECAM data and of the algorithm.

Figure 3 Discovery accuracy of the algorithm in a network without node failures, with maintenance frequency f = 0.2 s

Figure 4 Discovery accuracy of the algorithm in a network with an attenuation frequency of 10 regular node failures per 0.1 s, with maintenance frequency f = 0.2 s

Figure 5 Discovery accuracy of the algorithm in a network with an attenuation frequency of 10 regular node failures per 0.1 s, with maintenance frequency f = 0.1 s

Figure 6 Communication costs when nodes start the discovery algorithm one by one at intervals of 0.01 s


The simulation experiment varies the maintenance frequency over three network states: the first is a network without node failures, with maintenance frequency f = 0.2 s; the second is a network with an attenuation frequency of 10 regular node failures per 0.1 s, with maintenance frequency f = 0.2 s; the third is a network with an attenuation frequency of 10 regular node failures per 0.1 s, with maintenance frequency f = 1 s. In this way, the discovery accuracy of the ECAM algorithm in a complex network is tested under different TTL conditions. The simulations under these three states verify that the ECAM algorithm guarantees accuracy at the same level as the CAM algorithm. Even in a highly dynamic network, if the maintenance frequency of the ECAM algorithm is kept high, its discovery accuracy exceeds that of the CAM algorithm; in other words, ECAM copes better than CAM in complex and highly dynamic P2P network environments. When, in the simulation, the nodes start the discovery algorithm one by one at intervals of 0.01 s, the figures show that the improved algorithm reduces the node communication costs.

5 Conclusions
In this paper, topology optimization in unstructured P2P networks has been studied starting from the weakest part of the topology, the topology key point. The existing discovery and elimination algorithms for topology key points have been analyzed and improved. The experimental results show that the improved CAM algorithm (the ECAM algorithm) can greatly reduce network consumption and communication costs and improve discovery efficiency while preserving discovery accuracy.
Topology optimization of unstructured P2P networks is an important subject in the P2P area. This paper has only partially explored the field; future research can proceed in the following directions:
(1) The elimination algorithms for topology key points keep adding connections to the topology, which improves network survivability but also increases complexity and redundancy. As the self-adaptive cost of updating routing tables grows with the connectivity of each node, to make the topology more concise and efficient, future research can focus on designing algorithms that reduce connections, ensuring network stability and reliability while reducing network redundancy.
(2) Further research should apply the concept of topology key points and the discovery and elimination algorithms to structured P2P networks. Since structured and unstructured P2P networks differ in topology structure, routing methods, and many other respects, and in particular the overlay network of a structured P2P network cannot be viewed as an undirected graph, the concept of topology key points must first be modified appropriately to enhance the resistance of structured P2P networks to segmentation and their fault tolerance.

Acknowledgement
This paper is supported by the Department of Education of Zhejiang Province: The Application and Research of P2P Based on JXTA in Intranet (Y201016195).


References
[1] Napster website. 1999. http://www.napster.com
[2] Gnutella protocol specification. 2007. http://rfc-gnutella.sourceforge.net
[3] KaZaA website. 2007. http://www.kazaa.com
[4] eDonkey website. 2007. http://www.edonkey.com
[5] BitTorrent website. 2007. http://www.bittorrent.com
[6] Skype website. 2007. http://www.skype.com
[7] Groove website. 2007. http://www.groove.net
[8] GPU project website. 2007. http://gpu.sourceforge.net
[9] B. Krishnamurthy, S. Sen, Y. Zhang, and Y. Chen. Sketch-based Change Detection: Methods, Evaluation, and Applications. SIGCOMM Internet Measurement Conference (IMC). 2003
[10] Liu X., Xiao L., Kreling A., Liu Y. Optimizing overlay topology by reducing cut vertices. In: Proc. of the ACM Int'l Workshop on Network and Operating System Support for Digital Audio and Video (NOSSDAV). Newport: ACM Special Interest Group on Multimedia. 2006
[11] Ren Hao. Research of Topological Optimization in P2P Network [D]. Changsha: National University of Defense Technology. 2007
[12] WireScaleFreeDM website. 2009. http://xxx.lanl.gov/abs/cond-mat/0106144

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.746

An Algorithm Based on SURF for Surveillance Video Mosaicing*

GUO Keyou(1,a), YE Song(2,b), JIANG Huming(1,c), ZHANG Chunyu(3,d), HAN Kai(1,e)
1 College of Mechanical Engineering, Beijing Technology and Business University, Beijing, China
2 Automotive Transportation Technology Research Center, Research Institute of Highway, National Center of ITS Engineering & Technology, Beijing, China
3 Key Laboratory of Intelligent Transportation Systems Technologies, Research Institute of Highway, Center of ITS Engineering & Technology, Beijing, China
a [email protected], b [email protected], c [email protected], d [email protected], e [email protected]

Key words: Machine Vision; Video Mosaicing; SURF; Homography Matrix

Abstract: Using the SIFT algorithm for image mosaicing has been a research hotspot in recent years, with a wide range of applications. However, SIFT's large data volume and time-consuming computation make it unsuitable for video mosaicing with strict real-time requirements. In this paper, SURF is first used to extract feature points; then the nearest-neighbor matching method, RANSAC, and the least-squares method solve the homography matrix between images; finally, a normalized covariance correlation function is used to select the best-performing homography matrix. The algorithm not only meets the accuracy requirements of parameter estimation but also has a smaller computational load and higher speed than SIFT. Experiments show that the algorithm has good real-time performance, high accuracy, and satisfactory results, and can satisfy the requirements of real-time mosaicing.

Introduction
Video mosaicing is a technology for expanding the scene: the overlapping areas of video images from a number of cameras are spliced seamlessly to gain a broader view. It is widely used in video monitoring, virtual reality, panoramic image analysis, and so on. In recent years video mosaicing has been a research focus in the field of image processing, but, limited by algorithm complexity and hardware conditions, most research has concentrated on static image processing, which has low real-time requirements. Compared with static images, successive video frames are far more numerous, and the interval between successive frames is very small, usually only a few tens of milliseconds. Reducing the per-frame time needed by the image matching algorithm, so as to realize truly real-time mosaicing, is the common goal of researchers. Research at home and abroad has produced notable achievements in image mosaicing [1, 2, 3], but most algorithms are affected by image distortion, scene rotation, sampling noise, pixel luminance, and so on, and still need improvement. The SIFT algorithm proposed by Lowe is a scale-invariant feature description method with good robustness [4, 5, 6]; applied to image mosaicing and image registration, it achieves very good results. But SIFT's large data volume, time complexity, and time-consuming computation make it unsuitable for video mosaicing with high real-time demands.


Analysis of the Existing Technology
Image mosaicing generally includes two important steps: image registration and image fusion. Image registration, the more important of the two, matches two or more images obtained at different times, with different imaging equipment, or under different conditions. Usually, according to the camera's intrinsic and extrinsic matrices and the geometric motion model of the object, and exploiting the common scene content in the overlapping region of the corresponding images, the two images are unified in the same coordinate system; the pixel coordinates of one image are transformed so that registration turns the pair into a seamless image with a larger field of view. The registration quality has a great influence on the mosaicing results. Existing image registration methods are mainly based on pixel gray values [7], feature matching [4, 8, 9], or the transform domain [10]. Research on feature-based registration is the most extensive and has the most significant achievements; typical algorithms include SIFT matching, SUSAN corner matching, and Harris corner matching, of which SIFT is the most prominent.
The SIFT algorithm was first proposed by D.G. Lowe [6], initially for key-point feature extraction. The SIFT feature extraction algorithm uses a hierarchical pyramid and designs a multidimensional feature vector with the concept of scale space, extracting feature points by this vector. SIFT can extract many feature points, uniformly distributed over the image. The quantity of extracted feature points is very important for object identification, so SIFT performs very well there. But SIFT consumes a lot of time in feature matching, and so cannot solve the essential real-time problem.

SURF Algorithm
Document [11] showed that feature points can be extracted with the SURF method, which builds on the SIFT algorithm, uses the integral image to replace image convolution, and uses the mathematical properties of the Hessian matrix to detect extrema. Although the quality of the extracted feature points is affected to a certain degree, the great improvement in speed makes real-time mosaicing possible. This paper extracts feature points with SURF, obtains the matching points between two images by nearest-neighbor matching, solves the homography matrix between the images with RANSAC, and finally obtains the registered images.
Homography Matrix. The camera may undergo not only translational motion but also lens zoom, rotation, and other motion. Suppose I is a plane in space whose images under two viewpoints are I1 and I2, and let [x, y, 1]^T in I1 and [X, Y, 1]^T in I2 be any pair of corresponding points. The image transformation relation is given by formula (1):

[x, y, 1]^T = s H [X, Y, 1]^T    (1)

where s is a nonzero scale factor, and H is given by:


H = | h11  h12  h13 |
    | h21  h22  h23 |
    | h31  h32  h33 |    (2)

Here H is called the homography matrix of the image pair (I1, I2) with respect to the plane I. Through the formulas above, the homography matrix relates the point positions of the source image plane to the point positions of the target image plane.
Hessian matrix. The Hessian matrix is the matrix of second partial derivatives of a multivariate function. If it is positive definite at a point where the derivative is zero, the function attains a minimum there; if negative definite, a maximum; otherwise the extremum cannot be determined. This property can be used to obtain the feature points of the image. The detection of SURF feature points is also based on scale-space theory. The Hessian matrix of a point x = (x, y) in the image at scale σ is defined as:

H(x, σ) = | L_xx(x, σ)  L_xy(x, σ) |
          | L_xy(x, σ)  L_yy(x, σ) |    (3)

L_xx is the convolution of the second derivative of the Gaussian kernel g(σ) with the image I at x; L_xy and L_yy are defined similarly. On the original image, image pyramids at different scales are formed through windows of different sizes, corresponding to second-order Gaussian filters at different scales. The values resulting from the convolution of the window templates with the image are D_xx, D_xy, and D_yy, and the approximated Hessian determinant is:

Δ(H) = D_xx · D_yy - (0.9 · D_xy)^2    (4)

After the responses are computed with the Hessian matrix, extrema are sought in the three-dimensional neighborhood: only when a value is greater than or less than all 26 neighboring values in the current and the two adjacent scales is it selected as a candidate feature point. Interpolation in scale space and image space then yields the stable feature point position and its scale value.
Rotation invariability. To ensure rotation invariance, first take the feature point as the center and compute the Haar wavelet responses of the points within a neighborhood of a certain radius, weighting the responses with Gaussian coefficients so that responses close to the feature point get large weights and distant ones get small weights. The responses within a certain angular window are then summed to form a new vector; traversing the entire circular area, the direction of the maximum cumulative response is selected as the main direction of the feature point, and by the same operation the main direction of every feature point is obtained. Taking the feature point as the center, the axis is first rotated to the main direction and the descriptor vector is then normalized, which provides a certain robustness to illumination changes.


RANSAC. Based on the work above, nearest-neighbor matching produces many correspondences, but some of them are wrong. The RANSAC algorithm can be used to eliminate the mismatches and improve matching accuracy. Its basic idea is as follows: (1) Sample n times: in each round, randomly take 4 corresponding point pairs as a sample and compute a homography matrix; compute the Euclidean distance of every putative correspondence under this homography; compare it with a preset threshold and mark the points consistent with the homography as candidate (inlier) points. (2) Select the point set containing the largest number of inliers. (3) Re-estimate the homography matrix from the matches in this point set, using the least squares method to minimize the error. In this way, before the final solution, points that do not satisfy the relation obeyed by the majority are removed, the influence of mismatches is eliminated, and the final homography matrix is determined by the majority of the matched points.

Optimal homography matrix. By the property of the homography matrix, a well-estimated homography allows two images to be stitched together seamlessly. However, because mismatched feature points may survive the matching process, the homography actually solved for may not be optimal, and the stitching quality suffers accordingly. How to obtain the optimal homography matrix during stitching therefore becomes an important part of video mosaicing. In theory, the image transformed by the homography and stitched with the other image is equivalent to a single image of wider view, so the correlation of the two images over their overlap region should be very high: the higher the correlation, the better the mosaic. The normalized covariance correlation function is commonly used to describe the similarity of two point sets, and this paper uses it to describe the similarity of two images. After the position of the overlap region is determined, its similarity is computed by formula (5):

$$C(I_1, I_2) = \frac{\displaystyle\sum_{m=0}^{w}\sum_{n=0}^{h} \left[ I_1(i_1, j_1) - \bar{I}_1 \right] \left[ I_2(i_2, j_2) - \bar{I}_2 \right]}{\sqrt{\displaystyle\sum_{m=0}^{w}\sum_{n=0}^{h} \left[ I_1(i_1, j_1) - \bar{I}_1 \right]^2 \, \sum_{m=0}^{w}\sum_{n=0}^{h} \left[ I_2(i_2, j_2) - \bar{I}_2 \right]^2}} \tag{5}$$

where $w$ is the width of the overlap region, $h$ is its height, and $\bar{I}_1$, $\bar{I}_2$ are the mean intensities over the overlap region. Clearly the similarity $C$ ranges over $(-1, 1)$: the greater the value, the higher the correlation of the overlap region. In video mosaicing, we can perform single-frame stitching for the first several frames, compute the similarity of each stitch, and record the maximum of the consecutive similarity values. The homography matrix corresponding to that maximum can then be reused for all subsequent frames, without extracting and matching feature points for each new frame, which greatly reduces the running time and meets the real-time requirements of video mosaicing. As the results show, for mosaic images with a resolution of 320×240 on a Core 2 2.8 GHz CPU with 3 GB of memory, single-frame stitching takes about 400 ms; once the optimal homography matrix has been obtained, the per-frame mosaic time is far less than 40 ms.
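A minimal NumPy sketch (our illustration; the function name and interface are ours) of the similarity measure of Eq. (5) for two equally sized overlap patches:

```python
import numpy as np

def overlap_similarity(patch1, patch2):
    """Normalized covariance correlation of two overlap patches, Eq. (5)."""
    a = patch1.astype(np.float64) - patch1.mean()
    b = patch2.astype(np.float64) - patch2.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0
```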


Fig.1. Splicing effect image
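To summarize, here is a minimal sketch of the pipeline described above (ours, assuming an OpenCV build whose contrib module provides the non-free SURF implementation; the threshold values are illustrative):

```python
import cv2
import numpy as np

def stitch_pair(img1, img2, ratio=0.7):
    """SURF extraction, nearest-neighbor matching and RANSAC homography."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)
    # Nearest-neighbor matching with a ratio test to discard weak matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
               if m.distance < ratio * n.distance]
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC eliminates the mismatches and returns the homography H of Eq. (1).
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(img2, H,
                               (img1.shape[1] + img2.shape[1], img1.shape[0]))
```

Once the homography with the highest overlap similarity has been found over the first few frames, it can be reused for subsequent frames as described above.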

Conclusion

This paper introduces an algorithm for solving the optimal homography matrix based on the normalized correlation function. For video mosaicing with high real-time demands, the algorithm achieves the desired stitching quality while meeting the required computing speed, which suggests a new way of solving the image matching problem. The algorithm is highly practical and performs especially well for video with a relatively fixed background perspective; the experimental results prove its effectiveness. Meanwhile, because the SURF algorithm simplifies the feature extraction stage, its robustness is lower than that of SIFT, which is a direction for further improvement of this algorithm.

Acknowledgements

This work is part of the Beijing Young Core Talent Training Program and is supported by the Research Foundation for Youth Scholars of Beijing Technology and Business University under project code PHR20110876. I would like to express my heartfelt gratitude for their support.


References
[1] Smith S, Brady M. SUSAN: A New Approach to Low Level Image Processing [J]. International Journal of Computer Vision, 1997, 23(1): 45-78.
[2] Lemeshewsky G P. Multispectral Multisensor Image Fusion Using Wavelet Transforms [J]. Proc. SPIE, 1999, 3716: 214-222.
[3] Luo Zhongxuan, Liu Chengming. Fast Algorithm of Image Matching [J]. Journal of Computer-Aided Design & Computer Graphics, 2005, 17(5): 966-969.
[4] Lowe D G. Object Recognition from Local Scale-Invariant Features [C]. International Conference on Computer Vision, Corfu, Greece, Sept. 1999: 1150-1157.
[5] Brown M, Lowe D G. Automatic Panoramic Image Stitching Using Invariant Features [J]. International Journal of Computer Vision, 2007, 74(1): 59-73.
[6] Lowe D G. Distinctive Image Features from Scale-Invariant Keypoints [J]. International Journal of Computer Vision, 2004, 60(2): 91-110.
[7] Chen Yu, Zhuang Tiange. The Non-rigid Image Registration Based on the Probability of Gray Value Correspondence [J]. Journal of Shanghai Jiao Tong University, 1999, 33(9): 1128-1130.
[8] Dong Rui, Liang Qi. An Algorithm of Image Feature Point Matching Based on Color Gradient [J]. Computer Engineering, 2007, 33(16): 178-180.
[9] Li Guangru, Zhang Chuang. Data Fusion of Radar Image and Electronic Chart Based on Harris Feature Point Detection Method [J]. Journal of Dalian Maritime University, 2009, 35(2): 55-58.
[10] Xu Junze, Hu Bo, et al. Mutual Information Image Registration Method in the Log-Polar Coordinate Transform Domain [J]. Information and Electronic Engineering, 2009, 7(4): 289-293.
[11] Bay H, Tuytelaars T, Van Gool L. SURF: Speeded Up Robust Features [C]. Proceedings of the European Conference on Computer Vision, 2006: 404-417.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.752

Optimization for Order-picking Path of Carousel in AS/RS Based on an Improved Particle Swarm Optimization Approach

Yang Wei1,a, Li Xuelian1,b, Wang Haigang1,c, Du Yuxiao2,d

1 Mechanical & Electrical Engineering College, Shaanxi University of Science & Technology, Xi'an, China
2 International Business School, Jinan University, Guangzhou, China

[email protected], [email protected], [email protected], [email protected]

Key words: Automatic Storage & Retrieval System; particle swarm optimization approach; carousel; path optimization.

Abstract. In order to improve the access efficiency of small-item storage in an Automatic Storage & Retrieval System (AS/RS), we use the particle swarm optimization approach to analyze and optimize the order-picking path, taking a carousel with double sorting tables as the research object. Through analysis of the order-picking process, a mathematical model for optimizing the order-picking path is put forward, a particle-swarm-based solving procedure is designed for this model, and the effectiveness of the algorithm is tested. Experimental simulation proves that PSO is a fast, stable and effective solution to the order-picking path optimization problem for double sorting tables, thereby improving the overall operating efficiency of the AS/RS.

Introduction

In an Automatic Storage & Retrieval System (AS/RS), the carousel is an important component, especially for small-item storage; its flexibility and high speed give it very broad application prospects for improving the access efficiency of small items, shortening storage and retrieval time, and improving the accuracy of storage operations, so as to achieve integrated AS/RS management. At present, research on carousel order-picking paths has focused mainly on optimization for a single server, with less research on double servers, and the main method has been the genetic algorithm. Lin Jiaheng et al. [1] applied a two-layer genetic algorithm to optimize the order-picking path of a multi-layer carousel with two servers and achieved certain results, but the method cannot guarantee an optimal picking sequence of shelves in each layer. He later proved that for a single carousel to reach the optimal solution, the shelf may reverse direction at most once [2]. Accordingly, Xue Yuan [3] integrated a heuristic algorithm with the genetic algorithm: the heuristic algorithm first finds an optimal picking sequence within each level, and the genetic algorithm then optimizes the allocation of all goods. The effect is significantly better than before, but when the genetic algorithm is applied to such problems, the choice of selection, crossover, mutation and other operators affects solution quality, and the algorithm easily falls into local minima. This paper describes the working principle of the carousel, builds a mathematical model for optimizing its order-picking path, and uses an improved particle swarm optimization approach to solve the model. An example shows that the method achieves good optimization results: the accuracy of the improved particle swarm optimization approach, that is, the optimization rate, is higher than that of the genetic algorithm.


Working principle and model description of the carousel

Carousel structure and working principle. A hierarchical horizontal carousel consists of an oval multi-level shelf and two sorting tables; the shelf contains a large number of containers and goods, and each container has a fixed number (address). Sorting tables are installed at both ends of the carousel, and each shelf level has two sorting points. When the system runs, the shelf at each level can rotate independently around the vertical axis, forward or in reverse (clockwise or counter-clockwise), and the sorting tables move up and down independently from their initial positions to each sorting point to pick goods. During access operations, the horizontal position of the sorting tables is fixed: the number of the container holding the goods is entered, and the container is automatically rotated to the sorting point, where it stops [4].

The picking process of a carousel with double sorting tables is considerably more complex than with a single sorting table. The manifest to be picked is first grouped and ordered into two sub-manifests, and the two sorting tables and the carousel are then controlled separately to pick in sequence according to the two sub-manifests. Path planning means deciding, for a given batch manifest, how to schedule the two sorting tables (task allocation) and the shelf levels (picking sequence) so that the picking task assigned by the manifest takes the shortest time. The path optimization of the carousel control system is therefore mainly reflected in the execution sequence of the goods allocations in the manifest.

Mathematical model of double sorting tables in a multilayer carousel. For convenience of study and without loss of generality, the following assumptions are made [3]:
1) The goods allocations are evenly distributed;
2) The goods allocations on each shelf level are numbered 1, 2, 3, ... in the counter-clockwise direction; the initial location of the first picking point is container 1 and that of the second is container 101; the initial position of both sorting tables is the first level; the goods allocation points located at the first sorting point of each level are unfolded, with coordinates as shown in Fig. 2;
3) Each goods allocation stores only one kind of goods, and the picking time per item is a constant Tr units;
4) The two sorting tables lift at the same constant speed, needing Tm units of time to move up or down one level;
5) The shelves at every level rotate forward or in reverse at the same constant rate, passing one container per unit time;
6) The two sorting tables cannot pick at the same level at the same time: while one table is picking, the other must wait for it to finish. The waiting time is denoted Tw and is determined as needed during the calculation.

Fig.1 The sketch map of the hierarchical horizontal carousel

Fig.2 The coordinates of goods allocation

The mathematical model of the double sorting tables in the carousel is as follows. Given a picking manifest bill = {bill(1), bill(2), ..., bill(m+n)} with a total of m+n picking items (i.e., goods numbers to be picked), we label these entries with the natural numbers 1, 2, ..., (m+n). Let M1 = {i1, i2, ..., im}, M2 = {im+1, im+2, ..., im+n}, and M = {M1, M2} = {i1, i2, ..., im, im+1, im+2, ..., im+n} be a random permutation of {1, 2, ..., m+n}, so that M represents an ordering of the manifest entries. If M1 and M2,


respectively, correspond to the manifest entries picked by the No. 1 and No. 2 sorting tables, then M1 and M2 determine one picking path of the carousel with double sorting tables; we call M1 and M2 the first and second sub-manifests of M. Let L(y) be the container number at the picking point of level y; let $X_K$ and $Y_K$ be the level number and container number of the goods allocation of entry K, with $y = X_K$; and let C be the total number of containers on each level of the shelf. While the No. 1 sorting table picks in the order given by M1, the time needed for the container of task K to rotate to the picking point of its level is:

$$T_{K1} = \min\{\, |L(y) - Y_K|,\; C - |L(y) - Y_K| \,\} \tag{1}$$

The time for the sorting table to move up or down to the level of task K is:

$$T_{K2} = |X_K - X_{K-1}| \times T_m \tag{2}$$

The sorting table moves up or down while the shelves rotate horizontally at the same time, so the time for the sorting table to reach the level of task K and finish picking it is:

$$T_K = \max\{T_{K1}, T_{K2}\} + T_r + (T_w) \tag{3}$$

Here the waiting time in brackets is optional: it is included when waiting is necessary and omitted otherwise. The time for the No. 1 sorting table to complete sub-manifest M1 and return to the first level is then:

$$T(M_1) = \sum_{K=1}^{m} T_K + (X_m - 1) \times T_m \tag{4}$$

The picking process of the No. 2 sorting table is the same as that of the No. 1, so the time for it to finish picking sub-manifest M2 is:

$$T(M_2) = \sum_{K=m+1}^{m+n} T_K + (X_{m+n} - 1) \times T_m \tag{5}$$

Therefore the time needed to finish picking all manifest entries of the assigned task, i.e., the objective function, is:

$$T(M) = \max\{T(M_1), T(M_2)\} \tag{6}$$

The optimization objective of the picking path is to find the best picking sequence M* = {M1*, M2*} that minimizes the total picking time T(M*) [3]. Define the optimization rate η of the optimized manifest M* relative to the original manifest M as:

$$\eta = \frac{T(M) - T(M^*)}{T(M)} \times 100\% \tag{7}$$

For a given manifest M, T(M) is constant, so η depends only on T(M*). As the optimization result improves, the objective value T(M*) of the optimized manifest decreases and the optimization rate η increases. Therefore η reflects, to some extent, how good the optimization result is for a given manifest: the larger η, the better the result.

Application of the PSO approach to carousel path optimization

Brief introduction to the particle swarm optimization approach [5]. PSO (Particle Swarm Optimizer) is an evolutionary computation technique based on swarm intelligence. It was first proposed by Dr. Eberhart and Dr. Kennedy in 1995 as a bionic algorithm simulating the feeding behavior of bird flocks. Its principle can be stated simply: each bird (particle), starting from its own best-known position, tracks a finite set of neighbors with better positions so as to step closer to the target food location. A detailed mathematical description of the particle swarm optimization approach is given in reference [5].

The application of improved particle swarm optimization to the model. The fundamental particle swarm optimization approach optimizes all manifest entries directly, which can be understood as taking the coordinates of the goods allocations as the dimensions of a particle, the particle length being the number of goods allocation entries in one allocation.
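For illustration, the following sketch (ours; it simplifies by ignoring the inter-table waiting time Tw of Eq. (3), and the function names are ours) evaluates the objective of Eqs. (1)-(6) for a candidate pair of sub-manifests, using the initial positions of assumption 2:

```python
def sub_manifest_time(tasks, start_container, levels=5, C=200, Tm=5, Tr=10):
    """Picking time of one sorting table over its sub-manifest, Eqs. (1)-(4).
    tasks: list of (level X_K, container Y_K); waiting time Tw is ignored."""
    L = {y: start_container for y in range(1, levels + 1)}  # picking point per level
    t, level = 0, 1
    for X, Y in tasks:
        rot = abs(L[X] - Y)
        Tk1 = min(rot, C - rot)        # Eq. (1): shortest rotation to the picking point
        Tk2 = abs(X - level) * Tm      # Eq. (2): lifting time between levels
        t += max(Tk1, Tk2) + Tr        # Eq. (3), with Tw omitted
        L[X], level = Y, X
    return t + (level - 1) * Tm        # Eqs. (4)/(5): return to the first level

def total_time(M1, M2):
    """Objective T(M) of Eq. (6); tables start at containers 1 and 101."""
    return max(sub_manifest_time(M1, 1), sub_manifest_time(M2, 101))
```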


Obviously, if the picking sequence at each level reaches the optimal result, the overall optimum is a good approximation of the optimal solution. The fundamental particle swarm optimization is therefore improved into a two-step optimization method. The first step optimizes the carousel at each level so that each level achieves its best result; an optimal path with forward and reverse rotation must satisfy two conditions: the shelf may reverse direction at most once, and a section of path is repeated only to avoid another section that is longer than it. The second step applies the particle swarm optimization approach to all manifest entries, which can be understood as changing the dimensions of the fundamental algorithm to the goods-allocation coordinates of a section of manifest entries.

The specific steps of the algorithm:
Step 1: Classify the given manifest of M items by level, and optimize the manifest entries of each level.
Step 2: Use the particle swarm optimization approach for the overall optimization: encode and decode the particles, connect the particle representation with the practical problem, determine the fitness function (objective function), and then follow the PSO steps to compute the global extremum, which is the optimal picking sequence.

Algorithm Examples and Experimental Results

Parameter setting. Assume the shelf has 5 levels with 200 containers per level, and the total number of entries in the manifest is M = 50; the sample manifest data come from the optimization results of the heuristic genetic algorithm in literature [3], where picking the whole manifest takes 811 unit times and the computation takes 37 seconds. Model parameters: Tm = 5, Tr = 10, C = 200.

Experimental results. Implementing the steps above in MATLAB, the experimental simulation gives the following result: the improved particle swarm optimization approach needs 623 unit times to pick the entire manifest.

Comparison of algorithm results. Table 1 compares the improved particle swarm optimization approach with the heuristic genetic algorithm (Tmax = 50); both are two-step optimizations. From Table 1 we can see that the improved particle swarm optimization approach is better than the heuristic genetic algorithm in average optimization rate, average computation time and the average generation at which the optimal solution is found. This is because the particle swarm optimization approach does not require complex crossover and mutation operators like the genetic algorithm; instead, the search of every particle interacts with the global swarm to strengthen the algorithm, which shows the advantage of particle swarm optimization.

Table 1 Optimization comparison between the improved particle swarm optimization approach and the heuristic genetic algorithm (Tmax = 50)

Population size | Improved PSO: avg. optimization rate | avg. computation time (s) | avg. generation of optimal solution | Heuristic GA: avg. optimization rate | avg. computation time (s) | avg. generation of optimal solution
20 | 62.8% | 0.32 | 15-25 | 54.8% | 9 | 20-50
40 | 63.2% | 0.65 | 10-20 | 56.6% | 20 | 30-50
60 | 63.5% | 1.01 | 10-20 | 57.2% | 30 | 20-40

Summary

The carousel is an important part of an Automatic Storage & Retrieval System, and its study helps improve the overall efficiency of the AS/RS. This paper studies the optimization of the order-picking path of a carousel with double sorting tables, builds a mathematical model, and solves it with the particle swarm optimization approach; a worked example proves that the method is feasible and advances the solution of the order-picking path optimization problem for carousels with double sorting tables. The comparison shows that, for this problem, the improved particle swarm optimization approach proposed in this paper achieves better optimization results than the improved genetic algorithm, reflecting the superiority of particle swarm optimization in search efficiency.


Acknowledgement

Fund Project: National Natural Science Fund (11072192); Natural Science Special Fund of the Education Department of Shaanxi Province (2010JK428).

References
[1] Lin Jiaheng, Li Guofeng, Liu Changyou. Two-stage genetic algorithm of order-picking path optimization for hierarchical carousel with dual servos [J]. Control and Decision, 1997, 12(4): 332-336.
[2] Lin Jiaheng, Wang Zhao, Liu Changyou. An optimized method for carousel access [J]. Chinese Control and Decision-Making Annual Conference Proceedings, 1995: 412-415.
[3] Xue Yuan. Research on logistics and distribution center technology of chain [D]. Master's Thesis, Shenyang University of Technology, 2003.
[4] Lin Jiaheng, Li Guofeng, Li Jianxun. The application of the genetic algorithm in order-picking path optimization of carousel picking with double sorting tables [J]. Journal of Shandong University of Technology, 1997, 27(3): 236-239.
[5] Ji Zhen. Particle swarm optimization approach and its application [M]. Beijing: Science Press, 2009.
[6] Zhang Pan, Tian Guohui, Jia Lei. New hybrid genetic algorithm solving the order-picking optimization problem of a multi-carousel system [J]. Journal of Shandong University of Technology, 2004, 40(6): 34-38.
[7] Zhang Xinmin, Kong Xiangzhuo, Han Xiaoguang. Modeling and optimizing fixed-shelf order-picking for AS/RS based on least time [C]. International Conference on Automation and Logistics, 2008.
[8] X. Chen, J.-F. Jiang. Particle swarm optimization algorithms with immunity for the traveling salesman problem [J]. Computer & Digital Engineering, 2006, 34(6): 10-132.
[9] Li Meijuan. Research on optimization method of Automatic Storage & Retrieval System [D]. Master's Thesis, Dalian University of Technology, 2008.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.757

Modeling of Proportional Integral Derivative Neural Networks Based on Quantum Computation

NAN Dongxiang1,a, ZHANG Yunsheng2,b and SUN Xueqiang1,c

1 Kunming University, Kunming, Yunnan Province, China
2 Kunming University of Science and Technology, Kunming, Yunnan Province, China

[email protected], [email protected], [email protected]

Key words: Neural Networks; Quantum Computation; Proportional Integral Derivative;

Abstract: There has been growing interest in artificial neural networks (ANNs) based on quantum theoretical concepts and techniques, motivated by both cognitive science and computer science. The so-called Quantum Neural Networks (QNNs) are an exciting research area in the field of quantum computation and quantum information. We propose a model of proportional-integral-derivative neural networks based on quantum computation, called QNNs-PID, that maps a nonlinear function. We analyze the main algorithms and the architecture of the proposed QNNs-PID model. The main conclusion is that we show how the feed-forward and back-propagation algorithms based on quantum computation are used, and give clear results for a nonlinear function in the context of QNNs-PID. We simulate an example to show the behavior of QNNs-PID on nonlinear systems.

Introduction

Computational neural networks are composed of many simple units operating in parallel. These units and the aspects of their interaction are inspired by biological nervous systems. The network's function is largely determined by the interactions between units, and networks learn by adjusting the values of the connections between elements [1]. The neural network employed in this study is a feed-forward network with back-propagation of error, which learns with momentum. The basic construction of a back-propagation neural network has three layers: an input layer, a hidden layer, and an output layer. The input layer is where the input data are transferred. The link between the layers of the network is multiplication by a weight matrix: every entry of the input vector is multiplied by a weight and sent to every hidden-layer neuron, so the hidden-layer weight matrix has dimensions n × m, where n is the length of the input vector and m is the number of hidden-layer neurons. A bias is added to the hidden- and output-layer neurons; its function is to add a translational degree of freedom to the nodes, allowing the transfer function to shift to the left or right of the abscissa depending on its sign.

Neural network control, which is based on brain neural mechanics and imitates some actions of the human brain, is an important branch of intelligent control. It has many advantages, such as parallel computation, nonlinear mapping, and the ability to identify complex systems for control. However, artificial neural networks suffer from slow learning, limited memory capacity, and catastrophic forgetting when dealing with large amounts of information.

Conventional proportional-integral-derivative (PID) control is the most popular method for many control objects. The proportional section produces an output proportional to the deviation signal, so the proportional factor diminishes the deviation; the integral section produces an output proportional to the accumulated deviation, which increases the adjusting action of the controller and reduces the transition time and the error; the derivative section produces an output proportional to the derivative of the deviation, which diminishes the steady-state error. Fast response and steady accuracy can be obtained when these three sections cooperate well. But conventional PID control is a linear controller and inherits the deficiencies of conventional control: it achieves good results only for simple, linear single-variable systems and hardly achieves good results for complex systems.


In recent years there has been considerable research on the use of artificial neural networks (ANNs) for the identification and control of nonlinear systems [2, 3]. Increasing demands on performance specifications and the complexity of dynamic systems mandate sophisticated information processing and control in almost all branches of engineering. The promise of fast computation, the versatile representational ability of nonlinear maps, fault tolerance, and the capability to generate quick, robust, suboptimal solutions make neural networks an ideal candidate for such sophisticated identification and control tasks. Problems in nonlinear identification and control can be seen as determining the interactions between the inputs and outputs of multivariable systems. In the last two decades we have also observed growing interest in quantum computation and quantum information, due to the possibility of efficiently solving problems that are hard for conventional computing paradigms; quantum computation and quantum information encompass the processing and transmission of data stored in quantum states (see [4] and references therein). We propose the QNNs-PID model (quantum neural network proportional integral derivative) based on quantum computation, which combines these advantages to produce a new model for the identification and control of linear or nonlinear systems. We first review some basic theory of quantum computation, then construct the new QNNs-PID model and present the details of the feed-forward and back-propagation algorithms.

The Basic Form of QNNs-PID

The PID controller is embedded in a quantum neural network, called QNNs-PID, which is a dynamical multi-layer feed-forward network. The dynamical property is achieved not by recurrent connections inside the network but by the internal proportional, integral and derivative units. The basic form of QNNs-PID is 2×3×1: two input units, three hidden units and one output unit. One input unit receives the target value and the other receives the control value. The hidden-layer neural units perform proportional, integral and derivative computation, respectively. The output layer produces the output result, the expected value to be achieved for a linear or nonlinear system. Fig. 1 shows a single-output QNNs-PID.

The feed-forward algorithm of the single-output QNNs-PID forms the network output from the two input values according to the current network weights, the state functions of each layer, and the output function. At any time k the feed-forward algorithm is as follows. The input layer has two input values: one is the real (measured) value of the system, the other is the reference value. The connection between the inputs and outputs of the input layer is:

$$x_i(k) = u_i(k) \tag{1}$$


where $u_i\ (i = 1, 2)$ is the input value of an input-layer neural unit and $x_i\ (i = 1, 2)$ is its output value; as Eq. (1) shows, the state function and output function of the input-layer units are both identity functions. The hidden layer is the most important layer; it contains three neural units, called the proportional unit, the integral unit and the derivative unit. The input value of each is the same:

$$net'_j(k) = \hat{F} \sum_{i=1}^{2} W_{ij}\, x_i(k) \tag{2}$$

where $j = 1, 2, 3$; $x(k)$ is the input value of the neural network, $W_{ij}$ is the weight from the input layer to the hidden layer, and $\hat{F}$ is an operator, taken as the unit vector according to the Walsh-Hadamard transformation. The proportional unit is defined as:

$$u'_1(k) = \begin{cases} 1 & net'_1(k) > 1 \\ net'_1(k) & -1 \le net'_1(k) \le 1 \\ -1 & net'_1(k) < -1 \end{cases} \tag{3}$$

The integral unit is defined as:

$$u'_2(k) = \begin{cases} 1 & u'_2(k-1) + net'_2(k) > 1 \\ u'_2(k-1) + net'_2(k) & -1 \le u'_2(k-1) + net'_2(k) \le 1 \\ -1 & u'_2(k-1) + net'_2(k) < -1 \end{cases} \tag{4}$$

The derivative unit is defined as:

$$u'_3(k) = \begin{cases} 1 & net'_3(k) - net'_3(k-1) > 1 \\ net'_3(k) - net'_3(k-1) & -1 \le net'_3(k) - net'_3(k-1) \le 1 \\ -1 & net'_3(k) - net'_3(k-1) < -1 \end{cases} \tag{5}$$

The hidden-layer output is:

$$x'_j(k) = u'_j(k) \tag{6}$$

where $j = 1, 2, 3$ indexes the hidden units. The output layer of QNNs-PID is very simple, consisting of a single unit that produces the network output:

$$net''(k) = \sum_{j=1}^{3} w'_j\, x'_j(k) \tag{7}$$

where $x'_j(k)$ is the output value of the hidden layer and $w'_j$ is the connection weight from the hidden layer to the output layer. The state function of the output layer is the same as that of the proportional unit:

$$u''(k) = \begin{cases} 1 & net''(k) > 1 \\ net''(k) & -1 \le net''(k) \le 1 \\ -1 & net''(k) < -1 \end{cases} \tag{8}$$

For a single-output QNNs-PID, the output value satisfies Eq. (9):

$$x''(k) = u''(k), \qquad v(k) = x''(k) \tag{9}$$

The back-propagation algorithm rectifies the weight matrices, which realizes the learning and memory of a real-world system. The aim of QNNs-PID training is to minimize, by study and training, the mean squared deviation between the actual and ideal network outputs:

$$E = \frac{1}{m} \sum_{k=1}^{m} \left[ v'(k) - v(k) \right]^2 \tag{10}$$

According to the gradient descent algorithm, after the single-output QNNs-PID has been trained $n_0$ times, the weight matrix of every layer is updated as:

$$W(n_0 + 1) = W(n_0) - \eta \frac{\partial E}{\partial W} \tag{11}$$
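To make the forward pass of Eqs. (1)-(9) concrete, here is a minimal single-step sketch (ours; the operator F̂ of Eq. (2) is taken as the identity, and all names are illustrative). `state` holds the integral output and the previous derivative input, initialized to `{"u2": 0.0, "net3": 0.0}`:

```python
import numpy as np

def clamp(v):
    """Saturation used by every QNNs-PID unit, Eqs. (3)-(5) and (8)."""
    return max(-1.0, min(1.0, v))

def qnn_pid_step(target, measured, W, w_out, state):
    """One forward step of the 2x3x1 QNNs-PID; W is 3x2, w_out has length 3."""
    x = np.array([target, measured])           # Eq. (1): identity input layer
    net = W @ x                                # Eq. (2) with F-hat = identity
    u1 = clamp(net[0])                         # Eq. (3): proportional unit
    u2 = clamp(state["u2"] + net[1])           # Eq. (4): integral unit
    u3 = clamp(net[2] - state["net3"])         # Eq. (5): derivative unit
    state["u2"], state["net3"] = u2, net[2]
    v = clamp(w_out @ np.array([u1, u2, u3]))  # Eqs. (7)-(9): output unit
    return v
```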


Single Variable Nonlinear System

The QNNs-PID is a dynamical network with a multi-layer feed-forward construction and internal dynamical processing units. Consider the nonlinear dynamical system:

$$y(k) = \frac{1}{3} y(k-1) + \frac{1}{3} y(k-2) + \frac{1}{3} f[u(k-1)] \tag{12}$$

where $y(k)$ is the output of the system at time $k$ and $u(k)$ is the input of the system, with

$$f(u) = u^3 \tag{13}$$

and the input function of the system is:

$$u(k) = \sin\frac{2\pi k}{25} \tag{14}$$

The construction of the identification system is shown in Fig. 2. The single-output QNNs-PID studies and identifies the object according to the back-propagation algorithm. After 6 training epochs, the outputs of the single-output QNNs-PID and of the identified object are as shown in Fig. 3, and the learning error of the QNNs-PID decreases very quickly, as shown in Fig. 4.
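A short sketch (ours) generating the trajectory of the plant of Eqs. (12)-(14), which the network is trained to reproduce:

```python
import numpy as np

def simulate_plant(steps=100):
    """Simulate y(k) of Eq. (12) with f(u) = u^3 and u(k) = sin(2*pi*k/25)."""
    y = np.zeros(steps)
    for k in range(2, steps):
        u = np.sin(2 * np.pi * (k - 1) / 25)          # Eq. (14) at time k-1
        y[k] = (y[k - 1] + y[k - 2] + u ** 3) / 3.0   # Eqs. (12)-(13)
    return y
```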

Conclusion

QNNs are a promising area in the field of quantum computation and quantum information. Several models have been proposed in the literature, but for most of them the operator requirements for implementation were unclear. In this paper we analyze a QNNs-PID model based on quantum neural networks and PID theory that can be interpreted as an implementation of a QNNs-PID. We analyze this model with quantum computation and define its feed-forward and back-propagation algorithms from a mathematical point of view. The figures show that QNNs-PID maps the nonlinear system powerfully, since it needs only a few learning iterations compared with conventional artificial neural networks. An important open question is whether quantum neural networks can quickly obtain a weight vector for all classification problems and nonlinear systems, especially multi-input multi-output systems, and work like classical neural networks, given that we can only measure quantum states. Besides, we are interested in analyzing the potential of quantum neural networks for controlling MIMO systems (multi-input multi-output systems) from the point of view of the above analysis. These are further directions of our work.


References
[1] L. Fausett. Fundamentals of Neural Networks. Englewood Cliffs, NJ: Prentice-Hall.
[2] K. J. Hunt, D. Sbarbaro, R. Zbikowski, and P. J. Gawthrop. Neural networks for control systems: a survey. Automatica, vol. 28, no. 6, pp. 1083-1112, (1992).
[3] K. S. Narendra and K. Parthasarathy. Identification and control of dynamical systems using neural networks. IEEE Trans. Neural Networks, vol. 1, pp. 4-27, (1990).
[4] M. Oskin, F. Chong, and I. Chuang. A practical architecture for reliable quantum computers. IEEE Computer, vol. 35, no. 1, pp. 79-87, (2002).

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.762

Study on Multichannel Speech Enhancement Technology in Voice Human-Computer Interaction

Lu Jixiang, Wang Ping, Shi Hongzhong, Wang Xin

College of Mechanical & Electrical Engineering, Northwestern Polytechnical University, Xi'an, China
School of Telecommunication Engineering, Xidian University, Xi'an, China
Shaanxi Communication Hospital, Xi'an, China
College of Mingde, Northwestern Polytechnical University, Xi'an, China

[email protected], [email protected]

Key words: Human-computer interaction, Voice human-computer interaction, Multichannel speech enhancement

Abstract. As the primary research area of multimodal human-computer interaction, voice interaction mainly involves the extraction and identification of the natural speech signal: the former provides the reliable signal sources that are analyzed by the latter. This paper studies multichannel speech enhancement technology for voice interaction. Simulation results show the effectiveness and superiority of the improved algorithm proposed in the paper.

Introduction

Human-computer interaction mainly studies humans, computers, and the interaction between them. It involves the exchange of various kinds of symbols and actions between human and computer: humans inputting information to the computer, or the computer providing information to humans. There are a number of ways to exchange information, such as using the keyboard or mouse, or showing symbols and graphs on the display; voice, posture and body actions can also be used. The Graphical User Interface (GUI) is currently the primary mode of human-computer interaction. As its limitations become an increasingly serious problem with more and more applications, natural human-computer interaction technology has been proposed, whose aim is to exploit human voice, touch, body actions and so forth. Multimodal Interaction (MMI) is proposed precisely on this basis. In recent years, multimodal interaction has developed rapidly as the main research direction of human-computer interaction; it complies with the "people-centered" natural interaction standard and promotes the rapid development of the information industry in the Internet era. MMI uses various modalities to communicate with the computer, meaning all the communication channels through which users express intentions, perform actions or perceive feedback, such as natural language; movements of the head, eyes, lips and hands; facial expressions; physical posture; and the senses of touch, smell and taste.

As the most universal and natural interactive mode, voice interaction is the main research direction within multimodal interaction. How to realize natural voice human-computer interaction is among the most active research fields of recent years. It mainly includes the extraction and identification of the natural speech signal: extraction provides the reliable signal sources, identification analyzes them, and feedback also occurs between the two. This paper studies multichannel speech enhancement technology for voice interaction. Speech in a natural environment is inevitably interfered with by various noise signals, which hamper the speech's clarity and therefore its intelligibility. The purpose of speech enhancement is to reduce the noise and make the speech clearer, i.e., to raise the SNR. In this paper we analyze several classical multichannel speech enhancement methods and propose an optimized algorithm.


Multichannel speech enhancement

Interference in a natural environment is usually stochastic, so it is almost impossible to extract a completely pure speech signal. The main purpose of speech enhancement is to improve speech quality: to eliminate the background noise and increase the naturalness of the speech so that it is more acceptable to the listener. According to the number of input channels, speech enhancement is divided into single-channel and multichannel; in recent years the study of multichannel speech enhancement has attracted growing interest. This paper mainly describes several multichannel speech enhancement methods.

Fixed beam forming technology

Beam forming is a common method in array signal analysis, of which fixed beam forming and self-adaptive beam forming are the basic variants. The following discusses fixed beam forming technology. Beam forming implements filtering in signal space: it strengthens the objective signal from a specific direction and weakens interfering signals from other directions, as if constructing a beam, hence the name; it is the most important topic in array signal processing. Beam forming tracks the weighted signal electronically rather than by steering the physical structure, and is actually a multi-input, single-output spatial filtering system. In a microphone array system, this property is used to enhance the voice signal from the objective source and suppress noise interference from other directions. According to whether it depends on the input signals, beam forming falls into two classes: fixed and self-adaptive. The more often discussed is the data-dependent (self-adaptive) kind, a closed-loop system whose optimum design relies on the statistical properties of the received signals; to eliminate interference from other directions more effectively, the directions of all signal and interference sources should be known in advance, which involves multiple positions or estimation of the Direction of Arrival (DOA). Data-independent beam forming, by contrast, is an open-loop system, similar to a general self-adaptive filter. Beam forming can also be divided into methods based on the direction of arrival and methods based on a training sequence transferred by the user. All the methods above have shortcomings, and in recent years many blind beam forming techniques have been proposed which need no prior knowledge of the array or the signal. Generally, beam forming produces its output as a weighted combination of the signals collected by the sensor array, that is,

$$y(n) = \mathbf{w}^H \mathbf{x}(n) \tag{1}$$

where the weight vector produced by beam forming is $\mathbf{w} = [w_0\; w_1\; \cdots\; w_{M-1}]^T$. The following discusses the simplest method, delay-and-sum beam forming [1].

Simple beam forming. Assume a one-dimensional microphone array whose angle (between the speech source and the array) is θ, with a sound source $s(t) = e^{j\omega t}$ in direction θ. The sound wave is assumed to be a plane wave rather than a spherical wave, and the signal reaching microphone i from the source is

$$x_i(t) = e^{j\omega (t - \tau_i)} \tag{2}$$

where $\tau_i$ is the time delay of the signal received by the i-th microphone.
If the output signals of all microphones are added directly, without further processing, the output of the simple beam former is:

$$y(t) = \sum_{i=0}^{N-1} e^{j\omega (t - \tau_i)} = e^{j\omega t} \sum_{i=0}^{N-1} e^{-j\omega \tau_i} \tag{3}$$

where N is the number of microphones. The space-frequency response function of the simple beam former is:

$$H(\omega, \theta) = \sum_{i=0}^{N-1} e^{-j\omega \tau_i} \tag{4}$$


For a fixed frequency, the simple beam former is a function of the signal direction:

$$H(\theta) = \sum_{i=0}^{N-1} e^{-j\omega \tau_i} \tag{5}$$

Identically, when the signal direction is fixed, it is a function of the frequency:

$$H(\omega) = \sum_{i=0}^{N-1} e^{-j\omega \tau_i} \tag{6}$$

According to (6), the unit impulse response of the array system in the time domain is:

$$h(t) = \sum_i \delta(t - \tau_i) \tag{7}$$

When the propagation direction is perpendicular to the microphone array ($\theta = \pi/2$), $\tau_i = 0$, and the unit impulse response is:

$$h(t)\big|_{\theta = \pi/2} = N\,\delta(t) \tag{8}$$

In other cases, the unit impulse response of the system is made up of a series of unit impulses with different delays, uniformly spaced in time or not; some impulses may coincide in time. All of this is related to the geometry of the microphone array and the propagation direction of the sound wave. Once the concept of the beam former is introduced, the filter properties are related not only to the signal frequency but also to the spatial position of the signal: a beam former is a spatial filter. The microphone array has the largest gain in the beam direction and attenuates, to different degrees, in other directions. Traditional beam forming aligns the microphone array beam with the direction of the voice signal by time-delay compensation, so as to eliminate interference noise from other directions.

Fixed beam forming. Fixed beam forming compensates the delays between the sound source and each microphone by delay control: it adjusts the delay of every signal received by a microphone, steering the array beam towards the direction of maximum power output, so that the beam aims at the spatial position of the sound source. This algorithm was first proposed by Flanagan [3]. Theoretically, fixed beam forming keeps the amplitude of the speech signal unchanged and eliminates the interference and noise signals. It can be divided into three parts: time-delay estimation, time-delay compensation and summation, as shown in Fig. 1. This kind of microphone-array speech enhancement is easy to realize, but needs many microphones to achieve good noise suppression, so it is rarely used alone.

Fig.1 Fixed beam forming
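A minimal sketch of the delay-and-sum structure of Fig. 1 (ours; integer sample delays are assumed known, and np.roll is used for simplicity, which wraps around at the signal edges):

```python
import numpy as np

def delay_and_sum(channels, delays, weights=None):
    """Delay-and-sum beam former: align each microphone signal by its
    estimated delay (in samples), weight, and sum (Eqs. (1) and (3))."""
    M, L = channels.shape
    w = np.full(M, 1.0 / M) if weights is None else weights
    out = np.zeros(L)
    for i in range(M):
        out += w[i] * np.roll(channels[i], -delays[i])  # delay compensation
    return out
```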

Self-adaptive beam forming

Self-adaptive beam forming is a widely used microphone-array speech enhancement method. The earliest algorithm was the Linearly Constrained Minimum Variance (LCMV) self-adaptive beam former [4], presented by Frost in 1972; its basic idea is to minimize the array output power while keeping the gain in the direction of the wanted signal fixed. On the basis of LCMV, Griffiths and Jim proposed a modified linear beam former in 1982, called the Generalized Sidelobe Canceller (GSC). The GSC consists of three parts: a fixed beam former, a blocking matrix and a self-adaptive noise canceller; the system block diagram is shown in Fig. 2. The fundamental principle of the GSC is to separate the signal paths into a self-adaptive channel and a non-adaptive channel. The wanted signal passes only through the non-adaptive channel: the blocking matrix filters out the wanted signal, so that the self-adaptive channel contains only the multichannel noise reference signals, and the noise estimate produced by the noise estimator cancels the noise component of the non-adaptive channel. The GSC algorithm is classical and forms the basic framework of subsequent algorithms [5]. From 2001 to 2004, Gannot proposed and intensively studied a Transfer Function GSC (TF-GSC) [7] on the basis of the GSC. The basic idea of these methods is: for signals from the gaze direction, a fixed frequency response is assumed in advance as required, guaranteeing the processing of the wanted signals and at the same time determining the output power in the gaze direction. The system first applies time-delay compensation to the signals received by each microphone of the array to synchronize the speech signal of every channel, and then minimizes the entire output power under the constraint above, so as to suppress interference noise from other directions. This class of algorithms is suitable for eliminating coherent noise, and performs well when the number of interfering sources is smaller than the number of microphones; for non-coherent or weakly coherent noise, traditional beam forming is still ahead. The GSC constructs the blocking matrix by assuming that the DOA of the objective signal is known a priori, which is often difficult to estimate; the expected DOA and the real DOA therefore often differ, a situation called DOA mismatch. It causes the wanted signal to leak into the self-adaptive channel and to be partly cancelled in the output, distorting the speech signal. To solve this problem, many improved algorithms have been proposed [9]-[13].

Post filtering

Self-adaptive beam forming is appropriate when the number of noise sources is smaller than the number of microphones; that is, it performs well only when the noise signals at the microphones are strongly coherent. In actual applications, however, the microphone array is often used in closed rooms, where multipath and reverberation are more serious because of wall reflection; in such cases the number of noise sources can be considered infinite, and simulations show that the delay-and-sum algorithm then outperforms self-adaptive beam forming. Researchers therefore introduced a beam former with a post filter, which maintains good performance in such noise fields. It is also self-adaptive and usually includes three parts (shown in Fig. 3): (1) a time-delay compensation module as pre-processing, whose function is to aim the microphone array at the speaker; (2) a delay-and-sum beam forming module, whose capability to eliminate non-coherent noise is 10 log10 M (dB), where M is the number of microphones; (3) a self-adaptive Wiener filtering module, which further removes the non-coherent noise.
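As a rough structural illustration of the GSC described above (our sketch, not the Griffiths-Jim implementation: pre-steered channels are assumed, the blocking matrix is formed from pairwise channel differences, and a multi-tap LMS canceller adapts on the noise references):

```python
import numpy as np

def gsc(channels, mu=0.01, taps=16):
    """Minimal generalized sidelobe canceller for pre-steered channels."""
    M, L = channels.shape
    d = channels.mean(axis=0)            # fixed beam former (delay-and-sum)
    B = channels[:-1] - channels[1:]     # blocking matrix: M-1 noise references
    w = np.zeros((M - 1, taps))          # adaptive canceller coefficients
    y = np.zeros(L)
    for n in range(taps, L):
        u = B[:, n - taps:n]             # recent samples of the noise references
        e = d[n] - np.sum(w * u)         # beam output minus estimated noise
        w += mu * e * u                  # LMS update of the noise canceller
        y[n] = e
    return y
```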

Fig.2 Self-adaptive beam forming


Fig.3 Beam former with post self-adaptive filtering

The Wiener filter coefficients are obtained from the auto-correlations and cross-correlations of the received signals among the channels, which means the coefficients adapt continuously. After delay-and-sum processing, the noisy speech signal passes through the Wiener filter to yield the estimated objective speech signal under the MMSE criterion. This algorithm achieves good performance in non-coherent environments with few microphones: for instance, with 4 microphones in a fairly complex environment, it improves the SNR by 4-6 dB compared with the delay-and-sum method.

The advantages and disadvantages of the multichannel speech enhancement methods are as follows. Fixed beam forming is simple in structure, but its noise-reduction performance is limited for non-coherent noise, it cannot adaptively place nulls in the interference directions, and it is sensitive to errors in the estimated objective DOA. The self-adaptive beam former performs better against coherent noise and is suited to time-varying acoustic environments, but it depends on a proper estimate of the objective DOA and is likewise limited for non-coherent noise. Post filtering is simple to realize and removes non-coherent noise effectively, but the enhanced speech carries some distortion, so it is rarely used alone. Although some algorithms can in theory eliminate coherent noise, post filtering performs better than the self-adaptive beam former.

Conclusion

The paper mainly discusses several classical multichannel speech enhancement algorithms for microphone arrays: fixed beam forming, adaptive beam forming and post filtering. The unique advantage of microphone-array speech enhancement is demonstrated in acoustic environments with reverberation and noise, and the advantages and disadvantages of each method are described. To address these disadvantages, researchers have proposed improved methods, such as combinations with single-channel algorithms, which achieve better performance. Because of the limitations of the microphone array itself, post-processing can be applied so that the noise is eliminated more effectively; this has obvious superiority and is an important algorithm to be studied further.


References
[1] Pu Jiantao. The key technology and future trend of the multichannel user interface. Journal of Computer Research and Development, 2001, 38(6).
[2] Dong Shihai. Evolution of human-computer interaction and its challenges. Journal of Computer-Aided Design and Computer Graphics, 2004, 16(1).
[3] J. L. Flanagan. Computer-steered microphone arrays for sound transduction in large rooms. Journal of the Acoustical Society of America, 1985, vol. 78, no. 5: 1508-1518.
[4] O. L. Frost. An algorithm for linearly-constrained adaptive array processing. Proc. IEEE, Aug. 1972, vol. 60, no. 8: 926-935.
[5] L. J. Griffiths. An alternative approach to linearly constrained adaptive beam forming. IEEE Transactions on Antennas and Propagation, 1982, vol. 30, no. 1: 27-34.
[6] S. Gannot, D. Burshtein and E. Weinstein. Signal enhancement using beam forming and nonstationarity with application to speech. IEEE Trans. Signal Processing, Aug. 2001, vol. 49: 1614-1626.
[7] S. Gannot, I. Cohen. Speech enhancement based on the general transfer function GSC and post filtering. 2003 IEEE International Conference on Acoustics, Speech and Signal Processing, Hong Kong, 2003: 908-911.
[8] S. Gannot and I. Cohen. Speech enhancement based on the general transfer function GSC and post filtering. IEEE Transactions on Speech and Audio Processing, Nov. 2004, vol. 12, no. 6: 561-571.
[9] Kevin M. Buckley, Lloyd J. Griffiths. An adaptive generalized sidelobe canceller with derivative constraints. IEEE Trans. on Antennas and Propagation, Mar. 1986, vol. AP-34, no. 3: 311-319.
[10] Yongzhi Liu, Qiyue Zou and Zhiping Lin. Generalized sidelobe cancellers with leakage constraints. Circuits and Systems, ISCAS 2005: 3741-3744.
[11] Osamu Hoshuyama, Akihiko Sugiyama. A robust generalized sidelobe canceller with a blocking matrix using leaky adaptive filters. Electronics and Communications in Japan, 1997, Part 3, vol. 80: 56-65.
[12] Yinman Lee, WenRong Wu. A robust adaptive generalized sidelobe canceller with decision feedback. IEEE Trans. on Antennas and Propagation, Nov. 2005, vol. 53, no. 11: 3822-3832.
[13] Yinman Lee, WenRong Wu. An LMS-based adaptive generalized sidelobe canceller with decision feedback. IEEE International Conference on Communications, May 2005, vol. 3: 2047-2051.
[14] P. Comon. Independent component analysis, a new concept? Signal Processing, 1994, vol. 36: 287-314.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.768

A New Orthogonal Projected Natural Gradient BSS Algorithm with a Dynamically Changing Source Number under Over-determined Mode

Wang Ping, Lu Jixiang, Wang Xin, Shi Hongzhong

School of Telecommunication Engineering, Xidian University, Xi'an, China
College of Mechanical & Electrical Engineering, Northwestern Polytechnical University, Xi'an, China
College of Mingde, Northwestern Polytechnical University, Xi'an, China
Shaanxi Communication Hospital, Xi'an, China

[email protected], [email protected]

Key words: Blind source separation, Natural gradient, Orthogonal projected, Crosstalk error

Abstract. Blind source separation (BSS) attempts to recover unknown independent sources from a given set of observed mixtures. Algorithms based on the natural gradient are among the main methods in BSS. We analyze the problem that the old algorithm diverges under over-determined mode, and study a new improved algorithm based on the orthogonally projected natural gradient. Simulation results using the crosstalk error prove the new algorithm's capability to perform BSS under over-determined mode and its better convergence stability. It is also effective with a dynamically changing source number.

Blind source separation

In blind source separation, the goal is to extract statistically independent but otherwise unknown source signals from their linear mixtures without knowing the mixing coefficients. It stems from research on the "cocktail party" problem: separating the voices of one or more speakers from the speech signals received by several microphones. Recently, BSS has become an increasingly important research area due to its rapidly growing applications in various fields, such as telecommunication systems, image enhancement and biomedical signal processing.

Fig. 1 shows a schematic diagram of the mixing and source separation system [1][2]. $\mathbf{s} = (s_1, s_2, \ldots, s_n)^T$ is the source vector consisting of the n unknown source signals; $\mathbf{x} = (x_1, x_2, \ldots, x_m)^T$ is the m-dimensional random vector representing the m observed mixtures received by the sensors; $\mathbf{n} = (n_1, n_2, \ldots, n_m)^T$ is the additive noise vector, statistically independent of the source signals $\mathbf{s}$. The output $\mathbf{y} = (y_1, y_2, \ldots, y_n)^T$ is the unknown estimate of the source signals. $\mathbf{A} = (a_{ij})_{m \times n}$ is a constant full-column-rank $m \times n$ mixing matrix whose elements are the unknown coefficients of the mixtures; $\mathbf{W} = (w_{ij})_{n \times m}$ is the separating matrix solved for by BSS. In Fig. 1, the source number n, which source components are useful and which useless, the source characteristics, the mixing channel features and the noise features are all unknown; only the observed signal $\mathbf{x}(k)$ is known, which contains the unknown, or blind, characteristics of the source signals and the mixing system.


Fig.1 Block diagram of blind source separation

According to the relation between the number of receiving sensors m and the number of unknown sources n, BSS can be divided into three modes: under-determined (m < n), positive (m = n) and over-determined (m > n). Most previous algorithms assume that the source number is known, but in fact it is often unknown or dynamically changing. In under-determined mode the recovery of the source signals is incomplete, that is, the entire source information cannot be reconstructed; that is the reason this paper mainly studies the over-determined mode.


Natural gradient algorithm

Self-adaptive processing is a basic method in signal processing. In BSS, the self-adaptive block diagram is shown in Fig. 2, where $x(k) = As(k)$ and $y(k) = W(k)x(k)$.

Fig. 2 Self-adaptive BSS

The self-adaptive processor is used to adjust the parameters of the separating matrix $W$. $W$ can be decomposed into two parts, a whitening matrix $Q$ and an orthogonal normalized matrix $U$, but this decomposition is not required, because the observed signals $x(k)$ can also be processed directly; it all depends on the specific algorithm. The self-adaptive learning is implemented by a gradient. An objective function $\varepsilon$ is defined based on the desired analysis before processing. When $\varepsilon$ approaches its minimum (or maximum), the output $y$ obtains the expected character under the chosen optimization criterion. $\varepsilon$ is a function of the system parameter $W$, so the minimum or maximum of $\varepsilon$ is reached by adjusting $W$ step by step, which depends on the gradient $\partial\varepsilon/\partial W$. That is,

$\Delta W(k) = W(k+1) - W(k) \propto \frac{\partial \varepsilon(W)}{\partial W}$   (1)

Generally, $\partial\varepsilon/\partial W$ is connected with the statistical properties of $x$ and $y$. When this property is replaced by a one-sample estimate, it is the stochastic gradient. The stochastic gradient is the steepest descent direction in a rectangular coordinate system, and the objective function $\varepsilon$ descends as quickly as possible along this direction. But the objective function $\varepsilon(W)$ is actually a curved surface, and finding a gradient on a Riemannian surface is more reasonable, so the natural gradient is introduced:

$\tilde{\nabla}\varepsilon(W) = [\nabla\varepsilon(W)] \cdot W^T W = [I - \psi(y)y^T]W$   (2)

With the natural gradient, the calculation of the inverse matrix of $W$ is avoided. In Infomax [3], the self-adaptive update formula is:

$W(k+1) = W(k) + \mu_k [W^{-T}(k) - \psi(y(k))x^T(k)]$   (3)

When the stochastic gradient is replaced by the natural gradient, which expresses the steepest descent direction on the curved surface, the update formula based on the natural gradient is obtained:

$W(k+1) = W(k) + \mu_k [I - \psi(y(k))y^T(k)]W(k)$   (4)

where the theoretical value of $\psi(y)$ is determined by the probability density function (PDF) $p_i(s_i)$ of the source component $s_i$ corresponding to $y_i$, that is,

$\psi_i(y_i) = -\frac{p_i'(s_i)}{p_i(s_i)}$   (5)

In the self-adaptive algorithm, estimating the PDFs $p_i(s_i)$ ($i = 1, \ldots, n$) of the source signals is very important, and choosing a proper nonlinear function $\psi(y)$ seriously influences the performance [4]. If it is known in advance that the PDFs of all sources are super-Gaussian or sub-Gaussian, an analytic function that is close to the real PDF and easy to calculate can be found, and $\psi(y)$ is decided; it is called the activation function. When the PDF features are not known, $\psi(y)$ can only be determined during the learning of the algorithm.

The natural gradient algorithm above is based on the positive-determined mode, that is, the mixing matrix and the separating matrix are both $n \times n$ square matrices, and the $n$ outputs are copies of the $n$ sources.
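To make the update rule (4) concrete, the following is a minimal Python sketch of the natural gradient iteration. The tanh activation (a common choice of ψ for super-Gaussian sources), the fixed step size mu and the data layout are illustrative assumptions, not specified by the paper.

```python
import numpy as np

def natural_gradient_bss(x, mu=1e-3, n_sweeps=5):
    """Sketch of the natural gradient BSS update (4), positive-determined case.

    x : (n, T) array of zero-mean observed mixtures (n sensors = n sources).
    psi(y) = tanh(y) is assumed as the activation function for
    super-Gaussian sources; mu is an assumed fixed step size.
    """
    n, T = x.shape
    W = np.eye(n)                                  # initial separating matrix
    I = np.eye(n)
    for _ in range(n_sweeps):
        for k in range(T):
            y = W @ x[:, k]                        # output estimate y(k)
            # W(k+1) = W(k) + mu * (I - psi(y) y^T) W(k)
            W = W + mu * (I - np.outer(np.tanh(y), y)) @ W
    return W
```

The returned W is the learned separating matrix; applying it as W @ x yields the source estimates up to scaling and permutation, the usual BSS indeterminacies.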


When the mutual information reaches its minimum, the independent components can be obtained. But when the source number is unknown, or under the over-determined mode $m > n$, the mixing matrix is an $m \times n$ matrix while the separating matrix is an $m \times m$ square matrix. Among the $m$ outputs there are $m - n$ redundant signals besides the $n$ copies of the source signals, and these redundant signals are linear transformations of the sources. In this case, the mutual information has $m - n$ identical local minima, each corresponding to a redundant separated state. The equilibrium condition of the natural gradient shown in expression (6) is not satisfied:

$E[I - \psi(y(k))y^T(k)] = 0$   (6)

The minimum of the mutual information is not an equilibrium point of the algorithm, so stable convergence is impossible. The following analyzes how this divergence of the natural gradient happens. The output components at a separation point of the natural gradient are assumed to be:

$y_i = s_i$ for $i = 1, \ldots, n$; $y_i = s_{i-n}$ for $i = n+1, \ldots, m$   (7)

Then,

$E[I - \psi(y(k))y^T(k)] = -\mu[w_{n+1}^T, \ldots, w_m^T, 0, \ldots, 0, w_1^T, \ldots, w_{m-n}^T]^T$   (8)

where $w_i$ is the $i$-th row of the separating matrix $W$. Disturbance happens between the row vectors of different copies because of their mutual overlay; as far as the separating matrix is concerned, these copies belong to the same source. During the convergence, the separating matrix moves in the null space of $A^T$, that is, it makes invalid moves within the equivalence class of $W$, which makes the norm of $W$ grow without bound and leads the algorithm to divergence. The definition of the null space is explained next.

The basic spaces related to a matrix are the column space, the row space and the null space [5]. Considering a complex matrix $A \in C^{m \times n}$, the collection of all linear combinations of its column vectors makes up a subspace of the $m$-dimensional vector space, which is defined as the column space of the matrix $A$:

$\mathrm{Col}(A) = \mathrm{Span}\{a_1, a_2, \ldots, a_n\}$   (9)

Similarly, the row space of $A \in C^{m \times n}$ is defined as the collection of all linear combinations of its complex conjugate row vectors:

$\mathrm{Row}(A) = \mathrm{Span}\{r_1^*, r_2^*, \ldots, r_m^*\}$   (10)

and

$\mathrm{Row}(A) = \mathrm{Col}(A^H)$   (11)

The null space of $A$ is defined as the collection of all solution vectors of the linear homogeneous equation $Ax = 0$, that is,

$\mathrm{Null}(A) = \{x \in C^n : Ax = 0\}$   (12)

Similarly, the null space of $A^H$ is

$\mathrm{Null}(A^H) = \{x \in C^m : A^H x = 0\}$   (13)

The improved over-determined natural gradient

Orthogonal projection solves the divergence problem. As explained above, the invalid movement of the separating matrix $W$ in the null space of $A^T$ makes the algorithm diverge. The valid movement, which leads the algorithm to minimize the output mutual information, is for the rows of $W$ to move in the column space $\mathrm{Col}(A)$ of the mixing matrix $A$, apart from multiplication by a nonzero constant. In fact, multiplication by a nonzero constant is a redundant move, and it can be avoided by the additional equilibrium condition of the natural gradient algorithm. Therefore, to solve the divergence problem, the invalid movement of the separating matrix in the null space $\mathrm{Null}(A^T)$


must be suppressed, which demands that the natural gradient be orthogonal to $\mathrm{Null}(A^T)$. But from the divergence result, the natural gradient $\tilde{\nabla}\varepsilon(W) = [I - \psi(y)y^T]W$ and $\mathrm{Null}(A^T)$ are not orthogonal, so the orthogonal projection algorithm is introduced. The space $C^m$ can be orthogonally decomposed as:

$C^m = \mathrm{Col}(A) \oplus \mathrm{Null}(A^T)$   (14)

Each row of the natural gradient is orthogonally projected onto $\mathrm{Col}(A)$ along the direction of $\mathrm{Null}(A^T)$; the redundant component of the natural gradient along $\mathrm{Null}(A^T)$ is thus eliminated, while the valid movement in the valid space $\mathrm{Col}(A)$ is completely preserved. The orthogonal projection is realized by right-multiplying the natural gradient by the orthogonal projection matrix onto $\mathrm{Col}(A)$ along $\mathrm{Null}(A^T)$.

Orthogonal projected natural gradient algorithm. Right-multiplying the natural gradient $\tilde{\nabla}\varepsilon(W) = [I - \psi(y)y^T]W$ by the orthogonal projection matrix $P_{Aproj} = A(A^T A)^{-1} A^T$, the projected natural gradient is obtained:

$\tilde{\nabla}\varepsilon(W) = [I - \psi(y)y^T]W \cdot A(A^T A)^{-1} A^T = [I - \psi(y)y^T]W \cdot P_{Aproj}$   (15)

Then the update formula of the over-determined projected natural gradient is:

$W(k+1) = W(k) + \mu_k [I - \psi(y(k))y^T(k)]W(k) \cdot P_{Aproj}$   (16)

In BSS, $P_{Aproj} = A(A^T A)^{-1} A^T$ cannot be computed directly because the mixing matrix $A$ is unknown. But according to the theorem mentioned in reference [7], if the sources are mutually independent and the $m \times n$ matrix $A$ is of full column rank, then $A^T$ and the $m \times k$ matrix $X^T = [x_{i+1}, \ldots, x_{i+k}]^T$ consisting of $k$ observed signal vectors $\{x_{i+1} = A s_{i+1}\}$, $i = 1, \ldots, k$, have the same null space with probability 1. According to this theorem, the null space of $A^T$ can be estimated from the observed samples, and BSS is then realized under the over-determined mode.

Simulation, verification and analysis

Experiment 1: A group of source signals is chosen: a sine signal, a PM signal, an AM signal and a sign signal. The mixing matrix is a $5 \times 4$ stochastic matrix. The sample number is 10000, of which the first 3000 samples are shown so they can be observed clearly. The sources are shown in Fig. 3; Fig. 4 shows the mixed signals and the separated signals processed by the orthogonal projected natural gradient algorithm. From Fig. 4, the source signals are recovered: although the amplitudes and the order are not determined, the waveforms are copied, and each source finds one or more copies in the separation output. The algorithm achieves effective separation.

The crosstalk error (CTE) [8] is used here to evaluate the BSS performance. In the ideal condition, the separating matrix $W$ must make the global matrix $G = WA$ converge to the unit matrix $I$; the crosstalk error is defined as the distance from the global matrix $G$ to the unit matrix $I$. We use the modified formula (17) because the source number is unknown and the system is under the over-determined ($m > n$) mode:

$E = \frac{1}{m}\sum_{p=1}^{m}\left(\sum_{q=1}^{n}\frac{|G_{pq}|}{\max_l |G_{pl}|} - 1\right) + \frac{1}{n}\sum_{q=1}^{n}\left(\sum_{p=1}^{m}\frac{|G_{pq}|}{\max_l |G_{lq}|} - 1 - \frac{m-n}{n}\right)$   (17)

It is clear that the CTE is invariant under multiplication by a permutation matrix. One can easily verify that the CTE has a positive value and equals zero in the ideal condition, where the algorithm reaches its best performance; researchers normally give the CTE value in dB. Fig. 5 is the CTE result of the new algorithm, which maintains good performance. Fig. 6 is the result of the old algorithm, which diverges seriously.
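The following hedged Python sketch illustrates one step of the projected update (16). Estimating the projector from an observed data block follows the idea of the theorem cited from [7]; the SVD rank threshold, the step size and the tanh activation are assumptions made for illustration only.

```python
import numpy as np

def projected_ng_step(W, X, mu=1e-3):
    """One sketch step of the orthogonal projected natural gradient (16).

    W : (m, m) separating matrix.
    X : (m, k) block of observed samples; since Col(X) estimates Col(A)
        (same null space as A^T with probability 1), the projector onto
        Col(X) stands in for P_Aproj = A (A^T A)^{-1} A^T.
    """
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    r = int(np.sum(s > 1e-8 * s[0]))   # numerical rank ~ estimated source number
    B = U[:, :r]                       # orthonormal basis of Col(A)
    P = B @ B.T                        # orthogonal projection matrix

    m, k = X.shape
    Y = W @ X                          # outputs y(k) over the block
    Psi = np.tanh(Y)                   # assumed activation psi(y)
    G = (np.eye(m) - (Psi @ Y.T) / k) @ W   # block-averaged natural gradient (2)
    return W + mu * G @ P              # projected update (16)
```

Because the projector is re-estimated from each fresh data block, the same step can in principle track a dynamically changing source number, which is the scenario of Experiment 2 below.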


Fig. 3 Source signals

Fig. 4 Separated signals by Proj_NG

Fig. 5 Crosstalk error result of Proj_NG

Fig. 6 Crosstalk error result of NG

Experiment 2: Another experiment is designed to prove that the new algorithm is effective when the source number changes dynamically. A stochastic noise is added as the 5th source, the first sine signal is added after the first 1000 points, and the third PM signal is removed after 6000 points and returns at 7000 points. The dynamically changing source number is simulated in this way. The sample number is 8000. Fig. 8 shows the source signals. The mixing matrix is chosen as an 8 × 5 stochastic matrix. Fig. 9 shows the result of the orthogonal projected natural gradient algorithm, and Fig. 10 shows the CTE result, which proves that the improved algorithm is effective when the source number is changing.

Fig. 8 Source signals


Fig. 9 Separation signals


Fig. 10 Crosstalk error result of Proj_NG

Conclusion

The improved orthogonal projected natural gradient algorithm successfully solves the BSS problem under the over-determined mode, while the old method can only work under the positive-determined condition and diverges seriously when m > n. The improved algorithm also keeps good performance when the source number changes dynamically, which is closer to the real situation.

References
[1] Zhang Xianda, Bao Zheng: Blind source separation, Chinese Journal of Electronics. Dec 2001, Vol 29(12). 1766-1771
[2] Yang Xiaoniu, Fu Weihong: Blind source separation: Theory, Application and Outlook. Communication Countermeasures. 2006, Vol 3. 3-10
[3] Bell A J, Sejnowski T J: An information-maximization approach to blind separation and blind deconvolution. Neural Computation. 1995, Vol 7(6). 1129-1159
[4] Xi-Lin Li, Xian-Da Zhang: Nonorthogonal Joint Diagonalization Free of Degenerate Solution. IEEE Transactions on Signal Processing. May 2007, Vol 55(5). 1805-1808
[5] Zhang L Q, Cichocki A, Amari S: Natural gradient algorithm for blind separation of overdetermined mixtures with additive noise. IEEE Signal Processing Letters. 1999, Vol 6(11). 293-295
[6] Amari S: Natural Gradient Learning for Over and Under Complete Bases in ICA. Neural Computation. 1999, Vol 11(3). 1875-1883
[7] Ye J M, Zhu X L, Zhang X D: Adaptive Blind Separation with an Unknown Number of Sources. Neural Computation. 2004, Vol 16(8). 1641-1660
[8] Ali Mansour, Mitsuru Kawamoto, Noboru Ohnishi: A Survey of the Performance Indexes of ICA Algorithms. Proceedings IASTED International Conference on Modeling, Identification, and Control, February 18-21, 2002, Innsbruck, Austria

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.774

The Research of WebGIS-Based Data Mining

FU Chunchang
School of Computer Science and Technology, Southwest University for Nationalities, Chengdu Sichuan 610041, China
[email protected]

Key words: data mining; WebGIS; DDM; database

Abstract. Data mining provides a technique for finding potential knowledge in massive data. Applying data mining technology to GIS can strengthen the data analysis capability of GIS and thus improve its decision-support level. WebGIS is the spatial extension of traditional GIS, but its current network analysis and decision-support capabilities are low; a network-based integration of data mining and WebGIS can strengthen its analytical capacity.

Introduction

WebGIS is a kind of geographic information system that uses Internet/Intranet Web forms to publish spatial data and provides users with spatial data browsing, searching and analysis. Compared with traditional centralized GIS, its characteristics are a wider access scope, platform independence, greatly reduced system cost, simpler operation, a balanced and efficient computational load, etc. After a period of development, WebGIS has been widely used in many applications. Traditional GIS has advanced data management and analysis functions, but WebGIS still offers relatively simple functions. The development of GIS is inseparable from the network, which inevitably requires WebGIS to improve its spatial data management and analysis abilities correspondingly. This paper focuses on how to improve the data analysis ability of WebGIS [1].

Data Mining

Data mining emerged with the data explosion, and its task is to resolve the conflict between the data explosion and the lack of knowledge. Data mining is a complex process of extracting unknown and valuable patterns or regularities from large amounts of data [2]. Since its emergence, data mining has been developing rapidly, and the knowledge discovery on spatial data in particular formed the branch of spatial data mining. Numerous scholars at home and abroad have put forward theoretical frameworks of data mining from different angles and obtained many valuable technical methods.

WebGIS and Data Mining. Data mining can be used to discover knowledge and potential patterns in massive data; from its very beginning the technology has been used in commercial areas to support decision making. If data mining is a technology for extracting patterns from large databases, then the spatial database at the back end of GIS is exactly such a massive large-scale database, and GIS also needs such a technology to find useful knowledge in massive data for analysis and decision support; therefore data mining technology was soon introduced into the GIS field [3]. Because a spatial database has its own characteristics, such as containing vector spatial data and attribute data with spatial autocorrelation between them, data mining in spatial databases has additional features, and research on data mining in spatial databases led to the formation of the spatial data mining discipline. So-called spatial data mining refers to extracting from spatial databases the spatial patterns and features of interest to users, the general relations between spatial and non-spatial data, and other common data characteristics implied in the database [4]. The structure of data mining and WebGIS is shown in Fig. 1.


Fig. 1 Data Mining and WebGIS

Present Problems. Under the network environment, research on distributed data mining has just started, although it covers rich content, and studies of data mining through WebGIS are even fewer. Data mining research in the WebGIS environment now faces the following questions: current WebGIS is limited to operations and queries on spatial data and lacks data mining functions, and there is a lack of theoretical research and practice on data mining in the WebGIS environment. GIS data mining applications are still concentrated in the traditional centralized environment, while the development of the network pushes application requirements toward distribution and parallelism. The existing work integrating data mining and GIS starts either from purely network-based distributed data mining or from integrating traditional GIS with centralized data mining; network data mining in the specific environment of WebGIS is rarely studied.

Distributed Data Mining Technology

Concept. With the expansion of data volume, many organizations have paid attention to exploiting the massive data resources they hold through data mining technology, expecting to obtain valuable information and knowledge. With the development of network technology and database technology, distributed storage of data is becoming more and more common, so distributed data mining is getting increasing attention, and some application models have appeared. Distributed data mining (DDM) refers to the process of using data mining technology to process several local data collections connected through a computer network, in order to obtain the knowledge implicit in the data sets. A simple distributed data mining structure is shown in Fig. 2.


Fig. 2 Distributed Data Mining Structure

Mining Method. The knowledge discovery objects of distributed data mining are the databases or documents of a distributed environment. These files are stored on local computers (or database servers) in a network; the data sources may be heterogeneous, inaccessible to each other, and very poor in data sharing


and interoperability. In some cases, a data source is not accessible at all. Because the sharing and interoperability of distributed data cannot be resolved in a short time under current circumstances, most current WebGIS applications assume distributed data sources with the properties of transparency and accessibility.

WebGIS Environment of Data Mining

Current network applications are basically based on the browser/server (B/S) three-layer structure, which developed from the client/server (C/S) structure. The B/S structure provides a unified interface for most end users; the browser workload is small, most of the processing is done on the server, and the server-side computing resources are fully utilized. Because in the B/S structure the server returns only the necessary results to end users, the network burden is also reduced. At present the mainstream WebGIS is based on this B/S three-layer (or multi-layer) structure, as shown in Fig. 3.


Fig. 3 WebGIS based on the B/S three-layer structure

Features. Compared with spatial data mining in traditional GIS, data mining in the WebGIS environment has its own features:
1. Multi-user operating environment. Since a Web system faces all Internet users, a data mining system built on WebGIS needs to consider concurrent operation by a large number of users.
2. Heterogeneity and transparency of data. Compared with the traditional definition of spatial data mining, heterogeneity of the distributed data sources is one of the characteristics of data mining through WebGIS. If each site develops its own local mining program, this problem can be solved to a certain extent; if one department develops the programs that access all the local data sources, the data structures must be transparent to the developers.
3. Global mining and local mining. The object of a data mining task may be just local data, or data distributed in the databases of various sites in the network. Mining on local data is local mining; a local mining task can be part of a global mining task, and completing one global mining task may need many local mining tasks.
4. Distribution of mining components and program execution. Obviously, a large part of a data mining application in the WebGIS environment is distributed across sites in the form of components. These components are distributed programs, so program execution and computation are also parallel.


Summary

Applying data mining technology in WebGIS can enhance the analysis ability of WebGIS, enable ordinary Internet users to use the various data released by all agencies effectively and intuitively, and raise their ability to use data to a new level. Introducing data mining through WebGIS accords with the current requirements of WebGIS development.

References
[1] Xu Zhoukui: WebGIS Environment of Data Mining. Wuhan University master degree thesis, 2005.5.
[2] Zhu Ming: Data Mining (China Science and Technology University Press, Hefei 2002).
[3] Li Zhuqing, Shao Peiji and Huang Yixiao: Data Mining Present Situation and Development of Research in China. Management Engineering Journal. 2004, 18(3).
[4] Li Deren, Guan Zequn: Space Information System Integration and Implementation (Wuhan University Press, Wuhan 2002).
[5] P. Tan, M. Steinbach and V. Kumar: Data Mining Introduction (Mechanical Industry Press, Beijing 2010.9).
[6] Wang Shuliang: The Spatial Data Mining Perspective (Surveying and Mapping Press, Beijing 2008.10).

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.778

The License Plate Recognition Technology Based on Digital Image Processing

Juan-hua Zhu 1,a, Ang Wu 1,b and Juan-fang Zhu 2,c,*
1 College of Mechanical and Electrical Engineering, Henan Agricultural University, 450002, China
2 The First Affiliated Hospital of Zhengzhou University, 450002, China
a [email protected], b [email protected], c [email protected]
* corresponding author

Key words: Image Pre-processing, Plate Locate, Character Cut, Character Recognition

Abstract. A rapid and convenient method of license plate recognition is discussed. The color plate images are preprocessed by gray-scale transformation and image enhancement. The license plate is located by edge detection and a region search algorithm, the characters are segmented by projection, and finally template matching is performed so that the license plate number is recognized quickly and accurately. The experiments show that the method used in this paper achieves good recognition results.

Introduction

Research on vehicle license plate recognition technology began at home and abroad in the 1990s and has achieved some results. For example, some products, such as the VECON of Hong Kong Asia Vision Technology, the See/Car System series of the Israeli Hi-Tech company, and the VLPRS of the Singapore Optasia company, are relatively mature. But most of these products can only identify their own license plates and cannot recognize the characters of Chinese license plates, and the equipment requires a huge investment. China has also produced a number of research methods. For example, Hanti Huang of Huazhong University of Science and Technology proposed license plate recognition based on template matching; Feihu Qi of Shanghai Jiaotong University proposed license plate recognition based on color segmentation; Nanning Zheng of Xi'an Jiaotong University proposed a multi-level texture analysis method for license plate recognition. However, vehicle license plate recognition is influenced considerably by the ambient light, so there is still a lot of work to do on vehicle license recognition and algorithm optimization [1,2]. The license plate recognition system includes four modules: image preprocessing, license plate location, character segmentation and character recognition. In this paper, license plates with white characters on a blue ground are processed.

License Plate Image Preprocessing

Vehicle license image preprocessing mainly includes vehicle image grayscaling, image enhancement and noise removal. It improves the vehicle image quality, especially the license plate region: the preprocessing preserves and enhances the original texture and gray information of the plate and removes the noise that may affect the texture and color information, which is convenient for license plate location [3,4].

Image Grayscale. The image captured by the license plate recognition system is usually a color image containing a large amount of color information. Although the visual effect of a color image is good, it carries a large amount of data; it not only occupies large storage space but also slows the processing speed of the system, so the color image first needs to be changed to grayscale [5]. The gray processing of a color image replaces the three components (r, g and b) of each pixel by their average, that is, rr = gg = bb = (r + g + b) / 3. As shown in Fig. 1, (a) is the color vehicle image that the system collects, and (b) is its gray image after grayscaling.
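As a minimal illustration of the equal-weight grayscale formula above, the following Python sketch assumes an H×W×3 RGB array; the function name and layout are hypothetical.

```python
import numpy as np

def to_gray(rgb):
    """Equal-weight grayscale: rr = gg = bb = (r + g + b) / 3.

    rgb : (H, W, 3) uint8 color image; returns an (H, W) uint8 gray image.
    """
    gray = rgb.astype(np.float32).mean(axis=2)   # average the three channels
    return gray.astype(np.uint8)
```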


Fig. 1 The original vehicle image and grayscale image. (a) Color vehicle image; (b) gray vehicle image

The License Plate Image Enhancement. The image after grayscaling is dark, and the gray-level difference between characters and background is small (as shown in Fig. 1(b)), which is not conducive to locating the plate characters. Gray-scale image enhancement can adjust the gray distribution to make the image clearer. In this paper, the histogram equalization method is adopted to enhance the images. The basic principle of histogram equalization is that the gray values with large numbers of pixels, which play a major role in the image, are broadened, while the gray values with fewer pixels, which do not play a major role in the image, are merged; its purpose is a clearer image. The MATLAB simulation results are shown in Fig. 2: (a) is the original gray image, (b) is its histogram, (c) is the gray image after equalization, and (d) is the histogram of (c). Figure (b) shows that the original gray image has few gray levels, its gray values are concentrated and small, so the image is dim. Figure (d), the histogram of (c), approaches a uniform distribution and its gray levels are increased, so the image after equalization is clearer.
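A short sketch of histogram equalization as described above (broaden frequent levels, merge rare ones); the 256-level assumption and the function name are illustrative.

```python
import numpy as np

def equalize_hist(gray, levels=256):
    """Histogram equalization sketch.

    gray : (H, W) uint8 image. Each level is mapped through the normalized
    cumulative histogram so the output histogram is roughly uniform.
    """
    hist = np.bincount(gray.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)  # normalize to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)      # level mapping table
    return lut[gray]
```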

Fig. 2 The original gray image and the enhanced image. (a) the original gray image; (b) the original gray image histogram; (c) the equalized gray image; (d) the histogram of the equalized gray image

Character Positioning and Segmentation

The license plate must be extracted before identification; therefore the vehicle image is located and segmented.

The License Location Algorithm Based on Edge Detection and Regional Search. It can be seen from the gray image of the car that the license plate has a distinct texture and large changes in gray value. In the edge image, the license plate region has more detailed edge information than the non-plate areas. The license plate exhibits relatively continuous gray transitions (gray-level changes), and the distance between two transitions lies within a certain range; the other regions generally do not have this


feature. Therefore, the plate can be located by scanning the edge image horizontally and vertically according to the license plate texture features. Constraint (1) enhances the gray jumps in the plate region and suppresses the jumps caused by non-plate areas and noise: a gray jump is counted only if the distance to the adjacent jump lies within the allowed range,

ρ1 ≤ distance ≤ ρ2   (1)

where distance is the distance between two adjacent jumps within the scan line, and ρ1, ρ2 bound the range between two adjacent jumps within the scan line; ρ1 and ρ2 are experience values. The number of jumps of the scan lines within the license region is relatively stable, so the upper and lower bounds of the plate can be located exactly by searching for the area satisfying the texture features. The constraint can be expressed as equation (2):

Line(Jump ≥ 14) ≥ τ   (2)

where Line is the number of consecutive scan lines, Jump is the number of gray jumps within a scan line, and τ is an experience value. The image after pretreatment and location is shown in Fig. 3.

Fig. 3 The gray image plate

The License Plate Character Segmentation. The located and extracted image is grayscale, which is not suitable for recognition processing, so it is changed to a black-and-white image by a binarization algorithm to reduce the amount of data.

The License Plate Image Binarization. Global or local threshold methods can be used for license plate image binarization. In this paper, the global dynamic threshold method is used, and the optimal threshold is solved by an iterative method. The iteration formula is shown in equation (3):

$T_{i+1} = K \left( \frac{\sum_{l=0}^{T_i} h_l \cdot l}{\sum_{l=0}^{T_i} h_l} + \frac{\sum_{l=T_i+1}^{L-1} h_l \cdot l}{\sum_{l=T_i+1}^{L-1} h_l} \right)$   (3)

where $h_l$ is the number of pixels with gray value $l$ and $L$ is the number of gray levels. When $T_{i+1}$ equals $T_i$, the iteration ends, and the value $T_i$ is used as the final segmentation threshold $T$; $K = 0.5$. The binarization of the license plate image is shown in Fig. 4.
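The iterative threshold of equation (3) can be sketched as follows, with K = 0.5 as in the text; starting from the global mean gray value is an assumption of this sketch.

```python
import numpy as np

def iterative_threshold(gray, K=0.5, levels=256):
    """Iterative global threshold (3): T_{i+1} = K * (mean below T + mean above T).

    Stops when the threshold no longer changes; returns (T, binary image).
    """
    h = np.bincount(gray.ravel(), minlength=levels).astype(np.float64)
    l = np.arange(levels, dtype=np.float64)
    T = int(l @ h / h.sum())                         # start from the global mean
    while True:
        lo, hi = h[:T + 1], h[T + 1:]
        m_lo = (l[:T + 1] @ lo) / max(lo.sum(), 1e-12)   # mean gray below T
        m_hi = (l[T + 1:] @ hi) / max(hi.sum(), 1e-12)   # mean gray above T
        T_new = int(K * (m_lo + m_hi))
        if T_new == T:
            break
        T = T_new
    return T, (gray > T).astype(np.uint8)            # white characters -> 1
```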

Fig. 4 The license plate image binarization

Character Projection and Segmentation. Every character of the binary image needs to be segmented for identification by its upper and lower boundaries. The license plate frame, the character height, the character width and the center of each character are calculated from the vertical and horizontal projections of the binary image and from the projection valleys, peak widths and valley widths. The binary license plate is surrounded by a frame, so the frame must be removed before the character segmentation. The frame has higher and


narrower peaks than a general character in the vertical and horizontal projections, and it sits at the ends of the projection. According to the frame projections and the valleys between adjacent characters, the top, bottom, left and right borders can be removed. After removing the license plate frame, the characters can be extracted according to the character width and transformed into standard sub-images. The horizontal projection of the license plate is shown in Fig. 5.

Fig. 5 The vertical and horizontal projections of the license plate image. (a) the vertical projection; (b) the horizontal projection; (c), (d) the binary license plate
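A sketch of the projection-based character splitting described above; the minimum-width filter is an illustrative assumption for suppressing narrow noise runs.

```python
import numpy as np

def split_characters(binary, min_width=3):
    """Character segmentation sketch using the vertical projection.

    binary : (H, W) 0/1 plate image after border removal. Columns whose
    projection is zero are valleys; runs of nonzero columns are characters.
    Returns a list of (left, right) column ranges.
    """
    proj = binary.sum(axis=0)                 # vertical projection per column
    chars, start = [], None
    for col, v in enumerate(proj):
        if v > 0 and start is None:
            start = col                       # a character run begins
        elif v == 0 and start is not None:
            if col - start >= min_width:      # ignore narrow noise runs
                chars.append((start, col))
            start = None
    if start is not None:
        chars.append((start, len(proj)))
    return chars
```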

Fig. 6 The segmentation results of the license plate image

Character Recognition

After character segmentation, the dot-matrix images of the characters (numbers, letters and Chinese characters) are obtained. Because these dot-matrix images differ in size and stroke thickness, they are normalized, and a database of normalized images is built. The license plate characters consist mainly of about 50 Chinese characters, 26 uppercase letters and 10 Arabic numerals. These characters are trained as samples by the neural network and their features are extracted. Once the database is established, the characters can be identified. The characters are recognized by template matching: based on the distance between the features of the input image and the template, the unknown image type is determined by the minimum distance method. The similarity between the character graphic and the character template is represented by Conf and Diff. The specific method is that the sample is subtracted from each image in the database and the error is calculated; the image with the minimum error is the recognition result. After the character recognition processing, the license plate number identified is Yu A652D0.

Summary

A fast license plate recognition process is researched in this paper. The license plate recognition system includes image preprocessing, license plate localization, character segmentation and character recognition. With this method, we tested 20 color images including blue-ground white-character plates and black-ground white-character plates, and the recognition rate reaches 90%. White-ground black-character plates can also be detected successfully after inverting the binarized gray image.


Acknowledgements

This study was financed by the scientific research tackling key subject of Henan Province (No. 102102210156).

References
[1] Gong Shengrong, in: Digital Image Processing and Analysis. Tsinghua University Press (2006).
[2] Zhou Nina. The preprocessing algorithm of license plate recognition. Computer Engineering and Applications, Vol. 39, No. 15 (2003).
[3] Cui Jiang, Wang Youren. The key technology of license plate recognition. Computer Measurement and Control, Vol. 11, No. 4 (2003).
[4] Chenchang Tao, Zhang Ling. Method to locate vehicle license plate based on the color variation. Computer Applications, Vol. 25, No. 12 (2008).
[5] Li Gang, Lin Ling, Wang Mengjun. License plate location based on mathematical morphology algorithm. Chinese Journal of Scientific Instrument, Vol. 28, No. 7 (2007).

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.783

A Modified TopDisc Algorithm in WSN

Du Zhiguo a, Hu Dahui b
Rongchang Campus of Southwest University, Chongqing, PR China

[email protected], [email protected]

Key words: WSN; Topology Control; TopDisc; Residual Energy;

Abstract. The TopDisc algorithm, a classic algorithm based on the minimum dominating set, puts forward an effective method of building an approximate network topology. TopDisc needs only local information and is a fully distributed and expandable topology-control algorithm. Its shortcomings are that the algorithm expense is considerable and that it does not take the nodes' residual energy into account. After a close analysis of the TopDisc algorithm, the author proposes a modified algorithm. The modified algorithm makes good use of the nodes' residual energy and reasonably chooses backbone nodes from the heavily energy-loaded ones, which makes the energy evenly consumed, the network topology stable and the network lifetime longer. Simulation data indicate that the modified algorithm is very flexible, making the energy evenly consumed, the network topology stable and the network lifetime longer.

Introduction

In traditional wireless mesh networks, topology control uses a self-adaptation mechanism to combine a certain number of nodes into a network, and the same holds for wireless sensor networks (WSN). Compared with traditional wireless mesh networks, WSN has a more complicated distribution environment, limited battery energy supply, numerous nodes, a changeable topology structure and high security requirements. For WSN, the major concern of topology control is to ensure network coverage and connectivity, adjust node capacity and form the backbone nodes according to certain rules, and consequently make the processing and transmission of network data feasible. So far, research on topology control at home and abroad has been fruitful, such as LMA/LMN of Kubisch, LINT/LILT of Ramanathan, and DRNG and DLMST of N. Li, all of which focus on node capacity; the TopDisc algorithm of Deb, the improved GAF of Santi, LEACH of Heinzelman and the HEED algorithm of Younis are based on hierarchical topology. Although the above algorithms provide the theoretical base for achieving topology control, they still need improvement. The author makes a close analysis of the TopDisc algorithm, points out its main deficiencies and brings forward a modified method, making it more effective in topology control.

Overview of TopDisc

TopDisc uses an initial node to send a topology discovery request at first; the request is then broadcast to identify the network backbone nodes, and an approximate topology is built by combining the neighboring nodes' information. After the formation of this approximate topology, only the backbone nodes respond to the request of the initial node, in order to reduce the network traffic of the algorithm itself. To identify the backbone nodes, a greedy algorithm is applied in TopDisc. To be specific, TopDisc puts forward two similar methods: three-color and four-color.


In the three-color algorithm, white, black and gray stand for the three states of nodes. White refers to unaffected nodes, that is, nodes which have not yet received the topology discovery request; black refers to the backbone nodes responsive to the topology request; gray refers to the normal nodes covered by black nodes, in other words, the neighbors of black nodes. In the first stage, all the nodes are colored white. The algorithm begins with an initial node and finishes with all the nodes marked black or gray (supposing the network is connected). The concrete working process is as follows, with a simplified code sketch after the steps. ① An initial node is marked black and broadcasts a topology discovery request in the network. ② When a white node gets the topology discovery request from a black node, the white node is colored gray and rebroadcasts the topology discovery request after a delay time TWB; TWB is in reverse proportion to the distance between it and the black node. ③ When a white node gets the topology discovery request from a gray node, it is marked black after a delay TWG; if, however, during TWG the node gets the topology discovery request from a black node, it is marked gray instead. Similarly, TWG is in reverse proportion to the distance between the white node and the gray node. Whether a node becomes black or gray, it continues to broadcast the request after the coloring. ④ All nodes already marked black or gray ignore all further topology discovery requests. In order to make each new black node cover as many uncovered nodes as possible, TopDisc adopts this delay-time mechanism, inversely proportional to the distance between nodes.
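A simplified Python sketch of the three-color process; the distance-dependent delay mechanism of steps ② and ③ is deliberately omitted, so this breadth-first version only approximates the coloring that TopDisc would produce.

```python
from collections import deque

def three_color(adj, start):
    """Simplified three-color coloring sketch (delay mechanism omitted).

    adj   : {node: set of neighbor nodes}; start: the initial node.
    A white node hearing the request from a black node turns gray; a white
    node hearing it only from gray nodes turns black. Real TopDisc breaks
    ties with distance-dependent waiting times, which this sketch ignores.
    """
    color = {n: "white" for n in adj}
    color[start] = "black"
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if color[v] != "white":
                continue                      # marked nodes ignore requests
            color[v] = "gray" if color[u] == "black" else "black"
            queue.append(v)                   # v rebroadcasts the request
    return color
```

For example, three_color({1: {2}, 2: {1, 3}, 3: {2}}, 1) colors node 1 black, node 2 gray and node 3 black, so node 2 is the intermediate node covered by both black nodes.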

Fig. 1 Working process of the three-color algorithm

As in Fig. 1, suppose Node a, the initial node, is colored black and broadcasts a topology discovery request in the network according to step ①. Nodes b and c receive the request, are colored gray, and rebroadcast the topology discovery request after a certain period of time. Suppose Node b is closer to Node a than Node c; then the waiting time of Node b is shorter and it is b that first rebroadcasts the request. Nodes d and e receive the topology discovery request from Node b and each waits its own period of time. Node a, already colored black, ignores the request from b in accordance with step ④. Suppose Node d is farther from Node b than Node e is; then Nodes d and e are likely to be colored black, supposing here that Nodes d and e have not received the topology discovery request from a black node within TWG. Note that there is an intermediate node (Node b) between the two black nodes which is covered by both of them; this is the essence of the three-color algorithm. From the above analysis, it is clear that there are overlapping areas among nodes. In order to increase the distance between black nodes and reduce the overlapping areas, the four-color algorithm is also adopted by TopDisc. In this variant, nodes have four different states, colored white, black, gray and dark gray, where the dark gray nodes are those which have received the topology discovery request but are not covered by any black node. Compared with the three-color algorithm, the four-color algorithm has fewer clusters and fewer overlapping areas among nodes, but some isolated black nodes (such as Node e) which do not cover any


gray node. Although the number of black nodes is the same in the three-color and four-color algorithms, the data transmitted by the four-color algorithm are fewer than by the three-color algorithm. TopDisc, making use of typical graph-theory algorithms, puts forward an effective method of setting up an approximate network topology and serves as a classic algorithm. TopDisc needs only local information and is a fully distributed and expandable topology-control algorithm. However, the algorithm also needs improvement: its expense is considerable, and it does not take the nodes' residual energy into account.

An improved TopDisc algorithm

The number of nodes of a wireless sensor network is enormous, and the environment is harsh and unattended, which makes recharging or changing the batteries hard to achieve. Therefore, highly efficient use of the batteries is required to extend the longevity of the network. Since the improved TopDisc algorithm cannot enlarge the energy of the battery, it must save energy; therefore the improved algorithm must distribute the energy consumption evenly over the nodes. When the battery energy of a node decreases to a certain critical value, the node can no longer serve as a black node, which prolongs the network's connectivity. The energy consumption of a black node is much larger than that of a gray node, and the nodes are colored black according to an appropriate probability. In the improved method, the black nodes are produced by a random seed method. In other words, in cycle period r+1 node n draws a random number from (0, 1) and compares it with the threshold T(n); when the number is smaller than the threshold T(n), the node is colored black. The threshold T(n) is calculated as follows:

T(n) = P / (1 − P·(r mod(1/P))), n ∈ G   (1)

where P is the expected proportion of backbone nodes among all nodes, r is the cycle number, and G is the collection of nodes that have not been backbone nodes in the last r mod(1/P) cycles. The above method supposes that each node has the same energy and function, to ensure that all nodes consume their energy within a similar time; otherwise a node with more energy should act as the backbone node with a comparatively larger probability. In that case, the probability of a node acting as the backbone node is the ratio of the node's present energy to the total residual energy. Such an approach requires evaluating the residual energy in the whole network, which makes the expense considerable and the process complicated, while in the random seed method each node can decide on its own whether to be a backbone node. The backbone node broadcasts its information to the network through the CSMA protocol, and the other sensors choose the corresponding backbone node. Thus the expense is relatively small and suitable for a sensor network with numerous nodes, though it cannot guarantee a reasonable distribution of the backbone nodes in the network. The realization of the algorithm includes three stages: topology discovery, topological stability and topology reconfiguration. ① Topology discovery: all the nodes are colored white and active, and the backbone node initiates the three-color algorithm. In this first stage all the nodes have the same energy, and the process is the same as that of the three-color algorithm. When a node is chosen as a backbone node, its energy En is tracked until the end of the algorithm. ② Topological stability: all nodes send fixed-length information to the backbone node. After the sending, a gray node with no task goes to sleep, wakes up after a set time t1 to listen for information from the backbone node, sends back its information along with its own residual energy, and when the set time t2 arrives goes to sleep again. However, if t2 has expired and the backbone node has information to deliver, the node becomes active at once. There


are two tasks for the backbone node to fulfill: to collect and transmit the data, and to choose a new backbone node. When the energy of the present node satisfies formula (2), a new node needs to be selected. In formula (2), En stands for the present residual energy of the backbone node, Emax is the maximum energy, and c is a constant greater than 0 and less than 1:

En < c·Emax   (2)

③ Topology reconfiguration: when the energy of the present node satisfies formula (2), topology reconfiguration starts. The backbone node broadcasts the update information and waits for a time satisfying formula (3), in which Tw represents the waiting time and Ta is the propagation delay time:

Tw >> 2·Ta   (3)

During the waiting time, any node that receives the information remains active, not sleeping, until the selection of the new backbone node. When the waiting time expires, a new backbone node is chosen according to the following mechanism: a node that got the update information and remained active until Tw judges for itself whether it is qualified as the new backbone node in terms of the propagation delay time algorithm, which can be described as follows: the time T after which a node becomes the new backbone node depends on its distance to the present backbone node and on its own present energy, as given by formula (4):

T = c1·Dd + c2·Er   (4)

In formula (4), c1 and c2 are constants, Dd stands for the distance between the present backbone node and the node, and Er is the node's present residual energy. A new backbone node is chosen according to the propagation delay time algorithm, and topological stability resumes. ④ Algorithm simulation: after the algorithm design, MATLAB is applied for simulation analysis. In a 150 m × 150 m area, 100 sensors are randomly distributed; each initial energy is assumed to be 1, and when a node's energy drops below 0.2 it is no longer qualified to be a backbone node. After a while, 20 nodes are randomly chosen and their residual energies are recorded. Fig. 2 compares the residual energy under TopDisc and under the modified algorithm. The fluctuation range of the new method is smaller and the distribution of residual energy is much more even compared with the old TopDisc. Without taking energy into consideration, in TopDisc some nodes always act as the backbone, which causes quick consumption of their energy.
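The two key formulas of the improved algorithm can be sketched as follows; the constants and the use of Python's random module are illustrative assumptions.

```python
import random

def is_backbone_candidate(P, r, in_G):
    """Random-seed election sketch of threshold (1): T(n) = P / (1 - P*(r mod 1/P)).

    P: desired backbone proportion; r: current cycle number; in_G: whether
    the node has not been a backbone node in the last (r mod 1/P) cycles.
    """
    if not in_G:
        return False
    T = P / (1.0 - P * (r % (1.0 / P)))
    return random.random() < T           # node elects itself with probability T

def reselection_delay(Dd, Er, c1=1.0, c2=1.0):
    """Waiting-time sketch of formula (4): T = c1*Dd + c2*Er.

    Dd: distance to the present backbone node; Er: present residual energy;
    the node whose delay expires first becomes the new backbone node.
    """
    return c1 * Dd + c2 * Er
```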

Fig. 2 Comparison of the residual energy of the two algorithms


Fig. 3 clearly demonstrates that the modified algorithm has a longer life cycle than TopDisc. In the modified algorithm the probability of serving as a backbone node is higher for nodes with more energy, and the first dead node does not appear until 1000 s, while in TopDisc the first dead node arises at 500 s. As time goes by, in TopDisc the dying of nodes accelerates, as shown by the slope, whereas in the modified algorithm the dying of nodes accelerates only after 3800 s, when every node is approaching death.

Fig. 3 Comparison of the life cycles of the two algorithms

Fig. 4 compares the lifetime of TopDisc and the modified algorithm. With the same 100 nodes, a failed-node threshold of 0.2 and a communication radius of 20 cm, the lifetime of the network differs for different monitored areas: the bigger the node density, the longer the network lifetime, and the bigger the node density, the more effective the improvement.

Fig. 4 Life cycles of the two algorithms in different areas

Conclusions

On the basis of the TopDisc algorithm and energy control, a modified algorithm is advanced which is more flexible and workable in controlling the residual energy distribution of the nodes. As to energy consumption control, the simulation experiments show the modified algorithm is very effective: the residual energy of the nodes is more evenly distributed and the lifetime is prolonged, which ensures an approximately optimal network topology. In this way the load of the network is balanced, the overlapping area of the nodes is reduced, the information transmission of each node is more independent and its transmission load is lower, and thus the lifetime of the whole network is enhanced.


References
[1] Deb B, Bhatnagar S, Nath B. A Topology Discovery Algorithm for Sensor Networks with Application to Network Management[R]. DCS Technical Report DCS-TR-441, Rutgers University, 2001.
[2] Heinzelman W, Chandrakasan A, Balakrishnan H. An Application-Specific Protocol Architecture for Wireless Microsensor Networks[J]. IEEE Transactions on Wireless Communications, October 2002, 1(4): 660-670.
[3] Xu Y, Heidemann J, Estrin D. Geography-Informed Energy Conservation for Ad Hoc Routing[C]. In Proceedings of the ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom), July 2001: 70-84.
[4] Xie Xin, Zhang Heng, Wu Peng. TopDisc algorithm for topology control research based on dump energy[J]. Transducer and Microsystem Technologies, July 2010, 6(29): 8-14.
[5] Zhao Baoguo, Zhang Wei, Liu Hengchang, et al. Cluster Partition Algorithm in Wireless Sensor Networks[J]. Chinese Journal of Computers, Jan 2006, 1(29): 161-165.
[6] Xie Xin, Zhang Heng, Yu Zhongping, et al. TopDisc Topology Algorithm Based on Energy and Power Control[J]. Journal of East China Jiaotong University, June 2010, 3(27): 58-87.
[7] Zhang Xue, Lu Sanglu, Chen Guihai, et al. Topology Control for Wireless Sensor Networks[J]. Journal of Software, April 2007, 4(18): 943-954.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.789

A Collaborative Filtering Recommendation Algorithm Based on User Clustering in E-Commerce Personalized Systems

Guanghua Cheng
Zhejiang Business Technology Institute, Ningbo 315012, China
Email: [email protected]

Key words: electronic commerce, personalized systems, collaborative filtering, recommendation algorithm, user clustering

Abstract. Electronic commerce recommender systems are becoming increasingly popular with the evolution of the Internet, and collaborative filtering is the most successful technology for building recommendation systems. Unfortunately, the efficiency of this method declines linearly with the number of users and items, so as the magnitudes of users and items grow rapidly, collaborative filtering systems run into a speed bottleneck. In order to raise the service efficiency of personalized systems, a collaborative filtering recommendation method based on user clustering is presented. Users are clustered based on their ratings on items; then the nearest neighbors of the target user can be found in the user clusters most similar to the target user. Based on this, the collaborative filtering algorithm is divided into two stages, separating the recommendation procedure into offline and online phases. In the offline phase, the basic users are clustered into centers; in the online phase, the nearest neighbors of an active user are found according to the cluster centers of the basic users, and the recommendation for the active user is produced.

Introduction

Recommendation is an integral part of daily life on the network. We usually rely on some external knowledge to make a wise decision about a specific item or action, for example when we go to the movies. This knowledge can be a common procedure; at other times our decision is based on information about popular preferences for an artifact. Ideally, a recommender system should model as many of the factors a decision maker would consider as possible [1,2,3]. Collaborative filtering is a technique complementary to content-based filtering: it learns users' mutual interests and preferences, building a model from the social behavior stored in an existing database to predict the target user's preferences [4,5,6,7]. Up to now, the main model for implementing collaborative filtering recommendation systems has been based on nearest-neighbor regression, the so-called memory-based technology. These systems generally use a two-step method: first, users similar to the active user are found, and then the recommendation is produced. Their basic shortcoming is that they cannot handle scalability and sparsity, which means that they face performance problems when the data is very large and sparse. Probabilistic latent semantic analysis has been widely used in information retrieval to identify latent terms and semantic relations between documents; it constructs a low-rank approximation of the term-document matrix, and in collaborative filtering it similarly captures trends in users' personal preferences.

In order to raise the service efficiency of personalized systems, a collaborative filtering recommendation method based on user clustering is presented. Users are clustered based on their ratings on items; then the nearest neighbors of the target user can be found in the user clusters most similar to the target user. Based on this, the collaborative filtering algorithm is divided into two stages, separating the recommendation procedure into offline and online phases. In the offline phase, the basic users are clustered into centers; in the online phase, the nearest neighbors of an active user are found according to the cluster centers of the basic users, and the recommendation for the active user is produced. The collaborative filtering recommendation algorithm based on user clustering can improve the recommendation efficiency of electronic commerce personalized systems.


User-based Collaborative Filtering

1. User-Item Rating Matrix. The task of the user-based collaborative filtering recommendation system is to predict the target user's rating for the target item, based on the users' ratings on observed items. Each user is represented by item-rating pairs [8,9,10], which can be summarized in a user-item table containing the ratings rij provided by the i-th user for the j-th item, as follows.

Table 1 User-item ratings table

          Item 1   Item 2   ...   Item n
User 1    r11      r12      ...   r1n
User 2    r21      r22      ...   r2n
...       ...      ...      ...   ...
User m    rm1      rm2      ...   rmn

where rij denotes the score of item j rated by user i. If user i has not rated item j, then rij = 0. The symbol m denotes the total number of users, and n denotes the total number of items.
2. Similarity Measurements
Let Up have the rating vector Vp = (V1p, V2p, ..., Vmp) and Uq the rating vector Vq = (V1q, V2q, ..., Vmq). There are several methods to measure the similarity sim(Up, Uq) of Up and Uq:

Standard cosine similarity, measured by the cosine of the angle between the two vectors:

sim(U_p, U_q) = \cos(V_p, V_q) = \frac{\sum_{k=1}^{m} V_{kp} V_{kq}}{\sqrt{\sum_{k=1}^{m} V_{kp}^2}\,\sqrt{\sum_{k=1}^{m} V_{kq}^2}}

where Vkp is the rating given by user uk to p, and Vkq is the rating given by user uk to q.

Modified cosine similarity. To correct the bias that different users rate on different scales, the modified cosine similarity subtracts each user's average rating over all items. Let U' be the set of users in the matrix A(m, n) who have rated both p and q:

sim(U_p, U_q) = \frac{\sum_{u_k \in U'} (V_{kp} - A_k)(V_{kq} - A_k)}{\sqrt{\sum_{u_k \in U'} (V_{kp} - A_k)^2}\,\sqrt{\sum_{u_k \in U'} (V_{kq} - A_k)^2}}

where Vkp and Vkq are as above, and Ak is the average of user uk's ratings over all items.

Correlation similarity. Here the similarity is measured by Pearson's correlation coefficient. For fairness, this metric is also computed over the set U' of users who have rated both p and q:

sim(U_p, U_q) = \frac{\sum_{u_k \in U'} (V_{kp} - A_p)(V_{kq} - A_q)}{\sqrt{\sum_{u_k \in U'} (V_{kp} - A_p)^2}\,\sqrt{\sum_{u_k \in U'} (V_{kq} - A_q)^2}}

where Ap and Aq are the average ratings of p and q over U'.
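For illustration, the three measures can be sketched in Python as follows (the rating matrix below is a small placeholder, not data from the paper; rows are users, columns are items, and 0 denotes an unrated item):

import numpy as np

def cosine_sim(x, y):
    # standard cosine similarity between two rating vectors
    return x @ y / (np.linalg.norm(x) * np.linalg.norm(y))

def adjusted_cosine_sim(R, p, q):
    # modified cosine: subtract each co-rater's mean rating over all items
    co = (R[:, p] > 0) & (R[:, q] > 0)              # users who rated both p and q
    means = R.sum(axis=1) / np.maximum((R > 0).sum(axis=1), 1)
    dp, dq = R[co, p] - means[co], R[co, q] - means[co]
    return dp @ dq / (np.linalg.norm(dp) * np.linalg.norm(dq))

def pearson_sim(R, p, q):
    # Pearson correlation over the co-rated user set U'
    co = (R[:, p] > 0) & (R[:, q] > 0)
    dp = R[co, p] - R[co, p].mean()
    dq = R[co, q] - R[co, q].mean()
    return dp @ dq / (np.linalg.norm(dp) * np.linalg.norm(dq))

# placeholder rating matrix (rows: users, columns: items; 0 = unrated)
R = np.array([[5, 3, 0, 4],
              [4, 0, 4, 5],
              [1, 5, 3, 0],
              [5, 4, 0, 4]], dtype=float)
print(cosine_sim(R[:, 0], R[:, 3]))
print(adjusted_cosine_sim(R, 0, 3))
print(pearson_sim(R, 0, 3))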


3. Selecting Neighbors
The neighbors who will serve as recommenders must be chosen. Two techniques have been employed in recommender systems: threshold-based selection, in which every user whose similarity to the target user exceeds a threshold value is considered a neighbor, and the top-N technique, in which a predefined number n of the most similar users is selected.
4. Producing the Recommendation
To generate a forecast of a user's rating, a user-based collaborative filtering algorithm is used [11,12]. Once the neighbors of the user have been determined, the weighted average of their ratings is computed. When the target item i has not been rated by the target user u, the prediction for the item is produced from the ratings of the nearest neighbors. The prediction formula is as follows:

P_{ui} = A_u + \frac{\sum_{p=1}^{q} (R_{pi} - A_p)\,\mathrm{sim}(u, p)}{\sum_{p=1}^{q} \mathrm{sim}(u, p)}

where Au is the average rating of user u over the items, Rpi is the rating of neighbor p for item i, Ap is the average rating of neighbor p over the items, sim(u, p) is the similarity of user u and neighbor p, and q is the number of neighbors.
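A minimal Python sketch of this prediction step, under the same conventions as above (the function and variable names are ours, chosen for illustration):

import numpy as np

def predict_rating(R, u, i, neighbors, sim):
    # Predict user u's rating for item i from the weighted rating deviations
    # of the nearest neighbors, following the formula above.
    # R: rating matrix (0 = unrated); neighbors: indices of u's q neighbors;
    # sim: dict mapping neighbor index p -> sim(u, p).
    def avg(v):                       # mean over rated items only
        rated = v[v > 0]
        return rated.mean() if rated.size else 0.0
    A_u = avg(R[u])
    num = sum(sim[p] * (R[p, i] - avg(R[p])) for p in neighbors)
    den = sum(sim[p] for p in neighbors)
    return A_u if den == 0 else A_u + num / den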

The Collaborative Filtering Recommendation System Based on User Clustering
The proposed user clustering method clusters similar users according to their ratings on items, so that users with similar interests are placed in the same class. When a target user arrives, the cluster to which the user belongs is determined first, and the target user's nearest neighbors are then searched for within that cluster, so that the nearest-neighbor search takes place in as small a user space as possible.
1. The Framework

Figure 1: The framework

In the proposed recommendation system, user-clustering-based collaborative filtering recommends resources that may be of interest to the user. The structure of the recommendation system is shown in Figure 1. It is divided into two parts. The first is the offline processing part, which completes the preprocessing of the Web logs, the clustering of the users, and the calculation of the similarity coefficients. The second is the online recommendation part, which uses the results of the offline phase to predict the target user's degree of interest in the items the user has not yet rated, and recommends the N items with the highest predicted interest to the user.
2. User Clustering Method
Traditional clustering algorithms select the initial cluster centers randomly; experiments show that this produces many outliers [13,14,15,16]. The collaborative filtering algorithm bases its recommendations on the nearest-neighbor search, which is the starting point of personalized recommendation. It has been found that users with many ratings are representative of the user population, so such users serve well as cluster centers. Therefore, in this paper the k users with the largest numbers of ratings are selected as the initial cluster centers; experiments prove that this reduces the number of outliers.

Input: the data source D = (U, I, R) and the user-item rating matrix A
Output: k clusters
Method:
    retrieve all n items from the item set I = {Item1, Item2, ..., Itemn};
    retrieve all m users from the basic user set U = {User1, User2, ..., Userm};
    select the k users with the most ratings among the m users as the initial cluster centers, denoted {W1, W2, ..., Wk};
    initialize the k clusters C1, C2, ..., Ck to empty, denoted C = {C1, C2, ..., Ck};
    repeat
        for each user ui in U
            for each cluster center Wj in {W1, W2, ..., Wk}
                compute the similarity sim(ui, Wj) of ui and Wj using the similarity measure defined above
            end for
            find Wm such that sim(ui, Wm) = max{sim(ui, W1), ..., sim(ui, Wk)}
            Cm = Cm + ui
        end for
    until the members of the clusters do not change

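The clustering procedure can be sketched in Python as follows (a simplified rendering of the pseudocode above: cosine similarity stands in for sim, and the cluster centers are recomputed as cluster means in each round, which the pseudocode leaves implicit):

import numpy as np

def cos_sim(x, y):
    return x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12)

def cluster_users(R, k, max_iters=100):
    # Cluster the rows (users) of the rating matrix R into k clusters whose
    # initial centers are the k users with the most ratings.
    m = R.shape[0]
    centers = R[np.argsort(-(R > 0).sum(axis=1))[:k]].astype(float)
    assign = np.full(m, -1)
    for _ in range(max_iters):
        new_assign = np.array([max(range(k), key=lambda j: cos_sim(R[i], centers[j]))
                               for i in range(m)])
        if (new_assign == assign).all():   # membership unchanged: stop
            break
        assign = new_assign
        for j in range(k):                 # recompute centers as cluster means
            if (assign == j).any():
                centers[j] = R[assign == j].mean(axis=0)
    return assign, centers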

3. Prediction
According to the similarities between the target user and the users in the corresponding cluster, the M users with the highest similarity are taken as the target user's nearest neighbors. The prediction formula can then be used to compute the target user's predicted ratings for all unrated items, and the N items with the highest predicted ratings are recommended to the target user; this is the target user's Top-N recommendation set.
Summary
In order to raise the service efficiency of personalized systems, this paper presents a collaborative filtering recommendation method based on clustering of users. Users are clustered based on their ratings on items; the nearest neighbors of the target user can then be found within the user clusters most similar to the target user. The algorithm divides collaborative filtering into two stages, separating the recommendation procedure into offline and online phases. The collaborative filtering recommendation algorithm based on user clustering can improve recommendation efficiency in electronic commerce personalized systems.
Acknowledgment
This work was supported by the Scientific Research Fund of Zhejiang Provincial Education Department (Grant No. Y200909659).


References
[1] Songjie Gong: An Efficient Collaborative Recommendation Algorithm Based on Item Clustering, Lecture Notes in Electrical Engineering, Volume 72, pp. 381-387.
[2] G. Xue, C. Lin, Q. Yang, et al.: Scalable collaborative filtering using cluster-based smoothing, in Proceedings of the ACM SIGIR Conference, 2005, pp. 114-121.
[3] Songjie Gong: Employing User Attribute and Item Attribute to Enhance the Collaborative Filtering Recommendation, Journal of Software, Volume 4, Number 8, October 2009, pp. 883-890.
[4] K. Honda, N. Sugiura, H. Ichihashi, S. Araki: Collaborative Filtering Using Principal Component Analysis and Fuzzy Clustering, Lecture Notes in Computer Science, 2001.
[5] B. Sarwar, G. Karypis, J. Konstan and J. Riedl: Recommender systems for large-scale e-commerce: Scalable neighborhood formation using clustering, Proceedings of the Fifth International Conference on Computer and Information Technology, 2002.
[6] Songjie Gong: A Collaborative Filtering Recommendation Algorithm Based on User Clustering and Item Clustering, Journal of Software, Volume 5, Number 7, July 2010, pp. 745-752.
[7] Songjie Gong: Personalized Recommendation System Based on Association Rules Mining and Collaborative Filtering, Applied Mechanics and Materials, Volume 39, pp. 540-544.
[8] Songjie Gong: An Enhanced Similarity Measure Used in Personalized Recommendation Algorithms, Advanced Materials Research, Volume 159, pp. 671-675.
[9] L.H. Ungar and D.P. Foster: A Formal Statistical Approach to Collaborative Filtering, Proceedings of the Conference on Automated Learning and Discovery (CONALD), 1998.
[10] M. O'Connor and J. Herlocker: Clustering Items for Collaborative Filtering, in Proceedings of the ACM SIGIR Workshop on Recommender Systems, Berkeley, CA, August 1999.
[11] A. Kohrs and B. Merialdo: Clustering for Collaborative Filtering Applications, in Proceedings of CIMCA'99, IOS Press, 1999.
[12] W.S. Lee: Online clustering for collaborative filtering, School of Computing Technical Report TRA8/00, 2000.
[13] S.H.S. Chee, J. Han, K. Wang: RecTree: An efficient collaborative filtering method, Lecture Notes in Computer Science, 2114, 2001.
[14] D. Bridge and J. Kelleher: Experiments in sparsity reduction: Using clustering in collaborative recommenders, in Procs. of the Thirteenth Irish Conference on Artificial Intelligence and Cognitive Science, pp. 144-149, Springer, 2002.
[15] J. Kelleher and D. Bridge: RecTree centroid: An accurate, scalable collaborative recommender, in Procs. of the Fourteenth Irish Conference on Artificial Intelligence and Cognitive Science, pp. 89-94, 2003.
[16] T. George and S. Merugu: A scalable collaborative filtering framework based on co-clustering, in Proceedings of the IEEE ICDM Conference, 2005.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.794

Study on Contact Characteristics of Free Rolling Radial Tire
Gang Cheng1,a, Weidong Wang2,b
1 School of Mechanical and Electronic Engineering, Shandong Jianzhu University, Jinan 250101, China
2 Department of Engineering Mechanics, Shandong University, Jinan 250061, China
a [email protected], b [email protected]

Key words: radial tire, finite element analysis, free-rolling, nonlinearity

Abstract. To study the rolling properties of a radial tire, an accurate 3D model of a 195/60R14 tire is established. The model includes the geometric nonlinearity due to large deformation, the material nonlinearity, the anisotropy of the rubber-cord composites, and the nonlinear boundary conditions arising from the tire-rim contact and the tire-pavement contact. The model can be used to simulate the changes of a rolling tire and calculate the tire deformation under various operating conditions. The profile of the inflated tire is studied experimentally and numerically, and the simulation result is in good agreement with the test result. Contact problems such as the tire deformation, the shape of the contact area and the contact pressure distribution are discussed in detail.
Introduction
The pneumatic tire is the only component transferring the load between the vehicle and the road. The tire must have performance characteristics such as comfort, high-speed running capability, tread-pattern life, low rolling resistance or power loss, durability and safety. Since tire behaviour is directly related to the vehicle/road interaction, the interface between the wheel (rim + tire) and the ground must be considered. Tielking and Abraham [1] measured footprint pressure distributions for two highway-type radial truck tires and a smooth-tread radial truck tire by using a triaxial load pin array. Wang et al. [2] theoretically studied the vertical load-deflection behavior of a pneumatic tire; their research showed that the tire vertical stiffness varies neither proportionally nor symmetrically as the slip angle of a cambered tire is changed. Pau et al. [3] used an ultrasonic method to study the contact of a motor-bicycle tire on a rigid surface; the raw reflection data were converted into graphic maps that display the contact area features and contain information about the contact pressure. Rao et al. [4] simulated the dynamic behavior of a pneumatic tire by using an explicit finite element method and discussed the effects of camber angle and grooved tread on tire cornering behavior. Gim et al. [5] presented a semi-physical model and experimentally formulated the vertical force as a function of the tire deflection, camber angle and lateral force.
Tire materials are multilayer, asymmetric and anisotropic, and many experimental studies have been done to establish an accurate FE model [6]. This study presents the contact characteristics of a 195/60R14 radial tire in the free-rolling state. The different parts of the tire with their corresponding material properties, the interference fit of the tire and rim, and friction are taken into account in the finite element model. The effects of rolling velocity on the deformation and on the contact stresses generated while the tread interacts with the road surface were calculated.


Finite Element Model of Radial Tire
The tire is a complex composite structure made up of many layers of rubberized fabric with reinforcement cords; the orientation of warp and weave and of the reinforcement cords gives a tire its unique mechanical design characteristics.
Mesh Generation and Element Selection. The radial tire mainly includes the tread, shoulder, sidewall, carcass, belt and bead ring. The materials of the sidewall, inner liner, apex and tread are pure rubber of differing hardness. The belt, carcass and bead ring are single-layer or multilayer cord-rubber composites. Figure 1 depicts a schematic view of the 2D tire section and the mesh constructed for the current analysis of the tire. The tire is composed of a single-ply polyester carcass, a single cap ply, two belt layers, and several steel bead cords. In the static tire analysis, these parts are usually modeled using solid elements and rebar elements.


Fig. 1 Finite element discretization of the tire cross section

Fig. 2 Tire-pavement contact model

The finite element software MARC provides the Mooney-Rivlin and Ogden material models. After measuring the material parameters, it is easy to define the material model and properties. At the same time, the user can conveniently define the material stress-strain relation in the form of a user subroutine or a table. The non-linear mechanical properties of the elastomers can be obtained by tests [6], and the corresponding material constants of the elastomers are fitted from the testing data. The rebar model describes the properties of the belt, carcass, chafer and bead. The rubber matrix is defined by the Mooney-Rivlin model, and the cords are defined by a linear elastic model.
Treatment of Boundary Conditions. Figure 2 shows the boundary conditions of the tire. The friction between the tire and the wheel rim is considered; the friction coefficient between them is 0.5. The contact between the tire and the road involves large displacement and nonlinearity. Compared with the tire material, the road is regarded as a rigid body. The friction between the tire and the road is considered, and the Coulomb friction model for rolling is adopted. According to Zhuang [7], the friction coefficient is 0.55.
Loading Cases. First, the axial movement of the left and right rim is controlled exactly until it reaches the standard width of the wheel rim, so that the assembly and positioning load case between the wheel rim and the tire is realized. Secondly, a uniform face pressure is applied on the tire inner surface; the normal inflation pressure is 0.25 MPa. Then the road is moved towards the tire axis to make the tire produce a definite deflection as the tire is loaded. One control node on the tire axis is defined; the node is tied to the road, and the tire load rating of 5194 N is applied on this node. The deflection under the rated load is controlled exactly by the load applied on the control node. Lastly, the rotation axis and rotation speed of the tire are defined; meanwhile, the relative speed between the road and the tire must also be defined.


Analysis Results
Deformation of the pneumatic tire. Figure 3 shows the shape of the tire deformation under inflation: the tire inflates and the sidewall is extruded outward. A compilation of the test data and model results for an inflation pressure of 0.25 MPa is listed in Table 1. The analysis result accords with the testing result.

Fig. 3 Profile of the pneumatic tire

Table 1 Comparison of the computed deformation characteristics with the measured data under inflation pressure 0.25 MPa (unit: mm)

                    Simulation result   Test    Error %
Overall diameter    589.5               590.2   0.12
Overall width       202.3               201.5   0.38

Contact Stress Distribution. Figure 4 shows the variation of the ground counterforce with the tire angular velocity (ω). The ground counterforce increases with increasing tire angular velocity and is in direct proportion to ω². In the study of tire performance, especially at high rolling speed, the shape of the contact area is important. The normal stress distribution in the contact zone at different speeds during the free-rolling state, for a 0.25 MPa (36 psi) inflation pressure, is plotted in Figure 5. For the same pressure level, the maximum stress level increases as the tire speed increases. Contact shapes and sizes differ at different tire speeds; the contact area decreases along the circumferential direction of the tire as the tire speed increases. The maximum normal stress is located at the shoulder. For the same load, a lower tire speed gives a larger contact area, while a higher tire speed gives a higher stress concentration at the tire shoulder, and the wear of the shoulder also becomes more severe. When the tire rolls, the elastic plies of the tire, together with the road, bring the whole tire into a deformed state. Figure 6 shows the belt stress distribution of the tire cross-section in the contact zone for the inflated tire and the free-rolling tire, respectively. In the rolling state, the larger stress extends to the edge of the belt layer but decreases in the central area. Figure 7 shows the normal stress distribution along the circumferential direction in the central cross-section of the contact zone at different tire speeds during the free-rolling state. Figure 8 shows the variation of the maximum normal stress with speed. The contact stress increases with increasing tire speed. In particular, the stress level is higher at the front of the contact zone while the tire speed is less than 140 km/h; beyond that, the largest normal stress occurs in the center of the contact zone during the free-rolling state and increases with increasing tire speed. This phenomenon shows that as the rotary speed increases during free rolling, the centripetal force of the tire increases; the tread is extruded outward and carries more ground counterforce, while the contact area decreases. This is similar to the effect of inflation pressure.
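As a sanity check of the quoted F ∝ ω² relationship, the counterforce curve can be fitted against ω² by least squares. The sketch below uses synthetic points generated from an assumed F0 + c·ω² law; the constants (the 5194 N rated load as static offset, and c = 6.5) are our placeholders, not the paper's measured data:

import numpy as np

# synthetic (omega, F) samples standing in for the measured curve of Fig. 4
omega = np.linspace(0.0, 14.0, 8)                  # rotary speed, r/s
F = 5194.0 + 6.5 * omega**2 + np.random.normal(0, 20, omega.size)

c, F0 = np.polyfit(omega**2, F, 1)                 # linear fit of F against omega^2
print("F ~= %.0f + %.2f * omega^2" % (F0, c))      # counterforce grows with omega^2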


Fig. 4 The relationship between ground counterforce (N) and tire angular velocity (rotary speed, r/s)

Fig. 5 Normal stress distribution in the contact zone at different speeds during the free-rolling state (motion direction →): (a) 100 km/h, (b) 120 km/h, (c) 140 km/h, (d) 160 km/h, (e) 180 km/h, (f) 200 km/h

Fig. 6 Belt stress distribution of the cross-section of (a) the inflated tire and (b) the free-rolling tire at 120 km/h


Fig. 7 Normal stress distribution in the contact zone at different speeds during the free-rolling state

Fig. 8 Variation of the maximum normal stress (MPa) with speed (km/h)

Conclusions
The contact performance of the radial tire is studied numerically by 3D FEM simulation. The finite element analysis shows:
(1) The profile of the inflated tire is studied experimentally and numerically. The simulation result is in good agreement with the test result, so the validity of the 3D FEM model is verified.
(2) Increasing the tire rotary speed leads to increasing tire stiffness. The ground counterforce increases with the increase of the tire angular velocity and is in direct proportion to ω².
(3) The length of the contact zone along the circumferential direction decreases with the increase of tire speed under free rolling. The largest normal stress increases with the increase of tire speed, while the contact area is reduced.
Acknowledgements: The research work was supported by the National Natural Science Foundation of China (50775132), the Shandong Outstanding Young Scientist Research Award Foundation (BS2009CL047) and the Science Foundation of Shandong Province of China (ZR2010EM032).


References
[1] J.T. Tielking, M.A. Abraham: Transportation Research Record Vol. 1435 (1994), p. 92
[2] Y.Q. Wang, R. Gnadler and R. Schieschke: Vehicle System Dynamics Vol. 25 (1996), p. 137
[3] P. Massimiliano, L. Bruno, B. Antonio: Tire Science and Technology Vol. 36 (2008), p. 43
[4] K. Rao, R. Kumar and P. Bohara: Tire Science and Technology Vol. 31 (2003), p. 104
[5] G. Gim, Y. Choi and S. Kim: Vehicle System Dynamics Vol. 43 (suppl.) (2005), p. 267
[6] Y.J. Guan, G.Q. Zhao and G. Cheng: Journal of Reinforced Plastics and Composites Vol. 25 (2006), p. 1059
[7] J.D. Zhuang: Automobile Tire (Beijing Institute of Technology University Press, Beijing 1999).

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.800

Response surface methodology for the optimization of spicy black beans
Yaojun Sun1, Haiyan Gao2,a, Yuan Wang1 and Li Sun3
1 Department of Tourist Management, Henan Business College, Zhengzhou, 450044, China
2 School of Food Science, Henan Institute of Science and Technology, Xinxiang, 453003, China
3 Zheng Zhou Shi Commercial Technician Institute, Zhengzhou, 450121, China
a [email protected]

Key words: Black bean; Spicy; Optimization; Response surface methodology.

Abstract. The aim of this work was to optimize the process parameters for spicy (tingly and hot) black beans through a statistical approach. The process parameters influencing spicy black bean production were identified by using response surface methodology. The variables screened were highly significant and showed positive interactions. The optimum formula of spicy black beans was achieved at pepper 3.125%, Chinese prickly 2.15%, and salt 3.035%; such conditions resulted in a score of 92.
Introduction
Beans are an important source of protein, carbohydrates, vitamins and minerals for both human and animal consumption. Nevertheless, they contain antinutritional and toxic factors that impair, to various degrees, the biological utilization of their nutrients [1,2]. Moreover, most colored seeds contain heat-stable pigments and high amounts of dietary fiber, which may be responsible for the relatively low digestibility of these grains [3,4]. Black beans are among the most common legumes consumed in China, and leisure food produced from black beans is one of the most popular convenience foods. Like most legumes, black beans are a very important source of nutrients, especially proteins, and an excellent source of complex carbohydrates [5]. Response surface methodology (RSM) is one of the techniques used to explain the combined effects of all the factors in a process. In this paper, we describe the optimization of the appropriate parameters for spicy black bean production with the help of a full-factorial central composite design using RSM.
Materials and methods
Materials. Black beans, salt, pepper, Chinese prickly and monosodium glutamate were obtained from a local market.
Process. Black beans were cleaned and soaked in sufficient water at room temperature (25 ºC) for about ten hours. Pepper was cut, packed together with Chinese prickly in gauze, and put into boiling water for about 30 minutes; it was then added to the soaked black beans together with the other materials, such as salt and monosodium glutamate, and cooked for 30 minutes. The cooked black beans were then taken out, cooled to room temperature and vacuum-packed. The obtained products were evaluated by sensory analysis. The products were divided into three grades with the following scores: good taste (from 90 to 100), common (from 70 to 89) and bad taste (less than 70).
Response surface methodology (RSM). Black bean production was further optimized by using RSM with a central composite design (CCD). The levels of the three independent variables selected [pepper (A), Chinese prickly (B) and salt (C)] were optimized by RSM. Each factor in the design was studied at five different levels (Table 1). A set of 23 experiments was performed. All variables were taken at a central coded value of zero. The minimum and maximum ranges of the variables were used, and the full experimental plan with respect to their values in actual and coded form is listed in Table 2. Upon completion of the experiments, the average score of the black beans was taken as the dependent variable or response (Y).
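For illustration, the 23-run central composite design in coded units (eight 2³ factorial points, six axial points at ±1.682 and nine center points) can be generated as follows; the decode step assumes the factor centers and steps of Table 1:

from itertools import product
import numpy as np

alpha = 1.682
factorial = np.array(list(product([-1.0, 1.0], repeat=3)))          # 8 corner runs
axial = np.array([s * alpha * np.eye(3)[i]                           # 6 axial runs
                  for i in range(3) for s in (-1, 1)])
center = np.zeros((9, 3))                                            # 9 center runs
design = np.vstack([factorial, axial, center])                       # 23 x 3 coded design

def decode(coded, centers=(3.0, 2.0, 3.0), step=0.5):
    # convert coded levels to actual percentages of pepper, Chinese prickly, salt
    return np.asarray(centers) + step * np.asarray(coded)

print(design.shape)        # (23, 3)
print(decode(design[8]))   # an axial run expressed in actual units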


Table 1 Range of values for the response surface methodology

Levels         Pepper (%)   Chinese prickly (%)   Salt (%)
-α (-1.682)    2.159        1.159                 2.159
-1             2.5          1.5                   2.5
0              3.0          2.0                   3.0
+1             3.5          2.5                   3.5
+α (+1.682)    3.841        2.841                 3.841

Table 2 Experimental design and results of the CCD of response surface methodology

Exp. no.   Pepper   Chinese prickly   Salt     Score
1           1        1                 1       72
2           1        1                -1       86
3           1       -1                 1       79
4           1       -1                -1       78
5          -1        1                 1       82
6          -1        1                -1       76
7          -1       -1                 1       75
8          -1       -1                -1       62
9           1.682    0                 0       70
10         -1.682    0                 0       71
11          0        1.682             0       84
12          0       -1.682             0       73
13          0        0                 1.682   83
14          0        0                -1.682   85
15          0        0                 0       90
16          0        0                 0       91
17          0        0                 0       88
18          0        0                 0       90
19          0        0                 0       89
20          0        0                 0       89
21          0        0                 0       90
22          0        0                 0       93
23          0        0                 0       92

Statistical analysis and modeling. The data obtained from the RSM on the score of the black beans were subjected to analysis of variance (ANOVA). The results of the RSM were used to fit the second-order polynomial equation (1), as it represents the behavior of such a system more appropriately:

Y = b0 + b1A + b2B + b3C + b11A² + b22B² + b33C² + b12AB + b13AC + b23BC   (1)

where Y is the response variable, b0 the intercept, b1, b2, b3 the linear coefficients, b11, b22, b33 the squared coefficients, b12, b13, b23 the interaction coefficients, and A, B, C, A², B², C², AB, AC, BC the levels of the independent variables. The statistical significance of the model equation was determined by Fisher's test value, and the proportion of variance explained by the model was given by the multiple coefficient of determination, R-squared (R²). Design Expert (Ver. 7.0) by Stat-Ease Inc., Minneapolis, USA, was used in this investigation.


Results and discussion
The results of the CCD experiments for studying the effects of the three independent variables are presented along with the mean predicted and observed responses [6]. The final response equation that represented a suitable model for the score of the black beans is given below:

Y = 90.236 + 1.341A + 2.966B + 0.193C - 2.500AB - 4.000AC - 2.750BC - 7.109A² - 4.280B² - 2.336C²   (2)

where Y = score, A = pepper (%), B = Chinese prickly (%) and C = salt (%), with the factors expressed in coded units.
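The fitted model (2) can be maximized numerically over the coded region. A sketch using scipy is given below; because the published coefficients are rounded, the computed stationary point may differ slightly from the reported optimum:

import numpy as np
from scipy.optimize import minimize

def score(x):
    # fitted quadratic response (2) in coded units x = (A, B, C)
    A, B, C = x
    return (90.236 + 1.341*A + 2.966*B + 0.193*C
            - 2.500*A*B - 4.000*A*C - 2.750*B*C
            - 7.109*A**2 - 4.280*B**2 - 2.336*C**2)

res = minimize(lambda x: -score(x), x0=np.zeros(3),
               bounds=[(-1.682, 1.682)] * 3)
A, B, C = res.x
print("coded optimum:", res.x, "predicted score: %.2f" % score(res.x))
print("pepper %.3f%%, Chinese prickly %.3f%%, salt %.3f%%"
      % (3.0 + 0.5*A, 2.0 + 0.5*B, 3.0 + 0.5*C))   # decode per Table 1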

Table 3 Analysis of variance (ANOVA) for the quadratic model

Source        Sum of squares   df   Mean square   F value    P value
Model         1552.082          9   172.4535      36.30928   < 0.0001   significant
A             24.57057          1   24.57057      5.173218   0.0405
B             120.1029          1   120.1029      25.28709   0.0002
C             0.508952          1   0.508952      0.107157   0.7486
AB            50                1   50            10.52726   0.0064
AC            128               1   128           26.9498    0.0002
BC            60.5              1   60.5          12.73799   0.0034
A²            802.9441          1   802.9441      169.0561   < 0.0001
B²            291.103           1   291.103       61.29036   < 0.0001
C²            86.68544          1   86.68544      18.25121   0.0009
Residual      61.74443         13   4.749572
Lack of fit   42.18888          5   8.437776      3.451817   0.0587     not significant
Pure error    19.55556          8   2.444444
Cor total     1613.826         22

The regression equation obtained after the analysis of variance (ANOVA) gives the score of the black beans as a function of pepper, Chinese prickly and salt; the results are shown in Table 3. The model F value of 36.31 implies that the model is significant, and the adequate precision of 19.119 indicates an adequate signal-to-noise ratio (a ratio greater than 4 is desirable). The model presented a high determination coefficient (R² = 0.9617), explaining 96.17% of the variability in the response. The value of the adjusted determination coefficient is also very high, indicating the high significance of the model [7]. The coefficient of variation (CV) indicates the degree of precision with which the experiments were compared; in the present case, a low CV (2.65) denotes that the experiments performed were highly reliable. The P values denote the significance of the coefficients and are also important for understanding the pattern of the mutual interactions between the variables. The P values suggest that, among the three variables studied, A (pepper) and B (Chinese prickly) were the most important factors affecting the score of the black beans, and that A (pepper), B (Chinese prickly) and C (salt) showed interactions between each pair of variables. The interaction effects and optimal levels of the variables were determined by plotting the response surface curves. Response surface contour plots of the response as a function of two factors at a time, holding all other factors at fixed levels (zero, for instance), are helpful in understanding both the main and the interaction effects of these two factors. These plots can easily be obtained by calculating, from the model, the values taken by one factor while the second varies (from -1.682 to +1.682, in steps of 1, for instance) under the constraint of a given Y value. The response surface curves are presented in Figs. 1-3. The shape of the response surface curves shows a moderate interaction between the tested variables. Lower and higher levels of both the vertical and horizontal axes did not result in higher scores; the highest score was recorded at the middle levels of both factors, while further increases in the levels resulted in a gradual decrease in score.


Fig. 1. Response surface graph showing interaction between pepper and Chinese prickly.

Fig. 2. Response surface graph showing interaction between pepper and salt.

Fig. 3. Response surface graph representing the interaction between Chinese prickly and salt.


The optimum combination was found to be pepper 3.125%, Chinese prickly 2.15%, and salt 3.035%. Further, to validate the proposed experimental methodology, the observed score of 92 was compared with the expected score of 90.55 under the optimized conditions and found to be very close.

Summary
In the present work, we have demonstrated the optimization of spicy black beans by a factorial experimental design, leading to a substantial increase in the score of the black beans. Our study showed positive interactions affecting the score between the three variables. The optimum formula of spicy black beans was achieved at pepper 3.125%, Chinese prickly 2.15%, and salt 3.035%; such conditions resulted in a score of 92.
References
[1] A. Carmona, L. Borgudd, G. Borges and A. Levy-Benshimol: Nutritional Biochemistry Vol. 7 (1996), p. 445-450.
[2] A. Vargas-Torres, P. Osorio-Díaz, J.J. Islas-Hernández, J. Tovar, O. Paredes-López and L.A. Bello-Pérez: Journal of Food Composition and Analysis Vol. 17 (2004), p. 605-612.
[3] J. Tovar, I.M. Björck and N.G. Asp: Journal of Nutrition Vol. 122 (1992), p. 1500-1507.
[4] C. Melito and J. Tovar: Food Chemistry Vol. 53 (1995), p. 305-307.
[5] S.R. Minka, C.M.F. Mbofung, C. Gandon and M. Bruneteau: Food Chemistry Vol. 64 (1999), p. 145-148.
[6] J.L. Uma Maheswar Rao and T. Satyanarayana: Bioresource Technol Vol. 98 (2007), p. 345-352.
[7] R. Kammoun, B. Naili and S. Bejar: Bioresource Technol Vol. 99 (2008), p. 5602-5609.

© (2011) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/AMR.267.805

A Model of Semantic Development Chain for Multidisciplinary Complex Electronic Equipments
Zhou Hong-gen1,2,a, Tang Wen-cheng1,b, Jing Xu-wen2,c, Zhao Xiang-jun2,d
1 School of Mechanical Engineering, Southeast University, Nanjing 211189, China
2 School of Mechanical Engineering, Jiangsu University of Science and Technology, Zhenjiang 212003, China
a [email protected], b [email protected], c [email protected], d [email protected]

Abstract: During each period of modern complex electronic product development, the product is the core around which development is organized and managed. The product development is completed through the coordination of different enterprises or departments, which together constitute the complex electronic product development chain. This paper focuses on the basic principle of the complex electronic product semantic development chain, defines its basic elements and substantive characteristics, establishes its working pattern and discusses its major functions. Finally, based on the system structure of the complex electronic product semantic development chain, the method and process of realizing the semantic development chain model are discussed, providing a logical foundation for the semantic integration and reuse of complex electronic product information.
Introduction
The development process of complex electronic products is not only a question of product design, but also a synthetic systems engineering effort combining knowledge, processes, resources and technical personnel from many related domains. The development of complex electronic products requires not only innovation and agile response, but also systematic characteristics such as integration, heterogeneity, distribution and cooperativity. With the increase of complexity and technology content in complex electronic products, a single enterprise can no longer be competent for the entire development process [1,2]. It has therefore become a common demand among enterprises to complete product development through the coordination of enterprises or departments from multiple domains and from different locations or times. The changes of the market, the external environment and the resource structure also put forward new propositions for modern complex electronic product development theories and methods. The concept of the complex electronic product development chain and its related research were proposed against this background [3-5].
The development chain links each stage of product development and the developing members closely as an organic whole and, with the support of computer software and hardware tools and a network communication environment, completes the collaborative product development process. With the support of the product development chain, developers can analyze and optimize the product development process, drive software and hardware tools and resources, and efficiently develop high-quality products according to their understanding of user requirements. However, this process is based on the accurate organization, acquisition, sharing and reuse of product development data and knowledge, so appropriate theories and methods must be adopted to organize and manage the product development chain.
Technical Framework
The complex electronic product semantic development chain is a dynamic system composed of many factors that restrict and interact with each other. The system mainly includes six elements: the organization structure of the members, development resources, product data, development tasks or activities, the semantic support mechanism and the coordination mechanism. The member organizations are the executive subjects of the tasks in the development chain; they play different roles according to their division of labor in the development chain.


The product data object is the carrier of the targets and data in the development chain; with the continuous extension of the development chain, the product object is constantly decomposed into sub-objectives or sub-objects. Meanwhile, the sub-chain members carry out the development work, which makes the product data flow orderly through the development chain, forming a data stream. The development task, the key element of the product development chain, is the collection of development activities of the members around a series of product objects. The development resources contain all kinds of hardware and software resources that realize the complex electronic product development process. The semantic support mechanism mainly refers to the development chain semantics that coordinate and manage the development process of the complex electronic product.
Based on the analysis above, the basic structure of the complex electronic product semantic development chain can be defined as follows: ESDC = {O, D, T, R, S, C}, where O is the organizational structure of the development chain, D is the product data of the development chain, T is the set of development tasks or activities, R is the development resources, S is the semantic support mechanism of the complex electronic product development chain, and C is the cooperative mechanism of the complex electronic product development chain.
The Semantic Web provides a common understanding mode for the development activities and resources in the development chain, and establishes cooperative relations, namely informal relations of cooperation, among developers. Cooperation is achieved by the integration of external resources or by cooperation with related enterprises, which is also a process of information creation and transfer. In order to improve the efficiency of product development and technological competitiveness, the first step is to analyze the input and output, the constraints and the knowledge of each development activity or task; the next step is to recognize how the related development activities or tasks of the development chain interact, and to make provision for the semantic cooperation of the development chain. Based on the analysis above and on previous research experience, this paper presents a system framework for the complex electronic product semantic development chain, as shown in Fig. 1.


Fig. 1 The semantic development chain framework for complex electronic products
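As an illustration only, the six-element structure ESDC = {O, D, T, R, S, C} could be rendered as a typed record; the field types below are our assumption, not part of the paper:

from dataclasses import dataclass, field

@dataclass
class ESDC:
    # six-element structure of the semantic development chain (illustrative types)
    organizations: list = field(default_factory=list)   # O: member organizations
    product_data: dict = field(default_factory=dict)    # D: product data objects
    tasks: list = field(default_factory=list)           # T: development tasks/activities
    resources: list = field(default_factory=list)       # R: software/hardware resources
    semantics: dict = field(default_factory=dict)       # S: semantic support mechanism
    cooperation: dict = field(default_factory=dict)     # C: cooperative mechanism

radar_chain = ESDC(organizations=["structure dept.", "RF dept."],
                   tasks=["concept design", "detail design"])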


Modeling Process of ESDCM Ontology
Product modeling technology plays a key role in product development and its process management. Although, according to traditional cost calculation methods, development costs account for only 5% of the entire product cost, they determine 70% of the total product cost [6]. Capturing the information of the product development process has already become a hot research topic in the product design domain, and several models and theories have been developed. Existing research shows that there is no universal theory or model at present, so researchers still expect to construct a comprehensive engineering activity ontology that can capture and manage development knowledge to describe the product development process. The existing research nevertheless provides a rich application background for ontology development. Judging from the public literature, several ontologies have been well applied in industrial fields; moreover, the importance of ontology has been recognized in different fields, such as cognitive psychology, artificial intelligence design, lifelike design, design cognition theory, planning and knowledge engineering [7,8]. As a structured knowledge and digital resource organization model, ontology makes the concepts (categories), and the relations between them, that are implicit in information resources explicit while ensuring semantic coherence. Defining the relationships between development resources and targets based on ontology semantics, establishing a more common, standardized classification model of product development resources, and thereby improving the effectiveness of development chain management is very practical research.
According to the characteristics of the ESDCM model, which belongs to the collaborative environment of radar products, the method shown in Fig. 2, Multidisciplinary Domain Ontology Building (MDOB), is adopted to build the ontology. The concrete construction process is as follows.

Fig. 2 The building process of ESDCM in the MDOB method

(1) Selecting domain-dependent ontologies as the dependent ontologies of the domain ontology. The multidisciplinary characteristics of ESDCM are reflected by its dependence on ontologies of multiple domains. For example, the radar ontology chooses the mechanical and electrical fields as its domain-dependent ontologies.
(2) Dividing the target ontology of ESDCM into static and derivative ontologies. Static ontologies can be shared; they are effective for the whole ontology. At this step, the target ontology is divided into a static ontology and a derivative ontology.
(3) Selecting elements to construct the ESDCM ontology. The constructive elements of the ontology, which form a list of key terms and concepts in the field, constitute the development resources term list.
(4) Selecting an ontology theory to build the ESDCM ontology. The ontology theory provides the basic semantics independent of any specific language. Set theory is the basic theory of the electrical and mechanical ontologies; it provides the term and relationship sets for all types of ontologies and can be applied to the radar product ontology and the development resources ontology.


(5) Selecting an ontology description language. According to the integration and sharing requirements of the product development network environment, the W3C's OWL is selected to describe the ESDCM ontology, through which the concepts and the relationships between them are formed.
(6) Creating ESDCM instances. First, choose the development resource and process classes that need to be instantiated from the class hierarchy structure; then create an object for each class and assign values to its various properties.
(7) Evaluating the ESDCM ontology. The ESDCM ontology should satisfy not only the universal principles of building ontologies, but also the effective sharing of product development resources in the network environment. If it meets the conditions above, the ontology is complete; otherwise, editing operations on the concepts and relationships in the field are required to make the ESDCM ontology more successful.
Establishment of ESDCM Concept Set
The purpose of an ontology is to capture the knowledge of the related fields, provide a common understanding of the domain knowledge, determine the accredited concepts in the area, and give the definitions of these concepts and their interrelationships at different levels of formalization. Therefore, the definition of the concept set is the key link in integrating the ontology model [9].
Concepts are the basis of establishing an ontology. Object-oriented abstraction methods coincide with the way people perceive the world. Concepts and related classes in the domain ontology can be defined by using object-oriented methods to identify objects, and by abstracting from concrete objects to discover and identify the classification and assembly structures of the classes. For the complex development chain resource data, the product development entities of the real world can be reflected in an abstract concept space by using methods such as clustering, generalization, classification and inheritance.
ESDCM is the conceptualization of the relevant information and knowledge about the complex electronic product development chain. By analyzing the related fields and knowledge in complex electronic product development, and according to the processes of constructing concepts and relationships established above, the concept collection of the complex electronic product development chain is established. This set involves the concepts, semantic descriptions, attributes and examples of the development organization, products, development activities and development resources. There are 322 concept classes (belonging to 4 groups), nine base-class relations (expanded into 35 typical relationships), and 532 generic cases in ESDCM. The concept classes and example descriptions related to the product development activities in ESDCM are shown in Table 1, and the typical relationships are illustrated in Table 2.
Example of ESDCM Modeling
After creating the description model of the complex electronic product semantic development chain, we can use this model to describe the product development chain. As described above, this paper builds the development chain model on the basis of the ESDCM ontology and the OWL language, and the semantic development model generated by this method is a strictly formal model that computers can handle. Because of the particular functions of radar products and the multidisciplinary properties of their development process, this paper constructs the ESDCM ontology model of radar products through the general terms provided by the model, in order to achieve cognitive consistency in the product development process, solve the problem of semantic heterogeneity, and define the constraints used in the concepts; in this way, the tacit knowledge that actually exists in the application is formally expressed. Protégé 4.1, developed by Stanford University, is selected as the modeling tool for the radar ESDCM ontology; the ontology elements of the development chain, such as classes, subclasses, attributes, instances and namespaces, are edited and added through man-machine interaction, and an OWL-based information model is derived from the concept hierarchy. Due to limited space, this section discusses the semantic description of ESDCM through a case, based on the framework description.

Table 1 The Conceptual Structure of Product Development Activities in ESDCM

No.   Concept term                         Meaning                                Example
1     Client Design Brief                  The customer's tactical requirements   Customer demand of a phased array RD
2     Design Requirements                  Design requirements                    Light weight, compact body
3     Design Objectives                    The developing goal                    The full set of drawings of a RD
4     Information sources                  Information resource                   Product information, LAN, etc.
...   ...                                  ...                                    ...
80    Design Properties and Relationship   Design attributes and relationships    The structure of the heat sink and the cooling effect of the chassis
81    Methods/Tools for generating ideas   Idea generation methods/tools          TRIZ theory or brainstorming, etc.

Table 2 The Conceptual Relationship Structure of ESDCM

No.   Relation name    Subject                           Object                      Definition
1     Assigned to      Industrial design & formulation   Collaborative design team   Determines that the industrial design and planning are assigned to the collaborative design team
2     Begins with      Concept design                    Concept generation          Determines the starting point of design activities, such as conceptual design
3     Conducts         Detail design analysis            Trade-off studies           Describes the type of activity, test, or analysis; these are the required steps of product and process design
4     Consists of      Industrial design & formulation   Formulation steps           Displays the components, materials, or parts of the design activities in the industrial designing and planning
...   ...              ...                               ...                         ...
34    Uses as input    Industrial design & formulation   Customer requirements       Points out the input elements of an activity
35    Uses as output   Detail design                     Drawings                    Points out the output elements of an activity

Firstly, the OWL clause <rdf:RDF xmlns="http://www.owl-ontologies.com/Products.owl#" ... /> is used to quote the product ontology and describe the domain namespace. Then <owl:Class> and <rdfs:subClassOf> are used to describe the ESDCM classes and the concept structure of their properties. Finally, OWL sentences such as <owl:DatatypeProperty>, <owl:ObjectProperty> and <rdfs:domain> are used to describe the DatatypeProperty elements (property statements describing minimum and maximum values, default values, data types, etc.) and the ObjectProperty elements (properties, attribute units, etc.). The BNF paradigm description and the OWL definition are shown in Fig. 3.
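As a hedged illustration of how such OWL statements could be produced programmatically, the sketch below uses the rdflib library; the namespace follows the Products.owl URI quoted above, while the class and property names are ours, chosen for illustration:

from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS, XSD

NS = Namespace("http://www.owl-ontologies.com/Products.owl#")
g = Graph()
g.bind("owl", OWL)

# owl:Class and rdfs:subClassOf express the concept hierarchy
g.add((NS.DevelopmentTask, RDF.type, OWL.Class))
g.add((NS.ConceptDesign, RDF.type, OWL.Class))
g.add((NS.ConceptDesign, RDFS.subClassOf, NS.DevelopmentTask))

# an owl:DatatypeProperty for a literal-valued attribute (e.g. a default value)
g.add((NS.defaultValue, RDF.type, OWL.DatatypeProperty))
g.add((NS.defaultValue, RDFS.domain, NS.DevelopmentTask))
g.add((NS.defaultValue, RDFS.range, XSD.string))

# an owl:ObjectProperty for a relation between concepts ("uses as input")
g.add((NS.usesAsInput, RDF.type, OWL.ObjectProperty))
g.add((NS.usesAsInput, RDFS.domain, NS.DevelopmentTask))
g.add((NS.usesAsInput, RDFS.range, NS.DevelopmentTask))

print(g.serialize(format="xml"))   # emits the OWL/RDF-XML description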

In the semantics-based e-manufacturing mode, the process system can formulate the corresponding process route strategy according to the state information of the manufacturing resources. The process system can also adjust the results when abnormal conditions occur in the manufacturing resources, and dynamically produce the optimal process decision suitable for the present situation, thus improving the executability of the planning results [10]. The equipment chosen must be relatively idle and working well in the present manufacturing environment. According to the requirements of the task and the load conditions of the equipment, manual intervention can be used, or the system itself automatically selects suitable equipment; the results of the choices are shown in the collection.
Summary
Product information distributed in different departments is not isolated; the items are linked to each other more or less, and it is these semantic relations that make it possible to share product information. The construction of the semantic development chain provides the theoretical basis and implementation tools for this: by constructing semantic ontology systems among different fields or departments, a semantic sharing platform for information exchange between them is built, in which information is highly shared and integrated; this can, to some extent, avoid providing false information, or omitting important information, to users.

Fig. 3 The BNF paradigm description and the OWL definition of the ESDCM model (defining, among others, <ESDCM>, <ProductComponent>, <ResourceComponent>, <OrgnizationComponent>, <UnitComponent>, <Projects>, <WorkFlow>, <Products>, <Models>, <ConceptModel> and <FunctionModel>)

... p){
6     stackEntry = stack.pop()
7     if (is_Result(stackEntry)) {
8         output stackEntry as a result
9     } else {                          // pass keyword witness information to the top entry
10        for (1 ≤ j ≤ m)
11            if (stackEntry.Keyword[j] = true)
12                stack.top.Keyword[j] = true
13    }                                 // end else_9
14 }                                    // end while_5
   // add the non-matching components of keyword to the stack
15 if keyword in SRstack                // using Algorithm_SR above, i.e. keyword is in the SR set of the node represented by the stack
16 for (p