Nuclear Power Plants: Innovative Technologies for Instrumentation and Control Systems: The Third International Symposium on Software Reliability, Industrial Safety, Cyber Security and Physical Protection of Nuclear Power Plant (ISNPP) [1st ed.] 978-981-13-3112-1;978-981-13-3113-8

This book is a compilation of selected papers from the 3rd International Symposium on Software Reliability, Industrial Safety, Cyber Security and Physical Protection of Nuclear Power Plant (ISNPP).


English | Pages XII, 290 [303] | Year 2019



Lecture Notes in Electrical Engineering 507

Yang Xu · Hong Xia · Feng Gao · Weihua Chen · Zheming Liu · Pengfei Gu, Editors

Nuclear Power Plants: Innovative Technologies for Instrumentation and Control Systems
The Third International Symposium on Software Reliability, Industrial Safety, Cyber Security and Physical Protection of Nuclear Power Plant (ISNPP)

Lecture Notes in Electrical Engineering Volume 507

Series Editors
Leopoldo Angrisani, Department of Electrical and Information Technologies Engineering, University of Napoli Federico II, Napoli, Italy
Marco Arteaga, Departament de Control y Robótica, Universidad Nacional Autónoma de México, Coyoacán, Mexico
Bijaya Ketan Panigrahi, Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, Delhi, India
Samarjit Chakraborty, Fakultät für Elektrotechnik und Informationstechnik, TU München, München, Germany
Jiming Chen, Zhejiang University, Hangzhou, Zhejiang, China
Shanben Chen, Materials Science & Engineering, Shanghai Jiao Tong University, Shanghai, China
Tan Kay Chen, Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
Rüdiger Dillmann, Humanoids and Intelligent Systems Lab, Karlsruhe Institute for Technology, Karlsruhe, Baden-Württemberg, Germany
Haibin Duan, Beijing University of Aeronautics and Astronautics, Beijing, China
Gianluigi Ferrari, Università di Parma, Parma, Italy
Manuel Ferre, Centre for Automation and Robotics CAR (UPM-CSIC), Universidad Politécnica de Madrid, Madrid, Madrid, Spain
Sandra Hirche, Department of Electrical Engineering and Information Science, Technische Universität München, München, Germany
Faryar Jabbari, Department of Mechanical and Aerospace Engineering, University of California, Irvine, CA, USA
Limin Jia, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Alaa Khamis, German University in Egypt El Tagamoa El Khames, New Cairo City, Egypt
Torsten Kroeger, Stanford University, Stanford, CA, USA
Qilian Liang, Department of Electrical Engineering, University of Texas at Arlington, Arlington, TX, USA
Ferran Martin, Departament d’Enginyeria Electrònica, Universitat Autònoma de Barcelona, Bellaterra, Barcelona, Spain
Tan Cher Ming, College of Engineering, Nanyang Technological University, Singapore, Singapore
Wolfgang Minker, Institute of Information Technology, University of Ulm, Ulm, Germany
Pradeep Misra, Department of Electrical Engineering, Wright State University, Dayton, OH, USA
Sebastian Möller, Quality and Usability Lab, TU Berlin, Berlin, Germany
Subhas Mukhopadhyay, School of Engineering & Advanced Technology, Massey University, Palmerston North, Manawatu-Wanganui, New Zealand
Cun-Zheng Ning, Electrical Engineering, Arizona State University, Tempe, AZ, USA
Toyoaki Nishida, Graduate School of Informatics, Kyoto University, Kyoto, Kyoto, Japan
Federica Pascucci, Dipartimento di Ingegneria, Università degli Studi “Roma Tre”, Rome, Italy
Yong Qin, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Gan Woon Seng, School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore, Singapore
Joachim Speidel, Institute of Telecommunications, Universität Stuttgart, Stuttgart, Baden-Württemberg, Germany
Germano Veiga, Campus da FEUP, INESC Porto, Porto, Portugal
Haitao Wu, Academy of Opto-electronics, Chinese Academy of Sciences, Beijing, China
Junjie James Zhang, Charlotte, NC, USA

The book series Lecture Notes in Electrical Engineering (LNEE) publishes the latest developments in electrical engineering, quickly, informally and in high quality. While original research reported in proceedings and monographs has traditionally formed the core of LNEE, we also encourage authors to submit books devoted to supporting student education and professional training in the various fields and application areas of electrical engineering. The series covers classical and emerging topics concerning:

• Communication Engineering, Information Theory and Networks
• Electronics Engineering and Microelectronics
• Signal, Image and Speech Processing
• Wireless and Mobile Communication
• Circuits and Systems
• Energy Systems, Power Electronics and Electrical Machines
• Electro-optical Engineering
• Instrumentation Engineering
• Avionics Engineering
• Control Systems
• Internet-of-Things and Cybersecurity
• Biomedical Devices, MEMS and NEMS

For general information about this book series, comments or suggestions, please contact leontina. [email protected]. To submit a proposal or request further information, please contact the Publishing Editor in your country:
China: Jasmine Dou, Associate Editor ([email protected])
India: Swati Meherishi, Executive Editor ([email protected]); Aninda Bose, Senior Editor ([email protected])
Japan: Takeyuki Yonezawa, Editorial Director ([email protected])
South Korea: Smith (Ahram) Chae, Editor ([email protected])
Southeast Asia: Ramesh Nath Premnath, Editor ([email protected])
USA, Canada: Michael Luby, Senior Editor ([email protected])
All other countries: Leontina Di Cecco, Senior Editor ([email protected]); Christoph Baumann, Executive Editor ([email protected])
** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex, SCOPUS, MetaPress, Web of Science and Springerlink **

More information about this series at http://www.springer.com/series/7818

Yang Xu · Hong Xia · Feng Gao · Weihua Chen · Zheming Liu · Pengfei Gu

Editors

Nuclear Power Plants: Innovative Technologies for Instrumentation and Control Systems
The Third International Symposium on Software Reliability, Industrial Safety, Cyber Security and Physical Protection of Nuclear Power Plant (ISNPP)

Editors Yang Xu Department of Engineering Physics Tsinghua University Beijing, China

Hong Xia College of Nuclear Science and Technology Harbin Engineering University Harbin, Heilongjiang, China

Feng Gao China Nuclear Power Design Co., Ltd. Shenzhen, Guangdong, China

Weihua Chen China Nuclear Power Design Co., Ltd. Shenzhen, Guangdong, China

Zheming Liu Product Information Committee of China Instrument and Control Society Beijing, China

Pengfei Gu China Nuclear Power Design Co., Ltd. Shenzhen, Guangdong, China

ISSN 1876-1100; ISSN 1876-1119 (electronic)
Lecture Notes in Electrical Engineering
ISBN 978-981-13-3112-1; ISBN 978-981-13-3113-8 (eBook)
https://doi.org/10.1007/978-981-13-3113-8
Library of Congress Control Number: 2018967732

© Springer Nature Singapore Pte Ltd. 2019

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

In recent years, along with the development of domestic research and international communication, more digital instrumentation and control (I&C) technologies have been applied in China's nuclear power plants, such as the microprocessor-based safety I&C system FirmSys developed by China General Nuclear Power Corporation and the FPGA-based safety DCS NASPIC developed by China National Nuclear Corporation. In order to solve problems arising in actual production and application, and to provide a platform for technical discussion, the 3rd International Symposium on Software Reliability, Industrial Safety, Cyber Security and Physical Protection of Nuclear Power Plant (ISNPP) was convened by related organizations and governmental divisions. Since 2016, this annual symposium has become an effective technical discussion platform for nuclear power builders, regulators, research institutions, and manufacturers.

The 3rd ISNPP was successfully held in Harbin, China, from August 15 to 17, 2018. It attracted around 100 researchers, experts, and engineers from 34 organizations, including Tsinghua University, the Ministry of Ecology and Environment, the State Key Laboratory of Nuclear Power Safety Monitoring Technology and Equipment, China Nuclear Power Engineering Company Ltd., and others, as well as institutions and companies from the aerospace industry. The symposium served as a platform for exchanging ideas on every aspect of nuclear power plant instrumentation and control systems, and it also promoted military-civilian integration in China.

More than 100 conference papers were submitted to the symposium, covering topics including digital instrumentation and control technology, electromagnetic compatibility, main control room and human-machine interface design, and software verification and validation. After anonymous peer review and selection by experts, 33 outstanding papers were accepted for these proceedings published in Lecture Notes in Electrical Engineering by Springer, including seven papers recognized as excellent. The keynote speeches "I&C Island Solutions Based on FirmSys", "Digital Transformation of I&C System", and "I&C System Components and Parts Localization" were presented at the symposium, in which the speakers shared their latest and most important research progress. In fact, many topics
discussed at the symposium provided important references and strong support for related work on nuclear power plants. We believe these papers will also benefit the entire nuclear instrumentation and control system industry. On the occasion of the publication of these papers, we would like to thank the organizers of the symposium for providing a good platform for nuclear power practitioners. We are also very grateful to the experts and scholars who provided support and guidance during the reviewing process. Finally, we would like to thank all the authors, without whose efforts and studies this volume would never have been published.

Shenzhen, China

Pengfei Gu

Organization

Sponsors
Product Information Committee of China Instrument and Control Society (CIS-PIC)
Nuclear Instrument and Control Technical Division of China Instrument and Control Society (CIS-NICT)
Professional Committee of Nuclear Facility Cyber Security, Nuclear Safety Branch, China Nuclear Society (CNS)

Organizer
China Nuclear Power Engineering Co., Ltd. (State Key Laboratory of Nuclear Power Safety Monitoring Technology and Equipment) (CNPEC)

Co-organizers
College of Nuclear Science and Technology, Harbin Engineering University
China Techenergy Co., Ltd. (CTEC)
China Nuclear Control Systems Engineering Co., Ltd. (CNCS)

Editors
Yang Xu, Department of Engineering Physics, Tsinghua University, Beijing, China
Hong Xia, College of Nuclear Science and Technology, Harbin Engineering University, Harbin, China
Feng Gao, China Nuclear Power Design Co., Ltd., Shenzhen, China


Weihua Chen, China Nuclear Power Design Co., Ltd., Shenzhen, China
Zheming Liu, Product Information Committee of China Instrument and Control Society, Beijing, China
Pengfei Gu, China Nuclear Power Design Co., Ltd., Shenzhen, China

Secretary of Organizing Committee
Xiaolian Wang, Product Information Committee of China Instrument and Control Society, Beijing, China

Director of Executive Committee
Yuzhou Yu, Product Information Committee of China Instrument and Control Society, Beijing, China

Contents

Communication Design of Low Residual Error Probability Based on Function Safety (Gui-Lian Shi, Ming-Li Li, Gang Li, Jie Zhang, and Chang-Yu Mo) .... 1
Apply FMEDA to Guide Self-diagnostic Design for Digital Circuit Board (Jie Zhang, Jin Fan, Gang Li, Ming-Li Li, and Yi-Qin Xie) .... 10
A Reusable Functional Simulation Verification Method Based on UVM for FPGA Products in DAS (Xiu-Hong Lv, Yun-Tao Zhang, Zong-Sheng Cao, Fei Wu, and Ling-Ling Dong) .... 17
The Method of Failure Analysis for Safety-Critical System Software Based on Formalization (Xiao-Bo Zhou, Jin Fan, Ru-Mei Shi, Ya-Dong Zhang, and Qiao-Rui Du) .... 28
A Study About Software V&V Evaluation of Safety I&C System in Nuclear Power Plant (Peng-Fei Gu, Zhe-Ming Liu, Wei Xiong, Wei-Hua Chen, and Sheng-Chao Wang) .... 37
A Study About Pre-developed Software Qualification of Smart Devices Applied in NPP (Sheng-Chao Wang, Tao Bai, Peng-Fei Gu, and Wang-Ping Ye) .... 48
Applications of Data Mining in Conventional Island of Nuclear Power Plant (Zhi-Gang Wu, Xiao-Yong Zhang, Chang-Ge Xiao, and Wen Chen) .... 58
A Hierarchically Structured Down-Top Test Equipment Debugging Method for RPS (Wang Xi, Tao Bai, Peng-Fei Gu, Wei Liu, and Wei-Hua Chen) .... 72
Discussion for Uncertainty Calculation of Containment Leakage Rate (Yu Sun, Jun Tian, Tian-You Li, and Zhao-Yang Liu) .... 78
Research and Improvement of the Flowmeter Fracture Problem of Condensate Polishing System in Nuclear Power Plant (Hai-Tao Wu, Xin Ding, and Tie-Qiang Lu) .... 87
Study on Optimization of Turbidity Control for Seawater Desalination System in Nuclear Power Plant (Hai-Tao Wu, Pan-Xiang Yan, Yong Yan, and Hao Zhong) .... 95
Optimization Scheme of Turbine Frequency Regulation for Passive Nuclear Power Plant (Le-Yuan Bai, Kai Gu, Bin Zeng, and Gang Yin) .... 104
Research and Optimization of the Control Cooperation Between Turbine Control System and DCS in Nuclear Power Plant (Xiao-Lei Zhan, Kai Gu, Bin Zeng, Xu-Feng Wang, and Chong Zhang) .... 114
Risk Analysis and Management of Software V&V Activities in NPPs (Hui-Hui Liang, Peng-Fei Gu, Jian-Zhong Tang, and Wei-Hua Chen) .... 123
The Optimization of Siemens Turbine Synchronization Strategy (Yan Liu, Pu Zhang, Gang Yin, and Chong Zhang) .... 129
Research on the Verification and Validation Method of Commercial Grade Software in Nuclear Power Plants (Wang-Ping Ye, Ya-Nan He, Peng-Fei Gu, and Wei-Hua Chen) .... 139
Research on Application of Sequence Control Strategy in Conventional Island System of Nuclear Power Plant (Hai-Ying Fan, Song-Di Ji, and Xin-Nian Huang) .... 149
Optimization of Control Solution for Deaerator Water Level Protection in Nuclear Power Plant (Ying Meng and Jie-Qing Huang) .... 156
Study on Layout Design and Mechanical Calculation of Seismic Instrumentation Tubing in Digital Nuclear Power Plant (Shuai Huang, Yuan-Jiang Li, Xing-Gao Zhan, and Hai-Tao Wu) .... 163
Research on the Verification and Validation Method of Safety Analysis Software in Nuclear Power Plants (Ya-Nan He, Wei Xiong, Peng-Fei Gu, and Jian-Zhong Tang) .... 174
A Study About Configuration Management Process for Safety DCS Software V&V in Nuclear Power Plant (Wei Xiong, Ya-Nan He, Peng-Fei Gu, Hui-Hui Liang, and Jian-Zhong Tang) .... 183
Research and Application on the Gateway Design of Digital Control System of Nuclear Power Plant (Yue-Liang Sun, Zhi-Jia Wang, Hong-Tao Sun, and Wei Bai) .... 190
Algorithm Research of the ICCMS for Qinshan Phase II NPP Based on FirmSys Platform (Xin-Xin Fan, Bo Zhang, Hong-Tao Sun, Li-Min Xia, and Wei-Zhi Zheng) .... 199
Application of Mosaic Instruments on Back-up Panel in Nuclear Power Plant (Zhi-Guo Ma, Chao Gao, Qing-Jun Meng, Hong-Tao Sun, and Fu-Ju Xie) .... 208
Equipment Qualification and Methods Application for Class 1E Digital Instrumentation and Control System (Jin Fan, Liang Li, Yong-Bin Sun, and Hua-Ming Zou) .... 219
Study on Itemized Requirements of Safety Digital I&C System in NPP (Tao Bai, Ji-Xiang Shu, Peng-Fei Gu, and Ya-Nan He) .... 226
Instrument Survivability Assessment During Severe Accident in HPR1000 (Liu Li and Guo Lin) .... 233
Influence Analysis of the Halogen Cables Used in the Safety Related Circuits of AP1000 Nuclear Power Plant (Xin-Yu Wang, Cong Li, Jing-Yuan Yang, and Qi Wu) .... 241
Network Risk Management Based on the ALARP Criteria for Nuclear Power Plant (Xiao-Jun Liu and Jun-Long Tan) .... 250
Analysis of Communication Failures in Radiation Monitoring System of a Nuclear Power Plant (Guang-Feng Li, Xin-Yu Wang, Jing-Yuan Yang, and Hong-Wei Sha) .... 255
Design of Geological Disaster Monitoring and Early-Warning System for Mountainous Nuclear Facilities (Zuo-Ming Zhu, Jin-Xing Cheng, Wei-Wei Wen, You-Peng Wu, Xin Gao, Rong-Zheng Xu, and Bin Zhang) .... 265
Research on Anti-seismic Qualification for Nuclear Safety Class I&C Equipment Base on Single-Frequency Wave Technical (Yong-Bin Sun, Ze-Sheng Hao, Hua-Ming Zou, Lei Wang, and Qiao-Rui Du) .... 274
The Approaches of Prevention, Detection, and Response for Cybersecurity of I&C Systems in NPPs (Jianghai Li, Chao Guo, Wen Si, and Xiaojin Huang) .... 283

Communication Design of Low Residual Error Probability Based on Function Safety

Gui-Lian Shi, Ming-Li Li (✉), Gang Li, Jie Zhang, and Chang-Yu Mo
China Techenergy Co., Ltd. (CTEC), Beijing 100094, China
[email protected]

Abstract. As the scale of the petrochemical and electric power industries grows, the safety instrumented system (SIS) becomes more complex and its safety requirements more rigorous. Generally, an SIS is composed of sensors, actuators, logical control devices, and communication systems. The design of the communication system is considered a key part of SIS design, and the residual error probability is an important index for evaluating communication safety. It is therefore crucial to have a method for designing a communication system with a low residual error probability. Based on the design experience of FirmSys, a safety integrity level (SIL) 3 safety platform developed by China Techenergy Co., Ltd. (CTEC), and in accordance with the standard IEC 61508, this article presents the design measures necessary to reach a low residual error probability, including data integrity assurance, diagnostic techniques, the number of bits in the block, etc. It also provides the design method for each element. This design method is applicable to the design of communication protocols that must meet functional safety requirements.

Keywords: Function safety · Residual error probability · FirmSys

1 Introduction

With the large-scale production of the petrochemical, electric power and other industries, ensuring the safety and reliability of safety systems and avoiding major industrial accidents have become main concerns of production safety. Disasters that shocked the world, such as the Bhopal gas leak in India and the Chernobyl nuclear power plant accident in the former Soviet Union, have brought unprecedented focus on safety in industrial production. The SIS is a category of safety-related system (SRS) and an important measure for ensuring production safety: it is required to correctly perform its safety functions before a dangerous event occurs, to avoid or reduce the occurrence of an accident [1]. Typically, an SIS consists of sensors, actuators, logical control devices, and communication systems. The design of the communication system is one of the key designs of an SIS, and the communication residual error probability is a quantitative index for evaluating communication safety [2]. Therefore, how to design communication systems with a relatively low residual error probability is fundamental to SIS design based on digital control system (DCS) technology.

FirmSys is a nuclear power plant safety control system platform developed by CTEC; as the brain and nerve center of a nuclear power plant, it plays a vital role in ensuring the safety of nuclear power plant equipment, personnel and the environment. The safety communication of FirmSys meets both the nuclear requirements and the functional safety requirements. This paper focuses on a key index of safety communication, the residual error probability of communication, including the definition of the communication design requirements based on functional safety. In addition, based on IEC 61508, it summarizes and analyzes the design factors of safety communication, and combines the FirmSys experience in nuclear power and functional safety to design a safety communication.

2 Residual Error Probability Requirements of Communication in Functional Safety

IEC 61508 proposes using the SIL to evaluate the risk-reduction capability of a safety function. The PFD (probability of dangerous failure on demand) and the PFH (average frequency of dangerous failure [h−1]) are the important quantitative indicators: the PFD is used for low-demand SIS and the PFH for high-demand SIS [3]. In this paper the PFD is taken as an example; the PFH can be treated in a similar way. For safety communication requirements, IEC 61508 refers to IEC 61784-3, which uses the communication residual error probability as a quantitative evaluation index. The communication residual error probability measures the probability that an undiagnosed communication failure still occurs after a series of measures has been taken. IEC 61784-3 requires the communication residual error probability to be far less than the SIL requirement for the safety function loop, namely at most 1% of the maximum system PFD, as shown in Fig. 1.

Fig. 1. Safety function loop (sensor - communication - logic processor - communication - actuator); the communication part is allocated 1% of the loop PFD

The corresponding relationship between the residual error probability, the PFD and the SIL is shown in Table 1 [3].

In the SIS design process, first identify, on the basis of hazard identification and risk analysis, the functions required to reduce the original risk to an acceptable level, and determine the PFD requirements for those functions. Afterwards, determine the corresponding SIL for the designed safety functions. Further, the requirement for the communication residual error probability is assigned to the SIS based on its SIL, as shown in Table 1.

Table 1. Correspondence of residual error probability, PFD and SIL

SIL | PFD | Residual error probability of communication
4 | [1.0e-5, 1.0e-4) | [1.0e-7, 1.0e-6)
3 | [1.0e-4, 1.0e-3) | [1.0e-6, 1.0e-5)
2 | [1.0e-3, 1.0e-2) | [1.0e-5, 1.0e-4)
1 | [1.0e-2, 1.0e-1) | [1.0e-4, 1.0e-3)
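To make the 1% allocation concrete, the following small sketch (an illustration of ours; the function and variable names are not from either standard) derives the communication budget from the PFD band of a target SIL and reproduces the right-hand column of Table 1:

# SIL -> PFD band [lower, upper) from Table 1; the communication channel
# is allocated at most 1% of the system PFD target (IEC 61784-3).
PFD_BANDS = {4: (1e-5, 1e-4), 3: (1e-4, 1e-3), 2: (1e-3, 1e-2), 1: (1e-2, 1e-1)}

def comm_budget(sil: int) -> float:
    """Residual error probability budget: 1% of the SIL's PFD lower bound."""
    lower, _upper = PFD_BANDS[sil]
    return 0.01 * lower

for sil in (4, 3, 2, 1):
    print(f"SIL {sil}: PFD band {PFD_BANDS[sil]}, comm budget {comm_budget(sil):.0e}")

For SIL 3, for example, this yields a communication budget of 1.0e-6, the lower bound of the [1.0e-6, 1.0e-5) band in Table 1.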

3 Design Method for Safety Communication with Low Residual Error Probability

The design factors affecting the communication residual error probability are analyzed below, and the design is then carried out for each factor.

3.1 Analysis of Relevant Design Factors

The basic formula for evaluating the communication residual error probability of safety communications is specified in the standard IEC 61784-3:

Λ_SL(Pe) = R_SL(Pe) × v × m    (1)

where:
Λ_SL(Pe) is the residual error rate per hour of the safety communication layer with respect to the bit error probability;
Pe is the bit error probability (unless a better error probability can be proven, a value of 10^−2 shall be used);
R_SL(Pe) is the residual error probability of a safety message;
v is the maximum number of safety messages per hour;
m is the maximum number of information sinks permitted in a single safety function;
SL denotes the safety communication layer.

The residual error probability of a message, based on detection using a cyclic redundancy check (CRC) mechanism, can be calculated with Eq. (2) (residual error probability for CRC polynomials):

R_CRC(Pe) = Σ_{i=1}^{n} A_i × Pe^i × (1 − Pe)^(n−i)    (2)

where:
A_i is the distribution factor of the code (determined either by computer simulation or by mathematical analysis);
n is the number of bits in the block, including its CRC signature.

By analyzing the above assessment method, the corresponding design elements can be sorted out: redundant checksum codes, the number of bits in the block, transfer media, transfer rate, and the number of information sinks. A small computational sketch of Eqs. (1) and (2) follows.
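The sketch below (our own illustration; the block size, generator polynomial, message rate and sink count are arbitrary assumptions, not the FirmSys protocol parameters) obtains A_i by the exhaustive-simulation route mentioned above, evaluates Eq. (2), and then scales the result to a per-hour rate with Eq. (1):

def crc_remainder(bits: int, n: int, poly: int, r: int) -> int:
    """Remainder of an n-bit word divided by a degree-r generator over GF(2)."""
    for shift in range(n - 1, r - 1, -1):
        if bits >> shift & 1:
            bits ^= poly << (shift - r)
    return bits

def weight_distribution(n: int, poly: int, r: int):
    """A_i of Eq. (2): undetected (remainder-zero) error patterns per weight i,
    found by exhaustive enumeration -- feasible only for small n."""
    A = [0] * (n + 1)
    for e in range(1, 1 << n):          # every nonzero error pattern
        if crc_remainder(e, n, poly, r) == 0:
            A[bin(e).count("1")] += 1
    return A

def r_crc(n, A, pe):
    """Eq. (2): R_CRC(Pe) = sum_i A_i * Pe^i * (1 - Pe)^(n - i)."""
    return sum(A[i] * pe**i * (1 - pe)**(n - i) for i in range(1, n + 1))

n, r, poly = 16, 8, 0x11D              # 16-bit block, illustrative CRC-8 generator
R = r_crc(n, weight_distribution(n, poly, r), pe=1e-2)
v, m = 3600, 2                         # assumed messages per hour and sinks
print("R_SL(Pe) =", R, "  Lambda_SL(Pe) =", R * v * m)   # Eq. (1)

For realistic 32-bit CRCs and long blocks the exhaustive enumeration is infeasible, which is why the approximation of Eq. (3) below is used in practice.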

3.2 Related Design Methods for the Residual Error Probability of Communication

The paper expands the design factors separately as follows:

• Redundant checksum codes. The most commonly used redundant checksum code is the CRC. When a certain number of errors occur in the communication data, the CRC can detect the communication failure, which greatly reduces the communication residual error probability, so the CRC is the first design factor to consider. Different CRC polynomials have different effects on the communication residual error probability, and IEC 61784-3 gives criteria for deciding whether a CRC polynomial is proper. Figure 2 shows how the residual error probability curves differ between proper and improper CRC polynomials.

Fig. 2. Proper and improper CRC polynomials


Investigations of the CRC method have shown that for the particular class of so-called proper CRC polynomials, a weighting factor 2^−r can be applied within the equation to build an approximation. The resulting residual error probability approximation for CRC polynomials is shown in Eq. (3) [2]:

R_CRC(Pe) ≈ 2^−r × Σ_{k=d_min}^{n} C(n, k) × Pe^k × (1 − Pe)^(n−k)    (3)

where d_min is the minimum distance between code words (the minimum Hamming distance), C(n, k) is the number of combinations of n taken k at a time, and r is the length of the CRC. Equation (3) shows that the communication residual error probability becomes lower as r increases, which means that the safety and reliability of the communication improve.

• The number of bits in the block. Equation (3) also shows that the reliability of the communication deteriorates as the number of bits in the block increases. With the same CRC polynomial, a different number of bits in the block can also make the minimum Hamming distance smaller, as illustrated below [4, 5]; a numeric illustration follows the figure. A proper number of bits in the block should therefore be chosen for the communication design; refer to Table 2 (Fig. 3).

Fig. 3. The trend of Hamming distance as the code values increase
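Both tendencies can be reproduced numerically from Eq. (3). In this sketch the combinations of n, r and d_min are assumed values for illustration, not properties of any particular polynomial in Table 2:

from math import comb

def r_crc_approx(n: int, r: int, d_min: int, pe: float) -> float:
    """Eq. (3): R_CRC(Pe) ~ 2^-r * sum_{k=d_min}^{n} C(n,k) Pe^k (1-Pe)^(n-k)."""
    return 2.0**-r * sum(comb(n, k) * pe**k * (1 - pe)**(n - k)
                         for k in range(d_min, n + 1))

pe = 1e-2   # IEC 61784-3 default bit error probability
# Longer blocks (larger n, smaller d_min) raise the residual error
# probability; a longer CRC (larger r) lowers it.
for n, r, d_min in [(128, 16, 6), (1024, 16, 4), (1024, 32, 6)]:
    print(f"n={n:4d}, r={r:2d}, d_min={d_min}: R ~ {r_crc_approx(n, r, d_min, pe):.2e}")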

Table 2. Relationship between Hamming distance and the number of bits in the block

For each 32-bit CRC polynomial (Koopman notation, factorization in braces), the table lists the block-length ranges, in bits, over which successive Hamming distances (HD, from 15 down to 2) are achieved; "–" marks HD slots with no corresponding block-length range.

HD: 15 14 13 12 11 10 9 8 7 6 5 4 3 2
IEEE 802.3 0x82608EDB {32}: 8–10, –, –, 11–12, 13–21, 22–34, 35–57, 58–91, 92–171, 172–268, 269–2974, 2975–91607, 91608–131072
Castagnoli (iSCSI) 0x8F6E37A0 {1,31}: 8, –, 9–20, –, 21–47, –, 48–177, –, 178–5243, –, 5244–131072
Koopman 0xBA0DC66B {1,3,28}: 8–16, –, 17–18, –, 19–152, –, 153–16360, –, 16361–114663, –, 114664+
Castagnoli 0xFA567D89 {1,1,15,15}: 8–11, –, 12–24, –, 25–274, –, 275–32736, –, 32737–65502, –, 65503+
Koopman 0x992C1A4C {1,1,30}: 8–16, –, 17–26, –, 27–134, –, 135–32737, –, 32738–65506, –, 65507+
Koopman 0x90022004 {1,1,30}: 8–32738, –, 32739–65506, –, 65507+
Koopman 0x80108400 {32}: 8–65505, –, –, 65506+, –, 65506+
Castagnoli 0xD419CC15 {32}: 8–17, 8–21, 22–27, –, 28–58, 59–81, 82–1060, 1061–65505, –

The safety communication of FirmSys determines the number of bits in the block from the system needs. The original design used a CRC polynomial whose Hamming distance was improper; during SIL certification, a proper CRC polynomial was selected, which makes the communication residual error probability meet the requirement.

• Transfer media. Because of losses in the transfer media and the influence of the environment during transmission, the transfer media strongly affect the bit error rate (Pe) of the communication. IEC 61784-3 provides a default value of 10^−2 if the error rate of the transmission media is not known. Transmission media with high reliability and low transmission loss should be selected, such as optical fiber or shielded twisted pair.

• Transfer rate. From Eq. (1), the transmission rate v is also an important factor when calculating the communication residual error probability of the communication system. The transmission rate should therefore be minimized under the precondition of meeting the application requirements.

• Number of information sinks. The number of information sinks, m in Eq. (1), is the number of terminals that receive the safety communication. The number m is relatively small and generally has little impact on the communication residual error probability. It is related to the architecture of the system, and a margin should be provided in the protocol design. Validation of the communication residual error probability is required in the application of the SIS.

3.3 Redundant Schema Design for Communication Systems

The basic formula for the communication residual error probability is specified in IEC 61784-3, but it does not consider the communication architecture design. When communication packets become longer, it is difficult to meet a low communication residual error probability requirement (such as SIL 3) through the design methods of Sect. 3.2 alone. According to system reliability theory, the reliability of the system can be improved by a redundant architecture. To further improve communication reliability, it is feasible to adopt either redundant communication links or redundant communication packages; in both cases, cross-comparison of the redundant data must be implemented. For redundantly designed safety communication, the communication residual error probability is calculated as follows [6, 7]:

Λ_SL(Pe) = C(N, M) × (R_SL(Pe))^M × v × m    (4)

where C(N, M) is the number of combinations of N taken M at a time, which means that the entire communication fails only if more than M blocks out of N blocks fail, and R_SL(Pe) is the residual error probability of a single safety message.
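A quick numeric comparison of Eqs. (1) and (4), with assumed values for the per-message residual error probability, message rate and sink count, shows the gain from duplicated packets (N = 2 blocks, failure only if both fail):

from math import comb

def lambda_single(r_sl: float, v: float, m: int) -> float:
    """Eq. (1): residual error rate per hour without redundancy."""
    return r_sl * v * m

def lambda_redundant(r_sl: float, v: float, m: int, n_blocks: int, m_fail: int) -> float:
    """Eq. (4): communication fails only when m_fail of the n_blocks
    redundant blocks fail; Lambda = C(N, M) * R^M * v * m."""
    return comb(n_blocks, m_fail) * r_sl**m_fail * v * m

r_sl, v, m = 1e-6, 3600, 2    # assumed values, not FirmSys parameters
print(f"single:     {lambda_single(r_sl, v, m):.1e} per hour")              # 7.2e-03
print(f"duplicated: {lambda_redundant(r_sl, v, m, 2, 2):.1e} per hour")     # 7.2e-09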


Compared with Eq. (1), the communication residual error probability of the entire communication system given by Eq. (4) decreases significantly.

3.4 FirmSys Safety Communication Design

There are many kinds of safety communication in FirmSys; in this paper, the safety communication between control stations is taken as an example. The target communication residual error probability is 1.0E-9. For the original design, if the default value of 1.0E-2 is adopted for Pe, the requirement cannot be met. However, according to the standard IEEE 802.3, the value of Pe can be set to 1.0E-8, since FirmSys communication adopts optical fiber transmission [8]. In that case, the calculated communication residual error probability reaches the target. Even though the target was met in this case, CTEC still decided to improve the communication design. Through analysis, appropriate CRC polynomials were selected and redundant communication packages were adopted. After implementing these measures, the safety of the communication is strengthened, and the communication residual error probability satisfies the design requirements even under the most conservative assessment criteria (using the default Pe value).
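The sensitivity to the assumed bit error probability can be illustrated by combining Eqs. (3) and (1). All numbers in this sketch (block size, CRC length, minimum Hamming distance, message rate, sink count) are hypothetical stand-ins, since the actual FirmSys protocol parameters are not given here:

from math import comb

def residual_rate_per_hour(pe, n, r, d_min, v, m):
    """Eq. (3) for one message, scaled by Eq. (1) to a per-hour rate."""
    r_msg = 2.0**-r * sum(comb(n, k) * pe**k * (1 - pe)**(n - k)
                          for k in range(d_min, n + 1))
    return r_msg * v * m

# Conservative default Pe vs. an optical-fiber-grade channel
for pe in (1.0e-2, 1.0e-8):
    lam = residual_rate_per_hour(pe, n=1024, r=32, d_min=6, v=36000, m=4)
    print(f"Pe={pe:.0e}: Lambda_SL ~ {lam:.1e} per hour (target 1.0e-9)")

With these assumed parameters the default Pe overshoots the target by orders of magnitude, while the fiber-grade Pe satisfies it easily, which mirrors the design trade-off described above.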

3.5 Summary

When applying this design method to safety communication, the number of bits in the block is first determined from the actual amount of communication data; it should be minimized, since it has a negative effect on the communication residual error probability. The proper CRC polynomial is then selected based on the number of bits in the block. Additionally, the communication rate is determined according to the application scenario, and it is then decided whether the communication architecture needs redundancy. Communication design is an iterative process, and the residual error probability should be re-evaluated after each design change.

4 Conclusions

Based on the requirements of functional safety, and combining the FirmSys experience in nuclear safety design and functional safety certification, this paper summarizes and analyzes the design factors for functional safety communication with a low residual error probability. To meet such a requirement, the design needs to consider the following measures: CRC polynomial selection, the number of bits in the block, the communication medium, the transmission rate, and the number of information sinks. This paper discusses these factors and provides design references and acceptance criteria. The residual error probability of communication can be reduced further through the architecture design of the safety communication. This paper can serve as a reference for the design of safety communication with high reliability and low communication residual error probability.


References

1. Jin, J., Wu, Z., et al.: A review of the development of safety instrumentation systems at home and abroad. Chem. Autom. Instrum. 37(05), 1–6 (2010)
2. IEC 61784-3: Industrial communication networks – Profiles – Part 3: Functional safety fieldbuses – General rules and profile definitions (2016)
3. IEC 61508-2: Functional safety of electrical/electronic/programmable electronic safety-related systems – Part 2: Requirements for electrical/electronic/programmable electronic safety-related systems (2010)
4. Koopman, P.: 32-bit cyclic redundancy codes for internet applications. In: The International Conference on Dependable Systems and Networks (DSN) (2002)
5. Fujiwara, T., Kasami, T., Kitai, A., et al.: On the undetected error probability for shortened Hamming codes. IEEE Trans. Commun. 33(6), 570–574 (1985)
6. IEC 61025: Fault tree analysis (FTA) (2006)
7. Mingli, L., Guilian, S., Qi, M., et al.: A method of quantitative risk assessment for safety communication residual error probability. China patent ZL201310631726.0 (2016)
8. IEEE 802.3: IEEE Standard for Ethernet (2015)

Apply FMEDA to Guide Self-diagnostic Design for Digital Circuit Board

Jie Zhang¹ (✉), Jin Fan², Gang Li¹, Ming-Li Li¹, and Yi-Qin Xie¹
¹ China Techenergy Co., Ltd, Beijing 100094, China
[email protected]
² China Nuclear Power Engineering Co., Ltd, Beijing 100840, China

Abstract. Today the safety Digital Control System (DCS) is widely applied in industrial safety systems. A safety DCS is mainly composed of input modules, a logic control unit, output modules, and communication modules. Each module features a powerful fault diagnostic capability and is able to detect hidden failures. On the other hand, the diagnostic design increases complexity, and failures of the diagnostics themselves may trigger false alarms, which can lead to production loss. The design of self-diagnostic measures is therefore very important for the digital modules of a safety DCS. Based on the development experience of FirmSys, a safety DCS platform developed by China Techenergy Co., Ltd (CTEC), this paper proposes Failure Modes, Effects, and Diagnostic Analysis (FMEDA) to evaluate the diagnostic coverage (DC) and false alarm rate (FAR) and to guide self-diagnostic design. A case study of the Digital Output (DO) module demonstrates the feasibility of the proposed method.

Keywords: FMEDA · Diagnostic coverage · False alarm rate

1 Introduction

A safety system is a system that automatically activates relevant equipment and performs protection functions when needed. It is widely used in different industries, e.g., oil and gas, nuclear, and rail transport. In recent years, some DCS suppliers in China have started to develop safety DCSs under the encouragement of the safety I&C system localization strategy. CTEC has successfully developed a safety DCS platform named FirmSys, which can be applied to the reactor protection system of nuclear power plants and to other industries requiring highly safe systems. Safety I&C systems should fulfill a specific Safety Integrity Level (SIL) according to the application requirement. The probability of failure on demand (PFD) is required to reach a defined target level for a specific SIL, and the DC always has a big influence on the PFD [1]. Self-diagnostics design for each module is therefore a critical issue. A good diagnostic measure design should increase the DC while ensuring a low FAR. Some research work has been done on FMEDA applications, proving that FMEDA is a suitable method to evaluate the PFD and SIL of a system or a single piece of equipment [2–4]; its focus has been on the evaluation of the diagnostic coverage and safe failure fraction. However, FMEDA can also be used to
evaluate the FAR. In this paper, based on the research and development experience of FirmSys, which is certified as SIL 3 by TÜV, a method to improve diagnostic design is proposed. The paper is structured as follows: Sect. 2 describes how the FMEDA technology is integrated into FirmSys development; Sect. 3 describes how FMEDA is applied to improve the self-diagnostic design of a digital circuit board; Sect. 4 presents a case study of the DO module that demonstrates the feasibility of FMEDA; and Sect. 5 draws the conclusion.

2 FMEDA in the Development of FirmSys

The development of FirmSys follows the V model shown in Fig. 1, which presents the different phases of the life cycle. In compliance with the V model, each phase on the descending (left) branch starts with verification of the previous phase, and each phase on the ascending (right) branch is a validation and verification process for the corresponding phase on the left.

Fig. 1. FirmSys development lifecycle model (V model; descending branch: system concept, system requirement, system design, module requirement, module design, module implementation; ascending branch: module testing, module validation, system integration, system validation, validation; FMEDA supports the module design phase)

In the module requirement phase, different types of requirements are raised for the module, including the self-diagnostic requirements. The self-diagnostic requirements are normally derived from several sources, for example the relevant standards (e.g., IEC 60671 and IEC 60880), the customer requirements, the diagnostic capability of similar products, etc. [5, 6].


In the module design phase, the pre-design of the self-diagnostics is first conducted to fulfill the requirements defined in the previous phase, based on experience and the good practices recommended by standards. The FMEDA technology is then integrated and applied to evaluate the self-diagnostic measures and identify flaws. After the FMEDA, the identified defects act as feedback for updating the module design. This iterative process minimizes the influence of the designers' skill and experience on the module design and improves the modules' self-diagnostic capability. In the module test phase, the results of the FMEDA can serve as input to the Fault Injection (FI) test, in which the effectiveness of the diagnostic measures can also be tested.

3 Self-diagnostics Assessment and Improvement

3.1 Enhance Diagnostic Coverage

After the pre-design of the module self-diagnostics, FMEDA is implemented for every component to give a quantitative evaluation of the diagnostic effectiveness. In general, the failure modes and their occurrence ratios (Alpha) for each component are derived from standards [7–9]. The effect of each failure mode on the module should be analyzed according to the specific application, so that the severity of the failure mode, and whether it is a dangerous or a safe failure, can be decided. Failure modes are categorized into five types: safe undetected (SU), safe detected (SD), dangerous undetected (DU), dangerous detected (DD) and not relevant (NR). The diagnostic effectiveness (DE) can be derived based on the standard IEC 61508. An example is given in Table 1. In addition, failure modes without self-diagnostic measures can be screened out. For dangerous failure modes, self-diagnostic measures should be designed to leave as few of them undetectable as possible.

Table 1. An FMEDA example of a resistor

(1) Name: Resistor R1
(2) Failure rate (FIT): 1.7
(3) Failure mode: Open circuit
(4) Alpha: 80%
(5) Failure effect: Transistor Q11 cut-off, DO stuck at 1
(6) Severity: High
(7) Dangerous (1) or safe (0): 0
(8) Diagnostic measures: None
(9) Failure type: SU
(10) DE: NR

3.2 Decrease the FAR

Improving the self-diagnostic measures includes not only increasing the DC but also reducing the FAR. There are many causes of false alarms, including measurement precision, communication errors, hardware failures, etc. In this section, only false alarms caused by hardware failures are discussed. Self-diagnostics are in some cases implemented by adding circuits; for example, a read-back check circuit is normally added to the digital output module. If failures occur in these added circuits, the diagnostic measures may announce a false alarm, and the FAR increases. In this paper, the FAR refers to the failure rate of the components whose failure can lead to a false alarm. Since the FMEDA is carried out for every component, this type of false alarm can be found in the effects column. The FAR can be used as a reference when selecting self-diagnostic measures. To reduce the FAR, the self-diagnostic circuit can use more reliable components, or the diagnostic mechanism design can be changed.

3.3 Quantitative Assessment

After the FMEDA process has been carried out for each component, the self-diagnostic design can be assessed quantitatively. The assessment method is as follows:

(1) According to the failure type (Column 9), the NR failure type is not considered in the calculation, and the remaining four failure rates can be calculated:

λSD = Σ [each SD failure rate (Column 2) × Alpha (Column 4) × DE (Column 10)]    (1)

λSU = Σ [each SU failure rate (Column 2) × Alpha (Column 4)] − λSD    (2)

λDD = Σ [each DD failure rate (Column 2) × Alpha (Column 4) × DE (Column 10)]    (3)

λDU = Σ [each DU failure rate (Column 2) × Alpha (Column 4)] − λDD    (4)

As the example in Table 2 shows, the open-circuit failure mode is an SU failure, so 1.7 × 80% contributes to the total λSU of the module.

(2) The DC and SFF can then be calculated using the formulas in IEC 61508. The FAR is the sum, over the components whose failure can lead to a false alarm, of the failure rate (Column 2) multiplied by Alpha (Column 4).

(3) The assessment results are compared with the predetermined requirements to decide whether any change to the self-diagnostics should be made. If the self-diagnostics are changed, the results should be updated. A minimal computational sketch of this assessment follows.
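The sketch below is our own illustration; the row values are loosely modeled on Tables 1 and 2 and the field layout is an assumption, not the FirmSys FMEDA database schema. It implements one reading of Eqs. (1)–(4) together with the DC, SFF and FAR calculations of steps (2) and (3):

rows = [
    # (failure rate in FIT, alpha, failure type, DE, causes false alarm?)
    (40.0, 0.10, "DD", 0.90, False),   # e.g. CPU RAM failure, detected by March C
    (40.0, 0.50, "DD", 0.90, False),   # e.g. sequential execution failure, watchdog
    (1.7,  0.80, "SU", 0.00, False),   # e.g. resistor open circuit, undetected
    (1.0,  0.10, "SD", 0.60, True),    # e.g. read-back circuit fault -> false alarm
]

def aggregate(rows):
    lam = {"SD": 0.0, "SU": 0.0, "DD": 0.0, "DU": 0.0}
    far = 0.0
    for rate, alpha, ftype, de, false_alarm in rows:
        base = rate * alpha                    # FIT contribution of this mode
        if ftype in ("SD", "SU"):
            detected = base * de if ftype == "SD" else 0.0
            lam["SD"] += detected              # Eq. (1)
            lam["SU"] += base - detected       # Eq. (2): safe total minus detected
        else:
            detected = base * de if ftype == "DD" else 0.0
            lam["DD"] += detected              # Eq. (3)
            lam["DU"] += base - detected       # Eq. (4): dangerous total minus detected
        if false_alarm:
            far += base                        # step (2): false-alarm contributors
    return lam, far

lam, far = aggregate(rows)
dc = lam["DD"] / (lam["DD"] + lam["DU"])                       # diagnostic coverage
sff = (lam["SD"] + lam["SU"] + lam["DD"]) / sum(lam.values())  # safe failure fraction
print(lam, f"FAR={far:.2f} FIT, DC={dc:.1%}, SFF={sff:.1%}")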

Table 2. FMEDA analysis results of DO module

Name | Failure rate (FIT) | Failure mode | Alpha (%) | Failure effect | Severity | Dangerous (1) or safe (0) | Diagnostic measures | Failure type | DE (%)
CPU | 40 | RAM memory failure | 10 | Unable to communicate with XCU module | High | 1 | March C | DD | 90
 | | Register failure | 10 | Unable to communicate with XCU module | High | 1 | Checkboard test | DD | 90
 | | Command decoding and execution failure | 10 | Unable to communicate with other modules | High | 1 | Self-test by software | DD | 90
 | | Program counter and stack pointer failure | 10 | Unable to communicate with other modules | High | 1 | Checkboard test | DD | 60
 | | ROM failure | 10 | Unable to communicate with other modules | High | 1 | CRC | DD | 90
 | | Sequential execution failure | 50 | Unable to communicate with other modules | High | 1 | Watch-dog without time window | DD | 90
… | … | … | … | … | … | … | … | … | …
Resistor R1 | 1.7 | Open circuit | 80 | Transistor Q11 cut-off, DO stuck at 1; read-back value stuck at 0, false alarm | High | 0 | Read-back | SU | NR
 | | Short circuit | 10 | No effect (NE) | – | NR | No | NR | –
 | | Parameter drift | 10 | No effect (NE) | – | NR | No | NR | –
Transistor Q11 | 1 | C open circuit | 10 | Transistor Q11 cut-off, DO stuck at 0 | High | 0 | Read-back check | SD | 60
 | | C and E short circuit | 10 | Q11 short circuit, DO stuck at 1 | High | 1 | Read-back check | DD | 0


4 Case Study

In this paper, the Digital Output (DO) module of FirmSys is taken as an example to explain how to improve the self-diagnostic design by FMEDA. The DO module is designed with self-diagnostic measures, e.g., watchdog, software self-diagnostics, communication protocol diagnostics, etc. According to the bill of materials and the circuit diagram of the DO module, the FMEDA is conducted for every component. The failure rate of each component comes from the component failure rate database of CTEC, which is based on prediction according to MIL-HDBK-217, data provided by vendors, etc. For the analysis, the output of the DO module is assumed to stay at 1 in the normal state and to change to 0 when the design basis event occurs. Part of the FMEDA results is shown in Table 2.

Through the FMEDA process, the four failure rates of each component are obtained, and the total λDD, λDU, λSD and λSU are calculated by summing the corresponding values of all components. In addition, some flaws in the pre-designed diagnostic measures are discovered. For example, the DO module is designed with a read-back feature to monitor the output, but some component failures that cause the DO module to fail stuck-at cannot be detected. The stuck-at problem is considered a dangerous failure, which should be detectable; it is not easy to identify this issue without the FMEDA analysis. After the FMEDA analysis, some self-diagnostic measures are proposed to improve the diagnostic coverage, e.g., dynamic self-checking, test patterns, a supply-voltage monitoring chip, etc. The resulting failure rates of the DO module are shown in Table 3: the λDD of the DO module increases from 485.2 to 567.4 FIT, the DC increases from 76.2% to 90.4%, and the FAR decreases from 21.4 to 8.6 FIT.

Table 3. Failure rates of DO module (total failure rate 937.5 FIT)

Development stage | λDD | λDU | λSD | λSU | λFAR
Initial detailed design | 485.2 | 151.3 | 278.6 | 21.4 | 21.4
After improvement | 567.4 | 69.1 | 285.2 | 14.8 | 8.6
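As a cross-check of the figures quoted above, the initial diagnostic coverage follows directly from the IEC 61508 definition applied to the Table 3 values:

DC = λDD / (λDD + λDU) = 485.2 / (485.2 + 151.3) = 485.2 / 636.5 ≈ 76.2%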

5 Conclusion

Based on the experience of developing FirmSys, this paper introduces a method to improve the self-diagnostic measures of digital circuit boards. FMEDA is applied to analyze the failure modes and effects of each component, and it clearly identifies whether a failure mode is detectable and whether its effects are safe or dangerous. The self-diagnostic design cannot be perfect, but the method improves the diagnostic capability, especially for dangerous failures, while decreasing the false alarm rate. A case study of the DO


module is presented, demonstrating that FMEDA is applicable to the optimization of diagnostic measure design. The method proposed in this paper can serve as a reference for the self-diagnostic design of digital circuit boards.

References

1. IEC 61508: Functional safety of electrical/electronic/programmable electronic safety-related systems (2010)
2. Kim, B.C.: Case study on the assessment of SIL using FMEDA
3. The FMEDA approach to improve the safety assessment according to the IEC 61508. Microelectron. Reliab. 50(9–11) (2010)
4. Ehiagwina, F.: A comparative overview of electronic devices reliability prediction methods: applications, trends and challenges (2016)
5. IEC 60880: Nuclear power plants – Instrumentation and control systems important to safety – Software aspects for computer-based systems performing category A functions
6. IEC 60671: Nuclear power plants – Instrumentation and control systems important to safety – Surveillance testing (2007)
7. IEC 62061: Safety of machinery – Functional safety of safety-related electrical, electronic and programmable electronic control systems (2005)
8. Guidelines for Process Equipment Reliability Data, with Data Tables. Center for Chemical Process Safety of AIChE, New York, NY (1989)
9. Reliability Data for Control and Safety Systems. SINTEF Industrial Management, Trondheim, Norway (1998)

A Reusable Functional Simulation Verification Method Based on UVM for FPGA Products in DAS

Xiu-Hong Lv, Yun-Tao Zhang(&), Zong-Sheng Cao, Fei Wu, and Ling-Ling Dong

China Techenergy Co., Ltd., 5 Yongfeng Road, Haidian District, Beijing 100094, China
[email protected]

Abstract. Functional simulation verification is an important part of Field Programmable Gate Array (FPGA) product verification. Many problems have been encountered in FPGA verification for nuclear instrumentation and control systems when adopting traditional verification methods, such as long verification cycles, poor reusability of the verification testbench, and a low level of automation. The Universal Verification Methodology (UVM) is characterized by reusability, extensibility and automation. We introduced UVM for FPGA verification, which improved the efficiency and quality of the verification process and saved project time. At present, this technology has been applied to the I/O product verification of the Diverse Actuation System (DAS) with good results, and the approach will be applied gradually across the project.

Keywords: DAS · FPGA · Functional simulation verification · Reusable

1 Introduction
Because FPGAs have advantages over microprocessor-and-software systems, a number of instrumentation manufacturers adopt FPGA technology in diverse systems in order to achieve defence in depth. I&C (instrumentation and control) systems based on FPGA technology do not yet have mature application experience in China, so nuclear power owners and regulatory agencies require strict verification to ensure quality and reliability [1–3]. Functional simulation verification is the most complex and time-consuming part of the FPGA design process, accounting for about 70% of the entire research and development cycle. Coupled with the urgency of product launch requirements, verification has become the bottleneck of FPGA design. Traditional simulation methods have many problems, such as long verification cycles, poor reusability of the verification testbench, and a low level of automation. The Accellera organization launched UVM to make up for the deficiencies of traditional verification. UVM uses a hierarchical modelling method: through the reuse of components, it shortens testbench construction time and thereby the verification cycle.



In the functional simulation verification of FPGA-based nuclear products, we encountered the same problems. Code coverage and functional coverage are the key quality properties in general FPGA functional simulation verification, and they are also key quality requirements for FPGAs in nuclear applications that UVM can meet. Beyond this, UVM also offers reusability and efficiency, so it was introduced into our functional simulation verification process for FPGAs. FitRel is a DCS (Distributed Control System) product based on FPGA technology which can be used as a DAS, and it is the result of independent R&D (research and development) by CTEC. In this paper, we take the I/O board FPGA of the FitRel project as the object under test. UVM methodology is used to build a reusable verification testbench. The quality of the FPGA product and the verification efficiency were improved, and the goal of better verification was achieved.

2 UVM Description

2.1 UVM Introduction

UVM is a new verification methodology in the IC (Integrated Circuit) field, which synthesizes the advantages of AVM, OVM, VMM, etc. It represents the latest development in the field of verification and is characterized by object orientation, reusability and scalability [4–6]. Building a flexible and reusable verification testbench with UVM can greatly improve the efficiency of chip verification.

2.2 UVM Testbench

UVM introduces the concept of class, giving it object-oriented characteristics; it is a library collecting many types of classes. A UVM testbench consists of reusable verification components. A verification component is a packaged, easy-to-use and configurable verification environment used to verify the design of sub-modules and interface protocols. These verification components, stored in an IP component library, are developed by the verification staff; they can be used conveniently and flexibly in a variety of verification environments according to the requirements. Figure 1 shows a verification environment consisting of two agents and a virtual sequencer. Each agent verification component follows a consistent architecture composed of a set of stimulus signals, checks, and coverage statistics. An agent configured in ACTIVE mode is responsible for driving and monitoring the bus; if configured in PASSIVE mode, it is only responsible for monitoring the bus. The verification environment has a multiple-sequence mechanism, which can synchronize the clocks and data of different interfaces to realize control of the test environment and the signal stimulus.
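UVM itself is a SystemVerilog class library; purely as a conceptual illustration of the ACTIVE/PASSIVE agent configuration described above, here is a language-neutral Python sketch (all names are ours, not UVM API):

```python
from enum import Enum

class AgentMode(Enum):
    ACTIVE = 1   # drives and monitors the bus
    PASSIVE = 2  # only monitors the bus

class Agent:
    """Conceptual stand-in for a UVM agent (driver + monitor + sequencer)."""
    def __init__(self, name: str, mode: AgentMode):
        self.name, self.mode = name, mode

    def run(self, transactions):
        if self.mode is AgentMode.ACTIVE:
            print(f"{self.name}: driving {len(transactions)} transactions")
        print(f"{self.name}: monitoring bus traffic")

# Two agents, as in Fig. 1: one driving the DUT, one passively checking.
master = Agent("mpu_agent", AgentMode.ACTIVE)
checker = Agent("io_agent", AgentMode.PASSIVE)
for agent in (master, checker):
    agent.run(transactions=[0x01, 0x02])
```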


Fig. 1. Schematic diagram of UVM testbench

3 Introduction to the Device Under Test
In this paper, the I/O board FPGA is taken as the verification object.

3.1 Function of I/O Board

The main function of the I/O board is implemented by its FPGA. It is the core control unit for data acquisition and output control in the FitRel system. As shown in Fig. 2, it is responsible for controlling the channel acquisition (or output) circuit and for communication with the MPU board. It receives commands from the MPU board, performs the corresponding actions, and reports data to the MPU board.

Fig. 2. Schematic diagram of I/O board application environment

The MPU and I/O board communicate through the SLINK protocol; the physical layer interaction uses the RS485 bus. SLINK is a self-developed protocol, divided into an application layer, a data link layer and a physical layer. All communication processes are initiated by the MPU, using a question-and-answer (Q&A) interaction mode. The communication sequence is divided into two stages: the configuration phase and the periodic communication phase.


3.2 FPGA Architecture of I/O Board

Each I/O board FPGA uses the same top-level module partition structure, shown in Fig. 3: the SLINK communication module, serial read interface module, clock module and print interface module are public modules, while the internal processing module and channel signal processing module differ according to each I/O board's own characteristics.

Fig. 3. Schematic diagram of the overall FPGA architecture of the I/O board

4 Functional Simulation Verification Testbench
Through analysis of the main functional requirements of the FPGA, we put forward a test scheme suitable for FPGA function verification, and then design the overall architecture of the UVM-based verification testbench.

4.1 Testbench Architecture for FPGA

It is necessary to build a testbench to run the simulation. After the establishment of the verification component library, the top verification environment is built according to the verification requirements; it is composed of the DUT and the verification components. As shown in Fig. 4, the verification testbench includes an external memory function model, ADC/DAC function models, and the UVM verification environment simulating the MPU master device.


Fig. 4. Schematic diagram of test-bench architecture for FPGA

The MPU function model establishes the connection with the DUT through a virtual interface. The verification developer builds different sequences according to the test outline to form different test cases. The top layer controls initialization and the normal simulation execution process: it calls the run_test method, which drives the uvm_phase mechanism, and uvm_phase controls the order of execution, including testbench construction, stimulus generation, and simulation result reporting, as sketched conceptually below.
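This is a minimal, hypothetical illustration of that fixed phase ordering (the real mechanism is SystemVerilog's uvm_phase; all names here are ours):

```python
# Minimal sketch of phased execution, loosely mirroring run_test()/uvm_phase:
# phases run in a fixed order so every component is built before stimulus starts.
def run_test(component):
    for phase in ("build", "connect", "run", "report"):
        getattr(component, phase)()  # each component implements the phase hooks

class Testbench:
    def build(self):   print("build: construct agents and env")
    def connect(self): print("connect: hook monitors to checkers")
    def run(self):     print("run: apply sequences to the DUT")
    def report(self):  print("report: summarize pass/fail and coverage")

run_test(Testbench())
```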

4.2 Verification Component Library Creation

According to the verification testbench architecture, the components of the verification environment are designed and implemented.

4.2.1 Create External Memory Function Model

The external memory function model only needs a read function. It reads the data in the related registers and sends the data to the DUT serial read interface through the IIC protocol. The data registers differ according to the I/O board type.

4.2.2 Create ADC/DAC/DI Function Models

A DAC function model is created to analyze the output current data received through the SPI protocol. An ADC function model is created for AI-type FPGA verification, to complete the channel current acquisition; this model generates constrained-random current input data. A DI function model is created to generate a constrained-random digital input signal for the verification of DI-type FPGAs. Because the output of the DO board FPGA is a digital quantity, no external DO function model is needed.


4.2.3 SLINK Protocol Component

The MPU function is very complex; this paper only needs the part handling SLINK bus data exchange and processing with the FPGA.
(1) Application layer component
The application layer component is divided into a configuration-class component and a communication-class component according to the communication type. According to the direction of data transmission, it can be divided into uplink and downlink components: data from the MPU to the I/O board is downlink, and vice versa. The cfg_app_base_data application-layer configuration data packet base class and the msg_app_base_data application-layer communication data packet base class are created, both derived from the uvm_object base class. The data packet base classes declare the variables shared by all I/O packets, such as packet number, chassis, slot, and board type. For each I/O type, the data packet base class is derived to obtain a sub data class with the specific I/O characteristics. In the application layer data diagram of Fig. 5, XX can be replaced by AI, AO, DI or DO.

Fig. 5. Application layer data packet

Next, the app_frame application-layer data frame (DATA) base class is created, derived from the uvm_sequence_item base class. This base class declares the status byte, frame byte, CRC, and the other variables besides the data packet. In the app_frame_templet class derived from app_frame, the data packet variable is added to form a complete application-layer data frame DATA. The DATA frame class includes the random constraints on variables, correctness checks of the variables, application-layer packing (pack), and application-layer parsing (unpack). Figure 6 shows the complete application layer data frame. When the application layer protocol is modified, only the corresponding base class or derived class needs its variables modified.

Fig. 6. The complete application layer data frame

A Reusable Functional Simulation Verification Method Based on UVM

23

(2) Data link layer component library creation
The application-layer data packet is taken as the base payload, to which the synchronization byte SYN, delimiter, frame length, source address, destination address and parity information are added, forming the data link layer data frame. First, link_frame, derived from the uvm_sequence_item base class, is created as the data link layer base class; it declares the relevant data link layer variables, such as SYN. The link_frame_templet class is then derived from link_frame, and the application-layer data packet DATA is added to it to form the complete data link layer frame. The data link frame includes the random constraints on variables, link-layer packing (pack), link-layer parsing (unpack), etc. Figure 7 shows the complete data link layer framing process. If the link layer protocol changes, simply modify the base class or derived class accordingly.

Fig. 7. The complete data link layer data frame
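To make the two-level packing concrete, the sketch below wraps an application-layer DATA payload into a link-layer frame. The field order, sizes, checksum and SYN value are illustrative assumptions, not the actual SLINK layout:

```python
import struct

SYN = 0x16  # illustrative synchronization byte value (assumed)

def pack_app_frame(packet: bytes, status: int = 0) -> bytes:
    """Application layer: status byte + payload + checksum (placeholder, not the real CRC)."""
    body = bytes([status]) + packet
    crc = sum(body) & 0xFFFF
    return body + struct.pack(">H", crc)

def pack_link_frame(data: bytes, src: int, dst: int) -> bytes:
    """Data link layer: SYN + length + addresses wrapped around the app DATA."""
    header = struct.pack(">BHBB", SYN, len(data), src, dst)
    parity = bytes([sum(header + data) & 0xFF])  # placeholder parity byte
    return header + data + parity

app = pack_app_frame(b"\x01\x02\x03")             # downlink payload example
frame = pack_link_frame(app, src=0x01, dst=0x10)
print(frame.hex(" "))
```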

(3) Physical layer component creation
The I/O board FPGA communicates with the MPU through the RS485 bus. The frame structure consists of a start bit, data, a check bit and a stop bit. An RS485 verification component, rs485_agent, is created to implement the physical layer communication.

4.2.4 Create Env Component

The master_agent component is derived from the uvm_env base class. The data link layer and physical layer components are declared and instantiated within the agent, and a virtual sequencer associated with the sequences is declared, as shown in Fig. 8. Master_agent receives the uplink physical-layer data through rs485_agent and recovers the uplink link-layer packet from it. The link-layer components uwd_cfg_link_frame and uwd_msg_link_xx_frame send frames to the reference & compare module to check data correctness, such as byte length, source address, CRC and other link-layer parameters. If there is an error, it is reported in the log.


Fig. 8. The master_agent component

Master_agent also implements the function of sending downlink packets to the DUT. It can automatically generate the downlink link-layer frames dwd_cfg_link_xx_frame and dwd_msg_link_xx_frame, which are then converted into rs485_agent physical-layer data and sent to the DUT. The master_agent component can automatically generate constrained-random downlink packets, or it can be programmed through the virtual sequencer to generate predetermined data. A reference & compare module has been developed to check the validity of the response data. The reference part of this module prepares the expected data; for example, for an AO output, the output port current value is also included in the uplink frame, so the expected data can be prepared from the commanded value. The module also checks other information, such as frame length, address, command type, chassis or slot number, CRC, etc. The compare part of this module checks whether the data from the DUT is correct: if the expected data is inconsistent with the response data, this is recorded and reported.
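A minimal sketch of the reference & compare idea (the field names and the prediction model are assumed for illustration): the reference side predicts the expected uplink data, and the compare side diffs it against the DUT response and logs the result.

```python
def reference(command):
    """Predict the expected uplink fields for a downlink command (assumed model)."""
    return {"cmd": command["cmd"], "slot": command["slot"],
            "value": command.get("value", 0)}   # e.g. AO read-back equals output value

def compare(expected, response, log):
    """Diff expected vs. DUT response; record and report mismatches, as in the paper."""
    mismatches = {k: (v, response.get(k)) for k, v in expected.items()
                  if response.get(k) != v}
    log.append(f"FAIL: {mismatches}" if mismatches else "PASS")
    return not mismatches

log = []
cmd = {"cmd": "AO_WRITE", "slot": 3, "value": 12.5}
dut_response = {"cmd": "AO_WRITE", "slot": 3, "value": 12.5}
assert compare(reference(cmd), dut_response, log)
print(log)
```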

4.3 Master_Testbench Verification Environment

The testbench is derived from the uvm_test component. The master_agent component is integrated into the testbench; meanwhile, the testbench also includes the main control chassis and slot number, the master address, the I/O type, I/O slot, I/O address and other necessary variables, as shown in Fig. 9. The virtual sequencer component vir_sqr is declared in the testbench and establishes the connection between the sequences of the test cases and the internal components of the testbench. In the test top layer, a sequence can call the sequencer and program the corresponding variables in the virtual sequencer to apply the desired test stimulus to the DUT. The key characteristic is that different test cases can be implemented without modifying the testbench.

4.4 Testcase Library

After completing the verification testbench, verification personnel can form different testcases by modifying the sequence set, depending on the test purpose. A testcase library has been established according to the test plan. Testers can focus on developing testcases, facilitating the discovery of valuable defects, which improves verification efficiency and product quality.


Fig. 9. The master_testbench verification environment

As verification activities deepen, the testcase library becomes more and more complete, achieving higher code coverage and functional coverage and improving the reliability of the FPGA.

5 Functional Simulation Verification Implementation
Traditional verification methods often need to modify the testbench for each new test case; the code modifications are large, the work is highly repetitive, and the result is hard to maintain and extend. The testbench and testcases of the UVM verification testbench are independent, which enhances reusability and scalability. If new test cases are needed, or changes must be made during test maintenance, one only needs to program the sequence to form the required testcase. The following shows how the cost is reduced, from several directions; the information is collected from the actual project implementation of FitRel. In the real FPGA verification of the I/O boards of the DAS system, the preparation time for the testbench is greatly reduced: the average preparation time for one specific testcase is cut by about 20%–30%, as shown in Table 1. With the traditional method, test execution largely relies on human-eye inspection, which is difficult to automate, so the test process is time-consuming and laborious. The UVM testbench includes a compare module to check the validity of response data, which greatly reduces test execution time: the average time for one testcase execution and result check is cut by about 30%, as shown in Table 2. The traditional method can only execute one case at a time, whereas with UVM more than one testcase can be executed at a time. Together with the reduced preparation, execution and checking time per testcase, this shortens the entire testing period: the average project testing period for one I/O board FPGA is reduced by about 30%, as shown in Table 3.


Table 1. Average preparing time for one testcase before test execution in I/O board FPGA functional simulation verification

I/O FPGA testcase | Traditional method | UVM method
Average preparing time for one testcase before test | 2 h | 1.5 h

Table 2. Average execution time for one testcase in I/O board FPGA functional simulation verification

I/O FPGA testcase | Traditional method | UVM method
Average execution time for one testcase | 3 h | 2 h

Table 3. Average testing period for one I/O board FPGA functional simulation verification

I/O FPGA testcase | Traditional method | UVM method
Average testing period for one I/O board FPGA project | 300 h | 200 h
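The percentage savings quoted above follow directly from Tables 1–3; a quick check of the arithmetic:

```python
# Savings implied by Tables 1-3 (the hour figures are the tabulated values).
for label, traditional, uvm in [("testcase preparation", 2.0, 1.5),
                                ("execution and check", 3.0, 2.0),
                                ("project testing period", 300.0, 200.0)]:
    print(f"{label}: {(traditional - uvm) / traditional:.0%} saved")
# -> 25% (within the quoted 20%-30%), ~33%, and ~33% respectively
```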

When using this testbench, if the tester needs to change the object under test from the AI FPGA to the DO or another FPGA, no changes to the testbench are required either: changing the corresponding chassis number, slot number, I/O type and the corresponding I/O parameters in the virtual sequencer achieves the purpose.

6 Conclusion
In this paper, UVM is used for the functional simulation verification of the I/O board FPGAs of the DAS system. The practical application in the project shows that using UVM for DAS FPGA verification can effectively improve verification efficiency and shorten the product development cycle. The UVM verification testbench has good configurability and reusability, and effectively meets the functional simulation verification requirements.

References
1. Chen, Y.J., Zhang, C.L., et al.: Research on the application of FPGA in diversity system of nuclear power plants. Process. Autom. Instrum. 35(2), 46–49 (2014)
2. Chen, D.L., Zhang, Y., et al.: FPGA technology application in diversity actuation system of nuclear power plant. At. Energy Sci. Technol. B11, 976–979 (2014)
3. Liu, R., Li, C.L.: Verification and validation for FPGA based safety class I&C system of nuclear power plant. Nucl. Electron. Detect. Technol. 1, 103–106 (2014)
4. Pan, Y.J., Long, K.: Implementation of efficient and reusable SoC functional verification based on UVM. Electron. World 3, 180–183 (2016)
5. Xie, Z., Wang, T., et al.: A RISC CPU oriented reusable functional verification platform based on UVM. Acta Sci. Nat. Univ. Pekin. 50(2), 221–227 (2014)
6. Xu, J.P., Li, S.S., et al.: Adopting universal verification methodology to achieve reusability and automation verification. Microelectron. Comput. 11, 14–17 (2014)

The Method of Failure Analysis for Safety-Critical System Software Based on Formalization

Xiao-Bo Zhou1(&), Jin Fan2, Ru-Mei Shi1, Ya-Dong Zhang1, and Qiao-Rui Du1

1 China Techenergy Co., Ltd. (CTEC), Beijing 100094, China
[email protected]
2 China Nuclear Power Engineering Co., Ltd., Beijing 100840, China

Abstract. As digital instrumentation and control systems become more and more widely used in the safety field, the reliability of their software has drawn great attention. Identifying and eliminating potential errors in software is an effective way to improve software reliability. Most methods for identifying software failures at this stage evolved from traditional failure analysis methods, such as fault tree analysis and Failure Mode and Effect Analysis (FMEA). These traditional methods encounter problems: the credibility of the results depends heavily on the skills of the executing staff, and the analysis workload is huge. In this study, a formal method is adopted to describe the software design, and formal tools are used to find software failure paths. Formal technology is based on rigorous mathematical theory and is easy to process by computer, which can greatly reduce the impact of staff awareness on the analysis results. In addition, using formal tools can effectively reduce the workload of the analysts.

Keywords: Failure analysis · Safety critical software · Formalization

1 Introduction
In recent years, with software applied in safety-critical systems more and more widely, the reliability requirements for safety-critical software have become stricter. Especially in the safety-related field, many standards and regulations address software reliability. For example, the Nuclear Safety Guide HAD 102/16 for nuclear power plants explicitly requires that reliability be a focus for safety-critical software [1]. GB/T 13629 requires that, when a reliability target is set, it be demonstrated that the safety-critical system software can still meet the target [2]. At this stage, experts in this field mainly focus on qualitative analysis of software reliability: through software failure analysis, one can identify and correct software defects to improve software reliability. Software failure analysis generally uses the classical reliability analysis methods common in hardware reliability analysis, such as FMEA and Fault Tree Analysis (FTA).

The Method of Failure Analysis for Safety-Critical System

29

When these traditional methods are applied to software failure analysis, the result depends heavily on the knowledge level of the executing staff, and the analysis workload is huge. In this paper, we combine the FMEA method with formalization techniques, making use of the rigorous mathematical theory of formalization, which can easily be realized by computer processing. This combined method can ensure the objectivity and validity of the failure analysis; at the same time, through the use of formal tools, we improve the efficiency of the analysis. When using formal methods for software failure analysis, the modelling process demands a high level of analyst ability; meanwhile, software modelling is equivalent to a software reconstruction, with a huge workload. Section 3 of this study presents a modelling method that can effectively reduce the difficulty and workload of modelling. Section 4 describes the formalized software failure analysis process, which can effectively guide the implementation of formalized software failure analysis activities.

2 Safety-Critical System Software Failure Analysis Principles
Formalization-based software failure analysis is achieved by creating a model of the analyzed software artifacts (software requirements, design, code, etc.) using formal methods. The analyzed software is described as a state machine, translating the problem domain into the analysis domain. A formal tool then searches for state transition paths that do not satisfy the software's function definition; usually, such a path represents a software function failure. Finally, we analyze each failure path, identify which kind of software failure it relates to, calculate the probability of failure, etc. This step transforms the analysis domain back into the problem domain. The process diagram is shown in Fig. 1.

Fig. 1. Implementation process
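The core of the approach, searching a state machine for transition paths that violate the software's function definition, can be sketched as a small exhaustive search. The toy transition system below is our own illustration; AltaRica/ARC perform this search on the real models:

```python
from collections import deque

# Toy transition system: data states and labeled transitions (assumed for illustration).
transitions = {
    "sinit":  [("read", "normal"), ("read_stale", "repeat")],
    "normal": [("write", "normal"), ("drop", "lose")],
    "repeat": [("write", "repeat")],
    "lose":   [("write", "lose")],
}

def failure_paths(start, violates, max_depth=4):
    """BFS for all bounded paths reaching a state that violates the function definition."""
    found, queue = [], deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if violates(state):
            found.append(path)      # each such path is a candidate failure mode
            continue
        if len(path) < max_depth:
            for action, nxt in transitions.get(state, []):
                queue.append((nxt, path + [action]))
    return found

# Function definition: the output data state must be 'normal'; anything else fails.
for p in failure_paths("sinit", lambda s: s in ("lose", "repeat")):
    print(" -> ".join(p))
```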


3 Safety-Critical System Software Model
From the above, the establishment of the software model is crucial in the formalized software failure analysis process. A software model is equivalent to a refactoring of the software, and the workload is generally huge; establishing a common software model is therefore an effective way to reduce the modelling workload. When modelling, the software to be analyzed is broken up into a combination of simple software units; whether a unit can be clearly defined by the analyst serves as a measure of the adequacy of the decomposition. By decomposing the analyzed software into multiple simple software units, the software model is simplified, and a common model is built for the simple software units. The process is shown in Fig. 2.

Fig. 2. Software structure model

This study abstracts a software unit as the behaviour of processing specific data according to a specific set of logic in a specific environment, as shown in Fig. 3.

Fig. 3. Software operation model

3.1 Software Unit Data State Model
Software behaviour can be abstracted into three categories: read (Read), write (Write) and initialization (Reset). The software operating environment is abstracted in two aspects: time (S.fresh) and space (S.space).


Time is abstracted into the two states "new" and "old", and space into the two states "empty" and "full". The interaction between software behaviour and the software runtime environment leads to changes in the software data state. The software data state (S.sdata) can be abstracted into four states: initial data state (sinit), normal data state (normal), data-loss (lose) and data-repeat (repeat). The relationship between software behaviour, environment and data state is shown in Fig. 4.

Fig. 4. Software unit data state model
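One plausible encoding of this unit model follows; the specific transition rules here are illustrative assumptions, and the paper's Fig. 4 defines the authoritative ones:

```python
from dataclasses import dataclass

@dataclass
class UnitState:
    fresh: bool = True    # time aspect S.fresh: True = "new", False = "old"
    space: bool = False   # space aspect S.space: True = "full", False = "empty"
    sdata: str = "sinit"  # S.sdata: sinit / normal / lose / repeat

def write(s: UnitState) -> None:
    if s.space:                 # writing into a full buffer overwrites unread data
        s.sdata = "lose"
    else:
        s.space, s.fresh, s.sdata = True, True, "normal"

def read(s: UnitState) -> None:
    if not s.fresh:             # reading already-consumed data duplicates it
        s.sdata = "repeat"
    s.fresh, s.space = False, False

s = UnitState()
write(s); read(s); read(s)      # the second read consumes stale data
print(s.sdata)                  # -> "repeat"
```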

The relationship between a software unit's input data and output data is determined manually and, under each data state, is specified in the form of assertions.

3.2 Software Modeling

Software units are the basic elements: the units are connected through data coupling, and software behaviour controls the timing of each software unit, as shown in Fig. 5.

Fig. 5. Software model example


4 Software Failure Analysis Method Based on Formalization for Safety-Critical System Software
The failure analysis process based on formalization for safety-critical software is shown in Fig. 6.

Fig. 6. Analysis process

(1) Identify the software data flow and draw the data flow diagram
(a) Identify the information and find its source;
(b) Draw the data flow diagram according to the transfer paths of the data between the functional blocks.
(2) Model the software to be analyzed
(a) Describe the software unit data state model of Sect. 3 in the Altarica formal language, as shown in Fig. 7.
(b) Take the common basic software model as the smallest granularity, connecting models according to the data flow direction and assigning data along the flow. For example, for a software containing two basic software functional units, there are two basic software models (models A and B); the data flows from model A to model B, with model A's data assigned to model B.


Fig. 7. Software unit data state model based on Altarica

In this way, basic software models can be combined to describe generalized software.
(3) Define the software functions and describe them using the states in the basic software model. For example, the function of the software in the above example is defined as: data b outputs true if its data state is normal.
(4) Use the ARC tool to automatically search for all state transition paths that do not satisfy the function definition; each such path is a failure mode.
(5) For each failure mode, analyze its causes, frequency, etc. in combination with the specific circumstances of the software system, and fill in the FMEA form.

5 Application Practice of Formalization-Based Software Failure Analysis
FMEA is performed for the software function "read the software configuration data (from FLASH), parse it, and write the parsed data into dual-port RAM".
(1) Identify the data flow and draw the data flow diagram (Fig. 8).
(2) Describe the configuration function formally (function A is "read configuration data from FLASH", function B is "parse configuration data", and function C is "write configuration data to dual-port RAM") (Fig. 9).
(3) Define the valid behaviour of the configuration function for the data written to the dual-port RAM, described in the formal language as: [(C.S.sdata = normal) & (c != Yes)].
(4) Use the ARC tool to automatically search for all state transition paths that do not satisfy the function definition (Fig. 10).


Fig. 8. Data flow diagram

Fig. 9. Software model based on Altarica

Fig. 10. Software function description based on Altarica


(5) The state transition paths that do not satisfy the function definition may be failure modes. Analyze the cause and calculate the probability of each failure mode, then fill in the FMEA form.
Path example 1 (Fig. 11):

Fig. 11. Path example 1

Path Example 2 (Fig. 12):

Fig. 12. Path example 2

The results are presented in the form of an FMEA table (Table 1).

Table 1. The table of FMEA

Reason | Mode | Probability | Effect | Measures | Remarks
Nothing | Data not read from FLASH, written directly to dual-port RAM | Impossible | – | – | The MPU executes serially
FLASH reading function failure | Write dual-port RAM data error | Possible | Configuration error | Diagnose "FLASH reading function" | –
… | … | … | … | … | …


As the table shows, the software failure paths are determined by both the software model and the failure definition. The model and the failure definition are derived from the software design solution, which is relatively objective. Therefore, the problem that the analysis result is strongly influenced by the subjectivity of the executing staff can be solved. In addition, by implementing the work with software tools, the workload of analysts can be greatly reduced.

6 Conclusion
This paper introduces a formalization-based software failure analysis method for safety-critical systems. The method adopts a software functional unit model and a failure analysis process based on the Altarica language and the ARC tool. By formalizing the software failures and their effects, and through the use of the formal tool ARC, work efficiency is greatly improved. For system software, when the number of software units becomes large, the tool may face state explosion; in that case, functional segmentation should be used to keep the analysis tractable.

References
1. HAD 102/16-2004: Computer based safety important system software for nuclear power plant
2. GB/T 13629-2008: Criteria of computers in safety system for nuclear power plant

A Study About Software V&V Evaluation of Safety I&C System in Nuclear Power Plant

Peng-Fei Gu1, Zhe-Ming Liu2, Wei Xiong1, Wei-Hua Chen1, and Sheng-Chao Wang1(&)

1 State Key Laboratory of Nuclear Power Safety Monitoring Technology and Equipment, I&C Equipment Qualification and Software V&V Laboratory, China Nuclear Power Engineering Co., Ltd., Shenzhen 518172, Guangdong, China
[email protected]
2 Product Information Committee of China Instrument and Control Society, Beijing 100080, China

Abstract. Software verification and validation (V&V) is internationally recognized as an important technology for improving software reliability. The promulgation of new regulations and standards related to nuclear safety software may put forward new and higher requirements, so that existing software V&V technical solutions cannot fully cover the new requirements. There is still a gap between the new requirements of the new regulations and standards and the software V&V experience feedback on issues from the Generic Design Assessment (GDA) of the Office for Nuclear Regulation (ONR). Therefore, relevant research on software V&V is needed to meet the new requirements of domestic and international safety reviews. Based on a comparative analysis of the new and old nuclear safety standards, such as Institute of Electrical and Electronics Engineers (IEEE) 1012, International Atomic Energy Agency (IAEA) No. SSG-39 and the ONR review principles, as well as the technical opinion report on European Union (EU) safety software certification, the main technical differences are sorted out to provide a technical reference for establishing or optimizing a better-suited software V&V solution for the nuclear safety I&C system.

Keywords: Software V&V · New requirements · GDA · Technical differences · Optimization

1 Introduction
Software verification and validation (V&V) is internationally recognized as an important technology for improving software reliability. Due to the complexity of software and the limitations of testing methods, software V&V needs to adopt a variety of methods and verify the products generated in each phase of the software life cycle to achieve the ultimate goal of improving software reliability. With the continuous accumulation of engineering practice experience and the maturing and application of new technologies and methods, software V&V related standards also incorporate good engineering practices and proven effective methods when they are upgraded. For example, statistical testing was adopted for nuclear safety


digital systems and equipment in the Generic Design Assessment (GDA) of the EPR, ABWR1000 and AP1000, and formal methods were used in the verification of the nuclear safety instrumentation and control (I&C) system software at the Sizewell nuclear power plant in the UK. The promulgation of new regulations and standards related to nuclear safety software may put forward new and higher requirements in terms of the scope and depth of V&V and the technologies and methods suitable for use, so that existing software V&V technical solutions cannot fully cover the new requirements, and the correctness and effectiveness of the new technologies and methods to be adopted also remain to be assessed. Several major nuclear power groups in China are actively developing nuclear safety digital I&C systems and equipment with independent intellectual property rights and have gradually formed their own V&V laboratories to carry out research on key V&V techniques and the evaluation of nuclear safety software, but they have not used statistical testing or formal methods in engineering practice. There is still a gap between the new requirements of the new regulations and standards and the software V&V experience feedback on GDA issues. Therefore, relevant research on software V&V is needed to meet the new requirements of domestic and international safety reviews. This study focused on software V&V related regulations and standards such as IEEE 1012, IAEA No. SSG-39 and the ONR review principles, as well as the technical opinion report on EU safety software certification, and carried out a comparative study between the old and new versions of the standards. On this basis, the main differences in technical requirements are sorted out as a technical reference for V&V solution establishment or optimization.

2 Comparison Between Old and New Versions of Standards
The main executive standard for software V&V is IEEE 1012-2004, while IEEE 1012-2012 and IEEE 1012-2017 have since been released. The IAEA has adjusted its regulatory and standard system, and the specific safety guide No. SSG-39, covering I&C system and software design, came into effect in 2016. Besides, the RCC-E standard has been upgraded to the 2016 edition. In 2015, the European Union issued a technical opinion report on the licensing of safety-critical software for nuclear reactors. By comparing the regulations, standards and technical reports related to safety software V&V, the gaps between the old and new standards in the technical requirements, implementation scope, depth and procedures of V&V are analyzed.

2.1 Comparison Between Old and New Versions of IEEE 1012
The U.S. Nuclear Regulatory Commission (NRC) endorses through R.G. 1.168 that nuclear safety software V&V shall comply with the IEEE 1012 integrity level 4 software V&V requirements. The standard version endorsed in R.G. 1.168-2004 is IEEE 1012-1998, while the version endorsed in R.G. 1.168-2013 [1] is IEEE 1012-2004 [2].


IEEE 1012-2004 is limited to software V&V, while IEEE 1012-2012 [3] and IEEE 1012-2017 [4] extend the scope of V&V to systems and hardware. Accordingly, the concept of "software integrity level" is extended to "integrity level", the concept of "component" is extended from "software component" to "software component and hardware component", and the V&V tasks are subdivided into software, hardware, system and general V&V tasks.
1. Integrity level
IEEE 1012 sets integrity levels to quantify complexity, criticality, risk, safety level, security level, required performance, reliability, or other project-unique characteristics whose importance is defined by the user and the acquirer. The integrity level determines the scope, intensity and rigor of the V&V tasks and activities to be executed: as the software integrity level declines, the necessary scope, intensity and rigor of the V&V tasks also decrease. For example, the hazard analysis of integrity level 4 software is formally documented and considers module failures, while the hazard analysis of integrity level 3 software only takes significant software failures into account and may be informally recorded as part of the design review process. IEEE 1012-2012 and IEEE 1012-2017 do not require all subsystems or components of a system to have exactly the same integrity level, whereas IEEE 1012-2004 gives no clear statement on this. Note, however, that the NRC requires the integrity levels of a system and all its components to be the same.
2. V&V processes
IEEE 1012-2004 allows the V&V team to arrange for the design team to conduct V&V test specification, test execution and test recording. IEEE 1012-2012 and IEEE 1012-2017 require the V&V organization to perform the testing of integrity level 3 and level 4 systems/software/hardware, ensuring the independence and diversity of testing between the V&V organization and the design organization. For integrity level 2 systems/software/hardware, testing can be performed by the design team and reviewed by the V&V team.
3. V&V activities
The comparative analysis of software V&V activity differences among the IEEE 1012 versions is shown in Table 1.

Table 1. Comparison of software V&V activities between versions

V&V activity | Rev. 2004 | Rev. 2012 | Rev. 2017
Concept | √ | √ | √
Requirement | √ | √ | √
Design | √ | √ | √
Implementation (or Construction) | √ (Implementation) | √ (Construction) | √ (Construction)
Test | Contains three tasks: integration testing, system testing, acceptance testing | Broken down into three phases: integration testing, qualification testing, acceptance testing | Broken down into three phases: integration testing, qualification testing, acceptance testing
Installation and Checkout | √ | √ | √
Operation | √ | √ | √
Maintenance | √ | √ | √
Disposal | √ | √ | √

4. V&V tasks
IEEE 1012-2004, IEEE 1012-2012 and IEEE 1012-2017 differ in the depth of their V&V task requirements. Major differences include:
• Hazard analysis
In IEEE 1012-2012 and IEEE 1012-2017, a new requirement is added to the design, implementation, test, installation and checkout, operation and maintenance V&V tasks: evaluate and identify mitigation measures to verify that each hazard has been prevented, mitigated and controlled (recording any unresolved hazards as part of the system and software operating notes).
• Security analysis
In IEEE 1012-2012 and IEEE 1012-2017, a new requirement is added to the design, implementation, test, installation and checkout, operation and maintenance V&V tasks: ensure that the identified security threats and vulnerabilities have been defended against, prevented, mitigated and controlled (recording any unresolved security threats and vulnerabilities as part of the system and software operating notes). In Appendix J of IEEE 1012-2017, new security analysis methods based on threats and on system life cycle process assurance are added, which provide operational guidance for implementation.
• Source code and source code documentation evaluation
In IEEE 1012-2012 and IEEE 1012-2017, the new requirement "verify that the source code and its interfaces with other components do not result in unnecessary, unintended or harmful consequences" is added to the implementation V&V task.
In addition, compared with IEEE 1012-2004, IEEE 1012-2012 adds the following appendices:
– Appendix I: system, software and hardware integration V&V.
– Appendix J: hazards, security and risk analysis.
– Appendix K: the system integrity hierarchy and a sample of changes in "supporting system functions".


IEEE 1012-2017 additionally introduces Appendix M, "system application V&V for the Nth time". The basic idea is that complete hardware, software and system V&V activities are performed for the first use of a system, while for the Nth use, regression analysis is carried out first and the V&V activities are then determined according to the differences. If the application differs too much due to user requirements or environmental differences, the system should be treated as a first application; Appendix D on reuse software V&V can also be consulted. To sum up, the differences between IEEE 1012-2004 and IEEE 1012-2012/IEEE 1012-2017 are relatively large, while the differences between IEEE 1012-2012 and IEEE 1012-2017 are relatively small.

2.2 Comparison Between Old and New Versions of IAEA Safety Guides

The IAEA’s newly published safety guide No.SSG-39-2016 [5] is a combination and modification of its original two safety guides NS-G-1.1-2000 and NS-G-1.3-2002. The main changes involve the continuous development of computer applications and the evolution of methods required for safety, security and practical use. In addition, human engineering development and the need for computer information security are also considered. Major additions and updates include: – Specific considerations for I&C in order to meet the requirements specified in GSR-3. – Design inputs to be considered when setting I&C system design benchmarks. – In the life cycle of I&C system, the characteristics of mutual dependence are designed and realized, especially for the complete I&C system, independent I&C system, software interdependency and requirements for human engineering input and computer information security input of the whole nuclear facility during the life cycle. – The use of computers, hardware description language programming devices, limited industrial equipment, and methods to ensure performance correctness. – The overall architecture of I&C system is considered to support the deep defense concept of nuclear power plant system design and to establish the deep defense protection system of the instrument control system itself to prevent common cause failure. – The data transmission between important safety systems should consider the situation that high safety level systems receive data from low safety level systems. – Provide measures to ensure the information security of digital security system. – Activities related to computer software development, including design, verification and validation, principles derived from the security guidelines. NS-G-1.1-2000 [6] requirements for software V&V mainly involve general requirements, static analysis, test strategy and scope, test preparation and implementation, hazard analysis, tool evaluation, inversion method, evaluation of operation history, documents, etc. Compared with NS-G-1.1-2000, the differences in V&V requirements of No.SSG39-2016 mainly include the following aspects:


1. V&V processes and activities
No. SSG-39 presents the I&C system development life cycle process and its V&V activities, including system V&V activities, software V&V activities and hardware V&V activities, and adds a V&V activity on the relationship between hardware requirements and software requirements to the software requirements V&V activities. These requirements are new and are the same as the IEEE 1012-2012 requirements.
2. Hazard analysis
No. SSG-39 further details the requirements for hazard analysis of the I&C system in clauses 2.56 to 2.65 of Chap. 2. Additional requirements include:
– Consider internal and external hazards, plant equipment failures, and I&C failures or spurious operations caused by hardware failures or software errors, etc.
– Consider all plant states and operating modes, the transitions between different operating modes, degraded states, etc.
– The preliminary results of the I&C system hazard analysis need to be available before the overall I&C design basis is determined.
– The hazard analysis is required to be updated at all stages of the I&C system development life cycle.
– Provide measures to eliminate, avoid or mitigate identified hazards that could degrade system functions.
3. Static analysis
Regarding formal code verification technology, No. SSG-39 deletes the NS-G-1.1 (2000) clause stating: "When software requirements are formally specified, it is possible to verify formal code. However, formal verification generally requires a wide range of expertise, so consider consulting competent analysts".
4. Software tools
No. SSG-39 further details the requirements for software tools in clauses 7.148–7.164 of Chap. 7. Additional requirements include:
– Information security testing tools are added to the tools used in the I&C system development life cycle;
– Configuration management of all software tools is required.
5. Reverse engineering (inversion)
No. SSG-39 deletes the NS-G-1.1 (2000) provisions on reverse engineering (the inversion method). It notes only in the "modifications" section of Chap. 2 that "since the design documentation for the old system may be incomplete or inaccurate, the modification or replacement of such systems requires a degree of reverse-engineering to regenerate the original design baseline or design specification".
6. Operation experience
No. SSG-39 adds the clause that "relevant operational experience can be a supplement to other validation technologies, but cannot replace them".
7. Information security
No. SSG-39 adds requirements 9.82–9.94 for information security verification in the software V&V of Chap. 9:
– Automated software tools are used to examine the code for information security vulnerabilities, assisted by manual review of key parts of the code, including input and output processing, exception processing, etc.
– For safety systems, the final application needs to be submitted for computer security testing (such as penetration testing), to verify that it is not easily compromised through common security vulnerabilities and to allow continuous improvement in software design and implementation.
8. Pre-developed software
No. SSG-39 puts forward requirements for pre-developed software used in safety systems and in systems important to safety, respectively:
– For safety systems, pre-developed software used in the safety I&C system should be qualified to the same level as its application.
– For I&C systems important to safety, the user manual needs to describe the pre-developed software, including: functions, interfaces, different behavior modes and their switching conditions, restrictions, and a reasonable demonstration that it satisfies the user's requirements or those applicable to the I&C system.
– More detailed qualification requirements have been added for the dedication of pre-developed items, as detailed in clauses 6.78–6.134 of Chap. 6.
9. Third-party evaluation
Additional requirements of No. SSG-39 for third-party evaluation include:
– Third-party evaluation should be adopted for safety system software and executed in parallel with the development process.
– The contents of the assessment include: the development process, through quality assurance supervision and technical inspection of life cycle process documents such as plans (outlines), software specifications and full-scope testing activities; and the final version of the software and any subsequent modifications, evaluated through static analysis, inspection, monitoring and testing.

2.3 EU Safety Software Certification Technology Common Position Analysis

The 2018 common position report "Licensing of Safety Critical Software for Nuclear Reactors, Common Position of International Nuclear Regulators and Authorised Technical Support Organisations" [7] adopts a classified approach to software requirements and management. Taking software V&V as an example, its classification requirements are shown in Table 2.

Table 2. Classification requirements of software V&V at different safety levels

New software:
– Non-safety important systems: supplier V&V; field delivery test.
– Safety related systems: based on selected standards such as IEC 62138 or IEC 61508; (1) V&V outline; (2) validation at all stages of the development life cycle; (3) independent confirmation; field delivery test; feedback on operational experience in supporting software, libraries and other reusable software.
– Safety systems: IEC 60880, IEEE 7-4.3.2; (1) V&V outline; (2) validation at all stages of the development life cycle; (3) independent V&V; (4) independent evaluation; field delivery test; feedback on operational experience in supporting software, libraries and other reusable software.

Pre-developed software:
– Non-safety important systems: supplier V&V; field delivery test.
– Safety related systems: based on selected standards such as IEC 62138 or IEC 61508; (1) V&V outline; (2) validation at all stages of the development life cycle; (3) independent confirmation; field delivery test; relevant operational experience feedback.
– Safety systems: IEC 60880, IEEE 7-4.3.2; (1) V&V outline; (2) validation at all stages of the development life cycle; (3) independent V&V; (4) independent evaluation; field delivery test; relevant operational experience feedback.

1. Concerns in software verification
– Regarding software correctness and its impact on reliability, all software components (operating system, libraries, application software, smart devices, communication protocols, human-machine interface, etc.) need to be verified.


– Regarding the selection of verification tools and methods: different methods should be combined to achieve full coverage of functional and non-functional requirements, and the scope of formal verification should be considered. Software modules must be tested and meet the coverage requirements.
– Verification policies should be balanced in terms of time, schedule and resources.
– Test coverage.
2. Concerns in validation and the delivery test
– It is recommended to use statistical testing to estimate system reliability; test case selection takes account of operational profiles, and the number of test cases depends on the required safety system reliability and confidence level, as the sketch below illustrates.
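As an illustration of that dependence (standard success-run statistics, not taken from the report itself): if a probability of failure on demand below p must be demonstrated at confidence C with all N statistically drawn test cases passing, then N ≥ ln(1 − C)/ln(1 − p).

```python
import math

def required_tests(p_fail: float, confidence: float) -> int:
    """Zero-failure (success-run) test count for demonstrating pfd < p_fail."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_fail))

# e.g. the 1e-4 reliability figure discussed by ONR, at 95% confidence:
print(required_tests(1e-4, 0.95))   # ~29,956 passing test cases
```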

2.4 Comparison Between Old and New Versions of the ONR Technical Assessment Guide

The ONR technical assessment guide related to software V&V is NS-TAST-GD-046, "Computer Based Safety Systems", which has been updated twice in the last two years. Compared to NS-TAST-GD-046 (rev. 3, 2013) [8], the changes in NS-TAST-GD-046 (rev. 4, 2017) [9] are mainly reflected in the updated versions of the standards it references; there is no significant change in its technical review principles. Recently, ONR released NS-TAST-GD-046 (rev. 5, DRAFT). Compared to rev. 4 (2017), the changes in rev. 5 (DRAFT) mainly include:
1. Scope of application: the technical review guide now also applies to HDL-based systems.
2. In terms of the general review principles, additional or further clarifying requirements are as follows:
– The functions of computer systems and the complexity of their implementation should be minimized, and unnecessary complexity avoided.
– For a diversified safety system, if one subsystem is based on computer technology, the other should adopt non-computer technology.
– Production Excellence (PE): demonstrate that potential systematic defects introduced in the software development process are minimized.
– Independent Confidence Building Measures (ICBM): the emphasis on dependability comes from diverse independent execution, with diversity of execution staff, evaluation techniques and methods.
– In addition to considering the information security of computer-based systems important to safety, ONR gives specific control requirements and control methods in Appendix 6.
– Adding to the consideration of software tool qualification, ONR clarifies the requirements for software tool qualification in Appendix 7.
– Based on the current technology level and consideration of all relevant factors including complexity, ONR believes that a claim of 1e-4 reliability for a computer-based safety system is reasonable and credible.


3. Multi-legged arguments
– New qualification requirements for pre-developed items such as commercial grade smart devices and platforms are added, and ONR gives a classification-based qualification method for commercial grade smart devices in Appendix 4.

3 Summary of Technical Differences Between the Old and New Versions
By comparing the old and new versions of the relevant nuclear safety software V&V regulations and standards, the following differences in technical requirements are sorted out:
1. The range of V&V objects expands to include HDL software.
2. The scope of the V&V tasks is expanded and the task content is detailed:
– Project planning V&V, configuration management V&V and disposal V&V are added.
– In hazard analysis and security analysis, the task of evaluating mitigation measures is added.
3. The V&V task requirements are clearer:
– The V&V strategies and methods for reused/pre-developed software are further clarified and regulated.
– The contents and requirements of hazard analysis and security analysis are further detailed.
– The configuration management and qualification requirements of software tools are specified.
– The functional and structural coverage requirements for testing are further clarified.
4. The rigor of task execution is increased, for example:
– Independence is emphasized, and third-party evaluation is required for safety system software.
– For safety system software V&V, the system test is required to be performed independently by the V&V organization.
– The diversity of V&V techniques and methods is emphasized; statistical testing and formal methods are recommended.
– For the V&V of pre-developed safety software, source code testing such as static testing and dynamic testing is emphasized.


4 Conclusions

Based on a comparative analysis of the new and old nuclear safety standards, such as IEEE 1012 and IAEA No. SSG-39, the ONR review principles, and the technical opinion report on EU safety software licensing, this study identified the main technical differences in order to provide a technical reference for establishing, or optimizing the applicability of, the software V&V solution for nuclear safety I&C systems. Although the IEEE and the IAEA have published or updated the relevant regulations and standards for nuclear safety software V&V, the nuclear safety regulators of China mainly refer to the standards accepted by NRC regulatory guides, such as R.G. 1.168-2013, which endorses IEEE 1012-2004, and IEC 60880, for regulatory scrutiny of nuclear safety I&C systems. As a result, the existing nuclear safety software V&V solution can satisfy the current safety evaluation requirements. The results of this study are forward-looking: they take into account the requirements of the GDA review and can address possible technical risks in future nuclear safety reviews under the new standards, laying the foundation for the "going global" of the Hua-Long No.1 project and for meeting the GDA review.

References
1. R.G. 1.168: Verification, Validation, Reviews and Audits for Digital Computer Software Used in Safety Systems of Nuclear Power Plants. Office of Nuclear Regulatory Research (2013)
2. IEEE Std. 1012: IEEE Standard for Software Verification and Validation. Institute of Electrical and Electronics Engineers (2004)
3. IEEE Std. 1012: IEEE Standard for System and Software Verification and Validation. Institute of Electrical and Electronics Engineers (2012)
4. IEEE Std. 1012: IEEE Standard for System, Software and Hardware Verification and Validation. Institute of Electrical and Electronics Engineers (2017)
5. No. SSG-39: Design of Instrumentation and Control Systems for Nuclear Power Plants. International Atomic Energy Agency (2016)
6. NS-G-1.1: Software for Computer Based Systems Important to Safety in Nuclear Power Plants. International Atomic Energy Agency (2000)
7. Bel V of Belgium, BfE of Germany, CNSC of Canada, et al.: Licensing of Safety Critical Software for Nuclear Reactors. Common Position of International Nuclear Regulators and Authorised Technical Support Organisations, Regulator Task Force on Safety Critical Software (2018)
8. NS-TAST-GD-046: Computer Based Safety Systems, rev 3. Office for Nuclear Regulation (2013)
9. NS-TAST-GD-046: Computer Based Safety Systems, rev 4. Office for Nuclear Regulation (2017)

A Study About Pre-developed Software Qualification of Smart Devices Applied in NPP

Sheng-Chao Wang(&), Tao Bai, Peng-Fei Gu, and Wang-Ping Ye

State Key Laboratory of Nuclear Power Safety Monitoring Technology and Equipment, I&C Equipment Qualification and Software V&V Laboratory, China Nuclear Power Engineering Co., Ltd., Shenzhen 518172, Guangdong, China
[email protected]

Abstract. Based on research and analysis of the standards and Electric Power Research Institute (EPRI) reports relevant to commercial grade dedication (CGD), applied here to the pre-developed software of smart devices (intelligent measuring, communication and actuation devices employing programmed electronic components (PEC) to enhance performance), the requirements for pre-developed software qualification have been identified. In combination with the tasks of IEEE 1012, a V&V model is proposed to guide the concrete execution of qualification activities such as suitability evaluation, quality evaluation, operating experience evaluation, additional system testing and comprehensive assessment. The model also helps establish the specification and process for pre-developed software qualification. On that basis, a pre-developed software qualification was performed for each qualification activity, and some good practices were formed in the process. Some special considerations for pre-developed software qualification are also put forward. Furthermore, some critical qualification points have been captured and may provide a technical reference for subsequent CGD of pre-developed software in smart devices to be applied in the HPR1000 and other nuclear power plants (NPPs).

Keywords: Pre-developed software · Standard requirements · Smart devices · Software V&V
1 Introduction

With the development of smart technology, smart devices, i.e. devices performing intelligent measuring, communication and actuation using PEC with embedded software to enhance performance, have increasingly been used to replace conventional devices in the safety instrumentation and control (I&C) systems of nuclear power plants (NPP) to improve economic efficiency. Although they offer many advantages, such as greater accuracy, better noise filtering, built-in linearization, and online calibration and diagnostics, a smart device is generally a commercial-off-the-shelf (COTS) product sold as a black box; its reliability is hard to demonstrate, and it potentially increases the risk of common cause failure (CCF). Therefore, even though there is extensive and mature application experience in other, non-nuclear industries, a smart device


should be thoroughly tested and evaluated, or dedicated, for NPP safety application. This applies especially to the pre-developed software (also called COTS software) of the smart device, which directly affects the safe and reliable operation of the NPP and deserves particular attention to guarantee that the safety function is implemented correctly. However, the pre-developed software qualification of smart devices applied in NPPs faces many issues, such as the lack of nuclear-specific development, hidden changes, internal complexity, and the need for access to the manufacturer's intellectual property. Besides, there is little domestic nuclear engineering experience with the independent qualification of pre-developed software. In order to meet the related regulatory requirements and achieve the goal of exporting software matched with the HPR1000, it is necessary to carry out research on qualification standards and technology for such pre-developed software. This study first reviews the relevant guidelines and standards, refers to related EPRI technical reports, and teases out the specific requirements for pre-developed software. Then an appraisal plan is put forward, including the qualification process and methods, and the implementation of the plan is illustrated by a concrete engineering practice. Finally, the main technical points are summarized to provide a technical reference for subsequent engineering practice.

2 The Analysis of Guides, Standards and Reports

The concept of CGD has already been introduced in China's nuclear safety guide HAD 102/16, but a complete and enforceable scheme or procedure has not yet been formed, so practice mainly refers to the relevant standard systems of Europe, America and international organizations [1]. Figure 1 is the context diagram of the documents relevant to CGD, such as pre-developed software qualification.

2.1 Requirements or Criteria of China

The requirements and recommendations proposed in Annex I of HAD 102/16 for the application and validation of software developed to high standards in other safety-critical industrial applications are as follows [1]:
(a) Define the functions of the existing software and evaluate the impact of these functions on safety.
(b) Clearly identify the existing software, e.g. the program version.
(c) Clearly identify and fully confirm the interfaces of the existing software required by the user or by other software, and provide evidence that no other call sequence is available.
(d) Develop and maintain the existing software according to good software engineering practice and quality assurance.
(e) Subject the existing software used by the safety system to the same assessment as the final product of the newly developed application software. If necessary, implement reverse engineering to evaluate the full specifications of the existing software.


Fig. 1. Relevant documents of pre-developed software qualification (HAD 102/16 for China; EPRI NP-5652, EPRI TR-102260, EPRI TR-106439, IEEE 7-4.3.2, R.G. 1.152 and IEEE 1012 for America; RCC-E C5333 on programmed electronic components (PECs) for Europe; IEC 60880 clause 15 and IEC 62566 for IEC)

(f) Obtain the design documents and source code if the existing software needs to be modified.
(g) Ensure information is available for evaluating the quality of the existing software and its development process, sufficient to assess the quality level of the existing software.

Acceptance of existing software shall be performed as follows:
(a) Verify that the functions implemented by the existing software meet all requirements described in the safety system requirements specification and other application specifications.
(b) Verify that the existing software does not invoke functions that the safety system requirements specification does not require, and does not adversely affect the functions that are required.
(c) Perform a compliance analysis between the standard requirements applied and the software design.


(d) Validate the expected use of the functions of the existing software through testing, including the testing completed by the supplier.
(e) Ensure the functions of the existing software are not used by the safety system, other software or the user in ways that are not specified and tested.

If possible, obtain sufficient operating history and failure rate data, and properly evaluate the experience feedback based on an analysis of operating time, error reports and delivery history in the related systems' operation. If the relevant software development information is not sufficient and available, a risk assessment should be carried out for the safety impact of software faults.

2.2 Requirements or Criteria of Europe and America

Following investigation and research into American nuclear industry practice, EPRI published the technical report EPRI NP-5652 in 1988 to guide the application of commercial grade items, such as pre-developed software, in NPPs; it expounds the evaluation background, objectives, basic concepts, general process and basic methods, and puts forward a verification approach consisting of technical evaluation and acceptance [2]. This guide report was endorsed by the NRC through Generic Letter 89-02 in 1989. EPRI then published the supplementary report EPRI TR-102260 in 1994 to address some key issues of EPRI NP-5652, including how to implement the technical evaluation and general acceptance and how to evaluate the assessment procedure for commercial grade items [3]. In addition, EPRI published EPRI TR-106439 in 1996 to discuss the evaluation methods for the critical characteristics of digital devices [5]; the critical characteristics comprise physical, performance and dependability critical characteristic evaluations. On the basis of this series of EPRI studies, the IEEE published IEEE 7-4.3.2-2010, which is essentially consistent with the EPRI reports above [4]. According to this standard, the CGD process for digital devices consists of a preparation phase, an implementation phase and a design review phase. The NRC published the corresponding guidance R.G. 1.152-2011 to endorse the standard and EPRI TR-106439 through an evaluation report [6]. Among the guides, standards and technical reports, IEEE 1012-2004 Annex D, which puts forward detailed V&V activities and tasks, is a good reference for pre-developed software qualification [7]. Furthermore, the French standard RCC-E-2012, clause C5333, introduces the characteristics and requirements of programmed electronic components (PECs) [8]. For PECs applied in C1- and C2-classified systems, sufficient assurance of quality and reliability depends on the development cycle, the follow-up of their software and hardware components, any available experience feedback, and their qualification.

2.3 Requirements or Criteria of IEC

The specific qualification requirements for pre-developed software are put forward in IEC 60880 clause 15, mainly comprising suitability evaluation, quality evaluation, evaluation of operating experience and comprehensive assessment [9]. IEC 62566-2012 describes how to select and assess pre-developed items when developing an HDL-Programmed Device (HPD) [10].

3 The Model of Pre-developed Software Qualification

In this study, the smart device investigated is a breaker (in a C1-classified system) with a micro-logic trip unit, which is a mature commercial grade item with about ten years of good performance so far. The functions of the breaker are mainly realized by pre-developed software developed with ASIC technology, and the development languages include VHDL and C++. Therefore, this study considers the requirements of RCC-E-2012, IEC 62566-2012 and IEC 60880-2006 simultaneously when performing the pre-developed software qualification. The suitability analysis of the three standards is shown in Table 1.

Table 1. Standards suitability analysis of ASIC

| RCC-E C5333-2 | IEC 62566 clause 7 | IEC 60880 clause 15 | Conclusions |
|---|---|---|---|
| Development cycle and relevant documents; qualification requirements | 7.4 Selection; 7.4.2 Documentation review | 15.3.2 Quality evaluation | The requirements of the standards are basically the same |
| Supervision of software and hardware components and requirements for software modification | 7.6 Modification for acceptance | 15.4 Requirements for integration in the system and modification of PDS | |
| Available experience feedback data | 7.4 Selection; 7.4.3 Operating experience review | 15.3.3 Evaluation of operating experience | |
| Requirements for testing and additional testing | 7.4 Selection; 7.4.2 Documentation review | 15.3.1 Suitability evaluation; 15.3.2 Quality evaluation | |

After the specific qualification requirements are established through Table 1, they are assigned to the verification and validation (V&V) activities and tasks of IEEE 1012 that can be performed. The specific assignment and process of the ASIC qualification are shown in Fig. 2.


Fig. 2. Process of pre-developed software qualification (a V-model mapping the additional test, suitability evaluation, quality evaluation and comprehensive assessment onto the IEEE 1012 V&V activities, from plant and system concept V&V through breaker system design requirements V&V, ASIC software requirements, design and implementation V&V, to breaker system integration test V&V)

4 Qualification and Results of ASIC

Because the breaker will ultimately be used in the Generation III NPP HPR1000 to perform safety functions, the V&V team performed suitability evaluation, quality evaluation, evaluation of operating experience and additional system testing in order to guarantee the high reliability of the ASIC.

(1) Suitability evaluation
• Required input documentation
– System specification documentation
– PDS specification and user's documentation
• Evaluation requirements
– Comparison of the system and PDS specifications
– Identification of modifications and missing points
• Performing the evaluation
– According to the required input documentation and evaluation requirements, the applicable V&V tasks are suitability analysis and traceability analysis, which are well suited to identifying modifications, missing points or inconsistencies through comparison of the system and PDS specifications.
– The V&V tasks performed found two kinds of anomalies: first, some requirements of the system specification documentation are not reflected in the PDS specification and user's documentation; second, the timing-characteristic requirements of the ASIC cannot be proved.
• Preliminary evaluation conclusion
– The conclusion of the suitability evaluation is that complementary work is needed to clarify the anomalies or provide convincing proof.


(2) Quality evaluation
• Required input documentation
– Design documentation
– Life cycle documentation
• Evaluation requirements
– Analysis of the design
– Analysis of the quality assurance (QA)
– Identification of missing points
• Performing the evaluation
– According to the required input documentation and evaluation requirements, the applicable V&V tasks are a compliance analysis against the applicable standards RCC-E, IEC 62566 and IEC 60880, and traceability analysis.
– The standards compliance analysis evaluates the compliance of the input documentation with the requirements of the standards. The ASIC design shall be consistent with the constraints of the system architecture and exhibit deterministic internal behavior. If a behavior adopted in the ASIC development differs from the requirements of the standards, it shall be analyzed and justified. And if the software has a secondary function, its influence on the main function shall be analyzed.
– The traceability analysis mainly validates the bidirectional tracing relationship between the input documents, ensuring their correctness, accuracy, completeness and consistency (a sketch of such a check follows below).
– The V&V tasks performed found one anomaly: the development team made wide use of self-developed tools for ASIC development and testing, and the reliability of these tools has not been fully guaranteed.
• Preliminary evaluation conclusion
– On the basis of the evaluation above, additional testing and documentation, or an operating experience evaluation, is necessary.
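As a sketch of such a bidirectional traceability check (the requirement and design-item IDs are purely illustrative, not from the actual breaker PDS documentation), the forward and backward traces can be validated mechanically:

```python
# Forward trace: each system requirement -> set of PDS design items.
forward = {
    "SYS-REQ-001": {"ASIC-DES-010", "ASIC-DES-011"},
    "SYS-REQ-002": {"ASIC-DES-012"},
    "SYS-REQ-003": set(),  # no design coverage -> anomaly
}
design_items = {"ASIC-DES-010", "ASIC-DES-011", "ASIC-DES-012", "ASIC-DES-099"}

untraced_reqs = [r for r, d in forward.items() if not d]
traced_design = set().union(*forward.values())
orphan_design = design_items - traced_design  # backward-trace failures

print("requirements without design coverage:", untraced_reqs)
print("design items without a requirement:", sorted(orphan_design))
```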

(3) Evaluation of operating experience
• Required input documentation and evaluation requirements
– The methods for collecting data and recording the operating time of the PDS version
– The operating history of findings, defects and error reports and of modifications
• Performing the evaluation
– According to the required input documentation, the evaluation requirements and the results of the suitability and quality evaluations, the evaluation of operating experience assessed the evidence provided by the supplier of the breaker, including the pre-developed software. The evidence is the operating experience of the product, collected globally by the supplier through automated management and configuration tools; the operating time is about ten years.
• Preliminary evaluation conclusion
– The conclusion of the evaluation is that the supplier provides sufficient operating experience.

(4) Additional system test
• Required input documentation
– System design requirements specification documentation
– PDS specification and user's documentation
• Evaluation requirements
– Confirm that the breaker meets the functional and interface requirements.
• Performing the evaluation
– According to the required input documentation and evaluation requirements, the V&V team first analyzed the test requirements and combed out the requirement items. On this basis, the V&V plan and the corresponding test descriptions were prepared. Then test cases were designed for each test requirement.
– The results of the system test showed that the design of the breaker covered the functional and interface requirements. However, there were some abnormal items: performance parameters such as the time response were inconsistent with the specified requirements.
• Preliminary evaluation conclusion
– The conclusion of the system test is that retesting is needed, or the development team needs to clarify and justify the anomalies.

(5) Comprehensive assessment
• Required inputs
– The results of the suitability evaluation, quality evaluation, evaluation of operating experience and additional system test.
• Conclusion
– The applicability of the commercial grade breaker, which will be used in an NPP to perform safety functions, depends on the handling of the identified anomalies and on supplementary clarifications by the supplier.

(6) Special considerations
For a product such as this breaker, whose pre-developed software performs safety functions in an NPP, hazard analysis and security analysis are needed. The FMEA method is recommended for hazard analysis, and the specific requirements of security analysis are given in clause 5.7 of IEC 60880, focusing mainly on security during design and development and on user access. After that, a risk analysis is executed on the basis of the hazard analysis and security analysis.

When the inputs to the qualification activities above are not available, reducing visibility into the pre-developed software products and processes, the techniques listed below can be used to compensate for the missing inputs. Each has varying strengths and weaknesses, so performing multiple techniques should be considered, offsetting the weaknesses of one technique with the strengths of others, when high confidence is demanded.
• Black box testing
• Review of the developer's QA
• Operational history
• Audit results
• Artifacts
• Reverse compilation
• Prototyping
• Prior system results

5 Conclusions

Based on research and analysis of the standards and relevant EPRI reports on CGD, the requirements for a smart device breaker containing pre-developed software have been identified. In combination with the tasks of IEEE 1012, a V&V model was proposed to guide the qualification activities, which also helps establish the specification and process for pre-developed software qualification. On that basis, the pre-developed software qualification was performed and some good practices were formed in the process. All of the qualification efforts can serve as evidence in evaluating the reliability of the pre-developed software and increase confidence in software used to perform safety functions. Furthermore, some critical qualification points have been captured and may provide a technical reference for subsequent CGD of pre-developed software in smart devices to be applied in the HPR1000 and other NPPs.

References
1. HAD 102/16: Nuclear Power Plants-Systems Important to Safety-Software Aspects for Computer-based Systems. National Nuclear Safety Administration (2004)
2. EPRI NP-5652: Guideline for the Utilization of Commercial Grade Items in Nuclear Safety Related Applications (NCIG-07). Electric Power Research Institute (1988)
3. EPRI TR-102260: Supplemental Guideline Application of EPRI Report NP-5652 Commercial Grade Items. Electric Power Research Institute (1994)
4. IEEE Std. 7-4.3.2: IEEE Standard Criteria for Digital Computers in Safety Systems of Nuclear Power Generating Stations. Institute of Electrical and Electronics Engineers (2010)
5. EPRI TR-106439: Guideline on Evaluation and Acceptance of Commercial Grade Digital Equipment for Nuclear Safety Application. Electric Power Research Institute (1996)
6. R.G. 1.152: Criteria for Use of Computers in Safety Systems of Nuclear Power Plants. Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission (2011)
7. IEEE Std. 1012: IEEE Standard for Software Verification and Validation. Institute of Electrical and Electronics Engineers (2004)


8. RCC-E: Design and Construction Rules for Electrical Equipment of Nuclear Islands. French Association for Design, Construction and In-service Inspection Rules for Nuclear Island Components (2012)
9. IEC 60880: Nuclear Power Plants-Instrumentation and Control Systems Important to Safety-Software Aspects for Computer-based Systems Performing Category A Functions. International Electrotechnical Commission (2006)
10. IEC 62566: Nuclear Power Plants-Instrumentation and Control Important to Safety-Development of HDL-programmed Integrated Circuits for Systems Performing Category A Functions. International Electrotechnical Commission (2012)

Applications of Data Mining in Conventional Island of Nuclear Power Plant

Zhi-Gang Wu(&), Xiao-Yong Zhang, Chang-Ge Xiao, and Wen Chen

State Nuclear Electric Power Planning Design & Research Institute Co., Ltd., Beijing 100095, China
[email protected]

Abstract. With the application of digital control systems and field-bus technology in nuclear power plants, production data is growing explosively. For this large volume of production data, characterized by high dimensionality and multiple couplings, data mining technology will play an increasingly important role. This paper briefly introduces the data mining process and its commonly used methods. Based on the data volume of the conventional island of a nuclear power plant and current data applications, it puts forward data mining applications in the Conventional Island (CI) and analyzes the primary approaches and trends of these applications.

Keywords: Conventional Island · Data mining · Fault diagnosis · Operation optimization · Soft-sensing

1 Introduction and Background

In recent years, big data analytics has advanced at an unprecedented pace, mostly in Internet-related research and development. Big data is not new to the science and technology communities, but compared with what has been happening on the Internet, data applications in engineering have primarily been used to confirm the rightfulness of existing physical laws; data science and technology are largely ignored, and potentially prominent science remains uncovered. In the nuclear power industry, data analytics are very important tools: according to World Nuclear Association (WNA) statistics for global NPPs from 2008 to 2012, 90% of the events leading to unplanned energy loss (such as unplanned shutdowns, outage extensions or load reductions) are due to equipment failure [1]. Among such failures, the top five causes are associated with: (1) the turbine and auxiliary system; (2) the electrical control system; (3) the generator and auxiliary system; (4) the reactor; and (5) the main feedwater and main steam systems. Over 70% of the total unplanned energy loss, about 140 GWh, is caused by these top five equipment problems. Most of these failures can be resolved safely, but they can be triggers of catastrophic disasters such as Chernobyl in the former USSR and, most recently, the Fukushima nuclear power plant in Japan. The main task of a nuclear power plant, once its construction is completed, is to keep its operation safe and low-cost. To do so, the operation and maintenance of an NPP require significant effort to monitor and analyze equipment status, which


contributes a substantial portion of the operational cost. To make matters worse, there are no existing physical laws or models that can be applied directly to the massive monitoring data acquired in operation to separate abnormal from normal behavior. Data-enabled science and engineering may be a unique and significant tool for assisting the stable, reliable and economic operation of CI equipment. Presently, the uses of production data in NPPs and fossil power plants usually include:

(1) Direct digital control
Direct digital control (DDC) is the automated control of a condition or process by a digital device. DDC allows automatic control of the equipment and monitoring of unit performance by setting upper/lower control and alarm limits on the production data. For example, vibration instruments are usually used to measure the vibration of pump bearings, with an upper limit set on the vibration value. Once an abnormal vibration signal exceeds the upper limit, an alarm is generated to remind the operators that the pump is in an abnormal state, or the alarm initiates a pump trip for protection. DDC provides the basic control and monitoring means for safe operation. Its advantage is that the control and alarm limits can be derived from the characteristics of the systems or equipment, which is simple and feasible; combined with a Distributed Control System, DDC becomes even more useful, and it has therefore become the primary means in many applications [2]. However, because the limit values usually carry a large margin, DDC may malfunction on a spurious marginal signal, which significantly increases the operational cost.

(2) Mathematical model analysis method
The mathematical model analysis method (MMAM) uses mathematical models for equipment performance and status analysis. Based on the mathematical model of the equipment, device-related parameters are used to calculate the performance of the equipment or to further analyze its status. For example, by building mathematical models of the turbine and using the related pressure and temperature data as inputs, performance parameters of the steam turbine, such as turbine efficiency and flow area, are calculated, and these performance parameters can be used to analyze the equipment status [3]. MMAM provides further analysis of the production data. Its advantage is that the analysis can be accomplished safely and precisely if the mathematical model is accurate; after years of research, some mathematical models of CI equipment have proven useful and can basically meet engineering requirements. However, MMAM is only applicable to single devices; it is presently unsuitable for complex systems with strong correlation and coupling, and its reliability in real-life application is often in doubt.


(3) Fault database diagnosis method
The fault database diagnosis method (FDDM) builds a fault database based on engineering experience. FDDM can help operation and maintenance personnel determine the causes of faults and resolve them [4]. Usually, a historical fault database with fault characteristics is formed from a large volume of statistical information on various types of equipment failure. The advantage of FDDM is that it provides a historical fault database by refining and summarizing expert experience, so it can accurately determine equipment failures that match the fault characteristics in the database and provide technical support for operation and maintenance personnel. However, FDDM cannot provide comprehensive fault diagnosis, because the database is limited to expert experience of past operational failures. It is also unsuitable for complex systems with strong correlation and coupling.

(4) Data-enabled science and engineering
Data-enabled science and engineering is a concept of big data application that has emerged in recent decades. Data-enabled endeavors in science and engineering primarily involve two efforts: means to acquire reliable data and means to mine the information embedded in large data sets. Its applications arise mainly where there are no pre-existing physical laws or models to follow; its goal is to seek the underlying physics and to assist decisions based on past and present data. Data mining (DM) is one of the powerful tools widely used when processing large data sets. In recent years, digital control systems and field-bus technology have been widely adopted in nuclear power plants, generating huge volumes of operation and production data. Data-enabled science and engineering therefore offers prominent potential for keeping NPP operation and production safe. This paper introduces the fundamentals of data-enabled science and techniques that may be used in the safety operation monitoring of NPPs, approaches to data mining, and illustrative example applications.

2 Data Mining Methods

DM is the process of applying methods to large data sets with the intention of uncovering hidden underlying physics. The following are several commonly used methods.

(1) Statistical analysis
Statistics provides many discriminant and regression methods for DM, including Bayesian inference, regression analysis and variance analysis. Bayesian inference is a method of statistical inference used to update the probability of a hypothesis as more evidence or information becomes available. Regression analysis is a set of statistical processes for estimating the relationships among variables; it can also be used to model the probability of occurrence of certain events. Analysis of variance is a


collection of statistical models that can be used to analyze the performance of a regression and the effects of the independent variables on the final regression [5].

(2) Decision tree
A decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs and utility. It is one way to display an algorithm. The biggest advantage of the decision tree method is that it is intuitive and very effective for classifying high-dimensional data. The disadvantage is that as data complexity increases, the number of branches grows and management becomes more and more difficult. In addition, this method has trouble processing data with missing values [6].

(3) Neural network
A neural network is a computational model established by mimicking the structure and working mechanism of the human brain's neural network. Based on the MP neuron model and Hebbian learning rules, feed-forward, feedback and self-organizing network models have been established. The biggest advantage of neural networks is the ability to accurately predict complex problems. Because of their good robustness, self-organization, parallel processing, distributed storage and high fault tolerance, neural networks are very suitable for building classification models in data mining, and they have therefore received increasing attention in recent years [7].

(4) Rough set theory
Rough set theory, as a data analysis method, was proposed by Pawlak in 1982 [8]. It regards knowledge as a partition of the domain, treats knowledge as granular, and uses the notion of a relative core to analyze and reduce knowledge. Rough set theory can analyze and process fuzzy or uncertain data in the absence of prior knowledge about the data. It is thus one of the main methods of DM and is good at revealing latent rules [9]. Table 1 shows the main methods used in each phase of CRISP-DM.

Table 1. Main methods of each phase of CRISP-DM

| Phase | Main methods |
|---|---|
| Business understanding, Data understanding, Data preparation | Statistical analysis, Standardization, Visualization |
| Modeling | Decision tree, Statistical analysis, Neural networks, Rough set method |
| Evaluation | Test set methods |
| Deployment | Decision tree, Statistical analysis |
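As a minimal, hypothetical sketch of the modeling phase (assuming scikit-learn; the twelve monitored variables and the abnormal-state label are invented, not from any plant data set), a decision tree classifier for equipment state might look like this:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 12))              # 12 monitored variables per sample
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)   # toy label: 1 = abnormal state

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)  # shallow tree stays readable
print("held-out accuracy:", clf.score(X_te, y_te))
```

Limiting the depth keeps the tree intuitive, matching the stated advantage of the method, at the cost of some accuracy on complex data.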


3 Conventional Island Data

A large amount of data is generated over the life of a plant, covering manufacture, construction, commissioning, operation and maintenance, and retirement. For an NPP in operation, the data can be divided into two categories: non-real-time data and real-time data. Non-real-time data comes from the planning and design, procurement, construction and commissioning phases, and mainly includes design documents from designers, equipment information and data from vendors or manufacturers, and commissioning data. Real-time data arises when an NPP is in the operation phase; it is acquired from the on-site instruments and controlled equipment, collected and accumulated over years, and awaits mining for its underlying physics. Haiyang Nuclear Power Plant Phase 1 is an AP1000 reference ("relying on") project. In the CI of Haiyang NPP, the real-time data comes mainly from two sources: instruments and controlled equipment. The instruments mainly include transmitters and switches used to measure temperature, pressure, flow, level and vibration; the controlled equipment mainly includes air-operated valves, motor-operated valves, motors and heaters. In total there are about 7000 real-time data points in a CI. At a sampling rate of 1 Hz, a CI will generate about 100 TB of data in one year, or 273 GB per day.
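A back-of-envelope check of these figures (the ~450 bytes stored per point per sample, covering timestamp, value, quality flags and archive overhead, is an assumption, not a measured size) reproduces the quoted volumes:

```python
points, rate_hz, bytes_per_sample = 7000, 1.0, 450
per_day = points * rate_hz * 86400 * bytes_per_sample  # bytes per day
per_year = per_day * 365
print(f"{per_day / 1e9:.0f} GB/day, {per_year / 1e12:.0f} TB/year")
# -> about 272 GB/day and 99 TB/year, consistent with the figures above
```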

4 Applications

Digital control systems and field bus technology have gradually been applied in NPPs in recent years, and as a result a huge amount of data is acquired. But data analysis and application are still in their infancy. DM is expected to greatly enhance the safe operation of NPPs in the following areas: equipment fault diagnosis, optimization of unit operation, and soft sensing.

4.1 Equipment Fault Diagnosis

Nuclear power plant maintenance currently follows so-called planned maintenance and troubleshooting, also known as post-maintenance. Its advantage is that planned maintenance makes maintenance planning and spares ordering easier and distributes costs more evenly. However, it can lead to over- or under-maintenance, because it does not provide the real-time status of the equipment. And because of the complexity of the equipment, the planned maintenance period is difficult to determine: a short period raises costs, while a prolonged period may lead to degraded equipment performance and even equipment failure. Faulty equipment leads to automatic unit shutdown or load reduction, and can even affect the safety of the unit. As described in Sect. 1, the condition of equipment can currently only be judged by whether the reference value of an operating parameter changes, i.e. whether the value exceeds its boundaries. This method can alert the operators to the equipment condition to a certain extent, but it fails to detect condition changes while the parameters are within


the normal range. Furthermore, it cannot identify the trend of condition changes and so cannot provide early warnings. Recently, for diagnosing rotating equipment such as steam turbines, most scholars have used the association rule learning method [10]. One study proposed an association rule learning method that expresses the relationships among the vibration signs, the thermal parameter data and the fault type as the confidence and support of association rules. It built a rule database for a specific unit and used the database to diagnose the turbine. The rule mining, judgment and result analysis process is shown in Fig. 1. Based on this method, the authors tested the effectiveness of diagnosis on a 900 MW turbine of a fossil power plant and provided the following example:

Fig. 1. Rule mining, judgment and result analysis process

The turbine rotor has two bearings; the horizontal (X-direction) and vertical (Y-direction) vibration of each bearing is monitored (for bearing 1, referred to as 1X and 1Y; for bearing 2, as 2X and 2Y). During the start-up stage of the turbine, 1X and 1Y soared; the highest value of 1X was 198 μm, which led to a turbine trip. As bearing 1 and bearing 2 are located on the same rotor, the vibration of bearing 2 also changed correspondingly, its highest value reaching 138 μm. Analysis of the thermal process parameters found that the main steam temperature (MST), the 100% high pressure cylinder temperature (HPC-T) and the high pressure cylinder exhaust steam temperature (HPCES-T) had reached their highest values before the turbine trip, as


shown in Fig. 2. The vibrations of bearing 1 and bearing 2 had both reached the trip limits, and the phase changes were very large, which matches the symptoms of a thermal unbalance fault. The association rule fault diagnosis system likewise diagnosed a thermal unbalance fault on bearing 1 and bearing 2, which confirms the accuracy of the diagnosis system. This method is also applicable to CI equipment in NPPs. The association rule fault diagnosis system can accurately determine the cause of a fault and help operators discover and eliminate it in time, ensuring the stable and safe operation of CI systems and equipment [11]. A minimal sketch of association rule mining for this purpose is given below.
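The sketch assumes the mlxtend library; the discretized symptom names and the five example events are invented for illustration, and this is not the cited authors' implementation:

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Each row is one historical event, one-hot encoded:
# True = symptom observed / fault confirmed.
events = pd.DataFrame({
    "vib_1X_high":             [True, True, False, True, False],
    "vib_phase_shift":         [True, True, False, False, False],
    "main_steam_T_high":       [True, True, True, False, False],
    "fault_thermal_unbalance": [True, True, False, False, False],
})

# Frequent symptom sets occurring in at least 40% of events.
frequent = apriori(events, min_support=0.4, use_colnames=True)

# Rules such as {vib_1X_high, main_steam_T_high} -> {fault_thermal_unbalance},
# kept only if reached with confidence >= 0.9.
rules = association_rules(frequent, metric="confidence", min_threshold=0.9)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```

The support and confidence columns correspond directly to the support and confidence degrees used to rank symptom-fault rules in the diagnosis system.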

Fig. 2. Main thermal parameters variation trend

Fault diagnosis has also been applied to heat exchangers, specifically the condenser, where neural networks and improved neural networks have proven useful. A nonlinear principal component analysis neural network (NLPCANN) is used to reduce the data dimension and extract features; then a probabilistic neural network (PNN) produces the final diagnosis. The process is shown in Fig. 3. The study identified 21 typical faults and 33 characteristic symptoms of the condenser. Two methods were used to diagnose abnormalities: one uses the PNN directly; the other uses NLPCANN and PNN together as described above. Table 2 shows the calculation results for an actual event in a fossil power plant, where ui represents the probability of each typical fault. Comparing the two methods, the results are nearly the same: u8 has a higher value than the others, meaning that the 8th fault, poor tightness of the vacuum system, is the cause of the condenser trouble, which coincides with the facts.
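The two-stage scheme can be sketched under stated simplifications: below, linear PCA stands in for the nonlinear PCA feature extractor, and the PNN is written out as its equivalent Parzen-window (Gaussian kernel) classifier; the data shapes are illustrative stand-ins, not the condenser data set:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 33))     # 33 condenser symptoms per sample
y_train = rng.integers(0, 21, size=200)  # 21 typical fault classes

pca = PCA(n_components=8).fit(X_train)   # reduce 33 symptoms to 8 features
Z_train = pca.transform(X_train)

def pnn_predict(z, Z, y, n_classes=21, sigma=0.5):
    """PNN output: per-class mean of Gaussian kernels around training points."""
    d2 = ((Z - z) ** 2).sum(axis=1)
    k = np.exp(-d2 / (2 * sigma ** 2))
    u = np.array([k[y == c].mean() if (y == c).any() else 0.0
                  for c in range(n_classes)])
    return u / u.sum()  # normalized fault probabilities, like u_i in Table 2

u = pnn_predict(pca.transform(X_train[:1])[0], Z_train, y_train)
print("most probable fault: u%d" % (u.argmax() + 1))
```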


Fig. 3. Schematic of fault diagnosis using nonlinear principal component analysis and a probabilistic neural network

Table 2. Comparison of the results of the two diagnostic methods

| Typical fault | u1 | u2 | u3 | u4 | u5 | u6 | u7 |
|---|---|---|---|---|---|---|---|
| Direct PNN | 0.036 | 0.091 | 0.091 | 0.079 | 0.041 | 0.164 | 0.100 |
| NLPCANN+PNN | 0.019 | 0.084 | 0.130 | 0.319 | 0.296 | 0.250 | 0.005 |
| Typical fault | u8 | u9 | u10 | u11 | u12 | u13 | u14 |
| Direct PNN | 0.449 | 0.164 | 0.230 | 0.122 | 0.164 | 0.340 | 0.110 |
| NLPCANN+PNN | 0.639 | 0.206 | 0.011 | 0.040 | 0.236 | 0.362 | 0.156 |
| Typical fault | u15 | u16 | u17 | u18 | u19 | u20 | u21 |
| Direct PNN | 0.110 | 0.202 | 0.340 | 0.139 | 0.202 | 0.230 | 0.340 |
| NLPCANN+PNN | 0.259 | 0.127 | 0.450 | 0.284 | 0.239 | 0.206 | 0.479 |

Method 1 (direct PNN): diagnostic time 74 μs; Method 2 (NLPCANN+PNN): diagnostic time 63 μs

The diagnostic results verify the reliability of the diagnosis method based on NLPCANN and PNN, with improved diagnostic speed, making it suitable for complex systems with high speed requirements. For diagnosis of the condenser in an NPP, method 2 can therefore determine the failure cause quickly and correctly.

Thermodynamic sensors are the primary means of acquiring production data, and these data serve as the basis for unit monitoring, control and analysis. The signal quality of the sensors is therefore critical: if the signal quality is bad, the subsequent responses of the control system or the operators may be wrong, potentially causing serious accidents. In recent years, researchers have used dynamic data mining (DDM) methods to evaluate sensor condition [12]. Thermodynamic parameter signals can be decomposed into a series of intrinsic mode functions and a trend residue using the empirical mode decomposition method, realizing dynamic mining of the characteristic information of sensor faults.
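A minimal sketch of this decomposition, assuming the third-party PyEMD package (EMD-signal); the drifting test signal is synthetic, whereas a real application would use archived sensor data:

```python
import numpy as np
from PyEMD import EMD

t = np.linspace(0, 10, 2000)
# Healthy oscillatory content plus a slow drift representing sensor degradation.
signal = np.sin(2 * np.pi * 5 * t) + 0.02 * t ** 2

emd = EMD()
emd.emd(signal, t)                           # empirical mode decomposition
imfs, residue = emd.get_imfs_and_residue()   # IMFs + trend residue

# A simple fault indicator: growth of the trend term over the window.
print("trend growth over window:", residue[-1] - residue[0])
```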


Applying DM technology to diagnosis enables predictive and proactive analysis of critical equipment, changes post-maintenance into predictive maintenance, and guides maintenance personnel to focus on equipment showing performance degradation, so that emergency repairs and unplanned shutdowns are effectively reduced through reasonable arrangement of repair plans.

4.2 Optimization of Unit Operation

Operation optimization of the thermal system of an NPP unit is one of the important means of improving efficiency and reducing cost by meeting a set of target values. The main methods of determining the optimal target values include:

(1) Mathematical model
This method builds mathematical models of the equipment or system and then performs the optimization. However, because the CI equipment works with wet steam, it is very difficult to establish proper thermodynamic models over the variable working conditions. Models are often established under simplifying assumptions, which limits their application.

(2) Engineering test optimization method
Engineering optimization is accomplished by performing a series of optimization tests under different unit loads, from which the optimal target values can be sought. Compared with the previous method, it is more reliable and accurate and can meet operation and maintenance requirements [13]. But as the unit operates, equipment wear and other factors change the equipment performance, the optimal target values change accordingly, and the original target values need to be revised by a new round of optimization tests. This method therefore requires more tests and greater economic investment. In addition, many parameters related to the unit operating cost are correlated and coupled, which increases the difficulty of determining the optimal values. The development of data-enabled science thus opens new ways to implement unit operation optimization.

Many existing methods in data mining, statistics and probability, such as association rule learning, graph theory and neural networks, can be used for system operation optimization. For instance, association rule learning is a common method and has been verified in fossil power plants. It consists of the steps of defining the related concepts, preprocessing the data, building the data structure, and generating the association rules. One study applied association rules to determine operation optimization target values for a fossil power plant [13]. It established a complete process model covering data preprocessing, rule evaluation and representation, and took the historical data of a 1000 MW fossil power plant as the mining target. Using this method, optimization target values were obtained for 9 important parameters that indicate the unit status under 100% load, as shown in Table 3; target values can also be obtained under other loads.


Table 3. Intervals of optimization target values under 100% load

| Parameter | Interval |
|---|---|
| Feed water flow (t·h⁻¹) | [2750, 2770] |
| Feed water temperature (°C) | [295.4, 295.8] |
| Main steam temperature (°C) | [601.5, 601.88] |
| Main steam pressure (MPa) | [24.92, 25.05] |
| Separator (steam tank) outlet steam pressure (MPa) | [27.3, 27.48] |
| Moisture separator outlet temperature (°C) | [417.5, 419] |
| Water/coal ratio | [8.51, 8.58] |
| Net coal consumption (g·kWh⁻¹) | [276.48, 279.06] |

In addition, an improved fuzzy association rule mining method has been used to extract association rules from operating history data to guide optimized operation, and its application in fossil power plants was verified to be successful [14, 15]. By mining historical data and carrying out optimization research on CI systems in this way, operators can be guided and the unit economy improved.

4.3 Soft Sensor Method

The soft sensor method (SSM) provides a relatively new approach: a mathematical model or relationship is established using variables that are easily measured, and the model is then used to estimate important variables that are difficult or impossible to measure because of measurement technology restrictions. In NPP operation, variables such as main steam flow, enthalpy and steam humidity are not measured directly, and SSM can be used for them. For example, the main steam flow is one of the most important variables for monitoring turbine performance and controlling the operation process. Currently, the differential pressure method, using a throttling device and a differential pressure transmitter, is used to measure flow. But the throttling device causes a pressure loss in the measured medium: using this method to measure the main steam flow may reduce the quality of the main steam and cause a loss of 1%–2% of the turbine output. Moreover, the working temperature, pressure and flow of the main steam may change significantly when the unit load changes, which decreases the accuracy of the differential pressure measurement. Therefore, large fossil power plants and NPPs usually do not use a flow measurement device for the main steam flow, but apply SSM with the relevant variables. Traditionally, the combination of the Flugel formula and the law of conservation of mass has been the main method of calculating the main steam flow; the method has been improved, but poor accuracy remains an issue. In recent years, the use of DM methods to solve this problem has achieved some research results. For example, one study proposed an SSM model for main steam flow calculation based on a generalized regression neural network [16]. The variables of the model are effectively reduced by screening on the mean influence value, and the generalization


ability of the model is improved by optimizing the distribution density, so as to provide an effective calculation of the main steam flow. The generalized regression neural network (GRNN) structure is shown in Fig. 4. A case was studied based on 20 sets of data taken after calibration of the main steam flow of a 600 MW fossil power plant. First, the

Fig. 4. Generalized regression neural network structure

GRNN input vectors are preprocessed by normalization:

\( x'(i,j) = \dfrac{x(i,j) - x_{\min}(j)}{x_{\max}(j) - x_{\min}(j)} \)    (1)

where x(i, j) is the input vector value of the i-th variable of the j-th sample; x_max(j) and x_min(j) are the maximum and minimum values of the j-th index; and x'(i, j) is the normalized value. Through this transformation, the effects of differences in meaning and unit on the mean influence value and on the GRNN model are avoided. Then the variables are screened based on V_imp,avg, the mean influence value of each input variable on the dependent variable. The first N variables accounting for 85% of the total influence value are taken as the inputs of the network; the mean influence values, in order, are 0.0307, 0.0274, 0.0231, 0.0213, 0.0184, 0.0154, 0.0124, 0.008 and 0.0073, representing respectively high pressure condenser pressure (HPC-P), low pressure condenser pressure (LPC-P), governing stage pressure (GS-P), main steam pressure (MS-P), high pressure cylinder exhaust pressure (HPCE-P), main condensate flow (MC-F), reheater hot side steam temperature (RHSS-T), generator power (G-P) and feed water flow (FW-F). These 9 variables account for 85.33% of the total. The sample data after screening are shown in Table 4.
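The screening step can be reproduced in a few lines (the four trailing influence values are invented so that the nine listed variables account for 85.33% of the total, as stated above):

```python
import numpy as np

# The 9 reported mean influence values, already in descending order,
# padded with 4 illustrative smaller values so the totals match 85.33%.
miv = np.array([0.0307, 0.0274, 0.0231, 0.0213, 0.0184, 0.0154, 0.0124,
                0.0080, 0.0073, 0.0072, 0.0072, 0.0070, 0.0068])
share = np.cumsum(miv) / miv.sum()           # cumulative influence share
n = int(np.searchsorted(share, 0.85)) + 1    # smallest N covering >= 85%
print(n, f"{share[n - 1]:.2%}")              # -> 9, 85.33%
```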


cylinder exhaust pressure (HPCE-P), main condensate flow (MC-F), reheater hot side steam temperature (RHSS-T), generator power (G-P) and feed water flow (FW-F). The 9 variables is accounted for 85.33%. And the sample data after the filtering is shown in Table 4. Table 4. Sample data of main steam flow soft calculation

1 2 3   18 19 20

HPC- LPC-P P (kPa) (kPa)

GS-P (kPa)

MS-P (kPa)

HPEC- MC-F P (t h1 ) (kPa)

RHSS- G-P T (MW) (°C)

FW-F (t h1 )

Main steam flow (t h1 )

5.57 5.58 5.18   4.22 4.24 3.25

10.52 10.53 9.77   7.96 7.99 6.13

14.56 14.48 14.52   12.60 12.65 9.69

3.708 3.712 3.445   2.807 2.818 2.157

538.25 537.63 538.47   538.86 536.46 538.11

1766.02 1722.64 1656.41   1277.51 1329.51 930.39

1805.75 1807.62 1677.52   1367.09 1372.17 1050.82

3.78 3.78 3.51   2.85 2.86 2.18

1435.63 1443.57 1340.68   1119.97 1115.56 883.91

581.60 581.92 540.29   454.54 450.03 356.46

In Table 4, the first 15 samples were used for model training, and the last 5 samples were used for model testing. The first 15 sets of data were loaded via Matlab programming, different values of the distribution density D_s were tried, and the variation of the error δ with D_s was obtained so as to determine the value of D_s at which δ is smallest; at that point the network has high training precision and good generalization ability. The SSM model for the main steam flow was then established, taking the 9 variables as inputs and the optimized D_s as the network's distribution density parameter. Finally, the 5 reserved sets of sample data were used to test the model, with errors defined as:

\( \delta(i) = X(i) - X'(i) \)    (2)

\( \Delta\delta(i) = \dfrac{\delta(i)}{X(i)} \)    (3)

where X(i) and X'(i) are the actual value and the model output value; δ(i) is the absolute error between them; and Δδ(i) is the relative error. The comparison results are shown in Table 5. From the table, the relative errors are within a reasonable range and can fully meet the requirements. Therefore, the flow measurement device for the main steam flow can be replaced by the SSM. Applying DM technology to SSM can compensate for the shortcomings of commonly used instruments and traditional calculation methods, and is of great significance for improving unit performance and reducing project cost.
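A minimal sketch of the GRNN soft sensor itself (a Nadaraya-Watson kernel regression, with sigma playing the role of the distribution density D_s; the data here are random stand-ins, not the paper's calibration samples):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Generalized regression neural network estimate for each query row."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2 * sigma ** 2))      # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)    # summation / output layer

rng = np.random.default_rng(1)
X = rng.random((15, 9))                # 15 training samples, 9 screened inputs
y = 900.0 + 900.0 * X.mean(axis=1)     # stand-in for main steam flow (t/h)
X_test = rng.random((5, 9))            # 5 held-out samples, as in the paper
y_test = 900.0 + 900.0 * X_test.mean(axis=1)

for s in (0.3, 0.5, 1.0, 2.0):         # scan sigma, keep the lowest error
    delta = np.abs(grnn_predict(X, y, X_test, s) - y_test)
    print(f"sigma={s}: mean |delta| = {delta.mean():.2f} t/h")
```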

Table 5. Comparison results

| Mode | Actual value (t·h⁻¹) | Model output value (t·h⁻¹) | Absolute error (t·h⁻¹) | Relative error (%) |
|---|---|---|---|---|
| 1 | 1670.86 | 1661.65 | 9.21 | 0.5510 |
| 2 | 1655.97 | 1646.94 | 9.03 | 0.5455 |
| 3 | 1367.09 | 1372.59 | −5.50 | −0.4023 |
| 4 | 1372.17 | 1374.60 | −2.43 | −0.1768 |
| 5 | 1050.82 | 1061.67 | −10.85 | −1.0329 |

5 Conclusions

Based on DM technology, this paper puts forward a solution to the inefficient use of the large volume of CI production data in NPPs. The application examples analyzed show that DM has great prospects in the CI, whose large volume of production data is characterized by high dimensionality and multiple couplings. In the CI, DM can be applied extensively to equipment fault diagnosis, unit operation optimization and soft-sensing calculation, further improving the safety and economy of the unit.

References
1. Optimized Capacity: Global Trends and Issues, 2014 edn. A Report by the World Nuclear Association's Capacity Optimization Working Group
2. Xu, J.G.: Progress in the design of DCS for large-scale thermal power plants. Electr. Power 39(10), 84–87 (2006)
3. Wang, Y.M., Zhang, L.Z., Xu, D.M., Ma, H.L.: Application of characteristic flow area of steam turbine. J. Eng. Therm. Energy Power 27(2), 160–164 (2012)
4. Li, H.: Development of Database of Rotating Machine History Fault-cases and Precision Diagnosis. North China Electric Power University (2004)
5. Olaru, C., Geurts, P.: Data mining tools and applications in power system engineering. In: Proceedings of the 13th Power System Computation Conference, Norway (1999)
6. Quinlan, J.R.: Induction of decision trees. Mach. Learn. 1(1), 81–106 (1986)
7. Garcez, T., Miranda, V.: Knowledge discovery in neural networks with application to transformer failure diagnosis. IEEE Trans. Power Syst. 20(2), 717–724 (2005)
8. Pawlak, Z.: Rough sets. Int. J. Parallel Program. 11(5), 341–356 (1982)
9. Su, H.C., Sun, X.F., Yu, J.L.: A survey on the application of rough set theory in power systems. Autom. Electr. Power Syst. 28(3), 90–95 (2004)
10. Han, H.: The Research of Vibration Fault Diagnosis System for 900 MW Turbine Based on Data Mining. Shanghai Jiao Tong University (2009)
11. Hou, G.L., Sun, X.G., Zhang, J.H., Jin, W.G.: Research on fault diagnosis of condenser via nonlinear principal component analysis and probabilistic neural networks. Proc. Chin. Soc. Electr. Eng. 25(18), 104–108 (2005)

12. Li, W., Yu, Y.L., Sheng, D.R., Chen, J.H.: Fault diagnosis of thermodynamic parameter sensors based on dynamic data mining. J. Vib. Meas. Diagn. 36(4), 694–699 (2016)
13. Zheng, X.X., Yang, H.Y., Gu, J.J.: Optimization of the targeted value for thermal power based on association rules. Electr. Power Sci. Eng. 26(9), 48–51 (2010)
14. Li, J.Q., Liu, J.Z., Zhang, L.Y., Niu, C.L.: The research and application of fuzzy association rule mining in power plant operation optimization. Proc. Chin. Soc. Electr. Eng. 26(20), 118–123 (2006)
15. Li, J.Q., Niu, C.L., Liu, J.Z.: Application of data mining technique in optimizing the operation of power plants. J. Power Eng. 26(6), 830–835 (2006)
16. Wang, J.X., Fu, Z.G., Jing, T., Chen, Y.: Main steam flow measurement based on generalized regression neural network. Power Eng. 32(2), 130–134, 158 (2012)

A Hierarchically Structured Down-Top Test Equipment Debugging Method for RPS

Wang Xi(&), Tao Bai, Peng-Fei Gu, Wei Liu, and Wei-Hua Chen

State Key Laboratory of Nuclear Power Safety Monitoring Technology and Equipment, I&C Equipment Qualification and Software V&V Laboratory, China Nuclear Power Engineering Co., Ltd., Shenzhen 518172, Guangdong, China [email protected]

Abstract. The reactor protection system (RPS) plays a critical role in the digital control system (DCS) and ensures the safety of the nuclear power plant (NPP). System testing is a necessary step during system development and verification and validation (V&V), which ensures the safety and reliability of the RPS. The debugging of the test environment and equipment is an important step that ensures the effectiveness and efficiency of testing. A system such as the RPS, which contains complicated logic and a large number of interfaces, costs much time and human resource to debug. A structured debugging method is proposed in this paper: it establishes a debugging architecture with a hierarchical model according to the signal transmission path, and designs the debugging process from bottom to top. Results from engineering practice show that this method improves the effectiveness and efficiency of debugging and provides support and reference for the establishment of system test environments.

Keywords: NPP · DCS · RPS · System test · Equipment · Debugging

1 Introduction

The key point of digital technology in NPPs is the introduction of safety software; the performance of the software directly affects the safety and reliability of the NPP [1, 2]. The reactor protection system (RPS) plays a critical role in the digital control system (DCS) and ensures the safety of the nuclear power plant (NPP). System testing is a necessary step in system development and verification and validation (V&V), ensuring the safety and reliability of the RPS [3–5]. The debugging of the test environment and equipment is an important step that ensures the effectiveness and efficiency of testing. For a system like the RPS, which contains complicated logic and a large number of interfaces, a test environment established with unstructured debugging cannot ensure an adequate and correct test, and results in much rework that costs a great deal of time and human resources [6]. Therefore, to improve debugging efficiency and effectiveness and to save human and time cost, this paper researches a structured debugging method for digital RPS test equipment.

2 Test Architecture

2.1 System and Equipment

The test architecture of the RPS is described in Fig. 1. It includes the user interface, test tools and target system, and achieves the following functions [7]:

Fig. 1. Test architecture

(1) The user interface provides configuration functions for the operator, including test conditions, test cases and test data;
(2) The test tool provides translation and simulation for signal transmission and reflects the reception back to the user interface.

2.2 Test Interface

According to the signal transmission paths, the test architecture contains the following interfaces:

(1) The interfaces between signal names and the slot table;
(2) The interfaces between the slot table and the I/O card of the control machine;
(3) The interfaces between the I/O card of the control machine and the recuperate board;
(4) The interfaces between the recuperate board and the target system (cabinet).

As shown in Fig. 2, among the large number of interfaces, an error in any interface may lead to erroneous test results; and because there are multiple interface levels, interface errors should be detected and confirmed at each level. Therefore, fast and effective debugging is the key to ensuring a correct test environment.

3 Hierarchical Debugging Model

3.1 Structured Multi-level

According to the signal interface loop shown in Fig. 2, the test architecture has been abstracted into a structured multi-level model, called the test V model. Its characteristics are described as follows:

Fig. 2. Signal interface loop

Fig. 3. Debugging model

(1) The test architecture is divided into five levels: user-level, mapping-level, board-level, cabinet-level and system-level; arranged along the signal transmission path, they can be described as a V model;
(2) The user-level is used by the operator to configure and monitor the input and output data;
(3) The mapping-level connects the signal names to the slots and I/O interfaces of the control machine;
(4) The board-level inputs or outputs signals by translating them into voltage and current;
(5) The cabinet-level transports the signals between the board and the target system;
(6) The system-level is the internal level of the target system, where the signals can be monitored and changed by software; the monitor software is important for checking the correctness of debugging at the higher levels.

3.2 Debugging Criterion

Based on the debugging model, this paper proposes a down-top debugging method. The general debugging criteria of this method are as follows (a minimal sketch of the gated progression is given after this list):

• Each level can be considered a debugging level; to ensure that the debugging result is not affected by a lower level, debugging cannot move to the upper level until the lower level has been fully debugged;
• The transition from a lower to a higher level is a phase transition, called the "gate"; the condition to pass the gate is the transition criterion;
• The general transition criterion is that the whole interface of the lower level, including input and output, has been correctly checked, so that the lower level can be "closed".
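The following minimal Python sketch illustrates this gated, down-top progression; the level names come from the model, while the gate-check functions are hypothetical placeholders for the real interface checks:

```python
# Hypothetical sketch of the down-top, gated debugging flow: each level must
# pass its gate (all its interfaces checked) before debugging moves up a level.
from typing import Callable, List, Tuple

def run_debugging(levels: List[Tuple[str, Callable[[], bool]]]) -> bool:
    for name, gate_check in levels:          # ordered bottom (system) to top (user)
        if not gate_check():
            print(f"{name}: gate NOT passed - stay at this level and re-debug")
            return False
        print(f"{name}: gate passed, level closed")
    return True

# Placeholder gate checks; real ones would verify every input/output interface.
levels = [
    ("system-level",  lambda: True),
    ("cabinet-level", lambda: True),
    ("board-level",   lambda: True),
    ("mapping-level", lambda: True),
    ("user-level",    lambda: True),
]
run_debugging(levels)
```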

4 The Down-Top Debugging Method

4.1 Debugging Process and Gates

The down-top debugging method proposed in this paper starts at the bottom level and ends at the top level. Table 1 describes the process and gate for each phase.

Table 1. Debugging method and gate

System-level
  Method: Confirm the correct connection between the monitor and the system logic: signal changes can be monitored effectively in the software, and signals can be changed by the monitor at system level.
  Gate: Monitor successfully connected.

Cabinet-level
  Method: Inject signals at the cabinet-level interface and check the signal changes at system level with the monitor; change signals at system level with the monitor and check the output interface at cabinet level.
  Gate: The input and output signals at cabinet-level change consistently with the monitor.

Board-level
  Method: Check the connection between board and cabinet against the link interface table; inject signals at board level and check the signal variation in the monitor; change signals in the monitor and check the output at board level.
  Gate: Connections between board and cabinet interfaces are correct; the input and output signals at board-level change consistently with the monitor.

Mapping-level
  Method: Check the connections between software and I/O slots in the control machine against the mapping table; inject signals at mapping-level with tools (such as NI MAX) and check the signal changes in the monitor; change signals in the monitor and check the output at mapping-level.
  Gate: Connections between mapping-level and control machine are correct; the input and output signals at mapping-level change consistently with the monitor.

User-level
  Method: Check the configuration table that describes the connections between signal names and I/O slot names at mapping-level; load and inject the test cases, including input values and expected outputs, check the signal changes in the monitor, and check the correctness of the real outputs.
  Gate: Connections between signal names and mapping-level are correct; the outputs change as expected.

Table 2. Comparison of debugging methods

                          Unstructured  Down-top method  Comparison
Human cost (person/day)   4             2                50%↓
Rounds                    4             2                50%↓
Re-debugging interfaces   63            7                80%↓

4.2 Engineering Practice

Based on this debugging method, this paper carried out a practice on RPS test equipment containing 200 interfaces. In this practice, both the unstructured and the down-top debugging methods were used for comparison. The unstructured debugging method starts at user-level with test cases and lacks organization: more time is spent locating where a fault happened, the total debugging rounds and re-debugged interfaces increase, and this finally results in a large human and time cost. The down-top method may spend more time on interface checking at the lower levels, but in this way the correctness of the interfaces at the higher levels can be ensured. As shown in Table 2, to debug the same number of interfaces, the down-top method proposed in this paper reduced the human cost and debugging rounds by 50% compared with the unstructured method, and the number of re-debugged interfaces decreased by 80%.

5 Conclusions

A hierarchically structured debugging method is proposed in this paper. According to the signal transmission path, a multi-level test V model is abstracted and developed into a down-top debugging process. This process improves the correctness and stability of the test environment by debugging from the bottom level to the top level. In engineering practice, the method establishes a well-organized debugging procedure that improves debugging efficiency and directly saves human and time cost. It also provides support and reference for test environment establishment and for the debugging of other test equipment.

References

1. Ding, Y.X., Gu, P.F., et al.: Study on standard about safety digital I&C system in NPP. Process Autom. Instrum. 36(11), 61–64 (2015)
2. International Electrotechnical Commission: IEC 60880, Nuclear power plants – Instrumentation and control systems important to safety – Software aspects for computer-based systems performing category A functions. IEC, Switzerland (2006)

3. Gu, P.F., Xi, W., Chen, W.H., et al.: Evaluation system of software concept V&V about the safety digital I&C system in nuclear power plant. In: International Symposium on Software Reliability, Industrial Safety, Cyber Security and Physical Protection for Nuclear Power Plant, vol. 400, pp. 125–132. Springer, Singapore (2016)
4. V&V Software Engineering Standards Committee of the IEEE Computer Society: IEEE 1012, IEEE Standard for Software Verification and Validation. Institute of Electrical and Electronics Engineers, New York (2004)
5. He, Y.N., Gu, P.F., Xi, W.: Research on digital control system status monitoring and reliability prediction method for nuclear power plant. Atomic Energy Sci. Technol. 51(12), 2338–2343 (2017)
6. Xiao, P., Zhou, J.X., Liu, H.C.: Relationship between architecture of reactor protection system and reliability. Nucl. Power Eng. 34(S1), 179–183 (2013)
7. Xu, H.L.: The design and realization of nuclear power plant DCS test instrument. North China Electric Power Univ. 3, 4–6 (2016)

Discussion for Uncertainty Calculation of Containment Leakage Rate Yu Sun(&), Jun Tian, Tian-You Li, and Zhao-yang Liu State Key Laboratory of Nuclear Power Safety Monitoring Technology and Equipment, China Nuclear Power Engineering Company Ltd, Shenzhen 518172, Guangdong, China [email protected]

Abstract. This paper discusses the uncertainty calculation process of the containment leakage rate. The containment leakage rate is obtained by linear regression on the change of the containment air standard volume. The uncertainty of the leakage rate is the combination of the Type A and Type B evaluation uncertainties. The Type A evaluation uncertainty reflects the effect of random fluctuation of the containment air volume on the regression line slope and is calculated directly using statistical theory. The Type B evaluation uncertainty can be calculated based on empirical data and statistical theory. The uncertainty of the sensor data is analyzed based on empirical data. With the sensor data uncertainty known, the leakage rate calculation is further decomposed into several steps and the uncertainty calculation of each step is established according to statistical theory. Together, these steps determine the Type B evaluation method.

Keywords: Containment · Leakage rate · Uncertainty · On-line monitoring

1 Introduction

The containment leakage rate on-line monitoring system in a nuclear power plant monitors the change in containment tightness and provides the containment leakage rate during power operation. The difference between the containment on-line monitoring system and the containment overall tightness test (Type A test) is that the latter validates the containment performance under Loss of Coolant Accident conditions, and its objective is to measure the total containment leakage rate under the design pressure. For the overall tightness test, the leakage is from concrete pores and cracks, while during power operation the containment leakage is mainly penetration leakage [1]. The requirement for a containment leakage rate on-line monitoring system is presented in the European Utility Requirements for LWR Nuclear Power Plants (EUR) and the Advanced Light Water Reactor Utility Requirements Document (URD) [2, 3]. A similar requirement is also put forward in HAD102-06, Design of Containment System for Nuclear Power Plant Reactor, drafted in 2009 for update.

The measurement result of the containment leakage rate consists of the best-estimate value and its uncertainty. In this article, a calculation method for the leakage rate uncertainty is discussed.

2 Brief Introduction to the Calculation Method of Containment Leakage Rate

At present, the mass conservation method is widely used around the world to calculate the containment leakage rate. According to the equation of state of the ideal gas, the standard volume of air is equivalent to the mass of air; the standard state is defined as 0 °C and 1.01325 × 10⁵ Pa. A series of standard volume changes, denoted ΔVh, collected over one day are fitted linearly against time, and the slope of the line is the leakage rate for that day. This calculation method is based on the principle of mass conservation and is adopted by pressurized water reactor nuclear power plants such as the AP1000. In the AP1000, however, the standard volume of air in the containment is fitted linearly to obtain the leakage directly; if the air volume data were displayed that way, the volume change would be invisible compared with the total volume, so in this paper the change of air volume is used to calculate the leakage. In addition, in civil nuclear power plants compressed air is used to drive the containment isolation valves, which interferes with the leakage measurement, so the compressed air volume must be deducted in the calculation. The calculation method for ΔVh is as follows:

\[ \Delta Vh(t) = V_{NH}(t) - V_{NH}(t_0) - \int_{t_0}^{t} Q_{sar}(t)\,dt \quad (1) \]

where
V_NH(t): the standard volume of containment air at the present time t (Nm³);
V_NH(t₀): the standard volume of containment air at the reference time t₀ (Nm³);
Q_sar: the standard volume flow rate of compressed air injected into the containment from t₀ to t, which disturbs the leakage rate measurement and should be deducted (Nm³/h).
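A minimal numerical sketch of Eq. (1) is given below, assuming half-hourly samples; the data values are illustrative only, and the integral of Q_sar is approximated by a cumulative sum:

```python
import numpy as np

def delta_vh(v_nh: np.ndarray, q_sar: np.ndarray, dt_h: float = 0.5) -> np.ndarray:
    """DeltaVh(t_i) per Eq. (1): change of the containment air standard volume
    relative to t0, minus the injected compressed-air standard volume.

    v_nh  : standard volumes V_NH(t_i) in Nm3; v_nh[0] is the reference V_NH(t0)
    q_sar : compressed-air standard volume flow rate, Nm3/h, sampled with v_nh
    dt_h  : sampling interval in hours
    """
    injected = np.cumsum(q_sar) * dt_h
    injected -= injected[0]            # no injection accumulated at t0 itself
    return (v_nh - v_nh[0]) - injected

# Illustrative data: 48 half-hourly samples of a slowly leaking volume
t = np.arange(48) * 0.5
v = 50000.0 - 0.9 * t                  # Nm3, synthetic
q = np.full(t.size, 0.2)               # constant injection, Nm3/h
print(delta_vh(v, q)[:4])              # first few DeltaVh values
```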

3 Present Situation of the Uncertainty Evaluation Method for Containment Leakage Rate

The uncertainty of the containment leakage rate is used to measure the reliability of the leakage rate calculation result. When the uncertainty is too high, the input data should be reprocessed and the leakage rate recalculated. Factors contributing to high leakage rate uncertainty include [1]:

(1) Exhaust of the containment ventilation system, causing a change of the containment air volume;
(2) A change of the containment leakage rate;

(3) Transient operation, resulting in a sudden change in the calculated volume of containment air.

There is a lack of information on the uncertainty calculation method for the containment leakage rate both at home and abroad. The calculation method is not given in the EUR, URD or HAD documents, and domestic research papers on this uncertainty calculation are still lacking. In the algorithm description of the French leakage rate monitoring software (SEXTEN) commonly used in the industry, the Type A evaluation formula is based on simulation and its detailed derivation is not available, while the Type B evaluation is given as a fixed value whose calculation method is also missing [4]. Based on statistical theory, the method of calculating the leakage rate uncertainty is discussed in this paper on the basis of the mass conservation method.

4 Method for Evaluating the Uncertainty of Containment Leakage Rate

According to the rules in JJF 1059.1-2012, Evaluation and Expression of Uncertainty in Measurement, the uncertainty of a measured value is composed of several components [5]. An experimental standard deviation based on a series of measured values is denoted the Type A evaluation of standard measurement uncertainty. The standard deviation obtained from a prior probability distribution estimated from relevant information is denoted the Type B evaluation of standard measurement uncertainty. The total measurement uncertainty is the combination of the Type A and Type B uncertainties.

4.1 Calculation Method for the Type A Evaluation Uncertainty of Containment Leakage Rate

Assuming 48 air standard volume variations (ΔVh) are used for the linear fit of the containment leakage rate, the line slope is the leakage rate Q_ld. The Type A evaluation uncertainty of Q_ld comes from the dispersion of the ΔVh data points, which is caused by random fluctuation of the thermal conditions in the containment. This uncertainty is denoted u_A(Q_ld). Let:

X_i = t_i, i = 1…N, N = 48, t_i = 0 h, 0.5 h, 1 h, 1.5 h, …, 23.5 h
Y_i = ΔVh(t_i)

Based on statistical theory, the standard uncertainty of the line slope obtained by least squares is [6]:

\[ u_A(Q_{ld}) = \sqrt{\frac{\sigma^2}{\sum_{i=1}^{N}(X_i - \bar{X})^2}} \quad (2) \]

In the equation, σ² is the variance around the fitted line (the residual variance), calculated as:

\[ \sigma^2 = \frac{\sum_{i=1}^{N}(Y_i - a_0 - a_1 X_i)^2}{N - 2} \quad (3) \]

where a₁ and a₀ are the slope and intercept of the regression line.
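A minimal Python sketch of Eqs. (2) and (3), fitting the slope by least squares on synthetic half-hourly ΔVh data (the values are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.arange(48) * 0.5                          # t_i = 0, 0.5, ..., 23.5 h
Y = -0.8 * X + rng.normal(0.0, 0.5, X.size)      # synthetic DeltaVh(t_i), Nm3

N = X.size
a1, a0 = np.polyfit(X, Y, 1)                     # slope Q_ld and intercept

sigma2 = np.sum((Y - a0 - a1 * X) ** 2) / (N - 2)        # residual variance, Eq. (3)
uA = np.sqrt(sigma2 / np.sum((X - X.mean()) ** 2))       # slope uncertainty, Eq. (2)
print(f"Q_ld = {a1:.3f} Nm3/h, u_A(Q_ld) = {uA:.3f} Nm3/h")
```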

4.2 Calculation Method for the Type B Evaluation Uncertainty of Containment Leakage Rate

The uncertainty component evaluated by a method different from that of Type A is denoted the Type B evaluation uncertainty. The evaluation can be based on information such as the following [5]:

(1) Quantities issued by an authority;
(2) The values of certified reference materials;
(3) Calibration certificates;
(4) The drift of the instrument;
(5) The accuracy class of verified measuring instruments;
(6) Limit values inferred from personal experience, etc.

In this paper, the leakage rate calculation process is decomposed into several steps, and the uncertainty calculation method for each step is derived.

4.2.1 Sensor Uncertainty

The containment temperature, pressure and compressed air flow, which are the input data for the leakage rate calculation, are acquired by field instrumentation. The sensor uncertainty can be obtained from the manufacturer; some examples are described below.

(1) Temperature Sensor

Assuming a Class A thermal resistance is used and the measured average temperature is t_iavg, the standard uncertainty at that temperature is [7]:

\[ u_B(t_{iavg}) = 0.15 + 0.002\,|t_{iavg}| \quad (4) \]

(2) Pressure Sensor

The Type B evaluation uncertainty of the pressure sensor is combined from several uncertainty components. Assuming the components include the reference uncertainty e₁ and the correction uncertainty e₂, the combined uncertainty is [8]:

\[ u_B(p_{avg}) = \sqrt{\frac{e_1^2}{1.96^2} + \frac{e_2^2}{3}} \quad (5) \]

(3) Flow Sensor of Compressed Air

The uncertainty of the compressed air flow sensor can be obtained from the sensor's datasheet and is denoted u_B(Q_sar).

4.2.2 The Uncertainty of the Average Temperature of the Containment Air

Suppose the containment leakage rate on-line monitoring system uses n temperature sensors to measure the containment air temperature every half hour. The measured value of each temperature sensor represents the average temperature of a part of the free-space air, so the average temperature of the containment air, T_avg, is a weighted average according to the volume of air measured by each sensor. The calculation model is:

\[ T_{avg} = \frac{V_L}{\sum_{i=1}^{n} V_i / t_{iavg}} \quad (6) \]

where
V_L: the volume of free space in the containment, in m³;
V_i: the volume of air measured by each sensor, in m³;
t_iavg: the average air temperature measured by each sensor, in K.

Because the measurements of the temperature sensors in the containment are independent of each other, the correlation coefficient of the uncertainties of any two temperature sensors is zero. According to the uncertainty combination theory, the combined uncertainty of the air average temperature for each half hour is [5]:

\[ u_B(T_{avg}) = \sqrt{\sum_{i=1}^{n}\left(\frac{\partial T_{avg}}{\partial t_{iavg}}\right)^2 u_B^2(t_{iavg})} \quad (7) \]

Substituting Eq. (6) into Eq. (7), we obtain:

\[ u_B(T_{avg}) = \sqrt{\sum_{i=1}^{n}\left[\frac{V_i V_L}{\left(\sum_{j=1}^{n} V_j / t_{javg}\right)^2 t_{iavg}^2}\right]^2 u_B^2(t_{iavg})} = \sqrt{\frac{T_{avg}^4}{V_L^2}\sum_{i=1}^{n}\frac{V_i^2}{t_{iavg}^4}\, u_B^2(t_{iavg})} \quad (8) \]

Let

\[ V_i = v_i V_L \quad (9) \]

Equation (8) can then be expressed as:

\[ u_B(T_{avg}) = \sqrt{\sum_{i=1}^{n}\frac{T_{avg}^4}{t_{iavg}^4}\, v_i^2\, u_B^2(t_{iavg})} \quad (10) \]
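A minimal numerical sketch of Eq. (10) follows; the sensor temperatures, volume fractions and sensor uncertainty (taken from Eq. (4) with the temperature converted to °C) are illustrative assumptions:

```python
import numpy as np

t_iavg = np.array([303.0, 305.5, 301.2, 304.8])      # sensor temperatures, K
v_i = np.array([0.30, 0.25, 0.25, 0.20])             # volume fractions v_i = V_i / V_L
u_t = 0.15 + 0.002 * np.abs(t_iavg - 273.15)         # Eq. (4), Class A sensor, in K

T_avg = 1.0 / np.sum(v_i / t_iavg)                   # Eq. (6) rewritten with v_i
u_T = np.sqrt(np.sum(T_avg**4 / t_iavg**4 * v_i**2 * u_t**2))   # Eq. (10)
print(f"T_avg = {T_avg:.2f} K, u_B(T_avg) = {u_T:.3f} K")
```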

In total, 48 average-temperature uncertainties can be calculated for the containment air in one day.

4.2.3 The Uncertainty of the Standard Volume of the Containment Air

The standard volume V_H(t_i) of the air in the containment is calculated every half hour from the average pressure p_iavg and the average temperature T_iavg within that half hour. According to the thermodynamic law, the calculation model is:

\[ V_H(t_i) = \frac{T_N V_L}{P_N}\cdot\frac{p_{iavg}}{T_{iavg}} \quad (11) \]

where
T_N: the standard state temperature, 273.15 K;
P_N: the standard state absolute pressure, 1.01325 × 10⁵ Pa;
V_L: the volume of free space in the containment, in m³;
p_iavg: the average pressure in the containment at the moment t_i, in Pa;
T_iavg: the average temperature in the containment at the moment t_i, in K;
i: the i-th half hour in a day, ranging from 1 to 48.

Let

\[ k = \frac{T_N V_L}{P_N} \quad (12) \]

Then Eq. (11) can be expressed as:

\[ V_H(t_i) = k\cdot\frac{p_{iavg}}{T_{iavg}} \quad (13) \]

Since p_iavg and T_iavg are independent, according to the uncertainty combination theory the uncertainty of V_H(t_i) is:

\[ u_B(V_H(t_i)) = \sqrt{\left(\frac{\partial V_H(t_i)}{\partial p_{iavg}}\right)^2 u_B^2(p_{iavg}) + \left(\frac{\partial V_H(t_i)}{\partial T_{iavg}}\right)^2 u_B^2(T_{iavg})} = k\sqrt{\frac{1}{T_{iavg}^2}\, u_B^2(p_{iavg}) + \frac{p_{iavg}^2}{T_{iavg}^4}\, u_B^2(T_{iavg})} \quad (14) \]

In total, 48 air standard volume uncertainties can be calculated for the containment air in one day.
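The following minimal sketch evaluates Eqs. (13) and (14) for one half-hourly sample; V_L and the measured values are illustrative assumptions:

```python
import numpy as np

T_N, P_N, V_L = 273.15, 1.01325e5, 50000.0    # standard state; V_L is illustrative
k = T_N * V_L / P_N                            # Eq. (12)

def vh_and_uncertainty(p, T, u_p, u_T):
    """Standard volume V_H(t_i), Eq. (13), and its Type B uncertainty, Eq. (14)."""
    vh = k * p / T
    u_vh = k * np.sqrt(u_p**2 / T**2 + p**2 * u_T**2 / T**4)
    return vh, u_vh

vh, u_vh = vh_and_uncertainty(p=1.05e5, T=303.2, u_p=120.0, u_T=0.35)
print(f"V_H = {vh:.1f} Nm3, u_B(V_H) = {u_vh:.2f} Nm3")
```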

4.2.4 The Type B Evaluation Uncertainty of the Containment Leakage Rate

According to the principle of the least squares method used for the linear regression, the leakage rate Q_ld is calculated as:

\[ Q_{ld} = \left[\sum_{i=1}^{N} X_i Y_i - \frac{\left(\sum_{i=1}^{N} X_i\right)\left(\sum_{i=1}^{N} Y_i\right)}{N}\right]\Bigg/\left[\sum_{i=1}^{N} X_i^2 - \frac{\left(\sum_{i=1}^{N} X_i\right)^2}{N}\right] \quad (15) \]

where
X_i: the time of each half hour in a day, taking the values 0 h, 0.5 h, 1 h, 1.5 h, …, 23.5 h;
Y_i: the change in standard volume, ΔV_H(t_i), of the containment air in each half hour relative to the reference moment t₀;
N: a constant representing the total number of measurement time points, equal to 48 in this case.

Here N and X_i are constants; let

\[ 1\Bigg/\left[\sum_{i=1}^{N} X_i^2 - \frac{\left(\sum_{i=1}^{N} X_i\right)^2}{N}\right] = A \quad (16) \]

\[ \frac{\sum_{i=1}^{N} X_i}{N} = B \quad (17) \]

Equation (15) can then be expressed as:

\[ Q_{ld} = A\left(\sum_{i=1}^{N} X_i Y_i - B\sum_{i=1}^{N} Y_i\right) = A\sum_{i=1}^{N}(X_i - B)\, Y_i \quad (18) \]

The change in standard volume of the containment air, Y_i, is the difference between the moment t_i and the reference moment t₀, with the volume of the compressed air injected between t₀ and t_i further deducted:

\[ Y_i = \Delta V_H(t_i) = V_H(t_i) - V_H(t_0) - \sum_{j=1}^{M_i} Q_{sarij}\cdot\frac{1}{2} \quad (19) \]

where M_i is the number of compressed air measurement data points in the period between t₀ and t_i. Substituting Eq. (19) into Eq. (18), the containment leakage rate Q_ld can be expressed as:

\[ Q_{ld} = A\sum_{i=1}^{N}(X_i - B)\left[V_H(t_i) - V_H(t_0) - \sum_{j=1}^{M_i} Q_{sarij}\cdot\frac{1}{2}\right] = A\sum_{i=1}^{N}(X_i - B)\left[V_H(t_i) - V_H(t_0)\right] - A\sum_{i=1}^{N}\left[(X_i - B)\sum_{j=1}^{M_i} Q_{sarij}\cdot\frac{1}{2}\right] \quad (20) \]

Let

\[ Q_{lda} = A\sum_{i=1}^{N}(X_i - B)\left[V_H(t_i) - V_H(t_0)\right] \quad (21) \]

\[ Q_{ldb} = A\sum_{i=1}^{N}\left[(X_i - B)\sum_{j=1}^{M_i} Q_{sarij}\cdot\frac{1}{2}\right] \quad (22) \]

Q_lda is calculated from the pressure and temperature in the containment, Q_ldb from the compressed air flow, and these three kinds of data are independent of each other. Therefore, the Type B evaluation uncertainty of the leakage rate Q_ld is:

\[ u_B(Q_{ld}) = \sqrt{u_B^2(Q_{lda}) + u_B^2(Q_{ldb})} \quad (23) \]

The uncertainties of Q_lda and Q_ldb are calculated as follows:

(1) u_B(Q_lda). Because the standard volume of the containment air is calculated every half hour from the same temperature and pressure instruments, any two of the 49 air standard volume measurements are strongly correlated. Assuming correlation coefficients of 1, u_B(Q_lda) is:

\[ u_B(Q_{lda}) = \left| A\sum_{i=1}^{N}(X_i - B)\left[u_B(V_H(t_i)) - u_B(V_H(t_0))\right] \right| \quad (24) \]

(2) u_B(Q_ldb). Since the compressed air flow data from the reference time t₀ to the current time t_i are measured by the same flowmeter, any two of the flow data are strongly correlated. Assuming correlation coefficients of 1, u_B(Q_ldb) is:

\[ u_B(Q_{ldb}) = \left| A\sum_{i=1}^{N}\left[(X_i - B)\sum_{j=1}^{M_i} u(Q_{sarij})\cdot\frac{1}{2}\right] \right| \quad (25) \]

4.3 Combination Uncertainty of the Containment Leakage Rate

According to the principle of uncertainty combination, the uncertainty of the containment leakage rate Q_ld is combined from the Type A and Type B evaluation uncertainties. Since the two types of uncertainty are uncorrelated, the combined uncertainty is:

\[ u(Q_{ld}) = \sqrt{u_A^2(Q_{ld}) + u_B^2(Q_{ld})} \quad (26) \]

As the calculation process described in Sects. 4.1 to 4.3 shows, the uncertainty depends on the measured sensor values, so the leakage rate uncertainty should be calculated from the actual operation data.
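As a minimal sketch of the combination steps, the snippet below applies Eqs. (23) and (26) to illustrative component values:

```python
import numpy as np

def combined_uncertainty(u_A: float, u_lda: float, u_ldb: float) -> float:
    """Combine the leakage-rate uncertainty components, Eqs. (23) and (26)."""
    u_B = np.hypot(u_lda, u_ldb)   # Type B combination, Eq. (23)
    return np.hypot(u_A, u_B)      # total combination, Eq. (26)

# Illustrative component values in Nm3/h
print(f"u(Q_ld) = {combined_uncertainty(0.12, 0.08, 0.03):.3f} Nm3/h")
```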

5 Conclusion

The containment leakage rate on-line monitoring system is used to monitor the leakage of the containment during normal operation of a nuclear power plant. It can alert operators to an opening of the containment caused by human error, and provide early warning of the overall containment leakage rate under accident conditions. The uncertainty of the containment leakage rate is used to judge the reliability of the leakage rate measurement. When measurement errors or occasional factors lead to abnormal uncertainty, it is necessary to analyze the input data, remove the abnormal data and recalculate, to ensure the effectiveness of the measurement results. Based on statistical theory, the calculation method for the uncertainty of the containment leakage rate has been derived in this paper.

References

1. Software Requirements Description of Containment Leakage Rate Monitoring System for Units 5 and 6 in Yangjiang Nuclear Power Plant. China Nuclear Power Design Company Ltd. (Shenzhen) (2012)
2. European Utility Requirements for LWR Nuclear Power Plants, Revision E, December 2016
3. Advanced Nuclear Technology: Advanced Light Water Reactor Utility Requirements Document, Revision 13 (2014)
4. Sexten 2 System Principles and Methodology, Software v3.1. Electricité de France (2006)
5. JJF 1059.1: Evaluation and Expression of Uncertainty in Measurement. State Administration of Quality Supervision, Inspection and Quarantine (2012)
6. The Theory of Probability and Statistics. National Defense Industry Press (2011)
7. IEC 60751:2008, Industrial platinum resistance thermometers and platinum temperature sensors
8. TS-X-NIEP-PELI-F-DC-20012 Ver. G, Sensors accuracies and response time calculation

Research and Improvement of the Flowmeter Fracture Problem of Condensate Polishing System in Nuclear Power Plant Hai-Tao Wu1(&), Xin Ding2, and Tie-Qiang Lu1 1 State Key Laboratory of Nuclear Power Safety Monitoring Technology and Equipment, China Nuclear Power Engineering Company Ltd, Shenzhen 518172, China [email protected] 2 Nuclear Industry Research and Engineering Co., Ltd, Beijing 101300, China

Abstract. The Annubar flowmeter at the booster pump outlet of the Condensate Polishing System (ATE) provides an important parameter for monitoring the full-flow treatment of the second-loop condensate, and is also the reference for adjusting the booster pumps. This paper analyzes the flowmeter principle and the fracture analysis report in detail, and confirms that mechanical fatigue is the main cause of the flowmeter fracture. By analyzing and comparing the feasibility of different flowmeters for this system, an Annubar flowmeter with a double-end fixed installation is proposed to deal with the fracture problem. Practice shows that this method solves the mechanical fracture problem.

Keywords: Annubar flowmeter · Installation · Mechanical fatigue · Brittle fracture

1 Preface

The Condensate Polishing System (ATE) is an important full-bypass system in the second loop of a nuclear power plant. Its main function is to remove suspended and ionic impurities from the condensate water and keep the second-loop water quality within the operating requirements, thereby reducing corrosion of thermal system equipment and extending equipment life. The Annubar flowmeter at the outlet of the ATE booster pumps is used to monitor and calculate real-time condensate polishing data, to determine whether the system provides full-flow treatment; it is also used as an adjustment condition for booster pump operation. However, during normal operation of the ATE system in a nuclear power project, the flowmeter probe at the booster pump outlet suffered a neck fracture, and the broken part was found at a downstream valve of the condensate extraction system. The breakage of the flowmeter endangers important equipment and the safe operation of the second-loop system: the broken part is likely to enter the Condenser Extraction System (CEX) and the Low Pressure Feedwater Heater System (ABP) with the condensate flow, damaging important second-loop equipment and affecting the safe operation of downstream systems (Fig. 1).

Fig. 1. ATE system diagram

For example, the deaerator level control valve and the recirculation control valve of the CEX system can be jammed or damaged by the broken part. If the recirculation control valve fails, the recirculation function of the CEX cannot be realized normally; and if the deaerator level control valve fails, the deaerator level cannot be adjusted automatically, which in serious cases will cause a unit shutdown [1].

2 Flowmeter Introduction

The flowmeter at the booster pump outlet of the Condensate Polishing System is an Annubar flowmeter with a single-end fixed installation. It is a differential pressure flow detecting element based on the Pitot velocity measurement principle. The detection body of the Annubar flowmeter is a hollow metal pipe mounted perpendicular to the flow direction, with four connected total-pressure holes on the upstream face. The average pressure obtained by the detection bar is extracted through the total-pressure pipe and sent to the positive pressure chamber of the transmitter [2]. Another detection hole, representing the static pressure of the fluid, is set in the middle of the back side of the detection rod; the static pressure is extracted through the low-pressure connection to the negative pressure chamber of the transmitter. By the Pitot principle, the flow velocity is proportional to the square root of the differential pressure between the positive and negative pressure chambers (Fig. 2). The Annubar flowmeter is installed at the common outlet of the three booster pumps of the ATE system (Fig. 1). The booster pumps provide a slightly higher output (5%–10%) than the condensate flow, forming a water return, so that all the condensate water passes through the ATE system. The parameters are as follows (Table 1):

Fig. 2. Annubar flowmeter schematic

Table 1. Comparison of parameters (Annubar flowmeter)

            Pres. (MPa)  Tem. (°C)  Pipe size (mm)  Flow (t/h)  Speed (m/s)
Technology  3            40         600             4000        3.14
Device      3.5          60         600             4795        3.93

Install: pipe 11D; Remark: condensate.
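The Pitot relation described above can be sketched numerically as follows; the probe flow coefficient, density and differential pressure are illustrative assumptions, not data from this project:

```python
import math

def annubar_flow(delta_p_pa: float, rho: float, pipe_d_m: float,
                 k_factor: float = 1.0) -> float:
    """Volume flow from an averaging-Pitot (Annubar-type) differential pressure.

    Pitot relation: v = K * sqrt(2 * delta_p / rho); Q = v * A.
    K is the probe flow coefficient from the manufacturer's datasheet.
    """
    v = k_factor * math.sqrt(2.0 * delta_p_pa / rho)   # mean velocity, m/s
    area = math.pi * (pipe_d_m / 2.0) ** 2             # pipe cross-section, m2
    return v * area * 3600.0                           # volume flow, m3/h

# Illustrative: DN600 pipe, 4.9 kPa differential pressure, condensate ~992 kg/m3
print(f"Q = {annubar_flow(4900.0, 992.0, 0.6):.0f} m3/h")
```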

3 Fracture Analysis

The broken equipment was returned to the factory for inspection and failure analysis. The analysis report mainly concerns the material quality of the equipment. The Positive Material Identification (PMI) test confirmed that the Annubar material was stainless steel 316. The average hardness measured was 87 HRB, within the normal range, and an Energy Dispersive X-Ray Spectrum (EDX) analysis did not detect any corrosive elements. Two different fracture types, brittle fracture and ductile fracture, were detected on the fractured surface of the flowmeter. Scanning Electron Micrographs (SEM) showed river patterns on the brittle fracture surface, consistent with fatigue, and dimples on the ductile fracture surface (Fig. 3). The streaks on the fracture surface show that the ductile fracture was secondary damage after the brittle fracture and was not the main cause. Therefore, brittle fracture is determined to be the primary failure, and the cause of the fracture is mechanical fatigue of the equipment. Combining the flowmeter principle, the equipment parameters, the field installation and the product analysis report, it is determined that the single-end fixed Annubar flowmeter at the outlet of the three condensate booster pumps broke from mechanical fatigue of the flow probe under long-term flow impact.

Fig. 3. Fracture type

4 Feasibility Study

The parameters of the single-end fixed Annubar flowmeter satisfied the design requirements at the process selection stage; in actual operation, however, the flowmeter fractured due to mechanical fatigue, so it can be confirmed that the single-end fixed Annubar flowmeter is not suitable for ATE system flow measurement. After analyzing the flowmeter types widely used in industrial applications, and referring to successful measurements of large flows in other systems, the following three options are selected for the feasibility study.

4.1 Ultrasonic Flowmeter

The ultrasonic flowmeter uses the time difference method, detecting the effect of fluid flow on an ultrasonic beam (or ultrasonic pulse). The measurement principle: one probe transmits a signal through the pipe wall and the medium, which is received by another probe; at the same time, the second probe transmits a signal that is received by the first. Because of the medium flow velocity, there is a time difference between the two signals, from which the relationship between flow rate and time difference can be derived and calculated [3]. The ultrasonic flowmeter is a non-contact measurement, so it can solve the fracture problem completely (Fig. 4).

Fig. 4. Ultrasonic flowmeter schematic

4.2 Orifice Flowmeter

The orifice flowmeter, also known as a differential pressure flowmeter, consists of a detection element (the throttle) and secondary devices (a differential pressure transmitter and a flow indicator). Its working principle: when fluid filling the pipe flows through the throttling device, a local contraction forms near the throttling member, the flow velocity increases, and a static pressure difference is generated between the upstream and downstream sides. With the parameters known, the relationship between differential pressure and flow rate can be derived from the principle of flow continuity and the Bernoulli equation [4]. The orifice flowmeter has no instrument probe in the flow, so it can also solve the fracture problem (Fig. 5).

Fig. 5. Orifice flowmeter schematic

4.3 Double-Ended Fixed Annubar Flowmeter

The principle of the double-ended fixed Annubar flowmeter is the same as that of the single-ended fixed type: a differential pressure flowmeter based on averaged Pitot velocity measurement, calculating the flow from the relationship between the differential pressure and the average flow velocity. Only the installation method differs between the double-ended fixed type and the single-ended fixed type (Figs. 6 and 2).

Fig. 6. Double-ended fixed flowmeter schematic

The double-ended fixed Annubar flowmeter keeps the advantages of the Annubar type while better ensuring the stability of the sensor, helping to prevent the probe from breaking.

4.4 Comparison of Program Feasibility

The ultrasonic flowmeter (program 1) is not limited by the pipe diameter and measures without contact; it requires neither cutting the pipe nor opening installation holes. Theoretically it is the most suitable solution for the fracture and measurement problems of the condensate polishing flowmeter. However, the ultrasonic flowmeter has poor anti-interference capability, and the installation position is at the outlet of the three booster pumps, where it is vulnerable to noise from pump vibration and other sound sources that affect the measurement results; the installation quality also affects the measurement accuracy. Programs 2 and 3 are both differential pressure flowmeters. Compared with program 2 (orifice flowmeter), the upstream and downstream straight pipe lengths required by program 3 (Annubar flowmeter) are much shorter than those required by orifice plates, and it is easy to install, which brings great flexibility and convenience to pipeline layout design (especially for large-diameter pipes); the pressure loss of the Annubar flowmeter is also much smaller than that of the orifice flowmeter, and with increasing pipe diameter the Annubar pressure loss becomes negligible. For program 3 (double-ended fixed Annubar flowmeter), since the water in the Condensate Polishing System is pure treated condensate, the pressure holes are not easily blocked by debris in the medium; and the double-end fixed installation method effectively prevents the risk of mechanical fatigue fracture caused by long-term fluid impact [5] (Table 2).

Table 2. Comparison of flowmeter solutions

Program 1: Ultrasonic flowmeter
  Advantages: 1. Non-contact measurement, no cutting or hole-opening installation; 2. No pressure loss; 3. Not subject to pipe diameter restrictions.
  Disadvantages: 1. Low measurement accuracy; 2. Poor anti-interference ability; 3. Susceptible to bubbles, scaling, pumps and other noise sources; 4. Measurement accuracy affected by installation technique.
  Economy: 1. High equipment cost; 2. Small workload, but high installation skill requirements.

Program 2: Orifice flowmeter
  Advantages: 1. Wide application; 2. Solid structure; 3. No real-time calibration required; 4. Stable and reliable performance, long service life.
  Disadvantages: 1. Requires long straight pipe sections (difficult to meet, especially for large pipe diameters); 2. Pressure loss; 3. Prone to leakage problems and maintenance workload.
  Economy: 1. Low equipment cost; 2. Re-cutting and installation workload.

Program 3: Double-ended fixed Annubar flowmeter
  Advantages: 1. High measurement accuracy, good stability; 2. Easy installation and maintenance, conducive to pipeline layout; 3. Low pressure loss, low energy consumption; 4. Successful cases in other nuclear power systems with similar conditions.
  Disadvantages: 1. Pressure holes can be clogged by debris in the medium.
  Economy: 1. Moderate cost; 2. Symmetrical openings in the pipeline, small workload.

5 Conclusion

All of the above schemes have mature designs in the nuclear power industry, and each has its own advantages for flow measurement. This paper has analyzed the cause of the flowmeter probe breakage in detail, discussed the applicability, advantages and disadvantages of the various solutions, and referred to similar flowmeter applications in operating nuclear power projects. After multiple comparisons, the double-ended fixed Annubar flowmeter is determined to be the best solution to the breakage problem of the booster pump outlet flowmeter.

The solution can provide reference for subsequent condensate polishing system flow design and similar problems in nuclear power plants.

References

1. Guangdong Nuclear Power Training Center: 900 MW Pressurized Water Reactor Nuclear Power Plant System and Equipment. Atomic Energy Press (2007)
2. Guo-wei, L., Wu-chang, C.: Flow Measurement Technology and Instrumentation. Mechanical Industry Press (2002)
3. Zhi-min, L., Shan, X.: Research and application of ultrasonic flowmeter. Pipeline Technol. Equip. (2004)
4. HG/T 20507: Design Specification for Automatic Instrument Selection (2014)
5. Jiangxi, Y.: Installation of Thermal Measurement and Control Instruments. China Electric Power Press (1998)

Study on Optimization of Turbidity Control for Seawater Desalination System in Nuclear Power Plant

Hai-Tao Wu1(&), Pan-Xiang Yan2, Yong Yan2, and Hao Zhong1

1 State Key Laboratory of Nuclear Power Safety Monitoring Technology and Equipment, China Nuclear Power Engineering Company Ltd., Shenzhen 518172, China [email protected]
2 Nuclear Industry Research and Engineering Co., Ltd., Beijing 101300, China

Abstract. The coagulation and sedimentation process of a Seawater Desalination System is a complicated physical and chemical reaction process with time delay and nonlinearity, which makes it difficult to control coagulation by adjusting the dosage. The traditional control method used in projects is the empirical method, in which the dosage is proportional to the influent flow, but other factors are not fully considered. In this paper, based on a large amount of field data, multiple linear regression is used for parameter identification to obtain a quantitative relationship between the dose and the other factors with good accuracy. The resulting mathematical model of coagulant dosage, applied in the dosing control system, can effectively overcome the shortcomings of the current empirical method, which is simple, coarse and slow to adjust, and improve the economy and reliability of operation.

Keywords: Seawater desalination · Coagulation sedimentation · Turbidity control · Multiple regression · Mathematical model

1 Introduction

The Seawater Desalination System is a large-scale BOP (Balance of Plant) subproject in a nuclear power plant, providing production and domestic water for the whole plant. Within the Seawater Desalination System, the control of water turbidity is typical and complicated. Water turbidity directly affects the water production capacity of the desalination equipment and its service life, and thus has a critical impact on system operation. Turbidity is controlled mainly in the coagulation sedimentation tank: by adding flocculants, coagulants or other chemicals, suspended particles, colloids and other impurities are removed, thereby controlling the water turbidity. The effluent turbidity is affected by many conditions, such as influent flow rate, turbidity, temperature, pH and chemical dosage, but turbidity control is mainly performed by changing the chemical dosage. The coagulation and dosing process is a complex physical and chemical process with time delay and nonlinear characteristics. In the current control schemes of seawater desalination projects in nuclear power, the

amount of chemicals to be added is mainly related to the influent flow: by experience, the dosage is generally increased in proportion to the influent flow, so when other turbidity-related conditions change, the dosage cannot be adjusted in time. In turbidity control it is difficult to establish an accurate and reliable mathematical model because of the complexity of the turbidity change process, together with its time lag and nonlinearity. Because the seawater desalination process in nuclear power plants differs only slightly from that in other common power plants, data collected from common power plants can also be used for nuclear power. In this paper, a coagulation sedimentation tank of a water plant is taken as the research object, and a method based on nonlinear multivariate regression is given to calculate the chemical dosage. Based on the measured values of the different conditions related to the effluent turbidity, the dosage can then be adjusted in real time.

2 General

In the coagulation sedimentation tank, a flowmeter, a pH meter and a turbidimeter are installed in the inlet pipe, and another turbidimeter is installed in the outlet pipe. In this paper, actual engineering data from these instruments are used to establish a mathematical model of how the coagulant dosage affects the effluent turbidity, so that the turbidity can be controlled more accurately by adjusting the dosage. Data from the coagulation sedimentation tank of a water plant were collected every hour from August 8th, 2013 to September 5th, 2014; a total of 9,398 actual engineering data sets were collected, including raw water pH, raw water turbidity, sedimentation tank effluent turbidity, water flow, coagulant consumption and other process parameters. The turbidity dosing control flow chart is shown in Fig. 1.

Fig. 1. Turbidity control diagram

3 Engineering Data Preprocessing

3.1 Abnormal Data Processing

The collected data are instantaneous values of process variables, so there are many unreasonable false values; considering also the impact of instrument failures, the data need to be preprocessed. Since the data are collected continuously, an averaging filter can be used to handle the unreasonable values. Based on statistical theory, mean filtering is a non-linear signal processing technique that can effectively suppress noise [1]. It is defined as:

\[ g(x, y) = \mathrm{mean}(f(s, t)),\quad s, t \in S_{xy} \quad (1) \]

In the formula:
g(x, y): the output value at point (x, y);
S_xy: the neighborhood centered on (x, y);
f(s, t): the value at point (s, t) in the neighborhood centered on (x, y);
mean(f(s, t)): the average value, after removing the extreme values, of the neighborhood centered on (x, y).

ð2Þ

In the formula: f(s, t)’ : the value after deleting highest value; Std (f(s, t)’) : Standard deviation of processed data; Mfðx; yÞ ¼ fðx; yÞ  meanðfðs; tÞ0 Þ

ð3Þ

If the above equation holds, it is determined that the (x, y) point is an abnormal value. The average filter method is used to determine the data. If the data is abnormal, the data is rejected. When the data is filtered and detected, two values are selected as the area before and after the data. This is the case when discriminating abnormalities: When the maximum and minimum values of the five data are removed and the remaining three data are relatively close, the data will be close to the average value, and the resulting standard deviation will be extremely small. The exception data will be treated as normal data.

98

H.-T. Wu et al.

To do this, use the following weightings: Stdðfðs; tÞ ¼ 0:2  Stdðfðx; yÞÞ þ 0:8  Stdðfðs; tÞÞ

ð4Þ

In the formula: Std(f(x, y)) : standard deviation of all data. Through the average filter method to detect the original data, abnormal value statistics are obtained, shown in Table 1. Table 1. Raw data anomaly detection Variable Raw water PH Raw water turbidity Inlet Outlet Abnormal Point Qty 50 130 20 7

3.2

Data Division

By applying the previous data processing steps, a large amount of data of the water plant are analyzed and found that the turbidity range of the raw water varies greatly. When the raw water is clear, the lowest turbidity is only 5.13 NTU, But when the raw water is turbid, the highest turbidity is as high as 868.36 NTU. In order to apply the established model of dosage to a wider range of applications, the sample interval set should be divided for the raw water turbidity so that the model data set and verification data set cover all the turbidity intervals. The data samples are divided into turbidity intervals (before closing and opening After the interval), and the sample data set are obtained, shown in Table 2 and Fig. 2. Table 2. Data sample turbidity interval table Turbidity (NTU) Sample Qty Turbidity (NTU) Sample Qty

0–10

10–20

20–30

30–40

40–50

50–60

60–70

70–80

80–90

259 90– 100 108

3175 100– 200 462

1861 200– 300 61

742 300– 400 26

493 400– 500 17

350 500– 600 8

282 600– 700 15

176 700– 800 16

124 >800 3

According to Table 2 and Fig. 1, the raw water quality of the coagulation sedimentation tank is relatively stable, whose turbidity is mostly between 10 NTU and 400 NTU. Particularly, the plateau water turbidity is very rare, and water quality is mostly on low turbidity interval. When the model parameters are being identified, the processed sample space can be divided into a training set and a generalization set. The training set is used for model training and the generalization set is for the inspection and prediction of the model.

Study on Optimization of Turbidity Control for Seawater

99

Fig. 2. Sample data of turbidity diagram

4 Dosing Model Establishment The amount of dosing in the coagulation sedimentation tank is related to different factors, so different mathematical regression models can be used to construct the mathematical model. Regression analysis is a method of establishing regression function expressions between dependent and independent variables by using mathematical statistics methods through a large number of observed data. Regression analysis is divided into linear regression analysis and nonlinear regression analysis. Usually linear regression analysis method is the most basic analysis method. The problem of nonlinear regression can be solved by means of mathematics to solve the linear regression problem, and then by least square method. The estimated value of the parameter is finally transformed to obtain the required regression equation. Combined with the research results of the existing literature, the dosage of coagulant and various factors can be expressed by the following index [2]: a2 a3 M ¼ a0  Ca1 0  Q  C1

In the formula: M : C0 : C1 : Q : a 0, a 1, a 2, a 3 :

ð5Þ

Coagulant dosage, mg/L; Raw water turbidity; turbidity of the sedimentation tank outlet; Inflow, m3/h variable parameter.;

Formula (5) shows the non-linear exponential relationship between dosing amount and other factors, and the nonlinear problem is transformed into a linear problem by taking the mathematical method of logarithm. Formula (5) can be converted to:

100

H.-T. Wu et al.

ln M ¼ a1  ln C0 þ a2  ln Q þ a3  ln C1 þ ln a0

ð6Þ

Let: y = lnM, x1 = lnC0, x2 = lnQ, x3 = lnC1; Perform n times observations on y, x1, x2, x3, to obtain n sets of sample data yi, xi1, xi2, xi3 (i = 1,2,…,n), then 8 < y1 ¼ b0 þ b1  x11 þ b2  x12 þ b3  x13 þ e1 y ¼ b0 þ b1  x21 þ b2  x22 þ b3  x23 þ e2 : 2 y1 ¼ b0 þ b1  xn1 þ b2  xn2 þ b3  xn3 þ en e1-en

ð7Þ

: are Residuals, independent of each other, obey normal distribution N(0, d2).

The data obtained after data preprocessing is randomly divided into 6 sample tables. To ensure that the established model is suitable for various turbidity intervals, when randomly allocating sample tables, each sample set table should contain various sources. The water turbidity interval was selected and the five data tables were used to obtain the parameters of the model. Another set of sample table data was used to verify the validity of the model. Using MATLAB for multiple linear regression identification, the undetermined parameters under five different data samples can be solved. The MATLAB solution program is described in the annex. The linear identification parameters are shown in Table 3.

Table 3. Linear identification parameters Parameter Group 1 Group 2 Group 3 Group 4 Group 5 Average 5.7988 5.2142 4.8950 5.8457 5.2279 5.3963 b0 b1 0.2091 0.2128 0.2178 0.2028 0.2186 0.2122 b2 −0.3321 −0.2674 −0.2357 −0.3360 −0.2731 −0.2889 b3 0.3706 0.3346 0.3312 0.3489 0.3093 0.3389

Based on the above parameter table, the mathematical model of the coagulant dosing amount can be obtained as shown below.

$y = 5.3963 + 0.2122\,x_1 - 0.2889\,x_2 + 0.3389\,x_3$ (8)

$M = 220.6 \cdot C_0^{0.2122} \cdot Q^{-0.2889} \cdot C_1^{0.3389}$ (9)

As can be seen from the formula above, the unit consumption of coagulant is positively related to the turbidity of the raw water and of the produced water, and negatively related to the water flow. Higher raw water turbidity leads to higher produced water turbidity and higher unit consumption of the coagulant, while a larger withdrawal flow rate leads to a smaller unit consumption of coagulant, indicating that coagulant consumption exhibits a scale effect [3].

5 Model Verification

5.1 Dataset Verification

Formula (9) reveals the quantitative relationship between the unit consumption of coagulant and the turbidity of the raw water, the amount of water taken, and the turbidity of the current product water. The sixth set of sample data is used to test this relationship: 100 consecutive records of raw water turbidity, intake flow rate and current effluent turbidity were taken from the sixth data set to compute the predicted coagulant consumption, which was then compared with the actual values. The result is shown in Fig. 3 below.
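As a hedged illustration, this comparison can be reproduced with a few lines of MATLAB, assuming vectors C0, Q, C1 and M_actual (hypothetical names) hold the 100 consecutive records:

```matlab
% Predict the dose with Eq. (9) and plot it against the recorded actual dose.
M_pred = 220.6 .* C0.^0.2122 .* Q.^(-0.2889) .* C1.^0.3389;  % Eq. (9), element-wise
plot(1:numel(M_pred), M_pred, 'r', 1:numel(M_pred), M_actual, 'b');
legend('model prediction', 'actual value'); xlabel('sample'); ylabel('dose (mg/L)');
```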


Fig. 3. Comparison between model predictions and actual values

In Fig. 3, the red curve is the predicted dosing amount calculated by the dosing model, and the blue curve is the actual value from the project. As the figure shows, the dosing model effectively tracks the changes in the actual values.

5.2 Application of Dosing Strategies in Engineering

Based on the actual measured data of the project, the mathematical model relating the dosage of the coagulation sedimentation tank to the influent water quality, the influent flow and the current turbidity of the produced water was obtained. In practical projects, the coagulation and sedimentation process takes approximately 70 to 120 min, a large lag, so the dosing strategy must be optimized according to the target turbidity [4]. Let the current produced water turbidity be C, and let the dosing amount calculated according to Eq. (9) be M. The meaning of formula (9) is: if the other relevant factors remain unchanged, the dosing amount M maintains the turbidity of the produced water at C. If C equals the target produced water turbidity C0, the corresponding dosing amount M0 is taken as the reference dosing amount. If C is greater than the target produced water turbidity, the turbidity of the current produced water is too high; the dosing amount calculated according to formula (9) is then greater than the reference dosing amount M0, with difference ΔM. To reduce the produced water turbidity to the target value, an additional amount ΔM is added to the current dosage M. The case where C is less than the target produced water turbidity is analogous. In summary, the output of the dosing control system should be determined from the current influent water quality, influent flow rate, current produced water turbidity, and target produced water turbidity, which can be expressed by the following formula:

$m = m(c) + \Delta M = 2\,m(c) - m(c_0)$ (10)

where m is the unit consumption of the coagulant; m(c) is the current consumption obtained from Eq. (9); and m(c0) is the reference consumption that maintains the target turbidity.

Due to the lack of temperature data, the model presented in this paper does not consider the effect of temperature on coagulation and sedimentation. Because the pH variation is small, the model also neglects the pH characteristics of the influent; these are limitations of the present work. In addition, neural network algorithms are well suited to dosing analysis for coagulation and sedimentation: the raw water flow rate, raw water turbidity, raw water pH, produced water turbidity and other factors can be taken as the input variables of a neural network, with the dosing amount as the output variable, and the network can be trained and generalized on actual data [5]. The application of neural network algorithms in the coagulation and sedimentation dosing control system is a promising direction for follow-up research.
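Returning to the dosing strategy of Eq. (10), a minimal sketch with illustrative operating values (all numbers below are assumptions, not plant data):

```matlab
% Eq. (9): unit coagulant consumption as a function of the measured conditions.
dose = @(C0, Q, C1) 220.6 * C0^0.2122 * Q^(-0.2889) * C1^0.3389;

C0 = 120; Q = 3200;        % raw water turbidity (NTU) and inflow (m3/h), example values
C  = 8;   C_target = 5;    % current and target produced water turbidity, example values

m_c  = dose(C0, Q, C);         % dose that would hold the current turbidity C
m_c0 = dose(C0, Q, C_target);  % reference dose m(c0) for the target turbidity
m    = 2*m_c - m_c0;           % Eq. (10): corrected control system output
fprintf('control output: %.2f mg/L\n', m);
```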


6 Conclusion

The coagulation and sedimentation process of the seawater desalination system is a complex physical and chemical reaction process with time delay and nonlinearity. This paper uses mathematical methods to transform the nonlinear problem into a linear one. From a large number of field data, using multiple linear regression for parameter identification, the quantitative relationship between the dosing amount and the other factors was obtained with good accuracy. Combined with the actual situation of the project, the mathematical model of coagulant dosage is applied to the dosing control system, which can effectively overcome the shortcomings of the current experience-based method, which is simple, coarse and not real-time, thereby improving the economy and reliability of operation.

References

1. Xin, X., Zhou, N., Wang, Z.: Research on detection and correction of data outliers. Modern Electron. Technol. 36(11), 5–11 (2013)
2. Tian, Y., Zhang, H., Qi, G., Luo, J.: Research on the mathematical model of water treatment system operation state. China Water Supply Drain. 14, 10–13 (1998)
3. Huang, X., Qi, Y., Qiao, T., et al.: Research on turbidity control technology of conventional water purification process. Water Supply Technol. 1(1), 19–23 (2007)
4. Decui, T., Xiaoyan, D., Xuefeng, Z., et al.: Modeling research on dosage of coagulant in water works. Water Treat. Technol. 6, 54–56 (2010)
5. Hua, B., Guibai, L.: Neural network control method of coagulation and administration. Water Supply Drain. 11, 83–86 (2001)

Optimization Scheme of Turbine Frequency Regulation for Passive Nuclear Power Plant

Le-Yuan Bai, Kai Gu, Bin Zeng, and Gang Yin

State Key Laboratory of Nuclear Power Safety Monitoring Technology and Equipment, China Nuclear Power Engineering Company Ltd., Shenzhen 518172, Guangdong, China
[email protected]

Abstract. Frequency is one of the significant operating parameters of a power plant, and also one of the significant assessment indicators for the grid. In the passive nuclear power plant, tracking and mode switching of turbine control are used to realize part of the frequency regulation function. However, there is an existing defect: the turbine cannot fully participate in grid frequency regulation. An optimization scheme is proposed in this paper. By adding a specialized frequency regulation function, the frequency regulation output is introduced into both G mode and L mode to complete the turbine frequency regulation function. At the same time, a specialized power limiter is added to prevent the reactor from overpower due to frequency regulation at full power level.

Keywords: Passive nuclear power plant · Turbine · Frequency regulation · Optimization

1 Introduction

Frequency has a significant impact on the safety and stable operation of the power grid. Once the load changes, the total power of the generators no longer matches the total load of the grid, and the frequency changes. To maintain the stability of grid frequency, the unit power needs to be regulated according to the variation of frequency, namely frequency regulation. According to the differences in regulatory range and capacity, frequency regulation can be divided into two parts: primary frequency regulation and secondary frequency regulation [1, 2]. Frequency regulation in a nuclear power plant is related to the composition of the grid. In France, nuclear power accounts for over 75% of the grid, and the nuclear power units directly participate in grid frequency regulation. In other countries, such as the United States, Canada, Japan and Korea, the nuclear power units operate at base load and hardly take part in grid frequency regulation [3]. As the domestic nuclear power percentage of the grid is relatively low, the nuclear power units only take part in primary frequency regulation, not secondary frequency regulation [4].



2 Original Scheme

In a passive nuclear power plant, the turbine generator is designed by Mitsubishi, and the turbine governing system (DEH) controls speed and power by regulating steam flow, to meet the power supply demand of the grid and ensure the safe and stable operation of the plant. DEH has two load controllers, the governor and the limiter. Depending on which controller is in use, turbine load control is divided into governor control mode (G mode) and limiter control mode (L mode). These two control modes jointly regulate the turbine regulating valves (GV), including the main steam regulating valves and the reheated steam regulating valves: the GV opening demand is the smaller of the governor and limiter outputs, realizing turbine power control. An auto-following function can be applied between the two control modes [5, 6].

Frequency regulation is realized in the governor control loop. As shown in Fig. 1, in normal operation the turbine actual load (Pm) is approximately equal to the load set point (Pset), and the load deviation acting on the limiter setting is about zero. Meanwhile, as the generator is connected to the grid, the turbine rotation frequency follows the grid frequency. Once the grid frequency varies, there is a deviation between the turbine speed set point (Nset) and the turbine actual speed (Nm). This speed deviation is converted into an increase or decrease of the governor setting through the speed governing droop. If the governor output is smaller than the limiter output, the speed deviation influences the turbine steam demand (SD); the GV opening demand and turbine power then vary, to stabilize the grid frequency.

Fig. 1. Original scheme of frequency regulation in passive nuclear plant (speed deviation Nset − Nm acts on the governor setting via droop; GV opening demand follows the MIN of governor and limiter outputs)
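Reading Fig. 1 as pseudo-logic, a minimal sketch of the MIN selection with droop action might look as follows (all values per unit and assumed purely for illustration):

```matlab
% Governor/limiter MIN selection with speed-deviation action via droop.
droop = 0.05;                 % 5% speed governing droop, assumed value
Nset = 1.0;  Nm = 1.002;      % per-unit speed set point and actual speed (grid frequency high)
Pset = 0.90; lim = 0.95;      % per-unit load set point and limiter setting
gov = Pset + (Nset - Nm)/droop;   % governor setting shifted by the speed deviation
SD  = min(gov, lim);              % steam demand follows the smaller controller output
fprintf('governor %.3f, limiter %.3f -> steam demand %.3f pu\n', gov, lim, SD);
```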


As the speed deviation is only introduced into the governor control loop, frequency regulation is only valid in G mode; frequency regulation is therefore directly related to the turbine control mode. The following analyzes the frequency regulation function in each control mode.

2.1 G Mode

G mode is mainly used for speed control and synchronization with the grid. Before connection to the grid, the turbine is in G mode automatically, and the GV opening demand is determined by the speed deviation. Upon connection to the grid, the governor automatically sets the GV opening demand equal to the initial load, to prevent the turbine from entering motoring mode, which might cause cylinder deformation and vibration. Figure 2 illustrates the principle of frequency regulation in G mode. In normal operation, the turbine speed follows the grid rated frequency (f0). If limiter auto tracking is selected, the turbine stays in G mode, and the limiter setting automatically tracks the sum of the governor setting and the following width (h) until the grid frequency falls to a certain value (f1).

Fig. 2. Schematic diagram of frequency regulation in G mode (GV opening/% vs. frequency/Hz; governor and limiter settings separated by the following width h, thresholds f1 and f0)

As shown in Fig. 3, when the grid frequency increases (>f0), the turbine actual speed is greater than the set value, and a negative speed deviation acts on the governor setting. GV opening then decreases and turbine output power decreases, to lower the grid frequency. Conversely, when the grid frequency decreases (<f0) but remains above f1, a positive speed deviation acts on the governor setting, GV opening increases and turbine output increases, to raise the grid frequency. Once the grid frequency falls below f1, limiter tracking stops and frequency regulation will be out of action.

Fig. 3. Flow chart of frequency regulation in G mode
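A schematic reading of this flow chart, with assumed thresholds, is sketched below; the actual DEH logic is more involved:

```matlab
% G-mode response to grid frequency f (Hz); f0, f1 and droop are assumed values.
f0 = 50; f1 = 49.8; droop = 0.05;
f  = 49.90;                              % example grid frequency below f0 but above f1
if f >= f1
    dGV = -(f - f0)/(f0*droop);          % droop law: GV opening moves against frequency
    fprintf('GV opening change: %+.3f pu\n', dGV);
else
    fprintf('f below f1: limiter tracking stops, regulation out of action\n');
end
```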

The purpose is to prevent the GV from opening quickly when the grid frequency drastically decreases, which could cause reactor overpower and affect plant safety.

2.2 L Mode

L mode is mainly used for load control. Figure 4 illustrates the principle of frequency regulation in L mode. Once governor auto tracking is selected, the turbine automatically switches to L mode, and the governor setting automatically tracks the sum of the limiter setting and the following width (h). The turbine does not take part in frequency regulation until the grid frequency rises to a certain value (f2). As shown in Fig. 5, when the grid frequency decreases (<f0), the positive speed deviation only raises the governor setting further above the limiter setting, so the GV opening does not respond. When the grid frequency increases (>f0), a negative speed deviation is added to the governor setting.

Fig. 4. Schematic diagram of frequency regulation in L mode (GV opening/% vs. frequency/Hz; governor setting h above limiter setting, thresholds f0 and f2)

Fig. 5. Flow chart of frequency regulation in L mode


Since the difference between the governor setting and the limiter setting is the following width, whether frequency regulation works depends on the size of the grid frequency increase. If the grid frequency does not increase to the certain value (f2), the governor setting reduction is smaller than the following width, the turbine remains in L mode, and frequency regulation does not work. Once the grid frequency increases further (>f2), the turbine switches into G mode. In this condition, the frequency deviation reduces the GV opening and turbine output, and frequency regulation takes effect.
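The mode-switch condition described above can be sketched as follows (following width, droop and f0 are assumed illustrative values):

```matlab
% Whether an L-mode unit responds depends on the governor shift vs. following width h.
f0 = 50; h = 0.05; droop = 0.05;
f = 50.25;                            % example grid frequency rise
shift = (f - f0)/(f0*droop);          % reduction of the governor setting from the deviation
if shift > h
    fprintf('governor drops below limiter: G mode, output reduced by %.3f pu\n', shift - h);
else
    fprintf('still in L mode: no frequency regulation\n');
end
```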

3 Defects of Original Scheme

According to the above analysis, the original frequency regulation scheme of the passive nuclear power plant has the characteristics shown in Table 1.

Table 1. Characteristics of original frequency regulation scheme

             Original scheme                                              GBT 31464-2015                        Difference
Range        G mode: [f1, +∞); L→G mode: [f2, +∞)                         Beyond dead band                      Yes
Amplitude    G mode: ≤ h; L→G mode: ≤ 0                                   ≥ 6%                                  Yes
Dead band    G mode: (−∞, f1), no dead band at f0; L→G mode: (−∞, f2)     Recommended performance indicators    Yes

(1) In G mode, frequency regulation is effective unless the grid frequency falls below the certain value (f1); in L mode, frequency regulation only becomes effective once the grid frequency rises above the certain value (f2) and the turbine automatically switches to G mode. The frequency regulation function is thus tied to the turbine control mode. However, the turbine is generally in L mode in normal operation and cannot participate in frequency regulation; if the unit has to participate, it must switch from L mode to G mode. According to GBT 31464-2015 Grid Operation Criterion, "grid generators should all participate in frequency regulation." There is therefore a difference.

(2) In G mode, the upper limit is determined by the following width (h). When the grid frequency rises to the certain value (f2) and the turbine automatically switches to G mode, the unit participates in frequency regulation, but can only reduce its output, not increase it. According to GBT 31464-2015 Grid Operation Criterion, "the maximum load limit of a thermal power unit is not less than 6% of the rated capacity of the unit, and a unit in rated load operation should participate in frequency adjustment." Although the nuclear power unit does not have to increase its output during rated load operation for reactor safety, at other power levels a frequency regulation function that increases output should be provided to support the stability of the grid frequency as much as possible.


(3) In G mode, the dead band is (−∞, f1), and there is no dead band at the rated frequency; in L mode, the dead band is (−∞, f2). The dead band is determined by the control mode, the following width, and the speed governing droop; there is no dedicated dead band setting at rated frequency in G mode. However, grid-connected generator sets generally have a dead band, and GBT 31464-2015 puts forward basic performance indicators for it. On one hand, a dead band avoids unnecessary turbine responses to small changes of grid frequency, which benefits stable operation of the unit. On the other hand, if the reactor frequently responds to grid frequency fluctuations, the resulting frequent movement aggravates the mechanical wear of the control rods, which is not conducive to the operational safety of the unit and should be avoided or reduced as much as possible. At present, most nuclear power plants have a frequency dead band, and the reactor does not respond to frequency disturbances within a certain range.

According to the analysis, the original scheme actually has no explicit concepts of dead band, amplitude, or frequency regulation function; the corresponding behavior is achieved by mode switching and the following width. The original scheme therefore has some defects and needs to be optimized in order to meet the requirements of the grid operation criterion and unit safety.

4 Scheme Optimization

The frequency regulation scheme of the passive nuclear power plant is optimized in the following aspects:

(1) A specialized frequency regulation function is added, and its regulating variable is introduced into L mode, to solve the problem of no frequency regulation in L mode. A settable amplitude parameter of frequency regulation is introduced, no longer determined by mode switching and following width. At the same time, a settable dead band parameter is introduced, to solve the problem that the dead band cannot be set independently.

(2) A specialized power limiter is added, to avoid the problems of frequency regulation at full power level and low power level.

After the optimization, the frequency regulation scheme is as shown in Fig. 6. When the grid frequency varies, the speed deviation is converted into the regulating variable through the added frequency regulation function and power limiter function. The frequency regulation variable is introduced into both G mode and L mode, and frequency regulation in either control mode does not affect the turbine load control function. In the optimized scheme, the amplitude and dead band are embodied in the frequency regulation function. As shown in Fig. 7, when the grid frequency fluctuation exceeds the dead band (g1, g2), the frequency regulation output lies between the lower limit (ΔP1) and the upper limit (ΔP2). The output is calculated from the speed deviation and the speed governing droop, and the turbine then automatically increases or decreases power. The parameters of the dead band (g1, g2) and the limits (ΔP1, ΔP2) can be set manually based on the requirements of the plant and the grid, independently of the control modes.

Fig. 6. Optimization scheme of frequency regulation in passive nuclear plant

Fig. 7. Frequency regulation function (frequency correction output/% vs. frequency/Hz; dead band g1–g2, output limits ΔP1 and ΔP2)
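A minimal sketch of such a dead band plus clamp characteristic, with assumed parameter values:

```matlab
% Frequency regulation function g(x): dead band (g1, g2) and output limits (dP1, dP2).
f0 = 50; droop = 0.05;                % rated frequency and droop, assumed
g1 = 49.97; g2 = 50.03;               % dead band bounds (Hz), assumed
dP1 = -0.06; dP2 = 0.06;              % output limits (per unit), assumed
f = 49.90;                            % example grid frequency below the dead band
if f >= g1 && f <= g2
    dP = 0;                           % inside the dead band: no correction
else
    dP = -(f - f0)/(f0*droop);        % droop law: low frequency -> raise power
    dP = min(max(dP, dP1), dP2);      % clamp between dP1 and dP2
end
fprintf('frequency regulation output: %+.3f pu\n', dP);
```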

At the same time, the power limiter is introduced into the optimized scheme, to take into account frequency regulation at full power level and low power level. When the unit is at full power level, if the grid frequency decreases, the unit power is required to increase, and the steam demand of the secondary loop increases. However, the reactor power cannot increase, because the control rods are already at the top of the reactor. This causes steam quality degradation and a primary loop temperature decrease, which can seriously lead to primary loop over-cooling and reactor overpower [7, 8]. Therefore, the frequency regulation output has to be limited to avoid the risk of reactor overpower caused by frequency regulation under the full power condition. At the same time, a unit at low power level generally does not participate in frequency regulation. The power limiter is realized as shown in Fig. 8. When the turbine power is more than the upper limit (W2%), if the frequency regulation output is positive (>0), the limiter is active: the frequency regulation output enters locked mode and the turbine power no longer increases. Similarly, when the turbine power is less than the lower limit (W1%), if the frequency regulation output is negative (<0), the limiter is active and the turbine power no longer decreases.
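Finally, a hedged sketch of the power limiter interlock; W1, W2 and the lock behavior are illustrative assumptions (the text describes a locked mode, approximated here by zeroing the increment):

```matlab
% Power limiter h(x): block further power movement outside the band [W1, W2] % rated.
W1 = 15; W2 = 98;          % lower/upper power limits in % rated, assumed values
P  = 99;                   % current turbine power (% rated), example
dP = 0.02;                 % frequency regulation output from g(x), per unit
if (P > W2 && dP > 0) || (P < W1 && dP < 0)
    dP = 0;                % limiter active: power neither increases nor decreases further
end
fprintf('limited frequency regulation output: %+.3f pu\n', dP);
```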