Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019 : NUSYS'19 [1st ed.] 9789811552809, 9789811552816

This book includes research papers from the 11th National Technical Seminar on Unmanned System Technology (NUSYS'19). Covering both theory and application, the papers address intelligent unmanned technologies, robotics and autonomous vehicles.


English · XVI, 1263 [1239] pages · 2021


Table of contents :
Front Matter ....Pages i-xvi
Front Matter ....Pages 1-1
Tracking Control Design for Underactuated Micro Autonomous Underwater Vehicle in Horizontal Plane Using Robust Filter Approach (Muhammad Azri Bin Abdul Wahed, Mohd Rizal Arshad)....Pages 3-13
Design and Development of Remotely Operated Pipeline Inspection Robot (Mohd Shahrieel Mohd Aras, Zainah Md Zain, Aliff Farhan Kamaruzaman, Mohd Zamzuri Ab Rashid, Azhar Ahmad, Hairol Nizam Mohd Shah et al.)....Pages 15-23
Vision Optimization for Altitude Control and Object Tracking Control of an Autonomous Underwater Vehicle (AUV) (Joe Siang Keek, Mohd Shahrieel Mohd Aras, Zainah Md. Zain, Mohd Bazli Bahar, Ser Lee Loh, Shin Horng Chong)....Pages 25-36
Development of Autonomous Underwater Vehicle Equipped with Object Recognition and Tracking System (Muhammad Haniff Abu Mangshor, Radzi Ambar, Herdawatie Abdul Kadir, Khalid Isa, Inani Yusra Amran, Abdul Aziz Abd Kadir et al.)....Pages 37-56
Dual Image Fusion Technique for Underwater Image Contrast Enhancement (Chern How Chong, Ahmad Shahrizan Abdul Ghani, Kamil Zakwan Mohd Azmi)....Pages 57-72
Red and Blue Channels Correction Based on Green Channel and Median-Based Dual-Intensity Images Fusion for Turbid Underwater Image Quality Enhancement (Kamil Zakwan Mohd Azmi, Ahmad Shahrizan Abdul Ghani, Zulkifli Md Yusof)....Pages 73-86
Analysis of Pruned Neural Networks (MobileNetV2-YOLO v2) for Underwater Object Detection (A. F. Ayob, K. Khairuddin, Y. M. Mustafah, A. R. Salisa, K. Kadir)....Pages 87-98
Different Cell Decomposition Path Planning Methods for Unmanned Air Vehicles-A Review (Sanjoy Kumar Debnath, Rosli Omar, Susama Bagchi, Elia Nadira Sabudin, Mohd Haris Asyraf Shee Kandar, Khan Foysol et al.)....Pages 99-111
Improved Potential Field Method for Robot Path Planning with Path Pruning (Elia Nadira Sabudin, Rosli Omar, Ariffudin Joret, Asmarashid Ponniran, Muhammad Suhaimi Sulong, Herdawatie Abdul Kadir et al.)....Pages 113-127
Development of DugongBot Underwater Drones Using Open-Source Robotic Platform (Ahmad Anas Yusof, Mohd Khairi Mohamed Nor, Mohd Shahrieel Mohd Aras, Hamdan Sulaiman, Abdul Talib Din)....Pages 129-138
Development of Autonomous Underwater Vehicle for Water Quality Measurement Application (Inani Yusra Amran, Khalid Isa, Herdawatie Abdul Kadir, Radzi Ambar, Nurul Syila Ibrahim, Abdul Aziz Abd Kadir et al.)....Pages 139-161
Discrete Sliding Mode Controller on Autonomous Underwater Vehicle in Steering Motion (Nira Mawangi Sarif, Rafidah Ngadengon, Herdawatie Abdul Kadir, Mohd Hafiz A. Jalil)....Pages 163-176
Impact of Acoustic Signal on Optical Signal and Vice Versa in Optoacoustic Based Underwater Localization (M. R. Arshad, M. H. A. Majid)....Pages 177-188
Design and Development of Mini Autonomous Surface Vessel for Bathymetric Survey (Muhammad Ammar Mohd Adam, Zulkifli Zainal Abidin, Ahmad Imran Ibrahim, Ahmad Shahril Mohd Ghani, Al Jawharah Anchumukkil)....Pages 189-203
Front Matter ....Pages 205-205
Optimal Power Flow Solutions for Power System Operations Using Moth-Flame Optimization Algorithm (Salman Alabd, Mohd Herwan Sulaiman, Muhammad Ikram Mohd Rashid)....Pages 207-219
A Pilot Study on Pipeline Wall Inspection Technology Tomography (Muhammad Nuriffat Roslee, Siti Zarina Mohd. Muji, Jaysuman Pusppanathan, Mohd. Fadzli Abd. Shaib)....Pages 221-239
Weighted-Sum Extended Bat Algorithm Based PD Controller Design for Wheeled Mobile Robot (Nur Aisyah Syafinaz Suarin, Dwi Pebrianti, Nurnajmin Qasrina Ann, Luhur Bayuaji)....Pages 241-258
An Analysis of State Covariance of Mobile Robot Navigation in Unstructured Environment Based on ROS (Hamzah Ahmad, Lim Zhi Xian, Nur Aqilah Othman, Mohd Syakirin Ramli, Mohd Mawardi Saari)....Pages 259-270
Control Strategy for Differential Drive Wheel Mobile Robot (Nor Akmal Alias, Herdawatie Abdul Kadir)....Pages 271-283
Adaptive Observer for DC Motor Fault Detection Dynamical System (Janet Lee, Rosmiwati Mohd-Mokhtar, Muhammad Nasiruddin Mahyuddin)....Pages 285-297
Water Level Classification for Flood Monitoring System Using Convolutional Neural Network (J. L. Gan, W. Zailah)....Pages 299-318
Evaluation of Back-Side Slits with Sub-millimeter Resolution Using a Differential AMR Probe (M. A. H. P. Zaini, M. M. Saari, N. A. Nadzri, A. M. Halil, A. J. S. Hanifah, K. Tsukada)....Pages 319-328
Model-Free Tuning of Laguerre Network for Impedance Matching in Bilateral Teleoperation System (Mohd Syakirin Ramli, Hamzah Ahmad, Addie Irawan, Nur Liyana Ibrahim)....Pages 329-343
Identification of Liquid Slosh Behavior Using Continuous-Time Hammerstein Model Based Sine Cosine Algorithm (Julakha Jahan Jui, Mohd Helmi Suid, Zulkifli Musa, Mohd Ashraf Ahmad)....Pages 345-356
Cardiotocogram Data Classification Using Random Forest Based Machine Learning Algorithm (M. M. Imran Molla, Julakha Jahan Jui, Bifta Sama Bari, Mamunur Rashid, Md Jahid Hasan)....Pages 357-369
FPGA Implementation of Sensor Data Acquisition for Real-Time Human Body Motion Measurement System (Zarina Tukiran, Afandi Ahmad, Herdawatie Abd. Kadir, Ariffudin Joret)....Pages 371-380
Pulse Modulation (PM) Ground Penetrating Radar (GPR) System Development by Using Envelope Detector Technique (Maryanti Razali, Ariffuddin Joret, M. F. L. Abdullah, Elfarizanis Baharudin, Asmarashid Ponniran, Muhammad Suhaimi Sulong et al.)....Pages 381-397
An Overview of Modeling and Control of a Through-the-Road Hybrid Electric Vehicle (M. F. M. Sabri, M. H. Husin, M. I. Jobli, A. M. N. A. Kamaruddin)....Pages 399-417
Euler-Lagrange Based Dynamic Model of Double Rotary Inverted Pendulum (Mukhtar Fatihu Hamza, Jamilu Kamilu Adamu, Abdulbasid Ismail Isa)....Pages 419-434
Network-Based Cooperative Synchronization Control of 3 Articulated Robotic Arms for Industry 4.0 Application (Kam Wah Chan, Muhammad Nasiruddin Mahyuddin, Bee Ee Khoo)....Pages 435-447
EEG Signal Denoising Using Hybridizing Method Between Wavelet Transform with Genetic Algorithm (Zaid Abdi Alkareem Alyasseri, Ahamad Tajudin Khader, Mohammed Azmi Al-Betar, Ammar Kamal Abasi, Sharif Naser Makhadmeh)....Pages 449-469
Neural Network Ammonia-Based Aeration Control for Activated Sludge Process Wastewater Treatment Plant (M. H. Husin, M. F. Rahmat, N. A. Wahab, M. F. M. Sabri)....Pages 471-487
A Min-conflict Algorithm for Power Scheduling Problem in a Smart Home Using Battery (Sharif Naser Makhadmeh, Ahamad Tajudin Khader, Mohammed Azmi Al-Betar, Syibrah Naim, Zaid Abdi Alkareem Alyasseri, Ammar Kamal Abasi)....Pages 489-501
An Improved Text Feature Selection for Clustering Using Binary Grey Wolf Optimizer (Ammar Kamal Abasi, Ahamad Tajudin Khader, Mohammed Azmi Al-Betar, Syibrah Naim, Sharif Naser Makhadmeh, Zaid Abdi Alkareem Alyasseri)....Pages 503-516
Front Matter ....Pages 517-517
Metamaterial Antenna for Biomedical Application (Mohd Aminudin Jamlos, Nur Amirah Othman, Wan Azani Mustafa, Maswani Khairi Marzuki)....Pages 519-528
Refraction Method of Metamaterial for Antenna (Maswani Khairi Marzuki, Mohd Aminudin Jamlos, Wan Azani Mustafa, Khairul Najmy Abdul Rani)....Pages 529-534
Circular Polarized 5.8 GHz Directional Antenna Design for Base Station Application (Mohd Aminudin Jamlos, Nurasma Husna Mohd Sabri, Wan Azani Mustafa, Maswani Khairi Marzuki)....Pages 535-542
Medical Image Enhancement and Deblurring (Reza Amini Gougeh, Tohid Yousefi Rezaii, Ali Farzamnia)....Pages 543-554
A Fast and Efficient Segmentation of Soil-Transmitted Helminths Through Various Color Models and k-Means Clustering (Norhanis Ayunie Ahmad Khairudin, Aimi Salihah Abdul Nasir, Lim Chee Chin, Haryati Jaafar, Zeehaida Mohamed)....Pages 555-576
Machine Learning Calibration for Near Infrared Spectroscopy Data: A Visual Programming Approach (Mahmud Iwan Solihin, Zheng Zekui, Chun Kit Ang, Fahri Heltha, Mohamed Rizon)....Pages 577-590
Real Time Android-Based Integrated System for Luggage Check-in Process at the Airport (Xin Yee Lee, Rosmiwati Mohd-Mokhtar)....Pages 591-603
Antenna Calibration in EMC Semi-anechoic Chamber Using Standard Antenna Method (SAM) and Standard Site Method (SSM) (Abdulrahman Ahmed Ghaleb Amer, Syarfa Zahirah Sapuan, Nur Atikah Zulkefli, Nasimuddin Nasimuddin, Nabiah Binti Zinal, Shipun Anuar Hamzah)....Pages 605-616
An Automatic Driver Assistant Based on Intention Detecting Using EEG Signal (Reza Amini Gougeh, Tohid Yousefi Rezaii, Ali Farzamnia)....Pages 617-627
Hybrid Skull Stripping Method for Brain CT Images (Fakhrul Razan Rahmad, Wan Nurshazwani Wan Zakaria, Ain Nazari, Mohd Razali Md Tomari, Nik Farhan Nik Fuad, Anis Azwani Muhd Suberi)....Pages 629-639
Improvising Non-uniform Illumination and Low Contrast Images of Soil Transmitted Helminths Image Using Contrast Enhancement Techniques (Norhanis Ayunie Ahmad Khairudin, Aimi Salihah Abdul Nasir, Lim Chee Chin, Haryati Jaafar, Zeehaida Mohamed)....Pages 641-658
Signal Processing Technique for Pulse Modulation (PM) Ground Penetrating Radar (GPR) System Based on Phase and Envelope Detector Technique (Che Ku Nor Azie Hailma Che Ku Melor, Ariffuddin Joret, Asmarashid Ponniran, Muhammad Suhaimi Sulong, Rosli Omar, Maryanti Razali)....Pages 659-669
Evaluation of Leap Motion Controller Usability in Development of Hand Gesture Recognition for Hemiplegia Patients (Wan Norliyana Wan Azlan, Wan Nurshazwani Wan Zakaria, Nurmiza Othman, Mohd Norzali Haji Mohd, Muhammad Nurfirdaus Abd Ghani)....Pages 671-682
Using Convolution Neural Networks Pattern for Classification of Motor Imagery in BCI System (Sepideh Zolfaghari, Tohid Yousefi Rezaii, Saeed Meshgini, Ali Farzamnia)....Pages 683-692
Metasurface with Wide-Angle Reception for Electromagnetic Energy Harvesting (Abdulrahman A. G. Amer, Syarfa Zahirah Sapuan, Nasimuddin, Nabiah Binti Zinal)....Pages 693-700
Integrated Soil Monitoring System for Internet of Thing (IOT) Applications (Xin Yi Lau, Chun Heng Soo, Yusmeeraz Yusof, Suhaila Isaak)....Pages 701-714
Contrast Enhancement Approaches on Medical Microscopic Images: A Review (Nadzirah Nahrawi, Wan Azani Mustafa, Siti Nurul Aqmariah Mohd Kanafiah, Mohd Aminudin Jamlos, Wan Khairunizam)....Pages 715-726
Effect of Different Filtering Techniques on Medical and Document Image (Wan Azani Mustafa, Syafiq Sam, Mohd Aminudin Jamlos, Wan Khairunizam)....Pages 727-736
Implementation of Seat Belt Monitoring and Alert System for Car Safety (Zainah Md Zain, Mohd Hairuddin Abu Bakar, Aman Zaki Mamat, Wan Nor Rafidah Wan Abdullah, Norsuryani Zainal Abidin, Haris Faisal Shaharuddin)....Pages 737-749
Electroporation Study: Pulse Electric Field Effect on Breast Cancer Cell (Nur Adilah Abd Rahman, Muhammad Mahadi Abdul Jamil, Mohamad Nazib Adon, Chew Chang Choon, Radzi Ambar)....Pages 751-760
Influence of Electroporation on HT29 Cell Proliferation, Spreading and Adhesion Properties (Hassan Buhari Mamman, Muhammad Mahadi Abdul Jamil, Nur Adilah Abd Rahman, Radzi Ambar, Chew Chang Choon)....Pages 761-773
Wound Healing and Electrofusion Application via Pulse Electric Field Exposure (Muhammad Mahadi Abdul Jamil, Mohamad Nazib Adon, Hassan Buhari Mamman, Nur Adilah Abd Rahman, Radzi Ambar, Chew Chang Choon)....Pages 775-784
Color Constancy Analysis Approach for Color Standardization on Malaria Thick and Thin Blood Smear Images (Thaqifah Ahmad Aris, Aimi Salihah Abdul Nasir, Haryati Jaafar, Lim Chee Chin, Zeehaida Mohamed)....Pages 785-804
Stochastic Analysis of ANN Statistical Features for CT Brain Posterior Fossa Image Classification (Anis Azwani Muhd Suberi, Wan Nurshazwani Wan Zakaria, Razali Tomari, Ain Nazari, Nik Farhan Nik Fuad, Fakhrul Razan Rahmad et al.)....Pages 805-817
Improvement of Magnetic Field Induction for MPI Application Using Maxwell Coils Paired-Sub-coils System Arrangement (Muhamad Fikri Shahkhirin Birahim, Nurmiza Othman, Syarfa’ Zahirah Sapuan, Mohd Razali Md Tomari, Wan Nurshazwani Wan Zakaria, Chua King Lee)....Pages 819-829
DCT Image Compression Implemented on Raspberry Pi to Compress Image Captured by CMOS Image Sensor (Ibrahim Saad Mohsin, Muhammad Imran Ahmad, Saad M. Salman, Mustafa Zuhaer Nayef Al-Dabagh, Mohd Nazrin Md Isa, Raja Abdullah Raja Ahmad)....Pages 831-841
A Racial Recognition Method Based on Facial Color and Texture for Improving Demographic Classification (Amer A. Sallam, Muhammad Nomani Kabir, Athmar N. M. Shamhan, Heba K. Nasser, Jing Wang)....Pages 843-852
Automatic Passengers Counting System Using Images Processing Based on YCbCr and HSV Colour Spaces Analysis (Muhammad Shahid Che Husin, Aimi Salihah Abdul Nasir)....Pages 853-872
Face Recognition Using PCA Implemented on Raspberry Pi (Ibrahim Majid Mohammed, Mustafa Zuhaer Nayef Al-Dabagh, Muhammad Imran Ahmad, Mohd Nazrin Md Isa)....Pages 873-889
Comparability of Edge Detection Techniques for Automatic Vehicle License Plate Detection and Recognition (Fatin Norazima Mohamad Ariff, Aimi Salihah Abdul Nasir, Haryati Jaafar, Abdul Nasir Zulkifli)....Pages 891-910
Classification of Facial Part Movement Acquired from Kinect V1 and Kinect V2 (Sheng Guang Heng, Rosdiyana Samad, Mahfuzah Mustafa, Zainah Md Zain, Nor Rul Hasma Abdullah, Dwi Pebrianti)....Pages 911-924
Hurst Exponent Based Brain Behavior Analysis of Stroke Patients Using EEG Signals (Wen Yean Choong, Wan Khairunizam, Murugappan Murugappan, Mohammad Iqbal Omar, Siao Zheng Bong, Ahmad Kadri Junoh et al.)....Pages 925-933
Examination Rain and Fog Attenuation for Path Loss Prediction in Millimeter Wave Range (Imadeldin Elsayed Elmutasim, Izzeldin I. Mohd)....Pages 935-946
Introduction of Static and Dynamic Features to Facial Nerve Paralysis Evaluation (Wan Syahirah W Samsudin, Rosdiyana Samad, Kenneth Sundaraj, Mohd Zaki Ahmad)....Pages 947-963
Offline EEG-Based DC Motor Control for Wheelchair Application (Norizam Sulaiman, Nawfan Mohammed Mohammed Ahmed Al-Fakih, Mamunur Rashid, Mohd Shawal Jadin, Mahfuzah Mustafa, Fahmi Samsuri)....Pages 965-980
Automated Cells Counting for Leukaemia and Malaria Detection Based on RGB and HSV Colour Spaces Analysis (Amer Fazryl Din, Aimi Salihah Abdul Nasir)....Pages 981-996
Simulation Studies of the Hybrid Human-Fuzzy Controller for Path Tracking of an Autonomous Vehicle (Hafiz Halin, Wan Khairunizam, Hasri Haris, Z. M. Razlan, S. A. Bakar, I. Zunaidi et al.)....Pages 997-1005
A New Approach in Energy Consumption Based on Genetic Algorithm and Fuzzy Logic for WSN (Ali Adnan Wahbi Alwafi, Javad Rahebi, Ali Farzamnia)....Pages 1007-1019
Front Matter ....Pages 1021-1021
Comparison of Buck-Boost Derived Non-isolated DC-DC Converters in a Photovoltaic System (Jotham Jeremy Lourdes, Chia Ai Ooi, Jiashen Teh)....Pages 1023-1037
Fault Localization and Detection in Medium Voltage Distribution Network Using Adaptive Neuro-Fuzzy Inference System (ANFIS) (N. S. B. Jamili, Mohd Rafi Adzman, Wan Syaza Ainaa Wan Salman, M. H. Idris, M. Amirruddin)....Pages 1039-1052
Flashover Voltage Prediction on Polluted Cup-Pin the Insulators Under Polluted Conditions (Ali. A. Salem, R. Abd-Rahman, M. S. Kamarudin, N. A. Othman, N. A. M. Jamail, N. Hussin et al.)....Pages 1053-1065
Effect of Distributed Generation to the Faults in Medium Voltage Network Using ATP-EMTP Simulation (Wan Syaza Ainaa Wan Salman, Mohd Rafi Adzman, Muzamir Isa, N. S. B. Jamili, M. H. Idris, M. Amirruddin)....Pages 1067-1082
Optimal Reactive Power Dispatch Solution by Loss Minimisation Using Dragonfly Optimization Algorithm (Ibrahim Haruna Shanono, Masni Ainina Mahmud, Nor Rul Hasma Abdullah, Mahfuzah Mustafa, Rosdiyana Samad, Dwi Pebrianti et al.)....Pages 1083-1103
Analysis of Pedal Power Energy Harvesting for Alternative Power Source (Sheikh-Muhammad Haziq Sah-Azmi, Zuraini Dahari)....Pages 1105-1114
An Application of Barnacles Mating Optimizer Algorithm for Combined Economic and Emission Dispatch Solution (Mohd Herwan Sulaiman, Zuriani Mustaffa, Mohd Mawardi Saari, Amir Izzani Mohamed)....Pages 1115-1124
Development of Microcontroller Based Portable Solar Irradiance Meter Using Mini Solar Cell (Lee Woan Jun, Mohd Shawal Jadin, Norizam Sulaiman)....Pages 1125-1137
Performance of Graphite and Activated Carbon as Electrical Grounding Enhancement Material (Mohd Yuhyi Mohd Tadza, Tengku Hafidatul Husna Tengku Anuar, Fadzil Mat Yahaya, Rahisham Abd Rahman)....Pages 1139-1154
Design on Real Time Control for Dual Axis Solar Tracker for Mobile Robot (Muhammad Hanzolah Shahul Hameed, Mohd Zamri Hasan, Junaidah Ali Mohd Jobran)....Pages 1155-1172
Modified Particle Swarm Optimization for Robust Anti-swing Gantry Crane Controller Tuning (Mahmud Iwan Solihin, Wei Hong Lim, Sew Sun Tiang, Chun Kit Ang)....Pages 1173-1192
Feasibility Analysis of a Hybrid System for a Health Clinic in a Rural Area South-Eastern Iraq (Zaidoon W. J. AL-Shammari, M. M. Azizan, A. S. F. Rahman)....Pages 1193-1202
Optimal Sizing of PV/Wind/Battery Hybrid System for Rural School in South Iraq (Zaidoon W. J. AL-Shammari, M. M. Azizan, A. S. F. Rahman)....Pages 1203-1211
The Use of Gypsum and Waste Gypsum for Electrical Grounding Backfill (Amizatulhani Abdullah, Nurmazuria Mazelan, Mohd Yuhyi Mohd Tadza, Rahisham Abd Rahman)....Pages 1213-1226
Energy-Efficient Superframe Scheduling in Industrial Wireless Networked Control System (Duc Chung Tran, Rosdiazli Ibrahim, Fawnizu Azmadi Hussin, Madiah Omar)....Pages 1227-1242
Design of Two Axis Solar Tracker Based on Optoelectrical Tracking Using Hybrid FuGA Controller (Imam Abadi, Erma Hakim Setyawan, D. R. Pramesrani)....Pages 1243-1263


Lecture Notes in Electrical Engineering 666

Zainah Md Zain · Hamzah Ahmad · Dwi Pebrianti · Mahfuzah Mustafa · Nor Rul Hasma Abdullah · Rosdiyana Samad · Maziyah Mat Noh

Editors

Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019 NUSYS’19

Lecture Notes in Electrical Engineering Volume 666

Series Editors

Leopoldo Angrisani, Department of Electrical and Information Technologies Engineering, University of Napoli Federico II, Naples, Italy
Marco Arteaga, Departament de Control y Robótica, Universidad Nacional Autónoma de México, Coyoacán, Mexico
Bijaya Ketan Panigrahi, Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, Delhi, India
Samarjit Chakraborty, Fakultät für Elektrotechnik und Informationstechnik, TU München, Munich, Germany
Jiming Chen, Zhejiang University, Hangzhou, Zhejiang, China
Shanben Chen, Materials Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
Tan Kay Chen, Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
Rüdiger Dillmann, Humanoids and Intelligent Systems Laboratory, Karlsruhe Institute for Technology, Karlsruhe, Germany
Haibin Duan, Beijing University of Aeronautics and Astronautics, Beijing, China
Gianluigi Ferrari, Università di Parma, Parma, Italy
Manuel Ferre, Centre for Automation and Robotics CAR (UPM-CSIC), Universidad Politécnica de Madrid, Madrid, Spain
Sandra Hirche, Department of Electrical Engineering and Information Science, Technische Universität München, Munich, Germany
Faryar Jabbari, Department of Mechanical and Aerospace Engineering, University of California, Irvine, CA, USA
Limin Jia, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Alaa Khamis, German University in Egypt El Tagamoa El Khames, New Cairo City, Egypt
Torsten Kroeger, Stanford University, Stanford, CA, USA
Qilian Liang, Department of Electrical Engineering, University of Texas at Arlington, Arlington, TX, USA
Ferran Martín, Departament d’Enginyeria Electrònica, Universitat Autònoma de Barcelona, Bellaterra, Barcelona, Spain
Tan Cher Ming, College of Engineering, Nanyang Technological University, Singapore, Singapore
Wolfgang Minker, Institute of Information Technology, University of Ulm, Ulm, Germany
Pradeep Misra, Department of Electrical Engineering, Wright State University, Dayton, OH, USA
Sebastian Möller, Quality and Usability Laboratory, TU Berlin, Berlin, Germany
Subhas Mukhopadhyay, School of Engineering & Advanced Technology, Massey University, Palmerston North, Manawatu-Wanganui, New Zealand
Cun-Zheng Ning, Electrical Engineering, Arizona State University, Tempe, AZ, USA
Toyoaki Nishida, Graduate School of Informatics, Kyoto University, Kyoto, Japan
Federica Pascucci, Dipartimento di Ingegneria, Università degli Studi “Roma Tre”, Rome, Italy
Yong Qin, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Gan Woon Seng, School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore, Singapore
Joachim Speidel, Institute of Telecommunications, Universität Stuttgart, Stuttgart, Germany
Germano Veiga, Campus da FEUP, INESC Porto, Porto, Portugal
Haitao Wu, Academy of Opto-electronics, Chinese Academy of Sciences, Beijing, China
Junjie James Zhang, Charlotte, NC, USA

The book series Lecture Notes in Electrical Engineering (LNEE) publishes the latest developments in Electrical Engineering - quickly, informally and in high quality. While original research reported in proceedings and monographs has traditionally formed the core of LNEE, we also encourage authors to submit books devoted to supporting student education and professional training in the various fields and application areas of electrical engineering. The series covers classical and emerging topics concerning:

• Communication Engineering, Information Theory and Networks
• Electronics Engineering and Microelectronics
• Signal, Image and Speech Processing
• Wireless and Mobile Communication
• Circuits and Systems
• Energy Systems, Power Electronics and Electrical Machines
• Electro-optical Engineering
• Instrumentation Engineering
• Avionics Engineering
• Control Systems
• Internet-of-Things and Cybersecurity
• Biomedical Devices, MEMS and NEMS

For general information about this book series, comments or suggestions, please contact leontina. [email protected]. To submit a proposal or request further information, please contact the Publishing Editor in your country: China Jasmine Dou, Associate Editor ([email protected]) India, Japan, Rest of Asia Swati Meherishi, Executive Editor ([email protected]) Southeast Asia, Australia, New Zealand Ramesh Nath Premnath, Editor ([email protected]) USA, Canada: Michael Luby, Senior Editor ([email protected]) All other Countries: Leontina Di Cecco, Senior Editor ([email protected]) ** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex, SCOPUS, MetaPress, Web of Science and Springerlink **

More information about this series at http://www.springer.com/series/7818

Zainah Md Zain · Hamzah Ahmad · Dwi Pebrianti · Mahfuzah Mustafa · Nor Rul Hasma Abdullah · Rosdiyana Samad · Maziyah Mat Noh

Editors

Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019 NUSYS’19


Editors Zainah Md Zain Faculty of Electrical & Electronics Engineering Universiti Malaysia Pahang Pekan, Pahang, Malaysia

Hamzah Ahmad Faculty of Electrical & Electronics Engineering Universiti Malaysia Pahang Pekan, Pahang, Malaysia

Dwi Pebrianti Faculty of Electrical & Electronics Engineering Universiti Malaysia Pahang Pekan, Pahang, Malaysia

Mahfuzah Mustafa Faculty of Electrical & Electronics Engineering Universiti Malaysia Pahang Pekan, Pahang, Malaysia

Nor Rul Hasma Abdullah Faculty of Electrical & Electronics Engineering Universiti Malaysia Pahang Pekan, Pahang, Malaysia

Rosdiyana Samad Faculty of Electrical & Electronics Engineering Universiti Malaysia Pahang Pekan, Pahang, Malaysia

Maziyah Mat Noh Faculty of Electrical & Electronics Engineering Universiti Malaysia Pahang Pekan, Pahang, Malaysia

ISSN 1876-1100 ISSN 1876-1119 (electronic) Lecture Notes in Electrical Engineering ISBN 978-981-15-5280-9 ISBN 978-981-15-5281-6 (eBook) https://doi.org/10.1007/978-981-15-5281-6 © Springer Nature Singapore Pte Ltd. 2021 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

The National Technical Seminar on Unmanned System Technology 2019 (NUSYS’19) was organized by the IEEE Oceanic Engineering Society (OES) Malaysia Chapter and the Malaysian Society for Automatic Control Engineers (MACE) IFAC NMO. NUSYS’19 was held during December 2–3, 2019, at Universiti Malaysia Pahang, Gambang Campus, Kuantan, Pahang, Malaysia, with the conference theme “Unmanned System Technology and AI Applications”. The event was the 11th in a series of conferences held since 2008. NUSYS’19 focused on both theory and application, primarily covering the topics of intelligent unmanned technologies, robotics and autonomous vehicles. We invited four keynote speakers who dealt with related state-of-the-art technologies including unmanned aerial vehicles (UAVs), underwater vehicles (UVs), autonomous vehicles, humanoid robots and intelligent systems, among others. They were Mr. Kamarulzaman Muhamed (Founder and CEO, Aerodyne Group, “CEO of Top 10 hottest start-up company by Nikkei Japan, May 2019”), Assoc. Prof. Dr. Hanafiah Yussof (Founder, Board of Director and Group Chief Officer of Robopreneur Sdn. Bhd.), Assoc. Prof. Dr. Hairi Zamsuri (General Manager, eMoovit Technology Sdn. Bhd.) and Mr. Mohd Fairuz Nor Azmi (Project Manager, Fugro Malaysia Marine Sdn. Bhd., formerly known as Fugro Geodetic Malaysia Sdn. Bhd.). The objectives of the conference were threefold: to provide a forum for universities and industry to discuss a wide range of unmanned system technologies, to disseminate the latest technology in the field of unmanned system technology and to provide an opportunity for researchers to present their research papers in the unmanned system technology area.
Despite focusing on a rather specialized area of research concerning unmanned system technology and electrical and electronics engineering technology, NUSYS’19 successfully attracted 87 papers from 12 local universities and one international paper from Institute Technology Surabaya, Indonesia. This volume of proceedings provides an opportunity for readers to engage with a selection of refereed papers that were presented during the NUSYS’19 conference. The book is organized into four parts, which reflect the research topics of the conference themes:

Part 1: Unmanned System Technology, Underwater Technology and Marine
Part 2: Applied Electronics and Computer Engineering
Part 3: Control, Instrumentations and Artificial Intelligent Systems
Part 4: Sustainable Energy and Power Electronics

One aim of this book is to stimulate interactions among researchers in areas pertinent to intelligent unmanned systems (AUV, UAV and AGV), namely autonomous control systems and vehicles. Another aim is to share new ideas, new challenges and the authors’ expertise on critical and emerging technologies. The book covers multifaceted aspects of unmanned system technology. The editors hope that readers will find this book not only stimulating but also useful and usable in whatever aspect of unmanned system design in which they may be involved or interested.

The editors would like to express their sincere appreciation to all the contributors for their cooperation in producing this book. We wish to take this opportunity to thank all individuals and organizations who have contributed in some way to making NUSYS’19 a success and a memorable gathering. We also wish to extend our gratitude to the members of the IEEE OES Malaysia Chapter Committee and the Organizing Committee for their tireless effort. Finally, we thank the publisher, Springer, and most importantly Mr. Karthik Raj Selvaraj, for his support and encouragement in undertaking this publication.

Editors

Contents

Unmanned System Technology, Underwater Technology and Marine Tracking Control Design for Underactuated Micro Autonomous Underwater Vehicle in Horizontal Plane Using Robust Filter Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Muhammad Azri Bin Abdul Wahed and Mohd Rizal Arshad Design and Development of Remotely Operated Pipeline Inspection Robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Mohd Shahrieel Mohd Aras, Zainah Md Zain, Aliff Farhan Kamaruzaman, Mohd Zamzuri Ab Rashid, Azhar Ahmad, Hairol Nizam Mohd Shah, Mohd Zaidi Mohd Tumari, Alias Khamis, Fadilah Ab Azis, and Fariz Ali@Ibrahim Vision Optimization for Altitude Control and Object Tracking Control of an Autonomous Underwater Vehicle (AUV) . . . . . . . . . . . . . Joe Siang Keek, Mohd Shahrieel Mohd Aras, Zainah Md. Zain, Mohd Bazli Bahar, Ser Lee Loh, and Shin Horng Chong Development of Autonomous Underwater Vehicle Equipped with Object Recognition and Tracking System . . . . . . . . . . . . . . . . . . . . Muhammad Haniff Abu Mangshor, Radzi Ambar, Herdawatie Abdul Kadir, Khalid Isa, Inani Yusra Amran, Abdul Aziz Abd Kadir, Nurul Syila Ibrahim, Chew Chang Choon, and Shinichi Sagara Dual Image Fusion Technique for Underwater Image Contrast Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chern How Chong, Ahmad Shahrizan Abdul Ghani, and Kamil Zakwan Mohd Azmi

3

15

25

37

57

vii

viii

Contents

Red and Blue Channels Correction Based on Green Channel and Median-Based Dual-Intensity Images Fusion for Turbid Underwater Image Quality Enhancement . . . . . . . . . . . . . . . . . . . . . . . . Kamil Zakwan Mohd Azmi, Ahmad Shahrizan Abdul Ghani, and Zulkifli Md Yusof

73

Analysis of Pruned Neural Networks (MobileNetV2-YOLO v2) for Underwater Object Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. F. Ayob, K. Khairuddin, Y. M. Mustafah, A. R. Salisa, and K. Kadir

87

Different Cell Decomposition Path Planning Methods for Unmanned Air Vehicles-A Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sanjoy Kumar Debnath, Rosli Omar, Susama Bagchi, Elia Nadira Sabudin, Mohd Haris Asyraf Shee Kandar, Khan Foysol, and Tapan Kumar Chakraborty

99

Improved Potential Field Method for Robot Path Planning with Path Pruning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 Elia Nadira Sabudin, Rosli Omar, Ariffudin Joret, Asmarashid Ponniran, Muhammad Suhaimi Sulong, Herdawatie Abdul Kadir, and Sanjoy Kumar Debnath Development of DugongBot Underwater Drones Using Open-Source Robotic Platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129 Ahmad Anas Yusof, Mohd Khairi Mohamed Nor, Mohd Shahrieel Mohd Aras, Hamdan Sulaiman, and Abdul Talib Din Development of Autonomous Underwater Vehicle for Water Quality Measurement Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139 Inani Yusra Amran, Khalid Isa, Herdawatie Abdul Kadir, Radzi Ambar, Nurul Syila Ibrahim, Abdul Aziz Abd Kadir, and Muhammad Haniff Abu Mangshor Discrete Sliding Mode Controller on Autonomous Underwater Vehicle in Steering Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 Nira Mawangi Sarif, Rafidah Ngadengon, Herdawatie Abdul Kadir, and Mohd Hafiz A. Jalil Impact of Acoustic Signal on Optical Signal and Vice Versa in Optoacoustic Based Underwater Localization . . . . . . . . . . . . . . . . . . 177 M. R. Arshad and M. H. A. Majid Design and Development of Mini Autonomous Surface Vessel for Bathymetric Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189 Muhammad Ammar Mohd Adam, Zulkifli Zainal Abidin, Ahmad Imran Ibrahim, Ahmad Shahril Mohd Ghani, and Al Jawharah Anchumukkil


Control, Instrumentation and Artificial Intelligent Systems

Optimal Power Flow Solutions for Power System Operations Using Moth-Flame Optimization Algorithm . . . 207
Salman Alabd, Mohd Herwan Sulaiman, and Muhammad Ikram Mohd Rashid

A Pilot Study on Pipeline Wall Inspection Technology Tomography . . . 221
Muhammad Nuriffat Roslee, Siti Zarina Mohd. Muji, Jaysuman Pusppanathan, and Mohd. Fadzli Abd. Shaib

Weighted-Sum Extended Bat Algorithm Based PD Controller Design for Wheeled Mobile Robot . . . 241
Nur Aisyah Syafinaz Suarin, Dwi Pebrianti, Nurnajmin Qasrina Ann, and Luhur Bayuaji

An Analysis of State Covariance of Mobile Robot Navigation in Unstructured Environment Based on ROS . . . 259
Hamzah Ahmad, Lim Zhi Xian, Nur Aqilah Othman, Mohd Syakirin Ramli, and Mohd Mawardi Saari

Control Strategy for Differential Drive Wheel Mobile Robot . . . 271
Nor Akmal Alias and Herdawatie Abdul Kadir

Adaptive Observer for DC Motor Fault Detection Dynamical System . . . 285
Janet Lee, Rosmiwati Mohd-Mokhtar, and Muhammad Nasiruddin Mahyuddin

Water Level Classification for Flood Monitoring System Using Convolutional Neural Network . . . 299
J. L. Gan and W. Zailah

Evaluation of Back-Side Slits with Sub-millimeter Resolution Using a Differential AMR Probe . . . 319
M. A. H. P. Zaini, M. M. Saari, N. A. Nadzri, A. M. Halil, A. J. S. Hanifah, and K. Tsukada

Model-Free Tuning of Laguerre Network for Impedance Matching in Bilateral Teleoperation System . . . 329
Mohd Syakirin Ramli, Hamzah Ahmad, Addie Irawan, and Nur Liyana Ibrahim

Identification of Liquid Slosh Behavior Using Continuous-Time Hammerstein Model Based Sine Cosine Algorithm . . . 345
Julakha Jahan Jui, Mohd Helmi Suid, Zulkifli Musa, and Mohd Ashraf Ahmad


Cardiotocogram Data Classification Using Random Forest Based Machine Learning Algorithm . . . 357
M. M. Imran Molla, Julakha Jahan Jui, Bifta Sama Bari, Mamunur Rashid, and Md Jahid Hasan

FPGA Implementation of Sensor Data Acquisition for Real-Time Human Body Motion Measurement System . . . 371
Zarina Tukiran, Afandi Ahmad, Herdawatie Abd. Kadir, and Ariffudin Joret

Pulse Modulation (PM) Ground Penetrating Radar (GPR) System Development by Using Envelope Detector Technique . . . 381
Maryanti Razali, Ariffuddin Joret, M. F. L. Abdullah, Elfarizanis Baharudin, Asmarashid Ponniran, Muhammad Suhaimi Sulong, Che Ku Nor Azie Hailma Che Ku Melor, and Noor Azwan Shairi

An Overview of Modeling and Control of a Through-the-Road Hybrid Electric Vehicle . . . 399
M. F. M. Sabri, M. H. Husin, M. I. Jobli, and A. M. N. A. Kamaruddin

Euler-Lagrange Based Dynamic Model of Double Rotary Inverted Pendulum . . . 419
Mukhtar Fatihu Hamza, Jamilu Kamilu Adamu, and Abdulbasid Ismail Isa

Network-Based Cooperative Synchronization Control of 3 Articulated Robotic Arms for Industry 4.0 Application . . . 435
Kam Wah Chan, Muhammad Nasiruddin Mahyuddin, and Bee Ee Khoo

EEG Signal Denoising Using Hybridizing Method Between Wavelet Transform with Genetic Algorithm . . . 449
Zaid Abdi Alkareem Alyasseri, Ahamad Tajudin Khader, Mohammed Azmi Al-Betar, Ammar Kamal Abasi, and Sharif Naser Makhadmeh

Neural Network Ammonia-Based Aeration Control for Activated Sludge Process Wastewater Treatment Plant . . . 471
M. H. Husin, M. F. Rahmat, N. A. Wahab, and M. F. M. Sabri

A Min-conflict Algorithm for Power Scheduling Problem in a Smart Home Using Battery . . . 489
Sharif Naser Makhadmeh, Ahamad Tajudin Khader, Mohammed Azmi Al-Betar, Syibrah Naim, Zaid Abdi Alkareem Alyasseri, and Ammar Kamal Abasi

An Improved Text Feature Selection for Clustering Using Binary Grey Wolf Optimizer . . . 503
Ammar Kamal Abasi, Ahamad Tajudin Khader, Mohammed Azmi Al-Betar, Syibrah Naim, Sharif Naser Makhadmeh, and Zaid Abdi Alkareem Alyasseri


Applied Electronics and Computer Engineering

Metamaterial Antenna for Biomedical Application . . . 519
Mohd Aminudin Jamlos, Nur Amirah Othman, Wan Azani Mustafa, and Maswani Khairi Marzuki

Refraction Method of Metamaterial for Antenna . . . 529
Maswani Khairi Marzuki, Mohd Aminudin Jamlos, Wan Azani Mustafa, and Khairul Najmy Abdul Rani

Circular Polarized 5.8 GHz Directional Antenna Design for Base Station Application . . . 535
Mohd Aminudin Jamlos, Nurasma Husna Mohd Sabri, Wan Azani Mustafa, and Maswani Khairi Marzuki

Medical Image Enhancement and Deblurring . . . 543
Reza Amini Gougeh, Tohid Yousefi Rezaii, and Ali Farzamnia

A Fast and Efficient Segmentation of Soil-Transmitted Helminths Through Various Color Models and k-Means Clustering . . . 555
Norhanis Ayunie Ahmad Khairudin, Aimi Salihah Abdul Nasir, Lim Chee Chin, Haryati Jaafar, and Zeehaida Mohamed

Machine Learning Calibration for Near Infrared Spectroscopy Data: A Visual Programming Approach . . . 577
Mahmud Iwan Solihin, Zheng Zekui, Chun Kit Ang, Fahri Heltha, and Mohamed Rizon

Real Time Android-Based Integrated System for Luggage Check-in Process at the Airport . . . 591
Xin Yee Lee and Rosmiwati Mohd-Mokhtar

Antenna Calibration in EMC Semi-anechoic Chamber Using Standard Antenna Method (SAM) and Standard Site Method (SSM) . . . 605
Abdulrahman Ahmed Ghaleb Amer, Syarfa Zahirah Sapuan, Nur Atikah Zulkefli, Nasimuddin Nasimuddin, Nabiah Binti Zinal, and Shipun Anuar Hamzah

An Automatic Driver Assistant Based on Intention Detecting Using EEG Signal . . . 617
Reza Amini Gougeh, Tohid Yousefi Rezaii, and Ali Farzamnia

Hybrid Skull Stripping Method for Brain CT Images . . . 629
Fakhrul Razan Rahmad, Wan Nurshazwani Wan Zakaria, Ain Nazari, Mohd Razali Md Tomari, Nik Farhan Nik Fuad, and Anis Azwani Muhd Suberi


Improvising Non-uniform Illumination and Low Contrast Images of Soil Transmitted Helminths Image Using Contrast Enhancement Techniques . . . 641
Norhanis Ayunie Ahmad Khairudin, Aimi Salihah Abdul Nasir, Lim Chee Chin, Haryati Jaafar, and Zeehaida Mohamed

Signal Processing Technique for Pulse Modulation (PM) Ground Penetrating Radar (GPR) System Based on Phase and Envelope Detector Technique . . . 659
Che Ku Nor Azie Hailma Che Ku Melor, Ariffuddin Joret, Asmarashid Ponniran, Muhammad Suhaimi Sulong, Rosli Omar, and Maryanti Razali

Evaluation of Leap Motion Controller Usability in Development of Hand Gesture Recognition for Hemiplegia Patients . . . 671
Wan Norliyana Wan Azlan, Wan Nurshazwani Wan Zakaria, Nurmiza Othman, Mohd Norzali Haji Mohd, and Muhammad Nurfirdaus Abd Ghani

Using Convolution Neural Networks Pattern for Classification of Motor Imagery in BCI System . . . 683
Sepideh Zolfaghari, Tohid Yousefi Rezaii, Saeed Meshgini, and Ali Farzamnia

Metasurface with Wide-Angle Reception for Electromagnetic Energy Harvesting . . . 693
Abdulrahman A. G. Amer, Syarfa Zahirah Sapuan, Nasimuddin, and Nabiah Binti Zinal

Integrated Soil Monitoring System for Internet of Thing (IOT) Applications . . . 701
Xin Yi Lau, Chun Heng Soo, Yusmeeraz Yusof, and Suhaila Isaak

Contrast Enhancement Approaches on Medical Microscopic Images: A Review . . . 715
Nadzirah Nahrawi, Wan Azani Mustafa, Siti Nurul Aqmariah Mohd Kanafiah, Mohd Aminudin Jamlos, and Wan Khairunizam

Effect of Different Filtering Techniques on Medical and Document Image . . . 727
Wan Azani Mustafa, Syafiq Sam, Mohd Aminudin Jamlos, and Wan Khairunizam

Implementation of Seat Belt Monitoring and Alert System for Car Safety . . . 737
Zainah Md Zain, Mohd Hairuddin Abu Bakar, Aman Zaki Mamat, Wan Nor Rafidah Wan Abdullah, Norsuryani Zainal Abidin, and Haris Faisal Shaharuddin


Electroporation Study: Pulse Electric Field Effect on Breast Cancer Cell . . . 751
Nur Adilah Abd Rahman, Muhammad Mahadi Abdul Jamil, Mohamad Nazib Adon, Chew Chang Choon, and Radzi Ambar

Influence of Electroporation on HT29 Cell Proliferation, Spreading and Adhesion Properties . . . 761
Hassan Buhari Mamman, Muhammad Mahadi Abdul Jamil, Nur Adilah Abd Rahman, Radzi Ambar, and Chew Chang Choon

Wound Healing and Electrofusion Application via Pulse Electric Field Exposure . . . 775
Muhammad Mahadi Abdul Jamil, Mohamad Nazib Adon, Hassan Buhari Mamman, Nur Adilah Abd Rahman, Radzi Ambar, and Chew Chang Choon

Color Constancy Analysis Approach for Color Standardization on Malaria Thick and Thin Blood Smear Images . . . 785
Thaqifah Ahmad Aris, Aimi Salihah Abdul Nasir, Haryati Jaafar, Lim Chee Chin, and Zeehaida Mohamed

Stochastic Analysis of ANN Statistical Features for CT Brain Posterior Fossa Image Classification . . . 805
Anis Azwani Muhd Suberi, Wan Nurshazwani Wan Zakaria, Razali Tomari, Ain Nazari, Nik Farhan Nik Fuad, Fakhrul Razan Rahmad, and Salsabella Mohd Fizol

Improvement of Magnetic Field Induction for MPI Application Using Maxwell Coils Paired-Sub-coils System Arrangement . . . 819
Muhamad Fikri Shahkhirin Birahim, Nurmiza Othman, Syarfa' Zahirah Sapuan, Mohd Razali Md Tomari, Wan Nurshazwani Wan Zakaria, and Chua King Lee

DCT Image Compression Implemented on Raspberry Pi to Compress Image Captured by CMOS Image Sensor . . . 831
Ibrahim Saad Mohsin, Muhammad Imran Ahmad, Saad M. Salman, Mustafa Zuhaer Nayef Al-Dabagh, Mohd Nazrin Md Isa, and Raja Abdullah Raja Ahmad

A Racial Recognition Method Based on Facial Color and Texture for Improving Demographic Classification . . . 843
Amer A. Sallam, Muhammad Nomani Kabir, Athmar N. M. Shamhan, Heba K. Nasser, and Jing Wang

Automatic Passengers Counting System Using Images Processing Based on YCbCr and HSV Colour Spaces Analysis . . . 853
Muhammad Shahid Che Husin and Aimi Salihah Abdul Nasir


Face Recognition Using PCA Implemented on Raspberry Pi . . . 873
Ibrahim Majid Mohammed, Mustafa Zuhaer Nayef Al-Dabagh, Muhammad Imran Ahmad, and Mohd Nazrin Md Isa

Comparability of Edge Detection Techniques for Automatic Vehicle License Plate Detection and Recognition . . . 891
Fatin Norazima Mohamad Ariff, Aimi Salihah Abdul Nasir, Haryati Jaafar, and Abdul Nasir Zulkifli

Classification of Facial Part Movement Acquired from Kinect V1 and Kinect V2 . . . 911
Sheng Guang Heng, Rosdiyana Samad, Mahfuzah Mustafa, Zainah Md Zain, Nor Rul Hasma Abdullah, and Dwi Pebrianti

Hurst Exponent Based Brain Behavior Analysis of Stroke Patients Using EEG Signals . . . 925
Wen Yean Choong, Wan Khairunizam, Murugappan Murugappan, Mohammad Iqbal Omar, Siao Zheng Bong, Ahmad Kadri Junoh, Zuradzman Mohamad Razlan, A. B. Shahriman, and Wan Azani Wan Mustafa

Examination Rain and Fog Attenuation for Path Loss Prediction in Millimeter Wave Range . . . 935
Imadeldin Elsayed Elmutasim and Izzeldin I. Mohd

Introduction of Static and Dynamic Features to Facial Nerve Paralysis Evaluation . . . 947
Wan Syahirah W Samsudin, Rosdiyana Samad, Kenneth Sundaraj, and Mohd Zaki Ahmad

Offline EEG-Based DC Motor Control for Wheelchair Application . . . 965
Norizam Sulaiman, Nawfan Mohammed Mohammed Ahmed Al-Fakih, Mamunur Rashid, Mohd Shawal Jadin, Mahfuzah Mustafa, and Fahmi Samsuri

Automated Cells Counting for Leukaemia and Malaria Detection Based on RGB and HSV Colour Spaces Analysis . . . 981
Amer Fazryl Din and Aimi Salihah Abdul Nasir

Simulation Studies of the Hybrid Human-Fuzzy Controller for Path Tracking of an Autonomous Vehicle . . . 997
Hafiz Halin, Wan Khairunizam, Hasri Haris, Z. M. Razlan, S. A. Bakar, I. Zunaidi, and Wan Azani Mustafa

A New Approach in Energy Consumption Based on Genetic Algorithm and Fuzzy Logic for WSN . . . 1007
Ali Adnan Wahbi Alwafi, Javad Rahebi, and Ali Farzamnia


Sustainable Energy and Power Engineering

Comparison of Buck-Boost Derived Non-isolated DC-DC Converters in a Photovoltaic System . . . 1023
Jotham Jeremy Lourdes, Chia Ai Ooi, and Jiashen Teh

Fault Localization and Detection in Medium Voltage Distribution Network Using Adaptive Neuro-Fuzzy Inference System (ANFIS) . . . 1039
N. S. B. Jamili, Mohd Rafi Adzman, Wan Syaza Ainaa Wan Salman, M. H. Idris, and M. Amirruddin

Flashover Voltage Prediction on Polluted Cup-Pin Insulators Under Polluted Conditions . . . 1053
Ali. A. Salem, R. Abd-Rahman, M. S. Kamarudin, N. A. Othman, N. A. M. Jamail, N. Hussin, H. A. Hamid, and I. M. Rawi

Effect of Distributed Generation to the Faults in Medium Voltage Network Using ATP-EMTP Simulation . . . 1067
Wan Syaza Ainaa Wan Salman, Mohd Rafi Adzman, Muzamir Isa, N. S. B. Jamili, M. H. Idris, and M. Amirruddin

Optimal Reactive Power Dispatch Solution by Loss Minimisation Using Dragonfly Optimization Algorithm . . . 1083
Ibrahim Haruna Shanono, Masni Ainina Mahmud, Nor Rul Hasma Abdullah, Mahfuzah Mustafa, Rosdiyana Samad, Dwi Pebrianti, and Aisha Muhammad

Analysis of Pedal Power Energy Harvesting for Alternative Power Source . . . 1105
Sheikh-Muhammad Haziq Sah-Azmi and Zuraini Dahari

An Application of Barnacles Mating Optimizer Algorithm for Combined Economic and Emission Dispatch Solution . . . 1115
Mohd Herwan Sulaiman, Zuriani Mustaffa, Mohd Mawardi Saari, and Amir Izzani Mohamed

Development of Microcontroller Based Portable Solar Irradiance Meter Using Mini Solar Cell . . . 1125
Lee Woan Jun, Mohd Shawal Jadin, and Norizam Sulaiman

Performance of Graphite and Activated Carbon as Electrical Grounding Enhancement Material . . . 1139
Mohd Yuhyi Mohd Tadza, Tengku Hafidatul Husna Tengku Anuar, Fadzil Mat Yahaya, and Rahisham Abd Rahman

Design on Real Time Control for Dual Axis Solar Tracker for Mobile Robot . . . 1155
Muhammad Hanzolah Shahul Hameed, Mohd Zamri Hasan, and Junaidah Ali Mohd Jobran


Modified Particle Swarm Optimization for Robust Anti-swing Gantry Crane Controller Tuning . . . 1173
Mahmud Iwan Solihin, Wei Hong Lim, Sew Sun Tiang, and Chun Kit Ang

Feasibility Analysis of a Hybrid System for a Health Clinic in a Rural Area South-Eastern Iraq . . . 1193
Zaidoon W. J. AL-Shammari, M. M. Azizan, and A. S. F. Rahman

Optimal Sizing of PV/Wind/Battery Hybrid System for Rural School in South Iraq . . . 1203
Zaidoon W. J. AL-Shammari, M. M. Azizan, and A. S. F. Rahman

The Use of Gypsum and Waste Gypsum for Electrical Grounding Backfill . . . 1213
Amizatulhani Abdullah, Nurmazuria Mazelan, Mohd Yuhyi Mohd Tadza, and Rahisham Abd Rahman

Energy-Efficient Superframe Scheduling in Industrial Wireless Networked Control System . . . 1227
Duc Chung Tran, Rosdiazli Ibrahim, Fawnizu Azmadi Hussin, and Madiah Omar

Design of Two Axis Solar Tracker Based on Optoelectrical Tracking Using Hybrid FuGA Controller . . . 1243
Imam Abadi, Erma Hakim Setyawan, and D. R. Pramesrani

Unmanned System Technology, Underwater Technology and Marine

Tracking Control Design for Underactuated Micro Autonomous Underwater Vehicle in Horizontal Plane Using Robust Filter Approach

Muhammad Azri Bin Abdul Wahed and Mohd Rizal Arshad

Abstract The micro autonomous underwater vehicle (µAUV) designed and developed at the Underwater, Control and Robotics Group (UCRG) is a torpedo-shaped vehicle measuring only 0.72 m in length and 0.11 m in diameter, with a mass of approximately 6 kg. This paper proposes a time-invariant tracking control method for the underactuated micro AUV in the horizontal plane using a robust filter approach to track a predefined trajectory. A tracking error is introduced that can be driven to zero using only force in the surge direction and moment in the yaw direction. The robust control minimizes the effects of external disturbance and parameter uncertainties on the AUV's performance. With only the rigid-body system inertia matrix of the micro AUV, the proposed controller achieves robustness against parameter uncertainties, model nonlinearities, and unexpected external disturbances. The performance of the proposed robust tracking control is demonstrated in simulation results.



Keywords Underactuated system · Micro autonomous underwater vehicles · Robust control · Trajectory tracking





1 Introduction

The micro Autonomous Underwater Vehicle [1] developed by the Underwater, Control and Robotics Group (UCRG) is a torpedo-shaped vehicle designed for shallow-water inspection tasks such as coral reef inspection. It measures 0.72 m in length and 0.11 m in diameter, and has a mass of 6 kg in its most basic configuration. Underwater missions require the µAUV to be very stable so that it can follow a predefined trajectory with high accuracy. However, this µAUV is underactuated, which makes it difficult for the vehicle to follow a predefined trajectory. Therefore, a

M. A. B. A. Wahed · M. R. Arshad (✉)
Underwater, Control and Robotics Group, School of Electrical and Electronic Engineering, Universiti Sains Malaysia, 14300 Nibong Tebal, Pulau Pinang, Malaysia
e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2021
Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_1


tracking control system is required to allow the AUV to overcome the limitations of its propulsion system. Furthermore, the performance of the µAUV is adversely affected by unpredictable disturbances in the underwater environment.

A precise mathematical representation of an Autonomous Underwater Vehicle (AUV) is very hard to obtain, and this makes the control problem of underwater robots even more challenging. The hydrodynamic parameters arising from the interaction between the vehicle and the fluid are difficult to obtain with reasonable accuracy because they vary with the maneuvering conditions. Therefore, a robust control technique that does not require a complete mathematical representation is needed to reduce the effects of external disturbances on the system behavior of the AUV.

Sliding Mode Control (SMC) has been used by many researchers due to its robustness and is among the most powerful robust control techniques. The SMC technique alters the dynamics of the underwater vehicle by applying a discontinuous control signal. The control signal guides and maintains the trajectory of the system state error towards a specified surface called the sliding surface [2]. However, because of the frequent switching, a chattering phenomenon occurs in the control input of SMC. Chattering has to be avoided because it causes high thruster wear and degrades system performance. To avoid chattering, the dynamics in a small vicinity of the discontinuity surface need to be altered by using a smoothing function such as the saturation function or the hyperbolic tangent function [3, 4]. Unfortunately, accuracy and robustness are then partially lost, as convergence is only ensured to a boundary layer of the sliding surface. To overcome the chattering effect, second order SMC controllers have been proposed [5, 6]. No smoothing function is required by a second order SMC controller to produce a continuous control signal, and this allows finite-time convergence to zero of the first time derivative of the sliding surfaces. However, a second order SMC controller takes a longer time for its error to converge to zero.

Another robust control technique used in the underwater environment is Time Delay Control (TDC), which is a relatively new technique. It assumes that, over a sufficiently short time, a continuous signal remains approximately the same; therefore, past observations of uncertainties and disturbances can be used directly in the controller. Even in the presence of sensor noise and ocean current disturbances, good performance is achievable with a TDC controller [7, 8]. In general, a TDC controller consists of a time delay estimator and a linear controller. However, the introduced delay prevents the TDC controller from eliminating the estimation error that arises. To avoid critically affecting the stability and performance of the system, the feedback data acquisition rate has to be fast in order to shorten the delay time.

In this paper, the position of the AUV is controlled using a time-invariant tracking control method based on a robust filter approach. First proposed by [9], this approach achieves robustness against parameter uncertainties, model nonlinearities, and unexpected external disturbances with only inertia matrix information. The controller [10, 11] consists of a nominal controller and a robust compensator.


This paper contains six sections. Section 1 introduces the research background, Sect. 2 presents the µAUV dynamic model, and Sect. 3 presents the control objectives. Section 4 presents the proposed robust tracking control design, Sect. 5 discusses the simulation results, and finally Sect. 6 concludes this paper.

2 Mathematical Modeling of µAUV

Before defining the model, the reference frames need to be defined. An AUV is best described as a nonlinear system; thus, two reference frames are considered: the Earth-fixed frame and the Body-fixed frame. The standard notation of the Society of Naval Architects and Marine Engineers (SNAME) is used in this paper for easier understanding. Figure 1 shows the defined reference frames. The Earth-fixed frame has its x-axis and y-axis pointing towards the North and the East respectively, while its z-axis points downwards, normal to the surface of the earth. The Body-fixed frame, on the other hand, has its origin coinciding with the center of gravity of the AUV.

In this paper, the AUV is assumed to be moving only at a certain depth and to be passively stable in the roll direction. Therefore, all corresponding elements are neglected during the derivation of the dynamic equations. The nonlinear equations of motion in the Body-fixed frame are expressed in a vectorial setting as shown in (1)–(6), where v represents the vector of linear and angular velocities expressed in the Body-fixed frame, the rigid-body system inertia matrix is represented by M_RB, the added mass system inertia matrix by M_A, the linear and quadratic hydrodynamic damping matrices by D_L and D_Q respectively, the lift matrix by L, and the vector of Body-fixed forces from the actuators by τ. For simplicity, the lift term is assumed to be an input.

(M_RB + M_A) v̇ + (D_L + D_Q |v|) v = τ + L |v| v    (1)

v = [u  v  r]^T    (2)

M_RB = diag[m  m  I_z]    (3)

M_A = diag[M_Au  M_Av  M_Ar]    (4)

D_L = diag[D_Lu  D_Lv  D_Lr]    (5)

D_Q = diag[D_Qu  D_Qv  D_Qr]    (6)

Fig. 1 Defined Earth-fixed frame and Body-fixed frame

Body-fixed linear and angular velocities can be conveyed in the Earth-fixed frame using the Euler angle transformation shown in (7)–(9), where η represents the vector of position and attitude expressed in the Earth-fixed frame and J(ψ) represents the Jacobian matrix.

η̇ = J(ψ) v    (7)

η = [x  y  ψ]^T    (8)

J(ψ) = [cos ψ  −sin ψ  0;  sin ψ  cos ψ  0;  0  0  1]    (9)
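The model (1) together with the kinematics (7) can be sketched as a discrete-time simulation step. The lift term is dropped here, since the paper treats it as an input, and all numerical parameter values below are illustrative assumptions rather than the identified values of [1]:

```python
import numpy as np

# Illustrative parameters (assumed, not the identified values of [1])
m, Iz = 6.0, 0.05                    # mass [kg] and yaw inertia [kg m^2]
M_RB = np.diag([m, m, Iz])           # rigid-body inertia, Eq. (3)
M_A = np.diag([1.0, 2.0, 0.02])      # added mass, Eq. (4)
D_L = np.diag([2.0, 4.0, 0.1])       # linear damping, Eq. (5)
D_Q = np.diag([5.0, 10.0, 0.2])      # quadratic damping, Eq. (6)

def J(psi):
    """Jacobian matrix of Eq. (9)."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def step(eta, v, tau, dt=0.01):
    """One forward-Euler step of Eqs. (1) and (7), lift term omitted."""
    damping = (D_L + D_Q * np.abs(v)) @ v               # (D_L + D_Q|v|) v
    v_dot = np.linalg.solve(M_RB + M_A, tau - damping)  # Eq. (1)
    eta_dot = J(eta[2]) @ v                             # Eq. (7)
    return eta + dt * eta_dot, v + dt * v_dot

# One step from rest under a pure surge force
eta, v = step(np.zeros(3), np.zeros(3), np.array([1.0, 0.0, 0.0]))
```

Starting from rest, the first step builds surge velocity (at a rate set by the combined inertia m + M_Au) while the position is still unchanged, which is exactly what the Euler update of (1) and (7) predicts.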

3 Control Objectives

Before formulating the trajectory tracking control problem, the tracking error is first defined as shown in (10)–(11), where e represents the tracking error vector in the Earth-fixed frame and η_d the vector of desired position and orientation. Because the AUV is underactuated in the sway direction, the desired yaw angle has to depend on the desired velocities in the x and y directions as in (12).

e = η_d − η    (10)

e = [e_x  e_y  e_ψ]^T    (11)

ψ_d = tan⁻¹(ẏ_d / ẋ_d)    (12)

The first objective of this research is to design a controller for the underactuated AUV to track a predefined, time-varying trajectory in the horizontal plane. Using only force in the surge direction and moment in the yaw direction, the proposed controller should drive the tracking error of the underactuated AUV in the x, y and ψ directions to zero. The second objective is to design a robust filter to compensate for the effect of the unknown hydrodynamic parameters on the AUV, since the complete mathematical representation of the AUV is not available.

4 Robust Tracking Control Design

This section presents the design of the proposed tracking control of the underactuated AUV in the horizontal plane using the robust filter approach. Figure 2 shows the block diagram of the proposed controller. There are three steps in designing the proposed controller.

First, the tracking error has to be transformed so that it can be driven to zero using only force in the surge direction and moment in the yaw direction. The Earth-fixed tracking error vector of (10) is transformed into the introduced error vector in the Body-fixed frame as shown in (13)–(16), where a is a positive constant related to the converging rate of y_e.

η_e = [x_e  y_e  ψ_e]^T    (13)

x_e = cos(ψ) e_x + sin(ψ) e_y    (14)

y_e = −sin(ψ) e_x + cos(ψ) e_y    (15)

ψ_e = e_ψ + a y_e    (16)
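Equations (13)–(16) amount to rotating the Earth-fixed position error into the body frame and augmenting the heading error with the cross-track term a·y_e. A minimal sketch (the function name is ours):

```python
import numpy as np

def body_frame_error(eta, eta_d, a=1.0):
    """Map the Earth-fixed error of Eqs. (10)-(11) to the introduced
    Body-fixed error of Eqs. (13)-(16)."""
    ex, ey, epsi = eta_d - eta                  # Eq. (10)
    psi = eta[2]
    xe = np.cos(psi) * ex + np.sin(psi) * ey    # Eq. (14)
    ye = -np.sin(psi) * ex + np.cos(psi) * ey   # Eq. (15)
    psie = epsi + a * ye                        # Eq. (16)
    return np.array([xe, ye, psie])             # Eq. (13)

# With zero heading the rotation is the identity, so the introduced
# heading error is simply the heading error plus a * e_y:
e_body = body_frame_error(np.zeros(3), np.array([1.0, 2.0, 0.0]), a=1.0)
```

The coupling term a·y_e is what lets the yaw moment work off the lateral error: turning toward the desired path reduces y_e even though no sway force is available.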

The second step is to design a robust filter to compensate for the effect of the added mass and hydrodynamic damping forces on the AUV system, as used by [12]. Since the complete mathematical representation of the AUV is unknown, an artificial equivalent-disturbance signal q, as shown in (17) and (19), which represents the effect of the added mass and damping forces on the AUV system, is introduced. This equivalent signal is then compensated by the compensating signal of (18), produced by a unity-gain low-pass filter. F_LP represents the low-pass filter, with f_s and f_l being the two positive constants related to the undamped natural frequency of the filter.

M_RB v̇ + q = τ    (17)

u_R = F_LP q    (18)

q = τ − M_RB v̇    (19)

F_LP(s) = diag[ f_l f_s / ((s + f_l)(s + f_s))   0   f_l f_s / ((s + f_l)(s + f_s)) ]    (20)

Fig. 2 Block diagram of the proposed controller
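In discrete time, (17)–(20) can be realized by estimating v̇ with a finite difference and passing q through two cascaded first-order sections, which together give the two-pole, unity-DC-gain filter of (20). The forward-Euler discretization, the class structure, and the sampling time below are our assumptions; the paper itself implements the scheme in Simulink:

```python
import numpy as np

class RobustCompensator:
    """Discrete-time sketch of the robust compensator of Eqs. (17)-(20):
    q = tau - M_RB @ v_dot (Eq. (19)), with v_dot estimated by finite
    difference, then filtered by two cascaded first-order sections so
    that u_R = F_LP q (Eq. (18)). The sway channel is zeroed, matching
    the zero entry of Eq. (20)."""

    def __init__(self, M_RB, fl=8.0, fs=2.0, dt=0.01):
        self.M_RB, self.fl, self.fs, self.dt = M_RB, fl, fs, dt
        self.v_prev = np.zeros(3)
        self.z1 = np.zeros(3)   # output of the fl / (s + fl) section
        self.z2 = np.zeros(3)   # output of the fs / (s + fs) section

    def update(self, tau, v):
        v_dot = (v - self.v_prev) / self.dt   # finite-difference estimate
        self.v_prev = v.copy()
        q = tau - self.M_RB @ v_dot                          # Eq. (19)
        self.z1 = self.z1 + self.dt * self.fl * (q - self.z1)
        self.z2 = self.z2 + self.dt * self.fs * (self.z1 - self.z2)
        u_R = self.z2.copy()
        u_R[1] = 0.0                          # no actuator in sway
        return u_R

# A constant equivalent disturbance is reproduced at DC (unity filter gain):
comp = RobustCompensator(np.diag([6.0, 6.0, 0.05]), fl=8.0, fs=2.0)
u_R = np.zeros(3)
for _ in range(5000):                         # 50 s >> filter time constants
    u_R = comp.update(np.array([1.0, 1.0, 1.0]), np.zeros(3))
```

Because F_LP has unity gain at DC, any slowly varying q (added mass and damping effects) is passed through and fed back into the input, while high-frequency content such as measurement noise is attenuated.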

The final step is to design a nominal controller to introduce the desired error dynamics into the AUV system. The nominal control signal, which is similar to a PD controller, is shown in (21), where K_D and K_P represent the derivative and proportional gain matrices respectively. With suitable derivative and proportional gains, the predefined error dynamics shown in (22) converge the introduced tracking error to zero.

u_N = M_RB (K_D η̇_e + K_P η_e)    (21)

η̈_e + K_D η̇_e + K_P η_e = 0    (22)

In the proposed controller, the two inputs from the robust compensator and the nominal controller are combined as shown in (23), where u_R is the robust compensating signal and u_N is the nominal control signal.

τ = u_R + u_N    (23)
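The nominal law (21) and the combined input (23) are then only a few lines. The gains below follow (24)–(25); the rigid-body inertia values (in particular I_z) are assumed for illustration:

```python
import numpy as np

def nominal_control(M_RB, eta_e, eta_e_dot, KP, KD):
    """Nominal control signal of Eq. (21), imposing the error
    dynamics of Eq. (22) on the introduced Body-fixed error."""
    return M_RB @ (KD @ eta_e_dot + KP @ eta_e)

KP = np.diag([0.2, 0.0, 0.89])     # Eq. (24); sway entry is zero
KD = np.diag([0.2, 0.0, 0.89])     # Eq. (25)
M_RB = np.diag([6.0, 6.0, 0.05])   # rigid-body inertia (I_z assumed)

u_N = nominal_control(M_RB, np.array([1.0, 0.5, 0.1]), np.zeros(3), KP, KD)
# Eq. (23): tau = u_R + u_N, with u_R supplied by the robust compensator
```

Because the sway gains are zero, u_N never commands a sway force, consistent with the underactuation of the vehicle.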

5 Simulations

For simulation, Simulink™ is used to verify the performance of the proposed controller. The AUV parameters derived in (1) are based on the parameters presented in [1], while the control parameter values are shown in (24)–(27).

K_P = diag[0.2  0  0.89]    (24)

K_D = diag[0.2  0  0.89]    (25)

f_l = 8    (26)

f_s = 2    (27)

Simulation 1 is performed to test the performance of the proposed controller on a straight-line trajectory with constant velocity. The parameter values used are


Table 1 Straight-line trajectory with constant velocity simulation parameters

Desired trajectory: η_d = [0.2t  0.5  0]^T
Initial position in y direction: e(0) = [0  0.5  0]^T
Initial velocity in x direction: ė(0) = [0.2  0  0]^T
Positive constant related to converging rate of y_e: a = 1

Fig. 3 Position response of straight-line trajectory tracking

shown in Table 1, and the results are shown in Figs. 3, 4 and 5. At a constant velocity, the controller is able to track a straight-line trajectory and converge the initial error in the y direction to zero within 30 s. Next, simulation 2 is performed to compare the capabilities of the proposed controller on a sinusoidal desired trajectory against the Model Free High Order Sliding Mode Control (MFHOSMC) controller designed by [6]. The parameter values used are shown in Table 2. From Fig. 6, both controllers are able to achieve a path similar to the desired path. In Fig. 7, the tracking error reaches steady state in 22 s for the proposed controller, while the MFHOSMC controller requires 25 s. Finally, Fig. 8 compares the time for the controllers to reach steady state in the y direction, with the tracking error of the proposed controller bounded within 2 × 10⁻³ while that of the MFHOSMC controller is bounded within 20 × 10⁻³. The tracking error is larger in the y direction because there is no actuator in the y direction.
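Putting the pieces together, the straight-line run of Table 1 can be approximated in a few dozen lines. The plant parameters below (added mass, damping, yaw inertia) are illustrative stand-ins for the identified model of [1], so the transient will differ from Figs. 3, 4 and 5; the control structure, gains and filter constants follow (13)–(27):

```python
import numpy as np

# Controller knowledge: rigid-body inertia only (I_z assumed)
M_RB = np.diag([6.0, 6.0, 0.05])
KP = np.diag([0.2, 0.0, 0.89])
KD = np.diag([0.2, 0.0, 0.89])
fl, fs, a, dt = 8.0, 2.0, 1.0, 0.01

# "True" plant: added mass and damping are unknown to the controller
# (illustrative values, not those of [1])
M_A = np.diag([1.0, 2.0, 0.02])
D_L = np.diag([2.0, 4.0, 0.1])

def J(psi):
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

eta, v = np.zeros(3), np.zeros(3)          # start at origin: e_y(0) = 0.5
tau, v_prev = np.zeros(3), np.zeros(3)
z1, z2 = np.zeros(3), np.zeros(3)          # robust-filter states
ge_prev = None

for k in range(6000):                      # 60 s
    t = k * dt
    eta_d = np.array([0.2 * t, 0.5, 0.0])  # Table 1: straight line y = 0.5
    e = eta_d - eta                        # Eq. (10)
    c, s = np.cos(eta[2]), np.sin(eta[2])
    ge = np.array([c * e[0] + s * e[1],    # Eq. (14)
                   -s * e[0] + c * e[1],   # Eq. (15)
                   0.0])
    ge[2] = e[2] + a * ge[1]               # Eq. (16)
    ge_dot = np.zeros(3) if ge_prev is None else (ge - ge_prev) / dt
    ge_prev = ge.copy()
    # robust compensator, Eqs. (17)-(20)
    q = tau - M_RB @ ((v - v_prev) / dt)
    v_prev = v.copy()
    z1 = z1 + dt * fl * (q - z1)
    z2 = z2 + dt * fs * (z1 - z2)
    u_R = z2.copy(); u_R[1] = 0.0
    # nominal controller, Eq. (21), and total input, Eq. (23)
    tau = u_R + M_RB @ (KD @ ge_dot + KP @ ge)
    # plant step: Eqs. (1) and (7), lift and quadratic damping omitted
    v = v + dt * np.linalg.solve(M_RB + M_A, tau - D_L @ v)
    eta = eta + dt * (J(eta[2]) @ v)

print(abs(eta[1] - 0.5))   # lateral error after 60 s
```

Even though the controller only knows M_RB, the robust filter absorbs the mismatch from the added mass and damping, and the lateral offset decays as the vehicle turns onto the line and moves forward, mirroring the behavior reported for simulation 1.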


Fig. 4 Tracking error in x direction of straight-line trajectory tracking

Fig. 5 Tracking error in y direction of straight-line trajectory tracking


Table 2 Sinusoidal trajectory tracking simulation parameters

Desired trajectory: η_d = [0.2t  sin(0.05t)  tan⁻¹(0.25 cos(0.05t))]^T
Initial position in y direction: e(0) = [0  0  0.25]^T
Initial velocity in x direction: ė(0) = [0.2  0.05  0]^T
Positive constant related to converging rate of y_e: a = 4

Fig. 6 Position response of sinusoidal trajectory tracking

Fig. 7 Tracking error in x direction of sinusoidal trajectory tracking


Fig. 8 Tracking error in y direction of sinusoidal trajectory tracking

6 Conclusions

This paper proposed an underwater tracking control method using a robust filter approach. With the proposed controller, the effect of external influences on the AUV's system behavior, subject to the constraint of not having a complete representation of the AUV system, has been minimized. Simulation results show that the proposed controller is able to track straight-line and sinusoidal trajectories with excellent performance.

Acknowledgements The authors would like to thank the RUI grant (Grant no.: 1001/PELECT/8014088) and Universiti Sains Malaysia for supporting the research.

References
1. Wahed MA, Arshad MR (2019) Modeling of torpedo-shaped micro autonomous underwater vehicle. Springer, Singapore
2. Shtessel Y, Edwards C, Fridman L, Levant A (2014) Sliding mode control and observation. Springer, New York
3. Guo J, Chiu FC, Huang CC (2003) Design of a sliding mode fuzzy controller for the guidance and control of an autonomous underwater vehicle. Ocean Eng 30(16):2137–2155
4. Hoang NQ, Kreuzer E (2008) A robust adaptive sliding mode controller for remotely operated vehicles. Tech Mech 28(3–4):185–193
5. Deng CN, Ge T (2013) Depth and heading control of a two DOF underwater system using a model-free high order sliding controller with transient process. In: Proceedings of 2013 5th International Conference on Measuring Technology and Mechatronics Automation, ICMTMA 2013, pp 423–426
6. García-Valdovinos LG, Salgado-Jiménez T, Bandala-Sánchez M, Nava-Balanzar L, Hernández-Alvarado R, Cruz-Ledesma JA (2014) Modelling, design and robust control of a remotely operated underwater vehicle. Int J Adv Robot Syst 11(1):1–16
7. Prasanth Kumar R, Dasgupta A, Kumar CS (2007) Robust trajectory control of underwater vehicles using time delay control law. Ocean Eng 34(5–6):842–849
8. Park JY, Cho BH, Lee JK (2009) Trajectory-tracking control of underwater inspection robot for nuclear reactor internals using time delay control. Nucl Eng Des 239(11):2543–2550
9. Zhong YS (2002) Robust output tracking control of SISO plants with multiple operating points and with parametric and unstructured uncertainties. Int J Control 75(4):219–241
10. Gilbert S, Varghese E (2017) Design and simulation of robust filter for tracking control of quadcopter system. In: 2017 International Conference on Circuit, Power and Computing Technologies, ICCPCT, Kollam, pp 1–7
11. Yu Y, Zhong YS (2008) Robust tracking control for a 3DOF helicopter with multi-operation points. In: Proceedings 27th Chinese Control Conference, CCC, pp 733–737
12. Song YS, Arshad MR (2016) Tracking control design for autonomous underwater vehicle using robust filter approach. In: Autonomous Underwater Vehicles 2016, AUV 2016, pp 374–380

Design and Development of Remotely Operated Pipeline Inspection Robot Mohd Shahrieel Mohd Aras, Zainah Md Zain, Aliff Farhan Kamaruzaman, Mohd Zamzuri Ab Rashid, Azhar Ahmad, Hairol Nizam Mohd Shah, Mohd Zaidi Mohd Tumari, Alias Khamis, Fadilah Ab Azis, and Fariz Ali@Ibrahim

Abstract A Pipeline Inspection Robot (PIR) is a type of mobile robot that is operated remotely or autonomously with little to no human intervention, inspecting various parts of a pipeline system and even cleaning the inner walls of the pipelines using integrated programs. The development and application of PIRs specifically for monitoring pipeline systems are still not widely studied and applied, although Malaysia is a nation that is developing rapidly in the industrial fields. The proposed PIR can help in monitoring and inspecting pipes with diameters ranging from 215 to 280 mm in locations that are impossible for humans to reach and hazardous to human life. In addition, the PIR is needed to make inspection operations easier and to save working time. This project focuses on the design and development of a suitable PIR for pipeline system monitoring. The PIR is designed using the SolidWorks software, in which several simulations such as stress and strain analysis are conducted. The PIR is fabricated from aluminium and uses an adaptive mechanism structure which allows the robot to adapt to changing pipe diameters. Moreover, the PIR is controlled by a microcontroller. Experiments are performed to verify the robot's performance, such as its ability to adapt inside the pipeline. The results show that the PIR has an average speed of 0.0096 m/s and can move accurately straight in the pipeline.

Keywords Pipeline Inspection Robot · SolidWorks design · Performance analysis

M. S. Mohd Aras (&)  A. F. Kamaruzaman  M. Z. Ab Rashid  H. N. Mohd Shah  A. Khamis  F. Ab Azis  F. Ali@Ibrahim Underwater Technology Research Group (UTeRG), Centre for Robotics and Industrial Automation (CERIA), Fakulti Kejuruteraan Elektrik, Universiti Teknikal Malaysia Melaka, 76100 Durian Tunggal, Melaka, Malaysia e-mail: [email protected] Z. Md Zain Robotics & Unmanned Systems (RUS) Research Group, Faculty of Electrical and Electronics Engineering, Universiti Malaysia Pahang, 26600 Pekan, Pahang, Malaysia A. Ahmad  M. Z. Mohd Tumari Fakulti Teknologi Kejuruteraan Elektrik dan Elektronik, Universiti Teknikal Malaysia Melaka, 76100 Durian Tunggal, Melaka, Malaysia © Springer Nature Singapore Pte Ltd. 2021 Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_2


M. S. Mohd Aras et al.

1 Introduction

The Pipeline Inspection Robot is a mobile robot that is equipped with a camera and specifically used to inspect various parts of pipeline systems. PIRs are widely used in water supply, petrochemical and other industries working on fluid transportation [1–3]. Pipelines are crucial equipment for transporting fuel oils and gas, delivering drinking water and transferring pollutants [4]. Piping networks suffer from problems such as corrosion, aging, cracks and mechanical abrasion. Hence, constant inspection, maintenance and repair are needed [5]. Pipeline inspection robots are utilized to investigate internal disintegration, fractures and defects, which are mainly due to causes such as corrosion, degradation and overheating [6]. With decades of enormous developments in the robotics field, pipeline robots now have numerous designs such as the wheel type, caterpillar type, wall-press type, legged type, inchworm type and screw type [2]. In this project, a PIR is designed and developed using the SolidWorks software; the design is specifically intended for a straight pipeline system and can adapt to various pipeline diameters. The PIR is programmed with a microcontroller, the Arduino Mega2560. The performance of the PIR is assessed based on its ability to move in pipelines of various diameters and its ability to inspect the pipelines. The aim of this project is to design and develop the PIR using the SolidWorks software, fabricate the robot and analyze its performance. The goal is a PIR that is not too complex, low cost, able to adapt to various pipelines and multifunctional. The performance of other, more complex types of robot is also detailed in this project. Pipelines are generally used to transport fluid from place to place.
The usage and application of pipelines across Malaysian industries are growing massively [7]. Several organizations are well known in the pipeline industries, namely Lembaga Air Sarawak, Telekom Malaysia, Petronas and Indah Water. As an example, Petronas itself is responsible for operating some 2500 km of gas transmission pipeline in the country [8]. Nowadays, modern housing and town planning in Malaysia mostly use centralized sewage systems. With the utilization of the new sewage systems, all houses' pipelines will be connected to one station for each district, and an even larger network of pipelines will eventually be constructed. These pipelines will require constant maintenance, and pipeline repair technology has become more vital [9]. There have been a series of accidents involving pipelines throughout the years. As claimed by Carl Weimer [10], the executive director of the Pipeline Safety Trust, 135 excavation accidents involving pipelines transporting dangerous chemicals such as crude oil and petroleum have occurred over the last 10 years, roughly one incident every month. Apart from that, on the 31st of July 2014, a series of gas explosions occurred in the Cianjhen and Lingya


districts of Kaohsiung, Taiwan. Earlier that evening, there were reports of gas leaks and, after the blasts, thirty-two people were killed and 321 others were wounded [11]. More recently, on the 1st of August this year, another natural gas pipeline explosion occurred in Midland County, Texas; five people were sent to hospital with critical burn injuries. The cause of the explosions was unknown, officials said [12].

2 Methodology

The whole system has been constructed as shown in Fig. 1. The control module consists of the controller that is wired to the Arduino Mega2560. The inspection module consists of the pan-and-tilt CCD camera attached to a servo motor and the computer used to obtain real-time images or video recordings for pipe inspection. Next, the moving part module consists of the motor driver, 12 V DC motor, gears and wheels. The whole module is powered by a power supply connected externally to the robot. The Pipeline Inspection Robot is shown in different planes of view in Fig. 2(a)–(d). The robot that has been designed can fit a pipe diameter ranging from 90 to 130 mm. This robot applies an adaptive mechanism in which spring tension acts as a passive support, enabling the robot to stay in contact with the pipe's inner walls. The designed robot has a length of 15 cm and the arms of the robot have a maximum reach of 130 mm. The most contracted and expanded states of the robot arm are shown in Fig. 2(e) and (f), respectively. The body tube of the designed robot, which acts as the main body, is used to store the electrical components. The designed robot uses stainless steel as the main material for most of its parts. Stainless steel has been chosen mainly for its ability to withstand corrosion and oxidation, as this robot is going to be used to inspect pipelines with various conditions. In addition, the front and the rear of the robot are each covered with a transparent acrylic plastic to protect the electrical components inside the body tube, especially the camera that is used for inspecting the pipelines.

3 Results and Discussions

The stress and strain analysis results for certain parts of the robot, obtained in the SolidWorks software, are shown in Fig. 3. All the parts are given the same amount of force, 100 N, and the same material, annealed stainless steel. The annealed stainless steel has a yield strength of 2.750e8 N/m². The maximum stress produced by the 100 N force on the body tube is 2.656e5 N/m², which is lower than the yield strength of the material. Therefore, the body tube is operating within safe limits because the maximum stress


Fig. 1 The block diagram of the pipeline inspection robot

is below the yield strength. As mentioned earlier, all the parts are given the same force and material: 100 N and annealed stainless steel. The robot part shown in Fig. 3 has a yield strength of 2.750e8 N/m² and the maximum stress produced by the 100 N force is 4.325e6 N/m², which is lower than the yield strength. Therefore, this part of the robot also operates within the safe limit. The same goes for the two other robot parts in Fig. 3; they operate within safe limits because the maximum stress is below the yield strength of the parts. The specifications and measurements of the fabricated robot are shown in Table 1. The differences between the designed and the fabricated Pipeline Inspection Robot are mainly in the adaptive mechanism linkage, which connects to the wheels of the robot. The changes were made because the measurements of the adaptive


Fig. 2 A view of the designed pipeline inspection robot using SolidWorks software

mechanism parts of the designed robot are too small and thus impossible to fabricate. The changes in the measurements led to an increase in the maximum and minimum extended-state diameters of the Pipeline Inspection Robot. Hence, pipes with bigger diameters are needed to analyze the performance of the fabricated Pipeline Inspection Robot. On the other hand, the changes in measurements also led to an increase in the robot's weight. The robot is


Fig. 3 The stress and strain analysis results on the certain parts of the PIR using SolidWorks software

Table 1 The specifications and measurements of the fabricated pipeline inspection robot

Length (mm): 150
Weight (kg): 2.2
Maximum adaptive diameter (mm): 280
Minimum adaptive diameter (mm): 215
Diameter without spring attached (mm): 200
Wheels diameter (mm): 30
Average speed (m/s): 0.0096

quite heavy, with a weight of 2.2 kg. The robot turned out heavier than expected after fabrication, and thus the DC motors used to drive it did not have enough power to move the robot sufficiently. The speed of the robot is rather slow, with an average of 0.0096 m/s. Thus, further modifications of the fabricated Pipeline Inspection Robot will be recommended in future works to improve the robot's driving speed. The materials used to make the Pipeline Inspection Robot parts are entirely aluminium. Aluminium has a very low specific weight, about one third that of iron. Hence, this decreases the robot's weight compared with using common metals. Furthermore, aluminium has a very high resistance against corrosion and oxidation, which suits the Pipeline Inspection Robot as it will travel inside pipelines with various conditions. Despite the beneficial properties of aluminium, the fabricated Pipeline Inspection Robot turned out quite heavy, and thus further research and development will be carried out in future works and studies. Next, the transparent body covers for the front and back of the Pipeline Inspection Robot could not be completed because of time constraints. The fabrication, modification and assembly of the robot took a tremendous amount of time. The designed body covers, made of acrylic plastic, are used to protect the electronic parts inside the body of the robot. They also protect the camera that will be placed inside the robot's body for inspection (Fig. 4). The experiment was prepared to analyze and observe the robot's average speed in a 320 mm long pipe with a diameter of 266 mm. Ten trials were done to test the robot's speed inside the pipe, and the time for the robot to move inside the


Fig. 4 A view of the fabricated Pipeline Inspection Robot

Table 2 The results of the pipeline inspection robot speed test

Trial / Time taken to move inside the pipeline (320 mm length, 266 mm diameter) (s)
1: 31
2: 33
3: 35
4: 31
5: 34
6: 33
7: 32
8: 35
9: 36
10: 32
Average time: 33.2 s
Average speed: 0.0096 m/s


pipe was recorded; the results are given in Table 2. The robot took an average of 33.2 s to reach the end of the 320 mm long pipe, giving an average speed of 0.0096 m/s. The robot's speed can be further improved with proper modifications in future works.
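The averages in Table 2 follow directly from the ten trial times; a minimal sketch of the computation (trial times and pipe length taken from Table 2):

```python
# Average speed from the ten speed-test trials in Table 2.
trial_times_s = [31, 33, 35, 31, 34, 33, 32, 35, 36, 32]
pipe_length_m = 0.320  # 320 mm long test pipe

average_time_s = sum(trial_times_s) / len(trial_times_s)
average_speed_ms = pipe_length_m / average_time_s

print(round(average_time_s, 1))    # 33.2
print(round(average_speed_ms, 4))  # 0.0096
```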

4 Conclusion

The design of the Pipeline Inspection Robot with the stated specifications and features has been completed successfully. The fabrication of the robot was also a success, although a few modifications were made to the measurements and specifications of the PIR. The performance of the PIR in terms of flexibility can be further analyzed after proper modifications to the robot. Throughout the fabrication process, a few changes in measurements were made because some parts were too small to be fabricated. These changes were carefully made and the robot was fabricated successfully. There was one unexpected result after fabrication: the weight of the robot was higher than expected, which affected its speed. There are many ways to improve the Pipeline Inspection Robot in terms of its performance and design, and these future works are needed to further develop it.

Acknowledgements The authors would like to thank Universiti Malaysia Pahang for the provision of the PJP grant (RDU170366). Special appreciation and gratitude go to the Centre of Research and Innovation Management (CRIM) and the Centre for Robotics and Industrial Automation (CERIA) for supporting this research, and to the Faculty of Electrical Engineering, UTeM, for supporting this research under PJP (PJP/2019/FKE(3C)/S01667).

References
1. Harish P, Venkateswarlu V (2013) Design and motion planning of indoor pipeline inspection robot. Int J Innov Technol Explor Eng 3(7):41–47
2. Bhadoriya AVS, Gupta VK, Mukherjee S (2018) Development of in-pipe inspection robot. Mater Today Proc 5(9):20769–20776
3. Nayak A, Pradhan SK (2014) Design of a new in-pipe inspection robot. Procedia Eng 97:2081–2091
4. Lee D, Park J, Hyun D, Yook G, Yang HS (2012) Novel mechanisms and simple locomotion strategies for an in-pipe robot that can inspect various pipe types. Mech Mach Theory 56:52–68
5. Roh SG, Choi HR (2005) Differential-drive in-pipe robot for moving inside urban gas pipelines. IEEE Trans Robot 21(1):1–17
6. Roslin NS, Anuar A, Jalal MFA, Sahari KSM (2012) A review: hybrid locomotion of in-pipe inspection robot. Procedia Eng 41:1456–1462
7. Abidin ASZ (2015) Development of track wheel for in-pipe robot application. Procedia Comput Sci 76:500–505


8. Bujang AS, Bern CJ, Brumm TJ (2016) Summary of energy demand and renewable energy policies in Malaysia. Renew Sustain Energy Rev 53:1459–1467
9. Enner F, Rollinson D, Choset H (2013) Motion estimation of snake robots in straight pipes. In: Proceedings of IEEE International Conference on Robotics and Automation, Germany, pp 5168–5173
10. How often do pipelines blow up? https://money.cnn.com/2016/11/01/news/pipelinefatalities/index.html. Accessed 25 May 2019
11. Multiple gas explosions rock Kaohsiung streets. http://focustaiwan.tw/news/asoc/201408010001.aspx. Accessed 25 May 2019
12. Natural gas pipeline explosions in Texas critically injure 5 workers. https://www.huffpost.com/entry/natural-gas-pipeline-explosionstexas_n_5b62964be4b0fd5c73d62c97. Accessed 25 May 2019

Vision Optimization for Altitude Control and Object Tracking Control of an Autonomous Underwater Vehicle (AUV) Joe Siang Keek, Mohd Shahrieel Mohd Aras, Zainah Md. Zain, Mohd Bazli Bahar, Ser Lee Loh, and Shin Horng Chong

Abstract Underwater vision is very different from atmospheric vision, in that the former is subject to a dynamic and visually noisy environment. Absorption of light by the water and rippling waves caused by atmospheric wind result in uncertain refraction of light in the underwater environment, continuously disturbing the visual data collected. Therefore, it is always a challenging task to obtain reliable visual data for the control of an autonomous underwater vehicle (AUV). In this paper, an AUV was developed and is tasked to perform altitude control and object (poles) tracking control in a swimming pool using merely a forward-viewing camera and a convex mirror. Prior to the design and development of the control system for the AUV, this paper focuses only on utilizing and optimizing the visual data acquired. The processing involves only gray-scale images, without any common color restoration or image enhancement techniques. In fact, the image processing technique implemented for object tracking control in this paper contains a self-optimizing algorithm, which improves object detection. The result shows that under the same challenging and dynamic underwater environment, detection with optimization is 80% more successful than without it.



Keywords Vision optimization · Altitude control · Object tracking control · Autonomous underwater vehicle

J. S. Keek  M. S. Mohd Aras (&)  M. B. Bahar  S. L. Loh  S. H. Chong Faculty of Electrical Engineering, Universiti Teknikal Malaysia Melaka, Jalan Hang Tuah Jaya, 76100 Durian Tunggal, Melaka, Malaysia e-mail: [email protected] Z. Md. Zain Robotics and Unmanned Systems (RUS) Research Group, Faculty of Electrical and Electronics Engineering, Universiti Malaysia Pahang, 26600 Pekan, Pahang, Malaysia © Springer Nature Singapore Pte Ltd. 2021 Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_3


J. S. Keek et al.

1 Introduction

Besides the universe up in the sky and beyond, the underwater world is another universe that has always been on mankind's to-explore list throughout the past decades. While mankind has already observed millions of light years into the universe, the exploration of the underwater world remains incomplete even though it is just a few kilometers deep. The main reason for this is the medium of the underwater environment: water not only hinders the transmission of radio frequency (RF) signals, it also refracts and absorbs penetrating visible light, causing the exploration of the underwater world to encounter various difficulties, even in shallow water. As vision is one of the most informative sources of feedback sensing, losing such capability means a 'handicapped' autonomous underwater vehicle (AUV). Therefore, exploration of the underwater world without vision is not preferable. In an underwater environment, visible light is refracted. Even worse, a gust of wind can easily create ripples, causing the refraction to be varying and uncertain. Therefore, the light reflected from an underwater object may show dynamic reflections and patches over time. Moreover, water tends to absorb red and green light, leaving a multicolored object with only its blue component. The image taken under water is therefore very different from an image taken on the ground, and additional image processing techniques are mandatory. Existing conventional image processing techniques for ground images are mature and common; however, when it comes to underwater images, these techniques may be inadequate. Therefore, various additional image processing techniques for underwater images are developed and formulated from time to time. As mentioned earlier, rippling water waves cause the underwater image to contain noise and disturbance.
Image transformation techniques such as wavelet, curvelet and contourlet are promising in overcoming such circumstances [1]. Meanwhile, as water tends to absorb all spectra of visible light except blue, efforts such as color restoration and correction have been proposed for acoustic underwater images with heuristic algorithms [2, 3]. Occasionally, working with color images can be easier for feature extraction and object recognition, but it is about three times more computationally demanding than working with gray-scale images. Zhang et al. proposed an implementation of Particle Swarm Optimization (PSO) to optimize the gray-scale tuning parameter, with the objective of achieving lower computational cost while retaining decent accuracy of object recognition and detection [4]. As working with color restoration or correction techniques may add complexity to the image processing, and color images involve higher computational cost as well, in this project gray-scale underwater images are adopted; however, unlike [4–7], a more complicated object is used for detection and a simple self-tuning algorithm is implemented to cope with the dynamic underwater environment. The final result displays a more robust detection of the object assigned and deployed. This paper is organized as follows. Section 2 describes the hardware


and experimental setup of the AUV developed. Section 3 presents the image processing techniques used in this paper. Section 4 presents and discusses the experimental results, and finally Sect. 5 concludes the paper.

2 Hardware and Experimental Setups

The autonomous underwater vehicle (AUV) developed in this paper is equipped with a forward-looking Raspberry Pi camera module and is tasked to acquire altitude and object location data. In order to fulfill these criteria concurrently, and instead of using two cameras (one looking forward and one looking downward), a convex mirror is used. The resulting forward-looking raw image is shown in Fig. 1. The convex mirror is actually a blind-spot mirror intended for the side mirrors of a car. The advantage of such a mirror is that it produces a wider field of view. Based on Fig. 1, the areas (sizes) of the tiles spotted in the mirror are computed and used to determine the immediate altitude of the AUV. The benefit of this hardware setup is that both altitude and object detection data can be acquired concurrently using merely one camera. Moreover, the image can be segmented into two smaller regions of interest (ROI) for simultaneous processing, saving a substantial amount of computational power and time. Next, the detail of the poles is illustrated in Fig. 2.

Fig. 1 Forward view from the perspective of the AUV in a swimming pool


Fig. 2 Illustration and detail of the object (poles) used

Overall, the frames captured by the camera have a resolution of 640 × 480 pixels at a frame rate of 10 frames per second (fps). Although the poles are painted bright orange, in Fig. 1 they appear to have dark surfaces and the overall image is blueish. Such properties vary from time to time and from position to position. Therefore, a self-tuning image processing technique, presented in an upcoming section, is implemented to cope with such dynamicity.

3 Image Processing Technique

3.1 Data for Altitude Control

To efficiently acquire altitude data, the raw image or frame is first cropped to the region of interest (ROI), that is, where the mirror is located in the image. Since the mirror moves along with the AUV, the position of the mirror in the frame is constant and thus the parameters for the ROI can be pre-defined. Figure 3 depicts the cropped version of the raw image in Fig. 1. To ease computation, the segmented or cropped image is converted into a gray-scale image, whereby the intensity of each pixel ranges between 0 and 255. Next, a Gaussian blur with a 5 × 5 pixel kernel is applied to smoothen edges, followed by edge detection using the built-in Canny function from Python OpenCV. To enhance edges, morphological transformations are applied, whereby the


Fig. 3 ROI for altitude control

Fig. 4 Morphological transformed image

image is first dilated and then eroded; the result is shown in Fig. 4. At this stage, contours of the image can be easily obtained. The shape of each contour can be approximated using the Douglas-Peucker algorithm. A polygon with four vertices is detected as a quadrilateral, which denotes a tile of the swimming pool. Finally, the area of each detected quadrilateral (tile) is computed and collected, and the altitude of the AUV can be determined from the average value of these tile areas.
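The last step reduces to polygon areas and their mean; a minimal sketch in plain Python, where the vertex lists and the area-to-altitude mapping are hypothetical stand-ins for the approxPolyDP output and the calibration used on the real AUV:

```python
def polygon_area(vertices):
    """Area of a simple polygon via the shoelace formula (vertices in order)."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Hypothetical quadrilaterals (tiles) approximated from contours
tiles = [
    [(0, 0), (14, 0), (14, 15), (0, 15)],    # 14 x 15 tile, area 210
    [(20, 0), (33, 0), (33, 16), (20, 16)],  # 13 x 16 tile, area 208
]

areas = [polygon_area(t) for t in tiles]
mean_area = sum(areas) / len(areas)
# A larger mean tile area means the tiles appear bigger, i.e. the AUV is
# closer to the pool floor, so the mean area serves as an altitude proxy
# once calibrated.
print(mean_area)  # 209.0
```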


Fig. 5 ROI for object tracking control

3.2 Data for Object Tracking Control

In this subsection, the image processing technique for locating the targeted object, i.e. the poles, in the vision of the autonomous underwater vehicle (AUV) is presented. As mentioned earlier, due to the dynamic and noisy underwater environment, detecting the poles in the swimming pool requires a certain extent of adaptability. Therefore, a self-tuning algorithm is discussed in this subsection, whereby a parameter is optimized heuristically based on the fitness function designed and developed. First of all, as before, to minimize computational power as much as possible, only the region of interest (ROI) is cropped out for processing. The cropped image with the ROI is shown in Fig. 5. Then, the image is converted into gray-scale to further lighten the computation. Based on the image in Fig. 5, the poles clearly stand out from the environment from our perspective. Therefore, there is certainly a boundary value that can capture and detect the poles. Since the image is gray-scale, the lower boundary value is 0, whereas the upper boundary value, Uop, is the parameter to be optimized. Since the optimization does not involve a multidimensional search space or multiple variables, a simple optimization process is implemented: the value of Uop is increased with a step of 1 at each iteration. During each iteration, contours are computed, and all polygons with four vertices (quadrilaterals) are collected. The key to a successful and accurate detection of the poles is the reliability of the fitness function designed. The fitness function, in Python, is presented in Algorithm 1.


Algorithm 1: Fitness function for optimizing Uop.

     …
 1   if angles is not None and len(angles) == 2 \
 2           and abs(angles[0]) < 45 and abs(angles[1]) < 45:
 3       angleDiff = abs(round(angles[0]) - round(angles[1]))
 4   else:
 5       angleDiff = 90
 6   if len(widths) == 2 and len(areas) == 2 and angleDiff < 45:
 7       widthAreaRatio = []
 8       for i in range(2):
 9           widthAreaRatio.append(widths[i] / areas[i])
10       fitnessFunction = abs(widthAreaRatio[0] - widthAreaRatio[1])
11   else:
12       fitnessFunction = float('inf')
     …
13   costs.append(fitnessFunction)
     …
14   minimumCostLocation = costs.index(min(costs))
15   minimumCost = costs[minimumCostLocation]
16   optimalParameter = parameters[minimumCostLocation]

Intuitively, the characteristics of the object (poles) are used as the criteria to design the fitness function. Based on Fig. 5, the object is made up of two poles, and therefore in line 1 of Algorithm 1 the number of detected quadrilaterals allowed is equal to 2. Moreover, the poles are in an upright position and never horizontal; therefore, 'abs(angles[0])' and 'abs(angles[1])' only accept quadrilaterals that are angled within ±45°. Next, since the two poles are parallel to each other, their angle difference should be small; only an angle difference of less than 45° is allowed. Next, the width-area ratio is introduced, and the value returned by the fitness function is exactly the absolute difference of the width-area ratios of the two quadrilaterals (poles), as shown in line 10 of Algorithm 1. Intuitively, the two poles are identical and therefore have very similar widths. However, due to the cropped region shown in Fig. 5, one or both poles may occasionally be partially blocked, causing the areas of the poles obtained via image processing to differ significantly. Therefore, their widths are normalized by their respective areas for reasonable detection. Finally, all the fitness function values are compiled, and the value of Uop with the minimum cost is selected as the optimal parameter.
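The surrounding sweep that the fitness function plugs into can be sketched as follows; this is an illustration under stated assumptions, in which `detect_quadrilaterals` is a hypothetical stand-in for the thresholding-plus-contour step and the fitness is reduced to the width-area-ratio term:

```python
def fitness(quads):
    """Cost of a candidate threshold: absolute difference of the
    width/area ratios of the two detected poles (lower is better)."""
    if len(quads) != 2:
        return float('inf')  # exactly two upright poles expected
    ratios = [w / a for (w, a) in quads]  # quads: list of (width, area)
    return abs(ratios[0] - ratios[1])

def optimize_uop(detect_quadrilaterals, lo=0, hi=255):
    """Sweep the upper gray-level boundary Uop in steps of 1 and
    return the value with the minimum fitness cost."""
    params = list(range(lo, hi + 1))
    costs = [fitness(detect_quadrilaterals(uop)) for uop in params]
    best = costs.index(min(costs))
    return params[best], costs[best]

# Hypothetical detector: only thresholds near 100 isolate two similar poles
def fake_detector(uop):
    if 98 <= uop <= 102:
        return [(10.0, 200.0), (10.0, 205.0)]  # two nearly identical poles
    return [(10.0, 200.0)]  # otherwise only one quadrilateral found

best_uop, best_cost = optimize_uop(fake_detector)
print(best_uop)  # 98 (first threshold that yields two matching poles)
```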


J. S. Keek et al.

4 Experimental Result and Discussion

In this section, the results of the implemented image processing methods are presented and discussed. The autonomous underwater vehicle (AUV) was manually moved from one position to another to acquire raw image data, and 15 frames were selected to evaluate the performance of the proposed method. Table 1 presents the altitude data obtained experimentally. Based on Table 1, all 15 frames achieve successful detection of the tiles, even though the underwater environment is dynamic and sensitive to external disturbance. This is because, unlike the detection of the poles, detection of the tiles is simply easier. Moreover, the tiles are beneath the AUV, so noisy light refraction caused by rippling water waves does not affect the image significantly. Overall, the tile areas of each frame have a coefficient of variation (COV) of not more than 0.27, which indicates that the detection is reliable and consistent.

Next, the results for the detection of the poles are presented in Table 2, which compares experiments without and with optimization. Without optimization, the parameter Uop is fixed at 98 throughout all frames, whereas with optimization the value of Uop is dynamic and varies according to the immediate state and environment. The overall result shows that without the self-tuning algorithm only three frames, i.e. Frames 1, 6 and 7, successfully detect the poles, whereas with the self-tuning algorithm all frames attain successful detection. Note that the values of Uop vary without an incremental or decremental pattern, which reflects the uncertain, dynamic underwater environment. Meanwhile, the error, which is also the input for the system controller, denotes the horizontal distance between the center point of the frame (white dot) and the center point between the poles (black dot).
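As a quick check of the tabulated statistics, the coefficient of variation can be recomputed from the Frame 1 tile areas of Table 1; this reproduces the reported values if the sample (n-1) standard deviation is assumed (the paper does not state which estimator it uses):

```python
from statistics import mean, stdev

def coefficient_of_variation(samples):
    """COV = sample standard deviation / mean (dimensionless)."""
    return stdev(samples) / mean(samples)

# Frame 1 tile areas from Table 1 (pixel^2). Assuming the sample
# (n-1) standard deviation, this reproduces the tabulated mean of
# 188.5 and COV of 0.22.
areas_frame1 = [208.0, 210.0, 126.0, 210.0]
print(round(mean(areas_frame1), 1))                      # 188.5
print(round(coefficient_of_variation(areas_frame1), 2))  # 0.22
```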

Table 1 Altitude data (outcome images omitted)

| Frame No. | Areas (pixel²) | Coefficient of variation | Mean (pixel²) |
|---|---|---|---|
| 1 | 208.0, 210.0, 126.0, 210.0 | 0.22 | 188.5 |
| 2 | 126.0, 154.0 | 0.14 | 140.0 |
| 3 | 150.0, 224.0, 180.0 | 0.20 | 184.7 |

Vision Optimization for Altitude Control and Object Tracking …


Table 1 (continued)

| Frame No. | Areas (pixel²) | Coefficient of variation | Mean (pixel²) |
|---|---|---|---|
| 4 | 126.0, 264.0, 176.0, 164.5, 256.0, 180.0, 255.5 | 0.27 | 203.1 |
| 5 | 224.0, 224.2, 196.0, 250.7, 154.0, 335.8, 188.0 | 0.26 | 224.6 |
| 6 | 225.0, 225.0 | 0 | 225.0 |
| 7 | 296.1, 223.4, 255.0, 176.0, 176.0, 192.0 | 0.22 | 204.5 |
| 8 | 289.0, 256.0, 180.0, 180.0, 272.0, 210.0 | 0.18 | 232.9 |
| 9 | 126.0, 150.0, 150.0, 196.0, 130.0, 225.0, 165.0, 154.0, 255.0, 180.0 | 0.24 | 173.1 |
| 10 | 225.0, 165.0, 255.4, 180.0, 255.0, 221.0, 203.4 | 0.17 | 213.8 |


Table 2 Object detection data without and with the self-tuning algorithm (detection result images omitted)

| Frame No. | Error without self-tuning algorithm, Uop = 98 (pixels) | Error with self-tuning algorithm (pixels) | Uop |
|---|---|---|---|
| 1 | -15.66 | -15.66 | 98 |
| 2 | nil | 4.54 | 95 |
| 3 | nil | 7.38 | 89 |
| 4 | nil | 52.60 | 83 |
| 5 | nil | 40.43 | 76 |


Table 2 (continued)

| Frame No. | Error without self-tuning algorithm, Uop = 98 (pixels) | Error with self-tuning algorithm (pixels) | Uop |
|---|---|---|---|
| 6 | -4.24 | -4.24 | 79 |
| 7 | 22.01 | 23.06 | 69 |
| 8 | nil | -29.32 | 91 |
| 9 | nil | -74.97 | 83 |
| 10 | nil | -18.56 | 65 |

5 Conclusion and Future Work

The proposed method has successfully achieved robust data extraction for the purposes of future altitude control and object tracking control. A conclusion that can be drawn is that a self-tuning or self-optimizing algorithm is mandatory for dynamic circumstances such as the underwater environment. In future work, an optimization technique with better convergence time can be implemented to improve the proposed image processing technique. Moreover, more tuning


parameters can be introduced to improve the robustness and reliability of the detection.

Acknowledgements The authors would like to thank Universiti Malaysia Pahang for the provision of PJP grant (RDU170366) and the Ministry of Higher Education of Malaysia for the provision of FRGS grant (FRGS/2018/FKE-CeRIA/F00352).


Development of Autonomous Underwater Vehicle Equipped with Object Recognition and Tracking System Muhammad Haniff Abu Mangshor, Radzi Ambar, Herdawatie Abdul Kadir, Khalid Isa, Inani Yusra Amran, Abdul Aziz Abd Kadir, Nurul Syila Ibrahim, Chew Chang Choon, and Shinichi Sagara

Abstract The development and design of autonomous underwater vehicles (AUVs) provides unmanned, self-propelled vehicles that are typically deployed from a surface vessel and can operate independently for periods of a few hours to several days. This project discusses the development of an AUV equipped with an object recognition and tracking system. The motion of the AUV is controlled by two thrusters for horizontal motion and two thrusters for vertical motion. A Pixy CMUcam5 is used as the AUV's vision sensor to recognize an object through specific colour signatures. The camera recognizes an object through a colour-based filtering algorithm by calculating the hue and saturation of each red, green and blue (RGB) pixel derived from the built-in image sensor. When the camera recognizes an object, the AUV automatically tracks it without any operator. Preliminary underwater experiments have been carried out to test the AUV's ability to stay submerged as well as its ability to navigate and recognize objects underwater. Experiments have also been carried out to verify the effectiveness of the Pixy CMUcam5 in recognizing single and multiple objects underwater and then tracking the recognized object. This work reports findings that demonstrate the usefulness of the Pixy CMUcam5 in the development of the AUV.

Keywords: Autonomous underwater vehicle · Pixy CMUcam5 · Object recognition · Object tracking

M. H. Abu Mangshor  R. Ambar (&)  H. A. Kadir  K. Isa  I. Y. Amran  A. A. A. Kadir  N. S. Ibrahim  C. C. Choon Department of Electronic Engineering, Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, 86400 Parit Raja, Batu Pahat, Johor, Malaysia e-mail: [email protected] S. Sagara Department of Mechanical and Control Engineering, Kyushu Institute of Technology, Tobata, Kitakyushu 804-8550, Japan © Springer Nature Singapore Pte Ltd. 2021 Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_4


1 Introduction

An underwater vehicle is a robotic vehicle that travels underwater; such vehicles can be classified into manned and unmanned types. The manned variants include submarines and submersibles. A submarine is a ship that can be submerged and navigated underwater, with a streamlined hull intended for lengthy periods of operation in the ocean, fitted with a periscope and typically with torpedoes or rockets. Military submarines are typically used to protect aircraft carriers on the water surface, to attack other submarines and watercraft, to supply other submarines, to launch torpedoes and rockets, and to provide surveillance and protection against prospective attackers. A submarine differs from a submersible, which has limited underwater capability. Submersibles are used for various purposes, including deep-sea surveys, marine ecological assessment, natural marine resource harvesting, deep-sea exploration and marine exploration [1].

Unmanned underwater vehicles (UUVs), more often referred to as autonomous underwater vehicles (AUVs), are robots that travel underwater independently, without requiring any physical connection to an operator [2, 3]. AUVs are programmed at the surface and then navigate through the water on their own, collecting data as they go. AUVs can be preprogrammed with an assignment and a location; once the assignment is complete, the robot returns to its location. On the other hand, remotely operated vehicles (ROVs) are vehicles that operate underwater while being controlled by humans from a remote location using remote control devices [4–6]. A tether of wires connects the vehicle to a surface ship; these wires convey command and control signals between the operator and the ROV, enabling the vehicle to be remotely navigated. A ROV can include a video camera, lights, sonar systems and robotic arms.

The roles of UUVs such as ROVs and AUVs include mapping the seabed for the oil and gas industry, underwater observation, seabed exploration, underwater construction and subsea project maintenance, and underwater inspection and ship hull cleaning. ROVs are involved in collecting samples or manipulating the environment, while AUVs help to create detailed maps or measure water properties.

A vision system is a technology that enables a computer to recognize and evaluate images. It usually comprises digital camera hardware and back-end image processing software. The front camera of a robotic vehicle captures pictures of the surroundings or a targeted object and sends them to the processing system [7]. A vision system has the ability to recognize objects, places, people, writing and actions in images. Computers can use machine vision technologies in combination with a camera and artificial intelligence software to achieve image recognition. Image recognition is used to perform a large number of machine-based visual tasks, for example labelling the content of images with meta-tags, supporting self-driving vehicles and accident-avoidance systems, performing image content search, and controlling autonomous robots. Robotic vehicles are expected to simultaneously detect obstacles and recognize objects.


The technology is even capable of following objects. Applying a vision system to a robotic vehicle means giving it eyes to recognize an object. In this project, an autonomous underwater vehicle equipped with a vision system has been developed. The project proposes the design and development of an AUV that can navigate based on an object recognition and tracking system using a single camera. A Pixy CMUcam5 camera is used to recognize a target object and track its movements in an underwater environment.

This paper is organized as follows. Section 2 describes the detailed design of the proposed AUV, including the 3D model design and the actual AUV prototype design, which incorporates the vision sensor. Section 3 introduces the object recognition and tracking algorithm used in this work, followed by a brief conclusion and future recommendations in Sect. 4.

2 Methodology

2.1 AUV Design Process

Figure 1 shows the AUV design process. It can be classified into several stages. The main stage focuses on the design concept of the AUV which covers mechanical

Fig. 1 Process of design and construction of the proposed AUV


and electrical design. The next stages can be described in two sections: the first section is the development of the mechanical parts. Computer-aided software such as SketchUp is used to draw and animate the proposed AUV. The other subsections discuss the development of the internal and external electrical design of the AUV. The last stages are testing, fine tuning and minor upgrading tasks.

2.2 AUV Structure 3D Modelling

This subsection discusses the 3D design of the AUV. The actual structure of the AUV is developed based on this 3D design. Figure 2 shows the 3D design of the proposed AUV, modelled using SketchUp based on the actual size and dimensions of all the components used. Figure 3 shows various views of the 3D design, and Fig. 4 shows the main components of the proposed AUV.

2.3 Actual AUV Structure

Figure 5 shows various views of the completed AUV structure. The structure is composed of aluminium alloy struts, which are extremely tough, light-weight,

Fig. 2 3D design of the AUV structure


Fig. 3 Various views of the AUV's 3D design

Fig. 4 AUV main components


Fig. 5 AUV body structure with dimensions

corrosion resistant, and rust-proof. The aluminium alloy struts are easy to install and modify, making the structure flexible when fitting other components onto the AUV. The dimensions of the AUV are 65 cm in length, 24 cm in width and 24 cm in height, as shown in the figure. The process of cutting the metal must be precise to avoid difficulty during the buoyancy test. Each aluminium alloy strut is joined using a 90° L-shaped aluminium corner joint bracket tightened with button-head screws and ball nuts. The joints need to be completely tightened so that the AUV structure is strong enough to withstand external underwater forces. After all installation and testing were completed, the whole system was integrated and uploaded to the Arduino Mega microcontroller. All the electronic components were placed into the underwater compartment and the thrusters were mounted onto the AUV in order to test overall system functionality. Figure 6 shows the completed AUV including all peripherals such as thrusters and electronic circuitry.

2.4 Pixy CMUcam5 Installation

The Pixy CMUcam5 is placed inside a waterproof underwater compartment as shown in Fig. 7. The compartment has a dome end cap, which helps to provide a clear view of the underwater environment. The camera is positioned inside the compartment, at the dome end cap. A mounting bracket was designed and fabricated using a 3D printer to hold the camera inside the compartment. Figure 8a shows the mounting bracket for the Pixy CMUcam5; its dimensions are 8 cm in diameter with a thickness of 1 cm. Figure 8b shows the Pixy CMUcam5 attached inside the compartment using the mounting bracket.

Fig. 6 Various viewpoints of the completed AUV


Fig. 7 AUV’s waterproofed underwater compartment

Fig. 8 a Mounting bracket for Pixy CMUcam5, b Pixy CMUcam5 is attached inside the compartment using the mounting bracket

2.5 Object Recognition and Tracking System Using Single Camera

Object recognition using Pixy CMUcam5. In this work, a Pixy CMUcam5 is used as the vision sensor. Figure 9 shows a Pixy CMUcam5 connected to an Arduino Mega microcontroller. The Pixy CMUcam5 uses a colour-based filtering algorithm to recognize objects: it calculates the hue and saturation of each RGB pixel from the image sensor and uses these as the primary filtering parameters.
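To illustrate this kind of hue- and saturation-based filtering (a generic sketch, not Pixy's actual firmware; the signature bounds below are made-up example values), an RGB pixel can be tested against a colour signature as follows:

```python
import colorsys

def matches_signature(r, g, b, hue_range=(0.9, 1.0), min_sat=0.5):
    """Return True if an 8-bit RGB pixel falls inside a colour
    signature defined by a hue interval and a minimum saturation.

    Hue and saturation are largely invariant to brightness, which is
    why they are preferred over raw RGB values for object filtering.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    lo, hi = hue_range
    return lo <= h <= hi and s >= min_sat

# A saturated pink/red pixel matches the example signature;
# a grey pixel (zero saturation) does not.
print(matches_signature(230, 40, 90))    # True
print(matches_signature(128, 128, 128))  # False
```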


Fig. 9 Pixy CMUcam5 connected to the Arduino. As can be seen, the Pixy CMUcam5 is connected to the Arduino at the ICSP pins

The hue of an object remains largely unchanged with changes in lighting and exposure, changes which can otherwise have a frustrating effect on colour filtering algorithms. The Pixy CMUcam5 can recognize seven different colour signatures, find hundreds of objects at the same time, and process at 50 fps: it processes an entire 640 × 400 image frame every 1/50th of a second (20 ms), which means a complete update of all detected objects' positions every 20 ms. Pixy addresses these problems by pairing a powerful dedicated processor with the image sensor; it processes images from the image sensor and sends only the useful information to the microcontroller. Pixy can easily connect to many different controllers because it supports several interface options (UART serial, SPI, I2C, USB, or digital/analog output).

Object tracking using Pixy CMUcam5. The Pixy CMUcam5 is connected to an Arduino microcontroller to recognize and track an object. Figure 10 shows the flowchart of object tracking. The Pixy CMUcam5 finds the set signature colour using the object colour-based filtering algorithm. Once the Pixy CMUcam5 succeeds in recognizing the object, the AUV takes action to achieve the goal; otherwise, the AUV keeps acquiring images to recognize the target object. As the AUV nears the recognized object, the AUV stops moving.

Initially, the Pixy CMUcam5 was 'taught' to track an object. The PixyMon software is used to teach the AUV to recognize objects. This is done by holding the object in front of the lens while holding down the button located on top; while doing this, the RGB LED under the lens provides feedback regarding which object it is looking at. When tracking an object using PixyMon, the Pixy CMUcam5 determines object image resolutions under the same assumptions when trying to detect an object. Object tracking is implemented in the TrackBlock function, whose role is to keep following the object within a set area. It analyzes the image and identifies objects matching the colour characteristics of the object being tracked. It then reports the position, size and colour of all the detected objects back to the Arduino.
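The tracking behaviour described above can be modelled as a simple decision rule on the tracked block's horizontal position and an estimated distance (a hypothetical Python sketch, not the authors' TrackBlock code; the frame width corresponds to Pixy's 0–319 block x-range, while the deadband and the distance estimate are illustrative assumptions):

```python
FRAME_WIDTH = 320          # Pixy block x-coordinates span 0..319
CENTER = FRAME_WIDTH // 2
DEADBAND = 30              # pixels of tolerance around the centre
STOP_DISTANCE_CM = 10      # stop once the object is this close

def track_command(block_x, distance_cm):
    """Map a tracked block to a motion command for the AUV."""
    if distance_cm <= STOP_DISTANCE_CM:
        return "stop"
    error = block_x - CENTER
    if error < -DEADBAND:
        return "turn_left"    # object is left of centre
    if error > DEADBAND:
        return "turn_right"   # object is right of centre
    return "forward"          # roughly centred: close the distance

print(track_command(40, 20))    # object far left  -> turn_left
print(track_command(160, 20))   # centred          -> forward
print(track_command(160, 10))   # close enough     -> stop
```

The deadband prevents the thrusters from chattering between left and right commands when the object sits near the centre of the frame.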


Fig. 10 Flowchart of Object Tracking


2.6 AUV Circuit Design

Figure 11 shows the circuit design for the AUV, illustrated using the Fritzing software. As shown in the figure, the AUV utilizes an Arduino Mega microcontroller to control all peripherals. The circuit consists of one input and four outputs. The sole input is the Pixy CMUcam5, which connects to the Arduino's ICSP pins. The outputs are four T100 thrusters from BlueRobotics that perform up, down, right and left movements. To operate the thrusters, an 11 V power supply is needed. The thrusters are connected to electronic speed controllers (ESCs), which in turn connect to the Arduino Mega. Each ESC controls the speed of a thruster and its forward or reverse rotation for forward or reverse thrust. The Pixy CMUcam5 gives instructions to the AUV to recognize and track an object underwater based on a set colour signature, and sends the data to the control system. The control


Fig. 11 AUV circuit design using Fritzing

Fig. 12 Actual circuit for the proposed AUV

system will instruct the thrusters whether to move forward or reverse, and to submerge deeper or rise, depending on the location of the object. Figure 12 shows the actual circuit of the proposed AUV.
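Hobby-grade ESCs like these are typically driven with RC servo pulses: roughly 1100 µs for full reverse, 1500 µs for neutral (stopped) and 1900 µs for full forward (typical values for the BlueRobotics Basic ESC; treat the exact endpoints as an assumption). The mapping from a signed thrust command to a pulse width can be sketched in Python:

```python
PULSE_REVERSE_US = 1100   # full reverse (assumed endpoint)
PULSE_NEUTRAL_US = 1500   # thruster stopped
PULSE_FORWARD_US = 1900   # full forward (assumed endpoint)

def thrust_to_pulse(thrust):
    """Map a thrust command in [-1.0, 1.0] to an ESC pulse width (us)."""
    thrust = max(-1.0, min(1.0, thrust))          # clamp to valid range
    return int(PULSE_NEUTRAL_US + thrust * 400)   # 400 us per unit thrust

print(thrust_to_pulse(0.0))    # 1500 -> thruster stopped
print(thrust_to_pulse(1.0))    # 1900 -> full forward
print(thrust_to_pulse(-0.5))   # 1300 -> half reverse
```

On the actual hardware these pulse widths would be generated by the Arduino's servo outputs; clamping the command keeps the ESC inside its valid pulse range.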


3 Preliminary Experiments

3.1 Water Leakage and Submerging Experiment

Before placing the electronic devices inside the underwater compartment, it is necessary to perform a water leak test. Figure 13a shows the water leakage test condition. To detect air leaks, the underwater compartment was submerged for an hour inside a water container; the presence of bubbles indicates an air leak. This test helps to prevent short circuits in the electronic components inside the underwater compartment and keeps the compartment dry while submerged. The underwater compartment was tested submerged three times, each test lasting an hour. Before submerging, the compartment was tested to make sure it was watertight and reliable in protecting the electronic devices from damage due to water leakage.

After the AUV was completely assembled, a submerging test was carried out in a lake to determine whether the AUV was ready to remain completely submerged for a period of time. The experiment was also carried out to verify the waterproofing of the component storage compartment. Figure 13b shows the submerging experiment condition. As shown in the figure, yellow PVC pipes were added to the sides of the AUV to act as a flotation mechanism, and additional loads were added in order for the AUV to submerge. Based on the experiment, the compartment was reliably waterproofed. Furthermore, the right amount of load required for the AUV to stay submerged was verified successfully.
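The amount of extra load needed follows from Archimedes' principle: the AUV submerges once its weight exceeds the weight of the water it displaces. A rough estimate can be sketched as follows (the displaced volume and dry mass below are hypothetical example numbers, not measurements from this project):

```python
WATER_DENSITY = 1000.0  # kg/m^3, fresh water

def ballast_needed(displaced_volume_m3, dry_mass_kg):
    """Minimum extra mass (kg) needed for the vehicle to submerge."""
    # Mass of the displaced water = buoyant force / g.
    buoyant_mass = WATER_DENSITY * displaced_volume_m3
    return max(0.0, buoyant_mass - dry_mass_kg)

# e.g. 0.018 m^3 displaced and 15 kg dry mass -> 3 kg of ballast
print(ballast_needed(0.018, 15.0))   # 3.0
```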

Fig. 13 a Compartment water leakage test condition, b AUV submerging experiment condition performed in a lake


3.2 Underwater Experiment on Single Object Recognition Using Pixy CMUcam5

This experiment was carried out to investigate the effectiveness of the Pixy CMUcam5 in recognizing a single object underwater. The object used in the experiment is a pink dinosaur toy, a Spinosaurus. Underwater experiments were carried out in a water container measuring 80 cm (width) × 58 cm (depth) × 50 cm (height). The container was chosen because no large water tank was available to test long-distance recognition; the maximum distance between the camera and the object was therefore 30 cm.

Experimental Steps. The steps for this experiment are as follows:

1. Connect the Pixy CMUcam5 to the Arduino Mega.
2. Supply 5 V power to the Arduino Mega.
3. Upload the source code to the Arduino Mega.
4. Place the electronic components inside the underwater compartment.
5. Place the object in the water container as shown in Fig. 14.
6. Initially, position the camera at a distance of 30 cm from the object; then move it nearer, to the 25, 20, 15 and 10 cm positions.
7. Repeat steps 4 to 6 with a different type of water: clear water and muddy water.
8. Record the video images captured by the camera.

Fig. 14 Clear underwater single object recognition by Pixy CMUcam5


Fig. 15 Camera views of a single object in clear water at distances of a 30 cm, b 25 cm, c 20 cm, d 15 cm, e 10 cm

Fig. 16 Camera views of a single object in muddy water at distances of a 30 cm, b 25 cm, c 20 cm, d 15 cm, e 10 cm

Experimental Results. From the experiment, the Pixy CMUcam5 was able to recognize a single object in clear water at camera-to-object distances of 30, 25, 20, 15 and 10 cm, as shown in Fig. 15a–e. In the muddy water condition, the Pixy CMUcam5 was only able to recognize the object located 10 cm from the camera, as shown in Fig. 16a–e.

3.3 Underwater Experiment on Multiple Objects Recognition Using Pixy CMUcam5

This experiment was carried out to investigate the effectiveness of the Pixy CMUcam5 in recognizing multiple objects underwater. The objects used in the experiment were a Spinosaurus (pink), Stegosaurus (green), Pteranodon (yellow), Triceratops (orange) and Tyrannosaurus (purple).

Experimental Steps. The steps for this experiment are as follows:

1. Connect the Pixy CMUcam5 to the Arduino Mega.
2. Supply 5 V power to the Arduino Mega.
3. Upload the source code to the Arduino Mega.
4. Place the electronic components inside the underwater compartment.
5. Place the objects in the water container as shown in Fig. 17a.


Fig. 17 Camera views of multiple objects in a clear water, b muddy water


6. Initially, position the camera at a distance of 30 cm from the objects; then move it nearer, to the 25, 20, 15 and 10 cm positions.
7. Repeat steps 4 to 6 with a different type of water: clear water and muddy water.
8. Record the video images captured by the camera.

Experimental Results. From the experiment, it was found that the Pixy CMUcam5 was able to recognize certain objects in clear water at certain distances, as shown in Fig. 17a. At a camera-to-object distance of 30 cm, the camera was able to recognize the Spinosaurus and Pteranodon. The camera was able to recognize the Stegosaurus at a distance of 25 cm, and the Tyrannosaurus at 20 cm. The camera started to recognize the orange Triceratops at a distance of 15 cm. On the other hand, the camera was also able to recognize multiple objects in muddy water at certain distances. At a distance of 20 cm, the camera could only recognize the Stegosaurus. At a distance of 15 cm, the camera recognized all the objects except the Tyrannosaurus, which remained undetected in muddy water. Figure 17b shows the results. Light is composed of wavelengths, and every wavelength corresponds to a specific colour; as a result, the Pixy CMUcam5 recognizes the colours with the longest wavelengths first, followed by those with shorter wavelengths.

3.4 Underwater Experiment on Recognizing and Tracking a Single Object

This experiment was carried out to investigate the effectiveness of the Pixy CMUcam5 in recognizing an object underwater and tracking it. The object used was the pink Spinosaurus.

Experimental Steps. The steps for this experiment are as follows:

1. Supply 9 V from the LiPo battery to the ESCs for the thrusters.
2. Connect the Pixy CMUcam5 to the Arduino Mega.
3. Supply 5 V power to the Arduino Mega.
4. Upload the source code to the Arduino Mega.
5. Place the electronic components inside the underwater compartment.
6. Place the object at a 10 m underwater depth.
7. Set the camera-to-object distance to 20 cm and continuously move the object from left to right.
8. Record the distances at which the object is recognized and tracked.

Experimental Results. Figures 18, 19, 20 and 21 show the experimental results. From the experiments, the system was able to perform the desired tasks: the Pixy CMUcam5 was able to recognize the Spinosaurus in clear water and to track the


Fig. 18 Thrusters moving the AUV to the right. As can be seen from the produced bubbles, the thruster on the left is rotating

Fig. 19 Thrusters moving the AUV to the left. As can be seen, the thruster on the right is rotating


Fig. 20 Thrusters moving the AUV forward. As can be seen, both thrusters are rotating

Fig. 21 All thrusters stopped. As can be seen, both thrusters are not rotating

Spinosaurus. When the Spinosaurus was moved to the left, thruster A stopped and thruster B was activated, so the AUV turned left. Conversely, when thruster A was activated and thruster B stopped, the AUV turned right. When the Spinosaurus was moved backwards, both thruster A and thruster B were activated to move the AUV forward to track the object. Lastly, when the distance between the Spinosaurus and the camera reached 10 cm, the thrusters stopped.


3.5 Summary

Every step taken played an essential role in successfully developing a fully functional AUV. From sketching the structure of the AUV using computer software to assembling the AUV, each procedure was crucial. Since the AUV remains submerged, it is imperative to guarantee that all the electronic components are waterproofed and protected from leakage. The experimental results show that the camera was able to recognize single and multiple objects underwater, especially in clear water. The thrusters operated as desired, with the direction of the thrusters following the position of the object.

4 Conclusion

This paper describes the development of an autonomous underwater vehicle equipped with an object recognition and tracking system. The hardware and software designs of the AUV have been described. The AUV is fitted with a Pixy CMUcam5 camera for object recognition and tracking. Based on preliminary object recognition experiments, the Pixy CMUcam5 is capable of recognizing single and multiple objects underwater. It was observed that the Pixy CMUcam5 starts recognizing objects at a distance of 30 cm in clear water, while in muddy water it was difficult for the Pixy CMUcam5 to recognize objects. This may be due to the fact that the CMUcam5 utilizes a colour-based algorithm. Furthermore, experiments related to the thrusters showed that the thrusters rotated based on input from the images captured by the Pixy CMUcam5. In conclusion, the objective of the project, to design and develop an AUV equipped with an object recognition and tracking system, has been achieved successfully. Improvements to be considered in future projects include using a high-end vision system capable of real-time underwater monitoring, with a camera that can perform in multiple types of water so that the AUV is not limited to clear water but can also operate in muddy water.

Acknowledgements The authors would like to thank the Research Management Center (RMC), UTHM and the Ministry of Higher Education for sponsoring the research under Tier 1 Research Grant (Vot H161).

References

1. Levin LA et al (2019) Global observing needs in the deep ocean. Front Mar Sci 6(241):1–32
2. Spears A et al (2016) Under ice in Antarctica: the Icefin unmanned underwater vehicle development and deployment. IEEE Robot Autom Mag 23(4):30–41


3. Ribas D et al (2015) I-AUV mechatronics integration for the TRIDENT FP7 project. IEEE/ASME Trans Mechatron 20(5):2583–2592
4. Ambar RB, Sagara S (2015) Development of a master controller for a 3-link dual-arm underwater robot. Artif Life Robotics 20:327–335
5. Yuh J (2000) Design and control of autonomous underwater robots: a survey. Auton Robots 8(1):7–24
6. Khatib O et al (2016) Ocean One: a robotic avatar for oceanic discovery. IEEE Robot Autom Mag 23(4):20–29
7. Techopedia: Machine Vision System (MVS). https://www.techopedia.com/definition/30414/machine-vision-system-mvs. Accessed 21 Feb 2019

Dual Image Fusion Technique for Underwater Image Contrast Enhancement Chern How Chong, Ahmad Shahrizan Abdul Ghani, and Kamil Zakwan Mohd Azmi

Abstract Underwater imaging has been receiving attention in recent years. Attenuation of light causes underwater images to have poor contrast and deteriorated color. Furthermore, these images usually appear foggy and hazy. In this paper, a new approach to enhance underwater images is proposed, which integrates a dehazing method, homomorphic filtering, and image fusion. The dehazing method consists of a multi-scale fusion technique, which applies weight maps in the pre-processing step. Homomorphic filtering and image fusion are then applied to the resultant image for contrast and color enhancement. Qualitative and quantitative evaluations are performed to analyze the performance of the proposed method. The results show the superiority of the proposed method in terms of contrast, image details, colors, and entropy. Moreover, a Raspberry Pi with a Picamera is also successfully implemented as a standalone underwater image processing device.









Keywords Underwater image · Contrast · Color · Multi-scale fusion · Standalone prototype device

1 Introduction The physical features of an object are captured and stored as an image by a capturing device such as a camera, a telescope, or a computer's built-in camera module. In the digital domain, a digital image is represented as a two-dimensional (2D) rectangular matrix of sample values; these quantized sample values are referred to as pixels, or picture elements. The properties

C. H. Chong · A. S. Abdul Ghani (&) · K. Z. Mohd Azmi Faculty of Manufacturing and Mechatronic Engineering Technology, Universiti Malaysia Pahang, 26600 Pekan, Malaysia e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_5

Fig. 1 Different wavelengths of light are attenuated at different rates in water

of the image itself can be quantified and processed for further analysis to illustrate the characteristics and properties of the image. As reported by Abdul Ghani [1], most images captured in a water medium have qualities (e.g. color and contrast) that differ owing to the internal properties of the environmental medium. An object captured underwater is overshadowed by a blue-green color cast. This creates an undesirable condition in which the genuine characteristics and natural color of an underwater object are falsely interpreted. Moreover, the capturing device (i.e. the camera) can also degrade an underwater image; an inadequate specification of the capturing device may induce various noises in the output image. Therefore, these issues need to be resolved in order to obtain better-quality underwater images. Nowadays, underwater image processing is gradually becoming a challenging field of study for researchers. The fundamentals of image formation in a water medium are described briefly in order to understand the underwater imaging process. Light attenuation, as shown in Fig. 1, causes underwater images to suffer from low quality and poor contrast [2]. In a few experiments, the light source has been replaced with artificial light to rectify the underwater illumination, yet this introduces other lighting issues: an image captured with an artificial light source tends to have a bright spot in the center. Moreover, absorption and scattering effects further degrade the contrast of underwater images. Many methods have been introduced and proposed by researchers to enhance underwater image quality. Advances in underwater image processing techniques can help ease the overall progress of marine exploration.
For instance, Chiang and Chen [3] developed underwater image enhancement by wavelength compensation and dehazing to compensate for the attenuation discrepancy along the light propagation path. In 2017, Abdul Ghani and Mat Isa [4] introduced a new method of enhancing underwater images, which modifies the image histograms column-wise in accordance with the Rayleigh distribution. In another report, Mohd Azmi et al. [5] proposed a method that focuses

on enhancing deep underwater images. They [6, 7] have also successfully integrated a swarm-intelligence algorithm to further enhance the effectiveness of their image enhancement method. In 2017, Peng and Cosman [8] proposed a depth estimation method for underwater scenes based on image blurriness and light absorption for underwater image enhancement. The visibility of the output image can be improved through this method; however, the blue-green color cast is not significantly reduced. In 2018, Ancuti et al. [9] offered a single-image approach that builds on the blending of two images directly derived from a color-compensated and white-balanced version of the original image. This method is proven effective in improving turbid underwater images; however, for deep underwater images it tends to produce a reddish effect. Recently, Kareem et al. [10] applied an integrated color model with Rayleigh distribution (ICMRD) in their proposed method. The ICMRD approach operates in the YCbCr color space for image enhancement. The blue-green color cast is successfully reduced through this method; however, the image contrast remains low. In this paper, the image enhancement technique is presented with a Graphical User Interface (GUI) application to display the comparison between the raw input image and the processed output image. Moreover, the proposed method is extended by using a Raspberry Pi [11, 12] as the computing platform to run the underwater image processing. A visual aid with the GUI is developed to compare the results, and a standalone prototype device is also designed for underwater image acquisition applications.

2 New Approach for Underwater Image Contrast Enhancement In this work, the homomorphic filtering and image fusion with dehazing (HFIFD) technique is introduced for underwater image enhancement. First, the input image is subjected to a dehazing process in order to reduce the haziness in the underwater image. The dehazing method is a pre-processing procedure that splits the input image into two separate images, which are improved through white balancing and contrast enhancement techniques respectively. Luminance, chromatic, and saliency weight maps are applied to both images, and all the outputs are then fused together to produce the output image, as shown in Fig. 2. This dehazing technique is necessary to eliminate unwanted distortion elements in the image. The white balancing process aims to shed unreal colors and chromatic casts distorted by the atmospheric color. The shades-of-gray color constancy technique is applied in this process for better computational efficiency. Meanwhile, contrast enhancement is applied to the second image using the adaptive histogram equalization technique. This method enhances the contrast of each RGB channel by applying histogram equalization to the intensity of the whole frame of the image.
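The shades-of-gray white balancing step described above can be sketched as follows. This is a minimal NumPy illustration assuming an RGB image scaled to [0, 1]; the Minkowski norm order p is a free parameter here, not a value tuned by the authors.

```python
import numpy as np

def shades_of_gray_wb(img, p=6):
    """White-balance an RGB float image using the shades-of-gray
    assumption: the Minkowski p-norm mean of each channel estimates
    the illuminant, which is then normalized to neutral gray."""
    img = img.astype(np.float64)
    # Per-channel illuminant estimate via the p-norm mean.
    illum = np.power(np.mean(np.power(img, p), axis=(0, 1)), 1.0 / p)
    illum /= np.linalg.norm(illum)          # direction of the illuminant
    gray = 1.0 / np.sqrt(3.0)               # neutral (achromatic) direction
    gains = gray / illum
    return np.clip(img * gains, 0.0, 1.0)
```

With p = 1 this reduces to the gray-world assumption, and as p grows it approaches the white-patch (max-RGB) assumption; p around 6 is a common middle ground.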

Fig. 2 Block diagram of dehazing method with fusion technique: the underwater image is split into a white-balanced image and a contrast-enhanced image; luminance, chromatic, and saliency weight maps are computed for both; image fusion then produces the dehazed image

Then, weight maps are applied to the white-balanced image and the contrast-enhanced image, as the previous enhancement is insufficient to restore the quality of the underwater image. Luminance, chromatic, and saliency weight maps are introduced and applied to the resultant images to improve the visibility and the color of the underwater

image. The luminance weight map is applied because color reduction occurs after the white balancing technique is performed. It assigns a higher saturation value to regions with better visibility and a lower saturation value to other regions. The chromatic weight map is then introduced by working on the saturation gain of the input image. Edges of objects in certain regions are considered the informative parts of the image and should be distinguished from their surroundings, as they possess important features; therefore, the saliency weight map is applied to improve those regions so that they can be easily seen. These three weight maps, employed in the dehazing process, hold critical roles in enhancing the image quality and reducing the haziness.
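The weight-map fusion described above can be illustrated with a simplified sketch. The luminance, chromatic, and saliency maps below are rough proxies (channel spread around the mean intensity, saturation, and deviation from the global mean), not the exact formulations used in the paper; the point illustrated is the per-pixel normalized weighting applied during fusion.

```python
import numpy as np

def fuse_with_weight_maps(inputs):
    """Fuse pre-processed images (e.g. white-balanced and
    contrast-enhanced versions) by per-pixel normalized weight maps.
    Each pixel of the output is a convex combination of the inputs."""
    eps = 1e-6
    weights = []
    for img in inputs:
        lum = img.mean(axis=2)
        # Luminance proxy: spread of the channels around the luminance.
        w_lum = np.sqrt(((img - lum[..., None]) ** 2).mean(axis=2))
        # Chromatic proxy: saturation (max - min over channels).
        w_chr = img.max(axis=2) - img.min(axis=2)
        # Saliency proxy: deviation from the global mean intensity.
        w_sal = np.abs(lum - lum.mean())
        weights.append(w_lum + w_chr + w_sal + eps)
    total = np.sum(weights, axis=0)
    fused = np.zeros_like(inputs[0])
    for img, w in zip(inputs, weights):
        fused += img * (w / total)[..., None]
    return np.clip(fused, 0.0, 1.0)
```

Because the weights are normalized to sum to one at every pixel, the fused result always lies between the inputs, which is what prevents the fusion step from introducing out-of-range artifacts.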

Fig. 3 Block diagram of homomorphic filtering and image fusion with dehazing (HFIFD): dehazed image → homomorphic filtering → histogram matching → dual-image global stretching → local stretching as post-processing → sharpening of image for final output image

After the pre-processing (dehazing) steps are done, homomorphic filtering is applied to the resultant image to enhance and restore the natural colors of the underwater image, as shown in Fig. 3. The Butterworth filtering technique is applied in the homomorphic filtering to filter low-frequency noise in the image. However, homomorphic filtering alone is inadequate to improve the underwater image, as the bluish or greenish illumination tends to remain in the background. Therefore, the histogram matching method is utilized in the filtering process to boost the inferior and intermediate color channels. In this step, the inferior and intermediate color channels are matched to the dominant color channel. This process automatically increases the influence of the inferior and intermediate color channels while reducing that of the dominant channel. Then, dual-image global stretching, local stretching, and image sharpening are applied to further enhance the image contrast.
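The homomorphic filtering step can be sketched in NumPy as follows: the image is taken into the log domain so that illumination (low frequency) and reflectance (high frequency) separate additively, and a Butterworth-style high-emphasis transfer function is applied in the frequency domain. The parameters d0, order, gamma_l, and gamma_h are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def homomorphic_filter(channel, d0=30.0, order=2, gamma_l=0.5, gamma_h=1.5):
    """Homomorphic filtering of one image channel in [0, 1]: log,
    FFT, Butterworth high-emphasis filter, inverse FFT, exp."""
    rows, cols = channel.shape
    log_img = np.log1p(channel.astype(np.float64))
    # Centered frequency-distance grid.
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    dist = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    # Butterworth high-pass response, rescaled to [gamma_l, gamma_h]
    # so low frequencies are attenuated rather than removed entirely.
    hp = 1.0 / (1.0 + (d0 / (dist + 1e-6)) ** (2 * order))
    h = gamma_l + (gamma_h - gamma_l) * hp
    spec = np.fft.fftshift(np.fft.fft2(log_img))
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(spec * h)))
    return np.clip(np.expm1(filtered), 0.0, 1.0)
```

Choosing gamma_l < 1 < gamma_h compresses the illumination component while amplifying reflectance detail, which is what suppresses the uneven background cast without flattening edges.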

3 GUI Application on Underwater Image Acquisition MATLAB is used as the compiler platform in this work. In addition, a GUI application is designed and developed through MATLAB GUIDE to display the input and output of the processed underwater image. The GUI helps users to clearly see the difference between the raw underwater image and the processed image. As shown in Fig. 4, axes and a push button have been placed on the GUI. The axes are divided into two and labeled to display the input and output images respectively. The “Pick and Process” button is clicked to select the input image through the file selector function in MATLAB. The corresponding function is uigetfile(), where the filename and pathname are the outputs of

Fig. 4 GUI for underwater image acquisition using MATLAB

the function. The input image type is defined as .jpg, a common image format. The user can choose any underwater image in .jpg format and select it as the input image. Figure 5 shows the flowchart of the GUI application with the implementation of the proposed image enhancement technique.

Fig. 5 Flowchart for GUI application

4 Standalone Prototype Device for Underwater Image Acquisition Application In the proposed method, a Raspberry Pi is used as the computing device for underwater image enhancement. The Raspberry Pi is a basic embedded system on a low-cost single-board computer that is commonly used to reduce the complexity of real-time applications. Using the Raspberry Pi offers better opportunities than merely observing simulation results. The interaction between the Raspberry Pi and the PC is handled by the MATLAB and Simulink software, where Simulink makes it possible to port MATLAB software to a variety of devices and platforms. MATLAB on the Raspberry Pi can operate both in a simulation mode, where the board is connected to a PC, and in a standalone mode, where the software is downloaded onto the board and runs independently of a PC.

The Raspberry Pi operates on special derivatives of the Linux operating system (OS). There are six OS variants that can be installed on the Raspberry Pi, such as Raspbian, Pidora, OpenELEC, RaspBMC, RISC OS, and Arch Linux. Raspbian, developed specifically for the Raspberry Pi, is the most frequently used. For the underwater imaging field, the Raspberry Pi is supported by programming software (i.e. MATLAB, Simulink) integrated by MathWorks. The MATLAB support package enables the development of algorithms that run on the Raspberry Pi. It also allows controlling peripheral devices connected to the board through its GPIO interfaces, namely serial, I2C, and SPI, as well as a camera module, via command functions in the MATLAB command window. The performance of the Raspberry Pi as the computing platform helps researchers to study and analyze phenomena in the underwater environment. To capture live images from the underwater environment, the Raspberry Pi Camera Module is utilized. The Picamera is used because it has a built-in module that can be integrated through Raspbian and is easy to connect to the Raspberry Pi board via a short ribbon cable. Live still images can be captured through the Picamera module, whose 8-megapixel sensor is capable of capturing images of good quality. Moreover, a 5 in. TFT display with a mini panel-mountable HDMI monitor is used to display the Raspbian operating system, since the original Raspberry Pi board does not come with a display. The display is a common 800 × 400 HDMI display made for the Raspberry Pi. For the power source, a portable power bank from PINENG with 20,000 mAh capacity is adopted for the Raspberry Pi board. OpenCV is an open-source computer vision and machine learning software library, aimed at real-time computer vision functions.
In the proposed method, OpenCV is applied and written in the Python language to deploy the implementation of the homomorphic filtering process for underwater image enhancement. Python 2 IDLE is used as the programming environment to write out the algorithm for the homomorphic filtering method. The libraries for both the Picamera and OpenCV are imported into the programming environment to fully utilize their features (Fig. 6). The Picamera captures the input image and the image is saved into a prepared folder for storage. The captured image is read using the OpenCV function cv2.imread, which reads an RGB image into BGR channel order; another function, cv2.imwrite, writes the final enhanced output image back as an RGB image. The input image is then processed with adaptive histogram equalization. The image is divided into small blocks by a 2 × 2 tile grid size and each tile's histogram is equalized. A contrast-limiting parameter is also applied to prevent amplification of any noise present in the blocks: the pixels in the input image are clipped and distributed evenly to other bins before adaptive histogram equalization is applied. The next step is to split the processed image into its B, G, and R color channels. Contrast adjustment is applied by normalizing all three B, G, and R color channels in order to adjust the color and

Fig. 6 Block diagram process flow of homomorphic filtering method through Raspberry Pi microprocessor module


contrast in the image. The function processes each color band (BGR) and determines the minimum and maximum values in each of the three color bands. Each of the color channels has the same minimum value but a different maximum value. The minimum and maximum values range between 0 and 255, since the input image is 8-bit. The normalized BGR color channels are then merged together and adaptive histogram equalization is performed again. The output of the merging is then subjected to homomorphic filtering for the final enhancement; a Gaussian high-pass filter is used in the homomorphic filtering. Figure 7 shows the interface generated on the Raspberry Pi to display the comparison between the raw input image and the enhanced output image. The left side of the window shows the raw input image loaded from the database of sample images, and the right side shows the output image enhanced through the proposed method. Both windows are generated using Python IDLE.

Fig. 7 Raw image and enhanced image displayed on the Raspberry Pi with HDMI display

5 Results and Discussion Five sample images are used to test the effectiveness of the proposed method, namely fish 1, coral 1, stone, fish 2, and coral 2. The performance of the proposed method is compared with homomorphic filtering, gray world [13], CLAHE, and contrast adjustment. The resultant images produced by all methods are shown in Figs. 8, 9, 10, 11 and 12.

Fig. 8 Comparison of fish 1 images, a Original image; b Homomorphic filtering; c Gray world; d CLAHE; e Contrast adjustment; f Proposed HFIFD method

Fig. 9 Comparison of coral 1 images, a Original image; b Homomorphic filtering; c Gray world; d CLAHE; e Contrast adjustment; f Proposed HFIFD method

Fig. 10 Comparison of stone images, a Original image; b Homomorphic filtering; c Gray world; d CLAHE; e Contrast adjustment; f Proposed HFIFD method

The original image of fish 1 is affected by a bluish color cast and the objects are hardly visible. The homomorphic filtering method shows a promising result, as the bluish color cast is significantly reduced. Meanwhile, gray world tends to generate a reddish output image. CLAHE inadequately improves the original image, as the bluish color cast remains in the image. The contrast adjustment method is able to reduce the bluish color cast in the foreground; however, this effect remains in the

Fig. 11 Comparison of fish 2 images, a Original image; b Homomorphic filtering; c Gray world; d CLAHE; e Contrast adjustment; f Proposed HFIFD method

Fig. 12 Comparison of coral 2 images, a Original image; b Homomorphic filtering; c Gray world; d CLAHE; e Contrast adjustment; f Proposed HFIFD method

background. On the other hand, the proposed method is able to reduce the bluish color cast significantly. The image contrast is also well improved, as the fishes can be seen clearly. The original image of coral 1 has poor contrast and the real color of the object is overshadowed by the bluish color cast. The homomorphic filtering method is able to reduce the bluish color cast; however, the image contrast is insufficiently enhanced. The gray world method over-enhances the foreground color, as a reddish color cast dominates that region. There is no significant improvement made by CLAHE, as the bluish color cast remains in the output image. Similar to gray world, the contrast

adjustment method tends to produce a reddish color cast in the foreground. On the other hand, the proposed method is able to improve the image contrast adequately, while the bluish color cast is significantly reduced. A similar trend can be seen in the other tested images, where the proposed method successfully recovers the image contrast and improves the visibility of objects. To support the visual observation, quantitative evaluation metrics are used, namely entropy [14], MSE [15], and PSNR [15]. Entropy represents the abundance of image information, measuring the image information content; high entropy is preferred, as it shows the resultant images contain more information. Meanwhile, MSE and PSNR are quantitative metrics used to compare the original image and the improved image. High noise in an image is indicated by a high MSE value and a low PSNR value. As shown in Table 1, for all tested images the proposed method obtains the highest entropy, indicating that it produces output images with more details and information. For the MSE and PSNR evaluations, the proposed method is in fourth place for images fish 1, coral 1, fish 2, and coral 2, and in fifth place for image stone. Nevertheless, this does not necessarily mean that the proposed method is inferior to the other methods. The quantitative evaluation metrics used here are subjective and thus have difficulty correctly measuring the enhancements made by an image enhancement technique [16]. In some cases, some performance metrics fail to achieve a result that agrees with the human perception of image quality [7]. For example, for image fish 1, the gray world method obtains a better MSE score (3.521) than the proposed method (6.802); however, by visual observation, the output image produced by gray world looks reddish and its contrast is inadequately improved.
Meanwhile, the proposed method adequately reduces the bluish color cast while significantly improving the image contrast, as the fish can be seen clearly. Therefore, in terms of image quality comparison, qualitative visual evaluation by the human visual system is taken as the first priority for overall image quality evaluation [4]. On the other hand, the GUI developed with MATLAB performs promisingly in enhancing underwater images, since the required computational time is short: each image requires 2–3 s to be processed and enhanced. Compared to the GUI, the Raspberry Pi requires a longer computational time to process an underwater image, taking about 21 s on average.
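The three evaluation metrics used in Table 1 follow standard definitions; a minimal sketch for 8-bit images is given below.

```python
import numpy as np

def entropy(img):
    """Shannon entropy (bits) of an 8-bit image histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                       # ignore empty bins (0*log 0 = 0)
    return float(-(p * np.log2(p)).sum())

def mse(ref, img):
    """Mean squared error between two same-sized images."""
    return float(np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less deviation
    from the reference image."""
    e = mse(ref, img)
    return float("inf") if e == 0 else float(10.0 * np.log10(peak ** 2 / e))
```

For an 8-bit image the entropy is bounded by 8 bits, reached only when all 256 intensity levels are equally populated, which is why a higher entropy indicates a richer intensity distribution after enhancement.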

Table 1 Quantitative results in terms of entropy, MSE, and PSNR

Image    Method                  Entropy   MSE       PSNR
fish 1   Original                7.463     –         –
         Homomorphic filtering   7.865     5.920     40.408
         Gray world              6.404     3.521     42.664
         CLAHE                   6.940     11.078    37.686
         Contrast adjustment     7.419     2.948     43.436
         HFIFD                   7.870     6.802     39.804
coral 1  Original                7.591     –         –
         Homomorphic filtering   7.779     55.592    30.681
         Gray world              7.141     143.191   26.572
         CLAHE                   7.466     20.487    35.016
         Contrast adjustment     7.491     32.673    32.989
         HFIFD                   7.878     60.119    30.341
stone    Original                7.557     –         –
         Homomorphic filtering   7.886     31.451    33.154
         Gray world              7.494     22.906    34.531
         CLAHE                   7.653     8.298     38.941
         Contrast adjustment     7.608     9.549     38.331
         HFIFD                   7.888     40.308    32.077
fish 2   Original                7.529     –         –
         Homomorphic filtering   7.863     5.635     40.622
         Gray world              6.647     3.010     43.345
         CLAHE                   7.254     8.093     39.049
         Contrast adjustment     7.441     3.102     43.214
         HFIFD                   7.889     7.589     39.329
coral 2  Original                7.181     –         –
         Homomorphic filtering   7.553     38.944    32.226
         Gray world              6.734     266.114   23.880
         CLAHE                   7.047     28.447    33.590
         Contrast adjustment     7.087     19.058    35.330
         HFIFD                   7.735     40.022    32.108

6 Conclusion The proposed image enhancement method has proven effective in enhancing underwater images in terms of color, contrast, and image details. Qualitative and quantitative evaluations have been performed to evaluate and justify the performance of the proposed method. Five sample images were tested, and the results showed the effectiveness of the proposed method. In addition, a GUI application has been successfully developed for processing underwater images, successfully displaying the comparison between the input image (raw image) and the output image (enhanced image). The implementation of the Raspberry Pi device in an underwater image acquisition application has also been successfully demonstrated: an image is taken with the Picamera, and its quality is then improved through the proposed method. The image quality produced through the Raspberry Pi also shows satisfactory results. Acknowledgements The research is supported by Universiti Malaysia Pahang (UMP) research grant RDU1803131 entitled “Development of Multi-Vision Guided Obstacle Avoidance System for Ground Vehicle”. The sample images and some related references are taken from the database https://sites.google.com/ump.edu.my/shahrizan/database-publication.

References 1. Abdul Ghani AS (2015) Improvement of underwater image contrast enhancement technique based on histogram modification. Thesis, Universiti Sains Malaysia. Accessed Jan 2019 2. Ancuti C, Ancuti CO, Haber T, Bekaert P (2012) Enhancing underwater images and videos by fusion. In: Proceedings of the IEEE computer society conference on computer vision and pattern recognition, pp 81–88 3. Chiang JY, Chen YC (2012) Underwater image enhancement by wavelength compensation and dehazing. IEEE Trans Image Process 21(4):1756–1769 4. Abdul Ghani AS, Mat Isa NA (2017) Automatic system for improving underwater image contrast and color through recursive adaptive histogram modification. Comput Electron Agric 141:181–195 5. Mohd Azmi KZ, Abdul Ghani AS, Md Yusof Z, Ibrahim Z (2019) Deep underwater image enhancement through integration of red color correction based on blue color channel and global contrast stretching. In: Md Zain Z et al (eds) Proceedings of the 10th national technical seminar on underwater system technology 2018, LNEE, vol 538, pp 35–44. Springer, Singapore 6. Mohd Azmi KZ, Abdul Ghani AS, Md Yusof Z, Ibrahim Z (2019) Deep underwater image enhancement through colour cast removal and optimization algorithm. Imag Sci J 67(6):330–342 7. Mohd Azmi KZ, Abdul Ghani AS, Md Yusof Z, Ibrahim Z (2019) Natural-based underwater image color enhancement through fusion of swarm-intelligence algorithm. Appl Soft Comput J 85:1–19 8. Peng Y, Cosman PC (2017) Underwater image restoration based on image blurriness and light absorption. IEEE Trans Image Process 26(4):1579–1594 9. Ancuti CO, Ancuti C, De Vleeschouwer C, Bekaert P (2018) Color balance and fusion for underwater image enhancement. IEEE Trans Image Process 27(1):379–393

10. Kareem HH, Daway HG, Daway EG (2019) Underwater image enhancement using colour restoration based on YCbCr colour model. In: IOP conference series: materials science and engineering, vol 571, pp 1–7 11. Horak K, Zalud L (2015) Image processing on raspberry pi in Matlab. Adv Intell Syst Comput 4:1–7 12. Patil VP, Gohatre UB, Singla CR (2018) Design and development of raspberry pi based wireless system for monitoring underwater environmental parameters and image enhancement. Int J Electron Electr Comput Syst 7(5):133–138 13. Buchsbaum G (1980) A spatial processor model for object colour perception. J Franklin Inst 310(1):1–26 14. Ye Z (2009) Objective assessment of nonlinear segmentation approaches to gray level underwater images. ICGST J Graph Vis Image Process 9(II):39–46 15. Hitam MS, Awalludin EA, Wan Yussof WNJ, Bachok Z (2013) Mixture contrast limited adaptive histogram equalization for underwater image enhancement. In: Proceeding of the IEEE international conference on computer applications technology (ICCAT), pp 1–5 16. Rao SP, Rajendran R, Panetta K, Agaian SS (2017) Combined transform and spatial domain based “no reference” measure for underwater images. In: Proceedings of the IEEE international symposium on technologies for homeland security (HST), pp 1–7

Red and Blue Channels Correction Based on Green Channel and Median-Based Dual-Intensity Images Fusion for Turbid Underwater Image Quality Enhancement Kamil Zakwan Mohd Azmi, Ahmad Shahrizan Abdul Ghani, and Zulkifli Md Yusof

Abstract One of the main problems encountered in processing turbid underwater images is the greenish color cast that overshadows the actual color of an object. This paper introduces a new technique which focuses on the enhancement of turbid underwater images. The proposed method integrates two major steps. The first step is specially designed to reduce the greenish color cast problem: the blue and red channels are improved according to the difference between these channels and the reference channel in terms of total pixel values. Then, the median-based dual-intensity images fusion approach is applied to all color channels to improve the image contrast. Qualitative and quantitative evaluation is used to test the effectiveness of the proposed method. The results show that the proposed method is very effective in improving the visibility of turbid underwater images.

Keywords Image processing · Turbid underwater image · Contrast stretching

K. Z. Mohd Azmi · A. S. Abdul Ghani (&) · Z. Md Yusof Faculty of Manufacturing and Mechatronic Engineering Technology, Universiti Malaysia Pahang, 26600 Pekan, Malaysia e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_6

1 Introduction The features of turbid underwater images differ from those of deep underwater images: not only the red channel but also the blue channel is problematic, due to absorption by organic matter [1]. As a result, a greenish color cast dominates these images and makes the actual color of an object difficult to determine accurately. In addition, turbid underwater images also suffer from low contrast, resulting in poor image quality. Based on the aforementioned issues, it is crucial for underwater researchers to focus on improving turbid underwater images. In this paper, an idea to

improve the visibility of turbid underwater images is presented. The proposed method involves two major steps: red and blue channels correction based on the green channel, and median-based dual-intensity images fusion (RBCG-MDIF). The capability of the proposed method is validated through qualitative and quantitative evaluation results. This paper is organized as follows: the literature review is described in Sect. 2. Section 3 discusses the motivation of this research. Section 4 provides a detailed explanation of the proposed method. In Sect. 5, the capability of the proposed method is confirmed through qualitative and quantitative evaluation results. This paper ends with a conclusion.

2 Related Works The gray world (GW) assumption [2] is a well-known method that has been employed to improve underwater images. This method assumes that all color channels have the same mean value before attenuation. However, it inadequately enhances underwater images that are highly affected by a strong greenish effect, such as in turbid underwater scenes. Another well-known method, frequently used as a baseline for comparison, is the unsupervised color correction method (UCM) [3]. This method is able to increase the image contrast; however, for turbid underwater images it tends to produce a yellowish output image. In 2016, Abdul Ghani and Mat Isa [4] proposed the integrated-intensity stretched-Rayleigh histograms method (IISR), in which each color channel is multiplied by a gain factor in order to balance all the color channels. Based on visual observation, for turbid underwater images, IISR over-enhances the greenish effect, thus reducing the visibility of objects. Recently, Mohd Azmi et al. (2019) [5] proposed a method for deep underwater image enhancement. It incorporates two main steps: red color correction based on blue color channel (RCCB) and global contrast stretching (GCS). This method is very effective in enhancing deep underwater images, as it is able to reduce the bluish color cast significantly. However, it is less effective in improving the quality of turbid underwater images. In the next section, we explain how this method is modified and adapted for turbid underwater image enhancement.
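The gray-world assumption described above reduces to a per-channel gain that pushes every channel mean toward the global mean; a minimal sketch, assuming a float RGB image in [0, 1]:

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance: assume the average of each channel
    should equal the global mean intensity, and scale each channel
    accordingly."""
    img = img.astype(np.float64)
    means = img.mean(axis=(0, 1))            # per-channel means
    gains = means.mean() / (means + 1e-6)    # push each mean to gray
    return np.clip(img * gains, 0.0, 1.0)
```

This also makes the method's failure mode on turbid scenes visible: when the green mean is far above the red and blue means, the red/blue gains become very large and amplify noise in the nearly empty channels.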

Red and Blue Channels Correction Based on Green Channel …


3 Motivation

The RCCB step has shown excellent results in improving the features of deep underwater images [5]. This step works by modifying the red channel with regard to the difference between this channel and the blue channel in terms of total pixel value. However, this step is less effective in improving the quality of turbid underwater images. As mentioned earlier, the features of turbid underwater images differ from those of deep underwater images: not only the red channel but also the blue channel is problematic, due to absorption by organic matter [1]. The diver image in Table 1(a) is used to show the output image produced by the RCCB step. The original image is entirely disguised by the greenish color cast, and the objects are hardly visible. According to the image histograms, the green channel is dominant over the other color channels. No changes can be seen in the output image generated by the RCCB step, and the image histograms likewise show no adjustment or improvement. This is because the RCCB step only improves the red channel by referring to the blue channel [5], while in a turbid scene the red and blue channels generally do not differ significantly, as shown in the histograms of the original image. Therefore, this paper introduces a new idea to improve the RCCB step, considering the enhancements that need to be made to both the red and blue channels. The reference channel should be changed to the green channel, instead of the blue channel as proposed in the RCCB step [5], because the green channel is usually superior to the other color channels in turbid underwater images.

Table 1 Resultant image and image histograms produced by the RCCB step: (a) original image, with its red, green, and blue channel histograms; (b) RCCB [5], with its red, green, and blue channel histograms


K. Z. Mohd Azmi et al.

4 Methodology: Red and Blue Channels Correction Based on Green Channel and Median-Based Dual-Intensity Images Fusion (RBCG-MDIF)

This section provides a detailed explanation of the proposed method. Figure 1 shows the flowchart of the proposed method, while Table 2 shows the resultant images and image histograms of each step of the proposed RBCG-MDIF method.

4.1 Red and Blue Channels Correction Based on Green Channel (RBCG)

To begin with, the image is decomposed into the red, green, and blue channels. Then, the total pixel values of the red channel, $R_{sum}$, green channel, $G_{sum}$, and blue channel, $B_{sum}$, are calculated. The green channel is chosen as the reference channel for the enhancement of the red and blue channels, as this color channel is usually dominant in turbid underwater scenes. Two gain factors, Y and Z, are obtained as follows:

Fig. 1 Flowchart of the proposed RBCG-MDIF method: input image → red and blue channels correction based on green channel (RBCG) → median-based dual-intensity images fusion (MDIF) → unsharp masking → output image


Table 2 Resultant images and image histograms of each step of the proposed RBCG-MDIF: (a) input image, (b) RBCG, (c) MDIF, (d) unsharp masking; each row shows the resultant image together with its red, green, and blue channel histograms

$$Y = \frac{G_{sum} - B_{sum}}{G_{sum} + B_{sum}} \qquad (1)$$

$$Z = \frac{G_{sum} - R_{sum}}{G_{sum} + R_{sum}} \qquad (2)$$

The gain factor Y contains information concerning the difference between the green and blue channels in terms of total pixel value, while the gain factor Z contains the corresponding information for the green and red channels. This information is crucial to control the amount of pixel value that has to be added to the blue and red channels in order to reduce the greenish color cast: the larger the pixel value difference between the green channel and the other color channels, the more pixel value is added to improve the blue and red channels.


Fig. 2 Images and their respective histograms (red, green, and blue channels) before and after the RBCG step, with the green channel marked as the reference channel

Then, the blue and red channels are improved through Eqs. (3) and (4), respectively. As shown in Fig. 2, the proposed RBCG is able to enhance the blue and red channels appropriately, thus significantly reducing the effect of the greenish color cast.

$$P_{blue} = P_{blue} + Y \times P_{green} \qquad (3)$$

$$P_{red} = P_{red} + Z \times P_{green} \qquad (4)$$

where $P_{red}$, $P_{green}$ and $P_{blue}$ are the pixel values of the red, green and blue channels, respectively.
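The whole RBCG step, Eqs. (1)–(4), can be sketched in a few lines of NumPy. This is an illustrative re-implementation, not the authors' code; the function name and the clipping to the 8-bit range are assumptions.

```python
import numpy as np

def rbcg(image):
    """Sketch of red and blue channels correction based on green channel.

    `image` is an RGB uint8 array of shape (H, W, 3). Names are
    illustrative; clipping to [0, 255] is an assumption.
    """
    img = image.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    r_sum, g_sum, b_sum = r.sum(), g.sum(), b.sum()

    # Gain factors of Eqs. (1) and (2): the larger the gap between the
    # dominant green channel and a deficient channel, the larger the gain.
    y = (g_sum - b_sum) / (g_sum + b_sum)
    z = (g_sum - r_sum) / (g_sum + r_sum)

    # Eqs. (3) and (4): raise the blue and red channels with the green
    # channel as the reference.
    b_new = np.clip(b + y * g, 0, 255)
    r_new = np.clip(r + z * g, 0, 255)
    return np.stack([r_new, g, b_new], axis=-1).astype(np.uint8)
```

For a greenish image, Y and Z are both positive, so the blue and red channels are lifted toward the dominant green channel; for a balanced image, both gains are close to zero and the image is left almost unchanged.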

4.2 Median-Based Dual-Intensity Images Fusion (MDIF)

Next, the median-based dual-intensity images fusion approach is applied to all color channels to improve the image contrast. This step starts with the determination of the minimum, median, and maximum intensity values of each image histogram.


Fig. 3 Illustration of histogram division at the median point and the stretching process: the original histogram, with its minimum value, median point, and maximum value marked, is divided into a lower stretched-region and an upper stretched-region

As shown in Fig. 3, each image histogram is separated at the median point into two regions: the upper and lower stretched-regions. Each region is then stretched according to Eq. (5), where $P_{in}$ and $P_{out}$ are the input and output pixels, respectively, and $i_{min}$ and $i_{max}$ represent the minimum and maximum intensity level values for the input image, respectively.

$$P_{out} = 255 \left( \frac{P_{in} - i_{min}}{i_{max} - i_{min}} \right) \qquad (5)$$

For each color channel, the separation at the median point and the global stretching processes produce two types of histograms: upper-stretched and lower-stretched. All upper-stretched histograms are integrated to generate a new resultant image, and the same process is performed on the lower-stretched histograms. These two images are then composed by averaging corresponding points, as illustrated in Fig. 4.
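One plausible sketch of the MDIF step is below. The per-channel treatment and the simple pixel-wise averaging of the two stretched images are assumptions based on the text and Fig. 4; names are illustrative.

```python
import numpy as np

def stretch(channel, lo, hi):
    """Globally stretch pixel values from [lo, hi] to [0, 255] (Eq. 5)."""
    if hi == lo:
        return np.zeros_like(channel, dtype=np.float64)
    return 255.0 * (np.clip(channel, lo, hi).astype(np.float64) - lo) / (hi - lo)

def mdif(image):
    """Sketch of median-based dual-intensity images fusion (MDIF).

    For each channel the histogram is split at the median point; the
    lower region is stretched into one resultant image, the upper region
    into another, and the two are averaged pixel-wise.
    """
    img = image.astype(np.float64)
    lower, upper = np.empty_like(img), np.empty_like(img)
    for c in range(3):
        ch = image[..., c]
        i_min, i_med, i_max = int(ch.min()), int(np.median(ch)), int(ch.max())
        lower[..., c] = stretch(ch, i_min, i_med)   # lower stretched-region
        upper[..., c] = stretch(ch, i_med, i_max)   # upper stretched-region
    # Compose the two resultant images by averaging corresponding pixels.
    return ((lower + upper) / 2.0).astype(np.uint8)
```

Pixels below the median are pushed toward black in the upper-stretched image and toward white in the lower-stretched image, so the average widens the occupied intensity range of each channel.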

4.3 Unsharp Masking

The unsharp masking technique [6] is applied in the last step to improve the overall image sharpness. The fundamental idea of this method is to blur the original image first, then subtract the blurry image from the original image; the difference is then added back to the original image.

Fig. 4 Composition of the under-enhanced and over-enhanced images: the input image yields an over-enhanced and an under-enhanced image, which are combined into the enhanced-contrast output image

This technique has been used and proven effective in improving the quality of underwater images [7, 8]. Through this method, the blurry appearance of underwater objects can be further enhanced, which can assist underwater researchers in better detecting objects such as plants or animals under the sea.
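The blur-subtract-add pipeline can be sketched as below. The 3 × 3 box blur and the `amount` parameter are assumptions for illustration; the paper follows the standard formulation of [6] without specifying the blur kernel.

```python
import numpy as np

def unsharp_mask(image, amount=1.0):
    """Sketch of unsharp masking: blur, subtract, add the difference back.

    A 3x3 box blur stands in for the low-pass filter; kernel choice and
    `amount` are assumptions, not taken from the paper.
    """
    img = image.astype(np.float64)
    # Simple 3x3 box blur via padded neighborhood averaging.
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    blurred = sum(
        padded[i:i + img.shape[0], j:j + img.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    mask = img - blurred                      # high-frequency detail
    return np.clip(img + amount * mask, 0, 255).astype(np.uint8)
```

Flat regions are untouched (the mask is zero there), while edges are exaggerated, which is exactly the sharpening effect the paper relies on.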

5 Results and Discussion

In this experiment, 300 underwater images are used to evaluate the performance of the proposed RBCG-MDIF method. The proposed method is compared with gray world (GW) [2], the unsupervised color correction method (UCM) [3], integrated-intensity stretched-Rayleigh histograms (IISR) [4], and red channel correction based on blue channel with global contrast stretching (RCCB-GCS) [5]. Besides visual observation, three quantitative performance metrics are used to support the qualitative assessment: entropy [9], the patch-based contrast quality index (PCQI) [10], and the natural image quality evaluator (NIQE) [11]. A high entropy value indicates that a method is able to generate an output image with more information, while a high PCQI value corresponds to high image contrast quality. On the other hand, a low NIQE value indicates a high degree of naturalness in the output image. Five sample underwater images are selected for comparison, as shown in Figs. 5, 6, 7, 8 and 9, while Table 3 shows the quantitative results of these sample images.

Fig. 5 Processed images of turbid image 1 based on different methods: (a) original image, (b) GW, (c) UCM, (d) IISR, (e) RCCB-GCS, (f) proposed RBCG-MDIF

Fig. 6 Processed images of turbid image 2 based on different methods: (a) original image, (b) GW, (c) UCM, (d) IISR, (e) RCCB-GCS, (f) proposed RBCG-MDIF

Fig. 7 Processed images of turbid image 3 based on different methods: (a) original image, (b) GW, (c) UCM, (d) IISR, (e) RCCB-GCS, (f) proposed RBCG-MDIF

Fig. 8 Processed images of turbid image 4 based on different methods: (a) original image, (b) GW, (c) UCM, (d) IISR, (e) RCCB-GCS, (f) proposed RBCG-MDIF

Fig. 9 Processed images of turbid image 5 based on different methods: (a) original image, (b) GW, (c) UCM, (d) IISR, (e) RCCB-GCS, (f) proposed RBCG-MDIF

Table 3 Quantitative results in terms of entropy, PCQI, and NIQE

Images               Methods              Entropy   PCQI    NIQE
(a) Turbid image 1   Original             7.556     1.000   3.822
                     GW                   7.030     0.943   3.769
                     UCM                  7.665     1.196   3.849
                     IISR                 7.113     1.107   4.026
                     RCCB-GCS             7.559     1.209   3.700
                     Proposed RBCG-MDIF   7.917     1.256   3.747
(b) Turbid image 2   Original             7.600     1.000   7.112
                     GW                   6.987     0.858   6.578
                     UCM                  7.762     1.101   4.828
                     IISR                 5.431     0.698   4.725
                     RCCB-GCS             7.490     1.141   5.112
                     Proposed RBCG-MDIF   7.942     1.166   3.959
(c) Turbid image 3   Original             7.266     1.000   7.767
                     GW                   6.639     0.846   6.310
                     UCM                  7.391     1.131   4.696
                     IISR                 4.779     0.756   4.619
                     RCCB-GCS             7.180     1.179   4.888
                     Proposed RBCG-MDIF   7.858     1.221   4.359
(d) Turbid image 4   Original             6.713     1.000   4.996
                     GW                   6.075     0.992   4.344
                     UCM                  7.301     1.209   6.947
                     IISR                 4.856     0.973   4.615
                     RCCB-GCS             6.630     1.421   4.783
                     Proposed RBCG-MDIF   7.719     1.442   4.774
(e) Turbid image 5   Original             7.674     1.000   5.999
                     GW                   7.033     0.940   5.279
                     UCM                  7.863     1.155   4.711
                     IISR                 6.796     1.033   4.943
                     RCCB-GCS             7.691     1.132   4.975
                     Proposed RBCG-MDIF   7.951     1.202   4.445

The original turbid image 1 has low contrast, and the greenish color cast overshadows the actual color of the objects. Through comparison, GW produces a reddish output image that seems unnatural to the human visual system. Furthermore, this method insufficiently enhances the image contrast, as it produces the lowest values of entropy (7.030) and PCQI (0.943). UCM is able to reduce the greenish color cast; however, the bright region is occupied by a yellowish appearance. No major enhancement can be observed in the resultant image delivered by IISR, as this method further intensifies the greenish color cast; the high NIQE score (4.026) obtained by this method shows that the quality of its output is worse than the original image. RCCB-GCS is able to lessen the greenish color cast problem. However, based on quantitative analysis, this method obtains a low entropy value (7.559), almost identical to the original image (7.556). Meanwhile, the proposed RBCG-MDIF produces the best image quality, as the greenish color cast effect is extensively lowered. This better performance is also verified by the quantitative assessment in Table 3(a), as the proposed method obtains the highest scores for entropy and PCQI. For NIQE, the proposed method ranks second after RCCB-GCS; however, visual observation shows that the output image produced by the proposed method is better than that of RCCB-GCS, in whose output the greenish color cast remains in the background, as shown in Fig. 5(e).

Contrary to the previously tested image, the original turbid image 2 is affected by a strong greenish color cast that implicates the actual color of the objects. Instead of reducing the greenish color cast, GW introduces a reddish color cast in the output image, associating the true color of the objects with this effect. UCM is able to improve the image contrast; however, it produces a yellowish effect, especially in the background. Compared to the original image, the resultant image processed by IISR is worse: this method over-enhances the greenish effect, thus reducing the visibility of the objects. This outcome is supported by the quantitative analysis, where this method produces the lowest values of entropy (5.431) and PCQI (0.698). RCCB-GCS is able to improve the image contrast and reduce the greenish color cast, as the objects can be differentiated from the background. However, this method produces a large NIQE value (5.112), indicating poor image naturalness. On the other hand, the proposed RBCG-MDIF effectively reduces the greenish color cast, and the image contrast is also well improved. This notable accomplishment is verified by the quantitative assessment in Table 3(b), as the proposed RBCG-MDIF obtains the best values of entropy, PCQI, and NIQE: 7.942, 1.166, and 3.959, respectively.

Meanwhile, the original turbid image 3 is occupied by an intense greenish color cast, so the visibility of the objects is very limited. Through comparison, GW darkens the original image. This method also produces a high NIQE value (6.310), indicating poor naturalness of the processed image. UCM produces a yellowish effect in the output image, while the greenish color cast persists in the background. IISR degrades the original image further, as the greenish color cast excessively overshadows the output image. RCCB-GCS successfully reduces the greenish color cast to some extent; however, this effect is retained in the background. On the other hand, the proposed RBCG-MDIF produces better image features than the other methods, as the greenish color cast is significantly reduced and the objects can be seen clearly. This prominent performance is confirmed by the quantitative assessment in Table 3(c), as the proposed method obtains the best scores for all performance metrics. A similar trend can be observed in the other tested images, where the proposed RBCG-MDIF successfully reduces the greenish color cast and improves the image contrast.
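Of the three metrics, entropy is the simplest to make concrete. Below is a minimal sketch of the entropy computation for a single 8-bit grayscale channel; the paper's exact implementation (e.g. how RGB images are reduced to gray) is not specified, so this is illustrative only.

```python
import numpy as np

def shannon_entropy(gray):
    """Shannon entropy of an 8-bit grayscale image in bits per pixel.

    A higher value indicates an output image carrying more information,
    as used in the quantitative comparison above.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                      # empty bins contribute 0 * log 0 = 0
    return float(-(p * np.log2(p)).sum())
```

A constant image scores 0 bits, while an image using all 256 gray levels equally often scores the maximum of 8 bits, which is why the enhanced outputs above score higher than the low-contrast originals.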
Table 4 reports the average quantitative scores of 300 tested underwater images. Based on this table, the superior performance of the proposed method is further supported by this quantitative evaluation, as the proposed method attains the best rank for all performance metrics.

Table 4 Average quantitative results of 300 tested underwater images

Methods              Entropy   PCQI    NIQE
Original             7.064     1.000   4.244
GW                   6.607     0.976   4.801
UCM                  7.571     1.194   4.615
IISR                 7.258     1.148   3.959
RCCB-GCS             7.287     1.192   3.836
Proposed RBCG-MDIF   7.775     1.279   3.808

Note: The values in bold typeface represent the best result obtained in the comparison


6 Conclusion

The RBCG-MDIF method is specifically designed to solve turbid underwater image problems, especially to reduce the greenish color cast effect and to improve overall image contrast. This paper introduces a new idea to improve the RCCB step, considering the enhancements that need to be made to both the red and blue channels. The reference channel has been changed to the green channel, instead of the blue channel, for turbid underwater image enhancement. The capability of the proposed method in enhancing turbid underwater images is verified through qualitative and quantitative evaluation results.

Acknowledgements We would like to thank all reviewers for their comments and suggestions to improve this paper. This study is supported by Universiti Malaysia Pahang (UMP) through the Postgraduate Research Grant Scheme (PGRS1903184) entitled "Development of Underwater Image Contrast and Color through Optimization Algorithm".

References

1. Lu H, Li Y, Xu X, Li J, Liu Z, Li X, Yang J, Serikawa S (2016) Underwater image enhancement method using weighted guided trigonometric filtering and artificial light correction. J Vis Commun Image Represent 38:504–516
2. Buchsbaum G (1980) A spatial processor model for object colour perception. J Franklin Inst 310(1):1–26
3. Iqbal K, Odetayo M, James A, Salam RA, Talib AZH (2010) Enhancing the low quality images using unsupervised colour correction method. In: Proceedings of the IEEE international conference on systems, man and cybernetics, pp 1703–1709
4. Abdul Ghani AS, Raja Aris RSNA, Muhd Zain ML (2016) Unsupervised contrast correction for underwater image quality enhancement through integrated-intensity stretched-Rayleigh histograms. J Telecommun Electron Comput Eng 8(3):1–7
5. Azmi KZM, Ghani ASA, Md Yusof Z, Ibrahim Z (2019) Deep underwater image enhancement through integration of red color correction based on blue color channel and global contrast stretching. In: Md Zain Z et al (eds) Proceedings of the 10th national technical seminar on underwater system technology 2018. LNEE, vol 538. Springer, Singapore, pp 35–44
6. Jain AK (1989) Fundamentals of digital image processing. Prentice Hall, Englewood Cliffs
7. Mohd Azmi KZ, Abdul Ghani AS, Md Yusof Z, Ibrahim Z (2019) Deep underwater image enhancement through colour cast removal and optimization algorithm. Imaging Sci J 67(6):330–342
8. Mohd Azmi KZ, Abdul Ghani AS, Md Yusof Z, Ibrahim Z (2019) Natural-based underwater image color enhancement through fusion of swarm-intelligence algorithm. Appl Soft Comput J 85:1–19
9. Ye Z (2009) Objective assessment of nonlinear segmentation approaches to gray level underwater images. ICGST J Graph Vis Image Process 9(2):39–46
10. Wang S, Ma K, Yeganeh H, Wang Z, Lin W (2015) A patch-structure representation method for quality assessment of contrast changed images. IEEE Signal Process Lett 22(12):2387–2390
11. Mittal A, Soundararajan R, Bovik AC (2013) Making a "completely blind" image quality analyzer. IEEE Signal Process Lett 20(3):209–212

Analysis of Pruned Neural Networks (MobileNetV2-YOLO v2) for Underwater Object Detection

A. F. Ayob, K. Khairuddin, Y. M. Mustafah, A. R. Salisa, and K. Kadir

Abstract Underwater object detection involves identifying multiple objects within a dynamic and noisy environment. Such a task is challenging due to the inconsistency of moving shapes underwater (e.g. goldfish) within very dynamic surroundings (e.g. bubbles, miscellaneous objects). The application of pre-trained deep learning classifiers (e.g. AlexNet, ResNet, GoogLeNet and so on) as the backbone of several object detection algorithms (e.g. YOLO, Faster-RCNN and so on) has gained popularity in recent years; however, little systematic attention has been given to reducing the size of the pre-trained neural networks to speed up the object detection process in real-world applications. In this work, we investigate the effect of reducing the size of a pre-trained MobileNetV2 used as the backbone of the YOLOv2 object detection framework, in order to construct a fast, accurate and small neural network model that performs goldfish breed identification in real time.





Keywords Artificial neural network · Object detection · Underwater engineering · Ocean technology

A. F. Ayob (✉) · K. Khairuddin · A. R. Salisa
Faculty of Ocean Engineering Technology and Informatics, Universiti Malaysia Terengganu, 21030 Kuala Nerus, Malaysia
e-mail: [email protected]

Y. M. Mustafah
Department of Mechatronics Engineering, International Islamic University Malaysia, 50728 Kuala Lumpur, Malaysia

K. Kadir
Garisan Automotive Sdn. Bhd., Cyberjaya, Malaysia

© Springer Nature Singapore Pte Ltd. 2021
Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_7


1 Introduction

Deep learning is a branch of artificial neural networks concerned with developing models that act as universal function approximators based on training data. In the field of underwater object detection, such a function approximator/model can be constructed without prior knowledge such as the depth of the water, a map of the surroundings, underwater occlusion, or the temperature of the surroundings. The underwater object detection method presented in [1] utilized a combination of colour contrast, intensity and transmission information to identify the ROI in underwater images; however, unstable performance was reported in artificially illuminated environments. Sung et al. [2] utilized the You Only Look Once (YOLO) algorithm for underwater fish detection via transfer learning, adopting the original framework and training on their custom dataset; however, they reported a low frame rate (16.7 frames per second, FPS) on a GeForce Pascal Titan GPU. Xu and Matzner [3] utilized the third version of YOLO (YOLOv3) to perform underwater fish detection via transfer learning, however with a moderate mean average precision, mAP = 0.5392. This paper shall address two questions concerning the effectiveness of deep learning frameworks in real-life applications:

1. The effect of utilizing many layers of deep learning to solve for several classes within a dynamic underwater environment, with respect to detection time and model size.
2. Whether there is a need to utilize all the layers of a pre-trained deep learning model when it is used in a different situation.

2 Proposed Approach

2.1 You Only Look Once (YOLO) and YOLOv2

YOLO [4] is a single convolutional network that directly predicts object bounding boxes and class probabilities from full images in just one evaluation [5]. YOLO comes with its own benefits, one of which is that it is exceptionally fast. YOLO does not need a complex pipeline, as it models detection as a regression problem [4]. YOLO uses regression as its final detection layer, which maps the output of the last fully connected layer to the final bounding boxes and class assignments [6]. The network of YOLO consists of 24 convolutional layers followed by 2 fully connected layers [7], as shown in Fig. 1. Furthermore, YOLO reasons globally about the image when making predictions, resulting in fewer false positive predictions on the


Fig. 1 Original YOLO architecture [4]

background. In addition, YOLO learns general representations of objects, meaning that it is able to detect objects in natural images and also in other domains such as artwork.

YOLOv2 [8], also known as YOLO9000, is an improved version of YOLO that is able to detect over 9000 object categories. When compared to Fast R-CNN, YOLO tends to make a significant number of localization errors [8]. YOLO also suffers from low recall when compared to region proposal-based methods. In YOLOv2, anchor boxes are added to predict bounding boxes [9]. Anchor boxes prove to be effective, as they allow the detection of multiple objects of varying aspect ratios in a single grid cell. Furthermore, YOLOv2 introduces dimension clustering, a k-means clustering over bounding box dimensions used to parameterize the anchor boxes, which improves the mean Average Precision (mAP) of the detection.
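The dimension-clustering idea can be sketched as a tiny k-means over box widths and heights, using d = 1 − IoU as the distance so that box scale does not dominate. This is a toy re-implementation, not the YOLOv2 source; the IoU here assumes boxes share a corner, as in the original anchor clustering.

```python
import random

def iou_wh(a, b):
    """IoU of two boxes given as (w, h), both anchored at the origin."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    return inter / (a[0] * a[1] + b[0] * b[1] - inter)

def kmeans_anchors(boxes, k=2, iters=20, seed=0):
    """Dimension clustering as in YOLOv2: k-means over (w, h) pairs,
    assigning each box to the centroid with the highest IoU (i.e. the
    smallest 1 - IoU distance)."""
    rng = random.Random(seed)
    centroids = rng.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for box in boxes:
            i = max(range(k), key=lambda c: iou_wh(box, centroids[c]))
            clusters[i].append(box)
        # New centroid = mean width/height of the cluster; keep the old
        # centroid if a cluster ends up empty.
        centroids = [
            (sum(w for w, _ in cl) / len(cl), sum(h for _, h in cl) / len(cl))
            if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

# Two obvious scale groups: small boxes (~10 px) and large boxes (~100 px).
boxes = [(9, 10), (10, 9), (11, 10), (10, 11),
         (95, 100), (100, 95), (105, 100), (100, 105)]
print(sorted(kmeans_anchors(boxes, k=2)))
```

With a Euclidean distance, the large boxes would dominate the clustering; the 1 − IoU distance treats both scale groups fairly, which is why YOLOv2 adopts it.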

2.2 MobileNet and MobileNetV2 Algorithm

MobileNet is based on depth-wise separable convolutions [10], each made up of a depth-wise convolution and a 1 × 1 point-wise convolution, as shown in Fig. 2. Essentially, it performs a single convolution on each colour channel rather than combining all three and flattening them. MobileNet models show a large accuracy gap against their floating-point counterparts despite successfully reducing parameter size and computation latency with separable convolutions [11]. In MobileNetV2, bottleneck convolutions are utilized [12]. The ratio between the size of the input and the inner size is referred to as the expansion ratio. Each bottleneck block contains an input followed by several bottlenecks. Shortcuts are used directly between the bottlenecks because the bottlenecks contain all the


Fig. 2 MobileNet architecture [12]

Fig. 3 Two types of bottleneck blocks incorporated in MobileNetV2 [12]

necessary information, while an expansion layer only acts as an implementation detail that accompanies a non-linear transformation of the tensor, as shown in Fig. 3. Instead of the classical residual block, which connects the layers with a high number of channels, inverted residuals are used, connecting the bottlenecks. The inverted design is considerably more memory efficient and works slightly better. The pre-trained MobileNetV2 incorporates a 16-block architecture; the 16-block pre-trained MobileNetV2 model can be obtained from [13].
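The saving that depth-wise separable convolutions buy can be made concrete with a parameter count. This is a sketch; the layer sizes are arbitrary examples and biases are ignored.

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Weights in a depth-wise k x k convolution followed by a 1 x 1
    point-wise convolution, the building block MobileNet is based on."""
    return k * k * c_in + c_in * c_out

standard = conv_params(3, 64, 128)                   # 3*3*64*128 = 73,728
separable = depthwise_separable_params(3, 64, 128)   # 576 + 8,192 = 8,768
print(f"separable/standard = {separable / standard:.3f}")
```

The separable block needs roughly 1/N + 1/k² of the weights of a standard convolution with N output channels and a k × k kernel, which is where MobileNet's small model size and low latency come from.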

2.3 Evaluation of Models

In order to evaluate the models, several evaluation metrics have been utilized, namely Precision, Recall, Average Precision (AP) and mean Average Precision (mAP). From a human perspective, such metrics aim to evaluate the skill of the model with respect to its capability to mimic a human's capability in the detection task. Given a number of queries:

a) Precision is defined as the ratio of true positive items detected to the sum of all items detected as positive, shown in Eq. (1).

$$precision = \frac{true\ positives}{true\ positives + false\ positives} \qquad (1)$$

b) Recall is defined as the ratio of true positive items to the sum of the true positive and false negative items identified by the detector, relative to the ground truth data, shown in Eq. (2).

$$recall = \frac{true\ positives}{true\ positives + false\ negatives} \qquad (2)$$

c) Average Precision (AP) is defined as the area under the curve based on the calculation of Precision and Recall across the given queries. In this work, the mean Average Precision (mAP) is calculated for each model across a number of classes, as shown in Eq. (3).

$$mAP = \frac{\sum_{class=1}^{nClass} AP_{class}}{nClass} \qquad (3)$$
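Equations (1)–(3) translate directly into code. The counts below are hypothetical, chosen only to mirror a checkpoint frame in which 8 of 11 fish are found; the per-class AP values are of the same magnitude as the Block 1 PR curves.

```python
def precision_recall(tp, fp, fn):
    """Precision (Eq. 1) and recall (Eq. 2) from detection counts."""
    return tp / (tp + fp), tp / (tp + fn)

def mean_average_precision(ap_per_class):
    """mAP (Eq. 3): the mean of the per-class average precisions."""
    return sum(ap_per_class) / len(ap_per_class)

# Hypothetical query: 8 true detections, 1 spurious box, 3 missed fish.
p, r = precision_recall(tp=8, fp=1, fn=3)

# Six per-class APs, one per goldfish breed.
m = mean_average_precision([0.9, 0.8, 0.9, 0.9, 1.0, 0.9])
print(round(p, 3), round(r, 3), round(m, 3))  # 0.889 0.727 0.9
```

Note that a detector can trade one metric for the other (fewer boxes raise precision but lower recall), which is why the area-under-curve AP and its mean over classes are the headline numbers in Table 1.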

2.4 Data Preparation

A four-minute free-swimming goldfish video was prepared in the lab under a controlled lighting setup, as shown in Fig. 4. This setup is adequate to simulate a real-world application, where bubbles and other uncontrolled movement are tolerated. The frame-by-frame images of the four-minute video were extracted, resulting in 11,444 images. The images were split into a training set (60%) and a validation set (40%). The training set was annotated/labelled with respect to the goldfish breeds prior to the training of the YOLOv2 deep learning model.
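A sketch of the 60%–40% split is below. Random shuffling with a fixed seed is our assumption, since the paper does not state how frames were assigned to the two sets; the frame filenames are invented for illustration.

```python
import random

def split_frames(frame_names, train_fraction=0.6, seed=0):
    """Shuffle extracted frame names and split them into training and
    validation sets (60%-40% by default, as described in the text)."""
    names = list(frame_names)
    random.Random(seed).shuffle(names)   # fixed seed for reproducibility
    cut = int(len(names) * train_fraction)
    return names[:cut], names[cut:]

frames = [f"frame_{i:05d}.jpg" for i in range(100)]
train, val = split_frames(frames)
print(len(train), len(val))  # 60 40
```

Shuffling before splitting avoids putting all early-video frames in training and all late-video frames in validation, which matters because adjacent frames of a video are nearly identical.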


Fig. 4 QR-code link to the video results of the 6-class goldfish breed detection/identification [14]

3 Results and Discussions

The experiments were conducted with the pre-trained MobileNetV2 model acting as the backbone of the YOLOv2 detection framework. The initial pre-trained model consists of 16 building blocks; in each experiment, the number of blocks was systematically reduced by one (n−1) for each new training session. Each training was conducted across 30 epochs, with a mini-batch size of 16 and stochastic gradient descent as the optimizer. The specification of the machine is an Intel i7 (8th generation) CPU, 16 GB of RAM and an RTX 2060 GPU with 6 GB of VRAM. The deep learning models were trained using a 5,833-image annotated goldfish dataset consisting of 6 classes of goldfish breeds: Calico Goldfish, Blackmoor Goldfish, Common Goldfish, Lionhead Goldfish, Ryukin Goldfish and Pearlscale Goldfish. The time taken for each experiment to complete was approximately 4 h. Each newly trained model was analyzed qualitatively (via videos) and quantitatively (Table 1) to measure its effectiveness.

The first step in evaluating the models is to observe the precision-recall (PR) curve. The graphs representing the precision-recall curves are presented in Figs. 8, 9 and 10. Across all the models (Block 1 to 16), the character of the PR curves is almost similar, which indicates the consistency of the training. In this work, Block 1, Block 8 and Block 16 were selected as indicative typical representations. It can be observed that the models are able to perform with high precision, even at a recall threshold of 0.5. A video representation of the results can be accessed via the link provided in the QR code shown in Fig. 4. The effectiveness of the detection model can be observed qualitatively in the web-based demonstration and is further elaborated in this section. Shown in Figs. 5, 6 and 7 are snapshots at time t = 1:28 min (the 'checkpoint') for the three models trained on the respective feature layers named Block 1, Block 8 and Block 16. Considering the whole 16 blocks that build the pre-trained MobileNetV2, Block 1 represents 1/16 (6%) of


Fig. 5 Video snapshot of the t = 1:28 min of the detection using Block 1 model

Fig. 6 Video snapshot of the t = 1:28 min of the detection using Block 8 model

the original pre-trained model, while Block 8 represents 8/16 (50%) and Block 16 represents the whole (100%) original pre-trained model. Referring to the figures, it can be observed that at the checkpoint of t = 1:28 min, Block 1 was able to detect 8 out of 11 goldfish in the aquarium, Block 8 was able to detect all of the goldfish, and Block 16 was able to detect 8 out of 11. This qualitative observation is closely related to the mAP of each model as reported in Table 1, where Block 8 has the highest mAP compared with Block 1 and Block 16.


Fig. 7 Video snapshot of the t = 1:28 min of the detection using Block 16 model

Further inspection of Table 1 indicates that Block 16, with about 3.6 million parameters, contributed the longest detection time, resulting in an average of 12.53 frames per second, compared with Block 1 (17,328 parameters), which performed the fastest detection at a rate of 56.64 frames per second. A more reasonable frame rate (~24 frames per second) for this case, with mAP close to ~97%, can be attributed to Block 8, Block 9 and Block 10, as shown in Table 1. In terms of possible extensions or future work, for a non-critical, non-life-threatening application, such a reduction of model size and parameters is beneficial for mobile-based high-speed detection tasks such as the one presented in this paper.
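The trade-off described above can be expressed as a simple constrained selection over the rows of Table 1. The ≥ 24 FPS "real-time" threshold is our illustrative assumption, not a criterion stated by the authors, and only a subset of rows is reproduced here.

```python
# Selected rows from Table 1: (model name, mAP %, mean FPS, size in MB).
models = [
    ("Block 16", 95.05, 12.53, 13.594),
    ("Block 10", 96.89, 23.21, 1.843),
    ("Block 9",  96.79, 24.13, 1.246),
    ("Block 8",  96.43, 25.06, 1.035),
    ("Block 1",  89.53, 56.64, 0.084),
]

# Keep models that sustain roughly real-time video (>= 24 FPS), then
# pick the most accurate among them.
candidates = [m for m in models if m[2] >= 24.0]
best = max(candidates, key=lambda m: m[1])
print(best[0])  # Block 9
```

Changing the threshold changes the answer (a 20 FPS budget would admit Block 10, the most accurate of the fast models), which is exactly the kind of application-driven tuning the paragraph above argues for.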

Table 1 Quantitative observation of the trained model across different evaluation metrics. Highlighted are the most reasonable models with respect to its mAP, FPS and size Model name Block Block Block Block Block Block Block Block

16 15 14 13 12 11 10 9

Total number of parameters (x105)

Mean average precision (mAP) (%)

Mean frame per second (FPS)

Size of model (MB-decimal)

36.0284 17.478 14.3692 11.2604 6.782 5.654 4.526 2.95896

95.05 94.66 94.41 94.44 97.39 97.08 96.89 96.79

12.53 12.61 12.75 12.74 13.80 15.04 23.21 24.13

13.594 6.787 5.596 4.405 2.738 2.29 1.843 1.246 (continued)

Analysis of Pruned Neural Networks (MobileNetV2-YOLO v2)…


Table 1 (continued)

Model name | Total number of parameters (x10^5) | Mean average precision (mAP) (%) | Mean frame per second (FPS) | Size of model (MB-decimal)
Block 8    | 2.45272 | 96.43 | 25.06 | 1.035
Block 7    | 1.94648 | 96.65 | 30.33 | 0.825
Block 6    | 1.44024 | 95.99 | 32.35 | 0.615
Block 5    | 0.67928 | 95.83 | 33.57 | 0.322
Block 4    | 0.54904 | 95.46 | 35.47 | 0.26
Block 3    | 0.4188  | 94.86 | 37.37 | 0.198
Block 2    | 0.24792 | 91.77 | 50.90 | 0.123
Block 1    | 0.17328 | 89.53 | 56.64 | 0.084

Black Moor Goldfish AP = 0.9; Calico Goldfish AP = 0.8 (precision-recall curve panels)

Common Goldfish AP = 0.9; Lionhead Goldfish AP = 0.9; Ryukin Goldfish AP = 1.0; Pearlscale Goldfish AP = 0.9 (precision-recall curve panels; axes: precision vs recall)

Fig. 8 Precision-recall graph for Block 1

Black Moor Goldfish AP = 1.0; Calico Goldfish AP = 1.0; Common Goldfish AP = 1.0; Lionhead Goldfish AP = 0.9; Ryukin Goldfish AP = 1.0; Pearlscale Goldfish AP = 1.0 (precision-recall curve panels; axes: precision vs recall)

Fig. 9 Precision-recall graph for Block 8

4 Conclusions

In this work, we have presented a case study investigating the effect of reducing the neural network layers of the original MobileNetV2 from a '16 Blocks' to a '1 Block' architecture. The decrease in the number of layers corresponds to a reduction in learnable parameters from roughly 3.6 million to 17,328 in the deep learning neural net. Important observations regarding the effect of reducing the number of layers include the significant speed-up of the detection process, amounting to a 78% reduction in per-frame detection time, from ~12 fps to ~56 fps. The mean Average Precision (mAP) was observed to be 89% when utilizing only 'Block 1', compared with 95% mAP when utilizing the whole 16 blocks of MobileNetV2. Furthermore, a 99% model size shrinkage was achieved between 'Block 16' (13.594 MB) and 'Block 1' (0.084 MB), asserting that reducing the number of layers is also beneficial for real-world mobile-based model architectures while maintaining satisfactory accuracy.

Black Moor Goldfish AP = 1.0; Calico Goldfish AP = 1.0; Common Goldfish AP = 0.9; Lionhead Goldfish AP = 0.9; Ryukin Goldfish AP = 1.0; Pearlscale Goldfish AP = 1.0 (precision-recall curve panels; axes: precision vs recall)

Fig. 10 Precision-recall graph for Block 16

Acknowledgements Parts of this research were sponsored under the Fundamental Research Grant Scheme (FRGS) 59361 awarded by the Ministry of Education Malaysia, and the Research Intensified Grant Scheme (RIGS) 55192/12 awarded by Universiti Malaysia Terengganu.

References

1. Chen Z, Zhang Z, Dai F, Bu Y, Wang H (2017) Monocular vision-based underwater object detection. Sensors (Basel) 17(8):1784
2. Sung M, Yu S, Girdhar Y (2017) Vision based real-time fish detection using convolutional neural network. In: OCEANS 2017, Aberdeen, pp 1-6
3. Xu W, Matzner S (2018) Underwater fish detection using deep learning for water power applications. arXiv preprint arXiv:1811.01494
4. Redmon J, Divvala S, Girshick R, Farhadi A (2016) You only look once: unified, real-time object detection. In: IEEE conference on computer vision and pattern recognition, pp 779-788
5. Jing L, Yang X, Tian Y (2018) Video you only look once: overall temporal convolutions for action recognition. J Visual Commun Image Rep, pp 58-65
6. Putra MH, Yussof ZM, Lim KC, Salim SI (2018) Convolutional neural network for person and car detection using YOLO framework. J Telecommun Electron Comput Eng 10:1-7


7. Du J (2018) Understanding of object detection based on CNN family and YOLO. J Phys Conf Ser, pp 12-29
8. Redmon J, Farhadi A (2017) YOLO9000: better, faster, stronger. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 7263-7271
9. Shafiee MJ, Chywl B, Li F, Wong A (2017) Fast YOLO: a fast you only look once system for real-time embedded object detection in video. arXiv preprint arXiv:1709.05943
10. Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, Andreetto M, Adam H (2017) MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861
11. Sheng T, Feng C, Zhuo S, Zhang X, Shen L, Aleksic M (2018) A quantization-friendly separable convolution for MobileNets. In: 1st workshop on energy efficient machine learning and cognitive computing for embedded applications (EMC2), pp 14-18
12. Sandler M, Howard A, Zhu M, Zhmoginov A, Chen L (2018) MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4510-4520
13. Mathworks Inc. (2019) Pretrained MobileNet-v2 convolutional neural network. https://www.mathworks.com/help/deeplearning/ref/mobilenetv2.html. Accessed 14 Nov 2019
14. Ayob AF (2019) MobileNet(v2)-YOLOv2 Goldfish Detection. https://www.youtube.com/playlist?list=PLyM-KBafTfgicwqAhpa9a8HSv2TSHV3fZ. Accessed 21 July 2019

Different Cell Decomposition Path Planning Methods for Unmanned Air Vehicles-A Review

Sanjoy Kumar Debnath, Rosli Omar, Susama Bagchi, Elia Nadira Sabudin, Mohd Haris Asyraf Shee Kandar, Khan Foysol, and Tapan Kumar Chakraborty

Abstract An Unmanned Aerial Vehicle (UAV) or robot is guided towards its goal through path planning, which helps it avoid obstacles. Path planning generates a path between given start and end points so that the robot reaches its goal safely while satisfying required criteria. A number of path planning methods are available, such as bio-inspired, sampling-based, and combinatorial methods. The cell decomposition technique, one of the combinatorial methods, can be represented in configuration space. The aim of this paper is to study the results obtained in earlier research in which the cell decomposition technique has been used with different criteria such as shortest travelled path, minimum computation time, memory usage, safety, completeness, and optimality. The studied methods are classified based on the classical taxonomy.

Keywords Path planning · Cell decomposition · Regular grid · UAV

S. K. Debnath · R. Omar (corresponding author) · S. Bagchi · E. N. Sabudin · M. H. A. Shee Kandar
Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia
e-mail: [email protected]

K. Foysol
Department of Allied Engineering, Bangladesh University of Textiles, Dhaka, Bangladesh

T. K. Chakraborty
Department of Electrical and Electronics Engineering, University of Asia Pacific, Dhaka, Bangladesh

© Springer Nature Singapore Pte Ltd. 2021
Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_8

1 Introduction

The use of unmanned air vehicles or autonomous robots in place of human beings to carry out dangerous missions in adverse environments has gradually increased over the last few decades. Path planning is one of the vital aspects of developing an autonomous vehicle, which should traverse the shortest distance from a starting point to a target point during a given mission in order to save its resources and minimize

Path Planning Approaches: Combinatorial (C-space representation: roadmap (visibility graph, Voronoi diagram), cell decomposition, potential field; graph search algorithms: depth-first, breadth-first, best-first, A*, Dijkstra's, D*, M*); Sampling-based (RRT, probabilistic roadmap); Biologically inspired (evolutionary algorithms: genetic algorithm, differential evolution; swarm intelligence: particle swarm optimization, ant colony optimization; simulated annealing; ecology-based)

Fig. 1 Classification of path planning approach [8]

the potential risks. Therefore, it is crucial for a path planning algorithm to produce an optimal path. The path planning algorithm should also satisfy the completeness criterion, which means that a path can be found if one exists. Moreover, the robot's safety, the memory usage for computation and real-time operation are also significant [1-7]. Figure 1 illustrates the classification of path planning approaches.

The bio-inspired methods are nature-motivated, biologically inspired algorithms. Instances of bio-inspired approaches are the Genetic Algorithm (GA), Simulated Annealing (SA), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO). GA uses the natural selection process of biological evolution, repeatedly modifying a population of candidate solutions. Nonetheless, it cannot guarantee an optimal path. Local minima may occur in narrow environments; thus it offers less safety and suffers from the narrow-corridor difficulty. GA is computationally costly and is ultimately not complete [8]. The SA algorithm is modelled on the heating and cooling process of metals, which regulates the internal configuration of their properties. Apart from being very slow and having very high-cost functions, SA is not able to guarantee the optimal path [9-15]. PSO is a meta-heuristic population-based approach with real-time capability, but it falls into local optima easily in many optimization problems. Additionally, there is no general convergence theory applicable to PSO in practice, and its convergence time is mostly unknown for multidimensional problems [16]. On the other hand, ACO emulates ants marking a path once a food source is confirmed: an ant marks its route towards the food source with pheromones for tracing purposes. In ACO, the path between the initial point and the target point is produced at random. ACO performs a blind exploration and is therefore not suitable for efficient path planning, owing to the lack of optimal results [13, 17].

In sampling-based path planning, the Rapidly-exploring Random Tree (RRT) method does not require an explicit construction of the configuration space. In RRT, the first step is to define the starting and target points. Then, the starting point is considered as


the base of the tree, from which new branches are grown until the tree reaches the target point [10, 11]. RRT is a simple and easy way to handle problems with obstacles and various constraints in autonomous robotic motion planning. The computation time escalates with the size of the generated tree, and the path produced by RRT is not always optimal. Nonetheless, it remains fairly easy to find a path for a vehicle with dynamic and physical constraints, and RRT also creates the least number of edges [18, 19]. The Probabilistic Roadmap (PRM) method is a path-planning algorithm that takes random samples from the configuration space, examining the accessible free space while avoiding collisions. A local planner is used to join these configurations with nearby configurations. PRM is costly and offers no guarantee of finding a path [18, 19]. Combinatorial path planning consists mainly of two steps, i.e. the C-space representation technique and a graph search algorithm. The first step is to create the configuration space of the environment. Then, a graph search algorithm, for example Dijkstra's or A-star (A*), is applied to search for a path [7, 20]. Depth-first search (DFS) is good for picking a path among many possibilities without caring about which exact one; it may be less appropriate when there is only one solution. DFS is advantageous because a solution can be found without computing all nodes [7]. Breadth-first search, which is suitable when few solutions exist, uses a comparatively small number of steps. A notable property is that it finds the shortest path from the source node to any node it visits for the first time whenever all the graph's edges are un-weighted or have equal weight. Breadth-first search is complete: it finds a path if one exists.
Breadth-first search is also robust because it does not get trapped in dead ends [21]. Best-first search, by contrast, does not guarantee discovery of the shortest path because it bypasses some branches in the search tree; it is a greedy search which is neither complete nor optimal. Dijkstra's algorithm is a systematic search algorithm that gives the shortest path between two nodes. It does not rely on prior knowledge of the graph and cannot estimate the distance between each node and the target. Usually, a large area of the graph is covered by Dijkstra's algorithm because it selects the minimum-cost edge at every step; thus it is well suited to situations with multiple target nodes when the closest one is not known in advance [22]. A* is less suitable for multiple target nodes because it needs to be executed once for each target to reach them all. A* expands a node only if it seems promising: it aims only to reach the target from the current node at the earliest and does not attempt to reach any other node. A* is complete because it always finds a path if one exists. By modifying the heuristics and node-evaluation tactics of A*, other path-finding algorithms can be developed [23].

The configuration space gives complete information about the location of all points in the coordinate system and is the space of all configurations: the real free-space area for the motion of an autonomous vehicle, which guarantees that the vehicle does not crash into obstacles. An illustration of a C-space for a circular vehicle is shown in Fig. 2. It treats the robot as a point and enlarges the area of the obstacles so that planning can be completed in a more capable way. The C-space is obtained by adding the vehicle radius while sliding it along the edges of the obstacles and the border of the



Fig. 2 A scenario represented in (a) original form and (b) configuration space. Note that the darker rectangles in (a) have their actual dimensions, while those in (b) are enlarged according to the size of robot A. The white areas represent free space

search space. In Fig. 2(a), the obstacle-free area is represented by the white region inside the closed area, and the robot is represented by A. On the other hand, when the workspace is converted into the C-space, as shown in Fig. 2(b), the free space is reduced while the obstacles' area is inflated. Hence, the C-space indicates the real free-space region for the motion of an autonomous or unmanned vehicle and ensures that the vehicle does not collide with obstacles.
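The inflation step just described can be sketched on a toy occupancy grid (hypothetical grid and radius; `inflate` is an illustrative name, not from the paper): every obstacle cell is dilated by the robot's radius so that the robot can thereafter be planned for as a point.

```python
import math

def inflate(grid, radius):
    """Return a C-space grid: every cell within `radius` cells of an
    obstacle (1) is also marked occupied, so the robot can be treated
    as a point."""
    rows, cols = len(grid), len(grid[0])
    cspace = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                for dr in range(-radius, radius + 1):
                    for dc in range(-radius, radius + 1):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < rows and 0 <= cc < cols
                                and math.hypot(dr, dc) <= radius):
                            cspace[rr][cc] = 1
    return cspace

workspace = [[0, 0, 0, 0, 0],
             [0, 0, 1, 0, 0],
             [0, 0, 0, 0, 0]]
print(inflate(workspace, 1))  # the single obstacle cell grows by one cell in each cardinal direction
```

This mirrors Fig. 2: the free space shrinks exactly as much as the obstacles grow.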

2 Cell Decomposition (CD) Method

Cell decomposition (CD) is a very useful method, especially in outdoor environments. In CD, the C-space is first divided into simple, connected regions called cells. The cells may be of rectangular or polygonal shape and are discrete, non-overlapping and contiguous. If a cell contains an obstacle, it is identified as occupied; otherwise it is obstacle-free. A connectivity graph is constructed at



Fig. 3 Classification of cell decomposition method

that point to link the adjacent cells [42]. There are several variations of CD, including Regular Grid (RG), Adaptive Cell Decomposition (ACD) and Exact Cell Decomposition (ECD) [22]. The classification of CD is shown in Fig. 3.

2.1 Regular Grid (RG)

The regular grid (RG) technique was introduced by Brooks and Lozano-Perez [24] to find a collision-free path for an object moving through cluttered obstacles. In general, an RG can be constructed by laying a regular grid over the configuration space. As the shape and size of the cells in the grid are predefined, RG is easy to apply. RG samples the domain and marks up the graph to record whether each space is occupied, unoccupied or partially occupied. A cell is marked as an obstacle if an object or part of one occupies it; otherwise it remains free space. A node is located in the middle of every free cell within the C-space, and the connectivity graph is then constructed from all the nodes. Path planning using RG is illustrated in Fig. 4, where the path connecting the starting and target points is shown by a solid yellow line. RG methods are popular because they are very easy to apply to a C-space and are also flexible. The computation time can be reduced by increasing the cell size; conversely, the cell size can be made smaller to provide more detailed information and completeness. Although RG is easy to apply, the method has drawbacks. Firstly, it suffers from digitization bias: an obstacle much smaller than the cell dimension still marks the whole grid square as occupied. Consequently, a traversable space may be considered impenetrable by the planner, as illustrated in Fig. 4(b). Furthermore, if the cells are too big (hence the grid resolution is too coarse), the planner may not be complete.
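The RG pipeline above (free cells become graph nodes, adjacent free cells become edges, then a graph search runs over them) can be sketched with a breadth-first search over a small hypothetical grid; `grid_path` is an illustrative name, not the authors' implementation:

```python
from collections import deque

def grid_path(grid, start, goal):
    """Breadth-first search over the connectivity graph of free (0) cells,
    4-connected, returning the cell sequence or None if no path exists."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct the path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            rr, cc = nxt
            if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == 0 and nxt not in prev:
                prev[nxt] = cell
                queue.append(nxt)
    return None                               # no free-cell path at this resolution

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(grid_path(grid, (0, 0), (0, 2)))  # routes around the occupied middle column
```

Because all edges have equal weight, breadth-first search returns a shortest cell sequence, matching the property discussed in the introduction.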



Fig. 4 a Configuration Space obstacles b Obstacles represented by Regular Grid techniques. Note that the drivable area is considered impenetrable
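The digitization bias illustrated in Fig. 4 is easy to reproduce. In the sketch below (hypothetical coordinates, cell sizes and function name), the same point-like obstacle blocks a quarter of a coarse map but under 2% of a fine one:

```python
def rasterize(obstacles, cell_size, extent=8.0):
    """Mark a cell occupied (1) if any obstacle point falls inside it."""
    n = int(extent / cell_size)
    grid = [[0] * n for _ in range(n)]
    for x, y in obstacles:
        grid[int(y / cell_size)][int(x / cell_size)] = 1
    return grid

obstacle = [(3.1, 3.1)]                      # one small (point-like) obstacle

coarse = rasterize(obstacle, cell_size=4.0)  # 2 x 2 grid over an 8 m x 8 m workspace
fine = rasterize(obstacle, cell_size=1.0)    # 8 x 8 grid over the same workspace

# The same obstacle blocks 1 of 4 coarse cells (25% of the map) but only
# 1 of 64 fine cells, so a coarse planner may declare a traversable
# region impenetrable.
print(sum(map(sum, coarse)), "/ 4 coarse cells occupied")
print(sum(map(sum, fine)), "/ 64 fine cells occupied")
```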

2.2 Adaptive Cell Decomposition (ACD)

Adaptive cell decomposition (ACD) is built using a quad-tree, unlike RG. The cells of a quad-tree are identified either as free cells, which contain no obstacles; as obstacle cells, which are fully occupied; or as mixed cells, which contain both free space and obstacles. Mixed cells are recursively sub-divided into four identical sub-cells until the resulting cells contain no obstacle region or the smallest allowed cells are produced [25]. ACD maintains as much detail as possible while the regular shape of the cells is preserved, and it removes the digitization bias of RG. An ACD representation employed for path planning is depicted in Fig. 5; the collision-free path connecting the starting point (Start) and target point (Goal) is depicted by a solid yellow line.
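The recursive subdivision described above can be sketched as a minimal quad-tree, assuming a hypothetical `classify` predicate that reports whether a square cell is free, an obstacle, or mixed (all names here are illustrative, not from the paper):

```python
def quadtree(classify, x, y, size, min_size=1.0):
    """Recursively decompose the square cell (x, y, size).
    classify(x, y, size) -> 'free' | 'obstacle' | 'mixed'."""
    status = classify(x, y, size)
    if status != "mixed" or size <= min_size:
        return (x, y, size, status)           # leaf cell
    half = size / 2                           # split a mixed cell into 4 sub-cells
    return [quadtree(classify, cx, cy, half, min_size)
            for cx, cy in ((x, y), (x + half, y), (x, y + half), (x + half, y + half))]

def leaves(node):
    """Flatten the tree into its leaf cells."""
    if isinstance(node, tuple):
        return [node]
    return [leaf for child in node for leaf in leaves(child)]

def classify(x, y, size):
    """Hypothetical environment: one obstacle occupying [0, 1) x [0, 1)."""
    if x + size <= 0 or x >= 1 or y + size <= 0 or y >= 1:
        return "free"
    if x >= 0 and x + size <= 1 and y >= 0 and y + size <= 1:
        return "obstacle"
    return "mixed"

cells = leaves(quadtree(classify, 0.0, 0.0, 4.0))
print(len(cells))  # 7 leaf cells, versus 16 cells in a uniform grid of the same resolution
```

The saving in cell count is exactly the memory advantage of ACD discussed in Sect. 3.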


Fig. 5 Path planning using quad-tree

2.3 Exact Cell Decomposition (ECD)

Another variant of CD is the Exact Cell Decomposition (ECD) method, which uses two-dimensional cells to resolve certain dilemmas associated with regular grids. The sizes of the cells are not pre-determined; they are decided based on the location and shape of the obstacles in the C-space [26]. The cell boundaries are determined exactly by the boundaries of the C-space, and the union of the cells constitutes the free space. Therefore, ECD is complete: it always finds a path if one exists. ECD is shown in Fig. 6, where the path connecting the starting (Start) and target (Goal) points is shown as a solid yellow line. An Opposed Angle-Based Exact Cell Decomposition has been suggested for the mobile robot path-planning problem with curvilinear obstacles, yielding a more natural, collision-free and efficient path [27].



Fig. 6 Path planning using exact cell decomposition

To date, many researchers have used cell decomposition-based methods to solve path planning problems. In [28], the researchers recommended three novel formulations to construct a piecewise linear path for an unmanned/autonomous vehicle when a cell decomposition planning method is used. Other trajectories were obtained via path planning algorithms by varying the cell decomposition involved, the graph weights, and the technique used to calculate the waypoints [29]. A combined algorithm was developed from cell decomposition and a fuzzy algorithm to create a map of the robot's path [30]. Another technique suggested an optimal route generation scheme in which the global obstacle-avoidance problem was decomposed into simpler sub-problems corresponding to distinct path homotopies; this motivated a method for using current cell-decomposition techniques to enumerate and represent local trajectory generation problems for proficient, autonomous resolution [31]. Parsons and Canny [32] used a cell decomposition-based algorithm for path planning of multiple mobile robots sharing the same workspace. The algorithm computed a path for each robot and was capable of avoiding obstacles and


other robots. The cell decomposition algorithm was based on the idea of a product operation defined on the cells in a decomposition of a 2D free space. However, the developed algorithm was only useful when infrequent changes occurred in the obstacle set. Chen et al. [8] introduced the framed quad-tree to create a map in order to solve the problem of finding a conditional shortest path through a new environment in real time; the conditional shortest path is the shortest among all possible paths based on the known environmental information. The path was found using a propagated circular path-planning wave based on a graph search algorithm [33]. Jun and D'Andrea [34] used an approximate cell decomposition-based method to accomplish a robot path planning task. The proposed approach used initial information on the locations and shapes of the obstacles; the method decomposed the region into uniform cells and changed the probability values when detecting unexpected changes during the mission. A search algorithm was used to find the shortest path. One drawback of this method is that if a penalty is imposed on accelerations and decelerations, the graph becomes a tree and expands exponentially with the number of cells, making the search very slow. Lingelbach [35] applied the so-called Probabilistic Cell Decomposition (PCD) method for path planning in a high-dimensional static C-space, owing to its easy scalability. Experimental results showed that the performance of PCD was acceptable in numerous circumstances for path planning of rigid-body movement, such as maze-like problems and chain-like robotic platforms. However, PCD performance degraded when the free space was small compared to the area of the C-space. Zhang et al. [36] utilised ACD for robot path planning, subdividing the C-space into cells; localised roadmaps were then computed by generating samples within these cells. Since the complexity of ACD increases with the number of degrees of freedom (DOF) of the robot, it is not practical for higher-DOF robots. Arney [37] implemented an ACD path planning approach in which efficiency was attained by using a method from Geographic Information Systems (GIS) known as tesseral addressing. Each cell was labelled during the decomposition process with an address that defined the cell's size, position and neighbours' addresses. The planner had a priori information about the environment, and the generated path had an optimal distance from the unmanned/autonomous vehicle's present location to the target location. It is suitable for real-time path planning applications.

3 Discussion on Different Cell Decomposition Methods

The benefit of CD is that it provides assurance of finding a collision-free path, if one exists, and it is controllable. It is therefore a complete algorithm for an unmanned or autonomous vehicle, which can travel the path without the risk of encountering local minima [38]. Yet the shortcoming of CD is that if the formed cells are too coarse, it will not be feasible to achieve the smallest path distance or length; conversely, if the cells are too small, the computation is more time-consuming


Table 1 Comparison of different cell decomposition methods (✓ = satisfied, ✗ = not satisfied)

Method | Optimal path | Computational time | Real time | Memory | Safety | Completeness
RG     | ✗            | ✓                  | ✗         | ✗      | ✓      | ✗
ACD    | ✗            | ✓                  | ✗         | ✓      | ✓      | ✓
ECD    | ✗            | ✗                  | ✗         | ✓      | ✗      | ✓

[1, 39, 40]. The CD approach also does not provide acceptable performance in dynamic and real-time circumstances [10, 38, 39]. CD needs to be fine-tuned to the situation as necessary; e.g. in exact CD, the cells are not predefined but are selected based on the site and shape of the obstacles inside the C-space [41]. Although RG is easy to apply, the planner may not be complete if the cells are too big, i.e. finding a path where one exists is not guaranteed. If an obstacle is significantly smaller than the cell size, the entire grid square is still marked as occupied rather than obstacle-free. One more drawback of RG is that it represents the C-space inefficiently: in sparse areas many same-sized cells are required to fill the empty space. As a result, planning is costly because more cells are handled than are actually required. The outcome of ACD is a map that holds grid cells of different sizes, concentrating the cell boundaries to match the obstacles' boundaries closely. It produces fewer cells, so the C-space can be used more efficiently and, hence, less memory and processing time are required. ACD maintains maximum detail while the regular shape of the cells is maintained. ECD is complete; still, the paths generated via ECD are not optimal in path length, and there is no simple rule to decompose a space into cells. This method is not suitable for outdoor environments, where obstacles are often poorly defined and of irregular shape (Table 1).

4 Conclusion

The results from earlier research on several path planning algorithms based on cell decomposition methods are compared in this study, in which the nature of motion was given importance, and the algorithms are discussed in terms of their advantages and drawbacks. An algorithm can be considered an efficient path planning algorithm when it computes an optimal, energy-efficient, collision-free path, is complete, and requires the lowest computation time. Since none of the algorithms covers all the criteria, the optimization of energy-efficient path planning depends on the criteria of the algorithm used, such as completeness and computation time, and on the significant requirements of the vehicle's mission and its


objective. For example, RG path planning is expensive but easy to apply, ACD has an adaptive quality, and ECD is complete but not suitable for outdoor environments.

Acknowledgements The authors would like to express their appreciation to Universiti Tun Hussein Onn Malaysia (UTHM) and the Research Management Center (RMC) for supporting this work under TIER-1 VOT H131.

References

1. Omar R (2012) Path planning for unmanned aerial vehicles using visibility line based methods. PhD dissertation, University of Leicester
2. Debnath SK, Omar R, Latip NBA (2019) A review on energy efficient path planning algorithms for unmanned air vehicles. In: Computational science and technology. Springer, Singapore, pp 523-532
3. Ganeshmurthy MS, Suresh GR (2015) Path planning algorithm for autonomous mobile robot in dynamic environment. In: 2015 3rd international conference on signal processing, communication and networking (ICSCN). IEEE
4. Nguyet T, Duy-Tung N, Duc-Lung V, Nguyen-Vu T (2013) Global path planning for autonomous robots using modified visibility graph, vol 13. IEEE, pp 317-321
5. Latip NBA, Omar R, Debnath SK (2017) Optimal path planning using equilateral spaces oriented visibility graph method. Int J Electr Comput Eng 7(6):3046
6. Chen P et al (2013) Research of path planning method based on the improved Voronoi diagram. In: 2013 25th Chinese control and decision conference (CCDC). IEEE
7. Omar R, Da-Wei G (2009) Visibility line based methods for UAV path planning. In: ICCAS-SICE 2009. IEEE
8. Cho K et al (2017) Cost-aware path planning under co-safe temporal logic specifications. IEEE Robot Autom Lett 2(4)
9. Li G et al (2012) An efficient improved artificial potential field based regression search method for robot path planning. In: 2012 international conference on mechatronics and automation (ICMA). IEEE
10. Abbadi A, Matousek R (2014) Path planning implementation using MATLAB. In: Technical computing Bratislava, pp 1-5
11. Adiyatov O, Huseyin AV (2013) Rapidly-exploring random tree based memory efficient motion planning. In: 2013 IEEE international conference on mechatronics and automation (ICMA). IEEE
12. Achour N, Chaalal M (2011) Mobile robots path planning using genetic algorithms. In: The seventh international conference on autonomic and autonomous systems (ICAS 2011), pp 111-115
13. Hsu C-C, Wang W-Y, Chien Y-H, Hou R-Y, Tao C-W (2016) FPGA implementation of improved ant colony optimization algorithm for path planning. In: 2016 IEEE congress on evolutionary computation (CEC). IEEE, pp 4516-4521
14. Goyal JK, Nagla KS (2014) A new approach of path planning for mobile robots. In: International conference on advances in computing, communications and informatics (ICACCI 2014). IEEE, pp 863-867
15. Gomez EJ, Santa FM, Sarmiento FH (2013) A comparative study of geometric path planning methods for a mobile robot: potential field and Voronoi diagrams. In: 2013 II international congress of engineering mechatronics and automation (CIIMA). IEEE, pp 1-6
16. Eberhart R, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science (MHS 1995). IEEE, pp 39-43


17. Shaogang Z, Ming L (2010) Path planning of inspection robot based on ant colony optimization algorithm. In: 2010 international conference on electrical and control engineering (ICECE). IEEE, pp 1474-1477
18. Latombe JC (1999) Motion planning: a journey of robots, molecules, digital actors, and other artifacts. Int J Robot Res 18(11):1119-1128
19. Marble JD, Bekris KE (2013) Asymptotically near-optimal planning with probabilistic roadmap spanners. IEEE Trans Rob 29(2):432-444
20. LaValle SM (2006) Planning algorithms. Cambridge University Press
21. Dudek G, Jenkin M (2000) Computational principles of mobile robotics. Cambridge University Press, Cambridge
22. Mehlhorn K, Sanders P (2008) Algorithms and data structures: the basic toolbox. Springer
23. Debnath SK, Omar R, Latip NBA, Shelyna S, Nadira E, Melor CKNCK, Chakraborty TK, Natarajan E (2019) A review on graph search algorithms for optimal energy efficient path planning for an unmanned air vehicle. Indonesian J Electr Eng Comput Sci 15(2):743-749
24. Brooks RA, Lozano-Perez T (1985) A subdivision algorithm in configuration space for findpath with rotation. IEEE Trans Syst Man Cybern 2:224-233
25. Chen DZ, Szczerba RJ, Uhran JJ (1995) Planning conditional shortest paths through an unknown environment: a framed-quadtree approach. In: Proceedings 1995 IEEE/RSJ international conference on intelligent robots and systems, vol 3. IEEE
26. Debnath SK, Omar R, Latip NBA (2019) Comparison of different configuration space representations for path planning under combinatorial method. Indonesian J Electr Eng Comput Sci 1(1):401-408
27. Jung J-W et al (2019) Expanded Douglas-Peucker polygonal approximation and opposite angle-based exact cell decomposition for path planning with curvilinear obstacles. Appl Sci 9(4):638
28. Kloetzer M, Mahulea C, Gonzalez R (2015) Optimizing cell decomposition path planning for mobile robots using different metrics. In: 2015 19th international conference on system theory, control and computing (ICSTCC). IEEE, pp 565-570
29. Gonzalez R, Kloetzer M, Mahulea C (2017) Comparative study of trajectories resulted from cell decomposition path planning approaches. In: 2017 21st international conference on system theory, control and computing (ICSTCC). IEEE, pp 49-54
30. Tunggal TP, Supriyanto A, Faishal I, Pambudi I (2016) Pursuit algorithm for robot trash can based on fuzzy-cell decomposition. Int J Electr Comput Eng 6(6):2088-8708
31. Park J, Karumanchi S, Iagnemma K (2015) Homotopy-based divide-and-conquer strategy for optimal trajectory planning via mixed-integer programming. IEEE Trans Rob 31(5):1101-1115
32. Parsons D, Canny J (1990) A motion planner for multiple mobile robots. In: Proceedings, IEEE international conference on robotics and automation. IEEE, pp 8-13
33. Chen DZ, Szczerba RJ, Uhran JJ (1995) Planning conditional shortest paths through an unknown environment: a framed-quadtree approach. In: Proceedings 1995 IEEE/RSJ international conference on intelligent robots and systems, vol 3. IEEE, pp 33-38
34. Jun M, D'Andrea R (2003) Path planning for unmanned aerial vehicles in uncertain and adversarial environments. In: Cooperative control: models, applications and algorithms. Springer, Boston, pp 95-110
35. Lingelbach F (2004) Path planning using probabilistic cell decomposition. In: IEEE international conference on robotics and automation (ICRA 2004), vol 1. IEEE, pp 467-472
36. Zhang X (1994) Cell decomposition in the affine Weyl group Wa(B̃4). Commun Algebra 22(6):1955-1974

Different Cell Decomposition Path Planning Methods …

111

37. Timothy A (2007) An efficient solution to autonomous path planning by approximate cell decomposition. In: 2007 third international conference on information and automation for sustainability, IEEE, pp 88–93 38. Glavaški D, Volf M, Bonkovic M Robot motion planning using exact cell decomposition and potential field methods. In: Proceedings of the 9th WSEAS international conference on Simulation, modelling and optimization. World Scientific and Engineering Academy and Society (WSEAS) (2009) 39. Gonzalez R, Mahulea C, Kloetzer M (2015) A Matlab-based interactive simulator for mobile robotics. In: 2015 IEEE international conference on automation science and engineering (CASE). IEEE, pp 310–315 40. Hoang VD, Hernandez DC, Hariyono J, Jo KH (2014) Global path planning for unmanned ground vehicle based on road map images. In: 2014 7th international conference human system interactions (HSI), IEEE, pp 82–87 41. Giesbrecht J (2004) Global path planning for unmanned ground vehicles. No. DRDC-TM-2004-272. Defence Reserch And Development Suffield Alberta 42. Omar R, Melor CK, Hailma CKNA (2015) Performance comparison of path planning methods

Improved Potential Field Method for Robot Path Planning with Path Pruning

Elia Nadira Sabudin, Rosli Omar, Ariffudin Joret, Asmarashid Ponniran, Muhammad Suhaimi Sulong, Herdawatie Abdul Kadir, and Sanjoy Kumar Debnath

Abstract Path planning is vital for a robot deployed on a mission in a challenging environment with obstacles around it. The robot must accomplish the mission without colliding with any obstacle and must find an optimal path to reach the goal. Three important criteria, i.e., path length, computational complexity, and completeness, need to be taken into account when designing a path planning method. The Artificial Potential Field (APF) is one of the best methods for path planning as it is fast, simple, and elegant. However, the APF has a major problem, called local minima, which causes the robot to fail to reach the goal. This paper proposes an Improved Potential Field method to overcome this limitation of the APF. Even so, the path length produced by the Improved APF is not optimal. Therefore, a path pruning technique is proposed to shorten the path generated by the Improved APF. This paper also compares the path length and computational time of the Improved APF with and without path pruning. Through simulation, it is shown that the proposed technique overcomes the local minima problem and produces a relatively shorter path with a fast computation time.

Keywords Path planning · Artificial Potential Field

E. N. Sabudin · R. Omar (✉) · A. Joret · A. Ponniran · H. A. Kadir · S. K. Debnath
Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia
e-mail: [email protected]
A. Ponniran
Power Electronic Converters (PECs) Focus Group, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia
A. Joret · M. S. Sulong
Faculty of Technical and Vocational Education, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia
M. S. Sulong
Internet of Things (IoT) Focus Group, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia
© Springer Nature Singapore Pte Ltd. 2021
Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_9


1 Introduction

Path planning is one of the most critical issues in robot research. Path planning in robotics is the process by which a robot computes a valid and feasible solution to traverse from a start point to a goal point through a sequence of collision-free, safe motions in order to achieve a certain task in a given environment. The path taken must be free of any collision with surrounding obstacles and must also meet kinematic or dynamic conditions [1, 2]. In the path planning problem, the robot's workspace and the obstacle geometry are described in 2D or 3D, while the motion is represented as a path in configuration space [3].

The structure of the environment is an aspect that must be taken into account to ensure the robot can achieve a defined mission. There are two types of environment for path planning, namely known and unknown. As the name implies, in a known environment all information about the obstacles and the goal point is available, and the robot moves based on this prescribed information. In an unknown environment, on the other hand, no prior knowledge, or only partial information, of the environment is available, and the robot must plan a path based on current information. An unknown environment may contain obstacles that move continuously, and dynamic obstacles may also appear spontaneously and randomly while the robot is performing its mission.

As previously mentioned, the aspects that need to be addressed in path planning are computation time, path length, and completeness. In a dynamic or uncertain environment, the path planning algorithm must have a low computational time for real-time applications. Apart from that, the robot should take the optimal path during the mission to save fuel and energy. The completeness criterion is satisfied if the path planning algorithm finds a path whenever one exists.
A few techniques are commonly used for path planning, such as Cell Decomposition (CD), Visibility Graph (VG), Voronoi Diagram (VD), Probabilistic Roadmap (PRM), and Artificial Potential Field (APF). The APF is a path planning method that is simple, highly safe, and elegant [4–6]. It uses simple mathematical equations that are ideal for real-time environments [7]. The APF produces two types of forces, i.e., an attractive force and a repulsive force. The goal point generates the attractive force to pull the robot towards it, while the obstacles produce a repulsive force to repel the robot away. The robot's movement thus depends on the resultant of the forces. However, local minima are the major drawback of the APF: the robot becomes trapped in a local minimum if the resultant force is zero. The problem of Goal Non-Reachable with Obstacle Nearby (GNRON) also occurs if the robot plunges into a local minimum.

To solve the above-mentioned problems, this paper proposes an Improved Artificial Potential Field. This technique reduces the limitations of the APF method and is also computationally tractable. To reduce the path length, path pruning is applied to the planned path.


2 Potential Field Method

The potential field (PF) is one of the most popular techniques for the path planning problem. The Artificial Potential Field (APF) method has been used by many researchers because of properties such as simplicity, elegance, and high safety [3]. Khatib was the first to suggest this idea, in which the robot is regarded as a point under the influence of fields generated by the goal and the obstacles in the search space [8]. The APF plans a path based on two types of force: an attractive force produced by the goal and a repulsive force generated by the obstacles. The method can be applied in known scenarios and can also work in unknown environments with some changes and modifications. The APF method has several advantages: path planning can be implemented in a real-time environment owing to its (1) fast computation time and (2) ability to generate a smooth path without any collision with obstacles. However, the method has major drawbacks, namely local minima, the goal non-reachable problem, and narrow passages [9, 10].

To address these problems, researchers have improved the potential field method. Mei and Arshad used a Balance-Artificial Potential Field method to solve the local minima and narrow passage problems while achieving heading and speed control of an ASV (Autonomous Surface Vessel) in a riverine environment [11]. The efficient Improved Artificial Potential Field based Regression Search Methods developed by Li et al. generate a global sub-optimal/optimal path effectively and reduce the local minima and oscillation problems in a known environment without complete information [12, 13]. Sfeir et al. presented real-time mobile robot navigation in an unknown environment using an Improved APF approach that creates a smoother trajectory around the obstacles by introducing a rotational force component [14]; this method prevents the Goal Non-Reachable with Obstacles Nearby (GNRON) limitation of the APF. Besides that, Park et al. combined the potential field method (PFM) and the vector field histogram (VFH) to overcome the PF limitations, developing a new obstacle avoidance method for mobile robots based on an advanced fuzzy PFM (AFPFM) [15].

3 Path Planning Method

3.1 Field Function Based on the Traditional APF

The attractive potential field V_g at the goal is represented as


V_g = K_g r_g                (1)

r_g = dist(X, X_g)           (2)

where K_g is a variable constant greater than zero, X = (x, y) is the current position, X_g = (x_g, y_g) is the goal position, and r_g is the distance between the current robot position and the goal. Figure 1 shows the attractive potential field at the target; the attractive force pulls the robot towards the target [16].

The repulsive potential field V_o at an obstacle can be defined as

V_o = K_o / r_o              (3)

r_o = dist(X, X_0)           (4)

where K_o is a variable constant greater than zero, X_0 = (x_0, y_0) is the obstacle position, and K_o and r_o are the gain and the distance from the robot, respectively.

The repulsive potential field V_r at the starting point can be written as

V_r = K_r / r_r              (5)

r_r = dist(X, X_r)           (6)

Fig. 1 The form of the general attractive potential field


Fig. 2 General repulsive potential field (the gradients point away from the obstacles)

Fig. 3 Negative gradient between target and obstacles

K_r is a variable constant equal to or greater than zero, X = (x, y) is the current position, and X_r = (x_r, y_r) is the starting position. Figure 2 illustrates the repulsive potential field around the obstacles [16]; the repulsive force pushes the robot away from them.


Fig. 4 a The attractive potential without obstacles, b the repulsive potential with its highest value at the obstacles, c the total potential, combining the two fields into the final potential field

Therefore, the total potential field can be represented as in (7):

V_total = V_g + V_r + V_o    (7)

Figure 3 illustrates the total force of the potential field [16]. The resultant force of the fields is used to determine the direction of motion of the robot. In Fig. 4, the resultant potential is shown in a 3D view [17].
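As an illustration, the fields of Eqs. (1), (3), (5) and (7) can be evaluated on a grid. The sketch below (Python/NumPy) uses the inverse-distance repulsive form given above; the grid size, goal, start, and obstacle positions are illustrative, and the gain values are those reported later in Sect. 4:

```python
import numpy as np

def total_potential(X, Y, goal, start, obstacles, Kg, Kr, Ko):
    """Evaluate V_total = V_g + V_r + V_o of Eq. (7) on a grid."""
    eps = 1e-6  # avoids division by zero exactly at an obstacle or the start
    Vg = Kg * np.hypot(X - goal[0], Y - goal[1])            # attractive, Eq. (1)
    Vr = Kr / (np.hypot(X - start[0], Y - start[1]) + eps)  # repulsive at start, Eq. (5)
    Vo = np.zeros_like(X)
    for ox, oy in obstacles:                                # repulsive at obstacles, Eq. (3)
        Vo += Ko / (np.hypot(X - ox, Y - oy) + eps)
    return Vg + Vr + Vo                                     # total field, Eq. (7)

# illustrative scenario: 201 x 201 grid over -100..100 with two obstacles
x = np.linspace(-100, 100, 201)
X, Y = np.meshgrid(x, x)
V = total_potential(X, Y, goal=(80, 80), start=(-80, -80),
                    obstacles=[(0, 0), (30, 40)], Kg=282.843, Kr=15.687, Ko=15.687)
```

The potential is high near the obstacle cells and lowest near the goal, which is exactly the surface shown in Fig. 4(c).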

3.2 Algorithm for the Traditional Artificial Potential Field (APF)

In the APF, two forces are involved: the attractive force and the repulsive force. The traditional APF is unable to handle the local minima problem, which arises where the resultant force of the potential field is zero. Figure 5 shows the flowchart of the APF for robot path planning. The algorithm starts by initializing variables such as the number of obstacles and the environment range. The current waypoint is assigned as the starting point, with the goal as the target point. Subsequently, the total potential field is calculated. The robot moves from the starting point, descending the surrounding potential field values, until it reaches the target point. If a local minimum occurs while the robot is carrying out a mission to the target point, the robot collides with obstacles or oscillates. The robot cannot reach the goal successfully unless no local minima arise while it is deployed on the mission.
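The traditional loop can be sketched as a greedy descent over the potential field. The following sketch uses a hypothetical potential function of the form of Eqs. (1) and (3) and unit grid moves; it also exhibits the failure mode just described, stopping when no neighbouring point has a lower potential, i.e., at a local minimum:

```python
import math

def potential(p, goal, obstacles, Kg=1.0, Ko=50.0):
    """Hypothetical field: attractive Kg*r_g (Eq. 1) plus repulsive sum Ko/r_o (Eq. 3)."""
    v = Kg * math.dist(p, goal)
    for ob in obstacles:
        v += Ko / (math.dist(p, ob) + 1e-6)
    return v

def traditional_apf(start, goal, obstacles, max_steps=10_000):
    """Greedy descent on a unit grid; returns (path, reached_goal)."""
    moves = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    p, path = start, [start]
    for _ in range(max_steps):
        if p == goal:
            return path, True
        nxt = min(((p[0] + dx, p[1] + dy) for dx, dy in moves),
                  key=lambda q: potential(q, goal, obstacles))
        if potential(nxt, goal, obstacles) >= potential(p, goal, obstacles):
            return path, False  # no lower neighbour: trapped in a local minimum
        p = nxt
        path.append(p)
    return path, False
```

With no obstacles the descent reaches the goal directly; with an unfavourable obstacle layout the second return branch triggers, which is the local-minimum failure the improved method addresses.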


Fig. 5 The traditional APF process for path planning

3.3 Improved APF Method

3.3.1 Background

The attractive gain at the goal, K_g, is determined by the diagonal distance of the search space:


K_g = sqrt((distx)^2 + (disty)^2)              (8)

where distx is the extent of the search space along the x-axis and disty is that along the y-axis. On the other hand, the repulsive gain at the obstacles, K_o, is written as:

K_o = sqrt((distx)^2 + (disty)^2) / (ax + b)   (9)

where a, x and b are the parameters of the line segment in (9). K_o is thus defined based on an environmental factor (the diagonal distance) and the number of obstacles.
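Under the assumption that the 100-unit environment range of Sect. 4 spans -100 to 100 (i.e., 200 units per axis), Eq. (8) reproduces the gain K_g = 282.843 reported there. The paper does not state the values of a, x and b, so the ones below are purely illustrative:

```python
import math

# Eq. (8): attractive gain from the diagonal of the search space.
# Assumption: the 100-unit range spans -100..100, so distx = disty = 200,
# which reproduces the reported Kg = 282.843.
distx = disty = 200
Kg = math.sqrt(distx ** 2 + disty ** 2)

# Eq. (9): repulsive gain, the diagonal scaled by the line segment a*x + b.
# a, b and the obstacle count x are illustrative placeholders only.
a, b, x_obs = 0.1, 8.0, 100
Ko = Kg / (a * x_obs + b)
```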

3.3.2 Algorithm of the Improved APF Method

The proposed Improved APF algorithm is shown in Fig. 6. From its initial point, the robot selects its next position by identifying the lowest-potential point among the eight surrounding points generated by the potential field. Once the lowest point has been selected, the robot moves to that point. If the selected point is a local minimum, the robot instead selects the point with the second-lowest potential field value, moves to it, and removes the point where the local minimum occurred. This process continues until the robot reaches the target.
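A minimal sketch of this selection rule follows, assuming a user-supplied field(q) function and treating the removed local-minimum points as a blacklist. It is a simplified variant of the rule in Fig. 6: when the best neighbour is not lower than the current point, the current point is blacklisted so it is never revisited, and the search continues from the best remaining neighbour:

```python
def improved_apf_step(p, field, blacklist):
    """Lowest-potential 8-neighbour of p that is not blacklisted (None if all are)."""
    neighbours = [(p[0] + dx, p[1] + dy)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    candidates = sorted((q for q in neighbours if q not in blacklist), key=field)
    return candidates[0] if candidates else None

def improved_apf(start, goal, field, max_steps=10_000):
    p, path, blacklist = start, [start], set()
    for _ in range(max_steps):
        if p == goal:
            return path
        nxt = improved_apf_step(p, field, blacklist)
        if nxt is None:
            return path
        if field(nxt) >= field(p):  # local minimum: remove this point so it is
            blacklist.add(p)        # never revisited, then continue the search
        p = nxt
        path.append(p)
    return path

# demo: a hypothetical potential spike at (1, 0) makes the start a local minimum
demo_field = lambda q: abs(q[0] - 3) + abs(q[1]) + (100 if q == (1, 0) else 0)
demo_path = improved_apf((0, 0), (3, 0), demo_field)
```

In the demo, the traditional greedy descent would stall at the start point, whereas the blacklist lets the robot step around the spike and reach the goal.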

3.4 APF with Path Pruning

The main aim of the Improved APF is to solve the local minima, oscillation, and goal non-reachable problems. However, the path length generated by the APF is non-optimal. In addition, to ensure that the robot's mission can be carried out successfully, other factors such as energy saving need to be taken into account; this can be realized if the path is shortened. Therefore, a technique known as path pruning is applied to address this issue. Debnath et al. have noted that the APF is effective in finding a shorter path [18]. Omar et al. applied path pruning to the probabilistic roadmap (PRM) to produce a path of shorter length [19]. Li et al. proposed the efficient Improved Artificial Potential Field based Simultaneous Forward Search (Improved APF-based SIFORS) method for robot path planning, which redefines the potential function to calculate a valid path and consequently shortens the planned path [20]. Lifen et al. improved the APF by changing the repulsive potential function, which helps a UAV avoid collision with obstacles effectively and find an optimal path [6].


Fig. 6 Proposed improved APF method, which solves the limitations of the potential field method

3.5 Algorithm for the Improved APF Method with Path Pruning

In this paper, a path pruning technique is used to shorten the existing path. The flowchart in Fig. 7 illustrates the process of pathfinding using the Improved APF with path pruning. Let the path W consist of waypoints {P_i, P_{i+1}, P_{i+2}, ..., P_n}, where P_i is the starting point and P_n is the target point. The path pruning process starts by checking whether there are any obstacles between waypoints P_i and P_{i+1}. P_{i+1} is eliminated if no obstacle is detected between P_i and P_{i+1}, and the obstacle check then proceeds between P_i and P_{i+2}. Otherwise, P_{i+1} is kept as one of the waypoints of W, and the above process continues from P_{i+1}. The process repeats until P_i = P_n.
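Assuming disc-shaped obstacles and a straight-line collision test (both illustrative choices not fixed by the paper), the pruning loop of Fig. 7 can be sketched as follows: from each kept waypoint, the direct segment is extended as far as it stays obstacle-free, and the intermediate waypoints are eliminated.

```python
import math

def segment_hits_circle(p, q, centre, radius):
    """True if the segment p-q passes within `radius` of `centre` (a disc obstacle)."""
    (px, py), (qx, qy), (cx, cy) = p, q, centre
    dx, dy = qx - px, qy - py
    if dx == 0 and dy == 0:
        return math.dist(p, centre) <= radius
    # parameter of the closest point on the segment, clamped to [0, 1]
    t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / (dx * dx + dy * dy)))
    return math.dist((px + t * dx, py + t * dy), centre) <= radius

def prune_path(waypoints, obstacles):
    """Greedy pruning as in Fig. 7: from P_i, keep extending the direct segment
    while it stays obstacle-free, eliminating the intermediate waypoints."""
    pruned, i, n = [waypoints[0]], 0, len(waypoints) - 1
    while i < n:
        j = i + 1
        while j < n and not any(segment_hits_circle(waypoints[i], waypoints[j + 1], c, r)
                                for c, r in obstacles):
            j += 1
        pruned.append(waypoints[j])
        i = j
    return pruned
```

A redundant staircase of collinear waypoints collapses to its two endpoints, while a waypoint that detours around an obstacle is preserved.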


Fig. 7 Algorithm of path pruning based on improved APF



4 Simulation Results and Discussion

Simulation of the proposed algorithm has been carried out using MATLAB R2016a on a PC with an Intel i5-4200U 1.6 GHz CPU and Windows 10. The range of the environment R is set to 100 units, with the number of obstacles O varied from 25 to 125. The coefficients K_g and K_o for calculating the attractive and repulsive forces are set based on Eqs. (8) and (9) to 282.843 and 15.687, respectively. The performance of the proposed algorithm is evaluated in terms of:

i. Local minima
ii. Path length
iii. Computational time

Figure 8 compares the simulation results of the traditional APF (blue line) and the Improved APF (magenta line). As can be seen from the scenario in Fig. 8(a), the Improved APF manages to overcome the local minima problem, and the robot reaches the goal. The red dots mark the local minima areas that have been addressed successfully. Figure 8(b) illustrates the 3D representation of the scenario. The subplot of the altitude of the waypoints is depicted in Fig. 8(c), where the robot moves from the highest value (initial point) to the lowest value (target point).

With different numbers of obstacles, i.e., 25, 50, 75, 100, and 125, the resulting paths are shown in Fig. 9(a)–(e), respectively. In each subplot, the magenta lines show the paths planned by the Improved APF, and the blue lines represent the pruned paths. It is clearly shown that the algorithm manages to address local minima, oscillation, GNRON, and narrow passages. Besides that, the resulting paths are shorter owing to the application of the path pruning technique.
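The path-length metric used in the comparison is not defined explicitly in the paper; a natural reading, used here as an assumption, is the summed Euclidean length of the waypoint segments:

```python
import math

def path_length(waypoints):
    """Path length as the sum of Euclidean distances between successive waypoints."""
    return sum(math.dist(p, q) for p, q in zip(waypoints, waypoints[1:]))
```

Under this metric, any pruning that replaces a detour of waypoints with a direct segment can only shorten the path, which is consistent with the pruned-path columns of Table 1.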

Fig. 8 Comparison between the traditional APF (blue line) and improved APF (magenta line) simulation results: a the improved APF overcomes the local minima problem, b 3D representation, and c robot movement waypoints



Fig. 9 Paths generated by the Improved APF (magenta lines) and the pruned paths (blue lines) with a 25, b 50, c 75, d 100 and e 125 obstacles



The computational time and path length of the proposed algorithm are summarized in Table 1. The overall simulation results show that, in each scenario, the computation time of the Improved APF with path pruning is longer when local minima occur, and the generated path is then relatively long. For 25 and 50 obstacles, there are no local minima. For 75 obstacles, the generated path is relatively long because of local minima problems (red dots). The robot removes previous waypoints to avoid revisiting a local minimum point and must then move to the lowest point from the midpoint; it can be seen that the robot struggles to exit the local minima. As a result, the computation time has increased


Table 1 The performance of the Improved APF and path pruning

Number of   Path length of        Pruned path     Computation time of   Computation time of
obstacles   Improved APF (unit)   length (unit)   Improved APF (s)      pruned path (s)
25          193.807               153.270         14.127                0.323
50          208.056               143.532         17.811                0.431
75          431.686               187.355         48.695                1.674
100         257.863               160.987         27.771                0.971
125         274.967               172.536         32.419                0.892

dramatically. On the other hand, the paths generated in the environments with 100 and 125 obstacles are of moderate length. In these cases, the local minima problems still occur, but the robot manages to address them.

5 Conclusion and Future Work

The Improved APF with path pruning has been proposed for robot path planning in a known environment. The proposed method finds a valid, feasible, and shorter path for a robot mission and consumes little computation time, which is vital for real-time path planning applications. The Improved APF has also been shown to address the problems faced by the APF method, so the criteria for path planning have been fulfilled. In future work, the Improved APF with path pruning will be enhanced by restricting the search to a specific region to improve the algorithm's speed. This research will also focus on cooperative techniques for multi-robot path planning.

Acknowledgements The authors would like to thank Universiti Tun Hussein Onn Malaysia (UTHM) and the Research Management Center (RMC) for funding support under TIER-1 VOT H131.

References

1. Hasircioglu I, Topcuoglu HR, Ermis M (2008) 3-D path planning for the navigation of unmanned aerial vehicles by using evolutionary algorithms. In: Proceedings of the conference on genetic and evolutionary computation, pp 1499–1506
2. Omar RB (2011) Path planning for unmanned aerial vehicles using visibility line-based methods. Control and Instrumentation Research Group, Department of Engineering, University of Leicester, March 2011
3. Sabudin EN, Omar R, Che Ku Melor CKANH (2016) Potential field methods and their inherent approaches for path planning. ARPN J Eng Appl Sci 11(18):10801–10805
4. Borenstein J, Koren Y (1991) Potential field methods and their inherent limitations for mobile robot navigation, April 1991, pp 1398–1404


5. Cen Y, Wang L, Zhang H (2007) Real-time obstacle avoidance strategy for mobile robot based on improved coordinating potential field with genetic algorithm. In: IEEE international conference on control applications, October 2007
6. Lifen AL, Rouxin BS, Shuandao CL, Jiang DW (2016) Path planning for UAVs based on improved artificial potential field method through changing the repulsive potential function. In: IEEE Chinese guidance, navigation and control conference (CGNCC), 12–14 August 2016
7. Liu Y, Zhao Y (2016) A virtual-waypoint based artificial potential field method for UAV path planning. In: Proceedings of 2016 IEEE Chinese guidance, navigation and control conference, 12–14 August 2016
8. Khatib O (1985) Real-time obstacle avoidance for manipulators and mobile robots. In: Proceedings of the IEEE international conference on robotics and automation, pp 500–505
9. Mei W, Su Z, Tu D, Lu X (2013) A hybrid algorithm based on artificial potential field and BUG for path planning of mobile robot. In: 2nd international conference on measurement, information and control
10. Wang S, Min H (2013) Experience mixed the modified artificial potential field method. In: IEEE/RSJ international conference on intelligent robots and systems (IROS), 3–7 November 2013
11. Mei JH, Arshad MR (2015) A balance-artificial potential field method for autonomous surface vessel navigation in unstructured riverine environment. In: IEEE international symposium on robotics and intelligent sensors (IRIS)
12. Li G, Tamura Y, Yamashita A, Asama H (2012) Effective improved artificial potential field-based regression search method for robot planning. In: IEEE international conference on mechatronics and automation, 5–8 August 2012
13. Li G, Tamura Y, Yamashita A, Asama H (2013) Effective improved artificial potential field-based regression search method for autonomous mobile robot path planning. Int J Mechatron Autom 3(3):141–170
14. Sfeir J, Saad M, Saliah-Hasane H (2011) An improved potential field approach to real-time mobile robot path planning in an unknown environment. In: IEEE international symposium on robotic and sensors environments (ROSE)
15. Park JW, Kwak HJ, Kang YC, Kim DW (2016) Advanced fuzzy potential field method for mobile robot obstacle avoidance. J Comput Intell Neurosci 2016, Article No. 10
16. Goodrich MA. Potential field tutorial. https://pdfs.semanticscholar.org/725e/fa1af22f41dcbecd8bd445ea82679a6eb7c6.pdf. Accessed 29 Aug 2019
17. Robot motion planning and control. Potential field. https://sebastian-hoeffner.de/uni/ceng786/index.php?number=2. Accessed 29 Aug 2019
18. Debnath SK, Omar RB, Abdul Latip NB (2019) A review on energy efficient path planning algorithms for unmanned air vehicles. In: Computational science and technology. Springer, Singapore
19. Omar RB, Che Ku Melor CKNAH, Sabudin EN (2015) Performance comparison of path planning methods. ARPN J Eng Appl Sci
20. Li G, Tong S, Lv G, Xiao R, Cong F, Tong Z, Yamashita A, Asama H (2015) An improved artificial potential field-based simultaneous forward search (improved APF-based SIFORS) method for robot path planning. In: The 12th international conference on ubiquitous robots and ambient intelligence (URAI), 28–30 October 2015

Development of DugongBot Underwater Drones Using Open-Source Robotic Platform

Ahmad Anas Yusof, Mohd Khairi Mohamed Nor, Mohd Shahrieel Mohd Aras, Hamdan Sulaiman, and Abdul Talib Din

Abstract This paper presents the development and fabrication of an open-source, do-it-yourself underwater drone called DugongBot, developed in collaboration with the Underwater Technology Research Group (UTeRG), Universiti Teknikal Malaysia Melaka. Research institutes and hobbyists have shown a growing interest in the development of micro observation-class remotely operated vehicles (micro-ROVs) using open-source platforms. Currently, OpenROV and ArduSub are the low-cost open-source solutions available for such ROVs. These open-source hardware and software platforms are used worldwide for the development of small electrically powered ROV system architectures, with support from the literature available on the internet and the extensive experience acquired in the development of robotic exploration systems. This paper presents the development of the DugongBot, which uses the OpenROV open-source platform. Weighing approximately 3 kg and designed for 100 m depth, the drone uses a single watertight tube, 18 cm long and 10 cm in diameter, to accommodate the main electronics compartment, which can be tilted up and down with a servo for CMOS-sensor HD webcam alignment. Two horizontal thrusters for forward, reverse and rotational movement and a vertical thruster for depth control provide manoeuvrability.

Keywords Micro-ROV · OpenROV · Underwater drones · Open-source

A. A. Yusof (✉) · M. K. M. Nor · M. S. M. Aras
Faculty of Electrical Engineering, Universiti Teknikal Malaysia Melaka, Hang Tuah Jaya, 76100 Durian Tunggal, Melaka, Malaysia
e-mail: [email protected]
A. A. Yusof · M. K. M. Nor · M. S. M. Aras
Centre for Robotics and Industrial Automation, Universiti Teknikal Malaysia Melaka, Hang Tuah Jaya, 76100 Durian Tunggal, Melaka, Malaysia
H. Sulaiman · A. T. Din
Faculty of Mechanical Engineering, Universiti Teknikal Malaysia Melaka, Hang Tuah Jaya, 76100 Durian Tunggal, Melaka, Malaysia
© Springer Nature Singapore Pte Ltd. 2021
Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_10


1 Introduction

Open-source robotic platforms for underwater robotics have provided a high return on investment for the scientific community. There is now significant evidence that this sharing concept has created a scenario in which underwater technology can be studied, modified, created, and distributed by anyone. Micro-ROVs, or underwater drones, are thus increasingly popular owing to the growing interest of researchers in open-source platforms [1, 2]. These platforms have led to the development of various low-cost underwater drones for hobbyists, such as the OpenROV Trident, Gladius Mini and Geneinno Poseidon, which serve a wide variety of purposes in capturing footage of the underwater environment for scientific exploration, industrial inspection and military surveillance [3–9]. The availability of open-source platforms also gives students the opportunity to develop underwater robots for underwater vehicle competitions around the globe [10–14].

These electrically powered vehicles can weigh as little as 2 kg and are generally small enough for backpack storage. They are generally limited to depth ratings of less than 100 m because of underwater pressure limitations and power-to-weight ratios. They can easily be hand-launched from the surface, use a simple tether system, and can sometimes be connected wirelessly through a floating buoy at the surface. This ensures a continuous live video feed from the drone and, more importantly, avoids losing the drone in the deep ocean. Most are also equipped with powerful headlamps, providing visibility in dark and murky underwater conditions. They may also use 4K cameras for high-quality image capture, FPV goggles for a first-person-view experience, and a simple robotic arm for underwater sampling. Figure 1 shows a price comparison of selected small ROVs and underwater drones in Malaysian ringgit.

Fig. 1 Price comparison of small ROVs [15]


Thus, in this paper, the review and development of an underwater drone using open-source platforms and solutions are presented and evaluated. Named DugongBot, the underwater drone serves as the first generation of low-cost drones developed in-house at UTeRG.

2 DugongBot Development

The dugong, as shown in Fig. 2, is a species of sea cow found throughout the warm latitudes of the Indian and western Pacific Oceans. It can be found in the coastal areas of Malaysia and has been categorized as decreasing in numbers on the International Union for Conservation of Nature's Red List of Threatened Species [16]. In support of dugong protection throughout the world, the underwater drone in this project is called DugongBot, as shown in the CAD design in Fig. 3.

2.1 Hardware Development

The DugongBot uses a BeagleBone Black single-board computer as its processor, integrated with an Arduino Mega microcontroller for sensor reading and thruster control. It can be tele-operated using either a gamepad or a keyboard to control the vehicle's movement, and it works with any Windows-compatible gamepad. The DugongBot uses an inertial measurement unit (IMU) and a pressure sensor for movement and depth calibration; the IMU uses a single-axis rate gyroscope to measure the yaw rate and a two-axis accelerometer to measure roll and pitch. The system has a maximum operational pressure of 30 bar for depth capability and a magnetometer compass. A 1080p high-definition webcam with a 120-degree field of view is used in the telemetry system, communicating through I2C protocols, for laptop display. Three thrusters are used: two horizontal thrusters for forward, reverse and rotational movement, and one vertical thruster for depth control.

Fig. 2 Dugong

The topside control hardware contains the electronics needed to communicate with the drone. The controller board, which is designed based on the Arduino Mega configuration, manages the low-level input commands from the IMU and pressure sensors and the output commands to the motors/thrusters and lights, while the


A. A. Yusof et al.

BeagleBone Black processes the underwater footage using mjpg-streamer. The topside interface board provides an Ethernet connection between the drone and the laptop. The board is powered over micro-USB by a supply that can provide at least 500 mA. It has been documented in the OpenROV support group forum that the topside interface board can be connected wirelessly with a small modification [17]. Table 1 shows the specification of the DugongBot 1.0.

Fig. 3 DugongBot CAD design

Table 1 DugongBot specification

Name: DugongBot 1.0
Dimension: 25H × 30W × 45L (cm)
Weight: 3 kg
Hull: Poly(methyl methacrylate) (acrylic)
Frame: Polyvinyl chloride (PVC) pipe
Thrusters: 3 thrusters
ESCs: Afro ESC 12 A
Controller: Arduino Mega–based OpenROV microcontroller
Processor: BeagleBone Black
Software: OpenROV Cockpit, Node.js, mjpg-streamer, Socket.IO
Batteries: 2500 mAh, 9.6 V, 26650, LiFePO4
Sensors: OpenROV IMU (add-on)
Tether: Ethernet, 2-wire
Ballast: Lead
Camera: HD camera on tilt servo

2.2 Software Development

OpenROV is a company founded in 2011 and located in Berkeley, California, that produces underwater exploration devices. In 2019, the ocean data startup Spoondrift and OpenROV announced their merger into a new company known as Sofar Ocean Technologies. Since then, support for OpenROV 2.8 has been unavailable from the OpenROV website due to the merger. However, although OpenROV has merged into SOFAR, whose current focus is on marketing the OpenROV Trident and the Spotter intelligent buoy, the support and documentation for OpenROV 2.8 and older versions can still be downloaded from GitHub and Dozuki. GitHub is a hosting platform for software development, offering distributed version control and source code management to many software developers, including OpenROV. The OpenROV community on GitHub is a DIY community centred on underwater robots for exploration and adventure: a group of amateur and professional ROV builders and operators from over 50 countries who share a passion for underwater robotics. Dozuki is a cloud-based platform that provides access to step-by-step manuals for repair, process tracking, training and work instructions. Both platforms provide a good community and support group for OpenROV documentation; almost 30 step-by-step guides for building the OpenROV are available on Dozuki alone. Figure 4 shows some of the open-source support for the project.

Fig. 4 Open Source support

3 Drone Testing

3.1 Camera Function with Software

DugongBot uses an ultra-wide-angle full-HD webcam. This camera lets the user stream live video to explore the underwater environment and capture photos. The camera can also detect objects and be remotely tilted 25 to 30° upward and 60° downward, and it provides a 120° wide view. The battery allows the camera to function for up to 3 h. The camera movement is controlled from the keyboard: the Q key controls the downward tilt, T controls the upward tilt, and the I key controls the lights. The visual interface for the OpenROV platform is known as the Cockpit, as shown in Fig. 5; it provides the operator with information on depth, heading, battery voltage and consumption, and flight time, as well as the graphical user interface. The cross-platform JavaScript run-time environment Node.js is used to send keyboard commands through an HTML5 single-page application in a supported browser. Connecting to the ROV requires the computer to use a static IP address on the same network as the ROV's built-in static IP address. The ROV's static IP address is 192.168.254.1; the computer's last octet must be set to something other than 1, and the subnet mask must be changed to 255.255.255.0.
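The addressing rule above can be sanity-checked with a short script (a sketch using Python's standard ipaddress module; the candidate laptop addresses below are example choices, not mandated by OpenROV):

```python
import ipaddress

ROV_IP = ipaddress.ip_address("192.168.254.1")   # fixed address built into the ROV
NETMASK = "255.255.255.0"                        # required subnet mask

def topside_ip_ok(candidate: str) -> bool:
    """Check that a proposed topside (laptop) address can reach the ROV:
    it must be on the same /24 subnet but not be the ROV's own address."""
    ip = ipaddress.ip_address(candidate)
    subnet = ipaddress.ip_network(f"{ROV_IP}/{NETMASK}", strict=False)
    return ip in subnet and ip != ROV_IP

print(topside_ip_ok("192.168.254.2"))   # True: same subnet, different host
print(topside_ip_ok("192.168.254.1"))   # False: collides with the ROV
print(topside_ip_ok("192.168.1.2"))     # False: wrong subnet
```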

Fig. 5 Camera function using OpenROV cockpit platform


The drone is connected via an Ethernet tether to transfer data, and no software download or internet connection is needed to operate it. The Ethernet protocol connects the DugongBot to a computer via the tether. The BeagleBone Black in the drone runs the web server, the browser runs on the computer, and the two communicate using Socket.IO, a JavaScript library that enables bidirectional, real-time, event-based communication. The DugongBot's controller board, which is designed based on the Arduino Mega configuration, manages the low-level input commands from the IMU and pressure sensors and the output commands to the motors/thrusters and lights, while the BeagleBone Black processes the underwater footage using mjpg-streamer. The DugongBot's topside interface board provides an Ethernet connection between the ROV and the laptop, as shown in Fig. 6.
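The Socket.IO pattern used here is event-based: each side registers handlers by event name and emits named events to the other. A minimal Python analogue of that model (illustrative only; the real DugongBot code is JavaScript on the BeagleBone, and the event name used below is a hypothetical example, not an OpenROV identifier):

```python
from collections import defaultdict

class EventBus:
    """Tiny event emitter mimicking Socket.IO's on()/emit() model."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, event, handler):
        """Register a handler for a named event."""
        self._handlers[event].append(handler)

    def emit(self, event, *args):
        """Fire all handlers registered for the named event."""
        for handler in self._handlers[event]:
            handler(*args)

# Example: the cockpit emits a throttle command; the drone side handles it.
bus = EventBus()
log = []
bus.on("motor.throttle", lambda value: log.append(("throttle", value)))
bus.emit("motor.throttle", 0.5)
print(log)  # [('throttle', 0.5)]
```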

Fig. 6 DugongBot version 1.0 (topside computer, Tenda adapter, topside interface board, optional gamepad controller; Ethernet RJ45 and 2-wire tether connections)


Fig. 7 DugongBot thrusters

3.2 Thruster Functions

Low-cost brushless motors are a good choice for the thrusters, but the motors may have a limited life when used in saltwater environments. Nevertheless, proper maintenance will considerably extend their life expectancy. All the thrusters are wired to the input power and controlled from the keyboard, which allows the user to control the movement from topside. The input power source is a 2500 mAh, 9.6 V, 26650 LiFePO4 battery pack; the system can also be tested with a 12 V power supply. Each thruster's rotation and movement effect needed to be identified in order to align the thrusters together. The main thruster is mapped to the left Shift key on the keyboard for an anticlockwise rotation, which is used for forward drone movement. The right Shift key commands a clockwise rotation on the same thruster, which produces a backward movement. In general, the Up, Down, Left, Right, Shift and Ctrl keys can be used to maneuver the DugongBot. Figure 7 shows the thrusters used in the drone.
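The keyboard-to-thruster mapping described above can be summarised in a small lookup table (a sketch; the Shift-key bindings follow the text, while the arrow-key actions and the command strings are illustrative assumptions, not documented OpenROV values):

```python
# Maps a key press to an (action, rotation) pair for the relevant thruster.
# LShift/RShift follow the DugongBot description; the arrow-key actions
# are assumed placeholders for the maneuvering keys mentioned in the text.
KEY_COMMANDS = {
    "LShift": ("forward", "anticlockwise"),
    "RShift": ("backward", "clockwise"),
    "Up":     ("ascend", None),
    "Down":   ("descend", None),
    "Left":   ("yaw_left", None),
    "Right":  ("yaw_right", None),
}

def command_for(key: str):
    """Return the (action, rotation) pair for a key, or None if unbound."""
    return KEY_COMMANDS.get(key)

print(command_for("LShift"))  # ('forward', 'anticlockwise')
```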

3.3 Buoyancy

A stable underwater drone that does not tip over is very important. DugongBot must be buoyant enough that it can be maneuvered up or down easily without using too much energy. The objective of the development is also to


Fig. 8 DugongBot in action

develop a well-balanced underwater drone structure that is neutrally buoyant just below the water surface. During the first trial, the underwater drone was partially submerged in the water but not in a stable condition: the left side was heavier than the right side. Ballast weights were then added, one at the front and two at the sides. The result is a neutrally buoyant DugongBot, as shown in Fig. 8.
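The ballast trimming above amounts to matching the vehicle's weight to the weight of water it displaces. A quick check of net buoyancy under Archimedes' principle (a sketch; the displaced volume is an assumed illustrative figure, not a measured DugongBot value):

```python
RHO_WATER = 1000.0  # kg/m^3, fresh water
G = 9.81            # m/s^2

def net_buoyant_force(mass_kg: float, displaced_volume_m3: float) -> float:
    """Net vertical force on a submerged body (N).
    Positive -> floats up, negative -> sinks, ~0 -> neutrally buoyant."""
    return RHO_WATER * displaced_volume_m3 * G - mass_kg * G

# A 3 kg drone displacing 3 litres of water is neutrally buoyant:
print(round(net_buoyant_force(3.0, 0.003), 6))  # 0.0

# Adding 0.2 kg of lead ballast without extra displacement makes it sink:
print(net_buoyant_force(3.2, 0.003) < 0)  # True
```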

4 Conclusion

The development of the DugongBot underwater drone using a low-cost open-source robotic platform has been successfully implemented. The underwater drone has been designed for maneuverability, performance and underwater footage capability. This project will benefit related underwater industries by demonstrating small underwater drone features at minimal implementation cost. In this paper, an open-source prototype for building low-cost underwater drones and for customising their thrusters and ballast configurations has been successfully tested using a three-propeller underwater drone based on open-source hardware and software solutions. Nonetheless, further tests in deeper waters and under different frame configurations will be undertaken in the near future.

Acknowledgements The authors wish to thank the Ministry of Education (MOE) and Universiti Teknikal Malaysia Melaka for their support.


References

1. Aristizábal LM, Rúa S, Gaviria CE, Osorio SP, Zuluaga CA, Posada NL, Vásquez RE (2016) Design of an open source-based control platform for an underwater remotely operated vehicle. DYNA 83(195):198–205
2. Schillaci G, Schillaci F, Hafner VV (2017) A customisable underwater robot. arXiv abs/1707.06564
3. OpenROV Trident. https://www.sofarocean.com/products/trident. Accessed 10 Oct 2019
4. Fathom One. https://www.kickstarter.com/projects/1359605477/fathom-one-the-affordablemodular-hd-underwater-dr. Accessed 10 Oct 2019
5. Geneinno Poseidon. https://www.geneinno.com/poseidon.html. Accessed 10 Oct 2019
6. BlueROV2. https://www.bluerobotics.com/store/rov/bluerov2/. Accessed 10 Oct 2019
7. Aras MSM, Azis FA, Othman MN, Abdullah SS (2012) A low cost 4 DOF remotely operated underwater vehicle integrated with IMU and pressure sensor. In: 2012 4th international conference on underwater system technology: theory and applications (USYS 2012), Shah Alam, Malaysia
8. Zain ZMd, Noh MM, Ab Rahim KA, Harun N (2016) Design and development of an X4-ROV. In: IEEE 6th international conference on underwater system technology: theory & applications, Penang, Malaysia
9. Mainong AI, Ayob AF, Arshad MR (2017) Investigating pectoral shapes and locomotive strategies for conceptual designing bio-inspired robotic fish. J Eng Sci Technol 12(1):001–014
10. Singapore Autonomous Underwater Vehicle Challenge (2017). https://sauvc.org/. Accessed 10 Oct 2019
11. Malaysia Autonomous Underwater Vehicle Challenge (2018). http://oes.ieeemy.org/. Accessed 10 Oct 2019
12. Yusof AA, Nor MKM, Shamsudin SA, Alkahari MR, Mohd Aras MS, Nawawi MRM (2018) Facing the autonomous underwater vehicle competition challenge: the TUAH AUV experience. In: Hassan M (ed) Intelligent manufacturing & mechatronics. Lecture notes in mechanical engineering. Springer, Singapore
13. Yusof AA, Nor MKM, Shamsudin SA, Alkahari MR, Musa M (2018) The development of PANTHER AUV for autonomous underwater vehicle competition challenge 2017/2018. In: Hassan M (ed) Intelligent manufacturing & mechatronics. Lecture notes in mechanical engineering. Springer, Singapore
14. Yusof A, Kawamura T, Yamada H (2012) Evaluation of construction robot telegrasping force perception using visual, auditory and force feedback integration. J Robot Mechatron 24(6):949–957
15. Sulaiman H, Nor MKM, Yusof AA, Aras MSM, Mohamad Ayob AF (2019) Low cost observation class remotely operated underwater vehicle using open-source platform: a practical evaluation between OpenROV and BlueROV. In: International conference on ocean, engineering technology and environmental sustainability (I-OCEANS 2019), Kuala Terengganu, Malaysia
16. IUCN Red List of Threatened Species. https://www.iucn.org/ur/node/24442. Accessed 10 Oct 2019
17. Jakobi N. Guide ID 59. How to build a WiFi enabled Tether Management System. https://openrov.dozuki.com/Guide/How+to+build+a+WiFi+enabled+Tether+Management+System/59. Accessed 10 Oct 2019

Development of Autonomous Underwater Vehicle for Water Quality Measurement Application

Inani Yusra Amran, Khalid Isa, Herdawatie Abdul Kadir, Radzi Ambar, Nurul Syila Ibrahim, Abdul Aziz Abd Kadir, and Muhammad Haniff Abu Mangshor

Abstract: Autonomous Underwater Vehicles (AUVs) are unmanned, self-propelled vehicles, typically deployed from a surface vessel and capable of operating independently of that vessel for periods of several hours to several days. This project presents the development of an AUV with a pH sensor, a temperature sensor, and a turbidity sensor to measure water quality. The existing method is the conventional approach, in which a scientist has to go to the site and collect a water sample to measure its quality; it requires more time to gather the data and lacks the capability for real-time data capture. Through the innovation of this project, a scientist can measure water quality in real time, autonomously, and more easily than with the conventional method. In this project, two thrusters placed on the sides of the AUV control its horizontal motion, guided by a digital magnetic compass that controls the AUV's direction. The vertical movement of the AUV is controlled by two thrusters located at the bottom of the AUV, with the help of a depth sensor to ensure that the AUV remains submerged. A pH sensor detects whether the water is close to acidic, alkaline, or neutral. The temperature sensor senses the water temperature. The turbidity sensor detects the cloudiness of the water, whether murky or clear. These three sensors start operating when the microcontroller powers up. The AUV was tested in the G3 lake at UTHM to evaluate its ability to stay submerged and its capability to measure the water quality parameters.
The AUV has successfully carried out the given task without requiring the intervention of an operator. Future researchers can improve the AUV's design to make it work more efficiently.

Keywords: Autonomous Underwater Vehicle · Water quality measurement · Water quality sensor

I. Y. Amran, K. Isa (&), H. A. Kadir, R. Ambar, N. S. Ibrahim, A. A. A. Kadir, M. H. A. Mangshor
Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, 86400 Parit Raja, Batu Pahat, Johor, Malaysia
e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2021
Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_11


1 Introduction

1.1 Project Background

An Autonomous Underwater Vehicle (AUV), also known as an unmanned underwater vehicle, is a robot that operates underwater without requiring commands from an operator. An AUV is different from a Remotely Operated Vehicle (ROV); the difference lies in how the two robots are operated. An AUV works independently of humans, while an ROV is an unoccupied underwater robot linked to a vessel by a series of cables [1]. An AUV submerges with its mission requirements coded in by the user and returns after it finishes and completes the mission, whereas an ROV transmits all its data to the operator through cables that also convey power and allow the ROV to be controlled by the operator. AUVs are being used for more and more tasks, with roles and missions continually evolving, for example in the oil and gas industry, which uses AUVs to make detailed seafloor maps before building subsea infrastructure. Scientists use AUVs for research such as ocean-floor mapping and for finding the wreckage of missing aeroplanes, and AUVs can also be operated as a hobby. Water is a vital resource for every living thing. However, when humans pollute water, it becomes unclean, and water problems become widespread. Water contamination is a primary cause of human disease [2]. Thus, measuring and monitoring water quality is crucial. The conventional methods used to assess the quality of contaminated water lack the capability for real-time data capture. Traditional techniques of collecting, testing, and analysing water samples in water laboratories are not only expensive but also lack the capacity to collect, analyse, and rapidly disseminate information in real time [3]. Several procedures need to be completed before the data come out.
Many scientists collect water samples only from lake banks and the surface of the water, and they must also wait for good weather to go out and collect the samples. The collected water is then tested and analysed in the laboratory, which takes time to produce a result. As a consequence, the measurements are not real-time data, because the conventional process takes time to analyse. As traditional tools, scientists use litmus paper (pH strip paper) or a membrane-based kit. Litmus paper is made of a lichen-based dye, turning purple in acid (pH < 6.0) and green in a base (pH > 8.0) [4]. The litmus paper only needs to be dipped into the collected water, and the paper changes colour according to the pH indicator, which maps colours to specific ranges of pH values. A membrane-based kit is also a type of strip paper, containing a tetrazolium dye and a carbon source. The kit only requires the water sample to be kept on it while the colour development is observed [5]. These traditional pH tools cannot provide real-time data because the litmus paper changes colour only after the paper is


dipped into the collected water. The resulting colour must then be matched against the pH indicator to determine whether the water is acidic, alkaline or neutral. The objective of this project is to develop a functional prototype of an AUV for water quality measurement, where an AUV equipped with a pH sensor, turbidity sensor, and temperature sensor is a new idea and innovation that makes it easier for scientists to carry out measurement tasks. The innovation is that the AUV can collect water quality data both on the surface of the water and underwater. The data are recorded and stored in a data logger, and the recorded data can be retrieved by removing the memory card inside the data logger. With this kind of innovation, the data produced approximate real-time capture; the performance of the AUV and the effectiveness of the water quality measurement are also analysed. This paper is structured in five parts. Section 1 presents the introduction to this project, discussing the problem statements and goals and reviewing associated prior work. Section 2 introduces the project methodology, including the system layout and several project trials. Section 3 addresses the outcomes and analyses, discussing the gathered information in detail. Towards the completion of this project, Sect. 4 discusses the project limitations, and Sect. 5 presents future work to enhance this project.
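The acid/alkaline/neutral decision that the strips and the pH sensor encode can be written directly (a sketch; the thresholds follow the standard pH scale, with the width of the neutral band around 7 being an illustrative assumption):

```python
def classify_ph(ph: float) -> str:
    """Classify a pH reading on the standard 0-14 scale.
    The +/-0.5 neutral band around pH 7 is an illustrative choice."""
    if not 0.0 <= ph <= 14.0:
        raise ValueError("pH must be within 0-14")
    if ph < 6.5:
        return "acidic"
    if ph > 7.5:
        return "alkaline"
    return "neutral"

print(classify_ph(5.0))  # acidic
print(classify_ph(7.0))  # neutral
print(classify_ph(9.2))  # alkaline
```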

1.2 Previous AUVs with Water Quality Sensors

This section addresses relevant previous research, offering a detailed and systematic perspective of the underwater vehicle literature. Komaki [6] is concerned with the design and construction of an AUV specifically designed for entry into hydrothermal settings over complicated, wide-depth seabed topography. Such vehicles can operate very close to the vent fields and carry various types of chemical sensors. Okamura developed MINIMONE (Mini Monitoring Equipment) for collecting water samples. MINIMONE analyses various water characteristics such as water density, pH, dissolved inorganic carbon, nutrients, iron and manganese. This AUV operates underwater. The advantage of the AUV Urashima is that information is logged every second; the disadvantage is that the AUV is 10 m in length, as shown in Fig. 1. Takeuchi [7] presented the design and implementation of a Solar-powered Autonomous Surface Vehicle (SASV), as shown in Fig. 2. The SASV measures depth, temperature, turbidity, conductivity, dissolved oxygen, and chlorophyll. The ultimate objective of that study is to create an index of ocean ecosystem soundness and to suggest preventive steps to avoid collisions between fast passenger vessels and large whales. The SASV operates on the sea surface. The advantage of this project is that it is solar-powered; the disadvantage is that the data are collected only on the water surface.


Fig. 1 AUV Urashima [6]

Fig. 2 Solar-powered ASV [7]

An innovative project was created by Helmi [8] to monitor water quality in continental, coastal and lake regions. The parameters for this project are pH, Oxidation Reduction Potential (ORP) and water temperature, with the water quality sensors attached to a buoy. This project operates on the water surface, as shown in Fig. 3. The benefit of the portable buoy is that information is obtained from it in real time; the disadvantage is that the data are collected only on the water surface. Prasad [9] stated that Internet of Things (IoT) and Remote Sensing (RS) methods are commonly used to monitor, collect and analyse information from remote places. The researchers developed a Smart Water Quality Monitoring system to analyse a range of water parameters, as shown in Fig. 4. That project aims to develop a technique for monitoring the quality of seawater, surface water, tap water and polluted stream water in an attempt to help manage water pollution using IoT and RS technologies. The benefit of the Smart Water Quality Monitoring System is that the information is stored onboard on an SD card or sent to a File Transfer


Fig. 3 Mobile buoy [8]

Fig. 4 Smart water quality system [9]

Protocol (FTP) or cloud server; the disadvantage is that the data can only be taken from one point at a time. Kafli [10] mentioned that environmental monitoring is the process of characterising and monitoring environmental quality, such as air quality and water quality. Furthermore, environmental monitoring is used to prepare environmental impact assessments and in many cases where human operations pose a risk of damaging the natural environment. The author developed a floating platform to observe the air and the water, as shown in Fig. 5. This device monitors parameters such as temperature, humidity, latitude and longitude, water pH, date and time, and carbon monoxide. The benefit of this project is that the information is saved every 10 min to the SD card in .txt format [11]; the weakness is that the water quality data are collected only at the water surface. Niswar [12] studied soft-shell crab farming in South-East Asia, for example in Indonesia. Poor water quality in crab farming raises the mortality rate in the crab pond. The author proposed to design and implement a water quality monitoring system for crab farming using IoT technology to raise awareness among


Fig. 5 Floating platform for environment monitoring [10]

Fig. 6 IoT-based water quality monitoring system for soft-shell crab farming [12]

farmers about maintaining acceptable water quality levels in the pond. The sensors used in this project are temperature, salinity, and pH sensors. This system operates at the bottom of the pond, as shown in Fig. 6. The advantage of this project is that the sensed data are transmitted via a ZigBee network and stored in a cloud database; the disadvantage is that the data are collected only on the water surface.


2 Methodology

2.1 Project Design

In order to attain the goals, this project is divided into several stages to ensure that the design of the project can be carried out smoothly. The phases can be described in three sections: the first is modelling, the second is design and development, and the third is testing and analysis. Figure 7 shows the sequence plan of the AUV project. The first phase of this project is modelling, where the AUV system architecture and mechanical assembly drawing are designed; computer-aided design software, SolidWorks, is used to draw the 3D model and design the proposed AUV structure. Phase 2 is to design and develop the AUV, consisting of hardware development, software development, and integration, which covers the internal and external mechanical design and the electrical design. Phase 3 is to test and analyse the components of the AUV, with three tests in focus: a lake test, a buoyancy test, and a leaking test. Figure 8 shows the sensor flowchart of the AUV for the water quality measurement application. Each sensor is switched on with its connected parts and senses the surroundings. The information gathered by the pH sensor, temperature sensor, and turbidity sensor is stored in the data logger every 1 s. If the data are not collected or the result is not accurate, all sensor connections need troubleshooting.
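The acquire-check-store cycle described above can be sketched as a polling loop (illustrative Python; on the actual AUV this logic runs on the Arduino, and the read function below is a stand-in for the real sensor drivers, returning fixed example values):

```python
import csv
import io

def read_sensors():
    """Stand-in for the pH, temperature, and turbidity sensor drivers.
    Returns None if acquisition fails (the 'troubleshoot' branch)."""
    return {"ph": 7.1, "temp_c": 28.4, "turbidity_ntu": 12.0}

def log_samples(n_samples: int) -> str:
    """Acquire n samples and store them as CSV rows (the data-logger role).
    Returns the CSV text; the real system writes to an SD card every 1 s."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["ph", "temp_c", "turbidity_ntu"])
    writer.writeheader()
    for _ in range(n_samples):
        sample = read_sensors()
        if sample is None:  # acquisition failed -> check sensor connections
            raise RuntimeError("check sensor connections")
        writer.writerow(sample)
    return buf.getvalue()

print(log_samples(2).splitlines()[0])  # ph,temp_c,turbidity_ntu
```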

Fig. 7 Sequence plan of project


Fig. 8 Sensor flowchart

Fig. 9 System flowchart

Figure 9 shows the system flowchart for the operation of the AUV. Once it is entirely in the water, the AUV switches on automatically. The compass navigates the AUV


underwater, assisted by the depth sensor to keep the AUV submerged. When the direction of the AUV changes, the horizontal thrusters correct the AUV back to its commanded heading. At the same time, the vertical thrusters adjust the AUV to remain submerged if it resurfaces. The pH sensor, temperature sensor, and turbidity sensor start operating as soon as the AUV is switched on.
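The depth-keeping branch of the flowchart reduces to a three-way comparison of the depth reading against a target band (a sketch; the rotation directions follow the flowchart, while the tolerance value and command strings are illustrative assumptions):

```python
def vertical_thruster_command(depth_m: float, target_m: float,
                              tol_m: float = 0.1) -> str:
    """Decide the bottom-thruster action from the depth-sensor reading.
    Mirrors the flowchart: reading below the target band -> rotate clockwise
    to submerge deeper; above the band -> rotate counter-clockwise to rise;
    within the band -> hold."""
    if depth_m < target_m - tol_m:
        return "rotate_cw"    # thrust down: submerge deeper
    if depth_m > target_m + tol_m:
        return "rotate_ccw"   # thrust up: rise
    return "hold"

print(vertical_thruster_command(0.5, 1.0))   # rotate_cw
print(vertical_thruster_command(1.4, 1.0))   # rotate_ccw
print(vertical_thruster_command(1.05, 1.0))  # hold
```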

2.2 System Design

Figure 10 shows the project's operational block diagram, which consists of the input, process, and output parts of the project. The input part comprises several sensors, with a battery as the primary power supply. Processing takes place in the Arduino microcontroller, and the data logger then records the output. The outcomes of this method are discussed in the results and analyses in Sect. 3. Several hardware experiments, namely endurance testing, buoyancy testing, and leakage testing, were performed after the model had been constructed. The AUV was tested for buoyancy, endurance, and leakage at the Universiti Tun Hussein Onn Malaysia (UTHM) G3 Lake. In Fig. 10, the sensors enable the AUV to perceive its surroundings. The sensors in the input section play a key role in providing the AUV with accurate and detailed environmental information; they include a pH sensor, a turbidity sensor, and a temperature sensor. The pH sensor is used to evaluate the quality of the water. The turbidity sensor is used to sense the water's cloudiness. The temperature sensor is used to detect the water temperature. These three sensors operate simultaneously when the AUV is switched on. The output section consists of a memory card and 4 thrusters; the memory card stores all the data collected from the water quality sensors, and the thrusters are used to stabilise the AUV and control its movement.

Fig. 10 Block diagram of system

2.3 Hardware Requirements

The hardware requirements for the AUV project are actuators and sensors. Figure 11 shows the T100 Thruster and its Electronic Speed Controller (ESC); four thrusters with ESCs were used in this project. The T100 Thruster is a patented underwater robotic thruster that delivers high performance, with more than 5 lb of thrust, and is durable enough to be used at great depths in the open ocean. The T100 is made of high-strength, UV-resistant, injection-moulded polycarbonate plastic. The core of the motor is sealed and protected with an epoxy coating, and it uses high-performance plastic bearings rather than steel bearings, which rust in saltwater. Everything that is not plastic is high-quality, non-corroding aluminium or stainless steel. The T100's propeller and nozzle deliver reliable and efficient thrust, while active water cooling helps cool the motor. The thruster uses a brushless electric motor running from 300 to 4200 rpm, with up to 130 W of output power and 2.36 kgf of nominal thrust [15]. The T100 can provide thrust in both the clockwise (CW) and counter-clockwise (CCW) directions. Figure 12 shows the microcontroller used to control the AUV. The board has 54 digital I/O pins and 16 analogue input pins [16]. The Arduino Mega uses an 8-bit Atmel microcontroller, the ATmega2560, with 256 KB of flash memory, 8 KB of SRAM, 4 KB of EEPROM, and a 16 MHz clock frequency [17]. The Arduino Mega can be powered by an external power supply or via a USB connection; the power source is selected automatically. This microcontroller controls the four thrusters, the digital magnetic compass, the depth sensor, the temperature sensor, the pH sensor, the turbidity sensor, the IMU module, and the data logger. Figure 13 shows an analogue pH sensor that senses the pH level of the water. This sensor operates at 5 V, and its measuring range is pH 0 to pH 14.
The pH sensor is an alternative way to obtain water quality results, compared with litmus paper or a pH testing kit whose colours must be matched against a pH indicator chart to read the result. The electrode is made of a sensitive glass membrane with low impedance, and after calibration the pH response is fast. The pH is a significant parameter for water quality measurement, and it affects aquatic animal development and reproduction [18].

Fig. 11 T100 Thruster [13] and ESC [14]

Fig. 12 Arduino Mega 2560 microcontroller

Fig. 13 pH sensor


Figure 14 shows the turbidity sensor used to evaluate water turbidity. Its operation is based on the principle that the intensity of light scattered by suspended matter is proportional to the matter's concentration [19]. The turbidity sensor operates at 5 V and 40 mA. Figure 15 shows the Celsius temperature sensor, also known as the TSYS01. It is a quick-response, high-precision temperature sensor, sealed from the water, protected by an aluminium cage, and ready to be installed in a waterproof enclosure [20]. The TSYS01 sensor itself has a rapid response time, and the entire package is designed to maintain that speed, enabling accurate measurement of the temperature profile even when it drops or rises rapidly.

Fig. 14 Turbidity sensor

Fig. 15 Temperature sensor


3 Results and Analysis

3.1 3D AUV Modeling

This subtopic discusses the tools used for the 3D AUV modeling; the model was drawn in SolidWorks 2016. Figure 16 shows that the AUV is designed with a box shape based on the features required for the AUV stabilisation system. The AUV's mechanical system is designed so that the centre of buoyancy (COB) is above the centre of gravity (COG). The distance between the COB and the COG is referred to as the metacentric height. The restoring moment that returns the vehicle to its stable orientation is proportional to the metacentric height: as the metacentric height increases, the hydrostatic stability increases. In addition, the COB and COG must be aligned in the vertical direction so that the vehicle experiences no moment when its pitch and roll angles are zero. Figure 17 shows the isometric 3D design of the Autonomous Underwater Vehicle. The isometric view shows the three principal axes, where the x-axis represents the front view, the y-axis the left view, and the z-axis the top view of the AUV 3D model.
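The stability statement above can be made concrete: with the COB above the COG, the metacentric height is positive and the restoring moment grows in proportion to it (a sketch; the weight, heights, and heel angle are illustrative numbers, not measured values for this AUV):

```python
import math

def restoring_moment(weight_n: float, z_cob: float, z_cog: float,
                     heel_deg: float) -> float:
    """Hydrostatic restoring moment (N*m) for a heeled submerged body:
    M = W * GM * sin(theta), with GM = z_COB - z_COG measured upward."""
    gm = z_cob - z_cog  # metacentric height (positive when COB is above COG)
    return weight_n * gm * math.sin(math.radians(heel_deg))

# A 30 N vehicle with COB 5 cm above COG, heeled 10 degrees:
m1 = restoring_moment(30.0, 0.30, 0.25, 10.0)
# Doubling the metacentric height doubles the restoring moment:
m2 = restoring_moment(30.0, 0.35, 0.25, 10.0)
print(round(m2 / m1, 3))  # 2.0
```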

Fig. 16 3D AUV Modeling


Fig. 17 Isometric 3D AUV design

3.2 Control System

All thrusters and sensors were calibrated and tested for functionality before installation on the AUV, as shown in Fig. 18. The thrusters are connected to the AUV control system, powered by an external 11 V supply, which controls their speed and direction. The thrusters are mounted precisely at the centre of the vehicle to prevent the AUV from becoming unbalanced when submerged. A depth sensor is used to give the AUV instructions for submerging or floating underwater: it detects the water depth via its pressure sensor and transmits the data to the control system. The

Fig. 18 Thruster calibration and testing


Fig. 19 Thrusters tested on the AUV structure

control system then instructs the thrusters to submerge deeper or rise depending on the preset value. A digital magnetic compass is used as the AUV navigation system: it provides the microcontroller with directional data, and the AUV moves in the preset direction. The AUV's orientation system uses an Inertial Measurement Unit (IMU) module. The IMU sensors help to position the object they are attached to in three-dimensional space; these values are usually angles used to determine its orientation. Figure 19 shows the thruster testing process. All four thrusters are attached to the AUV's open structure: two on the left and right sides for horizontal movement, and two at the bottom for vertical movement. The two side thrusters provide back-and-forth motion, rotating clockwise for forward movement and counter-clockwise for backward movement. The two bottom thrusters provide submerging and floating motion, rotating clockwise to submerge and counter-clockwise to float.
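The submerge-or-rise decision described above can be sketched as simple threshold logic. The function name, deadband, and return values are hypothetical; the real system drives ESCs from the microcontroller:

```python
def vertical_thruster_command(measured_depth_m, preset_depth_m, deadband_m=0.1):
    """Decide the vertical thruster direction from the depth sensor reading.
    'cw' (clockwise) submerges the AUV, 'ccw' (counter-clockwise) floats it,
    mirroring the rotation directions described in the text."""
    error = preset_depth_m - measured_depth_m
    if error > deadband_m:      # shallower than preset -> submerge deeper
        return "cw"
    if error < -deadband_m:     # deeper than preset -> rise
        return "ccw"
    return "hold"               # within the deadband -> maintain depth
```

The deadband prevents the thrusters from chattering between directions when the depth reading hovers around the preset value.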

3.3 AUV Prototype

Before the prototype was constructed, several experiments were performed to check each sensor's functionality. A few tests were also carried out by placing the model in the lake: a buoyancy test, a leakage test, and an endurance test. After all the experiments were completed, all the parts were assembled onto the AUV body structure, as shown in Fig. 20. The AUV has four thrusters: two for horizontal movement and two for vertical movement. It has two compartments used to store all its electronic components to prevent them from coming into contact with water. All AUV sensors, such as the compass, IMU module, data logger, depth sensor, turbidity sensor, temperature sensor, and pH sensor, are stored in the upper compartment. The thruster speed controllers and power supply are stored in the lower compartment. Floats and weights were used to provide sufficient buoyancy force to keep the AUV balanced while fully submerged. Data collection, as shown in Fig. 21, was conducted at the UTHM G3 Lake. All sensors begin collecting data when the power supply is switched on, and the data are sent to the Arduino microcontroller for storage on the memory card. The underwater compartments of the AUV are reinforced with white tape, epoxy, and silicone grease to ensure that no water can enter and contact the components, which would short-circuit the entire circuit. Plasticine was also used as additional reinforcement to seal the opening of each compartment.

Fig. 20 The AUV prototype


Fig. 21 AUV field test at the G3 Lake, UTHM

The endurance test shows that the AUV was able to withstand turbulent streams of water. For example, when the water flow is turbulent, the AUV can swim stably and stay balanced with the help of its assistive sensors, such as the IMU, and its actuators, which keep the AUV swimming in position.

3.4 AUV Submerging and Leaking Test

Following the complete assembly of the AUV, it was submerged at the G3 Lake in UTHM to test whether it could remain fully submerged underwater for a period of time, as shown in Fig. 22. Floats are added to the sides of the AUV as a floating mechanism to increase the buoyant force acting on it.


Fig. 22 AUV submerging and leaking test

Additional weights are added to the AUV as a sinking mechanism, preventing it from surfacing back to the water surface. Both mechanisms work together to keep the AUV floating underwater. The AUV's underwater compartments play a major role, as they are used to store the AUV control system. Since the control system is not waterproof, it is very important to ensure that it does not come into contact with water. At the same time, a leakage test is carried out to ensure that no water can enter the underwater compartments.

3.5 Experimental Results

The project goal, to develop an AUV for a water quality measurement application, was effectively accomplished. The system gathered water turbidity, temperature, and pH data and saved it every 1 s, as shown in Fig. 23, to the SD card in .txt format. The UTHM G3 Lake was chosen for the AUV field test because it contains a thermocline, a transition layer between surface water and deep water. Each layer of water, the mixed (surface) layer, the thermocline layer, and the deep water, has a different temperature, as shown in Fig. 24. Water close to the surface, warmed by the sun, is less dense than water close to the bottom, because water density changes as the water temperature changes: the lower the water temperature, the higher the water density, down to around 4 °C [21]. In a thermocline, the temperature decreases rapidly with small increases in depth. These three layers also differ in water cloudiness and pH value.
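The once-per-second logging can be sketched as follows. The field order and comma-separated format are assumptions, since the paper only states that data are saved every 1 s to the SD card in .txt format; an in-memory buffer stands in for the card:

```python
import io

def log_sample(buffer, timestamp, temperature_c, turbidity_v, ph):
    """Append one comma-separated sample record, one line per second.
    Field order (time, temperature, turbidity, pH) is an assumption."""
    buffer.write(f"{timestamp},{temperature_c:.2f},{turbidity_v:.2f},{ph:.2f}\n")

log = io.StringIO()  # stand-in for the .txt file on the SD card
log_sample(log, "12:56:18", 29.51, 2.10, 7.24)
log_sample(log, "12:56:19", 29.48, 2.12, 7.25)
```

Each call produces one line of the .txt log, matching the 1 s sampling described in the text.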


Fig. 23 Data is saved in SD card with .txt format

Based on the results in Fig. 25, during the field test at the G3 Lake in UTHM, the water temperature decreased rapidly to below 15 °C from 12:57:00 to 12:57:20. This is because the AUV was submerged at the centre of the lake, within the thermocline layer; the closer to the thermocline, the lower the water temperature. In the early minutes of AUV operation, the pH sensor produced unstable data because the voltage was read incorrectly, so the pH values derived from that voltage were discarded [23]. The


Fig. 24 Thermocline of water [22]

sensitivity of the pH sensor's glass electrode takes time to settle on correct readings of the water's pH. After several seconds, once the AUV started to swim at the centre of the lake, the pH sensor read values between pH 7 and pH 10, because the pH value changes with each layer and location the AUV dives through. Finally, the turbidity sensor senses the cloudiness of the water. From the results shown, the turbidity output rose to 5 V from 12:56:18 to 12:56:36, because the water flow was unsteady, in other words turbulent, and turbulence makes the water murkier.


Fig. 25 Data analysis for temperature, turbidity, and pH sensors

4 Conclusion

After testing the AUV in the G3 Lake at Universiti Tun Hussein Onn Malaysia, it can be concluded that the AUV can perform the given task without requiring operator intervention. The AUV switched on automatically once it was entirely in the water, and all sensors in the control system powered up, including the water quality measurement sensors. The digital magnetic compass navigated the AUV as it swam underwater, while the depth sensor helped keep the AUV submerged. At the same time, the water quality measurement sensors, namely the pH, temperature, and turbidity sensors, started collecting water data and recording it in the data logger. A few problems arose before the final phase. The first was leakage in the second compartment, which holds the power supply (batteries) and four ESCs; it was solved by applying sealing tape with a layer of silicone grease around the thruster wires to block the passage of water. The second problem, the power supply, was solved by adding a charging port to the power supply compartment so that the power supply could be recharged directly within the AUV instead of replacing old batteries with new ones. The third problem, uploading code to the microcontroller, was solved by adding a Universal Serial Bus


(USB) port to the primary compartment (the microcontroller compartment), so that the user can upload code through the USB port connected to the microcontroller instead of opening the hull. In conclusion, the project's aim of designing and developing a functional Autonomous Underwater Vehicle for a water quality measurement application has been achieved. The last objective, to analyse the performance of the AUV and the effectiveness of the water quality measurement, was also successfully achieved, as the AUV was able to operate fully functionally.

5 Recommendation

For future work, a few improvements can be implemented. One recommendation is to decrease the length of the AUV, because a smaller AUV has better manoeuvrability. Based on Newton's First Law of Motion, also known as the law of inertia, an object at rest remains at rest and an object in motion remains in motion unless an unbalanced force acts on it. As the mass of the AUV increases, its inertia also increases. In terms of manoeuvrability, a smaller AUV has smaller inertia, which benefits the vehicle. Another improvement for future projects is to use waterproof electronic components. This plays an essential part in the development of an AUV, since the AUV is used explicitly for underwater missions, in particular mapping seafloors, detecting wreckage, and measuring water quality at the seafloor. Waterproof electronic components will not malfunction on contact with water, while also lowering the cost of replacing malfunctioning components with new ones.

References

1. National Oceanic and Atmospheric Administration (2018) What is the difference between an AUV and an ROV? US Department of Commerce
2. Zhou B, Bian C, Tong J, Xia S (2017) Fabrication of a miniature multi-parameter sensor chip for water quality assessment. Sensors 17(12):157
3. Faustine A, Mvuma AN, Mongi HJ, Gabriel MC, Tenge AJ, Kucel SB (2014) Wireless sensor networks for water quality monitoring and control within Lake Victoria basin: prototype development. Wirel Sens Netw 6:281–290
4. Gunda NSK, Dasgupta S, Mitra SK (2017) DipTest: a litmus test for E. coli detection in water. PLoS ONE 12(9):1–13
5. Kumar SB, Shinde AH, Mehta R, Bhattacharya A, Haldar S (2018) Simple, one-step dye-based kit for bacterial contamination detection in a range of water sources. Sens Actuators B Chem 276:121–127


6. Komaki K, Hatta M, Okamura K, Noguchi T (2015) Development and application of chemical sensors mounting on underwater vehicles to detect hydrothermal plumes. In: 2015 IEEE underwater technology, UT
7. Arima M, Takeuchi A (2016) Development of an autonomous surface station for underwater passive acoustic observation of marine mammals. In: Ocean 2016, Shanghai, no. 26289339, pp 1–4
8. Helmi AHMA, Hafiz MM, Rizam MSBS (2014) Mobile buoy for real-time monitoring and assessment of water quality. In: Proceedings of the 2014 IEEE conference on systems, process and control, ICSPC 2014, December, pp 19–23
9. Prasad AN, Mamun KA, Islam FR, Haqva H (2016) Smart water quality monitoring system. In: 2015 2nd Asia-Pacific world congress on computer science and engineering, APWC CSE 2015, pp 1–6
10. Kafli N, Othman MZ, Isa K (2017) Unsupervised floating platform for environmental monitoring. In: Proceedings of the 2016 IEEE international conference on automatic control and intelligent systems, I2CACIS 2016, October, pp 84–89
11. Kafli N, Othman MZ, Isa K (2016) Development of a floating platform for measuring air and water quality. In: 2016 IEEE 6th international conference on underwater system technology: theory and applications, USYS 2016, pp 177–182
12. Niswar M et al (2018) IoT-based water quality monitoring system for soft-shell crab farming. In: Proceedings of the 2018 IEEE international conference on internet of things and intelligence system, IOTAIS 2018, pp 6–9
13. T100 Thruster - Blue Robotics. https://www.bluerobotics.com/store/thrusters/t100-t200thrusters/t100-thruster/. Accessed 18 May 2019
14. Speed Controllers (ESCs) Archives - Blue Robotics. https://www.bluerobotics.com/productcategory/thrusters/speed-controllers/. Accessed 18 May 2019
15. Nascimento S, Valdenegro-Toro M (2018) Modeling and soft-fault diagnosis of underwater thrusters with recurrent neural networks. IFAC-PapersOnLine 51(29):80–85
16. Introduction to Arduino Mega 2560 - The Engineering Projects. https://www.theengineeringprojects.com/2018/06/introduction-to-arduino-mega-2560.html. Accessed 18 May 2019
17. RobotShop (2015) Arduino Mega 2560 Datasheet. Power, pp 1–7
18. Wei Y, Hu X, An D (2018) Design of an intelligent pH sensor based on IEEE1451.2. IFAC-PapersOnLine 51(17):191–198
19. Lambrou TP, Anastasiou CC, Panayiotou CG (2010) A nephelometric turbidity system for monitoring residential drinking water quality. Springer, Berlin, Heidelberg, pp 43–55
20. Fast-Response, High Accuracy (±0.1 °C) Temperature Sensor. https://www.bluerobotics.com/store/sensors-sonars-cameras/sensors/celsius-sensor-r1/. Accessed 18 May 2019
21. About Water Temperature. https://staff.concord.org/~btinker/GL/web/water/water_temperatures.html. Accessed 27 May 2019
22. US Department of Commerce, N. N. W. S. Thermocline - Temperature Fluctuations at Erie, PA
23. Top 10 Mistakes in pH Measurement. https://blog.hannainst.com/top-10-mistakes-in-phmeasurement. Accessed 21 May 2019

Discrete Sliding Mode Controller on Autonomous Underwater Vehicle in Steering Motion

Nira Mawangi Sarif, Rafidah Ngadengon, Herdawatie Abdul Kadir, and Mohd Hafiz A. Jalil

Abstract The purpose of this study is to implement sliding mode control in the discrete time domain for an Autonomous Underwater Vehicle (AUV). A six Degree of Freedom (DOF) model was established for the Naval Postgraduate School (NPS) AUV II, followed by linearizing the surge and sway nonlinear Equations of Motion (EoM) in the horizontal plane to simplify the control system design. The discrete sliding mode controller was designed based on Gao's reaching law. Discrete Proportional Integral Derivative (PID) controllers were used for comparative performance analysis, with a brief discussion of the chattering phenomenon in the controller input. Computer simulations on the NPS AUV II showed that the proposed controller has zero overshoot and a faster settling time than the discrete PID controller.

Keywords AUV · Chattering reduction · Discrete time sliding mode

1 Introduction

The Autonomous Underwater Vehicle (AUV) has gained popularity over three decades due to its versatility and excellent performance, and is increasingly used in many industries [1]. Its compact size, self-contained propulsion system, and capability of carrying sensors such as depth sensors, video cameras, side-scan sonar and other oceanographic measuring devices make the AUV well suited to dangerous missions. These futuristic elements extend the AUV's advantages into much wider areas such as surveillance, environmental monitoring, underwater inspection of harbours and pipelines, geological and biological surveys, and mine countermeasures. However, extremely unpredictable ocean behaviour creates challenges for AUV navigation and motion performance, in which this phenomenon demonstrates

N. M. Sarif, R. Ngadengon, H. A. Kadir, M. H. A. Jalil
Faculty of Electrical Engineering, University Tun Hussein Onn Malaysia, 86400 Parit Raja, Johor, Malaysia
e-mail: rafi[email protected]

© Springer Nature Singapore Pte Ltd. 2021
Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_12


high-frequency oscillating movement, affecting sensor performance (especially acoustical and optical sensors) and causing the dynamic system to be highly nonlinear and time-varying, with uncertainties in hydrodynamic parameters such as added mass, lift forces, gravity and buoyancy forces [2]. Additionally, most AUVs operate in underactuated mode, hence tracking and stabilization control become demanding tasks, owing to the vehicle possessing more Degrees Of Freedom (DOF) than independent controls [3]. This restriction is imposed in real-life applications, as inverting or pointing vertically can cause equipment damage or a dangerous control response [4]. As a result, AUV motion control is restricted to only one non-interacting subsystem at a time [5]. Due to the aforementioned challenges, many advanced control techniques have been implemented in the existing literature, including robust control techniques [6–8], intelligent control methods [9] and adaptive control approaches [10–12]. Among the robust controllers, SMC is evidently a promising strategy [13] for overcoming these obstacles, due to its simpler computation and robustness to external disturbances and parameter variations [14]. The literature shows that the majority of SMC applications on AUVs take a continuous-time point of view, but their effectiveness in real situations is reduced by the current trend toward digital rather than analog control of dynamic systems [15]. In other words, controllers nowadays are almost exclusively implemented on digital computers or microprocessors, mainly due to the availability of low-cost digital computers and the advantages of digital over continuous-time signals [16]. For this reason, researchers have shown significant interest over recent years [13, 17, 18] in solving the problems caused by the discretization of continuous-time controllers. It started in 1997, when Lee et al. [19] adopted self-tuning discrete sliding mode control on the AUV ARMA based on the equivalent discrete variable structure control method, continued by research on quasi-sliding mode control in the presence of uncertainties and long sampling intervals [20] on an AUV named VORAM (Vehicle for Ocean Research and Monitoring). This was followed by Zhang [21], who proposed a discrete-time quasi-sliding mode controller for the multiple-input multiple-output AUV REMUS. In addition, Wu et al. [22] implemented adaptive sliding mode control in a discrete-time system and applied a time-varying sliding surface obtained via a parameter estimation method. The work by Bibuli et al. [23] described a hybrid guidance and control system based on the integration of neural dynamics and quasi-sliding mode on the Shark USV. Verma et al. [24] worked on controlling the speed of a Carangiform robotic fish using a discrete terminal sliding mode controller. Research on discrete-time controllers was started by Milosavljevic [25]. Later, Gao et al. created the quasi-sliding mode band [26]. Soon after, Bartoszewicz [27] proposed a non-switching condition for DSMC. Although Gao's reaching law method was introduced two decades ago, it is still used in many significant studies such as [28–30]. The objective of this research is to implement the discrete-time sliding mode control law proposed by Gao et al. [31] for steering motion control. This is to ensure


the designed control law is in line with technological advancement and minimizes the vehicle heading error so that the steering motion follows the desired heading angle as closely as possible. Discrete Proportional Integral Derivative (PID) control and Discrete Sliding Mode Control (DSMC) are tested on the NPS AUV II via simulation, with the discrete PID controller used for comparative performance analysis. The paper is organized as follows: the dynamic model of the NPS AUV II in the Body-Fixed Reference Frame (BFF) and the DSMC structure design are presented in Sects. 2 and 3, respectively. Results from numerical simulation are illustrated in Sect. 4, and a discussion of the advantages and drawbacks of the control methods is provided in Sect. 5.
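Gao's discrete reaching law, on which the controller design is based, takes the form s(k+1) = (1 − qT)s(k) − εT·sgn(s(k)) and drives the sliding variable into a quasi-sliding-mode band around zero. The following numerical sketch is illustrative only; the parameter values q, ε, and T are assumptions, not taken from the paper:

```python
def gao_reaching_law(s, q=5.0, eps=1.0, T=0.01):
    """One step of Gao's discrete reaching law:
    s(k+1) = (1 - q*T) * s(k) - eps*T * sgn(s(k)),
    with 0 < 1 - q*T < 1 so that |s| decays geometrically."""
    sgn = (s > 0) - (s < 0)
    return (1.0 - q * T) * s - eps * T * sgn

s = 2.0                 # illustrative initial sliding variable
history = [s]
for _ in range(300):
    s = gao_reaching_law(s)
    history.append(s)
# s decays toward the quasi-sliding-mode band, where it chatters
# with an amplitude on the order of eps*T.
```

The residual oscillation inside the band is the chattering phenomenon discussed in the comparative analysis.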

2 Mathematical Modelling of NPS AUV II

2.1 Nonlinear Equation of Motion

The AUV dynamic system is highly nonlinear, coupled and time varying, which is attributed to many parameters such as hydrodynamic drag, damping and lift forces, Coriolis and centripetal forces, gravity, buoyancy forces and thrust [32]. The general nonlinear equation of motion is presented as

M\dot{v} + C(v)v + D(v)v + G(\eta) = \tau \qquad (1)

\dot{\eta} = J(\eta)v \qquad (2)

where M ∈ ℝ …

h(u) = \begin{cases}
c_0 + m_1 (u - d_0), & \text{if } d_0 \le u < d_1, \\
c_1 + m_2 (u - d_1), & \text{if } d_1 \le u < d_2, \\
\;\vdots \\
c_{r-1} + m_r (u - d_{r-1}), & \text{if } d_{r-1} \le u < d_r,
\end{cases} \qquad (1)
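The piecewise affine nonlinearity h(u) of Eq. (1) can be evaluated directly from the breakpoint vectors d and c. This is an illustrative sketch; the breakpoints below are toy values, not the identified ones:

```python
def piecewise_affine(u, d, c):
    """Evaluate h(u) of Eq. (1): on the segment [d[i-1], d[i]) the output is
    c[i-1] + m_i * (u - d[i-1]), with segment slope
    m_i = (c[i] - c[i-1]) / (d[i] - d[i-1])."""
    for i in range(1, len(d)):
        if d[i - 1] <= u < d[i]:
            m = (c[i] - c[i - 1]) / (d[i] - d[i - 1])
            return c[i - 1] + m * (u - d[i - 1])
    raise ValueError("u outside [d_0, d_r)")

# Toy breakpoints: h interpolates linearly between (d_i, c_i) pairs.
d = [0.0, 1.0, 2.0]
c = [0.0, 2.0, 3.0]
```

Because each segment passes through its endpoints, the function is continuous across breakpoints, which is why only the c values need to be identified once d is fixed.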

J. J. Jui et al.

Fig. 5 Block diagram of Hammerstein model based SCA

and the transfer function G(s) is given by

G(s) = \frac{B(s)}{A(s)} = \frac{s^m + b_{m-1} s^{m-1} + \cdots + b_0}{a_m s^m + a_{m-1} s^{m-1} + \cdots + a_0}. \qquad (2)

In (1), the symbols m_i = (c_i - c_{i-1}) / (d_i - d_{i-1}) (i = 1, 2, …, r) are the segment slopes, with connecting input and output points d_i (i = 0, 1, …, r) and c_i (i = 0, 1, …, r), respectively. For simplicity of notation, let d = [d_0, d_1, …, d_r]^T and c = [c_0, c_1, …, c_r]^T. The input of the real liquid slosh plant and the identified model is denoted by u(t), while the outputs of the real liquid slosh plant and the identified model are denoted by y(t) and \tilde{y}(t), respectively. Hence, the identified output can be written as

\tilde{y}(t) = G(s) h(u(t)). \qquad (3)

Moreover, several assumptions are adopted in this work: (i) the orders of the polynomials A(s) and B(s) are assumed to be known; (ii) the nonlinear function h(u(t)) is a one-to-one map of the input u(t), and the values of d_i (i = 1, 2, …, r) are pre-determined according to the response of the input u(t).

Identification of Liquid Slosh Behavior …

Next, let t_s be the sampling time for the real experimental input and output data (u(t), y(t)) (t = 0, t_s, 2t_s, …, N t_s). Then, in order to accurately identify the liquid slosh model, the following objective function is adopted in this study:

E(G, h) = \sum_{g=0}^{N} \left( y(g t_s) - \tilde{y}(g t_s) \right)^2. \qquad (4)

Note that the objective function in (4) is based on the sum of quadratic errors, which has been widely used in the literature [28, 29]. Finally, our problem formulation can be described as follows.

Problem 1. Based on the given real experimental data (u(t), y(t)) in Fig. 1, find the nonlinear function h(u) and the transfer function G(s) such that the objective function in (4) is minimized.

Furthermore, it is shown how to apply the SCA in solving Problem 1. For simplicity, let the design parameter of Problem 1 be defined as x = [b_0 b_1 … b_{m-1} a_0 a_1 … a_m c_0 … c_r]^T, where the elements of the design parameter are the coefficients of both the nonlinear function and the transfer function of the continuous-time Hammerstein model. In the SCA framework, let x_i (i = 1, 2, …, M) be the design parameter of each agent i for M total agents. Then, consider x_{ij} (j = 1, 2, …, D) to be the j-th element of the vector x_i, where D is the size of the design parameter. Hence, by adopting the objective function in (4), the minimization problem is expressed as

\arg \min_{x_i(1), x_i(2), \ldots} E(x_i(k)) \qquad (5)

for iterations k = 1, 2, …, until the maximum iteration k_{max}. Finally, the procedure of the SCA for solving Problem 1 is as follows:

Step 1: Determine the total number of agents M and the maximum iteration k_{max}. Set k = 0 and initialize the design parameters x_i(0) (i = 1, 2, …, M) between the upper bound x_{up} and lower bound x_{low} of the design parameter.
Step 2: Calculate the objective function in (4) for each search agent i.
Step 3: Update the best design parameter P based on the objective functions generated in Step 2.
Step 4: For each agent, update the design parameter using the following equation:

x_{ij}(k+1) = \begin{cases}
x_{ij}(k) + r_1 \sin(r_2) \left| r_3 P_j - x_{ij}(k) \right|, & \text{if } r_4 < 0.5, \\
x_{ij}(k) + r_1 \cos(r_2) \left| r_3 P_j - x_{ij}(k) \right|, & \text{if } r_4 \ge 0.5,
\end{cases} \qquad (6)


where

r_1 = a \left( 1 - \frac{k}{k_{max}} \right) \qquad (7)

for maximum iteration k_{max} and a constant positive value a. Note that r_2, r_3 and r_4 are random values generated independently and uniformly in the ranges [0, 2π], [0, 2] and [0, 1], respectively. The detailed justification for the selection of the coefficients r_1, r_2, r_3 and r_4 is clearly explained in [25]. In (6), the symbol P_j (j = 1, 2, …, n) denotes the j-th element of the best current design parameter P, which is kept during the tuning process.
Step 5: Once the maximum iteration is reached, record the best design parameter P and obtain the continuous-time Hammerstein model in Fig. 1. Otherwise, repeat Step 2.
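Steps 1-5 can be sketched as a minimal SCA loop. This is an illustrative implementation, not the authors' code: a toy quadratic objective stands in for Eq. (4), and the agent count, iteration budget, and bounds are assumptions:

```python
import math
import random

def sca_minimize(f, dim, low, up, n_agents=20, k_max=200, a=2.0, seed=1):
    """Minimal Sine Cosine Algorithm following Steps 1-5: random
    initialization, elitist best P, and the sin/cos update of Eq. (6)
    with the linearly shrinking r1 of Eq. (7)."""
    rng = random.Random(seed)
    agents = [[rng.uniform(low, up) for _ in range(dim)] for _ in range(n_agents)]
    best = min(agents, key=f)[:]                      # Step 3: best parameter P
    for k in range(k_max):
        r1 = a * (1.0 - k / k_max)                    # Eq. (7)
        for x in agents:
            for j in range(dim):
                r2 = rng.uniform(0.0, 2.0 * math.pi)
                r3 = rng.uniform(0.0, 2.0)
                r4 = rng.random()
                step = r1 * abs(r3 * best[j] - x[j])  # Eq. (6) magnitude
                x[j] += step * (math.sin(r2) if r4 < 0.5 else math.cos(r2))
                x[j] = min(max(x[j], low), up)        # keep within bounds
        cand = min(agents, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best

# Toy objective with minimum at (1, -2), standing in for E(G, h) in Eq. (4).
sol = sca_minimize(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, dim=2, low=-5, up=5)
```

In the paper's setting, f would evaluate Eq. (4) by simulating the Hammerstein model for each candidate parameter vector.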

4 Results and Analysis

In this section, the effectiveness of the SCA-based method for identifying the liquid slosh system using a continuous-time Hammerstein model is demonstrated. In particular, the convergence curve of the objective function in (4), the pole-zero map of the linear function, and the plot of the nonlinear function are presented and analyzed. Based on the experimental setup in Sect. 2, the input response u(t) shown in Fig. 3 is applied to the liquid slosh plant, and the output response y(t) is recorded as shown in Fig. 4. Here, the input and output data are sampled at t_s = 0.02 s for N = 450. In this study, the structure of G(s) is selected as

G(s) = \frac{B(s)}{A(s)} = \frac{s^3 + b_2 s^2 + b_1 s + b_0}{a_4 s^4 + a_3 s^3 + a_2 s^2 + a_1 s + a_0} \qquad (8)

after performing several preliminary tests on the given data (u(t), y(t)). The fourth-order system is used by considering a cascade of second-order systems for both the DC motor of the remote car and the slosh dynamics. Meanwhile, the input points for the piecewise affine function h(u(t)) are given by d = [0, 0.2, 0.4, 0.6, 0.8, 1, 2, 3, 4, 5]^T. The selection of the vector d was obtained after several preliminary experiments. The design parameter x ∈ ℝ^18, with its corresponding transfer function and nonlinear function coefficients, is shown in Table 1. Next, the SCA algorithm is applied to tune the design parameter, with initial values randomly selected between the upper bound x_{up} and lower bound x_{low} shown in Table 1. Note that the values of x_{up} and x_{low} were obtained after performing several preliminary experiments. Here, we choose the number of agents M = 40 with maximum iteration k_{max} = 5000.

Table 1 Design parameter of liquid slosh plant

x      Coefficient   x_low     x_up   P
x1     b2            −5        35     −3.7948
x2     b1            −5        35     10.7153
x3     b0            −5        35     −0.9059
x4     a4            −5        35     −0.6154
x5     a3            −2200     −1     −5.3112
x6     a2            −2200     −1     −139.8711
x7     a1            −2200     −1     −1132.2883
x8     a0            −2200     −1     −839.7621
x9     c0            −5        5      −4.8859
x10    c1            −5        5      −0.0219
x11    c2            −5        5      3.3211
x12    c3            −5        5      −4.7295
x13    c4            −5        5      −0.3240
x14    c5            −5        5      −4.4858
x15    c6            −5        5      −0.0002
x16    c7            −5        5      0.0000
x17    c8            −5        5      0.1679
x18    c9            −5        5      −4.3282

Fig. 6 Convergence curve response

Figure 6 shows the convergence of the objective function, reaching E(G, h) = 0.1616 at k_{max} = 5000, an 80.44% improvement in the objective function, and producing the best design parameter P shown in the final column of Table 1. This shows that the SCA-based method is able to minimize the objective function in (4) and produce an identified output \tilde{y}(t) quite close to the real output y(t), as can be clearly seen in Fig. 7. Note that the identified output tends to oscillate strongly when input is injected into the system and starts to attenuate when the input is zero, which is quite similar to the response of the real experimental output.

Fig. 7 Response of the identified output \tilde{y}(t) and real output y(t)

Fig. 8 Pole-zero map of transfer function G(s)

Fig. 9 Resultant of piece-wise affine function h(u)



In the real experimental setup, we can say that the liquid slosh system is stable, since the liquid slosh output reduces gradually as t → ∞. To validate the stability of our model, we use the pole-zero map of the identified transfer function G(s), shown in Fig. 8. All the poles are located in the left half-plane. In particular, the obtained poles are −0.1190 ± j14.8001, −7.5621 and −0.8229, while the obtained zeros are 0.0872 and 1.8538 ± j2.6373. On the other hand, we can also observe the character of the nonlinear function by plotting the obtained piecewise function, as depicted in Fig. 9. Note that our nonlinear function is not restricted to any particular form (e.g., quadratic), which is more general and provides more flexibility in searching for a justifiable function.
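The stability conclusion can be cross-checked without the pole-zero plot: a Routh-Hurwitz test on the identified denominator A(s), using the a4…a0 values from Table 1, confirms that all poles lie in the open left half-plane. A stdlib-only sketch (the function name is ours, not from the paper):

```python
def quartic_is_hurwitz(a4, a3, a2, a1, a0):
    """Routh-Hurwitz test for a4*s^4 + a3*s^3 + a2*s^2 + a1*s + a0:
    after normalising so the leading coefficient is positive, all roots lie
    in the open left half-plane iff all coefficients and both Hurwitz
    determinant conditions are positive."""
    if a4 < 0:  # multiplying A(s) by -1 does not change its roots
        a4, a3, a2, a1, a0 = -a4, -a3, -a2, -a1, -a0
    b3, b2, b1, b0 = a3 / a4, a2 / a4, a1 / a4, a0 / a4
    return (b3 > 0 and b2 > 0 and b1 > 0 and b0 > 0
            and b3 * b2 - b1 > 0
            and (b3 * b2 - b1) * b1 - b3 * b3 * b0 > 0)

# Identified denominator coefficients from Table 1 (a4 ... a0):
stable = quartic_is_hurwitz(-0.6154, -5.3112, -139.8711, -1132.2883, -839.7621)
```

The test passes for the identified coefficients, consistent with the poles reported above all having negative real parts.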

5 Conclusion

In this paper, the identification of a liquid slosh plant using a continuous-time Hammerstein model based on the Sine Cosine Algorithm (SCA) has been presented. The results demonstrate that the proposed generic Hammerstein model based on the SCA has good potential for identifying real liquid slosh behavior. In particular, the proposed method produces an identified output quite close to the real liquid slosh output. Moreover, the resultant linear model is shown to be stable based on the pole-zero map. It is also shown that the use of a piecewise affine function gives the SCA more flexibility to search for a generic nonlinear function. In the future, this work can be extended to other types of nonlinear model such as continuous-time Wiener and Hammerstein-Wiener models.

Acknowledgements The authors gratefully acknowledge the Research and Innovation Department of Universiti Malaysia Pahang under grant RDU1703153 for the financial support.

References

1. Rizzuto E, Tedeschi R (1997) Surveys of actual sloshing loads on board of ships at sea. In: Proceedings of international conference on ship and marine research, pp 7.29–7.37
2. Terashima K, Schmidt G (1994) Sloshing analysis and suppression control of tilting-type automatic pouring machine. In: Proceedings of IEEE international symposium on industrial electronics, pp 275–280
3. Acarman T, Ozguner U (2006) Rollover prevention for heavy trucks using frequency shaped sliding mode control. Veh Syst Dyn 44(10):737–762
4. Li C, Zhu X, Cao G, Sui S, Hu M (2008) Identification of the Hammerstein model of a PEMFC stack based on least squares support vector machines. J Power Sour 175:303–316
5. Kara T, Eker I (2004) Nonlinear modeling and identification of a DC motor for bidirectional operation with real time experiments. Energy Convers Manag 45(7–8):1087–1106
6. Su SW, Wang L, Celler BG, Savkin AV (2007) Oxygen uptake estimation in humans during exercise using a Hammerstein model. Ann Biomed Eng 35(11):1898–1906


7. Westwick DT, Kearney RE (2001) Separable least squares identification of nonlinear Hammerstein models: application to stretch reflex dynamics. Ann Biomed Eng 29(8):707–718
8. Zhang Q, Wang Q, Li G (2016) Nonlinear modeling and predictive functional control of Hammerstein system with application to the turntable servo system. Mech Syst Signal Process 72:383–394
9. Ai Q, Peng Y, Zuo J, Meng W, Liu Q (2019) Hammerstein model for hysteresis characteristics of pneumatic muscle actuators. Int J Intell Robot Appl 3(1):33–44
10. Saleem A, Mesbah M, Al-Ratout S (2017) Nonlinear Hammerstein model identification of amplified piezoelectric actuators (APAs): experimental considerations. In: 2017 4th international conference on control, decision and information technologies (CoDIT), pp 0633–0638
11. Zhang HT, Hu B, Li L, Chen Z, Wu D, Xu B, Huang X, Gu G, Yuan Y (2018) Distributed Hammerstein modeling for cross-coupling effect of multiaxis piezoelectric micropositioning stages. IEEE/ASME Trans Mechatron 23(6):2794–2804
12. Bai EW, Li D (2004) Convergence of the iterative Hammerstein system identification algorithm. IEEE Trans Autom Control 49(11):1929–1940
13. Hou J, Chen F, Li P, Zhu Z (2019) Fixed point iteration-based subspace identification of Hammerstein state-space models. IET Control Theory Appl 13(8):1173–1181
14. Ge Z, Ding F, Xu L, Alsaedi A, Hayat T (2019) Gradient-based iterative identification method for multivariate equation-error autoregressive moving average systems using the decomposition technique. J Frankl Inst 356(3):1658–1676
15. Hou J, Liu T, Wahlberg B, Jansson M (2018) Subspace Hammerstein model identification under periodic disturbance. IFAC-PapersOnLine 51(15):335–340
16. Hou J, Liu T, Wang QG (2019) Subspace identification of Hammerstein-type nonlinear systems subject to unknown periodic disturbance. Int J Control, 1–29 (Just accepted)
17. Jamaludin IW, Wahab NA (2017) Recursive subspace identification algorithm using the propagator based method. Indones J Electr Eng Comput Sci 6(1):172–179
18. Wang D, Zhang W (2015) Improved least squares identification algorithm for multivariable Hammerstein systems. J Frankl Inst 352(11):5292–5307
19. Bai EW (2002) A blind approach to the Hammerstein-Wiener model identification. Automatica 38(6):967–979
20. Ma L, Liu X (2015) A nonlinear recursive instrumental variables identification method of Hammerstein ARMAX system. Nonlinear Dyn 79(2):1601–1613
21. Lin W, Liu PX (2006) Hammerstein model identification based on bacterial foraging. Electron Lett 42(23):1332–1333
22. Gotmare A, Patidar R, George NV (2015) Nonlinear system identification using a cuckoo search optimized adaptive Hammerstein model. Expert Syst Appl 42(5):2538–2546
23. Al-Duwaish HN (2011) Identification of Hammerstein models with known nonlinearity structure using particle swarm optimization. Arab J Sci Eng 36(7):1269–1276
24. Zhang H, Zhang H (2013) Identification of Hammerstein model based on quantum genetic algorithm. Telkomnika 11(12):7206–7212
25. Mirjalili S (2016) SCA: a sine cosine algorithm for solving optimization problems. Knowl-Based Syst 96:120–133
26. Suid MH, Tumari MZ, Ahmad MA (2019) A modified sine cosine algorithm for improving wind plant energy production. Indones J Electr Eng Comput Sci 16(1):101–106
27. Suid MH, Ahmad MA, Ismail MRTR, Ghazali MR, Irawan A, Tumari MZ (2018) An improved sine cosine algorithm for solving optimization problems. In: IEEE conference on systems, process and control (ICSPC), pp 209–213
28. Mjahed M, Ayad H (2019) Quadrotor identification through the cooperative particle swarm optimization-cuckoo search approach. Comput Intell Neurosci 2019:1–10
29. Gupta S, Gupta R, Padhee S (2018) Parametric system identification and robust controller design for liquid–liquid heat exchanger system. IET Control Theory Appl 12(10):1474–1482

Cardiotocogram Data Classification Using Random Forest Based Machine Learning Algorithm M. M. Imran Molla, Julakha Jahan Jui, Bifta Sama Bari, Mamunur Rashid, and Md Jahid Hasan

Abstract Cardiotocography is the most widely used technique in obstetric practice to monitor fetal health. The foremost motive of monitoring is to detect fetal hypoxia at an early stage. The modality also records the fetal heart rate and uterine activity, and the exact analysis of cardiotocograms is critical for further treatment. Accordingly, fetal state evaluation from cardiotocogram (CTG) data using machine learning has attracted significant attention. In this paper, we implement a CTG data classification system using a supervised Random Forest (RF), which classifies CTG data based on its training data. Precision, Recall, and F-score were employed as the metrics to evaluate performance. The RF-based classifier identified normal, suspicious, and pathologic conditions from CTG data with 94.8% accuracy. We also highlight the major features based on Mean Decrease Accuracy and Mean Decrease Gini.

Keywords Fetal heart rate · Random forest classifier · Cardiotocography

M. M. Imran Molla Faculty of Computer Science and Engineering, Khwaja Yunus Ali University, 6751 Enayetpur, Sirajganj, Bangladesh J. J. Jui (✉) · B. S. Bari · M. Rashid Faculty of Electrical and Electronics Engineering, Universiti Malaysia Pahang, 26600 Pekan, Pahang, Malaysia e-mail: [email protected] M. J. Hasan Faculty of Mechanical and Manufacturing Engineering, Universiti Malaysia Pahang, 26600 Pekan, Pahang, Malaysia © Springer Nature Singapore Pte Ltd. 2021 Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_25


1 Introduction Cardiotocography is a technique used to monitor fetal health during pregnancy. A cardiotocogram (CTG) comprises two signals, namely the fetal heart rate (FHR) and uterine activity (UA). The goal of CTG monitoring is to identify fetal hypoxia at an early stage, so that further examinations of the fetal condition can be performed or the baby delivered surgically. A standardized nomenclature has been adopted for reading cardiotocographs [1]. It covers the baseline fetal heart rate (110 to 160 beats/min), uterine activity, baseline FHR variability (5 to 25 beats/min above and below the stable FHR baseline), periods of decreased and increased FHR variability, and the existence of any acceleration or deceleration [2]. It is possible to recognize fetal hypoxia (a lack of oxygen, normally in the range of 1 to 5%) by observing the FHR. If fetal hypoxia is prolonged, the likelihood of the newborn being disabled becomes high and, in some cases, it may lead to death. Consequently, it is essential to detect abnormal FHR patterns and take suitable actions to avoid perinatal morbidity and mortality [3, 4]. Cardiotocography can be used to examine the fetal health condition, normoxia [5] (oxygen tensions between 10–21%), and normal or abnormal fetal acid–base status [6]. Thus, numerous indicators (occurring days or hours before fetal death) that can be identified promptly can lead to appropriate obstetric intervention and assist in delivering a healthy baby. CTG is read manually, which may cause human error; a computerized CTG could provide automatic interpretation and decrease the fetal mortality rate [7, 8]. Various techniques have been used for the classification of CTG data. Czabanski et al. [9] applied a two-step mechanism consisting of weighted fuzzy scoring and the LSVM algorithm to FHR signals to predict the risk of acidemia.
Artificial neural networks were applied to assess fetal wellbeing by Georgieva et al. [10] and Jezewski et al. [11]. Esra et al. [12] used an adaptive boosting ensemble of decision trees to analyze cardiotocograms and detect pathologic fetuses. The neuro-fuzzy method [13] and the naïve Bayes classifier [14] are two approaches used in ensemble classifiers to combine the classification outputs of weak learners. Random forest [15] is a classifier built from multiple trees grown on randomly sampled subspaces of the input features, combining the trees' outputs using bagging. It has been applied to various real-life problems, including protein sequencing [16], classification of Alzheimer's disease [17], cancer detection [18], physical activity classification [19], and classification of cardiotocograms [20]. Fetal state classification from cardiotocography with feature extraction using a hybrid of K-means and support vector machines was reported in [21] with 90.64% accuracy. Fetal state assessment from cardiotocograms with artificial neural networks was presented in [22]. Zhang et al. [23] assessed the fetal state from cardiotocography parameters by applying PCA and AdaBoost, achieving 93% accuracy. In [24], a decision tree was used to analyze cardiotocogram data for fetal distress determination. In this paper, a random forest classifier is applied to classify cardiotocograms into normal, suspicious


as well as pathological classes. A feature importance index is used to identify the important features of the database. Fetal state identification from cardiotocograms applying LS-SVM with PSO (particle swarm optimization) and a binary decision tree was reported in [25]; the proposed method provides 91.62% classification accuracy. It has been observed that good classification accuracy can be obtained using only ten important features out of the twenty-one [25]. A mathematical modeling strategy to simulate early decelerations in CTG was presented by Beatrijs et al. [26]; their results for the uncompromised fetus showed that partial oxygen pressure decreases with the strength and duration of the contraction. Sundar proposed classification of cardiotocogram data using a neural network in [27] with an accuracy of 91%. A feature-group weighting method for subspace clustering of high-dimensional data, reported in [28], achieved an F-measure of 0.77. Zhou and Sun proposed active learning of Gaussian processes with an accuracy of 89% in [29]. Cruz et al. proposed the META-DES ensemble classifier with an accuracy of 84.6% in [30].

2 Research Methodology Figure 1 depicts the complete working procedure of the Random Forest algorithm. Building any model first requires importing the dataset; this research uses the CTG dataset [27] collected from the UCI Machine Learning Repository. Various operations are then performed to check whether any missing values or misleading data are present. The dataset is next split in order to train the model: for the classification model, an 80% training set and a 20% test set are used. A Random Forest classifier is trained on the training set, and a testing phase then validates the predictions on the test data. Finally, various measurements are used to evaluate the performance of the model.
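The evaluation metrics named above can be sketched as follows (an illustrative snippet, not the authors' code): per-class Precision, Recall, and F-score are computed from the confusion counts of true/predicted label pairs.

```python
from collections import Counter

def per_class_metrics(y_true, y_pred, classes=("N", "S", "P")):
    """Compute per-class precision, recall and F-score from label lists."""
    pairs = Counter(zip(y_true, y_pred))  # (true, predicted) -> count
    metrics = {}
    for c in classes:
        tp = pairs[(c, c)]
        fp = sum(n for (t, p), n in pairs.items() if p == c and t != c)
        fn = sum(n for (t, p), n in pairs.items() if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f_score = (2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
        metrics[c] = (precision, recall, f_score)
    return metrics

# Toy example: 10 labelled recordings (invented for illustration)
y_true = ["N", "N", "N", "N", "S", "S", "S", "P", "P", "P"]
y_pred = ["N", "N", "N", "S", "S", "S", "N", "P", "P", "S"]
m = per_class_metrics(y_true, y_pred)
```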

2.1 Dataset Description

A freely accessible CTG data set [31] from the UCI Machine Learning Repository has been utilized in this study. The data set comprises 2126 instances described by 22 attributes; the last two attributes are class codes for the FHR pattern and the fetal condition, respectively, so each instance can be grouped by either the FHR pattern or the fetal condition. The attributes are presented in Table 1. CTG is a technique for recording the fetal heartbeat and the uterine contractions during pregnancy, typically in the last trimester.


Fig. 1 Working principle of the Random Forest algorithm

The data set comprises 2126 cardiotocograms collected from the Maternity and Gynecological Clinic [32]. The CTGs were classified by three expert obstetricians, with the majority vote determining the class of each cardiotocogram. Each record is labeled as one of three classes, Normal (N), Suspect (S), and Pathologic (P), as shown in Table 2.


Table 1 Explanation of features

Symbol     Description
LB         FHR baseline (beats/min)
AC         Number of accelerations/second
FM         Number of fetal movements/second
UC         Number of uterine contractions/second
DL         Number of light decelerations/second
DS         Number of severe decelerations/second
DP         Number of prolonged decelerations/second
ASTV       Percentage of time with abnormal short-term variability
MSTV       Mean value of short-term variability
ALTV       Percentage of time with abnormal long-term variability
MLTV       Mean value of long-term variability
Width      Width of FHR histogram
Min        Minimum of FHR histogram
Max        Maximum of FHR histogram
Nmax       Number of histogram peaks
Nzeros     Number of histogram zeros
Mode       Histogram mode
Mean       Histogram mean
Median     Histogram median
Variance   Histogram variance
Tendency   Histogram tendency

Table 2 Class distribution of CTGs

Fetal state   Class   Numeric class   Number of FHR recordings
Normal        N       1               1655
Suspect       S       2                295
Pathologic    P       3                176
Total                                 2126

2.2 Random Forest Classifier

A random forest classifier builds a set of decision trees from randomly chosen subsets of the training dataset and aggregates the votes of the individual trees to choose the final class of a test object [33]. Each tree is grown as follows:

1. If the number of cases in the training set is N, sample N cases at random, but with replacement, from the original data. This sample is the training set for growing the tree.


2. If there are M input variables, a number m ≪ M is specified such that at each node, m variables are selected at random out of the M and the best split on these m is used to split the node.
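The two growing steps can be illustrated with a minimal pure-Python sketch (hypothetical code, not the authors' implementation): each "tree" here is reduced to a one-level stump grown on a bootstrap sample, choosing its split from m randomly selected variables, with the final class decided by majority vote (bagging).

```python
import random

def bootstrap_sample(X, y):
    """Step 1: sample N cases at random, with replacement."""
    n = len(X)
    idx = [random.randrange(n) for _ in range(n)]
    return [X[i] for i in idx], [y[i] for i in idx]

def grow_stump(X, y, m):
    """A one-level 'tree': best mean-threshold split over m random
    features (step 2: m variables chosen at random out of the M inputs)."""
    M = len(X[0])
    best = None
    for f in random.sample(range(M), m):
        thr = sum(row[f] for row in X) / len(X)
        left = [y[i] for i, row in enumerate(X) if row[f] <= thr]
        right = [y[i] for i, row in enumerate(X) if row[f] > thr]
        if not left or not right:
            continue  # degenerate split, skip
        lbl_l = max(set(left), key=left.count)
        lbl_r = max(set(right), key=right.count)
        acc = (left.count(lbl_l) + right.count(lbl_r)) / len(y)
        if best is None or acc > best[0]:
            best = (acc, f, thr, lbl_l, lbl_r)
    return best

def forest_predict(stumps, row):
    """Bagging: aggregate the votes of all trees."""
    votes = [(l if row[f] <= t else r) for _, f, t, l, r in stumps]
    return max(set(votes), key=votes.count)

random.seed(0)
# Toy data: class "N" when the first feature is 0, "P" when it is 1.
X = [[0, 0], [1, 1]] * 10
y = ["N" if row[0] == 0 else "P" for row in X]
stumps = []
for _ in range(15):
    Xb, yb = bootstrap_sample(X, y)
    s = grow_stump(Xb, yb, m=1)
    if s is not None:
        stumps.append(s)
```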

price_j = { a_j   if P_j ≤ C
          { b_j   if P_j > C                                    (2)

b_j = λ × a_j                                                   (3)

where P_j denotes the total power consumption of all appliances at time slot j, C is the power-consumption threshold, λ is a positive number, a_j denotes the normal price at slot j, and b_j is the high price at slot j. PAR is the second objective addressed by PSPSH, and relates to balancing the overall power consumed. PAR is formulated in Eq. 4
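Equations (2) and (3) can be sketched as a small pricing function (an illustrative sketch; `lam` stands for λ and the other names follow the text): the normal RTP tariff a_j applies while total consumption stays at or below the threshold C, and the inflated price λ·a_j applies once it exceeds C.

```python
def ibr_price(a_j, P_j, C, lam):
    """RTP combined with IBR: return the effective price at time slot j.

    a_j  -- normal (RTP) price at slot j
    P_j  -- total power consumed by all appliances at slot j
    C    -- power-consumption threshold
    lam  -- multiplier for the high price b_j = lam * a_j (Eq. 3)
    """
    b_j = lam * a_j                  # Eq. (3)
    return a_j if P_j <= C else b_j  # Eq. (2)

# Example with the parameter values used later in the paper
# (C = 0.0333 per slot, lam = 1.543); the tariff a_j = 2.0 is invented.
price_low = ibr_price(a_j=2.0, P_j=0.02, C=0.0333, lam=1.543)   # below C
price_high = ibr_price(a_j=2.0, P_j=0.10, C=0.0333, lam=1.543)  # above C
```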

A Min-conflict Algorithm for PSPSH Using Battery

PAR = Pmax / Pavg                                               (4)

where Pmax denotes the maximum power consumed and Pavg is the average overall power consumed.

User comfort level can be improved by reducing the waiting time rate (WTR) of appliances, because users always prefer to finish appliances' operations as soon as possible. WTR is formulated as follows:

WTR_i = (st_i − OTPs_i) / (OTPe_i − OTPs_i − l_i),  ∀i ∈ S      (5)

where WTR_i denotes the WTR of appliance i, st_i is the starting operation time of appliance i, OTPs_i and OTPe_i are the beginning and end of the allowable period in which appliance i may be scheduled, respectively, and l_i is the length of the operation cycle of appliance i. The average WTR over all appliances is calculated as follows:

WTR_avg = [Σ_{i=1..m} (st_i − OTPs_i)] / [Σ_{i=1..m} (OTPe_i − OTPs_i − l_i)]      (6)

The components of WTR_avg are presented and illustrated in Fig. 1. In this study, the percentage of satisfaction (comfort) of users, UC_p, is calculated based on WTR as follows:

UC_p = (1 − WTR_avg) × 100%                                     (7)
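Equations (5)–(7) can be sketched directly (an illustrative snippet using the symbols from the text; the toy appliance tuples are invented for the example): each appliance contributes its waiting time st_i − OTPs_i relative to the slack in its allowable period, and user comfort is the complement of the average rate.

```python
def wtr(st, otps, otpe, l):
    """Eq. (5): waiting-time rate of one appliance."""
    return (st - otps) / (otpe - otps - l)

def wtr_avg(appliances):
    """Eq. (6): average WTR over all appliances.

    appliances -- list of (st, OTPs, OTPe, l) tuples
    """
    num = sum(st - otps for st, otps, _, _ in appliances)
    den = sum(otpe - otps - l for _, otps, otpe, l in appliances)
    return num / den

def user_comfort(appliances):
    """Eq. (7): percentage of user satisfaction."""
    return (1.0 - wtr_avg(appliances)) * 100.0

# Two toy appliances, each with a 100-slot slack in its window:
apps = [(0, 0, 150, 50),    # starts at the window opening: WTR = 0
        (50, 0, 150, 50)]   # starts 50 slots late out of 100: WTR = 0.5
comfort = user_comfort(apps)
```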

3.2 Smart Home Battery (SHB)

The SHB contains a battery management system that allows it to charge and discharge automatically based on predefined constraints. In this section, the SHB is formulated to enhance the quality of solutions and to help achieve the PSPSH objectives optimally. The proposed SHB can efficiently reduce the power consumed at

Fig. 1 Illustration of the components in Eq. 6


S. N. Makhadmeh et al.

peak periods: it is formulated to store power during off-peak periods and discharge the stored power during peak periods. The proposed SHB stores power during low-pricing periods when it is not completely charged, and discharges during high-pricing periods when it is not empty. In addition, the power consumed by the charging operation should not exceed C. The charging and discharging states of the SHB are formulated as follows:

X_SHB = { 1   if pc_j ≤ pc_avg and N_SHB > 0 and P_j < C
        { 0   if pc_j > pc_avg and CH_SHB > 0                   (8)

X_SHB is the state of the SHB, where 1 denotes the charging mode and 0 the discharging mode. The power charged and discharged at each time slot should not exceed a maximum allowable limit. pc_avg is the average tariff over all time slots, CH_SHB is the total power stored in the SHB, and N_SHB is the power needed to fully charge the SHB, formulated as follows:

N_SHB = C_SHB − CH_SHB                                          (9)

where C_SHB is the capacity of the SHB.
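Equations (8) and (9) can be sketched as a small state function (an illustrative sketch; the `None` idle state is an assumption for slots where neither condition of Eq. (8) holds):

```python
def shb_state(pc_j, pc_avg, P_j, C, ch_shb, c_shb):
    """Eqs. (8)-(9): decide the SHB mode at time slot j.

    Returns 1 (charge), 0 (discharge) or None (idle).
    """
    n_shb = c_shb - ch_shb            # Eq. (9): power needed to be full
    if pc_j <= pc_avg and n_shb > 0 and P_j < C:
        return 1                      # low price, not full, below threshold
    if pc_j > pc_avg and ch_shb > 0:
        return 0                      # high price, battery not empty
    return None

# Example with the usable capacity quoted later (13.5 kWh); the tariff
# values are invented for illustration.
state_charge = shb_state(pc_j=2.0, pc_avg=3.0, P_j=0.02, C=0.0333,
                         ch_shb=5.0, c_shb=13.5)
state_discharge = shb_state(pc_j=4.0, pc_avg=3.0, P_j=0.02, C=0.0333,
                            ch_shb=5.0, c_shb=13.5)
```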

4 Min-conflict Heuristic Algorithm (MCA) for PSPSH

MCA is one of the most popular heuristic optimization algorithms proposed to address scheduling problems, owing to its simplicity and speed [14]. MCA has been adapted to address various problems such as scheduling sensor resources [19], job-shop scheduling [21], and n-queens [27]. In PSPSH, an MCA solution contains a vector of the appliances' starting operation times (st). MCA for PSPSH starts by initializing the PSPSH and SHB parameters and then initializing the solution vector, as shown in steps 1 and 2 of Algorithm 1. Note that MCA is a local search algorithm, so its population is a single solution vector of size S × 1. In the third step, the solution is updated by choosing an appliance randomly, calculating its operation cost at each time slot, and updating its st to the time slot with the least cost. Recall that each appliance must be operated while respecting several constraints, such as OTPs, OTPe, and l (see Fig. 1); these constraints must therefore be considered during the update step. In step 4, the allowable periods and the power that the SHB can charge and discharge are determined by calculating the power consumed by each appliance (see step 4 of Algorithm 1). Steps 3 and 4 are repeated until the maximum number of iterations is reached, as shown in step 5 of Algorithm 1.


Algorithm 1. Pseudo code of MCA for PSPSH using SHB

// Step 1: Initialize PSPSH parameters
// Step 2: Initialize the MCA population of size (S × 1)
// Step 3:
while (k < maximum number of iterations) do
    Choose an appliance randomly
    Calculate the appliance's operation cost at each time slot, respecting its OTPs, OTPe, and l
    Update the appliance's starting time to operate at the time slot with least cost
    // Step 4:
    Calculate the power consumed by each appliance
    Determine the allowable periods and power for SHB charging and discharging
    Operate the SHB
    Calculate the fitness value of the solution
    // Step 5:
    k = k + 1
end while
Return fitness value
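The update of Step 3 can be sketched as follows (an illustrative sketch, not the authors' MATLAB code; it assumes 0-indexed slots, a cost equal to the summed prices over the operation cycle, and a feasible window that keeps the whole cycle inside [OTPs, OTPe]):

```python
import random

def slot_cost(prices, st, l):
    """Cost of running an appliance for l slots starting at slot st."""
    return sum(prices[st:st + l])

def mca_step(prices, schedule, appliances):
    """One Step-3 update: move a random appliance to its least-cost start."""
    i = random.randrange(len(appliances))
    otps, otpe, l = appliances[i]
    # Respect OTPs, OTPe and l: a feasible start keeps the whole cycle
    # inside the allowable period.
    feasible = range(otps, otpe - l + 1)
    schedule[i] = min(feasible, key=lambda st: slot_cost(prices, st, l))
    return schedule

# Toy horizon of 10 slots with one cheap valley at slots 5-7.
prices = [5, 5, 5, 5, 5, 1, 1, 1, 5, 5]
appliances = [(0, 10, 2)]        # one appliance: window [0, 10), cycle of 2
schedule = [0]                   # initial start times
random.seed(1)
for _ in range(3):               # Step 5: iterate
    schedule = mca_step(prices, schedule, appliances)
```

Here the cheap valley attracts the appliance, so after the iterations `schedule` is `[5]` (the earliest least-cost start).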

5 Experiments and Results

This section describes the dataset used to evaluate the proposed approach and then presents and discusses the experimental results. The effect of the SHB on the scheduling process is examined, and the adapted MCA is compared with BBO to assess its performance. The simulations are executed in MATLAB on a PC with 8 GB of RAM and a 2.66 GHz Intel Core 2 Quad CPU.

5.1 Dataset: Dynamic Pricing Program

In this study, the time horizon is 24 h divided into 1440 slots, each slot equal to 1 min. RTP is used as the dynamic pricing program, with the pricing curve of the 1st of June 2016 adopted from the Commonwealth Edison Company [17]. The RTP curve used is presented in Fig. 2. As mentioned previously, RTP is combined with IBR to disperse the power consumed and maintain the stability of the power system. IBR has two parameters, C and λ (see Eq. 2), whose values are assigned as 0.0333 for each slot and 1.543, respectively [24, 26].


Fig. 2 RTP curve of the 1st of June 2016

5.2 Dataset: Smart Home Appliances Generally, appliances can be operated several times in a time horizon. Therefore, 36 operations of nine appliances are used in the evaluation results. The primary parameters of these operations are presented in Table 1.

Table 1 Parameters of appliances used in the experiments

No.  Appliance              l     OTPs–OTPe   Power (kW)
1    Dishwasher             105   540–780     0.6
2    Dishwasher             105   840–1080    0.6
3    Dishwasher             105   1200–1440   0.6
4    Air conditioner        30    1–120       1
5    Air conditioner        30    120–240     1
6    Air conditioner        30    240–360     1
7    Air conditioner        30    360–480     1
8    Air conditioner        30    480–600     1
9    Air conditioner        30    600–720     1
10   Air conditioner        30    720–840     1
11   Air conditioner        30    840–960     1
12   Air conditioner        30    960–1080    1
13   Air conditioner        30    1080–1200   1
14   Air conditioner        30    1200–1320   1
15   Air conditioner        30    1320–1440   1
16   Washing machine        55    60–300      0.38
17   Clothes dryer          60    300–480     0.8
18   Refrigerator           1440  1–1440      0.5
19   Dehumidifier           30    1–120       0.05
20   Dehumidifier           30    120–240     0.05
21   Dehumidifier           30    240–360     0.05
22   Dehumidifier           30    360–480     0.05
23   Dehumidifier           30    480–600     0.05
24   Dehumidifier           30    600–720     0.05
25   Dehumidifier           30    720–840     0.05
26   Dehumidifier           30    840–960     0.05
27   Dehumidifier           30    960–1080    0.05
28   Dehumidifier           30    1080–1200   0.05
29   Dehumidifier           30    1200–1320   0.05
30   Dehumidifier           30    1320–1440   0.05
31   Electric water heater  35    300–420     1.5
32   Electric water heater  35    1100–1440   1.5
33   Coffee maker           10    300–450     0.8
34   Coffee maker           10    1020–1140   0.8
35   Robotic pool filter    180   1–540       0.54
36   Robotic pool filter    180   900–1440    0.54


Fig. 3 EB using MCA with and without SHB

For the SHB, the usable capacity C_SHB is 13.5 kWh and the maximum allowable limit to charge and discharge is 5 kW [29].

5.3 The Enhancement of SHB

In this section, the efficiency of the SHB in attaining the PSPSH objectives is examined using MCA. The results with and without SHB are compared to show whether the SHB improves the quality of the schedule. Figure 3 presents the EB obtained by MCA with and without considering the SHB in the scheduling process. EB is reduced from 44.79 cents in the unscheduled mode (i.e., a random schedule) to 41.12 cents using MCA and 28.85 cents using MCA with SHB. These results demonstrate the ability of the SHB to improve schedule quality and reduce EB. In terms of PAR, the value is reduced from 3.32 in the unscheduled mode to 2.53 using MCA and 2.60 using MCA with SHB, as shown in Fig. 4. MCA without SHB thus obtains a better PAR value than MCA with SHB. This occurs because the SHB stores power only during low-pricing periods, which increases the power consumed in those periods and hence the value of Pmax (see Eq. 4). As discussed, the user comfort level can be improved by reducing the WTR value, because users always prefer to finish appliances' operations as soon as possible. The proposed SHB reduced WTR and enhanced the user comfort level significantly: the WTR value dropped from 0.4615 in the unscheduled mode to 0.3581 using MCA and 0.3368 using MCA with SHB, as shown in Fig. 5. The percentage of user comfort is 53.85%, 64.19%, and 66.32% using


Fig. 4 PAR using MCA with and without SHB

Fig. 5 WTR with and without SHB

unscheduled mode, MCA, and MCA with SHB, respectively. The results prove the efficiency of the proposed MCA with SHB in reducing appliance waiting time and improving user comfort level.

Table 2 Comparison between MCA and BBO

                 BBO                        MCA
                 EB      PAR    WTR        EB      PAR    WTR
Without SHB      42.46   2.64   0.3534     41.12   2.53   0.3581
With SHB         28.95   2.60   0.3352     28.85   2.60   0.3368

5.4 Comparison Study Between MCA and BBO

This section compares the adapted MCA with the BBO algorithm to evaluate its performance. The results obtained by MCA and BBO, without and with SHB, are compared in Table 2. The table shows the robust performance of MCA in reducing EB and PAR, where it obtained better results than BBO in terms of reducing EB and PAR, whereas BBO performed better than MCA in improving the user comfort level.

6 Conclusion and Future Work

PSPSH is a primary issue facing power supply companies and their users, because an efficient schedule helps maintain the power system and reduces EB for users. PSPSH is addressed by shifting appliance operation times from one period to another over a time horizon according to a dynamic pricing program. The primary objectives of addressing PSPSH are minimizing EB and PAR and maximizing the users' satisfaction level. In this paper, MCA is adapted to address PSPSH over a time horizon divided into 1440 time slots under an RTP program. The RTP is combined with the IBR program to balance power demand through the time horizon efficiently. An SHB is formulated and used as an additional source to enhance solution quality. In the simulation results, the schedule using the SHB is compared with the schedule without it. The SHB proved its efficiency in enhancing the schedule in terms of EB and WTR: MCA with SHB reduces EB and WTR by up to 29.8% and 6%, respectively, compared with MCA without SHB. However, MCA without SHB obtains a better schedule in terms of reducing PAR. In addition, MCA is compared with BBO to evaluate its results. The comparison showed that MCA obtains a better schedule in terms of reducing EB and PAR, while BBO performs better in improving user comfort. In the future, different datasets can be considered in the scheduling process to evaluate MCA and the SHB more thoroughly, and renewable energy sources can be integrated with the proposed SHB to improve schedule quality.


Acknowledgments This work has been partially funded by Universiti Sains Malaysia under Grant 1001/PKOMP/8014016.

References

1. Abasi AK, Khader AT, Al-Betar MA, Naim S, Makhadmeh SN, Alyasseri ZAA (2019) Link-based multi-verse optimizer for text documents clustering. Appl Soft Comput 87:1–36
2. Abasi AK, Khader AT, Al-Betar MA, Naim S, Makhadmeh SN, Alyasseri ZAA (2019) A text feature selection technique based on binary multi-verse optimizer for text clustering. In: 2019 IEEE Jordan international joint conference on electrical engineering and information technology (JEEIT). IEEE, pp 1–6
3. Abasi AK, Khader AT, Al-Betar MA, Naim S, Makhadmeh SN, Alyasseri ZAA (2020) An improved text feature selection for clustering using binary grey wolf optimizer. In: Proceedings of the 11th national technical seminar on unmanned system technology 2019. Springer, Heidelberg, pp 1–13
4. Abbasi BZ, Javaid S, Bibi S, Khan M, Malik MN, Butt AA, Javaid N (2017) Demand side management in smart grid by using flower pollination algorithm and genetic algorithm. In: International conference on P2P, parallel, grid, cloud and internet computing. Springer, Heidelberg, pp 424–436
5. Al-Betar MA (2017) β-hill climbing: an exploratory local search. Neural Comput Appl 28(1):153–168
6. Al-Betar MA, Alyasseri ZAA, Khader AT, Bolaji AL, Awadallah MA (2016) Gray image enhancement using harmony search. Int J Comput Intell Syst 9(5):932–944
7. Al-Betar MA, Awadallah MA, Bolaji AL, Alijla BO (2017) β-hill climbing algorithm for sudoku game. In: 2017 Palestinian international conference on information and communication technology (PICICT). IEEE, pp 84–88
8. Al-Betar MA, Khader AT (2012) A harmony search algorithm for university course timetabling. Ann Oper Res 194(1):3–31
9. Alomari OA, Khader AT, Al-Betar MA, Abualigah LM (2017) Gene selection for cancer classification by combining minimum redundancy maximum relevancy and bat-inspired algorithm. Int J Data Min Bioinform 19(1):32–51
10. Alomari OA, Khader AT, Al-Betar MA, Alyasseri ZAA (2018) A hybrid filter-wrapper gene selection method for cancer classification. In: 2018 2nd international conference on biosignal analysis, processing and systems (ICBAPS). IEEE, pp 113–118
11. Alyasseri ZAA, Khader AT, Al-Betar MA, Papa JP, Ahmad Alomari O (2018) EEG-based person authentication using multi-objective flower pollination algorithm. In: 2018 IEEE congress on evolutionary computation (CEC). IEEE, pp 1–8
12. Alyasseri ZAA, Khader AT, Al-Betar MA, Papa JP, Alomari OA, Makhadmeh SN (2018) An efficient optimization technique of EEG decomposition for user authentication system. In: 2018 2nd international conference on biosignal analysis, processing and systems (ICBAPS). IEEE, pp 1–6
13. Alyasseri ZAA, Khader AT, Al-Betar MA, Papa JP, Alomari OA, Makhadmeh SN (2018) Classification of EEG mental tasks using multi-objective flower pollination algorithm for person identification. Int J Integr Eng 10(7)
14. Bouhouch A, Loqman C, El Qadi A (2019) CHN and min-conflict heuristic to solve scheduling meeting problems. In: Bioinspired heuristics for optimization. Springer, Heidelberg, pp 171–184
15. Briefing US (2013) International energy outlook 2013. US Energy Information Administration
16. Colak I, Kabalci E, Fulli G, Lazarou S (2015) A survey on the contributions of power electronics to smart grid systems. Renew Sustain Energy Rev 47:562–579
17. ComED Company (2017). https://hourlypricing.comed.com/live-prices/


18. Farooqi M, Awais M, Abdeen ZU, Batool S, Amjad Z, Javaid N (2017) Demand side management using harmony search algorithm and bat algorithm. In: International conference on intelligent networking and collaborative systems. Springer, Heidelberg, pp 191–202
19. Gage A, Murphy RR (2004) Sensor scheduling in mobile robots using incomplete information via min-conflict with happiness. IEEE Trans Syst Man Cybern Part B (Cybern) 34(1):454–467
20. Iftikhar H, Asif S, Maroof R, Ambreen K, Khan HN, Javaid N (2014) Biogeography based optimization for home energy management in smart grid. In: International conference on network-based information systems. Springer, Heidelberg, pp 177–190
21. Johnston M, Minton S et al (1994) Analyzing a heuristic strategy for constraint satisfaction and scheduling. Intell Sched 257–289
22. Khan AR, Mahmood A, Safdar A, Khan ZA, Khan NA (2016) Load forecasting, dynamic pricing and DSM in smart grid: a review. Renew Sustain Energy Rev 54:1311–1322
23. Makhadmeh SN, Khader AT, Al-Betar MA, Naim S (2018) Multi-objective power scheduling problem in smart homes using grey wolf optimiser. J Ambient Intell Hum Comput 1–25
24. Makhadmeh SN, Khader AT, Al-Betar MA, Naim S (2018) An optimal power scheduling for smart home appliances with smart battery using grey wolf optimizer, pp 1–6
25. Makhadmeh SN, Khader AT, Al-Betar MA, Naim S, Abasi AK, Alyasseri ZAA (2019) Optimization methods for power scheduling problems in smart home: survey. Renew Sustain Energy Rev 115:109362
26. Makhadmeh SN, Khader AT, Al-Betar MA, Naim S, Alyasseri ZAA, Abasi AK (2019) Particle swarm optimization algorithm for power scheduling problem using smart battery. In: 2019 IEEE Jordan international joint conference on electrical engineering and information technology (JEEIT). IEEE, pp 672–677
27. Minton S, Johnston MD, Philips AB, Laird P (1992) Minimizing conflicts: a heuristic repair method for constraint satisfaction and scheduling problems. Artif Intell 58(1–3):161–205
28. Nexans (2010) Deploying a smarter grid through cable solutions and services. http://www.nexans.com/Corporate/2010/WHITEPAPERSMARTGRIDS2010.pdf
29. Powerwall T (2018). https://www.tesla.com/powerwall
30. Rahim S, Javaid N, Ahmad A, Khan SA, Khan ZA, Alrajeh N, Qasim U (2016) Exploiting heuristic algorithms to efficiently utilize energy management controllers with renewable energy sources. Energy Build 129:452–470
31. Zhao Z, Lee WC, Shin Y, Song KB (2013) An optimal power scheduling method for demand response in home energy management system. IEEE Trans Smart Grid 4(3):1391–1400

An Improved Text Feature Selection for Clustering Using Binary Grey Wolf Optimizer Ammar Kamal Abasi, Ahamad Tajudin Khader, Mohammed Azmi Al-Betar, Syibrah Naim, Sharif Naser Makhadmeh, and Zaid Abdi Alkareem Alyasseri

Abstract Text feature selection (FS) is a significant step in text clustering (TC). Machine learning applications eliminate unnecessary features in order to enhance learning effectiveness. This work proposes a binary grey wolf optimizer (BGWO) algorithm to tackle the text FS problem. The method introduces a new implementation of the GWO algorithm that selects informative features from the text. These informative features are evaluated using a clustering technique (k-means), so that time complexity is reduced and the clustering algorithm's efficiency is improved. The performance of BGWO is examined on six published datasets: Tr41, Tr12, Wap, Classic4, 20Newsgroups, and CSTR. The results show that BGWO outperformed the compared algorithms, such as GA and BPSO, on the evaluation measures, achieving an average purity of 46.29% and an F-measure of 42.23%.

Keywords Binary grey wolf optimizer · Text mining · K-means · Text feature selection problem · Text clustering

1 Introduction

The number of digital documents is increasing rapidly due to the proliferation of the internet and can no longer be examined by humans alone [3]. Text mining tools can assist in addressing this issue: automatic systems, which are not affected by the text explosion, can replace the human reader. Text mining examines massive document collections to detect previously unknown information. Text A. K. Abasi (B) · A. T. Khader · S. Naim · S. N. Makhadmeh · Z. A. A. Alyasseri School of Computer Sciences, Universiti Sains Malaysia, Gelugor, Penang, Malaysia e-mail: [email protected] M. A. Al-Betar Department of Information Technology, Al-Huson University College, Al-Balqa Applied University, Irbid, Jordan Z. A. A. Alyasseri ECE Department-Faculty of Engineering, University of Kufa, Najaf, Iraq © Springer Nature Singapore Pte Ltd. 2021 Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_34

503

504

A. K. Abasi et al.

document clustering (TDC) is, among other techniques, an effective method, which is used in the fields of text mining, topic extraction, machine learning, text summarization, and pattern recognition [16]. An efficient TDC technique allows automatic classification of a corpus of documents into semantic cluster hierarchies. It is the method through which documents are structured into significant classification. This means that the records of similar clusters are closer together than the records of different clusters [11]. The application of TDC algorithms requires the conversion of raw text files (i.e., terms) into numerical formats with document characteristics. The most fundamental stage to obtain trends and ideas from them is document representation [17]. In TDC, Vector Space Model (VSM) is commonly utilized so that the documents are presented, and the terms represent the features/dimensions in the VSM [29]. Huge informative, in addition to uninformative, in other words, irrelevant and redundant, as well as noisy dimensional features are the result of the conversion process [12]. The main informative documents’ features are determined by FS. However, the high dimensionality space represents the key difficulty. Problems are related to the removal of non-informational features in order to reduce the dimension space and improve the clustering performance [18]. It is a fact that hundreds of thousands of textual features are part of the compilation of the text. The document dimensionality determines the efficiency of TDC. Figure 1 shows the overall steps of TDC. The FS techniques fall into three categories, including the filter method, the wrapper method, and the hybrid method based on the studies’ approach to obtaining an information sub-ensemble of features. The filter method examines the feature set based on statistical methods so that a discriminatory function subset is chosen

Fig. 1 Text clustering steps.

An Improved Text Feature Selection for Clustering Using Binary Grey Wolf Optimizer


regardless of the machine learning algorithm. Examples of filter methods include the mean-median [15], the mean absolute difference [15], and the odds ratio [23]. These methods are widely used in FS owing to their low computational complexity, particularly when the dimension of the text feature space is vast. Wrapper methods use a search approach to evaluate subsets of features so that effective informative features are obtained; examples include the plus-l-take-away-r procedure [25] and sequential forward selection/backward elimination [26]. Although these techniques often find better feature subsets, they are computationally more expensive than the filter methods. The third class of FS is the hybrid technique, in which various FS techniques are combined to select informative feature subsets, exploiting the advantages of one strategy while reducing the disadvantages of another.

FS is formulated as an NP-hard (nondeterministic polynomial time) optimization problem [12]. In combinatorial optimization problems, the only way to guarantee the optimal solution is exhaustive search [5]. However, exhaustively searching the full search space is impractical because it involves an overwhelmingly high computational complexity [9, 21, 22]. Recently, many studies have investigated metaheuristic algorithms to address combinatorial optimization issues [2, 7, 20]. These algorithms are widely used to explore a problem's unknown search space and obtain the best global solution, and they are therefore becoming increasingly popular. Numerous metaheuristic algorithms, such as particle swarm optimization [19], the binary multi-verse optimizer [1], the ant lion optimizer [19], and harmony search (HS) [6], among others [8, 28], have been used to address the FS problem.
The Grey Wolf Optimizer (GWO) is a recent metaheuristic swarm optimization technique, proposed by Mirjalili [24], which emulates the hunting and social behaviour of grey wolf packs. The algorithm has several advantages over other swarm-based intelligence techniques: it has few parameters and requires no derivative information. In addition, the exchange of decision variables and the cooperation among swarm members are significant strengths. Consequently, GWO has been effectively adapted to several types of optimization problems, such as engineering, robotics, scheduling [22], economic dispatch problems, planning, and feature selection for classification [13], among many more, as described in [14].

The FS problem is fundamentally a binary problem, whereas the original GWO variant was proposed for continuous optimization. Based on the above, a binary Grey Wolf Optimizer (BGWO) is proposed in the present paper as a novel FS application using all the GWO operators. The rest of the paper is organized as follows: the theoretical motivation for this work is provided in Sect. 2, the binary grey wolf algorithm is presented in Sect. 3, and BGWO for text FS is described in Sect. 4. Section 5 explains the empirical results obtained to emphasize the efficiency of the new FS method. Finally, Sect. 6 provides the conclusion and future work.


2 Preliminaries

The preliminary concepts are briefly presented in this section.

2.1 Text Clustering Problem

TDC aims to find the best distribution of a vast set of documents into a subset of clusters according to the clusters' fundamental features. The pre-processing stages of TDC are introduced below, and the k-means technique, which produces document clusters from the obtained features, is briefly described in the next subsection.

Pre-processing Steps. The standard pre-processing stages, which include tokenization, stop-word removal, stemming, and feature weighting, are performed before clusters are created in order to convert each document into a numerical format [18]. The pre-processing substeps are briefly outlined as follows:

– Tokenization: Each word (term) in a document is extracted as a separate unit called a token, neglecting special characters, symbols, and white space in the text.
– Stop-word removal: This uses a list of common terms, including ('in', 'on', 'at', 'that', 'the', 'of', 'an', 'a', 'she', 'he', etc.). Short words, high-frequency terms, and functional terms are also treated as stop words in TDC. Removing these terms is vital because they often cover a substantial part of the document; otherwise, the number of features is unnecessarily inflated and the efficiency of the clustering method is degraded. The stop-word list used consists of 571 words.
– Stemming: Several word forms are reduced to the same root by removing prefixes and suffixes from the term. For instance, 'multi-coloured' and 'multi-media' share the same root, i.e., /-multi-/.
– Term weighting: The TF-IDF weighting scheme (term frequency-inverse document frequency) is frequently used to transform textual data into numerical form.
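The pre-processing pipeline above can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the stop-word list here is a tiny sample (the paper uses a 571-word list), and the crude suffix-stripping "stemmer" stands in for a real stemmer such as Porter's.

```python
import math
import re
from collections import Counter

STOP_WORDS = {"in", "on", "at", "that", "the", "of", "an", "a", "she", "he"}  # tiny sample

def preprocess(text):
    """Tokenize, remove stop words, and apply a crude suffix-stripping stemmer."""
    tokens = re.findall(r"[a-z]+", text.lower())          # tokenization: drop symbols/digits
    tokens = [t for t in tokens if t not in STOP_WORDS]   # stop-word removal
    stemmed = []
    for t in tokens:                                      # naive stemming: strip common suffixes
        for suffix in ("ing", "ed", "s"):
            if t.endswith(suffix) and len(t) - len(suffix) >= 3:
                t = t[: -len(suffix)]
                break
        stemmed.append(t)
    return stemmed

def tfidf_vectors(docs):
    """Build TF-IDF weighted VSM vectors (dict term -> weight), one per document."""
    tokenized = [preprocess(d) for d in docs]
    n = len(docs)
    df = Counter()                                        # document frequency of each term
    for toks in tokenized:
        df.update(set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({term: (cnt / len(toks)) * math.log(n / df[term])
                        for term, cnt in tf.items()})
    return vectors

docs = ["The system is clustering text documents.",
        "He clustered the documents on text mining."]
vecs = tfidf_vectors(docs)
```

Note that a term appearing in every document (here, 'text') receives a zero IDF weight, while terms unique to one document get a positive weight.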

2.2 K-Means Text Clustering Algorithm

K-means is one of the most widely used clustering techniques for solving the TDC problem [16]. Algorithm 1 lists the steps of the k-means algorithm. It splits the set of text documents Docs = (doc_1, doc_2, doc_3, ..., doc_n) into a subset of K clusters via three main steps: (a) choosing random documents as cluster centroids (the number of clusters is predefined); (b) assigning each document to the nearest cluster; and (c) recalculating the cluster centroids.


Algorithm 1. K-means clustering algorithm
Data: the number of clusters K and a set of documents Docs (after the pre-processing step)
Result: K clusters containing homogeneous documents
Create the K cluster centroids by choosing one document randomly for each cluster
while the number of iterations is not met do
  for each document doc_i in Docs do
    Compute the distance (i.e., the similarity) between the K cluster centroids and document doc_i
  end for
  for each document doc_i in Docs do
    Assign document doc_i to the nearest cluster k
  end for
  Recalculate the centroid of each cluster k
end while
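Algorithm 1 can be sketched as a short Python function. This is a minimal illustration under simplifying assumptions: documents are dense numeric vectors, Euclidean distance stands in for the similarity measure, and the centroids are seeded with the first k documents for reproducibility rather than randomly as in the paper.

```python
import math

def euclidean(a, b):
    """Distance used to find the nearest centroid."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(docs, k, iterations=20):
    """Basic k-means over dense document vectors (lists of floats)."""
    centroids = [list(d) for d in docs[:k]]   # deterministic seeding for this sketch
    assignment = [0] * len(docs)
    for _ in range(iterations):
        # step (b): assign each document to the nearest centroid
        for i, doc in enumerate(docs):
            assignment[i] = min(range(k), key=lambda c: euclidean(doc, centroids[c]))
        # step (c): recompute each centroid as the mean of its members
        for c in range(k):
            members = [docs[i] for i in range(len(docs)) if assignment[i] == c]
            if members:
                centroids[c] = [sum(vals) / len(members) for vals in zip(*members)]
    return assignment

docs = [[0.0, 0.1], [5.0, 5.1], [0.1, 0.0], [5.1, 5.0]]  # two obvious groups
labels = kmeans(docs, k=2)
```

With this toy data the first and third vectors end up in one cluster and the second and fourth in the other.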

2.3 Problem Formulation of Unsupervised Feature Selection

In this paper, the text FS technique uses the BGWO to cluster text with a novel model that identifies the most informative text features while removing uninformative ones. The proposed model for the FS problem is defined as follows. Let F = {f_1, f_2, ..., f_t} be the set of features, where t is the number of unique features in the VSM. Let New_sub_features = {Nf_1, Nf_2, ..., Nf_j, ..., Nf_tn} denote the subset of new features, i.e., the new dimension of informative features obtained by the FS algorithm, where tn is the number of the new features.

3 Binary Grey Wolf Optimizer

The GWO mechanism is modelled on the lifestyle of grey wolves. Their hunting mechanisms were formulated as an optimization algorithm by Mirjalili in 2014 [24] using the four levels of the grey wolf social hierarchy: alpha (α), beta (β), delta (δ), and omega (ω). The alpha, the leader of the pack, is at the top of the social hierarchy. Beta wolves assist the alpha in decision making and occupy the second level. Delta refers to the level positioned between the beta and omega wolves, and omega wolves form the last level of the hierarchy. To hunt prey, the wolves first encircle it [22]. The intelligence of group hunting is modelled alongside this social hierarchy, and it involves three main phases: chasing, encircling, and attacking. In optimization terms, the top three solutions in the hunting group are classified by fitness value: alpha (α) is the best solution, beta (β) is the second best, and delta (δ) is the third best; the remaining solutions are omegas (ω).


All the solutions are guided by these three solutions (α, β, and δ) to explore the search space and find the optimal solution. The encircling behaviour is modelled mathematically by the following equations:

X(t + 1) = X_p(t) − A · D    (1)

D = |C · X_p(t) − X(t)|    (2)

where t signifies the number of iterations, X_p signifies the position of the prey, X signifies the grey wolf position, and A and C are coefficient vectors obtained as follows:

C = 2 · r_2    (3)

A = 2 · a · r_1 − a    (4)

The coefficient vectors A and C are calculated using Eqs. (3) and (4). The components of a are linearly decreased from 2.0 to 0.0 over the course of the iterations, and r_1, r_2 are random vectors in [0, 1].

Hunting is typically driven by the alpha; occasionally, the beta and delta may also take part. To simulate the hunting behaviour of the grey wolves mathematically, the alpha, beta, and delta (i.e., the best solutions) are assumed to have a stronger knowledge of the prey's location. The other search agents follow the three best solutions obtained so far in the hunting process and update their positions accordingly:

D_α = |C_1 · X_α − X|    (5)

D_β = |C_2 · X_β − X|    (6)

D_δ = |C_3 · X_δ − X|    (7)

X_1 = X_α − A_1 · D_α    (8)

X_2 = X_β − A_2 · D_β    (9)

X_3 = X_δ − A_3 · D_δ    (10)

X(t + 1) = (X_1 + X_2 + X_3) / 3    (11)

This paper proposes a modification of GWO into a binary GWO (BGWO) to handle binary decision variables in the search space (the nature of the FS problem). The solution-generation step and the new-position equation (Eq. 11) are adjusted so that feasible solutions are produced during the execution of BGWO, as follows:

Sig(X(t + 1)) = 1 / (1 + e^(−X(t + 1))),    (12)

where Sig(X(t + 1)) is the probability that a decision variable of solution X takes the value '0' or '1'. Equation (13) updates the decision variables of solution X:

X(t + 1) = 1 if r < Sig(X(t + 1)), and 0 otherwise,    (13)

where the sigmoid function of Eq. (12) maps the value of X(t + 1) from Eq. (11) into the range [0, 1], and r is a random number in (0, 1). Figure 2 illustrates the sigmoid function of X(t + 1).

Fig. 2 Sigmoid function.
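Equations (12) and (13) amount to a sigmoid transfer followed by a stochastic threshold. A minimal Python sketch, where the continuous position vector x_new is made up purely for illustration:

```python
import math
import random

def sigmoid(x):
    """Transfer function of Eq. (12): maps a continuous position into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, rng=random):
    """Eq. (13): each dimension becomes 1 with probability sigmoid(x), else 0."""
    return [1 if rng.random() < sigmoid(x) else 0 for x in position]

# A continuous position as produced by Eq. (11) for a 5-feature problem (made-up values):
x_new = [2.3, -1.7, 0.4, -3.0, 1.1]
mask = binarize(x_new, random.Random(42))
```

Strongly positive components are very likely to be selected (bit 1), strongly negative ones very likely dropped (bit 0).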


Fig. 3 Solution representation.

4 BGWO for the Text FS Problem

4.1 Solution Representation

Figure 3 illustrates the BGWO solution representation proposed for the text FS problem. Each solution is a subset of the text features, encoded as a binary vector: the binary value at each position indicates whether the corresponding feature is selected [3, 18]. BGWO starts by creating a set of random solutions and then iteratively improves them until the best solution (i.e., the most informative feature subset) is found.

4.2 Fitness Function

The mean absolute difference (MAD) [18] is used by the BGWO algorithm as a fitness function to evaluate each solution in the population for the text FS problem. MAD assigns a weight (i.e., a significance rating) to each feature in the subset New_sub_features, and the scores are then summed. The feature weights are computed from the deviation of each feature value as follows:

MAD(U_i) = (1/n_i) Σ_{j=1}^{t} |U_{i,j} − Ū_i|,    (14)

where

Ū_i = (1/n_i) Σ_{j=1}^{t} U_{i,j},    (15)

n_i is the number of selected features in text document i, U_{i,j} is the value of feature j in document i, Ū_i is the mean value of the selected features in document i, and t is the total number of features. The methodology proposed in this paper is described briefly in Algorithm 2.

Algorithm 2. Pseudo-code of the proposed BGWO algorithm for the FS problem
Initialize the GWO and FS problem parameters (a, A, C, the number of solutions N, the number of iterations, and the number of features F)
Create a population matrix of size N × F
Calculate the fitness function for all solutions
Assign the best solution to X_α, the second-best to X_β, and the third-best to X_δ
for each iteration t do
  for each solution i do
    Update solution i using Eq. (13)
  end for
  Update a, A, C
  Calculate the fitness function for all solutions
  Update X_α, X_β, X_δ
end for
Return the best solution X_α
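Algorithm 2 can be condensed into a short script. This is a hedged sketch, not the authors' code: the MAD fitness follows Eqs. (14)-(15) over the selected features and is maximized, the data matrix U is random toy data, and all parameter values are illustrative.

```python
import math
import random

rng = random.Random(1)

def mad_fitness(solution, U):
    """Eqs. (14)-(15): mean absolute difference of the selected features,
    summed over the documents (higher = more informative)."""
    total = 0.0
    for doc in U:
        sel = [v for v, bit in zip(doc, solution) if bit == 1]
        if sel:
            mean = sum(sel) / len(sel)
            total += sum(abs(v - mean) for v in sel) / len(sel)
    return total

def bgwo(U, n_wolves=8, iterations=20):
    """Sketch of Algorithm 2: binary wolves guided by the three best solutions."""
    n_feat = len(U[0])
    wolves = [[rng.randint(0, 1) for _ in range(n_feat)] for _ in range(n_wolves)]
    for t in range(iterations):
        wolves.sort(key=lambda w: mad_fitness(w, U), reverse=True)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 * (1 - t / iterations)                 # linearly decreases from 2 to 0
        for w in wolves[3:]:
            for j in range(n_feat):
                parts = []
                for leader in (alpha, beta, delta):
                    A = 2 * a * rng.random() - a       # Eq. (4)
                    C = 2 * rng.random()               # Eq. (3)
                    D = abs(C * leader[j] - w[j])      # Eqs. (5)-(7)
                    parts.append(leader[j] - A * D)    # Eqs. (8)-(10)
                x_new = sum(parts) / 3.0               # Eq. (11)
                sig = 1.0 / (1.0 + math.exp(-x_new))   # Eq. (12)
                w[j] = 1 if rng.random() < sig else 0  # Eq. (13)
    return max(wolves, key=lambda w: mad_fitness(w, U))

U = [[rng.random() for _ in range(8)] for _ in range(12)]  # toy 12x8 TF-IDF-like matrix
best = bgwo(U)
```

The returned binary vector is the feature mask that k-means would then cluster with, as described in Sect. 2.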

5 Experimental Setup

The proposed BGWO is tested on six standard datasets for the text FS problem, and its results are compared with GA [27] and BPSO [18]. The parameter settings of each comparative algorithm are described in Table 1. Note that the values of the control parameters are set according to the recommendations given by the founder of GWO in [24].

Table 1 The parameter setting for each algorithm of comparison

Algorithm        Parameter                     Value
GA               Crossover rate                0.70
GA               Mutation rate                 0.04
Binary PSO       C1                            2
Binary PSO       C2                            2
Binary PSO       Max weight                    0.9
Binary PSO       Min weight                    0.2
BGWO, BPSO, GA   Population size               60
BGWO, BPSO, GA   Maximum number of iterations  1000
BGWO, BPSO, GA   Runs                          30


Table 2 Text datasets details

Datasets      ID   No. documents (d)  No. clusters (K)  No. features or terms (t)
tr41          DS1  878                10                6743
tr12          DS2  313                8                 5329
Wap           DS3  1560               20                7512
Classic4      DS4  2000               4                 6500
20Newsgroups  DS5  300                3                 2275
CSTR          DS6  299                4                 1725

5.1 Standard Datasets and Evaluation Metric

The BGWO algorithm is tested on six benchmark datasets and compared with state-of-the-art algorithms: Tr41, Tr12, and Wap¹, and Classic4, 20Newsgroups, and CSTR². These datasets exhibit several challenging characteristics, such as sparsity and skewness; their details are given in Table 2. Purity and F-measure are used as the standard criteria for evaluating the TDC algorithms [16]; these measures are commonly used to validate and compare the clustering of various cluster datasets [4], and they are computed after the clustering outcomes are obtained. The following paragraphs describe them in detail.

Purity. The purity measure calculates the maximum number of correctly assigned documents in every single cluster; a purity score close to 1 is best, since the size of the majority class in each cluster is compared to the cluster size. Through this measure, each cluster is assigned its most frequent class [1]. Purity over all clusters is calculated as in Eq. (16):

purity = (1/n) Σ_{j=1}^{k} max(i, j),    (16)

where n is the total number of documents in the dataset, max(i, j) is the size of the largest class i in cluster j, and k is the number of clusters.

F-measure. The F-measure is the harmonic combination of the precision (P) and recall (R) measures. An F-measure value close to 1 indicates a robust clustering algorithm; conversely, a value close to 0 indicates a weak one [10]. The F-measure is calculated as follows:

1 glaros.dtc.umn.edu/gkhome/fetch/sw/cluto/datasets.tar.gz
2 sites.labic.icmc.usp.br/text_collections/

P(i, j) = n_{i,j} / n_j,    (17)

where n_{i,j} is the number of documents of class i correctly assigned to cluster j, and n_j is the total number of documents in cluster j.

R(i, j) = n_{i,j} / n_i,    (18)

where n_i is the total number of documents in class i.

F(i, j) = 2 · P(i, j) · R(i, j) / (P(i, j) + R(i, j)),    (19)

where P(i, j) and R(i, j) are the precision and recall of cluster j with respect to class i. For all clusters, the overall F-measure is calculated as in Eq. (20):

F = Σ_{j=1}^{k} (n_j / n) · max_i F(i, j)    (20)
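Purity (Eq. 16) and the F-measure (Eqs. 17-20) are straightforward to compute from the true class labels and the predicted cluster labels. A minimal sketch:

```python
from collections import Counter

def _group_by_cluster(labels_true, labels_pred):
    """Map each predicted cluster to the list of true class labels of its members."""
    clusters = {}
    for c, y in zip(labels_pred, labels_true):
        clusters.setdefault(c, []).append(y)
    return clusters

def purity(labels_true, labels_pred):
    """Eq. (16): fraction of documents in the majority class of their cluster."""
    clusters = _group_by_cluster(labels_true, labels_pred)
    n = len(labels_true)
    return sum(max(Counter(members).values()) for members in clusters.values()) / n

def f_measure(labels_true, labels_pred):
    """Eq. (20): cluster-size-weighted best F score of each cluster over all classes."""
    clusters = _group_by_cluster(labels_true, labels_pred)
    n = len(labels_true)
    total = 0.0
    for members in clusters.values():
        counts = Counter(members)
        best = 0.0
        for cls, n_ij in counts.items():
            p = n_ij / len(members)                 # Eq. (17) precision
            r = n_ij / labels_true.count(cls)       # Eq. (18) recall
            best = max(best, 2 * p * r / (p + r))   # Eq. (19)
        total += (len(members) / n) * best
    return total

y_true = [0, 0, 1, 1]
y_pred = ["a", "a", "b", "b"]   # a perfect clustering up to cluster renaming
```

A perfect clustering yields purity 1.0 and F-measure 1.0 regardless of how the clusters are named.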

5.2 Results and Discussion

The findings obtained by BGWO were compared with those of BPSO and GA. For a fair comparison, every algorithm was repeated 30 times with identical parameter settings, as shown in Table 1. Table 3 reports the average Purity and F-measure over the 30 runs obtained on the six standard text benchmarks by the FS algorithms GA, BPSO, and BGWO. BGWO exhibited higher purity and F-measure than GA and BPSO on almost all datasets, indicating that BGWO is both effective and efficient at finding the globally optimal solution. The exception is the DS5 dataset, where BPSO obtained the best purity and F-measure. Overall, the results show that BGWO exceeded the other algorithms in terms of purity and F-measure.

Figure 4 shows the percentages of selected features for the compared methods on the different datasets. The findings suggest that the proposed algorithm discovers a feature subset better suited to text clustering than the other algorithms. Feature selection aims to improve the clustering quality while removing unusable features; however, the efficiency may decrease when the feature subset is too small. For example, BPSO obtained the smallest subset of features for the DS3 text dataset, yet its purity and F-measure were smaller (please


Table 3 Comparison of BPSO, GA, and BGWO results for different datasets based on the k-means clustering algorithm in terms of Purity and F-measure

Dataset  Measure    K-means without FS  BPSO    GA      BGWO
DS1      Purity     0.4108              0.4358  0.4139  0.4400
         F-measure  0.3876              0.4004  0.3904  0.4286
         Rank       4                   2       3       1
DS2      Purity     0.3908              0.4083  0.4012  0.4354
         F-measure  0.3222              0.3471  0.3250  0.3299
         Rank       4                   2       3       1
DS3      Purity     0.4759              0.4981  0.4887  0.5010
         F-measure  0.4315              0.4507  0.4436  0.4627
         Rank       4                   2       3       1
DS4      Purity     0.5938              0.5970  0.6035  0.6074
         F-measure  0.5472              0.5579  0.5504  0.5801
         Rank       4                   3       2       1
DS5      Purity     0.3741              0.4014  0.3810  0.3953
         F-measure  0.3406              0.3499  0.3481  0.3418
         Rank       4                   1       3       2
DS6      Purity     0.3525              0.3702  0.3558  0.3986
         F-measure  0.3460              0.3662  0.3512  0.3962
         Rank       4                   2       3       1
Average ranks       4                   2.00    2.83    1.16
Final rank          4                   2       3       1

Fig. 4 Features selected percentage between GA, BPSO, BGWO


refer to Table 3). In contrast, BGWO selected a larger subset, which yielded higher purity and F-measure than BPSO on the same dataset. Figure 4 also shows that neither the smallest nor the largest feature subset guarantees the worst or the best clustering performance.

6 Conclusion

This paper proposed a binary grey wolf optimizer (BGWO) to solve the FS problem in TDC, addressing its binary nature. BGWO produces, from the original features, a subset containing the most necessary text features. The k-means clustering technique then takes the selected features as input in the clustering step, so that the new subset is evaluated. The proposed algorithm was tested on six benchmark document datasets using the purity and F-measure criteria. The experimental findings show that the BGWO algorithm achieved better results than the existing FS techniques; the proposed FS algorithm thus enhanced the outcome of TDC by obtaining more homogeneous groups. Hybridizing this algorithm with other metaheuristic algorithms may further improve performance by increasing its search capabilities. Another future enhancement is to apply various fitness functions, which is expected to improve the results further.

Acknowledgements This work was supported by Universiti Sains Malaysia (USM) under Grant (1001/PKOMP/8014016).

References

1. Abasi AK, Khader AT, Al-Betar MA, Naim S, Makhadmeh SN, Alyasseri ZAA (2019) A text feature selection technique based on binary multi-verse optimizer for text clustering. In: 2019 IEEE Jordan international joint conference on electrical engineering and information technology (JEEIT). IEEE, pp 1–6
2. Abasi AK, Khader AT, Al-Betar MA, Naim S, Makhadmeh SN, Alyasseri ZAA (2020) Link-based multi-verse optimizer for text documents clustering. Appl Soft Comput 87:106002
3. Abualigah LM, Khader AT (2017) Unsupervised text feature selection technique based on hybrid particle swarm optimization algorithm with genetic operators for the text clustering. J Supercomput 73(11):4773–4795
4. Abualigah LM, Khader AT, Al-Betar MA (2016) Multi-objectives-based text clustering technique using k-mean algorithm. In: 2016 7th international conference on computer science and information technology (CSIT). IEEE, pp 1–6
5. Al-Betar MA, Awadallah MA (2018) Island bat algorithm for optimization. Expert Syst Appl 107:126–145
6. Al-Betar MA, Awadallah MA, Khader AT, Bolaji AL, Almomani A (2018) Economic load dispatch problems with valve-point loading using natural updated harmony search. Neural Comput Appl 29(10):767–781
7. Alomari OA, Khader AT, Al-Betar MA, Awadallah MA (2018) A novel gene selection method using modified MRMR and hybrid bat-inspired algorithm with β-hill climbing. Appl Intell 48(11):4429–4447
8. Alyasseri ZAA, Khader AT, Al-Betar MA, Awadallah MA, Yang XS (2018) Variants of the flower pollination algorithm: a review. In: Yang XS (ed) Nature-inspired algorithms and applied optimization. Springer, Cham, pp 91–118
9. Alyasseri ZAA, Khader AT, Al-Betar MA, Papa JP, Alomari OA, Makhadme SN (2018) An efficient optimization technique of EEG decomposition for user authentication system. In: 2018 2nd international conference on biosignal analysis, processing and systems (ICBAPS). IEEE, pp 1–6
10. Bharti KK, Singh PK (2015) Hybrid dimension reduction by integrating feature selection with feature extraction method for text clustering. Expert Syst Appl 42(6):3105–3114
11. Bharti KK, Singh PK (2016) Chaotic gradient artificial bee colony for text clustering. Soft Comput 20(3):1113–1126
12. Bharti KK, Singh PK (2016) Opposition chaotic fitness mutation based adaptive inertia weight BPSO for feature selection in text clustering. Appl Soft Comput 43:20–34
13. Emary E, Zawbaa HM, Hassanien AE (2016) Binary grey wolf optimization approaches for feature selection. Neurocomputing 172:371–381
14. Faris H, Aljarah I, Al-Betar MA, Mirjalili S (2018) Grey wolf optimizer: a review of recent variants and applications. Neural Comput Appl 30(2):413–435
15. Ferreira AJ, Figueiredo MA (2012) Efficient feature selection filters for high-dimensional data. Pattern Recogn Lett 33(13):1794–1804
16. Forsati R, Mahdavi M, Shamsfard M, Meybodi MR (2013) Efficient stochastic algorithms for document clustering. Inf Sci 220:269–291
17. Karaa WBA, Ashour AS, Sassi DB, Roy P, Kausar N, Dey N (2016) Medline text mining: an enhancement genetic algorithm based approach for document clustering. In: Hassanien AE, Grosan C, Fahmy Tolba M (eds) Applications of intelligent optimization in biology and medicine. Springer, Cham, pp 267–287
18. Kushwaha N, Pant M (2018) Link based BPSO for feature selection in big data text clustering. Future Gener Comput Syst 82:190–199
19. Mafarja MM, Mirjalili S (2019) Hybrid binary ant lion optimizer with rough set and approximate entropy reducts for feature selection. Soft Comput 23(5):1–17
20. Makhadmeh SN, Khader AT, Al-Betar MA, Naim S, Alyasseri ZAA, Abasi AK (2019) Particle swarm optimization algorithm for power scheduling problem using smart battery, pp 1–6
21. Makhadmeh SN, Khader AT, Al-Betar MA, Naim S, Alyasseri ZAA, Abasi AK (2020) A min-conflict algorithm for power scheduling problem in a smart home using battery. In: Proceedings of the 11th national technical seminar on underwater system technology 2019. Springer, pp 1–12
22. Makhadmeh SN, Khader AT, Al-Betar MA, Naim S (2019) Multi-objective power scheduling problem in smart homes using grey wolf optimiser. J Ambient Intell Human Comput 10:3643–3667
23. Mengle SS, Goharian N (2009) Ambiguity measure feature-selection algorithm. J Am Soc Inform Sci Technol 60(5):1037–1050
24. Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61
25. Nakariyakul S, Casasent DP (2009) An improvement on floating search algorithms for feature subset selection. Pattern Recogn 42(9):1932–1940
26. Pudil P, Novovičová J, Kittler J (1994) Floating search methods in feature selection. Pattern Recogn Lett 15(11):1119–1125
27. Shamsinejadbabki P, Saraee M (2012) A new unsupervised feature selection method for text clustering based on genetic algorithms. J Intell Inf Syst 38(3):669–684
28. Sheikhpour R, Sarram MA, Gharaghani S, Chahooki MAZ (2017) A survey on semi-supervised feature selection methods. Pattern Recogn 64:141–158
29. Song W, Qiao Y, Park SC, Qian X (2015) A hybrid evolutionary computation approach with its application for optimizing text document clustering. Expert Syst Appl 42(5):2517–2524

Applied Electronics and Computer Engineering

Metamaterial Antenna for Biomedical Application Mohd Aminudin Jamlos, Nur Amirah Othman, Wan Azani Mustafa, and Maswani Khairi Marzuki

Abstract In this paper, a metamaterial element is applied to an antenna for biomedical applications. The metamaterial unit cell is constructed using the circular split ring resonator (CSRR) technique and attached to the ground plane of the antenna. The metamaterial antenna is designed to operate at frequencies between 0.5 and 3.0 GHz, which is suitable for biomedical applications such as wireless patient movement monitoring, telemetry, and telemedicine, including micro-medical imaging and Magnetic Resonance Imaging (MRI). The design and simulation have been carried out using Computer Simulation Technology Microwave Studio (CST MWS), while the fabricated antenna is measured using a Vector Network Analyzer (VNA) to analyse the overall performance.

Keywords Biomedical · Metamaterial · Antenna

M. A. Jamlos (&) · N. A. Othman · W. A. Mustafa · M. K. Marzuki
Faculty of Engineering Technology, Universiti Malaysia Perlis, UniCITI ALAM Campus, Sungai Chuchuh, 02100 Padang Besar, Perlis, Malaysia
e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2021
Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_35

1 Introduction

Metamaterials have been a popular research topic for almost two decades. Although different definitions exist, most researchers agree on certain basic defining characteristics [1]. Metamaterials are artificially engineered media, not generally found in nature, that exhibit negative permittivity and permeability and a negative refractive index; their structures have properties that are either not or only seldom found in natural materials [1–3]. Metamaterials have been designed from radio frequencies up to optical frequencies, and different functions have been realized, such as a negative refractive index (NRI), huge chirality, anisotropy, and bianisotropy [4]. As an interdisciplinary topic, metamaterials can be classified into different categories based on different criteria. From an operating frequency point of view, they can be classified


as microwave metamaterials, terahertz metamaterials, and photonic metamaterials. From a spatial arrangement point of view, there are 1D, 2D, and 3D metamaterials. From a material point of view, there are metallic and dielectric metamaterials. In this work, we concentrate on the electromagnetic properties and introduce several important types of metamaterials [5].

Metamaterial concepts are mainly focused on size reduction and on improving the characteristics of the conventional patch antenna [6, 7]. For some years, the metamaterial idea has mostly been considered a means of engineering the electromagnetic response of passive micro- and nanostructured materials. Remarkable results have been achieved so far, including negative-index media that refract light in the opposite direction from that of conventional materials, chiral materials that rotate the polarization state of light hundreds of thousands of times more strongly than natural optical crystals, and structured thin films with remarkably strong dispersion that can slow light in much the same way as resonant atomic systems with electromagnetically induced transparency [11–13]. These great achievements in the applications of metamaterials have encouraged biomedical scientists to use these novel materials and their electromagnetic properties in medicine.

2 Metamaterial Unit Cell

The dimension layout of the proposed G-shape ring resonator (GSRR) metamaterial unit cell [8] is depicted in Fig. 1. The gap between the splits (W2) plays a significant role in determining the stop-band behaviour of the proposed unit cell. As illustrated in Fig. 2, with a proper gap of W2 = 0.5 mm, the stop-band phenomenon of the structure is observed at 3.3 GHz: at this frequency, the reflection coefficient (S11) is almost zero and the transmission coefficient is below −10 dB. Similar to the GSRR unit cell, a hexagon split ring resonator (HSRR) unit cell is also analysed, as shown in Fig. 3, and the S-parameters of the HSRR design are illustrated in Fig. 4.

On the other hand, a schematic view and the design parameters of the proposed double-negative square-circular ring resonator (SCRR) metamaterial unit cell are depicted in Fig. 5 [9]. This SCRR unit cell combines a split circular and a split square ring structure on the front side with a metal strip on the back side of the substrate; the metal strip on the back side is treated as a wire. The square, circle, and wire structures are made of copper with a thickness of 0.035 mm. Arlon AD 350 (lossy) is used as the substrate material, with a dielectric constant of 3.5 and a loss tangent of 0.003. The square-circular rings behave as inductors, whereas the splits in the square and circular rings behave as capacitors; together they are responsible for the resonance characteristics. The magnetic field induced in the SRR and the electric field induced in the wire are responsible for the negative permeability (μ) and negative permittivity (ε), respectively. Due to these characteristics, the metamaterial exhibits left-handed properties.
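The ring-as-inductor, split-as-capacitor description corresponds to the quasi-static LC model of a split ring resonator, whose resonance is f0 = 1/(2π√(LC)). The sketch below uses made-up component values purely for illustration; the actual L and C of the proposed cells depend on their geometry and substrate, and are obtained here from full-wave simulation, not from this formula.

```python
import math

def srr_resonance_hz(L_henry, C_farad):
    """Quasi-static LC model of an SRR: f0 = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

# Illustrative (made-up) values: a few nanohenries of ring inductance and
# tens of femtofarads of split-gap capacitance land in the GHz range.
f0 = srr_resonance_hz(2e-9, 50e-15)
```

This back-of-the-envelope model also explains why enlarging the split gap (lowering C) shifts the stop band upward in frequency.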

Metamaterial Antenna for Biomedical Application

Fig. 1 Detailed dimension layout of GSRR

Fig. 2 S-parameter of proposed design


M. A. Jamlos et al.

Fig. 3 HSRR unit cell

Fig. 4 S-parameter of HSRR

Figure 6 shows the simulation setup for the proposed square-circular unit cell. The frequency-domain solver of the electromagnetic simulator CST Microwave Studio has been used to calculate the reflection and transmission coefficients of the proposed design. The unit cell is placed between two waveguide ports on the positive and negative X-axis. Perfect electric conductor (PEC) and perfect magnetic conductor (PMC) boundary conditions are applied along the Y- and Z-axes. The electromagnetic properties are obtained from the simulated S11 and S21 characteristics of the SCRR metamaterial unit cell. There are several methods suitable for parameter extraction, such


Fig. 5 SCRR unit cell structure. a Front view. b Back view

Fig. 6 Simulation setup of unit cell

as the TR method, the Nicolson-Ross method, and many others. Using a transfer matrix, the effective parameters of the proposed SCRR metamaterial structure, such as the complex permittivity and complex permeability, are extracted [10]. Figure 7 represents the transmission (S21) and reflection (S11) characteristics of the simulated unit cell structure. The transmission characteristics (|S21| < −10 dB) show that it can be used from 3.36 to 5.88 GHz, which belongs to the C-band. Meanwhile, Fig. 8 shows the phase response of S11 and S21. In Fig. 9, the negative refractive index


is obtained from 5.7 to 6 GHz, with the maximum negative value at 5.816 GHz. In Fig. 10, the real part of the permittivity is negative from 3.22 to 6 GHz, while Fig. 11 shows that the real part of the permeability is negative from 5.824 to 6.1 GHz. For biomedical applications, an attractive property of metamaterials is that a plane wave propagating in the medium has its phase velocity antiparallel to its group velocity, so the medium supports backward waves. In this paper we also propose a periodic rectangular split-ring resonator metamaterial (RSRM), whose unit cell is depicted in Fig. 12. This RSRM unit cell is composed of two nested split rings etched on an FR4 substrate with a dielectric constant of 4.4. The resonance frequency of this rectangular split-ring unit cell depends on the gap dimension (g). Slot-loaded miniaturized patch antennas have commonly been used in biomedical applications, but such patch antennas were never extended and analyzed with a metamaterial structure. Hence, the rectangular split-ring metamaterial structure is loaded on the ground plane of a conventional circular microstrip antenna, so that the antenna achieves a 75% size reduction together with good bandwidth and gain for biomedical and wireless applications. The designed metamaterial circular microstrip patch antenna is shown in Fig. 13; parametric studies varying the width and gap of the metamaterial structure were carried out to further improve the bandwidth, gain, and efficiency of the antenna under test (AUT) for biomedical applications.
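The transfer-matrix extraction of effective parameters from simulated S11 and S21 described above can be sketched as follows. This is the standard homogeneous-slab retrieval, not code from the paper: the wave impedance z follows from the S-parameters, and the index n from the transmission phase. The slab thickness, frequency, and the test values of n and z are illustrative assumptions, and the principal logarithm branch is valid only for electrically thin slabs.

```python
import cmath, math

def retrieve_nz(s11: complex, s21: complex, k0: float, d: float):
    """Recover refractive index n and normalized impedance z of a homogeneous
    slab of thickness d from its S-parameters at free-space wavenumber k0.
    Valid on the principal branch, i.e. for |Re(n) * k0 * d| < pi."""
    z = cmath.sqrt(((1 + s11) ** 2 - s21 ** 2) / ((1 - s11) ** 2 - s21 ** 2))
    if z.real < 0:           # passivity requires Re(z) >= 0
        z = -z
    # exp(j*n*k0*d) follows from S21 once the interface reflection is known
    p = s21 / (1 - s11 * (z - 1) / (z + 1))
    n = cmath.log(p) / (1j * k0 * d)
    return n, z

# Round-trip check with an assumed slab: n = 2 + 0.1j, z = 1.5, d = 2 mm at 3 GHz.
k0 = 2 * math.pi * 3e9 / 3e8
d = 2e-3
n_true, z_true = 2 + 0.1j, 1.5
gamma = (z_true - 1) / (z_true + 1)
p_fwd = cmath.exp(1j * n_true * k0 * d)
s11 = gamma * (1 - p_fwd ** 2) / (1 - gamma ** 2 * p_fwd ** 2)
s21 = (1 - gamma ** 2) * p_fwd / (1 - gamma ** 2 * p_fwd ** 2)
n, z = retrieve_nz(s11, s21, k0, d)
eps, mu = n / z, n * z   # effective permittivity and permeability
```

With n and z in hand, ε = n/z and μ = n·z; a negative-index band requires Re(ε) and Re(μ) to be simultaneously negative, consistent with the overlap of the negative-permittivity and negative-permeability ranges reported above.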

Fig. 7 The transmission (S21) and reflection (S11) characteristics


Fig. 8 Phase response of S11 and S21

Fig. 9 Refractive index


Fig. 10 Real part of permittivity

Fig. 11 Real part of permeability


Fig. 12 RSRM unit cell

Fig. 13 Metamaterial circular microstrip patch antenna as AUT (top and bottom view)

3 Conclusion

In conclusion, a variety of metamaterial antenna designs for biomedical applications has been discussed. The competency of a metamaterial is determined by evaluating its performance in terms of resonant frequency, gain, efficiency, radiation pattern, reflection coefficient magnitude, power ratio, and bandwidth. Among the challenges in realizing ideal metamaterial designs are obtaining optimum efficiency and a compact antenna size; achieving both requires further effort in ideal metamaterial and metamaterial antenna design.


References

1. Gangwar K, Gangwar R (2014) Metamaterials: characteristics, process and applications. Adv Electron Electric Eng 4:97–106
2. Mendhe SE, Kosta YP (2011) Metamaterial properties and applications. Int J Inf Technol Knowl Manag 4(1):85–89
3. Sihvola A (2007) Metamaterials in electromagnetics. Metamaterials 1(1):2–11
4. Yan S (2015) Metamaterial design and its application for antennas. KU Leuven, Science, Engineering & Technology
5. Anandhimeena B, Selvan PT, Raghavan S (2016) Compact metamaterial antenna with high directivity for bio-medical systems. Circuits Syst 7:4036–4045
6. Islam MM, Islam MT, Samsuzzaman M, Faruque MRI, Misran N, Mansor MF (2015) A miniaturized antenna with negative index metamaterial based on modified SRR and CLS unit cell for UWB microwave imaging applications. Materials 8:392–407
7. Ali T, Subhash BK, Biradar RC (2018) Design and analysis of two novel metamaterial unit cell for antenna engineering. In: Proceedings of 2018 2nd international conference on advances in electronics, computers and communications, pp 1–4
8. Khombal M, Bagchi S, Harsh R, Chaudhari A (2018) Metamaterial unit cell with negative refractive index at C band. In: 2018 2nd international conference on electronics, materials engineering and nano-technology, IEMENTech 2018, pp 1–4
9. Rajput GS, Gwalior S (2012) Design and analysis of rectangular microstrip patch antenna using metamaterial for better efficiency. Int J Adv Technol Eng Res 2:51–58
10. Koutsoupidou M, Karanasiou IS, Uzunoglu N (2013) Rectangular patch antenna on split-ring resonators substrate for THz brain imaging: modeling and testing. In: 13th IEEE international conference on bioinformatics and bioengineering, BIBE 2013. IEEE, pp 1–4
11. Singh G, Marwaha A (2015) A review of metamaterials and its applications. Int J Eng Trends Technol 19(6):305–310
12. Hosseinzadeh HR (2018) Metamaterials in medicine: a new era for future orthopedics. Orthop Res Online J 2(5):1–3
13. Tütüncu B, Torpi H, Urul B (2018) A comparative study on different types of metamaterials for enhancement of microstrip patch antenna directivity at the Ku-band (12 GHz). Turk J Electr Eng Comput Sci 26:1171–1179

Refraction Method of Metamaterial for Antenna

Maswani Khairi Marzuki, Mohd Aminudin Jamlos, Wan Azani Mustafa, and Khairul Najmy Abdul Rani

Abstract This paper reviews several refraction methods of metamaterials. A metamaterial is an engineered structure that produces electromagnetic properties not occurring naturally in ordinary materials, such as negative permittivity, negative permeability, and a negative refractive index. This review focuses on negative-refractive-index applications in the microwave and optical frequency ranges. Each method covers a different frequency range: split-ring resonators are used at microwave frequencies to enhance gain, while the fishnet-chiral planar structure is used at photonic frequencies. The photonic metamaterial acts similarly to a lens, which leads to enhanced microwave gain.

Keywords Refraction method · Metamaterial · Antenna

1 Introduction

Metamaterials are artificial materials introduced by researchers and known for unique properties that do not occur naturally in other materials [1]. A metamaterial is formed from multiple composite materials, or meta-atoms, arranged in a repeating pattern known as a unit cell. The structured meta-atoms are much larger than conventional atoms but much smaller than the wavelength of the incident waves: the wavelength for microwave radiation is in millimeters, while for photonic metamaterials it is in nanometers [2]. Each design provides different properties and is capable of manipulating electromagnetic waves, such as blocking, absorbing, enhancing, and bending the incident wave; it also affects electromagnetic radiation and light [3]. The idea of creating an unusual material like a metamaterial arose because of the limited abilities of natural materials, which have only positive characteristics, such as

M. K. Marzuki · M. A. Jamlos (&) · W. A. Mustafa · K. N. A. Rani Faculty of Engineering Technology, Universiti Malaysia Perlis, UniCITI ALAM Campus, Sungai Chuchuh, 02100 Padang Besar, Perlis, Malaysia e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_36


M. K. Marzuki et al.

positive dielectric permittivity and positive magnetic permeability; such a medium is known as a "double-positive" material. Metamaterials can be classified into two types. In a "single-negative" metamaterial, either the permittivity or the permeability is negative; this type supports evanescent waves. In a "double-negative" metamaterial, both values are negative, which leads to a negative refractive index [4]. The focus of this paper is the applications of negative-index metamaterials (NIMs). Theoretically, NIMs are referred to as left-handed materials (LHMs), in which the Poynting vector is antiparallel to the wave vector; this differs from right-handed materials, where the Poynting vector is parallel to the wave vector and the permittivity and permeability are positive [5]. An important property of a NIM is that it bends (refracts) light differently from a common positive-index material: the refracted ray lies on the same side of the normal as the incident ray. A NIM with a refractive index of −1 would provide ultrahigh resolution and a superlensing effect. NIMs are used in a variety of applications and can be distinguished by different methods [6].
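The backward bending just described follows directly from Snell's law with a negative index. A minimal sketch (the indices and incidence angle are illustrative values, not taken from the paper):

```python
import math

def refraction_angle_deg(n1: float, n2: float, theta1_deg: float) -> float:
    """Snell's law n1*sin(t1) = n2*sin(t2). A negative n2 yields a negative
    refraction angle: the refracted ray lies on the SAME side of the normal
    as the incident ray, the signature of a left-handed medium."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(s))

conventional = refraction_angle_deg(1.0, 1.5, 30.0)   # positive: opposite side of normal
left_handed  = refraction_angle_deg(1.0, -1.0, 30.0)  # -30 deg: same side of normal
```

The n2 = −1 case is exactly the condition quoted above for the superlensing effect: every ray from a point source is refocused, regardless of its incidence angle.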

2 Refraction Method

There are several refraction methods of metamaterials discussed in this section. Each method is used in different applications depending on the design of the unit cell. The first method uses a cylindrical lens antenna, as shown in Fig. 1. Researchers use this method to replace the array antenna used at the base station for the next-generation mobile system (5G). It supports multi-beam and

Fig. 1 Cylindrical lens antenna


Fig. 2 Huygens' metasurfaces

multi-frequency use. Besides that, the negative refractive index reduces the thickness of the lens, and the index obtained for the application is n = 2 [7]. The Huygens' metasurface method also produces a negative refractive index, which is used to focus the signal beam. This method is printed on two bonded boards using standard PCB fabrication techniques, even though there are many stacked and interspaced layers, as shown in Fig. 2 [8]. The split-ring resonator (SRR) is commonly used in metamaterial antennas for many applications, depending on the design, as shown in Fig. 3. Many researchers tend to use this method because of its design characteristics: the permeability value is controlled by the radius and width of the ring [9]. Five different designs are discussed for this method. The first uses a double circular slot ring resonator: it acts as a planar surface lens, and a 3-dB transmission band of 2 GHz is obtained between 8.55 and 10.55 GHz. Then, a high-gain antenna is modified by placing a double-stacked metasurface lens over a microstrip patch antenna; the gain is enhanced by 8.55 dB in the H-plane and 6.20 dB in the E-plane, and the cross-polarization is improved by 8 dB [10]. There is also a squared SRR design, which is used to synthesize a negative-refractive-index lens and a parabolic lens. This method uses 90 unit cells to get n = ∞ at 11.6 GHz. The combination of these two metasurfaces is able to focus the energy at a point despite power losses in the air [11]. Besides that, a combination of square and circular shapes was designed to exhibit a negative refractive index in the 5.7 to 6 GHz frequency band [12], and other researchers used this design to produce a negative refractive index in the S-band, between 2.2 and 3.3 GHz, resonating at 2.5 GHz; the radiation directivity was also enhanced, making it usable for wireless power transfer applications [13]. Lastly, the SRR design is not limited to


Fig. 3 a Double circular slot ring resonator. b Squared split-ring resonator. c Square-circular split-ring resonator. d S-shape resonator


Fig. 4 a Chiral planar. b Fishnet structure. c Fishnet-like chiral metamaterial

circular or square shapes only: one researcher managed to design an SRR in an S-shape, as shown in Fig. 3d. The negative refractive index occurred at higher frequencies, between 5 and 9 GHz [14]. All the methods discussed so far obtain a negative index at microwave frequencies; none of them operates at optical frequencies. Therefore, the fishnet-chiral planar method is introduced, as shown in Fig. 4. Three designs are reviewed here. The first is a chiral planar design used at optical frequencies; it managed to reduce the losses of the negative-index metamaterial and exhibits polarization effects for light fields [2]. Then, the fishnet structure design was introduced; this method is used to obtain negative permeability and achieves the highest figure of merit (FOM) without loss compensation. Besides that, light passing through undergoes negative refraction at the interface and is focused in the far field: the negative-index metamaterial (NIM) slab acts similarly to a lens. Lastly, the combination of the fishnet and chiral planar structures, known as the fishnet-like chiral metamaterial, was designed to reduce the losses exhibited by the chiral metamaterial and exhibits negative refractive indices in three frequency bands [15].
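The figure of merit (FOM) mentioned for the fishnet design quantifies how lossy a negative-index band is; the usual definition is FOM = |Re(n)|/Im(n). A small sketch (the sample index values are assumptions for illustration):

```python
def figure_of_merit(n: complex) -> float:
    """FOM = |Re(n)| / Im(n): large when strong negative refraction
    (large |Re n|) comes with low absorption (small Im n)."""
    return abs(n.real) / n.imag

low_loss  = figure_of_merit(-2.0 + 0.1j)   # FOM = 20
high_loss = figure_of_merit(-2.0 + 1.0j)   # FOM = 2
```

A chiral or fishnet design that raises the FOM without loss compensation, as claimed above, is attractive precisely because Im(n) sets how quickly the backward wave is absorbed inside the slab.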

3 Conclusion

Metamaterial capabilities have been explored in many applications, as reviewed in this paper, using negative-index metamaterials. However, most of the applications are in the microwave frequency range, so it is worthwhile to explore photonic systems further. As reviewed, the fourth method, the fishnet-chiral planar design, is able to manipulate electromagnetic radiation and light. This method has three different capabilities depending on its design: it can exhibit polarization effects on light, bend and focus light at a point, and act similarly to a lens. With these properties, it can be used to explore electromagnetic radiation further and to manipulate the properties of light.


References

1. Kuse R, Hori T, Fujimoto M (2015) Variable reflection angle meta-surface using double layered FSS. In: 2015 IEEE international symposium on antennas and propagation & USNC/URSI national radio science meeting, Canada. IEEE, pp 872–873
2. Linden S, Wegener M (2007) Photonic metamaterials. In: Conference proceedings of the international symposium on signals, systems and electronics, USA, pp 147–150
3. Zhu B, Huang C, Zhao J, Jiang T, Feng Y (2010) Manipulating polarization of electromagnetic waves through controllable metamaterial absorber. In: 2010 Asia-Pacific microwave conference, Japan. IEEE, pp 1525–1528
4. Duan ZY, Guo C, Guo X, Chen M (2016) Double negative-metamaterial based terahertz radiation excited by a sheet beam bunch. Phys Plasmas 20(9):1–6
5. Solymar L, Shamonina E (2009) Waves in metamaterials (A bird's-eye view of metamaterials). Oxford University Press, Oxford
6. Yang J, Xu F, Yao S (2018) A dual frequency Fabry-Perot antenna based on metamaterial lens. In: 2018 12th international symposium on antennas, propagation and EM theory (ISAPE), China. CRIRP, pp 1–3
7. Hamid S, Ali MT, Abd Rahman NH, Pasya I, Yamada Y, Michishita N (2016) Accuracy estimations of a negative refractive index cylindrical lens antenna designing. In: Proceedings of the 2016 6th IEEE-APS topical conference on antennas and propagation in wireless communications, APWC, USA. IEEE, pp 23–26
8. Wong Joseph PS (2015) Design of Huygens' metasurfaces for refraction and focusing. PhD dissertation, Electrical and Computer Engineering, University of Toronto
9. Singh AK, Abegaonkar MP, Koul SK (2017) A negative index metamaterial lens for antenna gain enhancement. In: International symposium on antennas and propagation, USA. IEEE, pp 1–2
10. Yang J, Xu F, Yao S (2018) A dual frequency Fabry-Perot antenna based on metamaterial lens. In: 12th international symposium on antennas, propagation and EM theory (ISAPE), China. IEEE, pp 1–3
11. Pan CW, Kehn MNM, Quevedo-Teruel O (2015) Microwave focusing lenses synthesized with positive or negative refractive index split-ring resonator metamaterials. In: International workshop on electromagnetics: applications and student innovation competition, IWEM, pp 1–2
12. Khombal M, Bagchi S, Harsh R, Chaudhari A (2018) Metamaterial unit cell with negative refractive index at C band. In: 2nd international conference on electronics, materials engineering and nano-technology, India. IEEE, pp 1–4
13. Baghel AK, Nayak SK (2018) Negative refractive index metamaterial for enhancing radiation directivity in S-band. In: 3rd international conference on microwave and photonics, India. IEEE, pp 1–2
14. Fiddy MA, Adams R, Weldon TP (2017) Exploiting metamaterials: fundamentals and applications. PhD dissertation, Electrical Engineering, University of North Carolina at Charlotte
15. Fernández O, Gómez Á, Vegas A, Molina-Cuberos GJ, García-Collado AJ (2017) Novel fishnet-like chiral metamaterial structure with negative refractive index and low losses. In: IEEE antennas and propagation society international symposium proceedings, USA, pp 1959–1960

Circular Polarized 5.8 GHz Directional Antenna Design for Base Station Application

Mohd Aminudin Jamlos, Nurasma Husna Mohd Sabri, Wan Azani Mustafa, and Maswani Khairi Marzuki

Abstract Nowadays, research on and utilization of directional antennas with circular polarization has grown rapidly for base station applications. A high-gain antenna (HGA) is a directional antenna with a narrow beamwidth focused for this application. Such an antenna permits more precise targeting of the radio signal and is usually placed in an open area so that the transmitted radio waves are not interrupted. In this paper, design methods for circularly polarized microstrip patch antennas are reviewed. To realize circular polarization, the patch undergoes some design modification, while an array antenna is designed to improve antenna performance and realize the high gain required for base station applications.

Keywords Circular polarization · Base station · Antenna

1 Introduction

A circularly polarized 5.8 GHz directional antenna is designed for base station applications. Such an antenna must have very wideband impedance matching, a stable radiation pattern over a wide frequency band, and a high cross-polarization ratio over a wide angular range [1–3]. For this research, a circularly polarized microstrip patch antenna is designed, since it is suitable for wireless communication. To make the design circularly polarized, the patch must undergo modifications such as introducing perturbations, slots or slits, or truncating corners [1, 4]. For the antenna to work in a base station, it must have a very high gain so that the signal can be transmitted and received consistently. Thus, an array antenna is designed to improve the antenna gain

M. A. Jamlos (&) · N. H. Mohd Sabri · W. A. Mustafa · M. K. Marzuki Faculty of Engineering Technology, Universiti Malaysia Perlis, UniCITI ALAM Campus, Sungai Chuchuh, 02100 Padang Besar, Perlis, Malaysia e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_37


performance in base station applications [5]: a rectangular microstrip patch antenna array is designed, and some modifications are made to the patch to obtain a circularly polarized antenna for base station use. Besides, a directional radiation pattern is an important requirement, since it provides increased performance and reduced interference during the transmission and reception of communications [6]. A directional antenna is designed to radiate more effectively in some directions than in others; the reason for that directionality is to improve the transmission and reception of the communication signal as well as to reduce interference [5]. The antenna operates at 5.8 GHz to meet the large bandwidth and gain requirements of base station applications.

2 Microstrip Antenna

Microstrip antennas are associated with low cost, light weight, and conformal structures that can be integrated with feed networks and active devices. The basic structure of a microstrip antenna consists of a radiating patch on one side of a dielectric substrate and a ground plane on the other side [1, 3, 5]. A microstrip patch antenna structure is shown in Fig. 1. The patch is generally made of a conducting material such as copper or gold and can take any possible shape. The patch and the feed lines are photo-etched on the substrate, so the antenna can take any desired shape; a rectangular patch is the simplest shape to etch and analyze. Microstrip antennas have the advantages of low profile, light weight, low cost, and ease of integration with active components and radio-frequency devices [3, 7]. However, they also have disadvantages, namely low gain, low efficiency, and low power-handling capability, all of which can be overcome by using an array concept or a MIMO antenna [5, 8]. Besides, the radiation pattern of an antenna depends on its dimensions; it also depends on the effective permittivity of the substrate, which in turn depends on the patch width and the substrate height.
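The dependence of the patch dimensions on frequency and substrate just described is captured by the standard transmission-line design equations (textbook formulas, not taken from this paper). A sketch, assuming an FR4 substrate (εr = 4.4, h = 1.6 mm) at the 5.8 GHz operating frequency:

```python
import math

C0 = 3e8  # speed of light, m/s

def patch_dimensions(f0: float, eps_r: float, h: float):
    """Rectangular patch width W, effective permittivity, and length L from
    the transmission-line model of a microstrip patch antenna."""
    W = C0 / (2 * f0) * math.sqrt(2 / (eps_r + 1))
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 / math.sqrt(1 + 12 * h / W)
    # fringing fields make the patch look electrically longer by dL per edge
    dL = 0.412 * h * (eps_eff + 0.3) * (W / h + 0.264) / (
        (eps_eff - 0.258) * (W / h + 0.8))
    L = C0 / (2 * f0 * math.sqrt(eps_eff)) - 2 * dL
    return W, eps_eff, L

W, eps_eff, L = patch_dimensions(5.8e9, 4.4, 1.6e-3)
# roughly W ~ 16 mm and L ~ 12 mm for these assumed substrate values
```

These are starting values only; in practice the dimensions are tuned in a full-wave solver, since the feed and fabrication tolerances shift the resonance.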

Fig. 1 Microstrip patch antenna structure [1]


Fig. 2 Types of polarization. a Linear. b Circular. c Elliptical [9]

3 Antenna Polarization

Polarization is the property of an electromagnetic wave describing the time-varying direction and relative magnitude of the electric field vector as observed along the direction of propagation. Transmitting and receiving antennas should be similarly polarized; otherwise there will be additional losses. There are three types of polarization, shown in Fig. 2: linear, circular, and elliptical [9]. Linear polarization requires the transmitting and receiving antennas to be well aligned. This alignment limitation can be removed by using circular polarization, which suits this research project, whose design requires circular polarization [10]. Circularly polarized antennas were once an exotic microwave technology for communication. The field of a CP antenna is always rotating, and circular polarization (CP) is achieved when the axial ratio equals one; other researchers consider an antenna circularly polarized when the axial ratio is less than 3 dB at a 90° phase shift [11]. Circular polarization has two types: right-hand CP (RHCP) and left-hand CP (LHCP). In practical implementations it matters whether the antenna is LHCP or RHCP: if the transmitting antenna is LHCP and the receiving antenna is RHCP, there will be a 25 dB gain difference between them. Polarization losses also exist whenever the transmitting and receiving antenna polarizations differ [12].
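The mismatch losses just described can be quantified with the polarization loss factor, PLF = |ρ̂t · ρ̂r*|², where ρ̂ are unit Jones vectors for the transmit and receive polarizations. A sketch (the RHCP/LHCP sign convention below is an assumption; conventions vary with the propagation direction):

```python
import math

def plf(t, r) -> float:
    """Polarization loss factor between transmit and receive unit Jones
    vectors: |sum(t_i * conj(r_i))|^2. 1 = matched, 0 = fully mismatched."""
    inner = sum(ti * rj.conjugate() for ti, rj in zip(t, r))
    return abs(inner) ** 2

s = 1 / math.sqrt(2)
lin_x  = (1, 0)            # linear, along x
lin_45 = (s, s)            # linear, rotated 45 degrees
rhcp   = (s, -1j * s)      # assumed sign convention
lhcp   = (s, +1j * s)

plf(lin_x, lin_45)   # 0.5 -> 3 dB loss between misaligned linear antennas
plf(rhcp, rhcp)      # 1.0 -> matched circular polarizations
plf(rhcp, lhcp)      # 0.0 -> opposite-sense CP: total mismatch
```

For ideal antennas the RHCP-to-LHCP mismatch is total; in practice the finite axial ratio of real antennas limits the isolation, which is why a finite figure such as the 25 dB difference quoted above is observed.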

4 Methods for Circular Polarized Antenna Design

Circular polarization (CP) antennas are increasingly attractive in wireless communication systems [13]. Circular polarization is obtained if two orthogonal modes with equal amplitudes are excited with a 90° time-phase difference. This can be


accomplished, for instance, by adjusting the physical dimensions of the microstrip patch or by various feed arrangements [14, 15]. The figures below show some antenna designs from the literature that result in circular polarization. Some researchers have modified the antenna design to obtain circular polarization. As presented by Thoetphan Kingsuwannaphong, the design of a 5.7 GHz circular polarization antenna uses a double feeder to avoid interference from adjacent channels of other wireless devices, but the antenna requires two input ports with 0° and 90° phase inputs to achieve the circular polarization property. Since it is possible to create two output signals with a 90° phase difference, a compact circularly polarized antenna with an inset feed and a slot is designed, as shown in Fig. 3; the slot at the edge of the circular patch is made to achieve circular polarization. The resulting axial ratio is shown in Fig. 4: at 90° phase, the AR is below 3 dB, so the design is circularly polarized. Another way to achieve circular polarization is to make an inclined or diagonal slot at the center of the patch; the slot technique is one way to obtain circular polarization [16–18]. As contributed by one researcher, the antenna element is a square with an inclined slot at the center, fed by a microstrip line with a characteristic impedance of 100 Ω and mounted on an FR4 substrate; the antenna dimensions are presented in Fig. 5. Besides, by introducing asymmetrical slits in the diagonal direction of the square microstrip patches [18], a single coaxial-feed microstrip patch antenna is realized for circularly polarized radiation with a compact antenna size; the impedance and axial-ratio bandwidths are small, around 2.5% and 0.5%, respectively.
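The equal-amplitude, 90° phase condition and the AR < 3 dB criterion can be checked numerically through the polarization-ellipse axial ratio. A sketch using the standard expressions for the ellipse axes in terms of two orthogonal field components (the test amplitudes and phases are illustrative):

```python
import math

def axial_ratio_db(ex: float, ey: float, delta_deg: float) -> float:
    """Axial ratio (dB) of the polarization ellipse traced by orthogonal
    field amplitudes ex, ey with phase difference delta. 0 dB = perfect CP."""
    d = math.radians(delta_deg)
    root = math.sqrt(ex ** 4 + ey ** 4 + 2 * ex ** 2 * ey ** 2 * math.cos(2 * d))
    major = math.sqrt(0.5 * (ex ** 2 + ey ** 2 + root))
    minor = math.sqrt(0.5 * (ex ** 2 + ey ** 2 - root))
    return 20 * math.log10(major / minor)

axial_ratio_db(1.0, 1.0, 90.0)   # 0 dB: ideal circular polarization
axial_ratio_db(1.0, 1.0, 80.0)   # ~1.5 dB: still CP by the AR < 3 dB criterion
axial_ratio_db(1.0, 1.0, 60.0)   # ~4.8 dB: no longer circularly polarized
```

This makes explicit why perturbations such as slots or truncated corners work: they only need to bring the two orthogonal modes close enough in amplitude and phase quadrature for the AR to dip under 3 dB over the operating band.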
Besides, to make a circularly polarized antenna, modifications such as truncating the patch or cutting slots can be applied. In previous research, the proposed antenna was developed by combining two array antennas excited from a 50 Ω coaxial feed probe; the array

Fig. 3 Circular polarized antenna design [15]


Fig. 4 Simulation result of axial ratio

Fig. 5 Patch antenna design with inclined slot [16]


Fig. 6 Circular polarized array antenna design [12]

antenna is designed with 4 patch elements on the substrate, and each element is truncated at the corner of the patch to achieve circular polarization [12, 19, 20]. The antenna design is shown in Fig. 6. A single-feed CP U-slot microstrip antenna is proposed in [21]: the asymmetrical U-slot structure is able to generate two orthogonal modes for CP operation without truncating any corner of the square patch. CP radiation can also be achieved by etching a complementary split-ring resonator on the patch; the orientation of the etched gap relative to the current propagation direction makes the antenna generate CP waves. By cutting asymmetrical slots in the square patches, a single probe-feed microstrip antenna is realized for CP radiation [22]. A technique to design a single-feed CP microstrip antenna using a fractal defected ground structure (FDGS) has also been presented [21, 23]; with this method, a linearly polarized microstrip antenna is brought to the level required for CP radiation. Another technique to obtain a circularly polarized antenna is given in [24], where a circular microstrip patch antenna and its two-element array are proposed for 5.8 GHz ISM-band applications. The antenna consists of a circular patch with an elliptical slot and a vertical strip at the center, as shown in Fig. 7, and exhibits a circularly polarized radiation pattern with good return-loss characteristics.
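The gain improvement from the array configurations reviewed above can be seen from the array factor of a uniform linear array: at broadside the fields of N elements add coherently, giving up to 10·log10(N) dB of extra gain over a single element (about 6 dB for a 4-element design). This is an idealized, lossless upper bound under assumed uniform in-phase excitation, not a measured figure from the papers cited:

```python
import cmath, math

def array_factor(n_elems: int, d_over_lambda: float, theta_deg: float) -> float:
    """|AF| of an N-element uniform linear array with spacing d (in
    wavelengths), theta measured from broadside, uniform in-phase feeds."""
    psi = 2 * math.pi * d_over_lambda * math.sin(math.radians(theta_deg))
    return abs(sum(cmath.exp(1j * k * psi) for k in range(n_elems)))

af_broadside = array_factor(4, 0.5, 0.0)   # fields of 4 elements add to 4.0
# field adds as N (20*log10), but input power splits N ways (-10*log10)
gain_bonus_db = 20 * math.log10(af_broadside) - 10 * math.log10(4)   # ~6 dB
```

Real arrays fall short of this bound because of feed-network losses and mutual coupling, which is why the truncated-corner array above is tuned in simulation rather than sized analytically.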


Fig. 7 Circular polarized array antenna design [24]

5 Conclusion

In conclusion, this paper describes methods for designing circularly polarized microstrip patch antennas and ways to improve their performance for base station applications. The bandwidth of the microstrip antenna is its main limitation, since base stations require a large bandwidth. The methods described, including modifying the shape of the patch, using different feeding techniques for circular polarization, and arranging the antenna in an array configuration, help increase the bandwidth. Different slotted antennas, varying the shape and size of the slot, also help achieve increased bandwidth, improved efficiency, and higher gain.

References

1. Kingsuwannaphong T, Sittakul V (2018) Compact circularly polarized inset-fed circular microstrip antenna for 5 GHz band. Comput Electr Eng 65:554–563
2. Chen W-S, Wu C-K, Wong K-L (2002) Compact circularly-polarised circular microstrip antenna with cross-slot and peripheral cuts. Electron Lett 34:1040
3. Nayan MKA, Jamlos MF, Jamlos MA (2014) Circular polarized phased shift 90° MIMO array antenna for 5.8 GHz application. In: IEEE international symposium on telecommunication technologies, ISTT, vol 76, pp 169–173
4. Karvekar S, Deosarkar S, Deshmukh V (2014) Design of compact probe fed slot loaded microstrip antenna. In: International conference on communication and signal processing, ICCSP, pp 387–390
5. Midasala V, Siddaiah P (2016) Microstrip patch antenna array design to improve better gains. Procedia Comput Sci 85:401–409
6. Fauzi DLN, Hariyadi T (2018) Design of a directional microstrip antenna at UHF-band for passive radar application. IOP Conf Ser Mater Sci Eng 384:012006
7. Balanis CA (2005) Antenna theory analysis and design, 3rd edn. Wiley, Hoboken


8. Nayan MKA, Jamlos MF, Jamlos MA, Lago H (2014) MIMO 2×2 RHCP array antenna for point-to-point communication. In: IEEE symposium on wireless technology and applications, ISWTA, pp 121–124
9. Orban D, Moernaut GJK (2006) The basics of patch antennas. Orban Microwave Products, pp 1–4
10. Lacoste R (2010) Robert Lacoste's the darker side: practical applications for electronic design concepts. Elsevier Inc., Amsterdam
11. Fujita K, Yoshitomi K, Yoshida K, Kanaya H (2015) A circularly polarized planar antenna on flexible substrate for ultra-wideband high-band applications. AEU Int J Electron Commun 69:1381–1386
12. Kunooru B, Nandigama SV, Rani SS, Ramakrishna D (2019) Analysis of LHCP and RHCP for microstrip patch antenna. In: International conference on communication and signal processing (ICCSP), pp 0045–0049
13. Jamlos MA, Jamlos MF, Ismail AH (2015) High performance of coaxial feed UWB antenna with parasitic element for microwave imaging. Microw Opt Technol Lett 57:649–653
14. Jackson DR, Long SA, Williams JT, Davis VB (1997) Computer aided design of rectangular microstrip antennas. In: Advances in microstrip and printed antennas, 2nd edn. Wiley, Hoboken
15. Garg AIR, Bhartia P, Bahl I (2001) Microstrip antenna design handbook. Artech House, Boston
16. Nayan MK, Jamlos MF, Lago H, Jamlos MA (2015) Two-port circular polarized antenna array for point-to-point communication. Microw Opt Technol Lett 57:2328–2332
17. Madhuri S, Tiwari VN (2016) Review of circular polarization techniques for design of microstrip patch antenna. In: International conference on recent cognizance in wireless communication & image processing, pp 663–669
18. Nasimuddin, Chen ZN, Esselle KP (2008) Wideband circularly polarized microstrip antenna array using a new single feed network. Microw Opt Technol Lett 50:1784–1789
19. Liang D, Hosung C, Robert WH, Hao L (2005) Simulation of MIMO channel capacity with antenna polarization. IEEE Trans Wireless Commun 4(4):1869–1873
20. Wei K, Li JY, Wang L, Xu R, Xing ZJ (2017) A new technique to design circularly polarized microstrip antenna by fractal defected ground structure. IEEE Trans Antennas Propag 65:3721–3725
21. Nasimuddin, Qing X, Chen ZN (2011) Compact asymmetric-slit microstrip antennas for circular polarization. IEEE Trans Antennas Propag 59:285–288
22. Gupta K, Jain K, Singh P (2014) Analysis and design of circular microstrip patch antenna at 5.8 GHz. Int J Comput Sci Inf Technol 5:3895–3898
23. Nayan MK, Jamlos MF, Jamlos MA (2015) Circularly polarized MIMO antenna array for point-to-point communication. Microw Opt Technol Lett 57:242–247
24. Singh N, Yadav DP, Singh S, Sarin RK (2010) Compact corner truncated triangular patch antenna for WiMax application. In: Mediterranean microwave symposium, MMS, pp 163–165

Medical Image Enhancement and Deblurring

Reza Amini Gougeh, Tohid Yousefi Rezaii, and Ali Farzamnia

Abstract One of the most common image artifacts is blurring. Blind methods have been developed to restore a clear image from a blurred input. In this paper, we introduce a new method that optimizes previous work and adapts it to medical images. Optimized non-linear anisotropic diffusion was used to reduce noise by choosing the constants correctly. After de-noising, edge sharpening is performed using shock filters. A novel enhanced method called coherence-enhancing shock filters helped us obtain strongly sharpened edges. To obtain a blur kernel, we used the coarse-to-fine method. In the last step, we used a spatial prior before restoring the unblurred image. Experiments with images show that combining these methods may outperform previous image restoration techniques in terms of accuracy.

Keywords Medical images · Blind deconvolution · Deblurring

R. Amini Gougeh, T. Yousefi Rezaii: Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
A. Farzamnia (corresponding author): Faculty of Engineering, Universiti Malaysia Sabah, Kota Kinabalu, Sabah, Malaysia, e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021. Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_38

1 Introduction

Medical images are an indispensable component of the diagnosis and treatment system, so accurate images are needed. Blur is a type of medical image artifact with various sources, such as body movement or the detector. The blur kernel determines the effect of the blur on the image. If the blur is shift-invariant, it can be modeled as a convolution of the original image with the blur kernel; thus, obtaining a clear image becomes a deconvolution problem. In non-blind deconvolution, the blur function is known, and the problem is to find the original image from the blurred image. In blind deconvolution, the blur function is unknown [1]. Among the non-blind methods, we can refer to the Wiener filter and the Lucy-Richardson method, which were introduced decades ago with initial



assumptions about the blur function. Among the new blind methods, we can mention the Fergus method [2]. In this article, we investigate the blind deconvolution approach and try to achieve an efficient method for use in the medical field by improving previous work. A clear image is recovered fully and correctly only in the absence of noise in the blurred image and of error in the blur kernel estimate, so the proposed algorithm tries to approach this ideal. As mentioned, blurry images are noisy, so we have the following equation for the blurry image:

$$b = i \otimes k + n \qquad (1)$$

where $b$ is the blurry image, $i$ is the clear image, $k$ is the blur kernel, $n$ is noise, and $\otimes$ denotes the convolution operator. Taking the Fourier transform, Eq. (1) becomes:

$$B = I \cdot K + N \qquad (2)$$

Figure 1 illustrates this equation on a carotid MRI image. Projection-based and maximum-likelihood methods are the two major types of blind deconvolution. The projection-based approach retrieves the blur function and the real image simultaneously; the method is iterated until it meets a predefined criterion, the first step being the estimation of the blur function. One of the benefits of this method is that it is not sensitive to noise. The second approach computes the maximum-likelihood estimate of the blur parameters, such as the covariance matrix. Since the estimated blur function is not unique, it is possible to constrain the candidate functions by considering the size and symmetry of the estimated function. A significant advantage of this method is its low computational complexity; it also helps to estimate the blur, noise, and real-image power spectra [3]. Blur kernel estimation is an ill-posed problem, so various types of regularization terms have been used in the models. Fergus et al. [2] used a heavy-tailed distribution, estimating the kernel with a mixture of Gaussians and Bayes' theorem. Shan et al. [4] developed a parametric model to estimate a heavy-tailed distribution of natural image gradients. Levin et al. [5] used hyper-Laplacian regularization terms for the image gradient approximation. Cho and Lee [1] used a coarse-to-fine method to determine the blur kernel, iterating with a bilateral filter and using Gaussian regularization terms. Notably, our method is an adaptation of this method.
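The relation between Eqs. (1) and (2) is the convolution theorem and can be checked numerically. The following Python/NumPy sketch is illustrative only (the paper's experiments used MATLAB); it assumes a hypothetical 3×3 box kernel and circular boundary conditions:

```python
import numpy as np

rng = np.random.default_rng(0)
i = rng.random((64, 64))                    # "clear" image i
n = 0.01 * rng.standard_normal(i.shape)     # additive noise n

# A 3x3 box kernel k, stored on the periodic grid for circular convolution
k = np.zeros_like(i)
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        k[dy % 64, dx % 64] = 1.0 / 9.0

# Eq. (1), spatial domain: b = i (conv) k + n, via explicit circular shifts
b = sum(np.roll(i, (dy, dx), axis=(0, 1)) / 9.0
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)) + n

# Eq. (2), frequency domain: B = I * K + N
B = np.fft.fft2(i) * np.fft.fft2(k) + np.fft.fft2(n)

# Both routes give the same blurred image (convolution theorem)
print(np.allclose(b, np.real(np.fft.ifft2(B))))   # prints: True
```

This also shows why deconvolution is ill-posed: where $K$ is close to zero, dividing $B$ by $K$ amplifies the noise term $N$.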

Fig. 1 Eq. (1) in practice, on a carotid MRI image


According to previous studies of blur kernel estimation, the existence of appropriate edges makes the estimation more accurate. Combined methods, such as shock filters with bilateral filters, have been used by Money and Kang [6] and by Alvarez and Mazorra [7]. Xu et al. [8] used the zero norm in the kernel estimation equations, which handles noise well and prevents errors that appear around the edges. The rest of the paper is organized as follows. In Sect. 2, we describe the structure of our algorithm and the methods we used. Numerical aspects and results are briefly sketched in Sect. 3. The last section summarizes and concludes the paper.

2 Materials and Methods

The primary purpose of the iterative alternating optimization is to refine the motion blur kernel progressively. The final deblurring result is obtained by the last non-blind deconvolution operation, performed with the final kernel K and the given blurred image B. The intermediate latent images estimated during the iterations have no direct influence on the deblurring result; they only affect it indirectly by contributing to the refinement of the kernel K. The success of previous iterative methods comes from two essential properties: sharp edge restoration and noise suppression in smooth regions. These attributes help to estimate the kernel accurately [1]. Coarse-to-fine methods have also been developed for medical images; Chen et al. [9] developed a new coarse-to-fine framework for 3D brain MR image registration. We used another method, based on spatial priors.

2.1 Noise Reduction

In the first phase of blur function estimation, we try to denoise the blurry image. The method used in this study is based on the Perona-Malik method [10], which relies on the use of partial derivatives in image analysis. The values of the conduction coefficient and the diffusion rate play an important role in noise reduction. The weakness of conventional methods is the manual selection of these constants. In our method, the image gradient is calculated over the four major neighbors of each pixel, then the differences between the gradients are calculated in the horizontal and vertical directions. By calculating the average gradient value and its variance, we obtain an appropriate criterion for the magnitude of the image gradient changes, which has a linear relationship with the diffusion rate. Choosing the right values is critical for maintaining the edges of the image: larger values over-smooth the image, while at low values noise reduction will not be possible.


Equation (3) specifies the output image of this method at the (t+1)-th iteration:

$$I_{i,j}^{t+1} = I_{i,j}^{t} + \lambda \left[ c_N \cdot \nabla_N I + c_S \cdot \nabla_S I + c_E \cdot \nabla_E I + c_W \cdot \nabla_W I \right]_{i,j}^{t} \qquad (3)$$

where $0 \le \lambda \le 0.25$ for the numerical scheme to be stable; N, S, E, and W are the subscripts for the North, South, East, and West neighbors; and the symbol $\nabla$ indicates nearest-neighbor differences:

$$\nabla_N I_{i,j} \equiv I_{i-1,j} - I_{i,j}, \quad \nabla_S I_{i,j} \equiv I_{i+1,j} - I_{i,j}, \quad \nabla_E I_{i,j} \equiv I_{i,j+1} - I_{i,j}, \quad \nabla_W I_{i,j} \equiv I_{i,j-1} - I_{i,j} \qquad (4)$$

The conduction coefficients are updated at every iteration as a function of the brightness gradient:

$$c_{N_{i,j}}^{t} = g\!\left(\left\| (\nabla I)_{i+1/2,\,j}^{t} \right\|\right), \quad c_{S_{i,j}}^{t} = g\!\left(\left\| (\nabla I)_{i-1/2,\,j}^{t} \right\|\right), \quad c_{E_{i,j}}^{t} = g\!\left(\left\| (\nabla I)_{i,\,j+1/2}^{t} \right\|\right), \quad c_{W_{i,j}}^{t} = g\!\left(\left\| (\nabla I)_{i,\,j-1/2}^{t} \right\|\right) \qquad (5)$$

Figure 2 illustrates a pixel's four major neighbors. As $g(\cdot)$, we used the edge-stopping function of Black et al. [11]:

$$g(\nabla I) = f(x) = \begin{cases} 0.67 \left[ 1 - \left( \dfrac{x}{k\sqrt{5}} \right)^{2} \right]^{2}, & x \le k\sqrt{5} \\ 0, & \text{otherwise} \end{cases} \qquad (6)$$

where $k$ is the diffusion rate, which controls the sensitivity to edges.

$$\nabla_{NS} I = \nabla_N I - \nabla_S I, \qquad \nabla_{EW} I = \nabla_E I - \nabla_W I \qquad (7)$$

Fig. 2 Discrete computational structure for simulation of the diffusion equation of Perona and Malik [10]


We calculated the gradient in the vertical and horizontal directions by (7); then the average gradient value is calculated as follows:

$$\nabla I = \sqrt{ (\nabla_{NS} I)^{2} + (\nabla_{EW} I)^{2} } \qquad (8)$$

According to the results of Hasanpor et al. [12], $k$ has a linear relationship with the variance of the gradients, so we have:

$$k = a \cdot \mathrm{Var}(\nabla I) \qquad (9)$$

With respect to the noise properties, we can suggest an optimal value of $a$, so that $k$ can be calculated more precisely and easily. After applying the modified Perona-Malik filter, we obtain an image with less noise, without removing image structures such as edges, which are essential for blur kernel estimation.
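A minimal Python/NumPy sketch of the modified diffusion scheme of Eqs. (3)–(9); the function names, the periodic boundary handling, and the default value of a are our own illustrative choices, and g is applied to the neighbor differences as in the standard Perona-Malik discretization:

```python
import numpy as np

def g_edge_stop(x, k):
    """Edge-stopping function of Black et al. [11], Eq. (6)."""
    scale = k * np.sqrt(5.0)
    return np.where(np.abs(x) <= scale,
                    0.67 * (1.0 - (x / scale) ** 2) ** 2,
                    0.0)

def perona_malik(img, iters=5, lam=0.25, a=1.0):
    """Modified anisotropic diffusion, Eqs. (3)-(5), with the diffusion
    rate k chosen each iteration from the gradient variance, Eq. (9)."""
    I = img.astype(float).copy()
    for _ in range(iters):
        dN = np.roll(I, 1, axis=0) - I    # I[i-1, j] - I[i, j], Eq. (4)
        dS = np.roll(I, -1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I, 1, axis=1) - I
        # Eqs. (7)-(8): gradient magnitude from the directional differences
        grad = np.sqrt((dN - dS) ** 2 + (dE - dW) ** 2)
        k = a * np.var(grad) + 1e-8       # Eq. (9); a is a tuning constant
        I = I + lam * (g_edge_stop(dN, k) * dN + g_edge_stop(dS, k) * dS
                       + g_edge_stop(dE, k) * dE + g_edge_stop(dW, k) * dW)
    return I
```

Because the pairwise conduction weights are symmetric and $\lambda \le 0.25$, each iteration is a stable averaging step that preserves the mean intensity.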

2.2 Shock Filter

The shock filter was introduced by Osher and Rudin [13] to restore salient edges. One of its disadvantages is that it enhances residual noise. Money and Kang [6] used a shock filter to find sharp edges and estimated a blur kernel from them. Weickert [14] introduced an enhanced version called coherence-enhancing shock filters, which we used in our research. The basic idea of the shock filter is to transfer gray values to the edge from both sides by applying the image's morphological operations so as to satisfy the conditions of the differential equation. The two main operations in image morphology are dilation and erosion. The shock filter uses the sign function, which takes values in {−1, 0, +1}, to select between the two states (dilation and erosion). Applying such a method creates a severe discontinuity, called a shock, at the boundary between the two zones of influence. We use a Gaussian filter to smooth the image and solve the shock filter equation:

$$\frac{\partial I_s}{\partial t} = -\,\mathrm{sgn}(\Delta I_s)\, \left\| \nabla I_s \right\| \qquad (10)$$

where $\Delta I_s$ and $\nabla I_s$ are the Laplacian and gradient of $I_s$, respectively. $I_s$ is the filtered image that results from:

$$I_s = G_\sigma \otimes I_p \qquad (11)$$

where $I_p$ is the image after the de-noising step and $G_\sigma$ is a Gaussian filter with standard deviation $\sigma$. $\sigma$ determines the size of the resulting patterns. Often $\sigma$ is


chosen in the range between 0.5 and 2 pixel units. It is the main parameter of the method and has a strong impact on the result. If the right edges are not selected, the estimated blur kernel will be less accurate. Several modifications have been proposed to improve the performance of shock filters; for instance, replacing $\nabla I_s$ with other expressions can give a better edge detector. The shock filter and the Perona-Malik method are both iterative processes, so we need to define the number of iterations. Furthermore, it has been shown that a larger number of salient edges does not always lead to more accurate estimates. The impact of the iteration number is shown in Fig. 3.
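A simplified Python/NumPy sketch of the evolution in Eqs. (10)–(11); this is our own illustrative implementation in which central differences and periodic boundaries stand in for a proper upwind scheme, and the full coherence-enhancing variant of Weickert [14] is not reproduced:

```python
import numpy as np

def gaussian_blur(I, sigma):
    """Gaussian smoothing (Eq. (11)) via the FFT, periodic boundaries."""
    fy = np.fft.fftfreq(I.shape[0])[:, None]
    fx = np.fft.fftfreq(I.shape[1])[None, :]
    G = np.exp(-2.0 * (np.pi ** 2) * (sigma ** 2) * (fy ** 2 + fx ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(I) * G))

def shock_filter(img, iters=10, dt=0.1, sigma=1.0):
    """Explicit iteration of Eq. (10): erode/dilate depending on
    the sign of the Laplacian of the smoothed image."""
    I = img.astype(float).copy()
    for _ in range(iters):
        Is = gaussian_blur(I, sigma)
        lap = (np.roll(Is, 1, 0) + np.roll(Is, -1, 0)
               + np.roll(Is, 1, 1) + np.roll(Is, -1, 1) - 4.0 * Is)
        gx = (np.roll(I, -1, 1) - np.roll(I, 1, 1)) / 2.0  # central differences
        gy = (np.roll(I, -1, 0) - np.roll(I, 1, 0)) / 2.0
        I = I - dt * np.sign(lap) * np.sqrt(gx ** 2 + gy ** 2)
    return I
```

On a blurred step edge, each iteration moves the two sides of the edge apart, steepening the transition.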

2.3 Edge Selection

To select useful edges, Xu and Jia [15] assumed an $h \times h$ window centered at pixel $x$ and moved over all parts of the blurred image; a criterion for choosing the correct gradients is obtained as:

$$r(x) = \frac{ \left\| \sum_{y \in N_h(x)} \nabla B(y) \right\| }{ \sum_{y \in N_h(x)} \left\| \nabla B(y) \right\| + 0.5 } \qquad (12)$$

where $B$ is the blurred image and $N_h(x)$ is the mentioned window. The numerator is the norm of the sum of the gradients over the window centered at $x$, giving an estimate of the local image structure. Flat areas of the image, where the pixel differences are negligible, and also areas of high sharpness (such as impulses) have small $r(x)$ values, because their gradients are neutralized by the other gradient terms in the sum. Note that the above equation is computed for the $x$ and $y$ coordinates (derived in two directions). The constant 0.5 is used for the grayscale range [0, 1]; for a system with the range [0, 255], 20 can be selected instead of 0.5. The absolute value is:

Fig. 3 The output of the shock filter. (a) Input image (b) shock filter, iteration = 5 (c) iteration = 50 (d) iteration = 150 (e) iteration = 250


$$r(x) = \sqrt{ (r_x)^{2} + (r_y)^{2} } \qquad (13)$$

Figure 4 shows the calculated $r(x)$ for the depicted image. The phase is:

$$\theta = \arctan\!\left( \frac{r_x}{r_y} \right) \qquad (14)$$

where $\theta \in \left[ -\frac{\pi}{2}, \frac{\pi}{2} \right]$. The $r$ values were then sorted into four groups in descending order: $\left[-\frac{\pi}{2}, -\frac{\pi}{4}\right)$, $\left[-\frac{\pi}{4}, 0\right)$, $\left[0, \frac{\pi}{4}\right)$, $\left[\frac{\pi}{4}, \frac{\pi}{2}\right]$. A threshold value was then defined to ensure a minimum number of pixels is selected in each group:

$$\tau_r = 0.5 \sqrt{ P_I P_K } \qquad (15)$$

where $P_I$ is the total number of pixels in the input image and $P_K$ is the total number of pixels in the kernel. Using the Heaviside function $H(\cdot)$, the threshold is applied as:

$$M = H(r - \tau_r) \qquad (16)$$

Another threshold was defined, which works with the gradient magnitude. The selected edges are determined as:

$$\tau_s = 2 \sqrt{ P_K } \qquad (17)$$

$$\nabla I_s = \nabla I_{sh} \cdot H\!\left( \left\| M \nabla I_{sh} \right\| - \tau_s \right) \qquad (18)$$

Fig. 4 (a) Input image (b) Calculated r(x)


where $I_{sh}$ is the shock-filtered image and $\tau_s$ is the mentioned threshold, which guarantees that at least $2\sqrt{P_K}$ pixels in each group participate in the kernel estimation. It also excludes segments depending on $M \nabla I_{sh}$. Having calculated the required edges, the next step is blur kernel estimation. Our target is $k$, the kernel. Since this problem is ill-posed, we need regularization terms to solve it correctly. Following Xu and Jia [15], our problem is modeled as:

$$E(k) = \left\| \nabla I_s \otimes k - \nabla B \right\|^{2} + \gamma \left\| k \right\|^{2} \qquad (19)$$
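The edge-selection procedure above (Eqs. (12)–(18)) can be sketched as follows in Python/NumPy. This is illustrative code of our own: the thresholds are realized by keeping the top-n pixels, following the text's reading that τ_r and τ_s guarantee minimum pixel counts, and the four-direction grouping of Eq. (14) is omitted for brevity:

```python
import numpy as np

def window_sum(A, h):
    """Sum of A over an h x h window around each pixel (periodic borders)."""
    r = h // 2
    return sum(np.roll(A, (dy, dx), axis=(0, 1))
               for dy in range(-r, r + 1) for dx in range(-r, r + 1))

def usefulness_map(B, h=5):
    """r(x) of Eq. (12): |sum of gradients| / (sum of |gradients| + 0.5)."""
    gx = np.roll(B, -1, axis=1) - B          # simple forward differences
    gy = np.roll(B, -1, axis=0) - B
    num = np.sqrt(window_sum(gx, h) ** 2 + window_sum(gy, h) ** 2)
    den = window_sum(np.sqrt(gx ** 2 + gy ** 2), h) + 0.5
    return num / den

def select_edges(B, I_sh, kernel_size=5, h=5):
    """Eqs. (15)-(18): mask by r(x), then keep the strongest gradients."""
    P_I, P_K = B.size, kernel_size ** 2
    r = usefulness_map(B, h)
    n_r = max(1, int(0.5 * np.sqrt(P_I * P_K)))      # Eq. (15) pixel count
    M = (r >= np.sort(r.ravel())[-n_r]).astype(float)  # Eq. (16)
    gx = np.roll(I_sh, -1, axis=1) - I_sh
    gy = np.roll(I_sh, -1, axis=0) - I_sh
    mag = M * np.sqrt(gx ** 2 + gy ** 2)
    n_s = max(1, int(2 * np.sqrt(P_K)))              # Eq. (17) pixel count
    keep = (mag >= np.sort(mag.ravel())[-n_s]).astype(float)  # Eq. (18)
    return keep * gx, keep * gy
```

The returned gradient maps play the role of $\nabla I_s$ in the kernel-estimation energy below.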

To solve this problem, we separate the dimensions and express the convolution in matrix form; this can be done by flipping both the rows and columns of the image and then multiplying locally similar entries and summing:

$$E(k) = \left\| A_x k - \nabla_x B \right\|^{2} + \left\| A_y k - \nabla_y B \right\|^{2} + \gamma \left\| k \right\|^{2} \qquad (20)$$

Applying the first-order derivative:

$$\frac{\partial E(k)}{\partial k} = 2 A_x^{T} (A_x k - \nabla_x B) + 2 A_y^{T} (A_y k - \nabla_y B) + 2 \gamma k \qquad (21)$$

We assume that the Eq. (21) equals zero, then we apply Fast Fourier Transform (FFT) on all variables. 2ðATx Ax þ ATy Ay þ cÞk ¼ ATx rx B þ ATy ry B

ð22Þ

Using Parseval's theorem:

$$k = F^{-1}\!\left( \frac{ \overline{F(\partial_x I_s)}\, F(\partial_x B) + \overline{F(\partial_y I_s)}\, F(\partial_y B) }{ \left| F(\partial_x I_s) \right|^{2} + \left| F(\partial_y I_s) \right|^{2} + \gamma } \right) \qquad (23)$$

where $F(\cdot)$ and $F^{-1}(\cdot)$ denote the FFT and inverse FFT, respectively, and $\overline{F(\cdot)}$ is the complex conjugate. The blur kernel is thus restored with Eq. (23). To restore the image, we model the ill-posed problem again, but this time with a spatial prior:

$$E(I) = \left\| I \otimes k - B \right\|^{2} + \lambda \left\| \nabla I - \nabla I_s \right\|^{2} \qquad (24)$$

where $\nabla I - \nabla I_s$ is the new prior, which restores the sharp selected edges properly. Using the same approach as before results in:


Fig. 5 (a) Blurred input (b) γ = 15, λ = 0.005 (c) γ = 15, λ = 0.05 (d) γ = 15, λ = 0.5 (e) γ = 15, λ = 5 (f) γ = 5, λ = 0.005 (g) γ = 10, λ = 0.005 (h) γ = 20, λ = 0.005 (i) γ = 30, λ = 0.005

$$I = F^{-1}\!\left( \frac{ \overline{F(k)}\, F(B) + \lambda \left( \overline{F(\partial_x)}\, F(I_{sx}) + \overline{F(\partial_y)}\, F(I_{sy}) \right) }{ \overline{F(k)}\, F(k) + \lambda \left( \overline{F(\partial_x)}\, F(\partial_x) + \overline{F(\partial_y)}\, F(\partial_y) \right) } \right) \qquad (25)$$

where $I$ is the latent image; we then need a non-blind deconvolution technique to restore the detailed image. Various methods for reaching the final image have been developed; we used the method of Cho and Lee [1]. The effect of the $\lambda$ and $\gamma$ values is illustrated in Fig. 5.
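Equations (23) and (25) are both closed-form divisions in the frequency domain. A Python/NumPy sketch under simplifying assumptions (periodic boundaries, simple difference kernels for ∂x and ∂y; all function names are ours, not the paper's code):

```python
import numpy as np

F, Finv = np.fft.fft2, np.fft.ifft2

def diff(A):
    """Horizontal and vertical differences matching the dx, dy kernels below."""
    return np.roll(A, 1, axis=1) - A, np.roll(A, 1, axis=0) - A

def estimate_kernel(Isx, Isy, B, gamma=1e-3):
    """Eq. (23): kernel from selected edge gradients and the blurred image."""
    Bx, By = diff(B)
    num = np.conj(F(Isx)) * F(Bx) + np.conj(F(Isy)) * F(By)
    den = np.abs(F(Isx)) ** 2 + np.abs(F(Isy)) ** 2 + gamma
    return np.real(Finv(num / den))

def restore_image(B, k, Isx, Isy, lam=0.05):
    """Eq. (25): latent image with the spatial prior grad(I) ~ grad(I_s)."""
    dx = np.zeros_like(B); dx[0, 0] = -1.0; dx[0, 1] = 1.0  # difference kernels
    dy = np.zeros_like(B); dy[0, 0] = -1.0; dy[1, 0] = 1.0
    num = np.conj(F(k)) * F(B) + lam * (np.conj(F(dx)) * F(Isx)
                                        + np.conj(F(dy)) * F(Isy))
    den = (np.abs(F(k)) ** 2
           + lam * (np.abs(F(dx)) ** 2 + np.abs(F(dy)) ** 2) + 1e-8)
    return np.real(Finv(num / den))
```

As a sanity check, with a delta (identity) kernel and Isx, Isy taken as the true image differences, restore_image reproduces the input almost exactly.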

3 Discussion and Results

The parameters in the calculations play an important role in predicting the blur kernel. For example, if the threshold values selected for the function r(x) and for the final edges are either too large or too small, the image will be over-smoothed and important edges will not be selected for kernel estimation. In this paper, we attempted to improve performance by selecting these values automatically. The effect of these values on the kernel is depicted in Fig. 6. We also tried our algorithm on images containing text, such as Fig. 7.


Fig. 6 (a) Output image and estimated kernel with γ = 15 (b) Output image and estimated kernel with γ = 5 (c) Output image and estimated kernel with γ = 1

Fig. 7 Deblurring an image with text. (a) blurred input (b) Perona-Malik output (c) deconvolution output

Our algorithm was implemented in MATLAB R2016a on a 6th-generation AMD A10 CPU at 1.8 GHz; the image restoration durations are given in Table 1.


Table 1 Calculation speed

Image                    Restoration duration (s)   Perona-Malik iterations   Shock filter iterations   Coarse-to-fine iterations
Vessels (Fig. 1)         22.5                       5                         8                         7
Foot (Fig. 3)            31.8                       5                         8                         7
Arm (Fig. 5)             42                         5                         8                         7
Brain (Fig. 6)           27.4                       5                         8                         7
Faculty façade (Fig. 7)  38                         5                         8                         7

4 Summary

Image processing has improved dramatically in recent decades, and the rate of development has increased with the advent of more advanced machine vision technologies in daily life. Medical imaging, as one of the pillars of the modern medical diagnosis system, is not devoid of this technology. Different imaging methods have different sensitivities to noise, camera movement, beam source, and other factors, and blur damages the resulting images. For example, a slight movement in an MRI or X-ray machine results in blurry images. Figure 1, for instance, is used to detect blockage of the vein, which can result in relative blindness; such images must therefore be accurate so that the physician can diagnose the disease with less error. The proposed method, in contrast to conventional methods, can compute the blur kernel and helps reduce the cost of re-imaging by restoring the original image. Proper edges and reduced initial noise in blurry images lead to an accurate estimation of the blur kernel. According to the results, using nonlinear noise reduction methods increases accuracy. The Perona-Malik method has basic parameters that are normally selected by the user; choosing these parameters automatically reduces error and leads to optimal results. The next factor in the accuracy of the blur kernel, after noise reduction, is selecting the appropriate edges as input to the estimator. Shock filters, introduced by Osher and Rudin [13], perform better than other methods such as Canny. Our iterative algorithm corrects itself at every step and yields a clearer output. Local deblurring is one of the accurate approaches that leads to clear images. In addition, a fast algorithm for shift-variant blur models is needed in future work.

Acknowledgements The authors appreciate those who contributed to make this research successful.
This research is supported by Center for Research and Innovation (PPPI) and Faculty of Engineering, Universiti Malaysia Sabah (UMS) under the Research Grant (SBK0393-2018).


References

1. Cho S, Lee S (2009) Fast motion deblurring. ACM Trans Graph (TOG) 28(5):145
2. Fergus R, Singh B, Hertzmann A, Roweis ST, Freeman WT (2006) Removing camera shake from a single photograph. ACM Trans Graph (TOG) 25(3):787–794
3. Yadav S, Jain C, Chugh A (2016) Evaluation of image deblurring techniques. Int J Comput Appl 139(12):32–36
4. Shan Q, Jia J, Agarwala A (2008) High-quality motion deblurring from a single image. ACM Trans Graph (TOG) 27(3)
5. Levin A, Weiss Y, Durand F, Freeman WT (2009) Understanding and evaluating blind deconvolution algorithms. In: IEEE conference on computer vision and pattern recognition, pp 1964–1971
6. Money J, Kang S (2008) Total variation minimizing blind deconvolution with shock filter reference. Image Vis Comput 26(2):302–314
7. Alvarez L, Mazorra L (1994) Signal and image restoration using shock filters and anisotropic diffusion. SIAM J Numer Anal 31(2):590–605
8. Xu L, Zheng S, Jia J (2013) Unnatural L0 sparse representation for natural image deblurring. In: Computer vision and pattern recognition, pp 1107–1114
9. Chen T, Huang TS, Yin W, Zhou XS (2005) A new coarse-to-fine framework for 3D brain MR image registration. In: International workshop on computer vision for biomedical image applications. Springer, Heidelberg, pp 114–124
10. Perona P, Malik J (1990) Scale-space and edge detection using anisotropic diffusion. IEEE Trans Pattern Anal Mach Intell 12(7):629–639
11. Black MJ, Sapiro G, Marimont DH, Heeger D (1998) Robust anisotropic diffusion. IEEE Trans Image Process 7(3):421–432
12. Hasanpor H, Nikpour M (2008) Using adaptive diffusion coefficient to eliminate image noise using partial equations. Iranian J Electr Comput Eng 6(4)
13. Osher S, Rudin LI (1990) Feature-oriented image enhancement using shock filters. SIAM J Numer Anal 27(4):919–940
14. Weickert J (2003) Coherence-enhancing shock filters. In: Joint pattern recognition symposium. Springer, Berlin, pp 1–8
15. Xu L, Jia J (2010) Two-phase kernel estimation for robust motion deblurring. In: European conference on computer vision. Springer, Berlin, pp 157–170

A Fast and Efficient Segmentation of Soil-Transmitted Helminths Through Various Color Models and k-Means Clustering

Norhanis Ayunie Ahmad Khairudin, Aimi Salihah Abdul Nasir, Lim Chee Chin, Haryati Jaafar, and Zeehaida Mohamed

Abstract Soil-transmitted helminths (STH) are one of the causes of health problems in children and adults. Given the large number of helminthiasis cases that have been diagnosed, a productive system is required for the identification and classification of STH to ensure that people's health is safeguarded. This paper presents a fast and efficient method to segment two types of STH, Ascaris Lumbricoides ova (ALO) and Trichuris Trichiura ova (TTO), based on the analysis of various color models. Firstly, the ALO and TTO images are enhanced using the modified global contrast stretching (MGCS) technique, followed by the extraction of color components from various color models. In this study, segmentation based on various color models such as RGB, HSV, L*a*b* and NTSC has been used to identify, simplify and extract the particular color needed. Then, k-means clustering is used to segment the color component images into three cluster regions: the target (helminth eggs), unwanted and background regions. Additional processing steps are then applied to the segmented images to remove the unwanted region and to restore the information of the images. The proposed techniques have been evaluated on 100 images of ALO and TTO. The results show that the saturation component of the HSV color model is the most suitable color component to use with the k-means clustering technique on ALO and TTO images, achieving a segmentation performance of 99.06% accuracy, 99.31% specificity and 95.06% sensitivity.

Keywords Soil-transmitted helminths · Modified global contrast stretching · Color models · k-Means clustering

N. A. A. Khairudin (corresponding author), A. S. A. Nasir, H. Jaafar: Faculty of Engineering Technology, Universiti Malaysia Perlis, UniCITI Alam Campus, Sungai Chuchuh, 02100 Padang Besar, Perlis, Malaysia, e-mail: [email protected]
L. C. Chin: School of Mechatronic Engineering, Universiti Malaysia Perlis, Pauh Putra Campus, 02600 Arau, Perlis, Malaysia
Z. Mohamed: Department of Microbiology and Parasitology, School of Medical Sciences, Health Campus, Universiti Sains Malaysia, 16150 Kubang Kerian, Kelantan, Malaysia
© Springer Nature Singapore Pte Ltd. 2021. Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_39


1 Introduction

Soil-transmitted helminths (STH) are a group of intestinal parasitic worms that infect humans through contact with larvae or ingestion of infective eggs. Infections are common in underprivileged communities where overcrowding, poor environmental sanitation and lack of access to clean and safe water are prevalent [1, 2]. The STH eggs most commonly found in the human body are Ascaris Lumbricoides ova (ALO) and Trichuris Trichiura ova (TTO). STH inhabit the intestine, liver, lungs and blood vessels of their hosts, while the adult worms inhabit the intestine to mate and release their eggs in feces [3], to be diffused into soils. The eggs are microscopic, and their sizes vary between species [4]. Helminth eggs can remain viable for 1 to 2 months in crops and for many months in soil, freshwater and sewage [5]; they can remain viable for several years in feces, night soil, sludge and wastewater. STH eggs can be transmitted to the human body through direct contact with polluted sludge or fecal material, through exposure to contaminated food and water, and from an animal's body or fur [6]. These parasites can multiply in the human body, which can lead to serious illnesses such as filariasis and cysts. They may also increase susceptibility to other illnesses such as tuberculosis, malaria and HIV infection. For children, STH infection may cause malnutrition, education deficits and intellectual retardation [7, 8]. Studies have shown that such infections have a strong effect on school performance and attendance and on future economic productivity [9]. In 2016, around 2.5 billion people around the world were affected by helminthiasis, and over 530 million children, representing 63% of the world's total, were treated [10]. Given the high number of helminthiasis cases, the identification and classification of the types of helminth eggs is of paramount importance in the healthcare industry.
Early diagnosis is fundamental for patient recovery, especially in children's cases. Helminth eggs can be diagnosed from patients' stool, blood and tissue samples. Parasitologists need to examine these samples in fresh condition within a limited time. Problems occur because the procedures take a great amount of time and the observer must maintain good concentration while observing the samples [11]; the results obtained are often neither accurate nor reliable. These limitations have initiated improvements in digital image processing for helminth egg recognition using image processing and computer algorithms. Hadi et al. [12] used the median filter twice to reduce the artifacts and noise in the image, while edge enhancement based on sharpness and edge detection with a Canny filter was used to detect the edges of hard, sharp objects. The Threshold with Logical Classification Method (TLCM) was proposed for the automatic identification process, using the shape, shell smoothness and size of the eggs as features in the feature extraction process. The classification accuracy obtained for the ALO species is 93%, while that for the TTO species is 94%.


Then, Suzuki et al. [13] identified 15 types of human intestinal parasites through a system that automatically segments and classifies human intestinal parasites from microscopy images. The proposed system explores the image foresting transform and ellipse matching for segmentation, and an optimum-path forest classifier for object recognition. This system obtained 90.38% sensitivity, 98.19% efficiency and 98.32% specificity. Kamarul et al. [14] proposed a new classification using the Filtration with Steady Determinations Thresholds System (F-SDTS) classifier. This classifier is applied in the feature extraction stage, using ranges of feature values as a database to identify and classify the type of parasite. The overall success rate of this classification system is 94%. Jimenez et al. [11] proposed a system that identifies and quantifies seven species of helminth eggs in wastewater. Gray-scale profile segmentation is used to identify the shape and thus to differentiate the genera and species of the helminth eggs. The system shows a specificity of 99% and a sensitivity of 80% to 90%. The systems proposed by previous researchers have improved the identification and classification of human intestinal parasites. However, improvements can still be made in the segmentation part in order to achieve efficient results. One such improvement is to manipulate the color conversion of an image to differentiate the features of helminths from the artifacts. This suggestion is based on the outcomes obtained when color conversion is applied to images in other medical studies, such as cancer, cyst, leukemia and malaria [15–20]. Ghimire and Lee [15] used the HSV color model by keeping the H and S components unvaried and using only the V component of the HSV color image, to prevent a change of color balance among the HSV components; the enhanced image is not altered because H and S are not changed.
The proposed method obtained a better image compared to other methods such as histogram equalization and the integrated neighborhood dependent approach for nonlinear enhancement (AINDANE). Kulkarni et al. [16] applied color conversion after the pre-processing step in order to recognize Acute Lymphoblastic Leukemia (ALL) images. The RGB color space is converted into the HSV color space to reduce the color dimension from three to two. The saturation (S) plane is selected, as it shows better contrast compared to the hue (H) and value (V) components. Otsu's thresholding method is used for the segmentation part and is able to segment the ALL image into two parts: nucleus and cytoplasm. Poostchi et al. [17] listed RGB, HSV, YCbCr, LAB and intensity as color features when analyzing feature computation for classifying malaria parasites in both thin and thick blood smears. Color features are the most natural choice for stained parasites, for acquiring information and describing the morphological features of red blood cells. An analysis of the usability of color models in image processing was presented by Sharma and Nayyer [18]. Color components provide a rational way to specify, order, manipulate and effectively display the colors of the object under consideration. Thus, the selected color model should be appropriate to the problem statement and solution. The process of selecting the best color


representation involves knowing how color signals are generated and what information is needed from these signals. Color models are widely used to facilitate the specification of the color in some standard generally accepted way. Aris et al. [19] have analyzed color components in color spaces to improve the counting performance of malaria parasites based on thick blood smear images. Y, Cb, R, G, C, M, S and L components have been extracted from YCbCr, RGB, CMY, HSV and HSL color models in order to identify which color component shows the most accurate counting for malaria parasites. Based on results obtained, Y component of YCbCr shows the best segmentation result with 98.48% of average counting accuracy for 100 images of malaria thick blood smear. A new color components’ exchanging method on different color spaces for image segmentation has been proposed by Dai and Li [20] in order to segment a hematocyte image. This method exchange the order of color components after the color component from the original image is extracted. The new image formed has been segmented using Otsu thresholding and region segmentation techniques. The proposed method can differentiate the target segmentation of hematocyte image which are nucleus and cytoplasm of hematocyte, erythrocytes and leukocyte from background image. However, this method is unfitting for sample images that have different staining methods and magnification. Based on the previous studies, it can be seen that color models plays a major role in improving the segmentation performance of image. Therefore, this study will discover the potential of various color components for segmentation process in order to improve the STH segmentation performance.

2 Methodology

Most researchers have focused on segmentation and classification techniques to achieve the most accurate results. However, the most crucial part lies in the pre-processing step, as it affects every subsequent processing step. In this paper, several color models are applied to the enhanced images in order to identify which color component is the most suitable for segmenting the ALO and TTO images. The methodological steps for segmenting these images are explained in this section.

2.1

Image Acquisition

The samples of STH are acquired from helminthiases patients through stool samples. The ALO and TTO samples are obtained from the Department of Microbiology and Parasitology, Hospital Universiti Sains Malaysia (HUSM). These stool samples are freshly prepared on slides and have been analyzed under

A Fast and Efficient Segmentation of Soil-Transmitted Helminths …

559

40X magnification using a Leica DLMA digital microscope. Normal saline is used as the stain to obtain a clear view of the eggs. In this study, 100 images for each species (ALO and TTO) have been captured and saved in .jpg format.

2.2

Image Enhancement Technique Using Modified Global Contrast Stretching (MGCS)

The samples obtained may have different luminance, which needs to be standardized. This problem is caused by the color of the stool sample or by the lighting from the microscope. In order to standardize the luminance, a contrast enhancement technique, namely modified global contrast stretching (MGCS), is used [21]. This technique standardizes the lighting in the image as well as improving the quality of the targeted image. One of the advantages of the MGCS technique is its ability to enhance the contrast of the image without affecting the color structure of the original image. Besides, this technique is able to preserve as much information as the original image. MGCS is derived from global contrast stretching (GCS); it overcomes the weakness of GCS by adjusting the minimum and maximum values of the R, G and B components, which are acquired through a calculation over the total number of pixels in the image. The original equation of GCS is shown in Eq. (1) [22].

out_RGB(x, y) = 255 × (in_RGB(x, y) − min_RGB) / (max_RGB − min_RGB)  (1)

Several parameters are required in order to obtain the new minimum and maximum values. These include the minimum percentage, minp; the maximum percentage, maxp; the number of pixels in each pixel level, Tpix; the total number of pixels that lie in a specified minimum percentage, Tmin; and the total number of pixels that lie in a specified maximum percentage, Tmax. The procedures to develop the MGCS technique are as follows [22]:
1. Select the preferred values for minp and maxp.
2. Initialize Tmin = 0 and Tmax = 0. Set the value of k = 0, where k is the current pixel level.
3. Estimate the histogram for the red component.
4. Find the number of pixels, Tpix[k], at k. If Tpix[k] ≥ 1, set Tmin = Tmin + Tpix[k].
5. Check the following condition:

(Tmin / total number of pixels in image) × 100 ≥ minp  (2)


6. If Tmin fulfills Eq. (2), set the new minimum value, Nmin, for the red component in the image to the k value that satisfies this condition; else set k = k + 1.
7. Repeat steps 4 to 6 for the next pixel levels until Nmin is obtained based on the k value that satisfies Eq. (2).
8. Set the value of k = 255.
9. Find Tpix[k] at k. If Tpix[k] ≥ 1, set Tmax = Tmax + Tpix[k].
10. Check the following condition:

(Tmax / total number of pixels in image) × 100 ≥ maxp  (3)

11. If Tmax satisfies Eq. (3), set the new maximum value, Nmax, for the red component in the image to the k value that satisfies this condition; else set k = k − 1.
12. Repeat steps 9 to 11 for the next pixel levels until Nmax is obtained based on the k value that satisfies Eq. (3).
13. Repeat steps 2 to 12 in order to calculate Nmin and Nmax for the green and blue components.
14. Nmin and Nmax are then used to replace the original min and max in the GCS formula in Eq. (1).
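Steps 1 to 14 can be condensed into a short routine. The sketch below is an illustrative NumPy implementation (the paper publishes no code), with minp and maxp expressed in percent:

```python
import numpy as np

def mgcs_channel(ch, minp=1.0, maxp=1.0):
    """Apply steps 1-12 to one 8-bit channel: find Nmin/Nmax from the
    minp/maxp percentages, then stretch with the GCS formula of Eq. (1)."""
    hist = np.bincount(ch.ravel(), minlength=256)
    total = ch.size
    tmin, nmin = 0, 0
    for k in range(256):                       # steps 2-7: scan upward from k = 0
        tmin += hist[k]
        if tmin * 100.0 / total >= minp:
            nmin = k
            break
    tmax, nmax = 0, 255
    for k in range(255, -1, -1):               # steps 8-12: scan downward from k = 255
        tmax += hist[k]
        if tmax * 100.0 / total >= maxp:
            nmax = k
            break
    stretched = 255.0 * (ch.astype(np.float64) - nmin) / max(nmax - nmin, 1)
    return np.clip(stretched, 0, 255).astype(np.uint8)

def mgcs(rgb):
    """Step 13: repeat the procedure independently for the R, G and B planes."""
    return np.dstack([mgcs_channel(rgb[..., c]) for c in range(3)])
```

Because Nmin and Nmax are taken from percentage thresholds rather than the absolute extremes, isolated outlier pixels no longer dictate the stretch, which is the weakness of plain GCS that the modification targets.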

2.3

Color Conversion of STH Image Using Various Color Models

Color conversion identifies the colors present in an image. A color model is generally built from a 3D coordinate system and a subspace in which each color is represented by a single point [22]. In image processing, a color model is used to identify, simplify, extract and edit the particular color needed. Various color models such as RGB (red, green, blue), HSV (hue, saturation, value) and L*a*b* are used in applications such as cell detection, lane

Fig. 1 Results of R, G and B components on STH image: (a) enhanced image, (b) R component, (c) G component, (d) B component


detection, face detection and many more. Sharma et al. [23] stated that a color space provides a rational way to effectively display the color of objects.
RGB Color Model. The RGB color model is based on the theory that all visible colors can be created using the primary colors red, green and blue [22]. These color models are commonly used to recognize, represent and display images in electronic systems such as televisions, computers and photography. Figure 1 shows the results of the RGB color model on an STH image. The R, G and B components are suitable to be used on STH images.
HSV Color Model. HSV is made up of hue, saturation and value characteristics. HSV is illustrated as a hex-cone and its coordinate system is cylindrical. H describes the hue or true color in the image, while S represents the amount of white color in the image [24]. The higher the amount of white, the lower the image saturation. Value shows the degree of brightness, which describes the luminance in the image. The top of the HSV hex-cone is a projection along the main diagonal of the RGB color cube [25]. Figure 2 shows the hex-cone shape of HSV. Hue is defined by the one or two largest parameters and ranges from 0° to 360°. S can be controlled by varying the collective minimum value of R, G and B, whereas V is controlled by varying the magnitudes while keeping a constant ratio [23, 25].

H = H1, if B ≤ G; H = 360° − H1, if B > G  (4)

S = (max(R, G, B) − min(R, G, B)) / max(R, G, B)  (5)

V = max(R, G, B) / 255  (6)


The advantage of HSV is its simple conceptual basis: each attribute directly corresponds to a basic color property. The disadvantage is that the saturation attribute corresponds to the mixture of a color with white (tinting), so desaturating a color increases the amount of intensity [26]. In this paper, the S and V components are applied on the STH images, as the H component is unsuitable to be

Fig. 2 Hex-cone shape of HSV color space
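As a concrete illustration of Eqs. (5) and (6), the two planes used in this study, a minimal NumPy sketch (not the authors' code) is:

```python
import numpy as np

def s_and_v(rgb):
    """Compute the S plane of Eq. (5) and the V plane of Eq. (6)
    from an 8-bit RGB image of shape (height, width, 3)."""
    rgb = rgb.astype(np.float64)
    cmax = rgb.max(axis=2)                      # max(R, G, B) per pixel
    cmin = rgb.min(axis=2)                      # collective minimum per pixel
    s = np.zeros_like(cmax)
    nz = cmax > 0                               # avoid division by zero on black pixels
    s[nz] = (cmax[nz] - cmin[nz]) / cmax[nz]    # Eq. (5)
    v = cmax / 255.0                            # Eq. (6)
    return s, v
```

Note that a gray pixel (R = G = B) yields S = 0, matching the statement that more white lowers the saturation.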


Fig. 3 Results of H, S and V components on STH image: (a) enhanced image, (b) H component, (c) S component, (d) V component

Fig. 4 Results of Y, I and Q components on STH image: (a) enhanced image, (b) Y component, (c) I component, (d) Q component

used on the STH image because the H component shows low contrast between the foreground and background, as can be seen in Fig. 3(b).
CIE 1976 L*a*b* Color Model. This color conversion is derived from CIE XYZ and is used to linearize the perceptibility of color differences. The Lab color space is designed to approximate human vision: its L component closely matches the human perception of lightness [27]. L* stands for luminosity, a* is the red-green axis and b* is the blue-yellow axis. CIE Lab is popular for measuring reflective and transmissive objects [25, 27].
NTSC Color Model. The National Television System Committee (NTSC) uses the YIQ color space, in which the Y component represents the luma information while I and Q represent the chrominance information for television receivers. Luminance can be obtained from a linear combination of the three primaries. Equation (7) shows the formula for the conversion from the RGB color space to the YIQ color space, while Eq. (8) shows the formula determined by colorimetry for display systems [28].

Y = 0.299R + 0.587G + 0.114B
I = 0.5959R − 0.2746G − 0.3213B
Q = 0.2115R − 0.5227G + 0.3112B  (7)

Y = 0.299R + 0.587G + 0.114B  (8)

In this study, only the Y and I components are applied on the enhanced STH images. This is because Y and I are able to differentiate the foreground and background in the image, whereas the foreground and background appear in the same color in the Q


component. Figure 4 shows the results obtained from the NTSC color model based on the Y, I and Q components.
Arithmetic Between Color Models. The components of the color models are combined through addition and subtraction arithmetic to help increase the possibility of the enhanced image being segmented accurately. Among the arithmetic formulas tested, two show a good improvement in differentiating the color components in the enhanced STH image. The first formula is the addition of the G component from the RGB color model with the Lab color model (GLab). The second is the subtraction of the G component of the RGB color model from the S component of the HSV color model (SG).
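The two arithmetic combinations combine whole planes pixel-wise. The following is a hedged sketch only: the paper does not state whether "Lab" here denotes the Lab image collapsed to one plane or the L* channel, nor how S is scaled before the subtraction, so the code assumes a single 0-255 Lab-derived plane and S in the range 0-1:

```python
import numpy as np

def glab_and_sg(g_plane, lab_plane, s_plane):
    """Arithmetic color components: GLab = G + Lab (Eq. 9) and SG = S - G (Eq. 10).
    g_plane, lab_plane: 0-255 arrays; s_plane: saturation scaled to 0-1."""
    g = g_plane.astype(np.float64)
    glab = np.clip(g + lab_plane.astype(np.float64), 0, 255)  # Eq. (9), clipped to 8-bit range
    sg = np.clip(s_plane * 255.0 - g, 0, 255)                 # Eq. (10), S rescaled to 0-255
    return glab.astype(np.uint8), sg.astype(np.uint8)
```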

GLab = G + Lab  (9)

SG = S − G  (10)

2.4

Image Segmentation of STH Image Using k-Means Clustering

The main purpose of segmenting the STH image is to divide the image into the region of interest and the background region. The segmentation process is important because it serves as a basis for all subsequent analyses. In this paper, k-means clustering is used in order to identify which color component gives the best STH segmentation result. The k-means algorithm assigns each data point to the cluster center at the shortest Euclidean distance. It is one of the most popular unsupervised clustering methods due to its simplicity [20], and is constructed on minimizing the objective function J in Eq. (11):

J = Σ_{i=1}^{n} Σ_{j=1}^{k} ||x_i − c_j||  (11)

where n is the number of data points, k is the number of clusters, x_i is the ith sample and c_j is the jth cluster center. In this paper, three clusters are used for the segmentation process in order to differentiate between the target, unwanted and background regions.
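A bare-bones Lloyd iteration over 1-D pixel values illustrates how Eq. (11) is minimised. The study presumably uses a library implementation; this sketch initialises the centres on an even spread of the value range for reproducibility:

```python
import numpy as np

def kmeans_1d(values, k=3, iters=20):
    """Plain Lloyd's algorithm minimising the objective J of Eq. (11)
    on 1-D pixel values; returns final labels and cluster centres."""
    values = np.asarray(values, dtype=np.float64)
    centres = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        # Assignment step: each sample joins the centre at the shortest Euclidean distance.
        labels = np.argmin(np.abs(values[:, None] - centres[None, :]), axis=1)
        # Update step: each centre moves to the mean of its assigned samples.
        for j in range(k):
            if np.any(labels == j):
                centres[j] = values[labels == j].mean()
    labels = np.argmin(np.abs(values[:, None] - centres[None, :]), axis=1)
    return labels, centres
```

With k = 3, the three clusters play the roles of the target, unwanted and background regions described above.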

2.5

Post-processing Steps After Segmentation Process

After the segmented images have been obtained from k-means clustering, the unwanted pixels and regions are removed by using an object-removal technique on the binary image. This technique removes regions smaller than 17,000 pixels or larger than 70,000 pixels in order to achieve an accurate diagnosis for STH. However, this step tends to remove pixels inside the target region as well. A fill-holes operation is therefore applied to overcome this side effect by filling areas of dark pixels that are surrounded by lighter pixels.
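The two post-processing operations map directly onto standard morphological tools. A sketch using SciPy (an illustrative choice; the paper does not name its implementation) with the 17,000/70,000-pixel bounds as parameters:

```python
import numpy as np
from scipy import ndimage

def postprocess(mask, min_area=17000, max_area=70000):
    """Drop connected regions smaller than min_area or larger than max_area
    pixels, then fill interior holes of the surviving regions."""
    labelled, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labelled, range(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = (sizes >= min_area) & (sizes <= max_area)
    cleaned = keep[labelled]
    # Fill holes: dark pixels fully surrounded by object pixels become object.
    return ndimage.binary_fill_holes(cleaned)
```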

2.6

Segmentation Performance

The segmentation performance aims to quantify the success of the segmentation. In this paper, segmentation performance is used to compare the segmentation results when the different color components are applied with the k-means clustering technique. Segmentation performance is divided into three measures: accuracy, specificity and sensitivity. These measurements are calculated by comparing the pixels of the resultant segmented image with those of the manually segmented image. Accuracy, specificity and sensitivity are defined in Eqs. (12), (13) and (14) respectively.

Accuracy = (TP + TN) / (TP + TN + FP + FN) × 100  (12)

Specificity = TN / (TN + FP) × 100  (13)

Sensitivity = TP / (TP + FN) × 100  (14)

Accuracy is the ratio of correctly classified pixels to the entire area of the STH image, while sensitivity is a true positive measure: it refers to the proportion of the helminth egg region that has been classified correctly. Specificity is the percentage of pixels that are correctly segmented as the negative region [29].
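Equations (12) to (14) reduce to confusion-matrix counts over the two binary masks; a direct NumPy transcription:

```python
import numpy as np

def segmentation_scores(pred, truth):
    """Accuracy, specificity and sensitivity of Eqs. (12)-(14), comparing a
    binary segmentation against a manually segmented ground-truth mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)     # object pixels found correctly
    tn = np.sum(~pred & ~truth)   # background pixels found correctly
    fp = np.sum(pred & ~truth)    # background wrongly marked as object
    fn = np.sum(~pred & truth)    # object pixels missed
    accuracy = (tp + tn) / (tp + tn + fp + fn) * 100      # Eq. (12)
    specificity = tn / (tn + fp) * 100                    # Eq. (13)
    sensitivity = tp / (tp + fn) * 100                    # Eq. (14)
    return accuracy, specificity, sensitivity
```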


3 Results and Discussion

In this study, the MGCS technique has been applied on 100 ALO images and 100 TTO images. Nine color components have then been applied on the enhanced images. The resulting color component images have been

Fig. 5 Original ALO and TTO images: (a) ALO_1, (b) ALO_2, (c) TTO_1, (d) TTO_2

Fig. 6 Results of R component and k-means clustering on enhanced ALO and TTO images: for each sample (ALO_1, ALO_2, TTO_1, TTO_2) the panels show the MGCS image, the R component, the k-means result and the post-processed (PPS) result


Fig. 7 Results of G component and k-means clustering on enhanced ALO and TTO images: for each sample (ALO_1, ALO_2, TTO_1, TTO_2) the panels show the MGCS image, the G component, the k-means result and the post-processed (PPS) result

used as the input for k-means clustering in order to pinpoint the most suitable color component for the segmentation part. Then, the segmented images have been evaluated qualitatively and quantitatively. Figure 5 shows samples of the original ALO and TTO images. The lighting differs between images: ALO_1 and TTO_2 are darker than ALO_2 and TTO_1. The artifacts also come in different colors and sizes in each image. These differences increase the difficulty of the segmentation process.


Fig. 8 Results of B component and k-means clustering on enhanced ALO and TTO images: for each sample (ALO_1, ALO_2, TTO_1, TTO_2) the panels show the MGCS image, the B component, the k-means result and the post-processed (PPS) result

However, the MGCS technique eases this problem by enhancing and balancing the lighting in the images. Figures 6 to 15 show the resulting images when the proposed color components and k-means clustering are applied on the MGCS images of ALO and TTO. From the resultant images, it can be said that each of the color components has its advantages and disadvantages when applied on the MGCS images. The results obtained from the color components are crucial for the k-means clustering and post-processing steps. Based on observation of the enhanced images, the MGCS technique turns the original images into better-quality images: the targets stand out and can be distinguished from the artifacts, while the lighting of each image is balanced.


Fig. 9 Results of S component and k-means clustering on enhanced ALO and TTO images: for each sample (ALO_1, ALO_2, TTO_1, TTO_2) the panels show the MGCS image, the S component, the k-means result and the post-processed (PPS) result

The results obtained show that the R, V, Lab and GLab components are incompatible with STH segmentation. The information of the target objects is greatly affected when the images go through the post-processing procedure, because most of the lost information cannot be restored. Figures 6, 10, 11 and 14 show the resultant images that have lost information that cannot be restored; most of these are TTO images.


Fig. 10 Results of V component and k-means clustering on enhanced ALO and TTO images: for each sample (ALO_1, ALO_2, TTO_1, TTO_2) the panels show the MGCS image, the V component, the k-means result and the post-processed (PPS) result

The images are successfully segmented when the G, B, Lab and Y components are applied on the enhanced images in combination with the k-means clustering technique. However, the final results show that artifacts are still present even though the targets are successfully segmented. These artifacts are difficult to remove because their sizes are within the range of the target size. This increases the possibility of misleading analysis in the segmentation performance.


Fig. 11 Results of Lab color model and k-means clustering on enhanced ALO and TTO images: for each sample (ALO_1, ALO_2, TTO_1, TTO_2) the panels show the MGCS image, the Lab image, the k-means result and the post-processed (PPS) result

The S, I and SG components show better resultant images when applied on the MGCS images compared to the other components. Artifacts are present but in minimal amounts. Figure 9 shows the resulting images for the S component. The targets are successfully segmented with only a small portion of artifacts present, because these artifacts fall in the same cluster as the targets. The results for the I component in Fig. 13 show good segmentation, but the target images


Fig. 12 Results of Y component and k-means clustering on enhanced ALO and TTO images: for each sample (ALO_1, ALO_2, TTO_1, TTO_2) the panels show the MGCS image, the Y component, the k-means result and the post-processed (PPS) result

produced in the final images are smaller than in the original images. The results for the SG component in Fig. 15 show that some information is missing, although the targets are successfully segmented with a smaller amount of artifacts. Table 1 shows the average performance of each proposed color component over all ALO and TTO images. The highest accuracy is 99.06%, obtained by the S and SG components. For specificity,


Fig. 13 Results of I component and k-means clustering on enhanced ALO and TTO images: for each sample (ALO_1, ALO_2, TTO_1, TTO_2) the panels show the MGCS image, the I component, the k-means result and the post-processed (PPS) result

Table 1 Results of segmentation performances based on different color components and k-means clustering

Color component   Accuracy   Specificity   Sensitivity
R                 96.76%     98.06%        67.81%
G                 98.24%     98.29%        97.33%
B                 98.53%     98.64%        96.54%
S                 99.06%     99.31%        95.06%
V                 96.97%     99.54%        91.46%
Lab               98.02%     98.35%        89.97%
Y                 98.01%     98.12%        95.19%
I                 97.40%     99.96%        56.24%
GLab              96.50%     99.41%        40.83%
SG                99.06%     99.54%        91.46%


Fig. 14 Results of GLab arithmetic component and k-means clustering on enhanced ALO and TTO images: for each sample (ALO_1, ALO_2, TTO_1, TTO_2) the panels show the MGCS image, the GLab component, the k-means result and the post-processed (PPS) result

the highest result is 99.96%, obtained by the I component, while the highest sensitivity is 97.33%, obtained by the G component. Comparing the overall performance, the S component achieved the best segmentation performance when applied with k-means clustering, with an accuracy of 99.06%, a specificity of 99.31% and a sensitivity of 95.06%.


Fig. 15 Results of SG component and k-means clustering on enhanced ALO and TTO images: for each sample (ALO_1, ALO_2, TTO_1, TTO_2) the panels show the MGCS image, the SG component, the k-means result and the post-processed (PPS) result

4 Conclusions

In this paper, the results of applying the proposed color models with k-means clustering have been presented. Color components from various color models are used with k-means clustering segmentation to ease the identification of the target in order to achieve good segmentation results. A good segmentation result helps achieve more accurate classification and diagnosis of STH. The S component from the HSV color model has proven to be the best for segmenting ALO and TTO images, with an accuracy of 99.06%, a specificity of 99.31% and a sensitivity of 95.06%. These results can serve as a reference for the morphology of ALO and TTO in future work such as classification and identification.

Acknowledgements The authors would like to acknowledge the support of the Fundamental Research Grant Scheme for Research Acculturation of Early Career Researchers (FRGS-RACER) under grant number RACER/1/2019/ICT02/UNIMAP//2 from the Ministry of Higher Education Malaysia. The authors gratefully acknowledge the team members and thank Hospital Universiti Sains Malaysia (HUSM) for providing the helminth egg samples.


References

1. Mohd-Shaharuddin N, Lim YAL, Hassan N-A, Nathan S, Ngui R (2018) Soil-transmitted helminthiasis among indigenous communities in Malaysia: is this the endless malady with no solution? Trop Biomed 35(1):168–180
2. Mehraj V, Hatcher J, Akhtar S, Rafique G, Beg MA (2008) Prevalence and factors associated with intestinal parasitic infection among children in an urban slum of Karachi. PLoS ONE 3(11):e3680
3. Ghate DA, Jadhav C (2012) Automatic detection of malaria parasite from blood images. Department of Computer, College of Engineering, Pimpri, Pune, Maharashtra, India, TIJCSA
4. Ghazali KH, Hadi RS, Zeehaida M (2013) Microscopy image processing analysis for automatic detection of human intestinal parasites ALO and TTO. In: International conference on electronics computer and computation, ICECCO 2013, pp 40–43
5. World Health Organization (2004) Division of control of tropical diseases. Schistosomiasis and intestinal parasites unit: training manual on diagnosis of intestinal parasites, tutor's guide electronic resource. CD-ROM
6. Amoah ID, Singh G, Stenström TA, Reddy P (2017) Detection and quantification of soil-transmitted helminths in environmental samples: a review of current state-of-the-art and future perspectives. Acta Trop 169(February):187–201
7. World Health Organization (WHO) (2005) Deworming for health and development. Report of the third global meeting of the partners for parasite control. WHO, Geneva
8. World Health Organization (WHO) (2015) Third WHO report on neglected diseases: investing to overcome the global impact of neglected tropical diseases. World Health Organization, p 191
9. Bleakly H (2003) Disease and development. Evidence from hookworm eradication in the American South. Q J Econ 1:376–386
10. Kaewpitoon SJ, Sangwalee W, Kujapun J, Norkaew J, Chuatanam J, Ponphimai S, Chavengkun W, Padchasuwan N, Meererksom T, Tongtawee T, Matrakool L, Panpimanmas S, Wakkhuwatapong P, Kaewpitoon N (2018) Active screening of gastrointestinal helminth infection in migrant workers in Thailand. J Int Med Res 46:4560–4568
11. Jiménez B, Maya C, Velásquez G, Torner F, Arambula F, Barrios JA, Velasco M (2016) Identification and quantification of pathogenic helminth eggs using a digital image system. Exp Parasitol 166:164–172
12. Hadi RS, Ghazali KH, Khalidin IZ, Zeehaida M (2012) Human parasitic worm detection using image processing technique. In: ISCAIE 2012 - 2012 IEEE symposium on computer applications & industrial electronics, pp 196–201
13. Suzuki CTN, Gomes JF, Falcão AX, Papa JP, Hoshino-Shimizu S (2013) Automatic segmentation and classification of human intestinal parasites from microscopy images. IEEE Trans Biomed Eng 60(3):803–812
14. Kamarul HG, Raafat SH, Zeehaida M (2013) Automated system for diagnosis intestinal parasites by computerized image analysis. Modern Appl Sci 7(5):98–114
15. Ghimire D, Lee J (2011) Nonlinear transfer function based local approach for color image enhancement. IEEE Trans Consum Electron 57(2):858–865
16. Kulkarni TA, Bhosale DS, Yadav DM (2014) A fast segmentation method for the recognition of acute lymphoblastic leukemia using thresholding algorithm. Int J Electron Commun Comput Eng 5(4):364–368
17. Poostchi M, Silamut K, Maude RJ, Jaeger S, Thoma G (2018) Image analysis and machine learning for detecting malaria. Transl Res 194(2018):36–55
18. Sharma B, Nayyer R (2015) Use and analysis of color models in image processing. J Food Process Technol 7(01):1–2
19. Aris TA, Nasir ASA, Mohamed Z, Jaafar H, Mustafa WA, Khairunizam W, Jamlos MA, Zunaidi I, Razlan ZM, Shahriman AB (2019) Colour component analysis approach for malaria parasites detection based on thick blood smear images. In: MEBSE 2018 - IOP conference series: materials science and engineering, vol 557. IOP
20. Dai H, Li X (2010) The color components' exchanging on different color spaces for image segmentation of hematocyte. In: 2nd international conference on multimedia information networking and security, MINES 2010. IEEE, pp 10–13
21. Abdul-Nasir AS, Mashor MY, Mohamed Z (2012) Modified global and modified linear contrast stretching algorithms - new colour contrast enhancement techniques for microscopic analysis of malaria slide images. Comput Math Methods Med
22. Miller E (2017) Understanding the RGB colour model, graphic design 101. https://www.thoughtco.com/colour-models-rgb-1697461
23. Sharma B, Nayyer B (2015) Use and analysis of color model in image processing. J Food Process Control Technol
24. Nasir ASA, Mashor MY, Rosline H (2011) Detection of acute leukaemia cells using variety of features and neural networks. In: 5th Kuala Lumpur international conference on biomedical engineering. International Federation for Medical and Biological Engineering (IFMBE), Kuala Lumpur, pp 40–46
25. Latoschik ME (2006) Realtime 3D computer graphic/virtual reality
26. Puniani S, Arora S (2015) Performance evaluation of image enhancement techniques. Int J Signal Process Image Process Pattern Recogn 8(8):251–262
27. Erich LM (2006) Colour models CIE space for colour matching. CIE 1931 Model International C
28. Hong Yan NL (2006) Improved method for color image enhancement based on luminance and color contrast. J Electron Imaging 3(2):190–197
29. Khairudin NAA, Ariff FNM, Nasir ASA, Mustafa WA, Khairunizam W, Jamlos MA, Zunaidi I, Razlan ZM, Shahriman AB (2019) Image segmentation approach for acute and chronic leukaemia based on blood sample images. In: MEBSE 2018 - IOP conference series: materials science and engineering, vol 557. IOP

Machine Learning Calibration for Near Infrared Spectroscopy Data: A Visual Programming Approach Mahmud Iwan Solihin, Zheng Zekui, Chun Kit Ang, Fahri Heltha, and Mohamed Rizon

Abstract Spectroscopy, including near infrared spectroscopy (NIRS), is a non-destructive and rapid technique applied increasingly in food quality evaluation, medical diagnosis, manufacturing, etc. The qualitative or quantitative information from NIRS is only obtained after a spectra data calibration process based on mathematical knowledge in chemometrics and statistics. This process naturally involves multivariate statistical analysis. Machine learning, as a subset of AI (artificial intelligence), is gaining popularity for chemometric calibration of NIRS data, in addition to conventional multivariate statistical tools. However, the software/toolboxes in chemometrics are often commercial and not free, and for the free versions, programming skills are required to apply machine learning to spectra data calibration. Therefore, this paper introduces a different approach to spectra data calibration based on visual programming using Orange data mining, a free software package still rarely used by the spectroscopy research community. The data used are: pesticide sprayed on cabbage (to classify between pure cabbage and pesticide-sprayed cabbage with different levels of pesticide solution) and mango sweetness assessment (to predict soluble sugar content in mango based on the Brix degree value). These two datasets represent classification and regression respectively. This approach is intended for researchers who want to apply machine learning calibration to their spectroscopy data without rigorous programming jobs, i.e. for non-programmers.

M. I. Solihin (&)  C. K. Ang  F. Heltha Mechatronics Engineering, Faculty of Engineering, UCSI University, Kuala Lumpur, Malaysia e-mail: [email protected] M. Rizon Electrical and Electronics Engineering, Faculty of Engineering, UCSI University, Kuala Lumpur, Malaysia Z. Zekui TUM (Technical University of Munich) Asia, Singapore, Singapore © Springer Nature Singapore Pte Ltd. 2021 Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_40

577

578

M. I. Solihin et al.

Keywords: Machine learning calibration · Near infrared spectroscopy · Orange free software · Handheld near infrared spectrometer

1 Introduction

Machine learning, including deep learning, has become a highly discussed topic in the digital data world. It has tremendous potential to solve complex human problems, so many fields demand the implementation of machine learning and artificial intelligence at large to solve their respective problems [1–3]. Spectroscopy data applications are no exception. Spectroscopy is the study of the interaction between matter and electromagnetic radiation, originating in the study of visible light dispersed according to its wavelength by a prism. In particular, near infrared spectroscopy (NIRS) is a non-destructive and rapid technique applied increasingly in food quality evaluation, medical diagnosis, manufacturing, etc. in recent years [4–15]. It can provide quantitative (substance concentration determination) and qualitative (raw material identification, product adulteration identification) information about samples for in situ analysis and online applications [4, 5]. For example, it can provide moisture, protein, fat, and starch content information. In each industry, NIR applications vary and are tailored to suit different companies and their products and needs [16–18]. In spectroscopy, absorption spectra of chemical species (atoms, molecules, or ions) are generated when a beam of electromagnetic energy (i.e. light) is passed through a sample and the chemical species absorbs a portion of the photons passing through it. The Beer-Lambert law states that the absorptive capacity of a dissolved substance is directly proportional to its concentration in a solution. The relationship can be expressed as shown in Eq. (1) [19].

A = log10(Io / I) = εlc  (1)

where:
A = absorbance
ε = the molar extinction coefficient
l = length of the path the light must travel in the solution in centimeters
c = concentration of a given solution
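Equation (1) is a one-liner in code. A small sketch (with illustrative function names) shows both the absorbance computed from measured intensities and the inverse step that recovers the concentration:

```python
import math

def absorbance(i0, i):
    """A = log10(Io / I): absorbance from incident and transmitted intensity."""
    return math.log10(i0 / i)

def concentration(a, molar_extinction, path_cm):
    """Invert A = e * l * c to recover the concentration c of the solution."""
    return a / (molar_extinction * path_cm)
```

For instance, a sample that transmits one tenth of the incident light has an absorbance of exactly 1.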

The qualitative or quantitative information from NIRS is only obtained after a spectra data calibration process using chemometrics, and this process naturally involves multivariate statistical analysis. Machine learning, as a subset of AI (artificial intelligence), is gaining popularity for chemometric calibration of NIRS data, in addition to conventional multivariate statistical tools, due to its well-known capability to perform complex classification and regression tasks [20–22]. This emergence may be encapsulated in a subject called intelligent

Machine Learning Calibration for Near Infrared Spectroscopy Data ...

579

chemometrics. Among popular machine learnings in this regard are support vector machine (SVM) and artificial neural networks (ANN). Some research in this area include literatures review can be found [23–28]. The software programming tools for chemometric purpose which can accommodate machine learning are many such as Unscrambler, MALAB, R language, WEKA, SIMCA and Python. However, often these softwares/toolboxes are commercialized version which is not free. Free software implementation on their respective applications is motivating due to cost [29]. For the free versions, programming skills are required to deal with applications of machine learning in spectra data calibration. Therefore, this paper introduces a different approach of spectra data calibration based on visual programming approach using Orange free software developed by Biolab [30] which is still rarely used by the research community in spectroscopy. This approach is intended more for researchers who want to apply machine learning calibration in their spectroscopy data but don’t want to have rigorous programming jobs, i.e. for non-programmers. This paper will demonstrate the results of machine learning calibration for some NIRS data in classification and regression mode. The NIRS data used are obtained using micro handheld spectrometer, a new type of NIR spectroscopy instrument. The data used namely: pesticide sprayed on cabbage (to classify between pure cabbage and pesticide-sprayed cabbage with different level of pesticide solution), mango sweetness assessment (to predict sugar soluble content in mango based on Brix degree value). These two data represent classification and regression respectively.

M. I. Solihin et al.

2 Instrument and Software

A spectrometer is the instrument used to collect spectra data of objects/samples by directing an infrared light source at them. The spectra data obtained are unique for each sample, indicating the uniqueness of its chemical composition; an NIR spectrometer can therefore be used as a means of studying material fingerprints. The spectra can be plotted as wavelength (in nm or wavenumber in cm−1) on the x axis versus intensity or absorbance (arbitrary units) on the y axis. Figure 1 shows an example of spectra data obtained from a spectrometer.

The NIR spectrometer used in this study is a handheld (palm-sized) type with a wavelength range in the NIR region from 900–1700 nm. The optical electrical board of this spectrometer is developed by Texas Instruments. Figure 2 shows the handheld micro spectrometer used in this study. The device is connected via a USB port so that the user can acquire the spectra signal of the samples on a personal computer using GUI software. How the data were collected will be explained in the next section for the respective case studies.

For the multivariate spectra data calibration, the Orange data mining software is used [30]. This software is open source and can be downloaded freely. It features a visual programming front-end for explorative data analysis and interactive data visualization, and can also be used as a Python library. Visual programming in Orange is performed as a workflow. Orange workflow components are called widgets, and they range from simple data visualization, subset selection and pre-processing to empirical evaluation of learning algorithms and predictive modelling. Workflows are created by linking predefined or user-designed widgets, while advanced users can use Orange as a Python library for data manipulation and widget alteration [31]. Figure 3 shows a typical Orange workflow example. The widgets for spectroscopy can be downloaded as an Add-on, alongside other application add-ons such as Image Analysis, Time Series, Geo, etc. The widgets contained in the Spectroscopy add-on are shown in Fig. 4.

Fig. 1 An example of spectra data obtained from spectrometer readings on many samples

Fig. 2 The handheld palm-sized NIR spectrometer
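Spectra acquired through the GUI software are saved as .csv files (as in the experiment procedure of Sect. 3.1). Below is a minimal sketch of reading such a file with the Python standard library; the two-column wavelength/absorbance layout and the header names are assumptions for illustration, since the actual export format of the GUI is not specified here:

```python
import csv
import io

# Hypothetical export layout: a header row, then wavelength/absorbance pairs.
# A real file from the instrument GUI may use different column names.
raw = """wavelength_nm,absorbance
900,0.12
1300,0.48
1700,0.31
"""

def load_spectrum(fp):
    """Read one spectrum into parallel wavelength and absorbance lists."""
    reader = csv.DictReader(fp)
    wl, ab = [], []
    for row in reader:
        wl.append(float(row["wavelength_nm"]))
        ab.append(float(row["absorbance"]))
    return wl, ab

wavelengths, absorbances = load_spectrum(io.StringIO(raw))
print(wavelengths, absorbances)
```

In practice `io.StringIO(raw)` would be replaced by `open("sample.csv")` on the file exported by the spectrometer software.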


Fig. 3 An example of workflow visual programming in Orange

Fig. 4 The orange software widgets available in spectroscopy add-ons


3 Case Studies

In this section, two case studies of NIR spectra data calibration are presented. One case represents a classification problem (qualitative analysis) using machine learning and the other represents a regression problem (quantitative analysis). The first case, for qualitative analysis, is an experiment on pesticide solution sprayed on cabbage samples. The second case, for quantitative analysis, is mango sweetness assessment based on sugar content (Brix value).

3.1 Pesticide Solution Sprayed on Cabbage

This experiment is motivated by the effort to develop a rapid, non-destructive approach to detect pesticide residue on agricultural crops. It is carried out as initial research to scrutinize whether NIRS is a suitable tool for pesticide residue detection. Monitoring of pesticides in fruit and vegetable samples has increased in recent years since most countries have established maximum residue levels (MRL) for pesticides in food products [32, 33]. Figure 5 shows the cabbage sample and the pesticide solution used, i.e. potassium oleate solution (285 g/1000 mL). The experiment procedure can be summarized as follows:

1. The instrument is set up.
2. A high concentration solution (28.5%, the original product ratio) of pesticide is blended.
3. The pesticide solution is sprayed on the cabbage.
4. The cabbage sample is scanned 6 times to confirm the result.
5. The spectrum is saved as a .csv file.

Fig. 5 Cabbage and the pesticide solution


Fig. 6 The orange workflow in the experiment for classification task

6. Repeat steps 3 to 5 50 times for different cabbage leaves.
7. Repeat steps 2 to 6 for 5% pesticide solution, 1% pesticide solution and water.
8. Repeat steps 4 to 5 30 times for different leaves of pure (unsprayed) cabbage.

A total of 230 NIR spectra are collected: 30 samples of pure cabbage leaves, and 50 samples each of cabbage sprayed with 28.5% (original product ratio) pesticide solution, 5% pesticide solution, 1% pesticide solution and water only. The machine learning model therefore classifies the recorded NIR spectra into five classes. From these 230 samples, 180 are randomly selected for training and the remaining 50 are used for testing. Figure 6 shows the Orange workflow for this experiment, where three classifiers are used, namely ANN, SVM and KNN (k-nearest neighbour).

The classification results are readily available from the Confusion Matrix and Test & Score widgets, as shown in Figs. 7 and 8. Figure 7 shows the confusion matrix of the classification performed by SVM; the results of the other classifiers (ANN and KNN) can be viewed via the selection buttons on the left side. Note that other classifiers such as Random Forest, Naïve Bayes, Decision Tree, etc. can also be used.

Furthermore, the Test & Score widget can be used to check the classification accuracy. As can be seen in Fig. 8, the results are mostly expressed in data science terminology such as AUC (area under curve), CA (classification accuracy), precision and recall. The highest CA on the test set is achieved by SVM, followed by KNN and ANN, at 92%, 86% and 72% respectively. Obviously, these results could be fine-tuned by changing parameters, and the performance might differ. However, the focus of this study at the moment is on the use of the software rather than on the performance of the machine learning algorithms. In addition, other algorithms can easily be applied and analyzed.

Fig. 7 Confusion matrix of classification performed by SVM

Fig. 8 Screenshot of the Test & Score widget showing classification results
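The CA, precision and recall figures reported by the Test & Score widget can be reproduced by hand. Here is a small sketch using made-up labels for the five cabbage classes rather than the actual experiment outputs:

```python
def classification_metrics(y_true, y_pred, positive):
    """Classification accuracy, plus precision/recall for one chosen class."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    ca = correct / len(y_true)
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return ca, precision, recall

# Illustrative labels only (five classes, as in the cabbage experiment).
y_true = ["pure", "28.5%", "5%", "1%", "water", "pure"]
y_pred = ["pure", "28.5%", "1%", "1%", "water", "pure"]
ca, p, r = classification_metrics(y_true, y_pred, positive="pure")
print(ca, p, r)  # 5 of 6 correct; the "pure" class is fully recovered
```

The widget computes exactly these quantities per class (averaging them for the summary row), which is why its output maps directly onto the confusion matrix of Fig. 7.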

3.2 Brix Value Prediction on Mango

The second case study is a regression problem, part of research on non-destructive fruit quality assessment using NIR spectroscopy. For this project, three different types of mango fruit were selected, namely Chokonan, Rainbow, and Kai Te. A total of 60 samples was prepared to be scanned by the spectrometer. The samples were scanned in reflectance mode to record the absorbance spectra, each sample spectrum being measured for 3 s. Some samples were scanned twice in different environments, and some were scanned only once, so a total of 80 spectra were collected from the 60 samples. The training and testing datasets consist of 60 and 20 spectra, respectively.

In assessing the fruit maturity of mango, and as a guide to final food quality, short-wave near infrared spectroscopy (900–1700 nm) has been investigated. To obtain a predictive model from spectroscopy data, reference data needed to be collected to calibrate the model and validate its prediction accuracy. A refractometer is a device used to measure the refractive index of plant juices in order to determine the mineral/sugar ratio of the plant cell protoplasm; it measures in units called Brix. NIRS is used to predict the Brix values of the mango fruit. The MA871 is an optical refractometer that employs the measurement of the refractive index to determine the %Brix of sugar in aqueous solutions, as shown in Fig. 9 [34].

Fig. 9 MA871 digital refractometer

In this project, the NIR spectra of the mango samples are calibrated by machine learning (the AdaBoost ensemble algorithm for regression in this case) to predict the Brix value non-invasively. Figures 10 and 11 show the raw and pre-processed spectra of the mango samples. Two pre-processing steps are applied, namely Gaussian smoothing and EMSC (extended multiplicative scatter correction). The Test & Score widget can again be used to show the accuracy in this regression case, in terms of R2 (the coefficient of determination). The regression performance obtained by AdaBoost here is R2 = 0.99 on training, indicating very good prediction accuracy; however, only R2 = 0.64 is obtained on testing. This lower attainment is an indication of overfitting of the prediction model, which needs to be remedied; that discussion, however, is beyond the scope of this paper. Figure 12 shows the Orange workflow (visual programming) used to generate the data for this regression process, and Fig. 13 shows the regression plot for the testing data, indicating the relation between the actual %Brix and the predicted value.

Fig. 10 Raw spectral data of Mango

Fig. 11 Pre-processed spectra data of Mango

Fig. 12 Orange workflow for regression experiment and the regression result

Fig. 13 Actual %Brix value vs predicted value (by AdaBoost)
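The R2 values behind this training/testing gap are simple to compute directly. A sketch with invented Brix values (not the experimental data) shows how a model can score near 1 on data it fits well and much lower on data it does not:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Illustrative Brix values only (not the actual experimental data).
actual   = [10.0, 12.0, 14.0, 16.0]
good_fit = [10.1, 11.9, 14.2, 15.8]   # behaves like the training set here
poor_fit = [12.0, 11.0, 15.5, 13.0]   # behaves like the test set here
print(r_squared(actual, good_fit))
print(r_squared(actual, poor_fit))
```

A large gap between the two values, as observed between training (0.99) and testing (0.64), is the standard signature of overfitting.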

4 Conclusions and Discussions

This paper introduced a different approach to spectra data calibration, particularly for near infrared spectroscopy, based on visual programming using Orange data mining, a free software package still rarely used by the research community in spectroscopy. This tool is useful particularly for non-programmer researchers who want to apply machine learning algorithms to spectroscopy data, leading to an intelligent chemometrics approach. No coding was involved in the calibration and analysis, which may attract the interest of non-programmers. However, there are some recommendations for future improvement, particularly for Orange software development, that the research community and the authors should pursue, such as the development of a PLS (partial least squares) regression widget and a deep learning (e.g. convolutional neural network) widget. This is because PLS in particular is among the most popular multivariate regression methods in chemometrics and spectroscopy. Such widget development can only be achieved with knowledge of the Python programming language.

References

1. Ang CK, Tey WY, Kiew PL, Fauzi M (2017) An artificial intelligent approach using fuzzy logic for sleep quality measurement. J Mech Eng SI 4(2):31–47
2. Tang SH, Ang CK, Ariffin MKABM, Mashohor SB (2014) Predicting the motion of a robot manipulator with unknown trajectories based on an artificial neural network. Int J Adv Robot Syst 11(10):176
3. Hong TS, Kit AC, Nia DN, Ariffin MKAM, Khaksar W (2013) Planning for redundant manipulator based on back-propagation neural network. Adv Sci Lett 19(11):3307–3310
4. Cen H, He Y (2007) Theory and application of near infrared reflectance spectroscopy in determination of food quality. Trends Food Sci Technol 18(2):72–83
5. Teixeira Dos Santos CA, Lopo M, Páscoa RNMJ, Lopes JA (2013) A review on the applications of portable near-infrared spectrometers in the agro-food industry. Appl Spectrosc 67(11):1215–1233
6. Porep JU, Kammerer DR, Carle R (2015) On-line application of near infrared (NIR) spectroscopy in food production. Trends Food Sci Technol 46(2):211–230
7. Sakudo A (2016) Near-infrared spectroscopy for medical applications: current status and future perspectives. Clin Chim Acta 455:181–188
8. Qu J-H et al (2015) Applications of near-infrared spectroscopy in food safety evaluation and control: a review of recent research advances. Crit Rev Food Sci Nutr 55(13):1939–1954
9. Yadav J, Rani A, Singh V, Murari BM (2015) Prospects and limitations of non-invasive blood glucose monitoring using near-infrared spectroscopy. Biomed Signal Process Control 18:214–227
10. Saputra I, Jaswir I, Akmeliawati R (2018) Identification of pig adulterant in mixture of fat samples and selected foods based on FTIR-PCA wavelength biomarker profile. Int J Adv Sci Eng Inf Technol 8(6):2341
11. Chandran M, Rajamamundi P, Kit AC (2017) Tire oil from waste tire scraps using novel catalysts of manufacturing sand (M Sand) and TiO2: production and FTIR analysis. Energy Sources Part A Recover Util Environ Eff 39(18):1928–1934
12. Elango N, Gupta NS, Lih Jiun Y, Golshahr A (2017) The effect of high loaded multiwall carbon nanotubes in natural rubber and their nonlinear material constants. J Nanomater 2017:1–15
13. Solihin MI, Shameem Y, Htut T, Ang CK, Hidayab MB (2019) Non-invasive blood glucose estimation using handheld near infra-red device. Int J Recent Technol Eng 8(3):16–19
14. Abdullah Al-Sanabani DG, Solihin MI, Pui LP, Astuti W, Ang CK, Hong LW (2019) Development of non-destructive mango assessment using handheld spectroscopy and machine learning regression. J Phys Conf Ser 1367(1):012030
15. Karunathilaka SR, Yakes BJ, He K, Chung JK, Mossoba M (2018) Non-targeted NIR spectroscopy and SIMCA classification for commercial milk powder authentication: a study using eleven potential adulterants. Heliyon 4(9)
16. Martens H, Stark E (1991) Extended multiplicative signal correction and spectral interference subtraction: new preprocessing methods for near infrared spectroscopy. J Pharm Biomed Anal 9(8):625–635
17. Skogholt J, Liland KH, Indahl UG (2019) Preprocessing of spectral data in the extended multiplicative signal correction framework using multiple reference spectra. J Raman Spectrosc 50(3):407–417
18. Manley M (2014) Near-infrared spectroscopy and hyperspectral imaging: non-destructive analysis of biological materials. Chem Soc Rev 43(24):8200–8214
19. Hardesty JH, Attili B, College C (2010) Spectrophotometry and the Beer-Lambert Law: an important analytical technique in chemistry
20. Zhang H, Yang Q, Lu J (2013) Classification of washing powder brands using near-infrared spectroscopy combined with chemometric calibrations. Spectrochim Acta Part A Mol Biomol Spectrosc 120:625–629
21. Makky M, Soni P (2014) In situ quality assessment of intact oil palm fresh fruit bunches using rapid portable non-contact and non-destructive approach. J Food Eng 120(1):248–259
22. Devos O, Ruckebusch C, Durand A, Duponchel L, Huvenne J-P (2009) Support vector machines (SVM) in near infrared (NIR) spectroscopy: focus on parameters optimization and model interpretation. Chemom Intell Lab Syst 96(1):27–33
23. Barbon S, da Costa Barbon APA, Mantovani RG, Barbin DF (2018) Machine learning applied to near-infrared spectra for chicken meat classification. J Spectrosc 2018:1–12
24. Madden MG, Howley T (2009) A machine learning application for classification of chemical spectra. In: Applications and innovations in intelligent systems XVI. Springer, London, pp 77–90
25. Cheng C, Liu J, Zhang C, Cai M, Wang H, Xiong W (2010) An overview of infrared spectroscopy based on continuous wavelet transform combined with machine learning algorithms: application to Chinese medicines, plant classification, and cancer diagnosis. Appl Spectrosc Rev 45(2):148–164
26. Torrione P, Collins LM, Morton KD (2014) Multivariate analysis, chemometrics, and machine learning in laser spectroscopy. In: Laser spectroscopy for sensing. Elsevier, pp 125–164
27. Astuti W, Dewanto S, Soebandrija KEN, Tan S (2018) Automatic fruit classification using support vector machines: a comparison with artificial neural network. IOP Conf Ser Earth Environ Sci 195:012047
28. Astuti W, Aibinu AM, Salami MJE, Akmelawati R, Muthalif AG (2011) Animal sound activity detection using multi-class support vector machines. In: 2011 4th international conference on mechatronics (ICOM), pp 1–5
29. Yie Y, Solihin MI, Kit AC (2017) Development of swarm robots for disaster mitigation using robotic simulator software, vol 398
30. Orange – data mining fruitful & fun. https://orange.biolab.si/. Accessed 12 Mar 2019
31. Demšar J et al (2013) Orange: data mining toolbox in Python. J Mach Learn Res 14:2349–2353
32. Zhao G, Guo Y, Sun X, Wang X (2015) A system for pesticide residues detection and agricultural products traceability based on acetylcholinesterase biosensor and internet of things. Int J Electrochem Sci 10(4):3387–3399
33. Jamshidi B, Mohajerani E, Jamshidi J, Minaei S, Sharifi A (2015) Non-destructive detection of pesticide residues in cucumber using visible/near-infrared spectroscopy. Food Addit Contam Part A Chem Anal Control Expo Risk Assess 32(6):857–863
34. Milwaukee - MA871 Digital Brix Refractometer. http://www.milwaukeeinst.com/site/products/products/digital-refractometers/165-products-g-digital-refractometers-g-ma871. Accessed 22 Aug 2019

Real Time Android-Based Integrated System for Luggage Check-in Process at the Airport

Xin Yee Lee and Rosmiwati Mohd-Mokhtar

Abstract Airway transportation has become trendy among travelers. However, the check-in process involves several stages, including flight ticket check-in, luggage drop-off, luggage scanning and others. This research focuses on developing an online, software-integrated device capable of managing the luggage check-in process at the airport. A weight sensor device is used to capture the weight of the luggage and is synchronized with a real-time database. An Android application is developed as the user interface for checking in the luggage, with the additional capability of purchasing add-on luggage included in the developed software. Results show that, under some assumptions on the time taken to scan, print and get the bag tag, the estimated recorded time is four minutes and thirty-seven seconds with no add-on purchase activity, and an average of six minutes and fifteen seconds with add-on purchase activity. By eliminating the purchase of additional luggage at a different counter, the system is believed to reduce the time taken during the luggage check-in process for that passenger.

Keywords: Android system · Check-in process · Weight sensor

1 Introduction

Research has been carried out on the procedures and methods of airline passengers' check-in over the years [1]. The outcome tabulated a detailed analysis of each of the methods currently implemented at airports. Examples of the methods used are agent/check-in desk (assisted counter check-in at the airport), browser (self online web check-in), kiosk (self check-in at the airport), app (self mobile-app check-in) and others. Based on the research in [1], it was observed that the check-in method passengers most prefer is the agent/check-in desk, which covers a high value of forty-nine percent, nearly half. It means that most passengers still prefer the traditional method of check-in. However, there has been a slight increase in the app method from 2015 to 2017, and it is forecast to become the majority preference of passengers in 2020 [1].

X. Y. Lee · R. Mohd-Mokhtar
School of Electrical and Electronic Engineering, Universiti Sains Malaysia, Engineering Campus, 14300 Nibong Tebal, Pulau Pinang, Malaysia
e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2021
Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_41

Nowadays, national airlines have adopted advanced technology into their flight check-in operation systems. In the passenger ticket check-in process, website browser check-in was popular over the past few years, but mobile applications have begun to take the lead ahead of browser check-in. One of the popular airlines in Malaysia, AirAsia Berhad, offers various passenger check-in methods with advanced technology. The passenger is able to get the boarding pass by scanning the 2D barcode received on a smart phone using the kiosks at the airport, and is ready to board the flight if they do not carry any luggage; the whole process takes less than one minute. However, if a passenger has checked luggage, they need to head over to the luggage drop counter and are required to get their travel documents verified two hours before boarding [2].

Like Malaysian airports, other international airports and airlines also facilitate the boarding process with automation systems. This can be seen, for example, at Changi Airport in Singapore, where the main boarding process can be categorized into 4 steps, namely check-in, luggage drop, immigration, and boarding [3]. Changi Airport has adopted automation technology in these processes: firstly, the automated check-in kiosks; secondly, the automated luggage-drop system that allows passengers to drop their luggage without counter check-in. If the luggage is in excess of the allowed weight, the passenger needs to pay for excess luggage or air-freight it to the destination as unaccompanied luggage [3].

Several studies have been conducted on the check-in process at the airport [4–9].
These studies usually develop simulation designs to study the dynamic behavior of passengers, estimate queuing times, and analyze the operational procedures run at the airport. They mainly focus on the impact of social behavior among passengers at the airport, with the simulations developed as agent-based models. Other research has focused on simulating airport security during luggage check-in, as mentioned by Perboli et al. [10], Hepler [11], Miller [12], and Leone and Liu [13]. Studies on waiting and queuing times at the airport have also been made, as can be seen in [14–18]; their aim is to provide optimized models able to reduce the length of queuing lanes as well as the operational cost of the airport and flight service providers. Pou et al. [14], for example, used a dynamic programming technique to simulate the process and modelled the influx of passengers as a Poisson process. Mehri et al. [15] used a quite similar approach on a different airport case study. On the other hand, de Lange et al. [16] proposed a virtual queuing model for airport security check lanes.

The flight station is always crowded with people queuing up at each counter to check in their flight ticket, or their luggage, or both [19]. At the departure gate, the time available to resolve eventual problems becomes strictly limited [20–22]; one such problem is the removal of cabin bags whenever the volumetric limits for stowage in the passenger cabin are reached. A counter will eventually pause and delay the services to the next passengers, and things get worse when passengers without luggage must queue up and wait behind those who are checking in luggage. These are the causes of the long lines at each counter whenever issues occur. In addition, things come to the worst when a passenger's check-in luggage exceeds the allowed weight limit. The passenger is prompted to proceed to the cashier counter first to pay for the excess weight, and then return to the luggage drop counter to continue the flight check-in before boarding at the departure gate. This is inefficient and time-consuming, especially when travelling abroad, where foreign currency issues may occur as well. The process is troublesome and not user-friendly.

The aim of this research is to implement a real-time integrated system that is able to store detailed information about the passenger and the luggage weight that he/she initially purchased/obtained, to link directly to the weight measurement device, and to automatically monitor the luggage check-in process via smart phone. The system also has a feature to purchase extra luggage weight directly via the smart phone should excess luggage occur, without the need to change counters. Even though some commercial airlines already have apps that can buy extra luggage online, an integrated system providing real-time luggage check-in is yet to be implemented by any airline service so far. This is the innovative and new contribution of this research.

The remainder of the paper goes as follows. Section 2 gives details of the project implementation: the hardware system developed to imitate the weighing process at the luggage check-in counter, and the software, synchronized with the weight sensor, real-time database and user interface, that is embedded in the system. Outcome and analysis from the project are presented in Sect. 3. Section 4 concludes the paper.
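As a rough illustration of the Poisson-process modelling used by Pou et al. [14], the sketch below samples passenger arrival times from exponential inter-arrival gaps; the arrival rate is an assumed figure for illustration, not taken from any of the cited studies:

```python
import random

random.seed(7)  # fixed seed so the run is reproducible

def poisson_arrivals(rate_per_min, horizon_min):
    """Sample arrival times (in minutes) from a homogeneous Poisson process
    by accumulating exponentially distributed inter-arrival gaps."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate_per_min)
        if t > horizon_min:
            return times
        times.append(t)

# Assumed rate: on average 2 passengers per minute over one hour,
# so roughly 120 arrivals are expected per simulated hour.
arrivals = poisson_arrivals(rate_per_min=2.0, horizon_min=60.0)
print(len(arrivals))
```

Feeding such arrival streams into a queue model of the counters is the usual way the cited works estimate queuing lengths and waiting times.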

2 Project Implementation

2.1 Hardware Components and Circuits

The flight luggage check-in device is a mechanism able to manage the luggage check-in process without human resources. It measures the weight of the luggage, and the measured values are automatically updated to the database system over the communication link. The system analyzes the collected data and triggers notifications according to the allowed or purchased luggage weight, and it sets up an internet server to connect the automation device with the passenger's portable device. Figure 1 shows the block diagram of the flight luggage check-in system.

Fig. 1 Block diagram of the flight luggage check-in system

A Raspberry Pi is used as the core of the luggage check-in hardware implementation. A sensor device that is sensitive to weight is used to measure the weight of the luggage, and the HX711 weight sensor module is utilized to read the values from the sensor device. The Raspberry Pi is connected to a Wi-Fi module to establish a connection with the database server; since the Raspberry Pi supports Wi-Fi, it is a suitable embedded system for setting up a wireless server connection. It can be remotely controlled over SSH (Secure Shell) as long as the Pi is powered on with a Wi-Fi connection. The Raspberry Pi is linked to the database system, in this case the Firebase real-time database, and is granted permission to read the database and change the data, and vice versa. The Raspberry Pi reads the login identity, verifies the purchased weight against the check-in luggage, and then uploads the measured weight of the luggage to the database. Figure 2 shows the front view of the automatic luggage check-in system platform.
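The HX711 delivers raw ADC counts, which must be converted to kilograms using a tare offset and a scale factor obtained by calibration against known weights. The sketch below illustrates that conversion; the constants, the simulated readings and the averaging scheme are all hypothetical, standing in for the actual HX711 driver output on the Pi's GPIO pins:

```python
# Hypothetical calibration constants, found by weighing known reference loads.
RAW_TARE = 83500        # ADC counts with an empty platform
COUNTS_PER_KG = 21250   # additional ADC counts per kilogram of load

def to_kg(raw_counts, samples=1):
    """Convert (averaged) raw HX711 counts into kilograms."""
    return (raw_counts / samples - RAW_TARE) / COUNTS_PER_KG

def average_weight(readings):
    """Average several raw readings to suppress load-cell noise."""
    return to_kg(sum(readings), samples=len(readings))

# Simulated raw readings for a roughly 4 kg piece of luggage; a real system
# would obtain these from the HX711 driver instead.
readings = [168480, 168510, 168495, 168515]
print(round(average_weight(readings), 3))
```

The averaged result is what the Pi would then push to the Firebase database as the measured luggage weight.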

Fig. 2 Front view of hardware platform

2.2 Software Design

Software design can be categorized into 3 sections: the low-level software in the hardware device, the database design, and the user-interface Android application. The low-level software is a Python script; Python is easy to use, universal, and well suited to programming the Raspberry Pi. The Android application, on the other hand, is programmed in Java, an object-oriented language. The database is set up in both hardware and software by configuring the settings in the compiler.

Firebase is a cloud storage database. The database is designed in categories and groups according to specific features or characteristics. The main information needed in this project is the user/passenger information. User is defined as the parent of the group; the second layer is the various registered user names, each followed by their respective flight information such as the user password, purchased luggage weight and measured luggage weight. Table 1 tabulates the dummy data used in designing the database, and Fig. 3 shows the design structure of the database created. The measured weight is synchronized with the weight sensing device. The Raspberry Pi is linked to the database through the Python script: the Firebase library is imported on the Raspberry Pi, and an authentication link to the Firebase account is used to access the data stored in the database.

The Android operating system is used to create the graphical user interface for the luggage check-in process. It has been chosen for this study as the operating system (OS) is commonly used among phone users.

2.3 Design Layout

Firstly, the user or passenger needs to log in to his/her account. Upon successful login, the user is shown the flight he/she is going to board, the details of the ticket check-in, and the luggage weight purchased. At the same time, the data is linked to the server (database) system, which is shared with the luggage automation device.

Table 1 Dummy data to be used in the database

User | Password | Purchased luggage (kg) | Measured luggage (kg)
-----|----------|------------------------|----------------------
Ali  | 0000     | 4                      | 0
Lee  | 1234     | 3                      | 0
Raju | 4321     | 2                      | 0

Fig. 3 Design structure of the database created

If the details of the passenger and luggage pass, the message "Luggage Check-in is successful." is prompted. After that, the luggage is automatically transferred into the luggage store, and the luggage check-in process is done. However, once the luggage automation device senses that the weight of the luggage exceeds the allowed weight, a notification is sent to inform the passenger that the luggage is overweight. In that case the passenger is not allowed to proceed to the final session of the luggage check-in process; they have the option either to proceed to the payment session to purchase add-on luggage, or to cancel the luggage check-in process by pressing the logout button. Figure 4 shows the flow of the proposed flight luggage check-in system.
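The database layout of Table 1 and Fig. 3, together with the check-in decision of the Fig. 4 flow, can be sketched in plain Python; the node names and returned messages here are illustrative assumptions, and a real implementation would read and write the Firebase nodes instead of a local dictionary:

```python
# Hypothetical local mirror of the Firebase tree sketched in Fig. 3,
# populated with the dummy data of Table 1.
users = {
    "Ali":  {"password": "0000", "purchased_kg": 4.0, "measured_kg": 0.0},
    "Lee":  {"password": "1234", "purchased_kg": 3.0, "measured_kg": 0.0},
    "Raju": {"password": "4321", "purchased_kg": 2.0, "measured_kg": 0.0},
}

def check_in(db, user, measured_kg):
    """Decision taken after weighing, following the Fig. 4 flow:
    record the reading, then pass the luggage or request an add-on purchase."""
    record = db[user]
    record["measured_kg"] = measured_kg
    if measured_kg <= 0:
        return "no luggage"
    if measured_kg <= record["purchased_kg"]:
        return "Luggage Check-in is successful."
    return "overweight: purchase add-on luggage or log out"

print(check_in(users, "Ali", 3.98))  # within the 4 kg purchased
print(check_in(users, "Lee", 5.0))   # exceeds the 3 kg purchased
```

The same three outcomes (pass, overweight prompt, no luggage) are the states exercised by the test cases of Sect. 2.4.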

2.4 System Functionality on Several Luggage Conditions

Several luggage conditions are created to test the overall system. There are three major weight conditions of the luggage: underweight, equal weight and overweight. These test cases are simulated using the dummy data listed in Table 1. In this case the user name Ali is selected, whose purchased weight is 4 kg, with the initial measured weight set as 0 kg. Table 2 lists the simulated test cases based on the information of the user Ali in the database.

Table 2 Test cases simulated based on the information of the user Ali in the database

Test case   Weight category                             Load (kg)
1           Underweight                                 2.0
2           Equal weight                                4.0
3           Overweight                                  5.0
4           No luggage                                  0.0
5           Overweight with purchase add-on activity    5.0
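The test cases in Table 2 reduce to a three-way comparison of the measured weight against the purchased weight. A minimal sketch of that decision logic (the function name and the returned labels are hypothetical, not taken from the paper):

```python
def check_in_decision(purchased_kg, measured_kg):
    """Classify one luggage check-in attempt against the purchased weight."""
    if measured_kg <= 0:
        return "no luggage"          # Test case 4: nothing on the scale
    if measured_kg > purchased_kg:
        return "overweight"          # Test cases 3 and 5: block check-in
    return "check-in successful"     # Test cases 1 and 2: under/equal weight

# Ali's purchased weight is 4 kg (Table 1); loads are taken from Table 2.
for load in (2.0, 4.0, 5.0, 0.0):
    print(load, check_in_decision(4.0, load))
```

In the real system the "overweight" branch leads to the add-on purchase activity, after which the purchased weight is updated and the same decision is re-evaluated.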

3 Simulation Results

The flight luggage weighing device is able to measure the weight of the luggage. The collected weight values are automatically updated to the database system during the communication session. The system analyzes the collected data and triggers notifications according to the allowed, i.e. purchased, weight of luggage.


Fig. 4 Flow chart of the flight luggage check-in system

3.1 Load Cell Testing

Table 3 shows the results obtained from the weight sensor when items of different known weights are placed on top of the sensor device. The percentage of error for each test case is minimal and almost insignificant, ranging from as low as 0.16% to a maximum of only 3.00%. Therefore, it can be concluded that the weight sensor device functions within the acceptable percentage-of-error range of ±5%.


Table 3 Load cell test results

Test case   Actual weight (kg)   Average measured weight (kg)   Percentage error (%)
1           0.5                  0.485                          3.00
2           1.1                  1.083                          1.55
3           2.0                  1.977                          1.15
4           4.0                  3.983                          0.43
5           5.0                  4.984                          0.32
6           8.0                  7.987                          0.16
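The Percentage error column of Table 3 is consistent with the usual definition |actual − measured| / actual × 100 (an assumption; the paper does not state the formula explicitly). A quick check:

```python
actual   = [0.5, 1.1, 2.0, 4.0, 5.0, 8.0]              # known test weights (kg)
measured = [0.485, 1.083, 1.977, 3.983, 4.984, 7.987]  # average readings (kg)

errors = [abs(a - m) / a * 100 for a, m in zip(actual, measured)]
# Matches Table 3 to within 0.01 percentage points:
# 3.00, 1.55, 1.15, 0.43, 0.32, 0.16
assert all(e <= 5.0 for e in errors)  # all within the ±5% acceptance range
```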

3.2 Android-Based User Interface

The user interface displayed to a passenger who wants to check in luggage is shown in Fig. 5. Figure 6 shows the simulation of Test Case 1, in which the luggage check-in process completes with no excess-weight issue. After that, the luggage is automatically transferred to the luggage cabin and the check-in process is completed. The simulation of Test Case 2, in which the luggage weight equals the purchased weight, also ran successfully. If the luggage automation device senses that the luggage weight exceeds the allowed weight, a notification is sent to inform the passenger that the luggage is overweight (Fig. 7(a)). The passenger then has the option either to proceed to the payment session to purchase add-on luggage or to cancel the luggage check-in process by pressing the logout button. On the other hand, if no

Fig. 5 Android application (a) Welcome homepage (b) User login page (c) User flight details


Fig. 6 Test case 1: Transition of the user interface before and after the passenger clicks the luggage check-in button

luggage is present on the weighing scale, a no-luggage message is prompted (Fig. 7(b)). Figure 8 displays the window for purchasing additional luggage weight. For simulation purposes, three add-on options are offered: 1 kg for RM100, 3 kg for RM150, and 5 kg for RM200. Once the purchase is done, the user presses the “Luggage Check-in” button to finish the add-on purchase process. The add-on purchase updates the purchased weight in the database with the sum of the previously purchased weight and the add-on weight. The layout then switches from the purchase add-on activity back to the user info layout to resume the luggage check-in process. Overall, the process comprises account login, weighing of the luggage, purchasing of extra luggage weight, scanning, printing and attaching the bag tag, account logout, and finally moving the luggage to storage. For the normal scenario with no add-on purchase, the estimated recorded time is only four minutes and thirty-seven seconds. The time taken is a rough estimate based on the user's phone clock, with some assumptions on the time needed to scan, print and


Fig. 7 (a) Test case 3: Luggage check-in is blocked when the measured weight luggage exceeds the purchased weight of the user (b) Test case 4: Luggage check-in is blocked when no luggage is detected

Fig. 8 Test case 5: Purchase add-on process when the luggage exceeds the weight limit

get the bag tag. In contrast, the excess-luggage scenario with a purchase add-on activity consumed an average time of six minutes and fifteen seconds, with the time assumed for the other processes held constant. In the current airport system, when excess luggage occurs, the passenger has to cancel the current luggage check-in process and go to the finance counter to purchase the add-on luggage. Then, the passenger returns to the


luggage check-in counter to continue with the check-in process. The passenger may spend considerable time shifting from counter to counter and queuing repeatedly. A passenger in a foreign country may need additional time to change money before purchasing the add-on luggage. By eliminating the step of moving to another counter to purchase extra luggage weight, as proposed in this project, the time taken to complete the luggage check-in process for that passenger is indirectly reduced. However, the results are also subject to the speed of the internet connection and the processing speed of the mobile device. For this research the computational cost is manageable, as the developed prototype system only uses the Python programming language (for programming the Raspberry Pi) and Java object-oriented programming for the Android system. The processing time depends very much on the type of smartphone used.

4 Conclusion

A hardware device has been successfully developed with the capability of obtaining real-time luggage weight measurements linked to the communication system. The objective is achieved by using a load cell with a maximum capacity of 10 kg connected to a Raspberry Pi synchronized with a real-time database. Calibration of the hardware device was performed, and the accuracy of the weight-sensing device is guaranteed within a tolerance of ±3%. An Android software application that monitors the luggage check-in process is designed to interact with the hardware sensing device. The system is simulated with five test cases as described in Sect. 3. From the results, the estimated time taken is six minutes and fifteen seconds with the purchase add-on procedure, whereas the normal luggage check-in process takes about four minutes and thirty-seven seconds. In conclusion, the real-time Android-based luggage check-in system is able to eliminate the one-step process of purchasing add-on luggage at another counter, thus reducing the time consumed during the luggage check-in procedure for that particular passenger.

Acknowledgement The project is partially supported by the USM RUI Grant: 1001/PELECT/8014093.

References 1. The statistics portal, check in method of airline passengers worldwide from 2015 to 2020, travel, tourism & hospitality. https://www.statista.com/statistics/493957/check-in-methodairline-passengers/. Accessed June 2018 2. Airasia Berhad: Passenger guide, departure. https://www.airasia.com. Accessed June 2018


3. Changi Airport Singapore: Fast and seamless travel (FAST), passenger guide. http://www.changiairport.com. Accessed Aug 2018 4. Joustra PE, Van Dijk NM (2001) Simulation of check-in at airports. In: Proceedings of the 2001 winter simulation conference, Arlington, VA, USA, vol 2, pp 1023–1028 5. Bevilacqua M, Ciarapica FE (2010) Analysis of check-in procedure using simulation: a case study. In: 2010 IEEE international conference on industrial engineering and engineering management, Macao, pp 1621–1625 6. Ma W, Kleinschmidt T, Fookes C, Yarlagadda PKDV (2011) Check-in processing: simulation of passengers with advanced traits. In: Proceedings of the 2011 winter simulation conference, 11–14 December, Phoenix, AZ, USA, pp 1783–1794 7. Trakoonsanti L (2016) A process simulation model of airline passenger check-in. Univ J Manage 4(5):265–276 8. Felix MM (2015) Micro-simulation of check-in operations: case study of Lisbon Airport's Terminal 1. Technical report, pp 1–10 9. Felix M, Reis V (2016) A micro-simulation model for assessing the performance of check-in airports. In: 2016 IEEE 19th international conference on intelligent transportation systems, 1–4 November, Rio de Janeiro, Brazil, pp 1–6 10. Perboli G, Musso S, Perfetti F, Trapani P (2014) Simulation of new policies for the baggage check in the security gates of the airports: the Logiscan case study. Procedia Soc Behav Sci 111:58–67 11. Hepler W (2003) Simulation of airport baggage screening. In: 2003 simulation workshop using simulation to evaluate impact of airport security, no. E-C060, Washington D.C., January 2003, pp 16–17 12. Miller E (2003) Modeling checked baggage requirements for Dallas/Fort Worth International Airport. In: 2003 simulation workshop using simulation to evaluate impact of airport security, no. E-C060, Washington D.C., January 2003, pp 21–22 13. Leone K, Liu R (2003) Measures of effectiveness for passenger baggage security screening. In: 2003 simulation workshop using simulation to evaluate impact of airport security, no. E-C060, Washington D.C., January 2003, pp 23–24 14. Pou S, Kunin D, Xiang D (2017) Reducing wait times at airport security. Technical report team 56632, pp 1–21 15. Mehri H, Djemel T, Kammoun H (2008) Solving of waiting lines models in the airport using queuing theory model and linear programming the practice case: A.I.M.H.B., Hal-00263072v2, pp 1–26 16. De-Lange R, Samoilovich I, Van der Rhee B (2012) Virtual queuing at airport security lanes. Eur J Oper Res 225(2013):153–165 17. Al-Sultan AT (2017) Simulation and optimization for modeling the passengers' check-in system at the airport terminal. Rev Integr Bus Econ Res 7(1):44–53 18. Simaiakis I, Balakrishnan H (2016) A queuing model of the airport departure process. Transp Sci 50(1):1–30 19. Liu XC (2018) Field investigation on characteristics of passenger flow in a Chinese hub airport terminal. Build Environ 133:51–61 20. Xavier F, Ricardo F (2016) How do airlines react to airport congestion? The role of networks. Reg Sci Urban Econ 56:73–81 21. Alvaro R, Fernando GC, Rosa AV, Javier PC, Rocio BM, Sergio CS (2019) Assessment of airport arrival congestion and delay: prediction and reliability. Transp Res Part C Emerg Technol 98:255–283 22. Rui CM, Pedro S (2010) Measuring the influence of congestion on efficiency in worldwide airports. J Air Transp Manage 16:334–336

Antenna Calibration in EMC Semi-anechoic Chamber Using Standard Antenna Method (SAM) and Standard Site Method (SSM) Abdulrahman Ahmed Ghaleb Amer, Syarfa Zahirah Sapuan, Nur Atikah Zulkefli, Nasimuddin Nasimuddin, Nabiah Binti Zinal, and Shipun Anuar Hamzah

Abstract Electromagnetic Compatibility (EMC) engineers should continuously self-check the antenna's condition and parameters, including Antenna Factor (AF) and gain, by calibrating the antenna. Therefore, several analyses have been carried out to compare SAM and SSM in an EMC testing lab. Based on the analysis, an antenna used inside a 3 m semi-anechoic chamber needs to be positioned 1.5 m above the ground plane to avoid reflection. The Antenna Factor (AF) results show good agreement with the manufacturer data. SAM is recommended as the calibration method in the semi-anechoic chamber because its percentage error is 5%, which is lower than that of SSM (18%). This is due to the site imperfection of the EMC lab: its uncertainty is up to ±4 dB, compared to a calibration test site where the allowed uncertainty is ±1 dB. Therefore, an absorber needs to be placed between the two antennas. In addition, the phase center of the reference antenna needs to be taken into consideration for a highly accurate AF.



Keywords Antenna calibration · Standard antenna method (SAM) · Standard site method (SSM) · Semi-anechoic chamber · Antenna factor · EMC

A. A. G. Amer (✉) · S. Z. Sapuan (✉) · N. A. Zulkefli · S. A. Hamzah Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, 86400 Parit Raja, Batu Pahat, Johor, Malaysia e-mail: [email protected] S. Z. Sapuan e-mail: [email protected] N. Nasimuddin Institute for Infocomm Research, A-STAR, Singapore, Singapore N. B. Zinal Centre for Diploma Studies, Universiti Tun Hussein Onn Malaysia, Parit Raja, Batu Pahat, Johor, Malaysia © Springer Nature Singapore Pte Ltd. 2021 Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_42


A. A. G. Amer et al.

1 Introduction

Electromagnetic Compatibility (EMC) consists of two aspects: radiated emission and immunity. Emission is the generation of electromagnetic energy, whether deliberate or accidental, by some source and its release into the environment. Immunity is the converse of susceptibility, being the ability of equipment to function correctly in the presence of EMI/RFI, with the discipline of "hardening" equipment being known equally as susceptibility or immunity engineering. During EMC radiated emission and immunity measurements, several antennas are used to sense or radiate the electromagnetic wave, and the result is later compared with an allowable limit set by an EMC standard. Therefore, the accuracy of the antennas used for EMC measurements is important to reduce the uncertainty of the result obtained. The antenna factor (AF) is a fundamental requirement for reliable EMC measurements in radiated emission and radiated immunity tests [1]. Usually, calibration of the antenna is conducted once a year, owing to the deviation of AF values or the expiry of the calibration date. Unfortunately, the antenna needs to be sent to a calibration laboratory, which increases cost. Most semi-anechoic chambers are designed for EMC measurement and are not suitable for antenna calibration. In addition, EMC engineers in the test lab need to perform an intermediate check once every two or three months to ensure the antenna provides accurate readings for radiated emission testing. Therefore, it is important to conduct a feasibility study of the EMC semi-anechoic chamber as an antenna calibration test site. SAM, SSM and SFM are used for AF determination in antenna calibration laboratories assigned by the National Institute of Standards and Technology (NIST); SAM and SSM are the most commonly used. Therefore, it is important to analyze the capability of an EMC lab for antenna calibration using SAM and SSM. Both methods are analysed to ensure the accuracy of antenna calibration in an EMC laboratory [2, 3].

2 Antenna Calibration

An Open Area Test Site (OATS) [4], a full anechoic chamber or a semi-anechoic chamber designed for radiated emission EMC measurement testing (i.e. a COMTS) does not usually meet the standard requirements for an antenna calibration test site (CALTS), because the uncertainty due to site imperfection of a COMTS is up to 4 dB, which is considered high for antenna calibration. Therefore, NPL designed an OATS as a calibration test site with a specific antenna range, close to the ideal site, which can be regarded as a national standard site against which measurements on other CALTS can be compared for consistency. Around 2000, NPL built a fully anechoic chamber with special pyramidal absorbers and ferrite to achieve low reflection for free-space AF [5]. The decision to move CALTS from outdoor OATS


to indoor shielded rooms was made to eliminate ambient RF interference and to enable accurate testing free from extreme weather [6, 7]. Since then, free-space AF in a calibrated fully anechoic chamber dedicated to antenna calibration has been accepted as a standard. Standard sites used for antenna calibration, or so-called antenna calibration test sites (CALTS), must first be calibrated to ensure they satisfy the standards provided by ANSI and CISPR. The key difference during the calibration of a CALTS is the use of theoretical computations and geometry-specific correction factors in [8], as compared to the validation of a reference test site (REFTS) using an analytically calculable dipole in CISPR 16-1-5 [8]. Two standard techniques commonly used in national laboratories for antenna calibration are: 1) the Standard Site Method (SSM); and 2) the Standard Antenna Method (SAM).

2.1 Standard Site Method (SSM)

Historically, SSM has been used to calibrate antennas in the frequency range of 30 to 1000 MHz and is based on the work of Albert Smith in the early 1980s [9]. Smith proposed that the SSM determination of AFs be based on site-attenuation measurements made on a near-ideal, open-field site. SSM is based on the far-field Friis transmission equation, with an added ray-tracing component for the ground bounce of the wave over the conducting ground plane used for these calibrations. Even though a ground plane is used, SSM nowadays aims to produce the free-space AF by removing the ground effect mathematically [10]. In the standard site method, three sets of site-attenuation measurements are taken under identical geometries using three different antennas, taken two at a time, as shown in Fig. 1. For the test set-up, the transmitting and receiving antennas were kept at heights of 2 m and 1–4 m, respectively. The distance between the

Fig. 1 Standard Site Method set up [5]


transmitting and receiving antennas was kept at 3 m. Three equations are associated with the three site-attenuation measurements, as in [10].
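In the simplest view of SSM, each pairwise site-attenuation measurement couples the two antenna factors involved, so the three measurements give three linear equations in three unknowns with a closed-form solution. The sketch below assumes the measured attenuations have already been normalised so that A_ij ≈ AF_i + AF_j in dB; the complete formulation in [9, 10] adds frequency- and geometry-dependent terms to each equation before solving.

```python
def ssm_antenna_factors(a12, a13, a23):
    """Solve AF_i + AF_j = A_ij (all in dB) for the three antenna factors.

    Simplified SSM: the full method (Smith [9]) includes frequency- and
    geometry-dependent correction terms in each of the three equations.
    """
    af1 = (a12 + a13 - a23) / 2.0
    af2 = (a12 + a23 - a13) / 2.0
    af3 = (a13 + a23 - a12) / 2.0
    return af1, af2, af3

# Any solution must reproduce the pairwise sums exactly (illustrative values).
af1, af2, af3 = ssm_antenna_factors(30.0, 32.0, 34.0)
assert (af1 + af2, af1 + af3, af2 + af3) == (30.0, 32.0, 34.0)
```

This structure explains why SSM needs a highly accurate site: any site imperfection enters all three equations and propagates into every solved AF.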

2.2 Standard Antenna Method (SAM)

According to ANSI C63.5, SAM is different from the reference antenna method (RAM). Commonly, researchers treat both methods as the same owing to their identical calibration procedure. However, in RAM the reference/standard antenna is a dipole with a well-matched balun, whereas in SAM the reference/standard antenna can be any accurate antenna with known AF, including a dipole. For this reason, a RAM is also a SAM, but a SAM is not always a RAM. To simplify the terms in this paper, SAM is used, and the reference antenna is referred to as a standard antenna [11, 12]. SAM antenna calibration usually uses a dipole with an accurately matched balun as the reference antenna. The AF of any other antenna may be derived by substitution against the reference antenna [13]. The AF measurement was carried out on a 3 m OATS, or a certified anechoic chamber, keeping a distance of 3 m between the transmitting and receiving antennas. ANSI C63.5 (2006) states that the two antennas must be 10 m apart, but a 3 m measurement has been accepted by CISPR 16-1-5 as a standard owing to its low-cost implementation. Both CISPR and ANSI have the same calibration techniques with different criteria, and either method is acceptable depending on the customer requirement [14]. The transmitting antenna, S1, was kept at a height of 2 m, and the receiving antenna, S2, at a height between 2.5 and 4 m, as shown in Fig. 2. It is not important to position the antenna at a signal maximum, but it is important to avoid the region around a null, where readings change rapidly with antenna position. Therefore, the use of an absorber on the floor of a semi-anechoic chamber, or the use of a fully anechoic chamber, is preferable for this method.

Fig. 2 SAM configuration



To calibrate the antenna against the reference dipole antenna, first measure the signal strength of the reference antenna at S2. Then, the reference antenna is substituted with the antenna under test (AUT), keeping the height and distance from S1 the same as for the reference antenna. The AF of the AUT is calculated from Eqs. (1)–(2) [8]:

V_ref + AF_ref = E    (1)

E − V_AUT = AF_AUT    (2)

where:
V_ref, V_AUT = received voltages (dBµV) of the reference antenna and the AUT
AF_ref, AF_AUT = antenna factors of the reference antenna and the AUT (dB/m)
E = electric field strength (dBµV/m)
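Equations (1)–(2) amount to a simple substitution in dB: the field established by the transmitter is inferred from the reference antenna, then reused for the AUT. A sketch with illustrative values only:

```python
def af_from_substitution(v_ref, af_ref, v_aut):
    """SAM substitution per Eqs. (1)-(2).

    v_ref, v_aut : received voltages (dBuV) of reference antenna and AUT
    af_ref       : known AF of the reference antenna (dB/m)
    returns      : AF of the AUT (dB/m)
    """
    e_field = v_ref + af_ref   # Eq. (1): field strength in dBuV/m
    return e_field - v_aut     # Eq. (2)

# Illustrative numbers: a lower received voltage implies a higher AF.
print(af_from_substitution(60.0, 12.0, 55.0))  # -> 17.0
```

Because only voltage differences enter the result, errors common to both measurements (cable loss, source drift) largely cancel, which is why the layout must stay identical between the two readings.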

3 Results and Discussion

3.1 Measurement Set-up

Figure 3 shows the standard antenna method measurement set-up. Three antennas are involved in this measurement: 1) a log-periodic antenna as transmitter; 2) a reference antenna, i.e. a calibrated antenna with a highly accurate AF (receiver); and 3) an antenna under calibration (AUC), which in this research is a direct-feed horn, a dipole, or a bi-log antenna (receiver). The transmitter and receiver must be placed 3 m apart. Both must be positioned 2 m above the floor; however, for self-calibration, CISPR 16-1-5 states that the antenna height depends on the chamber size and must be chosen to avoid any reflection. Therefore, height scanning of the receiver is not important in this method. The transmitter was

Fig. 3 Measurement set-up for SAM


connected to the signal generator, and the receiver (either the reference antenna or the AUC) was connected to a spectrum analyzer. The measurement starts by measuring the voltage received by the reference antenna. The reference antenna is then replaced with the AUC and the received voltage is measured again. During replacement of the reference antenna with the AUC, all cables, connections, the transmitter position and any other equipment must remain static and the same as in the previous layout to reduce uncertainties. The measurement set-up for SSM is almost the same as for SAM in Fig. 3, except that SSM is set up without an absorber. Three antennas are involved, taken in pairs. The transmitter and receiver must be placed 3 m apart, both positioned 2 m above the floor. For each frequency, −20 dBm was fed to the transmitting antenna, and the receiving antenna height was scanned from 1 to 4 m to record the maximum received signal. All frequency points are measured with the above procedure for the first pair, transmitter 1 – receiver 2, and the procedure is repeated for the remaining antenna pairs. Then the antenna is replaced by the reference antenna. The resulting AF is calculated using the equations in [9].

3.2 Antenna Height Analysis

The AUT height was analyzed using the SSM method without an absorber. For SAM, the literature states that the appropriate antenna height in the semi-anechoic chamber for antenna-factor measurement is 1.5 m; the author notes that both the reference antenna and the antenna under test should be positioned at the same height of 1.5 m from the ground plane to achieve reliable AF data [15]. For the SSM measurement, the absorber was removed from the ground plane to obtain the maximum signal from the combination of direct and reflected signals. The transmitter was fixed at 1.5 m from the ground plane for all measurements, while the receiver was positioned at 1 to 4 m from the ground plane, as shown in Fig. 4.

Fig. 4 SSM measurement for antenna height analysis


Figure 5 shows the Antenna Factor (AF) versus frequency for various antenna heights. The measurement results are compared with the manufacturer data from the calibration certificate of the AUC. The AF at an antenna height of 1.5 m gives good agreement with the manufacturer data because this height is located in the middle of the semi-anechoic chamber; the maximum received power is therefore measured from the direct signal and the signal reflected from the ground plane, as specified in the standard SSM procedure. However, at a height of 4 m, the measured voltage shows the highest deviation from the manufacturer data. This is because the AUC is too close to the ceiling, which carries no absorber, as shown in Fig. 6, so reflection from the ceiling at a 4 m antenna height is unavoidable. In conclusion, the effect of antenna height on AF in the semi-anechoic chamber has been analysed: any measurement, including antenna analyses such as gain and radiation pattern, needs to be conducted at an antenna height of 1.5 m from the ground plane, where the lowest reflection occurs in the EMC semi-anechoic chamber testing lab.

3.3 Analysis of the Antenna Phase Center

The phase center is an important parameter in determining the AF. EMC tests usually do not emphasize the position of the phase center of the antenna being used, but for antenna calibration, several studies have been conducted using a horn antenna as the reference antenna for SAM. Therefore, an absorber is used between the transmitter and receiver.

Fig. 5 AF vs frequency for various antenna height


Fig. 6 SSM measurement at 4 m antenna height

Fig. 7 Horn antenna measurement without considering a phase center position (Front)


Two different analyses regarding the phase centre of the horn antenna were undertaken. The horn antenna is the reference antenna, while a dipole antenna is the AUC. For the first measurement, the 3 m distance was measured to the front edge of the horn antenna, as shown in Fig. 7; in this case, the actual distance between the horn antenna (reference antenna) and the transmitter was greater than 3 m. For the second measurement, the horn antenna was positioned so that its phase centre was exactly 3 m from the transmitter, as shown in Fig. 8. As a result, the horn antenna with the exact phase-centre position (Center) is in good agreement with the simulation, compared to the approximate distance that does not consider the phase centre, as shown in Fig. 9. The horn antenna must be positioned exactly at its phase centre to ensure that the 3 m measurement distance is achieved accurately. The phase centre is therefore important and must be taken into consideration, because the deviation from the simulation for the horn antenna was quite high, up to 3 dB.


Fig. 8 Horn antenna measurement with correct phase center position (Center)

Fig. 9 AF for dipole antenna and horn as a reference antenna

4 Comparison Between SAM and SSM in a Semi-anechoic Chamber

Figure 9 shows the AF result for a dipole antenna using SAM. In this measurement, the horn is used as the reference antenna and the dipole is the AUC. The result indicates that the AF for the dipole obtained using SAM agrees well with the manufacturer data from the calibration certificate; an absorber needs to be placed between the transmitter and receiver. Figure 10 shows the AF result for a dipole antenna (AUC) using SSM. The result indicates that the AF obtained without an absorber for the dipole antenna also agrees well with the manufacturer data. Based on this result, SSM requires a reflected


Fig. 10 AF result for Dipole antenna using SSM

Fig. 11 Percentage error between SSM and SAM

ground plane to ensure that the maximum transmitted wave from the direct and reflected signals is received accurately. Figure 11 displays the percentage error of the dipole antenna using SAM and SSM between 500 and 1000 MHz. The percentage error of the dipole antenna is 5% using SAM, whereas it is 18% using SSM. Therefore, for accurate AF calibration in a semi-anechoic chamber (EMC testing laboratory), it is preferable to use SAM instead of SSM. Evidently, SSM requires a highly accurate measurement site, whereas for SAM the site performance is less critical, but a high-accuracy antenna with known AF is required.
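The paper does not state how the percentage error in Fig. 11 is computed; one common choice consistent with comparing measured AF against a certificate is the mean absolute percentage deviation. A hedged sketch of that metric (function name and data are illustrative, not from the paper):

```python
def mean_percent_error(af_measured, af_certificate):
    """Mean absolute percentage deviation of measured AF values from the
    manufacturer's calibration-certificate AF values (both in dB/m)."""
    assert len(af_measured) == len(af_certificate)
    devs = [abs(m - c) / abs(c) for m, c in zip(af_measured, af_certificate)]
    return 100.0 * sum(devs) / len(devs)

# Illustrative: a 5% mean deviation, the level quoted for SAM in this chamber.
print(mean_percent_error([10.5, 21.0], [10.0, 20.0]))  # -> 5.0
```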


5 Conclusion

Based on the studies in this paper, it can be concluded that SAM is the best method for antenna calibration in a semi-anechoic chamber (EMC test site) because it gives a lower error compared with SSM. SSM is not recommended because it requires a highly accurate calibration test site with low uncertainties [16]. The semi-anechoic chamber of an EMC testing lab, however, has a high site-imperfection uncertainty of up to ±4 dB, compared to a calibration test site, where the allowed uncertainty is around ±1 dB [17].

Acknowledgements The authors would like to acknowledge Universiti Tun Hussein Onn Malaysia (UTHM) for funding this research under TIER 1 research grant H150.

References 1. Betta G, Capriglione D, Carobbi CFM, Migliore MD (2011) The accurate calibration of EMC antennas in compact chambers — measurements and uncertainty evaluations. Comput Stand Interfaces 33:201–205 2. Chen Z (2015) An improved method for simultaneous calibrations of gain, phase center and near boresight patterns for Log-Periodic Dipole Arrays. In: 2015 9th European Conference on Antennas and Propagation, EuCAP 2015, pp 1–5 3. Lim JH, Lee BW, Choi YJ, Kim HB (2017) A study of measuring a commercial antenna gain using an R-SAM. In: 2017 Asia-Pacific International Symposium on Electromagnetic Compatibility, pp 131–133 4. Meng D, Liu X, Dabo L (2015) Research on unwanted reflections in an OATS for precise omni antenna measurement. In: 2015 IEEE 6th international symposium on microwave, antenna, propagation, and EMC technologies (MAPE), pp 245–249 5. Dawson L, Clegg J, Porter SJ, Dawson JF, Alexander MJ (2002) The use of genetic algorithms to maximize the performance of a partially lined screened room. IEEE Trans Electromagn Compat 44:233–242 6. Seki Y, et al (2018) Antenna calibration in anechoic chambers (30 MHz to 1 GHz): new approach to antenna calibration. In: 2018 IEEE international symposium on electromagnetic compatibility and 2018 IEEE asia-pacific symposium on electromagnetic compatibility (EMC/APEMC), pp 1230–1235 7. Sapuan SZ, Jenu MZM (2016) Time domain analysis of direct-feed biconical antenna for antenna calibration and EMC measurement. In: 2016 IEEE Asia-pacific conference on applied electromagnetics, APACE 2016, pp 198–201 8. Eser S, Sevgi L (2010) Open-area test site (OATS) calibration. IEEE Antennas Propag Mag 52:204–212 9. Smith AA (1982) Standard-site method for determining antenna factors. IEEE Trans Electromagn Compat EMC-24(3):316–322 10. Standards, C (2008) Calibration standards in the united states. 13 11. Jeong MJ, Lim JH, Park JW, Park SW, Kim N (2019) Validation of compact-standard antenna method for antenna calibration above 1 GHz. J Electromagn Eng Sci 19:89–95 12. Lim JH, Lee BW, Park SH, Choi YJ, Seo MW (2016) A study of standard antenna method using two homogeneous horn antennas. In: 2016 URSI asia-pacific radio science conference, URSI AP-RASC 2016, pp 793–795


13. Chand S (2003) Calibration of antennae on 3 m OATS. In: Proceedings of the international conference on electromagnetic interference and compatibility, pp 375–379 14. Kaketa S, Fujii K, Sugiura A, Matsumoto Y, Yamanaka Y (2003) A novel method for EMI antenna calibration on a metal ground plane. In: 2003 IEEE international symposium on electromagnetic compatibility, 2003. EMC 2003, vol 1, pp 66–69 15. Sapuan SZ, Jenu MZM, Kazemipour A (2014) Issue on calibration of direct feed biconical antenna in a semi-anechoic chamber using standard antenna method. In: Advanced materials research, vol 903, pp 273–278 16. Alexander MJ, et al (2004) Calibration and use of antennas, focusing on EMC applications. Director 17. Fujii K, Alexander M, Sugiura A (2012) Uncertainty analysis for three antenna method and standard antenna method. In: IEEE international symposium on electromagnetic compatibility, pp 702–707

An Automatic Driver Assistant Based on Intention Detecting Using EEG Signal Reza Amini Gougeh, Tohid Yousefi Rezaii, and Ali Farzamnia

Abstract Vehicle safety improves each year, and brain signals have recently been used to assist drivers. Attempting a movement produces electrical signals in specific regions of the brain. We developed a system based on motor intention to assist drivers and prevent car accidents; the main objective of this work is to improve the reaction time to external hazards. Motor intention was recorded with the 16 channels of a portable device called Open-BCI. Feature extraction was performed with common spatial patterns (CSP), a well-known method in motor imagery based brain-computer interface (BCI) systems; specifically, an enhanced variant of CSP, the strong uncorrelating transform complex common spatial patterns (SUTCCSP), was applied to the preprocessed data. Given the nonlinear nature of the electroencephalogram (EEG), a support vector machine (SVM) with the kernel trick was used to classify the features into three classes: left, right and brake. With this SVM, commands can be predicted 500 ms earlier, with an average system accuracy of 94.6%.

Keywords: Intentional EEG · Driving assistant · BCI

1 Introduction

Physiological signals have been widely used in clinical trials to evaluate patients. Small-voltage sensing systems developed over the past years now allow us to sense and amplify signals of the brain, called the electroencephalogram (EEG), which is mainly used to detect mental diseases. The first link between brain and computer was developed in 1970. Different types of brain-computer interface (BCI) systems have been proposed depending on the target brain areas. We used motor intention signals;

R. A. Gougeh, T. Y. Rezaii: Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
A. Farzamnia: Faculty of Engineering, Universiti Malaysia Sabah, Kota Kinabalu, Sabah, Malaysia. e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021. Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_43


when intention occurs, characteristic signals appear in specific regions of the brain. Brain waves arise before the actual movement of the body, so the intention of an action can be predicted just before it happens. Each year, thousands of people die in car accidents, and reducing this number is a priority for governments and automakers. Nowadays, manufacturers use radar systems (for automatic braking), drowsiness detectors and EEG signals to protect occupants. Conventional systems use external sensors such as sonar and video cameras to obtain and analyze information from the vehicle and its surroundings and to react to dangerous situations [1]. Several behavioral experiments have shown that drowsiness can seriously impair driving performance [2]. To recognize distraction while driving, Sigari et al. [3] proposed a method based on processing the driver's face image that evaluates distraction symptoms in the eye region. Reacting to emergency situations is the final outcome of cognitive and peripheral processes [4]. Haufe et al. [5] used non-invasive EEG to show that event-related potentials (ERP) recorded in a simulated driving environment can distinguish emergency braking intention from non-braking driving. Kim et al. [6] suggest that emergency situations are characterized by specific neural patterns of sensory perception and processing, as well as motor preparation and execution, which can be utilized by braking assistance systems. Scientific evidence has shown that the attention level during the driving task is influenced by stress, workload and fatigue, which tend to increase the braking reaction time [7]. Teng and Bi [8] analyzed EEG signals for the early detection of emergency braking situations in order to reduce reaction time. More recently, Alyasseri et al. [9] proposed a method for EEG signal denoising based on a multi-objective Flower Pollination Algorithm with the wavelet transform. Nguyen et al. [10] used CSP to extract discriminant features from multi-class data and classified them with a fuzzy logic system; this method outperforms competing classifiers, including k-nearest neighbor, but its learning process is generally time consuming.

In this paper, unlike most EEG-based assistants, which simply evaluate mental state, our objective is to use EEG to detect the driver's decision among three states: turning left, turning right and braking. An Open-BCI headset is used to acquire the EEG data, and machine learning techniques are used to find a discriminant pattern of motor intention. Common spatial patterns (CSP) is a well-known preprocessing method for extracting features in motor imagery systems, and we use it in the proposed intention detection system. Several extensions have been developed to improve the performance of CSP. Standard CSP does not involve phase information, but the strong uncorrelating transform complex CSP (SUTCCSP) is an improved variant shown to increase accuracy by 4% compared to standard CSP [11]. Support vector machines were created by Vapnik based on statistical learning theory and can handle small sample sizes, nonlinear relationships and multi-class problems [12]. The main idea is to separate the data with a hyperplane; training the classifier is an optimization problem. If the data are not linearly separable, a transformation can map them into a higher-dimensional space in which they do separate linearly; the problem thus converts from nonlinear to linear by changing the kernel. This approach was named the kernel trick by Aizerman et al. in 1964 [13]. There are two ways to obtain a multiclass SVM: one-versus-all distinguishes one label from the rest, while one-versus-one distinguishes between every pair of classes. Research shows that the one-versus-one method is suitable for practical use [5].
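The kernel idea can be illustrated with a minimal sketch (not part of the paper's pipeline): XOR-style data are not linearly separable in the original 2-D space, but an explicit feature map that appends the product x1·x2 makes them separable by a plane in 3-D.

```python
import numpy as np

# XOR-style data: no single line in 2-D separates the two classes.
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# Explicit feature map phi(x) = (x1, x2, x1*x2): the third coordinate
# alone separates the classes (x1*x2 = +1 for class 0, -1 for class 1).
phi = np.column_stack([X, X[:, 0] * X[:, 1]])

# A linear rule in the mapped space: the sign of -x1*x2 predicts the label.
pred = (phi[:, 2] < 0).astype(int)
print(pred)  # → [0 1 1 0], matching y exactly
```

A kernel SVM performs this mapping implicitly through the kernel function, without ever constructing the higher-dimensional coordinates.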

2 Materials and Methods

The procedure of our work is illustrated in Fig. 1. Each stage is explained below.

2.1 Data Acquisition

Twenty students aged 20–25 took part in this study on a voluntary basis, with an equal gender distribution. They were informed via instruction files and screened physically and mentally with a questionnaire before the experiment. The experiment was run in various time slots covering both morning and evening. The examination system consisted of a screen, a 3-button controller, the Open-BCI headset and a personal computer. The experiment has three states (left, right and brake), illustrated in Fig. 2, and the recorded data were labeled with the 3-button controller box. EEG signals were recorded during the experiment with the 16-channel Open-BCI headset from the frontal, temporal, parietal and occipital lobes (Fp1, Fp2, F7, F3, F4, F8, T3, C3, C4, T4, T5, P3, P4, T6, O1, O2) in accordance with the international 10–20 system, as shown in Fig. 3. The reference electrodes were placed on the left and right ears. The Open-BCI headset has its own dry electrodes with acceptable impedance. To improve the adaptation of the interface to each subject, eight types of noise/interference data (blinking, eye up/down movement, eye left/right movement, clenching, tongue movement and relaxation) were recorded before the main tasks began. Subjects sat in front of a monitor with their dominant hand on the controller box. The fixation screen was shown for 5 s; the participant

Fig. 1 Block diagram of the proposed work: Data Acquisition → Preprocessing → SUTCCSP Feature Extraction → SVM with kernel trick classifier


Fig. 2 Test equipment used in the study

Fig. 3 EEG channel names and locations (Baker et al. [14])

was then prepared to take the intention task. Next, one of the left, right or brake marks appeared on screen for 3 s and the subject was asked to press the related key on the box. Finally, a blank screen was shown and the subject was given a 3-s break (Fig. 4). This procedure was repeated 10 times per run, and 4 runs were performed each day, with a 3-min break between runs. The platform was based on OpenSesame and its Python core. The procedure was repeated on a second day, giving 80 trials over 8 runs per subject. Two days before the experiment began, participants were asked to control their sleep and food intake because of their effect on EEG signals [16]. On the experiment day, they were asked to fill out a questionnaire, and an Excel document was created to store each subject's data. Additionally, they were requested to report false actions.


Fig. 4 Procedure of signal acquisition

2.2 Preprocessing

In the preprocessing stage, with the raw signal (including labels, latencies and channels) available, we extract each trial with a constant timeframe of 3000 ms. Each extracted trial is then band-pass filtered between 8 and 30 Hz. Furthermore, we extract the different types of noise data recorded at the beginning of the experiment: blinking, eye up/down movement, eye left/right movement, clenching, tongue movement and relaxation.
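As a minimal sketch of this filtering step (the paper does not state its filter design or the headset's sampling rate, so a 250 Hz rate and a simple FFT-mask filter are assumed here for illustration), the 8–30 Hz band-pass over one 3000 ms trial could look like:

```python
import numpy as np

def bandpass_fft(trial, fs, lo=8.0, hi=30.0):
    """Zero out frequency components outside [lo, hi] Hz.
    Illustrative FFT-mask band-pass, not the authors' filter."""
    spec = np.fft.rfft(trial)
    freqs = np.fft.rfftfreq(trial.size, d=1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, n=trial.size)

fs = 250                                 # assumed sampling rate
t = np.arange(int(3.0 * fs)) / fs        # one 3000 ms trial
x = np.sin(2*np.pi*12*t) + np.sin(2*np.pi*50*t)  # 12 Hz mu-band + 50 Hz line noise
y = bandpass_fft(x, fs)
# the 50 Hz line-noise component is removed; the 12 Hz component survives
```

In practice a causal IIR or FIR filter would be used for online operation; the FFT mask is only the shortest way to show the pass band.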

2.3 Data Analysis

The CSP algorithm is an efficient method for extracting features from different classes. It finds vectors that maximize the variance for one class while simultaneously minimizing the variance for the other. A complex version of CSP uses the covariance matrix that maintains the power-sum information of the real and imaginary parts of the complex-valued data. Another complex-valued CSP algorithm, analytic signal-based CSP (ACSP), was proposed by Falzon et al. [17] to discriminate different mental tasks. However, given that Hilbert-transformed analytic signals can only produce circular signals (rotation-invariant probability distribution) and that physiological signals are improper (mismatch of power between different channel data), the augmented complex CSP (ACCSP) was introduced to fully exploit the second-order statistics of noncircular complex vectors [18]. SUTCCSP is an improved version of ACCSP which provides a higher classification rate than the standard CSP algorithm [19]. Suppose $z$ is a complex-valued random vector:

$$z = z_r + j z_i, \quad j = \sqrt{-1} \qquad (1)$$

For circular data, $z_r$ and $z_i$ are uncorrelated and the pseudo-covariance equals zero, but most real signals are non-circular. Applying the SUT makes multichannel complex data uncorrelated. Let $Z_a$ and $Z_b$ be zero-mean complex-valued matrices with entries of the form (1):

$$C_a = E[Z_a Z_a^H], \quad C_b = E[Z_b Z_b^H], \qquad P_a = E[Z_a Z_a^T], \quad P_b = E[Z_b Z_b^T] \qquad (2)$$

where $a$ and $b$ denote the two classes, $C$ and $P$ stand for covariance and pseudo-covariance respectively, $E[\cdot]$ is the statistical expectation operator, $(\cdot)^H$ is the Hermitian transpose and $(\cdot)^T$ is the transpose. $Z_a$ and $Z_b$ are $N \times S$ matrices, with $N = 16/2$ complex channels and $S$ the sample size. So, for $k \in \{a, b\}$:

$$C_k = E[Z_k Z_k^H], \quad P_k = E[Z_k Z_k^T] \qquad (3)$$

We can define the composite covariance $C_c$ and pseudo-covariance $P_c$ matrices:

$$C_c = \sum_k C_k = E[Z_a Z_a^H] + E[Z_b Z_b^H], \qquad P_c = \sum_k P_k = E[Z_a Z_a^T] + E[Z_b Z_b^T] \qquad (4)$$

Then, with $U_c$ and $\Lambda_c$ the eigenvectors and eigenvalues of $C_c$ respectively:

$$C_c = U_c \Lambda_c U_c^H \qquad (5)$$

Each eigenvector in $U_c$ corresponds to a diagonal entry of the eigenvalue matrix $\Lambda_c$. Whitening $C_c$ with the whitening matrix $G = \sqrt{\Lambda_c^{-1}}\, U_c^H$ gives

$$G C_c G^H = I, \qquad \tilde{P}_c = G P_c G^T = D K D^T \qquad (6)$$

where $I$ denotes the identity matrix and $D$ and $K$ are yielded by the symmetric matrix $\tilde{P}_c$; the factorization $D K D^T$ is the symmetric SVD (Takagi factorization). The SUT then takes the form

$$Q = D^H G \qquad (7)$$

Now the covariance and pseudo-covariance matrices can be diagonalized simultaneously:

$$Q C_c Q^H = I, \qquad Q P_c Q^T = K = Q P_a Q^T + Q P_b Q^T \qquad (8)$$

If we define $S_a = Q C_a Q^H$ and $S_b = Q C_b Q^H$, they are jointly diagonalized by an eigenvector matrix $B$:

$$K_a = B^{-1} S_a B, \qquad K_b = B^{-1} S_b B \qquad (9)$$

Multiplying Eq. (8) by $\sqrt{K^{-1}}$ and $\sqrt{K^{-T}}$:

$$\sqrt{K^{-1}}\, Q P_c Q^T \sqrt{K^{-T}} = \sqrt{K^{-1}}\, Q P_a Q^T \sqrt{K^{-T}} + \sqrt{K^{-1}}\, Q P_b Q^T \sqrt{K^{-T}} = I \qquad (10)$$

where $K$ is a diagonal matrix. Defining $\hat{Q} = K^{-1/2} Q = K^{-1/2} D^H G$ and $\hat{S}_a = \hat{Q} P_a \hat{Q}^T$, $\hat{S}_b = \hat{Q} P_b \hat{Q}^T$ with $\hat{S}_a + \hat{S}_b = I$, the objective is to find the eigenvectors $\hat{B}$ of the pseudo-covariance:

$$\hat{K}_a = \hat{B}^H \hat{S}_a \hat{B}, \qquad \hat{K}_b = \hat{B}^H \hat{S}_b \hat{B} \qquad (11)$$

So we have diagonalized $P_c$ and $C_c$, and we obtain the spatial filters

$$W^H = B^H Q, \qquad \hat{W}^H = \hat{B}^H \hat{Q} \qquad (12)$$

The spatially filtered vectors are calculated as

$$V = W^H Z, \qquad \hat{V} = \hat{W}^H Z \qquad (13)$$

With $N$ data channels, the first $m$ and last $m$ rows are retained:

$$\bar{V} = [v_1, \ldots, v_m, v_{N-m+1}, \ldots, v_N]^T = [\bar{v}_1, \ldots, \bar{v}_{2m}]^T \qquad (14)$$

With $\bar{v}_p$ the $p$-th row of $\bar{V}$, the SUTCCSP features are calculated as

$$f_p = \log \left( \frac{\operatorname{var}(\bar{v}_p)}{\sum_{i=1}^{2m} \operatorname{var}(\bar{v}_i)} \right) \qquad (15)$$
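As a small numerical sketch of the log-variance feature in Eq. (15), the computation on one already spatially filtered trial looks as follows (the random matrix here merely stands in for real filtered EEG):

```python
import numpy as np

def logvar_features(V_bar):
    """Log-variance features of Eq. (15): V_bar holds the 2m retained
    spatially filtered components (rows) of a single trial."""
    v = V_bar.var(axis=1)          # variance of each retained component
    return np.log(v / v.sum())     # normalize by the total variance, then log

rng = np.random.default_rng(0)
V_bar = rng.standard_normal((4, 750))   # 2m = 4 components, 3 s at 250 Hz (assumed)
f = logvar_features(V_bar)
assert f.shape == (4,)
```

Because the variance ratios sum to one, every feature is negative and exp(f) recovers the normalized variances.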

where $p = 1, \ldots, 2m$; the parameter $m$ selects how many rows of $\operatorname{var}(\bar{v}_i)$ are retained from each end. After the features are obtained, a nonlinear classifier is needed to sort them. The support vector machine (SVM) is a relatively low-cost classifier widely used in BCI systems. An SVM uses a set of mathematical functions defined by the kernel. To use an SVM on data that are not linearly separable, the kernel trick can be applied: the nonlinear data are mapped into a higher-dimensional space where a hyperplane can separate the samples. We used the method of Li et al. [20]. As the SVM is a binary classifier, the one-versus-one method was used to separate the classes, following Hsu and Lin [5]. It constructs one classifier per pair of classes, so for N classes, N(N − 1)/2 SVMs must be trained. This method needs more SVMs than one-versus-all, but the training set of each SVM is smaller, since it only includes two-class data. In the test phase, each sample is applied to all of the SVMs. With three classes i, j and k, if the SVM for classes i and j classifies the test sample as class i, the vote goes to i; otherwise it goes to j. At prediction time, the class that receives the most votes is selected; when two classes receive an equal number of votes, the class with the lower index is selected.
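The vote counting just described can be sketched in a few lines; the pairwise decisions below are hypothetical classifier outputs, not results from the paper:

```python
from itertools import combinations

def ovo_vote(pairwise_decisions, n_classes=3):
    """Majority vote over one-versus-one decisions.
    pairwise_decisions maps each class pair (i, j) to the winning class;
    ties are broken by the lowest class index, as in the text."""
    votes = [0] * n_classes
    for (i, j), winner in pairwise_decisions.items():
        votes[winner] += 1
    # max() scans classes in ascending order, so the first (lowest-index)
    # class wins on a tie
    return max(range(n_classes), key=lambda c: votes[c])

# 3 classes (left=0, right=1, brake=2) need 3*(3-1)/2 = 3 classifiers
pairs = list(combinations(range(3), 2))
assert len(pairs) == 3

decisions = {(0, 1): 0, (0, 2): 0, (1, 2): 2}   # hypothetical outputs
print(ovo_vote(decisions))   # → 0 (class 0 wins with 2 votes)
```

With all three pairwise votes split evenly, the tie-break returns class 0, the lowest index.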

3 Simulation Results and Discussion

In our previous experiments, the systems used AC-powered hardware, so a notch filter was required to reduce line noise (50 or 60 Hz). Furthermore, gel electrodes were used, which have low impedance but need washing after the experiment. In this study dry electrodes were used instead, so the subjects are comfortable while signal quality remains acceptable. We also used the Open-BCI with DC power, which reduces the line noise. Its platform is based on Python and Java and stores all 16 channels with minimal latency, especially in WiFi mode (Table 1). A minor issue in this study was the weight of the headset and the sharpness of the electrodes, which annoyed some subjects; during the 3-min breaks, we loosened the electrodes. The classification rate of the conventional method is provided in Table 2. Classification was repeated 100 times for each subject. To train and test the system, the acquired data were split in an 80–20 ratio: 80% for training and 20% for testing.
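The repeated 80/20 split can be sketched as a random permutation of trial indices (an illustrative rendition; the paper does not describe its exact splitting code):

```python
import numpy as np

def split_80_20(n_trials, rng):
    """Random 80/20 train/test split of trial indices, repeated per
    classification run as described in the text."""
    idx = rng.permutation(n_trials)
    cut = int(0.8 * n_trials)
    return idx[:cut], idx[cut:]

rng = np.random.default_rng(42)
train, test = split_80_20(80, rng)   # 80 trials per subject
assert len(train) == 64 and len(test) == 16
assert set(train).isdisjoint(test)
```

Repeating the split 100 times and averaging the resulting accuracies reduces the variance caused by any single random partition.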


Table 1 Scheduling of procedure (adapted from Cho et al. [15])

Number  Task                        Duration (min:s)
1       Filling questionnaire       5
2       Open-BCI headset placement  7
3       Acquisition of noise data   2
4       Run 1                       7:20
5       Relaxation                  3
6       Run 2                       7:20
7       Relaxation                  3
8       Run 3                       7:20
9       Relaxation                  3
10      Run 4                       7:20
11      Relaxation                  3
12      Removing Open-BCI headset   5
        Sum                         60:20

Table 2 Results of classification without kernel-trick SVM

Method    Classification rate (%)
CSP       69.77
SUTCCSP   71.62

Table 3 Overall accuracy (%) with the one-versus-one method

m   Statistic   CSP     SUTCCSP
1   Average     83      94.6
1   Maximum     89.3    96
2   Average     79.05   90.6
2   Maximum     86      92.1

The overall accuracy of the multiclass SVM with the one-versus-one method and the kernel trick is provided in Table 3 for each value of the variable m; an accuracy rate of 94.6% is achieved with the proposed method. The improvement over the conventional SVM results (Table 2) is quite obvious. Table 4 reports linear discriminant analysis (LDA) with CSP and SUTCCSP. Although LDA can be used in multiclass problems with the one-versus-one method, its main disadvantage is its linearity, which can yield insufficient results on nonlinear EEG data [21]. Therefore, the optimized SVM is used in our research to obtain higher accuracy and reliability.

Table 4 Overall accuracy of classification with LDA

Method    Classification rate (%)
CSP       78.3
SUTCCSP   80.7

4 Conclusion

A limited field of vision, high speed, careless driving and fatigue are the most common causes of accidents. To reduce the number of injuries, we have two options: prepare drivers to obey the rules, or prepare cars to take less damage from an incident. One of the pioneering technologies in this field uses EEG signals to brake more quickly. Our work is a step toward a system with maximum accuracy. We showed that the SUTCCSP method combined with an SVM yields more accurate results than previous combinations. The proposed method starts with a good-quality data acquisition system. In the second stage, conventional preprocessing methods prepare the data for the main stage. In the most important part of the work, SUTCCSP extracts the features of our classes, and an SVM with different kernels helps us reach a reliable accuracy. Several approaches have been suggested for multiclass classification with SVMs, and here we adopted the one-versus-one approach. We hope that using this approach in actual systems will have a positive effect on passenger safety.

Acknowledgement The authors appreciate those who contributed to make this research successful. This research is supported by the Center for Research and Innovation (PPPI) and the Faculty of Engineering, Universiti Malaysia Sabah (UMS) under Research Grant SBK0393-2018.

References

1. Shaout A, Colella D, Awad S (2011) Advanced driver assistance systems - past, present and future. In: 2011 seventh international computer engineering conference (ICENCO 2011), Giza, pp 72–82
2. Liu CC, Hosking SG, Lenné MG (2009) Predicting driver drowsiness using vehicle measures: recent insights and future challenges. J Saf Res 40(4):239–245
3. Sigari MH, Fathy M, Soryani M (2013) A driver face monitoring system for fatigue and distraction detection. Int J Veh Technol 2013:1–11
4. Sherk H, Fowler GA (2001) Chapter 16 Neural analysis of visual information during locomotion. Prog Brain Res 134:247–264
5. Haufe S, Kim JW, Kim IH, Sonnleitner A, Schrauf M, Curio G, Blankertz B (2014) Electrophysiology-based detection of emergency braking intention in real-world driving. J Neural Eng 11(5):056011
6. Kim IH, Kim JW, Haufe S, Lee SW (2014) Detection of braking intention in diverse situations during simulated driving based on EEG feature combination. J Neural Eng 12(1):016001


7. Borghini G, Astolfi L, Vecchiato G, Mattia D, Babiloni F (2014) Measuring neurophysiological signals in aircraft pilots and car drivers for the assessment of mental workload, fatigue and drowsiness. Neurosci Biobehav Rev 44:58–75
8. Teng T, Bi L (2014) A novel EEG-based detection method of emergency situations for assistive vehicles. In: 2017 seventh international conference on information science and technology (ICIST), IEEE, pp 335–339
9. Alyasseri ZAA, Khader AT, Al-Betar MA, Papa JP, Ahmad Alomari O (2018) EEG-based person authentication using multi-objective flower pollination algorithm. In: 2018 IEEE congress on evolutionary computation (CEC), IEEE, pp 1–8
10. Nguyen T, Hettiarachchi I, Khatami A, Gordon-Brown L, Lim CP, Nahavandi S (2018) Classification of multi-class BCI data by common spatial pattern and fuzzy system. IEEE Access 6:27873–27884
11. Kim Y, Park C (2015) Strong uncorrelated transform applied to spatially distant channel EEG data. IEIE Trans Smart Process Comput 4(2):97–102
12. Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20(3):273–297
13. Aizerman MA (1964) Theoretical foundations of the potential function method in pattern recognition learning. Autom Remote Control 25:821–837
14. Baker M, Akrofi K, Schiffer R, O'Boyle MW (2008) EEG patterns in mild cognitive impairment (MCI) patients. Open Neuroimaging J 2:52
15. Cho H, Ahn M, Ahn S, Kwon M, Jun SC (2017) EEG datasets for motor imagery brain-computer interface. Gigascience 1(7):1–8
16. Hoffman LD, Polich J (1998) EEG, ERPs and food consumption. Biol Psychol 48(2):139–151
17. Falzon O, Camilleri KP, Muscat J (2010) Complex-valued spatial filters for task discrimination. In: 2010 annual international conference of the IEEE engineering in medicine and biology, IEEE, pp 4707–4710
18. Park C, Took CC, Mandic DP (2013) Augmented complex common spatial patterns for classification of noncircular EEG from motor imagery tasks. IEEE Trans Neural Syst Rehabil Eng 22(1):1–10
19. Kim Y, Ryu J, Kim KK, Took CC, Mandic DP, Park C (2016) Motor imagery classification using mu and beta rhythms of EEG with strong uncorrelating transform based complex common spatial patterns. Comput Intell Neurosci 2016:1
20. Li X, Chen X, Yan Y, Wei W, Wang ZJ (2014) Classification of EEG signals using a multiple kernel learning support vector machine. Sensors 14(7):12784–12802
21. Garcia GN, Ebrahimi T, Vesin JM (2003) Support vector EEG classification in the Fourier and time-frequency correlation domains. In: First international IEEE EMBS conference on neural engineering, conference proceedings, IEEE, pp 591–594

Hybrid Skull Stripping Method for Brain CT Images

Fakhrul Razan Rahmad, Wan Nurshazwani Wan Zakaria, Ain Nazari, Mohd Razali Md Tomari, Nik Farhan Nik Fuad, and Anis Azwani Muhd Suberi

Abstract Ischemic stroke is a medical condition in which blood flow is obstructed in a region of the brain. The affected brain tissue is deprived of oxygen, resulting in tissue death. In medical imaging, computed tomography (CT) or magnetic resonance imaging (MRI) is used to display a series of slices of the head when diagnosing ischemic stroke. With progress in image processing technology, ischemic stroke can be detected by detection algorithms instead of visual inspection by radiologists, which promotes diagnostic accuracy by reducing human error. While MRI is more accurate than CT, its limited availability impedes the prognosis when diagnosing ischemic stroke. A CT scan is more practical in an emergency because it is widely available, despite providing less accurate data, especially in the early stage of ischemic stroke detection. This is where image processing can provide comprehensive data for ischemia detection. This paper proposes a preliminary processing stage to remove the unrelated non-brain region, which can be considered an obstacle to ischemia detection. The hybrid method, consisting of intensity-based and morphology-based steps, converts the image from the CT scanner to 8-bit DICOM format before the skull is stripped in the subsequent process. The method shows remarkable results in terms of the visual representation of the stripped skull and the processing time.



Keywords: Image pre-processing · Skull stripping · Brain CT image · Ischemic stroke · Medical imaging

F. R. Rahmad, W. N. Wan Zakaria, A. Nazari, M. R. Md Tomari, A. A. Muhd Suberi: Faculty of Electrical and Electronic Engineering (FKEE), Universiti Tun Hussein Onn Malaysia (UTHM), 86400 Parit Raja, Batu Pahat, Johor, Malaysia. e-mail: [email protected]
N. F. Nik Fuad: UKM Medical Centre, Jalan Yaacob Latif, Bandar Tun Razak, 56000 Cheras, Kuala Lumpur, Malaysia
© Springer Nature Singapore Pte Ltd. 2021. Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_44


1 Introduction

Stroke from brain ischemia has been in the spotlight in the medical field for quite some time and has paramedics in a race against the clock [1]. Collective studies have found that brain stroke is progressively growing and is among the top causes of death around the globe; even in Malaysia, ischemic stroke is responsible for 79.4% of admitted stroke patients [2]. The key to an accurate diagnosis when treating a stroke patient lies in the medical imaging process, which provides the visual data that decides the treatment planned by the doctors [3]. Introducing machine learning into the medical field has helped doctors make accurate diagnoses, thereby improving patient prognosis. As much as machine learning helps, there is still room for improvement: it processes a multitude of data simultaneously, regardless of whether the data are pertinent to the diagnosis, which may slow the system down. Hence, a preprocessing stage is incorporated to ensure that only the relevant images are considered for the processing stage. In this paper, a preprocessing method is proposed to remove the skull from the image, leaving only the tissue for the processing stage, using a derivative technique that combines aspects of conventional methods.

Introducing a pre-processing stage such as skull stripping for a CT image is meant to remove the unnecessary non-brain region from the scanned images with a rapid processing time, which is crucial for the accuracy and efficiency of succeeding processes such as automated detection, segmentation and analysis [4]. Among the earliest morphology-based skull-stripping methods, histogram-based thresholding was used for the morphological computation; one problem with this operation is that the optimum morphological size is difficult to determine [5]. Another way to remove the skull from a CT image is to address the intensity properties of a particular region, also known as the intensity-based method. It addresses the differences in shade that characterize the different tissue components making up the skull and isolates them [6]. Nevertheless, this method is limited by its sensitivity to intensity bias, which may arise from the low resolution and contrast of the CT image, the presence of artifacts, etc. In theory, the drawbacks of the morphology-based method are complemented by the intensity-based method, ensuring concise stripping.

Fig. 1 Axial, coronal and sagittal views of the CT image scanning window [9, 10]

2 Medical Imaging

2.1 Brain Anatomy in Non-enhanced Computed Tomography (CT) Image

CT permits the sampling of brain images with a slice thickness between 4 and 14 mm at a given setting [7]. The number of brain image slices varies with the patient and the data visualization, and all these variables should be taken into account to ensure accurate visualization. During the CT scanning process, the tube with the X-ray beam is rotated around the patient's head to produce three types of views: the sagittal, axial and coronal perspectives [8]. Figure 1 shows the CT image scanning window. To display the optimal visualization, the window setting of the image should be in the range of +40 to +80 Hounsfield units (HU) for the window centre (Wc) and/or window width (Ww). This paper utilizes the setting Wc = 40 HU, Ww = 40 HU, since this setting has been proven to show the optimal display of brain morphology [11].
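The windowing just described can be sketched as a simple mapping from Hounsfield units to 8-bit grayscale (an illustrative Python rendition; the paper's implementation is in MATLAB):

```python
import numpy as np

def apply_window(hu_image, wc=40.0, ww=40.0):
    """Map raw Hounsfield units to 8-bit grayscale using a window
    centre wc and width ww (Wc = Ww = 40 HU, as used in the text).
    Values below wc - ww/2 clip to 0, above wc + ww/2 clip to 255."""
    lo = wc - ww / 2.0
    hi = wc + ww / 2.0
    clipped = np.clip(hu_image, lo, hi)
    return np.round((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

hu = np.array([[-1000, 0], [40, 1000]])   # air, water, brain-range, bone
out = apply_window(hu)
# air and water clip to 0, the window centre maps to mid-gray,
# and bone saturates at 255
```

A narrow width like 40 HU spends the whole 8-bit range on soft tissue, which is why bone and air saturate at the extremes.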

2.2 CT vs MRI

In the battle against this progressive, threatening medical condition, every second that passes before treating the patient can lead to fatality; consequently, the sooner the radiologists take action, the higher the chance of saving the patient. Under time constraints, CT acts as the first line of imaging due to its fast response, despite being less reliable than MRI [12]. For a very long time, paramedics have relied on CT to diagnose head injury because of its usefulness in constrained situations [13]. CT is suitable for rapid prescreening, encouraging radiologists to monitor unstable patients. Its availability has made CT a reliable clinical measuring tool, since MRI is not as widely available as CT, particularly in the countryside [14]. Table 1 displays the differences between CT and MRI.


Table 1 Comparison between MRI and CT [15]

Imaging technique   MRI                                          CT
Advantages          – No radiation imposed on patient            – Wide availability
                    – Distinctive contrast for soft tissue       – Rapid data provision
                    – Excellent visual representation            – Economical
                      compared to CT
Disadvantages       – Low accessibility                          – High radiation
                    – Complex real-time imaging implementation   – Poor soft tissue visibility
                    – Expensive

For example, when an ischemic sign is present but too subtle or hard to perceive, the CT detection procedure can be combined with CT Angiography (CTA) and CT Perfusion (CTP) [16]. CTA aids radiologists in determining salvageable brain tissue by distinguishing infarcted tissue, while CTP evaluates the flow blockade in the major vessels. The integration of these techniques has been proven to provide a more accurate diagnosis but, similar to MRI, is not widely accessible.

2.3 CT Attenuation (Hounsfield Unit)

The whole brain consists of several parts when displayed in a CT image. Every part of the brain structure is represented by a different Hounsfield unit (HU) attenuation when portrayed as a grayscale value, and representing these different attenuations is crucial for segmenting the different parts of the brain. Figure 2 shows the Hounsfield units for the different parts of the head, each within a specific range; this paper utilizes part of this range, especially the bone, to implement the stripping process. The attenuation ranges between −1000 and 1000 HU, where −1000 represents the darkest hue (in this case, air) and 1000 represents the brightest hue (in this case, bone), which is the focus of this research.


Fig. 2 Hounsfield Unit representation of different head structures [17]

3 Pre-processing Method

This section discusses the skull-stripping method, which combines the intensity-based and morphology-based methods to eliminate non-brain tissue from CT images.

3.1 Image Acquisition

The experiments are conducted with MATLAB R2018b on an Intel® Core™ i3-3110M processor with a 2.40 GHz CPU and 6 GB RAM as the testing platform. The algorithm uses CT images with a resolution of 512 by 512 in DICOM format, obtained from the Radiology department of UKM Medical Centre, acknowledging patients' discretion. The images were scanned with a Toshiba Aquilion One scanner, which produces 152 to 207 slices per patient, each slice 1 mm thick with 16-bit depth. The default window centre (Wc) and window width (Ww) of the scanner are 40 HU and 90 HU respectively.


F. R. Rahmad et al.

3.2 Image Normalization

In order to proceed with skull stripping, it is crucial to prepare the images by converting the DICOM data. This process involves normalizing the 16-bit Hounsfield unit (HU) values from the scanner to a suitable window setting, producing an 8-bit grayscale image. Previous studies have suggested several window values that can be used to emphasize tissue visibility. Nevertheless, differences in these values may affect the results, and the suitability of a window setting also depends on the machine used. Thus, the candidate window settings were tested and evaluated, and the values Wc = 40 HU and Ww = 40 HU were selected since this setting permits effective visualization of the brain tissue (Fig. 3(b)) [11].

Fig. 3 Output of DICOM conversion

(a) CT bone view (b) CT brain view

3.3 Hybrid Method (Intensity-Morphology-Based Hybrid)

The proposed hybrid method combines aspects of the conventional intensity-based and morphology-based methods, and uses histogram normalization to separate the brain region from the non-brain region so that only the tissue is presented. Figure 4 shows the flow of the skull-stripping method. The process begins with image acquisition from the CT scanner in 16-bit format, as shown in Fig. 5(a). The presented image, after conversion to 8-bit DICOM format, is a whole-head image comprising tissue, background and skull. The skull is then

Fig. 4 Flow of skull-stripping method


Fig. 5 Stages of skull-stripping method with product of each stage




extracted by thresholding at 0.9 of the 255-level intensity range (pixels above roughly 230), producing the image illustrated in Fig. 5(b). The same image is then hole-filled to produce a single solid masked image, as shown in Fig. 5(c). The skull region and the filled head region are shown in Fig. 5(b) and Fig. 5(c) respectively. Subtracting Fig. 5(b) from Fig. 5(c) yields the tissue region shown in Fig. 5(d). However, Fig. 5(d) includes an unwanted cavity, visible as the smaller region unattached to the tissue region. This small region is removed so that only the tissue region remains, as shown in Fig. 5(e). The image is then fine-tuned to remove residual grain by hole-filling the tissue region, as in the previous step, producing Fig. 5(f). Afterwards, Fig. 5(f) is subtracted from the original image to produce Fig. 5(g), a clean, grain-free tissue region, and Fig. 5(g) is in turn subtracted from the original image to produce Fig. 5(h), the brain tissue without the skull.
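The thresholding, hole-filling and subtraction steps above can be sketched with NumPy/SciPy morphology operations. This is only a loose reading of the description, not the authors' MATLAB implementation; the `thresh` parameter, the function name, and the largest-component rule used for removing the unattached cavity are our assumptions.

```python
import numpy as np
from scipy import ndimage

def strip_skull(img_u8, thresh=0.9):
    """Hybrid intensity/morphology skull stripping, read off the
    steps above. img_u8 is an 8-bit grayscale head slice."""
    skull = img_u8 >= int(thresh * 255)          # Fig. 5(b): bright bone
    head = ndimage.binary_fill_holes(skull)      # Fig. 5(c): filled head mask
    tissue = head & ~skull                       # Fig. 5(d): inside the skull
    # Fig. 5(e): keep only the largest connected region, dropping
    # small cavities unattached to the tissue.
    labels, n = ndimage.label(tissue)
    if n > 1:
        sizes = ndimage.sum(tissue, labels, range(1, n + 1))
        tissue = labels == (1 + int(np.argmax(sizes)))
    tissue = ndimage.binary_fill_holes(tissue)   # Fig. 5(f): remove grain
    return np.where(tissue, img_u8, 0)           # Fig. 5(h): brain only
```

On a synthetic slice with a bright ring of "bone" around darker "tissue", the ring is removed and only the interior intensities survive.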

3.4 Performance Measurement

The skull-stripping technique is carried out and analyzed against the conventional morphology-based method. The performance is evaluated qualitatively by comparing the segmented output with the original image through image overlays, and quantitatively using the Jaccard and Dice similarity indices. The overlay method compares the segmented image with the original image by layering the masked segmented image on top of it. The Sørensen–Dice coefficient is a statistic that can be used to analyze the similarity of two samples [18]. The index was originally defined for discrete data and can be expressed as in Eq. (1).

DSC = 2|X ∩ Y| / (|X| + |Y|)    (1)

where X and Y are two sets and |X| and |Y| are the cardinalities of those respective sets. The Jaccard coefficient, on the other hand, uses the intersection-over-union principle shown in Eq. (2).

J(A, B) = |A ∩ B| / |A ∪ B|    (2)

In this equation, A and B represent the individual sets. As the principle suggests, the Jaccard coefficient divides the number of intersected elements by the number of elements in the union [19]. Additionally, the traditional morphology-based method from which this hybrid technique derives is used as the baseline for the quantitative comparison.
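Both indices reduce to a few array operations on binary masks; a small sketch (function names are ours):

```python
import numpy as np

def dice(a, b):
    """Sorensen-Dice coefficient of two binary masks (Eq. 1)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index of two binary masks (Eq. 2)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()
```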



4 Result and Discussion

Table 2 shows that the proposed method manages to segment the brain tissue region accurately within the skull boundary.

Table 2 Qualitative analysis for proposed method

(Columns: Input, Output, Overlay, Dice and Jaccard for test images 1–5; the image content is not reproduced here.)

As shown in Table 3 and Table 4, the processing time of the proposed method is noticeably shorter than that of the morphology-based method, while the similarity scores of the two segmentations remain very close to each other.

Table 3 Quantitative analysis of proposed method

Image                  1        2        3        4        5
Processing time (s)    0.0394   0.0240   0.0158   0.0202   0.0176
Dice                   0.9430   0.9367   0.9456   0.9233   0.9472
Jaccard                0.8922   0.8809   0.8969   0.8575   0.8997

Table 4 Quantitative analysis of conventional morphological-based method

Image                  1        2        3        4        5
Processing time (s)    0.1245   0.1407   0.1396   0.0901   0.0797
Dice                   0.9427   0.9361   0.9402   0.9230   0.9469
Jaccard                0.8916   0.8780   0.2273   0.8570   0.8991

5 Conclusion and Future Works

In this paper, we propose a hybrid skull-stripping method for brain CT images that combines aspects of the intensity-based and morphology-based methods. The paper provides insight for the medical imaging field, especially image pre-processing, into plausible skull-stripping methods as a means towards better image processing, comparing the performance of the methods qualitatively and quantitatively. The hybrid method proves to provide a faster outcome while retaining the accuracy of the methods it derives from. A well-structured medical imaging technique, mainly image segmentation, may assist medical professionals in decision making and even has the potential to support more intricate diagnosis pipelines that meet the needs of ever-expanding clinical data. Progress in the medical imaging field may provide an intuitive means to process and manage huge amounts of clinical data, mostly for analysis purposes. This paper generally emphasizes segmentation of the top, anterior part of the brain. Acknowledging this fact, the method could be refined to cover not only the top of the brain but the whole brain, including both the anterior and posterior regions. On top of that, these methods hold potential for facilitating the diagnosis of diseases such as stroke and head trauma if incorporated with a suitable deep learning algorithm.

Acknowledgements The authors are grateful to Universiti Tun Hussein Onn Malaysia (UTHM) for supporting this research work under Postgraduate Research Grant (GPPS) Vot 402 and Tier 1 Grant Vot H203.



References

1. Park E, Kim JH, Nam HS, Chang H-J (2018) Requirement analysis and implementation of smart emergency medical services. IEEE Access 6:42022–42029
2. Aziz ZA, Lee YY, Ngah BA, Sidek NN, Looi I, Hanip MR, Basri HB (2015) Acute stroke registry Malaysia, 2010–2014: results from the National Neurology Registry. J Stroke Cerebrovasc Dis 24(12):2701–2709
3. Yahiaoui AFZ, Bessaid A (2016) Segmentation of ischemic stroke area from CT brain images. In: 2016 international symposium on signal, image, video and communications (ISIVC), Tunis, pp 13–17
4. Kalavathi P, Prasath VBS (2015) Methods on skull stripping of MRI head scan images—a review. J Digit Imaging 29(3):365–379
5. Brummer ME, Mersereau RM, Eisner RL, Lewine RRJ, Caeslles V, Kimmel R, Sapiro G (1993) Automatic detection of brain contours in MRI datasets. IEEE Trans Image Process 12(2):153–166
6. Subudhi A, Jena J, Sabut S (2016) Extraction of brain from MRI images by skull stripping using histogram partitioning with maximum entropy divergence. In: 2016 international conference on communication and signal processing (ICCSP), Melmaruvathur, pp 0931–0935
7. Rekik I, Allassonnière S, Carpenter TK, Wardlaw JM (2012) Medical image analysis methods in MR/CT-imaged acute-subacute ischemic stroke lesion: segmentation, prediction and insights into dynamic evolution simulation models. A critical appraisal. NeuroImage Clin 1(1):164–178
8. Zaki WMDW (2012) Content-based medical image analysis and retrieval of intracranial haemorrhage CT brain images. Doctoral dissertation, Multimedia University Malaysia
9. Gulsen S, Terzi A (2013) Multiple brain metastases in a patient with uterine papillary serous adenocarcinoma: treatment options for this rarely seen metastatic brain tumor. Surg Neurol Int 4(1):111
10. Clare S (1997) Functional MRI: methods and applications
11. Suberi AAM, Zakaria WNW, Tomari R, Fuad NFN (2018) Classification of posterior fossa CT brain slices using artificial neural network. Procedia Comput Sci 135:170–177
12. Li S, Manogaran G (2019) Design and implementation of networked collaborative service system for brain stroke prevention and first aid. IEEE Access 7:14825–14836
13. Jauch EC, Saver JL, Adams HP Jr, Bruno A, Connors JJ, Demaerschalk BM, Khatri P, McMullan PW Jr, Qureshi AI, Rosenfield K, Scott PA (2013) Guidelines for the early management of patients with acute ischemic stroke: a guideline for healthcare professionals from the American Heart Association/American Stroke Association. Stroke 44(3):870–947
14. Dubey P, Pandey S, Moonis G (2013) Acute stroke imaging: recent updates. Stroke Res Treat 2013:1–6
15. Saad NM, Bakar SARSA, Muda AS, Mokji MM (2015) Review of brain lesion detection and classification using neuroimaging analysis techniques. Jurnal Teknologi 74(6):73–85
16. van Seeters T, Biessels GJ, Kappelle LJ, Van Der Schaaf IC, Dankbaar JW, Horsch AD, Niesten JM, Luitse MJ, Majoie CB, Vos JA, Schonewille WJ (2015) The prognostic value of CT angiography and CT perfusion in acute ischemic stroke. Cerebrovasc Dis 40(5–6):258–269
17. CT imaging for stroke. http://neurovascularmedicine.com/imagingct.php. Accessed 29 July 2019
18. Sørensen T (1948) A method of establishing groups of equal amplitude in plant sociology based on similarity of species and its application to analyses of the vegetation on Danish commons. Kongelige Danske Videnskabernes Selskab 5(4):1–34
19. Revoledu homepage. https://people.revoledu.com/kardi/tutorial/Similarity/Jaccard.html. Accessed 17 Sept 2019

Improvising Non-uniform Illumination and Low Contrast Images of Soil Transmitted Helminths Image Using Contrast Enhancement Techniques

Norhanis Ayunie Ahmad Khairudin, Aimi Salihah Abdul Nasir, Lim Chee Chin, Haryati Jaafar, and Zeehaida Mohamed

Abstract Image enhancement plays an important role in image processing and computer vision. It is used to enhance the visual appearance of an image and to convert the image to suit the requirements of subsequent image processing. In this paper, image enhancement is used to produce a better image by enhancing image quality and highlighting the morphological features of helminth eggs; the enhanced results are prepared for the segmentation and classification processes. The helminth eggs used in this paper are Ascaris Lumbricoides Ova (ALO) and Trichuris Trichiura Ova (TTO). In this study, several enhancement techniques have been performed on 100 images of ALO and TTO captured under three different illuminations: normal, under-exposed and over-exposed. The techniques used are global contrast stretching, limit contrast, linear contrast stretching, modified global contrast stretching, modified linear contrast stretching, partial contrast and reduce haze. Based on the results obtained, modified linear contrast stretching and modified global contrast stretching are able to equalize the lighting in non-uniform illumination images of helminth eggs. Both techniques are suitable for non-uniform illumination images and are able to improve image contrast without affecting or removing the key features in ALO and TTO images, unlike the other techniques. Hence, the resultant images would be useful for parasitologists in analyzing helminth eggs.

Keywords Helminth eggs · Image processing · Contrast enhancement techniques

N. A. A. Khairudin · A. S. Abdul Nasir · H. Jaafar
Faculty of Engineering Technology, Universiti Malaysia Perlis, UniCITI Alam Campus, Sungai Chuchuh, 02100 Padang Besar, Perlis, Malaysia
e-mail: [email protected]

L. C. Chin
School of Mechatronic Engineering, University Malaysia Perlis, Pauh Putra Campus, 02600 Arau, Perlis, Malaysia

Z. Mohamed
Department of Microbiology and Parasitology, School of Medical Sciences, Health Campus, Universiti Sains Malaysia, 16150 Kubang Kerian, Kelantan, Malaysia

© Springer Nature Singapore Pte Ltd. 2021
Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_45


1 Introduction

Helminths are infectious agents of parasitic worm diseases globally known as helminthiases. Parasitic helminths can be classified into three major groups: nematodes, trematodes and cestodes. Nematodes have a general vermiform body shape and mostly live on insects. Trematodes are worms with a solid, hermaphroditic body and a complex life cycle; they use snails as their intermediate hosts and usually inhabit the lungs, liver, intestine and blood vessels of their hosts. Cestodes are flatworms with a segmented body whose adults live in the human intestine [1]. Helminths are multicellular animals whose eggs are microscopic. Egg sizes vary between species, around 20 to 80 µm for those that occur in significant amounts in the sanitary field [2]. Their shape and resistance also differ. Helminth eggs can remain viable for 1 to 2 months in crops and many months in soil, freshwater and sewage [1], and for several years in feces, night soil, sludge and wastewater. Helminth eggs can be transmitted into the human body through contact with polluted sludge or fecal material, or through exposure to contaminated food, water and dust from fur or an animal's body [3]. These parasites can multiply in the human body and could lead to serious illnesses such as filariasis, cancerous diseases and cysts. Helminthiases may cause anemia, diarrhea and severe malnutrition. Usually, children within the age of 5 to 15 years are affected, which can impair their physical and mental development and quality of life [4]. Early diagnosis of the disease is fundamental for patient recovery, especially in the case of children.

Helminth eggs can be diagnosed through stool, cell and blood samples from patients. A problem occurs because parasitologists need to diagnose these samples in fresh condition within a limited time, and the diagnosis procedure is conducted manually using a light microscope. This procedure consumes a great amount of time, and the observer must maintain good concentration while observing the samples [5]. The results obtained are often neither reliable nor accurate. These limitations have triggered improvements in digital image processing for helminth egg recognition using image processing and computer algorithms. For example, a digital image processing technique was proposed by Hadi et al. [6] for detecting ALO and TTO in fecal samples. Three pre-processing methods were compared in order to obtain the best segmentation result: method I used contrast enhancement, method II used Canny edge detection, and method III used a combination of contrast enhancement and Canny edge detection. The results showed that method III has the highest accuracy, 93% for ALO and 94% for TTO. Then, 15 types of human intestinal parasites were diagnosed by Suzuki et al. [7] through a proposed automatic segmentation and classification system for human intestinal parasites in microscopy images. The proposed system explored the image foresting transform and ellipse matching for segmentation and an optimum-path forest classifier for object recognition, obtaining 90.38% sensitivity, 98.32% specificity and 98.19% efficiency. Next, Jimenez et al. [5] proposed a system that identifies and quantifies seven species of helminth eggs in wastewater, showing a specificity of 99% and a sensitivity in the range of 80% to 90%.

2 Literature Reviews on Enhancements and Image Quality Assessments

The exposure setting of the microscope may impact the quality of captured images: an over-exposure setting leads to a bright image, while an under-exposure setting forms a dark image [8]. It is hard to visualize and analyze the morphological features of helminth eggs when the image quality is low. Various enhancement techniques have been developed and published; most studies aim to enhance the visibility of low-contrast images while reducing noise, in order to obtain a better visual result [9].

Abdul-Nasir et al. [10] proposed modified global and modified linear contrast stretching, which are better at enhancing the contrast and brightness of an image than the conventional global contrast stretching (GCS) and linear contrast stretching (LCS) techniques, based on qualitative and quantitative analysis. Kaur and Choundhary [11] compared five enhancement techniques on acute leukemia images, namely LCS, GCS, dark contrast stretching, bright contrast stretching and partial contrast, and found that partial contrast gives a better result in enhancing the contrast of non-uniformly illuminated acute leukemia images than the other techniques. Ho et al. [12] proposed a dehazing algorithm based on dark channel prior and contrast enhancement approaches. The usual dark channel prior method restores the color of objects in the scene after eliminating the haze but does not consider the enhancement of image contrast; conversely, the image contrast method improves the local contrast of objects but generally distorts the color as a consequence of over-stretching the contrast. The proposed algorithm combines the advantages of the two conventional approaches, keeping the color while dehazing the image. Jang et al. [13] proposed an adaptive contrast enhancement using edge-based lighting to improve perceptual visual quality by lessening the structural distortion to a tolerable level. This method estimates the lighting conditions and adaptively adjusts the luminance in the images, with an enhancement performance measure used to analyze the results. Al-Ameen [14] improved LCS by developing an adjustable contrast stretching technique for improving the contrast of color images. The proposed technique was evaluated by comparison with four specialized enhancement techniques, namely TW-CESBLK, NMHE, ESIHE and RESIHE. It provided satisfactory results, producing natural-contrast images with no visible artefacts and outperforming the comparative techniques by scoring the highest recorded accuracy. Hitam et al. [15] applied CLAHE to underwater images in the red–green–blue (RGB) and hue–saturation–value (HSV) color models separately, then combined the individual images through Euclidean distance to produce a contrast-enhanced image with low noise. In several cases, however, this method produces output images with more noise than the conventional CLAHE does, and the output image is greenish.

Image quality assessment is used to develop and monitor quantitative measures that can automatically predict image quality [16]. Kumar and Rattan [17] analyzed 10 types of quality metrics for medical images and stated that SSIM gives better accuracy and higher performance than the other quality metrics. Saha et al. [18] analyzed the performance of full-reference image quality assessment combining global and local distortion measures and concluded that better distortion-specific performance gives no assurance of better overall performance; for the overall performance of an image, a relative assessment across different distortions is also required. In conclusion, many enhancement techniques have been proposed and applied to medical images in order to acquire the targeted information, and the quantitative measures for the enhanced images also show great improvement. An image can easily be measured against the required criteria, so the quality of the resultant image can be verified through the obtained results.

3 Methodology

3.1 Image Acquisition

In image acquisition, images of the ALO and TTO species are captured from stool sample slides using a computerized microscope. The stool samples are prepared by the Department of Microbiology and Parasitology, Hospital Universiti Sains Malaysia (HUSM), and are freshly collected from patients. The stool is placed on a microscope slide and normal saline is used as staining to obtain a clearer image of ALO and TTO. The stool slides are then observed under 40X magnification, and the captured images are saved with a .jpg extension under three different conditions: normal, under-exposed and over-exposed. 100 images of each species are randomly selected from the captured images to be tested in this paper. Figure 1 shows samples of the captured ALO and TTO images under different illuminations.

Fig. 1 Samples of ALO and TTO images captured under three different conditions: (a) normal ALO image; (b) under-exposed ALO image; (c) over-exposed ALO image; (d) normal TTO image; (e) under-exposed TTO image; (f) over-exposed TTO image

3.2 Image Enhancement Techniques on Helminth Eggs

In image enhancement, the quality of the image is improved by modifying the brightness, darkness [19] or sharpness of the image to more suitable values or conditions depending on the user's preference. In this paper, seven image enhancement techniques are applied and tested for enhancing the ALO and TTO images.

Global Contrast Stretching (GCS). GCS addresses contrast problems that occur in a global fashion, such as poor or excessive lighting conditions in the environment [20]. An image with high global contrast is detailed and variation-rich, while an image with lower global contrast contains less information, fewer details, and appears more uniform [21]. The whole color palette range is considered at once to determine a single maximum and minimum over all RGB channels [22, 23]; the combination of the RGB channels therefore yields only one maximum and one minimum value, which are used as the target values for the contrast stretching process. The calculation for GCS is defined in Eq. 1.

out_RGB(x, y) = 255 × (in_RGB(x, y) − min_RGB) / (max_RGB − min_RGB)    (1)
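A minimal sketch of Eq. 1, assuming one global min/max pair shared by all channels (the function name is ours):

```python
import numpy as np

def global_contrast_stretch(rgb):
    """Global contrast stretching (Eq. 1): a single min/max pair is
    taken over all three RGB channels together, and every channel
    is stretched to [0, 255] with that same pair."""
    img = np.asarray(rgb, dtype=np.float64)
    lo, hi = img.min(), img.max()
    return (255.0 * (img - lo) / (hi - lo)).astype(np.uint8)
```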

Limit Contrast (LC). Limit contrast, also known as normalization, is a simple enhancement technique used to improve the contrast of an image to a desired range of values by stretching the range of intensity values of the original image [24]. The original purpose of LC is to enhance the dynamic range of the image gray levels, but in this project the dynamic range is enhanced on the image color components. Lower and upper pixel value limits (LP and UP) are decided from the pixel values of the RGB components, with the highest and lowest pixels chosen from these components automatically. The image is then scanned to find the lowest and highest pixel values (L and H) currently present in the image, and each pixel is scaled using Eq. 2.

P_out = (image − L) × (UP − LP) / (H − L) + LP    (2)

The lowest values from the R, G and B components are combined to produce a new L value to be applied to the color image. The disadvantage of this technique is that LC achieves no result if the original range already covers the full possible set of pixel values.

Linear Contrast Stretching (LCS). LCS adjusts the value of each element in the image to simultaneously enhance the visualization of structure in both the lightest and darkest parts of the image, helping to highlight information in regions that are initially very light or dark [25]. The amount of stretching applied in a neighborhood is controlled by the original contrast in that neighborhood. The linear contrast technique considers the range of each RGB component in the image separately, so each component gets its own set of minimum and maximum values for the contrast stretching process [26]. Equation 3 shows the calculation for the LCS technique.

out_RGB(x, y) = 255 × (in_RGB(x, y) − min) / (max − min)    (3)
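Eq. 3 differs from GCS only in that each channel gets its own min/max pair; a sketch (the function name is ours; channels with constant intensity are not handled):

```python
import numpy as np

def linear_contrast_stretch(rgb):
    """Linear contrast stretching (Eq. 3): each RGB channel is
    stretched to [0, 255] with its own min/max pair."""
    img = np.asarray(rgb, dtype=np.float64)
    out = np.empty_like(img)
    for c in range(img.shape[-1]):
        lo, hi = img[..., c].min(), img[..., c].max()
        out[..., c] = 255.0 * (img[..., c] - lo) / (hi - lo)
    return out.astype(np.uint8)
```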

Modified Global Contrast Stretching (MGCS). MGCS overcomes the weakness of GCS by adjusting the minimum and maximum values of the RGB components in the image. MGCS is able to enhance the contrast of the image without affecting the color structure of the original image, and it preserves as much information as the original image [21]. The technique alters GCS by using new minimum and maximum values that differ from the original ones; the new minimum and maximum are chosen from among the RGB component values through a calculation over the total number of pixels in the image. The process must satisfy these conditions:

(T_min(RGB) × 100) / (total number of pixels in image) ≤ min_p    (4)

(T_max(RGB) × 100) / (total number of pixels in image) ≤ max_p    (5)

T_min and T_max are the total numbers of pixels that lie at the chosen minimum and maximum tails, while min_p and max_p are the desired percentage values for the minimum and maximum.

Modified Linear Contrast Stretching (MLCS). MLCS overcomes the weakness of LCS by adjusting the minimum and maximum values of the RGB components in the image. MLCS is capable of enhancing the contrast of the image without affecting the color structure of the original image, and it retains the same information as the original image [21]. MLCS alters LCS by improving the minimum and maximum values of each RGB component into new values beyond the original ones [25]; the stretching equation itself is the same as in the original LCS. The desired percentages (min_p and max_p) are obtained from the values of each RGB component following these conditions:

(T_min(RGB) × 100) / (total number of pixels in image) ≤ min_p    (6)

(T_max(RGB) × 100) / (total number of pixels in image) ≤ max_p    (7)
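The conditions in Eqs. 4–7 amount to clipping a chosen percentage of each tail of the pixel distribution before stretching. A sketch under that reading (the function names and the 1% defaults are illustrative, not the paper's values):

```python
import numpy as np

def stretch_limits(channel, minp=1.0, maxp=1.0):
    """Choose new min/max values in the spirit of Eqs. 4-7: the
    cumulative share of pixels below the new minimum stays within
    minp percent, and the share above the new maximum within maxp
    percent."""
    flat = np.sort(np.asarray(channel).ravel())
    n = flat.size
    lo = flat[int(n * minp / 100.0)]
    hi = flat[n - 1 - int(n * maxp / 100.0)]
    return lo, hi

def modified_stretch(channel, minp=1.0, maxp=1.0):
    """Stretch one channel with the percentile-clipped limits and
    clip the result to [0, 255] (MLCS-style sketch)."""
    lo, hi = stretch_limits(channel, minp, maxp)
    out = 255.0 * (np.asarray(channel, np.float64) - lo) / max(hi - lo, 1)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because the limits sit inside the data range, outlier pixels saturate at 0 or 255 instead of compressing the useful range, which is what lets MGCS/MLCS equalize non-uniform illumination.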

Partial Contrast (PC). Partial contrast is a linear mapping function used to increase the contrast and brightness levels of the image. The adjustment is based on the original brightness and contrast level of the image, and the technique can enhance contrast even under different illuminations [26, 27]. Before the mapping process starts, the range in which the majority of the input pixels converge must be identified for each color model; since the input images are in the RGB color model, the range must be found for the red, green and blue intensities [26]. Pixels within the range minTH to maxTH are stretched to the desired range NminTH to NmaxTH, whereas the remaining pixels experience compression. Through these stretching and compressing processes, the pixels of the image can be mapped to a wider range and brighter intensities, increasing the contrast and brightness levels of the raw image [27]. PC is defined in Eq. 8.

P_out = (img / minTH) × NminTH                                           for img < minTH
P_out = ((NmaxTH − NminTH) / (maxTH − minTH)) × (img − minTH) + NminTH   for minTH ≤ img ≤ maxTH
P_out = ((255 − NmaxTH) / (255 − maxTH)) × (img − maxTH) + NmaxTH        for img > maxTH    (8)
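A sketch of a piecewise partial-contrast mapping in the spirit of Eq. 8; the slopes used for the two outer (compression) segments are our assumption, and the function name is ours:

```python
import numpy as np

def partial_contrast(img, minTH, maxTH, NminTH, NmaxTH):
    """Partial-contrast mapping: pixels inside [minTH, maxTH] are
    stretched to [NminTH, NmaxTH]; pixels outside are compressed
    into the remaining range."""
    img = np.asarray(img, dtype=np.float64)
    low = img / minTH * NminTH
    mid = (img - minTH) * (NmaxTH - NminTH) / (maxTH - minTH) + NminTH
    high = (img - maxTH) * (255.0 - NmaxTH) / (255.0 - maxTH) + NmaxTH
    out = np.where(img < minTH, low, np.where(img <= maxTH, mid, high))
    return np.clip(out, 0, 255).astype(np.uint8)
```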

Reduce Haze (RH). The RH technique helps improve the visibility of information in low-contrast images, as well as in images that have lost color fidelity, by reducing the haze in the image. The technique employs a per-pixel dark channel to identify the low-intensity (dark) pixels in the image; the haze transmission is estimated through these dark pixels, with the intensity of the dark channel acting as an estimate of the thickness of the haze. The haze and a quad-tree decomposition are then evaluated to compute the atmospheric light, based on the haze thickness at each pixel and the amount of atmospheric light, which represents the value of the brightest non-specular haze [28, 29]. Equation 9 represents the scene-radiance recovery used by the RH technique.

J(x) = (I(x) − A) / max(t(x), t0) + A    (9)

Here I is the observed intensity, A is the global atmospheric light, t(x) is the transmission in the local patch, and t0 is set to 0.1.
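Eq. 9 can be exercised end to end with a crude dark-channel estimate of A and t(x). This sketch follows dark-channel-prior dehazing in broad strokes only; the patch size, the omega factor, and the way A is picked are all simplifying assumptions rather than the technique cited in [28, 29]:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze(rgb, patch=3, omega=0.95, t0=0.1):
    """Crude dark-channel dehazing around Eq. 9. The dark channel
    is a minimum over channels and a small local patch; A is taken
    from the brightest dark-channel pixel; t(x) = 1 - omega*dark/A."""
    img = np.asarray(rgb, dtype=np.float64) / 255.0
    dark = minimum_filter(img.min(axis=2), size=patch)
    A = img.reshape(-1, 3)[np.argmax(dark)].max()    # atmospheric light
    t = np.maximum(1.0 - omega * dark / max(A, 1e-6), t0)[..., None]
    J = (img - A) / t + A                            # Eq. 9 recovery
    return (np.clip(J, 0.0, 1.0) * 255.0).astype(np.uint8)
```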

3.3 Image Quality Assessment (IQA)

Image Quality Assessment (IQA) is divided into qualitative analysis and quantitative analysis. In qualitative analysis, over-enhancement, unnatural enhancement or the presence of artefacts in the images is inspected by a human with the naked eye; generally, the visual quality or appearance of the resultant image is evaluated in the qualitative measurement process [30]. Quantitative analysis is divided into three parts: full-reference (FR) IQA, no-reference (NR) IQA and reduced-reference (RR) IQA. Similarity, brightness and contrast in the image are examples of properties inspected in quantitative analysis. In this paper, seven techniques from FR IQA are used to analyze the results of the proposed enhancement techniques; such techniques are mostly used for restoration and enhancement [31]. Among them, MSE, PSNR, SSIM and FSIM calculate the similarity and intensity level between the original image and the enhanced image, while AMBE, EMEE and Entropy measure the contrast between the original and resultant images. The descriptions of these quantitative analyses are as follows.

Mean Square Error (MSE). MSE is computed by averaging the squared intensity difference between the original (input) image and the resultant (output) image pixels. MSE is defined in Eq. 10.

MSE = (1 / (m × n)) Σ_{i=0}^{m−1} Σ_{j=0}^{n−1} (B(i, j) − A(i, j))²    (10)

where A(i, j) and B(i, j) are the original and enhanced images and m × n signifies the image size [30, 32].

Peak Signal-to-Noise Ratio (PSNR). The signal-to-noise ratio (SNR) is a mathematical measure of image quality based on the pixel difference between two images [32]; it estimates the quality of a reconstructed image compared to the original image. PSNR is basically the SNR when all pixel values are equal to the maximum possible value. PSNR is defined in Eq. 11.

PSNR = 10 log₁₀ (s² / MSE)    (11)

where s = 255 for an 8-bit image.

Structural Similarity Index Measurement (SSIM). SSIM measures the similarity between two images and correlates with the quality perception of the human visual system. SSIM not only has good image quality prediction accuracy but also a simple formulation and low computational complexity [32, 33]. The calculation of SSIM is given in Eq. 12.

SSIM(x, y) = ((2 μ_x μ_y + c1)(2 σ_xy + c2)) / ((μ_x² + μ_y² + c1)(σ_x² + σ_y² + c2))    (12)

where μ_x and μ_y are the averages of x and y, σ_xy is the covariance of x and y, and σ_x² and σ_y² are the variances of x and y. c1 = (k1 L)² and c2 = (k2 L)² are variables that stabilize the division with a weak denominator, L is the dynamic range of the pixel values, and k1 = 0.01 and k2 = 0.03 by default.
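Eqs. 10–12 can be checked with a few lines of NumPy. Note that the SSIM below is the single-window (whole-image) form of Eq. 12, not the usual sliding-window SSIM; the function names are ours:

```python
import numpy as np

def mse(a, b):
    """Mean squared error (Eq. 10)."""
    a, b = np.asarray(a, np.float64), np.asarray(b, np.float64)
    return np.mean((b - a) ** 2)

def psnr(a, b, s=255.0):
    """Peak signal-to-noise ratio in dB (Eq. 11)."""
    return 10.0 * np.log10(s ** 2 / mse(a, b))

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    """Eq. 12 evaluated once over the whole image (no sliding window)."""
    x, y = np.asarray(x, np.float64), np.asarray(y, np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```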


N. A. A. Khairudin et al.

Feature Similarity Index Measurement (FSIM). FSIM considers the luminance component of images by calculating the similarity score between the original image and the resultant image [33]. Equation 13 presents the calculation for FSIM.

FSIM = [Σ_{x∈Ω} S_L(x) · PC_m(x)] / [Σ_{x∈Ω} PC_m(x)]   (13)

where PC_m(x) is the phase congruency, S_L(x) is the similarity, and Ω is the spatial domain of the whole image.

Absolute Mean Brightness Error (AMBE). AMBE is used to calculate the difference in mean brightness between two images. A higher AMBE value indicates that the brightness is better preserved [34]. The calculation for AMBE is shown in Eq. 14.

AMBE = |mean(original image) − mean(resultant image)|   (14)

Weber-Law Based Contrast Measure with Entropy (EMEE). EMEE measures contrast in an image based on entropy. A high EMEE value is desired because it indicates the degree of contrast enhancement in the compared images [32]. Equation 15 presents the calculation for EMEE.

EMEE_{k1,k2} = (1/(k1·k2)) Σ_{l=1}^{k1} Σ_{k=1}^{k2} α [Y_max(k, l) / (Y_min(k, l) + c)]^α ln[Y_max(k, l) / (Y_min(k, l) + c)]   (15)

where α is a constant and c = 0.00001 to avoid dividing by zero.

Entropy. Entropy is defined as the corresponding grey-level state that can be implemented by individual pixels. A high entropy value is preferred since it discloses that an image contains much information [35]. The equation for entropy is as in Eq. 16.

e = −Σ_{(i,j)} x_(i,j) ln(x_(i,j))   (16)
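The two measures that need no block decomposition, AMBE (Eq. 14) and entropy (Eq. 16), can be sketched as follows; reading x_(i,j) as the normalized grey-level histogram is an assumption of this sketch:

```python
import numpy as np

def ambe(a, b):
    """Absolute mean brightness error between two images (Eq. 14)."""
    return abs(a.astype(np.float64).mean() - b.astype(np.float64).mean())

def entropy(img):
    """Shannon entropy (natural log, Eq. 16) of the grey-level distribution
    of an 8-bit image, taking x as the normalized histogram (assumption)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]              # empty bins contribute 0 (lim x*ln x = 0)
    return float(-np.sum(p * np.log(p)))
```

A constant image has zero entropy; an image split evenly between two grey levels has entropy ln 2.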

4 Results and Discussions

In this paper, seven proposed enhancement techniques have been applied to the helminth egg images, namely the ALO and TTO images. Figure 2 shows the original ALO and TTO images, while Figs. 3 to 9 show the resultant images of the enhancement techniques on the ALO and TTO images.


Fig. 2 The original images for two types of STH: (a) ALO_1, (b) ALO_2, (c) TTO_1, (d) TTO_2

Fig. 3 The resultant images of GCS on ALO and TTO images

Figure 2 shows the original images of ALO and TTO. These images differ in illumination: the ALO_1 image has normal illumination, ALO_2 and TTO_1 are over-exposed images, and TTO_2 is an under-exposed image. The artefacts in the images differ in size and color, which may lead to false diagnosis results if they are wrongly detected. Figure 3 shows the results when the GCS technique is applied to the original images. The targeted images appear clearer, and the background is slightly enhanced, but the artefacts also become visible. The illumination is still different for each image. Figure 4 shows the results when the LC technique is applied to the original images. The target images become dimmer compared to the original color, while the background color is enhanced and becomes brighter. The illumination is still different for each image.

Fig. 4 The resultant images of LC on ALO and TTO images: (a) ALO_1, (b) ALO_2, (c) TTO_1, (d) TTO_2


Figure 5 shows the resultant images for the LCS technique. The targeted images appear clearly in the images. The background is also enhanced but differs for each image; the background color in the ALO_2 and TTO_1 images is completely changed from the original color. The illumination in each image is still different from the others. Figure 6 shows the resultant images from the MGCS technique. The enhanced images show a good contrast between the targeted images and the background. The targeted images appear clearly in the images. The background is enhanced and appears in a similar color for each image, and the illumination is identical for each image. Figure 7 shows the resultant images from the MLCS technique. The enhanced images show a good contrast between the targeted images and the background. The targeted images appear clearly in the images but in a darker color. The background is enhanced and appears in a similar color for each image, and the illumination is identical for each image.

Fig. 5 The resultant images of LCS on ALO and TTO images: (a) ALO_1, (b) ALO_2, (c) TTO_1, (d) TTO_2

Fig. 6 The resultant images of MGCS on ALO and TTO images: (a) ALO_1, (b) ALO_2, (c) TTO_1, (d) TTO_2

Fig. 7 The resultant images of MLCS on ALO and TTO images: (a) ALO_1, (b) ALO_2, (c) TTO_1, (d) TTO_2

Improvising Non-uniform Illumination and Low Contrast Images …


Figure 8 shows the resultant images for the PC technique. The targeted images in ALO_1, TTO_1 and TTO_2 appear clearer, while the targeted image in ALO_2 is dimmer compared to the other images. The background color in ALO_1, ALO_2 and TTO_2 is slightly enhanced while the background in TTO_1 is over-enhanced. The illumination is not identical across the images. Figure 9 shows the resultant images for the RH technique. A good contrast is obtained between the targeted images and the background images. The targeted images appear clearer, but the artefacts also become visible. The background is enhanced, but its color differs from image to image, and the illumination is not identical across the images. Based on the results obtained, GCS and LCS show an improvement on the target images and the background image. The color of the targeted images is more highlighted in GCS compared to LCS. LC shows the target images slightly enhanced, and the background color does not differ much from the original images. For MGCS and MLCS, the enhanced images show a good contrast between the target images and backgrounds; the target images in MLCS are darker than in MGCS. In PC, the resultant images are slightly enhanced, while in the RH results the enhancement shows a decent contrast for the target image but the background color is only slightly enhanced. Artefacts in the images also appear, which can lead to false diagnosis results. The most suitable technique for enhancement based on the qualitative measure is the MGCS technique, because the resultant image is clearer and the features of the target image are improved. The resultant images also show that the color of the target image is brighter without changing the original color of the target image.

Fig. 8 The resultant images of PC on ALO and TTO images: (a) ALO_1, (b) ALO_2, (c) TTO_1, (d) TTO_2

Fig. 9 The resultant images of RH on ALO and TTO images: (a) ALO_1, (b) ALO_2, (c) TTO_1, (d) TTO_2


This shows that MGCS is able to enhance the image while preserving the originality of the STH color. Therefore, the resultant image is suitable for further image processing. In order to identify the significance of the enhancement techniques, seven quantitative analyses are used to compare each resultant image from the proposed techniques with the original images: MSE, PSNR, SSIM and FSIM for similarity measures, while AMBE, EMEE and Entropy are for contrast measures. For the enhanced ALO images, Table 1 shows the similarity quantitative analysis results while Table 2 shows the contrast quantitative analysis results. For the enhanced TTO images, Table 3 shows the similarity quantitative analysis results while Table 4 shows the contrast quantitative analysis results. Table 1 shows the similarity average results from the enhanced ALO images. MSE analysis shows the lowest value is obtained by LC technique while the highest value is obtained by MLCS technique. Then, in PSNR analysis MLCS technique has the lowest value while LC technique has the highest value. For SSIM analysis, LCS technique obtained the lowest value while LC technique obtained the highest value. Lastly, in FSIM analysis, RH technique has the lowest value and LC technique has the highest value. Overall, the values of LC technique are the lowest in MSE but the highest in PSNR, SSIM and FSIM quantitative analysis, while MLCS technique has the lowest value in PSNR analysis. LCS technique has the lowest value in SSIM analysis and RH technique has the lowest value in FSIM analysis.

Table 1 Average values for similarity quantitative analysis from 100 enhanced ALO images

Techniques   MSE        PSNR    SSIM   FSIM
GCS          460.331    24.235  0.932  0.978
LC           157.190    28.136  0.974  0.999
LCS          804.981    20.039  0.624  0.953
MGCS         3403.958   13.797  0.827  0.915
MLCS         3664.069   13.375  0.722  0.902
PC           754.941    20.252  0.819  0.922
RH           1052.279   18.535  0.652  0.837

Table 2 Average values for contrast quantitative analysis from 100 enhanced ALO images

Techniques   AMBE    EMEE    Entropy
GCS          11.084  2.960   2.462
LC           11.044  1.070   0.051
LCS          13.073  4.720   2.453
MGCS         42.974  23.493  3.856
MLCS         44.616  25.636  3.998
PC           12.481  5.063   3.617
RH           17.011  5.982   2.375

Table 3 Average values for similarity quantitative analysis from 100 enhanced TTO images

Techniques   MSE        PSNR    SSIM   FSIM
GCS          357.951    26.429  0.950  0.935
LC           167.246    31.801  0.959  0.968
LCS          614.836    21.963  0.658  0.954
MGCS         3324.678   14.140  0.824  0.891
MLCS         3559.440   13.715  0.733  0.870
PC           1095.490   18.791  0.614  0.828
RH           875.567    19.697  0.673  0.819

Table 2 shows the contrast results from the enhanced ALO images. In AMBE analysis, the lowest value is obtained by LC technique while the highest value is obtained by MLCS. Then, EMEE analysis shows that LC has the lowest value while MLCS has the highest value. In entropy analysis, the lowest value is obtained by LC and the highest is obtained by MLCS. Overall, LC technique has the lowest values while MLCS has the highest values among all techniques in the contrast quantitative analysis. Table 3 shows the similarity results from the enhanced TTO images. MSE analysis shows that LC technique has the lowest value and MLCS has the highest value. Then, the lowest value in PSNR analysis belongs to MLCS technique while the highest belongs to LC technique. PC technique has the lowest value and LC technique the highest value in SSIM analysis. For FSIM, the lowest value belongs to RH technique while the highest belongs to LC technique. Overall, LC technique has the highest values in PSNR, SSIM and FSIM analysis but the lowest value in MSE analysis. MLCS has the highest value in MSE but the lowest in PSNR, while PC technique obtained the lowest value for SSIM and RH technique obtained the lowest value for FSIM analysis. Table 4 shows the contrast results from the enhanced TTO images. In AMBE analysis, the lowest value is obtained by LC technique and the highest is obtained by MLCS technique. Then, EMEE shows that the lowest value belongs to LC technique while the highest value belongs to MLCS technique. In entropy, the lowest value is obtained by LC technique while the highest belongs to MLCS technique. Overall, the lowest values for all contrast quantitative analyses belong to LC technique while MLCS technique has the highest values.

Table 4 Average values for contrast quantitative analysis from 100 enhanced TTO images

Techniques   AMBE    EMEE    Entropy
GCS          10.507  2.365   2.573
LC           8.257   1.130   0.173
LCS          11.899  3.168   2.874
MGCS         45.987  22.959  4.043
MLCS         47.573  24.727  4.232
PC           22.026  7.368   3.919
RH           16.105  4.755   1.904


In the similarity quantitative analysis, MSE considers the lowest value as the best result, while PSNR, SSIM and FSIM take the highest value as the best result. Based on these conditions, LC technique has the highest similarity to the original image for both ALO and TTO images. AMBE, EMEE and Entropy analysis prefer the highest value as the most suitable for contrast measure. Hence, MLCS is preferred as the best technique in the contrast quantitative analysis for both ALO and TTO. Through the qualitative results, MGCS has a better contrast compared to the other techniques, followed by MLCS technique. The results from the quantitative analysis show that MLCS has the highest contrast value followed by MGCS technique. This shows that both techniques are suitable to be used on the non-uniform illumination images of ALO and TTO.
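The selection rules above can be made concrete with a small sketch over the Table 1 averages (values transcribed from Table 1; the helper name best_technique is illustrative):

```python
# Average similarity values for the enhanced ALO images (Table 1).
table1 = {
    "GCS":  {"MSE": 460.331,  "PSNR": 24.235, "SSIM": 0.932, "FSIM": 0.978},
    "LC":   {"MSE": 157.190,  "PSNR": 28.136, "SSIM": 0.974, "FSIM": 0.999},
    "LCS":  {"MSE": 804.981,  "PSNR": 20.039, "SSIM": 0.624, "FSIM": 0.953},
    "MGCS": {"MSE": 3403.958, "PSNR": 13.797, "SSIM": 0.827, "FSIM": 0.915},
    "MLCS": {"MSE": 3664.069, "PSNR": 13.375, "SSIM": 0.722, "FSIM": 0.902},
    "PC":   {"MSE": 754.941,  "PSNR": 20.252, "SSIM": 0.819, "FSIM": 0.922},
    "RH":   {"MSE": 1052.279, "PSNR": 18.535, "SSIM": 0.652, "FSIM": 0.837},
}

def best_technique(metric):
    """Lowest value wins for MSE; highest wins for PSNR, SSIM and FSIM."""
    pick = min if metric == "MSE" else max
    return pick(table1, key=lambda t: table1[t][metric])
```

Applying the rule to each similarity metric selects LC every time, matching the conclusion that LC preserves the most similarity to the original image.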

5 Conclusions

The comparison results from the proposed enhancement techniques show the advantages and disadvantages of each technique when applied to images with different illumination. The similarity and contrast quantitative analyses show the effect of enhancement in tabulated data, which helps in analyzing the resultant images. The comparison between the quantitative and qualitative analyses proves that although numerical data are provided, the visual appropriateness of the image is also important. It can therefore be concluded that the suitable methods for non-uniform illumination images are the MLCS and MGCS techniques, because both techniques help improve the enhancement quality of the morphological features in ALO and TTO images. This helps parasitologists to diagnose helminth eggs without much difficulty. The aims of this paper are achieved, as the visibility of low-contrast features is increased and the noise in the images is reduced to a minimum.

Acknowledgments The author would like to acknowledge the support from the Fundamental Research Grant Scheme for Research Acculturation of Early Career Researchers (FRGS-RACER) under grant number RACER/1/2019/ICT02/UNIMAP//2 from the Ministry of Higher Education Malaysia. The authors gratefully acknowledge team members and thank Hospital Universiti Sains Malaysia (HUSM) for providing the helminth egg samples.

References

1. World Health Organization (2004) Training manual on diagnosis of intestinal parasites: tutor's guide, no. 98.2. Organisation mondiale de la Santé, Geneva
2. Ghazali KH, Hadi RS, Zeehaida M (2013) Microscopy image processing analysis for automatic detection of human intestinal parasites ALO and TTO. In: 2013 international conference on electronics, computer and computation, ICECCO 2013, pp 40–43


3. Amoah ID, Singh G, Stenström TA, Reddy P (2017) Detection and quantification of soil-transmitted helminths in environmental samples: a review of current state-of-the-art and future perspectives. Acta Trop 169:187–201
4. World Health Organization (2015) Third WHO report on neglected diseases: investing to overcome the global impact of neglected tropical diseases. World Health Organisation, Geneva, Switzerland
5. Jiménez B, Maya C, Velásquez G, Torner F, Arambula F, Barrios JA, Velasco M (2016) Identification and quantification of pathogenic helminth eggs using a digital image system. Exp Parasitol 166:164–172
6. Hadi RS, Ghazali KH, Khalidin IZ, Zeehaida M (2012) Human parasitic worm detection using image processing technique. In: 2012 IEEE symposium on computer applications and industrial electronics, SCAIE 2012, Kota Kinabalu, Malaysia, pp 196–201
7. Suzuki CTN, Gomes JF, Falcão AX, Papa JP, Hoshino-Shimizu S (2013) Automatic segmentation and classification of human intestinal parasites from microscopy images. IEEE Trans Biomed Eng 60(3):803–812
8. Aris TA, Abdul Nasir AS, Mohamed Z, Jaafar H, Mustafa WA, Khairunizam W, Jamlos MA, Zunaidi I, Razlan ZM, Shahriman AB (2019) Colour component analysis approach for malaria parasites detection based on thick blood smear images. In: MEBSE 2018 - IOP conference series: materials science and engineering, vol 557, p 012007
9. Wu Q, Wang Y-P, Liu Z, Chen T, Castleman KR (2002) The effect of image enhancement on biomedical pattern recognition. In: Proceedings of the second joint 24th annual conference and the annual fall meeting of the biomedical engineering society. IEEE, pp 1067–1069
10. Abdul-Nasir AS, Mashor MY, Mohamed Z (2012) Modified global and modified linear contrast stretching algorithms: new color contrast enhancement techniques for microscopic analysis of malaria slide images. Comput Math Methods Med. https://doi.org/10.1155/2012/637360
11. Kaur J, Choudhary A (2012) Comparison of several contrast stretching techniques on acute leukemia images. Int J Eng Innov Technol (IJEIT) 2(1):332–335
12. Ho KT, Lee SH, Cho NK (2013) A dehazing algorithm using dark channel prior and contrast enhancement. In: IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, pp 2484–2487
13. Jang CY, Kang SJ, Kim YH (2012) Adaptive contrast enhancement using edge-based lighting condition estimation. Digit Sig Process 58:1–9
14. Al-Amen Z (2018) Contrast enhancement for color images using an adjustable contrast stretching technique. Int J Comput 17(2):74–80
15. Hitam MS, Yussof WNJW, Awalludin EA, Bachok Z (2013) Mixture contrast limited adaptive histogram equalization for underwater image enhancement. In: IEEE international conference on computer applications technology (ICCAT). IEEE, Sousse, pp 1–5
16. Wang Z, Bovik AC (2002) A universal image quality index. IEEE Sig Process Lett 9(3):81–84
17. Kumar R, Rattan M (2012) Analysis of various quality metrics for medical image processing. Int J Adv Res Comput Sci Softw Eng 2(11):137–144
18. Saha A, Wu QMJ (2016) Full-reference image quality assessment by combining global and local distortion measures. Sig Process 128:186–197
19. Fiete RD (2010) Modelling the imaging chain of digital cameras. SPIE, pp 127–132
20. Arici T, Altunbasak Y (2006) Image local contrast enhancement using adaptive non-linear filters. In: International conference of image processing. IEEE, Atlanta, pp 2881–2884
21. Matkovic K, Neumann L, Neumann A, Psik T, Purgathofer W (2005) Global contrast factor - a new approach to image contrast. In: The computational aesthetics in graphics, visualization and imaging workshop, pp 159–168
22. Abdul-Nasir AS, Mashor MY, Mohamed Z (2012) Modified global and modified linear contrast stretching algorithms - new colour contrast enhancement techniques for microscopic analysis of malaria slide images. Comput Math Methods Med 2012:637360


23. Rizzi A, Algeri T, Medeghini G, Marini D (2004) A proposal for contrast measure in digital images. In: Second European conference on color in graphics, imaging and vision. International symposium on multispectral color science, pp 187–192
24. Sulur KM, Abdul Nasir AS, Mustafa WA, Jaafar H, Mohamed Z (2017) Analysis of color constancy algorithms for improving segmentation of malaria images. J Telecommun Electron Comput Eng 10(1–16):43–49
25. Khairudin NAA, Ariff FNM, Abdul Nasir AS, Mustafa WA, Khairunizam W, Jamlos MA, Zunaidi I, Razlan ZM, Shahriman AB (2019) Image segmentation approach for acute and chronic leukaemia based on blood sample images. In: MEBSE 2018 - IOP conference series: materials science and engineering, vol 557, p 012008
26. Radha N, Tech M (2012) Comparison of contrast stretching methods of image enhancement techniques for acute leukemia images. Int J Eng Res Technol 1(6):1–8
27. Abdul-Nasir AS, Mashor MY, Mohamed Z (2012) Segmentation based approach for detection of malaria parasites using moving k-means clustering. In: 2012 IEEE EMBS international conference of biomedical engineering and science. https://doi.org/10.1109/IECBES.2012.6498073
28. He K, Sun J, Tang X (2011) Single image haze removal using dark channel prior. IEEE Trans Pattern Anal Mach Intell 33(12):2341–2353
29. Tian QC, Cohen LD (2018) A variational-based fusion model for non-uniform illumination image enhancement via contrast optimization and color correction. Sig Process 153:210–220
30. Silpa K, Mastani S (2012) Comparison of image quality metrics. Int J Eng Res 3(8):1–5
31. Martens JB, Meesters L (1998) Image dissimilarity. Sig Process 70(3):155–176
32. Zhang L, Zhang L, Mou X, Zhang D (2011) FSIM: a feature similarity index for image quality assessment. IEEE Trans Image Process 20(8):2378–2386
33. Gupta S, Porwal R (2016) Appropriate contrast enhancement measures for brain and breast cancer images. Int J Biomed Imaging 2016:4710842
34. Ghani ASA, Isa NAM (2015) Enhancement of low quality underwater image through integrated global and local contrast correction. Appl Soft Comput J 37:332–344
35. Naxos G, Scotti F (2005) Automatic morphological analysis for acute leukemia identification in peripheral blood microscope images. In: IEEE international conference on computational intelligence for measurement systems and applications, July 2005, pp 96–101

Signal Processing Technique for Pulse Modulation (PM) Ground Penetrating Radar (GPR) System Based on Phase and Envelope Detector Technique Che Ku Nor Azie Hailma Che Ku Melor, Ariffuddin Joret, Asmarashid Ponniran, Muhammad Suhaimi Sulong, Rosli Omar, and Maryanti Razali

Abstract The Ground Penetrating Radar (GPR) system is used to detect and locate underground embedded objects based on the principles of RADAR. This system uses the reflection of electromagnetic waves, which are transmitted towards the ground and detected back using an antenna. This paper focuses on the analysis of signal processing for a Pulse Modulation (PM) GPR system based on phase and envelope detector (ED) techniques to detect and estimate the depth of an embedded object in a 3-dimensional GPR system simulation model designed using CST Studio Suite software. The antenna used in the simulation model is a Dipole antenna operating from 70 MHz to 80 MHz. The background model is a rectangular object of dry sandy soil material, while the embedded object chosen is a rectangular iron object. The output signal calculated by the CST software is exported to MATLAB for processing to produce the GPR radargram. The simulation results show that by applying the proposed signal processing technique based on the phase of the GPR output signal, the embedded object can be seen clearly and its depth estimated at about 900 and 1000 mm. Using the ED technique, all the embedded objects can be detected, but their depth is hard to estimate.

C. K. N. A. H. C. K. Melor · A. Joret (&) · A. Ponniran · R. Omar · M. Razali
Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Parit Raja, Malaysia
e-mail: [email protected]

M. S. Sulong
Faculty of Technical and Vocational Education, Universiti Tun Hussein Onn Malaysia, Parit Raja, Malaysia

A. Joret · M. S. Sulong
Internet of Things (IOT) Focus Group, Universiti Tun Hussein Onn Malaysia, Parit Raja, Malaysia

A. Ponniran
Power Electronic Converters (PECs) Focus Group, Universiti Tun Hussein Onn Malaysia, Parit Raja, Malaysia

© Springer Nature Singapore Pte Ltd. 2021 Z. Md Zain et al.
(eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_46


C. K. N. A. H. C. K. Melor et al.

Keywords Ground penetrating radar · Dipole antenna · Pulse Modulation GPR

1 Introduction

The GPR system can be used to detect and localize target objects in soil [1]. GPR has recently been used extensively in engineering, geological exploration and other fields [2, 3]. The concept used by the GPR system in detecting embedded objects is the scattering principle of electromagnetic waves [4]. The GPR system can be classified into two groups by operating domain, namely time domain and frequency domain. The time-domain or impulse GPR transmits discrete pulses of nanosecond duration and digitizes the returns at GHz sample rates, while the frequency-domain GPR system emits an electromagnetic wave signal with a variable frequency called a chirp signal [5]. Another time-domain GPR system, which radiates electromagnetic waves in Gaussian pulse form, is known as Pulse Modulation GPR (PM GPR) [6–11]. This GPR system uses the amplitude modulation (AM) technique, which involves two types of signal known as the information signal and the carrier signal. The carrier signal used in this GPR system is a high-frequency sinusoidal signal, while the information signal is a low-frequency Gaussian pulse. Based on the AM signal, the information signal can be retrieved using the envelope detector technique [12].

1.1 Amplitude Modulation (AM)

In an AM signal, the amplitude of the carrier signal is varied according to the instantaneous amplitude of the information signal m(t). The AM signal s(t) can be written as:

Carrier signal: A_c cos(2πf_c t)   (1)

Modulating information signal: m(t)   (2)

AM signal: s(t) = A_c [1 + m(t)] cos(2πf_c t)   (3)

1.2 Fast Fourier Transform (FFT)

The Fast Fourier Transform (FFT) has been widely used for real-time measurement due to its higher computational efficiency and its ability to produce high-precision received signal levels for a large class of signal processes [13]. The FFT was developed as an


efficient method of computing the Fourier transform. In this method, the complete time history of the Fourier transform values at all points of interest is stored and the exponential components are computed iteratively [14]. The FFT, which is Fourier analysis, converts a signal from the time domain to a representation in the frequency domain [15]. To convert the AM signal from the analogue time domain s(t) into the frequency domain S(k), the time-domain signal is first transformed into a discrete-time signal as:

s(n) = A_c [1 + m(n t_s)] cos(2πf_c n t_s)   (4)

where t is time, t_s is the sampling time, n is the sample index and f_c is the carrier frequency. By using the Discrete Fourier Transform (DFT), the discrete frequency domain of the AM signal S(k) is:

S(k) = Σ_{n=0}^{N−1} s(n) e^{−j2πnk/N},  k = 0, 1, 2, …, N−1   (5)

S(k) = Σ_{n=0}^{N−1} A_c [1 + m(n t_s)] cos(2πf_c n t_s) e^{−j2πnk/N}   (6)

Based on Eqs. (5) and (6), the DFT describes N sets of equations, thus requiring N² multiplications for its computation. Computationally efficient algorithms to obtain the DFT, namely the FFT, require the number of samples N to be a power of 2 and compute the DFT using only N log₂ N multiplications. In this study, the antenna signal of the PM GPR system is processed using the FFT technique based on the phase value. The signal is converted into the frequency domain and rearranged in columns according to the scanning points of the simulation to generate the GPR radargram. Figure 1 shows the block diagram of the phase-based signal processing technique of this study. Based on the flowchart shown in Fig. 2, the phase difference between the output and input signals is calculated to produce the PM GPR system radargram.
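As an illustrative sketch of Eqs. (3)–(6), the snippet below builds a Gaussian-pulse AM signal, takes its FFT with NumPy, and reads the phase at the spectral peak, mirroring the phase-based processing; all signal parameters (sampling rate, pulse width, carrier choice inside the 70–80 MHz band) are assumptions of the sketch, not values from the simulation:

```python
import numpy as np

fs = 1e9                      # sampling frequency (assumed for the sketch)
N = 4096                      # power of 2, as the FFT requires
ts = 1.0 / fs
t = np.arange(N) * ts

fc = 75e6                     # carrier inside the antenna's 70-80 MHz band
Ac = 1.0
t0, sigma = t[N // 2], 50e-9  # Gaussian information pulse m(t)
m = np.exp(-((t - t0) ** 2) / (2 * sigma ** 2))

s = Ac * (1.0 + m) * np.cos(2 * np.pi * fc * t)   # Eq. (3)/(4)

S = np.fft.rfft(s)                                # Eq. (5)/(6) via the FFT
freqs = np.fft.rfftfreq(N, d=ts)
k_peak = int(np.argmax(np.abs(S)))
phase_at_peak = float(np.angle(S[k_peak]))        # phase value fed to the radargram
```

The magnitude spectrum peaks at the carrier frequency, and the phase read at that bin is the kind of per-scan-point value that, collected column by column, would form the phase-based radargram.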

Fig. 1 PM GPR system using phase technique


Fig. 2 PM GPR radargram processing technique based on phase calculation

1.3 Envelope Detector

One of the techniques used to retrieve the information signal from an AM signal is the Envelope Detector (ED) technique [1, 16]. There are three types of ED technique to demodulate an AM signal: Asynchronous Full Wave (AFW), Asynchronous Half Wave (AHW) and Asynchronous Real Square Law (ARSL). In this study, the AHW type of ED technique was used to detect the information signal, which is the pulse signal, from the antenna output signal of the PM GPR system. Figure 3 shows the block diagram of the AHW type of ED technique, and the processing algorithm flowchart is illustrated in Fig. 4.
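A minimal sketch of the AHW idea, half-wave rectification followed by a short moving-average low-pass filter, is shown below; the AM test signal and the three-period filter length are assumptions of the sketch, not the paper's implementation:

```python
import numpy as np

fs, fc, N = 1.2e9, 75e6, 4096       # fs chosen so one carrier period is 16 samples
t = np.arange(N) / fs
t0, sigma = t[N // 2], 50e-9
m = np.exp(-((t - t0) ** 2) / (2 * sigma ** 2))   # Gaussian information pulse
s = (1.0 + m) * np.cos(2 * np.pi * fc * t)        # AM signal

rectified = np.maximum(s, 0.0)                    # asynchronous half-wave rectification
win = 3 * int(round(fs / fc))                     # average over ~3 carrier periods
envelope = np.convolve(rectified, np.ones(win) / win, mode="same")
```

The recovered envelope is proportional to 1 + m(t) (a half-wave-rectified cosine averages to its amplitude divided by π), so it peaks where the information pulse peaks.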


Fig. 3 AHW envelope detector technique

Fig. 4 PM GPR radargram processing technique based on AHW ED

2 GPR System Simulation Using CST

The Dipole antenna used in this simulation operates from 70 to 80 MHz and is a very common practical wire antenna, as shown in Fig. 5. The material used for this antenna is copper. There is a gap (G) between the two arms of the antenna for feeding purposes. The detailed parameters of this antenna are described in Table 1. Figure 6 provides an overview of the 3-dimensional GPR system simulation designed in this study by adding models of the ground and the embedded object to the Dipole antenna design in the CST software. The ground object model in this simulation is a rectangular object of dry sandy soil material measuring 3000 mm in length, 3000 mm in width and 2000 mm in height.


Fig. 5 Dipole antenna design in CST software

Table 1 Design parameters of Dipole antenna

Parameter                   Value    Unit
Operating frequency         70–80    MHz
Length of the dipole (L)    1500     mm
Radius of the dipole (R)    50       mm
Gap (G)                     200      mm

Fig. 6 GPR system simulation model using CST software


Meanwhile, the embedded object model has been set as iron, with a size of 800 mm in length, 800 mm in width and 400 mm in height. In this study, the GPR system simulation was designed to determine the GPR system's capability in detecting and estimating the depth of an embedded iron object in dry sandy soil. The position of the Dipole antenna in this simulation has been set at 5 mm from the surface, placed in the middle of the ground object.

3 Result and Discussion

In order to produce a radargram of the GPR system, the output signal of the simulation generated in CST Studio Suite software has to be exported to MATLAB software. In MATLAB, this output signal, calculated by CST Studio Suite using the Finite Difference Time Domain (FDTD) technique, is demodulated using the ED technique to obtain the pulse of the output signal and then produce a 2D output image of the GPR system known as the GPR radargram. In addition, the output signal is also converted into the frequency domain to retrieve its phase value and produce the GPR radargram.

3.1 Design of Dipole Antenna

The Dipole antenna of the GPR system simulation has been designed to be resonant at 75 MHz. Figure 7 shows the S11 result of the designed antenna, which is less than −10 dB in the frequency range from 70 MHz to 80 MHz.

Fig. 7 S11 result of Dipole antenna


Table 2 Summary of the analysis results for the magnitude (ED) and phase techniques of the PM GPR system in detecting and estimating the depth of the embedded iron object (radargram images for the ED technique and the phase calculation technique at depths of 0, 10, 100, 500, 900 and 1000 mm)

3.2 Simulation Result of GPR System Using GPR Radargram

Several simulations were conducted based on the embedded object depth, namely at 0, 10, 100, 500, 900 and 1000 mm. Referring to Table 2, based on the ED technique, the position of the iron object at 0 mm until 1000 mm depth in dry sandy soil can be observed easily. Unfortunately, its position cannot be estimated clearly in the GPR system simulation, with the embedded object appearing at about 3000 until 4000 time samples. Meanwhile, based on the GPR radargram produced using the phase calculation technique, the position of the iron object at depths of 0, 10, 100 and 500 mm in dry sandy soil is not detected; however, the embedded iron object can be detected at depths of 900 and 1000 mm, with estimated depths at about 4 and 5 frequency samples respectively.

4 Conclusion

The processing of the output signal of the Dipole antenna in the GPR system simulation using CST Studio Suite software has been performed and shows good results in detecting and estimating the embedded iron object in the dry sandy soil area. The simulation of the PM GPR system using the ED technique shows that all GPR radargrams clearly show detection of the embedded iron object at depths of 0, 10, 100, 500, 900 and


1000 mm. This result indicates that the performance of the ED as a signal processing algorithm for the antenna output signal can smoothen the GPR radargram. Using the phase calculation technique, the GPR radargram produced cannot detect an embedded object at depths of 0, 10, 100 and 500 mm. However, using this technique, the depth of the detected embedded object at 900 and 1000 mm can be estimated correctly. Further investigation will focus on the capability of the PM GPR system in detecting and estimating the depth of a variety of embedded objects, such as wood and water, in dry sandy soil using other antennas.

References

1. Joret A, Sulong MS, Abdullah MFL, Madun A, Dahlan SH (2018) Design and simulation of horn antenna using CST software for GPR system. IOP Conf Ser J Phys 995:012080. ISSN 2600-7495
2. Sokolov KO, Prudetckii ND, Fedorova LL, Savvin DV (2018) GPR investigation of ice-filled cracks in loose deposits. In: 17th international conference on ground penetrating radar (GPR)
3. Kulyandin GA, Fedorova LL, Savvin DV, Prudetskii ND (2016) GPR mapping of bedrock of alluvial gold deposits in permafrost. In: Proceedings of the 2016 16th international conference on ground penetrating radar (GPR), Hong Kong, China, pp 1–4
4. Oskooi B, Julayusefi M, Goudarzi A (2014) GPR noise reduction based on wavelet thresholdings. Arab J Geosci 8(5):2937–2951
5. Joret A, Abdullah MFL, Dahlan SH, Madun A, Sulong MS (2016) Development of ground penetrating radar hybrid system using Vivaldi antenna for buried object detection. Int J Electr Eng Appl Sci IJEEAS 1(1). ISSN 2600-7495
6. Warren C, Giannopoulos A (2016) Experimental and modeled performance of a ground penetrating radar antenna in lossy dielectric. IEEE J Sel Topics Appl Earth Obs Remote Sens 9(1):29–36
7. Nishimoto M, Yoshida D, Ogata K, Tanabe M (2012) Target response extraction from measured GPR data. In: International symposium on antennas and propagation (ISAP). IEEE, pp 427–430
8. Seyfried D, Schoebel J (2015) Stepped-frequency radar signal processing. J Appl Geophys 112:42–51
9. Li L, Tan AEC, Jhamb K, Rambabu K (2012) Buried object characterization using ultra-wideband ground penetrating radar. IEEE Trans Microw Theory Tech 60(8):2654–2664
10. Gurbuz AC, McClellan JH, Scott WR (2012) Compressive sensing of underground structures using GPR. Digit Signal Proc 22(1):66–73
11. Qiao L, Qin Y, Ren X, Wang Q (2015) Identification of buried objects in GPR using amplitude modulated signals extracted from multiresolution monogenic signal analysis. Sensors 15(12):30340–30350
12. Joret A (2018) Modulation technique for GPR system radargram. PhD thesis, Universiti Tun Hussein Onn Malaysia
13. Tan L, Jiang J (2013) Discrete Fourier transform and signal spectrum in digital signal processing, 2nd edn., pp 87–136
14. Tan L, Jiang J (2013) Introduction to digital signal processing in digital signal processing, 2nd edn., pp 1–13
15. Sidney CB Fast Fourier Transforms. http://cnx.org/content/col10550/1.22/. Accessed 03 Nov 2019
16. Chaparro L (2015) Fourier analysis in communications and filtering in signals and systems using MATLAB, 2nd edn., pp 449–490

Evaluation of Leap Motion Controller Usability in Development of Hand Gesture Recognition for Hemiplegia Patients Wan Norliyana Wan Azlan, Wan Nurshazwani Wan Zakaria, Nurmiza Othman, Mohd Norzali Haji Mohd, and Muhammad Nurfirdaus Abd Ghani

Abstract A hand gesture recognition system is developed for hemiplegia patients to undergo rehabilitation that can encourage patients' motor function. The Leap Motion controller has been studied to detect human hand motion for the development of hand-gesture-controlled robotic arms. It was shown that the Leap Motion sensor is useful for obtaining the coordinate position and orientation of finger, palm and wrist movements. A set of test programs was designed using a healthy hand to investigate the accuracy and reliability of the sensor. The test results show the effectiveness of the device in recognizing human hand gestures, with a high accuracy rate of 100% for opening and closing of the hand, 97.61% for whole hand tapping, and 99.6% for right movement and 98.71% for left movement of whole hand lateral rotation.

Keywords Hand gesture recognition · Leap Motion Sensor · Hemiplegia · Data acquisition

1 Introduction

1.1 Hand Gesture Recognition

W. N. Wan Azlan, W. N. Wan Zakaria, N. Othman, M. N. Haji Mohd, M. N. Abd Ghani
Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Batu Pahat, Johor, Malaysia
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021. Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_47

Hand gesture recognition has attracted growing interest due to its applications in many different fields, such as human-computer interaction, robotics, computer gaming and automatic sign-language interpretation. Gesture-based remote operation is potentially one of the most effective means of communication with a robotic hand


as it is one of the most effective and intuitive means of human communication [1, 2]. Nowadays, a large variety of robots are used in various industries, and control is highlighted as one of the main points of the electro-mechanical design of robot manipulators [3]. Although cameras and vision-based control have existed for a long time, developing reliable gesture recognition remains a challenging task. The introduction of low-cost devices like Microsoft's Kinect has made it possible to exploit the depth information acquired by such devices to achieve proper gesture recognition in a low-cost and user-friendly manner. Recently, the Leap Motion has been introduced as an inexpensive choice with proper accuracy for detecting human hand motions [4, 5]. Using the Leap Motion Controller (LMC), a technique for controlling robot manipulators on the slave side of a bilateral system can be developed [3]. The Leap Motion is a motion sensor which can detect 3-dimensional hand gestures in the air. It provides complete information about the hands to help track hand movements and gestures through an API (Application Programming Interface). An idea of the real-time hand gesture recognition process through this device is explained along with insight into existing machine learning models. Finally, an attempt is made to explain the complexities of the device and the models along with their features [6].

1.2 Hemiplegia

Hemiplegia is a type of paralysis that affects one side of the body, for example, only an arm and a leg on the left or the right side of the body [7]. Hemiparesis is a milder condition described as a loss of strength or weakness and mobility on one side of the body. Unlike a person with hemiplegia, who experiences full paralysis on one side of the body, a person with hemiparesis might be unable to make movements using their arm or leg, or may feel a tingling or other unusual sensations on just one side of the body [8]. However, some people develop hemiplegia after experiencing hemiparesis, while others may alternately experience both conditions over a period of time. Just as paralysis is an extreme form of weakness and nerve dysfunction, hemiplegia is an extreme form of hemiparesis [8]. Hemiplegia can be divided into two categories: congenital and acquired hemiplegia. Congenital hemiplegia occurs when the brain receives damage before, during or shortly after birth; damage happens when a bleed or a blood clot injures the brain. Hemiplegia occurs in about 1 in 1000 children in the world. Acquired hemiplegia, in contrast, happens when a person in adulthood experiences a stroke, accident, infection or tumor [9].


2 Methodology

2.1 Leap Motion Sensor

Recent studies have shown that the Leap Motion controller is the best device to use for detecting and tracking movements of the human fingers and hand gestures. This paper demonstrates the usage of the Leap Motion device as the first step in detecting the movement of the subject's hand. The Leap Motion is capable of recognizing hands, fingers and arms. The tracking mechanism is very precise as it detects discrete positions and movements. Unlike other tracking devices such as the Microsoft Kinect, which tracks using a depth camera, this device operates based on optical sensors and infrared light. The Leap Motion is able to detect motion in 3D because its tracking system imitates the human pair of eyes. Despite its surface area of only 24 cm², it has two infrared (IR) cameras and three IR light emitters. The range of detection is roughly 2.5 to 61 cm from the device. The tracking system is at its optimum when it has a clear, high-contrast view of an object's silhouette to identify and track. The Leap Motion uses a right-handed Cartesian coordinate system with the origin placed at the top surface of the device, as illustrated in Fig. 1. The x-axis runs horizontally and is parallel to the longer edge of the device. The y-axis is vertical, with positive values increasing upwards. The z-axis runs horizontally, with positive values increasing towards the user. When the Leap Motion tracks hands and fingers in its field of view, a set of data known as a frame is provided. The hand model gives data about the identity (right or left hand), position, orientation (roll, pitch and yaw), grab strength, pinch strength and other characteristics of the detected hand. When parts of the hand are invisible to the Leap Motion's cameras, its software predicts them from an internal model of a human hand. All five fingers of the human hand will be displayed even if the fingers are not clearly captured. However, finger occlusions, fingers blocked by the palm, sleeves of a shirt, a watch on the wrist or jewelry on the hands and fingers may result in inaccurate tracking or false detection. In addition, data tracking could also be

Fig. 1 The Leap Motion right-handed coordinate system [10]


lost when the palm is not perpendicular to the Leap Motion's cameras. On the other hand, a human foot can also be detected by the Leap Motion, as it is a hand-like object. The human arm is detected as a bone-like object that gives the direction, length, width and end points of an arm. When the elbow is not in the field of view, the Leap Motion controller estimates its position based on past observations as well as typical human proportions [10].

2.2 Data Acquisition System

The control architecture consists of a Leap Motion device and an open-source integrated development environment (IDE) called Processing. The Leap Motion detects hand gestures made by the human hand and transmits the acquired data to the Processing IDE. Processing has its own coding language within the visual arts context. In the Processing IDE, the hand orientation in degrees (roll, pitch and yaw) was extracted and exported to a .txt file. The .txt file was then imported into Microsoft Excel to visualize the acquired data and subsequently for further data analysis.
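The exported orientation log can also be post-processed directly with a short script instead of Excel. The sketch below assumes a hypothetical comma-separated "roll,pitch,yaw" layout with one frame per line; the actual export format of the Processing sketch is not specified in the paper.

```python
import csv
import statistics

def load_orientation(path):
    """Parse a hypothetical 'roll,pitch,yaw' text log (degrees, one frame per line)."""
    rolls, pitches, yaws = [], [], []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            r, p, y = (float(v) for v in row)
            rolls.append(r)
            pitches.append(p)
            yaws.append(y)
    return rolls, pitches, yaws

def summarize(channel):
    """Mean and population standard deviation of one orientation channel."""
    return statistics.mean(channel), statistics.pstdev(channel)
```

With such a loader, the per-gesture statistics reported later in this paper (mean angles and deviations) follow from one call per channel.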

2.3 Test Procedure

Motor training rehabilitation was performed in order to test the efficiency of the Leap Motion sensor for this study. A 6-minute training session was designed, comprising three hand gesture activities: (1) opening and closing of the hand, (2) whole hand tapping and (3) whole hand lateral rotation [11], as shown in Table 1. According to [12], only the hand is trained instead of the whole arm because motor impairments usually affect the distal more than the proximal limb region, resulting in reduced motor abilities and cortical representation of the subject. The tests were designed to improve motor functions of the hemiplegic hand.

Table 1 Hand movements during 6 min of motor training

Hand gestures                  Duration
Opening and closing of hand    2 min
Whole hand tapping             2 min
Whole hand lateral rotation    2 min

The subject will hover their healthy hand over the Leap Motion sensor, making sure that the palm is perpendicular to the Leap Motion's cameras and that the hand is in the field of view of the sensor. The subject will then move their


Fig. 2 Opening and closing of hand

Fig. 3 Whole hand tapping

healthy hand according to the hand gestures listed in Table 1, in sequence, within the total period of 6 min. Illustrations of the described activities are shown in Figs. 2, 3 and 4.

3 Results and Discussion

Subsections 3.1, 3.2 and 3.3 present the results and the calculation of mean, error percentage and accuracy percentage, demonstrating the usability of the Leap Motion sensor through the three test procedures described in Sect. 2.3.


Fig. 4 Whole hand lateral rotation

3.1 Opening and Closing of Hand

According to [13], the word 'grasping' means a firm hold or grip, while 'release' means to allow something to move, act or flow freely [14]. In this paper, the action of closing the hand is identified as 'grasping', while opening the hand is known as 'releasing'. A grab strength test was conducted in order to identify the grasping and releasing behavior detected by the Leap Motion sensor. Figure 5 shows the graph of the grab strength test as conducted in Fig. 2. The value of the grab strength parameter ranges from 0 to 1. Initially, the hand is opened, which is


Fig. 5 Grab strength value for opening and closing of hand

parallel to the Leap Motion sensor's z-axis. At t = 20 s, the hand is closed for 5 s and then opened again, repeating at intervals of 20 s. This test was carried out over 2 min. Grab strength is 0 when the hand is opened or 'released' because the Leap Motion sensor does not detect the fingers closing toward the palm, while grab strength is 1 when the hand is closed or 'grasping' because the sensor detects that the fingers are close to the palm. It can be concluded that the accuracy of the Leap Motion sensor in detecting grab strength is 100%.
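Because the grab strength reported here is effectively binary (0 open, 1 closed), counting grasp repetitions in a logged session reduces to counting open-to-closed transitions. A minimal sketch; the 0.5 threshold is our assumption for illustration, not a value from the paper.

```python
def count_grasps(strengths, threshold=0.5):
    """Count open-to-closed transitions in a grab-strength time series."""
    grasps = 0
    closed = False
    for s in strengths:
        if s >= threshold and not closed:
            grasps += 1       # hand just closed: one new grasp event
            closed = True
        elif s < threshold:
            closed = False    # hand released again
    return grasps
```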

3.2 Whole Hand Tapping

A whole hand tapping gesture was conducted to test the detection of angular motion of the hand by the Leap Motion sensor. Whole hand tapping means that all five fingers and the palm move simultaneously. A test was carried out as shown in Fig. 3. The graph in Fig. 6 shows the three rotational parameters of orientation, roll, pitch and yaw, when the whole hand tapping gesture test was carried out. Roll, pitch and yaw are the angular motions about the x-axis, y-axis and z-axis respectively, in degrees (°). Based on this graph, it is clear that pitch shows the most obvious change in angle compared to roll and yaw because the gesture moves about the rotational y-axis. Since the change in pitch was significant, the pitch signal was extracted when the whole hand tapping movement was made. Figure 7 shows the graph of pitch against time. The hand is in a resting position from t = 0 s until t = 19 s. At t = 20 s, the hand tapped from a 0° to a −45° position.


Fig. 6 Orientation of the hand during whole hand tapping

Fig. 7 Pitch value for whole hand tapping

The pitch drops to a negative value and rises back to a positive value when the hand returns to its initial position. The process repeats every 20 s for up to 2 min. Based on the graph, the tapping result varied in the range of −40° to −49°. Table 2 shows the pitch angle recorded each time the hand taps, every 20 s. The mean angle calculated over the 6 samples was −46.07°. Therefore, the error captured by the sensor relative to the −45° tap was 1.07°, which corresponds to 2.39% error. In conclusion, the accuracy of the pitch angle test result was 97.61% for the whole hand tapping gesture.
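The mean, error and accuracy figures follow directly from the six sampled angles and the nominal −45° tap described earlier in this section. A sketch of the arithmetic, reproducing the paper's numbers:

```python
def accuracy_stats(samples, nominal):
    """Mean angle, absolute error vs the nominal angle, error % and accuracy %."""
    mean = sum(samples) / len(samples)
    error = abs(mean - nominal)
    error_pct = 100.0 * error / abs(nominal)
    return mean, error, error_pct, 100.0 - error_pct

pitch_samples = [-48.17, -40.05, -44.77, -46.15, -49.34, -47.97]  # Table 2
mean, error, error_pct, accuracy = accuracy_stats(pitch_samples, nominal=-45.0)
# mean ~ -46.07 deg, error ~ 1.07 deg, error ~ 2.39 %, accuracy ~ 97.61 %
```

The same function applied to the yaw means in Sect. 3.3 with nominal angles of 20° and −40° reproduces the lateral-rotation accuracies.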

Table 2 Pitch angle test result

Time (s)    Pitch angle (°)
20          −48.17
40          −40.05
60          −44.77
80          −46.15
100         −49.34
120         −47.97

3.3 Whole Hand Lateral Rotation

Whole hand lateral rotation is the movement of the palm from side to side without moving the wrist. A test as shown in Fig. 4 was conducted to investigate the gesture behavior captured by the Leap Motion sensor. Figure 8 compares the rotational parameters of orientation and shows that yaw, the rotation about the z-axis, has the most noticeable change in the angle of the palm compared to roll and pitch for this hand movement. The yaw signal is emphasized by the graph in Fig. 9 because of its significant change of angle when the hand moves in a lateral rotation. The left hand was used in this experiment. Every movement in this experiment has an interval of 10 s. A total of 2 min was taken to complete this activity. Initially, the hand was at the 0° position. Then, the palm was moved to its maximum right horizontally without moving the wrist position; the manually measured maximum positive angle was 20°. The palm then returned to the resting position at 0°. After that, the palm was moved to its foremost left, still keeping the wrist position static; the manually measured negative angle was −40°. Subsequently, the palm moved back to the resting position and the process repeated for up to 2 min.

Fig. 8 Orientation of the hand during whole hand lateral rotation


Fig. 9 Yaw value for whole hand lateral rotation

Table 3 Pitch angle test calculation

Pitch angle calculation    Value
Mean                       −46.07°
Error                      1.07°
Error percentage           2.39%
Accuracy percentage        97.61%

According to the graph in Fig. 9, the captured right-side movements were in the range of 19° to 22°, while the left-side movements were in the range of −38° to −43°. Table 4 shows the yaw angle recorded as the hand moves from side to side every 10 s. The mean angle over each 10-s interval was calculated for every movement. The total mean yaw angle calculated for the right direction movement was 19.92°, which gives an error of 0.08°. As a result, the percentage error and the accuracy for the right direction movement test were 0.40% and 99.60% respectively.

Table 4 Yaw angle test result

Towards right direction            Towards left direction
Time (s)   Mean yaw angle (°)      Time (s)    Mean yaw angle (°)
10–20      19.22                   30–40       −41.31
50–60      19.93                   70–80       −41.57
90–100     20.61                   110–120     −38.67

Table 5 Yaw angle test calculation

                 Towards right direction    Towards left direction
Total mean (°)   19.92                      −40.52
Error (°)        0.08                       0.52
Error %          0.40                       1.30
Accuracy %       99.60                      98.71


On the other hand, the total mean yaw angle calculated for the left direction movement was −40.52°, which gives an error of 0.52°. In conclusion, the percentage error and the accuracy for the left direction movement test were 1.29% and 98.71% respectively.

4 Conclusion

This paper discusses the usability and efficiency of the Leap Motion sensor for hand gesture recognition for hemiplegia patients. Several rehabilitation hand movements were tested to investigate the accuracy of the Leap Motion sensor in detecting hand gestures. It was found that there were slight errors between the actual measurements of the hand gestures and the measurements made by the Leap Motion sensor. The inaccuracies might be due to a mild hand tremor, which is a normal occurrence for every person but varies between individuals. Three different hand gesture tests were carried out: opening and closing of the hand, whole hand tapping and whole hand lateral rotation. The accuracy of the Leap Motion in detecting the hand gestures was 100% for opening and closing of the hand, 97.61% for whole hand tapping, and 99.6% for right movement and 98.71% for left movement of whole hand lateral rotation. The present study establishes a quantitative framework for detecting human hand gestures, particularly for further use in the development of robot-based rehabilitation devices. For future work, the use of human hand motion recognition to control a robot arm as a prosthetic hand will be studied further for the rehabilitation of hemiplegia patients.

Acknowledgements The authors are grateful to Universiti Tun Hussein Onn Malaysia (UTHM) for supporting this research work under Postgraduate Research Grant (GPPS) Vot H409.

References

1. Cheng H, Yang L, Liu Z (2015) A survey on 3D hand gesture recognition. IEEE Trans Circ Syst Video Technol 1–14
2. Wachs JP, Kölsch M, Stern H, Edan Y (2011) Vision-based applications. Commun ACM 54(2):60–71
3. Gunawardane H, Medagedara N, Madhusanka A (2015) Control of robot arm based on hand gestures using leap motion sensor technology. Int J Robot Mechatronics 2(1):7–14
4. Ren Z, Yuan J, Meng J, Zhang Z (2013) Robust part-based hand gesture recognition using kinect sensor. IEEE Trans Multimed 15(5):1110–1120
5. Leap Motion. https://www.leapmotion.com/. Accessed 20 July 2019
6. Panduranga HT, Mani C (2018) Dynamic hand gesture recognition system: a short survey. In: International conference on inventive research in computing applications (ICIRCA), pp 689–694


7. Hemiplegia, SpinalCord.com. https://www.spinalcord.com/hemiplegia. Accessed 10 June 2019
8. What is the difference between hemiplegia and hemiparesis. Spinalcord.com. https://www.spinalcord.com/blog/what-is-the-difference-between-hemiplegia-and-hemiparesis. Accessed 10 June 2019
9. What is Hemiplegia, Epilepsy Society. https://www.epilepsysociety.org.uk/whathemiplegia#.XP44JYgzbIU. Accessed 10 June 2019
10. API Overview - Leap Motion C++ SDK v3.2 Beta documentation. https://developer-archive.leapmotion.com/documentation/cpp/devguide/Leap_Overview.html. Accessed 20 July 2019
11. Tosi G, Romano D, Maravita A (2018) Mirror box training in hemiplegic stroke patients affects body representation. Journal 11(617):1–10
12. Dohle C, Püllen J, Nakaten A, Küst J, Rietz C, Karbe H (2009) Mirror therapy promotes recovery from severe hemiparesis: a randomized controlled trial. Journal 23(3):209–217
13. Lexico Dictionaries, English (2019) Grasp Definition of Grasp by Lexico. https://www.lexico.com/en/definition/grasp. Accessed 12 Nov 2019
14. Lexico Dictionaries, English (2019) Release Definition of Release by Lexico. https://www.lexico.com/en/definition/release. Accessed 12 Nov 2019

Using Convolution Neural Networks Pattern for Classification of Motor Imagery in BCI System Sepideh Zolfaghari, Tohid Yousefi Rezaii, Saeed Meshgini, and Ali Farzamnia

Abstract Electroencephalography (EEG) based brain-computer interfaces (BCI) enable humans to control external devices by extracting informative features from brain signals and converting these features into control commands. Deep learning methods have become advanced classification algorithms used in various applications. In this paper, informative features of EEG signals are obtained using the filter-bank common spatial pattern (FBCSP); the features selected using the mutual information method are then fed to the classifiers as input. Convolution neural network (CNN), Naive Bayesian (NB), multiple support vector machine (SVM) and linear discriminant analysis (LDA) algorithms are used to classify EEG signals into left and right hand motor imagery (MI) across nine subjects. Our framework has been tested on the BCI competition IV-2a 4-class dataset. The results show that the CNN classifier yielded the best average classification accuracy, 99.77%, compared to the other classification methods. The experimental results indicate that our proposed method can obtain more refined control in BCI applications such as controlling robot arm movement.

Keywords Electroencephalography (EEG) · Brain-computer interface (BCI) · Motor imagery (MI) · Filter-bank common spatial pattern (FBCSP) · Convolution neural network (CNN)

S. Zolfaghari, T. Y. Rezaii, S. Meshgini
Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
A. Farzamnia
Faculty of Engineering, Universiti Malaysia Sabah (UMS), Kota Kinabalu, Sabah, Malaysia
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021. Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_48

1 Introduction

Brain-computer interface (BCI) is an important research field that provides communication between the brain of a subject with motor impairment and external devices without the intervention of the peripheral nervous system [1]. The BCI system has been used for


various purposes such as rehabilitation [2, 3], control [4], games [5], authentication [6] and so on. Based on the type of electrodes used for obtaining brain signals, BCI systems can be invasive or non-invasive. Electroencephalography (EEG) is a non-invasive method that records the electrical activity of the brain from a user's scalp [7]. Different mental states lead to different types of brain signals. Interfaces based on P300 [8, 9], event-related desynchronization and synchronization (ERD/ERS), steady-state visually evoked potential (SSVEP) [10], error-related potential (ErrP) [11] and movement-related cortical potential (MRCP) [12] are the most widely used. ERD and ERS refer, respectively, to the decrease and increase of EEG signal power in two frequency bands, the mu (8−12 Hz) and beta (16−24 Hz) bands, during movement execution or imagery [13]. BCI measures EEG signals associated with the user's intent and translates the recorded signals into control commands. Hence, signal processing, pattern recognition and classification are very important in BCI research [14]. Various methods have been proposed for extracting informative features and classifying the subject's mental states. Common feature extraction algorithms used to train classifiers are the common spatial pattern (CSP) [15], filter bank CSP (FBCSP) [16], wavelet transformation (WT) [17] and other methods. Also, support vector machine (SVM), linear discriminant analysis (LDA), Naive Bayesian (NB) and other algorithms have been used as classifiers. EEG signals obtained from an experiment on changes in the direction of actual hand movement are reported in [18]; the authors proposed a wavelet-CSP algorithm to extract features of the brain signals. In [19], to differentiate slow and fast execution of left/right hand movement, Welch-method-based power spectral density estimates were used to create the feature vectors, which were fed to SVM, NB, LDA and KNN classifiers; the NB classifier yielded the best accuracy among them. Recently, researchers have begun to study deep learning (DL) approaches to extract features and classify data. This approach has been used in different fields of study such as natural language processing [20], speech recognition [21] and, currently, computer vision applications [22]. Regarding authentication systems using biometric methods, in [23] the authors used five different mental tasks from a standard EEG database; features were obtained by the WT method with 10 and 5 decomposition levels and then classified by an artificial neural network (ANN) classifier. The visual counting task achieved better accuracy than the other mental tasks. A novel multi-objective flower pollination and WT algorithm was proposed in [24]; optimal WT parameters were obtained, and decision tree, neural network, SVM and NB were used as classifiers. Inspired by MI-based ERD/ERS, in [25] a 5-layer CNN architecture is proposed for feature extraction and hand motor imagery classification; the accuracy improved by 5%−10% compared with three conventional classification methods (power + SVM, CSP + SVM, AR + SVM). In [26], a wavelet-transform time-frequency representation is used, a 2-layer CNN is built as a classifier, and convolution kernels of various sizes are validated. In [27], a CNN classification method for the detection of P300 waves is suggested by Cecotti and Graser; the results showed that this classifier obtained the highest accuracy, about 95.5%, on the BCI competition data. The CSP algorithm


based deep neural network (DNN) is presented in [28]. In that paper, a four-layer neural network consisting of two hidden layers is employed to classify the MI signals. The results of several models are compared, and it is observed that the CSP-DNN method is computationally efficient and reduces the overall maximum error. In a more recent paper, Sakhavi et al. [29] presented a new classification method for MI tasks by introducing a temporal representation of the data and using a CNN architecture for classification. This representation is generated by modifying the FBCSP method.

2 Data

In this paper, the EEG data are taken from the BCI competition IV-2a dataset, recorded from 9 subjects [30]. The dataset was recorded from 22 Ag/AgCl electrodes at a 250-Hz sampling rate. Two sessions were recorded for each subject, and each session comprised 288 trials. The timing plan consists of a fixation cross as a sign of the subject's preparation, a cue in the form of an arrow corresponding to one of the four motor imagery classes (left hand, right hand, both feet and tongue) for 1.25 s, a motor imagery period of 3 s, and a black screen for the subject to rest, as shown in Fig. 1.
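With the 250-Hz sampling rate and the timing plan above, the 3-s motor imagery period maps to a fixed window of samples inside each trial. A sketch of the index arithmetic; the cue onset time used below is a hypothetical example, not a value stated in this paper.

```python
FS = 250  # sampling rate (Hz) of the BCI competition IV-2a recordings

def mi_window(cue_onset_s, cue_len_s=1.25, mi_len_s=3.0, fs=FS):
    """Start/stop sample indices of the motor-imagery period after the cue."""
    start = int((cue_onset_s + cue_len_s) * fs)
    stop = start + int(mi_len_s * fs)
    return start, stop

start, stop = mi_window(cue_onset_s=2.0)  # hypothetical cue onset at t = 2 s
# the 3 s MI period at 250 Hz always spans 750 samples
```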

3 Method

Section 3.1 summarizes the FBCSP algorithm and describes feature extraction from the EEG signal. Section 3.2 discusses mutual information for feature selection. Section 3.3 describes the proposed CNN algorithm and classification. The filter bank common spatial pattern (FBCSP) algorithm, the mutual information method and the use of different classifiers for 2-class motor imagery in the training and test phases are shown in Fig. 2.

Fig. 1 The timing plan of a visual cue


Fig. 2 The block diagram representation of our general scheme.

3.1 Filter Bank Common Spatial Pattern

The CSP algorithm is widely used to discriminate features between two classes. By applying spatial filters to the inputs, this method maximizes the variance of the signals in the first class and simultaneously minimizes the variance of the signals in the second class [31]. The CSP filter is sensitive to noisy data and covers all frequency bands; the FBCSP method addresses these problems by passing the EEG signal through a filter bank [16]. The feature extraction steps are described as follows:

1. The EEG signals consisting of hand motor imageries are filtered by a filter bank that has nine band-pass filters (4−8 Hz, 8−12 Hz, …, 36−40 Hz). These band-pass filters are Chebyshev type II filters.
2. Each output of the filters is split into training data and test data.
3. The CSP algorithm is performed on the training and test data in each of the sub-bands to compute the spatial filter. Then, the data matrix is projected through this spatial filter as shown in (1):

Z = WE    (1)

where E ∈ R^(C×T) denotes the data matrix of EEG signals, W ∈ R^(C×C) denotes the spatial filter, and Z ∈ R^(C×T) is the spatially filtered signal which maximizes the discrimination in the variance of the two classes. C is the number of channels and T is the number of samples per signal.


4. Feature vectors are obtained as in (2):

f_i = log( var(Z_j) / Σ_{j=1}^{2m} var(Z_j) )    (2)

where Z_j ∈ R^(2m×T) represents the rows of Z corresponding to the largest and smallest eigenvalues. We set m = 2 in this paper.
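Given a trained spatial filter W, steps 3–4 amount to a projection followed by normalized log-variances. A NumPy sketch of Eqs. (1)–(2), keeping the first and last m rows of Z as the most discriminative ones:

```python
import numpy as np

def csp_features(W, E, m=2):
    """Project one trial E (C x T) with spatial filter W (C x C), then
    return the 2m log-variance features of Eq. (2)."""
    Z = W @ E                        # Eq. (1): Z = WE
    Zj = np.vstack([Z[:m], Z[-m:]])  # rows for the largest/smallest eigenvalues
    v = Zj.var(axis=1)
    return np.log(v / v.sum())       # Eq. (2)
```

Because the variances are normalized before the log, exp(f) always sums to 1, which keeps the features comparable across trials and sub-bands.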

3.2 Mutual Information

One of the commonly used methods of feature selection is mutual information. This method selects an informative subset of an initial set of features by measuring the mutual dependence between random variables and the relevance of the various features [32]. The procedure is as follows:

1. A set of feature vectors F = [f_1, f_2, …, f_{9×2m}] from Eq. (2), an empty set of selected features S = ∅, and the true label of each trial are initialized.
2. The mutual information of each feature with the class label is computed.
3. The features are sorted in descending order of the mutual information computed in the previous step.
4. The first K features are selected from the sorted features.
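The ranking step can be sketched with a simple histogram-based estimate of the mutual information between each feature and the class label; the binning scheme below is our illustrative choice, not something specified in the paper.

```python
import numpy as np

def mutual_info(feature, labels, bins=8):
    """Histogram estimate of I(feature; label) in bits."""
    edges = np.histogram_bin_edges(feature, bins)[1:-1]  # interior bin edges
    f = np.digitize(feature, edges)
    mi = 0.0
    for fv in np.unique(f):
        for yv in np.unique(labels):
            p_fy = np.mean((f == fv) & (labels == yv))
            if p_fy > 0:
                p_f = np.mean(f == fv)
                p_y = np.mean(labels == yv)
                mi += p_fy * np.log2(p_fy / (p_f * p_y))
    return mi

def select_top_k(F, y, k):
    """Rank the columns of F (trials x features) by MI with y, keep the top K."""
    scores = np.array([mutual_info(F[:, j], y) for j in range(F.shape[1])])
    return np.argsort(scores)[::-1][:k]
```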

3.3 Convolution Neural Network

A convolution neural network is a class of neural networks that has proven very effective in image recognition and classification. The algorithm was first introduced by LeCun et al. [33] in the LeNet-5 architecture. The general CNN architecture has the following layers: an input layer, hidden layers and an output layer. The hidden layers consist of convolution layers, ReLU layers, pooling layers and fully connected layers. Convolution layers convolve the input to extract its features, which are then subsampled to a smaller size in the pooling layers. The goal of max pooling and average pooling, which function as sub-sampling layers, is to reduce the dimension of the data. The fully connected layer includes neurons connected to all previously obtained features and flattens them, and the output layer shows which class is identified. The CNN model structure is shown in Fig. 3. The features of each subject are classified by the following procedure.


Fig. 3 The CNN model structure for MI classification

1. The training and test data are uniformly resized to N × 1 × 1 matrices and are labeled according to the MI tasks.
2. The convolution-2D layer, which has kernel size 3 × 1 and 16 filters, is convolved with the input.
3. The max pooling-2D layer with kernel size 2 × 1 is used for down-sampling.
4. Two fully connected layers, consisting of 150 and 2 neurons respectively, are set. The flattened matrix goes through the last fully connected layer to indicate which class, the imagery of left or right hand movement, is represented.
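The layer sizes above can be checked with simple shape arithmetic for a valid convolution and non-overlapping pooling. N = 72 below is a hypothetical input length chosen for illustration; the paper does not state the value of N in this passage.

```python
def conv_out(n, kernel):
    """Output length of a 'valid' convolution with stride 1."""
    return n - kernel + 1

def pool_out(n, kernel):
    """Output length of non-overlapping pooling."""
    return n // kernel

N = 72                      # hypothetical N x 1 x 1 input
h = conv_out(N, 3)          # conv-2D, kernel 3 x 1, 16 filters -> 70 x 1 x 16
h = pool_out(h, 2)          # max pool-2D, kernel 2 x 1        -> 35 x 1 x 16
flat = h * 1 * 16           # flattened features               -> 560
# flat -> dense(150) -> dense(2): left-hand vs right-hand MI
```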

4 Results and Discussion

In our study, the experiments are performed on the Matlab 2017a platform on a desktop PC with an Intel Core i7 (up to 3 GHz) and 8 GB RAM. In all experiments, the model learns on the training data in order to predict the test data. Features are randomly split between training and testing, and 100 runs are performed for each classifier. The average accuracy of the MI classifiers is computed over all subjects. Table 1 shows the results obtained from the four classifiers: SVM, LDA, NB and CNN. As noted in the table, the CNN classifier yields the highest average accuracy (more than 99%) relative to the other methods, while the average SVM, NB and LDA classification accuracies over all subjects were 98.42%, 97.75% and 96.82%, respectively. In addition, the training and test data of all subjects were combined separately for each class. The very distinctive features of the two classes in the training phase are shown in Fig. 4. As the figure indicates, the proposed method, consisting of feature extraction with the FBCSP algorithm and feature selection with

Using Convolution Neural Networks Pattern for Classification …

Table 1 Accuracy (%) of classifiers obtained from 9 healthy subjects

Subject        SVM     LDA     NB      CNN
Subject1       97.22   96.30   97.68   100
Subject2       98.61   94.90   96.75   99.50
Subject3       98.60   96.29   97.22   100
Subject4       98.98   96.46   97.47   100
Subject5       98.49   97.98   98.98   99.50
Subject6       99.44   98.33   98.89   99.00
Subject7       99.49   95.45   98.98   100
Subject8       97.47   96.96   96.96   100
Subject9       97.53   98.76   96.91   100
Average        98.42   96.82   97.75   99.77
All subjects   99.74   99.40   99.59   100

Fig. 4 The diagram of the feature vectors for the training data of all subjects in two classes

the mutual information method has been effective and has been able to obtain distinctive features. These informative features have led to an increase in classification performance. As indicated in Table 1, the classification accuracies for all subjects exceeded 99% with the CNN classifier, which yielded the best performance of 100%. In Fig. 5, two curves, the accuracy over all subjects and the average accuracy, are compared with each other. It is observed that the first curve performs better than the second.


Fig. 5 Comparison of the two graphs (average accuracy and the accuracy of all subjects)

5 Conclusion and Future Direction

This paper proposes a CNN classification algorithm to discriminate the imagery of left- and right-hand movement. The discriminative features are obtained by the FBCSP method, and mutual information is used for feature selection. These features are transformed into images and fed into the CNN classifier. The results show that the method using the CNN classifier achieves better results compared to the other, traditional classifiers. In future work, we want to focus on the classification of multi-class motor imagery tasks and on the study of other hand movements, which can support better and more precise control of external prostheses.

Acknowledgements The authors appreciate those who contributed to make this research successful. This research is supported by the Center for Research and Innovation (PPPI) and the Faculty of Engineering, Universiti Malaysia Sabah (UMS) under Research Grant SBK0393-2018.

References 1. Cincotti F, Pichiorri F, Aricò P, Aloise F, Leotta F, de Vico Fallani F, Millán JDR, Molinari M, Mattia D (2012) EEG-based Brain-Computer Interface to support post-stroke motor rehabilitation of the upper limb. In: 2012 annual international conference of the IEEE engineering in medicine and biology society, IEEE EMBS San Diego, California. IEEE, pp 4112–4115 2. Bandara DSV, Arata J, Kiguchi K (2018) A noninvasive brain–computer interface approach for predicting motion intention of activities of daily living tasks for an upper-limb wearable robot. Int J Adv Rob Syst 15(2):1–10

Using Convolution Neural Networks Pattern for Classification …

691

3. Kang BK, Kim JS, Ryun S, Chung CK (2018) Prediction of movement intention using connectivity within motor-related network: an electrocorticography study. PLoS ONE 13 (1):1–14 4. He S, Zhang R, Wang Q, Chen Y, Yang T, Feng Z, Zhang Y, Shao M, Li Y (2017) A P300-based threshold-free brain switch and its application in wheelchair control. IEEE Trans Neural Syst Rehabil Eng 25(6):715–725 5. Kreilinger A, Hiebel H, Müller-Putz GR (2016) Single versus multiple events error potential detection in a BCI-controlled car game with continuous and discrete feedback. IEEE Trans Biomed Eng 63(3):519–529 6. Alyasseri ZAA, Khader AT, Al-Betar MA, Papa JP, Ahmad Alomari O (2018) EEG-based person authentication using multi-objective flower pollination algorithm. In: 2018 IEEE congress on evolutionary computation (CEC). IEEE, pp 1–8 7. Daly JJ, Wolpaw JR (2008) Brain–computer interfaces in neurological rehabilitation. Lancet Neurol 7(11):1032–1043 8. Obeidat QT, Campbell TA, Kong J (2015) A P300 brain–computer interface for spelling written words. IEEE Trans Hum Mach Syst 45(6):727–738 9. Martínez-Cagigal V, Gomez-Pilar J, Álvarez D, Hornero R (2016) An asynchronous P300-based brain-computer interface web browser for severely disabled people. IEEE Trans Neural Syst Rehabil Eng 25(8):1332–1342 10. Bi L, Fan XA, Jie K, Teng T, Ding H, Liu Y (2014) Using a head-up display-based steady-state visually evoked potential brain–computer interface to control a simulated vehicle. IEEE Trans Intell Transp Syst 15(3):959–966 11. Bhattacharyya S, Konar A, Tibarewala DN (2017) Motor imagery and error related potential induced position control of a robotic arm. IEEE/CAA J Automatica Sinica 4(4):639–650 12. Lin C, Wang BH, Jiang N, Xu R, Mrachacz-Kersting N, Farina D (2016) Discriminative manifold learning based detection of movement-related cortical potentials. IEEE Trans Neural Syst Rehabil Eng 24(9):921–927 13. 
Pfurtscheller G, Neuper C, Flotzinger D, Pregenzer M (1997) EEG-based discrimination between imagination of right and left hand movement. Electroencephalogr Clin Neurophysiol 103(6):642–651 14. Graimann B, Allison B, Pfurtscheller G (2009) Brain-computer interfaces: a gentle introduction. In: Brain-computer interfaces, Heidelberg 15. Mishuhina V, Jiang X (2018) Feature weighting and regularization of common spatial patterns in EEG-based motor imagery BCI. IEEE Signal Process Lett 25(6):783–787 16. Park SH, Lee D, Lee SG (2017) Filter bank regularized common spatial pattern ensemble for small sample motor imagery classification. IEEE Trans Neural Syst Rehabil Eng 26(2):498–505 17. Hernández-González CE, Ramírez-Cortés JM, Gómez-Gil P, Rangel-Magdaleno J, Peregrina-Barreto H, Cruz-Vega I (2017) EEG motor imagery signals classification using maximum overlap wavelet transform and support vector machine. In: 2017 IEEE international autumn meeting on power, electronics and computing (ROPEC), Ixtapa. IEEE, pp 1–5 18. Robinson N, Vinod AP, Guan C, Ang KK, Peng TK (2011) A Wavelet-CSP method to classify hand movement directions in EEG based BCI system. In: 2011 8th international conference on information, communications & signal processing, Singapore. IEEE, pp 1–5 19. Bhattacharyya S, Hossain MA, Konar A, Tibarewala DN, Ramadoss J (2014) Detection of fast and slow hand movements from motor imagery EEG signals. In: Advanced computing, networking and informatics-volume 1. Springer, Cham, pp 645–652 20. Mikolov T, Deoras A, Kombrink S, Burget L, Černocký J (2011) Empirical evaluation and combination of advanced language modeling techniques. In: Twelfth annual conference of the international speech communication association. INTERSPEECH. ISCA, Florence, pp 605–608 21. Rezazadeh Sereshkeh A, Trott R, Bricout A, Chau T (2017) EEG classification of covert speech using regularized neural networks. IEEE/ACM Trans Audio Speech Lang Process (TASLP) 25(12):2292–2300


22. Dobhal T, Shitole V, Thomas G, Navada G (2015) Human activity recognition using binary motion image and deep learning. Proc Comput Sci 58:178–185 23. Alyasseri ZAA, Khadeer AT, Al-Betar MA, Abasi A, Makhadmeh S, Ali NS (2019) The effects of EEG feature extraction using multi-wavelet decomposition for mental tasks classification. In: Proceedings of the international conference on information and communication technology, pp 139–146 24. Alyasseri ZAA, Khader AT, Al-Betar MA, Papa JP, Alomari OA, Makhadmeh SN (2018) Classification of EEG mental tasks using multi-objective flower pollination algorithm for person identification. Int J Integr Eng 10(7):102–116 25. Tang Z, Li C, Sun S (2017) Single-trial EEG classification of motor imagery using deep convolutional neural networks. Optik-Int J Light Electron Opt 130:11–18 26. Xu B, Zhang L, Song A, Wu C, Li W, Zhang D, Xu G, Li H, Zeng H (2018) Wavelet transform time-frequency image and convolutional network-based motor imagery EEG classification. IEEE Access 7:6084–6093 27. Cecotti H, Graser A (2010) Convolutional neural networks for P300 detection with application to brain-computer interfaces. IEEE Trans Pattern Anal Mach Intell 33(3):433–445 28. Kumar S, Sharma A, Mamun K, Tsunoda T (2016) A deep learning approach for motor imagery EEG signal classification. In: 2016 3rd Asia-Pacific world congress on computer science and engineering (APWC on CSE), Nadi. IEEE, pp 34–39 29. Sakhavi S, Guan C, Yan S (2018) Learning temporal information for brain-computer interface using convolutional neural networks. IEEE Trans Neural Netw Learn Syst 29(11):5619–5629 30. Tangermann M, Müller KR, Aertsen A, Birbaumer N, Braun C, Brunner C, Leeb R, Mehring C, Miller KJ, Müller-Putz G, Nolte G (2012) Review of the BCI competition IV. Front Neurosci 6:55 31. Ramoser H, Muller-Gerking J, Pfurtscheller G (2000) Optimal spatial filtering of single trial EEG during imagined hand movement. IEEE Trans Rehabil Eng 8(4):441–446 32.
Battiti R (1994) Using mutual information for selecting features in supervised neural net learning. IEEE Trans Neural Networks 5(4):537–550 33. LeCun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278–2324

Metasurface with Wide-Angle Reception for Electromagnetic Energy Harvesting Abdulrahman A. G. Amer, Syarfa Zahirah Sapuan, Nasimuddin, and Nabiah Binti Zinal

Abstract A wide-angle and polarization-insensitive metasurface, instead of a traditional antenna, is built as the primary ambient energy harvester. The proposed metasurface harvester can receive electromagnetic (EM) energy from wide angles, and the received, rectified power can be combined through DC combining for the EM harvesting system. The reflection coefficient, power distribution, EM harvesting efficiency, and absorption efficiency at normal and oblique incidence are studied and presented. For a single unit cell with periodic boundary conditions, simulation results show that the absorption and harvesting efficiencies are more than 98% and 96%, respectively, under normal incidence at an operating frequency of 2.4 GHz. The results also show that at normal incidence (0°) the maximum absorption efficiency is more than 98%, and more than 80% reception efficiency is achieved at incidence angles of ±60°.

Keywords Metasurface · Electromagnetic energy harvesting · Harvesting efficiency · Wide-angle

A. A. G. Amer · S. Z. Sapuan (✉) Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, 86400 Parit Raja, Batu Pahat, Johor, Malaysia e-mail: [email protected] Nasimuddin Institute for Infocomm Research, A-STAR, Singapore, Singapore N. B. Zinal Centre for Diploma Studies, Universiti Tun Hussein Onn Malaysia, Parit Raja, Batu Pahat 86400, Johor, Malaysia © Springer Nature Singapore Pte Ltd. 2021 Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_49

1 Introduction

Electromagnetic waves fill the human surroundings, owing to the quick advances in wireless communication systems. The wireless power transfer (WPT) concept was demonstrated in the early years of the 20th century by Nikola Tesla. The far-field

693

694

A. A. G. Amer et al.

WPT system operating at radio frequency (RF) was built in the 1960s by William Brown, who was the first to build such a system [1]. Owing to advanced developments in wireless technologies, energy harvesting has attracted significant attention in recent years. Energy harvesting, or energy scavenging, is defined as capturing energy from surrounding environmental power sources, such as electromagnetic waves, vibration and heat, and storing it for later use. Rectenna systems are usually used in wireless power transfer (WPT) to collect electromagnetic energy from the ambient environment and transform it into a DC current to replace battery usage. A rectenna system usually comprises three sections: an antenna, a rectifier circuit, and a load. The antenna is the key part and is used to receive the incident waves. Conversion efficiency is a very important parameter that determines the performance of the harvesting mechanism and depends strongly on the conversion medium. In practice, high-efficiency collectors need to be used to obtain high conversion efficiency. Conventional antennas are used in rectenna systems as collectors in single or array configurations. When used in an array, the distance between two neighboring elements is typically λ/2 to avoid destructive mutual coupling between array elements [2–4]. Providing power to stand-alone electronic devices is one of the important applications of electromagnetic energy harvesting and requires highly efficient collectors to capture enough power from the ambient environment. Therefore, metamaterial cells have been used as electromagnetic collectors. Metamaterials are engineered surfaces based on arrays of small metal unit cells, designed to display unique properties not easily found in nature, such as tailored permittivity (ε) and permeability (µ) to control the electromagnetic field [5]. Metasurfaces are the 2-D counterparts of 3-D metamaterials.
Unlike absorbers [6–8], metasurface harvesters can capture power from the surrounding environment and drive it to a load for collection. Split-ring resonators (SRR) [9, 10] and complementary split-ring resonators (C-SRR) [11, 12] are the resonators commonly used in energy harvesting applications. A conventional harvesting antenna cannot maintain high absorption efficiency for arbitrary directions and polarizations, because the incident angles and polarization of ambient EM waves are random and unknown. In this work, a wide-angle metasurface-based EM harvester is demonstrated for the Wi-Fi band (2.4 GHz) to capture ambient microwave energy over a wide area. The suggested metasurface unit cell includes a cross-slot patch with a probe to maximize the energy absorption and drive it to the resistive load through a via interconnect with near-unity absorption. CST Microwave Studio was used to optimize the structure for the metasurface analysis.


Fig. 1 Illustration of the proposed metasurface unit cell harvester: (a) top view and (b) back view

2 Methodology

Figure 1 shows the topology of the suggested metasurface unit cell, which comprises a cross-slot patch with a probe/via. The power collected on the cell is delivered to the resistive load through a probe/via interconnect passing through a substrate layer. The via interconnect is laid at a distance of 5.0 mm from the center and connected to the ground through a resistive load. The resistive load can be replaced by a rectification circuit whose input impedance matches that of the structure at the operating frequency. The load resistance was swept, and the optimal value at which maximum power transfer occurs is 80 Ω. To reduce the dielectric loss that can deteriorate the efficiency of the harvester, a Rogers TMM10i substrate with εr = 9.8, loss tangent tan δ = 0.002 and thickness 3.81 mm was used. In addition, the top and bottom layers, i.e. the unit cell and the ground, were copper. The unit cell is designed at 2.4 GHz (ISM band) and its design dimensions are shown in Table 1. The electromagnetic waves are normally incident on the structure with the electric field vector parallel to the cross-slot patch. The effect of the incidence angle on the absorption efficiency is investigated by changing the direction of the incident waves onto the metasurface structure.

Table 1 The parameters of the metasurface unit cell

Parameter                                            Dimension (mm)
Periodicity of the cell, P                           13.7
Length of a slot, L                                  7.8
Width of a slot, W                                   1.5
Distance between the edge and contact surface, S     0.4
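The 80 Ω optimum found by sweeping the load follows the maximum-power-transfer condition (load resistance equal to the source resistance). A sketch, with a hypothetical 1 V source and an 80 Ω source resistance standing in for the cell's equivalent circuit:

```python
# Hedged sketch: sweep a resistive load; delivered power peaks at r_load == r_src.
def load_power(r_load, v_src=1.0, r_src=80.0):
    """Time-average power delivered to r_load from a source with internal resistance r_src."""
    i = v_src / (r_src + r_load)   # series-circuit current
    return i * i * r_load          # P = I^2 * R_load

# Sweep candidate load values, mimicking the resistance sweep described above.
best = max(range(10, 200, 10), key=load_power)
```

With these assumed values the sweep picks out 80 Ω, matching the paper's reported optimum; the actual source impedance of the metasurface cell would come from the full-wave simulation.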

696

A. A. G. Amer et al.

Fig. 2 Boundary conditions setup of CST Microwave Studio

3 Results and Discussion

CST Microwave Studio was used to perform the numerical simulation of the proposed metasurface design. Periodic boundary conditions were applied along the x-y axes, and the structure was excited by Floquet ports such that the incident waves propagate along the z-axis, as seen in Fig. 2. The periodic boundary conditions numerically model an infinite array of metasurface unit cells and allow the effect of the incident-angle change on the absorption efficiency to be investigated. The Floquet excitation port was set with the two modes of TE and TM polarization on the top boundary to simulate the incident wave. Equation (1) describes the absorption efficiency:

A(ω) = 1 − |S11|² − |S21|²    (1)

where S11 is the reflection coefficient and S21 is the transmission coefficient. The transmitted power (S21) is almost zero, because the bottom layer of the resonator is copper, which means that the absorption can be defined as

A(ω) = 1 − |S11|²    (2)

Therefore, the reflection coefficient needs to be minimized to maximize the absorption efficiency. The harvesting efficiency can be calculated by

η = Pload / Pincident    (3)

where Pload is the total time-average power consumed in the load, and Pincident is the total time-average power incident on the cell.
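Equations (1)–(3) can be checked numerically; a small sketch (the S-parameter values in dB are hypothetical examples, not simulation results from the paper):

```python
# Hedged sketch of Eqs. (1)-(3). S-parameters in dB use |S|^2 = 10^(dB/10),
# since S-parameter dB values are 20*log10(|S|).
def absorption(s11_db, s21_db=None):
    """A = 1 - |S11|^2 - |S21|^2 (Eq. 1); with a copper ground plane S21 ~ 0 (Eq. 2)."""
    a = 1.0 - 10.0 ** (s11_db / 10.0)
    if s21_db is not None:
        a -= 10.0 ** (s21_db / 10.0)
    return a

def harvesting_efficiency(p_load, p_incident):
    """Eq. (3): eta = P_load / P_incident (time-average powers)."""
    return p_load / p_incident
```

For instance, a reflection coefficient of −20 dB with negligible transmission gives an absorption of 0.99, illustrating why minimizing S11 maximizes the absorption efficiency.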

Fig. 3 Reflection coefficient of the proposed metasurface unit cell

Fig. 4 The absorption, reflection and transmission coefficients at normal incidence

Figure 3 shows the reflection coefficient S11 of the meta-harvester at θ = 0° (normal incidence). The absorption coefficient of the meta-harvester under normal incidence is over 98%, as depicted in Fig. 4. The meta-harvester can capture and absorb almost all of the power incident on its surface, because the impedance of the harvester is matched to the free-space impedance at the resonance frequency. The power losses in the load, the Rogers substrate and the metal were evaluated using full-wave simulation to analyze the power dissipation within the unit cell. Figure 5 shows that 98% of the power is dissipated in the resistive load, while 2% is consumed in the substrate and copper together. For the meta-harvester, most of the power dissipated within the unit cell goes to the load, which is the significant difference from a meta-absorber, where most of the power is dissipated in the substrate. Figure 6 shows that the harvesting efficiency is 96% at the resonance frequency of 2.4 GHz under normal incidence.


Fig. 5 Power conversion efficiency at normal incidence

Fig. 6 Harvesting efficiency under normal incidence

Finally, the metasurface unit cell was simulated at different oblique incidence angles to observe the effect of the incidence-angle change on the absorption efficiency. The angle between the electric field (E-field) and the x-axis, as shown in Fig. 1a, is varied from 0 to 60°. The harvesting efficiency of the metasurface energy harvester at different oblique incidence angles is shown in Fig. 7. It can be seen that the maximum absorption efficiency is more than 98% at an incidence angle of 0° and decreases gradually as the incidence angle is increased. Energy can therefore be harvested over a wide area, as more than 80% absorption efficiency is achieved for wave incidence angles up to ±60°.

Fig. 7 Absorption efficiency under various incidence angles

4 Conclusion

An EM energy collector based on a metasurface structure has been presented. It demonstrates strong absorption of electromagnetic waves in the microwave regime. The proposed collector has a significant harvesting efficiency for polarization angles from 0 to 60°. At an incidence angle of 0°, a maximum absorption efficiency of more than 98% was observed. The proposed wide-angle metasurface-based EM harvester is useful for efficient energy harvesting systems.

Acknowledgements The authors would like to acknowledge Universiti Tun Hussein Onn Malaysia (UTHM) for funding this research under TIER 1 research grant H150.

References 1. Brown WC (1984) The history of power transmission by radio waves. IEEE Trans Microw Theory Tech 32:1230–1242 2. Sharma T, Saini G (2016) Microstrip antenna array for RF energy harvesting system 5:145– 149 3. Heikkinen J, Salonen P, Kivikoski M (2000) Planar rectennas for 2.45 GHz wireless power transfer. In: IEEE radio and wireless conference. Colorado, pp 63–66 4. Sun H, Geyi W (2017) A new rectenna using beamwidth-enhanced antenna array for RF power harvesting applications. IEEE Antennas Wirel Propag Lett 16:1451–1454 5. Holloway CL et al (2012) An overview of the theory and applications of metasurfaces: the two-dimensional equivalents of metamaterials. IEEE Antennas Propag Mag 54:10–35 6. Sood D, Tripathi CC (2016) A wideband wide-angle ultrathin low profile metamaterial microwave absorber. Microw Opt Technol Lett 58:1131–1135 7. Ramya S, Srinivasa Rao I (2017) A compact ultra-thin ultra-wideband microwave metamaterial absorber. Microw Opt Technol Lett 59:1837–1845 8. Bağmancı M et al (2019) Polarization independent broadband metamaterial absorber for microwave applications. Int J RF Microw Comput Eng 29:1–10


9. Almoneef T, Ramahi OM (2014) A 3-dimensional stacked metamaterial arrays for electromagnetic energy harvesting. Prog Electromagn Res 146:109–115 10. Ramahi OM, Almoneef TS, Alshareef M, Boybay MS (2012) Metamaterial particles for electromagnetic energy harvesting. Appl Phys Lett 101:173903 11. Alavikia B, Almoneef TS, Ramahi OM (2015) Complementary split ring resonator arrays for electromagnetic energy harvesting. Appl Phys Lett 107:033902 12. Alavikia B, Almoneef TS, Ramahi OM (2014) Electromagnetic energy harvesting using complementary split-ring resonators. Appl Phys Lett 104:163903

Integrated Soil Monitoring System for Internet of Thing (IOT) Applications Xin Yi Lau, Chun Heng Soo, Yusmeeraz Yusof, and Suhaila Isaak

Abstract Spectroscopy is widely used in various fields, including agriculture, to determine soil contamination in order to produce good-quality food and to avoid the excessive use of fertilizer, thereby minimizing the impact on the environment. However, commercial and common methods of soil spectroscopy have limitations such as bulky size, high cost and non-real-time operation. In this study, high-speed electronic data acquisition via machine learning on an FPGA is implemented to efficiently monitor the macronutrient levels in soil, which offers economic benefits. Our focus is particularly on recognizing the exact photon level absorbed by soil, by applying photon-count processing techniques to monitor the macronutrients in soil samples. The hardware architecture on the FPGA features a 16-bit Kogge-Stone adder to process the input signals from the sensing module, an LED light control system, a time-frame setting system and data synchronization via the cloud for Internet of Things (IoT) applications. The proposed photon counting system has been demonstrated using visible-range wavelengths of 630, 550, and 470 nm. The input photon signal can be varied from 0 to 200 kHz, and a frame time period of 10 ms produces the optimum counting result, with a percentage variation from 0% to a maximum of 15% compared to the actual count of the signal generated by the function generator. Apart from that, a real-time system for IoT application has been successfully tested.

Keywords Spectroscopy · Soil monitoring · Photon counting · FPGA · IoT

X. Y. Lau  C. H. Soo  Y. Yusof  S. Isaak (&) Department of Electronics, School of Electrical Engineering, Faculty of Engineering, Universiti Teknologi Malaysia, 81310 Johor Bahru, Malaysia e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_50

701

702

X. Y. Lau et al.

1 Introduction

Monitoring soil nutrients in orchards or farmland is strongly required for quality and process control of agricultural produce and for the control of soil fertility. Soil needs essential elements for plant growth, which are known as soil macronutrients: nitrogen (N), phosphorus (P) and potassium (K). N, P, and K fertilizer should be applied based on the optimum requirement at each location. Excessive use of these fertilizers can lead to groundwater contamination. Therefore, monitoring the soil nutrient level is very important, not only for effective production but also to avoid groundwater pollution by nitrate [1, 2]. To keep the soil in good condition and to control the amount of fertilizer, farmers should regularly monitor the nutrient content of the soil in their farms. A method using color-developing chemicals for soil nutrients is also useful and commonly used by farmers; the chemical reagents are commercially available as soil analyzers. Solutions of nutrients extracted from a soil sample, whose color is developed by chemical reagents, are estimated by subjective judgment against color charts for the nutrients. Thus, the measured soil nutrient content fluctuates with the type of crop production, and it is difficult to achieve a quantitative analysis. To execute a precise measurement, a spectrophotometer can be applied to investigate the color developed in the solutions. A spectrometer can be used to identify molecules of elements because each molecule vibrates at its characteristic frequency. Spectrophotometry involves the use of a spectrophotometer, a device that measures the intensity of light as a function of the light wavelength [3, 4].
Absorbance spectroscopy is a technique to measure the amount of light absorbed by a sample at a given wavelength. Transmission spectroscopy can be used for samples in all states (solid, liquid and gas), as the light that has passed through the sample is compared to light that has not, while reflectance spectroscopy is the study of light, as a function of wavelength, that has been reflected or scattered from a solid, liquid or gas [3]. According to the Beer-Lambert law, illustrated in Fig. 1, there is a linear relationship between the absorbance and the concentration of the sample. Lu et al. state that the Beer-Lambert law is a logarithmic relationship between the radiant power and the concentration of a target compound or particle within the path length from source to detector [4].

Fig. 1 Basic concept of light absorption [3]


The proportion of light absorbed or transmitted is independent of the intensity of the light source. In addition, the absorbance is directly proportional to the concentration of the absorbing material in the sample. The transmitted light and the absorbance are related as defined in Eq. (1):

A = −log(I / I0) = ε C L    (1)

where I0 = light intensity of the light source, I = light transmitted after passing through the material sample, ε = molar absorptivity (L mol−1 cm−1), C = molar concentration (mol L−1) and L = path length in cm.

Soil is an important substance on the earth that assists the growth of plants, providing food production for humans. To increase the production and quality of food, farmers may apply an excessive amount of fertilizer to the soil, which may result in soil contamination. The excess fertilizer not only increases the cost of production but also has a huge impact on the environment. Thus, a soil-monitoring device is needed in agriculture to determine macronutrient contents in the soil such as nitrogen and phosphorus, and there is accordingly a large demand for soil spectrophotometers to help provide healthy food. Optical spectroscopy has good merit for realizing a low-cost and smart tool for soil nutrient monitoring [5, 6]. New-generation digital sensors are smart enough to replace chemical lab testing in real time, with minimum effort and with almost precise results. With the help of a portable remote data acquisition system coupled with the sensor, researchers can collect results from a wide range of locations. The Internet of Things (IoT), the ability of everyday objects to send and receive data, will revolutionize how we do everything from agriculture to communication. Agriculture stands to benefit greatly from integrating this technology into simple electronics: IBM estimates that IoT will enable farmers to increase food production by 70% by the year 2050 [7]. In addition to better pest management and weather forecasting, IoT could save up to 50 billion gallons of water annually, as sensors can help farmers better optimize water usage. Being able to better optimize crop management will have a transformative effect on agriculture in the following years.
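The Beer-Lambert relation of Eq. (1) and its inversion for concentration can be sketched numerically (all values here are hypothetical examples, not measurements from this work):

```python
# Hedged sketch of Eq. (1): A = -log10(I / I0) = epsilon * C * L.
import math

def absorbance(i_transmitted, i_source):
    """Absorbance from transmitted intensity I and source intensity I0."""
    return -math.log10(i_transmitted / i_source)

def concentration(a, epsilon, path_cm):
    """Invert A = epsilon * C * L for the molar concentration C (mol/L)."""
    return a / (epsilon * path_cm)
```

For example, if only 1% of the source light is transmitted, the absorbance is 2; with a hypothetical molar absorptivity of 100 L mol−1 cm−1 and a 1 cm path, that corresponds to a 0.02 mol/L concentration.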
Therefore, IoT solutions with integrated affordable sensors that monitor soil humidity/moisture levels and soil/air temperatures automatically would help farmers know what is best for their crops without having to estimate manually or make an educated guess. Photon counting is a method of counting single photons using a single-photon detector device. The number of photons is accumulated over a fixed period and used to determine the strength of a slowly changing optical signal [8]. Single photon counting in the near-infrared and visible light spectrum has become one of the most popular processes for many applications [9]. In addition, the photon counting method is fast and accurate in terms of signal to noise and results in a high signal-to-noise ratio [10]. Thus, counting single photon absorption events can perform better than measuring an optical intensity or power in a spectrophotometer. An Avalanche


Photodiode (APD) operated in reverse bias and Geiger mode is a promising single-photon detector and is able to detect a photon even in low-light conditions [11]. This paper reports the integration of an inexpensive mobile method for monitoring soil macronutrients and fertilizer usage with an IoT system. The integrated system adapts the previous work of [12] on soil spectroscopy for macronutrient detection. The proposed system aims to realize a portable handheld device for soil testing and for uploading results over IoT. The data acquisition part is implemented on a field programmable gate array (FPGA) based spectrophotometer, which detects the signal from the APD; the counted photons trapped in the soil sample are sent to a mobile application over wireless sensor network (WSN) communication. This project requires high-speed data acquisition to avoid losing photon counts. The FPGA is a promising device due to its high-speed and high-channel-density features [13]. In addition, the advantages of using an FPGA are the ability to reprogram the digital circuit and the powerful supporting software, Quartus II. Therefore, the same device can be used for multiple tasks by changing the circuitry in the FPGA.

2 Integrated Circuit Design

The block diagram of the integrated circuit design is shown in Fig. 2. The design comprises five modules: soil testing, photon signal detection, amplification and filtering, data processing, and data synchronization via the cloud for the Internet of Things (IoT) application. The main system is built on an FPGA for complex computations, and the measured intensity level is processed mostly on the FPGA. The FPGA-based spectrophotometer integrates front-end and back-end modules with real-time monitoring. The front end consists of the LED light control system, which activates the LED light and the time-frame signal when input is permitted. The light source illuminates the soil sample in its container; the sample absorbs photons, and the transmitted photons enter the photodiode. The resulting small current is converted and amplified by a transimpedance amplifier (TIA) circuit. The back-end module is the data acquisition system on the FPGA, which implements signal conditioning, discrimination, analysis, and counting. A Kogge-Stone adder is used in the photon counting circuit to prevent losing counts during high-frequency signal detection. The third part of the system is the real-time data monitoring system: the data are uploaded to the cloud via a NodeMCU, and the user can download the developed app to perform real-time monitoring. The sensing module detects the photon signal due to the concentration level in soil at wavelengths of 630, 550, and 470 nm. A photodetector, the APD, detects the very low light transmitted through the soil and converts the photons into a very small current signal. The current signal is converted to a voltage signal by a passively quenched circuit, and the small voltage is then amplified using the

Integrated Soil Monitoring System for Internet of Thing ...


Fig. 2 The block diagram of FPGA based spectrophotometer

TIA. The gain of the TIA is defined as the transimpedance gain, R_T, given by [14]:

R_T = ∂V_out / ∂I_in    (2)

where ∂V_out is the output voltage in volts (V) and ∂I_in is the input current in amperes (A). In the hardware setup, the transimpedance gain design is adapted from designs proposed by other researchers [15, 16]. R_T is selected based on the specification sheet of the APD. The analog voltage at the TIA output is converted to a digital signal by an analog-to-digital converter (ADC) circuit, and the digitized signal is then discriminated by a pulse discriminator for counting in the FPGA data acquisition system. The data acquisition system on the FPGA consists of several modules, including the LED control system, which controls the LED on and off times during the macronutrient level test. When the user starts the macronutrient test, count mode is activated to start the counting module on the FPGA. The user may choose various frame times for sensitivity analysis. The accumulation of the counted photon signal is performed with the Kogge-Stone adder (KSA) algorithm to meet the need for fast counting. The KSA generates its carries quickly because its delay scales logarithmically, as log2(N), where N is the number of bits along the carry path; for example, a 16-bit KSA requires 4 carry stages, whereas a 16-bit ripple-carry adder (RCA) requires 16 [17, 18]. The total photon count and the macronutrient concentration level in soil are displayed on the 7-segment LEDs of the DE2 FPGA board after data acquisition completes. The macronutrient concentration module is developed to compare the counted number of photons in soil with light illumination and without


illumination. The comparison is presented as a percentage value on the 7-segment LEDs to quantify the useful signal over the dark-count signal. For the IoT application, the data from the data acquisition system are uploaded to the NodeMCU for real-time data analysis. The NodeMCU connects to Wi-Fi and uploads the data from the DE2 FPGA board to a Firebase database. Firebase is chosen as the database for this work and also acts as the cloud store for the data from the NodeMCU. Finally, the user can download the developed application and perform real-time monitoring.
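The NodeMCU-to-Firebase path amounts to serializing a reading and writing it to a database node over REST. The sketch below only builds the JSON record; the database URL, node path, and field names are illustrative assumptions, not the authors' schema.

```python
import json

def build_payload(wavelength_nm: int, photon_count: int, frame_ms: int) -> str:
    """Serialize one spectrophotometer reading as a JSON record.

    The field names are hypothetical -- the paper does not specify
    the Firebase schema."""
    return json.dumps(
        {
            "wavelength_nm": wavelength_nm,
            "photon_count": photon_count,
            "frame_ms": frame_ms,
        },
        sort_keys=True,
    )

# Uploading would then be a single REST call against the Realtime
# Database, e.g. (placeholder URL, not executed here):
#   requests.put("https://<project>.firebaseio.com/soil/latest.json",
#                data=build_payload(630, 51234, 1))
```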

3 Experimental Results and Discussions

The implementation of this project is divided into two parts, the front-end and back-end modules. The front-end module consists of the passively quenched APD, the ADC and the TIA. The back-end module involves only the Verilog programming for processing the digitized photon count from the front-end module. The overall back-end process is illustrated in Fig. 3. The flowchart summarizes the development of the integrated system and the algorithm of the proposed data acquisition for the macronutrient monitoring task.

3.1 Performance of Data Acquisition System

Each pulse wave represents a single photon detected by the photodetector and converted into an electrical signal. The light absorbed in soil at various macronutrient concentrations is converted to a voltage signal by the TIA. The signal at the TIA output is filtered at a threshold voltage of 1.4 V by the pulse discriminator module; the counter circuit recognizes only input signals between 1.4 and 3.3 V. The comparator output acts as the clock of the counter. The time-frame enable signal becomes active only when an input signal triggers the buffer of the data acquisition system; once it is activated, the counter starts counting. The accumulated photon count for a single 1-ms time frame is sent to the 7-segment LEDs, as shown in Fig. 4. The counting variation between the actual count and the experimental count is 10–15%, caused by delays in the FPGA data processing module. The photon counting circuit comprises 16-bit buffers that sample the incoming data from the SPAD sensor, a 16-bit Kogge-Stone adder that works as a stabilizer for the incoming signal from the buffer, a 16-bit parallel-in serial-out (PISO) shift register that serializes the adder output, and a clock divider that provides the divided clock inputs required by the buffers and the PISO. All components of the counting circuit are fully implemented in Verilog HDL. The design is synthesized onto the Cyclone II FPGA


Fig. 3 The flowchart of data acquisition system on FPGA board

and simulated using Altera Quartus II software to verify its functionality and analyse its performance characteristics. The developed Kogge-Stone adder has a counting limit of 2 GHz, which affects the counting percentage. The Kogge-Stone adder circuit achieved a delay of 12.283 ns and was successfully integrated with the remaining components of the data acquisition circuit to complete the counting circuit design before synthesis. Synthesis reported that the designed circuit achieves an operating frequency of 420 MHz and an average power consumption of 38 mW. The implemented counting circuit was therefore simulated with an input clock frequency of 400 MHz to verify its functionality.
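The discriminate-then-count behaviour described in this section can be modelled in software. The sketch below assumes a list of sampled TIA output voltages (an assumption for illustration; in the actual design the comparator output clocks a hardware counter): a pulse is counted when the waveform rises into the valid 1.4–3.3 V band, and the counter re-arms once the signal falls back below threshold.

```python
def count_pulses(samples, v_low=1.4, v_high=3.3):
    """Count rising excursions into the [v_low, v_high] band,
    mimicking the pulse discriminator that clocks the counter."""
    count = 0
    armed = True            # ready to register the next rising edge
    for v in samples:
        if armed and v_low <= v <= v_high:
            count += 1
            armed = False   # ignore the rest of this pulse
        elif v < v_low:
            armed = True    # waveform dropped below threshold: re-arm
    return count

# Three pulses cross into the valid band in this sampled trace.
print(count_pulses([0.1, 2.0, 2.0, 0.2, 1.8, 0.0, 3.0]))  # -> 3
```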


Fig. 4 The accumulated photon count for a single time frame is displayed on 7-segment LED

3.2 Kogge-Stone Adder Implementation Overview

The functional block diagram of the Kogge-Stone adder is shown in Fig. 5. The flow from input to output of the Kogge-Stone adder circuit can be separated into three main stages. The first is the pre-processing stage, represented by the square blocks, which determines whether each pair of input bits, when added together, will generate or propagate a carry. The carry-generation stage is represented by the circular and triangular blocks; this stage is what allows the Kogge-Stone adder circuit to achieve its speed advantage over other adder circuits. Finally, the diamond-shaped blocks represent the post-processing stage, which produces the final output of the Kogge-Stone adder circuit. A 16-bit Kogge-Stone adder was successfully implemented using structural dataflow modelling, and simulation verified its functionality. The RTL code was synthesized for the Cyclone II FPGA family using the Altera Quartus II software. The synthesis analysis shows that the Kogge-Stone implementation uses a total of 89 logic elements and has an average fan-out of 2.47. After successful synthesis of the Kogge-Stone adder circuit, RTL


Fig. 5 16-bit Kogge-Stone adder implementation functional block diagram

and gate-level simulations were performed to verify the circuit functionality. For the simulation, random numbers were injected into the adder inputs to verify the circuit's function as an adder. The functional RTL simulation confirmed that the adder output corresponds to the injected inputs, and that the carry-out bit asserts, as it should, when the addition result exceeds the supported bit width and overflows. Figure 6 shows the timing diagram of the data processing system of the FPGA-based spectrophotometer. The comparator output is assigned as the clock of the counter, but the counter does not start counting until an input from the DE2 board is asserted. When an input arrives (L1 = active low), the time-frame enable signal (clken) is asserted and the counter counts within the set time frame. When the incoming signal is deactivated, the counter stops and the total photon count is displayed on the 7-segment LEDs of the DE2 board. In general, the generated digital signal is counted within a given frame time to produce count values for various soil concentrations.
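The three stages above map directly onto a bitwise software model: per-bit generate/propagate signals, log2(N) parallel-prefix carry stages, and a final XOR. The following Python sketch is a behavioural model of the 16-bit adder, not the authors' Verilog implementation.

```python
def kogge_stone_add(a: int, b: int, width: int = 16):
    """Behavioural model of a width-bit Kogge-Stone adder."""
    mask = (1 << width) - 1
    p = (a ^ b) & mask            # pre-processing: propagate bits
    g = (a & b) & mask            # pre-processing: generate bits
    gg, pp = g, p
    d = 1
    while d < width:              # carry generation: 4 prefix stages for width 16
        gg = (gg | (pp & (gg << d))) & mask
        pp = (pp & (pp << d)) & mask
        d <<= 1
    carries = (gg << 1) & mask    # carry into bit i = prefix generate of bits below i
    s = p ^ carries               # post-processing: final sum bits
    carry_out = (gg >> (width - 1)) & 1
    return s, carry_out
```

For example, kogge_stone_add(0xFFFF, 0x0001) propagates a carry through every bit position, returning a zero sum with the carry-out asserted, which mirrors the overflow behaviour observed in the RTL simulation.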


Fig. 6 The timing diagram of the data acquisition system

3.3 Real-Time Monitoring and App Development

The timing simulation of the implemented Kogge-Stone circuit shows that the propagation delay from the first input to the first output is 12.283 ns. This worst-case propagation delay is similar to the result of Kaur and Kumar [19], whose 16-bit KSA implemented on a Xilinx FPGA using Xilinx software achieved a delay of 12.84 ns. A design synthesized with a more recent CMOS process technology would perform better than one implemented in a much older process. Another attribute that contributes to the performance of a design is the type of optimization applied during synthesis, as a different optimization setting could, for


Fig. 7 Database interface system

example, yield higher speed at the cost of higher power consumption, or vice versa. For this circuit a more balanced optimization setting was selected, and the results obtained reflect that setting. One of the vital specifications of this project is real-time data analysis and monitoring. The data from the FPGA are therefore transferred to the NodeMCU (refer to Fig. 1) and uploaded to the database over a Wi-Fi connection. Figure 7 depicts the interface of the project database. Two types of data, the wavelength range and the contamination value in soil, are uploaded from the NodeMCU to Firebase over Wi-Fi. MIT App Inventor 2 is used as the application development tool because it is open source, free, and user friendly; the developer can easily build an app using block coding to assign the application's functions. Figure 8 shows the graphical user interface of the developed application. The data from Firebase are fetched and displayed in the application when the user presses the GET button. The user may select any visible wavelength for illumination. In this example the macronutrient type is set to phosphorus, while the contamination level depends on the input signal from the TIA. Since Firebase stores the latest result (LED light: red; contamination level: 50%), pressing the GET button retrieves the data from Firebase and displays them on screen. Overall, the discriminator module outputs the correct pulse waveform; the high, low, and undefined logic regions of the DE2 FPGA board were determined through the experiment; and the counting of the total photons within one frame time was also verified experimentally. Table 1 shows the performance specification of the FPGA-based spectrophotometer for the phosphorus macronutrient in soil samples.
The results show that, as the input clock speed increases, the number of gates used to implement the design also increases. The gate count also depends on the type of optimization selected for the circuit. In the case of this parallel photon counting circuit, a more


Fig. 8 Graphical user interface

Table 1 Specification of the FPGA-based spectrophotometer

Parameter | Specification
RGB LED time control (s) | 1
Frame time (ms) | 1–1000
Type of macronutrient | 0–15
Wavelength | 630 nm, 550 nm, and 470 nm
Input frequency (kHz) | 0–200
Optimum input frequency range (kHz) | 1–100
Optimum frame time (ms) | 10

balanced optimization setting was selected, and the results obtained reflect that selection. Another observation is that the timing slack of the circuit gradually turns negative as the input clock frequency increases. This indicates that the signal arrival time (AT) cannot meet the required time (RT) of the circuit before the next clock edge, a behaviour attributable to the limitations of the standard cell library in which the circuit is implemented. The lowest propagation delay achieved by the synthesized circuit was 1.18 ns; making the input clock faster therefore reduces the required time of the circuit, causing the calculated slack to become negative.


4 Conclusions

The proposed project provides a viable tool for in situ soil nutrient monitoring with a high-speed nutrient, pH, and water-level alert system, and can potentially help daily agricultural activities reduce excessive fertilizer usage. IoT ensures accurate and efficient communication of real-time soil quality data to farmers, so they can plan agricultural activities beforehand and take corrective or preventive measures in advance. This promises lower fertilizer usage and more organic food production, as well as healthier and more economical meals enabled by smart soil nutrient monitoring at the starting point of farm production. In addition, this project replaces the standard library adder, the RCA, with the Kogge-Stone adder (KSA), because the KSA is the fastest among adder types, especially compared with the RCA. Analysis of the simulation results showed that the implemented parallel photon counting circuit does present some limitations in terms of metastability when injected with a randomly toggling asynchronous input at high speed. This limitation is mainly attributable to the minimum setup and hold time required by the circuit to properly sample the incoming signal before a corresponding output can be produced. Another limitation of the design was the slow buffer sample rate needed to meet the timing requirement of the PISO shift register circuit so that it could output all of the data sampled from the buffer of the parallel photon counting circuit. By adopting the KSA, the percentage of lost photon counts can be reduced, which improves the accuracy of the spectrophotometer.
Possible contributions of this project include improved automated soil monitoring, supporting healthy and economical daily meal production from farms with IoT, and a low-cost portable soil monitoring system for field use.

Acknowledgements The authors would like to express gratitude for the financial support from Universiti Teknologi Malaysia under GUP Grant Tier 1 (Vot. 11H44) and FRGS Grant (Vot. 4F959). This work was conducted primarily in the Advanced Electronics Laboratory and the Basic Communication Laboratory of the Faculty of Electrical Engineering. The authors would also like to express their utmost appreciation to the assistant engineers, Mr. Ahmad Hassan and Mrs. Wan Norafiza, for their contributions to this work.

References

1. Dick WA, Cheng L, Wang P (2000) Soil acid and alkaline phosphatase activity as pH adjustment indicators. Soil Biol Biochem 32:1915–1919
2. Sankpal A, Warhade K (2015) Review of optoelectronic detection method for the analysis of soil nutrients. Int J Adv Comput Electron Technol 2(2):2394–3416
3. Lu C, Wang L, Hu H, Zhuang Z, Wang Y, Wang R, Song L (2013) Analysis of total nitrogen and total phosphorus in soil using laser induced breakdown spectroscopy. Chin Opt Lett 11(5):053004


4. Yusof KM, Isaak S, Rashid NCA, Ngajikin N (2016) NPK detection spectroscopy on non-agriculture soil. Jurnal Teknologi (Sci Eng) 78(11):227–231
5. Albert DR, Todt MA, Davis FA (2012) A low-cost quantitative absorption spectrophotometer. J Chem Educ 89(11):1432–1435
6. Bah A, Balasundram SK, Husni MHA (2012) Sensor technologies for precision soil nutrient management and monitoring. Am J Agric Biol Sci 7(1):43–49
7. Isaak S, Yusof Y, Ngajikin NH, Ramli N, Chuan MW (2019) A low cost spectroscopy with Raspberry Pi for soil macronutrient monitoring. Telkomnika 17(4):1867–1873
8. Zappa F, Tosi STA, Cova S (2007) Principles and features of single photon avalanche diode arrays. Sens Actuators A 140:103–112
9. Isaak S, Pitter MC, Bull S, Harrison I (2010) Fully integrated linear single photon avalanche diode (SPAD) array with parallel readout circuit in a standard 180 nm CMOS process. AIP Conf Proc 1341(1):175–180
10. Yusof KM, Isaak S, Ngajikin NH, Rashid NCA (2016) LED based soil spectroscopy. Buletin Optik 3:1–7
11. Chuah JH, Holburn D (2014) An integrated solid-state solution for secondary electron detection. Analog Integr Circ Sig Process 1:395–411
12. Isaak S, Pitter MC, Bull S, Harrison I (2010) Design and characterisation of 16 × 1 parallel outputs SPAD array in 0.18 µm CMOS technology. In: 2010 IEEE Asia Pacific conference on circuits and systems, Kuala Lumpur, Malaysia. IEEE, pp 979–982
13. Zheng W, Liu R, Zhang M, Zhuang G, Yuan T (2014) Design of FPGA based high-speed data acquisition and real-time data processing system on J-TEXT tokamak. Fusion Eng Des 89(5):689–701
14. Li M (2007) 5 GHz optical front end in 0.35 µm CMOS. PhD dissertation, Nottingham
15. Lu Z, Yeo KS, Lim WM, Do MA, Boon CC (2010) Design of a CMOS broadband transimpedance amplifier with active feedback. IEEE Trans Very Large Scale Integr (VLSI) Syst 18(3):461–472
16. Isaak S, Yusof Y, Leong CW (2018) A 2.5-GHz optical receiver front-end in a 0.13 µm CMOS process for biosensor application. In: Proceedings of 2018 IEEE-EMBS conference on biomedical engineering and sciences (IECBES), Kuching, Malaysia. IEEE, pp 376–381
17. Butchibabu S, Babu SK (2014) Design and implementation of efficient parallel prefix adder on FPGA. Int J Eng Res Technol 3(7):239–244
18. Xiang LM (2017) VLSI implementation of Kogge Stone adder. Universiti Teknologi Malaysia
19. Kaur J, Kumar P (2014) Analysis of 16 & 32 bit Kogge stone adder using Xilinx tool. J Environ Sci Comput Sci Eng Technol 3(3):1639–1644

Contrast Enhancement Approaches on Medical Microscopic Images: A Review

Nadzirah Nahrawi, Wan Azani Mustafa, Siti Nurul Aqmariah Mohd Kanafiah, Mohd Aminudin Jamlos, and Wan Khairunizam

Abstract Nowadays, many methods exist for medical identification, based, for example, on microscopic and non-microscopic approaches. The microscopic approach uses a microscope to capture an image and identifies the disease from the captured image. The quality of a medical image is very important for patient diagnosis: an image with poor contrast or poor quality may lead to mistaken decisions, even in experienced hands. Contrast enhancement methods have therefore been proposed to enhance image quality. Contrast enhancement is a process that improves the contrast of an image to make its various features more easily perceived; it is widely used and plays an important role in image processing applications. This paper reviews the contrast enhancement techniques used on microscopic images of cervical cancer, leukemia, malaria, tuberculosis and anemia.

Keywords Contrast · Enhancement · Microscopic · Image · Review

1 Introduction

Contrast is an important factor in any assessment of image quality. It is created by the difference in luminance reflected from two adjacent surfaces [1]. In human visual perception, contrast is determined by differences in the colour and brightness of an object relative to other objects [2–4]. Poor image contrast can be caused by low imaging device specifications, an operator's lack of expertise, or adverse external conditions. The resulting image will not

N. Nahrawi · S. N. A. M. Kanafiah · W. Khairunizam
School of Mechatronic Engineering, University of Malaysia Perlis, Pauh Putra Campus, 02600 Arau, Perlis, Malaysia

W. A. Mustafa (✉) · M. A. Jamlos
Faculty of Engineering Technology, University of Malaysia Perlis, UniCITI Alam Campus, Sungai Chuchuh, 02100 Padang Besar, Perlis, Malaysia
e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2021
Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_51



show all the details of the captured scene [5]. Thus, the aim of contrast enhancement (CE) is to solve these problems. CE is a process that improves the contrast of an image to make its various features more easily perceived; it is widely used and plays an important role in image processing applications. The purpose of CE is to enhance interpretability or to obtain a more visually pleasing and informative image [5, 6]. Image enhancement techniques fall into two categories, direct and indirect. Direct methods define a measure of image contrast and try to improve it; indirect methods redistribute the intensity values of the image to improve contrast [5, 7]. There are many CE techniques, and each has its own merits and demerits [8, 9].

2 Contrast Enhancement Methods on Microscopic Images

2.1 Cervical Cancer

Previous research by Chang [10] proposed the energy method to classify abnormal cells in Pap smear images. The images were collected from the Pathology Department, China Medical University Hospital, Taichung, Taiwan, at a resolution of 1280 × 960 pixels. The energy method enhances the cell nucleus, which has a low grey level, using two mean filters: a background-energy operator and a local-energy operator. The background-energy operator is a mean filter with a larger mask, while the mean filter with a smaller mask serves as the local-energy operator. The local energy is subtracted from the background energy to enhance the cell nucleus in the image. Plissiti [11] instead used contrast-limited adaptive histogram equalization (CLAHE) to detect cell nuclei boundaries in 19 Pap smear images stored in JPEG format. In the pre-processing step, CLAHE and a global threshold are applied to the image to extract the background and obtain smooth regions of interest; the author used the same method in another paper [12]. This finding is consistent with the study by Tareef et al. [13], who also used CLAHE, aiming to segment the nucleus and cytoplasm in 135 Pap smear images of size 512 × 512 pixels. In pre-processing, a Gaussian filter and CLAHE are used to reduce noise and enhance the images; because the nuclei have poor contrast due to cytoplasm overlap, CLAHE is applied to enhance them. With the same objective of improving contrast in Pap smear images, Isa [14] proposed a combination of the moving k-means clustering algorithm and linear contrast enhancement. The k-means clustering algorithm segments the images into 60 regions.
Linear contrast enhancement is then applied to enhance the contrast of the images. Three Pap smear images were tested with three different methods: the proposed method, the moving k-means


clustering algorithm alone, and linear contrast enhancement alone. The results show that the size and shape of the nucleus and cytoplasm can be seen clearly; the proposed method thus produces better contrast in Pap smear images.

2.2 Leukemia

A combination of a pixel-removal technique and a Gaussian filter for image enhancement was proposed by Nasir [15]. A total of 91 microscopic images were collected from Hospital Universiti Sains Malaysia (HUSM) at a resolution of 600 × 400: 14 images of acute lymphoblastic leukemia (ALL), 43 of acute myelogenous leukemia (AML), and 34 of normal blood cells. In the enhancement process, the pixel-removal technique eliminates unwanted image content and noise: a threshold of 100 pixels is set, and any region containing fewer than 100 pixels is eliminated. The Gaussian filter removes noise by smoothing and preserves edges better than a similarly sized mean filter. Further research by Nasir points towards a new method based on partial contrast, dark stretching and bright stretching techniques [16], applied to acute leukemia microscopic images at 800 × 600 resolution to improve image quality. In that work, partial contrast combined with bright stretching is applied to enhance blast detection, while partial contrast combined with dark stretching is applied to enhance nucleus detection. Partial contrast is a linear function, and the bright stretching technique is based on a linear mapping function; the partial-contrast mapping is given in Eq. (1), and bright stretching, which is applied to the resultant partial-contrast image to enhance the cytoplasm, in Eq. (2). The dark stretching technique is also a linear mapping function and is the reverse process of bright stretching; to enhance the contrast between nucleus and background, it is applied to the resultant partial-contrast image (Eq. (3)).

P_k = ((max − min) / (f_max − f_min)) × (q_k − f_min) + min    (1)

where
P_k: colour level of the output pixel
q_k: colour level of the input pixel
f_max: maximum colour level value in the input image
f_min: minimum colour level value in the input image
min: desired minimum colour level in the output image
max: desired maximum colour level in the output image


out(x, y) = (in(x, y) / TH) × SFb,  for in(x, y) < TH
out(x, y) = ((in(x, y) − TH) / (255 − TH)) × (255 − SFb) + SFb,  for in(x, y) > TH    (2)

where
in(x, y): colour level of the input pixel
out(x, y): colour level of the output pixel
TH: threshold value
SFb: bright stretching factor

Dark stretching takes the same piecewise form with the dark stretching factor SFd in place of SFb:

out(x, y) = (in(x, y) / TH) × SFd,  for in(x, y) < TH
out(x, y) = ((in(x, y) − TH) / (255 − TH)) × (255 − SFd) + SFd,  for in(x, y) > TH    (3)

where
in(x, y): colour level of the input pixel
out(x, y): colour level of the output pixel
TH: threshold value
SFd: dark stretching factor
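Eqs. (1) and (2) can be written directly as scalar mapping functions; Eq. (3) is the same form with SFd in place of SFb. A minimal sketch, assuming 8-bit colour levels (illustrative code, not the authors' implementation):

```python
def partial_contrast(q, f_min, f_max, out_min=0, out_max=255):
    """Eq. (1): linearly map input levels [f_min, f_max] onto the
    desired output range [out_min, out_max]."""
    return (out_max - out_min) * (q - f_min) / (f_max - f_min) + out_min

def bright_stretch(v, th, sf_b):
    """Eq. (2): compress levels below the threshold TH and expand
    levels above it towards white (sf_b is the stretching factor)."""
    if v < th:
        return v / th * sf_b
    return (v - th) / (255 - th) * (255 - sf_b) + sf_b
```

Applying partial_contrast and then bright_stretch pixel-wise reproduces the blast-enhancement chain described above; swapping in the dark-stretch variant gives the nucleus-enhancement chain.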

However, interestingly, this is contrary to the study conducted by Harum et al. [17], who focused on local contrast stretching (LCS) to enhance contrast variation in images of 800 × 600 resolution. For the enhancement process, LCS increases the contrast of an image according to Eq. (4). The goal of that study was to compare segmentation techniques based on the HSI and RGB colour spaces.

I_output(x, y) = 255 × (I_input(x, y) − min) / (max − min)    (4)

where
I_output(x, y): colour level of the output pixel
I_input(x, y): colour level of the input pixel
max: maximum colour level value in the input image
min: minimum colour level value in the input image

Almost similar to the technique of [17], Halim [18] applied a global contrast stretching (GCS) technique to the degraded images; the samples are acute leukemia microscopic images. A sliding kernel window is applied across the image, and the centre element is adjusted using Eq. (5). The maximum and minimum values are used during the contrast stretching process; to determine them for an RGB colour image, GCS considers the full range of all colour planes.

I_p(x, y) = 255 × (I_o(x, y) − min) / (max − min)    (5)

where
I_p(x, y): colour level of the output pixel
I_o(x, y): colour level of the input pixel
max: maximum colour level value in the input image
min: minimum colour level value in the input image
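The stretching of Eq. (5), and Eq. (4) which shares the same form, reduces to one linear rescale per channel. A single-channel pure-Python sketch (illustrative, not the authors' code):

```python
def contrast_stretch(pixels):
    """Eq. (5): I_out = 255 * (I_in - min) / (max - min), using the
    global minimum and maximum over the whole channel."""
    lo, hi = min(pixels), max(pixels)
    return [round(255 * (p - lo) / (hi - lo)) for p in pixels]

print(contrast_stretch([50, 100, 150]))  # -> [0, 128, 255]
```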

In another study, Rejintal [19] used histogram equalization (HE) to segment cells and extract features for cancer detection, using leukemia microscopic images as samples. In pre-processing, the image is converted to grayscale and filtered, and HE is then applied to increase the contrast of the images.
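Plain histogram equalization, as used in [19], remaps each grey level through the normalized cumulative histogram. A minimal sketch for 8-bit grey values (the textbook CDF remapping, not the authors' exact implementation):

```python
def equalize_hist(pixels, levels=256):
    """Map each grey level through the image's cumulative histogram.

    Assumes the image contains more than one grey level; a flat
    image would make the denominator zero."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)   # first non-empty bin
    return [
        round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
        for p in pixels
    ]
```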

2.3 Malaria

The aim of the research by Purwar [20] is to classify positive and negative cases of malaria using thin blood smear images. Local histogram equalization is used to enhance the grayscale images and improve the visibility of the red blood cells (RBCs) and parasites; over 500 microscopic images from two independent laboratories were tested. The partial contrast stretching (PCS) technique is used by Nasir [21] to improve the contrast of pixels in the image, with the objective of segmenting RBCs containing malaria parasites. The microscopic image samples were captured with an Infinity-2 digital camera at a resolution of 800 × 600 pixels, and PCS is applied to the original image. Mehrjou [22] proposed an adaptive histogram shaping function to improve image contrast, aiming to quantify the number of RBCs and determine whether each RBC is normal or infected by parasites. The image is divided into several tiles and histogram shaping is applied; bilinear interpolation across adjacent tiles eliminates artificially induced boundaries, and uniform histogram shaping is used for each tile to obtain the best results. May [23] applied histogram stretching to adjust the contrast or intensity values of the image in order to detect infected RBCs, using 507 sample images of Plasmodium vivax malaria at the trophozoite stage with a resolution of 764 × 574 pixels. Somasekar [24] found that gamma equalization (GE) can improve low-contrast images. The GE algorithm starts with the input image; the RGB colour image is then converted to a grayscale image using Eq. (6).


N. Nahrawi et al.

G = [0.299 0.587 0.114] [Fr Fg Fb]^T = 0.299 Fr + 0.587 Fg + 0.114 Fb   (6)

Fr: red channel intensity of the original image F
Fg: green channel intensity of the original image F
Fb: blue channel intensity of the original image F

After conversion to grayscale, the c-th order image is computed:

H = [G]^c   (7)

The maximum and minimum values of the image intensity are calculated:

Ma = max(H),  Mi = min(H)   (8)

The difference between the maximum and minimum intensity values is defined as the range:

R = Ma - Mi   (9)

The LUT value is calculated using Eq. 10:

L = (H - Mi) / R   (10)

Contrast Enhancement Approaches on Medical Microscopic Images …

Lastly, the LUT values are transformed back into intensity values of the grayscale image. GE was compared with histogram equalization (HE), Imadjust (IA) and contrast-limited adaptive histogram equalization (CLAHE) on 20 malaria images, using three image quality measures (IQM): entropy, average luminance and absolute mean brightness error (AMBE). As a result, GE showed better image quality. Savkare [25] used Imadjust (IA) to increase the contrast of the image. There are 68 images of malaria parasites at the ring, trophozoite and gametocyte stages of P. falciparum and P. vivax, with differing resolutions. The objective of that research is to identify the species of malaria; to enhance the image, the intensity values of the grayscale image are mapped to new values. A study by Abidin [26] found that the combination of lowpass filtering and contrast stretching shows the best result among six combinations. There are 50 sample images of malaria parasites, with a resolution of 140 × 140 pixels; that research focuses on the image enhancement and segmentation steps. The image was tested with six combination methods: median filtering-contrast stretching, Gaussian filtering-contrast stretching, lowpass filtering-contrast stretching, median filtering-dark stretching, Gaussian filtering-dark stretching, and lowpass filtering-dark stretching. The result of this test is based on visual inspection: with lowpass filtering and contrast stretching, the background of the image is darker and the object is brighter.
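The GE procedure of Eqs. 6-10 can be sketched as follows (a minimal sketch; the value of the exponent c and the rescaling back to 8-bit intensities are assumptions, since they are not fixed here):

```python
import numpy as np

def gamma_equalization(rgb, c=0.5):
    """Sketch of gamma equalization (GE) following Eqs. 6-10.
    `c` (the order of Eq. 7) is an assumed, tunable parameter."""
    # Eq. 6: weighted grayscale conversion
    g = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # Eq. 7: c-th order image (intensities normalized to [0, 1] first)
    h = np.power(g / 255.0, c)
    # Eqs. 8-9: intensity range of H
    ma, mi = h.max(), h.min()
    r = ma - mi
    # Eq. 10: LUT values, then rescaled back to 8-bit intensities
    lut = (h - mi) / r
    return (lut * 255).astype(np.uint8)
```

The sketch assumes a non-constant image (otherwise the range R of Eq. 9 would be zero).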

2.4

Tuberculosis

Another study, by Raof [27], also used partial contrast stretching (PCS). The sample images were collected from the Department of Microbiology and Parasitology, School of Medical Science, Universiti Sains Malaysia, Kubang Kerian. Contrast stretching is based on a linear mapping function and is usually used to enhance the brightness and contrast level of the image. The mapping function is shown in Eq. 11:

Pk = ((max - min) / (fmax - fmin)) × (qk - fmin) + min   (11)

Pk: colour level of the output pixel
qk: colour level of the input pixel
fmax: maximum colour level in the input image
fmin: minimum colour level in the input image
min: desired minimum colour level in the output image
max: desired maximum colour level in the output image

Partial contrast stretching is a combination of stretching and compressing processes; Fig. 1 illustrates both.

Fig. 1 The partial contrast stretching process
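The linear mapping of Eq. 11 can be sketched as follows (parameter names are illustrative; `out_min` and `out_max` play the role of the desired `min`/`max` levels):

```python
import numpy as np

def contrast_stretch(img, out_min=0, out_max=255):
    """Linear contrast stretching following Eq. 11: map the input range
    [fmin, fmax] onto the desired output range [out_min, out_max].
    Assumes a non-constant image (fmax > fmin)."""
    f_min, f_max = float(img.min()), float(img.max())
    # Eq. 11: map each input level qk into the desired output range
    p = (out_max - out_min) / (f_max - f_min) * (img - f_min) + out_min
    return p.astype(np.uint8)
```

Partial contrast stretching applies this mapping piecewise, stretching one sub-range of levels while compressing the rest.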


Wahidah [28] applied a linear contrast stretching technique to 50 images. The images were captured from a collected set of positive-control TB slides of sputum stained with the Ziehl-Neelsen method; the samples were collected from the Department of Microbiology and Parasitology of HUSM, Kelantan. The aim of that paper is to compare color thresholding and global thresholding techniques. Before the segmentation process, linear contrast stretching is applied to enhance the contrast of the image.

2.5

Anaemia

Hirimutugoda [29] proposed an adapted grey world normalization method. The aim of that paper is to develop an automated diagnosis of RBC disorders and to detect malaria parasites and thalassemia in blood. There are 300 images with a size of 160 × 160 pixels. The average color in each sensor channel is assumed to be grey; the method is based on the diagonal model of illumination change, which uses certain characteristics of microscopic peripheral blood images. Maitra [30] used adaptive histogram equalization; detecting and counting RBCs using the Hough transform is the aim of that research, performed on five microscopic blood cell images. In the pre-processing step, adaptive histogram equalization is used to enhance the images. In 2016, Tyagi [31] published a paper applying histogram equalization for image enhancement. The objective of that research is to classify normal RBCs and poikilocyte cells using an artificial neural network. There are 100 images of different blood samples collected from the Haematological Department, AIIMS, New Delhi, captured in JPEG format at a size of 1024 × 768 pixels. Pre-processing, segmentation, morphological operations, feature extraction and classification are performed to identify the cells; in the pre-processing step, the images are converted into grayscale and histogram equalization is applied to obtain the cell boundaries (Table 1).
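The grey world normalization of [29] can be sketched as a per-channel gain correction (a minimal sketch of the diagonal model; clipping back to 8-bit is an assumption):

```python
import numpy as np

def grey_world_normalization(rgb):
    """Grey-world sketch: scale each channel so that its mean matches the
    overall grey mean (diagonal model of illumination change)."""
    rgb = rgb.astype(float)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    grey = channel_means.mean()
    # Diagonal correction: one multiplicative gain per sensor channel
    return np.clip(rgb * (grey / channel_means), 0, 255)
```

After correction, all three channel averages coincide, which removes a global colour cast due to illumination.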

Table 1 Summary of selected contrast enhancement methods

| Method | Description |
|---|---|
| Energy method | Two mean filters are used: background energy (larger mask) and local energy (smaller mask). The local energy is subtracted from the background energy to enhance the cell nuclei in the image [10] |
| Contrast-limited adaptive histogram equalization (CLAHE) | A variant of adaptive histogram equalization (AHE), developed to reduce the noise amplification that AHE can give rise to [11] |
| Moving k-means + linear contrast enhancement | Moving k-means constantly checks the fitness of each centre during clustering; a centre that fails to satisfy a specific criterion is moved to the most active centre region [32]. Linear contrast enhancement linearly manipulates the histogram of the image to fill its dynamic range, making the contrast more uniform [14] |
| Histogram equalization (HE) | Increases the global contrast of an image by spreading out the most frequent intensity values [19] |
| Partial contrast stretching (PCS) | Increases the contrast of the overall image; the minimum and maximum threshold values are mapped to a new, wider range of pixels [16] |
| Local contrast stretching (LCS) | Locally adjusts each pixel value to improve the visualization of structures in both the darkest and lightest areas of the image at the same time. The range of each colour palette in the image is used to represent each range of colour; each palette has its own minimum and maximum values [33] |
| Global contrast stretching (GCS) | All colour palette ranges are considered at once to determine a single maximum and minimum value for the combined RGB colours; these values are then used in the contrast stretching process [33] |
| Gamma equalization (GE) | Improves low-contrast images. Look-up-table (LUT) values are calculated and the intensity values of the grayscale image are converted into LUT values; the c value is important for the enhancement of the input image [24] |
| Grey world normalization | The average colour in each sensor channel is assumed to be grey over the entire image; based on the diagonal model of illumination change, which uses certain characteristics of microscopic peripheral blood images [29] |

3 Conclusion Contrast enhancement is widely used and plays an important role in image processing applications. The purpose of CE is to enhance interpretability or to acquire a more visually pleasing and informative image; HE, CLAHE and contrast stretching are the most commonly used techniques. This paper reviewed various contrast enhancement techniques for microscopic images. Hopefully this review will help researchers improve the existing techniques and develop new algorithms that produce better-quality images. An image with poor contrast and quality may lead to mistaken conclusions, even in experienced hands; image quality is important for the diagnosis result.


Acknowledgements This work was supported by the Ministry of Higher Education Malaysia under the Fundamental Research Grant Scheme (FRGS/1/2018/SKK13/UNIMAP/02/1)

References 1. Al-amri SS, Kalyankar NV, Khamitkar SD (2010) Linear and non-linear contrast enhancement image. J Comput Sci 10:139–143 2. Mustafa WA, Yazid H (2017) Image enhancement technique on contrast variation: a comprehensive review. J Telecommun Electron Comput Eng 9:199–204 3. Kanafiah SNAM, Mashor MY, Mustafa WA, Mohamed Z (2018) A novel contrast enhancement technique based on combination of local and global statistical data on malaria images. J Biomim Biomater Biomed Eng 38:23–30. https://doi.org/10.4028/www.scientific. net/JBBBE.38.23 4. Mustafa WA, Yazid H, Yaacob S (2014) A review : comparison between different type of filtering methods on the contrast variation retinal images. In: IEEE international conference on control system, computing and engineering, pp 542–546 5. Arici T, Dikbas S, Altunbasak A (2009) A histogram modification framework and its application for image contrast enhancement. IEEE Trans Image Process 18:1921–1935. https://doi.org/10.1109/TIP.2009.2021548 6. Ibrahim H, Kong NSP (2007) Brightness preserving dynamic histogram equalization for image contrast enhancement. IEEE Trans Consum Electron 53:1752–1758. https://doi.org/10. 1109/TCE.2007.4429280 7. Baby J, Karunakaran V (2014) Bi-Level Weighted Histogram Equalization with Adaptive Gamma Correction. Int J Comput Eng Res 4(3):25–30 8. Mustafa WA, Yazid H (2017) Contrast and luminosity correction based on statistical region information. Adv Sci Lett 23:5383–5386 9. Mustafa WA, Yazid H (2016) Illumination and contrast correction strategy using bilateral filtering and binarization comparison. J Telecommun Electron Comput Eng 8:67–73 10. Chang CW, Lin MY, Harn HJ, Harn YC, Chen CH, Tsai KH, Hwang CH (2009) Automatic segmentation of abnormal cell nuclei from microscopic image analysis for cervical cancer screening. In: 2009 IEEE 3rd international conference on nano/molecular medicine and engineering NANOMED 2009, pp 77–80. https://doi.org/10.1109/NANOMED.2009. 5559114 11. 
Plissiti ME, Nikou C, Charchanti A (2011) Accurate localization of cell nuclei in pap smear images using gradient vector flow deformable models, pp 284–289. https://doi.org/10.5220/ 0002746702840289 12. Plissiti ME, Nikou C, Charchanti A (2011) Automated detection of cell nuclei in pap smear images using morphological reconstruction and clustering. IEEE Trans Inf Technol Biomed 15:233–241. https://doi.org/10.1109/TITB.2010.2087030 13. Tareef A, Song Y, Cai W, Huang H, Chang H, Wang Y, Fulham M, Feng D, Chen M (2017) Automatic segmentation of overlapping cervical smear cells based on local distinctive features and guided shape deformation. Neurocomputing 221:94–107. https://doi.org/10.1016/j. neucom.2016.09.070 14. Isa NAM (2015) Contrast enhancement image processing technique on segmented pap smear cytology images, 6:3375–3379. https://doi.org/10.13040/IJPSR.0975-8232.6(8).3375-79 15. Abdul-nasir AS, Mustafa N, Mohd-nasir NF (2009) Application of thresholding technique in determining ratio of blood cells for leukemia detection. In: Proceedings of the international conference on man-machine systems, pp 11–13

16. Aimi Salihah AN, Mashor MY, Harun NH, Abdullah AA, Rosline H (2010) Improving colour image segmentation on acute myelogenous leukaemia images using contrast enhancement techniques. In: 2010 IEEE EMBS conference on biomedical engineering and science, pp 246– 251. https://doi.org/10.1109/IECBES.2010.5742237 17. Harun NH, Mashor Y, Mokhtar NR, Osman MK (2010) Comparison of acute leukemia image segmentation using HSI and RGB. In: International conference on information science, signal processing and their applications 2010, pp 749–752 18. Halim NHA, Mashor MY, Abdul Nasir AS, Mokhtar NR, Rosline H (2011) Nucleus segmentation technique for acute leukemia. In: Proceedings - 2011 IEEE 7th international colloquium on signal processing and its applications CSPA 2011, pp 192–197. https://doi.org/ 10.1109/CSPA.2011.5759871 19. Ashwini R, Aswini N (2017) Image processing based leukemia cancer cell detection. In: 2016 IEEE international conference on recent trends in electronics, information & communication technology RTEICT 2016 – Proceedings, pp 471–474. https://doi.org/10.1109/RTEICT.2016. 7807865 20. Purwar Y, Shah SL, Clarke G, Almugairi A, Muehlenbachs A (2011) Automated and unsupervised detection of malarial parasites in microscopic images. Malar. J. 10:364. https:// doi.org/10.1186/1475-2875-10-364 21. Nasir ASA, Mashor MY, Mohamed Z (2012) Segmentation based approach for detection of malaria parasites using moving k-means clustering. In: 2012 IEEE EMBS conference on biomedical engineering and science, pp 653–658. https://doi.org/10.1109/IECBES.2012. 6498073 22. Mehrjou A, Abbasian T, Izadi M (2013) Automatic malaria diagnosis system. In: International conference on robotics and mechatronics, ICRoM 2013, pp 205–211. https:// doi.org/10.1109/ICRoM.2013.6510106 23. May Z, Sarah S, Mohd A (2013) Automated quantification and classification of malaria parasites in thin blood smears, pp 369–373 24. 
Somasekar J, Reddy BE (2015) Contrast-enhanced microscopic imaging of malaria parasites. In: 2014 IEEE international conference on computational intelligence and computing research IEEE ICCIC 2014, pp 1–4. https://doi.org/10.1109/ICCIC.2014.7238439 25. Savkare SS, Narote SP (2015) Automated system for malaria parasite identification. In: Proceedings - 2015 international conference on communication, information & computing technology ICCICT 2015, pp 15–18. https://doi.org/10.1109/ICCICT.2015.7045660 26. Abidin SR, Salamah U, Nugroho AS (2017) Segmentation of malaria parasite candidates from thick blood smear microphotographs image using active contour without edge. In: Proceedings of the 2016 1st international conference on biomedical engineering: empowering biomedical technology for better future IBIOMED 2016. https://doi.org/10.1109/IBIOMED. 2016.7869824 27. Raof RAA, Mashor MY, Ahmad RB, Noor SSM (2012) Image segmentation of Ziehl-Neelsen sputum slide images for tubercle bacilli detection. In: Image segmentation. https://doi.org/10.5772/15808 28. Wahidah MFN, Mustafa N, Mashor MY, Noor SSM (2015) Comparison of color thresholding and global thresholding for Ziehl-Neelsen TB bacilli slide images in sputum samples. In: Proceedings - 2015 2nd international conference on biomedical engineering ICoBE 2015, pp 30–31. https://doi.org/10.1109/ICoBE.2015.7235913 29. Hirimutugoda Y, Wijayarathna G (2010) Image analysis system for detection of red cell disorders using artificial neural networks. Sri Lanka J Bio-Med Inform 1. https://doi.org/10. 4038/sljbmi.v1i1.1484 30. Maitra M, Kumar Gupta R, Mukherjee M (2012) Detection and counting of red blood cells in blood cell images using Hough transform. Int J Comput Appl 53:13–17. https://doi.org/10. 5120/8505-2274

31. Tyagi M, Saini LM, Dahyia N (2016) Detection of Poikilocyte cells in iron deficiency anaemia using artificial neural network. In: 2016 international conference on computation of power, energy information and commuincation ICCPEIC 2016, pp 108–112. https://doi.org/ 10.1109/ICCPEIC.2016.7557233 32. Mashor MY (2000) Hybrid training algorithm for RBF network. Int J Comput Internet Manag 8:50–65 33. Ravindraiah R, Srinu MV (2012) Quality improvement for analysis of leukemia images through contrast stretch methods. Procedia Eng 30:475–481. https://doi.org/10.1016/j.proeng. 2012.01.887

Effect of Different Filtering Techniques on Medical and Document Image

Wan Azani Mustafa, Syafiq Sam, Mohd Aminudin Jamlos, and Wan Khairunizam

Abstract Image enhancement is a very important stage in image processing, and a common enhancement step uses filtering. Filtering alleviates image display problems and can improve the quality of the image; typical problems are illumination, noise and under-lit images. These problems also cause trouble for image recognition in the daily work of certain people. The objective of this study is to explore and compare a few state-of-the-art filtering techniques based on the mathematical algorithm of each filter, and then to identify the best method. A few methods were selected for this project, such as the high pass filter, low pass filter, high boost filter and others. All selected filters were tested on medical images and document images. The resulting images were evaluated using two image quality assessments (IQA): the global contrast factor (GCF) and the signal to noise ratio (SNR). Based on the numerical results, the homomorphic low pass filter (HLPF) provides the best performance among the filters in terms of GCF (2.066) and SNR (8.907) on the selected images.

Keywords Contrast · Filtering · Illumination · Signal noise ratio · Global contrast factor

W. A. Mustafa · S. Sam · M. A. Jamlos, Faculty of Engineering Technology, University of Malaysia Perlis, Kampus Sg. Chuchuh, 02100 Padang Besar, Perlis, Malaysia. e-mail: [email protected]
W. Khairunizam, School of Mechatronic Engineering, University of Malaysia Perlis, 02600 Arau, Perlis, Malaysia
© Springer Nature Singapore Pte Ltd. 2021 Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_52


W. A. Mustafa et al.

1 Introduction In recent decades, the filtering technique has been one of the major research subjects because of its role in improving image quality. In the image processing field, filtering is one of the important steps that should be considered before post-processing such as detection or segmentation [1, 2]. The main objective of filtering is to remove or eliminate the noise in the input image [3]; this unwanted signal would otherwise affect the output of later processing stages. Nowadays, many researchers have studied the effect of filtering techniques on non-uniform images such as medical images [4] and document images [5]. In 2013, Xu et al. [6] from Beihang University proposed a new Hessian-based filtering technique: several filtering steps were applied on a filter bank, and the filtered directional images were reassembled to produce the final result. Mustafa et al. [7] presented a comprehensive review of filtering types such as the high pass filter, low pass filter and homomorphic filtering. The comparison was tested on medical images, namely retinal images from the DRIVE dataset; according to this experiment, they concluded that the homomorphic high pass filter is more effective than the other filters. In another study, Rahul Rajendran et al. presented a different filtering technique to improve low-resolution input images [8]. The proposed technique applies guided filtering, edge enhancement, morphological filtering and more; such filtering has numerous uses, including image fusion and colorization [9]. The results achieved show that the filtering algorithm is effective for X-ray images as well, and their method delivers better outcomes than the approach in [10] because it removes the unwanted noise. The experiment was applied to computed tomography (CT) and X-ray images.
In 2016, a project addressed the accurate and efficient detection of a brain tumor, together with its position and area, in MRI images [11]. The brain was scanned and an image taken for the entire process; the pre-processing stage removes the noise in the image and resizes the brain image. According to an investigation by Mustafa et al. [12], mean filtering is a good alternative way to remove noise in non-uniform input images. In that study, they proposed a new enhancement technique applying mean filtering twice, called the Double Mean Filtering (DMF) technique; based on the results, the luminosity was successfully normalized and the image quality improved [13]. In this paper, a comprehensive review of a few selected filtering types is presented. The objective of this study is to explore the mathematical algorithm of each filter and find the best technique. The experiment was conducted on document images and medical (cell) images. Image quality assessments, namely the global contrast factor (GCF) and the signal to noise ratio (SNR), were performed in order to compare the effectiveness of every method. The rest of this paper is organized as follows: Sect. 2 describes the selected filtering techniques; experimental results and discussion are shown in Sect. 3; finally, Sect. 4 concludes this work.


2 Methodology The methods used in this paper are explained in this part: the basic meaning of each filter and its mathematical algorithm.

2.1

Low Pass Filter (LPF)

An LPF is also called a blurring or smoothing filter; the filtered image looks blurrier. However good the camera is, it always adds some amount of noise into the image, and the quantum nature of light itself also contributes noise [14]. The LPF can be divided into two types, discrete and continuous. The equation of the ideal LPF is as follows:

LPF(x, y) = 1 if sqrt(x² + y²) < cf;  0 if sqrt(x² + y²) > cf   (1)

2.2

High Pass Filter (HPF)

An HPF can be implemented with the same technique as an LPF but with a different convolution kernel [7]. Equation (2) shows the HPF mathematical algorithm. In fact, a Fourier HPF is applied to separate low-frequency illumination from high-frequency reflectance. The main purpose of the high pass filter is to stop the low-frequency component while passing the high-frequency component of the signal [15, 16]:

HPF(x, y) = 1 if r(x, y) > cf;  0 otherwise   (2)

HPF(x, y) = 1 - LPF(x, y)
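The ideal masks of Eqs. 1 and 2 can be sketched in the frequency domain as follows (a minimal sketch; `cutoff` corresponds to cf, and distances are measured from the centre of the shifted spectrum):

```python
import numpy as np

def ideal_lowpass_mask(shape, cutoff):
    """Ideal LPF mask of Eq. 1: 1 inside radius `cutoff`, 0 outside."""
    rows, cols = shape
    y, x = np.ogrid[-(rows // 2):rows - rows // 2,
                    -(cols // 2):cols - cols // 2]
    return (np.sqrt(x**2 + y**2) < cutoff).astype(float)

def ideal_highpass_mask(shape, cutoff):
    """Ideal HPF mask: the complement of the LPF mask (HPF = 1 - LPF)."""
    return 1.0 - ideal_lowpass_mask(shape, cutoff)
```

In use, the mask multiplies the centred spectrum, e.g. `np.fft.fftshift(np.fft.fft2(img)) * mask`, before the inverse transform.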

2.3

High Boost Filter (HBF)

High boost filtering of a grayscale image is a spatial-domain image sharpening technique. It is frequently necessary to emphasize the high-frequency parts representing the image detail without removing the low-frequency components representing the coarse form of the signal [17]. In this case, the HBF can be used to boost the high-frequency parts while leaving the low-frequency parts unchanged. Figure 1 shows the comparison between the low pass filter, high pass filter and high boost filter.


Fig. 1 Homomorphic high boost filter graph
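A common high boost formulation adds amplified high-frequency detail back to the image; the sketch below uses a 3 × 3 mean filter as the lowpass and an assumed boost factor k (the choice of lowpass and of k is not specified here):

```python
import numpy as np

def high_boost(img, k=1.5):
    """High boost sharpening sketch: img + k * highpass detail.
    Uses a simple 3x3 mean filter as the lowpass; k >= 1 boosts detail."""
    pad = np.pad(img.astype(float), 1, mode="edge")
    # 3x3 mean filter via shifted sums (the lowpass component)
    low = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3)) / 9.0
    detail = img - low            # the highpass component
    return np.clip(img + k * detail, 0, 255)
```

With k = 1 this reduces to ordinary unsharp masking; larger k emphasizes the high-frequency parts more strongly.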

2.4

Homomorphic Filtering

This method is a typical method for filtering and improving an image. It stabilizes the illumination of the image and improves the contrast when the illumination component cannot be excluded [18, 19]. Two components are treated simultaneously to correct the unwanted appearance: illumination and contrast [20]. Fundamentally, an image can be expressed mathematically in terms of illumination and reflectance, as in Eq. 3. The model is taken into the natural log domain and the Fourier transform is applied, as in Eq. 4; a filter is then applied and the inverse Fourier transform recovers the signal, as denoted in Eq. 5. The illumination image is I(x, y), the reflectance image is R(x, y) and the filter is H(x, y); FT{·} and FT⁻¹{·} denote the Fourier transform and its inverse. Lastly, the inverse of the natural log, the exponential, is applied to return to the spatial domain, as in Eq. 6:

F(x, y) = I(x, y) R(x, y)   (3)

Z(x, y) = ln{F(x, y)} = ln{I(x, y)} + ln{R(x, y)}   (4)

S(x, y) = FT⁻¹{H(x, y) FT{ln I(x, y)}} + FT⁻¹{H(x, y) FT{ln R(x, y)}}   (5)

G(x, y) = exp{I′(x, y)} × exp{R′(x, y)}   (6)

where I′(x, y) and R′(x, y) denote the two filtered terms of Eq. 5.
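A minimal sketch of Eqs. 3-6 follows (the Gaussian-shaped high-emphasis H(u, v) and the gain/cutoff values are assumptions, as the text does not specify the form of H):

```python
import numpy as np

def homomorphic_filter(img, cutoff=30, low_gain=0.5, high_gain=2.0):
    """Homomorphic filtering sketch (Eqs. 3-6): log -> FFT -> H -> exp."""
    z = np.log1p(img.astype(float))            # Eq. 4: natural log domain
    Z = np.fft.fftshift(np.fft.fft2(z))        # Fourier transform, centred
    rows, cols = img.shape
    y, x = np.ogrid[-(rows // 2):rows - rows // 2,
                    -(cols // 2):cols - cols // 2]
    d2 = x**2 + y**2
    # High-emphasis H(u,v): attenuates illumination, boosts reflectance
    H = (high_gain - low_gain) * (1 - np.exp(-d2 / (2 * cutoff**2))) + low_gain
    s = np.real(np.fft.ifft2(np.fft.ifftshift(H * Z)))  # Eq. 5
    return np.expm1(s)                          # Eq. 6: back via exponential
```

Because the log turns the product I·R into a sum, one linear filter H can suppress the slowly varying illumination and amplify the reflectance detail at the same time.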


2.5


Image Quality Assessment (IQA)

Image quality assessment (IQA) is a method to measure the displayed quality of an image, and it plays a variety of roles in many image processing techniques [21]. In general, image quality evaluation can be categorized into two types: subjective and objective. A subjective technique such as the Mean Opinion Score (MOS) is decisive but troublesome, time-consuming and costly; objective evaluation is simpler and easier, and more accurate because it uses quantitative data. In this study, objective measurement was used to evaluate the performance of each filtering type. Two types of image quality assessment (IQA) were calculated: the Global Contrast Factor (GCF) and the Signal to Noise Ratio (SNR).
Signal to Noise Ratio (SNR). SNR is the ratio of the average signal value to the standard deviation of the signal. A higher SNR value indicates a better-quality image, while a lower SNR indicates regions of the image marred by background noise [22, 23]. Here I(x, y) represents the input image, mean[·] its average and std[·] its standard deviation. The SNR equation is as follows:

SNR = 10 log10 ( mean[I(x, y)] / std[I(x, y)] )   (7)

Global Contrast Factor (GCF). The newly introduced GCF comes closer to the human perception of contrast by calculating the local contrast ci at several spatial frequencies and combining these local contrasts, with weights wi, into the global contrast factor. A low GCF value indicates that the image is uniform [24, 25]:

GCF = Σ (i = 1 … N) wi ci   (8)

In this study, 10 document images and 10 cell images were tested. First, the input data passed through each filtering process and each output image was saved. After all processing finished, the GCF and SNR were computed in order to identify the most effective filtering technique.
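The SNR of Eq. 7 can be computed directly as below (a minimal sketch; the GCF weights of Eq. 8 are resolution-specific and are omitted here):

```python
import numpy as np

def snr(img):
    """SNR of Eq. 7: ratio of mean to standard deviation, in decibels."""
    img = img.astype(float)
    return 10 * np.log10(img.mean() / img.std())
```

Evaluating this for every filter's output and picking the highest value reproduces the selection procedure described above.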

3 Results and Discussion Indication of image quality is significant for image processing techniques. Objective methods for measuring perceptual image quality traditionally attempt to quantify the visibility of differences between a distorted image and an original image using a variety of known properties. First, such a measure can be used to dynamically monitor and adjust image quality; for example, a network digital video server can monitor the quality of the video being switched and distributed to display assets. Second, it can be used to optimize the algorithms and parameter settings of image processing systems; for instance, in a visual system a quality measure can contribute to the optimal design of filtering and other algorithms [6]. In this part, two types of image quality assessment (IQA) were calculated, SNR and GCF. The primary goal of this analysis is to explore and study the behaviour of each IQA on the tested images. Figures 2 and 3 show the resulting images after applying the selected filtering techniques to document images and medical images.

Fig. 2 Comparison of different types of the filter on the document images (rows: original image, LPF, HPF, HHPF, HLPF, HHBF)

Fig. 3 Comparison of different types of the filter on the medical images (rows: original image, LPF, HPF, HHPF, HLPF, HHBF)

According to Table 1, twenty (20) images were filtered, divided into two types: document images and medical images. The resulting images after applying the different filters show that the homomorphic LPF gives the best output for removing the contrast variation in both the document images and the medical images. Table 1 lists the GCF and SNR values for the different kinds of filter; the best image should produce a high SNR value and a low GCF value. Based on Table 1, image number 9 of the document images gives the highest SNR value (10.974) and the lowest GCF value (0.679), obtained with the homomorphic LPF. Likewise, image number 17 of the medical images gives the highest SNR value (10.177) and the lowest GCF value (1.791), also with the homomorphic LPF. By the theory of SNR, a high SNR value indicates a better image because noise has been reduced, while the GCF ratio should be low; as the GCF approaches zero, the image becomes more uniform. The best performer in this IQA evaluation is therefore the homomorphic LPF, with the highest SNR value (10.974) and the lowest GCF value (0.679) among all filters over the experimented images.


Table 1 Comparison of GCF and SNR values using different filtering methods (images 1-10: document images; images 11-20: medical images; best filter: lowest GCF, highest SNR)

| No. | LPF GCF | LPF SNR | HPF GCF | HPF SNR | HHPF GCF | HHPF SNR | HLPF GCF | HLPF SNR | HHBF GCF | HHBF SNR |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 4.445 | 5.900 | 3.653 | 6.272 | 4.494 | 5.912 | 2.970 | 6.512 | 4.318 | 5.769 |
| 2 | 3.661 | 10.001 | 3.341 | 6.308 | 3.525 | 4.926 | 3.132 | 6.615 | 3.685 | 5.361 |
| 3 | 3.666 | 7.748 | 2.323 | 8.477 | 2.652 | 8.244 | 1.707 | 9.396 | 2.643 | 8.058 |
| 4 | 2.613 | 7.335 | 2.129 | 8.427 | 2.240 | 8.334 | 1.434 | 9.350 | 2.160 | 8.381 |
| 5 | 2.510 | 7.248 | 1.875 | 8.487 | 2.313 | 7.708 | 1.644 | 8.802 | 2.197 | 7.822 |
| 6 | 3.072 | 7.187 | 2.530 | 7.630 | 2.920 | 7.190 | 1.832 | 7.737 | 2.689 | 7.156 |
| 7 | 3.431 | 7.052 | 2.634 | 8.029 | 2.850 | 7.743 | 1.806 | 8.596 | 2.769 | 7.749 |
| 8 | 2.965 | 6.904 | 2.454 | 8.157 | 2.730 | 7.873 | 1.924 | 8.524 | 2.730 | 7.729 |
| 9 | 1.307 | 4.591 | 1.088 | 10.420 | 1.136 | 10.235 | 0.679 | 10.974 | 1.069 | 10.357 |
| 10 | 3.367 | 2.965 | 2.912 | 7.469 | 3.093 | 7.355 | 1.952 | 8.085 | 2.921 | 7.417 |
| 11 | 3.514 | 7.316 | 3.023 | 8.314 | 3.278 | 7.669 | 2.655 | 8.974 | 2.987 | 8.282 |
| 12 | 3.540 | 7.132 | 3.062 | 7.937 | 3.350 | 7.358 | 2.736 | 8.389 | 3.083 | 7.834 |
| 13 | 2.896 | 8.002 | 2.099 | 9.626 | 2.676 | 8.345 | 1.857 | 10.157 | 2.241 | 9.300 |
| 14 | 3.222 | 7.406 | 2.614 | 8.571 | 3.027 | 7.727 | 2.288 | 9.172 | 2.678 | 8.415 |
| 15 | 2.915 | 7.925 | 2.340 | 9.035 | 2.647 | 8.364 | 1.983 | 9.710 | 2.356 | 8.942 |
| 16 | 3.062 | 7.740 | 2.610 | 8.539 | 2.861 | 8.004 | 2.299 | 9.039 | 2.603 | 8.476 |
| 17 | 2.678 | 8.368 | 2.103 | 9.563 | 2.438 | 8.756 | 1.791 | 10.177 | 2.147 | 9.414 |
| 18 | 2.745 | 8.135 | 2.180 | 9.339 | 2.548 | 8.450 | 1.906 | 9.812 | 2.288 | 9.020 |
| 19 | 2.733 | 8.411 | 2.211 | 9.452 | 2.519 | 8.758 | 1.907 | 10.046 | 2.233 | 9.357 |
| 20 | 3.865 | 6.566 | 3.244 | 7.497 | 3.608 | 6.868 | 2.816 | 8.075 | 3.308 | 7.333 |
| Total | 62.20 | 143.93 | 50.42 | 167.54 | 56.90 | 155.81 | 41.31 | 178.13 | 53.10 | 162.17 |
| Average | 3.110 | 7.197 | 2.521 | 8.377 | 2.845 | 7.791 | 2.066 | 8.907 | 2.655 | 8.109 |
| Highest | 4.445 | 10.001 | 3.653 | 10.420 | 4.494 | 10.235 | 3.132 | 10.974 | 4.318 | 10.357 |
| Lowest | 1.307 | 2.965 | 1.088 | 6.272 | 1.136 | 4.926 | 0.679 | 6.512 | 1.069 | 5.361 |

4 Conclusion The filters can be applied to images that have illumination problems, noise or under-lighting. This paper gave an overview of the background and related work in the area of filtering techniques, describing a few selected popular filtering methods such as the HPF, LPF and homomorphic filters, and explored the mathematical algorithm of each filtering technique. The filters were tested on medical images and document images, and the output images were evaluated using the global contrast factor (GCF) and signal to noise ratio (SNR); the best method is the one with the highest SNR value and the lowest GCF value. Based on the numerical results, the homomorphic LPF provides the best performance among the filters in terms of GCF (2.066) and SNR (8.907) on the selected images. Further experimental research is recommended to address the weakness of this method, since the GCF value is still not close to zero; the closer the GCF is to zero, the more uniform the image. More work is needed to determine how to obtain the best GCF and SNR values, and a future study investigating many more filters would be very interesting.

Effect of Different Filtering Techniques on Medical …


Acknowledgements This work was supported by the Ministry of Higher Education Malaysia under the Fundamental Research Grant Scheme (FRGS/1/2018/SKK13/UNIMAP/02/1).



W. A. Mustafa et al.


Implementation of Seat Belt Monitoring and Alert System for Car Safety Zainah Md Zain, Mohd Hairuddin Abu Bakar, Aman Zaki Mamat, Wan Nor Rafidah Wan Abdullah, Norsuryani Zainal Abidin, and Haris Faisal Shaharuddin

Abstract Modern cars have many safety features which play a significant role in reducing traffic injuries and deaths. One of the causes of fatalities in car accidents is not wearing a seat belt. To overcome this problem, an attempt has been made to design a car safety system in which the car will not run unless the driver and passengers fasten their seat belts before turning on the car. In the proposed system, ultrasonic devices are used to detect the driver and passengers, while limit switches detect the seat belts that have been fastened. In addition, a switching circuit is designed and installed between the seat belts and the ignition system to control engine starting. An Arduino Mega microcontroller acts as the signal processing unit that controls the security system in the car. The experimental results show that the system is able to enhance the safety of the driver and passengers. Keywords Seat belt

· Alert systems · Car safety

1 Introduction The seat belt is one of the best safety features in the modern car, as it secures passengers in the car during collisions and other accidents [1]. All cars are equipped with the three-point seat belt for the driver (mandatory), front passenger (mandatory), and rear passengers (optional). The seat belt plays a vital role in preventing injuries. The basic idea of a seat belt is very simple; it keeps you from flying through the windshield or hurtling toward the dashboard when your car comes to an abrupt stop. A seat belt applies the stopping force to more durable parts of the body over a longer period of time to prevent injuries. A typical seat belt consists of a lap belt,

Z. Md Zain (&) · M. H. Abu Bakar · A. Z. Mamat · W. N. R. Wan Abdullah · N. Zainal Abidin · H. F. Shaharuddin
Robotics and Unmanned Research Group (RUS), Instrument and Control Engineering (ICE) Cluster, Faculty of Electrical and Electronics Engineering, Universiti Malaysia Pahang, 26600 Pekan, Pahang, Malaysia
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021 Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_53


Z. Md Zain et al.

which rests over your pelvis, and a shoulder belt, which extends across your chest. The two belt sections are tightly secured to the frame of the car in order to hold passengers in their seats. In modern cars, the ECUs are intelligent enough to alert the driver and passengers about the seat-belt status, whether buckled or unbuckled, in different ways: sometimes only a tell-tale, sometimes a tell-tale with a buzzer (if the vehicle is moving), or a tell-tale plus buzzer plus a text warning display [2–5]. According to a study by the Malaysian Institute of Road Safety Research (MIROS) [6], occupants' chances of surviving an accident in Malaysia improve by 60% when they wear seat belts. The public attitude of treating seat belt use as trivial is the major contributor to the increase in the death rate due to accidents. The excuses given include feeling more comfortable without one, or only travelling a short distance, even though fastening a seat belt takes only a couple of seconds. Some users consider the air bag in the car sufficient to minimize the impact in an accident, whereas that equipment should be used together with the main safety device, namely the seat belt [5]. Accordingly, the aim of this study is to design a system that enforces the use of seat belts in order to reduce the risk of death in an accident. The car engine will not start as long as the driver or a passenger has not fastened their seat belt. If only the driver is in the car, the system detects the driver, who must fasten the seat belt before the car engine can be turned on; if the driver does not fasten the seat belt first, the engine cannot be switched on. If passenger 1 is present, then even if the driver has fastened the seat belt, the car engine still cannot be turned on until passenger 1 fastens theirs as well. The same rule applies to passengers 2, 3 and 4.
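The start-permission rule described above can be sketched as a small decision function (a hypothetical model for illustration, not the authors' Arduino code): a seat reported as occupied by its ultrasonic sensor must also have its belt limit switch closed before the engine output is enabled.

```python
def engine_start_allowed(occupied, buckled):
    """Return True only if every occupied seat has its belt buckled.

    occupied: booleans from the 4 ultrasonic sensors
              (driver, passenger 1, passenger 2, passenger 3)
    buckled:  booleans from the 4 seat-belt limit switches
    An empty car trivially satisfies the rule.
    """
    return all(belt for seat, belt in zip(occupied, buckled) if seat)

# Driver and passenger 1 present; only the driver is buckled -> blocked.
print(engine_start_allowed([True, True, False, False],
                           [True, False, False, False]))  # False

# Both present and both buckled -> the engine may start.
print(engine_start_allowed([True, True, False, False],
                           [True, True, False, False]))   # True
```

Belt switches on unoccupied seats are simply ignored, which matches the behaviour described in the text: only detected occupants must buckle up.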

2 System Overview The main components of the proposed system (Fig. 1) include an Arduino Mega microcontroller, which acts as the heart of the system, limit switches, a proximity switch, an ISD1820 voice recorder and player, an 8-relay module (5 V DC), HC-SR04 ultrasonic sensors, a mini siren and a 16 × 2 display. The connections between the electronic components are illustrated in Fig. 2. The flowchart of the proposed system is shown in Fig. 3. The design starts with the ultrasonic sensor and limit switch circuits, which ensure that the car engine cannot be switched on while any ultrasonic sensor detects an occupant whose belt limit switch has not been activated. A voice message is then used to announce the current status of the system to the driver. The alarm output has been designed using three items deemed suitable for this project. Besides the audio instruction, the indicator LED has its own colour code: red indicates that the car cannot yet be switched on, and when the LED turns from red to green the car can be switched on. If the car is in the switched-on condition and a seat belt is removed, the audio output sounds together with the siren to indicate that the car engine will stop within 15 min unless the seat belt is refastened; if the belt is refastened within the 15 min given, the engine keeps running.

Implementation of Seat Belt Monitoring …


Fig. 1 Block diagram of the proposed system: four ultrasonic sensors (driver and passengers 1–3) and four seat-belt limit switches (belt driver, belt passengers 1–3) feed the controller (Arduino Mega 2560), which drives the speaker, alarm and display and provides the output to start the car engine

Fig. 2 Wiring diagram of the proposed system

The hardware was then designed and built, combining the designed circuits into a compact panel mounted on the real car. Once the hardware design was completed, the work proceeded to the next level, hardware testing. At this stage, the hardware will


Fig. 3 Flowchart of the developed system


be tested to check whether it functions and achieves the goal. If the test is unsuccessful, the process returns to redesigning and rebuilding the hardware. The last stage is to analyse the results, in which the outputs of the simulation and of the tests performed are obtained and recorded.

3 Project Implementation

3.1 Panel Design

The front panel is required to show users what the system is doing. The front contains an LCD display to indicate the current status of the system, four LED lights that can change colour between green and red, each representing the driver or one of the passengers, and a blue LED to display a warning when a safety belt is opened while the engine is running.

Fig. 4 Front panel and internal panel design


Fig. 5 Power supply circuit

The red LED lights up when an ultrasonic sensor detects the driver or a passenger in the vehicle; when the driver and passengers fasten their safety belts, the red LEDs turn green and the driver can then turn on the vehicle's engine. Behind the front panel are placed the power supply, Arduino Mega, voice recorder and 8-channel relay, to facilitate a neat installation of this project in the vehicle. The front panel is mounted on top of the radio (Fig. 4).

3.2 Arduino Circuit Power Supply from the Direct Car System

The Arduino board can operate on an external supply of 6–20 V. However, if the supply voltage is less than 7 V, the 5 V pin may supply less than 5 V and the board may become unstable, while if the voltage is more than 12 V, the voltage regulator may overheat and damage the board. The recommended range is therefore 7–12 V. The basic car electrical system provides around 12–13 V when the engine is off and 13–14 V when it is running. The basic idea is thus to use a simple voltage regulator which takes an unregulated voltage in and outputs a regulated voltage. The LM1084IT-12 is a 12 V, 5 A, low-dropout voltage regulator in a TO-220 package, to which a heatsink is added. The low-dropout feature is valuable here because the car's supply varies so much. We loop the power supply from the accessory (radio) circuit, because this point has a maximum current of 10 A. By calculation, the radio uses about 6 A and the Arduino controller draws a maximum of 1.5 A under load (Fig. 5).
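The supply budget quoted above can be checked numerically. The figures (7–12 V recommended Arduino input window, 10 A accessory-circuit limit, ~6 A radio, 1.5 A controller) are taken from the text; the helper name is illustrative:

```python
def within_recommended_input(v):
    """Arduino's recommended external supply window, per the text."""
    return 7.0 <= v <= 12.0

# The LM1084IT-12 regulates the varying 12-14 V car supply to a fixed
# 12 V rail, which sits exactly at the top of the recommended window.
assert within_recommended_input(12.0)
assert not within_recommended_input(14.0)   # raw "engine running" voltage

# Accessory (radio) circuit budget: 10 A point, radio ~6 A, controller 1.5 A.
point_limit_a = 10.0
radio_a, controller_a = 6.0, 1.5
headroom_a = point_limit_a - (radio_a + controller_a)
print(f"spare capacity on the accessory point: {headroom_a:.1f} A")  # 2.5 A
```

The 2.5 A of headroom explains why looping power from the radio point is acceptable for this project.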

3.3 Start-Stop Wiring Diagram

The wiring diagram for starting and stopping the engine is shown in Fig. 6.


Fig. 6 Start-stop wiring diagram

3.4 Buckle Design

The buckle is one of the important parts of this project: a limit switch is fitted inside the buckle to detect whether the vehicle user is wearing the safety belt. The limit switch is connected to a relay, and the relay signals the display. The buckle design is not very difficult, requiring only drilling the screw holes and installing the limit switch at just the level where it can detect that the safety belt is already fastened (Fig. 7).

4 Results and Discussions Belting rear passengers also protects those in front: a rear passenger who does not wear a seat belt poses a risk and hazard to the driver and front passengers, since when a vehicle moving at 50 km/h stops suddenly, a 60 kg rear passenger can hit the front seat with an impact of 2.4 tons. The seat belt also prevents passengers from being hurled about or thrown out of the vehicle when it stops abruptly. This project is a very useful reminder to drivers who ignore the importance of the seat belt. The complete electronic part in the developed panel is shown in Fig. 8.
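The 2.4-ton figure quoted above is consistent with a simple constant-deceleration estimate, if one assumes the 60 kg passenger is stopped over roughly 25 cm of seat-back deformation. The stopping distance is an assumption introduced here for illustration, not a value from the paper:

```python
m = 60.0            # rear passenger mass, kg (from the text)
v = 50 / 3.6        # 50 km/h converted to m/s (from the text)
d = 0.25            # assumed stopping distance, m (illustrative only)

a = v**2 / (2 * d)  # deceleration for a constant-force stop, m/s^2
force_n = m * a     # average force exerted on the front seat, N
force_tonnes = force_n / 9.81 / 1000   # convert N to tonne-force

print(f"{force_tonnes:.1f} tonne-force")  # 2.4 tonne-force
```

A shorter stopping distance would give an even larger force, so the quoted figure is, if anything, on the conservative side under these assumptions.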


Fig. 7 Buckle design

Fig. 8 Electronic parts in the panel

4.1 Power Consumption

The power consumption of the developed hardware is measured as shown in Fig. 9. The full power consumption readings are presented in Table 1. • Power is calculated using the formula Power = Voltage × Current • 404.8 mA to 140.4 mA is the running-LED current-draw range, from 1 LED unit to the 5 LED units used in the system • 134.5 mA to 624.5 mA is the current range for 1 to 8 relay units operating • 141.8 mA to 361.8 mA is the current range for 1 to 4 ultrasonic sensor units operating • The total current for the full system operating is 1.5 A, and the power is 16,859 mW
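The headline numbers above can be cross-checked with Power = Voltage × Current: dividing the reported 16,859 mW by the 1.5 A total current implies a supply rail of roughly 11.2 V, close to the regulated 12 V rail described in Sect. 3.2 (this back-calculation is ours, not stated in the paper):

```python
total_power_mw = 16859     # full-system power from Table 1, mW
total_current_ma = 1500    # full-system current, 1.5 A in mA

# P = V * I  =>  V = P / I  (mW / mA gives volts directly)
implied_supply_v = total_power_mw / total_current_ma
print(f"implied supply voltage: {implied_supply_v:.2f} V")  # 11.24 V
```

The slight drop below 12 V is plausible as wiring and regulator losses under load, so the two reported totals are mutually consistent.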


Fig. 9 Measuring power consumption

Table 1 Power consumption

4.2 Working Principle

Figure 10 shows that when the ultrasonic sensors detect the driver and the passengers, the LEDs show red and the LCD displays the status "TIGHTEN SEAT BELT FIRST" so that the safety belts are fastened. Figure 11 shows that after all the safety belts are fastened, the LEDs turn green and the display shows that the driver can turn on the vehicle engine.


Fig. 10 Ultrasonic sensors detect the passengers in the car


Fig. 11 After all the safety belt is worn

Figure 12 shows the passenger 2 LED turning red, because passenger 2 has opened the safety belt. In the meantime, the blue warning LED lights up and the LCD display shows that the engine will stop within 15 min if the safety belt is

Fig. 12 The LED passenger 2 lights turn red


Fig. 13 The safety belt is still not worn

not refastened. Figure 13 shows that if the safety belt is still not worn, the LCD display indicates that the engine has stopped.

5 Conclusion In this paper, a system that monitors the seat belts of the driver and passengers and alerts them has been successfully developed, yielding a car safety system in which the car cannot be turned on unless the driver and passengers fasten their seat belts first. This system can therefore be used in a car as an alert system for the driver and passengers, in order to reduce the fatalities caused by accidents in which seat belts were not worn while driving. Acknowledgements The authors would like to thank Universiti Malaysia Pahang (UMP) for supporting this research under grant RDU1803189.

References 1. Hammadi KA, Ismaeel M, Faisal T (2016) Intelligent car safety system. In: 2016 IEEE industrial electronics and applications conference (IEACon), pp 319–322. https://doi.org/10.1109/ieacon.2016.8067398 2. Seelam K, Lakshmi CJ (2017) An Arduino based embedded system in passenger car for road safety. In: 2017 international conference on inventive communication and computational technologies (ICICCT). https://doi.org/10.1109/icicct.2017.7975201 3. Johansson P, Bernhard J (2012) Advanced control of a remotely operated underwater vehicle, Department of Electrical Engineering, Linköpings Universitet, Sweden, Technical report. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-79364


4. Kulanthayan S, Law TH, Raha AR, Radin Umar RS (2004) Seatbelt use among car users in Malaysia. IATSS Res 28:19–25 5. Fahmi AM, Othman I, Ahmad MS, Batcha WA, Mohamed N. Chapter of seatbelt wearing among vehicle occupants, report of evaluation of the effectiveness of ops Chinese new year 2013 conducted over the Hari Raya period from February 3 to February 17 2013. Malaysian Institute of Road Safety Research, Kuala Lumpur 6. Seat belts, helmet reduces fatalities, 27 July 2015. http://www.thesundaily.my/news/1500754

Electroporation Study: Pulse Electric Field Effect on Breast Cancer Cell Nur Adilah Abd Rahman, Muhammad Mahadi Abdul Jamil, Mohamad Nazib Adon, Chew Chang Choon, and Radzi Ambar

Abstract Electroporation has been known since the mid-1980s, but the first clinical trial was conducted in the 1990s, using 100 µs pulses with an amplitude of 1–1.5 kV/cm to bring bleomycin to the target region of the carcinoma cells. Nowadays the pulsed electric field is growing rapidly, with applications in medicine, food, industry and the environment. The pulsed electric field can be applied in a variety of ways, with pulse widths from nanoseconds to milliseconds depending on the intended application. Electroporation means increasing the permeability of the plasma membrane. There are two types of electroporation: reversible (temporary) and irreversible (permanent). These applications are widely used in cancer therapy, and reversible electroporation is the most frequently used. In this research, a small range of pulsed electric field amplitudes (100–1000 V/cm, 30 µs, single pulse) will be applied to breast cancer cells to explore the electroporation method. In addition, this research concentrates on the efficacy of the reversible and irreversible electroporation parameters in examining the anti-proliferation impact on the cancer cells. The presence of an electric field as the stimulator for aggressive adsorption of an anti-cancer agent into the cells introduces new variables into cancer cell therapy. This technique will help in understanding the factors in cancer cell therapy that may lead to a new method of drug-free therapy.







Keywords Electroporation · Pulse electric field · Cancer treatment · Irreversible · Reversible



N. A. Abd Rahman · M. M. Abdul Jamil (&) · M. N. Adon · C. Chang Choon · R. Ambar
Biomedical Modelling and Simulation Research Group, Faculty of Electrical and Electronics Engineering, Universiti Tun Hussein Onn Malaysia, Batu Pahat, Johor, Malaysia
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021 Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_54


N. A. Abd Rahman et al.

1 Introduction

1.1 Electroporation

Electroporation is a technique that has been commonly used in medical areas since 1991, when the first clinical trial was conducted to introduce an anti-toxic agent into malignant cells for cancer treatment. The underlying phenomenon was discovered in 1754 by J. A. Nollet, who experimented with electric fields and noticed red spots on human and animal skin in the areas where the sparks were applied; this was attributed to Joule heating, or to damage to the capillaries, indicating irreversible electroporation, and it was supported by A. J. Jex-Blake, who observed the same injuries in 1913. Stamfli and Willi reported in 1957 that the membrane breakdown is irreversible under certain circumstances and reversible otherwise. In 1961 and 1977, the technique continued to grow in the food processing industry. K. Kinosita and T. Tsong proposed that cell membrane permeabilization under an applied pulsed electric field is linked to the creation of pores that can vary in size. In 1987 and 1988, R. C. Lee started a significant sequence of irreversible-electroporation research on electrical-discharge-induced tissue trauma. From 2003 to 2007, R. V. Davalos and B. Rubinsky pointed out that IRE can readily be applied in regions with an elevated level of perfusion, such as the proximity of blood vessels, and carried out a clinical scenario on the liver using 18-gauge stainless steel needles placed with sonography assistance. In 2007, Al Sakere et al. performed IRE in vitro on mice with subcutaneous tumors and studied the immune reactions. In 2010, G. Onik and B. Rubinsky reported the first irreversible clinical trial on patients with prostate cancer in a series of outpatient procedures, and by 2015, 200 patients with locally advanced pancreatic adenocarcinoma (stage III) had been treated [1].
Transfer of material to cells and tissues through electrical induction opens the way to many new medical procedures and provides a valuable tool for research into the fundamental structural and biochemical behavior of the cellular and intercellular system [2]. This technique has been found to be an effective way to overcome the membrane barrier [3]. It is therefore interesting to explore morphological modifications of the cell membrane, such as the possibility of the cells moving, separating, fusing and deforming during electroporation [4]. Pulsed electric fields influence not only excitable tissues, such as muscles and nerves, but also non-excitable tissues, either thermally, by generating heat within the tissue, or by causing structural changes down to the cellular membranes [5]. However, electroporation is hard to observe directly because the pores are very small (nanometers) and their creation and development are very rapid (microseconds). Therefore, several quantities need to be measured in any experimental method. While important progress has been made, there are still fundamental elements of electroporation that have not been fully determined experimentally.

Electroporation Study: Pulse Electric Field Effect …


This research uses an in-vitro method to observe particular interactions at the cellular level by applying the PEF outside the living organism under controlled settings. Many EP systems have been created since Eberhard Neumann first used an EP scheme in molecular biology [6]. Treatments using the electroporation technique include cancer treatment, tissue ablation, food sterilization, and much more.

2 Literature Review

2.1 Breast Cancer Cell

Breast cancer is one of the most prevalent cancers among women in Malaysia, with the largest proportion of patients dying as a result, at 52 percent among Malaysians [7]. Breast cancer develops from breast tissue, with signs such as a change in breast shape, a lump in the breast, fluid from the nipple, skin dimpling or a red, scaly patch of skin. The risk factors include being female, obesity, lack of physical activity, alcohol consumption, hormone replacement therapy during menopause, ionizing radiation, early age at first menstruation, age and family history. It usually develops in cells from the milk duct lining and in the lobules that supply milk to the ducts. Surgery, radiation therapy, chemotherapy, hormonal therapy and targeted therapy are the treatments given. Breast cancer is a metastatic disease that can spread beyond the initial organ to bone, liver, lung and brain. As shown in Fig. 1, breast cancer cells have a trait of growing in clusters.

Fig. 1 Breast cancer cell, MCF-7


2.2 Electroporation Type

Electroporation is a method that relates cells to electric fields: when short, high-voltage pulses are applied, the cell becomes permeable to ions and macromolecules [8]. Because the voltage breakdown happens on the cell membrane, causing the lipid bilayer to fold, the phenomenon produces an open pathway in the cell membrane, called an open pore. Once the cell membrane has an open pore, many applications become possible, such as the introduction of proteins and of large and small molecules, as well as cell fusion, as shown in Fig. 2. A temporarily open pore is called reversible electroporation; but if the induced voltage is too large, it can result in cell destruction, which is called irreversible electroporation.

2.3 Reversible Electroporation

Reversible electroporation is a temporary opening of pores in the membrane, with cell survival after the pulsed electric field inducement. Reversible electroporation is primarily used for molecular delivery into the cell; RE is frequently used for the introduction into cells of substances such as dyes, drugs, proteins and nucleic acids [9]. By applying an electric pulse of adequate amplitude and duration, it is feasible to produce small pores in biological and artificial membranes. Low-amplitude, short-duration pulses are sufficient for reversible pore production on the cell membrane, and the pores close within milliseconds to minutes, which makes this the safer regime of electric field inducement. This type of electroporation was

Fig. 2 Electroporation applications


selected in order to observe the response of the breast cancer cells within the reversible range of the electroporation technique.

2.4 Reversible Application

Several reversible applications, commonly in the fields of drug delivery and gene therapy, are presently undergoing thorough investigation as electroporation-based cancer treatment techniques. Starting in the early 1980s, this has developed into a clinically tested therapy for skin and subcutaneous tumor nodules [10]. From the combination of electroporation and chemotherapy is derived the so-called electrochemotherapy, or ECT. Electrochemotherapy permeabilizes the cell membrane with electric pulses and is used as a local therapy. Its main mechanism is the electroporation of tumors, a process that increases drug effectiveness by allowing the drug to reach intracellular targets [11]. Electric pulses can be applied to tumors either through plate electrodes positioned on the skin above the tumors or through needle electrodes inserted into them [12]. The benefits of electrochemotherapy are an effective, safe, low-cost, once-only therapy that can be given to cancer patients with tumors of various histologies [13].

2.5 Irreversible Electroporation

Irreversible electroporation (IRE) creates permanent defects in cell membranes and leads to cell death. The irreversible regime is what lies beyond the upper limit of the reversible one: if IRE is induced in the cells, the pores in the cell membrane are opened permanently [14]. Besides tissue ablation, this technique is widely used in food technology and water treatment. IRE occurs when the magnitude of the induced transmembrane potential exceeds a critical value that disrupts the cell membrane to the extent that the cell dies as a result of homeostasis failure.

2.6 Irreversible Application

In medical applications over the previous seven years, IRE has emerged as a novel ablation instrument, using the impact of an applied electric field to kill cancer cells without damaging the surrounding extracellular matrix, vessels, nerves and adjacent ordinary tissue [15]. While IRE has been investigated for only a short time, its potential use for cancer and tissue ablation has received increasing attention,


resulting in a considerable number of validity and safety studies, including recent in vivo animal and human studies. In food technology, irreversible electroporation is referred to as pulsed electric field processing, or electroplasmolysis, in relation to cell membrane lysis to extract cell contents and to its bactericidal effect. Through the first and second halves of the twentieth century, the non-thermal bactericidal effect of electric fields remained a research area in the food industry, and it continues today. The use of electroporation as a method of microbial inactivation in foods is a good implementation of non-thermal food pasteurization. This implementation relies heavily on several elements: the strength of the electric field, its duration, the power supplied, the electrical properties of the treated food, and microbial features including the form, size and structure of the cell wall and the composition and conditions of growth. This implementation of microbial-inactivation electroporation is aimed primarily at pasteurizing food rather than sterilizing it [16]. Using electroporation for microbial inactivation is often referred to as pulsed electric field (PEF) treatment. One of its applications is wastewater treatment, which utilizes irreversible electroporation for the bacterial decontamination of hospital wastewater and also eradicates antibiotic-resistant strains, thereby limiting the spread of such bacteria into the surroundings.

3 Methodology Figure 3 shows the flow of work for this study. Breast cancer cells were selected and maintained by the subculture method. Next, the selected parameters, 100–1000 V/cm with a 30 µs pulse duration, are induced on the breast cancer cells to find the best parameter for the breast cancer cell line (MCF7); then the

Fig. 3 Flow of work


morphological changes and cell responses are monitored, with a view toward cancer cell treatment or wound-healing applications.

3.1 Electroporation Setup

The electroporation setup consists of a high-voltage pulse generator (ECM 830) and an inverted Nikon TS100 microscope linked to a Dino camera and Dino Capture 2.0 software, as shown in Figs. 4 and 5. The cell suspension was placed inside a cuvette with a 4 mm gap and connected, via the safety stand, to the ECM 830. The parameters set on the pulse generator are an electric field intensity of 100–1000 V/cm (in 100 V/cm intervals), a pulse duration of 30 µs and a single pulse. EP effectiveness depends on the pulse amplitude, duration, repetition frequency, number of pulses and pulse shape [9, 12].
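The generator settings can be related to the target field strengths through the 4 mm cuvette gap, since E = V/d; reaching the top of the 100–1000 V/cm sweep, for instance, requires a 400 V pulse. The sketch below also estimates the peak induced transmembrane voltage with the steady-state Schwan approximation ΔVm ≈ 1.5·E·r, a standard textbook formula that is not stated in the paper; the ~11 µm cell radius is an assumption consistent with the 20–24 µm MCF-7 diameters reported in Sect. 4:

```python
gap_cm = 0.4                 # cuvette electrode gap: 4 mm

def pulse_voltage(field_v_per_cm):
    """Generator voltage needed for a given field across the cuvette gap."""
    return field_v_per_cm * gap_cm

print(pulse_voltage(1000))   # 400.0 V for the strongest setting
print(pulse_voltage(100))    # 40.0 V for the weakest

# Peak induced transmembrane voltage at the cell pole (Schwan, steady state):
# dVm = 1.5 * E * r, with E in V/m and r the cell radius in m.
r = 11e-6                    # assumed ~22 um MCF-7 diameter -> 11 um radius
for field in (100, 1000):    # ends of the sweep, V/cm
    dv = 1.5 * (field * 100) * r
    print(f"{field} V/cm -> {dv * 1000:.0f} mV across the membrane")
```

Under these assumptions the sweep spans roughly 165–1650 mV across the membrane, which is consistent with the study reaching from sub-poration fields up toward the electroporation regime.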

Fig. 4 Experimental setup for electroporation exposure using a cuvette system

Fig. 5 Nikon TS100 inverted microscope with Dino camera and DinoCapture 2.0 software


N. A. Abd Rahman et al.

3.2 Cell Culture Setup

The breast cancer cell line is an immortal line of cells that is used mostly for scientific research. It is among the oldest and most commonly used human cell lines. The primary benefit of this cell line is that it can be divided an unlimited number of times as long as the fundamental cell survival requirements are met. Breast cancer cells were therefore used as the main cell type. These breast cancer cell samples were acquired from the animal laboratory cell cultures (Kulliyyah of Allied Health Sciences, IIUM).

4 Morphological Changes During Electroporation

The results in Figs. 6 and 7 show the difference in MCF-7 cell diameter with and without exposure to the pulsed electric field. The cells expanded after pulsed electric field induction due to cell swelling. Comparing the size of MCF-7 before induction, in the range of 20–24 µm, with the size at the beginning of expansion during PEF induction, in the range of 26–30 µm, it

Fig. 6 MCF7 cells (a) with pulsed electric field inducement and (b) without pulsed electric field inducement

Fig. 7 MCF7 cell before (0–300 s), during (600–1200 s) and after (1200–1800 s) PEF inducement


increases by 10–20% over the cell size before PEF induction, due to adsorption of nutrients from the cell surroundings. After EP induction, the size of MCF-7 decreased back to the range of 24–25 µm, and the cells continued to grow and proliferate. The result of this experiment demonstrates the effect of reversible electroporation (RE).

5 Summary

In conclusion, electroporation is a convenient, non-invasive technique that requires no chemical agents to be applied to the cells. Electroporation is the phenomenon in which voltage-induced breakdown of the cell membrane makes the cell permeable to adjacent molecules through opened pores. This occurs because the electric field causes the lipid molecules to change their orientation, creating hydrophilic pores. The induction of the pulsed electric field influences the morphological changes in the cell depending on the range used for the induction. As explained above, there are two types of electroporation, reversible (temporary) and irreversible (permanent), which can either heal or kill the cell.

Numerous applications use this method, most prominently in the medical and food industries, as elaborated in the literature review above. This study, however, focuses on medical applications, specifically cancer treatment, with the aim of improving the treatments already available in the medical industry. Cancer treatments already used in the medical field include chemotherapy and targeted therapy, but in some cases the drugs used in these treatments cause varied side effects in cancer patients. Finding an alternative with fewer or no side effects would be a great contribution to cancer treatment applications. Reversible electroporation was selected because the pores open only temporarily and the cells survive after induction. It therefore should not damage the ordinary cells surrounding the tumor, and it can be used to improve the adsorption of anti-cancer agents into the cancer cells without harming the other ordinary cells.
The goal is to test a low range of pulsed electric fields, from 100–1000 V/cm (with a 100 V/cm interval), on breast cancer cells, monitor the cell response and find the pulsed electric field induction parameter that best enhances the anti-proliferative effect on the cancer cells. Research on the electroporation method is of interest because the growing demand for alternative, less invasive treatments for localized tumors has driven the development and investigation of several electroporation methods across several applications. However, if the applied electric voltage is above a certain threshold, it leads to a larger potential gradient, the cells are unable to reseal the formed pores, and the result is cell death. Therefore, finding the best parameter for each cancer cell type would benefit researchers and medical practitioners in providing better treatment.


Thus, the optimal pulsed electric field parameter identified here may have significant implications for future biomedical applications such as cancer treatment or wound healing.

Acknowledgements The authors would like to thank the Research Management Center (RMC), UTHM and the Ministry of Higher Education for sponsoring the research under Tier 1 Research Grant (H161) and Geran Penyelidikan Pascasiswazah (GPPS), VOT U949.

References

1. Rolong A, Davalos RV, Rubinsky B (2018) History of electroporation. In: Meijerink M, Scheffer H, Narayanan G (eds) Irreversible electroporation in clinical practice. Springer, Cham
2. Jordan DW, Gilgenbach RM, Uhler MD, Gates LH, Lau YY (2004) Effect of pulsed, high-power radiofrequency radiation on electroporation of mammalian cells. IEEE Trans Plasma Sci 32(4):1573–1578
3. Dev SB, Rabussay DP, Widera G, Hofmann GA (2000) Medical applications of electroporation. IEEE Trans Plasma Sci 28(1):206–223
4. Adon MN (2015) Pulse electric field exposure effect on morphological. Universiti Tun Hussein Onn Malaysia
5. Abdul Jamil MM, Milad Zaltum MA, Abd Rahman NA (2018) Optimization of pulse duration parameter for HeLa cells growth rate. J Telecommun Electron Comput Eng 10(1–17):1–4
6. Milad Zaltum MA, Adon MN, Hamdan S, Dalimin MN, Abdul Jamil MM (2015) Investigation a critical selection of pulse duration effect on growth rate of HeLa cells. In: International conference on biosignal analysis, processing and systems ICBAPS 2015, pp 33–36
7. Weaver JC (2000) Electroporation of cells and tissues for drug and gene delivery. IEEE Trans Plasma Sci 28(1):1–10
8. Lim GCC (2002) Overview of cancer in Malaysia. Jpn J Clin Oncol 32(1):S37–S42
9. Batista Napotnik T, Miklavčič D (2018) In vitro electroporation detection methods – an overview. Bioelectrochemistry 120:166–182
10. Gehl J (2003) Electroporation: theory and methods, perspectives for drug delivery, gene therapy and research. Acta Physiol Scand 177(4):437–447
11. Rems L et al (2019) The importance of electric field distribution for effective in vivo electroporation of tissues. Bioelectrochemistry 125(2):127–133
12. Cemazar M, Sersa G, Frey W, Miklavcic D, Teissié J (2018) Recommendations and requirements for reporting on applications of electric pulse delivery for electroporation of biological samples. Bioelectrochemistry 122:69–76
13. Marty M et al (2006) Electrochemotherapy – an easy, highly effective and safe treatment of cutaneous and subcutaneous metastases: results of ESOPE (European Standard Operating Procedures of Electrochemotherapy) study. Eur J Cancer Suppl 4(11):3–13
14. Gothelf A, Mir LM, Gehl J (2003) Electrochemotherapy: results of cancer treatment using enhanced delivery of bleomycin by electroporation. Cancer Treat Rev 29(5):371–387
15. Deipolyi R, Golberg A, Yarmush ML, Arellano RS, Oklu R (2014) Irreversible electroporation: evolution of a laboratory technique in interventional oncology. Diagn Interv Radiol 20:147–154
16. Kumar Y, Patel KK, Kumar V (2015) Pulsed electric field processing in food technology. Int J Eng Stud Tech Approach 1(2):6–17

Influence of Electroporation on HT29 Cell Proliferation, Spreading and Adhesion Properties Hassan Buhari Mamman, Muhammad Mahadi Abdul Jamil, Nur Adilah Abd Rahman, Radzi Ambar, and Chew Chang Choon

Abstract The aim of this study is to investigate the influence of a pulsed electric field on the cell proliferation, spreading and adhesion properties of the HT29 cell line towards the enhancement of tissue regeneration and the wound healing process. The HT29 cells were treated with an electric field of 600 V/cm for 500 µs in vitro. Time-lapse live imaging of the adhesion properties of the HT29 cells was carried out using an integrated device equipped with a digital camera and an inverted microscope. The study found that when HT29 cells were electroporated with 600 V/cm and a 500 µs pulse duration, they reached 96.1% confluence after 64 h of seeding, whereas the non-electroporated (NEP) cells reached only 76% confluence after 64 h. Interestingly, both the EP cells and the NEP cells attained their maximum lengths of 34.76 and 29.73 µm respectively after 24 h of seeding. Furthermore, the electric treatment was found to decrease the adhesiveness of the cells, which detached from the substrate after 5.6 min as compared to the control group, which took 8 min to completely detach. Hence, the study suggests that the application of an appropriate electric field treatment can cause cellular changes such as changes in proliferation and adhesion, which could contribute to facilitating the wound healing process via increased cellular proliferation and migration.









Keywords Adhesion · Electroporation · Proliferation · Cell size · Wound healing

1 Introduction

Adhesion of cells to each other and to their ECM is important in creating cell shape and organization in tissue engineering. Likewise, comprehending how cells adhere is significant in understanding the development of diseases like cancer and muscular dystrophies, which mainly involve failures of cell adhesion. Additionally, cell adhesion and

H. B. Mamman · M. M. Abdul Jamil (✉) · N. A. Abd Rahman · R. Ambar · C. Chang Choon
Biomedical Modelling and Simulation Research Group, Faculty of Electrical and Electronics Engineering, Universiti Tun Hussein Onn Malaysia, Batu Pahat, Johor, Malaysia
e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2021
Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_55


H. B. Mamman et al.

migration are essential processes in numerous physiological processes such as wound healing, malignancy and embryogenesis [1, 2]. In anchorage-dependent cells, cell adhesion plays a vital role in cell survival and growth because it supports tissue organization [1]. For example, the inhibition of early cell attachment events such as cell spreading triggers rapid apoptosis [3] or a lack of cell colonization and differentiation [4]. In recent times, cell lines in tissue culture have been found to reveal a great dependency on adhesion to the surface [5]. Additionally, cell-to-cell and cell-to-extracellular-matrix (ECM) interactions have great implications for numerous biological processes like migration, adhesion and differentiation [6]. Cell adhesion and migration have numerous common features and play vital roles in the wound healing process [7], embryogenesis [8] and inflammatory reactions [9]. Impairment of cell attachment has severe effects and leads to uncontrolled states such as developmental defects and metastasis and invasion in cancer [10]. Many physical signals, like external force [11], topography [12, 13] and the elastic properties of the ECM [14], have been considered significant factors that can control numerous biological processes linked to cell migration and adhesion. Electrostatic and London-van der Waals interactions (attraction and repulsion) have been related to the adhesion of like charges [15]: for bodies of like charge to adhere to each other, the van der Waals force of attraction must dominate over the electrostatic repulsion of the like charges [16]. In addition to several other effects, an external pulsed electric field has been proven to alter cellular functions, including cell surface redistribution and cytoskeletal reorganization [17]. Manipulation of cell adhesion, proliferation and migration abilities is a very significant precondition for inhibiting the cancer cell's ability to grow and invade [2].
However, the effect of pulsed electric field exposure on cell behavior, which could alter the electrostatic properties of cells (by changing the charges on the cell due to external field exposure) and thereby affect cell attachment, proliferation, adhesion and migration, has not been fully investigated. Therefore, this study investigated the effect of a pulsed electric field on the proliferation, spreading and adhesion properties of the HT29 cell line. All experiments were repeated three times.

2 Materials and Methods

2.1 Introduction

The experimental setup involved two major sections: cell culture, and cell exposure to the electric field under a controlled environment for live cell imaging. In the cell culture section, the cells were grown and harvested using standard cell culture techniques. Cell exposure to the electric field was done with the help of integrated equipment for live cell imaging, comprising an ECM 830 electroporator, a simulation magnetic chamber, a Nikon (Ti-series) inverted microscope, live cell imaging equipment and Metamorph 7.5.0 imaging software.

Influence of Electroporation on HT29 Cell Proliferation, Spreading …

2.2 Cell Culture

The human colon cell line HT29 was used for the experiments. The HT29 cells were grown in a 25 cm2 culture flask. Details of the cell culture procedure were explained in a previous study [18].

2.3 Electroporation

In this study, the commercial electroporator ECM 830 was used to electrically treat the HT29 cell line. The low voltage (LV) mode of the ECM 830 electroporator, at a voltage of 240 V with a 4 mm gap cuvette, was used to achieve a 600 V/cm electric field for the cell proliferation assay, cell length analysis (cell spreading) and cell adhesion assay, while the high voltage (HV) mode, at a voltage of 600 V with a 10 mm electrode gap, was used to achieve the same 600 V/cm electric field strength for cell attachment analysis and the cell migration assay. First of all, cells were detached using the procedure explained elsewhere [18]. After neutralizing the effect of the detaching enzyme, 800 µl of cell suspension at a concentration of 4.3 × 10⁵ cells/ml was put in a 4 mm cuvette, which was then placed in the BTX ECM 830 electroporator chamber. Electroporation was executed with an electric field of 600 V/cm intensity (240 V using a 4 mm cuvette) for a 500 µs duration. Immediately after electroporation, the cuvette was transferred to a biosafety hood. For cell proliferation and cell length measurement, 600 µl of the electroporated cells were seeded in 25 cm2 flasks containing 7 ml of pre-warmed complete growth medium and incubated at 37 °C and 5% CO2. At the same time, 600 µl of cell suspension from the same initial flask, but without electric treatment, was seeded into another 25 cm2 flask containing 7 ml of pre-warmed complete growth medium and incubated in the same incubator as a control. The flasks were then observed after 6, 24, 48, 64 and 72 h. Images from four different fields of view were acquired at each time point for cell proliferation and cell length analysis. Each experiment was repeated three times. For the cell trypsinization adhesion assay, 300 µl of the electroporated cells were seeded in a well of a 6-well plate containing 2.5 ml of pre-warmed complete growth medium and incubated at 37 °C and 5% CO2. The well was labelled EP.
At the same time, 300 µl of non-electroporated cells were seeded in another well of the 6-well plate containing 2.5 ml of pre-warmed complete growth medium and incubated as a control. The second well was labelled NEP. The cells were harvested after 48 h at room temperature as described in Sect. 2.
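Both electroporator modes described above target the same field strength through the relation E = V/d, with the electrode gap d expressed in cm. A minimal sketch of this check (Python, with illustrative names; not part of the original protocol):

```python
# Electric field intensity E = V / d for the two electroporator modes.
# The gap is converted from mm to cm so that E comes out in V/cm.

def field_strength_v_per_cm(voltage_v: float, gap_mm: float) -> float:
    """Field intensity (V/cm) for a given voltage (V) and electrode gap (mm)."""
    return voltage_v / (gap_mm / 10.0)

lv_mode = field_strength_v_per_cm(240, 4)   # LV mode: 240 V across a 4 mm cuvette
hv_mode = field_strength_v_per_cm(600, 10)  # HV mode: 600 V across a 10 mm gap
# both give 600.0 V/cm, matching the field strength used throughout the assays
```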

2.4 Cell Proliferation Assay

Images were acquired after 6, 24, 48, 64 and 72 h of cell seeding for the electroporated and non-electroporated cells. This was done to check the cell proliferation rate, or confluence percentage. Image acquisition was achieved with the Nikon Eclipse TS100 inverted microscope (phase contrast, 10×/0.25) equipped with a Dino camera and DinoCapture 2.0 software. Images from four different fields of view were captured at each time point. The average confluence percentages were calculated for each time point. The measurement was carried out using the graduation squares in the DinoCapture 2.0 software. Each experiment was repeated three times. The average confluence percentage of the HT29 cell line and its standard error of the mean (SEM) over time, for both electroporated (EP) and non-electroporated (NEP) cells, were measured and exported to Microsoft Excel for analysis.
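The averaging step described above (mean confluence with its standard error over repeated fields of view) can be sketched with the Python standard library; the sample readings below are illustrative, not the study's data:

```python
import statistics
from math import sqrt

def mean_sem(values):
    """Mean and standard error of the mean (SEM = SD / sqrt(n))
    for per-field confluence percentages at one time point."""
    m = statistics.mean(values)
    sem = statistics.stdev(values) / sqrt(len(values))
    return m, sem

# Illustrative confluence readings (%) from four fields of view:
fields = [36.0, 38.5, 37.0, 38.0]
mean_confluence, sem_confluence = mean_sem(fields)
```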

2.5 Cell Spreading (Cell Length Measurement)

Cell length measurement of the electroporated (EP) and non-electroporated (NEP, control) cells began 6 h after cell plating; this time is sufficient for the HT29 cells to attach onto the substrate. Images were captured with the Nikon Eclipse TS100 inverted microscope (phase contrast, 10×/0.25) equipped with a Dino camera at 6, 24, 48, 64 and 72 h. The cell length measurement was carried out using DinoCapture 2.0 software on the images acquired at 6, 24, 48 and 72 h. At each time point, forty cells were randomly selected and their mean length was computed. The experiment was repeated three times. The mean and standard error of the mean for the measurements were evaluated using Microsoft Excel.

2.6 Cell Adhesion or Trypsinization Assay

The aim of this test was to investigate the effect of the electric field (600 V/cm for 500 µs) on the adhesion properties of the HT29 cell line. After 48 h in culture, the 6-well plate was transferred from the main incubator to the biosafety cabinet. A time-lapse, multi-stage acquisition system in the DinoCapture 2.0 software was prepared in order to capture cell images from the wells during trypsinization. Cells in each well were washed twice with 1 ml of PBS. Thereafter, 0.5 ml of TrypLE Express solution (detaching enzyme) was added to the cells in each well. The cells were immediately placed on the stage of the Nikon TS100 inverted microscope. Images were acquired every 10 s for a duration of 10 min (60 frames in total) using a 10×/0.25 phase-contrast objective and the DinoCapture 2.0 software (with time-lapse multi-dimensional acquisition).


The process was carried out for both the electrically treated cells and the cells in the control group (cells seeded without the electric treatment). During the trypsinization process, cells detaching from a substrate usually become rounded [19]. The amount of cell detachment was then computed by counting the number of spherical or rounded cells at each time point, dividing by the total number of cells in that field of view (both rounded and unrounded) and multiplying by 100%. This gives the percentage of cell detachment at that time. Each experiment was repeated three times and the mean percentage of cell detachment was calculated at each time point.
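The detachment metric defined in the paragraph above is a simple ratio of rounded to total cells per field of view; a minimal sketch (illustrative counts, not study data):

```python
def detachment_percentage(rounded_cells: int, total_cells: int) -> float:
    """Percentage of detached (rounded) cells in one field of view:
    rounded / total * 100, as described above."""
    if total_cells == 0:
        raise ValueError("field of view contains no cells")
    return rounded_cells / total_cells * 100.0

# e.g. 45 rounded cells out of 60 cells in the field of view:
pct = detachment_percentage(45, 60)  # 75.0
```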

3 Results and Discussion

3.1 Cell Proliferation Assay

All data obtained from the experiment were found to be normally distributed, with P > 0.05 using the Kolmogorov–Smirnov and Shapiro–Wilk tests for normality. Therefore, one-way analysis of variance and the Tukey HSD post hoc test were used to test for statistical significance among the data obtained. A P-value of less than 0.05 shows a significant difference in treatment, whereas a P-value greater than 0.05 indicates no significant difference. Table 1 gives the P-values obtained from the statistical analysis (post hoc Tukey HSD test in one-way ANOVA) via SPSS. The PEF-treated cells and the control group were found to be at 37.5% ± 5.58 and 31.0% ± 4.14 confluence respectively after 6 h of seeding, with no significant difference in confluence percentage between the EP cells and NEP cells (P = 0.0910 > 0.05). This could be because the cells had not started proliferating at this time, and since both flasks were seeded with equal cell concentrations, there could be no difference in the confluence percentage between them. The PEF-treated cells and the cells in the control group were found to reach 51.3% ± 5.48 and 40.0% ± 3.28 confluence respectively after 24 h of seeding. There was a significant difference between the confluence percentages of the PEF-treated cells and the cells in the control group after 24 h (P = 0.007 < 0.05).

Table 1 P-values from statistical analysis of proliferation percentage over time

Time of treatment (hour)   Parameter   P-value
6                          EP - NEP    0.0910
24                         EP - NEP    0.0070
48                         EP - NEP    0.0001
64                         EP - NEP    0.0001
72                         EP - NEP    0.0700


Furthermore, the PEF-treated cells and the cells in the control group reached 75.9% ± 2.13 and 60.0% ± 8.58 confluence respectively after 48 h of seeding. There was also a significant difference (P = 0.0001 < 0.05) between the percentage confluences of the PEF-treated cells and the cells in the control group after 48 h in culture. Moreover, the PEF-treated cells reached 96.1% ± 1.76 confluence after 64 h of seeding, whereas the cells in the control group reached only 75.6% ± 5.75 confluence after 64 h of seeding. The results further revealed a significant difference (P = 0.0001 < 0.05) in the percentage confluence of the cells after 64 h in culture. The cells in the control group were found to reach 90.6% ± 1.43 after 72 h in culture, with no significant difference in percentage confluence between the PEF and control groups at this point. This could be because the cells in the PEF group had already reached 96.1% ± 1.76 confluence after 64 h and stopped proliferating due to contact inhibition and the unavailability of space for growth. The results tabulated in Table 2 show that growth is more rapid with EP than without (NEP). This could be because EP assisted the cells in taking more nutrients from the growth medium and facilitated their growth as a result of pore formation. It could also be stated that the electric field modulated the S-phase and M-phase of the cell cycle, the consequence of which is the increased proliferation rate of the PEF-treated cells as compared to the cells in the control group. Furthermore, this shows that the field parameters used did not irreversibly damage the cells, because if the pore opening had stayed open for a long time the cells might have taken up more than enough nutrients, which could cause the membrane to burst and result in cell death. As shown in Fig. 1, there is a steady increase in the growth rate of the HT29 cells treated with EP.
Even though the NEP cells also showed a continuous increase in percentage confluence, it was always less than that of the PEF-treated cells.

Table 2 Average confluence percentage of HT29 cell lines over time ± standard error of the mean

Treatment   Time (hours)   Percentage confluence ± SD (%)   Standard error
EP          6              37.5 ± 4.92                      2.84
EP          24             51.3 ± 5.48                      3.17
EP          48             75.9 ± 2.13                      1.23
EP          64             96.1 ± 1.76                      1.01
EP          72             98.6 ± 0.35                      0.20
NEP         6              31.0 ± 4.14                      2.40
NEP         24             40.0 ± 3.28                      1.89
NEP         48             60.0 ± 8.58                      4.96
NEP         64             75.6 ± 5.75                      3.32
NEP         72             90.6 ± 1.43                      0.83


Fig. 1 Graph showing confluence percentage over time for both electroporated and non-electroporated HT29 cell lines

3.2 Cell Spreading (Cell Length Measurement)

From the results tabulated in Table 3, the electrically treated HT29 cells showed a continuous increase in cell length until they reached a maximum length of 34.76 µm after 24 h of seeding. The cells then exhibited a decrease in length as proliferation increased (cells begin to divide), until they reached a length of 16.46 µm when fully confluent after 72 h. Interestingly, the cells in the NEP group also reached their maximum length, 29.73 µm, after 24 h of seeding, and likewise showed a decrease in length after 24 h. The NEP cell length decreased to 15.6 µm when the cells reached the confluence stage after 72 h. Since cells must attain a certain size before they can divide [20], and the cell lengths in the electroporated group are larger than those in the control group, this could be the reason for the higher proliferation rate in the cells treated with PEF. It could be that electroporation increased extracellular matrix (ECM) production in the cells and also stimulated their spreading capability, resulting in the increase in cell length. This increase in length further accelerated cell division in the M phase of the cell cycle.

Table 3 Average length of HT29 cell line over time

Time (hour)   Average cell length ± SEM (µm)
              EP              NEP
6             21.4 ± 1.08     16.84 ± 0.4
24            34.76 ± 0.69    29.73 ± 1.35
48            26.37 ± 1.8     24.73 ± 0.63
64            23.25 ± 0.75    17.69 ± 1.06
72            16.46 ± 1.19    15.6 ± 0.55


Fig. 2 Graph showing average cell length over time for both electroporated and non-electroporated HT29 cell lines

Even though cell length is greater in the treated cells than in the control, it is interesting that both the treated and control cells reached their maximum length 24 h after plating, as shown in Fig. 2. This is plausible because cells divide every 24 h [21] after reaching a certain minimum size [22]. Similarly, both groups almost returned to their initial length after 72 h of seeding as the proliferation rate increased, which was more pronounced in the EP cells than in the NEP cells. Therefore, it could be stated that the pulsed electric field facilitated cell division and increased cell growth. Figure 3 shows images of the growth of HT29 cells treated with and without EP after 0, 6, 24, 48, 64 and 72 h, illustrating the proliferation described in Table 3.

3.3 Cell Adhesion or Trypsinization Assay

Figure 4 shows the trypsinization progression for the electrically treated HT29 cells and the control group over a period of eight minutes. The electrically treated cells began to detach from the surface of the substrate about 3 min after the application of the TrypLE Express solution, whereas the control group started dissociating from the substrate surface about 4.6 min after trypsinization began. Cell detachment in the electrically treated group was 85.6% after 4 min, compared with 68.7% in the control group at the same time. The electrically treated cells and the cells in the control group completely detached from the substrate 5.6 and 8 min respectively after the addition of the TrypLE Express solution. The results show that the electrically treated cells detached relatively faster during the trypsinization process. This decrease in adhesion properties could be beneficial to cellular behaviour during the wound healing process [23]. The results suggest that the use of 600 V/cm and 500 µs decreased the degree of cell adhesion, which could influence cell migration in the wound healing process [24].

Fig. 3 Images of electroporated (EP, 600 V/cm at 500 µs) and non-electroporated (NEP, control) HT29 cells at 0, 6, 24, 48, 64 and 72 h respectively (scale bar = 100 µm)

In this research, the electrically treated HT29 cells were found to reach 96.1% confluence after 64 h of seeding, whereas the non-electroporated (NEP) cells reached only 76% confluence after 64 h. Interestingly, both the EP cells and the NEP


Fig. 4 Trypsinization process of the HT29 cell line under pulsed electric treatment (EP, 600 V/cm at 500 µs) and the control group (NEP). Scale bar = 50 µm

cells attained their maximum lengths of 34.76 and 29.73 µm respectively after 24 h of seeding. Furthermore, the electric treatment was found to decrease the adhesiveness of the cells, which detached from the substrate after 5.6 min as compared to the control group, which took 8 min to completely detach from the substrate. Therefore, the findings suggest that the application of an appropriate electric field treatment can cause cellular changes such as changes in attachment, proliferation and adhesion, which could contribute to facilitating the wound healing process via increased cellular proliferation and migration. The study revealed that electroporation has a significant effect on HT29 cell attachment, proliferation and adhesion. Exposing the HT29 cell line to a 600 V/cm electric field strength and a 500 µs pulse duration made the

Influence of Electroporation on HT29 Cell Proliferation, Spreading …

771

cells attach faster to form the monolayer for growth and development when compared to those in the control group. This could be because the electric field up-regulates the signalling pathways of cell adhesion molecules such as integrin and cadherin, thereby facilitating cell attachment [25]. The study also revealed that electroporation influences the cell length and proliferation rate of the HT29 cell line. The reason could be that electroporation facilitated the synthesis of extracellular matrix protein and assisted the cells in taking up more nutrients for growth and proliferation due to pore formation. Therefore, the study could be useful in understanding cell adhesion and migration in wound healing applications, since cell adhesion and proliferation form the basis of cell migration and other physiological processes. The study further investigated the effect of PEF on the HT29 cell proliferation rate. It was found that PEF has a great influence on this rate: the HT29 cells under PEF treatment were found to reach 75% confluence 16 h faster than the untreated cells. Similarly, cells spread wider under PEF treatment when compared to the untreated cells. The increase in the spreading characteristics of the cells could also be the reason for their increased proliferation, since cell proliferation is greatly dependent on cell size [22]; that is, cells must grow to a certain size before they can divide. The outcome of this investigation is in agreement with that of [1] and [2]. The study therefore suggests an opportunity for facilitating the wound healing process without the need for adding external drugs or growth factors.
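The relative changes quoted in the summary follow from the figures reported above (96.1% vs 75.6% confluence at 64 h; full detachment at 5.6 vs 8 min). A quick arithmetic check in Python (the reported 29.2% adhesion reduction presumably uses unrounded detachment times, since the rounded times give roughly 30%):

```python
# Relative change of a treated value with respect to its control,
# applied to the confluence and detachment figures reported above.

def relative_change_pct(treated: float, control: float) -> float:
    """Percentage change of the treated value relative to the control."""
    return (treated - control) / control * 100.0

proliferation_gain = relative_change_pct(96.1, 75.6)  # ~27.1 % increase
adhesion_reduction = -relative_change_pct(5.6, 8.0)   # ~30 % faster detachment
```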

4 Summary

In this study, the influence of EP on the proliferation, spreading and adhesion properties of the HT29 cell line was investigated using the optimum parameter previously identified for the HT29 cell line [24]. EP was found to increase the proliferation rate of the HT29 cells by 27.1% as compared to the NEP cells. On the other hand, EP reduced the adhesion strength of the cells by 29.2% as compared to the cells in the NEP group. This increase in cell proliferation and the corresponding decrease in cell adhesion have great implications for wound applications, where cells are required to proliferate and migrate faster to heal a wound. Thus, the outcomes of this study could have great potential use in a drug-free wound healing process.

Acknowledgements The authors would like to thank the Research Management Center (RMC), UTHM and the Ministry of Higher Education for sponsoring the research under Tier 1 Research Grant (H161).


References

1. Pehlivanova VN, Tsoneva IH, Tzoneva RD (2012) Multiple effects of electroporation on the adhesive behaviour of breast cancer cells and fibroblasts. Cancer Cell Int 12(9):1–14
2. Pehlivanova VN, Tsoneva IH, Tzoneva RD (2010) Influence of electroporation on cell adhesion, growth and viability of cancer cells and fibroblasts. Biol Cell 64(4):581–590
3. Gekas J, Hindié M, Faucheux N, Lanvin O, Mazière C, Fuentès V, Nagel MD (2004) The inhibition of cell spreading on a cellulose substrate (cuprophan) induces an apoptotic process via a mitochondria-dependent pathway. FEBS Lett 563(1–3):103–107
4. Ivanov II, de Llanos Frutos R, Manel N, Yoshinaga K, Rifkin DB, Sartor RB, Littman DR (2008) Specific microbiota directs the differentiation of IL-17-producing T-helper cells in the mucosa of the small intestine. Cell Host Microbe 4(4):337–349
5. Lloyd AC (2013) The regulation of cell size. Cell 154(6):1194–1205
6. Geiger B, Bershadsky A, Pankov R, Yamada KM (2001) Transmembrane crosstalk between the extracellular matrix and the cytoskeleton. Nat Rev Mol Cell Biol 2(11):793–805
7. Fletcher SJ (2013) Investigating the role of vesicle trafficking in epithelial cell migration. PhD thesis, University of Birmingham
8. Jacinto A, Woolner S, Martin P (2002) Dynamic analysis of dorsal closure in Drosophila: from genetics to cell biology. Dev Cell 3(1):9–19
9. Muller WA (2003) Leukocyte–endothelial-cell interactions in leukocyte transmigration and the inflammatory response. Trends Immunol 24(6):326–333
10. Thiery JP (2002) Epithelial–mesenchymal transitions in tumour progression. Nat Rev Cancer 2(6):442–454
11. Desprat N, Supatto W, Pouille PA, Beaurepaire E, Farge E (2008) Tissue deformation modulates twist expression to determine anterior midgut differentiation in Drosophila embryos. Dev Cell 15(3):470–477
12. Dalby MJ (2009) Nanostructured surfaces: cell engineering and cell biology. Nanomedicine 4(3):247–248
13. Le Digabel J, Ghibaudo M, Trichet L, Richert A, Ladoux B (2010) Microfabricated substrates as a tool to study cell mechanotransduction. Med Biol Eng Comput 48(10):965–976
14. Ghassemi S, Meacci G, Liu S, Gondarenko AA, Mathur A, Roca-Cusachs P, Hone J (2012) Cells test substrate rigidity by local contractions on submicrometer pillars. Proc Natl Acad Sci 109(14):5328–5333
15. Poortinga AT, Bos R, Norde W, Busscher HJ (2002) Electric double layer interactions in bacterial adhesion to surfaces. Surf Sci Rep 47(1):1–32
16. Leckband D (2000) Measuring the forces that control protein interactions. Annu Rev Biophys Biomol Struct 29(1):1–26
17. Kanthou C, Kranjc S, Sersa G, Tozer G, Zupanic A, Cemazar M (2006) The endothelial cytoskeleton as a target of electroporation-based therapies. Mol Cancer Ther 5(12):3145–3152
18. Mamman HB, Sadiq AA, Adon MN, Jamil MMA (2015) Study of electroporation effect on HT29 cell migration properties. In: 2015 IEEE international conference on control system, computing and engineering (ICCSCE). IEEE, pp 342–346
19. Rubinsky B (2007) Irreversible electroporation in medicine. Technol Cancer Res Treat 6(4):255–259
20. Marguerat S, Bähler J (2012) Coordinating genome expression with cell size. Trends Genet 28(11):560–565
21. Cooper GM, Hausman RE (2000) The cell. Sinauer Associates, Sunderland, pp 725–730
22. Turner JJ, Ewald JC, Skotheim JM (2012) Cell size control in yeast. Curr Biol 22(9):R350–R359

Influence of Electroporation on HT29 Cell Proliferation, Spreading …


23. Sevilla C (2013) The role of extracellular matrix fibronectin and collagen in cell proliferation and cellular self-assembly. PhD thesis, University of Rochester
24. Mamman HB, Jamil MMA, Adon MN (2016) Optimization of electric field parameters for HT29 cell line towards wound healing application. Indian J Sci Technol 9(46):1–7
25. Zhao M (2009) Electrical fields in wound healing—an overriding signal that directs cell migration. Semin Cell Dev Biol 20(6):674–682

Wound Healing and Electrofusion Application via Pulse Electric Field Exposure Muhammad Mahadi Abdul Jamil, Mohamad Nazib Adon, Hassan Buhari Mamman, Nur Adilah Abd Rahman, Radzi Ambar, and Chew Chang Choon

Abstract This study investigates the effect of pulsed electric fields (PEF) on biological cells. The biological cells selected in this study are HeLa (cervical cancer) cells. The experimental setup involves several important parts: a source of square-wave PEF (ECM®830) that can generate field strengths of up to 3 kV; a modified EC magnetic chamber with an incubator system, used to expose HeLa cells to PEF; and a Nikon inverted microscope (Ti-series) for subsequent visualization (image and video). In the early stage, the experimental setup was tested by monitoring the proliferation rate of HeLa cells within 0 to 48 h. The HeLa cells were then tested to observe the swelling effect of PEF exposure. After that, we continued to identify the optimum PEF parameters for a reversible condition on HeLa cells. As a result, HeLa cells gave a good response at 2.7 kV field strength and 30 µs pulse length with a single pulse. Further study showed that two or more adjacent HeLa cells merged together due to increased cell membrane permeability (electrofusion). This discovery triggered the idea of examining the PEF effect on the wound healing process. An artificial wound site was investigated with and without PEF exposure. The findings show that a wound area exposed to PEF took 3 h to heal completely while the untreated area took 10 h. This provides a novel technique (an electrical-based treatment) which could be an alternative to drug usage in the wound healing process. Overall, the findings achieved in this study could lead towards a drug-free wound healing method.

Keywords HeLa cells · Electric field · Electro-fusion · Wound healing

M. M. A. Jamil (&)  M. N. Adon  H. B. Mamman  N. A. A. Rahman  R. Ambar  C. C. Choon Biomedical Modelling and Simulation Research Group, Faculty of Electrical and Electronics Engineering, Universiti Tun Hussein Onn Malaysia, Batu Pahat, Johor, Malaysia e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_56


1 Introduction

Despite the widespread use of electroporation in biotechnology and biomedical engineering, there is no uniform and comprehensive theory that explains the mechanism triggering the electroporation phenomenon. It is therefore of paramount importance to look into the experimental facts that such a theory needs to encompass. This paper is intended to offer an explanation of experiments in the area of electroporation for wound healing application. The effects of pulsed electric fields (PEF) on biological cells have been intensively investigated over the last decade. Ušaj et al. [1] studied the optimization of electric field amplitude and hypotonic treatment for mouse melanoma (B16-F1) and Chinese hamster ovary (CHO) cells. It has been shown that appropriate hypotonic treatment of cells before the application of electric pulses can cause a significant increase in electrofusion efficiency. This phenomenon is potentially the basis for many in vivo applications such as electrochemotherapy and gene therapy. However, it still lacks a comprehensive theoretical and experimental basis. This study uses an in vitro technique to evaluate specific cellular-level interactions with PEF under controlled environments outside of a living organism. The fascination of controlling cell functions using PEF has led to the discovery of electrofusion and has been a topic of great interest in physiological and morphological studies [2–4]. Based on previous simulations combined with the current experimental work, the main advantage of this study is that some of the exposure conditions can be easily and precisely controlled (e.g., changing exposure duration, background temperature, or exposure field intensity) as a means of determining dose–response relationships and the effect of applying different threshold levels [5–8].
To delve deeper into the quantitative interaction mechanisms between electric fields and biological cells, an experimental setup that confines cell observation during exposure to an electric field has been reported [9]. Here, we report the multicellular effect of the transient increase in the permeability of cell membranes for cultured HeLa cells. We also monitor this effect in real time using a confocal laser Nikon inverted research microscope (Ti-series). A voltage of 2.7 kV/cm amplitude was applied in pulsing sequences of 30 µs long pulses. These are simply single pulses produced by a commercially available pulsed power generator [10]. Because cell membrane permeability depends on the applied pulse number, the experiment was performed with only one pulse at this amplitude. Furthermore, wound healing is a complex and dynamic process, with the wound environment changing with the changing health status of the individual. Knowledge of the physiology of the normal wound healing trajectory through the phases of homeostasis, inflammation, granulation and maturation provides a framework for understanding the basic principles of wound healing. Through this understanding, the health care professional can develop the skills required to care for a wound, and the body can be assisted in the complex task of tissue repair.


A chronic wound should prompt the health care professional to begin a search for unresolved underlying causes. Healing a chronic wound requires care that is patient-centered, holistic, interdisciplinary, cost-effective and evidence-based. This is one of five articles made available by the Canadian Association of Wound Care to assist the wound care clinician in developing an increased understanding of wound healing.

2 Materials and Methods

2.1 Preparation of HeLa Cells

In this study, HeLa cells were cultured on circular coverslip glasses (25 mm diameter) in a 6-well culture plate. The cells were harvested and an appropriate amount was seeded into the six wells and incubated for 24 h until the HeLa cell growth rate increased by as much as 40%. Lastly, the glass coverslip was transferred into the controlled EC magnetic chamber to perform real-time visualization and data analysis at the same time.

2.2 Inducement of PEF Towards HeLa Cells

The same optimum PEF parameters were used in this investigation, namely 2700 V (voltage amplitude), 30 µs (pulse duration) and a single pulse. These optimum parameters excite the plasma membrane to induce a transmembrane potential, which produces polarization effects because of the concentrations of ions, namely potassium and sodium. The ions can then cross the plasma membrane and cause cell depolarization, until the cell attempts to return to its resting state (normal condition).

2.3 Integrated Devices of Real Time Imaging System

In this study, real-time imaging equipped with a high-speed camera was a high priority in order to record electrofusion phenomena that happen within seconds [11, 12]. A high-resolution, high-sensitivity CCD camera (QImaging, Exi Aqua) with a short exposure time (10.9 frames per second at full resolution, 14 bits, 20 MHz) was used to perform high-resolution bright field imaging, such as phase contrast, for electrofusion phenomena of short duration.

In the Chan-Vese model, the level set function $\phi$ evolves according to

$$\frac{\partial \phi}{\partial t} = \delta(\phi)\left[\mu\,\nabla\cdot\frac{\nabla\phi}{|\nabla\phi|} - \nu - \lambda_1 (I - c_1)^2 + \lambda_2 (I - c_2)^2\right], \qquad \phi(0, x) = \phi_0(x) \quad (7)$$

$$c_1(\phi) = \frac{\int_\Omega I\,H(\phi)\,dx}{\int_\Omega H(\phi)\,dx}, \qquad c_2(\phi) = \frac{\int_\Omega I\,(1 - H(\phi))\,dx}{\int_\Omega (1 - H(\phi))\,dx}$$

where $\mu \ge 0$, $\nu \ge 0$, $\lambda_1, \lambda_2 > 0$ are fixed parameters, $H(z) = 1$ for $z \ge 0$ and $H(z) = 0$ for $z < 0$ is the Heaviside function, and $\delta(z) = \frac{d}{dz}H(z)$ is the Dirac function, regularized as $\delta_\epsilon(z) = \epsilon/(\pi(\epsilon^2 + z^2))$. This algorithm can spontaneously trace interior contours. When the interior contour is small and far away from the initial contour, this model tends to behave unsatisfactorily by settling in local minima [22].
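The Chan-Vese evolution can be approximated numerically. The sketch below assumes a simplified setting with $\mu = \nu = 0$ (no curvature or area terms) and a toy synthetic image; the function name `chan_vese_step` and all parameter values are illustrative, not from the paper:

```python
import numpy as np

def chan_vese_step(phi, img, lam1=1.0, lam2=1.0, eps=1.0, dt=0.5):
    """One explicit update of the simplified Chan-Vese evolution (mu = nu = 0):
    dphi/dt = delta_eps(phi) * (-lam1*(I - c1)^2 + lam2*(I - c2)^2)."""
    H = (phi >= 0).astype(float)                        # Heaviside of the level set
    c1 = (img * H).sum() / max(H.sum(), 1e-8)           # mean intensity inside the contour
    c2 = (img * (1 - H)).sum() / max((1 - H).sum(), 1e-8)  # mean intensity outside
    delta = eps / (np.pi * (eps ** 2 + phi ** 2))       # regularized Dirac delta
    return phi + dt * delta * (-lam1 * (img - c1) ** 2 + lam2 * (img - c2) ** 2)

# Toy image: bright square on a dark background
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
phi = -np.ones((32, 32)); phi[4:28, 4:28] = 1.0         # initial contour around the object
for _ in range(200):
    phi = chan_vese_step(phi, img)
segmented = phi >= 0                                    # converges to the bright square
```

With the curvature term dropped, dark pixels inside the initial contour drift to negative $\phi$ while bright pixels stay positive, so the zero level set locks onto the object boundary.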

3.10 QuadTree Edge Method

Quadtree segmentation takes a top-down approach, which makes the algorithm simple and flexible. A quadtree is a segmentation based on a tree structure in which each internal node has four branches. Each branch from an internal node points to one node that is a child of the given node in the quadtree; in other words, every node has either exactly four branches or none at all. Figure 9 illustrates the tree structure of a three-level quadtree. Each node in the quadtree is linked to a sub-block of the input image with a determined size and location. Splitting one node produces four equal-sized square blocks, i.e. four sub-blocks represented by the four children of the parent node. A moment-preserving technique is used to classify the block activity and decide whether a node should be split. The calculation is based on the mean value of the block belonging to a given node. The pixels of the image block are classified into two groups guided by the mean value: the first group holds the pixels with values greater than or equal to the mean, while the second group holds the pixels with values smaller than the mean.

Comparability of Edge Detection Techniques …

Fig. 9 Three-level structure of the quadtree, showing internal nodes and leaf nodes

An absolute error is calculated once the means of the two groups have been computed. This absolute error decides whether the node needs splitting or not. A block is considered a low-detail block only when the absolute error is below the threshold value; otherwise, the block keeps being split. The same splitting process continues until the smallest allowable block size is reached [23].
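The split-decision loop above can be sketched as follows, assuming a square grayscale image whose side is a power of two; the two-group absolute error follows the moment-preserving idea described in the text, and the threshold and function names are illustrative:

```python
import numpy as np

def quadtree_blocks(img, threshold=1.0, min_size=2):
    """Recursively split the image; a block becomes a leaf when approximating
    each pixel by its group mean (>= block mean vs < block mean) gives a
    per-pixel absolute error below the threshold."""
    leaves = []
    def split(y, x, size):
        block = img[y:y + size, x:x + size]
        mean = block.mean()
        hi, lo = block[block >= mean], block[block < mean]   # moment-preserving groups
        err = 0.0
        if hi.size:
            err += np.abs(hi - hi.mean()).sum()
        if lo.size:
            err += np.abs(lo - lo.mean()).sum()
        err /= block.size
        if err < threshold or size <= min_size:
            leaves.append((y, x, size))          # low-detail block: keep as a leaf
        else:
            h = size // 2                        # split into four equal sub-blocks
            for dy, dx in ((0, 0), (0, h), (h, 0), (h, h)):
                split(y + dy, x + dx, h)
    split(0, 0, img.shape[0])
    return leaves

img = np.arange(64, dtype=float).reshape(8, 8)   # smooth gradient: splits to the minimum size
leaves = quadtree_blocks(img)
```

On this gradient image every block above the minimum size exceeds the threshold, so the tree bottoms out at sixteen 2×2 leaves; a piecewise-flat image would stop splitting much earlier.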

4 The Proposed Procedure for License Plate Recognition

A simple LPR system was developed to test various edge detection techniques in order to find the best edge detector for an LPR system. The flowchart of the method is presented in Fig. 10. In general, the proposed procedure consists of three common steps: license plate detection, character segmentation and character recognition. This approach skips the license plate detection step by directly segmenting the characters and removing unwanted regions based on fixed character characteristics. The elements highlighted as character characteristics are the height, width and pixel area of the alphanumeric characters. Based on the flowchart in Fig. 10, the procedure starts with the input of the original image in colour scale, also known as an RGB image. RGB stands for Red, Green, and Blue; these colour elements indicate that the image is in colour scale. The images were captured by a 14-megapixel Fujifilm digital camera. Three different angles were captured, namely 45, 90 and 135° from the centre of the target cars, at a distance of 1.5–2.0 m. The method starts with image pre-processing using a modified white patch to enhance the edges in the image [24]. This is because unbalanced image lighting causes the license plate edges to be insufficiently clear for the detection

F. N. M. Ariff et al.

Fig. 10 The flowchart of the proposed procedure: Start → Original image → Apply image enhancement using modified white patch → Convert into gray scale image → Filter the image using a median filter → Apply edge detection (Canny, Sobel, etc.) for segmentation → Fill hole → Remove unwanted regions based on height, width and area → Recognize characters using template matching → End

process. The enhanced image is then converted into greyscale by taking the weighted average of R, G and B. A median filter is applied for noise removal together with smoothing of the image. Next, the characters are segmented by the edge detection techniques discussed previously. A fill-hole algorithm is used to fill the regions inside the illustrated edge lines so that they are joined together and form objects. This algorithm helps to identify the alphanumeric characters and to separate unwanted regions from the characters. After that, non-alphanumeric objects are removed based on the specific height (46–107), width (14–93) and area (442) of the alphanumeric pixels. These steps are important to clear the segmented image of unwanted pixels and automatically ease the recognition process later. The process uses the region growing technique to analyze which objects achieve the

Fig. 11 Result of license plate segmentation: (a) original image, (b) modified white patch, (c) gray scale image, (d) edge detection, (e) fill hole, (f) remove noise, (g) template matching, (h) result obtained

mentioned criteria. Unselected objects are eliminated from the segmented image. Lastly, the segmented objects undergo an Optical Character Recognition (OCR) process to classify the alphanumeric characters. This process uses the bounding box technique to highlight each object. The highlighted object is then evaluated using a matching algorithm, which finds the similarity between the obtained object and the training database. The matching algorithm used is template matching. This method is widely used in OCR because it is the simplest and fastest OCR approach. Figure 11 shows the segmentation results based on the flowchart explanation.
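The matching step can be sketched as a normalised cross-correlation between a segmented glyph and a set of stored character templates. The tiny 3×3 templates and the helper `match_character` below are purely illustrative, not the paper's actual template database:

```python
import numpy as np

def match_character(glyph, templates):
    """Score a segmented glyph against each stored template with normalised
    cross-correlation and return the best-matching label."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom else 0.0
    scores = {label: ncc(glyph, t) for label, t in templates.items()}
    return max(scores, key=scores.get)

# Hypothetical 3x3 binary templates for two characters
templates = {
    "I": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], float),
    "L": np.array([[1, 0, 0], [1, 0, 0], [1, 1, 1]], float),
}
noisy_I = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 1]], float)  # an 'I' with one noisy pixel
best = match_character(noisy_I, templates)
```

Normalised cross-correlation tolerates a small amount of segmentation noise, which is why the glyph with one flipped pixel still scores highest against its own template.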

5 Results and Discussions

The performance of the proposed procedure was evaluated using two camera positions: first, the license plate in a direct position with no angle involved, and second, the license plate skewed 45° from the first position. The images were taken during daylight under different lighting, at distances between 1.3–2.5 m from the parked vehicles, using a 14-megapixel Fujifilm digital camera. A total of 131 images were captured, involving Malaysian license plates only. Figure 11 shows an overview of the segmentation results on a license plate. In Fig. 11, image (a) shows the original image captured by the digital camera. Image (b) shows the result of the modified white patch, where the captured image has been enhanced to provide a clearer image for the edge detection process. Subsequently, image (c) shows the greyscale image converted from the RGB image after enhancement; the conversion is based on the weighted average of R, G and B. Edge-based segmentation is shown in image (d); this segmentation was later repeated using all the selected techniques. Next, the fill-hole algorithm was applied after segmentation. Image (e) shows the image after applying the fill-hole algorithm inside the objects, which helps the recognition step. Image (f) shows the result after noise removal based on region growing, which aims to smooth the recognition process by eliminating unwanted objects. Image (g) shows bounding boxes surrounding the objects that undergo the recognition process using template matching. Finally, image (h) shows the result obtained from the template matching process, output as a text file. Figure 12(a) shows the original image used for comparing the techniques, while Fig. 12(b) and (c) show the segmentation results of two of the techniques, and Fig. 13 shows the results of the other techniques. Figures 12 and 13 show the comparison between the edges. All the edges were used in the proposed method to determine which edge detection can

Fig. 12 The image used in comparing edge detection techniques: (a) original image, (b) Approxcanny and (c) Canny

extract the alphanumeric characters perfectly. The edges in the image are almost all correctly detected, but several techniques, such as Approxcanny (see Fig. 12(b)), detect every edge in the image in full detail. Kirsch, shown in Fig. 13(b), and Quadtree, shown in Fig. 13(h), expand the characters beyond their actual size, so the characters cannot be recognized well. For some characters, such as in Fig. 13(d), the result of the Prewitt edge is not filled correctly because broken lines prevent the characters from filling up properly. This can produce large numbers of unwanted regions and can interfere with extraction. Template matching was used to recognize the true characters from the edge-based segmentation. The evaluation was then made based on the confusion matrix technique. Table 1 shows the segmentation results from Canny edge detection; this classification was also made for the remaining edge detection techniques. Table 2 shows the LPR system performance evaluated by comparing the different proposed edge detections. From the table, the highest number of characters correctly recognized by template matching comes from the Canny edge technique, with 473 correct detections out of 892 characters; almost 419 characters could not be identified. The lowest true character recognition comes from Quadtree edge detection, where only 51 characters were recognized correctly and about 841 characters were missing. Unwanted characters exist due to remaining noise which cannot be removed because it has the same features as the characters.

Fig. 13 The edge detection of various techniques: (a) Chan-Vese, (b) Kirsch, (c) LoG, (d) Prewitt, (e) Robert, (f) Sobel, (g) zero crossing and (h) Quadtree


Table 1 Segmentation result from Canny edge detection

Input | Actual number | Result obtained | Correct segment | Remark
Car1  | PMX 6573 | Px6573         | P_X6573  | Incorrect
Car2  | JKH 6423 | 344JKH6423     | JKH6423  | Correct
Car3  | BHK 2563 | HK53           | _HK_5_3  | Incorrect
Car4  | BNR 9288 | –              | –        | Incorrect
Car5  | AHT 656  | AHT656vjriziif | AHT656   | Correct
Car6  | PHT 8578 | PHT 8578       | PHT8578  | Correct
Car7  | PLK 8585 | PL8585         | PL_8585  | Incorrect
Car8  | PLL 4128 | rzt1PLL428     | PLL4_128 | Incorrect
Car9  | WA 795 J | JwA795Jrsrr    | wA795J   | Correct
Car10 | QS 810 F | S1FP           | _S_1_F   | Incorrect

Table 2 Performance comparison with different edge detection methods

Edge method   | Successful character recognition | Unsuccessful character recognition
Approxcanny   | 77  | 815
Canny         | 473 | 419
Chan-Vese     | 329 | 563
Kirsch        | 323 | 569
LoG           | 436 | 456
Prewitt       | 255 | 637
Robert        | 310 | 582
Sobel         | 253 | 639
Quadtree      | 51  | 841
Zero crossing | 88  | 804

The second result is shown in Table 3. The evaluation was divided into three classes: TP (true positive), characters correctly recognized by the system; FP (false positive), characters recognized wrongly; and FN (false negative), true characters that are missing. From the results obtained, the five edge detectors best at producing true characters were selected for a second test. In this second test, the character features were implemented with measurements suited to each edge detector, and localization of the license plate was added to the system. Chan-Vese dominated the overall result with 60.15% accuracy, producing the highest number of correctly recognized characters. The Canny operator came second with 51.90% true positives. This shows that every edge has its own characteristics, and license plate localization is also needed in order to achieve a higher character recognition rate. Template matching recognizes most alphabet characters well. However, there are some issues with specific characters that have a

Table 3 Performance of the second test on selected edges

Edge      | TP (%) | FP (%) | FN (%) | Accuracy (%)
Canny     | 51.90  | 6.18   | 41.29  | 51.90
LoG       | 48.70  | 8.45   | 42.85  | 48.70
Robert    | 36.23  | 5.78   | 57.99  | 36.23
Chan-Vese | 59.39  | 28.64  | 11.64  | 60.15
Kirsch    | 36.69  | 4.34   | 58.96  | 36.69

unique shape, which can lead to recognition errors. Characters that give confusing recognition include 'I' and 'J' with '1', 'B' with 'D', 'O' with 'Q', '2' with 'Z' and '9' with 'S'. All these confusions automatically lead to recognition errors and reduce the accuracy of the edge detection performance. Overcoming this misrecognition in template matching will therefore be the focus of future work.
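The TP/FP/FN rates of the kind reported in Table 3 can be derived from raw confusion-matrix counts as sketched below; the counts used here are illustrative (only the 892-character total matches the test set described in the text), and the helper name is hypothetical:

```python
def recognition_metrics(tp, fp, fn):
    """Per-character rates from confusion-matrix counts:
    tp = correctly recognised, fp = wrongly recognised, fn = true characters missing."""
    total = tp + fp + fn
    return {
        "TP_rate": 100.0 * tp / total,   # percentage correctly recognised
        "FP_rate": 100.0 * fp / total,   # percentage recognised wrongly
        "FN_rate": 100.0 * fn / total,   # percentage of true characters missing
    }

# Illustrative counts summing to the 892 characters in the test set
m = recognition_metrics(tp=473, fp=56, fn=363)
```

Since every character falls into exactly one of the three classes, the three rates always sum to 100%.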

6 Conclusion

In this paper, a simple method has been proposed to compare various edge detection techniques. The edge detection techniques used in this experiment were Approxcanny, Canny, Chan-Vese, Kirsch, LoG, Prewitt, Robert, Sobel, Quadtree and Zero Crossing. The performance of the edge detectors was evaluated in a two-stage test. The first stage was segmentation based on all edge detection techniques with a standard measurement value for noise removal based on height, width and area. In the second stage, the five techniques with the best performance in the first analysis proceeded, with noise removed using a specific measurement value for each technique. The first experiment was dominated by Canny, which achieved 473 successful character recognitions out of 892. However, the overall recognition rate was low, which may be caused by the small database used and the poor quality of the images. In future, the size of the database needs to be increased, and other enhancement techniques should be tried or included to give better detection for every edge detector. The second test was dominated by Chan-Vese with 59.39% true character recognition. Template matching was the simplest recognition system but was unstable in recognizing the characters, because of noise that has the same pixel values as the characters and cannot be reduced; this may explain the low performance rate obtained. Thus, in future the classification method must be changed in order to obtain more accurate character recognition. In addition, the analysis can be extended in future work to images taken in real time and to indoor license plate images, such as in building car parks.


References

1. Desertot M, Lecomte S, Gransart C, Delot T (2013) Intelligent transportation systems. In: Computer science and ambient intelligence. ISTE, London
2. Nasir ASA, Gharib NKA, Jaafar H (2018) Automatic passenger counting system using image processing based on skin colour detection approach. In: 2018 International conference on computational approach in smart system design and applications (ICASSDA), pp 1–8
3. Musoromy Z, Ramalingam S, Bekooy N (2010) Edge detection comparison for license plate detection. In: 11th International conference on control automation robotics vision, ICARCV 2010, pp 1133–1138
4. Rashid MM, Musa A, Rahman MA, Farahana N, Farhana A (2012) Automatic parking management system and parking fee collection based on number plate recognition. Int J Mach Learn Comput 2(2):93–98
5. Saghaei H (2016) Proposal for automatic license and number plate recognition system for vehicle identification. In: International conference on new research achievements in electrical and computer engineering, pp 1–5
6. Yu S, Li B, Zhang Q, Liu C, Meng MQH (2015) A novel license plate location method based on wavelet transform and EMD analysis. Pattern Recognit 48(1):114–125
7. Kumari S, Gupta L, Gupta PP, Abdul APJ (2013) Automatic license plate recognition using OpenCV and neural network. Int J Comput Sci Trends Technol 5(3):1786–1792
8. Kakani BV, Gandhi D, Jani S (2017) Improved OCR based automatic vehicle number plate recognition using features trained neural network. In: 8th International conference on computing communication and networking technology, ICCCNT 2017, pp 1–6
9. Wen Y, Lu Y, Yan J, Zhou Z, Von Deneen KM, Shi P (2011) An algorithm for license plate recognition applied to intelligent transportation system. IEEE Trans Intell Transp Syst 12(3):830–845
10. Yogheedha K, Nasir ASA, Jaafar H, Mamduh SM (2018) Automatic vehicle license plate recognition system based on image processing and template matching approach. In: 2018 International conference on computational approach in smart systems design and applications, ICASSDA 2018, pp 1–8
11. Sahoo T, Pine S (2017) Design and simulation of various edge detection techniques using Matlab Simulink. In: International conference on signal processing, communication, power and embedded system, SCOPES 2016, pp 1224–1228
12. Bala Krishnan K, Prakash Ranga S, Guptha N (2017) A survey on different edge detection techniques for image segmentation. Indian J Sci Technol 10(4):1–8
13. Rokibul M, Hossain S, Roy S, Alam N, Jahirul M (2016) Line segmentation and orientation algorithm for automatic Bengali license plate localization and recognition. Int J Comput Appl 154(9):21–28
14. Gou C, Wang K, Yao Y, Li Z (2016) Vehicle license plate recognition based on extremal regions and restricted Boltzmann machines. IEEE Trans Intell Transp Syst 17(4):1096–1107
15. Ha PS, Shakeri M (2016) License plate automatic recognition based on edge detection. In: Artificial intelligence and robotics (IRANOPEN) 2016, pp 170–174
16. Choubey S, Sinha GR, Choubey A (2011) Bilateral partitioning based character recognition for vehicle license plate. Commun Comput Inf Sci 147:422–426
17. Sharma G (2018) Performance analysis of vehicle number plate recognition system using template matching techniques. J Inf Technol Softw Eng 8(2):1–9
18. Babu KM, Raghunadh MV (2017) Vehicle number plate detection and recognition using bounding box method. In: Proceedings of the 2016 international conference on advanced communication, control & computing technologies, pp 106–110
19. Al Taee EJ (2018) The proposed Iraqi vehicle license plate recognition system by using Prewitt edge detection algorithm. J Theor Appl Inf Technol 96(10):2754–2764
20. Arxiv.org. https://arxiv.org/abs/1905.11731v1. Accessed 15 Sept 2019


21. Shrivakshan GT, Chandrasekar C (2012) A comparison of various edge detection techniques used in image processing. Int J Comput Sci Issues (IJCSI) 9(5):269–276
22. Xia R, Liu W, Zhao J, Li L (2007) An optimal initialization technique for improving the segmentation performance of Chan-Vese model. In: Proceedings of the IEEE international conference on automation and logistics, ICAL 2007, pp 411–415
23. Hu YC, Chang CC (1999) Variable rate vector quantization scheme based on quadtree segmentation. IEEE Trans Consum Electron 45(2):310–317
24. Khairudin NAA et al (2019) Image segmentation approach for acute and chronic leukaemia based on blood sample images. IOP Conf Ser Mater Sci Eng 557(1):1–6

Classification of Facial Part Movement Acquired from Kinect V1 and Kinect V2 Sheng Guang Heng, Rosdiyana Samad, Mahfuzah Mustafa, Zainah Md Zain, Nor Rul Hasma Abdullah, and Dwi Pebrianti

Abstract The aim of this study is to determine which motion sensor, Kinect v1 or Kinect v2, performs better in facial part movement recognition. This study applied several classification methods, namely neural network, complex decision tree, cubic SVM, fine Gaussian SVM, fine kNN and QDA, to the datasets obtained from Kinect v1 and Kinect v2. The facial part movement is detected and extracted into 11 features and 15 classes. The chosen classifiers are then applied to train and test the datasets. The Kinect sensor whose dataset yields the highest testing accuracy will be selected to develop an assistive facial exercise application, in terms of tracking performance and detection accuracy.

Keywords Kinect V1 · Kinect V2 · Face tracking · Classification · Confusion matrix · Facial part movement

1 Introduction

Recently, assistive technologies have been widely used in various aspects of human life, such as vision and hearing care. Hence, devices featuring assistive technologies must have satisfying performance in terms of detection accuracy and time response. Furthermore, assistive technologies can help in rehabilitation by restoring an ability to its original state. For example, patients with Bell's palsy find it difficult to make facial expressions correctly or as precisely as before. They are required to do a series of rehabilitation exercises to get back to normal. Normally, the rehabilitation process takes a long time because of the difficulty of the physical exercises and the lack of motivation when repeating the same exercises. Assistive technologies help to improve motivation for efficient rehabilitation.

S. G. Heng (&)  R. Samad  M. Mustafa  Z. M. Zain  N. R. H. Abdullah  D. Pebrianti Faculty of Electrical and Electronics Engineering, Universiti Malaysia Pahang, 26600 Pekan, Pahang, Malaysia e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_65


For detailed facial feature detection and tracking accuracy, 3D facial image processing is better than 2D [1, 2] because a 3D sensor supports depth sensing. There are various choices of RGB-D sensors, but Kinect version 1 (v1) and Kinect version 2 (v2) are the better options due to their cheaper price, robust face tracking performance and high portability [3]. Besides, the Microsoft Kinect software development kits (SDKs) [4] are free and available online. The featured face tracking algorithms are robust enough to detect a human face in the presence of occlusions and poor lighting. In this study, the Kinect motion sensors were chosen to acquire facial part movements for facial exercise analysis. The facial part movements are classified in order to determine which Kinect sensor has the higher degree of recognition and accuracy. Hence, the Kinect sensor with the higher classification accuracy will be chosen to develop an assistive facial exercise application.

2 Literature Review

Face data are necessary in face recognition, facial expression analysis and biometric applications, and the facial features collectible from a human face are important in many fields. In facial image processing for the medical field, the facial features of a stroke patient can help the specialist diagnose the illness condition and rehabilitation progress [5]. Thus, the analysis of facial part movement has been carried out using Kinect v1 and Kinect v2. The classification types used in this study include neural network, complex decision tree, cubic kernel SVM (Support Vector Machine), fine Gaussian kernel SVM, fine kNN (k Nearest Neighbors) and QDA (Quadratic Discriminant Analysis). The basic working concept of each of these classifiers is summarized below.

2.1 Neural Network

A neural network consists of neurons organized in layers [6]. Each neuron multiplies its inputs by adjustable weight values and delivers the sum through a transfer function to the neurons of the next layer. The number of neurons in each hidden layer is commonly set between the number of input nodes and the number of output nodes. The output of each neuron is given by

$$Y = \sum (\mathrm{weight} \times \mathrm{input}) + \mathrm{bias} \qquad (1)$$
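Equation (1) amounts to a weighted sum followed by a transfer function. A minimal sketch in Python; tanh is chosen here only as an example transfer function, it is not specified by the text:

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One neuron: Eq. (1), a weighted sum of the inputs plus a bias,
    passed through a transfer function (tanh chosen as an example)."""
    y = np.dot(weights, inputs) + bias   # sum(weight * input) + bias
    return np.tanh(y)                    # example transfer function

x = np.array([0.5, -0.2, 0.1])   # illustrative inputs
w = np.array([0.4, 0.3, -0.6])   # illustrative weights
print(neuron(x, w, bias=0.05))   # tanh(0.13) ≈ 0.129
```

A trained network simply stacks such neurons in layers and adjusts the weights and biases during training.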

Classification of Facial Part Movement Acquired …

2.2 Decision Tree

A decision tree breaks a dataset down into smaller and smaller subsets in the form of a tree structure. The decision nodes represent predictors or features of the dataset, while the leaf nodes represent the decision outputs or classes. Quinlan [7] designed ID3, which uses entropy and information gain to construct a decision tree. The entropy computed from the frequency table of one attribute, $E(S)$, and of two attributes, $E(T, X)$, is

$$E(S) = -\sum_{i=1}^{c} p_i \log_2 p_i \qquad (2)$$

$$E(T, X) = \sum_{c \in X} P(c)\,E(c) \qquad (3)$$

The information gain is

$$\mathrm{Gain}(T, X) = \mathrm{Entropy}(T) - \mathrm{Entropy}(T, X) \qquad (4)$$

The attribute with the largest information gain is chosen as the decision node, and a branch with zero entropy is considered a leaf node. The non-leaf branches are split further using the ID3 algorithm until all data are classified.
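Equations (2)–(4) can be sketched in a few lines of Python; the toy attribute below is illustrative, not the facial-feature data:

```python
import math
from collections import Counter

def entropy(labels):
    """Eq. (2): E(S) = -sum_i p_i * log2(p_i) over the class frequencies."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, attribute_values):
    """Eqs. (3)-(4): Gain(T, X) = Entropy(T) - sum_c P(c) * E(T | X = c)."""
    n = len(labels)
    groups = {}
    for lab, val in zip(labels, attribute_values):
        groups.setdefault(val, []).append(lab)
    conditional = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - conditional

# Toy example: the attribute perfectly separates the two classes,
# so the gain equals the full 1 bit of starting entropy.
labels = ['yes', 'yes', 'no', 'no']
attr   = ['a', 'a', 'b', 'b']
print(information_gain(labels, attr))   # 1.0
```

ID3 would pick the attribute with the largest such gain as the next decision node and recurse on the resulting subsets.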

2.3 SVM

Support vectors are the difficult data points that lie nearest to the hyperplane; SVMs maximize the width of the margin around the hyperplane [8]. The data points that are most difficult to classify lie on the hyperplanes $H_1$ and $H_2$, with the plane $H_0$ as the median in between. The weight vector, input vector and bias are represented by $w$, $x$ and $b$ respectively:

$$H_1: w \cdot x_i + b = +1 \qquad (5)$$

$$H_2: w \cdot x_i + b = -1 \qquad (6)$$

$$H_0: w \cdot x_i + b = 0 \qquad (7)$$

The kernel trick is used to handle nonlinear datasets. The kernels used in this paper are the cubic polynomial and the Gaussian radial basis function (RBF).
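The cubic and Gaussian RBF kernels mentioned above can be written out directly. The sketch below evaluates a kernel decision function; the support vectors, multipliers and bias are illustrative placeholders, not a trained model:

```python
import numpy as np

def cubic_kernel(x, z):
    """Polynomial kernel of degree 3: k(x, z) = (x . z + 1)^3."""
    return (np.dot(x, z) + 1.0) ** 3

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian RBF kernel: k(x, z) = exp(-gamma * ||x - z||^2)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

# A kernel SVM classifies with f(x) = sum_i alpha_i * y_i * k(x_i, x) + b.
# The values below are made up for illustration, not learned from data.
support_vectors = np.array([[1.0, 2.0], [2.0, 1.0]])
alpha = np.array([0.5, 0.5])   # Lagrange multipliers (placeholder)
y_sv = np.array([+1, -1])      # support-vector labels
b = 0.1                        # bias (placeholder)

def decision(x, kernel):
    return sum(a * yi * kernel(sv, x)
               for a, yi, sv in zip(alpha, y_sv, support_vectors)) + b

x_new = np.array([1.5, 1.8])
print(np.sign(decision(x_new, cubic_kernel)))   # predicted class sign
```

Swapping `cubic_kernel` for `rbf_kernel` changes the decision surface without changing the rest of the machinery, which is exactly the point of the kernel trick.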

2.4 kNN

Given a testing point $x$ in a 2-class training dataset, the kNN algorithm identifies the $k$ nearest neighbors of $x$ to determine its class [9]. In a 2-class problem, $k$ is set to an odd value to avoid ties. For example, if $k$ is set to 3, the 3 nearest feature points of $x$ are identified, and the class with the greatest number of nearest points to $x$ becomes the class of $x$. The Euclidean function is typically used to measure the distance between the testing point and the feature points from the training dataset:

$$\sqrt{\sum_{i=1}^{k} (x_i - y_i)^2} \qquad (8)$$
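The kNN rule with the Euclidean distance of Eq. (8) can be sketched as follows; the toy 2-class data are illustrative, not the Kinect features:

```python
import numpy as np

def knn_predict(x, X_train, y_train, k=3):
    """Classify x by majority vote among its k nearest training points,
    using the Euclidean distance of Eq. (8)."""
    dists = np.sqrt(((X_train - x) ** 2).sum(axis=1))   # distance to every point
    nearest = np.argsort(dists)[:k]                     # indices of k closest
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                    # majority class

# Toy 2-class training data (k = 3, odd, to avoid ties)
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(np.array([0.2, 0.1]), X_train, y_train, k=3))   # 0
```

The "fine kNN" variant used later in the paper is the special case `k=1`, which makes the class boundaries as finely detailed as possible.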

2.5 LDA

LDA (Linear Discriminant Analysis) creates a new axis between feature points and projects the points onto that axis to maximize the separability between classes. The axis is chosen to maximize the distance between class means while minimizing the scatter within each class. Overall, LDA is a classification method that reduces the dimensionality of the data while maximizing the separation between classes [10]. The scatter matrices $S_i$ of multi-class data $(c_1, c_2, \ldots, c_n)$ with $m$ $p$-dimensional samples $x_1, x_2, \ldots, x_m$ (where $x_i = (x_{i1}, \ldots, x_{ip})$) are

$$S_i = \sum_{x \in c_i} (x - \bar{x}_i)(x - \bar{x}_i)' \qquad (9)$$

The intra-class scatter matrix $\Sigma_w$ and inter-class scatter matrix $\Sigma_b$ are given by

$$\Sigma_w = S_1 + S_2 + \ldots + S_n = \sum_{i=1}^{n} \sum_{x \in c_i} (x - \bar{x}_i)(x - \bar{x}_i)' \qquad (10)$$

$$\Sigma_b = \sum_{i=1}^{n} m_i (\bar{x}_i - \bar{x})(\bar{x}_i - \bar{x})' \qquad (11)$$
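The scatter matrices of Eqs. (9)–(11) can be computed directly. A sketch on toy data (not the study's features), which also exercises the identity that within-class plus between-class scatter equals the total scatter:

```python
import numpy as np

def scatter_matrices(X, y):
    """Within-class (Eqs. 9-10) and between-class (Eq. 11) scatter matrices."""
    overall_mean = X.mean(axis=0)
    p = X.shape[1]
    Sw = np.zeros((p, p))
    Sb = np.zeros((p, p))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        d = Xc - mc
        Sw += d.T @ d                        # Eq. (10): sum of per-class scatter S_i
        diff = (mc - overall_mean).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)      # Eq. (11): m_i-weighted between-class scatter
    return Sw, Sb

# Two well-separated toy classes in 2-D
X = np.array([[1.0, 2.0], [1.2, 1.9], [3.0, 4.0], [3.1, 4.2]])
y = np.array([0, 0, 1, 1])
Sw, Sb = scatter_matrices(X, y)
```

LDA then seeks the projection maximizing the ratio of between-class to within-class scatter, e.g. via the eigenvectors of $\Sigma_w^{-1}\Sigma_b$.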

3 Materials and Methodology

Both Kinect sensors have face tracking algorithms provided in their respective versions of the Kinect SDK library. The face tracking algorithms are modified and executed in Microsoft Visual Studio to extract the desired facial point coordinates.


The user is required to perform a neutral expression as a reference for the subsequent facial part movements. Then, the distance between two paired facial points is calculated from the extracted 3D coordinates. The distance ratio is obtained by dividing the facial feature point distances by the neutral reference distances, and is then categorized into 15 classes for classification. MATLAB is selected as the platform to apply classification to each dataset for Kinect v1 and Kinect v2. There are 3000 training sets and 3000 testing sets for each Kinect sensor, and each set comes with 11 distance ratios corresponding to its paired facial feature points. The classification types applied to the dataset are compared based on their accuracy results.

Figure 1 shows the methodology workflow. Initially, the user is asked to perform 15 facial part movements, including the neutral pose, raising the eyebrows, closing the eyelids and lip expressions. The collected 3D facial points with significant changes in coordinate location are set as feature points. The feature points are then paired as facial components to calculate the 3D distances, and the distance ratio between the neutral face and the moving face is obtained for classification. A total of 6000 facial data samples are collected and divided into a 3000-sample training set and a 3000-sample testing set for each of Kinect v1 and Kinect v2. For the neural network classification method, 30 hidden neurons are set to optimize the training and testing accuracy, while for the other classification methods, 5-fold cross-validation is used to validate the training set against unseen data. The class category of each facial part movement is listed in Table 1.

Fig. 1 Methodology workflow
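The distance-ratio feature described in the workflow can be sketched as follows; the point indices and coordinates are illustrative placeholders, not values taken from the Kinect SDK:

```python
import numpy as np

def distance_ratio(points, pairs, neutral_dists):
    """3-D distance between each paired facial point, divided by the
    corresponding neutral-pose distance. `points` is an (N, 3) array of
    tracked facial points; `pairs` lists (i, j) index tuples (illustrative)."""
    dists = np.array([np.linalg.norm(points[i] - points[j]) for i, j in pairs])
    return dists / neutral_dists

# Toy frame: one tracked point pair whose separation doubles vs. neutral
frame = np.array([[0.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0]])
pairs = [(0, 1)]
neutral = np.array([1.0])        # distance of the same pair in the neutral pose
print(distance_ratio(frame, pairs, neutral))   # [2.]
```

In the study, 11 such ratios per frame form one feature vector, and each vector is labeled with one of the 15 movement classes of Table 1.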

Table 1 Classes with corresponding facial part movement

Class   Facial part movement
1       Neutral
2       Raising eyebrows slightly
3       Raising eyebrows greatly
4       Frowning
5       Closing right eyelid
6       Closing left eyelid
7       Closing both eyelids
8       Stretching lips
9       Lowering lip corner
10      Pouting
11      Opening mouth
12      Stretching right lip corner
13      Stretching left lip corner
14      Pulling left lip corner
15      Pulling right lip corner

4 Results and Discussion

The optimum number of hidden neurons for the Kinect v1 dataset is 30. The training accuracy is a satisfying 96.6%; however, the testing accuracy is only 68.9%. This is because some classes are undetectable by Kinect v1: the noise has been learned by the model and caused overfitting, so the model fits the testing data poorly and does not generalize well. From Fig. 2, the 6th and 15th target classes have the lowest prediction accuracy because these classes are asymmetrical facial part movements, which are not supported by Kinect v1. For the 4th target class, the predicted output classes included the 3rd, 5th and 6th classes because Kinect v1 tends to detect raising behavior between the inner corners of the eyes and the eyebrows. Moreover, Kinect v1 is responsive to lip corner movement only, which results in poor output accuracy for the 9th target class. Furthermore, Kinect v1 detects the upper and lower lips inaccurately when the lips are closed, because the distance between the upper and lower lips is then nearly zero; this overlap in lip detection gives the 10th target class a low prediction accuracy.

Both the Kinect v1 and Kinect v2 networks use 30 hidden neurons. From Fig. 3, the Kinect v2 testing accuracy is 91.2%, far better than Kinect v1. For eyebrow-raising detection, Kinect v2 is less responsive to the outer eyebrows, which explains why some testing data from the 2nd and 3rd target classes are wrongly predicted as the 3rd and 2nd output classes respectively.

For the decision tree classification, the complex tree classifier type is chosen. The model flexibility is high, as the maximum number of splits is up to 100. From Fig. 4, the Kinect v1 testing accuracy is 50.4%, which is considered low because the 5th, 6th,


Fig. 2 Kinect v1: confusion matrices of neural network training (left) and testing set (right)

Fig. 3 Kinect v2: confusion matrices of neural network training (left) and testing set (right)

7th, 12th, 13th, 14th and 15th classes are not supported by Kinect v1. The only way to increase the training and testing accuracy would be to drop some of the noise from the dataset.

From Fig. 5, the Kinect v2 testing accuracy is 80.0%, which is satisfying. The 2nd class has low detection accuracy because raising the eyebrows slightly has been misdetected as raising the eyebrows greatly and lowering the lips. Besides, the 4th class has low detection accuracy because the frowning expression has been wrongly categorized as closing the right or left eyelid.

The cubic kernel function is applied in the SVM classification. From Fig. 6, the Kinect v1 testing accuracy is 71.0%, an acceptable result. The 15th class has the lowest detection accuracy, as Kinect v1 does not support asymmetrical detection.


Fig. 4 Kinect v1: confusion matrices of complex tree training (left) and testing set (right)

Fig. 5 Kinect v2: confusion matrices of complex tree training (left) and testing set (right)

From Fig. 7, the testing accuracy is 92.8%, which is high for Kinect v2. The 2nd and 3rd classes have low detection accuracy because Kinect v2 is less responsive to the outer eyebrows; hence, the 2nd and 3rd classes have been wrongly classified as the 3rd and 2nd classes respectively.

The Gaussian kernel is applied to the training dataset for both Kinect v1 and Kinect v2. From Fig. 8, the testing accuracy for Kinect v1 is 76.0%, which is considered acceptable. The 15th class has the lowest detection accuracy because Kinect v1 fails to detect asymmetrical expressions. Besides, the low detection accuracy of the 9th class indicates that Kinect v1 is responsive to lip corner movement only.


Fig. 6 Kinect v1: confusion matrices of cubic SVM training (left) and testing set (right)

Fig. 7 Kinect v2: confusion matrices of cubic SVM training (left) and testing set (right)

From Fig. 9, the testing accuracy for Kinect v2 is 93.7%, which is satisfying. The lowest detection rates are recorded for the 2nd and 3rd classes, where some testing samples have been misclassified into each other's class owing to the similarity of raising the eyebrows slightly and greatly. Besides, the 1st and 9th classes have been misdetected as each other's class as well.

For the fine kNN classification, the distinctions between classes are finely detailed, as the number of neighbors is set to 1. From Fig. 10, Kinect v1 has a testing accuracy of 78.8%, which is considered acceptable. The 9th class has the lowest detection rate because lowering the lip corner has been misclassified into the 12th and 13th classes, which are stretching the right and left lip corners respectively.


Fig. 8 Kinect v1: confusion matrices of fine Gaussian SVM training (left) and testing set (right)

Fig. 9 Kinect v2: confusion matrices of fine Gaussian SVM training (left) and testing set (right)

From Fig. 11, Kinect v2 has the highest testing accuracy among all the classification types, at 94.3%. Some 2nd and 3rd class samples are wrongly predicted as the 3rd and 2nd classes respectively because Kinect v2 is less responsive to outer eyebrow movement.

The chosen classifier type for discriminant analysis is the quadratic discriminant, which creates nonlinear boundaries between the training classes. From Fig. 12, Kinect v1 has a low testing accuracy of only 60.1%. The model almost fails to detect the 6th class, as Kinect v1 does not support eyelid movement detection, and the 10th class is also wrongly categorized as the 9th class.


Fig. 10 Kinect v1: confusion matrices of fine kNN training (left) and testing set (right)

Fig. 11 Kinect v2: confusion matrices of fine kNN training (left) and testing set (right)

From Fig. 13, Kinect v2 has a good testing accuracy of 89.0%. Some of the 2nd and 3rd class facial part movements have been misclassified as the 3rd and 2nd classes respectively because Kinect v2 is less responsive to outer eyebrow movement.

Overall, the fine kNN classifier dominates the other classification types in training and testing accuracy for both Kinect v1 and v2, as shown in Table 2, while the complex tree classifier records the lowest training and testing accuracy for both sensors. For fine kNN classification, Kinect v1 has a training accuracy of 98.8%, slightly better than Kinect v2 at 97.8%. Both Kinect v1 and v2 also achieve their highest testing accuracy with fine kNN classification.


Fig. 12 Kinect v1: confusion matrices of QDA training (left) and testing set (right)

Fig. 13 Kinect v2: confusion matrices of QDA training (left) and testing set (right)

Table 2 Classification accuracy

Classification type    Training accuracy              Testing accuracy
                       Kinect v1 (%)  Kinect v2 (%)   Kinect v1 (%)  Kinect v2 (%)
Neural network         96.6           97.5            68.9           91.2
Complex tree           66.8           85.3            50.4           80.0
Cubic SVM              97.4           97.5            71.0           92.8
Fine Gaussian SVM      97.0           97.5            76.0           93.7
Fine kNN               98.8           97.8            78.8           94.3
QDA                    72.0           92.8            60.1           89.0
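The accuracies discussed above are read off confusion matrices like those in Figs. 2–13. A small sketch of how overall and per-class accuracy follow from a confusion matrix; the matrix values are made up for illustration, not taken from the study:

```python
import numpy as np

# Illustrative 3-class confusion matrix (rows = true class, cols = predicted).
cm = np.array([[18, 1, 1],
               [2, 16, 2],
               [0, 3, 17]])

overall_acc = np.trace(cm) / cm.sum()            # correct predictions / all predictions
per_class_acc = cm.diagonal() / cm.sum(axis=1)   # accuracy of each target class
print(round(overall_acc, 3))   # 0.85
print(per_class_acc)           # [0.9, 0.8, 0.85]
```

In the paper's 15-class matrices, a near-zero diagonal entry (e.g. class 15 on Kinect v1) is what signals that a movement type is simply not being tracked by the sensor.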


However, Kinect v1 has only 78.8% testing accuracy compared to 94.3% for Kinect v2. Because Kinect v1 does not support eyelid and asymmetrical facial part movements, some relevant responses are not classified correctly, which drags down the testing accuracy. On the other hand, Kinect v2 achieves a satisfying 94.3% testing accuracy; most responses are classified correctly because Kinect v2 supports eyelid and asymmetrical facial part movements.

5 Conclusion

In conclusion, the testing accuracy of Kinect v2 is better than that of Kinect v1. For the Kinect v2 dataset, the fine kNN classification, which uses only 1 nearest neighbor, has the highest training and testing accuracy among all classification types used in this study, at 97.8% and 94.3% respectively. The training and testing accuracy are satisfying and close to each other; the training dataset is thus proven suitable for the testing dataset, as Kinect v2 supports eyelid and asymmetrical facial part movements. In contrast, Kinect v1 has a relatively low testing accuracy for fine kNN classification, at only 78.8%. Although Kinect v1 has the highest training accuracy (98.8%) among all classification types, the poor testing accuracy shows that Kinect v1 has lower face tracking performance than Kinect v2. Overall, the classification results show that Kinect v2 is more suitable for developing an assistive facial exercise application.

Acknowledgements The research is funded by Fundamental Research Grant Scheme FRGS/1/2016/TK04/UMP/02/1 and Universiti Malaysia Pahang (UMP).

References

1. Xu CH, Wang YH, Tan TN, Quan L (2004) Depth vs. intensity: which is more important for face recognition? In: 17th International conference on pattern recognition, Cambridge, UK, vol 1. IEEE, pp 342–345
2. Abate AF, Nappi M, Riccio D, Sabatino G (2007) 2D and 3D face recognition: a survey. Pattern Recogn Lett 28(14):1885–1906
3. Wu HH, Bainbridge-Smith A (2011) Advantages of using a Kinect camera in various applications. University of Canterbury, pp 1–4
4. Webb J, Ashley J (2012) Beginning Kinect programming with the Microsoft Kinect SDK, 1st edn. Apress, New York
5. Umirzakova S, Whangbo TK (2018) Study on detect stroke symptoms using face features. In: 2018 International conference on information and communication technology convergence (ICTC), Korea. IEEE, pp 429–431
6. Hansen LK, Salamon P (1990) Neural network ensembles. IEEE Trans Pattern Anal Mach Intell 12(10):993–1001
7. Quinlan JR (1986) Induction of decision trees. Mach Learn 1(1):81–106
8. Berwick R. An idiot's guide to support vector machines (SVMs). http://web.mit.edu/6.034/wwwbob/svm.pdf. Accessed 15 Oct 2019
9. Guo G, Wang H, Bell D, Bi Y, Greer K (2003) KNN model-based approach in classification. In: Meersman R, Tari Z, Schmidt DC (eds) OTM confederated international conferences "On the move to meaningful internet systems". LNCS, vol 2888. Springer, Heidelberg, pp 986–996
10. Li T, Zhu S, Ogihara M (2006) Using discriminant analysis for multi-class classification: an experimental investigation. Knowl Inf Syst 10(4):453–472

Hurst Exponent Based Brain Behavior Analysis of Stroke Patients Using EEG Signals Wen Yean Choong, Wan Khairunizam, Murugappan Murugappan, Mohammad Iqbal Omar, Siao Zheng Bong, Ahmad Kadri Junoh, Zuradzman Mohamad Razlan, A. B. Shahriman, and Wan Azani Wan Mustafa

Abstract Stroke patients perceive emotions differently from normal people owing to emotional disturbances, and their emotional impairment can be effectively analyzed using the EEG signal. The EEG signal is known to be non-linear, and the neuronal oscillation under different mental states can be observed by non-linear methods. The non-linear analysis of different emotional states in the EEG signal was performed using the Hurst exponent (HURST). In this study, the long-range temporal correlation (LRTC) was examined in the emotional EEG signals of stroke patients and normal control subjects. The statistical test on the HURST showed more significant differences among the emotional states of the normal subjects than among those of the stroke patients. In particular, it was also found that the gamma frequency band of the emotional EEG showed more statistical significance among the different emotional states.

Keywords: Electroencephalogram · Hurst exponent (HURST) · Stroke · Emotion

W. Y. Choong (✉) · W. Khairunizam · M. I. Omar · S. Z. Bong · Z. M. Razlan · A. B. Shahriman
School of Mechatronic Engineering, Universiti Malaysia Perlis, 02600 Arau, Perlis, Malaysia
e-mail: [email protected]

M. Murugappan
Department of Electronics and Communication Engineering, Kuwait College of Science and Technology, Block 4, Doha, Kuwait

A. K. Junoh
Institute of Engineering Mathematics, Universiti Malaysia Perlis, 02600 Arau, Perlis, Malaysia

W. A. W. Mustafa
Faculty of Engineering Technology, Universiti Malaysia Perlis, 02600 Arau, Perlis, Malaysia

© Springer Nature Singapore Pte Ltd. 2021
Z. Md Zain et al. (eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019, Lecture Notes in Electrical Engineering 666, https://doi.org/10.1007/978-981-15-5281-6_66


1 Introduction

Stroke was ranked second among the top ten causes of death worldwide by the World Health Organization (WHO), whose statistics for the year 2015 showed about 85 deaths caused by stroke per 100,000 population [1]. Stroke, or cerebrovascular accident (CVA), happens because of a problem with the oxygen supply to the brain; it damages brain cells and leads to the loss of brain functions. Hence, stroke patients often experience emotional problems, including emotion impairment and having different emotional perceptions from normal people in the same emotional situation [2, 3]. Yuvaraj reviewed the emotional impairment in stroke patients and concluded that stroke patients experience emotions differently from normal people, and that right-brain-damaged stroke patients show more impairment than left-brain-damaged stroke patients [4].

Most human activities are regulated by the brain, and a huge amount of information, including emotional experiences, can be analyzed from brain activities [5–8]. Several methodologies have been implemented in the literature to "capture" brain activities for analysis, such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG) and electroencephalography (EEG). In this study, EEG was used because the data acquisition devices are cheaper and more portable than the other devices, and EEG signals also have good temporal resolution. EEG is a biosignal generated from the stochastic phenomena of biological systems; thus, the analysis of EEG signals is useful in explaining the neuronal activities of different mental states. The human brain is made up of four lobes, known as the frontal (F), parietal (P), temporal (T) and occipital (O) lobes, located at the cerebral cortex (gray matter) of the brain. Different parts of the brain form a complex network for the interconnected control and coordination of body activities, and to understand how the brain works, researchers have studied the EEG signals from different brain parts. In this study, the emotional states of human subjects were collected in the form of EEG signals to study the brain's behavior in processing different emotional states.
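The Hurst exponent (HURST) used in this study quantifies long-range temporal correlation. One common estimator is rescaled-range (R/S) analysis, sketched below; this is a generic illustration with assumed dyadic window sizes, not the authors' implementation:

```python
import numpy as np

def hurst_rs(x, min_size=16):
    """Estimate the Hurst exponent of a 1-D signal by rescaled-range (R/S)
    analysis: H is the slope of log(R/S) against log(window size)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes = [2 ** k for k in range(4, int(np.log2(n)))]   # dyadic window sizes
    sizes = [s for s in sizes if min_size <= s <= n // 2]
    log_s, log_rs = [], []
    for s in sizes:
        rs_vals = []
        for start in range(0, n - s + 1, s):              # non-overlapping windows
            seg = x[start:start + s]
            dev = np.cumsum(seg - seg.mean())             # cumulative deviation
            r = dev.max() - dev.min()                     # range of the deviation
            sd = seg.std()
            if sd > 0:
                rs_vals.append(r / sd)                    # rescaled range R/S
        log_s.append(np.log(s))
        log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_s, log_rs, 1)
    return slope

# White noise has no long-range temporal correlation, so H should come out near 0.5
rng = np.random.default_rng(0)
print(round(hurst_rs(rng.standard_normal(4096)), 2))
```

For EEG, H > 0.5 would indicate persistent long-range temporal correlation; note that the naive R/S estimator is biased upward for short windows, which is why studies often use refined variants.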

1.1 Properties of EEG Signals

EEG is the most powerful biosignal and is used in several clinical and real-time system design applications. Mostly, this signal is used for the diagnosis of abnormalities or disorders in the brain, such as traumatic brain injury [9], stroke [6, 10, 11], Parkinson's disease [12, 13], and Alzheimer's disease [14]. These characteristics enable us to track the minute changes of emotion through brain signal activity better than with other biosignals.


From earlier studies, EEG is found to be a non-linear, non-stationary and non-Gaussian signal [15, 16]. Researchers have been working on verifying the properties of EEG data to understand more about the human brain [17–19]. Thus, features from different domains and methods have been used in EEG signal analysis. Since EEG exhibits non-linear characteristics, there have been studies on EEG analysis using non-linear features; in past studies, non-linear analysis has been reported to perform better than linear analysis in EEG signal processing [20]. EEG signals are oscillatory in nature and most of the useful information about brain state is reflected in the low frequency bands (