Examining Multimedia Forensics and Content Integrity



Examining Multimedia Forensics and Content Integrity Sumit Kumar Mahana National Institute of Technology, Kurukshetra, India Rajesh Kumar Aggarwal National Institute of Technology, Kurukshetra, India Surjit Singh Thapar Institute of Engineering and Technology, India

A volume in the Advances in Multimedia and Interactive Technologies (AMIT) Book Series

Published in the United States of America by
IGI Global, Information Science Reference (an imprint of IGI Global)
701 E. Chocolate Avenue, Hershey PA, USA 17033
Tel: 717-533-8845  Fax: 717-533-8661
E-mail: [email protected]  Web site: http://www.igi-global.com

Copyright © 2023 by IGI Global. All rights reserved. No part of this publication may be reproduced, stored or distributed in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher. Product or company names used in this set are for identification purposes only. Inclusion of the names of the products or companies does not indicate a claim of ownership by IGI Global of the trademark or registered trademark.

Library of Congress Cataloging-in-Publication Data
Names: Sumit Kumar (Economist) editor. | Aggarwal, Raj, editor. | Singh, Surjit, 1981- editor.
Title: Examining multimedia forensics and content integrity / edited by Sumit Kumar Mahana, Rajesh Kumar Aggarwal, and Surjit Singh.
Description: Hershey, PA : Information Science Reference, an imprint of IGI Global, [2023] | Includes bibliographical references and index. | Summary: “The Examining Multimedia Forensics and Content Integrity features a collection of innovative research on the approaches and applications of current techniques for the privacy and security of multimedia and their secure transportation. It provides relevant theoretical frameworks and the latest empirical research findings in the area of multimedia forensics and content integrity. Covering topics such as 3D data security, copyright protection, and watermarking, this major reference work is a comprehensive resource for security analysts, programmers, technology developers, IT professionals, students and educators of higher education, librarians, researchers, and academicians”-- Provided by publisher.
Identifiers: LCCN 2022058486 (print) | LCCN 2022058487 (ebook) | ISBN 9781668468647 (h/c) | ISBN 9798369304396 (s/c) | ISBN 9781668468654 (ebook)
Subjects: LCSH: Multimedia systems--Protection. | Data integrity. | Data protection. | Digital forensic science.
Classification: LCC QA76.575 .E94 2023 (print) | LCC QA76.575 (ebook) | DDC 006.7--dc23/eng/20230123
LC record available at https://lccn.loc.gov/2022058486
LC ebook record available at https://lccn.loc.gov/2022058487

This book is published in the IGI Global book series Advances in Multimedia and Interactive Technologies (AMIT) (ISSN: 2327-929X; eISSN: 2327-9303)

British Cataloguing in Publication Data
A Cataloguing in Publication record for this book is available from the British Library.

All work contributed to this book is new, previously-unpublished material. The views expressed in this book are those of the authors, but not necessarily of the publisher.

For electronic access to this publication, please contact: [email protected].

Advances in Multimedia and Interactive Technologies (AMIT) Book Series
ISSN: 2327-929X  EISSN: 2327-9303
Editor-in-Chief: Joel J.P.C. Rodrigues, Senac Faculty of Ceará, Fortaleza-CE, Brazil; Instituto de Telecomunicações, Portugal

Mission

Traditional forms of media communications are continuously being challenged. The emergence of user-friendly web-based applications such as social media and Web 2.0 has expanded into everyday society, providing an interactive structure to media content such as images, audio, video, and text. The Advances in Multimedia and Interactive Technologies (AMIT) Book Series investigates the relationship between multimedia technology and the usability of web applications. This series aims to highlight evolving research on interactive communication systems, tools, applications, and techniques to provide researchers, practitioners, and students of information technology, communication science, media studies, and many more with a comprehensive examination of these multimedia technology trends.

Coverage
• Internet Technologies
• Mobile Learning
• Social Networking
• Gaming Media
• Web Technologies
• Digital Games
• Multimedia Services
• Multimedia Streaming
• Digital Watermarking
• Multimedia Technology

IGI Global is currently accepting manuscripts for publication within this series. To submit a proposal for a volume in this series, please contact our Acquisition Editors at [email protected] or visit: http://www.igi-global.com/publish/.

The Advances in Multimedia and Interactive Technologies (AMIT) Book Series (ISSN 2327-929X) is published by IGI Global, 701 E. Chocolate Avenue, Hershey, PA 17033-1240, USA, www.igi-global.com. This series is composed of titles available for purchase individually; each title is edited to be contextually exclusive from any other title within the series. For pricing and ordering information please visit http://www.igi-global.com/book-series/advances-multimedia-interactivetechnologies/73683. Postmaster: Send all address changes to above address. Copyright © 2023 IGI Global. All rights, including translation in other languages reserved by the publisher. No part of this series may be reproduced or used in any form or by any means – graphics, electronic, or mechanical, including photocopying, recording, taping, or information and retrieval systems – without written permission from the publisher, except for non commercial, educational use, including classroom teaching purposes. The views expressed in this series are those of the authors, but not necessarily of IGI Global.

Titles in this Series

For a list of additional titles in this series, please visit: http://www.igi-global.com/book-series/advances-multimedia-interactive-technologies/73683

Handbook of Research on Advanced Practical Approaches to Deepfake Detection and Applications
Ahmed J. Obaid (University of Kufa, Iraq), Ghassan H. Abdul-Majeed (University of Baghdad, Iraq), Adriana Burlea-Schiopoiu (University of Craiova, Romania), and Parul Aggarwal (Jamia Hamdard, India)
Information Science Reference • © 2023 • 379pp • H/C (ISBN: 9781668460603) • US $295.00

Dynamics of Dialogue, Cultural Development, and Peace in the Metaverse
Swati Chakraborty (GLA University, India)
Information Science Reference • © 2023 • 236pp • H/C (ISBN: 9781668459072) • US $240.00

Handbook of Research on New Media, Training, and Skill Development for the Modern Workforce
Dominic Mentor (Teachers College, Columbia University, USA)
Business Science Reference • © 2022 • 439pp • H/C (ISBN: 9781668439968) • US $295.00

Handbook of Research on New Media Applications in Public Relations and Advertising
Elif Esiyok (Atilim University, Turkey)
Information Science Reference • © 2021 • 572pp • H/C (ISBN: 9781799832010) • US $295.00

Multidisciplinary Perspectives on Narrative Aesthetics in Video Games
Deniz Denizel (Bahcesehir University, Turkey) and Deniz Eyüce Şansal (Bahcesehir University, Turkey)
Information Science Reference • © 2021 • 300pp • H/C (ISBN: 9781799851103) • US $195.00

For an entire list of titles in this series, please visit: http://www.igi-global.com/book-series/advances-multimedia-interactive-technologies/73683

701 East Chocolate Avenue, Hershey, PA 17033, USA Tel: 717-533-8845 x100 • Fax: 717-533-8661 E-Mail: [email protected] • www.igi-global.com

Table of Contents

Preface ............ xiv

Chapter 1. 3D Data Security: Robust 3D Mesh Watermarking Approach for Copyright Protection ............ 1
Imen Fourati Kallel, Ecole Nationale d’Electronique et des Télécommunications de Sfax (ENET’Com), Tunisia
Ahmed Grati, Ecole Nationale d’Electronique et des Télécommunications de Sfax (ENET’Com), Tunisia
Amina Taktak, Ecole Nationale d’Electronique et des Télécommunications de Sfax (ENET’Com), Tunisia

Chapter 2. One-Class ELM Ensemble-Based DDoS Attack Detection in Multimedia Cloud Computing ............ 38
Gopal Singh Kushwah, National Institute of Technology, Kurukshetra, India
Surjit Singh, Thapar Institute of Engineering and Technology, India
Sumit Kumar Mahana, National Institute of Technology, Kurukshetra, India

Chapter 3. SSD Forensic Investigation Using Open Source Tool ............ 56
Hepi Suthar, Rashtriya Raksha University, India
Priyanka Sharma, Rashtriya Raksha University, India

Chapter 4. A Comparative Review for Color Image Denoising ............ 79
Ashpreet, National Institute of Technology, Kurukshetra, India



Chapter 5. Blockchain-Based Multimedia Content Protection ............ 118
Sakshi Chhabra, Panipat Institute of Engineering and Technology, India
Ashutosh Kumar Singh, National Institute of Technology, Kurukshetra, India
Sumit Kumar Mahana, National Institute of Technology, Kurukshetra, India

Chapter 6. Blockchain-Based Platform for Smart Tracking and Tracing the Pharmaceutical Drug Supply Chain ............ 144
Deepak Singla, Panipat Institute of Engineering and Technology, India
Sanjeev Rana, Maharishi Markandeshwar University, India

Chapter 7. A Methodological Study of Fake Image Creation and Detection Techniques in Multimedia Forensics ............ 173
Renu Popli, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
Isha Kansal, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
Rajeev Kumar, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
Ruby Chauhan, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India

Chapter 8. A Blockchain-Trusted Scheme Based on Multimedia Content Protection ............ 197
Aarti Sharma, National Institute of Technology, Kurukshetra, India
Bhavana Choudhary, National Institute of Technology, Kurukshetra, India
Divya Garg, National Institute of Technology, Kurukshetra, India

Chapter 9. Integration of Blockchain and Mobile Edge Computing ............ 218
Aarti Sharma, University Institute of Engineering and Technology, Thanesar, India
Mamtesh Nadiyan, National Institute of Technology, Kurukshetra, India
Seema Sabharwal, Government P.G. College for Women, India



Chapter 10. A Review on Spatial and Transform Domain-Based Image Steganography ............ 241
Divya Singla, Panipat Institute of Engineering and Technology, India
Neetu Verma, Deenbandhu Chhotu Ram University of Science and Technology, India
Sakshi Patni, Panipat Institute of Engineering and Technology, Panipat, India

Compilation of References ............ 267
About the Contributors ............ 296
Index ............ 300

Detailed Table of Contents

Preface................................................................................................................. xiv Chapter 1 3D Data Security: Robust 3D Mesh Watermarking Approach for Copyright Protection ...............................................................................................................1 Imen Fourati Kallel, Ecole Nationale d’Electronique et des Télécommunications de Sfax (ENET’Com), Tunisia Ahmed Grati, Ecole Nationale d’Electronique et des Télécommunications de Sfax (ENET’Com), Tunisia Amina Taktak, Ecole Nationale d’Electronique et des Télécommunications de Sfax (ENET’Com), Tunisia Three-dimensional data reveals more explicative information and more realistic visualization than that of the two-dimensional ones. This explains the remarkable growth in the use of 3D data in different fields of application, which increases in respect to the risk of illegal use of data and piracy, as well. Since its appearance, digital watermarking has been an essential solution to attest the identity of the owner, control illegal reproduction, and protect the copyright of the 3D object. The current work is about designing a robust 3D object watermarking technique in order to protect copyrights associated with triangular polygon meshes. The approach is based on a watermark’s insertion in the spatial domain. The findings are a proof demonstrating that the proposed 3D objects watermarking method does not only meet the need of imperceptibility but also shows, at least, similar or even better robustness, compared with other commonly used 3D objects watermarking approaches in the literature.



Chapter 2 One-Class ELM Ensemble-Based DDoS Attack Detection in Multimedia Cloud Computing .................................................................................................38 Gopal Singh Kushwah, National Institute of Technology, Kurukshetra, India Surjit Singh, Thapar Institute of Engineering and Technology, India Sumit Kumar Mahana, National Institute of Technology, Kurukshetra, India Distributed denial of service (DDoS) attack affects the availability of multimedia cloud services to its users. In this attack, a huge traffic load is put on the victim server. Hence initially the server becomes slow to process legitimate requests and later becomes unavailable. Therefore implementing defensive solutions against these attacks is of utmost importance. In this work, the authors propose a bagging ensemble-based DDoS attack detection system for multimedia cloud computing. One class extreme learning machine (ELM) is used as a base classifier. An outlier detection based approach has been used to detect these attacks. Experiments have been performed using two benchmark datasets NSL-KDD and CICIDS2017 to evaluate the performance of the proposed system. Chapter 3 SSD Forensic Investigation Using Open Source Tool .........................................56 Hepi Suthar, Rashtriya Raksha University, India Priyanka Sharma, Rashtriya Raksha University, India According to the CIA triad, Cyber Forensic Investigation judicial point of view is the data integrity of volatile memory kinds of data storage devices. This has long been a source of concern, and it is critical for the chain of custody procedure. As an outcome result, it is a substantial advancement for the measured examination cycle to safeguard unstable data from SSD. In this study provides the easiest way to preserve potentially volatile based memory digital proof, store on SSDs, and generate forensically bit-streams, also known as bit-by-bit copies. The challenge of protecting the data integrity of an electronic piece of evidence that has been arrested at a crime scene frequently faces analysts. This academic article primarily suggests a process method and a few steps for carrying out forensic investigations on data obtained from solid state drives all the while avoiding the TRIM characteristic and garbage series from running lacking user input or interaction, preserving the data integrity of the facts as usable digital evidence.



Chapter 4 A Comparative Review for Color Image Denoising ............................................79 Ashpreet, National Institute of Technology, Kurukshetra, India With the explosion in the number of color digital images taken every day, the demand for more accurate and visually pleasing images is increasing. Images that have only one component in each pixel are called scalar images. Correspondingly, when each pixel consists of three separate components from three different signal channels, these are called color images. Image denoising, which aims to reconstruct a highquality image from its degraded observation, is a classical yet still very active topic in the area of low-level computer vision. Impulse noise is one of the most severe noises which usually affect the images during signal acquisition stage or due to the bit error in the transmission. The use of color images is increasing in many color image processing applications. Restoration of images corrupted by noises is a very common problem in color image processing. Therefore, work is required to reduce noise without losing the color image features. Chapter 5 Blockchain-Based Multimedia Content Protection ............................................118 Sakshi Chhabra, Panipat Institute of Engineering and Technology, India Ashutosh Kumar Singh, National Institute of Technology, Kurukshetra, India Sumit Kumar Mahana, National Institute of Technology, Kurukshetra, India This chapter presents a comprehensive overview of the methods and applications of blockchain technology for multimedia content security. These applications are categorised using a taxonomy that takes into account the technical features of blockchain technology, types of blockchain, content protection strategies including encryption, digital rights management, digital watermarking, and fingerprinting (or transaction tracing), as well as performance standards. Moreover, multimediabased content protection techniques have been covered in this chapter. According to a review of the literature, there is currently no comprehensive and organised taxonomy specifically devoted to blockchain-based content protection solutions. The comparative study is of the most noticeable work done on blockchain-based content protection techniques, which is highly cited by the authors.



Chapter 6. Blockchain-Based Platform for Smart Tracking and Tracing the Pharmaceutical Drug Supply Chain ............ 144
Deepak Singla, Panipat Institute of Engineering and Technology, India
Sanjeev Rana, Maharishi Markandeshwar University, India

Every nation is presently addressing the threat posed by the sale of counterfeit medications. It is a growing global issue that has a significant effect on lower middle-income and lower-income countries. According to current estimates from the WHO, one in ten of the medications circulating in low- and middle-income nations are either subpar or fake. According to the National Drug Survey 2014–2016, carried out by the National Institute of Biologics, Ministry of Health & Family Welfare, counterfeit or substandard drugs make up about 3% of all pharmaceuticals in India. There is an urgent need for increased visibility and traceability within the supply chain due to the growing threat of counterfeit medications entering it and, in particular, making it into customers’ hands.

Chapter 7. A Methodological Study of Fake Image Creation and Detection Techniques in Multimedia Forensics ............ 173
Renu Popli, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
Isha Kansal, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
Rajeev Kumar, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
Ruby Chauhan, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India

Nowadays, there is a huge concern about the fabrication of real-world images/videos using various computer-aided tools and software. Although these types of software are commonly used for personal entertainment, they may create havoc when used by malicious people for concealing some sensitive contents from images for criminal forgery. The spread of fake information and illegal activities, or creating morphed images of some individuals for taking revenge, are some of the potentially destructive areas of advanced face information and structure manipulation technology in the wrong hands. The researcher fraternity in multimedia forensics has been working in this area for many years, and in this chapter a comprehensive study of various techniques of fake image/video creation and detection is described. It also presents a survey on various benchmark datasets used by the researchers for fake image/video detection. The presented survey can be a useful contribution for the research community to develop a new method/model for fake detection, thereby overcoming the restrictions of the traditional methods.



Chapter 8 A Blockchain-Trusted Scheme Based on Multimedia Content Protection ........197 Aarti Sharma, National Institute of Technology, Kurukshetra, India Bhavana Choudhary, National Institute of Technology, Kurukshetra, India Divya Garg, National Institute of Technology, Kurukshetra, India There are two types of content on the blockchain: centralized and decentralized. On centralized video platforms, the platform owner controls most of the content uploaded, rather than the creator. However, some content creators post low-quality content in exchange for free cryptocurrencies, creating a cryptocurrency algorithm that demotivates other content creators. In contrast, decentralized blockchain-based video platforms aim to lessen ad pressure and eliminate intermediaries. On video platforms, copyright violations and the unauthorized dissemination of protected information are also significant issues. Copyright protection, illegitimate access restriction, and legitimate dissemination of video files are necessary to guarantee that authors’ original output is appropriately compensated. Chapter 9 Integration of Blockchain and Mobile Edge Computing ...................................218 Aarti Sharma, University Institute of Engineering and Technology, Thanesar, India Mamtesh Nadiyan, National Institute of Technology, Kurukshetra, India Seema Sabharwal, Government P.G. College for Women, India This chapter begins with the fundamentals of blockchain and MEC. Integrating new technologies like blockchain and MEC is seen as a potential paradigm for managing the voluminous amounts of data produced by today’s pervasive mobile devices and subsequently powering intelligent services. With blockchain technology, they can boost the safety of existing MEC systems by using decentralized, immutable, secure, private, and service-efficient smart contracts. These smart contracts fall into three broad categories: public blockchains, consortium blockchains, and private blockchains. Moreover, this chapter discusses the classification and current defence mechanisms of security threats. Potential solutions to MEC’s main security challenges are then discussed. Following that, the authors present a classification to assist developers of various architectures in selecting an appropriate platform for specific applications, as well as insights into potential research directions. Finally, the authors present key blockchain and MEC convergence features, followed by some conclusions.



Chapter 10 A Review on Spatial and Transform Domain-Based Image Steganography .....241 Divya Singla, Panipat Institute of engineering and technology, India Neetu Verma, Deenbandhu Chhotu Ram University of Science and Technology, India Sakshi Patni, Panipat Institute of Engineering & Technology, Panipat, India Steganography is a secret way of communicating, hiding the existence of information. It hides the message secretly without letting anyone know about its existence. This chapter gives a brief of various image steganography techniques in the spatial domain and transforms domain with their advantages and disadvantages. The characteristics to measure the performance of an image steganography technique are given as well. It also introduces the idea of drawing out the embedded data from the cover object called steganalysis. Compilation of References ..............................................................................267 About the Contributors ...................................................................................296 Index ..................................................................................................................300


Preface

This book features a collection of innovative research on the approaches and applications of current techniques for the privacy and security of multimedia and their secure transportation. It provides relevant theoretical frameworks and the latest empirical research findings in the area of multimedia forensics and content integrity. Covering topics such as 3D data security, copyright protection, and watermarking, this major reference work is a comprehensive resource for security analysts, programmers, technology developers, IT professionals, students and educators of higher education, librarians, researchers, and academicians. The chapters in this book are as follows:

ORGANIZATION OF THE BOOK

The book is organized into 10 chapters. A brief description of each of the chapters follows:

Chapter 1 identifies that three-dimensional data reveals more explicative information and more realistic visualization than two-dimensional data. This explains the remarkable growth in the use of 3D data in different fields of application, which also increases the risk of illegal use of data and piracy. Since its appearance, digital watermarking has been an essential solution to attest the identity of the owner, control illegal reproduction, and protect the copyright of the 3D object. The current work is about designing a robust 3D object watermarking technique in order to protect copyrights associated with triangular polygon meshes. The approach is based on a watermark’s insertion in the spatial domain. The findings demonstrate that the proposed 3D object watermarking method not only meets the need for imperceptibility but also shows similar or even better robustness compared with other commonly used 3D object watermarking approaches in the literature.

Chapter 2 addresses the issue of how a distributed denial of service (DDoS) attack affects the availability of multimedia cloud services to its users. In this attack, a huge traffic load is put on the victim server. Hence, initially the server becomes slow to process legitimate requests and later becomes unavailable. Therefore, implementing defensive solutions against these attacks is of utmost importance. In this work, the authors propose a bagging ensemble-based DDoS attack detection system for multimedia cloud computing. A one-class extreme learning machine (ELM) is used as a base classifier, and an outlier detection-based approach has been used to detect these attacks. Experiments have been performed using two benchmark datasets, NSL-KDD and CICIDS2017, to evaluate the performance of the proposed system.

Chapter 3 provides the easiest way to preserve potentially volatile memory-based digital proof stored on SSDs and to generate forensic bit-streams, also known as bit-by-bit copies. Analysts frequently face the challenge of protecting the data integrity of an electronic piece of evidence that has been seized at a crime scene. This chapter primarily suggests a process and a few steps for carrying out forensic investigations on data obtained from solid-state drives while preventing the TRIM feature and garbage collection from running without user input or interaction, preserving the data integrity of the facts as usable digital evidence.

Chapter 4 discusses image denoising, which aims to reconstruct a high-quality image from its degraded observation and is a classical yet still very active topic in the area of low-level computer vision. Impulse noise is one of the most severe noises, usually affecting images during the signal acquisition stage or due to bit errors in transmission. The use of color images is increasing in many color image processing applications, and restoration of images corrupted by noise is a very common problem in color image processing. Therefore, work is required to reduce noise without losing the color image features.

Chapter 5 presents a comprehensive overview of the methods and applications of blockchain technology for multimedia content security. These applications are categorised using a taxonomy that takes into account the technical features of blockchain technology, types of blockchain, content protection strategies including encryption, digital rights management, digital watermarking, and fingerprinting (or transaction tracing), as well as performance standards. Moreover, multimedia-based content protection techniques have been covered in this chapter. According to a review of the literature, there is currently no comprehensive and organised taxonomy specifically devoted to blockchain-based content protection solutions. The chapter also provides a comparative study of the most noticeable and highly cited work on blockchain-based content protection techniques.

Chapter 6 addresses the threat posed by the sale of counterfeit medications, a growing global issue that has a significant effect on lower middle-income and lower-income countries. According to current estimates from the WHO, one in ten of the medications circulating in low- and middle-income nations are either subpar or fake. According to the National Drug Survey 2014–2016, carried out by the National Institute of Biologics, Ministry of Health and Family Welfare, counterfeit or substandard drugs make up about 3% of all pharmaceuticals in India. There is an urgent need for increased visibility and traceability within the supply chain due to the growing threat of counterfeit medications entering it and, in particular, making it into customers’ hands.

Chapter 7 addresses the concern of fabrication of real-world images/videos using various computer-aided tools and software. Although these types of software are commonly used for personal entertainment, they may create havoc when used by malicious people for concealing some sensitive contents from images for criminal forgery. The spread of fake information and illegal activities, or creating morphed images of some individuals for taking revenge, are some of the potentially destructive areas of advanced face information and structure manipulation technology in the wrong hands. The researcher fraternity in multimedia forensics has been working in this area for many years, and in this chapter a comprehensive study of various techniques of fake image/video creation and detection is described. It also presents a survey on various benchmark datasets used by researchers for fake image/video detection. The survey can be a useful contribution for the research community to develop a new method/model for fake detection, thereby overcoming the restrictions of the traditional methods.

Chapter 8 analyses two types of content on the blockchain: centralized and decentralized. On centralized video platforms, the platform owner controls most of the content uploaded, rather than the creator. However, some content creators post low-quality content in exchange for free cryptocurrencies, creating a cryptocurrency algorithm that demotivates other content creators. In contrast, decentralized blockchain-based video platforms aim to lessen ad pressure and eliminate intermediaries. On video platforms, copyright violations and the unauthorized dissemination of protected information are also significant issues. Copyright protection, illegitimate access restriction, and legitimate dissemination of video files are necessary to guarantee that authors’ original output is appropriately compensated.

Chapter 9 begins with the fundamentals of blockchain and mobile edge computing (MEC). Integrating new technologies like blockchain and MEC is seen as a potential paradigm for managing the voluminous amounts of data produced by today’s pervasive mobile devices and subsequently powering intelligent services. With blockchain technology, one can boost the safety of existing MEC systems by using decentralized, immutable, secure, private, and service-efficient smart contracts. These smart contracts fall into three broad categories: public blockchains, consortium blockchains, and private blockchains. Moreover, this chapter discusses the classification and current defence mechanisms of security threats. Potential solutions to MEC’s main security challenges are then discussed. Following that, the authors present a classification to assist developers of various architectures in selecting an appropriate platform for specific applications, as well as insights into potential research directions. Finally, the authors present key blockchain and MEC convergence features, followed by conclusions.

Chapter 10 gives a brief overview of various image steganography techniques in the spatial domain and the transform domain, with their advantages and disadvantages. It also introduces the idea of drawing out the embedded data from the cover object, called steganalysis.

Sumit Kumar Mahana
National Institute of Technology, Kurukshetra, India

Rajesh Kumar Aggarwal
National Institute of Technology, Kurukshetra, India

Surjit Singh
Thapar Institute of Engineering and Technology, India


Chapter 1

3D Data Security:

Robust 3D Mesh Watermarking Approach for Copyright Protection

Imen Fourati Kallel, Ecole Nationale d’Electronique et des Télécommunications de Sfax (ENET’Com), Tunisia
Ahmed Grati, Ecole Nationale d’Electronique et des Télécommunications de Sfax (ENET’Com), Tunisia
Amina Taktak, Ecole Nationale d’Electronique et des Télécommunications de Sfax (ENET’Com), Tunisia

ABSTRACT

Three-dimensional data reveals more explicative information and more realistic visualization than two-dimensional data. This explains the remarkable growth in the use of 3D data in different fields of application, which also increases the risk of illegal use of data and piracy. Since its appearance, digital watermarking has been an essential solution to attest the identity of the owner, control illegal reproduction, and protect the copyright of the 3D object. The current work is about designing a robust 3D object watermarking technique in order to protect copyrights associated with triangular polygon meshes. The approach is based on a watermark’s insertion in the spatial domain. The findings demonstrate that the proposed 3D object watermarking method not only meets the need for imperceptibility but also shows similar or even better robustness compared with other commonly used 3D object watermarking approaches in the literature.

DOI: 10.4018/978-1-6684-6864-7.ch001
Copyright © 2023, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


INTRODUCTION

Nowadays, three-dimensional data have invaded several fields of activity, including medical imaging, cultural heritage, industry, and video games. This type of data carries more significant information than two-dimensional data. In this respect, the manipulation, visualization, and transmission of three-dimensional objects have continuously been increasing in recent years, which makes the protection of 3D object copyright vital (Beugnon, 2022).

Watermarking (Corsini, 2003) has long been an efficient solution for copyright protection. It consists of inserting a secret message into a multimedia medium in a robust and invisible way. This message, also called a “signature” or “watermark”, proves the ownership of the content. In the present study, 3D object watermarking (Garg, 2022) is a device for marking the authorized distributions of the original 3D object with different signatures. In the event of leaks or illegal distributions, the owner can identify the source of the leak or the copy that has been pirated. Equally, the watermark is effective evidence for proving ownership of the 3D objects.

In this chapter, a robust 3D object watermarking technique is developed for copyright protection. The same signature is inserted several times in disjoint regions of the same 3D object in order to distinguish it from illegal copies. Accordingly, the redundancy of the signature ensures more resistance against the various manipulations and attacks. The work presented in this chapter explores different ways of implementing digital secret-writing techniques on a very particular type of medium: triangulated mesh surfaces, which are the most practical representation with regard to their simplicity, flexibility, and availability.

This chapter is composed of four sections. The first section is an overview of three-dimensional data accompanied by a general idea of digital watermarking. The second reviews related work on 3D object watermarking. The third focuses on the newly suggested approach, with a detailed examination of both the insertion and detection techniques. The fourth summarizes the main findings with an evaluation of the obtained performance based on the suggested method; an in-depth comparison with other algorithms is also part of this last section. The chapter ends with a general conclusion of the whole work.


BACKGROUND

3D objects can be represented, constructed, and processed in several ways: using point clouds, slices, voxels, polygonal meshes, or a collection of parametric curves (NURBS: Non-Uniform Rational B-Splines). The most used approaches are point clouds (Yang, 2022), slicing, and surface meshes.

The first approach is the point cloud. It is the most natural representation of a 3D object, consisting of a list of points with their Cartesian coordinates. These points represent the geometry of the object, with no connection to each other. The second technique is slicing: the 3D object is constructed by merging a series of parallel two-dimensional images, or slices, defined with respect to a frame of reference following the model. Another technique for the representation, construction, and processing of 3D models is the surface mesh. It is the easiest way to represent a 3D object: the object is described as a set of polygons. Since polygons are easily partitioned into triangles, 3D watermarking applications mainly focus on triangular surface meshes. A triangular surface mesh ℳ contains a set of triangles connected by their common edges or vertices, which together form a surface (Bostch, 2006). A triangular surface mesh can be represented by a couple:

ℳ = (v, κ)  (1)

where κ represents the topology and v refers to the set of N_v vertices constituting the geometry information of the mesh. Each vertex v_i is described by its 3D coordinates (x_i, y_i, z_i):

v = {v_i = (x_i, y_i, z_i), i ∈ {1, 2, …, N_v}}  (2)

The edges and faces signify the adjacency relationships between vertices; they represent the mesh’s connectivity information, or topology. Figure 1 below shows examples of the most used 3D representations, notably the triangular mesh, the point cloud, and the slice representation, which is widely used in the medical field.


Figure 1. Most used 3D representations: (a) triangular mesh representation, (b) point clouds 3D representation, (c) slices representation

(Almeida, 2021)
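To make the ℳ = (v, κ) notation of Equations (1) and (2) concrete, the short sketch below (ours, not part of the chapter) stores a toy tetrahedron as a vertex array and a face-index array and computes each vertex’s distance to the mesh centroid, a quantity that several of the watermarking methods discussed later modulate. The array names V and K are our own choices.

```python
import numpy as np

# Geometry v: one row per vertex (x_i, y_i, z_i); topology kappa: triangles as vertex indices.
V = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
K = np.array([[0, 1, 2],
              [0, 1, 3],
              [0, 2, 3],
              [1, 2, 3]])

def vertex_norms(vertices):
    """Distance of every vertex to the mesh centroid (gravity center)."""
    centroid = vertices.mean(axis=0)
    return np.linalg.norm(vertices - centroid, axis=1), centroid

norms, centroid = vertex_norms(V)
print("centroid:", centroid)
print("vertex norms:", norms.round(4))
print("faces incident to each vertex:", np.bincount(K.ravel()))
```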

Mesh representation is an appropriate way to represent 3D objects, even complex shapes, and any other representation, such as voxels or point clouds, is easily converted into a mesh. Various 3D mesh storage file formats have been created (McHenry, 2008). The most used ones are OFF (Object File Format), VRML (Virtual Reality Modeling Language), OBJ (Wavefront Object files), SMF (Simple Model Format), and PLY (Polygon File Format) (Bourke, 2009).
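As an illustration of how simple these formats are, the sketch below reads the plain ASCII OFF layout (a header line, a counts line, then the vertex and face records). It is a minimal reader written for this discussion, assuming a well-formed file, and it ignores OFF variants carrying colors or comments.

```python
import numpy as np

def read_off(path):
    """Minimal reader for ASCII OFF files: returns (vertices, faces)."""
    with open(path) as f:
        tokens = f.read().split()
    assert tokens[0] == "OFF", "not an OFF file"
    n_vertices, n_faces = int(tokens[1]), int(tokens[2])  # tokens[3] is the edge count, unused
    pos = 4
    vertices = np.array(tokens[pos:pos + 3 * n_vertices], dtype=float).reshape(n_vertices, 3)
    pos += 3 * n_vertices
    faces = []
    for _ in range(n_faces):
        k = int(tokens[pos])                               # vertices in this polygon (3 for triangles)
        faces.append([int(t) for t in tokens[pos + 1:pos + 1 + k]])
        pos += 1 + k
    return vertices, faces
```

For a triangle mesh, every returned face has exactly three indices, giving the (v, κ) pair of Equation (1); the file name passed to read_off is of course whatever mesh is at hand.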

Digital watermarking is the technique of embedding a secret watermark into a 3D object so that it can be extracted at reception, assuring the security of the 3D object. Watermarks are considered either robust (Ben Amar, 2012) or fragile, depending on the application requirements. Fragile, tamper-proofing watermarking is used to verify integrity or content authenticity; it highlights the areas that have been altered or tampered with (Motwani, 2010), (Kallel, 2009). In copyright protection applications, digital watermarking provides copyright owners with the means to protect their intellectual property rights; the inserted watermark should be imperceptible and robust against different attack types. In data hiding applications (Solak, 2020), watermarking aims to hide a primary message (the watermark) in another, secondary message (the cover 3D object). The secondary message must remain visually unchanged, and the inserted message must be completely invisible but accessible to anyone with the secret information (key) allowing its extraction. A large capacity is the most important requirement for this type of application.

Watermarking techniques can also be classified as blind or non-blind, based on the information needed during the extraction phase. In a blind watermarking method, the original 3D object is not required in the extraction process, whereas a non-blind watermarking method (Ashoub, 2018) requires the original 3D object at the extraction phase. Watermarking techniques are further subsumed under two main clusters based on the embedding domain: spatial or transform domain. Spatial domain techniques directly alter the vertices’ coordinates or other geometric primitives, whereas transform domain techniques embed the signature into the coefficients obtained after a transformation such as a wavelet (Yin, 2001) or spectral analysis (Ohbuchi, 2001). Watermarking methods in the transform domain are usually more robust against noise addition; nevertheless, they are more complex and less robust against cropping attacks.

Capacity, imperceptibility, and robustness are the requirements of a 3D object watermarking method. These requirements depend on the application context and on the type of 3D object. In this chapter, a non-blind and robust 3D watermarking technique is presented, which protects the copyright of 3D objects with slight alterations to the original 3D object vertices’ coordinates.
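As a toy illustration of spatial-domain embedding through slight alterations of the vertex coordinates, and of the difference between blind and non-blind extraction, the sketch below nudges secretly selected vertices slightly towards or away from the centroid to encode bits, and recovers them by comparison with the original object (hence non-blind). The selection key and strength parameter alpha are invented for the example; this is neither the method proposed in this chapter nor any scheme from the literature.

```python
import numpy as np

def embed_bits_spatial(vertices, bits, key=42, alpha=1e-3):
    """Toy spatial-domain embedding: move chosen vertices radially by +/- alpha."""
    rng = np.random.default_rng(key)                 # secret key drives the vertex selection
    idx = rng.choice(len(vertices), size=len(bits), replace=False)
    centroid = vertices.mean(axis=0)
    directions = vertices[idx] - centroid
    directions /= np.linalg.norm(directions, axis=1, keepdims=True) + 1e-12
    signs = np.where(np.asarray(bits) > 0, 1.0, -1.0)[:, None]
    watermarked = vertices.copy()
    watermarked[idx] += alpha * signs * directions   # slight alteration of coordinates
    return watermarked, idx

def extract_bits_spatial(original, watermarked, idx):
    """Non-blind extraction: compare radial distances of the marked vertices."""
    centroid = original.mean(axis=0)
    d_orig = np.linalg.norm(original[idx] - centroid, axis=1)
    d_mark = np.linalg.norm(watermarked[idx] - centroid, axis=1)
    return (d_mark > d_orig).astype(int)
```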

The embedded watermark should satisfy three essential constraints (Sharma, 2022):

• Robustness: it reflects the watermark’s resistance to certain manipulations and attacks, so that the inserted mark can be recovered.
• Imperceptibility: it reflects the similarity between the original cover 3D object and the watermarked one. The watermark should be invisible; the distortions introduced during watermarking must be kept as low as possible in order to guarantee that no degradation is perceived on the original 3D object.
• Capacity: it represents the amount of information that can be hidden within the cover 3D object. This characteristic often depends on the complexity of the 3D object, which, in turn, is directly dependent on the number of vertices or faces in the 3D cover object, as well as on the application targeted by the watermarking algorithm.

Figure 2. Watermarking trade-off among robustness, imperceptibility and capacity

There is a trade-off among robustness, imperceptibility, and capacity, as shown in Figure 2. On the one hand, if robustness is improved by increasing the watermark’s length, imperceptibility is reduced because more vertices are modified. On the other hand, if high imperceptibility is targeted, robustness cannot be assured because fewer vertices are watermarked. Therefore, there is a need for a watermarking method that is robust, resists various attacks, avoids visual distortion of the 3D object, and prevents the attacker from determining the quantity and location of the inserted signature. In addition to the previous constraints, other criteria may also be valuable:

• Cost of the algorithm: the computation time is an important criterion for certain applications. Real-time implementation requires watermarking algorithms that are computationally inexpensive.
• Encryption and key of the watermark: in order to ensure the security of a watermarking algorithm, it is strongly advisable that the insertion and extraction of the mark cannot be reproduced by the public. For this reason, digital watermarking algorithms rely on a cryptographic key.
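Imperceptibility is typically reported by comparing the original and watermarked geometry. The sketch below computes two simple vertex-wise figures, the root-mean-square error and the maximum displacement, assuming the two meshes share the same connectivity and vertex ordering; perceptual measures used in the 3D watermarking literature (such as the Hausdorff distance or MSDM) are more faithful but outside this sketch.

```python
import numpy as np

def distortion_metrics(original, watermarked):
    """Vertex-wise RMSE and maximum displacement between two same-connectivity meshes."""
    displacement = np.linalg.norm(watermarked - original, axis=1)
    return {"rmse": float(np.sqrt(np.mean(displacement ** 2))),
            "max": float(displacement.max())}
```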

Following Sharma’s (2022) suggestion, a robust and blind 3D watermarking method should be capable of detecting the embedded watermark while remaining resilient against several types of attacks (Cayre, 2004). It is important to note that these attacks are not necessarily intentional falsifications of the 3D object for illegal use; they can also be transformations aiming at adapting the 3D object to personal use. According to Voloshynovskiy’s classification (Voloshynovskiy, 2001), 3D watermarking attacks can be subsumed under four different classes: geometrical, removal, cryptographic, and protocol attacks.

Geometrical attacks can either be non-intentional transformations, very commonly used in computer graphics in order to position a 3D object inside a scene (such as scaling, rotation, and translation), or be performed by an attacker to tamper with the synchronization between the watermark detector and the cover 3D object (such as additive noise and cropping). Additive noise is an attack that randomly perturbs mesh vertices and modifies the geometry of the 3D object. In a cropping attack, an attacker deletes a part of the 3D object.

Removal attacks are attempts to completely remove a watermark from the cover 3D object. This attack class includes image-processing-like operations such as noise removal; for example, mesh smoothing can be obtained by mesh filtering, which removes noise from the mesh.

Cryptographic attacks specifically target the security of the watermarking technique and attempt to crack the watermark key, which encloses the embedding information.

Protocol attacks exploit the watermarking system itself. These attacks are exemplified by copy and ambiguity attacks. A copy attack may be executed by an attacker who tries to guess the embedded watermark from many watermarked 3D objects in order to copy it from one 3D object to another; as a result, 3D objects that are not watermarked will be considered as protected. Ambiguity attacks are attempts to confuse the detector by embedding several additional fake watermarks, to the point that it becomes impossible to distinguish which one is genuine.
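For robustness experiments, such attacks are usually simulated directly on the vertex and face arrays. The sketch below implements two of the geometrical attacks just described, additive Gaussian noise and cropping along one axis; the noise strength and the kept fraction are arbitrary illustration values, not ones used in this chapter.

```python
import numpy as np

def additive_noise_attack(vertices, strength=0.002, seed=0):
    """Perturb every vertex with zero-mean Gaussian noise (geometry attack)."""
    rng = np.random.default_rng(seed)
    return vertices + rng.normal(scale=strength, size=vertices.shape)

def cropping_attack(vertices, faces, keep_ratio=0.8):
    """Keep only vertices below a cut plane on the x-axis and drop faces that
    reference removed vertices, re-indexing the survivors."""
    cut = np.quantile(vertices[:, 0], keep_ratio)
    keep = np.where(vertices[:, 0] <= cut)[0]
    remap = -np.ones(len(vertices), dtype=int)
    remap[keep] = np.arange(len(keep))
    kept_faces = [remap[f].tolist() for f in faces if np.all(np.isin(f, keep))]
    return vertices[keep], kept_faces
```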


LITERATURE REVIEW

In this section, the most recent research related to the watermarking of 3D objects is reviewed. Multiple techniques in the spatial domain have been suggested. In spatial watermarking techniques, the embedding process adjusts the vertex positions and the geometrical properties.

Benedens (2000) proposed the “Vertex Flood Algorithm”. This method adjusts the vertex position by modifying the distance between the watermarked vertex and the gravity center of a reference triangle. The algorithm is efficient at hiding a large amount of information in the mesh. In addition, it does not only alter the geometry of the 3D object but also ensures copyright protection and even the detection of illegal copies. This method is at the core of the “Triangle Flood” algorithm proposed by the same authors, which associates the topological and geometrical information to generate a path over the mesh triangles and hides the mark in the heights of the triangles by altering the positions of their vertices.

In (Yu, 2003), Yu et al. present a 3D object watermarking method based on a watermark’s insertion in the spatial domain. First, to assure more security, the authors scramble the 3D object’s vertices and classify them into different groups using a secret key. Second, the distances from the vertices (in each group) to the 3D object’s centroid are adjusted to incorporate a watermark bit. The quality distortion of the watermarked 3D object, the imperceptibility of the inserted signature, and the robustness are controlled by a strength coefficient. The proposed method is non-blind, and the watermark is recovered by comparing the dissimilarity between the original and the watermarked 3D objects. The robustness is mainly against signal-processing attacks such as additive noise and cropping. Nevertheless, Yu’s method seems to be fragile to geometric transformations such as affine ones.

In the watermarking algorithm proposed by Zafeiriou (2005), each watermark bit is inserted in several vertices in order to achieve robustness against mesh simplification. First, the cover 3D object is normalized: the model is translated so that its mass center coincides with the origin of the coordinate system, and the principal component is aligned with the z-axis in order to achieve rotation invariance. After that, the vertex coordinates are converted from Cartesian (x, y, z) into spherical coordinates (r, θ, φ). The vertices are classified into different groups according to their θ values. In this work, the authors assume that the distribution of the r component within each group is Gaussian, and the watermark is inserted by modifying the variance of this Gaussian distribution, with only one bit inserted per group. Zafeiriou’s method seems robust to translation, rotation, mesh simplification, and noise addition. However, it is fragile to cropping.
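Several of the reviewed methods (Yu, Zafeiriou, and the schemes discussed next) share the same preliminaries: convert the vertices to spherical coordinates around the object’s center and group them by one angular component. The sketch below shows only these generic preliminary steps, with an arbitrarily chosen number of bins; it is not a re-implementation of any cited algorithm.

```python
import numpy as np

def to_spherical(vertices):
    """Convert centered Cartesian vertices to (r, theta, phi)."""
    centered = vertices - vertices.mean(axis=0)
    r = np.linalg.norm(centered, axis=1)
    theta = np.arccos(np.clip(centered[:, 2] / np.where(r == 0, 1, r), -1, 1))  # polar angle
    phi = np.arctan2(centered[:, 1], centered[:, 0])                            # azimuth
    return r, theta, phi

def group_by_theta(theta, n_bins=16):
    """Assign each vertex to one of n_bins groups according to its theta value."""
    edges = np.linspace(0, np.pi, n_bins + 1)
    return np.clip(np.digitize(theta, edges) - 1, 0, n_bins - 1)
```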


Cho (2007) presented a 3D watermarking method combining the ideas of Zafeiriou and Yu. The authors insert the watermark into the 3D object by making use of statistical parameters (the variance or the mean) of the vertex norm distribution. A Cartesian-to-spherical conversion is applied to the vertex coordinates; only the vertex norm values are considered for the embedding process, while the two other spherical coordinates are kept intact. This method is generally robust against various attacks. Nevertheless, Cho’s method fails in the case of a 3D object of very small size or in the case of computer-aided design (CAD) models, which have absolutely flat surfaces. In addition, this method produces apparent ripples on the surface of the watermarked 3D object.

In (Hu, 2009), Cho’s method is extended. The authors use quadratic programming to minimize the mean square error between the original and the watermarked 3D object under several constraints. In this respect, the visual quality of the watermarked 3D objects is preserved while robustness is improved. Hu’s method is more resistant to Gaussian noise than Cho’s. However, it has its own weaknesses, especially with large 3D objects, for which the computational complexity of quadratic programming increases.

In (Wang, 2011), the authors propose a non-blind and robust 3D watermarking method in which a binary random sequence is embedded in the spatial domain. First, a Cartesian-to-spherical conversion is applied to the vertices’ coordinates of the cover 3D object. Second, the vertices are classified into different groups according to the angle component of the corresponding spherical coordinate. Next, the watermark’s bits are embedded by altering vertex norms, defined as the distances between the vertices and the gravity center of the 3D object. The authors thus use two different spherical components for the classification of the vertices and for the insertion process, which avoids the main problem of de-synchronization. To assure more robustness, each watermark bit is embedded redundantly into different vertex groups. Wang’s method is robust against various attacks, but it is non-blind: the original cover 3D object is required in the extraction process.

Zhan (2014) proposes a 3D object watermarking method whose embedding process is based on modulating the root mean square (RMS) curvature fluctuation. First, the authors calculate the root mean square curvature for each vertex. Second, the 3D object’s vertices are separated into bins according to the values of the fluctuation. Zhan’s method is blind, and it shows suitable robustness against several attacks.

El Zein (2016) suggests a non-blind, robust, and intelligent watermark embedding technique. The K-means algorithm is used to classify the 3D object vertices into three different clusters (high, medium, and low) based on a feature vector. The authors calculate the orientation of the surface relative to the average normal of the triangular faces forming the 1-ring neighborhood of each vertex to determine the angle values; the feature vector is formed by the deduced angles. Accordingly, the angle will be high in magnitude if the region represents a peak, and small in magnitude if the region is flat. The medium-cluster vertices are considered the best positions to embed the watermark bits so as to achieve high imperceptibility. The watermark embedding process modifies the 3D object’s vertex positions. The proposed method is non-blind: the original 3D object and the location of the watermarked vertices are required to extract the watermark bits by calculating the vertex difference between the watermarked and the original 3D object. The approach is robust against different attacks such as smoothing, noise addition, and cropping, and it also shows good imperceptibility. Within the same context, the same authors propose in (El Zein, 2017) a 3D object watermarking method that uses the Fuzzy C-Means (FCM) clustering technique to classify the 3D object’s vertices into appropriate and inappropriate selections and to imperceptibly embed the watermark into the cover 3D object. The robustness of the proposed method is checked against different attacks such as noise addition, cropping, smoothing, and affine transformation. However, the computational complexity of Fuzzy C-Means slows down the process.

Al-Qudsy (2018) proposes a spatial watermarking method that alters the geometric primitives of the 3D object. The authors use the mean curvature (MC) to classify the surface vertices. This classification is aimed not only at selecting the appropriate vertices for embedding the watermark but also at satisfying both the imperceptibility and the robustness of the proposed method. The Al-Qudsy technique is blind: the original 3D object is not necessary for extracting the embedded watermark.

Farrag (2020) introduces a robust and blind 3D mesh watermarking approach. It relies on a mesh traversal algorithm based on the shortest distances between neighboring vertices of the mesh. The traversal algorithm always generates the same traversing sequence over a particular mesh, with a length equal to the number of vertices in the 3D object. The generated traversal order is then encrypted to ensure more security. In the embedding process, if the embedded watermark bit is 1, the authors change the fourth or fifth digit after the decimal point of the vertex’s Cartesian coordinates to an even number; if the embedded watermark bit is 0, that digit is changed to an odd number. This approach is robust against different types of attacks such as translation, rotation, and smoothing, and it assures a high insertion capacity.

In (Mostafa, 2022), a robust data hiding approach for 3D objects is suggested. A gray code sequence is first created and converted into decimal form. The generated sequence of decimal numbers is used as the indices of the vertices that will be watermarked. The secret message to be embedded is converted into ASCII code and subsequently into its 8-bit binary representation to produce a stream of binary bits. This stream is then encrypted using either AES-128 (Abdullah, 2017) or Blowfish (Singh, 2013) to produce a binary encrypted stream. Next, the bit stream is concatenated with an initialization vector and converted back into decimal representation. Finally, the message stream is embedded into each vertex identified by the gray code sequence. This approach is robust against different types of attacks while assuring high capacity.

Wang (2022) uses a deep learning network, namely a graph convolutional network (GCN) containing five cascaded graph residual blocks, each followed by batch normalisation and ReLU, which extract a feature map from the input vertices. The watermark is encoded, expanded along the number of vertices, and concatenated with the input vertices and the extracted mesh feature. Finally, these are fed into an aggregation module, which includes two graph residual blocks, to produce the 3D coordinates of the watermarked vertices. During the extraction phase, the watermarked 3D object passes through the same feature extraction module to acquire the mesh feature, which then goes through an average pooling layer and two fully connected layers that extract the embedded signature. Although Wang’s method provides noticeable robustness and visual quality compared to other 3D watermarking methods, it is very complex: deep learning techniques involve a large amount of computation and are time-consuming, and the convolution operation on 3D meshes is strenuous because of their irregularity.

SUGGESTED METHOD

In this section, the proposed 3D object watermarking scheme is introduced, together with the detailed steps of the watermark embedding and extraction algorithms. Watermarking gives copyright owners a tool to protect their intellectual property and has become a key weapon against the piracy of multimedia content. The signature insertion is performed in the spatial domain. This choice is motivated by its simplicity, its robustness against geometric transformations (Wang, 2007) and its speed in terms of algorithm execution time. The 3D watermarking scheme consists of two phases: embedding and extraction. Figure 3 illustrates the block diagram of the watermark embedding process.


Figure 3. Watermark embedding process

Before initiating the watermark embedding phase, a set of pre-treatments is applied to the object to be marked. The first is the normalization of the 3D object, whose purpose is to obtain an invariant space that is immune to geometric attacks. Such attacks modify the geometric part of the object by changing the vertex positions, in particular the translations, rotations and scalings known as affine transformations. Normalization involves three consecutive operations. First, the mesh is translated so that its center of mass coincides with the origin of the object coordinate system; this step ensures the robustness of the watermark to translation. Then, the mesh is rotated so that its main axes coincide with those of the coordinate system. Finally, the 3D object is uniformly scaled so that it fits in a unitary box, which imposes a standard norm and ensures robustness to all scaling operations.

TRANSLATION

This first step aims to ensure the robustness of the watermark to translation (Kalivas, 2003). It consists of moving the 3D object so that its center of mass coincides with the origin of the reference frame. To do so, the coordinates of the center of mass M are calculated using equation (3):

$$M = \frac{1}{N_v} \sum_{i=1}^{N_v} V_i \qquad (3)$$

Where Vi denotes the coordinates of vertex i and Nv the number of vertices of the mesh. Next, the mesh is translated according to the following equations:

$$\begin{cases} x'_i = x_i - M_x \\ y'_i = y_i - M_y \\ z'_i = z_i - M_z \end{cases} \qquad (4)$$

Where (Mx, My, Mz) are the coordinates of the center of mass M, (xi, yi, zi) are the original coordinates of the vertex Vi, and (xi', yi', zi') are the coordinates of the same vertex Vi after translation. By completing this step, the robustness of the watermark against translation is ensured.

Figure 4. Bunny model attacked by a translation 1 (a), bunny model normalized (b), bunny model attacked by a translation 2 (c), bunny model normalized (d)


From the previous figure, the normalized Bunny model keeps the same vertex coordinates; consequently, whatever translation attack the 3D object undergoes, it takes an invariant standard position. Subsequently, robustness to translation attacks is ensured.
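To make this step concrete, the following minimal Python sketch (an illustration only, not the authors' implementation; the chapter's experiments were performed in MATLAB) centers a vertex array on its center of mass, following equations (3) and (4).

import numpy as np

def center_on_origin(vertices):
    """Translate an (Nv, 3) vertex array so its center of mass sits at the origin."""
    M = vertices.mean(axis=0)      # center of mass, equation (3)
    return vertices - M, M         # translated vertices, equation (4); M is kept for later de-normalization

# tiny usage example with an arbitrary triangle
V = np.array([[1.0, 2.0, 3.0], [2.0, 0.0, 1.0], [0.0, 1.0, 2.0]])
V_translated, M = center_on_origin(V)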

Rotation

To achieve the robustness of the watermark to rotation (Chen-Chung, 2011), a first rotation of the translated 3D model about the z-axis is performed by the following equations so that the feature edge vector AB lies in the xz-plane:

$$\begin{pmatrix} x_{i,1} \\ y_{i,1} \\ z_{i,1} \end{pmatrix} = \begin{pmatrix} \cos\gamma & \sin\gamma & 0 \\ -\sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x'_i \\ y'_i \\ z'_i \end{pmatrix} \qquad (5)$$

$$\gamma = \tan^{-1}\!\left(\frac{y_B - y_A}{x_B - x_A}\right) \qquad (6)$$

With (xi,1, yi,1, zi,1) the coordinates of vertex Vi after rotation around the z-axis, γ the angle between AB and the xz-plane, and (xi', yi', zi') the coordinates of the translated vertex. Then, a second rotation around the y-axis is performed on (xi,1, yi,1, zi,1), the results of the previous step, in order to have AB coincide with the positive x-axis, in accordance with the equations below:

$$\begin{pmatrix} x_{i,2} \\ y_{i,2} \\ z_{i,2} \end{pmatrix} = \begin{pmatrix} \sin\varphi & 0 & \cos\varphi \\ 0 & 1 & 0 \\ -\cos\varphi & 0 & \sin\varphi \end{pmatrix} \begin{pmatrix} x_{i,1} \\ y_{i,1} \\ z_{i,1} \end{pmatrix} \qquad (7)$$

$$\varphi = \tan^{-1}\!\left(\frac{\sqrt{(x_B - x_A)^2 + (y_B - y_A)^2}}{z_B - z_A}\right) \qquad (8)$$

With (xi,2, yi,2, zi,2) the coordinates of vertex i after rotation around the y-axis and φ the angle between AB and the positive z-axis. Finally, a last rotation around the x-axis is performed so that the edge AB remains on the positive x-axis and the triangle ∆ABC rests completely on the xy-plane; see the equations below:

$$\begin{pmatrix} x_{i,3} \\ y_{i,3} \\ z_{i,3} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\delta & \sin\delta \\ 0 & -\sin\delta & \cos\delta \end{pmatrix} \begin{pmatrix} x_{i,2} \\ y_{i,2} \\ z_{i,2} \end{pmatrix} \qquad (9)$$

$$\delta = \tan^{-1}\!\left(\frac{z_{c,3}}{y_{c,3}}\right) \qquad (10)$$

With (xi,3, yi,3, zi,3) the coordinates of vertex i after rotation around the x-axis, δ the elevation angle of the triangle ∆ABC after rotation around the y-axis, and (xc,3, yc,3, zc,3) the coordinates of the vertex C of triangle ∆ABC after rotation around the y-axis. In the following figure, the Bunny model is manipulated by different rotation attacks and is then normalized through the application of the currently suggested approach.


Figure 5. Bunny model attacked by a rotation around the y-axis of 30° (a), bunny model attacked by a rotation around the z-axis of 60° (b), bunny model attacked by a rotation around the x-axis of 90° (c), bunny model normalized (d), bunny model normalized (e), bunny model normalized (f)

From the figure above, it is concluded that this method offers more robustness to rotation attacks.
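The following Python sketch illustrates the three rotations of equations (5) to (10), under the assumption that the feature vertices A, B and C have already been chosen (their selection is outside this sketch). It is a simplified reading of the reconstructed equations, not the authors' code.

import numpy as np

def rotation_normalize(vertices, A, B, C):
    """Apply the three rotations of equations (5)-(10) to an (Nv, 3) vertex array."""
    # rotation about z so that AB lies in the xz-plane, equations (5)-(6)
    gamma = np.arctan2(B[1] - A[1], B[0] - A[0])
    Rz = np.array([[ np.cos(gamma),  np.sin(gamma), 0.0],
                   [-np.sin(gamma),  np.cos(gamma), 0.0],
                   [0.0, 0.0, 1.0]])
    vertices = vertices @ Rz.T
    A, B, C = Rz @ A, Rz @ B, Rz @ C
    # rotation about y so that AB coincides with the positive x-axis, equations (7)-(8)
    phi = np.arctan2(np.hypot(B[0] - A[0], B[1] - A[1]), B[2] - A[2])
    Ry = np.array([[ np.sin(phi), 0.0,  np.cos(phi)],
                   [0.0, 1.0, 0.0],
                   [-np.cos(phi), 0.0,  np.sin(phi)]])
    vertices = vertices @ Ry.T
    A, B, C = Ry @ A, Ry @ B, Ry @ C
    # rotation about x so that triangle ABC rests on the xy-plane, equations (9)-(10)
    delta = np.arctan2(C[2], C[1])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0,  np.cos(delta), np.sin(delta)],
                   [0.0, -np.sin(delta), np.cos(delta)]])
    return vertices @ Rx.T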

Mesh in Unitary "Box"

This step consists of enclosing the 3D object in a unitary box, which imposes a standard norm and guarantees robustness to practically all scaling operations. This result is achieved by applying the following equations to all the vertex coordinates (xi,3, yi,3, zi,3).


$$\begin{cases} x_{i,4} = \dfrac{2\,(x_{i,3} - \min(x_{i,3}))}{\max(x_{i,3}) - \min(x_{i,3})} - 1 \\[2mm] y_{i,4} = \dfrac{2\,(y_{i,3} - \min(y_{i,3}))}{\max(y_{i,3}) - \min(y_{i,3})} - 1 \\[2mm] z_{i,4} = \dfrac{2\,(z_{i,3} - \min(z_{i,3}))}{\max(z_{i,3}) - \min(z_{i,3})} - 1 \end{cases} \qquad (11)$$

Where (xi,3, yi,3, zi,3) are the coordinates of the vertex i after the rotation step and (xi,4, yi,4, zi,4) are the coordinates of the vertex Vi after the unitary-box scaling step.

Figure 6. Bunny model attacked by a scaling operation (a), bunny model normalized (b)

Figure 6 shows the normalized Bunny model. After the normalization, a Cartesian-to-cylindrical conversion is performed: the mesh vertex coordinates, given by default in Cartesian form, are converted into cylindrical ones. Each normalized vertex ViN with coordinates (xi,4, yi,4, zi,4) is thus represented in a cylindrical frame by (Ri, θi, Zi) (the value of Zi is preserved). This conversion is achieved by applying the following formulas:

$$\theta_i = \arctan\!\left(\frac{y_{i,4}}{x_{i,4}}\right), \qquad R_i = \sqrt{x_{i,4}^2 + y_{i,4}^2}, \qquad \text{where } 0 \le \theta_i \le 2\pi \qquad (12)$$

After the pre-processing step, the 3D object is robust against translation, rotation and rescaling, and it is represented in cylindrical coordinates, so the insertion process can be applied. The signature bits are inserted by moving the vertices according to equation (13).

$$R'_i = R_i + \varepsilon \times W(j) \qquad (13)$$

where R'i is the new position of the vertex, ε is the displacement factor, and W(j) is the watermark value, which depends on the bit S(j) of the binary signature to be inserted: if S(j) = 1 then W(j) = 1, otherwise W(j) = -1, with j = 1, ..., l, where l is the length of the signature to insert. The vertex displacement factor ε controls the intensity of the signature insertion and therefore governs the compromise between the imperceptibility and the robustness of the digital watermark. On the one hand, ε should be as small as possible so as to avoid any deformation of the watermarked 3D model and to ensure the imperceptibility of the mark. On the other hand, a high value of ε guarantees a correct and complete signature extraction, which means more robustness against any type of manipulation. An empirical study shows that a displacement factor between 1×10⁻³ and 2×10⁻² ensures an appropriate compromise between robustness and imperceptibility of the signature. Watermarking a vertex therefore consists either of increasing its R coordinate by ε if S(j) equals 1 or of decreasing it by ε if S(j) equals 0. The presented approach is based on the notion of redundancy: n copies of the signature are inserted into n different regions of the 3D object. The larger n is, the more robust the watermarking scheme becomes. However, this redundancy can affect the visual quality of the 3D object when a large number of signatures (a large amount of inserted information) is embedded. Faced with this compromise, n is chosen according to the following equation, where Nv is the number of vertices:

$$n = E(\log(N_v)) \qquad (14)$$

This choice ensures the compromise between robustness, imperceptibility and insertion capacity. In order to improve the performance of the signature decoding phase and to eliminate any conflict, the number of insertions n is chosen to be odd. To guarantee more robustness, an additional "Position" parameter is used: the vertices in the interior of the mesh are preferred to those on the exterior, which offers more robustness, especially against smoothing and noise addition. After the redundant insertion of the signature, the vertex parameters are converted back to the Cartesian coordinate system in order to save the new parameters of the mesh.
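As a rough illustration of the embedding stage, the Python sketch below converts normalized vertices to cylindrical coordinates, shifts the radial component by ε according to the signature bits (equation (13)), inserts n redundant copies (equation (14)) and converts back to Cartesian coordinates. For simplicity the redundant regions are taken as consecutive blocks of vertices, whereas the chapter selects regions using a secret key and the Position parameter; this is only a sketch of the idea, not the authors' implementation.

import numpy as np

def embed_signature(vertices, signature, eps=5e-3):
    """Embed a binary signature by shifting the cylindrical radius of vertices (eqs. (12)-(15))."""
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    theta = np.arctan2(y, x)                    # angular component, kept intact
    R = np.hypot(x, y)                          # radial component, carries the watermark
    n = int(np.log(len(vertices)))              # number of redundant copies, equation (14)
    if n % 2 == 0:                              # the scheme uses an odd number of insertions
        n += 1
    W = np.where(np.asarray(signature) == 1, 1.0, -1.0)   # W(j) = +1 / -1
    L = len(W)
    for k in range(n):                          # one copy per region (here: consecutive blocks)
        start = k * L
        if start + L > len(R):
            break
        R[start:start + L] += eps * W           # equation (13)
    # back to Cartesian coordinates, equation (15)
    return np.column_stack((R * np.cos(theta), R * np.sin(theta), z))

# usage with a stand-in for a normalized mesh and an arbitrary 64-bit signature
verts = np.random.rand(5000, 3) * 2 - 1
sig = np.random.randint(0, 2, 64)
watermarked = embed_signature(verts, sig)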

Post-Processing

The conversion from the cylindrical coordinate system back into Cartesian coordinates is performed according to the following equations.

$$x_i = R_i \cos\theta_i, \qquad y_i = R_i \sin\theta_i \qquad (15)$$

Then, a reverse normalization (de-normalization) is performed to restore the object to its original state. First, a reverse boxing step returns the 3D object to its original size; it is the inverse of the boxing step. All the vertex coordinates (xi, yi, zi) are repositioned to their initial positions, except for the watermarked vertices, which remain slightly moved.

$$\begin{cases} x'_i = \dfrac{(x_i + 1)\,(\max(x_i) - \min(x_i))}{2} + \min(x_i) \\[2mm] y'_i = \dfrac{(y_i + 1)\,(\max(y_i) - \min(y_i))}{2} + \min(y_i) \\[2mm] z'_i = \dfrac{(z_i + 1)\,(\max(z_i) - \min(z_i))}{2} + \min(z_i) \end{cases} \qquad (16)$$

Where (xi, yi, zi) are the Cartesian coordinates of the vertex i and (xi', yi', zi') are the coordinates of the vertex i after the first step of the de-normalization phase. Then, a reverse rotation step returns the object to its original orientation by applying the inverse of the rotation algorithm described in the preprocessing phase, knowing that the inverse of a rotation matrix is its transpose. The executed rotation equations are respectively presented as follows:

$$\begin{pmatrix} x_{i,1} \\ y_{i,1} \\ z_{i,1} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\delta & -\sin\delta \\ 0 & \sin\delta & \cos\delta \end{pmatrix} \begin{pmatrix} x'_i \\ y'_i \\ z'_i \end{pmatrix}, \quad
\begin{pmatrix} x_{i,2} \\ y_{i,2} \\ z_{i,2} \end{pmatrix} = \begin{pmatrix} \sin\varphi & 0 & -\cos\varphi \\ 0 & 1 & 0 \\ \cos\varphi & 0 & \sin\varphi \end{pmatrix} \begin{pmatrix} x_{i,1} \\ y_{i,1} \\ z_{i,1} \end{pmatrix}, \quad
\begin{pmatrix} x_{i,3} \\ y_{i,3} \\ z_{i,3} \end{pmatrix} = \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_{i,2} \\ y_{i,2} \\ z_{i,2} \end{pmatrix} \qquad (17)$$

Finally, a translation step returns the object to its original position by applying the following formulas.

$$\begin{cases} x_{i,4} = x_{i,3} + M_x \\ y_{i,4} = y_{i,3} + M_y \\ z_{i,4} = z_{i,3} + M_z \end{cases} \qquad (18)$$

Where (Mx, My, Mz) are the coordinates of the center of mass M, (xi,4, yi,4, zi,4) are the original coordinates of the vertex Vi, and (xi,3, yi,3, zi,3) are the coordinates of the same vertex Vi after the rotation step. The signature extraction phase, shown in figure 7, is a fundamental phase of the watermarking scheme. It consists of extracting the hidden signature from the 3D object in order to prove its ownership. Copyright protection is the main application of the proposed approach; the watermarking scheme is, therefore, non-blind and requires the original 3D object to reconstruct the signature of the watermarked model. More precisely, extracting the signatures requires, in addition to the original 3D object, knowledge of the various parameters of the scheme. These parameters are provided solely by using a secret key. After reading the parameters of the original and watermarked objects (vertices and faces), the two objects undergo a preprocessing procedure using the same modules described above: they are translated, rotated and placed in a unit box before being converted into cylindrical coordinates, in order to synchronize them in the same frame. From the parameters provided with the secret key (n, Position), the signature extraction function partitions the watermarked object into n regions and extracts the hidden signature from each region. The set of n obtained signatures is then used in the final decoding phase: the resulting signature is obtained by a bit-to-bit correlation of the n extracted signatures.

Figure 7. The proposed extraction scheme
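A correspondingly simplified extraction sketch is given below. It assumes that the two meshes have already been synchronized by the preprocessing described above and that the regions are the same consecutive blocks used at embedding time (the chapter derives the regions from the secret key and the Position parameter); it is an illustration, not the authors' code.

import numpy as np

def extract_signature(original, watermarked, sig_len, n):
    """Non-blind extraction: compare cylindrical radii region by region and majority-vote the bits."""
    R_o = np.hypot(original[:, 0], original[:, 1])
    R_w = np.hypot(watermarked[:, 0], watermarked[:, 1])
    diff = R_w - R_o                                  # positive shift -> bit 1, negative -> bit 0
    votes = np.zeros(sig_len)
    for k in range(n):                                # one redundant copy per region
        start = k * sig_len
        if start + sig_len > len(diff):
            break
        votes += np.sign(diff[start:start + sig_len])
    return (votes > 0).astype(int)                    # bit-to-bit majority decision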

ANALYSIS OF THE RESULTS

The experiments are done on an Intel Core i3-3217U processor with 8 GB of memory, using Matlab R2018. They have been performed on several 3D mesh objects that are widely used in research (Wang, 2010), among them Bunny (34835 vertices, 69666 facets), Venus (100759 vertices, 201514 facets), Dragon (50000 vertices, 100000 facets) and Cow (2904 vertices, 5804 facets). The 3D objects are used in the (.off) format. The Object File Format is a 3D file format used to represent and describe the geometry of a 3D object by specifying the polygons of the object surface. Polygons can include any number of vertices. OFF files are plain ASCII and start with the keyword OFF. The number of vertices, faces and edges is given on the next line. Vertices are defined by their Cartesian coordinates (x, y, z), one vertex per line. The faces are enumerated after the list of vertices, one face per line: the number of vertices of the face is specified, followed by the corresponding indices in the vertex list (a minimal example is sketched after this paragraph). The OFF format has the advantage of great simplicity and now comes in several variants. In order to prove the efficiency of the presented approach, the imperceptibility of the signature is evaluated first.
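As an illustration (not taken from the chapter), a minimal OFF file describing a single triangle could look as follows: the keyword, the counts of vertices, faces and edges, one vertex per line, then one face giving its vertex count followed by the vertex indices.

OFF
3 1 0
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0
3 0 1 2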

Imperceptibility of the Signature

The inserted signature should not affect the geometry of the object; accordingly, it is expected to be imperceptible to the human eye. The evaluation starts with a qualitative study, which consists of comparing the object before and after the insertion of the signature. Figure 8 shows the visual impact of the watermark embedding for the Bunny, Dragon, Venus and Cow 3D objects.

Figure 8. 3D original tested objects (a) Bunny, (b) Dragon, (c) Venus and (d) Cow. The respective watermarked 3D objects are presented from (e) to (h)

In the light of the obtained findings in figure 8, it is noteworthy that the original and the watermarked 3D objects are visually indistinguishable; they seem identical. The inserted signature is thus imperceptible to the naked eye. Indeed, a slight modification of the vertex positions modifies neither the geometry nor the original structure of the object. The next step is a quantitative study based on the VSNR (Vertex Signal-to-Noise Ratio) metric (Sharma, 2020), typically used to measure the disturbance brought to the 3D object by the signature insertion. The higher the value of the VSNR, the better the imperceptibility of the embedded watermark. The VSNR of each model is calculated using the following formula:

$$VSNR = 10 \log_{10}\!\left(\frac{\sum_{i=1}^{N_v} \left(x_i^2 + y_i^2 + z_i^2\right)}{\sum_{i=1}^{N_v} \left[(x'_i - x_i)^2 + (y'_i - y_i)^2 + (z'_i - z_i)^2\right]}\right) \qquad (19)$$

Where xi, yi and zi denote the coordinates of the vertex Vi of the cover object, x'i, y'i and z'i the coordinates of the same vertex Vi of the watermarked object, and Nv the total number of vertices of the 3D object. The VSNR measurements for the four 3D objects most used in the literature are shown in the following table (a small VSNR computation sketch follows the table).

Table 1. VSNR performance for 3D tested objects

3D Objects    Bunny    Dragon    Venus     Cows
VSNR (dB)     188.5    179.42    177.62    157.08
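As a reference point, the VSNR of equation (19) reduces to a few lines of Python; this is a minimal sketch, not the evaluation code used by the authors.

import numpy as np

def vsnr(original, watermarked):
    """Vertex Signal-to-Noise Ratio (equation (19)) between two (Nv, 3) vertex arrays, in dB."""
    signal = np.sum(original ** 2)                   # sum of x^2 + y^2 + z^2 over all vertices
    noise = np.sum((watermarked - original) ** 2)    # sum of squared coordinate differences
    return 10 * np.log10(signal / noise)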

The experimental results shown in Table 1 prove that the obtained VSNR values reflect a suitable quality of the watermarked 3D objects and a good imperceptibility of the embedded watermarks. Hence, the proposed approach preserves the visual quality and the general shape of the 3D watermarked objects. In addition, it can be observed that the imperceptibility results in terms of VSNR are not the same for all the tested 3D objects. This is due to the size, the shape and the nature of each of these 3D objects, which were selected to provide a diversity of mesh shapes: the Bunny presents many rounded faces, the Dragon presents a different vertex number and shape complexity, and the Cow is sensitive to deformation of its protruding parts, particularly the feet and the ears. For all tested 3D objects, the visual quality and the VSNR values are satisfactory.


Robustness of the Signature

Robustness is an important criterion for a watermarking system that protects copyrights, so the authors test the robustness of the presented method. The metric used is the Bit Error Rate (BER), defined as the ratio of the number of erroneous bits to the total number of embedded bits (Xiang, 2007); the lower the BER value, the more exact the extracted watermark (a short BER sketch follows Table 2). To evaluate the robustness of the proposed method, an attack is applied to the watermarked 3D object and an attempt is made to extract the inserted watermark. When no attack is applied, the watermark inserted into the 3D test object is successfully recovered, which is verified by a BER equal to 0. After that, the attacks most frequently used in 3D watermarking tests are carried out: translation, rotation, scaling, smoothing, noise addition and cropping, with different 3D test objects. Thanks to the normalization of the 3D object in the preprocessing step, the embedded watermark is not affected by affine transformation attacks such as translation, rotation and scaling, as shown in figure 4, figure 5 and figure 6: a BER of 0 is obtained for all affine transformation attacks.

Table 2. BER for bunny object under affine transformations attacks

Attack         Attack parameters             BER
Translation    0.5x                          0
Translation    0.5y                          0
Translation    0.5z                          0
Translation    0.5x + 0.5y + 0.5z            0
Rotation       60° x axis                    0
Rotation       30° y axis                    0
Rotation       90° z axis                    0
Rotation       30° x axis and 45° y axis     0
Scaling        0.5                           0
Scaling        1.5                           0
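For completeness, the BER metric used throughout the following tables can be sketched in Python as the fraction of mismatched bits; this is a generic illustration, not the authors' implementation.

import numpy as np

def bit_error_rate(inserted, extracted):
    """BER: fraction of extracted watermark bits that differ from the inserted ones."""
    inserted = np.asarray(inserted)
    extracted = np.asarray(extracted)
    return float(np.mean(inserted != extracted))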

The experimental results presented in table 2 corroborate that the presented method is robust against affine transformation attacks, since the 3D object is normalized before inserting the watermark. The robustness of the proposed approach against cropping is verified by trying to extract the watermark from the cropped 3D object. In a cropping attack, one portion of the watermarked 3D object is cut off and ultimately lost. For a triangular mesh, cutting consists of removing a number of vertices and faces; in addition to reducing the surface of the object, this operation modifies the list of vertices and faces. A set of cropping tests with different vertex cropping ratios Vcr has been applied to the different 3D objects. Figure 9 shows the remaining mesh of the Bunny object after removing 40% (b) and 60% (c) of the watermarked object (a).

Figure 9. Watermarked bunny (a), watermarked bunny attacked with cropping Vcr = 40% (b), watermarked bunny attacked with cropping Vcr = 60% (c)

For cropping attacks ranging from Vcr = 20% to Vcr = 60% applied to the watermarked test 3D objects, the signature is indeed detected during the extraction phase. This is explained by the redundant insertion of the signature in different regions of the 3D object: the watermark bits are inserted into different locations, so if the signature is lost in one region, it can be recovered in another. The experimental results are presented in table 3.

Table 3. BER for bunny object under cropping attack with different vertex cropping ratio

Vertex cropping ratio (Vcr)    20%    40%    60%
BER                            0      0      0.1

It is possible to note from table 3 that the presented method is robust against cropping attacks. In order to test the robustness of the watermark further, random noise of uniform and of Gaussian distribution is applied to the vertices of each test object. This attack is not specific to 3D objects. Adding noise to the vertex coordinates can make the watermark extraction unreliable while degrading the quality of the object; beyond a certain level, within the notion of destructive attacks, there is no interest in protecting a 3D object that is very noisy and no longer faithful to the original one. These sets of tests consist in applying a uniform noise distribution to a watermarked 3D object and then attempting to recover the hidden watermark. The noise amplitude factor A varies from 0.001 to 0.01. Figure 10 illustrates the watermarked Bunny 3D object after adding uniform noise with A = 0.002 (figure 10 (b)) and A = 0.005 (figure 10 (c)). By applying a significant noise, the 3D object undergoes a large deformation and its visual quality is strongly affected.

Figure 10. Watermarked bunny (a), watermarked bunny attacked with a uniform noise A=0.002 (b), bunny attacked with a uniform noise A=0.005 (c)

The robustness performance of the presented method is also tested against the Gaussian noise attack. As with the uniform noise, random noise of Gaussian distribution has been applied to the different objects with a varying noise amplitude factor. The results of the noise tests applied to Bunny with the amplitude factors A=0.002 and A=0.005 are presented in figure 11.


Figure 11. Watermarked bunny (a), watermarked bunny attacked with a Gaussian noise A=0.002 (b) bunny attacked with a Gaussian noise A=0.005 (c)

Noise attack experiment results are shown in table 4.

Table 4. BER for bunny object under noise attack with different noise amplitude factor

Noise amplitude factor (A)    0.001    0.002    0.005    0.01
BER (Uniform noise)           0        0        0.04     0.22
BER (Gaussian noise)          0        0        0.1      0.13

The experimental results in table 4 show that the proposed method is robust against both uniform and Gaussian noise. The robustness of the presented method is also tested against the smoothing attack. Surface smoothing is a common operation used to eliminate the noise introduced during the mesh generation process. Laplacian smoothing is applied to the watermarked model with deformation factors λ = 0.1, λ = 0.3 and λ = 0.5 and a number of iterations Nitr varying from 1 to 10. Figure 12 presents the Bunny object attacked with smoothing (Nitr = 10, λ = 0.5). Table 5 exhibits the robustness evaluation of the proposed method against smoothing attacks in terms of BER.


Figure 12. Watermarked bunny (a), watermarked bunny attacked with smoothing (Nitr =10, λ= 0.5)

It is noted that the currently proposed method is also able to detect the signature after these manipulations. With reference to the experimental results in table 5, the presented method shows suitable robustness against smoothing attacks with up to 10 iterations.

Table 5. BER for bunny object under smoothing attack with different deformation factors and number of iterations

Deformation factor (λ)         0.1                  0.3                  0.5
Number of iterations (Nitr)    1      5      10     1      5      10     1      5      10
BER                            0.04   0.13   0.19   0      0.30   0.29   0.19   0.32   0.39

Consequently, the robustness against noise addition and smoothing is satisfactory. The BER reaches 0.3 if the amplitude factor of the added noise exceeds 0.1 or if the number of smoothing iterations exceeds five. The Position parameter, used when inserting the signature, explains the robustness of this approach against these types of attacks.


The number of vertices in a mesh also influences the performance of 3D watermarking algorithms. The stability of the efficiency of the proposed approach is validated for small 3D objects (Nv < 10^4), large 3D objects (10^4 < Nv < 10^6) and very large 3D objects (Nv > 10^6). For a better evaluation of the effectiveness of the presented method against the various attacks in general, and against protocol attacks in particular, a watermark database of 1000 randomly generated watermarks is built, with the inserted watermark placed at position 500. Only the true inserted watermark gives a high correlation (Corr) value, while the other 999 keys yield scattered similarity values. The correlation (Cox, 2002), which can be calculated using equation (20), is used to evaluate the similarity between the extracted watermark bit sequence and the originally inserted one. The correlation value is supposed to converge to one for a completely accurate watermark recovery.

$$Corr(W_e, W_i) = \frac{\sum_{n} \left(W_e^{\,n} - \overline{W_e}\right)\left(W_i^{\,n} - \overline{W_i}\right)}{\sqrt{\sum_{n} \left(W_e^{\,n} - \overline{W_e}\right)^2 \; \sum_{n} \left(W_i^{\,n} - \overline{W_i}\right)^2}} \qquad (20)$$

Where We is the extracted signature (watermark) sequence and Wi the inserted one, the overlined terms denote the averages of the sequences We and Wi, and n is the watermark length. The result presented in figure 13 proves the performance of the proposed technique and its ability to detect the inserted watermark (at position 500 in the watermark database) in the case of non-attacked watermarked 3D objects, with a correlation value equal to one.
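Equation (20) is the usual normalized correlation; a minimal Python sketch is given below for reference (not the authors' detector code).

import numpy as np

def correlation(w_extracted, w_inserted):
    """Normalized correlation of equation (20) between the extracted and inserted watermark sequences."""
    we = np.asarray(w_extracted, dtype=float)
    wi = np.asarray(w_inserted, dtype=float)
    we_c = we - we.mean()                                   # center the extracted sequence
    wi_c = wi - wi.mean()                                   # center the inserted sequence
    return np.sum(we_c * wi_c) / np.sqrt(np.sum(we_c ** 2) * np.sum(wi_c ** 2))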


Figure 13. Watermark detector’s response curve (non-attacked watermarked bunny object)

Several attack types are applied to the watermarked 3D objects in order to evaluate whether the detector can expose the presence of the inserted watermark, and thus the robustness of the proposed method to different attack types. The watermark detection procedure is therefore run on every attacked watermarked 3D object against the 1000 watermarks. The results show that, for all the presented attacks, the decoder succeeds in making the appropriate decision: the response to the correct (inserted) signature is markedly more significant than the responses to the others. These results confirm that the proposed technique is robust to the four classes of attacks: geometrical, removal, cryptographic and protocol attacks. As shown in the watermark detector's response curve after the smoothing attack in figure 14, the response to the appropriate signature is more significant than the others. Furthermore, the obtained results illustrate that the response to the correct watermark remains the most important even if the quality parameter correlation is of the order of 7.6.

Likewise, for the noise addition attack, the response to the genuine watermark is clearly the largest and highest peak, at position 500 in the watermark database.

Figure 14. Watermark detector's response curve after smoothing attack

However, the correlation decreases when the smoothing and noise addition attacks are applied to the watermarked 3D objects. The observed peak shows that the correlation between the embedded and the extracted watermark remains considerable. The robustness of the suggested approach is also validated against different affine transformation attacks: the watermark detector's response curve presents a peak with a correlation value equal to one at position 500 in the watermark database. Moreover, the developed approach rejects incorrect watermarks. Another watermark, essentially different from the genuine one, is inserted into the watermarked 3D object, and an attempt is made to detect the original watermark in this object, in order to confirm that the implemented approach detects the genuine inserted watermark rather than fake ones. The correlation curve reveals no actual peak, which strengthens the effectiveness of this method and its capability to differentiate the real embedded watermark from another. The case of the absence of a signature is also studied: an effective 3D watermarking scheme should not detect a signature when nothing has been inserted into the 3D object. The obtained detector's response curve does not show any peak, which demonstrates the effectiveness of the proposed approach and makes it possible to recognize the case where no watermark is present in the 3D object. The robustness of the proposed technique against protocol attacks is thus shown to be strong. In order to ensure more robustness against cryptographic attacks, the authors encrypt the insertion key using the asymmetric cryptography technique RSA (Burnett, 2001). This operation guarantees that only the people for whom the information is intended will be able to access it.

The findings evidence the effectiveness of the presented approach: its imperceptibility and robustness against the most common attacks are comparable to the approaches presented in the literature. Regarding embedding capacity, the related work presented in the second section of this chapter keeps a constant capacity of 64 bits for El Zein's method (El Zein, 2016) and 128 bits for Cho's method, while the embedding capacity of the proposed scheme depends on the number of vertices of the mesh. The capacity varies with the 3D object size and the number of vertices, so considerably more data can be hidden in large 3D objects. The proposed method can thus be classified among the methods of high insertion capacity, which is an important criterion for watermarking techniques oriented towards copyright protection. Despite the significant insertion capacity, the proposed method surpasses the imperceptibility of the related 3D watermarking schemes in terms of VSNR. An inspection of the results of the proposed scheme and of the schemes of El Zein, Wang and Cho, all applied to the Bunny 3D cover object, reveals that the proposed scheme produces better results in terms of watermark imperceptibility: the VSNR of the proposed technique is 46.64 dB and 39.96 dB higher than that of El Zein's and Cho's methods, respectively. Consequently, the higher the VSNR value, the higher the watermark imperceptibility and the lower the distortion of the watermarked 3D object. For comparative purposes, the Bunny models have undergone rotation with α = 30°, scaling by 0.5, translation, smoothing with 10 iterations using a deformation factor λ = 0.10, noise addition with A = 0.01 and cropping with Vcr = 30%. The 3D watermarking methods proposed by Wang, Zafeiriou, El Zein, Farrag and Cho can completely resist rotation, translation and scaling attacks with BER = 0, as does the proposed scheme; Yu's method, however, is fragile against affine transformation attacks. In this respect, the normalization phase applied to the 3D object before the insertion of the signature is essential for ensuring the robustness of 3D watermarking techniques against affine transformation attacks. Moreover, the proposed method shows robustness results comparable to the other 3D watermarking techniques for the Laplacian smoothing attack, with BER = 0.13, and for the noise addition attack, with BER = 0.04. It is also clear that the proposed technique is more robust against cropping attacks than the other studied methods, since lower BER values are obtained using the proposed technique. This remarkable robustness is due to the redundant insertion of the binary signature. The robustness of the different related 3D watermarking techniques is validated only against removal and geometric attacks, whereas the proposed technique has also dealt with protocol attacks as well as cryptographic attacks. From the above comparisons and analyses, it can be deduced that the performance of the proposed method is better than that of the other related methods with respect to imperceptibility and robustness. Firstly, the insertion capacity varies proportionally with the size of the 3D object, and the stability of the efficiency of the proposed approach is validated independently of the number of vertices of the 3D object. Secondly, slightly changing the positions of the watermarked vertices ensures the invisibility of the watermark. Finally, the redundant insertion guarantees that the proposed method has a remarkably appropriate robustness against different attacks.

CONCLUSION AND FUTURE RESEARCH DIRECTIONS

In this chapter, the authors proposed a robust and non-blind 3D object watermarking technique in the spatial domain. First, the 3D object is translated so that its center of mass corresponds to the origin of the axes. Second, rotation invariance is achieved before enclosing the 3D object in a unitary box to ensure robustness to all scaling operations. Then, a Cartesian-to-cylindrical conversion is performed and the watermark is embedded by moving the vertices along the R component. Moreover, in order to achieve more robustness against cropping, the watermark is redundantly embedded in different regions of the 3D object. The obtained experimental results prove that the presented approach not only produces watermarked 3D objects with highly appropriate visual quality but also successfully resists the four different classes of attacks. In the short run, robustness to mesh compression and re-meshing can lead to a future study applying the proposed method in the multi-resolution domain. An improvement of this method so that it applies to textured meshes can also be the subject of future research. In the long run, since 3D watermarking is mainly concerned with static meshes and animation is another important use of meshes in entertainment, robust watermarking algorithms for animated 3D content may be the focus of future research.


REFERENCES

Abdullah, A. M. (2017). Advanced encryption standard (AES) algorithm to encrypt and decrypt data. Cryptography and Network Security, 16, 1–11.

Al-Qudsy, Z. N., Shaker, S. H., & Abdulrazzque, N. S. (2018, October). Robust blind digital 3d model watermarking algorithm using mean curvature. In International Conference on New Trends in Information and Communications Technology Applications (pp. 110-125). Springer, Cham. doi:10.1007/978-3-030-01653-1_7

Almeida, D. F., Astudillo, P., & Vandermeulen, D. (2021). Three-dimensional image volumes from two-dimensional digitally reconstructed radiographs: A deep learning approach in lower limb CT scans. Medical Physics, 48(5), 2448–2457. doi:10.1002/mp.14835 PMID:33690903

Ashoub, N., Emran, A., & Saleh, H. I. (2018). NonBlind Robust 3D Object Watermarking Scheme. Arab Journal of Nuclear Sciences and Applications, 51(4), 62–71.

Ben Amar, Y., Fourati Kallel, I., & Bouhlel, M. S. (2012, March). Etat de l'art de tatouage robuste des modèles 3D. In The 6th international conference SETIT, Sousse, Tunisia (pp. 21-24).

Benedens, O., & Busch, C. (2000, September). Towards blind detection of robust watermarks in polygonal models. Computer Graphics Forum, 19(3), 199–208. doi:10.1111/1467-8659.00412

Beugnon, S., Itier, V., & Puech, W. (2022). 3D Watermarking. Multimedia Security 1: Authentication and Data Hiding, 219-246.

Botsch, M., Pauly, M., Rossl, C., Bischoff, S., & Kobbelt, L. (2006). Geometric modeling based on triangle meshes. In ACM SIGGRAPH 2006 Courses (pp. 1-es). doi:10.1145/1185657.1185839

Bourke, P. (2009). Ply-polygon file format. http://paulbourke.net/dataformats/ply

Burnett, S., & Paine, S. (2001). RSA Security's official guide to cryptography. McGraw-Hill, Inc.

Cayre, F. (2004). Contributions au tatouage de maillages surfaciques 3D [Doctoral dissertation, École nationale supérieure des télécommunications].


Cho, J. W., Prost, R., & Jung, H. Y. (2006). An oblivious watermarking for 3-D polygonal meshes using distribution of vertex norms. IEEE Transactions on Signal Processing, 55(1), 142–155. doi:10.1109/TSP.2006.882111

Corsini, M., Uccheddu, F., Bartolini, F., Barni, M., Caldelli, R., & Cappellini, V. (2003, October). 3D watermarking technology: Visual quality aspects. In Proc. 9th Conf. Virtual System and Multimedia, VSMM'03. Semantic Scholar.

Cox, I. J., Miller, M. L., Bloom, J. A., & Honsinger, C. (2002). Digital watermarking (Vol. 53). Morgan Kaufmann.

El Zein, O., El Bakrawy, M., & Ghali, N. I. (2017). A robust 3D mesh watermarking algorithm utilizing fuzzy C-Means clustering. Future Computing and Informatics Journal, 2(2), 10. doi:10.1016/j.fcij.2017.10.007

El Zein, O. M., El Bakrawy, L. M., & Ghali, N. I. (2016). A non-blind robust watermarking approach for 3D mesh models. Journal of Theoretical and Applied Information Technology, 83(3), 353.

Farrag, S., & Alexan, W. (2020). Secure 3d data hiding technique based on a mesh traversal algorithm. Multimedia Tools and Applications, 79(39), 29289–29303. doi:10.1007/s11042-020-09437-w

Garg, H. (2022). A comprehensive study of watermarking schemes for 3D polygon mesh objects. International Journal of Information and Computer Security, 19(1-2), 48–72. doi:10.1504/IJICS.2022.126753

Hu, R., Rondao-Alface, P., & Macq, B. (2009, April). Constrained optimisation of 3D polygonal mesh watermarking by quadratic programming. In 2009 IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 1501-1504). IEEE. doi:10.1109/ICASSP.2009.4959880

Kalivas, A., Tefas, A., & Pitas, I. (2003, July). Watermarking of 3D models using principal component analysis. In 2003 International Conference on Multimedia and Expo. ICME'03. Proceedings (Cat. No. 03TH8698) (Vol. 1, pp. I-637). IEEE.

Kallel, I. F., Bouhlel, M. S., Lapayre, J. C., & Garcia, E. (2009). Control of dermatology image integrity using reversible watermarking. International Journal of Imaging Systems and Technology, 19(1), 5–9. doi:10.1002/ima.20172

Liu, C., & Chung, P. (2011, October). A Robust Normalization Algorithm for Three Dimensional Models Based on Clustering and Star Topology. International Journal of Innovative Computing, Information and Control, 7(10), 5731–5748.


McHenry, K., & Bajcsy, P. (2008). An overview of 3d data content, file formats and viewers. National Center for Supercomputing Applications, 1205, 22.

Mostafa, G., & Alexan, W. (2022). A robust high capacity gray code-based double layer security scheme for secure data embedding in 3d objects. ITU Journal on Future and Evolving Technologies, 3, 1.

Motwani, M., Sridharan, B., Motwani, R., & Harris, F. C. Jr. (2010, February). Tamper proofing 3D models. In 2010 International Conference on Signal Acquisition and Processing (pp. 210-214). IEEE. doi:10.1109/ICSAP.2010.85

Ohbuchi, R., Takahashi, S., Miyazawa, T., & Mukaiyama, A. (2001, June). Watermarking 3D polygonal meshes in the mesh spectral domain. In Graphics Interface, pp. 9-17.

Sharma, N., & Panda, J. (2020). Statistical watermarking approach for 3D mesh using local curvature estimation. IET Information Security, 14(6), 745–753. doi:10.1049/iet-ifs.2019.0601

Sharma, N., & Panda, J. (2022). Assessment of 3D mesh watermarking techniques. Journal of Digital Forensics, Security and Law, 17(2), 2.

Singh, P., & Singh, K. (2013). Image encryption and decryption using blowfish algorithm in MATLAB. International Journal of Scientific and Engineering Research, 4(7), 150–154.

Solak, S. (2020). High embedding capacity data hiding technique based on EMSD and LSB substitution algorithms. IEEE Access: Practical Innovations, Open Solutions, 8, 166513–166524. doi:10.1109/ACCESS.2020.3023197

Voloshynovskiy, S., Pereira, S., Iquise, V., & Pun, T. (2001). Attack modelling: Towards a second generation watermarking benchmark. Signal Processing, 81(6), 1177–1214. doi:10.1016/S0165-1684(01)00039-1

Wang, F., Zhou, H., Fang, H., Zhang, W., & Yu, N. (2022). Deep 3D mesh watermarking with self-adaptive robustness. Cybersecurity, 5(1), 1–14. doi:10.1186/s42400-022-00125-w

Wang, K., Lavoué, G., Denis, F., & Baskurt, A. (2007). Three-dimensional meshes watermarking: Review and attack-centric investigation. International Workshop on Information Hiding. Springer. doi:10.1007/978-3-540-77370-2_4

Wang, K., Lavoué, G., Denis, F., Baskurt, A., & He, X. (2010, June). A benchmark for 3D mesh watermarking. In 2010 Shape Modeling International Conference (pp. 231-235). IEEE. doi:10.1109/SMI.2010.33

Wang, X., & Du, S. (2011). A Non-blind Robust Watermarking Scheme for 3D Models in Spatial Domain. In Electrical Engineering and Control (pp. 621–628). Springer. doi:10.1007/978-3-642-21765-4_76

Xiang, S., & Huang, J. (2007). Robust audio watermarking against the D/A and A/D conversions. arXiv preprint arXiv:0707.0397.

Yang, J., Lu, X., & Chen, W. (2022). A robust scheme for copy detection of 3D object point clouds. Neurocomputing, 510, 181–192. doi:10.1016/j.neucom.2022.09.008

Yin, K., Pan, Z., Shi, J., & Zhang, D. (2001). Robust mesh watermarking based on multiresolution processing. Computers & Graphics, 25(3), 409–420. doi:10.1016/S0097-8493(01)00065-6

Yu, Z., Ip, H. H., & Kwok, L. F. (2003). A robust watermarking scheme for 3D triangular mesh models. Pattern Recognition, 36(11), 2603–2614. doi:10.1016/S0031-3203(03)00086-4

Zafeiriou, S., Tefas, A., & Pitas, I. (2005). Blind robust watermarking schemes for copyright protection of 3D mesh objects. IEEE Transactions on Visualization and Computer Graphics, 11(5), 596–607. doi:10.1109/TVCG.2005.71 PMID:16144256

Zhan, Y. Z., Li, Y. T., Wang, X. Y., & Qian, Y. (2014). A blind watermarking algorithm for 3D mesh models based on vertex curvature. Journal of Zhejiang University SCIENCE C, 15(5), 351–362. doi:10.1631/jzus.C1300306


Chapter 2

One-Class ELM Ensemble-Based DDoS Attack Detection in Multimedia Cloud Computing

Gopal Singh Kushwah
National Institute of Technology, Kurukshetra, India

Surjit Singh
https://orcid.org/0000-0002-2386-7729
Thapar Institute of Engineering and Technology, India

Sumit Kumar Mahana
National Institute of Technology, Kurukshetra, India

ABSTRACT

Distributed denial of service (DDoS) attack affects the availability of multimedia cloud services to its users. In this attack, a huge traffic load is put on the victim server. Hence, the server initially becomes slow to process legitimate requests and later becomes unavailable. Therefore, implementing defensive solutions against these attacks is of utmost importance. In this work, the authors propose a bagging ensemble-based DDoS attack detection system for multimedia cloud computing. One-class extreme learning machine (ELM) is used as a base classifier. An outlier detection based approach has been used to detect these attacks. Experiments have been performed using two benchmark datasets, NSL-KDD and CICIDS2017, to evaluate the performance of the proposed system.

DOI: 10.4018/978-1-6684-6864-7.ch002 Copyright © 2023, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


INTRODUCTION

In recent years, there has been enormous growth in the number of multimedia devices due to the rapid development of the Internet and mobile networks. This increases the demand for multimedia applications and services such as online image editing, online gaming, video conferencing, storage, etc. Since these services require high storage and processing capabilities, the adoption of cloud infrastructure for this purpose has become popular and is known as multimedia cloud computing (Zhu et al., 2011). In this model, the service providers offer various types of multimedia services like storage and processing, and users with resource-constrained devices can use these services as utilities. Multimedia cloud computing must provide multimedia content according to the user's quality of service requirements in an efficient and timely manner. For the smooth functioning of this technology, its services must be available all the time. Attackers can use DDoS attacks (Lau et al., 2000) to hinder the availability of these services. In these types of attacks, the attacker uses many devices on the Internet to send huge traffic to the cloud server. This results in the exhaustion of bandwidth and other resources in the cloud, and it becomes unavailable to its legitimate users. Therefore, developing solutions against these attacks is important.

Machine learning has become popular in the area of intrusion detection, and several machine learning-based solutions for detecting DDoS attacks and other types of intrusions have been proposed in the literature. In (Bhushan & Gupta, 2019), a method to detect and mitigate fraudulent resource consumption (FRC) attacks is proposed. The attack detection approach is based on a hypothesis test; after detecting the attack, network flow analysis and a Turing test are used to identify the bots. In (Garg et al., 2019), a deep learning-based anomaly detection system is proposed. An ensemble of restricted Boltzmann machine (RBM) and support vector machine is used as a classifier: the RBM is modified to incorporate dropout functionality, and the SVM is modified by encapsulating a mixed kernel function and gradient descent. A hybrid intrusion detection system is proposed in (Venkatraman & Surendiran, 2020). The first module is signature-based and uses rules from Snort and IoT networks, with signature updating performed through a crowd-sourced framework; the second module is based on timed automata and works as an anomaly-based IDS. In (Sathya et al., 2021), a detection technique based on a dual weight updation-based optimal deep belief network is proposed. It uses a median absolute deviation around the median-based Kolmogorov-Smirnov test for feature extraction and a robust confidence interval-based chimp optimization technique for feature selection. The authors in (Gopi et al., 2021) proposed an ANN-based method for DDoS attack detection. They used the Levenberg-Marquardt (LM) method to train the ANN, and principal component analysis is used for feature reduction. In (Hsu et al., 2021), a method based on decision tree, support vector machine and Naïve Bayes is presented, along with a selection of critical features. In (Pundir et al., 2021), a method based on four classifiers, namely random forest, ANN, logistic regression and Naïve Bayes, is proposed. A method for anomaly detection in multimedia traffic in edge computing is proposed in (Zhao et al., 2021). In the first stage, a method combining the session analysis and protocol analysis of multimedia traffic is proposed; after that, a C4.5 classifier is used, improved by reducing calculations on performance-constrained devices and avoiding overfitting. In (Mustapha et al., 2023), the authors propose a long short-term memory (LSTM) based attack detection system; this work also demonstrated the use of a generative adversarial network (GAN) to generate adversarial DDoS traffic. In (Anyanwu et al., 2023), an SVM-based technique is proposed that uses the radial basis function (RBF) kernel, with grid search cross-validation (GSCV) used for parameter optimization to overcome overfitting. In (Elejla et al., 2022), the authors proposed an approach based on LSTM to detect ICMPv6 flood attacks; they also proposed a feature selection method combining the chi-square test and information gain. In (Le et al., 2022), a CNN-based method is proposed to detect attacks; this work also proposes the generation of attack traffic by using a conditional generative adversarial network. In (Subbiah et al., 2022), the authors proposed a feature selection method based on the Boruta algorithm, and the grid search random forest algorithm is used as a classifier for attack detection.

The challenges in machine learning-based solutions include achieving high detection accuracy with low false alarms. Ensemble-based classification techniques have become popular due to their increased classification accuracy and lower rate of false alarms compared to a single classifier. These techniques use several base classifiers, each trained separately; during testing, the combined output of all base classifiers provides the final predicted value. In one-class classification, the classifier is trained with samples having a single class label, and during the testing phase the other-class samples are identified as outliers. This technique is useful for applications where many samples of one class are available but samples of the other class are either unavailable or rare, such as anomaly detection, fraud detection and novelty detection. In this work, we propose a bagging ensemble-based technique for DDoS attack detection. A one-class extreme learning machine (ELM) (Leng et al., 2015; Gautam & Tiwari, 2016) is used as the base classifier. The reason for using a one-class ELM as the base classifier is its low training time: because of the many base classifiers, a considerable amount of time is required for training ensemble-based classifiers, whereas an ELM can be trained in a very short amount of time and provides better generalization than traditional gradient-based training methods.


Figure 1. Multimedia cloud computing network

PROPOSED SYSTEM

Figure 1 shows a multimedia cloud computing system connected to the Internet through a router. The DDoS attack detection system is placed between the router and the switch. It observes the incoming traffic to the cloud and classifies it as normal or attack. If an attack is detected, an alert is sent to the administrator for further action. The attack detection system comprises three modules: the classifier, the preprocessor and the training database (as shown in Figure 2). The training database contains the samples used for training the classifier. The preprocessor converts real-time incoming traffic into samples that are used by the classifier for traffic classification.

Training Database

Since the classifier is a one-class classifier, it is trained only with the normal samples available in the training database. The training samples are of the form (xi, ti), where xi = [xi1, xi2, xi3, ..., xin] is the vector of all n features of the ith training sample and ti = 1 is the label of that sample.


Figure 2. DDoS attack detection system

Preprocessor

The preprocessor module captures real-time incoming traffic to the cloud and prepares samples. The traffic captured during each interval t is converted into a group of samples. First, the same features used in the training samples are extracted. After that, nominal features are encoded to numeric values. Finally, normalization is performed to scale all the features into the range [0, 1]. Suppose μ is a feature, and its minimum and maximum values in the group are min(μ) and max(μ), respectively. Then, the normalized value of this feature in the ith sample is calculated using equation (1):

$$\mu'_i = \frac{\mu_i - \min(\mu)}{\max(\mu) - \min(\mu)} \qquad (1)$$

After performing all the steps in preprocessing, a group of samples of the form yi =[yi1, yi2, …, yin] is created for traffic captured during each t time duration. Here, all the features of yi are the same as the features of xi. Each group of samples is applied to the classifier for attack detection.
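A minimal Python sketch of the min-max scaling of equation (1) is shown below. The guard against constant features is an added assumption of this sketch, since equation (1) is undefined when max(μ) = min(μ); the rest follows the equation directly.

import numpy as np

def minmax_normalize(samples):
    """Scale every feature of an (M, n) sample group into [0, 1], as in equation (1)."""
    mins = samples.min(axis=0)
    maxs = samples.max(axis=0)
    span = np.where(maxs - mins == 0, 1.0, maxs - mins)   # guard against constant features (assumption)
    return (samples - mins) / span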

Classifier

The classifier is a bagging ensemble of one-class ELMs, as shown in Figure 3. The ELMs in the ensemble are represented as ε1, ε2, ε3, ..., εp. The training database is divided into p random sub-datasets with replacement, represented as D1, D2, D3, ..., Dp. Each ELM is trained with its corresponding sub-dataset. Suppose n and l represent the numbers of neurons in the input and hidden layers of each ELM, as shown in Figure 4, and suppose N is the number of training samples in each sub-dataset of the form (xi, ti), where xi = [xi1, xi2, ..., xin]T ∈ Rn and ti = [ti1, ti2, ..., tim]T ∈ Rm (in our system m = 1). Then the output of each ELM can be modeled as

$$o_i = \sum_{j=1}^{l} v_j \, f(u_j, b_j, x_i), \quad i = 1, \ldots, N \qquad (2)$$

(3)

Figure 3. Bagging ensemble of ELMs

Where, 43

One-Class ELM Ensemble-Based DDoS Attack Detection in Multimedia Cloud Computing

  f u , b , x vT   ( 1 1 1 )  f (ul , bl , x 1 )   1      and O= H=  , V=          T  f (u1, b1, x N )  f (ul , bl , x N ) vl     

oT   1       T oN   

The error ||O-T|| can be minimized by using random values for uj and bj, where T=(t1,t2,...,tN) is the target vector of all training samples. Now, the connection weights between hidden and output layers can be calculated by solving the following equation V=H†T

(4)

Where † represents the Moore-Penrose generalized inverse (Prasad & Bapat, 1992) of a matrix. As we have calculated the value of hidden to output layer weights matrix V, the ELM is trained. Similarly, all other ELMs are trained. In the training phase, the threshold for outlier detection is also determined. Equation (5) is used for threshold determination. Threshold = MSE + 0.2 * SD

(5)

Where MSE = mean squared error over all the samples in the training data and SD = standard deviation of the MSE. The training process of the classifier is summarized by Algorithm 1.

Figure 4. Extreme learning machine


Algorithm 1: Training Algorithm

Input: ensemble size (p), number of neurons (l), training database
Output: trained classifier
Create p training subsets of size N with replacement
For all ELMs do
    Randomly initialize input-hidden weight matrix U and hidden biases
    For i = 1 to N do
        For j = 1 to l do
            Calculate Hij = f(uj, bj, xi)
        End for
    End for
    Calculate hidden-output weight matrix using V = H†T
    Calculate Threshold = MSE + 0.2 × SD
End for
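Algorithm 1 can be condensed into a few lines of code. The following Python/NumPy fragment is only a minimal sketch under stated assumptions — a sigmoid activation f, uniform random weights, normal-only training data labelled t = 1, and SD interpreted as the standard deviation of the per-sample squared errors — and is not the chapter's MATLAB implementation.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_one_class_elm(X, l, rng):
        """Train a single one-class ELM on normal samples X (an N x n NumPy array)."""
        N, n = X.shape
        T = np.ones((N, 1))                      # targets: every training sample is normal (t = 1)
        U = rng.uniform(-1, 1, size=(n, l))      # random input-to-hidden weights
        b = rng.uniform(-1, 1, size=l)           # random hidden biases
        H = sigmoid(X @ U + b)                   # N x l hidden-layer output matrix
        V = np.linalg.pinv(H) @ T                # Moore-Penrose solution, equation (4)
        errors = (H @ V - T) ** 2                # squared error per training sample
        threshold = errors.mean() + 0.2 * errors.std()   # equation (5)
        return U, b, V, threshold

    def train_ensemble(X, p, l, subset_size, seed=0):
        """Bagging: p one-class ELMs, each trained on a random subset drawn with replacement."""
        rng = np.random.default_rng(seed)
        ensemble = []
        for _ in range(p):
            idx = rng.integers(0, len(X), size=subset_size)
            ensemble.append(train_one_class_elm(X[idx], l, rng))
        return ensemble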

During attack detection, each group of samples (of size M) prepared by the preprocessor is applied to the classifier. In the classifier, each sample (yi) of the group is given to all p ELMs and the output of each ELM is calculated. Then the difference between each ELM output and 1 (the label for normal samples) is calculated. For any ELM, if this difference is greater than the threshold for a sample, the sample is identified as an attack sample by that ELM; otherwise, the sample is identified as normal. After this, the outputs of all ELMs are combined by majority voting to determine the final output for the sample. For each ith sample, a vector Ci of two non-negative integer counters is used to store the outputs of all ELMs for that sample. The value at the first position of this vector represents the number of ELMs that identified the sample as normal; similarly, the value at the second position represents the number of ELMs that identified it as an attack. Initially, Ci is set to zero (Ci = [0, 0]). The values of Ci are updated according to the outputs of the ELMs. For example, suppose the ith sample is predicted by ε1 as an attack; then Ci becomes [0, 1]. If the same sample is also predicted as an attack by ε2, Ci becomes [0, 2]. If it is then predicted by ε3 as normal, Ci becomes [1, 2]. When all p outputs have arrived, the final output is calculated as follows: first, max(Ci) is found and then the position of max(Ci) is determined. If pos[max(Ci)] = 1, it is a normal sample, and if pos[max(Ci)] = 2, it is an attack sample. The attack detection process of the classifier is given by Algorithm 2.


Algorithm 2: Detection Algorithm

Input: number of test samples (M), trained classifier
Output: attack sample / normal sample
For i = 1 to M do
    Initialize vector Ci with zeros
    For each ELM do
        Apply sample yi
        Calculate output oi
        If |1 − oi| > Threshold
            Increment Ci[2] by 1
        Else
            Increment Ci[1] by 1
    End for
    Find pos[max(Ci)]
    If pos == 1
        yi ∈ normal samples
    If pos == 2
        yi ∈ attack samples
End for
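A corresponding sketch of Algorithm 2 — again a hedged Python/NumPy illustration rather than the authors' code — shows how the per-ELM outlier decisions are combined by majority voting. It assumes the same sigmoid activation and ensemble tuple structure as the training sketch above.

    import numpy as np

    def classify_group(Y, ensemble):
        """Label each preprocessed sample in Y (an M x n array) as 'normal' or 'attack'."""
        labels = []
        for y in Y:
            votes = np.zeros(2, dtype=int)               # votes[0]: normal, votes[1]: attack
            for U, b, V, threshold in ensemble:
                h = 1.0 / (1.0 + np.exp(-(y @ U + b)))   # hidden-layer response for this sample
                o = float(h @ V)                          # ELM output
                if abs(1.0 - o) > threshold:              # too far from the normal label 1
                    votes[1] += 1
                else:
                    votes[0] += 1
            labels.append("attack" if votes.argmax() == 1 else "normal")
        return labels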

EXPERIMENTAL RESULTS

The performance of the proposed system is evaluated with experiments on an Intel Core i5 machine with 16GB RAM. The Microsoft Windows 10 platform and the MATLAB tool are used for the experiments.

Datasets Used

Two benchmark datasets, namely NSL-KDD (Tavallaee et al., 2009) and CICIDS2017 (Sharafaldin et al., 2018), are used in the experiments. All the features of both datasets are used. Details about the datasets are given in Table 1. Only DoS and DDoS attack samples along with normal samples are considered for the experiments.


Table 1. Dataset information

Dataset name | No. of features | Training samples (Normal) | Testing samples (Normal) | Testing samples (Attack) | Testing samples (Total)
NSL-KDD | 41 | 2000 | 250 | 250 | 500
CICIDS2017 | 84 | 2000 | 250 | 250 | 500

Performance Metrics

The following metrics are used to evaluate the performance of the proposed system:

Accuracy = (TP + TN) / (TP + FP + TN + FN) × 100

Sensitivity = TP / (TP + FN) × 100

Specificity = TN / (TN + FP) × 100

Precision = TP / (TP + FP) × 100

Where, TP, TN, FP, and FN represent the number of true positives, number of true negatives, number of false positives, and number of false negatives, respectively.
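As a small worked illustration of these formulas (not part of the chapter's experiments), the function below computes all four metrics from confusion-matrix counts; with the hypothetical counts TP = 248, FN = 2, TN = 247, FP = 3 over 500 test samples it returns accuracy 99.0%, sensitivity 99.2%, specificity 98.8%, and precision approximately 98.8%.

    def detection_metrics(tp, tn, fp, fn):
        """Return (accuracy, sensitivity, specificity, precision) in percent."""
        accuracy    = (tp + tn) / (tp + fp + tn + fn) * 100
        sensitivity = tp / (tp + fn) * 100
        specificity = tn / (tn + fp) * 100
        precision   = tp / (tp + fp) * 100
        return accuracy, sensitivity, specificity, precision

    # Hypothetical counts for 250 attack and 250 normal test samples
    print(detection_metrics(tp=248, tn=247, fp=3, fn=2))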


Figure 5. Performance of proposed system with NSL-KDD dataset accuracy

Figure 6. Performance of proposed system with NSL-KDD dataset sensitivity


Figure 7. Performance of proposed system with NSL-KDD dataset Specificity

Figure 8. Performance of proposed system with NSL-KDD dataset precision


Performance Evaluation and Discussions

All the experiments are performed 20 times and the mean is taken for each metric. The system is trained using 2000 normal samples only. We varied the number of hidden neurons (l) and the number of ELMs (p) in the ensemble from 10 to 100 in steps of 10 and recorded the results. For testing, 500 samples (250 normal and 250 attack) are used, and accuracy, sensitivity, specificity, and precision are measured. Table 2 gives the best accuracy achieved, along with the other metrics and the values of l and p where it is achieved, when l and p are varied from 10 to 100.

Figure 9. Performance of proposed system with CICIDS2017 dataset accuracy


Figure 10. Performance of proposed system with CICIDS2017 dataset sensitivity

Figure 11. Performance of proposed system with CICIDS2017 dataset specificity


Figure 12. Performance of proposed system with CICIDS2017 dataset precision

Table 2. Performance of the proposed system

Dataset name | Number of hidden neurons (l) | Number of ELMs (p) | Accuracy | Sensitivity | Specificity | Precision | Training time (Sec.)
NSL-KDD | 30 | 90 | 99.4 | 100 | 98.8 | 99.1 | 0.27
CICIDS2017 | 20 | 90 | 99.8 | 99.5 | 99.0 | 99.9 | 0.37

The values of accuracy, sensitivity, specificity, and precision for the NSL-KDD dataset are given in Figures 5, 6, 7, and 8, respectively. They show maximum values of accuracy, sensitivity, specificity, and precision of 99.4%, 100%, 98.80%, and 99.10%, respectively; the minimum values of these metrics are 87.20%, 79.90%, 89.60%, and 87.40%. The values of accuracy, sensitivity, specificity, and precision for the CICIDS2017 dataset are given in Figures 9, 10, 11, and 12, respectively. They show maximum values of accuracy, sensitivity, specificity, and precision of 99.80%, 99.5%, 99.0%, and 99.90%, respectively; the minimum values of these metrics are 86.20%, 79.40%, 88.30%, and 87.80%. Table 3 gives the performance comparison of the proposed system with other machine learning models.

Table 3. Performance comparison with other models

Dataset | Algorithm | Accuracy (%) | Sensitivity (%) | Specificity (%) | Precision (%) | Training time (Sec.)
NSL-KDD | Backpropagation | 92.9 | 99.3 | 90.4 | 91.1 | 2.07
NSL-KDD | ELM | 94.6 | 98.7 | 90.5 | 92.3 | 0.11
NSL-KDD | Random Forest | 95.20 | 100 | 92.4 | 93.2 | 2.6
NSL-KDD | Adaboost | 95.80 | 100 | 93.8 | 93.9 | 1.57
NSL-KDD | Bagging (one-class ELM) | 99.4 | 100 | 98.8 | 99.1 | 0.27
CICIDS2017 | Backpropagation | 94.5 | 100 | 89.1 | 92.2 | 2.33
CICIDS2017 | ELM | 97.1 | 99.9 | 94.4 | 95.6 | 0.22
CICIDS2017 | Random Forest | 99.5 | 100 | 99.1 | 99.1 | 2.75
CICIDS2017 | Adaboost | 99.7 | 100 | 99.4 | 99.4 | 2.1
CICIDS2017 | Bagging (one-class ELM) | 99.8 | 99.5 | 99.0 | 99.9 | 0.37

CONCLUSION

Detecting DDoS attacks in the multimedia cloud is a challenging task. Machine learning techniques can play an important role in the detection of these attacks. In this work, we propose an ensemble-based method to detect DDoS attacks. A bagging ensemble of one-class ELMs is proposed as the classifier. Experiments with two datasets, viz. NSL-KDD and CICIDS2017, have been performed. The proposed system achieves a detection accuracy of 99.40% with NSL-KDD and 99.80% with CICIDS2017.

REFERENCES

Anyanwu, G. O., Nwakanma, C. I., Lee, J. M., & Kim, D. S. (2023). RBF-SVM kernel-based model for detecting DDoS attacks in SDN integrated vehicular network. Ad Hoc Networks, 140, 103026. doi:10.1016/j.adhoc.2022.103026


Bhushan, K., & Gupta, B. B. (2019). Network flow analysis for detection and mitigation of Fraudulent Resource Consumption (FRC) attacks in multimedia cloud computing. Multimedia Tools and Applications, 78(4), 4267–4298. doi:10.1007/s11042-017-5522-z

Elejla, O. E., Anbar, M., Hamouda, S., Faisal, S., Bahashwan, A. A., & Hasbullah, I. H. (2022). Deep-Learning-Based Approach to Detect ICMPv6 Flooding DDoS Attacks on IPv6 Networks. Applied Sciences (Basel, Switzerland), 12(12), 6150. doi:10.3390/app12126150

Garg, S., Kaur, K., Kumar, N., & Rodrigues, J. J. (2019). Hybrid deep-learning-based anomaly detection scheme for suspicious flow detection in SDN: A social multimedia perspective. IEEE Transactions on Multimedia, 21(3), 566–578. doi:10.1109/TMM.2019.2893549

Gautam, C., & Tiwari, A. (2016). On the construction of extreme learning machine for one class classifier. In Proceedings of ELM-2015 Volume 1: Theory, Algorithms and Applications (I) (pp. 447-461). Springer International Publishing. doi:10.1007/978-3-319-28397-5_35

Gopi, R., Sathiyamoorthi, V., Selvakumar, S., Manikandan, R., Chatterjee, P., Jhanjhi, N. Z., & Luhach, A. K. (2021). Enhanced method of ANN based model for detection of DDoS attacks on multimedia internet of things. Multimedia Tools and Applications, 1–19.

Hsu, C. Y., Wang, S., & Qiao, Y. (2021). Intrusion by machine learning for multimedia platform. Multimedia Tools and Applications, 80(19), 29643–29656. doi:10.1007/s11042-021-11100-x PMID:34248394

Lau, F., Rubin, S. H., Smith, M. H., & Trajkovic, L. (2000, October). Distributed denial of service attacks. In SMC 2000 conference proceedings: 2000 IEEE international conference on systems, man and cybernetics. 'Cybernetics evolving to systems, humans, organizations, and their complex interactions' (Vol. 3, pp. 2275-2280). IEEE. doi:10.1109/ICSMC.2000.886455

Le, K. H., Nguyen, M. H., Tran, T. D., & Tran, N. D. (2022). IMIDS: An intelligent intrusion detection system against cyber threats in IoT. Electronics (Basel), 11(4), 524. doi:10.3390/electronics11040524

Leng, Q., Qi, H., Miao, J., Zhu, W., & Su, G. (2015). One-class classification with extreme learning machine. Mathematical Problems in Engineering, 2015.


Mustapha, A., Khatoun, R., Zeadally, S., Chbib, F., Fadlallah, A., Fahs, W., & El Attar, A. (2023). Detecting DDoS attacks using adversarial neural network. Computers & Security, 127, 103117. doi:10.1016/j.cose.2023.103117

Prasad, K. M., & Bapat, R. B. (1992). The generalized Moore-Penrose inverse. Linear Algebra and Its Applications, 165, 59–69. doi:10.1016/0024-3795(92)90229-4

Pundir, S., Obaidat, M. S., Wazid, M., Das, A. K., Singh, D. P., & Rodrigues, J. J. (2021). MADP-IIME: Malware attack detection protocol in IoT-enabled industrial multimedia environment using machine learning approach. Multimedia Systems, 1–13. doi:10.1007/s00530-020-00743-9

Sathya, M., Jeyaselvi, M., Krishnasamy, L., Hazzazi, M. M., Shukla, P. K., Shukla, P. K., & Nuagah, S. J. (2021). A novel, efficient, and secure anomaly detection technique using DWU-ODBN for IoT-enabled multimedia communication systems. Wireless Communications and Mobile Computing, 2021, 1–12. doi:10.1155/2021/4989410

Sharafaldin, I., Lashkari, A. H., & Ghorbani, A. A. (2018). Toward generating a new intrusion detection dataset and intrusion traffic characterization. ICISSP, 1, 108–116. doi:10.5220/0006639801080116

Subbiah, S., Anbananthen, K. S. M., Thangaraj, S., Kannan, S., & Chelliah, D. (2022). Intrusion detection technique in wireless sensor network using grid search random forest with Boruta feature selection algorithm. Journal of Communications and Networks (Seoul), 24(2), 264–273. doi:10.23919/JCN.2022.000002

Tavallaee, M., Bagheri, E., Lu, W., & Ghorbani, A. A. (2009, July). A detailed analysis of the KDD CUP 99 data set. In 2009 IEEE symposium on computational intelligence for security and defense applications. IEEE.

Venkatraman, S., & Surendiran, B. (2020). Adaptive hybrid intrusion detection system for crowd sourced multimedia internet of things systems. Multimedia Tools and Applications, 79(5-6), 3993–4010. doi:10.1007/s11042-019-7495-6

Zhao, X., Huang, G., Jiang, J., Gao, L., & Li, M. (2021). Research on lightweight anomaly detection of multimedia traffic in edge computing. Computers & Security, 111, 102463. doi:10.1016/j.cose.2021.102463

Zhu, W., Luo, C., Wang, J., & Li, S. (2011). Multimedia cloud computing. IEEE Signal Processing Magazine, 28(3), 59–69. doi:10.1109/MSP.2011.940269


Chapter 3

SSD Forensic Investigation Using Open Source Tool

Hepi Suthar
Rashtriya Raksha University, India

Priyanka Sharma
Rashtriya Raksha University, India

ABSTRACT

From a judicial point of view, the CIA triad makes the data integrity of volatile, memory-based storage devices a central concern in cyber forensic investigation. This has long been a source of concern and is critical to the chain-of-custody procedure. Safeguarding unstable data held on SSDs is therefore a substantial advancement for the forensic examination cycle. This study provides a straightforward way to preserve potentially volatile digital evidence stored on SSDs and to generate forensically sound bit-streams, also known as bit-by-bit copies. Analysts frequently face the challenge of protecting the data integrity of electronic evidence seized at a crime scene. This chapter suggests a process and a set of steps for carrying out forensic investigations on data obtained from solid state drives while preventing the TRIM feature and garbage collection from running without user input or interaction, thereby preserving the integrity of the data as usable digital evidence.

DOI: 10.4018/978-1-6684-6864-7.ch003 Copyright © 2023, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


INTRODUCTION

An SSD is a solid-state secondary storage device that stores data in integrated circuit (IC) assemblies used as memory. Although SSDs do not contain physical disks, they are commonly known as solid-state drives. SSDs can be used with conventional hard disk drive form factors and protocols, including Serial Advanced Technology Attachment (SATA) and SAS, which notably simplifies their integration into computer systems. New form factors, such as the M.2 form factor, and new I/O protocols, such as NVM Express, were developed to satisfy the current technological needs of the flash memory used in SSDs (Geier, 2015; Kang et al., 2018). SSDs lack moving mechanical parts. This separates them from ordinary electromechanical devices with moving read/write heads and rotating disks, such as hard disk drives or floppy disks (Templeman & Kapadia, 2012). SSDs are generally more shock resistant, operate silently, have quicker access times, and have lower latency than electromechanical devices (Suthar & Sharma, 2022). Although the cost of SSDs has decreased over time, they were still more expensive than HDDs per unit of storage in 2018 and are expected to remain so for the next ten years. The majority of SSDs use 3D Triple Level Cell (TLC) NAND flash memory (Geier, 2015). It is a type of non-volatile memory (similar to ROM) that maintains data even when the power is switched off (Bunker et al., 2012). SSDs can also be built from random-access memory (RAM) for applications that demand quick access but do not necessarily require data persistence after a power outage. In such devices, batteries can be employed as integrated power sources to retain data for a set length of time when external power is lost (Suthar & Sharma, 2022). If power is lost, solid state drives retain data as electrical charges, which steadily leak over time. This is why solid state drives are unsuitable for archiving applications, as old drives (that have passed their endurance rating) often begin to lose data after being stored unpowered for one to two years (at 30 °C). Hybrid drives (SSHDs) combine a large hard disk drive with a solid state drive cache to speed up frequently accessed data. A hybrid drive combines the advantages of SSDs and HDDs into a single device, such as Apple's Fusion Drive (Suthar & Sharma, 2022). Composition of an SSD: the solid-state drive is mainly composed of the main control chip, flash memory particles, a cache chip, and a SATA interface chip (Ko, 2019). A solid state drive (SSD) is a large-capacity memory that uses solid-state semiconductor chips as its storage media. According to the semiconductor chip used, SSDs can be divided into flash memory (NAND flash) and volatile storage (DRAM)-based solid-state drives. The latter requires an independent power supply and can only be used in very special equipment; it is beyond the scope of this chapter.


Main Control Chip

The main control chip is the basic component of the entire solid-state drive. On one side it performs all data transfer by interconnecting the flash storage semiconductors and the external SATA interface, and on the other it accepts system commands and logically distributes the data load across each flash chip (Hadi et al., 2021). The main control chip is connected to the flash memory chips and the external interface and controls the operation of the entire drive by sensibly distributing data storage across each flash memory chip (SpeedGuide, n.d.). A good main control chip is characterised by fast data processing speed and advanced algorithms.

Flash Particles

In solid-state drives, flash memory particles replace mechanical platters as the storage units. According to the density of electronic cells in NAND flash memory, it can be divided into SLC (single-level cell), MLC (multi-level cell), and TLC (triple-level cell) (Ahn & Lee, 2017). These three types of memory cells differ markedly in lifetime and cost (Cha et al., 2015). SLC (single-level cell) uses a single-layer electronic structure with a small voltage change range when writing data; it has a long life and supports more than 100,000 program/erase cycles, but it is expensive and is mostly used in enterprise-level high-end products. MLC (multi-level cell) uses a double-layer electronic structure built with high and low voltages. It has a long life and a medium price and is mostly used in consumer high-end products (Hepisuthar, 2021; Lee et al., 2013). The number of program/erase cycles is about 5,000; compared with SLC, the write speed and endurance are reduced, and the chip adopts a wear-leveling algorithm to meet the requirements of long-term use (Gubanov & Afonin, 2014). TLC (triple-level cell) has the highest storage density of the MLC family of flash memory (up to 3 bits/cell), with a capacity 1.5 times that of MLC, the lowest cost, the lowest endurance, and about 1,000 to 2,000 program/erase cycles. TLC is the flash memory particle of choice for mainstream manufacturers (Wang, 2019). The structure of a flash-based solid-state drive is much simpler than that of a mechanical hard disk (HDD); it is composed of a shell and a printed circuit board (PCB) (Hepisuthar, 2021). The shell only plays a protective role; the core is the printed circuit board, which carries the main control chip, a cache chip (some low-end products omit the cache chip), and the flash memory chips that store the data (ISO/IEC 27037:2012, 2018).


Cache Chip

The cache chip is mainly used for random reading and writing of commonly used files and fast reading and writing of fragmented files. The cache chip is generally placed next to the main control chip, and its function is similar to the cache of a mechanical hard disk. The cache chip can perform functions such as data pre-reading, write caching, and storing recently accessed data. It is worth noting that some low-end products omit the cache chip to save cost, and their performance inevitably decreases. Flash memory chips take on the important task of data storage, so they are the most numerous components on the circuit board and occupy the largest space. The capacity of a solid-state drive is mainly determined by the number of flash memory chips mounted. The capacity of common solid-state drives on the market is between 16 GB and 1.6 TB (Kang et al., 2018).

Working Principle

The working principle of the solid-state drive is shown in Figure 1. NAND Flash refers to the flash particles (Suthar & Sharma, 2022). The SSD controller operates these flash particles in parallel through several main control channels, much like RAID 0, which improves the parallelism and efficiency of data writing (Micheloni et al., 2010). Each flash particle is further subdivided into multiple blocks, and each block contains multiple pages. Inside the SSD, the smallest access unit between the SSD controller and the flash is the page. Generally, the size of a page is 4 KB, and a block includes 16 pages. When writing data, as in RAID 0, the data is written in parallel to the available pages in the blocks of each flash particle; when a block is full, another block is written. NAND flash memory is essentially a long-life non-volatile memory (the stored data can still be retained in the event of a power outage); data deletion is not performed a single byte at a time but in fixed blocks, and because of the working principle of the MOS transistor, flash memory's write speed is slower than its read speed (Chang, 2007; Kang et al., 2018).

LITERATURE REVIEW

For quite some time, the field of digital forensics (computer forensics), or cyber forensics, has required a way to execute forensic data acquisition from SSDs as part of the forensic investigation procedure. For over ten years, potentially strong electronic evidence has been lost or rejected as inadmissible because of the lack of standard operating procedures describing exactly how to create forensic image copies, raising judicial concern regarding sensitive information held on suspect electronic equipment and SSDs (Micron Technology, 2008). According to King and Vidas (2011), as the target device's fundamental technology advances, new techniques are required. That article presents a new approach for digital forensic analysts to handle SSDs without limiting or impacting the data integrity of each individual device. This experimental project advances the area of digital forensic analysis by demonstrating that it is feasible to forensically capture a solid state drive's data while keeping data that might otherwise be lost when traditional data gathering techniques are used (Chan et al., 2015). The use of current forensic acquisition techniques, such as verification of individual files, on SSDs would result in the loss of possible digital evidence and contradictions throughout the confirmation process, because those techniques were created to image non-volatile storage media (Bunker et al., 2012). This article explains how to preserve the forensic image of SSDs in a forensically sound way using forensic live CDs, a write-blocker, and open source utilities (King & Vidas, 2011). Let us first deliver a fresh understanding of cyber forensics for SSDs, then a new viewpoint on how to accomplish data capture from volatile secondary drives such as SSDs, and finally bridge the gap between enforcement agencies and digital forensics for cyber investigation (Kim et al., 2013). The notion that solid-state SSDs represent the tip of the cyber-forensic spear must be addressed. Initially, the focus of cyber investigation was on garbage collection (trash data) and the TRIM function, whereas technical study on developing a different solution was essentially non-existent (Antonellis, 2008). This study was created to dispel any notion or belief that SSDs' aggressive garbage collection and the TRIM function make it impossible to create forensically reliable bit-stream copies. It is demonstrated in this research that the intended approach allows digital forensic investigators to obtain forensically reliable bit-stream copies of SSDs (Lee et al., 2013). The experimental findings in this study are directly reproducible, and if digital forensic analysts adhere to the planned guidelines, an identical forensic bit-stream replica of an SSD is created every time (SpeedGuide, n.d.). This technique was specifically tested and validated utilising open-source data collection tools and procedures. There should be no reason why, later on, the trustworthiness of an SSD's information, when properly handled, should be called into question (SpeedGuide, n.d.).


RESEARCH METHODOLOGY

When a secondary storage device is attached to a PC system, its data integrity, hash values, contents, and metadata become more susceptible to change. The automatic mounting system for an SSD storage device can cause adjustments — such as file indexing or timestamp changes — to occur, principally in a computer's unallocated areas and deleted-file partitions, which when mounted can result in possible digital evidence being manipulated, overwritten, or lost (Cha et al., 2015). Because of the aggressive TRIM feature and garbage collection on solid state drives (Gubanov & Afonin, 2014; Chang et al., 2016), these adjustments are more pronounced, and the chances of losing viable digital evidence due to self-contamination are greater. Figure 1 depicts the differences between the two states (Ko, 2019).

Figure 1. Mounting status vs. TRIM effectiveness

SSD in Mounted Status: The quantity of information that can be retrieved from a secondary storage device (SSD) is determined by the file system structure and storage technique and varies by OS. Figure 2 depicts two lists containing comparable files, some of which have been removed but remain on the solid state drive (SSD) at an incident scene. Consider the following scenario: (1) after a few days, the SSD taken from the crime scene is connected to a physical device at the forensic investigation laboratory, and (2) it is imaged (bit-by-bit copy). Current secondary device data gathering processes do not address whether or not a device should be mounted. When an electronic device is connected to a forensic investigation workstation with the TRIM feature and auto-mount enabled, and file number 37, which holds the incriminating or exculpating digital data, is destroyed or erased, the evidence cannot be recovered.

Figure 2. Mounted solid state drive

SSD in Unmounted Status: Consider the same task in a completely different context (see Figure 3), where the SSD is now connected to a forensic workstation with auto-mount turned off. Because the device was not mounted, the TRIM function cannot operate, and file number 37 is still recoverable.

Figure 3. Unmounted solid state drive

Solid State Drive Hashing: By definition, digital evidence is unstable and susceptible to modification (ISO/IEC 27037:2012, 2018). This is frequently true in forensic investigations involving digital data discovered on SSDs (Chang, 2007). Hashing techniques are used to assess the legitimacy and accuracy of digital evidence by comparing hash values across images (bit-by-bit copies) (Takeuchi, 2009; Wang, 2019), establishing the data integrity and its legal validity in a judicial court (Geier, 2015; ISO/IEC 27037:2012, 2018). When confirming a device's hash value, it is critical to consider the device's state, which is also reflected in the chain of custody. A device can produce one of two completely different hash values depending on its state: the hash values produced by a mounted device will be different from those produced by an unmounted device.
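As an illustration of the hashing step, the short Python sketch below computes SHA-1 and SHA-256 digests of a raw device or image file in fixed-size chunks. It is a generic example, not the chapter's tool chain; the device path shown is hypothetical and, as the chapter stresses, the drive must be attached unmounted and write-blocked before it is hashed.

    import hashlib

    def hash_device(path, chunk_size=1024 * 1024):
        """Compute SHA-1 and SHA-256 of a block device or image file, chunk by chunk."""
        sha1, sha256 = hashlib.sha1(), hashlib.sha256()
        with open(path, "rb") as device:
            while True:
                chunk = device.read(chunk_size)
                if not chunk:
                    break
                sha1.update(chunk)
                sha256.update(chunk)
        return sha1.hexdigest(), sha256.hexdigest()

    # Hypothetical unmounted, write-blocked SSD exposed as /dev/sdb
    # print(hash_device("/dev/sdb"))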


Figure 4. Hashing unmounted vs. mounted storage devices

Cabling and Adapter: Practical study and trials show that using different Serial Advanced Technology Attachment (SATA) cables and SATA adapters may result in surprising variances between the hash values produced from the same storage disk volume. There is no clear explanation for this alteration, which necessitates additional study (refer to section 10). The outcomes also demonstrate that a bit-stream copy's hash value will coincide with the hash value supplied by the adapter used to validate and forensically image the SSD (see Figure 6). Regardless of the adapter being used, the hash value produced for a partition will always match, even when the hash values produced from the disk's volume change. As seen in Figure 5, adapters might provide false-negative findings; for consistency, see Figure 6.

Seized evidence: S(I0). Adapter: An(I0). Verification: Hk(I0), where k represents the hashing algorithm used, e.g. SHA256 or SHA1.
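The distinction between the hash of the whole volume S(I0) and the hashes of its partitions can be illustrated with the hedged sketch below, which simply records one chunked digest for the full device node and one per partition node. The device names such as /dev/sdb and /dev/sdb1 are hypothetical examples, not the chapter's test setup.

    import hashlib

    def sha256_of(path, chunk_size=1024 * 1024):
        """Chunked SHA-256 of a raw device or partition node."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def volume_and_partition_hashes(device, partitions):
        """Return the whole-volume hash and the per-partition hashes separately."""
        return sha256_of(device), {p: sha256_of(p) for p in partitions}

    # Hypothetical paths: whole SSD plus its two partitions
    # vol, parts = volume_and_partition_hashes("/dev/sdb", ["/dev/sdb1", "/dev/sdb2"])
    # A changed adapter may alter 'vol' while every value in 'parts' stays the same.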


Figure 5. Hashing solid state drives

Figure 6. Hashing solid state drives


Data Acquisition of Solid State Drive: Electronic evidence necessitates appropriate procedures for securing and maintaining it (Takeuchi, 2009). The appropriate procedure should ensure that the truthfulness of the information and the dependability of the evidence presented in court cannot be contested (Chang, 2007; Hepisuthar, 2021). In the world of digital crime scene investigation, comparing the hash values of a bit-stream copy with the original evidence is the best technique to assess whether the integrity of computerised evidence has been damaged. If the created bit-stream copy and any further copies are identical to the original evidence, then its genuineness and dependability are confirmed. To ensure the dependability of computerised evidence saved on SSDs, a good method for efficiently preserving the integrity of volatile data should be developed. The method suggested in this work will enable forensic examiners to create many indistinguishable scientific bit-stream replicas of solid state drives. To complete the data acquisition step of a cyber investigation on SSDs, auto-mount should be deactivated, and the device must be identified but not mounted when attached to the computer. The TRIM mechanism and garbage collection become ineffective when auto-mount is disabled, stabilising the volatile nature of SSDs and enabling multiple identical bit-stream copies to be produced from the same device (Chang et al., 2016) (see Figure 7).

Figure 7. Disabling auto-mount prevents the TRIM function from operating and ensures that the generated bit-stream copy In is always identical to I0

1. Disable auto-mounting or start the computer using a Forensic Live CD.
2. While the device is unmounted, complete all verification and acquisition steps (Fukami et al., 2017).
3. Select an adapter or cable to use, noting its name and model. Only one adapter should be used to validate and image an SSD.
4. Using a write blocker, connect the SSD to the forensic workstation.


5. Using the SHA-1 hashing technique, verify the device's integrity and record the resulting hash value (Chang, 2007).
6. Make a bit-by-bit (forensic) copy.
7. Compare the data integrity of the bit-by-bit copy to the original hash value.

A minimal scripted sketch of steps 5 to 7 is given below.

Figure 8. Forensic imaging of SSDs
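The following Python sketch strings steps 5–7 together: it hashes the unmounted, write-blocked source device, makes a bit-by-bit copy to an image file, and then confirms that the image hash matches the source hash. It is only an assumed, minimal workflow for illustration; production imaging tools, and the exact device and output paths shown, are not part of this chapter.

    import hashlib

    def sha1_of(path, chunk_size=1024 * 1024):
        digest = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def acquire(source_device, image_path, chunk_size=1024 * 1024):
        """Steps 5-7: hash the source, create a bit-by-bit copy, verify the copy."""
        source_hash = sha1_of(source_device)                 # step 5: record H(I0)
        with open(source_device, "rb") as src, open(image_path, "wb") as dst:
            for chunk in iter(lambda: src.read(chunk_size), b""):
                dst.write(chunk)                             # step 6: bit-stream copy
        copy_hash = sha1_of(image_path)                      # step 7: hash the copy
        return source_hash, copy_hash, source_hash == copy_hash

    # Hypothetical write-blocked SSD and destination image file
    # print(acquire("/dev/sdb", "/mnt/evidence/ssd_case01.dd"))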

∀n : C(In) ⊆ S(I0) ∴ Hn(In) = H1(I0)

Because every copy In is a subset of I0, all the hash values generated from In will always match the original hash value of I0. If the directions are followed precisely, the outcomes ought to be constant (see Figure 8). Validation is the assessment and offering of unbiased evidence that a tool, strategy, or procedure functions well and as expected (Micheloni et al., 2010). According to the requirements set forth by the International Organization for Standardization for the administration of digital evidence, repeatability is necessary to determine whether the same results may be attained by:




•	Using the same measuring approach and methods, as well as the same instructions and circumstances.
•	Inquiring whether the experiment may be repeated after the initial test.

The proposed technique is not without limitations:

•	It only ensures that deleted files that were still accessible during the last session on the computer can be recovered.
•	The file format and system software both affect how well lost files can be recovered.
•	A portion of the retrieved deleted data can be incomplete or corrupted.
•	Taking an SSD out of a computer system could be challenging.
•	Key point: electronic evidence stored on the accused's storage device will be lost if the solid state drive is linked to a workstation with auto-mount activated (see section 3).
•	This approach has only been evaluated and validated on Linux OS.

Imagine that an SSD was taken during a criminal investigation and will be imaged later (Suthar & Sharma, n.d.). The exhibit is stored in the lab after it has been hashed and imaged. The device is then connected to a different computer so that further images can be taken after the defence requests access to the original evidence. Will the most recent hash match the one from the beginning?

Experimental Framework: Disable auto-mount to imitate a situation in which an SSD is taken from a crime scene.

1. Fully formatted with many partitions.
2. Containing 4.1 GB of the same data.
3. The identical 2.4 GB of material was removed.
4. Manual unmounting followed by verification using the SHA256 and SHA1 algorithms.
5. Each SSD was fully formatted and had a distinct operating system installed.
6. Containing a random assortment of data.
7. Deleted data at random.
8. Turn off the computer.
9. Started the Forensic Live CD.
10. SHA256 and SHA1 algorithms were used to validate the data.

The following is a list of the equipment used in the tests.


Table 1. Solid state drives

ID | Device | TRIM
1 | Kingston V300 | Yes
2 | Transcend TS64GSSD370 | Yes
3 | OCZ Agility 3 | Yes
4 | SanDisk SDSSDA240G | Yes
5 | Zheino A1 | Yes
6 | mSATA SSD 20GB | Yes
7 | mSATA Zheino Q1 | Yes

Table 2. List of hard disk drives

ID | Device
1 | Toshiba MK8032 GSX

Table 3. List of SATA and mSATA adapters

ID | Device
0 | Internal SATA Connector
1 | Mini PCIe mSATA SSD to 2.5 SATA
2 | Unbranded USB 3.0 to SATA 22 Pin 2.5
3 | Nimitz USB 3.0 to SATA 22 Pin 2.5
4 | USB 3.0 to Mini PCIE mSATA SSD
5 | Sabrent USB 3.0 to SSD/2.5-Inch SATA
6 | USB 2.0 to SATA Cable Model: SYZD-168


Table 4. List of write-blockers

ID | Device
1 | Tableau T35es eSATA Forensic Bridge

Practical Experiment 1: Turning Off Auto-Mount - The goal of this experiment is to see whether deactivating auto-mount disables garbage collection and the TRIM function and stops changes from occurring while attaching an SSD to a computer. The test is conducted twice, once with auto-mount disabled and once with a Forensic Live CD.

Practical Experiment 2: Validating the Data Authenticity of Every SSD in the Allotted Time - The experiment's goal is to check whether attaching an SSD to a computer with the auto-mount feature off prevents the TRIM function from deleting any remnants of volatile data after 30 days.

Practical Experiment 3: Comparing SATA-to-USB Cables and mSATA-to-USB/SATA Adapters - Seven SSDs, one hard disk drive (HDD), one internal SATA connection, and six different kinds of cables/adapters were used in this experiment. The experiment's goal is to see whether the converters used to image solid-state drives and hard drives produce the same hashing results. The HDD and SSDs are first hashed and imaged via the internal SATA connection. The USB-to-SATA converters are then used to verify the devices' integrity.

Practical Experiment 4: Are All of the Produced Bit-Stream Copies Identical to the Original Evidence? - The purpose of this experiment is to integrate Experiments 1 through 3 and to determine whether each bit-stream copy's hash value matches the SSDs' respective original hash values.

Evaluation and Results

Experiment 1 Results: Disabling Auto-Mount - Following the connection of the SSDs to a system without auto-mount, the SHA1 and SHA256 algorithms were used to verify the data consistency of the volatile forensic evidence, and the resulting hashes from every SSD matched their corresponding original hash values. When the test was carried out utilising a Forensic Live CD, the hash values produced also coincided with the original hash values.


Table 5. Verification results – auto-mount disabled

SSD | Verified n Times | Hn=H1
1 | 9 | Match
2 | 9 | Match
3 | 9 | Match
4 | 9 | Match
5 | 9 | Match
6 | 12 | Match
7 | 12 | Match

Experiment 2 Results: Verifying Every SSD's Integrity Within the Allotted Time - When the SSDs were connected and the equipment integrity was checked, the produced hashes corresponded to the original hashes.

Table 6. Results – Verifying the integrity of the devices after 30 days

SSD | After 30 Days
1 | Match
2 | Match
3 | Match
4 | Match
5 | Match
6 | Match
7 | Match

Experiment 3 Results: Various SATA-to-USB Cables and mSATA-to-USB/SATA Adapters - The adapters used to verify the HDD and SSDs produced three distinct hashing results. The variations in hash values are brought on by the chosen adapter and do not indicate self-contamination. No matter which adapter is used to validate and image the disk drives, the partition hashes remain the same.


Table 7. Results – Testing cables and adapters

Adapter ID | Hn=H1
0 | Match
1 | Match
2 | Match
3 | Mismatch
4 | Mismatch
5 | Match
6 | Mismatch

This experiment examined numerous cables and adapters and the way they might mislead digital forensic analysts into thinking that the integrity of volatile electronic evidence has been broken when, in reality, it has not. Data spoliation did not cause the discrepancy in hash values; rather, it is caused by the cable or adapter used to check the device's integrity. Adapters three and four shared identical mismatched hash values for the disk's volume, whereas adapter six did not match any of the other adapters' hash values. Despite this, each SSD's hash values at the partition level remained consistent regardless of the adapter/cable used to perform data acquisition for the disk's volume.


Figure 9. SSD’ volume hash values – integrity check

Figure 10. SSD’ partition hash values – integrity check


Imaging Solid-State Drives: Results of Practical Experiment 4: Does Each Produced Bit-Stream Copy Match the Original Evidence? - It was possible to image the SSDs without threatening their integrity, since the hash values for each bit-stream copy coincided with the SSDs' original hash values.

Table 8. Results – Verifying the integrity of the devices

SSD | Day 1 | Day 15 | Day 30
1 | Match | Match | Match
2 | Match | Match | Match
3 | Match | Match | Match
4 | Match | Match | Match
5 | Match | Match | Match
6 | Match | Match | Match
7 | Match | Match | Match

According to the findings of the research, the hash values of an SSD connected to a personal computer system with auto-mount disabled remain unaltered no matter how many times the device is attached to a forensic workstation. The findings additionally disclose that many identical forensic images can be created from an unmounted SSD. It is recommended that:

•	The suggested approach should be accepted and adjusted as necessary to conform to existing forensic laboratory standards and SOPs.
•	To warrant the data integrity of digital evidence (such as computers and external drives) and avoid unexpected findings, the forensic laboratory's cables and adapters ought to be inspected and verified in advance of forensic imaging to confirm that all cables give identical hash values.
•	To hash and image a solid state drive, Forensic Live CDs that by default do not auto-mount ought to be used.
•	At the crime scene, the SSD needs to be hashed; the same applies if an external storage device has to be used to image the device.
•	When at the site of the crime, switch off the suspect's laptop and remove the solid state drive; if this is not possible, shut down the computer and boot from the Forensic Live CD. Without mounting the SSD, copy the entire data series to an external storage device.


•	FRTs should use laptops with write-blocking software to hash and image the seized SSD. The hash value created will be used as a baseline to see whether data spoliation has happened.
•	The SATA adapter used to image or hash the SSD at the crime scene should be the identical device used for hashing and imaging the SSD in the forensic laboratory. This allows analysts to see whether the device has changed in any way.
•	Forensic analysts should avoid employing imaging technologies that need a storage device to be mounted while collecting data.

CONCLUSION

Due to the variety of operating systems, software, and forensic imaging equipment used in public and private sector laboratories, the strategy for cyber forensic data acquisition from solid state drives proposed in this chapter cannot provide a foolproof answer to the problem that law enforcement and computer forensics laboratories face internationally, but it does furnish an established and forensically examined solution that can be utilised by both. Because of the absence of related working techniques and the volatile nature of SSDs, it has been difficult to produce identical forensic bit-stream copies. This instability calls into question the validity of the integrity of the digital evidence as well as the case itself. A digital forensic analyst is more likely to come across an SSD during an examination as a result of SSDs' rising popularity and capacity among consumers over the preceding ten years. It is essential to use an alternative approach, because conventional forensic imaging techniques cannot be used to undertake data gathering on SSDs due to their volatile nature. The findings of the research reveal that, provided the auto-mount mode is disabled, the hash values of an unmounted solid state drive will not change, regardless of how often the drive is attached to the computer forensic workstation environment.

ACKNOWLEDGMENT

This paper and the research behind it would not have been possible without the exceptional support of my supervisor, Dr. Priyanka Sharma. Her enthusiasm, knowledge, and exacting attention to detail have been an inspiration and kept my work on track from my first encounter with research work. We are also thankful to Rashtriya Raksha University, Gandhinagar. We would like to thank all the members of the Cyber Security Laboratory for their knowledge and suggestions through daily discussions in advancing this research.

REFERENCES

Ahn, N.-Y., & Lee, D. H. (2017). Duty to delete on Non-volatile Memory. doi:10.8080/1020190046820

Antonellis, C. J. (2008). Solid state disks and computer forensics. ISSA Journal, 36–38.

Bunker, T., Wei, M., & Swanson, S. (2012). Ming II: A flexible platform for NAND flash-based research. UCSD CSE.

Cha, J., Kang, W., Chung, J., Park, K., & Kang, S. (2015). A New Accelerated Endurance Test for Terabit NAND Flash Memory Using Interference Effect. IEEE Transactions on Semiconductor Manufacturing, 28(3), 399–407. doi:10.1109/TSM.2015.2429211

Chang, D., Lin, W., & Chen, H. (2016). FastRead: Improving Read Performance for Multilevel-Cell Flash Memory. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 24, 2998–3002.

Chang, L. (2007). On Efficient Wear Leveling for Large Scale Flash Memory Storage Systems (Vol. 07). ACM. doi:10.1145/1244002.1244248

Hepisuthar, M. (2021). Comparative Analysis Study on SSD, HDD, and SSHD. Turkish Journal of Computer and Mathematics Education, 12(3), 3635–3641. doi:10.17762/turcomat.v12i3.1644

Fukami, A., Ghose, S., Luo, Y., Cai, Y., & Mutlu, O. (2017). Improving the reliability of chip-off forensic analysis of NAND flash memory devices. Digital Investigation, 20, S1–S11. doi:10.1016/j.diin.2017.01.011

Geier, F. (2015). The differences between SSD and HDD technology regarding forensic investigations.

Gubanov, Y., & Afonin, O. (2014). Recovering evidence from SSD drives: understanding TRIM, garbage collection and exclusions. Belkasoft.

Hadi, H. J., Musthaq, N., & Khan, I. U. (2021). SSD forensic: Evidence generation and forensic research on solid state drives using trim analysis. 2021 International Conference on Cyber Warfare and Security (ICCWS). IEEE. doi:10.1109/ICCWS53234.2021.9702989


Suthar, H., & Sharma, P. (2022). An Approach to Data Recovery from Solid State Drive: Cyber Forensics. Apple Academic Press. https://www.appleacademicpress.com/advancements-in-cyber-crime-investigation-and-digital-forensics-/1119

ISO/IEC. (2018). Security Techniques. ISO. https://www.iso.org/standard/44381.html

Kang, M., Lee, W., & Kim, S. (2018). Subpage-Aware Solid State Drive for Improving Lifetime and Performance. IEEE Transactions on Computers, 67(10), 1492–1505. doi:10.1109/TC.2018.2827033

Kim, J., Lee, Y., Lee, K., Jung, T., Volokhov, D., & Yim, K. (2013). Vulnerability to flash controller for secure usb drives. J. Internet Serv. Inf. Secure, 3(3/4), 136–145.

King, C., & Vidas, T. (2011). Empirical analysis of solid state disk data retention when used with contemporary operating systems (Vol. 8). Elsevier Science Publishers B.

Ko, J. (2019). Variation-Tolerant WL Driving Scheme for High-Capacity NAND Flash Memory. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 27, 1828–1839.

Lee, J., Kim, Y., Shipman, G. M., Oral, S., & Kim, J. (2013). Preemptible I/O scheduling of garbage collection for solid state drives. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 32(2).

Micheloni, R., Marelli, A., & Commodaro, S. (2010). Nand overview, From memory to systems: Inside NAND flash memories. Springer. doi:10.1007/978-90-481-9431-5

Micron Technology. (2008). Wear Leveling Techniques in NAND Flash. Micron.

SpeedGuide. (n.d.). SLC, MLC or TLC NAND for Solid State Drives? SpeedGuide. https://www.speedguide.net/faq/slc-mlc-or-tlc-Nand-for-solid-state-drives-406

Suthar, H., & Sharma, P. (2022). Guaranteed Data Destruction Strategies and Drive Sanitization: SSD. Research Square. doi:10.21203/rs.3.rs-1896935/v1

Suthar, H., & Sharma, P. (2022). Buy Computer Forensic: Practical Handbook book online at low prices in India. Notion Press. https://www.amazon.in/Computer-Forensic-Practical-Hepi-Suthar/dp/B0B1DZ45R4


Takeuchi, K. (2009). Novel Co-Design of NAND Flash Memory and NAND Flash Controller Circuits for Sub-30 nm Low-Power High-Speed Solid-State Drives (SSD). IEEE Journal of Solid-State Circuits, 44(4), 1227–1234. doi:10.1109/JSSC.2009.2014027

Templeman, R., & Kapadia, A. (2012). Gangrene: exploring the mortality of flash memory. In HotSec'12 (pp. 1–1). USENIX Association.

Wang, P. (2019). Three-Dimensional NAND Flash for Vector-Matrix Multiplication. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 27, 988–991.

KEY TERMS AND DEFINITIONS

Computer Forensic: Computer forensics is a branch of digital forensic science pertaining to evidence found in computers and digital storage media.

Cyber Security: Cybersecurity is the practice of protecting systems, networks, and programs from digital attacks. These cyberattacks are usually aimed at accessing, changing, or destroying sensitive information; extorting money from users; or interrupting normal business processes.

SSD: A solid-state drive is a solid-state storage device that uses integrated circuit assemblies to store data persistently, typically using flash memory, and functioning as secondary storage in the hierarchy of computer storage.


Chapter 4

A Comparative Review for Color Image Denoising

Ashpreet
https://orcid.org/0000-0002-8121-7214
National Institute of Technology, Kurukshetra, India

ABSTRACT

With the explosion in the number of color digital images taken every day, the demand for more accurate and visually pleasing images is increasing. Images that have only one component in each pixel are called scalar images. Correspondingly, when each pixel consists of three separate components from three different signal channels, these are called color images. Image denoising, which aims to reconstruct a high-quality image from its degraded observation, is a classical yet still very active topic in the area of low-level computer vision. Impulse noise is one of the most severe noises and usually affects images during the signal acquisition stage or due to bit errors in transmission. The use of color images is increasing in many color image processing applications, and the restoration of images corrupted by noise is a very common problem in color image processing. Therefore, work is required to reduce noise without losing color image features.

DOI: 10.4018/978-1-6684-6864-7.ch004
Copyright © 2023, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

1. INTRODUCTION

1.1 Overview

A person receives maximum information about an object or a living being through images. An image is an illustration or general imprint of an object. It can also be defined as a two-variable function o(i,j) where, for each position (i,j) in the projection plane, o(i,j) defines the light intensity at that point. The most commonly used types of images are binary images, gray images, and color images. Binary images contain only black and white and are also known as one-bit images. Images which have only brightness information and grayscale intensity are called gray images. They contain 8-bit data, which implies 256 brightness levels; 0 is used to represent black while 255 is used for white. Color images are those that contain three bands of monochrome information. These bands contain the brightness level information. A color image is comprised of picture elements called pixels, and the pixel at a particular location is represented by a vector o(i,j) which has three intensity values o1(i,j), o2(i,j), and o3(i,j), each corresponding to the red, green, and blue colors, respectively (Plataniotis & Venetsanopoulos, 2000; Gonzalez & Woods, 2018; Petrou & Petrou, 2010). In the present day, visual information transferred in the form of digital images is becoming a primary medium of communication. The received image needs processing before it can be used in various applications like face recognition, surveillance, medical imaging, robot vision, underwater imaging, satellite imaging, and remote sensing (Pal & Biswas, 2009; Dubey & Katarya, 2021; Ashok, 2021). Frequently, the received image is of low quality due to problems such as noise, poor brightness, contrast, blur, or artefacts. Image processing is a branch of engineering that investigates ways of restoring a damaged image to its original state. Image denoising (the reduction of noise in images) is a primary pre-processing task for image analysis methods because noise is an unwanted and unavoidable component that is mixed with the original image in a variety of situations, such as during image acquisition, storage, and transmission. Noise can highly dilute the image quality, as it arises from multiple sources such as the transmission of the image, dust on the camera lens, faulty photo sensors, and faulty memory locations (Julliand et al., 2016). Generally, faulty photo sensors and faulty memory locations cannot be avoided as these occur due to the aging of electronic components. The possible types of noise that can affect images are Gaussian noise, shot noise, impulse noise, speckle noise, thermal noise, etc. Gaussian noise originates from the thermal vibration of atoms and the discrete nature of radiation. Shot noise occurs due to the discrete nature of light. Impulse noise is a type of noise which randomly modifies pixel values and can be classified into fixed-valued impulse noise (also called salt-and-pepper noise, SPN) and random-valued impulse noise (RVIN). In the case of SPN, pixel values are modified to only two values, either the high or the low value of the range, whereas in the case of RVIN the pixel values are modified independently as well as randomly. Speckle noise comes under the category of multiplicative noise: when introduced into an image, it is multiplied with the true pixel value of the noise-free image. Thermal noise arises due to the thermal energy of the chip. The effect of SPN on an image is shown in Figure 1a and noise reduction to get the denoised image is shown in Figure 1b.
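To make the SPN model concrete, the small NumPy sketch below corrupts a fraction d of the pixels of a color image with fixed-valued (salt-and-pepper) impulse noise, forcing each affected pixel to either the minimum (0) or the maximum (255) of the range. It is only an illustrative model of the noise described above, not code from any of the surveyed methods, and the example image is synthetic.

    import numpy as np

    def add_salt_pepper(image, density, seed=0):
        """Corrupt a fraction `density` of pixel locations with fixed-valued impulse noise (SPN)."""
        rng = np.random.default_rng(seed)
        noisy = image.copy()
        h, w = image.shape[:2]
        mask = rng.random((h, w)) < density       # which pixel locations get corrupted
        salt = rng.random((h, w)) < 0.5           # half become 255 (salt), half become 0 (pepper)
        noisy[mask & salt] = 255
        noisy[mask & ~salt] = 0
        return noisy

    # Example: a synthetic 4x4 RGB image corrupted at 20% noise density
    img = np.full((4, 4, 3), 128, dtype=np.uint8)
    print(add_salt_pepper(img, density=0.2))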


Figure 1. Effect of SPN (a) Noisy image (b) Denoised image

Every day a massive number of images are captured and stored, but both these tasks are prone to noise. Because these images are regarded as a crucial source of information, a large number of them are communicated, stored, and analyzed. Any loss of image information might have a negative impact on the overall performance of the system that contains the image processing step. So, day by day the demand for more conspicuous and accurate images is increasing. To fulfill this demand, noise is required to be removed from the images. The most prevalent type of image noise is impulse noise, which may form any pattern, making it even more difficult to locate the source of the noise and forecast the original value of the noisy pixel. This is a basic problem in digital image processing, yet it continues to capture the attention of diverse academics since the requirement for improved image visual clarity is constantly in demand. Over the last few decades, various methods have been developed for denoising of color images, such as those based upon the Median Filter (Ko & Lee, 1991; Astola et al., 1990; Dong & Xu, 2007; Toh et al., 2008; Kang & Wang, 2009; Toh & Isa, 2009; Wang et al., 2010; Nair & Raju, 2012; Nair & Mol, 2013; Jin et al., 2008; Xu et al., 2014; Li et al., 2014; Jin et al., 2016; Hung & Chang, 2017; Roy et al., 2017; Zhu et al., 2018; Erkan et al., 2018; Erkan & Gokrem, 2018; Chen et al., 2019; Taha & Ibrahim, 2020; Jin et al., 2019; Noor et al., 2020; Erkan et al., 2020; Gupta et al., 2015; Smolka & Chydzinski, 2005; Celebi & Aslandogan, 2008; Zhao et al., 2012; Gellert & Brad, 2016; Erkan & Kilicman, 2016; Roig & Estruch, 2016; Hwang & Haddad, 1995; Sreenivasulu & Chaitanya, 2014; Sun et al., 2015; Lu & Chou, 2012; Yin et al., 1996; Arce, 1998; Arce & Paredes, 2000; Pattnaik et al., 2012; Palabaş & Gangal, 2012; Yu & Lee, 1993; Tsirikolias, 2016; Chen et al., 1999; Habib et al., 2015; Malinski & Smolka, 2019; Smolka & Malinski, 2018; Habib et al., 2015; Chen et al., 2020; Sa & Majhi, 2010; Singh et al., 2020; Geng et al., 2012; Wang et al., 2014; Jin et al., 2011), Fuzzy Logic (Wang et al., 2015; Habib et al., 2016; Roy et al., 2018; Xiao et al., 2016; Schulte et al., 2006; Schulte et al., 2007; Masood et al., 2014; Astola & Kuosmanen, 2020; Singh & Verma, 2021; Xiao et al., 2011; Jin et al., 2012), Principal Component Analysis (PCA) (Zhang et al., 2010; Dai et al., 2017), Anisotropic Diffusion (AD) (Xu et al., 2016; Jiranantanagorn, 2019), Optimization (Kumar et al., 2017; Khaw et al., 2019), Neural Networks (NN) (Li et al., 2020; Islam et al., 2018; Turkmen, 2016; Zhang et al., 2020), etc., and it is very difficult to choose one method for the desired application. This demands more technological advancements in image denoising methods to maintain image quality, especially in color images. The problem imposed by impulse noise is more challenging for color images as there are three channels for noise reduction, and the other most prominent distortion involved in color image deterioration is color artifacts. Due to these inevitable challenges, image denoising in color images is still a significant field that demands constant improvement. Hence it will be a challenging task to find a suitable method for color images.

1.2 General Framework of Denoising

In general, any denoising system consists of two processes, degradation and restoration, as shown in Figure 2. The degradation function is the noise η(i,j) at location (i,j), which acts on the image pixel o(i,j) to produce the corrupted pixel x(i,j). A denoising method is then applied to the corrupted pixel to obtain an approximation y(i,j) of the original value.

Figure 2. Denoising framework

1.3 Problem Identification

There are countless motivations for image processing, but the majority of them fall into one of two categories: (i) eliminating unnecessary elements that degrade the image, and (ii) extracting information by transforming the image into a more usable form. Image de-noising fits into both categories and is critical not just for visual improvement but also for facilitating automated processing. As discussed above, image distortion is heavily influenced by impulse noise. Because this noise can destroy image details, it is vital to identify and eliminate it before the image is passed on to subsequent processing steps. The problem imposed by impulse noise is more challenging for color images, because there are three channels to process for noise reduction, and the other most prominent distortion involved in color image deterioration is color artifacts. The “Standard Median Filter” (Petrou & Petrou, 2010) is the most common filter used to remove impulse noise while preserving details in color images. However, it operates on all pixels, whether corrupted or uncorrupted, and works well only for low noise densities; the denoised image also loses details such as sharp corners when the window size is increased. Various advancements have been proposed to improve its performance in view of these limits. In color image denoising, designing an efficient method is a major challenge, since it requires accurate noise detection, high proximity to the original image and effective noise reduction over a wide range of color images. Because color images are used in a vast range of applications today, methods need to be designed for color image datasets, and any method intended for color images must be tested on a wide range of images and noise levels.

1.4 Image Denoising Methods

There are diverse classes of methods in spatial-domain filtering. Among these, the “standard median filter” (SMF) is effective for images damaged by impulse noise. It follows the sliding-window principle and discards the values that are most dissimilar to the rest of the neighbourhood, replacing the centre pixel with the median of the window. Since the median is always one of the pixel values actually present in the window, the median filter does not produce new, improbable pixel values when the window overlaps an edge, and it is therefore well suited to preserving object edges. These central values help median filters remove uniform noise from a digital image, as illustrated in Figure 3.


Figure 3. Median filtering operation
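To illustrate the sliding-window principle described above, the sketch below applies a standard median filter channel-wise to a colour image. It is a minimal, unoptimised reference in Python, assuming an 8-bit image stored as a NumPy array; the function names are illustrative, and the reviewed methods add noise detectors and adaptive weighting on top of this basic operation.

import numpy as np

def standard_median_filter(channel, window=3):
    """Standard median filter (SMF) on one channel: every pixel is replaced
    by the median of its window x window neighbourhood."""
    pad = window // 2
    padded = np.pad(channel, pad, mode="reflect")
    out = np.empty_like(channel)
    rows, cols = channel.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = np.median(padded[i:i + window, j:j + window])
    return out

def smf_rgb(image, window=3):
    """Channel-wise SMF for a colour image; filtering the channels
    independently is simple but can introduce colour artefacts."""
    return np.dstack([standard_median_filter(image[..., c], window)
                      for c in range(image.shape[-1])])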

The primary disadvantage of the SMF is that it is effective only for images with low noise levels. In order to determine the current research directions, a full literature study was undertaken in accordance with the stated work, and the different methods proposed in various research papers are described below. Median-based methods are discussed first. Ko and Lee developed the “Center Weighted Median Filter (CWMF)” (Ko & Lee, 1991), in which only the center coefficient of the filter kernel has a weight greater than one. The “Directional Weighted Median (DWM)” filter (Dong & Xu, 2007) considers the information of neighbouring pixels in four directions to weight the pixels in a local window, and the noise is removed by the weighted median filter applied along the optimum direction. Toh et al. developed a recursive “Fuzzy Switching Median Filter (FSMF)” (Toh et al., 2008), wherein the detection module searches for the two noise intensities in the noisy image and the filtering operation, using fuzzy inference, then proceeds window by window. A “Fuzzy Reasoning-based Directional Median (FRDM)” filter (Kang & Wang, 2009) is proposed in which the current pixel is classified as an impulse noise pixel, an informative pixel or a noise-free pixel using fuzzy reasoning, and the noise is removed using a median filter or a directional median filter. The “Fuzzy Based Decision Algorithm (FBDA)” (Nair & Raju, 2012) is proposed in which median filtering is applied using the difference of each pixel from the central pixel of the window, and the membership value is determined from the largest difference. Nair et al. suggested an effective “Direction based Adaptive Weighted Switching Median Filter (DAWSMF)” (Nair & Mol, 2013), in which the detection step uses a histogram estimation algorithm and weighted median filtering is applied to the detected pixels. A “Directional Weighted Filter” (Li et al., 2014) is suggested that first detects SPN from directional gray-level differences together with the gray-level extremes, after which the weighted mean of the filtering window is taken as the restored value. An “Adaptive Fuzzy Inference system based Directional Median (AFIDM)” filter (Habib et al., 2016) is proposed, which employs a fuzzy inference-based noise detector, with filtering implemented using median and directional median filters. A “Region Adaptive Fuzzy Filter (RAFF)” (Roy et al., 2018) is proposed in which an enhanced minimum mean value identification algorithm is used for the recognition of noisy and non-noisy pixels; three different filtering methods, namely median filtering, weighted fuzzy filtering and a mean filter employing pixel-intensity-based inverse mapping, are used depending on the availability of non-noisy pixels. Chen et al. suggested an “Adaptive Sequentially Weighted Median Filter (ASWMF)” (Chen et al., 2019) that consists of a noise detector employing the 3σ principle of the normal distribution and local amplitude statistics, followed by noise reduction through adaptive sequentially weighted median processing. Taha et al. proposed the “Recursive Switching Adaptive Median Filter (RSAMF)” (Taha & Ibrahim, 2020), which produces a mask indicating the noisy pixels, and adaptive filtering of local noise is performed by counting the number of noise-free pixels. Noor et al. combined median filters with a convolutional neural network (Noor et al., 2020), where median filters are used to remove impulse noise and the convolutional neural network removes Gaussian noise. Erkan et al. proposed an “Adaptive Frequency Median Filter (AFMF)” (Erkan et al., 2020) which restores the grey values of damaged pixels using the frequency median rather than the standard median. Gupta et al. introduced the “Adaptive Dual Threshold Median Filter (ADTMF)” (Gupta et al., 2015), which uses an adaptive dual threshold for RVIN detection and a median filter for noise removal.

Following that, methods other than the median are described, including the Mean Filter, Principal Component Analysis (PCA), Anisotropic Diffusion (AD), Neural Networks (NN) and Particle Swarm Optimization (PSO). Erkan et al. proposed an “Improved Adaptive Weighted Mean Filter (IAWMF)” (Erkan et al., 2020) that uses Euclidean pixel similarity to weight the noise-free pixels and integrates this weighting into an adaptive weighted mean filter to evaluate a new grey value of the center pixel for restoration. Another filter (Ahmed & Das, 2013) uses an adaptive fuzzy filter to detect noisy pixels and weighted mean filtering to restore the image. Chang and Liu proposed a “Fuzzy Weighted Mean Aggregation (FWMA)” (Chang & Liu, 2015) method which uses fuzzy weighted mean aggregation to detect whether or not a pixel is noisy and recovers the detected noisy pixels using a weighted average filter. Lin et al. proposed a morphological mean filter (Lin et al., 2016) that detects both the number and the position of the noise-free pixels in the image, and a dilation operation on the noise-free pixels is executed iteratively to replace the neighbouring noise pixels. Wang et al. developed an improved non-local means filter (Wang et al., 2018) by merging the bilateral filter and the non-local means filter, where the similar patches at the centre and in the neighbourhood are grouped by estimating weights from the pixel information. Nain et al. described an adaptive thresholding-based edge detection approach based on morphological operators in (Nain et al., 2008). Zhang et al. presented an image denoising method using “Principal Component Analysis (PCA)” with “Local Pixel Grouping (LPG)” (Zhang et al., 2010), in which a pixel and its nearest neighbors are modelled as a vector variable whose samples are selected by grouping pixels with similar contents, and these samples are used for PCA transform estimation to remove the noise. Dai et al. presented a general denoising framework based on guided principal component analysis (Dai et al., 2017), in which a new back projection is utilised to retrieve usable information from noisy images before PCA-based denoising is applied. An improved anisotropic diffusion filter (Xu et al., 2016) is proposed in which a local difference value is used to differentiate corrupted pixels from noise-free pixels; the corrupted pixels are replaced by pixels pre-denoised through a Gaussian filter, and an anisotropic diffusion model with a semi-adaptive threshold is applied to obtain the restored image. Jiranantanagorn proposed a method (Jiranantanagorn, 2019) that first categorizes pixels having the maximum or minimum value as corrupted pixels and then substitutes them with a value computed by anisotropic diffusion to obtain the denoised image. Li et al. proposed a “densely connected convolutional network (DenseNet)” (Li et al., 2020) consisting of convolution layers, batch normalization layers and rectified linear units, with two convolution layers used after the network for denoising. Kumar et al. proposed adaptive methods (Kumar et al., 2017) in which the noise is removed by a “Fuzzy Median Filter (FMF)” and the noise-free image is restored by an “Adaptive Particle Swarm Optimization (APSO)” based “Richardson-Lucy (R-L)” algorithm. Khaw et al. designed a “Convolutional Neural Network (CNN)” algorithm combined with the “Particle Swarm Optimization (PSO)” technique (Khaw et al., 2019) that incorporates the capability of a very deep CNN to exploit image details, the parameter optimization potential of PSO and a median filter to eliminate any possible false detection. Islam et al. proposed a method using a CNN for mixed Gaussian-impulse noise reduction (Islam et al., 2018), in which the corrupted image is preprocessed by rank-order filtering and the filtered image is fed to a four-stage CNN architecture.

2. EXPERIMENTS

2.1 Datasets

Dataset 1: The commonly used color images Fisher, Lena, Parrots, Butter & Flower, Couple and Peppers, each of size 512×512 and shown in Figure 4, are used as dataset 1.


Figure 4. Considered color images (a) Fisher (b) Lena (c) Parrots (d) Butter & Flower (e) Couple and (f) Peppers

Dataset 2: The six standard color images Lena, Peppers, Baboon, Goldhill, Tower and Barbara, each of resolution 512×512 and shown in Figure 5, are used as dataset 2.


Figure 5. Considered color images (a) Lena (b) Peppers (c) Baboon (d) Goldhill (e) Tower and (f) Barbara

Dataset 3: A set of standard 8-bit color images: Peppers, Lena, Boat, House and Barbara shown in Figure 6 are used as dataset 3.


Figure 6. Considered color images (a) Peppers (b) Lena (c) Boat (d) House and (e) Barbara

2.2 Comparison Methods

The existing filtering methods VMF (Astola et al., 1990), FAPGF (Malinski & Smolka, 2016), TSQSVF (Jin et al., 2016), MIVMF (Hung & Chang, 2017), LRDQSF (Zhu et al., 2018), AWQDF (Jin et al., 2019), MSVMAF (Roy & Laskar, 2016), NAFSM (Toh & Isa, 2009), DAMF (Erkan et al., 2018), BPDF (Erkan & Gokrem, 2018), SAWMF (Jin et al., 2008), MSMF (Wang et al., 2010), SWVMF (Xu et al., 2014), FDF (Wang et al., 2015), CAVMFWMF (Roy et al., 2017), ARmF (Enginoglu et al., 2019), MWMF (Biswas, 2020), MDFMF (Ashpreet & M. B., 2020) and ATDWMF (Biswas, 2022) are used to build the framework for this comparative review of color image de-noising. Many research papers have been studied to understand the working of the aforementioned methods. The parameter settings put forward in the reference papers are used for all the compared methods. All simulations are implemented in MATLAB R2013a on a system with an Intel Core i7 3.2 GHz processor and 8 GB of RAM.
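Several of the compared filters build on the vector median filter (VMF) idea, which treats each colour pixel as a vector and outputs, at every window position, the window vector with the smallest aggregate distance to all the other vectors. The sketch below is a minimal Python illustration of that idea, assuming an 8-bit RGB NumPy array; it is not the implementation used to produce the results reported here.

import numpy as np

def vector_median_filter(image, window=3):
    """Vector median filter (VMF): within each window the output pixel is
    the colour vector whose summed Euclidean distance to all the other
    vectors of the window is smallest, so no new colours are created."""
    pad = window // 2
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)),
                    mode="reflect").astype(np.float64)
    out = np.empty_like(image)
    rows, cols = image.shape[:2]
    for i in range(rows):
        for j in range(cols):
            block = padded[i:i + window, j:j + window].reshape(-1, image.shape[2])
            # pairwise distances between every pair of colour vectors in the window
            dists = np.linalg.norm(block[:, None, :] - block[None, :, :], axis=-1)
            out[i, j] = block[dists.sum(axis=1).argmin()]
    return out

Because the output is always one of the original window vectors, this kind of filtering avoids the colour artefacts that purely channel-wise filtering can introduce, which is why vector-based designs recur throughout the methods compared in this chapter.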


2.3 Performance Metrics

It is desirable to know the efficacy of any denoising method in both a qualitative and a quantitative manner. Performance evaluation in terms of visual observation of the denoised images is termed qualitative evaluation, while performance evaluation using metrics such as the “Peak Signal-to-Noise Ratio (PSNR)”, “Structural Similarity Index Measure (SSIM)” and “Normalized Mean Square Error (NMSE)” is termed quantitative evaluation. The quality metrics used for evaluating the considered methods quantitatively are discussed below.

2.3.1 Peak Signal-to-Noise Ratio (PSNR)

PSNR is the most popular parameter for evaluating the efficacy of any denoising method quantitatively (Wang et al., 2004). A higher PSNR value between the denoised and the original image indicates better performance of the denoising method. Mathematically, it is defined as:

\mathrm{PSNR\ (dB)} = 10 \log_{10}\!\left(\frac{(255)^{2}}{\mathrm{MSE}}\right) \qquad (1.1)

where MSE is the Mean Square Error, defined as

\mathrm{MSE} = \frac{1}{m \times n}\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl(o(i,j) - x(i,j)\bigr)^{2},

where m×n represents the image dimensions, o(i,j) is an original image pixel and x(i,j) is the corresponding pixel of the image being evaluated (the corrupted or denoised image).
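As a quick reference, Eq. (1.1) can be evaluated directly with NumPy for 8-bit images; the sketch below is illustrative and is not the implementation behind the reported tables.

import numpy as np

def psnr(original, processed):
    """PSNR in dB between two 8-bit images, following Eq. (1.1)."""
    o = original.astype(np.float64)
    x = processed.astype(np.float64)
    mse = np.mean((o - x) ** 2)       # MSE averaged over all pixels (and channels)
    if mse == 0:
        return float("inf")           # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)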

2.3.2 Structural Similarity Index Measure (SSIM)

SSIM is a performance evaluation parameter for measuring the similarity between two images (Wang et al., 2004), and its value varies from 0 to 1. If the value of SSIM is 1, both images are exactly the same in all respects. It is a perception-based model in which image deterioration is considered as a change in structural information. Mathematically, SSIM is calculated as:

\mathrm{SSIM} = \frac{(2\bar{x}\bar{y} + C_1)(2\sigma_{xy} + C_2)}{(\bar{x}^{2} + \bar{y}^{2} + C_1)(\sigma_x^{2} + \sigma_y^{2} + C_2)} \qquad (1.2)

where \bar{x} is the average of x, \bar{y} is the average of y, \sigma_x^{2} is the variance of x, \sigma_y^{2} is the variance of y, \sigma_{xy} is the covariance of x and y, and C_1 and C_2 are variables that stabilize the division with a weak denominator, chosen as 0.01 and 0.03 respectively.
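The expression in Eq. (1.2) can be evaluated from global image statistics as sketched below; this is a simplified illustration using the constants quoted above, whereas practical SSIM implementations evaluate the same expression over local sliding windows and average the resulting map.

import numpy as np

def ssim_global(x, y, c1=0.01, c2=0.03):
    """Single-window SSIM computed from global statistics, as written in
    Eq. (1.2)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = np.mean((x - mu_x) * (y - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))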

2.3.3 Normalized Mean Square Error (NMSE)

NMSE is the metric that measures the normalized average error of the denoised image in comparison to the original (Wang et al., 2010). Essentially, it calculates the mean squared error between the predicted values and the original intensities of the associated pixels after normalising them into the range [0, 1]. Mathematically, NMSE is calculated as:

\mathrm{NMSE} = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl(o(i,j) - y(i,j)\bigr)^{2}}{\sum_{i=1}^{m}\sum_{j=1}^{n} o(i,j)^{2}} \qquad (1.3)

Here, m×n represents the image dimensions, o(i,j) represents an original image pixel and y(i,j) represents the corresponding denoised image pixel.
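Eq. (1.3) likewise has a direct NumPy form; the function name nmse below is illustrative.

import numpy as np

def nmse(original, denoised):
    """Normalized mean square error, Eq. (1.3): the squared-error energy
    divided by the energy of the original image (lower is better)."""
    o = original.astype(np.float64)
    y = denoised.astype(np.float64)
    return np.sum((o - y) ** 2) / np.sum(o ** 2)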


2.4 Experimental Results

Table 1. Results for PSNR of considered de-noising methods for color images corrupted by different impulse noise levels

Image             Noise level (%)   VMF     FAPGF   TSQSVF   MIVMF   LRDQSF   AWQDF   MWMF
Fisher            5                 34.55   34.92   37.23    35.38   36.92    37.94   39.55
Fisher            10                31.73   32.45   34.29    32.13   34.76    35.65   37.36
Fisher            15                28.62   30.92   32.56    29.62   32.75    33.35   35.90
Fisher            20                25.65   29.54   30.37    27.07   30.98    31.76   34.66
Fisher            25                22.75   28.29   28.36    23.94   28.77    29.50   33.82
Fisher            30                19.62   25.34   25.45    20.85   25.86    26.84   32.95
Lena              5                 32.38   34.45   36.08    32.26   36.46    37.75   40.82
Lena              10                30.81   32.02   33.30    31.17   33.58    34.61   37.69
Lena              15                28.15   30.23   31.07    29.36   31.40    32.25   35.92
Lena              20                25.19   28.92   29.28    26.72   29.49    30.21   34.64
Lena              25                22.57   27.78   27.57    23.95   27.76    28.56   33.51
Lena              30                20.17   26.52   25.56    21.24   25.94    26.62   32.68
Butter & Flower   5                 30.54   30.64   31.43    30.24   31.45    32.57   31.33
Butter & Flower   10                28.45   28.55   30.71    29.14   30.77    31.59   30.65
Butter & Flower   15                24.42   27.56   27.25    27.91   30.15    30.21   29.96
Butter & Flower   20                20.87   26.27   26.84    23.00   27.25    28.08   29.34
Butter & Flower   25                17.92   25.48   25.07    19.77   25.53    26.44   28.92
Butter & Flower   30                15.68   24.68   24.15    17.56   24.58    25.62   27.90
Couple            5                 29.54   31.25   32.54    30.58   32.45    33.56   34.89
Couple            10                27.75   28.11   30.57    28.06   30.80    31.66   32.71
Couple            15                25.34   26.91   29.75    26.78   29.56    30.54   31.24
Couple            20                22.15   25.25   25.93    23.67   26.16    26.91   30.02
Couple            25                19.66   24.09   23.85    21.06   24.05    24.70   29.11
Couple            30                17.58   22.05   21.37    19.25   22.58    23.04   28.31
Peppers           5                 30.45   31.04   32.23    30.22   30.87    30.92   30.99
Peppers           10                29.77   30.58   30.31    29.88   29.97    30.14   30.43
Peppers           15                27.07   29.18   29.27    28.56   28.85    29.94   29.89
Peppers           20                23.83   28.79   28.89    27.24   27.74    28.57   29.41
Peppers           25                21.03   27.56   26.66    25.31   25.64    28.12   29.01
Peppers           30                20.68   25.45   24.67    23.54   23.76    25.88   28.55
Parrots           5                 28.80   29.38   31.14    31.75   31.86    31.97   31.82
Parrots           10                27.51   28.02   31.01    31.34   32.24    32.34   31.24
Parrots           15                25.80   26.58   29.75    29.35   30.48    30.58   30.70
Parrots           20                25.49   29.22   27.93    27.21   29.13    30.05   30.16
Parrots           25                21.20   24.61   25.78    23.87   29.56    29.68   29.74
Parrots           30                21.05   22.75   25.18    22.56   27.32    28.23   29.18

a) Results on Dataset 1: The objective analysis is performed using the parameters Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) for the considered color images at impulse noise levels ranging from 5% to 30%, as shown in Table 1 and Table 2 respectively. Subjective analysis is performed by inspecting the visual quality of the de-noised images. Here, the visual results of only two images, namely Parrots (Figures 7(b)-7(h)) and Lena (Figures 8(b)-8(h)), are shown.


Table 2. Results for SSIM of considered de-noising methods for color images corrupted by different impulse noise levels

Image             Noise level (%)   VMF      FAPGF    TSQSVF   MIVMF    LRDQSF   AWQDF    MWMF
Fisher            5                 0.8446   0.9524   0.9674   0.8894   0.9732   0.9765   0.9759
Fisher            10                0.8056   0.9154   0.9257   0.8469   0.9668   0.9687   0.9673
Fisher            15                0.7732   0.8855   0.8965   0.8154   0.9565   0.9624   0.9681
Fisher            20                0.7149   0.8335   0.8481   0.7559   0.8726   0.8885   0.9575
Fisher            25                0.6984   0.8125   0.8254   0.7249   0.8623   0.8765   0.9315
Fisher            30                0.5824   0.7132   0.7456   0.6958   0.8542   0.8656   0.9371
Lena              5                 0.9103   0.9671   0.9651   0.9143   0.9721   0.9765   0.9978
Lena              10                0.8770   0.9340   0.9328   0.8971   0.9440   0.9509   0.9888
Lena              15                0.7828   0.8986   0.8969   0.8531   0.9138   0.9221   0.9722
Lena              20                0.6403   0.8618   0.8518   0.7518   0.8735   0.8855   0.9735
Lena              25                0.4885   0.8140   0.7960   0.6069   0.8235   0.8397   0.9662
Lena              30                0.3519   0.7521   0.7195   0.4390   0.7544   0.7744   0.9327
Butter & Flower   5                 0.7809   0.9256   0.9235   0.8156   0.9551   0.9768   0.9905
Butter & Flower   10                0.7265   0.9157   0.9056   0.7968   0.9435   0.9489   0.9515
Butter & Flower   15                0.6589   0.8964   0.8889   0.7256   0.9264   0.9312   0.9412
Butter & Flower   20                0.5218   0.8362   0.8349   0.6580   0.8612   0.8890   0.9245
Butter & Flower   25                0.5046   0.8167   0.8123   0.6324   0.8465   0.8765   0.8918
Butter & Flower   30                0.4725   0.8054   0.8021   0.6058   0.8367   0.8561   0.8648
Couple            5                 0.8574   0.9756   0.9854   0.8897   0.9913   0.9934   0.9946
Couple            10                0.8021   0.9432   0.9533   0.8554   0.9638   0.9794   0.9845
Couple            15                0.7561   0.8990   0.9034   0.8163   0.9267   0.9527   0.9674
Couple            20                0.6431   0.8373   0.8478   0.7414   0.8623   0.8767   0.8989
Couple            25                0.5932   0.8023   0.8126   0.6921   0.8367   0.8474   0.9386
Couple            30                0.5512   0.7832   0.7957   0.6532   0.8138   0.8225   0.9396
Peppers           5                 0.7623   0.8671   0.8651   0.8143   0.8721   0.9565   0.9749
Peppers           10                0.6780   0.8340   0.8328   0.7971   0.8440   0.9409   0.9694
Peppers           15                0.5828   0.7986   0.7969   0.7531   0.8138   0.9221   0.9353
Peppers           20                0.5403   0.7618   0.7518   0.7318   0.7735   0.8855   0.9173
Peppers           25                0.4885   0.7140   0.6960   0.7069   0.7235   0.8397   0.9348
Peppers           30                0.3519   0.6521   0.6195   0.6390   0.6544   0.7744   0.8945
Parrots           5                 0.8674   0.9856   0.9754   0.8897   0.9913   0.9934   0.9984
Parrots           10                0.8221   0.9532   0.9443   0.8554   0.9638   0.9794   0.9944
Parrots           15                0.7561   0.8890   0.8834   0.8163   0.9267   0.9527   0.9892
Parrots           20                0.7141   0.8754   0.8663   0.7884   0.8903   0.9308   0.9859
Parrots           25                0.6532   0.8223   0.8126   0.6921   0.8367   0.8474   0.9814
Parrots           30                0.5812   0.7832   0.7757   0.6532   0.8138   0.8225   0.9757


b) Results on Dataset 2: The performance of the methods is evaluated in terms of objective as well as subjective analysis. The considered images are artificially corrupted by SPN at low density, i.e., from 10% to 30%, and at high density, i.e., from 60% to 80%. The performance metrics “peak signal-to-noise ratio (PSNR)”, “structural similarity index measure (SSIM)” and “normalized mean square error (NMSE)” for the objective analysis at low-density SPN (10% to 30%) and high-density SPN (60% to 80%) are shown in Table 3, Table 4 and Table 5 respectively. The subjective analysis is performed to assess the performance from the point of view of human visual perception using the images Lena (Figures 9(b)-9(f)) and Peppers (Figures 10(b)-10(f)).

c) Results on Dataset 3: In the simulations, 10%, 30% and 50% levels of impulse noise are used for the considered color images with a window size of 5×5. The objective analysis is performed using the parameters “Peak Signal-to-Noise Ratio (PSNR)”, “Structural Similarity Index Measure (SSIM)”, “Normalized Mean Square Error (NMSE)” and Computation Time. Table 6 shows the performance of the considered denoising methods in terms of PSNR on the considered color images. Table 7 shows the SSIM results of the denoising methods on the considered color images at noise levels of 10%, 30% and 50%. Table 8 shows the NMSE results of the considered methods on the considered color images. Table 9 shows the Computation Time, in seconds, taken by the denoising methods on the considered color images. Subjective analysis is conducted to assess the performance of the methods from the aspect of human visual perception. For this analysis, visual outputs of the images Peppers (Figures 11(c)-(j)) and Lena (Figures 12(c)-(j)) are used, with their corrupted versions at 10%, 30% and 50% noise.


Figure 7. (a) Parrots image (b) noisy image corrupted by 10% noise level, de-noised image by (c) VMF (Astola et al., 1990) (d) FAPGF (Malinski & Smolka, 2016) (e) TSQSVF (Jin et al., 2016) (f) MIVMF (Hung & Chang, 2017) (g) LRDQSF (Zhu et al., 2018) (h) AWQDF (Jin et al., 2019) and (i) MWMF (Biswas, 2020)


Figure 8. (a) Lena image (b) noisy image corrupted by 15% noise level, de-noised image by (c) VMF (Astola et al., 1990) (d) FAPGF (Malinski & Smolka, 2016) (e) TSQSVF (Jin et al., 2016) (f) MIVMF (Hung & Chang, 2017) (g) LRDQSF (Zhu et al., 2018) (h) AWQDF (Jin et al., 2019) and (i) MWMF (Biswas, 2020)


Table 3. Results for PSNR of considered de-noising methods for color images corrupted by different impulse noise levels

Image      Noise density (%)   MSVMAF   NAFSM   DAMF    BPDF    MDFMF
Lena       10                  42.31    37.94   40.83   38.02   43.04
Lena       20                  40.63    34.79   37.38   34.52   39.43
Lena       30                  37.13    33.03   35.06   32.27   36.80
Lena       60                  25.52    29.28   30.59   26.20   35.88
Lena       70                  22.59    28.13   29.26   23.77   35.10
Lena       80                  20.02    26.75   27.76   19.85   34.27
Peppers    10                  41.98    30.39   31.00   31.19   42.72
Peppers    20                  38.95    29.20   29.79   29.97   38.56
Peppers    30                  36.39    28.22   29.00   28.75   36.01
Peppers    60                  24.22    25.10   25.60   23.78   36.74
Peppers    70                  21.67    23.95   24.59   20.96   35.94
Peppers    80                  19.11    22.44   23.10   16.26   34.98
Baboon     10                  41.65    29.66   31.22   30.28   39.60
Baboon     20                  39.24    26.74   28.17   26.93   36.32
Baboon     30                  35.89    24.91   26.20   24.83   34.30
Baboon     60                  24.02    21.72   22.26   20.24   31.99
Baboon     70                  20.84    20.86   21.13   18.55   31.26
Baboon     80                  19.17    19.89   19.99   15.29   30.62
Goldhill   10                  42.56    31.74   40.09   37.56   42.42
Goldhill   20                  40.12    30.83   38.45   34.57   38.30
Goldhill   30                  36.93    30.14   36.88   32.36   35.55
Goldhill   60                  24.78    28.12   32.51   25.75   37.30
Goldhill   70                  22.12    27.44   30.92   22.99   36.38
Goldhill   80                  20.27    26.24   29.04   19.21   35.42
Tower      10                  41.14    34.34   37.71   36.11   42.02
Tower      20                  39.09    32.64   36.10   33.53   37.90
Tower      30                  35.12    31.90   34.64   31.16   35.34
Tower      60                  27.33    28.89   30.22   24.69   35.89
Tower      70                  25.82    27.81   28.85   21.89   35.07
Tower      80                  25.34    26.26   26.99   18.03   34.26
Barbara    10                  33.45    41.27   47.33   42.17   43.70
Barbara    20                  30.25    38.09   43.04   37.84   39.15
Barbara    30                  27.32    36.12   40.20   34.81   36.29
Barbara    60                  18.70    31.96   34.51   26.47   37.36
Barbara    70                  15.41    30.58   32.85   23.14   36.35
Barbara    80                  13.68    28.84   30.83   17.41   35.36


Table 4. Results for SSIM of considered de-noising methods for color images corrupted by different impulse noise levels

Image      Noise density (%)   MSVMAF   NAFSM    DAMF    BPDF    MDFMF
Lena       10                  0.97     0.98     0.99    0.98    0.99
Lena       20                  0.96     0.97     0.98    0.95    0.93
Lena       30                  0.96     0.97     0.97    0.90    0.82
Lena       60                  0.69     0.89     0.89    0.63    0.93
Lena       70                  0.60     0.81     0.81    0.58    0.91
Lena       80                  0.52     0.79     0.73    0.32    0.82
Peppers    10                  0.97     0.95     0.98    0.96    0.97
Peppers    20                  0.96     0.93     0.97    0.93    0.92
Peppers    30                  0.96     0.87     0.96    0.90    0.87
Peppers    60                  0.68     0.75     0.87    0.69    0.77
Peppers    70                  0.59     0.70     0.77    0.52    0.73
Peppers    80                  0.51     0.69     0.71    0.50    0.67
Baboon     10                  0.97     0.98     0.98    0.97    0.97
Baboon     20                  0.96     0.95     0.95    0.92    0.92
Baboon     30                  0.95     0.89     0.93    0.87    0.88
Baboon     60                  0.68     0.71     0.80    0.54    0.75
Baboon     70                  0.56     0.67     0.78    0.49    0.72
Baboon     80                  0.51     0.55     0.63    0.42    0.61
Goldhill   10                  0.98     0.98     0.99    0.99    0.99
Goldhill   20                  0.97     0.94     0.98    0.98    0.95
Goldhill   30                  0.97     0.93     0.97    0.94    0.94
Goldhill   60                  0.82     0.88     0.95    0.83    0.88
Goldhill   70                  0.79     0.82     0.90    0.65    0.86
Goldhill   80                  0.72     0.78     0.86    0.51    0.84
Tower      10                  0.98     0.98     0.99    0.98    0.99
Tower      20                  0.96     0.95     0.98    0.95    0.98
Tower      30                  0.96     0.95     0.97    0.92    0.96
Tower      60                  0.69     0.84     0.93    0.79    0.86
Tower      70                  0.59     0.83     0.89    0.72    0.84
Tower      80                  0.52     0.82     0.88    0.62    0.77
Barbara    10                  0.998    0.996    0.999   0.998   0.997
Barbara    20                  0.997    0.9939   0.998   0.994   0.995
Barbara    30                  0.993    0.9936   0.996   0.992   0.994
Barbara    60                  0.62     0.99     0.982   0.958   0.99
Barbara    70                  0.53     0.98     0.983   0.808   0.983
Barbara    80                  0.44     0.96     0.973   0.444   0.980


Table 5. Results for NMSE of considered de-noising methods for color images corrupted by different impulse noise levels

Image      Noise density (%)   MSVMAF   NAFSM    DAMF     BPDF     MDFMF
Lena       10                  0.0011   0.0005   0.0002   0.0005   0.0008
Lena       20                  0.0016   0.0011   0.0006   0.0012   0.0029
Lena       30                  0.0036   0.0017   0.0010   0.0020   0.0090
Lena       60                  0.0597   0.0039   0.0029   0.0082   0.0039
Lena       70                  0.1532   0.0051   0.0040   0.0149   0.0050
Lena       80                  0.2752   0.0072   0.0057   0.0362   0.0067
Peppers    10                  0.0014   0.0036   0.0031   0.0030   0.0014
Peppers    20                  0.0020   0.0047   0.0040   0.0040   0.0045
Peppers    30                  0.0041   0.0058   0.0051   0.0053   0.0131
Peppers    60                  0.0503   0.0118   0.0104   0.0161   0.0088
Peppers    70                  0.1257   0.0161   0.0141   0.0301   0.0102
Peppers    80                  0.2262   0.0228   0.0188   0.0902   0.0124
Baboon     10                  0.0022   0.0037   0.0025   0.0039   0.0039
Baboon     20                  0.0031   0.0071   0.0052   0.0068   0.0093
Baboon     30                  0.0062   0.0109   0.0081   0.0111   0.0200
Baboon     60                  0.0715   0.0228   0.0201   0.0320   0.0231
Baboon     70                  0.1768   0.0277   0.0261   0.0468   0.0279
Baboon     80                  0.2793   0.0342   0.0341   0.0964   0.0340
Goldhill   10                  0.0018   0.0033   0.0004   0.0009   0.0019
Goldhill   20                  0.0020   0.0040   0.0007   0.0017   0.0057
Goldhill   30                  0.0032   0.0048   0.0010   0.0029   0.0163
Goldhill   60                  0.0968   0.0077   0.0029   0.0133   0.0080
Goldhill   70                  0.1098   0.0091   0.0040   0.0246   0.0092
Goldhill   80                  0.2076   0.0117   0.0063   0.0591   0.0110
Tower      10                  0.0023   0.0020   0.0004   0.0007   0.0027
Tower      20                  0.0032   0.0028   0.0007   0.0015   0.0074
Tower      30                  0.0078   0.0038   0.0011   0.0027   0.0188
Tower      60                  0.1129   0.0069   0.0034   0.0145   0.0127
Tower      70                  0.2176   0.0091   0.0050   0.0287   0.0146
Tower      80                  0.3185   0.0124   0.0075   0.0784   0.0180
Barbara    10                  0.0005   0.0003   0.0001   0.0003   0.0006
Barbara    20                  0.0009   0.0007   0.0002   0.0007   0.0028
Barbara    30                  0.0013   0.0010   0.0004   0.0014   0.0106
Barbara    60                  0.0088   0.0027   0.0015   0.0097   0.0027
Barbara    70                  0.0075   0.0037   0.0022   0.0208   0.0036
Barbara    80                  0.0062   0.0056   0.0035   0.0778   0.0050


Figure 9. Lena image (a) 10% noise-dense image; de-noised image by (b) MSVMAF (Roy & Laskar, 2016) (c) NAFSM (Toh & Isa, 2009) (d) DAMF (Erkan et al., 2018) (e) BPDF (Erkan & Gokrem, 2018) and (f) MDFMF (Ashpreet & M. B., 2020)


Table 6. Results for PSNR of considered de-noising methods for color images corrupted by different impulse noise levels

Image     Noise level (%)   SAWMF   MSMF    VMF     SWVMF   FDF     CAVMFWMF   BPDF    ATDWMF
Peppers   10                38.08   40.26   40.03   39.83   41.40   48.65      40.47   53.70
Peppers   30                32.17   38.35   38.02   38.74   36.39   34.72      30.41   49.36
Peppers   50                26.86   33.82   33.40   33.46   32.47   27.29      25.24   42.37
Lena      10                40.33   43.15   43.61   43.40   44.17   50.53      43.03   55.45
Lena      30                34.06   41.40   41.61   41.75   39.46   37.72      32.20   51.36
Lena      50                28.68   36.03   35.97   36.07   35.24   29.19      26.99   45.12
Boat      10                37.50   40.45   41.31   41.25   42.17   46.65      41.69   49.08
Boat      30                32.74   38.92   39.52   39.45   38.46   36.56      31.31   46.46
Boat      50                28.72   34.96   35.17   35.06   34.82   28.72      26.34   41.87
House     10                38.22   41.08   41.85   41.72   42.30   57.30      43.14   66.38
House     30                33.01   39.22   39.81   39.76   38.99   38.46      32.50   59.96
House     50                28.64   35.56   35.45   35.86   35.76   29.63      27.44   50.86
Barbara   10                37.56   40.13   40.70   40.40   42.09   49.85      42.74   50.13
Barbara   30                32.91   39.18   38.67   39.02   38.59   36.29      31.71   46.95
Barbara   50                28.32   35.44   35.16   35.20   35.05   28.95      26.90   42.27

Table 7. Results for SSIM of considered de-noising methods for color images corrupted by different impulse noise levels

Image     Noise level (%)   SAWMF    MSMF     VMF      SWVMF    FDF      CAVMFWMF   BPDF     ATDWMF
Peppers   10                0.5040   0.6103   0.6090   0.6045   0.5786   0.8307     0.7746   0.8563
Peppers   30                0.3387   0.5196   0.5087   0.4794   0.4295   0.5325     0.5381   0.6257
Peppers   50                0.2329   0.4702   0.3965   0.3448   0.2736   0.3841     0.3665   0.4115
Lena      10                0.3206   0.4149   0.4269   0.4949   0.5618   0.8744     0.8548   0.8815
Lena      30                0.2909   0.3475   0.3547   0.4274   0.3747   0.5749     0.4650   0.6194
Lena      50                0.1554   0.3278   0.3354   0.3739   0.2724   0.3822     0.2303   0.4005
Boat      10                0.2688   0.4014   0.4734   0.4597   0.5309   0.9382     0.8132   0.9455
Boat      30                0.1974   0.3241   0.3368   0.3296   0.3435   0.6498     0.5247   0.6524
Boat      50                0.1862   0.2851   0.2984   0.2704   0.2637   0.3571     0.3857   0.4020
House     10                0.3490   0.5797   0.5812   0.5483   0.5598   0.8356     0.6211   0.8908
House     30                0.2360   0.5040   0.5463   0.5251   0.4847   0.4650     0.3266   0.6471
House     50                0.1014   0.3004   0.2987   0.3104   0.2903   0.1444     0.2114   0.4882
Barbara   10                0.4411   0.5899   0.6262   0.6163   0.6174   0.8706     0.2306   0.8843
Barbara   30                0.2790   0.5441   0.5734   0.5819   0.3618   0.3592     0.1368   0.6118
Barbara   50                0.1107   0.3247   0.3178   0.3589   0.1956   0.1305     0.0534   0.4379


Table 8. Results for NMSE of considered de-noising methods for color images corrupted by different impulse noise levels

Image     Noise level (%)   SAWMF    MSMF     VMF      SWVMF    FDF      CAVMFWMF   BPDF     ATDWMF
Peppers   10                0.0954   0.0725   0.0687   0.0703   0.0621   0.0206     0.0731   0.1015
Peppers   30                0.1712   0.0885   0.0852   0.0847   0.0970   0.1058     0.1999   0.1967
Peppers   50                0.2469   0.1505   0.1419   0.1442   0.1505   0.2732     0.3354   0.2527
Lena      10                0.0568   0.0441   0.0424   0.0432   0.0393   0.0126     0.0449   0.0635
Lena      30                0.1123   0.0539   0.0516   0.0524   0.0631   0.0537     0.1325   0.1422
Lena      50                0.1893   0.0918   0.0895   0.0878   0.0962   0.1377     0.2230   0.2015
Boat      10                0.0648   0.0627   0.0580   0.0586   0.0526   0.0193     0.0557   0.0707
Boat      30                0.1331   0.0710   0.0687   0.0720   0.0761   0.0584     0.1572   0.1493
Boat      50                0.2124   0.1093   0.1092   0.1046   0.1074   0.1405     0.2585   0.1941
House     10                0.0649   0.0627   0.0585   0.0592   0.0558   0.0040     0.0384   0.0403
House     30                0.1384   0.0746   0.0713   0.0709   0.0773   0.0349     0.1113   0.1124
House     50                0.1977   0.1127   0.1088   0.1104   0.1092   0.0968     0.1846   0.1756
Barbara   10                0.1050   0.0774   0.0740   0.0762   0.0636   0.0165     0.0618   0.0830
Barbara   30                0.1686   0.0882   0.0873   0.0875   0.0900   0.0706     0.1861   0.1668
Barbara   50                0.2548   0.1296   0.1330   0.1326   0.1294   0.1685     0.3010   0.2225


Figure 10. Peppers image (a) 60% noise-dense image; de-noised image by (b) MSVMAF (Roy & Laskar, 2016) (c) NAFSM (Toh & Isa, 2009) (d) DAMF (Erkan et al., 2018) (e) BPDF (Erkan & Gokrem, 2018) and (f) MDFMF (Ashpreet & M. B., 2020)


Table 9. Results for Computation Time (seconds) of considered de-noising methods for color images corrupted by different impulse noise levels

Image     Noise level (%)   SAWMF   MSMF     VMF     SWVMF   FDF     CAVMFWMF   BPDF    ATDWMF
Peppers   10                14.96   97.86    19.98   36.41   86.01   8.78       1.25    4.96
Peppers   30                16.52   105.05   20.85   63.35   89.56   8.28       2.49    4.90
Peppers   50                8.86    118.05   21.51   62.76   87.72   8.22       3.84    4.80
Lena      10                10.40   102.25   21.72   31.30   87.71   8.64       1.12    4.95
Lena      30                13.90   118.12   23.03   82.26   91.61   8.32       2.22    4.80
Lena      50                13.99   127.27   21.09   76.58   87.79   8.23       3.60    4.92
Boat      10                13.89   79.54    22.05   38.30   80.63   9.13       1.44    4.96
Boat      30                15.91   86.89    22.13   79.43   87.31   9.09       2.77    4.98
Boat      50                13.73   89.83    21.17   68.64   87.15   9.45       4.33    4.96
House     10                16.17   83.11    22.64   69.33   84.58   31.35      4.35    5.13
House     30                22.06   85.69    23.28   84.05   84.30   30.71      8.55    5.66
House     50                19.46   89.45    22.31   90.05   87.89   30.67      13.36   5.79
Barbara   10                18.60   77.19    21.93   46.26   79.15   8.99       1.18    5.01
Barbara   30                13.20   85.50    23.39   70.83   85.02   8.66       2.27    4.99
Barbara   50                12.18   92.18    24.05   84.91   86.56   8.35       3.53    4.95


Figure 11. (a) Peppers Image (b) Noisy Image with 10% noise, de-noised image by (c) SAWMF (Jin et al., 2008) (d) MSMF (Wang et al., 2010) (e) VMF (Astola et al., 1990) (f) SWVMF (Xu et al., 2014) (g) CAVMFWMF (Roy et al., 2017) (h) FDF (Wang et al., 2015) (i) BPDF (Erkan & Gokrem, 2018) and (j) ATDWMF (Biswas, 2022)


Figure 12. (a) Lena Image (b) Noisy Image with 30% noise, de-noised image by (c) SAWMF (Jin et al., 2008) (d) MSMF (Wang et al., 2010) (e) VMF (Astola et al., 1990) (f) SWVMF (Xu et al., 2014) (g) CAVMFWMF (Roy et al., 2017) (h) FDF (Wang et al., 2015) (i) BPDF (Erkan & Gokrem, 2018) and (j) ATDWMF (Biswas, 2022)

3. CONCLUSION AND FUTURE WORK

One of the important challenges in color image de-noising is the complexity of the filtering method design, as color images consist of three channels to process for noise reduction. Another challenge is to obtain better performance from a newly developed filtering method than from the existing methods. By observing the performance of the various methods, it can be concluded that the aforementioned challenges have been resolved to some extent within the framework of the color image denoising methods considered here. The modified weighted median filter (MWMF) (Biswas, 2020) performs well under low noise levels (up to 30%), while the modified directional fuzzy based median filter (MDFMF) (Ashpreet & M. B., 2020) performs well under both low noise levels (up to 30%) and high noise levels (up to 80%) of SPN for the considered color images. The adaptive threshold and directional weighted median filter (ATDWMF) (Biswas, 2022) for the reduction of random-valued impulse noise proves to be considerably better under low and medium random noise levels up to 50% than many reported state-of-the-art methods.

MWMF produces visually improved denoised images in comparison to the considered existing methods, namely VMF (Astola et al., 1990), FAPGF (Malinski & Smolka, 2016), TSQSVF (Jin et al., 2016), MIVMF (Hung & Chang, 2017), LRDQSF (Zhu et al., 2018) and AWQDF (Jin et al., 2019), for the Parrots, Lena and Peppers color images. It can be concluded that MWMF successfully restores the damaged images with increased edge information and image detail; furthermore, compared with the other existing methods, the bright lustre is kept in a superior way. It may be noted that some noise is visible in the VMF and MIVMF outputs, but it is not as prominent in the MWMF output at the 25% noise level. MDFMF produces denoised images whose detail is closer to the input image in comparison to the considered existing methods MSVMAF (Roy & Laskar, 2016), NAFSM (Toh & Isa, 2009), DAMF (Erkan et al., 2018) and BPDF (Erkan & Gokrem, 2018) for the Lena, Peppers and Barbara color images. MSVMAF produces blurred denoised images while BPDF produces heavily distorted denoised images, whereas MDFMF produces much more visually appealing denoised images as the noise level increases. ATDWMF for the reduction of RVIN gives better denoised images relative to the considered existing methods SAWMF (Jin et al., 2008), MSMF (Wang et al., 2010), VMF (Astola et al., 1990), SWVMF (Xu et al., 2014), CAVMFWMF (Roy et al., 2017), FDF (Wang et al., 2015) and BPDF (Erkan & Gokrem, 2018) for the Peppers, Lena and Boat images, but results in blurring of the images as the noise level increases. Overall, MWMF, MDFMF and ATDWMF efficiently reduce impulse noise in color images and exhibit better performance than many other existing denoising methods in terms of the performance parameters PSNR, SSIM, NMSE and computation time.

This study is not restricted to impulse noise only; it can be extended to other categories of noise, such as mixture noise, Rician noise and speckle noise, where denoising may become more complex. The performance evaluation of denoising methods presented here can also be carried out on other types of image datasets, such as medical, satellite and underwater images. The filtering process can further be extended so as to reduce the use of methods that cause blur in the resulting images. Additionally, combining neural networks or other recent methods such as deep learning with the discussed methods to obtain better performance is an interesting problem to be explored.

REFERENCES Ahmed, F., & Das, S. (2013). Removal of high-density salt-and-pepper noise in images with an iterative adaptive fuzzy filter using alpha-trimmed mean. IEEE Transactions on Fuzzy Systems, 22(5), 1352–1358. doi:10.1109/TFUZZ.2013.2286634 Arce, G. R. (1998). A general weighted median filter structure admitting negative weights. IEEE Transactions on Signal Processing, 46(12), 3195–3205. doi:10.1109/78.735296 Arce, G. R., & Paredes, J. L. (2000). Recursive weighted median filters admitting negative weights and their optimization. IEEE Transactions on Signal Processing, 48(3), 768–779. doi:10.1109/78.824671 Ashok, B. (2021). Diabetes Diagnosis using Ensemble Models in Machine Learning. [TURCOMAT]. Turkish Journal of Computer and Mathematics Education, 12(13), 177–184. Ashpreet, M. B. (2020). Modified Directional and Fuzzy Based Median Filter for Salt-and-Pepper Noise Reduction in Color Image. Solid State Technology, 63(5), 4033–4053. Astola, J., Haavisto, P., & Neuvo, Y. (1990). Vector median filters. Proceedings of the IEEE, 78(4), 678–689. doi:10.1109/5.54807 Astola, J., & Kuosmanen, P. (2020). Fundamentals of nonlinear digital filtering. CRC press. doi:10.1201/9781003067832 Biswas, M. (2020). Impulse Noise Detection and Removal Method Based on Modified Weighted Median. [IJSI]. International Journal of Software Innovation, 8(2), 38–53. doi:10.4018/IJSI.2020040103 Biswas, M. (2022). Adaptive Threshold and Directional Weighted Median FilterBased Impulse Noise Removal Method for Images. [IJSI]. International Journal of Software Innovation, 10(1), 1–18.


Celebi, M. E., & Aslandogan, Y. A. (2008). Robust switching vector median filter for impulsive noise removal. Journal of Electronic Imaging, 17(4), 043006–043006. doi:10.1117/1.2991415 Chang, J. Y., & Liu, P. C. (2015, August). A fuzzy weighted mean aggregation algorithm for color image impulse noise removal. In 2015 IEEE International Conference on Automation Science and Engineering (CASE) (pp. 1268-1273). IEEE. 10.1109/CoASE.2015.7294273 Chen, J., Zhan, Y., & Cao, H. (2019). Adaptive sequentially weighted median filter for image highly corrupted by impulse noise. IEEE Access: Practical Innovations, Open Solutions, 7, 158545–158556. doi:10.1109/ACCESS.2019.2950348 Chen, J., Zhan, Y., & Cao, H. (2020). Iterative deviation filter for fixed-valued impulse noise removal. Multimedia Tools and Applications, 79(33-34), 23695–23710. doi:10.100711042-020-09123-x Chen, T., Ma, K. K., & Chen, L. H. (1999). Tri-state median filter for image denoising. IEEE Transactions on Image Processing, 8(12), 1834–1838. doi:10.1109/83.806630 PMID:18267461 Dai, T., Xu, Z., Liang, H., Gu, K., Tang, Q., Wang, Y., & Xia, S. T. (2017). A generic denoising framework via guided principal component analysis. Journal of Visual Communication and Image Representation, 48, 340–352. doi:10.1016/j. jvcir.2017.05.009 Dong, Y., & Xu, S. (2007). A new directional weighted median filter for removal of random-valued impulse noise. IEEE Signal Processing Letters, 14(3), 193–196. doi:10.1109/LSP.2006.884014 Dubey, V., & Katarya, R. (2021, May). Adaptive histogram equalization based approach for sar image enhancement: A comparative analysis. In 2021 5th International Conference on Intelligent Computing and Control Systems (ICICCS) (pp. 878-883). IEEE. 10.1109/ICICCS51141.2021.9432287 Enginoglu, S., Erkan, U., & Memiş, S. (2019). Pixel similarity-based adaptive Riesz mean filter for salt-and-pepper noise removal. Multimedia Tools and Applications, 78(24), 35401–35418. doi:10.100711042-019-08110-1 Erkan, U., Enginoğlu, S., Thanh, D. N., & Hieu, L. M. (2020). Adaptive frequency median filter for the salt and pepper denoising problem. IET Image Processing, 14(7), 1291–1302. doi:10.1049/iet-ipr.2019.0398


Erkan, U., & Gokrem, L. (2018). A new method based on pixel density in salt and pepper noise removal. Turkish Journal of Electrical Engineering and Computer Sciences, 26(1), 162–171. doi:10.3906/elk-1705-256 Erkan, U., Gokrem, L., & Enginoglu, S. (2018). Different applied median filter in salt and pepper noise. Computers & Electrical Engineering, 70, 789–798. doi:10.1016/j. compeleceng.2018.01.019 Erkan, U., & Kilicman, A. (2016). Two new methods for removing salt-and-pepper noise from digital images. scienceasia, 42(1), 28. Erkan, U., Thanh, D. N., Enginoglu, S., & Memiş, S. (2020, June). Improved adaptive weighted mean filter for salt-and-pepper noise removal. In 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE) (pp. 1-5). IEEE. 10.1109/ICECCE49384.2020.9179351 Gellert, A., & Brad, R. (2016). Context‐based prediction filtering of impulse noise images. IET Image Processing, 10(6), 429–437. doi:10.1049/iet-ipr.2015.0702 Geng, X., Hu, X., & Xiao, J. (2012). Quaternion switching filter for impulse noise reduction in color image. Signal Processing, 92(1), 150–162. doi:10.1016/j. sigpro.2011.06.015 Gonzalez, R. C., & Woods, R. E. (2018). Digital Image Processing. Pearson. Gupta, V., Chaurasia, V., & Shandilya, M. (2015). Random-valued impulse noise removal using adaptive dual threshold median filter. Journal of Visual Communication and Image Representation, 26, 296–304. doi:10.1016/j.jvcir.2014.10.004 Habib, M., Hussain, A., & Choi, T. S. (2015). Adaptive threshold based fuzzy directional filter design using background information. Applied Soft Computing, 29, 471–478. doi:10.1016/j.asoc.2015.01.010 Habib, M., Hussain, A., Rasheed, S., & Ali, M. (2016). Adaptive fuzzy inference system based directional median filter for impulse noise removal. AEÜ. International Journal of Electronics and Communications, 70(5), 689–697. doi:10.1016/j. aeue.2016.02.005 Habib, M., Rasheed, S., Hussain, A., & Ali, M. (2015). Random value impulse noise removal based on most similar neighbors. In 2015 13th International Conference on Frontiers of Information Technology (FIT) (pp. 329-333). IEEE. 10.1109/FIT.2015.64 Hung, C. C., & Chang, E. S. (2017). Moran’s I for impulse noise detection and removal in color images. Journal of Electronic Imaging, 26(2), 023023–023023. doi:10.1117/1.JEI.26.2.023023 111


Hwang, H., & Haddad, R. A. (1995). Adaptive median filters: New algorithms and results. IEEE Transactions on Image Processing, 4(4), 499–502. doi:10.1109/83.370679 PMID:18289998 Islam, M. T., Rahman, S. M., Ahmad, M. O., & Swamy, M. N. S. (2018). Mixed Gaussian-impulse noise reduction from images using convolutional neural network. Signal Processing Image Communication, 68, 26–41. doi:10.1016/j. image.2018.06.016 Jin, L., Liu, H., Xu, X., & Song, E. (2011). Color impulsive noise removal based on quaternion representation and directional vector order-statistics. Signal Processing, 91(5), 1249–1261. doi:10.1016/j.sigpro.2010.12.011 Jin, L., Xiong, C., & Li, D. (2008). Selective adaptive weighted median filter. Optical Engineering (Redondo Beach, Calif.), 47(3), 037001–037001. doi:10.1117/1.2891297 Jin, L., Xiong, C., & Liu, H. (2012). Improved bilateral filter for suppressing mixed noise in color images. Digital Signal Processing, 22(6), 903–912. doi:10.1016/j. dsp.2012.06.012 Jin, L., Zhu, Z., Song, E., & Xu, X. (2019). An effective vector filter for impulse noise reduction based on adaptive quaternion color distance mechanism. Signal Processing, 155, 334–345. doi:10.1016/j.sigpro.2018.10.007 Jin, L., Zhu, Z., Xu, X., & Li, X. (2016). Two-stage quaternion switching vector filter for color impulse noise removal. Signal Processing, 128, 171–185. doi:10.1016/j. sigpro.2016.03.025 Jiranantanagorn, P. (2019, July). High-Density Salt and Pepper Noise Filter using Anisotropic Diffusion. In 2019 16th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON) (pp. 645-648). IEEE. 10.1109/ECTI-CON47248.2019.8955185 Julliand, T., Nozick, V., & Talbot, H. (2016). Image noise and digital image forensics. In Digital-Forensics and Watermarking: 14th International Workshop, IWDW 2015, (pp. 3-17). Springer International Publishing. Kang, C. C., & Wang, W. J. (2009). Fuzzy reasoning-based directional median filter design. Signal Processing, 89(3), 344–351. doi:10.1016/j.sigpro.2008.09.003 Khaw, H. Y., Soon, F. C., Chuah, J. H., & Chow, C. O. (2019). High‐density impulse noise detection and removal using deep convolutional neural network with particle swarm optimisation. IET Image Processing, 13(2), 365–374. doi:10.1049/ iet-ipr.2018.5776 112


Ko, S. J., & Lee, Y. H. (1991). Center weighted median filters and their applications to image enhancement. IEEE Transactions on Circuits and Systems, 38(9), 984–993. doi:10.1109/31.83870 Kumar, N., Shukla, H., & Tripathi, R. (2017). Image Restoration in Noisy free images using fuzzy based median filtering and adaptive Particle Swarm OptimizationRichardson-Lucy algorithm. International Journal of Intelligent Engineering and Systems, 10(4), 50–59. doi:10.22266/ijies2017.0831.06 Li, G., Xu, X., Zhang, M., & Liu, Q. (2020). Densely connected network for impulse noise removal. Pattern Analysis & Applications, 23(3), 1263–1275. doi:10.100710044-020-00871-y Li, Z., Liu, G., Xu, Y., & Cheng, Y. (2014). Modified directional weighted filter for removal of salt & pepper noise. Pattern Recognition Letters, 40, 113–120. doi:10.1016/j.patrec.2013.12.022 Lin, P. H., Chen, B. H., Cheng, F. C., & Huang, S. C. (2016). A morphological mean filter for impulse noise removal. Journal of Display Technology, 12(4), 344–350. Lu, C. T., & Chou, T. C. (2012). Denoising of salt-and-pepper noise corrupted image using modified directional-weighted-median filter. Pattern Recognition Letters, 33(10), 1287–1295. doi:10.1016/j.patrec.2012.03.025 Malinski, L., & Smolka, B. (2016). Fast averaging peer group filter for the impulsive noise removal in color images. Journal of Real-Time Image Processing, 11(3), 427–444. doi:10.100711554-015-0500-z Malinski, L., & Smolka, B. (2019). Fast adaptive switching technique of impulsive noise removal in color images. Journal of Real-Time Image Processing, 16(4), 1077–1098. doi:10.100711554-016-0599-6 Masood, S., Hussain, A., Jaffar, M. A., & Choi, T. S. (2014). Color differences based fuzzy filter for extremely corrupted color images. Applied Soft Computing, 21, 107–118. doi:10.1016/j.asoc.2014.03.006 Nain, N., Jindal, G., Garg, A., & Jain, A. (2008). Dynamic thresholding based edge detection. In Proceedings of the World Congress on Engineering, (pp. 2-7). Nair, M. S., & Mol, P. A. (2013). Direction based adaptive weighted switching median filter for removing high density impulse noise. Computers & Electrical Engineering, 39(2), 663–689. doi:10.1016/j.compeleceng.2012.06.004


Nair, M. S., & Raju, G. (2012). A new fuzzy-based decision algorithm for highdensity impulse noise removal. Signal, Image and Video Processing, 6(4), 579–595. doi:10.100711760-010-0186-4 Noor, A., Zhao, Y., Khan, R., Wu, L., & Abdalla, F. Y. (2020). Median filters combined with denoising convolutional neural network for Gaussian and impulse noises. Multimedia Tools and Applications, 79(25-26), 18553–18568. doi:10.100711042020-08657-4 Pal, A. K., & Biswas, G. P. (2009). On improving Visual Quality of Remote-Sensed Earthquake Images in Proceedings of National Seminar on Recent Advances in Theoretical and Applied Seismology. MDPI. Palabaş, T., & Gangal, A. (2012, July). Adaptive fuzzy filter combined with median filter for reducing intensive salt and pepper noise in gray level images. In 2012 International Symposium on Innovations in Intelligent Systems and Applications (pp. 1-4). IEEE. 10.1109/INISTA.2012.6247003 Pattnaik, A., Agarwal, S., & Chand, S. (2012). A new and efficient method for removal of high-density salt and pepper noise through cascade decision based filtering algorithm. Procedia Technology, 6, 108–117. doi:10.1016/j.protcy.2012.10.014 Petrou, M. M., & Petrou, C. (2010). Image processing: the fundamentals. John Wiley & Sons. doi:10.1002/9781119994398 Plataniotis, K., & Venetsanopoulos, A. N. (2000). Color image processing and applications. Springer-Verlag. doi:10.1007/978-3-662-04186-4 Roig, B., & Estruch, V. D. (2016). Localised rank‐ordered differences vector filter for suppression of high‐density impulse noise in colour images. IET Image Processing, 10(1), 24–33. doi:10.1049/iet-ipr.2014.0838 Roy, A., & Laskar, R. H. (2016). Multiclass SVM based adaptive filter for removal of high density impulse noise from color images. Applied Soft Computing, 46, 816–826. doi:10.1016/j.asoc.2015.09.032 Roy, A., Manam, L., & Laskar, R. H. (2018). Region adaptive fuzzy filter: An approach for removal of random-valued impulse noise. IEEE Transactions on Industrial Electronics, 65(9), 7268–7278. doi:10.1109/TIE.2018.2793225 Roy, A., Singha, J., Manam, L., & Laskar, R. H. (2017). Combination of adaptive vector median filter and weighted mean filter for removal of high‐density impulse noise from colour images. IET Image Processing, 11(6), 352–361. doi:10.1049/ iet-ipr.2016.0320


Sa, P. K., & Majhi, B. (2010). An improved adaptive impulsive noise suppression scheme for digital images. AEÜ. International Journal of Electronics and Communications, 64(4), 322–328. doi:10.1016/j.aeue.2009.01.005 Schulte, S., De Witte, V., Nachtegael, M., Van der Weken, D., & Kerre, E. E. (2007). Histogram-based fuzzy colour filter for image restoration. Image and Vision Computing, 25(9), 1377–1390. doi:10.1016/j.imavis.2006.10.002 Schulte, S., Valerie, D. W., Nachtegael, M., Dietrich, V. D. W., & Etienne, E. K. (2006). Fuzzy two-step filter for impulse noise reduction from color images. IEEE Transactions on Image Processing, 15(11), 3567–3578. doi:10.1109/TIP.2006.877494 PMID:17076414 Singh, A., Sethi, G., & Kalra, G. S. (2020). Spatially adaptive image denoising via enhanced noise detection method for grayscale and color images. IEEE Access : Practical Innovations, Open Solutions, 8, 112985–113002. doi:10.1109/ ACCESS.2020.3003874 Singh, I., & Verma, O. P. (2021). Impulse noise removal in color image sequences using fuzzy logic. Multimedia Tools and Applications, 80(12), 18279–18300. doi:10.100711042-021-10643-3 Smolka, B., & Chydzinski, A. (2005). Fast detection and impulsive noise removal in color images. Real-Time Imaging, 11(5-6), 389–402. doi:10.1016/j.rti.2005.07.003 Smolka, B., & Malinski, L. (2018). Impulsive noise removal in color digital images based on the concept of digital paths. In 2018 13th International Conference on Computer Science & Education (ICCSE) (pp. 1-6). IEEE. 10.1109/ ICCSE.2018.8468771 Sreenivasulu, P., & Chaitanya, N. K. (2014). Removal of Salt and Pepper Noise for Various Images Using Median Filters: A Comparative Study. IUP Journal of Telecommunications, 6(2). Sun, C., Tang, C., Zhu, X., Li, X., & Wang, L. (2015). An efficient method for saltand-pepper noise removal based on shearlet transform and noise detection. AEÜ. International Journal of Electronics and Communications, 69(12), 1823–1832. doi:10.1016/j.aeue.2015.09.007 Taha, A. Q., & Ibrahim, H. (2020). Reduction of Salt-and-Pepper Noise from Digital Grayscale Image by Using Recursive Switching Adaptive Median Filter. In Intelligent Manufacturing and Mechatronics: Proceedings of the 2nd Symposium on Intelligent Manufacturing and Mechatronics–SympoSIMM 2019, (pp. 32-47). Springer Singapore. 115

A Comparative Review for Color Image Denoising

Toh, K. K. V., Ibrahim, H., & Mahyuddin, M. N. (2008). Salt-and-pepper noise detection and reduction using fuzzy switching median filter. IEEE Transactions on Consumer Electronics, 54(4), 1956–1961. doi:10.1109/TCE.2008.4711258 Toh, K. K. V., & Isa, N. A. M. (2009). Noise adaptive fuzzy switching median filter for salt-and-pepper noise reduction. IEEE Signal Processing Letters, 17(3), 281–284. doi:10.1109/LSP.2009.2038769 Tsirikolias, K. (2016). Low level image processing and analysis using radius filters. Digital Signal Processing, 50, 72–83. doi:10.1016/j.dsp.2015.12.001 Turkmen, I. (2016). The ANN based detector to remove random-valued impulse noise in images. Journal of Visual Communication and Image Representation, 34, 28–36. doi:10.1016/j.jvcir.2015.10.011 Wang, G., Li, D., Pan, W., & Zang, Z. (2010). Modified switching median filter for impulse noise removal. Signal Processing, 90(12), 3213–3218. doi:10.1016/j. sigpro.2010.05.026 Wang, G., Liu, Y., Xiong, W., & Li, Y. (2018). An improved non-local means filter for color image denoising. Optik (Stuttgart), 173, 157–173. doi:10.1016/j. ijleo.2018.08.013 Wang, G., Liu, Y., & Zhao, T. (2014). A quaternion-based switching filter for colour image denoising. Signal Processing, 102, 216–225. doi:10.1016/j.sigpro.2014.03.027 Wang, G., Zhu, H., & Wang, Y. (2015). Fuzzy decision filter for color image denoising. Optik (Stuttgart), 126(20), 2428–2432. doi:10.1016/j.ijleo.2015.06.005 Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600–612. doi:10.1109/TIP.2003.819861 PMID:15376593 Xiao, L., Li, C., Wu, Z., & Wang, T. (2016). An enhancement method for X-ray image via fuzzy noise removal and homomorphic filtering. Neurocomputing, 195, 56–64. doi:10.1016/j.neucom.2015.08.113 Xiao, Y., Zeng, T., Yu, J., & Ng, M. K. (2011). Restoration of images corrupted by mixed Gaussian-impulse noise via l1–l0 minimization. Pattern Recognition, 44(8), 1708–1720. doi:10.1016/j.patcog.2011.02.002 Xu, J., Jia, Y., Shi, Z., & Pang, K. (2016). An improved anisotropic diffusion filter with semi-adaptive threshold for edge preservation. Signal Processing, 119, 80–91. doi:10.1016/j.sigpro.2015.07.017

116

A Comparative Review for Color Image Denoising

Xu, J., Wang, L., & Shi, Z. (2014). A switching weighted vector median filter based on edge detection. Signal Processing, 98, 359–369. doi:10.1016/j.sigpro.2013.11.035 Yin, L., Yang, R., Gabbouj, M., & Neuvo, Y. (1996). Weighted median filters: A tutorial. IEEE Transactions on Circuits and Systems. 2, Analog and Digital Signal Processing, 43(3), 157–192. doi:10.1109/82.486465 Yu, P., & Lee, C. S. (1993). Adaptive fuzzy median filter. In International Symposium on Artificial Neural Networks (pp. 25-34). Zhang, L., Dong, W., Zhang, D., & Shi, G. (2010). Two-stage image denoising by principal component analysis with local pixel grouping. Pattern Recognition, 43(4), 1531–1549. doi:10.1016/j.patcog.2009.09.023 Zhang, M., Liu, Y., Li, G., Qin, B., & Liu, Q. (2020). Iterative scheme-inspired network for impulse noise removal. Pattern Analysis & Applications, 23(1), 135–145. doi:10.100710044-018-0762-8 Zhao, F., Ma, R. C., & Ma, J. Q. (2012). An Algorithm for Salt and Pepper Noise Removal Based on Information Entropy. [). Trans Tech Publications Ltd.]. Applied Mechanics and Materials, 220, 2273–2279. doi:10.4028/www.scientific.net/ AMM.220-223.2273 Zhu, Z., Jin, L., Song, E., & Hung, C. C. (2018). Quaternion switching vector median filter based on local reachability density. IEEE Signal Processing Letters, 25(6), 843–847. doi:10.1109/LSP.2018.2808343

117

118

Chapter 5

Blockchain-Based Multimedia Content Protection

Sakshi Chhabra
Panipat Institute of Engineering and Technology, India

Ashutosh Kumar Singh
National Institute of Technology, Kurukshetra, India

Sumit Kumar Mahana
National Institute of Technology, Kurukshetra, India

ABSTRACT

This chapter presents a comprehensive overview of the methods and applications of blockchain technology for multimedia content security. These applications are categorised using a taxonomy that takes into account the technical features of blockchain technology, types of blockchain, content protection strategies including encryption, digital rights management, digital watermarking, and fingerprinting (or transaction tracing), as well as performance standards. Multimedia-based content protection techniques are also covered in this chapter. According to a review of the literature, there is currently no comprehensive and organised taxonomy specifically devoted to blockchain-based content protection solutions. A comparative study of the most noticeable and highly cited work on blockchain-based content protection techniques is also provided.

DOI: 10.4018/978-1-6684-6864-7.ch005 Copyright © 2023, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


INTRODUCTION

A blockchain is a peer-to-peer network that provides security for multimedia assets such as photographs, audio, and video. Multimedia protection technology shields data from threats posed by unauthorised users, particularly in network environments [1]. Multimedia data is susceptible to eavesdropping, malicious alteration, illicit distribution, copying, watermark tampering, and more, especially when it travels over networks. In this chapter, the authors focus on methods for securing multimedia content so that its protected properties, namely confidentiality, integrity, ownership, and authorization, are preserved. Blockchain is a decentralized peer-to-peer network that stores information electronically in digital form. Because of this decentralization, cryptocurrencies are not issued by any centralized organization, such as a bank or a government [2-3]. The cryptographic algorithms that safeguard them are maintained and confirmed through mining, a procedure in which transactions are processed and verified by a network of computers. Through this process, the network's miners receive rewards in the form of cryptocurrency. Like real money, cryptocurrencies can be sold or swapped for one another since they are fungible. Blockchain is used in many real-time multimedia applications in domains such as IoT, business, healthcare, energy, and agriculture, which makes multimedia content security important and urgent. Blockchain can be used to track transactions, for example tracking digital use and payments for content creators, as in image distribution for photographers, and a blockchain-based alternative gives artists more control over how their multimedia data spreads among recipients [4-6]. Blockchain can also provide security and privacy for multimedia or information transfer and can be used to track products [8-10], and it can be leveraged in the medical sector to provide peer-to-peer health checks. Blockchain technology may be used to verify the legitimacy of unique product, shipment, and document identifiers as well as to store permanent records of transactions [6-7]. Four types of blockchain networks are now available: public, private, hybrid, and consortium, as illustrated in Figure 1.



Figure 1. Classification of blockchain

1. Public blockchain: This is where distributed ledger technology (DLT) first emerged and where Bitcoin and other cryptocurrencies first surfaced. The public blockchain has no limitations and requires no permissions. The mining and trading of digital currency like Bitcoin is the most well-known use of public blockchains. Electronic notarization of affidavits and public property ownership paperwork are just two examples of how it can be used to keep a continuous record with a traceable chain of proof. Two of the most well-known public blockchains are those of Bitcoin and Ethereum [8-13].
2. Private blockchain: It works in a constrained network and is managed by a single entity. Private blockchain uses include supply chain management, asset ownership, etc.
3. Hybrid blockchain: Both public and private blockchains are present. In order to regulate who has access to certain blockchain data and what information is made public, it enables enterprises to create both a private, permission-based system and a public, permission-less system. One of the many intriguing applications of hybrid blockchain is real estate: a hybrid blockchain can be used by businesses to run systems safely while simultaneously exposing some data to the public, like listings. Retailers who use hybrid blockchain can streamline their operations, and it can also benefit heavily regulated industries like the financial services sector. A hybrid blockchain can also be used to store medical records [3].



4. Consortium blockchain: A consortium blockchain mixes private and public blockchain properties in the same way that a hybrid blockchain does. It differs in that it involves several organisational members working together on a decentralised network. A consortium blockchain removes the hazards related to a private blockchain maintained by a single organisation by restricting access to a certain group. The preset nodes on this blockchain are in charge of the consensus techniques. It has a validator node that handles transaction initiation, receipt, and validation, while member nodes have the ability to initiate or receive transactions. This kind of blockchain could be utilised for payments and banking [12-13].

A decentralised and open multimedia distribution system can be created using blockchain technology, which is well known for its ability to enable transaction verification [11-12]. A distributed digital database containing blocks of cryptographically signed transactions is known as the blockchain [14-15]. Each block is verified and subject to a consensus decision before being cryptographically linked to the one before it, as shown in Figure 2. Older blocks become increasingly challenging to change as new blocks are added (i.e., creating resistance against tampering). With its wide range of applications, including finance, healthcare, supply-chain management, and intrusion detection, to mention a few, blockchain technology has recently become a source of fresh hope [13]. Recently, its influence has been seen in applications for multimedia or intellectual property protection. Transparency, decentralisation, dependable databases, collective maintenance, trackability, security, and credibility, along with digital currency and programmable contracts, are the key characteristics of blockchain technology, and they offer creative solutions for safeguarding digital intellectual property and guaranteeing traceability. Although decentralised applications built on blockchain technology have developed quickly in recent years, the combination of content protection and blockchain technologies has received comparatively little attention. Only a small number of blockchain-based multimedia protection strategies can be found in the literature, apart from a few commercially available platforms [14-18].
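The tamper-resistance described above comes from each block embedding the hash of its predecessor. The following minimal Python sketch is our own illustration (the `Block` class and its fields are invented for this example, not any particular platform's implementation); it shows how altering an older block breaks every later link.

```python
import hashlib
import json
import time


def sha256(data: str) -> str:
    """Return the hex SHA-256 digest of a string."""
    return hashlib.sha256(data.encode("utf-8")).hexdigest()


class Block:
    def __init__(self, index, transactions, prev_hash):
        self.index = index
        self.timestamp = time.time()
        self.transactions = transactions      # e.g. signed multimedia transactions
        self.prev_hash = prev_hash            # link to the previous block
        self.hash = self.compute_hash()

    def compute_hash(self):
        payload = json.dumps(
            {"index": self.index, "timestamp": self.timestamp,
             "transactions": self.transactions, "prev_hash": self.prev_hash},
            sort_keys=True)
        return sha256(payload)


def is_chain_valid(chain):
    """Verify that every block still links to its unmodified predecessor."""
    for prev, curr in zip(chain, chain[1:]):
        if curr.prev_hash != prev.compute_hash():
            return False
    return True


# Build a tiny chain and show that tampering with an old block is detected.
genesis = Block(0, ["genesis"], "0" * 64)
chain = [genesis, Block(1, ["image-123 licensed to Alice"], genesis.hash)]
print(is_chain_valid(chain))          # True
chain[0].transactions = ["forged"]    # tamper with an older block
print(is_chain_valid(chain))          # False
```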



Figure 2. Overview of blockchain operation

NEN (The Dutch Standardization Organization) uses blockchain with QR codes to authenticate multimedia content. In recent years, cryptocurrency has become a worldwide phenomenon; however, there is still plenty to learn about this rapidly expanding technology. Many people are concerned about the technology's potential to disrupt established multimedia systems [19-20]. It is anticipated that cryptocurrency will ultimately displace paper money as the global standard of exchange. This study aims to analyse the requirements of multimedia security and its types. The main aim of this study is to offer a comprehensive overview of blockchain-based solutions for multimedia content protection. Based on technological blockchain properties, content protection techniques, and performance standards, a taxonomy is created to categorise these applications. To the best of our knowledge, no comprehensive taxonomy has been established in the literature to describe the state-of-the-art multimedia protection strategies based on blockchains. In order to address problems with workable solutions and pinpoint potential research gaps in blockchain-based multimedia protection applications, the suggested taxonomy incorporates technical elements and application knowledge. The authors think that, given the recent rise of blockchain-based multimedia protection technologies, this is an ideal time to publish this survey. The remainder of the chapter is structured as follows. The classification of content protection techniques for protecting material is shown in Section 2. Section 3 examines current research on blockchain-based multimedia content protection


systems. We also contrast the schemes in relation to the attributes listed in the taxonomy. The comparative study is discussed in Section 4. A summary discussion, several new insights into the research field, and directions for further research are provided in Section 5, which also presents the conclusions of the study.

CONTENT PROTECTION TECHNIQUES

The proposed classification is presented in this section to allow systematic dissection and comparison of the blockchain-based multimedia content applications in the review. The common characteristics and specifications of blockchain-based multimedia protection systems are identified by this taxonomy. Seven groups and their corresponding subcategories are defined by the suggested taxonomy, which provides a thorough and detailed classification of the identified categories. The practice of delivering multimedia material, such as music, text, animation, and video, digitally is known as content distribution. Multimedia files were formerly sent physically, whether via paper documents, CDs, or DVDs. As a result of technical development and Internet expansion, multimedia material in digital formats may now be published online via digital distribution channels like peer-to-peer (P2P) file distribution and sharing systems or Internet-based delivery platforms. These Internet distribution channels have established themselves as the industry norm for content delivery, guaranteeing high quality, widespread accessibility, and economical pricing [21-23].

Encryption

Encryption is a mathematical strategy that makes use of one or more cryptographic techniques to safeguard digital data. An algorithm converts the plaintext (original text) into ciphertext (an alternate form of the text), making the data unreadable. When a designated user wishes to access the data, a binary key or password can be used to decrypt it, turning the ciphertext back into plaintext so that the user can access the original data. Sensitive information should always be encrypted to stop attackers from gaining access to it. For instance, websites that communicate sensitive information like bank account and credit card numbers encrypt it to prevent fraud and identity theft [24-25]. The strength of the encryption is influenced by the size of the security key. In the latter half of the 20th century, web developers used either 56-bit or 40-bit encryption; a 40-bit key allows 2^40 possible permutations.



The Advanced Encryption Standard (AES) key length for web browsers was adjusted to 128 bits around the turn of the century, after attackers found a means to break the shorter keys. The US National Institute of Standards and Technology standardised key lengths of 128, 192, and 256 bits in 2001. Most militaries, banks, and government agencies use 256-bit encryption.

Types of Encryption:



• Symmetric Key: Symmetric encryption, which uses a single secret key to both encrypt and decrypt the plaintext, is used when speed is more critical than higher security. This encryption is widely used in credit card transactions. Symmetric encryption types include the Advanced Encryption Standard (AES), the U.S. government standard and the gold standard for data encryption, and the Data Encryption Standard (DES), an older block cipher method. DES breaks plaintext into blocks of 64 bits and uses a 56-bit key to encrypt the blocks into ciphertext.
• Asymmetric Key: Asymmetric cryptography is employed when speed is not as crucial as security and identity verification is required. Blockchain technology employs this form of encryption to validate bitcoin transactions as well as electronic document signatures.

Asymmetric-key techniques employ different keys for encryption and decryption. Techniques for asymmetric encryption include RSA and PKI. The well-known RSA encryption and decryption algorithm is used to encrypt data using a public key and send it securely [37]. Through the issuing and administration of digital certificates, public key infrastructure (PKI) regulates encryption keys [26-28]. When digital data is encrypted before being stored on computers or sent over the internet, it becomes confidential. As businesses increasingly rely on hybrid and multi-cloud configurations, public cloud security and data protection across complex environments are issues [33]. The security of the cloud is the responsibility of cloud service providers, but clients are in charge of the security of any data kept there. A company’s sensitive information must be protected while allowing authorised individuals to do their tasks. Along with data encryption, this security solution must include robust key management, access control, and audit recording capabilities.
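As a hedged illustration of the two key types, the sketch below uses the third-party Python cryptography package (an assumed dependency; any comparable library would do, and recent versions no longer require an explicit backend argument). Fernet provides authenticated symmetric encryption with one shared key, while RSA with OAEP padding shows the asymmetric case in which the public key encrypts and only the private key decrypts.

```python
# pip install cryptography  (assumed available)
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

plaintext = b"multimedia licence key: 0xA1B2C3"

# --- Symmetric: one shared secret key both encrypts and decrypts ---
secret_key = Fernet.generate_key()
f = Fernet(secret_key)
token = f.encrypt(plaintext)           # ciphertext, safe to transmit
assert f.decrypt(token) == plaintext   # the same key recovers the data

# --- Asymmetric: public key encrypts, only the private key decrypts ---
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(plaintext, oaep)
assert private_key.decrypt(ciphertext, oaep) == plaintext
```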

Digital Rights Management (DRM)

DRM systems were created to make it possible to transmit digital material to a designated recipient securely while placing limitations on how that content may be used after delivery (such as no copying, no printing, and no modification). The tools for content protection, rights generation and enforcement, user identification,


and use tracking are often provided by DRM systems, as in Figure 3. Three parties make up a generic DRM architecture: a user, a licence provider, and a content provider. The content provider is in charge of packaging and distributing the protected content, while the licence provider is in charge of creating licences and managing the content encryption keys. DRM can be implemented via hardware or software solutions [38].

Figure 3. Digital rights management

A DRM system is made to adhere to the following security standards [41]:



• Unauthorized copying: It makes sure that the digital item is packed securely to avoid being used without authorization. With encryption, this safe packing is made possible.
• Secure distribution: The authorised user must get the digital item in a secure manner.
• Conditional access: It is required to fulfil the requirements for access (licences) set out by the owners of the restricted material. The licence is made up of a rights expression language, metadata or a watermark, and a security feature to stop users from changing the access requirements to get around DRM.
• Tamper resistance: To process protected data and uphold content use rights, it must offer a reliable tamper-resistant method.



Encryption, passwords, watermarking, digital signatures, and payment mechanisms are the main DRM methods employed to combat piracy [21-24]. Technology such as encryption and passwords is used to control who has access to the information and how it is used. Watermarks and digital signatures are used to protect the content’s legitimacy and integrity as well as the users, multimedia holders, and other parties involved. To safeguard the digital rights of multimedia holders, DRM is combined with digital watermarking.
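A DRM licence essentially binds a user, a content identifier, and a set of usage rights, and must be tamper-evident. The following minimal sketch is our own simplification rather than any specific DRM product: the licence provider signs the licence fields with an HMAC so a player can reject a licence whose terms have been altered (the key, field names, and helper functions are illustrative assumptions).

```python
import hmac
import hashlib
import json
import time

PROVIDER_KEY = b"licence-provider-secret"   # held by the licence provider (illustrative)


def issue_licence(user_id: str, content_id: str, rights: dict, days_valid: int = 30) -> dict:
    """Create a licence and attach an HMAC-SHA256 signature over its fields."""
    licence = {
        "user": user_id,
        "content": content_id,
        "rights": rights,                                # e.g. {"play": True, "copy": False}
        "expires": int(time.time()) + days_valid * 86400,
    }
    body = json.dumps(licence, sort_keys=True).encode()
    licence["signature"] = hmac.new(PROVIDER_KEY, body, hashlib.sha256).hexdigest()
    return licence


def verify_licence(licence: dict) -> bool:
    """Recompute the signature and check expiry before allowing playback."""
    claimed = licence.get("signature", "")
    body = json.dumps({k: v for k, v in licence.items() if k != "signature"},
                      sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected) and licence["expires"] > time.time()


lic = issue_licence("alice", "video-42", {"play": True, "copy": False})
print(verify_licence(lic))        # True
lic["rights"]["copy"] = True      # the user tries to grant themselves copying
print(verify_licence(lic))        # False: the signature no longer matches
```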

Fingerprinting

In contrast to digital watermarking, which cannot pinpoint the source of piracy, multimedia fingerprinting, also known as transaction tracing, allows one to find the identities of the pirates (colluders) after detecting an illegal copy. This traceability is made feasible by imprinting each copy of the same material with a unique user-specific piece of data known as a fingerprint [25]. A multimedia fingerprinting algorithm is a three-step process that links the client and the content owner and makes it possible to identify pirates from copies that have been obtained unlawfully or under duress. A multimedia fingerprinting system is expected to be able to overcome the following limits [29]:





• Robustness: The resistance of a fingerprint to signal processing operations is dependent on the watermark embedding technique used. After the digital content has been modified by traditional signal processing methods, a trustworthy watermarking algorithm must be applied to detect an illegal redistributor [26].
• Collusion resistance: Even while digital fingerprinting could be successful at detecting a single opponent, a number of unscrupulous purchasers could band together to conduct potent collusion assaults against the fingerprinting system. The colluders can attempt to locate the areas carrying the fingerprint signal by comparing their various copies, erase the information from these locations, and then produce a duplicate that cannot be traced back to any of the originals.
• Tolerance for quality: Fingerprinted information ought to be perceptually comparable to the original and have excellent visual quality [41-44].

The amount of space available for embedding is what decides how long a fingerprint is given to each user. A potentially lengthy binary string serves as the fingerprint. In order to accommodate a whole fingerprint, a digital fingerprint system has to have a large enough embedding capacity [45].
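As a toy illustration of embedding capacity, the sketch below hides a user-specific bit string in the least-significant bits of a greyscale image held as a NumPy array. This is deliberately simplistic: practical fingerprinting uses robust, collusion-resistant codes and transform-domain embedding, so the helper names and the scheme itself are assumptions for illustration only.

```python
import numpy as np


def embed_fingerprint(image: np.ndarray, bits: str) -> np.ndarray:
    """Write one fingerprint bit into the LSB of each of the first len(bits) pixels."""
    flat = image.astype(np.uint8).flatten()
    if len(bits) > flat.size:
        raise ValueError("fingerprint longer than embedding capacity")
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(b)     # clear the LSB, then set it to the bit
    return flat.reshape(image.shape)


def extract_fingerprint(image: np.ndarray, length: int) -> str:
    """Read the fingerprint back from the least-significant bits."""
    flat = image.astype(np.uint8).flatten()
    return "".join(str(p & 1) for p in flat[:length])


rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in greyscale image
user_fp = "1011001110001111"                                   # per-user fingerprint
marked = embed_fingerprint(cover, user_fp)
assert extract_fingerprint(marked, len(user_fp)) == user_fp
```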



Hashing

Due to the prevalence of numerous image-editing programmes, it is becoming more and more common to alter stolen photographs before uploading them in order to get around multimedia restrictions. The registration of image content and the identification of potentially illegal photographs have become quite challenging as a result. The image-oriented perceptual hashing approach has therefore gradually come to the attention of academics in recent years. Perceptual image hashing can be thought of as a one-way mapping that converts pictures into brief sequences of a specific length. In areas related to image content protection, such as copy detection, image retrieval, and content registration, perceptual image hashing has significant application value.
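One of the simplest perceptual hashes is the average hash (aHash). The sketch below, assuming Pillow and NumPy are installed, reduces an image to an 8x8 greyscale thumbnail, compares each pixel with the mean, and packs the result into a 64-bit hash; near-duplicate images then differ in only a few bits, measured by Hamming distance. The file names in the usage comment are hypothetical.

```python
import numpy as np
from PIL import Image


def average_hash(path: str, hash_size: int = 8) -> int:
    """Perceptual aHash: 1 bit per pixel of an 8x8 greyscale thumbnail."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(img, dtype=np.float32)
    bits = (pixels > pixels.mean()).flatten()
    value = 0
    for bit in bits:                      # pack the 64 bits into one integer
        value = (value << 1) | int(bit)
    return value


def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits; small distances indicate near-duplicate images."""
    return bin(h1 ^ h2).count("1")


# Hypothetical usage: flag a suspected copy if the hashes are close.
# original = average_hash("registered.png")
# suspect = average_hash("uploaded.jpg")
# if hamming_distance(original, suspect) <= 10:
#     print("possible unauthorised copy")
```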

Cryptocurrency

A cryptocurrency, also known as a crypto-currency or crypto, is a type of digital currency that trades value through a computer network and is not backed or controlled by a single, centralised organisation, such as a bank or government. This decentralised method of confirming that the participants in a transaction actually have the funds they claim to have eliminates the need for conventional middlemen like banks when money is exchanged between two parties [31]. A unit of currency is represented by its encrypted data, as shown in Figure 4. A blockchain is a peer-to-peer network that ensures the security of transactions. As a result of decentralization, cryptocurrencies are not issued by any centralized organization, such as a bank or the government. They are secured using cryptographic techniques, which are upheld and verified by mining, a procedure in which transactions are processed and verified by a network of computers; through this process, the miners who oversee the network are rewarded. Similar to real money, cryptocurrencies may be bought, sold, or swapped for one another. There are two forms of cryptocurrency:

• Tokens, which are programmable assets that exist inside the blockchain of a certain platform.
• Coins, which can include Bitcoin and altcoins (non-Bitcoin cryptocurrencies).

Even though many people confuse tokens and coins, it is essential to know how they vary.








• Coins: Coins are distinguished by the fact that they are native to their own blockchain. They are used on a blockchain as the gas or fuel payment token, although the gas might be paid in a different cryptocurrency. Bitcoin used on the Bitcoin blockchain and Ether used on the Ethereum blockchain are two examples. New coins appear as new blockchains are developed [34].
• Altcoins: Although they are classified as coins, they are all seen as alternatives to Bitcoin, the original cryptocurrency. Apart from Ethereum, the majority of the initial ones were split from Bitcoin and were dubbed "shitcoins." Namecoin, Peercoin, Litecoin, Dogecoin, and Auroracoin are just a few of the altcoins available, while others, such as Ethereum, Ripple, Omni, and NEO, have their own blockchains.
• Tokens: In a blockchain, tokens are digital representations of certain assets or services. Tokens are often distributed through an Initial Coin Offering (ICO), which works in much the same way that a stock offering does. Tokens, like paper dollars, signify value, but their worth is not always the same. A token differs from a coin in that it is built on the blockchain of an existing coin, such as Ethereum or Bitcoin.

Figure 4. Working of cryptocurrency



There are several characteristics of cryptocurrencies that justify their use [30]:









Transaction speed: Cryptocurrencies are one of the fastest ways to transfer money and assets from one account to another. In the United States, the majority of money transfers require 3 to 5 days to complete. It takes at least 24 hours for a wire transfer to be completed. Three days is the time it takes for stock deals to be settled. However, one of the benefits of bitcoin transactions is that they may be done in minutes. The money is accessible to utilize after the network has validated the block containing your transaction. Transaction costs: In comparison to other financial services, the cost of dealing in bitcoin is rather cheap. A domestic wire transfer, for example, can cost somewhere between $25 and $30. It might be significantly more costly to send money internationally. Transactions involving cryptocurrency are typically cheaper. Even on the busiest blockchains, median transaction fees are still cheaper than wire transfer prices. Accessibility: Cryptocurrency is accessible to anyone. In comparison to opening an account at a typical financial institution, the process of creating a bitcoin wallet is incredibly quick. Unbanked persons may exploit cryptocurrencies to obtain financial services without having to go via a central authority. Bitcoin may be used to perform online transactions or transfer money to relatives and friends by those who do not use conventional banking systems. Security: They won’t be able to sign transactions or access your funds unless they have access to your crypto wallet’s private key. You won’t be able to reclaim your cash if you lose your private key. The transactions are secure because of the decentralized network and architecture of blockchain technology. The security of the network improves as more processing power is added to it Any attempt to alter the blockchain through a network assault would require enough computing power to validate a large number of blocks before the rest of the network could verify the ledger’s accuracy. Such an assault would be prohibitively costly on popular blockchains like Bitcoin or Ethereum. Privacy: You can maintain a level of privacy by transacting with cryptocurrency without having to register for an account with a financial institution. Transactions are pseudonymous, this means you have a unique identity (your wallet address) on the blockchain but no personal details. This level of privacy is advantageous in many situations. (both innocent and illicit). All transaction data is public if a wallet address is linked to an identity. There are various techniques to further obfuscate transactions, as well as several privacy-focused currencies, to increase cryptocurrency’s private nature. 129




Transparency: The publicly distributed blockchain ledger is where all bitcoin transactions take place. Using tools, anybody may see transaction data, such as where, when, and how much bitcoin was sent from a wallet address. Anyone with access to a wallet may see how much cryptocurrency is kept within. This degree of transparency can help to decrease fraudulent transactions. Someone can show that they have the resources to carry out a transaction or that they transferred money and it was received.

Cryptocurrency adoption is exploding all over the world. The Chainalysis 2022 Global Crypto Adoption Index ranks the top 20 countries based on three metrics: total crypto activity, non-professional user trading activity, and peer-to-peer exchange transaction volume, weighted by purchasing power parity per capita. Countries are scored on a scale from 0 to 1, as shown in Table 1 [44].

Table 1. Analysis of top 20 countries, Global Crypto Adoption Index, 2022



Cryptocurrencies can be divided into four categories based on their utility:







• Currency: Bitcoin, the world's first cryptocurrency, was created to be used as money; the goal was to reduce the cost and increase the speed of cross-border payments. Any public decentralized blockchain can employ such a currency: Ether is the native token of the Ethereum blockchain, and SOL is the coin of the Solana blockchain. Through these tokens, developers and the general public can use a blockchain's native coins [29].
• Asset: Because the value of stablecoins is derived from the value of an external item, these cryptocurrencies may be classified as assets. The US dollar, for example, gives USDT its value, and the value of gold is reflected in the gold GLC.
• Object: These cryptocurrencies were launched to fund unique projects aimed at resolving global issues, and many purchasers believe that this is where cryptocurrencies will go in the future. Decentraland, for example, is an Ethereum-based programme that allows users to acquire virtual land (NFT-based) using its coin (MANA).
• Meme or Joke Coin: Even though they were made only for pleasure and with no clear objective or purpose in mind, they are now worth millions.

Some of the popular cryptocurrencies are shown in Table 2 [28]:

Table 2. Market cap scenario

Name     | Symbol | Technology           | Market Cap
---------|--------|----------------------|------------------
Bitcoin  | BTC    | Bitcoin              | $815,765,050,034
Ethereum | ETH    | Ethereum             | $364,498,285,086
Tether   | USDT   | Bitcoin and Ethereum | $80,919,220,472
Cardano  | ADA    | Ethereum             | $36,699,200,133
Solana   | SOL    | Ethereum             | $31,154,356,532
Polkadot | DOT    | Ethereum             | $20,724,841,422

MULTIMEDIA-BASED CONTENT PROTECTION TECHNIQUES

Cloud-Based Multimedia Content Protection System

A system has been proposed that protects multimedia material on a large scale. It was designed to safeguard several types of material, including 2-D and 3-D films, audio snippets, photos, and music videos. The system may be set up in a private or public cloud.


They suggested two innovative parts: a technique for creating multimedia content signatures and a distributed matching engine for protecting multimedia assets. In order to conduct object matching and query processing, content providers such as Pixar or Disney that host new multimedia material online create signatures and store them in distributed indexes as part of the content reference registration procedure. To find violations when pirated copies are discovered online, query signatures are created and compared against the signatures saved in the distributed index [31].

Signature Creation: The method allows for the production of signatures from a variety of media. In fact, it permits the development of composite signatures that may contain one or more components, such as visual, audio, depth, and information signatures. The crucial phases in signature generation are listed below.
a. Calculating the visual descriptors of each picture.
b. Dividing each picture into blocks.
c. Matching visual descriptors using the Euclidean distance.
d. Computing the disparity of each block.
e. Creating the signature.

Distributed Matching Engine: Object matching and distributed index components make up the distributed matching engine. Multimedia items are distinguished by several high-dimensional features. For instance, 100–200 SIFT descriptors might be used to describe a picture, with up to 128 dimensions per descriptor, although this varies from one multimedia product to another. A distributed index tree that contains the signatures of multimedia objects is created as part of a matching engine that defines the object matching logic. Object matching takes place in three phases: the query data set is partitioned first, the K-nearest neighbours are then determined for each data point, and finally application-specific object matching is performed. The precision of the K-nearest neighbours for a point, and the average precision across all the points in the query set, are used to assess the system. Equations (1) and (2) are used to match visual descriptors and compute block disparity.





$$\left\| F_i^{T} - F_j^{Q} \right\| = \sqrt{(m_{i1} - m_{j1})^2 + \cdots + (m_{iM} - m_{jM})^2} \qquad (1)$$

$$\sqrt{\left(\frac{p_i - p_j}{X_n}\right)^2 + \left(\frac{q_i - q_j}{Y_n}\right)^2} \qquad (2)$$
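The sketch below is a minimal NumPy illustration of this matching step, not the authors' code: `descriptor_distance` follows Equation (1), and `block_disparity` follows Equation (2) under the assumption that (p, q) are block coordinates and X_n, Y_n are the frame dimensions used for normalisation.

```python
import numpy as np


def descriptor_distance(f_i: np.ndarray, f_j: np.ndarray) -> float:
    """Equation (1): Euclidean distance between two M-dimensional descriptors."""
    return float(np.sqrt(np.sum((f_i - f_j) ** 2)))


def block_disparity(p_i, q_i, p_j, q_j, x_n, y_n) -> float:
    """Equation (2): spatial offset of two blocks, normalised by frame width/height."""
    return float(np.sqrt(((p_i - p_j) / x_n) ** 2 + ((q_i - q_j) / y_n) ** 2))


def k_nearest(query: np.ndarray, reference: np.ndarray, k: int = 5) -> np.ndarray:
    """Indices of the k reference descriptors closest to the query descriptor."""
    dists = np.linalg.norm(reference - query, axis=1)
    return np.argsort(dists)[:k]


rng = np.random.default_rng(1)
reference_sigs = rng.random((200, 128))   # e.g. 200 stored 128-D SIFT-like descriptors
query_sig = rng.random(128)
print(k_nearest(query_sig, reference_sigs, k=3))
```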

Video Fingerprinting for Content-Based Identification

In order to identify a video, video fingerprinting computes statistics for the video frames and creates a fingerprint from the change in these statistics over the frames. The technique computes values that reflect motion between various parts of the film and uses information related to those values to create a fingerprint that identifies the video. This fingerprinting notion was used to build a system for content-based video identification with two crucial stages: fingerprint extraction and fingerprint matching. The former is used to extract a fingerprint from a specific multimedia asset, while the latter is used to compare two films using their respective fingerprints. The provided video is first separated into resampled frames, which are then converted to greyscale, since greyscale enhances the reliability of fingerprint extraction. Blocks are then created from the scaled frames, and the centroid of gradient orientations is calculated for each block. The video clip's compact characteristics are then extracted to create a fingerprint vector, which is used to uniquely identify the video. A crucial step in the proposed system is fingerprint matching, which is in charge of extracting a fingerprint from a video under enquiry and comparing it to one in the database [43].
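Below is a simplified, hypothetical sketch of the extraction and matching stages, assuming the frames are already decoded as greyscale NumPy arrays; per-block means stand in for the centroid-of-gradient-orientation features used by the actual method.

```python
import numpy as np


def frame_block_means(frame: np.ndarray, grid: int = 4) -> np.ndarray:
    """Split a greyscale frame into a grid x grid set of blocks and return each block's mean."""
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    return np.array([[frame[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].mean()
                      for c in range(grid)] for r in range(grid)])


def video_fingerprint(frames: list[np.ndarray]) -> np.ndarray:
    """Fingerprint = sign of the change in block statistics between consecutive frames."""
    stats = np.stack([frame_block_means(f).flatten() for f in frames])
    return (np.diff(stats, axis=0) > 0).astype(np.uint8).flatten()


def match_score(fp_a: np.ndarray, fp_b: np.ndarray) -> float:
    """Fraction of fingerprint bits that agree between two equal-length clips."""
    n = min(fp_a.size, fp_b.size)
    return float((fp_a[:n] == fp_b[:n]).mean())


rng = np.random.default_rng(2)
clip = [rng.integers(0, 256, (120, 160)).astype(np.float32) for _ in range(10)]
print(video_fingerprint(clip).shape)
print(match_score(video_fingerprint(clip), video_fingerprint(clip)))   # 1.0 for identical clips
```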

Decentralized Video Streaming Platform Protection System

A network whose nodes are not reliant on a single master node is referred to as a decentralised system; control is distributed among several nodes. This kind of service delivery is used by cryptocurrency platforms like Bitcoin, Ethereum, Litecoin, etc. A blockchain-based digital video security protection architecture has been presented. This concept uses the Ethereum Virtual Machine (EVM) to deploy the blockchain, and each block contains transactions over private digital video data saved on a data-hiding server. The framework is made up of three main layers, described below: the Interface layer, the Blockchain layer, and the data-hiding server layer.









• Interface Layer: This layer offers a user interface for digital video owners and potential consumers that abstracts the Blockchain layer's functionalities to allow for various data-hiding manipulations. In particular, it serves as a kind of middleware between the Blockchain network and the data-hiding server. Its operations include reading, writing, and executing transactions with smart contracts [27]; an authorization operation for secret data stored in the Blockchain network, incorporated within the security policy; and query operations that can inspect the operation records of secret data on the Blockchain and the data-hiding server in order to track down unlawful attacks and conduct assessments of the server.
• Blockchain Layer: The Blockchain layer handles block contents. Transaction construction first accepts Interface layer requests and translates the secret data into a set of transactions. Miners then broadcast to the network and package the newly created transactions into new blocks in accordance with the requirements for transaction validation, such as the timestamp, account balance, and hash validation.
• Data-Hiding Server Layer: This layer contains the embedding and extraction server; its primary goal is visual quality, which has a significant impact on digital video security protection.

Blockchain Consensus Algorithms

The protocols and algorithms that make up a consensus mechanism provide the guidelines that the nodes must adhere to in order to validate blocks. This approach resolves data synchronisation between nodes that do not trust one another in a distributed system. The consensus protocol is a fault-tolerant method for achieving the necessary agreement on a particular data value or network state. It pursues the following objectives: reaching consensus, teamwork, cooperation, granting each node equal rights, and requiring each node to take part [31-34]. Most blockchains employ one of the popular consensus protocols listed below (a minimal proof-of-work sketch follows the list):


Proof-of-work (PoW): Validators, also known as miners or node participants in the proof of work process, must demonstrate that the work they have completed and submitted entitles them to add new transactions to the blockchain network. To do this, miners must crack the new block’s mathematical riddles before approving it to the ledger. Before approving the copies of the ledger, the solution is then sent to additional validators for review. The blockchain’s core network is able to avoid double-spending by using Proof of Work (PoW) verification for each transaction. To put it another way, if someone tries to replicate a transaction on the blockchain network, it








will be detected and rejected by the system. Once the transaction has been confirmed and accepted by all node participants, it cannot be changed. Proof-of-Stake (PoS): This is the most common substitution for PoW. PoS has replaced PoW as the Ethereum consensus. By staking a percentage of their own coins in this type of consensus process, validators invest in the system’s currency rather than forking over cash to solve a difficult problem. The blocks will then start to go through all validators. When a validator encounters a block they think should be added to the chain, they validate it by placing a bet on it. In line with their stakes, which increase when actual blocks are uploaded to the Blockchain, all validators earn rewards depending on their stakes. Proof-of-Elapsed-Time: PoET is one of the most moral consensus algorithms because it only selects the following block based on moral standards. It is frequently used in Blockchain networks with permissions. This technique gives any validator on the network an equal chance to create their own block. To do this, each node waits for a different amount of time before adding a proof of their wait to the block. The generated blocks are broadcast to the network for peer assessment. In the proof section, the validator with the smallest timeout value prevails. The block of the successful validator node is added to the Blockchain. Additional programme safeguards prevent nodes from always producing the lowest timer value or winning the election. Proof-of-Capacity: Instead of investing money on expensive equipment or burning coins, the Proof of Capacity consensus expects validators to use the space on their hard drives. Validators are more likely to be chosen to mine the following block and get the block reward if they have a bigger hard drive.
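As an illustration of the puzzle that miners solve, the following sketch (a toy with an artificially low difficulty, not any real network's parameters) searches for a nonce whose SHA-256 digest of the block data starts with a required number of zeros, and shows that verification is a single cheap hash.

```python
import hashlib


def proof_of_work(block_data: str, difficulty: int = 4):
    """Find a nonce so that sha256(block_data + nonce) starts with `difficulty` zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest          # the "work" that other nodes can verify instantly
        nonce += 1


def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    """Verification is cheap: a single hash, unlike the search for the nonce."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)


nonce, digest = proof_of_work("block#42|txs:licence video-7 to Bob")
print(nonce, digest[:12], verify("block#42|txs:licence video-7 to Bob", nonce))
```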

COMPARISONS BETWEEN DIFFERENT BLOCKCHAIN CONTENT PROTECTION SYSTEMS

The most noticeable and highly cited work on blockchain-based multimedia content protection techniques and models is summarised in the table below. As discussed earlier, this chapter also concentrates on reviewing content protection techniques; the relevant literature review is presented in Table 3:



Table 3. Summary of methods for blockchain-based multimedia content protection

Author & Year | Technique | Advantages | Limitations | Remarks
Yuqing Ding et al. | Hybrid P2P content distribution network | It safeguards the integrity of the movie during dissemination and prevents arbitrary alteration. The suggested P2P-CDN might accomplish security and privacy protection, according to the security evaluation. | Indexing schemes for parallel convolution are yet to be implemented. | Colour- and intensity-based signatures are used.
Hefeeda et al. (2015) | System for protecting multimedia material in the cloud that uses a distributed matching engine and signature creation. | Supports different types of multimedia content. | Batch processing and support for multi-view-plus-depth videos are not explored. | The distributed index helps in object matching and query processing.
Khodabakshi and Hefeeda (2013) | Novel content-based copy detection for 3D videos. | High precision and recall. | 3D formats of videos are supported. | 3D formats of videos are supported.
Hongguo Zhao et al. (2020) | Ethereum-based data hiding algorithms. | The suggested architecture might provide accurate and highly effective digital video security protection. | Lack of support for different types of multimedia content. | Enables the blockchain nodes to store the private information related to digital video.
Liming Liu et al. (2021) | Inter-Planetary File System, verifiable certificates, multimedia encryption key. | It resolves the uploading, reviewing, transacting, reporting, and distributing issues related to multimedia in the decentralised situation. | Indexing schemes for parallel convolution are yet to be implemented. | Chain transactions and issues with several people's revenue sharing can be solved by the system.
Gabin Heo et al. (2021) | Hyperledger Sawtooth and Hyperledger Caliper for experiments in the blockchain environment. | This study enhanced digital rights management (DRM) and digital fingerprinting to address the issue of unlawful digital content copying and leakage. | It does not work as a portable blockchain system; it has to be created according to the situation. | It also included blockchain to address the issues of profit sharing, forgery, and falsification.
Amma Qureshi et al. (2019) | Perceptual hash functions, collusion-resistant fingerprinting, peer-to-peer file distribution network. | A scheme for the security of image content transactions over networks. Multimedia protection, resistance to collusion, atomic payment, traceability of pirated goods, transparency, proof-of-delivery, revocable privacy, and dispute resolution are all provided. | 3D formats of videos are supported. | The report also examines several attacks and defences that compromise security and privacy.

FUTURE RESEARCH DIRECTIONS

In this section, we outline potential research areas that emerged from a detailed examination of the blockchain-based systems for safeguarding multimedia information.











Design enhancements: The fine-grained examination of the assessed cutting-edge systems reveals the necessity of enhancing conventional content protection techniques (encryption, DRM, watermarking/fingerprinting) in order to facilitate amicable integration with blockchain technology. Future research can examine topics like key management, concurrent key acquisition, and key security while developing blockchain-based multimedia encryption systems. The seamless execution of the transaction between the multimedia provider and the client, the transfer of access rights to consumers without a trusted third party, and privacy-aware fine-grained use management in blockchain-based DRM systems all require more research [32]. Similarly, to enable widespread use of the blockchain-based multimedia protection applications, future research should focus on low embedding and computational complexity, high robustness against potential security attacks, and acceptable transparency in blockchain-based watermarking and fingerprinting schemes [35]. Systems without trust: Before the invention of the blockchain, there were fully decentralised content protection techniques. In addition to decentralisation, blockchain has another important benefit: trustworthiness. The examined blockchain-based content protection solutions are built on hybrid trust models that take into account the existence of trusted users or third parties. Therefore, it is necessary to fully utilise blockchain technology and create systems for multimedia protection that are completely trustworthy. Security concerns: To ensure that only authorised parties can access the sensitive information, it is necessary to study the integration of blockchain technology with an access control system for off-chain sources. The issue of making the off-chain sources fault resilient and preventing them from acting as bottlenecks or single points of failure must also be addressed. Promoting the use of blockchain in multimedia applications: The majority of studies have not yet addressed the costs and constraints in deploying the content protection mechanisms on blockchain at the commercial level. It would also take a lot of time to develop and would require acceptance by all parties involved of additional technological advancements and security guarantees (such as multimedia owners, multimedia producers, buyers, and others).

CONCLUSION The goal of this study is to provide a summary of blockchain-based content protection technologies. In this study, we create a taxonomy to categorise the most 137

Blockchain-Based Multimedia Content Protection

complex blockchain-based multimedia protection schemes according to performance requirements, the most popular content protection methods, and the technical characteristics of blockchain technology. Four widely used content protection measures and some background information on the blockchain technology are covered at the outset of the session. The applications for multimedia protection based on blockchain are then explained in detail. These approaches are also contrasted in relation to the established taxonomy. The blockchain technology and other key research difficulties related to content protection strategies are also covered. Several potential areas for further study are then presented. Blockchain has a great chance of being widely used in multimedia management and protection applications, according to academics. Applications for multimedia protection based on blockchain enable direct communication between customers and multimedia owners without the use of expensive middlemen [30]. These programmes give content owners the ability to upload multimedia material, manage licencing and multimedia choices, regulate distribution, track down sources of piracy, and get paid when their material is used. To develop practical multimedia protection systems that may effectively benefit from the usage of blockchain technology, there are still a lot of outstanding issues that need to be further investigated and studied. However, there are a number of blockchain technology-related factors that are hard to forecast but essential for the success of such applications, including as scalability, reliability, and market adoption. Researchers must take into account each of these aspects while developing and deploying a new blockchain-based content protection system. We think that this poll will be important for finding the most relevant data on how to combine content protection methods with blockchain technology.

REFERENCES . Chainalysis Team. (2022). The 2022 Global Crypto Adoption Index: Emerging Markets Lead in Grassroots Adoption, China Remains Active Despite Ban, and Crypto Fundamentals Appear Healthy. Chainalysis. Averin A. & Averina, O. (2020). Review of Blockchain Frameworks and Platforms. 2020 International Multi-Conference on Industrial Engineering and Modern Technologies (FarEastCon), Vladivostok, Russia. . doi:10.1109/FarEastCon50210.2020.9271217 Berti, J. (2009, November). Multimedia infringement and protection in the Internet age. IT Professional, 11(6), 42–45. doi:10.1109/MITP.2009.118



Bhaskaran, K., Ilfrich, P., Liffman, D., Vecchiola, C., Jayachandran, P., Kumar, A., Lim, F., Nandakumar, K., Qin, Z., Ramakrishna, V., Teo, E. G., & Suen, C. H. (2018). Double-blind consent-driven data sharing on blockchain. In Proc. IEEE Int. Conf. Cloud Eng. (IC2E) (pp. 385–391). IEEE. 10.1109/IC2E.2018.00073 Market Trends. (2022). Top 8 Best Cryptocurrencies to Invest in 2022. Analytics Insight. https://www.analyticsinsight.net/top-8-best-cryptocurrencies-to-investin-2022/. Campidoglio, M., Frattolillo, F., & Landolfi, F. (2009). The multimedia protection problem: Challenges and suggestions. Proc. 4th Int. Conf. Internet Web Appl. Services, (pp. 522–526). Chen, Y. Y., Jan, J. K., Chi, Y. Y., & Tsai, M. L. (2009). A Feasible DRM Mechanism for BT-Like P2P System. In Proceedings of the International Symposium on Information Engineering and Electronic Commerce, Ternopil, Ukraine. Chhabra, S., & Singh, A. K. (2020). Secure VM Allocation Scheme to Preserve against Co-Resident Threat. International Journal of Web Engineering and Technology, 15(1), 96–115. doi:10.1504/IJWET.2020.107686 Chhabra, S., & Singh, A. K. (2021). Dynamic Resource Allocation Method for Load Balance Scheduling over Cloud Data Center Networks. Journal of Web Engineering, 20(8). doi:10.13052/jwe1540-9589.2083 Cox, I. J., Miller, M. L., Bloom, J. A., & Honsinger, C. (2002). Digital Watermarking (Vol. 53). Morgan Kaufmann. Gao, S., Yu, T., Zhu, J., & Cai, W. (2019, December). T-PBFT: An EigenTrust-based practical byzantine fault tolerance consensus algorithm. China Communications, 16(12), 111–123. doi:10.23919/JCC.2019.12.008 Hamidouche, W., Farajallah, M., Sidaty, N., Assad, S. E., & Deforges, O. (2017). Real-time selective video encryption based on the chaos system in scalable HEVC extension. Signal Processing Image Communication, 58, 73–86. doi:10.1016/j. image.2017.06.007 Hao, Y., Li, Y., Dong, X., Fang, L., & Chen, P. (2018). Performance analysis of consensus algorithm in private blockchain. In Proc. IEEE Intell. Vehicles Symp. (IV), (pp. 280–285). IEEE. 10.1109/IVS.2018.8500557 Heo, G., Yang, D., Doh, I., & Chae, K. (2009). Design of blockchain system for protection of personal information in digital content trading environment. In Proc. Int. Conf. Inf. Netw. (ICOIN), (pp. 152–157). IEEE. 10.1109/ICOIN48656.2020.9016501 139


Hon, W., Palfreyman, J., & Tegart, M. (2016). Distributed ledger technology & cybersecurity–Improving information security in the financial sector. In Eur. Union Agency Netw. Inf. Secur., (pp. 1–36). NIH. Kan, L., Wei, Y., Hafiz Muhammad, A., Siyuan, W., Gao, L. C., & Kai, H. (2018). A multiple blockchains architecture on inter-blockchain communication. In Proc. IEEE Int. Conf. Softw. Qual., Rel. Secur. Companion (QRS-C), (pp. 139–145). IEEE. 10.1109/QRS-C.2018.00037 Kawase, Y., & Kasahara, S. (2017). Transaction-Confirmation Time for Bitcoin: A Queueing Analytical Approach to Blockchain Mechanism. In Queueing Theory and Network Applications (pp. 75–88). Springer. doi:10.1007/978-3-319-68520-5_5 Khan, U., An, Z. Y., & Imran, A. (2020, November). A blockchain ethereum technologyenabled digital content: Development of trading and sharing economy data. IEEE Access : Practical Innovations, Open Solutions, 8, 217045–217056. doi:10.1109/ACCESS.2020.3041317 Kuribayashi, M., & Funabiki, N. (2019). Decentralized tracing protocol for fingerprinting system. APSIPA Transactions on Signal and Information Processing, 8(1), 1–8. doi:10.1017/ATSIP.2018.28 Lao, L., Dai, X., Xiao, B., & Guo, S. (2020). G-PBFT: A location-based and scalable consensus protocol for IoT-blockchain applications. In Proc. IEEE Int. Parallel Distrib. Process. Symp. (IPDPS), (pp. 664–673). IEEE. 10.1109/IPDPS47924.2020.00074 Liu, Q., Safavi-Naini, R., & Sheppard, N. P. (2003). Digital rights management for content distribution. In Proc. Australas. Inf. Secur. Workshop Conf. ACSW Frontiers, (pp. 49–58). ACSW. Ma, Z., Jiang, M., Gao, H., & Wang, Z. (2018, December). Blockchain for digital rights management. Future Generation Computer Systems, 89, 746–764. doi:10.1016/j. future.2018.07.029 Megías, D. (2014). Improved Privacy-Preserving P2P Multimedia Distribution Based on Recombined Fingerprints. IEEE Transactions on Dependable and Secure Computing, 12(2), 179–189. doi:10.1109/TDSC.2014.2320712 Megías, D., & Qureshi, A. (2017). Collusion-resistant and privacy-preserving P2P multimedia distribution based on recombined fingerprinting. Expert Systems with Applications, 71, 147–172. doi:10.1016/j.eswa.2016.11.015



Menendez-Ortiz, A., Feregrino-Uribe, C., Hasimoto-Beltran, R., & GarciaHernandez, J. J. (2019). A survey on reversible watermarking for multimedia content: A robustness overview. IEEE Access : Practical Innovations, Open Solutions, 7, 132662–132681. doi:10.1109/ACCESS.2019.2940972 Menendez-Ortiz, A., Feregrino-Uribe, C., Hasimoto-Beltran, R., & GarciaHernandez, J. J. (2019). A survey on reversible watermarking for multimedia content: A robustness overview. IEEE Access : Practical Innovations, Open Solutions, 7, 132662–132681. doi:10.1109/ACCESS.2019.2940972 Menendez-Ortiz, A., Feregrino-Uribe, C., Hasimoto-Beltran, R., & GarciaHernandez, J. J. (2019). A survey on reversible watermarking for multimedia content: A robustness overview. IEEE Access : Practical Innovations, Open Solutions, 7, 132662–132681. doi:10.1109/ACCESS.2019.2940972 Meng, Z., Morizumi, T., Miyata, S., & Kinoshita, H. (2018). Design scheme of multimedia management system based on digital watermarking and blockchain. Proc. IEEE 42nd Annu. Comput. Softw. Appl. Conf. (COMPSAC), (pp. 359–364). IEEE. Piva, A., Bartolini, F., & Barni, M. (2002, May). Managing multimedia in open networks. IEEE Internet Computing, 6(3), 18–26. doi:10.1109/MIC.2002.1003126 Pizzolante, R., Castiglione, A., Carpentieri, B., Santis, A. D., & Castiglione, A. (2015). Reversible Multimedia Protection for DNA Microarray Images. In Proceedings of the 10th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), Krakow, Poland. Puthal, D., Malik, N., Mohanty, S. P., Kougianos, E., & Das, G. (2018, July). Everything you wanted to know about the blockchain: Its promise, components, processes, and problems. IEEE Consumer Electronics Magazine, 7(4), 6–14. doi:10.1109/MCE.2018.2816299 Qiu, Y., Gu, H., & Sun, J. (2018). Reversible watermarking algorithm of vector maps based on ECC. Multimedia Tools and Applications, 77(18), 23651–23672. doi:10.100711042-018-5680-7 Qureshi, A., & Megías, D. (2019). Blockchain-based P2P multimedia content distribution using collusion-resistant fingerprinting. In Proceedings of the 11th Asia-Pacific Signal and Information Processing Association (APSIPA) Annual Summit and Conference, Lanzhou, China. Qureshi, A., Megías, D., & Rifà-Pous, H. (2014). Secure and Anonymous Multimedia Content Distribution in Peer-to-Peer Networks. In Proceedings of the 6th International Conference on Advances in Multimedia, Nice, France. 141

Blockchain-Based Multimedia Content Protection

Qureshi, A., Megías, D., & Rifà-Pous, H. (2016). PSUM: Peer-to-Peer Multimedia Content Distribution using Collusion-Resistant Fingerprinting. Journal of Network and Computer Applications, 66, 180–197. doi:10.1016/j.jnca.2016.03.007 Rouhani, S., & Deters, R. (2019, April). Security, performance, and applications of smart contracts: A systematic survey. IEEE Access : Practical Innovations, Open Solutions, 7, 50759–50779. doi:10.1109/ACCESS.2019.2911031 Shahriar Hazari, S., & Mahmoud, Q. (2020). Improving Transaction Speed and Scalability of Blockchain Systems via Parallel Proof of Work. Future Internet, 12(8), 125. doi:10.3390/fi12080125 Shrestha, B., Halgamuge, M. N., & Treiblmaier, H. (2020). Using Blockchain for Online Multimedia Management: Characteristics of Existing Platforms. In Blockchain and Distributed Ledger Technology Use Cases: Applications and Lessons Learned (pp. 289–303). Springer. doi:10.1007/978-3-030-44337-5_14 Tayan, O., & Alginahi, Y. M. (2014). A review of recent advances on multimedia watermarking security and design implications for digital Quran computing. 2014 International Symposium on Biometrics and Security Technologies (ISBAST) (pp. 304-309). IEEE. 10.1109/ISBAST.2014.7013139 Wang, S., Ouyang, L., Yuan, Y., Ni, X., Han, X., & Wang, F. Y. (2019). Blockchainenabled smart contracts: Architecture, applications, and future trends. IEEE Transactions on Systems, Man, and Cybernetics. Systems, 49(11), 2266–2277. doi:10.1109/TSMC.2019.2895123 Wang, W., Hoang, D. T., Hu, P., Xiong, Z., Niyato, D., Wang, P., Wen, Y., & Kim, D. I. (2019, January). A survey on consensus mechanisms and mining strategy management in blockchain networks. IEEE Access : Practical Innovations, Open Solutions, 7, 22328–22370. doi:10.1109/ACCESS.2019.2896108 Wirth, C., & Kolain, M. (2018). Privacy by blockchain design: A blockchainenabled GDPR-compliant approach for handling personal data,’’ in Proc. ERCIM Blockchain Workshop, Eur. Soc. Socially Embedded Technol (pp. 1–7) . EUSSET. Wu, Z., Zheng, H., Zhang, L., & Li, X. (2019). Privacy-friendly Blockchain Based Data Trading and Tracking. In Proceedings of the 5th International Conference on Big Data Computing and Communications, QingDao, China. 10.1109/BIGCOM.2019.00040 Zhao, J., Zong, T., Xiang, Y., Gao, L., & Beliakov, G. (2020). Robust BlockchainBased Cross-Platform Audio Copyright Protection System Using Content-Based Fingerprint. In Web Information Systems Engineering (pp. 201–212). Springer.

142

Blockchain-Based Multimedia Content Protection

Zheng, W., Zheng, Z., Chen, X., Dai, K., Li, P., & Chen, R. (2019). Nutbaas: A blockchain-as-a-service platform. IEEE Access : Practical Innovations, Open Solutions, 7, 134422–134433. doi:10.1109/ACCESS.2019.2941905

143

144

Chapter 6

Blockchain-Based Platform for Smart Tracking and Tracing the Pharmaceutical Drug Supply Chain

Deepak Singla
Panipat Institute of Engineering and Technology, India

Sanjeev Rana
Maharishi Markandeswar University, India

ABSTRACT

Every nation is presently addressing the threat posed by the sale of counterfeit medications. It is a growing global issue that has a significant effect on lower middle-income and lower-income countries. According to current estimates from the WHO, one in ten of the medications circulating in low- and middle-income nations is either substandard or fake. According to the National Drug Survey 2014–2016, carried out by the National Institute of Biologics, Ministry of Health & Family Welfare, counterfeit or substandard drugs make up about 3% of all pharmaceuticals in India. There is an urgent need for increased visibility and traceability within the supply chain due to the growing threat of counterfeit medications entering it and, in particular, making it into customers' hands.

DOI: 10.4018/978-1-6684-6864-7.ch006 Copyright © 2023, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


INTRODUCTION

E-health is a field of technology that has been gaining importance over time, spanning everything from the real-time exchange of individual data to remote access to medical information such as Electronic Health Records (EHR) or Electronic Medical Records (EMR) (Xie et al., 2018). With the development of IoT and connected products, e-health's capacity to give patients access to their clinical data and real-time health monitoring anywhere in the world is crucial. Significantly wider availability of healthcare, better performance in treatment and health management, and less strain on community health budgets are all benefits of improved patient-to-health-system communication (Ren et al., 2018). EHR is a consistent data record that enables integration among various healthcare providers, and this integration is thought to be its main benefit (Boore et al., 2018). EHR offers a number of advantages, including assisting with prescriptions, enhancing illness management, and reducing serious pharmaceutical errors. However, the accessibility of EHR, the confidentiality of data sent across healthcare organisations, and the exclusion of information on patient well-being are all constraints (Schumacher et al., 2017). As a result, blockchain transforms the way clinical data is kept and communicated in the context of e-health by acting as a secure, decentralised platform of numerous computers, referred to as nodes, connected over the internet. It streamlines operations, safeguards data security and correctness, and lowers maintenance costs. As blockchain technology is growing rapidly in the e-health area, Table 1 shows the publication trend in e-health research from 2017 to 2021. The scope of blockchain is increasing day by day in the healthcare domain to provide high security and trust in the medical sector. Blockchain presents a chance to ensure the credibility of the data, the seamless transfer of data between the stakeholders, and the confidentiality of the customer. For the application of blockchain technology, trust issues like adoption, legislation, and morality should be taken into account. Medical care is as necessary as modern society itself.

Table 1. Number of publications per year

Publication year    Count
2017                2
2018                8
2019                10
2020                18
2021                7



The production of health information, or patient history in general, has evolved from the initial paper medical records (PMRs) to the present electronic medical records (EMRs) as a result of the ongoing improvement of technology (Basu et al., 2017). Without a doubt, e-health systems make it simpler to save, exchange, collect, and trace EMRs (6). A blockchain-based e-healthcare system works as shown in Figure 1.

Figure 1. Blockchain-based e-healthcare system

Requirement of a Decentralized Authorization Model for the E-Healthcare System

All of the classical permission approaches covered above are centralised. Centralised methods are usually plagued by problems like single points of failure and a lack of confidence. Centralised approaches are not appropriate in collaborative settings, since trust issues would always arise. Consequently, a decentralised strategy is needed to engage in a cooperative atmosphere. Such a decentralised strategy eliminates the problem of trust and reduces risk by removing the need for a middleman. Let us take the pharmaceutical medicine supply chain as an example (Omar et al., 2019).


Every nation is presently addressing the threat posed by the sale of counterfeit medications, which is a worldwide problem. It is a growing global issue that has a significant influence on low- and middle-income countries. According to current estimates from the WHO, one in ten of the medications circulating in low- and middle-income nations is either substandard or fake. According to the National Drug Survey 2014-2016, carried out by the National Institute of Biologics, Ministry of Health & Family Welfare (Premarathne et al., 2016), counterfeit or substandard drugs make up about 3% of all pharmaceuticals sold in India. Most of the time, it is found that medications obtained straight from the production facility are reliable, whereas the chance of receiving substandard medications increases as soon as the items are transferred between the multiple phases and layers of the intricate supply chain (i.e., wholesalers, distributors, or sub-distributors). Drugs are susceptible to theft, adulteration, and replacement at every point of transit from the factory to the patient (Gondal et al., 2020). Such misconduct results in financial loss for the medication manufacturers and, more crucially, poses a serious danger to consumer safety (Radanović et al., 2020). In a blockchain-based setting, internal systems automatically place each activity into the ledger as the pharmaceutical drug (Zhuang et al., 2020) travels through the supply chain; this assures the manufacturer's confidentiality and safety. A significant amount of linked data can be presented to consumers without endangering information integrity, thanks to decentralisation, encryption techniques, and irreversible record keeping. Even the processing inputs, such as adjuvants and pharmaceutically active components, are recorded and connected to the finished pharmaceutical goods. Further, the blockchain records crucial information from IoT sensors attached to the parcels, such as location and temperature, making the journey visible to all partners and reducing the risk of document forgery (Yánez et al., 2020). The ideal case of blockchain implementation is shown in Figure 2.



Figure 2. The ideal case blockchain implementation

Elements and History of Blockchain

A blockchain is a continuously expanding collection of documents, or blocks, that are connected through encryption. A cryptographic hash of the previous block is contained in each block. The blockchain, in a nutshell, is a straightforward yet innovative method of secure and fully automated information transfer from point A to point B. By generating a block, one party to a transaction starts the procedure. Thousands, possibly millions, of machines dispersed over the internet verify this block. In order to create a unique record with a unique history, the validated block is added to a chain that is preserved throughout the internet.
1) Blockchain Technology: Blockchains are usually regarded as a new sort of network architecture that offers distributed verifiability, auditability, and consensus (a mechanism to manage how information and value flow around on the internet). A single entity cannot control a blockchain network, nor can it modify the data that is stored there without the consent of its peers. Blockchains instead act as a distributed database (Bohli et al., 2020) with no centralised control and no single point of failure, dispersed across enormous peer-to-peer networks. New data may be added to a blockchain only by agreement among the network's many nodes, through a process known as distributed consensus. Every node in the network maintains a local copy of the blockchain's data, which prevents one node from modifying its copy without the approval of the other nodes and keeps the other nodes honest. Data is stored on a time-stamped, never-ending chain on a blockchain. New information is added at the end, and once it is included, it cannot be removed.


Older data cannot be edited or erased because the blocks of data that follow it include a snapshot of it.

Table 2. Blockchain data

Database: a ledger-like collection of information and transactions that expands as new entries are made.
Which is Distributed: Multiple computers connected by a network each keep a copy of the complete database, which syncs in a matter of minutes or seconds.
Adjustably Transparent: The database's records can be made accessible to pertinent parties without changing them.
Highly Secure: Hackers can no longer just target one machine and modify its information.
And Immutable: Any data that has been recorded and accepted cannot be changed or deleted, due to mathematical algorithms.

2) Miner: The process of adding transactional entries to the public/private ledger of a blockchain is known as mining. A miner is a node or member of the blockchain network who is capable of confirming transactions based on some sort of consensus.
3) Attack: A miner or group of miners (Chen et al., 2020) trying to control more than 50% of a network's mining power, computing power, or hash rate is known as a "51 percent attack" on a blockchain. The owners of such mining power have the authority to halt the confirmation or execution of new transactions.
4) Smart contract: It is a program (Bentov et al., 2014) created to adhere to particular computer protocols in order to facilitate, verify, and enforce digital contract negotiation and execution. Smart contracts typically permit transactions that are trustworthy, irrevocable, and traceable without the involvement of outside parties.
5) Smart detractor: It is a cutting-edge programming idea designed to stop suspicious contract executions from happening.

Market Value of Blockchain

Nearly all businesses and economies are expected to undergo significant change as a result of blockchain technology. By 2030, blockchain is estimated to produce USD 3 trillion in annual business value. By 2025, 10% of global GDP is expected to be kept on blockchain, according to the World Economic Forum (WEF), which also names blockchain as one of seven technologies expected to impact a variety of facets of our lives. Even though blockchain is currently a young technology with limited acceptance, its strategic worth in the near future for streamlining procedures, lowering inefficiencies, optimising costs, and so on cannot be discounted, as shown in Figure 3.


By cutting back on middlemen and the administrative work involved in preserving records and reconciling transactions, significant savings can be made in terms of resource conservation. By recouping lost income and generating fresh income for blockchain service providers, this can change the way value flows. According to a McKinsey analysis, the potential value gained would vary by industry. From the standpoint of potential effect and applicability, the public sector is arguably best positioned to benefit.

Figure 3. Features of blockchains

USE CASES

Blockchain Features for E-Healthcare
1) Decentralization: You may store your assets, such as contracts and documents, using decentralized technology, and then access them online. In this instance, the owner has total control over his account, giving him the freedom to distribute his assets to anybody he chooses.
2) Transparency: Because each public address's holdings and transactions are visible to everyone, blockchains are transparent.
3) Immutability: The data cannot be modified (Mertz et al., 2020) once it has been recorded in the ledger. You are powerless to change it, no matter who you are. An error can only be corrected by producing a new transaction; at that point, both transactions are visible, and the first transaction, which is regarded as a mistake, remains apparent in the recorded ledger.



WORKING PRINCIPLE OF BLOCKCHAIN

To add information to the blockchain, a public address and a unique key are needed for login; the private key seals the transaction. A hacker would need to have control over more than 51% of the nodes in order to change this type of data, because it is duplicated thousands of times. Distributed ledger technology such as blockchain brings "trust" into networks by introducing a new type of network architecture.

Figure 4. Blockchain network diagram

Blockchains serve as a shared database that is dispersed across enormous peer-to-peer networks with no central authority and no single point of failure. A blockchain network cannot be owned by a single entity, nor can a single company unilaterally change the data stored on it without the consent of its peers. Only by agreement amongst the network's many nodes, through a process known as distributed consensus, can new data be added to a blockchain. The data on the blockchain is kept locally by each node in the network (Hölbl et al., 2020), which serves to keep the other nodes honest, since if one node modifies its copy, the other nodes will reject the modification.


Figure 5. Blockchain data storage

Blockchains store data on an endlessly extending, time-stamped chain. New information is added at the end, and once added, it cannot be removed. Older data cannot be changed or removed because the blocks of data that follow it contain a snapshot of it. Blockchains utilise cryptographic techniques to sign every transaction (for instance, when assets like money are transferred from one individual to another) with an exclusive digital signature that belongs to the user who initiated the transaction. The keys behind these signatures are kept in confidence, but the signatures can be independently verified. This means that anyone can confirm that money given to identity B by user A was indeed sent by A, but they cannot use A's signature for their own transactions. This cryptographic system establishes accountability and combats identity fraud: you cannot afterwards deny making a payment or updating information on a blockchain, or absolve yourself of accountability. Blockchains build trust naturally through the underlying technology of dispersed networks, unlike modern networks, which rely on trusted intermediaries for security and trust. They enable direct, transparent, incorruptible (data cannot be modified once added) exchange of digital assets between users (every transaction is recorded on the time-stamped ledger along with the individual who committed it). Blockchains can save money by streamlining procedures and minimising the inefficiencies (Bach et al., 2018) that are often introduced into systems by various layers of control, since they successfully decrease the reliance on "middlemen". By 2030, distributed ledgers and blockchains, fuelled by fraud prevention, cost reductions, and more transparency, are expected to bring USD 3.1 trillion in value to company operations, according to Gartner.
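The hash-linking described above can be made concrete with a small example. The following is a minimal, self-contained Python sketch (not taken from the chapter) of how each block stores the hash of its predecessor, so that altering an old record invalidates every later block; names such as `Block` and `make_chain` are hypothetical.

```python
import hashlib
import json
import time
from dataclasses import dataclass

@dataclass
class Block:
    index: int
    timestamp: float
    data: str
    prev_hash: str

    def hash(self) -> str:
        # Hash the block's own contents together with the previous block's hash.
        payload = json.dumps([self.index, self.timestamp, self.data, self.prev_hash])
        return hashlib.sha256(payload.encode()).hexdigest()

def make_chain(records):
    """Build a toy chain where every block points to the hash of the one before it."""
    chain, prev = [], "0" * 64  # genesis predecessor
    for i, record in enumerate(records):
        block = Block(i, time.time(), record, prev)
        chain.append(block)
        prev = block.hash()
    return chain

def is_valid(chain) -> bool:
    """A chain is valid only if every stored prev_hash matches the recomputed hash."""
    return all(chain[i].prev_hash == chain[i - 1].hash() for i in range(1, len(chain)))

chain = make_chain(["lot 42 manufactured", "lot 42 shipped", "lot 42 sold"])
print(is_valid(chain))           # True
chain[0].data = "lot 42 stolen"  # tamper with an old block
print(is_valid(chain))           # False: later blocks no longer match
```

This is why, as noted above, the blocks that follow a record effectively contain a snapshot of it: changing older data breaks the hash links that every later block depends on.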



TYPES OF BLOCKCHAIN

Blockchain architecture refers to the interconnectedness of nodes that operate on a system for transactional or authentication purposes. When a framework is open to the public, any entity node can join the network; public blockchains such as Ethereum and Bitcoin fall into this category. When the participating nodes must be identifiable to the network, the blockchain is considered permissioned, as in Hyperledger Fabric and Ripple. Blockchain database systems enable users to instantaneously carry out and verify the transfer of funds without a centralised authority. As a result, P2P networks of nodes may establish and share digital ledgers for distributed transactions. This decentralised method significantly reduces the cost of system configuration, maintenance, change, and adjudication in communication, because these procedures no longer have to be completed in a single central location. Such a system tolerates the failure of individual nodes and, although it can face scaling problems, delivers excellent performance in many situations.

1.1 Permissionless or Public Blockchains: When the nodes that belong to the network are reachable by anybody online, the ledger is referred to as a public blockchain (such as Bitcoin or Ethereum). Consequently, any active user can verify a transaction and contribute to authorization through the consensus mechanism (Hussien et al., 2019), such as proof of work (Kombe et al., 2018) or proof of stake. A blockchain's main purpose is to safely reduce centralised control during the exchange of digital assets. P2P transactions create a chain of blocks to guarantee decentralisation. Before being added to the system's irreversible database, each transaction is connected to the preceding one using a cryptographic hash (Merkle tree) as a piece of the chain. The blockchain-based ledger is thus coordinated and consistent at every access point. A comprehensive blockchain ledger will be made available to anyone who has a computer and an internet connection and who registers to become a station. The technology is considered safe due to the duplication of the synchronised public blockchain among each station in the network (Zhang et al., 2018). However, the operation of validating activities for this kind of blockchain has been slow and ineffective. The amount of electrical power required to validate each transaction is enormous, and as more nodes are added to the network, this power grows dramatically.
1.2 Permissioned or Private Blockchains: This particular class of constrained blockchains enables a facilitator to be fairly compensated. Connectivity authorization is strictly controlled in private blockchains (Benchoufi et al., 2018). Without authorization, stations in the P2P network cannot take part in checking and authenticating transactions. Instead, the broadcast transactions can only be verified and validated by authorised businesses or organisations.


1.3 Consortium Blockchain: A semi-decentralized blockchain is one in which decisions regarding how to provide users with blockchain services are made jointly by a number of entities. As a result, the permissioned method is tailored to the consumers while placing limitations on rights over the blockchain infrastructure.
1.4 Hybrid Blockchain: It is described as a fusion of private and public blockchain network infrastructure. This sort of blockchain is employed when a user must be seamlessly provided with access to both public and private data (Zhou et al., 2018). As a result, the application may require either permissioned or unrestricted access from a user on this blockchain, depending on the circumstances.

BLOCKCHAIN CONSENSUS ALGORITHMS IN E-HEALTHCARE

Blockchain relies on consensus to provide a mechanism for agreement among all blockchain nodes. Numerous consensus algorithms exist for various cryptocurrencies. The following list of chosen consensus algorithms is provided for various use cases, including the provision of e-healthcare services (a minimal proof-of-work sketch follows this list).
1) Proof of work (PoW): The concept was developed by C. Dwork and M. Naor and was reported in a 1993 journal publication. In a 1999 paper, M. Jakobsson and A. Juels used the phrase "PoW" for the first time. The "client puzzle", "computational challenge", and "CPU pricing function" are some other names for PoW. It is inappropriate for the IoT because of its high network bandwidth needs. Due to its widespread use across various platforms, PoW may be included in e-healthcare services (Zhang et al., 2018).
2) Proof of stake (PoS): The node that will mine the following block is selected by lottery or at random. It is incredibly democratic. There is no mining reward or coin production; instead, rewards consist entirely of transaction fees. A node might get a transaction fee as a result of the Nothing at Stake dilemma. We consider it a viable option for e-healthcare applications.
3) Delegated proof of stake (DPoS): It is democratically representative. Although transactions are completed more quickly, centralization expenses are higher. There is a mechanism in place for identifying and voting out rogue delegates. As a result, it could be used in scenarios involving e-healthcare.
4) Leased proof of stake (LPoS): By enabling low-balance nodes and the leasing contract and allocating the incentive to wealth holders, it resolves the centrality issue in PoS. Applying such an algorithm may promote the development of an extremely high-quality e-health service.



5) Proof of importance (PoI): It is an enhancement over PoS. It takes into account both a node's reputation and its balance, which makes the network more effective. We advise using it for online healthcare services, since patients may use doctors' reputations to help them make decisions.
6) Practical byzantine fault tolerance (PBFT): To decide whether to add the next block, all nodes take part in the voting process. Consensus of more than 2/3 of the nodes is required. Compared to PoW and PoS, this idea is more practical, superior, and suitable for private blockchains. It is not very tolerant of malicious nodes. We would prefer it for e-health service utilisation.
7) Delegated byzantine fault tolerance (DBFT): It is a level up from PBFT. Delegates are chosen from among the nodes. Thus, it appears that using DBFT in the IoT-blockchain architecture may not fully materialise e-healthcare services.
8) Proof of capacity (PoC): Compared to PoW, it is an improvement. In order to possibly mine the upcoming blocks, a node needs to store a lot of data. The Internet of Things cannot use it. Additionally, we advise against using it for services related to health.
9) Proof of activity (PoA): It is a mashup between PoW and PoS. First, the PoW is finished. Following a PoS step, a group of validators signs collectively to add the transaction to the miner's header. Due to the significant latency, it is not appropriate for the IoT and is therefore not a good option for e-healthcare.
10) Proof of burn (PoB): It alludes to sending money to an unspendable address. A miner who has burned more coins is given priority to mine. Because it depends on the presence (Angelett et al., 2017) of a financial system and coin burning, it is useful for cryptocurrency design but poor for the Internet of Things. It is unsuitable for e-health-related applications due to its random burning methodology.
11) Proof of elapsed time (PoET): According to Intel, it is an improvement over PoW in terms of energy efficiency. The winning miner is selected based on a wait time that is decided at random. Intel's Software Guard Extensions (SGX) is one of the trusted (Ali et al., 2018) execution environments for the Internet of Things. It may be suited to e-healthcare, as it is tailored specifically for the SGX-based environment.
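To make the PoW idea referenced above concrete, here is a minimal, self-contained Python sketch (not from the chapter) of hash-based proof of work: a nonce is searched until the block hash falls below a difficulty target. The function names and the difficulty value are illustrative assumptions.

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Find a nonce so that sha256(block_data + nonce) starts with `difficulty` zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    """Verification is cheap: a single hash, unlike the expensive search above."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce, digest = proof_of_work("e-health record batch #1")
print(nonce, digest)
print(verify("e-health record batch #1", nonce))  # True
```

Raising `difficulty` by one hexadecimal digit multiplies the expected search effort by roughly sixteen, which is why PoW is energy-hungry and, as noted above, a poor fit for constrained IoT devices.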



BLOCKCHAIN BASED HEALTHCARE PLATFORMS

Blockchain has been recognised as a disruptive technology with the potential to significantly impact numerous industries. It is impossible to assess every platform because there are so many of them and they are always changing. In this section, we concentrate on the ones that are most widely used and most suited for IoT sectors.
1) Ethereum: Ethereum is an open-source, publicly available distributed computing platform and operating system built on the blockchain. The system supports smart contracts. It provides a modified version of Nakamoto consensus using transaction-based state changes.
2) Bitcoin: Bitcoin is the original cryptocurrency and an open-source global money. The decentralised digital money is the first of its kind. We can send and receive Bitcoin from anywhere in the world, making it possible to buy things.
3) Ripple: Ripple is a network for remittances, a real-time gross settlement system, and a currency exchange. It is based on a decentralised open-source protocol and allows for the creation of tokens that stand in for fiat currency, cryptocurrencies, goods, or other forms of value like frequent flyer miles or mobile minutes.
4) Quorum: Quorum is a suitable fit for any application that has to handle private transactions quickly and thoroughly among a permitted group of known participants. Specific obstacles to the adoption of blockchain technology by the banking sector and others are resolved by Quorum.
5) Hyperledger Sawtooth: Hyperledger Sawtooth is a modular framework for creating, implementing, and managing distributed ledgers. Distributed ledgers offer a digital record, such as asset ownership, that is maintained without a central authority or implementation.
6) Hyperledger Fabric: Hyperledger Fabric, a blockchain framework implementation, is one of the Hyperledger projects managed by the Linux Foundation. With a goal of serving as a base for the creation of applications or solutions with a modular architecture, Hyperledger Fabric enables the plug-and-play operation of many components, including membership and consensus services. The application logic of the system is composed of smart contracts, also known as chaincode, which Hyperledger Fabric hosts using container technology. As a result of the inaugural hackathon, Digital Asset and IBM donated the first version of Hyperledger Fabric.
7) Hyperledger Iroha: Hyperledger Iroha, a blockchain platform implementation, is one of the Hyperledger projects managed by the Linux Foundation (Aktas et al., 2019). The Hyperledger Iroha C++ code includes both a BFT ordering service and Yet Another Consensus, a cutting-edge chain-based Byzantine fault-tolerant consensus algorithm.


8) Stellar: Stellar is an open-source, decentralized payment technology that enables quick cross-border exchanges between any two currencies. It uses blockchain technology to operate, just like other cryptocurrencies.
9) NEO: Neo is a community-based, nonprofit blockchain project that employs smart contracts to automate the management of digital assets, digitises assets, and builds a distributed network to implement a smart economy.
10) Medicalchain: Medicalchain provides patients with full control and access options for the data in their electronic health record (EHR). Upon consent from the patient, authorised medical practitioners and clinicians are permitted to perform "read/write" operations on the EHR (Aladwani et al., 2019). As a result, a smart telemedicine service is established together with a cutting-edge licensing system for the EHR.
Below is a discussion of a case study that considers the use of blockchain in healthcare and provides a comparative analysis of current approaches for electronic healthcare records.

PHARMACEUTICAL MEDICINE SUPPLY CHAIN: BLOCKCHAIN-BASED ENABLING TRUST TO "SELF-REGULATE"

Every nation is presently addressing the threat posed by the sale of counterfeit medications, which is a worldwide issue. It is a growing global issue that has a significant effect on lower middle-income and lower-income countries. According to current estimates from the WHO, one in ten medicines in low- and middle-income countries is either substandard or fake. According to the National Drug Survey 2014-2016 done by the National Institute of Biologics, Ministry of Health & Family Welfare, counterfeit or substandard drugs make up about 3% of all medicines in India. Due to the increased danger of fake pharmaceuticals entering the supply chain, and especially getting into the hands of customers, there is an urgent need for enhanced visibility and traceability into the origin of medicines and how they have been handled along their journey in the supply chain. Research and interviews carried out during the initiative's early stages revealed (Mamoshina et al., 2018) that, for the most part, drugs coming directly from the manufacturing facility are reliable, while the risk of fake drugs entering the system arises whenever the products are transferred between the various stages and layers of the intricate supply chain (i.e., wholesalers, distributors, or sub-distributors). Drugs are susceptible to theft, adulteration, and replacement at every point of transit from the factory to the patient.


Such misconduct results in financial loss for the drug manufacturers and, more significantly, poses a serious risk to patient safety.

TRADITIONALLY APPLIED MEASURES FOR DRUG TRACEABILITY
1) Traceability is the ability to collect complete information about the object under investigation, at any point in its life cycle, through the use of recorded identifications. A traceable resource unit (TRU), also known as the subject of the inquiry, is any traceable item in the supply chain. Tracking the transaction history and the present location of the TRU are the two objectives of traceability. In this case, a traceability system must have access to information about the medicine, which acts as the TRU in the supply chain, by documenting its identification and differentiating it from other TRUs using a variety of identifying techniques. The system also needs a method for locating TRUs and a method for determining the connections between TRUs.
2) Barcodes, RFID tags, wireless sensor networks (WSN), and electronic product codes (EPC) have all been used in traditional supply chain management solutions to identify, gather, and communicate information about products. As a result, it is now simpler to follow items as they pass through various phases. GS1-standard barcodes are used in this instance by Smart-Track to provide the lot number, production and expiration dates, as well as a distinctive serialised product identifier (a small parsing sketch follows this list). A running log of ownership changes is kept using the data included in the GS1 barcode, which is gathered through a number of supply chain procedures. An end user (patient) may confirm authenticity through a central data repository maintained as the Global Data Synchronisation Network (GDSN) by using a smartphone app, as each stakeholder confirms having the product in their hands. To verify the product's identification and specifications, pharmacy and hospital departments can scan the barcode in the warehouse's downstream supply chain. Similarly, every medicine is given its own Data-Matrix, which is made up of the manufacturer ID, product ID, unique ID of the package, authentication code, and optional meta-data. Using the associated Data-Matrix, the patient may confirm where the medication was produced.
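As a rough illustration of how such identifiers carry lot and expiry data, the following Python sketch (not from the chapter) decodes a bracketed GS1 element string using a few common Application Identifiers; the sample barcode value and the `parse_gs1` helper are hypothetical, and real GS1 parsing handles many more identifiers and encodings.

```python
import re

# A few common GS1 Application Identifiers (AIs); real GS1 parsing covers many more.
AI_NAMES = {
    "01": "GTIN (product identifier)",
    "17": "expiration date (YYMMDD)",
    "10": "batch/lot number",
    "21": "serial number",
}

def parse_gs1(element_string: str) -> dict:
    """Parse a human-readable GS1 element string such as '(01)09506000134352(17)261231(10)A1B2(21)12345'."""
    fields = {}
    for ai, value in re.findall(r"\((\d{2})\)([^(]+)", element_string):
        fields[AI_NAMES.get(ai, f"AI {ai}")] = value
    return fields

label = "(01)09506000134352(17)261231(10)A1B2(21)12345"
for name, value in parse_gs1(label).items():
    print(f"{name}: {value}")
```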



SOLUTIONS TO DRUG TRACEABILITY BASED ON BLOCKCHAIN

Since participants in the pharmaceutical supply chain are not transparent and traditional techniques for maintaining traceability are often centralised, the central authority is able to change information without informing the other parties. However, a blockchain-based system provides transaction records that are verified, unchangeable, transparent, and data secure. A variety of commercial situations that require transactional processes can employ blockchain, a decentralised, immutable shared ledger. Transparency and traceability are frequently used interchangeably, although they really pertain to quite different concepts. Transparency is frequently used to refer to specific supply chain facts. To map the whole supply chain, it is important to have information on the product's components, the locations of the facilities, the names of the suppliers, etc. Traceability, on the other hand, relates to specific information and calls for selecting a specific component to trace, selecting a set of agreed-upon guidelines for partner communication, putting strategies for producing and gathering precise data into action, choosing a platform to store traceability data, and selecting a procedure for sharing data on the platform. The following describes a high-level architecture of the involved parties and how they interact with the proposed drug traceability system's smart contract. The stakeholders' intended means of accessing the smart contract, the decentralised storage system, and the on-chain resources is made feasible using computer programmes with a front-end layer called a DApp (Decentralized Application). An application programming interface (API) such as Infura, Web3, or JSON RPC connects the DApp to the smart contract, on-chain resources, and decentralised storage system. To start pre-authorized function calls and get access to data files, stakeholders will communicate with the smart contract and decentralised storage systems. They will also interface with the on-chain resources to obtain information such as logs, IPFS hashes, and transactions. Here are some details regarding the components of the system; Figure 6 shows an outline of the planned blockchain-based solution for the pharmaceutical supply chain.
1) Stakeholders include patients, manufacturers, distributors, pharmacies, and regulatory organisations like the FDA. Based on their position in the supply chain (Genestier et al., 2017), these stakeholders participate in the smart contract and are given specific tasks to do. Additionally, they have access to on-chain resources like history and log data to follow supply chain transactions, and they have permission to access data kept on the IPFS, including photographs of medicine lots and informational pamphlets.
2) System for Decentralized Storage: IPFS provides low-cost off-chain storage for supply chain transaction data in order to guarantee the reliability, accessibility, and integrity of the data saved.


In order to safeguard the data's integrity, each uploaded file on the server is assigned a unique hash. Following that, these hashes are stored on the blockchain and are retrievable via a smart contract. Any modifications to submitted files are reflected in the hash associated with them.
3) The supply chain deployment is managed via a smart contract on Ethereum. The main and most important function of the smart contract is to manage the hashes from the decentralised storage server and maintain the transaction history, enabling users to access supply chain data. The smart contract also establishes the responsibilities of the various supply chain actors and, via the use of modifiers, provides authorised users access to these roles. A modifier is essentially a way to embellish a function by giving it new features or imposing constraints. Additionally, the smart contract manages transactions like the sale of medicine lots or cartons (a sketch of this hash-anchoring flow appears after this list).
4) The logs and events produced by the track-and-trace-enabling smart contract are saved utilising on-chain resources. A registration and identity system is also used as an on-chain resource to connect each participant's Ethereum address to a human-readable text that is kept in a decentralised fashion. Real-time tracking is unnecessary, since the DApp user simply has to utilise the proposed solution to verify that the medicine they are considering purchasing is real and comes from a trustworthy source (Xia et al., 2017). The components of the system are created to cooperate in order to trace the history of the medicine under consideration and confirm its legitimacy. The whereabouts of a drug lot may nevertheless be tracked in real time using a number of different methods. As an example, IoT-enabled smart containers are equipped with sensors to track and monitor TRUs as they move from their originating point to their destination. The Internet of Things sensor has a GPS receiver for locating the TRU, a temperature sensor for keeping tabs on the temperature, and a pressure sensor for monitoring pressure changes that signify when a container is being opened or shut.
5) Manufacturing: Usually, a company will send an FDA approval request before beginning the manufacture of a medicine lot. The producer starts the manufacturing process after the FDA grants the request, and an event is announced to all participants. To enable later access by authorized (Aileni et al., 2020) participants, the manufacturer will upload photos of the medication Lot to the IPFS, and the IPFS will then provide a hash to the smart contract. The manufacturing process will be completed when the distributor receives the medicine lot for packing.



Figure 6. An outline of the planned blockchain-based solution for the pharmaceutical supply chain

6) Distribution: The process of distribution will now start. The distributor will package the pharmaceutical batch, post a picture of it to IPFS, and then submit a hash to the smart contract. Once the pharmacy packages containing the pharmaceutical Lot have been delivered, the distribution procedure will be complete.
7) Sale/Consumption: The contact between the pharmacy and the patients is discussed in the final phase of the sequence diagram. The pharmacy will start the sale of this medicine Lot box, and all supply chain players will be informed of it. After that, a hash will be sent from the IPFS to the smart contract along with a picture of the drug package that was sold. The patient will then purchase the medicine Lot box, bringing an end to the selling of drug Lots. This procedure will guarantee that all transactions are recorded as a series of events and can later be accessed by all supply chain participants to verify the legitimacy and authenticity of the goods in the supply chain.
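To illustrate the hash-anchoring flow referred to in item 3, here is a minimal, self-contained Python sketch (not from the chapter): the off-chain file's digest is recorded in a simulated on-chain registry and later re-checked. The registry dictionary stands in for the Ethereum contract's storage, and a plain SHA-256 digest stands in for a real IPFS content identifier (which actually uses a multihash format); all names are hypothetical.

```python
import hashlib
from pathlib import Path

# Simulated on-chain storage: lot ID -> digest of the off-chain file (e.g. a lot photo).
on_chain_registry: dict[str, str] = {}

def file_digest(path: Path) -> str:
    """Stand-in for an IPFS content identifier: a plain SHA-256 of the file bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def anchor_lot_file(lot_id: str, path: Path) -> None:
    """Record the file's digest on 'chain' so later readers can verify the off-chain copy."""
    on_chain_registry[lot_id] = file_digest(path)

def verify_lot_file(lot_id: str, path: Path) -> bool:
    """Recompute the digest and compare it with what was anchored at manufacturing time."""
    return on_chain_registry.get(lot_id) == file_digest(path)

photo = Path("lot_42.jpg")
photo.write_bytes(b"photo of medicine lot 42")   # placeholder content
anchor_lot_file("LOT-42", photo)
print(verify_lot_file("LOT-42", photo))           # True
photo.write_bytes(b"tampered photo")              # off-chain file altered
print(verify_lot_file("LOT-42", photo))           # False: digest no longer matches
```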

ANALYSIS OF THE PROPOSED SOLUTION IN RELATION TO CURRENT SOLUTIONS

In this section, we contrast the suggested approach for a traceable pharmaceutical supply chain with pertinent alternatives already available. The decentralised nature of the suggested approach is crucial because it prohibits any one party from manipulating or changing the data. Resilience is another key component of our system; because it is decentralised, it has no single point of failure. Due to properties like data immutability, which means that once information is entered into the ledger it cannot be deleted or changed, blockchain offers an excellent solution for data integrity and security. Data is kept secure since it is stored in a decentralised manner, which means that no single entity has the ability to manipulate it.


Any supply chain should prioritise transparency of transactions. All participants in our suggested solution can access and read the verified records of all transactions in a secure setting. Last but not least, while all the systems compared here have the track and trace feature in common, other properties like decentralised storage, integrity, and transparency are essential to building a reliable track and trace system. A comparison of our suggested approach with alternative blockchain-based systems is shown in Table 3. Our solution makes use of the Ethereum blockchain, whilst the other solutions make use of the Bitcoin blockchain and Hyperledger-Fabric. Furthermore, while both our solution and that of Huange (23) operate in public mode, Hyperledger-Fabric has a built-in private permissioned mode. The Ethereum currency, Ether, serves as the payment method in our solution, whereas the only form of currency used in Huange (23) is BTC. Additionally, although data is always saved on-chain in the other solutions, ours also features off-chain data storage. Finally, both our solution and that of Faisal (28) have programmable modules—the smart contract and the Docker container, respectively—whereas Huange (23) does not offer a programmable module.

Table 3. Analysis of alternative blockchain-based technologies and our suggested solution

                          Our Solution      Huange (23)    Faisal (28)
Blockchain Platform       Ethereum          Bitcoin        Hyperledger-Fabric
Type of operation         Public            Public         Private
Currency                  Ether             BTC            None
Off Chain Data Storage    Yes               No             No
Programmable Module       Smart Contract    None           Docker Container

PROPOSED TRACEABILITY SYSTEM DETAILS

The Ethereum blockchain platform is utilised in the creation of the proposed solution. Because Ethereum is a permissionless public blockchain, anyone can access it. Solidity is used to write the smart contract, which is then tested and compiled using the Remix IDE. The manufacturer will first execute the smart contract, which will specify and disclose the specifics of the manufactured medication Lot and emit an event that will be sent to everyone involved in the supply chain. Since the events are permanently recorded on the ledger, any new participants who join the network will have access to them and will be able to follow and trace the history of any created medicine Lot.


In order for participating entities to obtain the image of the medication Lot and physically inspect it, the manufacturer also has the option of posting it to the IPFS. The newly made Lot must be packaged before it can be sold. The manufacturer will then notify other participants via an event that the recently created Lot is now available for purchase. When a deal is done, an event will be emitted and the participants will be informed who the new owner of the Lot is. Participating entities must use a function created especially for selling Lots in order to purchase the recently made Lot. For the purpose of simplicity, FDA approval is not included in the smart contract, because it would make the smart contract for the drug Lot difficult for the manufacturer to implement. We describe the algorithms employed in our suggested solution in order to further illustrate the many functionalities of the system. It should be emphasised that when referring to a buyer or a seller, only entities with authorisation to perform the functions are meant. The primary functions and the accompanying algorithms are listed below.

1) Making a Lot: Algorithm 1 describes how to make a Lot. The functions are shown together with the inputs to the smart contract that they require. The function actually runs only when the caller's address matches the owner ID's address. The caller will be able to modify the fields in Algorithm 1 if access is granted to them. After all fields have been updated, the two events will update the status as stated in Algorithm 1.
2) Granting Lot Sale: Algorithm 2 explains how the medicine Lot is allowed to be put up for sale. This algorithm can only be activated if the caller is the owner ID holder, and it is responsible for delivering an event alerting all participants that the Lot is presently up for sale.
3) Buying Lot: The exchanges between the buyer and the seller of the medication Lot are described in Algorithm 3. In order to prevent the Lot owner from purchasing his own Lot, the buyer (caller) of the function must not have the same address as the seller, and the transferred amount must be exactly equal to the Lot price. The selling proceeds will be given to the seller once both conditions have been met. The owner ID will also be updated. The sale of the Lot will then be announced and the information on the new owner updated through an event. It is crucial to keep in mind that the smart contract can only be used by trusted parties. When a Lot is declared sold, the buyer can be confident that the seller can be trusted and that the Lot will be delivered.



Algorithm 1: Smart Contract Lot Creation

Input: IPFShash, Caller, OwnerID, NumBoxes, boxPrice, LotName, LotPrice
Output: An event stating that the Lot was manufactured; an event stating that the Lot picture has been uploaded
Data:
  LotName: the name of the Lot
  LotPrice: the stipulated price of the Lot
  NumBoxes: the total number of boxes in the Lot
  boxPrice: the price of a single box
  IPFShash: the IPFS hash of the Lot image
  OwnerID: the Ethereum address of the Lot's owner

initialization;
if Caller == OwnerID then
    update LotName
    update LotPrice
    update NumBoxes
    update boxPrice
    add IPFShash
    emit an event announcing the manufacture of the Lot
    emit an event declaring that the Lot image has been uploaded to the IPFS server
else
    revert the contract state, then display an error

Algorithm 2: Granting Lot Sale

Output: An event designating the Lot as being for sale

initialization;
if Caller == ownerID then
    emit an event notifying that the Lot is available for sale
else
    revert the contract state, then display an error



Algorithm 3: Buying Lot

Input: ownerID, Purchaser, LotPrice, Vendor, AmountTransferred
Output: An event designating the sale of the Lot
Data:
  ownerID: the current Lot owner's Ethereum address
  Purchaser: the purchaser's Ethereum address
  Vendor: the vendor's Ethereum address
  AmountTransferred: the amount that was sent to the function
  LotPrice: the stipulated price of the Lot

if Purchaser != Vendor and AmountTransferred == LotPrice then
    transfer the Lot's purchase price to the Vendor
    update ownerID by replacing the seller's Ethereum address with the buyer's address
    emit a notification that the Lot has been sold
else
    revert the contract state and display an error
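Read together, Algorithms 1–3 describe a simple create / offer / purchase lifecycle. The chapter's actual implementation is a Solidity smart contract; purely as an illustration of the logic, here is a minimal, self-contained Python sketch in which names such as `DrugLot` and `buy_lot` are hypothetical and the payment transfer is only simulated.

```python
from dataclasses import dataclass, field

@dataclass
class DrugLot:
    name: str
    price: int            # price in integer units
    owner: str            # address of the current owner
    ipfs_hash: str        # digest of the lot photo kept off-chain
    for_sale: bool = False
    events: list = field(default_factory=list)   # stand-in for emitted contract events

    def grant_sale(self, caller: str) -> None:
        """Algorithm 2: only the current owner may put the Lot up for sale."""
        if caller != self.owner:
            raise PermissionError("revert: caller is not the owner")
        self.for_sale = True
        self.events.append(f"Lot '{self.name}' is available for sale")

    def buy_lot(self, purchaser: str, amount: int) -> None:
        """Algorithm 3: buyer must differ from seller and pay the exact Lot price."""
        if purchaser == self.owner or amount != self.price or not self.for_sale:
            raise ValueError("revert: invalid purchase")
        # The seller would receive `amount` here; ownership then changes hands.
        previous_owner, self.owner, self.for_sale = self.owner, purchaser, False
        self.events.append(f"Lot '{self.name}' sold by {previous_owner} to {purchaser}")

# Algorithm 1: the manufacturer creates the Lot record and anchors the photo hash.
lot = DrugLot("LOT-42", price=1000, owner="0xManufacturer", ipfs_hash="Qm...")
lot.grant_sale("0xManufacturer")
lot.buy_lot("0xDistributor", amount=1000)
print(lot.owner, lot.events)
```

In the Solidity version described in the chapter, the same checks would be expressed with modifiers and require statements, and the events would be emitted on-chain for all supply chain participants to read.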

BLOCKCHAIN-BASED HEALTHCARE SUPPLY CHAIN SECURITY ANALYSIS

This section briefly covers the security analysis of the system and of the proposed blockchain-based solution with respect to the integrity, accountability, authorisation, availability, and non-repudiation of the healthcare supply chain. Additionally, we demonstrate how our strategy resists well-known threats such as distributed denial of service (DDoS) and man-in-the-middle (MITM) attacks.
a) Integrity: The main goal of the suggested blockchain system is to keep track of all transactions that take place throughout the healthcare supply chain, allowing the history of the Lots, ownership changes, and the boxes that go with them to be traced. The recommended approach achieves this by maintaining a comprehensive record of all transactions in the immutable blockchain ledger. The method's integrity is further enhanced by using IPFS to store pictures of the produced Lots. This makes it possible to track and trace every transaction that occurs along the healthcare supply chain.
b) Accountability: As was demonstrated, each time a function is called, the caller's Ethereum address is logged in the blockchain, making it possible to always determine who called a function. Therefore, everyone involved is accountable for their actions. The manufacturer will be responsible for every drug Lot it creates in the healthcare supply chain, and pharmacies will be held responsible for any prescriptions they issue, because the buyBox function makes it clear where each patient is getting their medication.


c) Authorization: Only authorised participants may call the modifier-protected functions that perform the crucial tasks in the smart contract. This guarantees defence against unauthorised access and stops any undesirable entities from utilising the implemented functions. This is crucial for the healthcare supply chain, because only a verified manufacturer and verified pharmacies should be able to manufacture the drug Lot and write prescriptions for it.
d) Availability: Blockchains are distributed and decentralised by design. Everyone will have access to all logs and transactions once the smart contract is published on the blockchain. The transaction data is maintained at each participating node, as opposed to centralised systems, hence the loss of a node does not result in the loss of transaction data. For the implementation of the healthcare supply chain to be successful, the blockchain network must be active at all times; any disruption might cause delays, which can be quite costly for the healthcare sector.
e) Non-Repudiation: Transactions are cryptographically signed by the initiators' private keys, and PKI's cryptographic features ensure that private keys cannot be derived from public keys. Therefore, the owner of a private key may be identified by the transactions it signed. Similar to accountability, participants in the blockchain-based healthcare supply chain are unable to deny their actions, since their private keys, which are associated with their real identities, have already been used to sign them (a small signing sketch follows this list).
f) MITM Attacks: The blockchain will not recognise any effort by an outsider to change the original data or information unless it has been signed by the initiator's private key. This is because the private key of the transaction initiator is required to validate every transaction on the blockchain. MITM attacks are therefore not feasible in the blockchain system. This feature is essential for healthcare supply chains, since it guarantees that only accredited organisations can conduct business there and forbids outsiders from producing fake pharmaceuticals while acting as a recognised producer.
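The non-repudiation point referenced in item e) can be illustrated with a short sketch. Ethereum itself signs transactions with ECDSA over the secp256k1 curve; purely as an illustration, and assuming the third-party `cryptography` package is available, the Python snippet below uses Ed25519 keys to show that a signed record verifies against the signer's public key and fails once the record is tampered with.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The manufacturer's key pair; the private key never leaves its holder.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

record = b"LOT-42 sold to pharmacy 0xPharm for 1000"
signature = private_key.sign(record)

def verifies(data: bytes) -> bool:
    """Anyone holding the public key can check what the key holder signed."""
    try:
        public_key.verify(signature, data)
        return True
    except InvalidSignature:
        return False

print(verifies(record))                                   # True: the signer cannot deny it
print(verifies(b"LOT-42 sold to pharmacy 0xEvil for 1"))  # False: tampering is detected
```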


LIMITATIONS OF BLOCKCHAIN IN HEALTHCARE SUPPLY CHAINS

Although the suggested system makes use of some of the most salient benefits of blockchain technology, there are a number of potential constraints that need to be emphasised in order to better understand how they might affect the system as a whole. Below, we address some potential restrictions of blockchain in the healthcare supply chain.
a) Immutability: Blockchains are immutable, meaning that any data added to the ledger cannot be changed or deleted. Because they are immutable, there is no way to repair errors on a blockchain, which creates a significant difficulty even though immutability is advantageous for data integrity. For instance, when entering data into the ledger, the operators doing the physical operations in the medication supply chain are still susceptible to making mistakes. As a result, even if these problems are discovered, they cannot be fixed. This might lead to unfavourable outcomes in a healthcare supply chain. For instance, if a manufacturer inserts incorrect information about a drug lot, it may cause problems when the lot gets to the pharmacy, and a pharmacist may improperly prescribe a drug to a patient.
b) Data Privacy: Although immutability is one of the benefits of blockchain technology, it can be in contradiction with newly passed legislation that deals with issues related to information storage. For instance, the General Data Protection Regulation (GDPR) in Europe mandates that businesses keep a close eye on where and how data is stored, because individuals whose data is collected have the right to modify or delete it at any time, and if actions are not taken in response to their requests, the business runs the risk of receiving hefty fines. Patients who object to having their personal information kept on blockchains across the healthcare supply chain may be able to file a lawsuit against the hospital.
c) Scalability: Blockchain improves the system's security and verifiability but limits its ability to scale, since each node on the network must handle each transaction separately. However, there is ongoing research on this issue. For instance, two scaling alternatives for Ethereum are sharding and plasma, which do away with the need for each Ethereum node to handle every network transaction. If manufacturing is carried out in small to moderate volumes, this might not be an issue in the healthcare supply chain. However, it will be difficult and time-consuming to produce medications on a large scale.
d) Interoperability: When separate blockchains are unable to communicate with one another, interoperability issues arise, since other blockchain networks function differently than Ethereum. This issue could be resolved if healthcare organisations adopted a single blockchain-based system.


Making blockchain-based solutions in hospitals interoperable would be very difficult if they chose to implement several of them across various platforms.
e) Efficiency: The consensus technique used to evaluate and confirm a transaction, as well as the coding of the smart contract, are both essential parts of the blockchain solution. The former determines the degree of energy consumption, and the latter determines the cost of the implementation and execution process. The healthcare supply chain comprises a large number of transactions, so it is crucial that the smart contract is built correctly to ensure speedy and effective execution.

CONCLUSION AND FUTURE SCOPE

In this work, problems with conventional permission models for use in collaborative settings in e-healthcare systems are highlighted. The necessity of a decentralised strategy for a community setting was then investigated. The blockchain can address a number of complex problems in e-healthcare thanks to its immutable processes, cryptographic methods, and decentralisation mechanism. Peer-to-peer transactions using the blockchain system are significantly less expensive. Protecting the privacy of all internet-enabled IoT devices is the biggest challenge facing the medical industry. The adoption of new technology by the health industry to build technological infrastructure is a key factor in determining blockchain's potential in the field of healthcare. Utilising this innovation provides the ability to link disparate systems and generate insights while improving the value of care. Long-term efficiency gains and enhanced outcomes for patients are possible with a national blockchain EMR network. The improvement of insurance claims and other administrative processes inside pharmaceutical systems, as well as the availability of survey data for scientists and clinicians, are other facets of medical care that distributed ledger technology may assist with. A variety of industrial fields are implementing blockchain to expedite procedures, boost security, and improve efficiency. Because of this technology's proven value, businesses in the financial, energy, supply chain, healthcare, and manufacturing sectors are implementing it. As a result, many new features have been created to increase blockchain technology's usefulness in various fields. This technology is only now being used by researchers to tackle various e-healthcare concerns in a group setting. With the use of this technology, numerous issues related to collaborative environments can be solved. Therefore, our main focus will be on using blockchain-based business models to solve challenges that cannot be handled via conventional methods.

REFERENCES Aileni, R. M., & Suciu, G. (2020). IoMT: A blockchain perspective. Decentralised Internet of Things: A Blockchain Perspective, (pp. 199-215). Semantic Scholar. Aktas, F., Ceken, C., & Erdemli, Y. E. (2018). IoT-based healthcare frame-work for bio medical applications. Journal of Medical and Biological Engineering, 38(6), 966–979. doi:10.100740846-017-0349-7 Al Omar, A., Bhuiyan, M. Z. A., Basu, A., Kiyomoto, S., & Rahman, M. S. (2019). Privacy-friendly platform for healthcare data in cloud based on blockchain environment. Future Generation Computer Systems, 95, 511–521. doi:10.1016/j. future.2018.12.044 Aladwani, T. (2019). Scheduling IoT healthcare tasks in fog computing based on their importance. Procedia Computer Science, 163, 560–569. doi:10.1016/j. procs.2019.12.138 Ali, F., El-Sappagh, S., Islam, S. R., Ali, A., Attique, M., Imran, M., & Kwak, K. S. (2021). An intelligent healthcare monitoring framework using wearable sensors and social networking data. Future Generation Computer Systems, 114, 23–43. doi:10.1016/j.future.2020.07.047 Ali, F., Islam, S. R., Kwak, D., Khan, P., Ullah, N., Yoo, S. J., & Kwak, K. S. (2018). Type-2 fuzzy ontology–aided recommendation systems for IoT–based healthcare. Computer Communications, 119, 138–155. doi:10.1016/j.comcom.2017.10.005 Angeletti, F., Chatzigiannakis, I., & Vitaletti, A. (2017, September). The role of blockchain and IoT in recruiting participants for digital clinical trials. In 2017 25th international conference on software, telecommunications and computer networks (SoftCOM) (pp. 1-5). IEEE. 10.23919/SOFTCOM.2017.8115590 Bach, L. M., Mihaljevic, B., & Zagar, M. (2018). Comparativeanalysisofblockchain consensus algorithms. In Proc. 41st Int. Conv. Inf. Commun.Technol. (MIPRO). Electron. Microelectron, (pp. 1545–1550). Semantic Scholar. Benchoufi, M., & Ravaud, P. (2017). Blockchain technology for improving clinical research quality. Trials, 18(1), 1–5. doi:10.118613063-017-2035-z PMID:28724395 Bentov, I., Lee, C., & Mizrahi, A. (2014). Proof of activity: Extending Bitcoin’s proof of work via proof of stake. ACMSIGMETRICS Perform. Eval. Rev., 42(3), 34–37. doi:10.1145/2695533.2695545

Chen, R., Li, Y., Yu, Y., Li, H., Chen, X., & Susilo, W. (2020). Blockchain-based dynamic provable data possession for smart cities. IEEE Internet of Things Journal, 7(5), 4143–4154. doi:10.1109/JIOT.2019.2963789 Dubovitskaya, A., Xu, Z., Ryu, S., Schumacher, M., & Wang, F. (2017). Secure and trustable electronic medical records sharing using blockchain. Annual Symposium Proceedings - AMIA Symposium, (pp. 650). AMIA. PMID:29854130 Fan, K., Wang, S., Ren, Y., Li, H., & Yang, Y. (2018). Medblock: Efficient and secure medical data sharing via blockchain. Journal of Medical Systems, 42(8), 1–11. doi:10.100710916-018-0993-7 PMID:29931655 Genestier, P., Zouarhi, S., Limeux, P., Excoffier, D., Prola, A., Sandon, S., & Temerson, J. M. (2017). Blockchain for consent management in the ehealth environment: A nugget for privacy and security challenges. Journal of the International Society for Telemedicine and eHealth, 5, GKR-e24. Grossman, R., Qin, X., & Lifka, D. (1993, April). A proof-of-concept implementation interfacing an object manager with a hierarchical storage system. In (1993) Proceedings Twelfth IEEE Symposium on Mass Storage systems (pp. 209-213). IEEE. 10.1109/MASS.1993.289758 Hölbl, M., Kompara, M., Kamišalić, A., & Nemec Zlatolas, L. (2018). A systematic review of the use of blockchain in healthcare. Symmetry, 10(10), 470. doi:10.3390ym10100470 Hussien, H. M., Yasin, S. M., Udzir, S. N. I., Zaidan, A. A., & Zaidan, B. B. (2019). A systematic review for enabling of develop a blockchain technology in healthcare application: Taxonomy, substantially analysis, motivations, challenges, recommendations and future direction. Journal of Medical Systems, 43(10), 1–35. doi:10.100710916-019-1445-8 PMID:31522262 Kamau, G., Boore, C., Maina, E., & Njenga, S. (2018, May). Blockchain technology: Is this the solution to emr interoperability and security issues in developing countries?. In 2018 IST-Africa Week Conference (IST-Africa) (pp. 1). IEEE. Kim, J. Y. (2018). A comparative study of block chain: Bitcoin· Namecoin· MediBloc. Journal of Science and Technology Studies, 18(3), 217–255. Kombe, C., Dida, M., & Sam, A. (2018). A review on healthcare information systems and consensus protocols in blockchain technology.

Li, W., Andreina, S., Bohli, J. M., & Karame, G. (2017). Securing proof-of-stake blockchain protocols. In Data Privacy Management, Cryptocurrencies and Blockchain Technology: ESORICS 2017 International Workshops, (pp. 297-315). Springer International Publishing. 10.1007/978-3-319-67816-0_17 Mamoshina, P., Ojomoko, L., Yanovich, Y., Ostrovski, A., Botezatu, A., Prikhodko, P., Izumchenko, E., Aliper, A., Romantsov, K., Zhebrak, A., Ogu, I. O., & Zhavoronkov, A. (2018). Converging blockchain and next-generation artificial intelligence technologies to decentralize and accelerate biomedical research and healthcare. Oncotarget, 9(5), 5665–5690. doi:10.18632/oncotarget.22345 PMID:29464026 Mertz, L. (2018). (Block) chain reaction: A blockchain revolution sweeps into health care, offering the possibility for a much-needed data solution. IEEE Pulse, 9(3), 4–7. doi:10.1109/MPUL.2018.2814879 PMID:29757744 Omar, A. A., Rahman, M. S., Basu, A., & Kiyomoto, S. (2017) Medibchain: a blockchain based privacy pre-serving platform for healthcare data. In: International conference on security, privacy and anonymity in computation, communication, and storage. Springer. Premarathne, U., Abuadbba, A., Alabdulatif, A., Khalil, I., Tari, Z., Zomaya, A., & Buyya, R. (2016). Hybrid cryptographic access control for cloud-based EHR systems. IEEE Cloud Computing, 3(4), 58–64. doi:10.1109/MCC.2016.76 Radanović, I., & Likić, R. (2018). Opportunities for use of blockchain technology in medicine. Applied Health Economics and Health Policy, 16(5), 583–590. doi:10.100740258-018-0412-8 PMID:30022440 Rana, S. K., Kim, H. C., Pani, S. K., Rana, S. K., Joo, M. I., Rana, A. K., & Aich, S. (2021). Blockchain-based model to improve the performance of the next-generation digital supply chain. Sustainability (Basel), 13(18), 10008. doi:10.3390u131810008 Rana, S. K., & Rana, S. K. (2020). Blockchain based business model for digital assets management in trust less collaborative environment. International Journal of Computing and Digital Systems, 9, 1–11. Rana, S. K., & Rana, S. K. (2021). Intelligent Amalgamation of Blockchain Technology with Industry 4.0 to Improve Security. In Internet of Things (pp. 165175). CRC Press. Rana, S. K., Rana, S. K., Nisar, K., Ag Ibrahim, A. A., Rana, A. K., Goyal, N., & Chawla, P. (2022). Blockchain technology and Artificial Intelligence based decentralized access control model to enable secure interoperability for healthcare. Sustainability (Basel), 14(15), 9471. doi:10.3390u14159471 171

Blockchain-Based Platform

Roehrs, A., Da Costa, C. A., & da Rosa Righi, R. (2017). OmniPHR: A distributed architecture model to integrate personal health records. Journal of Biomedical Informatics, 71, 70–81. doi:10.1016/j.jbi.2017.05.012 PMID:28545835 Siyal, A. A., Junejo, A. Z., Zawish, M., Ahmed, K., Khalil, A., & Soursou, G. (2019). Applications of blockchain technology in medicine and healthcare: Challenges and future perspectives. Cryptography, 3(1), 3. doi:10.3390/cryptography3010003 Uddin, M. A., Stranieri, A., Gondal, I., & Balasubramanian, V. (2020). Blockchain leveraged decentralized IoT eHealth framework. Internet of Things, 9, 100159. doi:10.1016/j.iot.2020.100159 Xia, Q. I., Sifah, E. B., Asamoah, K. O., Gao, J., Du, X., & Guizani, M. (2017). MeDShare: Trust-less medical data sharing among cloud service providers via blockchain. IEEE Access : Practical Innovations, Open Solutions, 5, 14757–14767. doi:10.1109/ACCESS.2017.2730843 Yánez, W., Mahmud, R., Bahsoon, R., Zhang, Y., & Buyya, R. (2020). Data allocation mechanism for Internet-of-Things systems with blockchain. IEEE Internet of Things Journal, 7(4), 3509–3522. doi:10.1109/JIOT.2020.2972776 Zhang, P., Schmidt, D. C., White, J., & Lenz, G. (2018). “Blockchain technology use cases in healthcare. In Advances in Computers (Vol. 111). Elsevier. Zhang, P., White, J., & Schmidt, D. C. (2017). Design of block chain-Based apps using familiar software patterns to address interoperability challenges in healthcare. In Proc. 24th Pattern Lang. Program. Conf., Ottawa, ON, Canada, . Zhang, X., & Poslad, S. (2018, May). Blockchain support for flexible queries with granular access control to electronic medical records (EMR). In 2018 IEEE International conference on communications (ICC) (pp. 1-6). IEEE. Zheng, Z., Xie, S., Dai, H. N., Chen, X., & Wang, H. (2018). Blockchain challenges and opportunities: A survey. International Journal of Web and Grid Services, 14(4), 352–375. doi:10.1504/IJWGS.2018.095647 Zhou, L., Wang, L., & Sun, Y. (2018). MIStore: A blockchain-based medical insurance storage system. Journal of Medical Systems, 42(8), 149. doi:10.100710916-0180996-4 PMID:29968202 Zhuang, Y., Sheets, L. R., Chen, Y. W., Shae, Z. Y., Tsai, J. J., & Shyu, C. R. (2020). A patient-centric health information exchange framework using blockchain technology. IEEE Journal of Biomedical and Health Informatics, 24(8), 2169–2176. doi:10.1109/JBHI.2020.2993072 PMID:32396110 172


Chapter 7

A Methodological Study of Fake Image Creation and Detection Techniques in Multimedia Forensics

Renu Popli
Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India

Isha Kansal
Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India

Rajeev Kumar
https://orcid.org/0000-0001-7189-3836
Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India

Ruby Chauhan
Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India

ABSTRACT

Nowadays, there is huge concern about the fabrication of real-world images and videos using various computer-aided tools and software. Although such software is commonly used for personal entertainment, it may create havoc when used by malicious people to conceal sensitive content in images for criminal forgery. Spreading fake information, enabling illegal activities, and creating morphed images of individuals for revenge are some of the potentially destructive uses of advanced face and structure manipulation technology in the wrong hands. The research community in multimedia forensics has been working in this area for many years, and this chapter presents a comprehensive study of various techniques for fake image/video creation and detection. It also surveys the benchmark datasets used by researchers for fake image/video detection. The presented survey can help the research community develop new methods and models for fake detection that overcome the restrictions of traditional methods.

DOI: 10.4018/978-1-6684-6864-7.ch007
Copyright © 2023, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

INTRODUCTION

Social media plays an integral role in our lives today and has revolutionized the way people communicate and socialize on the web. While the development of social media benefits people, it also has negative effects. Fake content and growing disinformation from malevolent users have not only thrown online social media systems into disarray but also misled the public. Various studies explore methods of combating fake news on social media, such as hybrid models and natural language processing, and suggest that hybrid machine learning techniques combined with human effort stand a better chance of fighting deception on social media. Massive advances have been made in automatic video manipulation over the past few years. In particular, remarkable success has been demonstrated in facial manipulation (Li et al., 2018). Similar facial features can now be reproduced with advanced technologies that transfer facial expressions from one video to another (Korshunov et al., 2018), (Afchar et al., 2018), making it possible to swap speaker identities with little or no effort. Face manipulation systems and tools have improved to the point where even users with no prior knowledge of photo editing or digital art may use them; much of the required code is freely available to the general public (Hsu et al., 2020), (Villan et al., 2017). On the one hand, these technical advances open up new creative prospects (e.g., film making, visual effects, and visual arts). Unfortunately, they also make video forgery easier for malevolent users. Spreading fake information, enabling illegal activities, and creating morphed images of individuals for revenge are some of the potentially destructive uses of advanced face and structure manipulation technology in the wrong hands. The ability to detect whether a face has been manipulated in a video or image sequence is becoming progressively more important (Cueva et al., 2020), as the distribution of such manipulated videos invariably has severe and dangerous consequences (e.g., diminished trust in the media, directed opinion formation, and cyberbullying). Detecting whether a video has been tampered with is not a new challenge. Researchers in multimedia forensics have been working on this topic for a long time, presenting a variety of solutions to specific problems (Zhang et al., 2019), (Marra et al., 2018), (Natraj et al., 2019). For example, the authors of (Chen et al., 2021), (El et al., 2013) analyze the coding history of videos, and dedicated strategies for detecting frame duplication or deletion are proposed in (AlShariah et al., 2019), (Zhuo et al., 2018). The dataset containing real and fake images, after pre-processing, is divided into a training phase and a testing phase, as shown in Figure 1 below.

Figure 1. Generalized process followed in deep fake detection

All of the methods mentioned above rely on the same basic assumption: no manipulation is perfectly invisible; every editing operation leaves a subtle imprint that may be used to uncover the specific alteration. Such forensic footprints, however, are typically dispersed and difficult to locate. This is the case for videos that have been compressed heavily, have had several enhancement operations applied to them, or have been subjected to strong downsampling (Marra et al., 2018). It is also true for highly realistic fakes produced with procedures that are difficult to model formally. As a result, modern facial alteration techniques are difficult to detect from a forensic standpoint (Nguyen et al., 2021). Several distinct face modification techniques exist (Kaur et al., 2021), (Mukhtar et al., 2020), namely entire face synthesis, attribute manipulation, identity swap, and expression swap. Finally, manipulated videos are commonly exchanged via social networks that apply resizing and re-encoding operations, which impedes the overall performance of traditional forensic detectors (Kumar et al., 2020). A fake sample, or set of fake images, is generated from every pristine face (Frank et al., 2020), (Chauhan et al., 2022). A sample of input data taken from FF++ and DFDC is shown in Figure 2 below.

Figure 2. Face sample inputs taken from FF++ and DFDC datasets (Kumar et al., 2020)

A few steps are involved in making a face alteration (face swap) video, as shown in Figure 3 below; a minimal sketch of this shared encoder/two-decoder design is given after Figure 3.

a) Using an AI component known as an encoder, thousands of facial photos of the two people are first collected and processed.
b) The encoder identifies and learns the similarities between the two faces, producing compressed representations that are restored to images by a decoder.
c) One decoder is trained to restore the first person's face and another decoder to restore the second person's face. To create a face alteration, the encoded photos of one person are then fed into the "wrong" decoder: the decoder uses the contour and pose of human face A to remodel human face B, and likewise for a whole video.

Figure 3. Representation of the technology behind face manipulation (Zhang et al., 2020)
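The following is a minimal, illustrative Keras sketch of the shared-encoder/two-decoder design described in steps (a)-(c) and Figure 3. It is not the code of any of the surveyed systems; the layer sizes, the 64×64 input resolution, and the names (`build_encoder`, `decoder_a`, `decoder_b`, `faces_a`, `faces_b`) are assumptions made for the example.

```python
# Minimal face-swap autoencoder sketch (TensorFlow/Keras assumed available).
import numpy as np
from tensorflow.keras import layers, Model, Input

def build_encoder():
    inp = Input(shape=(64, 64, 3))
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    latent = layers.Dense(512, activation="relu")(x)      # shared compressed representation
    return Model(inp, latent, name="shared_encoder")

def build_decoder(name):
    latent = Input(shape=(512,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(latent)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(latent, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_a")   # learns to reconstruct person A
decoder_b = build_decoder("decoder_b")   # learns to reconstruct person B

inp = Input(shape=(64, 64, 3))
auto_a = Model(inp, decoder_a(encoder(inp)))   # the encoder is shared by both autoencoders
auto_b = Model(inp, decoder_b(encoder(inp)))
auto_a.compile(optimizer="adam", loss="mae")
auto_b.compile(optimizer="adam", loss="mae")

# faces_a, faces_b: aligned face crops in [0, 1], shape (N, 64, 64, 3) -- placeholder data here.
faces_a = np.random.rand(8, 64, 64, 3).astype("float32")
faces_b = np.random.rand(8, 64, 64, 3).astype("float32")
auto_a.fit(faces_a, faces_a, epochs=1, verbose=0)
auto_b.fit(faces_b, faces_b, epochs=1, verbose=0)

# The swap: encode a frame of person A, but decode it with person B's decoder.
swapped = decoder_b.predict(encoder.predict(faces_a[:1]))
```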

Furthermore, facial manipulation can be categorized into four main categories, from high to low level, based on the degree of manipulation.

a) Synthesis of the Entire Face: With the help of advanced algorithms such as generative adversarial networks (GANs), completely non-existent face images can be created very easily (Islam et al., 2020). With their high quality and realistic features, these images look very real, and the results achieved are striking. Although this capability has several benefits in certain industries, for example the gaming and 3D modelling industries, these advances also have a dark side.
b) Identity Swap: As the name suggests, this means swapping the face of one individual with that of another, thus completely manipulating the identity of that individual. It generally uses two approaches:
   ◦ Classical computer graphics-based techniques such as FaceSwap.
   ◦ Deep learning methods such as DeepFakes, e.g., the ZAO mobile app.
Realistic-looking videos of this type can now be found on the internet and are used for revenge or to defame well-known individuals.

c) Attribute Manipulation: Also known as retouching, this means modifying certain features of an image (particularly the face) with the help of advanced techniques such as GANs (Rossler et al., 2019). For example, the FaceApp application lets consumers virtually try out a wide range of products such as cosmetics and haircuts.
d) Expression Swap: As the name suggests, this means manipulating facial features or changing facial expressions, i.e., the GAN simply replaces the expressions of one person's face with those of another. This kind of manipulation can cause great damage to the identity of an individual as well as to society (Chen et al., 2021), (El et al., 2013).

The principal objectives of this chapter are given below:

• To introduce deepfake tools and technologies that can be used to change different aspects of pictures and videos.
• To introduce deepfake datasets and some traditional datasets for forensic evaluation.
• To assess some newly built deepfake detection techniques widely used on pictures and videos.

LITERATURE REVIEW

In media forensics, some of the earlier fake detection methods have commonly been based on: (i) in-camera fingerprints, the analysis of the intrinsic fingerprints left by the camera device, both hardware and software, such as the color filter array, optical lens, interpolation, and compression, among others; and (ii) out-camera fingerprints, the external fingerprints introduced by editing software, such as copy-move or copy-paste of different elements of the image, reduction of the frame rate in a video, etc. Most of the features used in traditional fake detection methods depend on the training scenario and so are not robust against unseen conditions. Of special importance today is that most fake media content is shared on social media networks, whose platforms automatically modify the original image/video, e.g., through compression and resizing operations. The increasing number of applications in the field of fake face detection is the motivating factor behind this research (Wubet et al., 2020). Various researchers have been working in this field using state-of-the-art approaches such as classical machine learning techniques, statistical techniques, deep learning-based techniques, and blockchain-based techniques, and the research community has been using deep learning as well as other artificial intelligence techniques to identify forged content. Nguyen et al. (2021) suggested a method with promising performance in detecting false videos, which can be further enhanced by taking into account dynamic patterns of blinking, since excessively frequent blinking can be a sign of tampering. Islam et al. (2020) reported that the GRU (gated recurrent unit) has the best accuracy on both datasets, at 0.88 and 0.91, respectively, among GRU, LSTM, and tanh-RNN models. Shorten et al. (2019) reported that when models are assessed on augmented test data, they achieve 50.99 percent accuracy on the CIFAR-10 dataset against 70.06 percent accuracy on the original test data. Ahmad et al. (2020) found that, beyond B0, using higher-scaled forms of EfficientNet results in overfitting and decreased accuracy, 85.3 percent and 81.2 percent, respectively. Furthermore, the strategy given by Hsu et al. (2020) beats previous state-of-the-art systems in terms of accuracy and recall rate, according to experimental results. The work of Wubet et al. (2020) showed that, when attempting to detect deepfake videos, the image quality metrics and the lip-syncing approach with a support vector machine (SVM) reveal errors (Zhang et al., 2020). Pillai et al. (2020) proposed false colourized picture detection using convolutional neural networks (CNN), which outperforms fake colourized image detection using histograms and feature extraction. Welekar et al. (2020) used a convolutional neural network to detect deepfakes and reported that the accuracy of the model is roughly 70%; over a collection of low-resolution photos the model exhibits reasonable accuracy. Tanaka et al. (2021) proposed a robust hashing approach for manipulated images, chosen because of its high robustness against image compression and resizing. AlShariah et al. (2019) aimed to build a model that can classify Instagram content to detect threats and forged images; the model was built using deep learning algorithms, namely CNN, the AlexNet network, and transfer learning with AlexNet. Nasir et al. (2021) proposed a novel hybrid deep learning model that combines convolutional and recurrent neural networks for fake news classification; the model was successfully validated on two fake news datasets (ISOT and FA-KES), achieving detection results that are significantly better than other non-hybrid baseline methods. Wu et al. (2020) proposed TI-CNN, which is trained with both text and picture input at the same time by projecting explicit and latent characteristics into a unified feature space. Thota et al. (2018) described a method for fake news detection in which a precisely calibrated Tf-IDF – dense neural network (DNN) model achieved an accuracy of 94.21 percent on test data (Kumar et al., 2020). Deep neural methods and LSTMs have also been proposed for fake news detection using machine learning (Kumar et al., 2020); in comparison to other neural network models, the LSTM allows faster training with huge datasets (Ahmad et al., 2020), (Zhang et al., 2020). Research by Suhail Yousaf et al. (2020) and Ahmad et al. (2020) used a CNN-based approach for deepfake detection with attention on target-specific regions and manual distillation extraction, and, using ensemble approaches and a variety of linguistic feature sets, was able to categorize content as true or false. Moreover, Zhang et al. (2020) described FakeDetector, a framework that consists of two primary components, representation feature learning and credibility label inference, which in combination form a deep diffusive network model. Several processes based on "Fasttext" and "Shallow-and-wide CNN" have also been applied (Kumar et al., 2020). Rossler et al. (2019) focused on the impact of compression on the detectability of state-of-the-art manipulation algorithms and proposed a standardized baseline for future research. The related literature is summarized below in Table 1.

Table 1. Literature review of detecting fake contents

Sr. No. | YoP | Author/s | Title | Source | Summary
1 | 2021 | Quoc Viet Hung Nguyen, Thanh Thi Nguyen, Saeid Nahavandi, Cuong M. Nguyen (Fellow, IEEE) | Deep Learning for Deepfakes Creation and Detection: A Survey | IEEE Xplore | The suggested method has promising performance in detecting false videos, which can be further enhanced by taking into account dynamic patterns of blinking, such as excessively frequent blinking, which could be a sign of tampering.
2 | 2020 | Md Rafiqul Islam, Shaowu Liu, Guandong Xu, Xianzhi Wang | Deep learning for misinformation detection on online social networks: a survey and new perspectives | Springer-Verlag GmbH Austria | GRU has the best accuracy in both datasets, at 0.88 and 0.91, respectively (GRU, LSTM, tanh-RNN compared).
3 | 2019 | Connor Shorten, Taghi M. Khoshgoftaar | A survey on Image Data Augmentation for Deep Learning | Springer Open | When the models are assessed on augmented test data, they achieve 50.99 percent accuracy on the CIFAR-10 dataset against 70.06 percent accuracy.
4 | 2021 | Bhuvanesh Singh, Dilip Kumar Sharma | Predicting image credibility in fake news over social media using multi-modal approach | Springer-Verlag London Ltd. | Beyond B0, using higher-scaled forms of EfficientNet results in overfitting and decreased accuracy; the accuracy is 85.3 percent and 81.2 percent, respectively.
5 | 2019 | Chih-Chung Hsu, Chia-Yen Lee, Yi-Xiu Zhuang | Deep Fake Image Detection Based on Pairwise Learning | Preprints (www.preprints.org) | The suggested strategy beats previous state-of-the-art systems in terms of precision and recall rate, according to experimental results.
6 | 2020 | Worku Muluye Wubet | The Deepfake Challenges and Deepfake Video Detection | International Journal of Innovative Technology and Exploring Engineering (IJITEE) | When attempting to detect deepfake movies, the image quality metrics and the lip-syncing approach with Support Vector Machine (SVM) reveal an error.
7 | 2020 | Neetu Pillai, PHCET, Maharashtra, India | Fake colorized and morphed image detection using convolutional neural network | ISSN (online) | False colourized picture detection using convolutional neural networks (CNN) outperforms fake colourized image detection using histograms and feature extraction.
8 | 2020 | Aarti Karandikar, S.R.C.E., Maharashtra, India | Deepfake Video Detection Using Convolutional Neural Network | International Journal of Advanced Trends in Computer Science and Engineering | The accuracy of the model reported in the paper is roughly 70%; over the collection of low-resolution photos the model exhibits reasonable accuracy.
9 | 2021 | Miki Tanaka, Tokyo Metropolitan University | A Detection Method of Operated Fake-Images Using Robust Hashing | Journal of Imaging | The robust hashing approach was chosen because of its high robustness against image compression and resizing.
10 | 2019 | Njood Mohammed AlShariah (IMSIU), Saudi Arabia | Detecting Fake Images on Social Media using Machine Learning | IJACSA | 93.3 percent accuracy, 95.5 percent precision, 91.4 percent recall, 95.3 percent specificity, and 91.4 percent sensitivity.
11 | 2021 | Jamal Abdul Nasir, Iraklis Varlamis, Osama Subhani Khan | A hybrid CNN-RNN based deep learning approach | International Journal of Information Management Data Insights | Use of hybrid CNN-RNN on the ISOT dataset with accuracy = 0.99 ± 0.02.
12 | 2019 | Krithi Dinesh Kottary, Mr. Sunil B.N. | Fake News Detection Using Machine Learning | Sahyadri International Journal of Research | Use of SVM, Naïve Bayes, and Logistic Regression on the ISOT dataset.
13 | 2018 | Philip S. Yu | TI-CNN: Convolutional Neural Networks for Fake News Detection | University of Illinois at Chicago, United States | TI-CNN is proposed; it is trained with both text and picture input at the same time by projecting explicit and latent characteristics into a unified feature space.
14 | 2018 | Aswini Thota, Priyanka Tilak, Simeratjeet Ahluwalia, Nibhrat Lohia | Fake News Detection: A Deep Learning Approach | SMU Data Science Review | To obtain an accuracy of 94.21 percent on test data, a precisely calibrated Tf-IDF – dense neural network (DNN) model was used.
15 | 2020 | K. Arun Kumar, G. Preethi, K. Vasanth | Fake News Detection Using Machine Learning | International Journal of Technology and Engineering System (IJTES) | Deep neural methods and LSTM were proposed; in comparison to other neural network models, the LSTM allows faster training with huge datasets.
16 | 2021 | I.M.V. Krishna, Dr. S. Sai Kumar | Fake News Detection Using Naive Bayes Classifier | International Journal of Creative Research Thoughts (IJCRT) | This research claims that a model based on the count vectorizer or a tf-idf matrix (i.e., word tallies relative to how often they are used in other articles in the dataset) can aid in finding relevant articles.
17 | 2020 | Suhail Yousaf, Iftikhar Ahmad, Muhammad Yousaf, Muhammad Ovais Ahmad | High Performance Deepfake Video Detection on CNN-Based with Attention Target-Specific Regions and Manual Distillation Extraction | Department of Mathematics and Computer Science, Karlstad University, Karlstad, Sweden | Using ensemble approaches and a variety of linguistic feature sets, news articles from various categories can be categorized as true or false.
18 | 2019 | Bowen Dong, Jiawei Zhang, Philip S. Yu | FAKEDETECTOR: Effective Fake News Detection with Deep Diffusive Neural Network | Florida State University, FL, USA | FAKEDETECTOR is a framework that consists of two primary components: representation feature learning and credibility label inference, which combined form a deep diffusive network model.
19 | 2019 | Hyeong-Jun Kim, Dong-Ho Lee, Yu-Ri Kim, Seung-Myung Park, Yu-Jun Yang | Fake News Detection Using Deep Learning | Journal of Information Processing Systems | To create a model for identifying false news, several processes based on "Fasttext" and "Shallow-and-wide CNN" were applied and transformed.
20 | 2021 | I.M.V. Krishna, Dr. S. Sai Kumar | Fake News Detection Using Naive Bayes Classifier | International Journal of Creative Research Thoughts (IJCRT) | This research claims that a model based on the count vectorizer or a tf-idf matrix (i.e., word tallies relative to how often they are used in other articles in the dataset) can aid in finding relevant articles.
21 | 2019 | Andreas Rossler, Davide Cozzolino, Justus Thies | FaceForensics | Technical University of Munich | The paper focuses on the impact of compression on the detectability of state-of-the-art manipulation algorithms, and a standardized baseline for future research is proposed.
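Several of the fake-news detection entries in Table 1 (e.g., the Tf-IDF/DNN and Naive Bayes approaches) rely on bag-of-words or TF-IDF text features with a downstream classifier. The following is a minimal, hedged scikit-learn sketch of that general pattern; the tiny in-line corpus and the logistic-regression choice are placeholders for illustration, not details taken from any of the cited papers.

```python
# Illustrative TF-IDF + linear classifier baseline for fake-news text classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["breaking: celebrity spotted on mars",            # placeholder headlines
         "government publishes annual budget report",
         "miracle cure discovered, doctors hate it",
         "local council approves new school building"]
labels = [1, 0, 1, 0]                                       # 1 = fake, 0 = real (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)
print(model.predict(["shocking miracle spotted by celebrity"]))
```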

DESCRIPTION OF DATASETS

Datasets play a major role in detecting and predicting deepfakes, since a large number of pictures and videos are needed to feed a machine learning algorithm. In this section, the different types of datasets used by various authors to detect deepfake videos are listed in Table 2.

Table 2. Dataset description used in fake video detection

Sr. No. | Dataset | Links
1 | Celeb-DF fake processed videos | Celeb-DF1
2 | Celeb-DF real processed videos | Celeb-DF2
3 | FaceForensics++ real and fake processed videos | FF++
4 | DFDC fake processed videos | DFDC1
5 | DFDC real processed videos | DFDC2

Forensics datasets can be classified into two broad types: traditional and deepfake datasets. Traditional forensics datasets are created with extensive manual effort under carefully controlled conditions and target artefacts such as camera fingerprints, splicing, inpainting, resampling, and rotation; several such datasets containing image manipulations have been proposed. Table 3 describes datasets used in fake image detection.

Table 3. Dataset used in fake image detection

S.No | Dataset Description | Dataset Link
1 | Dataset consists of video sequences which are manipulated using automated face manipulation methods. | FaceForensics++
2 | Over 70,000 images of human faces having very good quality of 1024 × 1024. | Flickr Faces
3 | Unique new dataset created by experts to benchmark deepfake detection models. | DFDC
4 | Non-commercial dataset of over 367,000 faces annotated using 3,100 subjects' facial points. | UMDFaces
5 | DeepfakeTIMIT swaps faces in videos using a GAN-based approach developed from the encoder-based deepfake algorithm. | DeepfakeTIMIT
6 | Dataset containing people of all ages, ethnicities, and genders. | UTKFace
7 | Collection of 3 datasets (A, B, C); CASIA Gait includes 20 people with 4 sequences for each dimension. | CASIA Gait
8 | Google AI dataset of over 156,000 facial images meticulously annotated by six human annotators. | GFEC
9 | Commercial research-purpose dataset having over 200,000 celebrity images. | CelebA
10 | VidTIMIT consists of videos of people reciting short phrases; used for research on face recognition. | VidTIMIT
11 | Dataset of over 10,000 images of people from across 15 countries, aged between 4 and 70 years. | TuftsFace
12 | Over 10,000 images from seven different cameras were converted to greyscale and scaled to 512 × 512. | BossBase
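Building the "processed videos" listed in Tables 2 and 3 typically involves sampling frames from each clip and cropping the face region before training. The following is a hedged illustration of that preprocessing step using OpenCV; the file paths, sampling rate, and output size are assumptions for the example, not details taken from the original datasets.

```python
# Illustrative frame sampling and face cropping for deepfake-detection datasets (OpenCV assumed).
import os
import cv2

def extract_face_crops(video_path, out_dir, every_n_frames=30, size=256):
    """Sample one frame every `every_n_frames`, detect the largest face and save a crop."""
    os.makedirs(out_dir, exist_ok=True)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    idx, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n_frames == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) > 0:
                x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest detection
                crop = cv2.resize(frame[y:y + h, x:x + w], (size, size))
                cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.png"), crop)
                saved += 1
        idx += 1
    cap.release()
    return saved

# Example usage with assumed paths:
# extract_face_crops("celeb_df/fake/clip_001.mp4", "processed/fake/clip_001")
```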

CLASSIFYING TECHNIQUES OF FAKE IMAGE/VIDEO DETECTION

Computer vision nowadays makes heavy use of machine learning and deep learning (Lee et al., 2019), (Thota et al., 2018) for the detection and prediction of deepfakes, and similar approaches have been used for fake image detection. In this section, Table 4 gives an overview of the modern methods used by different authors for fake video detection, and Table 5 classifies contemporary techniques for fake image detection.

Table 4. Classification of modern techniques used for fake video detection

Ref. (Nguyen et al., 2021)

CFFN ü

CNN ü

GAN

RNN

ü

ü

SVM

Net

NaiveBayes

ü

(Hsu et al., 2020)

ü

(Wubet et al., 2020)

ü

ü

(Pillai et al., 2020)

ü

(Welekar et al., 2020)

ü

ü

ü

Tanaka et al., 2021)

ü ü

ü

ü

(Nasir et al., 2021)

ü

ü

(Wu et al., 2020)

ü

(Thota et al., 2018)

ü

(Kumar et al., 2020)

ü

(Ahmad et al., 2020)

DNN

ü

(Singh et al., 2022)

(AlShariah et al., 2019)

Alex

Robust Hashing

ü

(Islam et al., 2020) (Shorten et al., 2019)

LSTM

ü

ü ü

ü

(Zhang et al., 2020)

ü

(Lee et al., 2019)

ü

(Rossler et al., 2019)

ü

ü ü

Table 5. Classification of contemporary techniques for fake image detection Ref.

NLP

Frequency Analysis

Y. Li et al. Li et al., 2018

CNN

ML



Resnet50

SVD

SVM

Deep Learning

 

(Afchar et al., 2018)





(Hsu et al., 2020)



(Villan et al., 2017)







(Zhang et al., 2019)



(Marra et al., 2018)



(Natraj et al., 2019)

 

(Chen et al., 2021)



(El et al., 2013) (Frank et al., 2020)

VGG 

P. Korshunov et al. (Korshunov et al., 2018)

(Cueva et al., 2020)

Feature Learning

 

(Chauhan et al., 2022)



(AlShariah et al., 2019)



(Zhuo et al., 2018)



 

PRESENT STATE-OF-THE-ART TECHNIQUES USED IN FAKE IMAGE AND VIDEO DETECTION

In this section, the latest techniques are described below.

GAN-Generated Faces Detection

Generative adversarial networks (GANs) generate highly realistic digital images of human faces that are difficult for a human being to distinguish from real ones (Zhang et al., 2020), (Rossler et al., 2019). Faces generated by GANs can easily be used to create fake social media accounts and trick people, for example by raising funds in another person's name. There are two major challenges in building a GAN-face detection model: first, the method should be accurate and flexible enough to expose fake images produced by a large number of GAN models; second, the decision process and detection result should be understandable to an ordinary human. Early detection methods are mainly deep learning-based; although these methods have achieved promising performance, it is still difficult to explain their underlying mechanism. Notably, human accuracy at recognizing GAN-generated faces, at roughly 50%-60%, is much worse than that of algorithmic methods. The GAN-face detection task is closely related to other fake face detection tasks, including morphed face detection and manipulated face detection. The main types of GAN-face detection methods are described below; a small classifier sketch for the deep learning approach in (a) follows the list.

a) Deep learning-based methods: These train deep neural network classifiers to distinguish fake faces from real ones in an end-to-end learning framework. One such approach uses a dual-channel CNN to reduce the impact of widely used image post-processing operations: the deep CNN extracts features from the pre-processed images, and the shallow CNN extracts features from the high-frequency components of the original image.
b) GAN-face detection in real-world scenarios: A framework for evaluating detection methods under cross-model, cross-data, and post-processing conditions.
c) One-shot, incremental, and advanced learning: Scene understanding is applied to find out-of-context objects that appear in GAN faces in order to distinguish them from real ones.
d) Physics-based methods: These methods identify GAN faces using artifacts and inconsistencies with the physical world. Johnson and Farid analyzed the light-source parameters estimated from the perspective distortion of the specular highlights in the eyes, using the inconsistency between the two eyes to detect GAN faces and reveal image tampering.
e) Physiology-based methods: Semantic aspects of the human face, such as pupil shape and symmetry, are used as cues; studies found that although GANs can generate these parts with high fidelity, they can still be detected using automatic algorithms.
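As an illustration of the deep learning approach in (a), the following is a minimal Keras sketch of a binary CNN classifier for real versus GAN-generated face crops. It is a generic baseline under assumed settings (64×64 inputs, a `data/train`–`data/val` folder layout with `real` and `fake` subfolders), not the dual-channel architecture referenced above.

```python
# Minimal real-vs-fake face CNN baseline (TensorFlow/Keras assumed available).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),   # 1 = fake, 0 = real
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Assumed directory layout: data/train/real, data/train/fake (and similarly data/val).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(64, 64), batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=(64, 64), batch_size=32, label_mode="binary")
normalize = layers.Rescaling(1.0 / 255)
train_ds = train_ds.map(lambda x, y: (normalize(x), y))
val_ds = val_ds.map(lambda x, y: (normalize(x), y))

model.fit(train_ds, validation_data=val_ds, epochs=5)
```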

DeepFake Generation and Detection

Deepfakes generally come in several forms, such as video with audio, video without audio, and still images. Current research and studies focus on faces and short videos. Face swapping and facial re-enactment are the two major techniques used today: in face swapping, the real face is replaced with a reference face while the background and environment remain the same. RSGAN is a deep neural network-based algorithm that generates deepfakes using face swapping, and FSGAN is another face-swapping algorithm whose operation is based on an RNN. Different types of algorithms can be used to detect deepfakes, such as deep learning, RNNs, LSTMs, CNNs, and GANs.

Predicting Image Credibility in Fake News Over Social Media Using Multi-Modal Approach

A multi-modal, explicit deep learning strategy has been proposed to detect false photographs shared on social media networks. In the proposed model, the visual and textual modalities are learned on separate channels and then combined to obtain feature sets from both modalities; no additional component is required to model the relationship between the modalities. For extracting image and text features, the model used EfficientNet-B0 and the sentence transformer RoBERTa, respectively. The CNN model used ELA-generated (error level analysis) images as input. EfficientNet-B0 was also tested against the CASIA 2.0 dataset, which contains manipulated images, where an efficiency of 87.13 percent was recorded (Pillai et al., 2020), (Welekar et al., 2020). The experiment was run on datasets from Twitter and Weibo, achieving 85.3 percent and 81.2 percent accuracy, respectively; these findings outperformed earlier state-of-the-art models. According to this study, a multi-modal deep learning model can detect false photographs on social media platforms. A fresh Twitter dataset was built based on the most recent 2020 events in India, and the observation was that the picture and textual cues differed significantly from the previous dataset, which reduced the accuracy of models trained on old data. This highlights the urgent necessity of establishing social media picture datasets based on current patterns in order to keep up with the microblogging industry's shifting tendencies. Detection of satire images was not addressed, the proposed approach was not tested against fake photos generated by generative adversarial networks, and text overlaid on the photos was most likely ignored; these aspects remain open for further investigation.
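Since the model above consumes ELA (error level analysis) images, a short, hedged sketch of how such an ELA map can be produced with Pillow is given below; the re-save quality of 90 and the brightness scaling are common choices assumed here, not parameters taken from the cited work.

```python
# Illustrative error level analysis (ELA) map generation (Pillow assumed available).
import io
from PIL import Image, ImageChops

def ela_image(path, quality=90, scale=15):
    """Re-save the image as JPEG and amplify the pixel-wise difference to the original."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)      # controlled recompression
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)  # edited regions tend to show stronger residuals
    return diff.point(lambda p: min(255, p * scale))

# ela_image("suspect_post.jpg").save("suspect_post_ela.png")   # assumed file names
```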

Fake Colorized and Morphed Image Detection Using Convolutional Neural Network

As technology advances, the use of false images grows, and many of the images saved on servers or in the cloud are altered or counterfeited. Determining whether stored photographs are real is therefore challenging, and few technologies are available today that can detect whether a photograph is genuine. Previously, approaches based on histograms and feature extraction were employed to detect fraudulent photos (Tanaka et al., 2021), (Lee et al., 2019). Neural networks, the most advanced technology, detect fake images by examining numerous elements of the image and learning how images are faked. As a first step, the proposed system extracts features such as the dark channel, bright channel, RGB channels, and alpha channel (a sketch of the dark- and bright-channel features follows this subsection). To improve the chances of spotting fake photographs, deep-layer analysis is used in conjunction with a convolutional neural network and a fuzzy classification process. This research focuses on the use of FCID-CNN to detect false images and how it compares with existing methods. The hue, saturation, dark channel, bright channel, and alpha channel are all examined in detail, and distribution factors for all of these features are then computed using a Gaussian distribution model (Nasir et al., 2021), (Mukhtar et al., 2020). The convolutional neural network then handles the fake picture identification procedure using these distribution factors. FCID-CNN outperforms FCID-HIST by 4.36% and FCID-FE by 2.85%. This fake picture detection technique can be improved further by taking into account aspects such as DCT and wavelet transformation strategies to deal with more detailed high-definition images.
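As a hedged illustration of the dark- and bright-channel features mentioned above, the sketch below computes both channels for an RGB image. The 15×15 patch size and the use of SciPy's rank filters are assumptions for the example, not details from the FCID-CNN work.

```python
# Illustrative dark-channel and bright-channel feature maps (NumPy and SciPy assumed available).
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def dark_and_bright_channels(rgb, patch=15):
    """rgb: float array in [0, 1] of shape (H, W, 3)."""
    per_pixel_min = rgb.min(axis=2)                     # darkest of R, G, B at each pixel
    per_pixel_max = rgb.max(axis=2)                     # brightest of R, G, B at each pixel
    dark = minimum_filter(per_pixel_min, size=patch)    # local minimum over a patch
    bright = maximum_filter(per_pixel_max, size=patch)  # local maximum over a patch
    return dark, bright

rgb = np.random.rand(128, 128, 3)                       # placeholder image
dark, bright = dark_and_bright_channels(rgb)
# Simple scalar descriptors that could feed a downstream classifier:
features = np.array([dark.mean(), dark.std(), bright.mean(), bright.std()])
print(features)
```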

Blind Fake Image Detection

With advances in technologies such as computer graphics and digital imaging, modifying photographs has become easier, and such manipulation is more difficult to detect than before (Wu et al., 2020). Fakes are typically made by combining or changing existing images, and because the change is made at the pixel level, detection, and hence verification of the image's authenticity, becomes difficult. An image is authentic if it is a faithful witness to a real event, place, or time, and the difference between a genuine and a fake photograph should be detectable. This work provides a blind detection method that uses singular value decomposition (SVD) within a classifier to make a binary judgement on whether an image is fake or real; it is a method of enhancing existing approaches. SVD is a factorization of a matrix into the product of three matrices (Thota et al., 2018); it has several useful algebraic properties as well as data science applications (Garg, Popli, & Sarao, 2021), and it remains relatively stable when the image is changed. To detect fraudulent images, the SVD-based technique exploits the change observed in the singular (eigen) vectors of the orthogonal subspaces, which improves detection efficiency considerably. Most approaches need both the original image and the fake image in order to check whether there is a difference between them, but because the image is represented through its SVD, the original image is no longer required in this technique.
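A minimal NumPy sketch of how SVD-based features can be derived from an image and fed to a binary classifier is shown below; the block size, the number of singular values kept, and the use of a linear SVM are assumptions for illustration, not the exact procedure of the cited work.

```python
# Illustrative SVD-based image features for fake/real classification (NumPy and scikit-learn assumed).
import numpy as np
from sklearn.svm import LinearSVC

def svd_features(gray, block=64, k=8):
    """Split a grayscale image into blocks and keep the top-k singular values of each block."""
    h, w = gray.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            s = np.linalg.svd(gray[y:y + block, x:x + block], compute_uv=False)
            feats.append(s[:k] / (s[0] + 1e-8))   # normalise by the largest singular value
    return np.concatenate(feats)

# Placeholder data: in practice these would be authentic and manipulated images.
rng = np.random.default_rng(0)
real = [rng.random((128, 128)) for _ in range(10)]
fake = [np.clip(img + 0.3 * rng.random((128, 128)), 0, 1) for img in real]
X = np.array([svd_features(img) for img in real + fake])
y = np.array([0] * len(real) + [1] * len(fake))

clf = LinearSVC(max_iter=5000).fit(X, y)
print(clf.score(X, y))
```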

Fake Image Detection with Robust Hashing

The availability of new image manipulation tools and techniques has made it easier to create fraudulent photos and harder to detect them. These fake images are now widely circulated on the internet, including on social media, posing a serious threat to the community, and detecting false photographs has become a pressing concern. Social networking services (SNS) compress or downsize altered photographs into JPEG format when they are uploaded, and these alterations degrade the distinctive features, making it more difficult to spot fake photographs (Kumar et al., 2020), (Ahmad et al., 2020). A robust hashing approach originally introduced for picture retrieval was found to give higher fake-detection accuracy, even when numerous manipulation techniques are applied. Robust hashing ensures that the input data yields a hash result that can be matched to any image with similar visual content. The value of a PhotoDNA hash, like that of binary hashes, cannot be reversed. Although two copies of the same image in different file formats will have completely distinct binary hashes, robust hashing can still recognize the image even if minor changes, such as scaling or altering the file format, have been made, because recognition is based on the image's visual content rather than its binary file data. In this methodology, the robust hash is calculated and saved in a database, and this stored hash value is used to determine whether or not an image is fake. When a query image is received, a comparable value is calculated for it and compared with the hash value stored in the database; finally, the realness or fakeness of the query image is decided based on the distance between the two computed hash values. A hedged sketch of such a perceptual hash comparison is given below.
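The sketch below illustrates the general idea of comparing robust (perceptual) hashes; it uses a simple average hash with an assumed 8×8 grid and a 10-bit threshold, which is a generic stand-in rather than the specific robust-hash algorithm used in the cited work.

```python
# Illustrative perceptual (average) hash comparison (Pillow and NumPy assumed available).
import numpy as np
from PIL import Image

def average_hash(path, hash_size=8):
    """Downscale to hash_size x hash_size grayscale and threshold each pixel at the mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size), Image.LANCZOS)
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()          # 64-bit boolean hash

def hamming_distance(h1, h2):
    return int(np.count_nonzero(h1 != h2))

# Assumed workflow: the reference hash is stored in a database when the original is registered;
# a query image is flagged as manipulated if its hash drifts too far from the stored one.
# ref_hash = average_hash("registered_original.png")
# query_hash = average_hash("downloaded_copy.jpg")
# print("suspicious" if hamming_distance(ref_hash, query_hash) > 10 else "visually consistent")
```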

CONCLUSION AND FUTURE SCOPE

The increasing number of applications in the field of fake face detection is the motivating factor behind this research. Various researchers have been working in this field using state-of-the-art approaches such as classical machine learning techniques, analytical techniques, and deep learning-based techniques. In this chapter, the contemporary techniques used for fake image creation and detection are classified, which will help the research community choose the best approach for their requirements. After careful analysis and review of the various research techniques, it is observed that deep learning techniques have achieved the highest accuracy, but this accuracy can still be improved, so there remains scope for future work in this field. Deepfake creation and detection still face many challenges; this chapter therefore provides a meaningful resource for researchers to develop effective detection methods and alternative solutions.

REFERENCES Afchar, D., Nozick, V., Yamagishi, J., & Echizen, I. (2018, December). Mesonet: a compact facial video forgery detection network. In 2018 IEEE international workshop on information forensics and security (WIFS) (pp. 1-7). IEEE. Ahmad, I., Yousaf, M., Yousaf, S., & Ahmad, M. O. (2020). Fake news detection using machine learning ensemble methods. Complexity, 2020, 1–11. doi:10.1155/2020/8885861 AlShariah, N. M., Khader, A., & Saudagar, J. (2019). Detecting fake images on social media using machine learning. International Journal of Advanced Computer Science and Applications, 10(12), 170–176. doi:10.14569/IJACSA.2019.0101224 Chauhan, R., Popli, R., & Kansal, I. (2022, October). A Comprehensive Review on Fake Images/Videos Detection Techniques. In 2022 10th International Conference on Reliability, Infocom Technologies and 20Optimization (Trends and Future Directions)(ICRITO) (pp. 1-6). IEEE. Chen, H. S., Zhang, K., Hu, S., You, S., & Kuo, C. C. J. (2021). Geo-defakehop: High-performance geographic fake image detection. arXiv preprint arXiv:2110.09795. Cueva, E., Ee, G., Iyer, A., Pereira, A., Roseman, A., & Martinez, D. (2020, October). Detecting fake news on twitter using machine learning models. In 2020 IEEE MIT Undergraduate Research Technology Conference (URTC) (pp. 1-5). IEEE. El Abbadi, N., & Hassan, A. M., & AL-Nwany, M. M. (2013). Blind fake image detection. [IJCSI]. International Journal of Computer Science Issues, 10(4), 180. Frank, J., Eisenhofer, T., Schönherr, L., Fischer, A., Kolossa, D., & Holz, T. (2020, November). Leveraging frequency analysis for deep fake image recognition. In International conference on machine learning (pp. 3247-3258). PMLR.

Garg, A., Popli, R., & Sarao, B. S. (2021). Growth of digitization and its impact on big data analytics. [). IOP Publishing.]. IOP Conference Series. Materials Science and Engineering, 1022(1), 012083. doi:10.1088/1757-899X/1022/1/012083 Hsu, C. C., Zhuang, Y. X., & Lee, C. Y. (2020). Deep fake image detection based on pairwise learning. Applied Sciences (Basel, Switzerland), 10(1), 370. doi:10.3390/ app10010370 Islam, M. R., Liu, S., Wang, X., & Xu, G. (2020). Deep learning for misinformation detection on online social networks: A survey and new perspectives. Social Network Analysis and Mining, 10(1), 1–20. doi:10.100713278-020-00696-x PMID:33014173 Kaur, H., Koundal, D., & Kadyan, V. (2021). Image fusion techniques: A survey. Archives of Computational Methods in Engineering, 28(7), 4425–4447. doi:10.100711831-021-09540-7 PMID:33519179 Korshunov, P., & Marcel, S. (2018). Deepfakes: a new threat to face recognition? assessment and detection. arXiv preprint arXiv:1812.08685. Kumar, K. A., Preethi, G., & Vasanth, K. (2020). A study of fake news detection using machine learning algorithms. [IJTES]. Int. J. Technol. Eng. Syst., 11(1), 1–7. Lee, D. H., Kim, Y. R., Kim, H. J., Park, S. M., & Yang, Y. J. (2019). Fake news detection using deep learning. Journal of Information Processing Systems, 15(5), 1119–1130. Li, Y., & Lyu, S. (2018). Exposing deepfake videos by detecting face warping artifacts. arXiv preprint arXiv:1811.00656. Marra, F., Gragnaniello, D., Cozzolino, D., & Verdoliva, L. (2018, April). Detection of gan-generated fake images over social networks. In 2018 IEEE conference on multimedia information processing and retrieval (MIPR) (pp. 384-389). IEEE. Mukhtar, M., Bilal, M., Rahdar, A., Barani, M., Arshad, R., Behl, T., & Bungau, S. (2020). Nanomaterials for diagnosis and treatment of brain cancer: Recent updates. Chemosensors (Basel, Switzerland), 8(4), 117. doi:10.3390/chemosensors8040117 Nasir, J. A., Khan, O. S., & Varlamis, I. (2021). Fake news detection: A hybrid CNN-RNN based deep learning approach. International Journal of Information Management Data Insights, 1(1), 100007. doi:10.1016/j.jjimei.2020.100007 Nataraj, L., Mohammed, T. M., Chandrasekaran, S., Flenner, A., Bappy, J. H., Roy-Chowdhury, A. K., & Manjunath, B. S. (2019). Detecting GAN generated fake images using co-occurrence matrices. arXiv preprint arXiv:1903.06836.

Nguyen, T. T., & Reddi, V. J. (2021). Deep reinforcement learning for cyber security. IEEE Transactions on Neural Networks and Learning Systems, 1–17. doi:10.1109/ TNNLS.2021.3121870 PMID:34723814 Pillai, N. (2020). Fake colorized and morphed image detection using convolutional neural network. ACCENTS Transactions on Image Processing and Computer Vision, 6(18), 8–16. doi:10.19101/TIPCV.2020.618011 Rossler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., & Nießner, M. (2019). Faceforensics++: Learning to detect manipulated facial images. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 1-11). Shao, C., Kaur, P., & Kumar, R. (2021). An improved adaptive weighted mean filtering approach for metallographic image processing. Journal of Intelligent Systems, 30(1), 470–478. doi:10.1515/jisys-2020-0080 Shorten, C., & Khoshgoftaar, T. M. (2019). A survey on image data augmentation for deep learning. Journal of Big Data, 6(1), 1–48. doi:10.118640537-019-0197-0 Singh, B., & Sharma, D. K. (2022). Predicting image credibility in fake news over social media using multi-modal approach. Neural Computing & Applications, 34(24), 21503–21517. doi:10.100700521-021-06086-4 PMID:34054227 Singh, M., Kumar, R., Tandon, D., Sood, P., & Sharma, M. (2020, December). Artificial intelligence and iot based monitoring of poultry health: A review. In 2020 IEEE International Conference on Communication, Networks and Satellite (Comnetsat) (pp. 50-54). IEEE. 10.1109/Comnetsat50391.2020.9328930 Tanaka, M., Shiota, S., & Kiya, H. (2021). A detection method of operated fake-images using robust hashing. Journal of Imaging, 7(8), 134. doi:10.3390/jimaging7080134 PMID:34460770 Thota, A., Tilak, P., Ahluwalia, S., & Lohia, N. (2018). Fake news detection: A deep learning approach. SMU Data Science Review, 1(3), 10. Villan, M. A., Kuruvilla, A., Paul, J., & Elias, E. P. (2017). Fake image detection using machine learning. IRACST-International Journal of Computer Science and Information Technology & Security (IJCSITS). Welekar, R., Karandikar, A., & Tirpude, S. (2020, May). Emotion Categorization Using Twitter. In Proceedings of International Journal (Vol. 9, No. 3). doi:10.30534/ ijatcse/2020/32932020

Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., & Philip, S. Y. (2020). A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1), 4–24. doi:10.1109/TNNLS.2020.2978386 PMID:32217482 Wubet, W. M. (2020). The deepfake challenges and deepfake video detection. International Journal of Innovative Technology and Exploring Engineering, 9(6), 9. doi:10.35940/ijitee.E2779.049620 Zhang, J., Dong, B., & Philip, S. Y. (2020, April). Fakedetector: Effective fake news detection with deep diffusive neural network. In 2020 IEEE 36th international conference on data engineering (ICDE) (pp. 1826-1829). IEEE. Zhang, X., Karaman, S., & Chang, S. F. (2019, December). Detecting and simulating artifacts in gan fake images. In 2019 IEEE international workshop on information forensics and security (WIFS) (pp. 1-6). IEEE. Zhuo, L., Tan, S., Zeng, J., & Lit, B. (2018, November). Fake colorized image detection with channel-wise convolution based deep-learning framework. In 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) (pp. 733-736). IEEE. 10.23919/APSIPA.2018.8659761


Chapter 8

A Blockchain-Trusted Scheme Based on Multimedia Content Protection

Aarti Sharma
National Institute of Technology, Kurukshetra, India

Bhavana Choudhary
National Institute of Technology, Kurukshetra, India

Divya Garg
National Institute of Technology, Kurukshetra, India

ABSTRACT

There are two types of content on the blockchain: centralized and decentralized. On centralized video platforms, the platform owner, rather than the creator, controls most of the uploaded content. Moreover, some content creators post low-quality content in exchange for free cryptocurrency rewards, and the resulting reward algorithm demotivates other content creators. In contrast, decentralized blockchain-based video platforms aim to lessen ad pressure and eliminate intermediaries. On video platforms, copyright violations and the unauthorized dissemination of protected information are also significant issues. Copyright protection, restriction of illegitimate access, and legitimate dissemination of video files are necessary to guarantee that authors' original output is appropriately compensated.

DOI: 10.4018/978-1-6684-6864-7.ch008 Copyright © 2023, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


INTRODUCTION

Many media-sharing services have been developed due to the internet's rapid expansion, and media distribution raises various issues. Plagiarism and unauthorized dissemination are easy because media copies are simple to make and simple to manipulate. Furthermore, the copyright problem is so serious that a centralized management system cannot resolve it. Blockchain technology can be utilized to solve these issues: it requires neither a central server nor trust in every individual network participant. Data on the blockchain are referred to as transactions. Blockchain is a shared and distributed database in which blocks containing one or more transactions are chained to each other, and adding a new block requires agreement from the participants. To defend the media, it is therefore natural to use blockchain technology, which offers advantages such as transparency, security, safety, and decentralization (Yaga et al. 2019).

Blockchain technology has been found to be an effective method of transaction verification, and it can be used to create a system that distributes multimedia content in a decentralized and transparent manner. It consists of cryptographically signed blocks that are connected in a distributed digital record. The links between the blocks are cryptographically established after validation and consensus, and existing blocks become harder to edit as new blocks are introduced, building tamper resistance. Recently, blockchain technology has gained a lot of attention because of its range of applications, such as finance, health care, supply-chain management, and intrusion detection. Several applications for copyright and intellectual property protection have been developed with its help. There are numerous online multimedia applications based on blockchain technology, including those in the music and advertising industries, healthcare, social media, and content delivery networks. Blockchain technology offers transparency, decentralization, a reliable database, collective maintenance, trackability, security, credibility, digital cryptocurrencies, and programmable contracts, as well as innovative ideas for protecting digital intellectual property and ensuring traceability.

In this chapter, digital content and the media blockchain are discussed. The chapter covers the fundamental principles of blockchain and the blockchain structure, centralized and decentralized content on the blockchain, major problems related to blockchain technology, a taxonomy for classifying applications using blockchain technology, content protection techniques, and future research directions and technical challenges.


FUNDAMENTAL PRINCIPLES OF THE BLOCKCHAIN TECHNOLOGY

The principles of the blockchain depend upon several components, such as (Nawari and Ravindran 2019):

• Distributed database: Every user of a blockchain has access to the full database and its history. Data and information are not controlled by a single organisation. Each participant may independently check the information of its transaction partners without the use of an intermediary.
• Peer-to-peer transmission: Instead of using a central node, peer-to-peer communication is used. Each node stores information and transmits it to every other node.
• Pseudonymity and transparency: Every transaction and its associated value are visible to everyone with system access. Each node or user on a blockchain is individually identified by an alphanumeric address of at least 30 characters, and transactions occur between addresses; users can opt to reveal their names to others or remain anonymous (a small illustration of address derivation follows this list).
• Irreversibility of records: Once a transaction is entered into the database and the accounts are updated, the records cannot be changed because they are connected to all previous transaction records (hence the term "chain"). A number of computational techniques and methods guarantee that the recording on the database is permanent, chronologically organized, and accessible to everyone on the network.
• Computational logic: Because the ledger is digital, transactions on the blockchain may be linked to computational logic and, in a sense, programmed. As a result, users can create algorithms and rules that trigger transactions between nodes automatically.
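As a minimal illustration of the pseudonymity described in the list above, the sketch below derives an alphanumeric address by hashing a stand-in public key. It is a simplified Python sketch only: real chains such as Bitcoin use an ECDSA key and additionally apply RIPEMD-160 and Base58Check encoding, and every value here is illustrative.

```python
import hashlib
import os

# Stand-in for a user's public key; a real blockchain would use an ECDSA/EdDSA key.
public_key = os.urandom(33)

# Derive a pseudonymous address by hashing the key and keeping a short hex digest.
# Transactions are addressed to this string rather than to a real-world identity.
address = hashlib.sha256(public_key).hexdigest()[:40]

print("pseudonymous address:", address)
```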

CONTENT BLOCKCHAIN TECHNOLOGY

The information contained in blockchain technology is divided into two groups: centralized and decentralized.

1. Centralized Content: In a centralized database, the integrity of the data is managed by a single controller, and any transaction or data transmission must be confirmed by that single authority. Blockchain, by contrast, is a technology that ensures integrity and dependability without the need for a centrally approved authority: it makes use of a decentralized system made up of several nodes.
2. Decentralized Content: The distributed, decentralised platform is the most distinguishing feature of blockchain technology. Blockchain technology, as a distributed digital platform, enables all network members to pool their transaction information storage and verification efforts (Fotiou and Polyzos 2016). By executing a large number of distributed apps on the network, the decentralised functionality provides condition processing, state change management, data storage, verification, and control. Figure 1 shows how the properties of this distributed platform and block generation give blockchain technology technological advantages over centralised platforms. Decentralized blockchain technology can significantly lower transaction costs: unnecessary fees can be avoided because transactions can be made directly between participants without going via a central authority. Additionally, by eliminating the brokerage process, the pace of transactions can be increased without sacrificing security.

Figure 1. Centralized content vs decentralized content


TAXONOMY OF SYSTEMS FOR PROTECTING MULTIMEDIA CONTENT BASED ON BLOCKCHAIN

The proposed taxonomy is presented in this section to allow for systematic examination and comparison of the blockchain-based multimedia content applications in the literature. The characteristics and specifications of blockchain-based copyright protection solutions are identified under this taxonomy (Kim and Kim 2020). Seven groups and their corresponding subcategories are defined by the suggested taxonomy. Figure 2 illustrates a thorough and detailed classification of the identified categories.

Figure 2. Taxonomy of blockchain-based copyright protection system

Blockchain Types

The following three blockchain configurations are possible:

• Public blockchain: A decentralised blockchain that allows any node to join and have read and write access. It creates confidence by employing a consensus approach that renders a transaction immutable once it has been saved on the network. The transaction rate is unusually slow due to the requirement that each node take part in the consensus process. Bitcoin and Ethereum are instances of public blockchains.
• Private blockchain: A blockchain in which permissioned nodes have exclusive access to consensus, accounting, and block construction, and which is therefore only partially decentralized. The permissioned nodes often share an encrypted database that is run by a trusted party. Multichain is an illustration of this kind.
• Hybrid blockchain: A hybrid blockchain combines the "one highly-trusted entity" paradigm of private blockchains with the "low-trust" features of public blockchains. It makes use of a multi-party consensus architecture where a set of pre-selected nodes specifically verify each operation. The platforms Quorum, Hyperledger, and Corda are a few examples.

Transaction Types

A transaction represents a change in the values of data in the blockchain that results in a state transition. A blockchain transaction may involve the use of digital currency, smart contracts, documents, or data storage. The three different kinds of blockchain transactions are listed below:

• On-chain transactions: All network participants can see these because they are recorded on the distributed ledger. The transaction is irreversible, since information about it is recorded on the block and disseminated to the entire blockchain, making it impossible to change. A predetermined number of confirmations from miners is required for the transaction to be finalized. Completion is also impacted by network congestion, so transactions may occasionally be delayed when many transactions are awaiting confirmation.
• Off-chain transactions: These are not broadcast on the network and take place outside the main blockchain. The parties involved are free to reach a deal outside of the blockchain. A guarantor, who is responsible for ensuring the success of the transaction and observance of the agreement, may also be involved. The actual transaction is carried out on the blockchain after being accepted by the participating parties outside of it. These transactions can be executed in a number of ways, including multi-signature technology and credit-based systems. In contrast to on-chain transactions, off-chain transactions are completed instantly.
• Hybrid transactions: These transactions include elements from both on-chain and off-chain transactions. A multitude of characteristics, such as price, decentralisation, storage, and privacy, are used to differentiate between on-chain and off-chain processes.

Data Automation

A computer code, a storage file, and an account balance make up a smart contract, often known as self-automated code. It is carried out by miners who agree, via consensus protocols, on the order in which the contract's code should be executed. By publishing a transaction to the blockchain, any user can form a contract. A contract is immutable because its program code is set when it is created and cannot be altered.

The storage file for a contract is saved on the open blockchain. A smart contract can be activated by entities both inside the blockchain (other smart contracts) and outside the network (external data sources). While running its code, the contract may read from or write to its storage file. Additionally, it can transfer money to other users or contracts and receive money into its account balance. A smart contract is identified by a 160-bit hash (a hexadecimal address of the kind used by many cryptocurrencies such as Ethereum and Litecoin) and functions in an environment where public-key cryptography is supported. Smart contracts are the most distinctive feature of Ethereum and are now used in the bulk of existing cryptocurrency networks. A distributed application (dApp) can group several smart contracts to carry out complex operations, whereas a single smart contract can only carry out certain types of transactions. Users can engage with the smart contracts stored on the blockchain using the dApp's user-friendly interface (much like a conventional website). Some of the benefits of smart contracts are accuracy (less prone to human error), lower execution risk (automatic network execution), reduced reliance on third-party intermediaries, lower cost (less human intervention and fewer intermediaries), speed and real-time updates (automated tasks), and new business models (ranging from DRM and watermarking/fingerprinting of multimedia content to automated access to storage units).
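To make the idea of a contract with code, storage, and a balance concrete, the toy Python class below sketches a content-licensing escrow whose rules are enforced purely by code. It is not Solidity and does not run on a chain; the class and field names (LicenseContract, buy_license, and so on) are invented for illustration.

```python
class LicenseContract:
    """Toy sketch of a smart-contract-style escrow for multimedia licensing.

    Storage (the licensee set), an account balance, and deterministic rules
    mirror the components described above; a real contract would run on-chain.
    """

    def __init__(self, owner, price):
        self.owner = owner          # content owner's address
        self.price = price          # licence price in token units
        self.balance = 0            # contract account balance
        self.licensees = set()      # storage file: who has bought a licence

    def buy_license(self, buyer, amount):
        # Rule enforced by code, not by an intermediary.
        if amount < self.price:
            raise ValueError("insufficient payment")
        self.balance += amount
        self.licensees.add(buyer)

    def withdraw(self, caller):
        # Only the owner may move funds out of the contract balance.
        if caller != self.owner:
            raise PermissionError("only the owner can withdraw")
        payout, self.balance = self.balance, 0
        return payout


contract = LicenseContract(owner="0xOwner", price=10)
contract.buy_license("0xBuyer", 10)
print("0xBuyer licensed:", "0xBuyer" in contract.licensees)
print("owner withdraws:", contract.withdraw("0xOwner"))
```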

Cryptocurrency

Cryptography is the technology used by cryptocurrencies to protect and authenticate transactions, as well as to limit the number of new units of a specific cryptocurrency that can be issued. The most appealing feature of cryptocurrencies is their decentralised nature; they are not issued by a central body (e.g., a bank or financial intermediary). The fundamental benefit of cryptocurrencies is the ease with which money may be transferred between any two parties involved in a transaction. Public and private keys are used to secure all of these transactions, and their processing costs are kept to a minimum. Since the beginning of the cryptocurrency frenzy, Bitcoin and Ethereum have been the two most widely used cryptocurrencies, and Ripple is currently the third largest according to the most recent rankings of trading markets (Qureshi and Megías 2020). These three cryptocurrencies are the most traded on well-known trading platforms such as Plus500 due to their high market value and liquidity rate. The top three cryptocurrencies are described briefly below:

• Bitcoin: It is the first peer-to-peer payment network for electronic currencies based on blockchain technology. The Bitcoin network employs the proof-of-work (PoW) distributed consensus protocol. Because Bitcoin is pseudonymous, money is sent to Bitcoin addresses rather than to actual people's names. The block size is restricted to 1 MB and the average block formation time is 10 min, which limits the network throughput (Kawase and Kasahara 2017). Additionally, the scaling is restricted to 3–7 transactions per second. In addition, although fully encrypted, it is susceptible to theft and cyber attacks.
• Ethereum: It is an open-source, public, distributed computing platform based on the proof-of-stake (PoS) consensus mechanism that allows many types of smart contracts to be programmed within the system. Ether is the currency that can be moved in Ethereum. In Ethereum, all operations require gas, a fee paid in ether to facilitate computations. The average block formation time is 17 s, and the gas limit for each block is 6.7 million. A straightforward Ethereum transaction may cost as little as 21,000 gas units, whereas a complex smart contract can be very expensive (a worked fee example follows this list). Ethereum's scalability is capped at 15–20 transactions per second (Ledwaba et al. 2021). Additionally, flaws in Solidity code can introduce security vulnerabilities that endanger stored data.
• Ripple: It is a commercial technology platform that allows banks, payment processors, and digital asset exchanges to settle payments faster and at lower exchange rates. The Ripple payment network uses XRP, a cryptocurrency, to carry out international transactions. Unlike blockchain mining, the Ripple network verifies transactions using a proprietary distributed consensus protocol. As a result, transactions can take place faster and independently of a centralised authority. The XRP gateways quickly confirm each transaction in the Ripple network, which takes about 4 seconds, and the network can process around 1,500 transactions per second. Because Ripple is pre-mined, common nodes have little to no motivation to participate in the network, hence businesses are left to supply the validator nodes.
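The gas figures quoted above make the fee arithmetic easy to check. The short Python sketch below computes the cost of a plain transfer; the 30 gwei gas price is an assumed value for illustration, not a figure from this chapter, and real prices vary with network congestion.

```python
# Fee for a simple Ethereum value transfer: fee = gas_used * gas_price.
gas_used = 21_000                 # minimum gas for a plain transfer (see above)
gas_price_gwei = 30               # assumed gas price in gwei; varies in practice
gwei_per_eth = 1_000_000_000      # 1 ETH = 10^9 gwei

fee_eth = gas_used * gas_price_gwei / gwei_per_eth
print(f"transaction fee: {fee_eth:.5f} ETH")   # 21,000 * 30 gwei = 0.00063 ETH
```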

Consensus Protocols

A consensus mechanism is made up of protocols and algorithms that specify the rules that nodes (the computers that govern the blockchain and occasionally carry out transactions) must follow in order to confirm blocks. It addresses the issue of data synchronisation across nodes in a distributed system whose participants do not trust each other. The consensus protocol is a fault-tolerant approach for reaching agreement on a particular data value or network state. Its goals are reaching agreement, teamwork, cooperation, giving each node equal rights, and requiring each node to participate. The majority of blockchains employ one of the consensus protocols shown in Figure 3:

Figure 3. Consensus protocols

• PoW: In this technique, nodes (miners) compete against one another to solve a difficult mathematical challenge. The authority to validate the block and construct a new block that performs a transaction rests with the first node that finds a solution (a toy sketch of this hashing puzzle follows this list). PoW is used by both Ethereum 1.0 and Bitcoin (Namasudra et al. 2021). PoW is an exceedingly expensive technique since it demands a large amount of energy and processing capacity to obtain a consensus.
• PoS: To establish their stake in this process, the miners must demonstrate that they own a specific number of currency tokens. As a result, a node's likelihood of validating the block and determining its authenticity increases with the number of tokens it owns. Neo and Dash employ PoS. Ethereum 2.0, an upgrade to the Ethereum blockchain, has switched from PoW to PoS to improve scalability and throughput. One of the main disadvantages of PoS is that it encourages "crypto-coin saving" rather than spending.






• Delegated proof-of-stake (dPoS): To participate in the validation process under dPoS, a variation of PoS, each token owner must select a group of delegates they can rely on. The transactions are then verified by the nodes having the most votes (Qureshi and Megías 2020). dPoS is used by BitShares and Lisk. Even though dPoS does not need a lot of computing power, it is susceptible to centralization because there are only so many witnesses.
• Practical Byzantine Fault Tolerance (PBFT): Byzantine fault tolerance (BFT) is the ability of nodes to reach an agreement over a distributed network while avoiding rogue nodes. PBFT is one example of BFT, aiming to be a high-performance consensus approach that can rely on a number of trustworthy network nodes. PBFT tolerates malicious or broken nodes as long as they constitute fewer than one-third of all nodes. The majority will reject misleading information, because more truthful nodes will agree on the correct judgement than untrustworthy or malicious nodes will agree on an inaccurate conclusion. This technique is used by both Zilliqa and Hyperledger Fabric (Fan et al. 2020).
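As referenced in the PoW item above, here is a minimal Python sketch of the mining puzzle: searching for a nonce whose hash has a required number of leading zero hex digits. The difficulty and block contents are illustrative only, and no real network rules are implemented.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Find a nonce so that SHA-256(block_data + nonce) starts with
    `difficulty` zero hex digits -- the puzzle PoW miners race to solve."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Illustrative block contents: previous hash plus a few transactions.
nonce, digest = mine("prev_hash|tx1;tx2;tx3")
print(f"nonce={nonce} hash={digest}")
```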

CONTENT PROTECTION TECHNIQUES

A content protection strategy is typically viewed as a line of defence against dangers posed by a person or group of users having unauthorised access to multimedia material. Examples of protected properties include content protection, traceability, source and receiver identification, usage management, and digital rights attached to the content. According to Arnold et al. (Arnold, Schmucker, and Wolthusen 2003), any end-to-end content protection technique should cover all of these essential security aspects. To ensure authorized content access and to control content usage after transmission, end-to-end copyright protection systems must provide security both before and after the content is delivered. The best-known methods for protecting multimedia content, depicted in Figure 4, are briefly described in the sections that follow.


Figure 4. Content protection techniques

Multimedia Encryption

Encryption is the process of transforming plaintext messages into ciphertext messages; decryption is the process of transforming the ciphertext back into plaintext. This technique is expected to exhibit one or more of the following qualities:

• Confidentiality: It limits access to the data to authorised users exclusively, to prevent unauthorised access.
• Integrity: This is the protection of data from modification or variation, whether unintentional or intentional.
• Authenticity: This is the ability of the receiver of the information to identify its source.

In the naive technique, the entire audio-visual material is encrypted using standard encryption algorithms. The entire content is encrypted using symmetric or asymmetric cryptography, including Rivest Cipher 5 (RC5), the Advanced Encryption Standard (AES), and Rivest-Shamir-Adleman (RSA). Because multimedia content, such as audio or video data, is typically very large, the naive technique necessitates a significant amount of work. Many dedicated audio and video encryption methods have recently been proposed in an attempt to avoid the naive method and boost efficacy. Among these algorithm types are scalable encryption, full encryption, selective encryption, syntax-compliant encryption, joint compression and encryption, and multi-access encryption.

Homomorphic public-key cryptography allows certain operations (addition or multiplication in partially homomorphic schemes, or both in fully homomorphic schemes) to be performed on encrypted data with the same results, once decrypted, as if they had been performed on the plaintext. A homomorphic cryptosystem preserves copyright by allowing content owners to compute directly on the ciphertext without surrendering the keys. This property safeguards the personal information of buyers who value their privacy.
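A minimal sketch of the naive approach described above: the whole media payload is encrypted with an authenticated symmetric cipher, which provides confidentiality and lets tampering be detected at decryption time. It assumes the third-party Python cryptography package and uses its AES-based Fernet construction rather than the specific ciphers named in the text.

```python
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()            # symmetric key shared only with authorised users
cipher = Fernet(key)

media = b"\x89PNG... pretend these are raw image or video bytes ..."
ciphertext = cipher.encrypt(media)     # confidentiality: unreadable without the key

# Integrity/authenticity: decryption raises InvalidToken if the ciphertext was modified.
try:
    assert cipher.decrypt(ciphertext) == media
    print("decrypted and verified")
except InvalidToken:
    print("ciphertext was tampered with")
```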

Digital Rights Management (DRM)

DRM systems were created to send digital content securely to authorised recipients while imposing limitations on how the content can be used after delivery (e.g., copying, printing, or editing). In a typical DRM system, content can be protected, rights can be created and enforced, users can be tracked, and content usage can be monitored. The generic DRM architecture is made up of three parties: the user, the license provider, and the content provider. The content provider is in charge of producing the multimedia content and its metadata, while the license provider is in charge of producing licenses and managing content-encryption keys; the license determines who can access content downloaded through a local software agent, called a DRM agent. DRM can be implemented in hardware or in software (such as Apple's FairPlay). The following security considerations are adhered to when creating a DRM system:

• Secure packaging: Unauthorized copying is prevented by making sure the digital asset is wrapped securely. Encryption is required to create this secure packaging.
• Secure distribution: The authorized user must receive the digital product in a secure way.
• Conditional access: There must be a tamper-resistant mechanism for processing protected data and enforcing content usage rights.

The primary DRM anti-piracy technologies are encryption, passwords, watermarking, digital signatures, and payment systems. Encryption and password technologies are used to restrict who can access and utilise the content. Watermarks and digital signatures are used to protect the integrity and validity of content, as well as the copyright owners and customers. Digital watermarking is used in conjunction with DRM to safeguard the digital rights of copyright holders. Modern DRM systems have been proposed to support the encryption of scalable code streams with multiple keys to allow multiple forms of access, as opposed to traditional DRM schemes, which compress and encrypt a single piece of multimedia content into multiple copies, each copy targeted at specific applications and offering a single form of access control. A watermark prevents users from misrepresenting the content as their own, distributing it, or sharing it with unauthorised people, since it can be used to identify the original material owner.

Digital Watermarking

Digital watermarking, in contrast to multimedia encryption, offers post-decryption security once authorized users have decrypted the multimedia content. By hiding identification data (the watermark) within the original content (the host signal), it subtly modifies it. Later, this information can be used to demonstrate who owns the carrier signal and to establish its authenticity. The two stages of a digital watermarking system are typically watermark embedding and watermark extraction. The embedding approach embeds a watermark into the host signal to create a watermarked signal, whereas the extraction operation recovers the watermark from the manipulated/modified signal. The watermark is still present and can be recovered if the signal was not changed while being transmitted. While watermark extraction can prove ownership, watermark detection can only confirm it. During the embedding and extraction procedures, a secret key is employed to prevent unauthorized access to the watermark. When using a specific watermarking technique, each of the following properties must be considered (Shih 2017):

• Imperceptibility: The perception that the watermarked and original versions of the digital content are similar. The quality must not be compromised by distortion caused by the inserted watermark.
• Robustness: The ability to recognise the watermark after standard signal processing operations (such as cropping, compression, or additive noise). Watermarks must be resistant to all signal processing operations, at least below some distortion threshold. Depending on how vulnerable it is to attacks, digital watermarking can be classified as robust, fragile, or semi-fragile.
• Capacity: The number of bits of information a watermark can embed in the host signal.
• Security: The ability to fend off intentional and/or malicious attacks. In order for embedded data to remain secure, a watermarking technique must prevent detection or extraction by hackers. Information about watermarks should only be accessible to authorised persons.


Digital watermarking has been utilised successfully in a variety of applications, including copyright protection, transaction monitoring, content authentication, broadcast monitoring, and so on.
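For illustration only, here is a minimal least-significant-bit (LSB) embedding and extraction sketch in Python with NumPy. LSB embedding is fragile rather than robust, so it serves only to show the embed/extract pipeline described above; practical systems use robust transform-domain methods and a secret key.

```python
import numpy as np

def embed_lsb(host: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write watermark bits into the least significant bit of the first pixels."""
    wm = host.copy()
    flat = wm.ravel()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return wm

def extract_lsb(marked: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the watermark back out of the least significant bits."""
    return marked.ravel()[:n_bits] & 1

rng = np.random.default_rng(0)
host = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in image
watermark = rng.integers(0, 2, size=128, dtype=np.uint8)     # 128 watermark bits

marked = embed_lsb(host, watermark)
assert np.array_equal(extract_lsb(marked, watermark.size), watermark)
print("max pixel change:", int(np.max(np.abs(marked.astype(int) - host.astype(int)))))
```

Because only the lowest bit of each affected pixel changes, the watermarked image is perceptually identical to the host, which illustrates the imperceptibility property discussed above.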

Multimedia Fingerprinting

When an illegal copy is discovered, multimedia fingerprinting, often referred to as transaction tracing, can locate the pirates (colluders), whereas digital watermarking is unable to pinpoint the source of piracy. This is done by adding unique user-specific information, or a fingerprint, to each of several copies of the same content. A multimedia fingerprinting algorithm is a protocol between the owner of the content and the user that involves three operations: fingerprint generation, embedding, and tracing pirates from copies that have been obtained illegally or through collusion (Stefan and Fabien 2000). A fingerprinting scheme is expected to satisfy the following properties:

• Robustness: The watermark embedding method used determines how robust a fingerprint is to signal processing operations. A powerful watermarking algorithm must be utilised so that the fingerprinting technique can identify an unauthorised re-distributor even after the digital content has been modified by typical signal processing attacks.
• Collusion resistance: While digital fingerprinting can be used to identify a single adversary, it is also susceptible to collusion attacks mounted by several hostile purchasers. The colluders can attempt to determine which locations carry the fingerprint signal by comparing their different versions, wipe the data from those locations, and thereby generate a copy that cannot be traced back to any of them. As a result, a fingerprinting system must be designed to withstand collusion attacks.
• Quality tolerance: The visual and perceptual similarity of the fingerprinted content to the original should be high.
• Embedding capacity: The capacity determines the fingerprint length for each user. The binary string that makes up the fingerprint can be rather long, so a digital fingerprinting system should have sufficient embedding space to store an entire fingerprint.

Customers dislike standard fingerprinting systems because the owner of the content learns their identity when the fingerprint is embedded. This also allows an attacker to insert a customer's identity information into a piece of material without that customer's knowledge and then accuse them of disseminating the item unlawfully. The creation of anonymous fingerprinting technologies based on cryptographic techniques (such as homomorphic encryption or secure multiparty computation) is an effective deterrent. Buyer frame-proofness, traceability, collusion resistance, anonymity, non-repudiation, dispute resolution, and unlinkability are all expected from an extensive and trustworthy anonymous fingerprinting scheme (Qureshi et al. 2015).
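To make the tracing step concrete, the sketch below uses NumPy to give each buyer a pseudo-random fingerprint, embed one of them into a copy at low strength, and then attribute a leaked copy to the buyer whose fingerprint correlates most strongly with it. All parameters are illustrative, detection here assumes access to the original signal, and this toy does not implement collusion-resistant codes.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_buyers = 4096, 5

host = rng.normal(0, 1, n_samples)                              # stand-in media signal
fingerprints = rng.choice([-1.0, 1.0], (n_buyers, n_samples))   # one code per buyer

# The distributor embeds buyer 3's fingerprint at low strength into that buyer's copy.
alpha = 0.05
leaked_copy = host + alpha * fingerprints[3]

# Tracing: correlate the leaked copy against every buyer's fingerprint.
scores = fingerprints @ (leaked_copy - host) / n_samples
print("correlation per buyer:", np.round(scores, 3))
print("suspected buyer:", int(np.argmax(scores)))               # expected: 3
```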

LIMITATIONS, OPEN CHALLENGES, AND FUTURE RESEARCH DIRECTIONS

This section addresses the constraints and research concerns that are frequently encountered when developing blockchain-based multimedia copyright protection systems. Potential research directions are also indicated for future studies.

Limitations and Research Challenges of Content Protection Techniques

The points that follow summarise the drawbacks and open research difficulties of the multimedia content protection strategies described above:

• The security of encrypted data depends entirely on the encryption key.
• After the content has been decrypted, encryption techniques cannot stop a user from using and distributing the content illegally.
• To ensure that data stays safe while nevertheless being accessible to numerous users of a system, cryptographic keys must be carefully handled (e.g., during transmission, storage, or updating).
• The majority of DRM research is not interoperable, which means it does not provide customers with an effective choice; customers may therefore seek alternative methods of acquiring the content, such as peer-to-peer file-sharing programmes.
• Because content suppliers or makers of multimedia players must be aware of sensitive information regarding the DRM protection system in order for DRM systems to be interoperable, the danger of leakage is increased. A single leak (or "hack") in this situation might jeopardise not only one of many distribution routes but also the dissemination of all compatible DRM content.
• Incorrect use of DRM systems may result in a number of legal problems, including the use of monitoring tools to report and gather information about consumers' habits and preferences (such as the kind of content they enjoy, when they enjoy it, and even where, by accessing users' location information). If these data were sold to third parties or used for non-platform purposes, there could be major privacy consequences.
• Because these attributes compete with one another, establishing an acceptable balance between the robustness, capacity, and imperceptibility properties at the embedder's end of a digital watermarking system is a difficult task.
• Watermarking methods have evolved with little regard for complexity and security. The costs of the embedding and detection procedures add to the complexity of the watermarking system; to meet real-time needs, these costs should be kept to a minimum.
• The bulk of studies have assumed that a trustworthy third party is in charge of establishing the fingerprint and discovering the copyright violators. This trust implies the user's belief that the trusted entity will behave reliably in order to ensure security and anonymity. Solutions that avoid such a party carry considerable computational and communication overheads because they rely on at least one of the following demanding technologies: homomorphic encryption, bit commitment, or secure multi-party computation.
• When analysing watermarking and fingerprinting approaches, it is critical to consider both internal and external threats.

Future Research Directions

This section outlines possible research directions revealed by a thorough assessment of 18 blockchain-based multimedia content protection systems. Future studies should focus on the following research opportunities to solve the challenges indicated in the preceding subsections and to improve the usability of blockchain technology in copyright protection applications.

Making an Effective Blockchain-Based Framework to Meet the System Needs of Copyright Protection Applications

• Scalability: The blockchain's scalability issue can be addressed via off-chain approaches such as the Lightning Network (LN) for Bitcoin and Plasma for Ethereum (Poon and Dryja 2016), as well as on-chain alternatives such as SegWit for Bitcoin and Sharding for Ethereum (Xie et al. 2019). However, centralization (the existence of hubs) and security threats are the two key obstacles for LN, whilst long withdrawal times (7–14 days) and security issues are the two unresolved issues for Plasma. Similarly, Sharding (Kokoris-Kogias et al. 2018) has communication and security problems, and SegWit has problems with complexity, additional storage space, and network bandwidth. Future research on security, complexity, decentralization, performance, and communication mechanisms is therefore expected to be very active.
• Validation of a framework: Many blockchain-based copyright protection applications ignore implementation issues in favour of focusing solely on the technology's benefits. It is critical to develop a workable blockchain-based framework that considers both technical and implementation details, such as weighing the benefits and drawbacks of permissioned and permissionless systems before deciding on one of these solutions, and selecting the best consensus mechanism based on the requirements (such as transaction throughput, latency, the minimum transaction fee, centralization or decentralisation, and security, among others).
• Standardization: Recognized technology standards define requirements and processes that improve security, dependability, and efficiency. Our in-depth analysis demonstrates the need for a global standard that multimedia content creators, producers, and associated businesses can use to exchange fresh copyright protection ideas built on blockchain technology and incorporate them into current infrastructure. Similarly, in order to improve user experience, this standard should enable automatic conversion between multiple cryptocurrencies.
• Privacy-aware design: Future research should look into potential privacy-aware solutions that could secure the privacy of the parties involved in blockchain-based content protection application transactions (content owner, buyer, etc.). Because the data (such as facts about the content owner, participant public keys, pseudonyms, and copyright information, among others) is visible to everyone on the network, privacy and security rules should be established at the outset of these schemes.
• Unlinkability: To prevent linkability and potential identification, the blockchain's privacy leakage issue must be handled. In order to meet the requirements for anonymity, further research should look into the viability of incorporating anonymity technologies such as CoinShuffle (Ruffing, Moreno-Sanchez, and Kate 2014), which shuffles addresses; Zerocash (Sasson et al. 2014), which hides the payment's origin, destination, and amount; or differential privacy into these applications.
• Smart contract security: Formal security analysis should be performed on all potential security and privacy concerns (eavesdropping, DDoS attacks, or impersonation) affecting the smart contract. To prove their long-term usefulness, smart contract transactions must also technically be reversible. Additionally, in order to change or reverse a smart contract, the code must foresee both the modification's triggering event and its termination or extension. As a result, more research is necessary to address the issue of handling security and privacy attacks on a smart contract.
• Dispute resolution: The capacity to alter or amend the material or copyright information stored on the blockchain is eliminated, which might be a double-edged sword. The issue of settling copyright disputes under immutability might also be a fascinating research topic.

The Development of Multimedia Content Protection Systems to Support Blockchain Technology

• Design improvements: Traditional content protection measures (encryption, DRM, watermarking/fingerprinting) must be improved to enable seamless integration with blockchain technology; the fine-grained study of the examined cutting-edge systems reveals this. Concurrent key acquisition, key security, and key management for blockchain-based multimedia encryption systems could be the subject of future research. The transfer of access rights to consumers without the involvement of a trusted third party, the efficient completion of the transaction between the copyright provider and the client, and privacy-aware fine-grained usage management all necessitate further research into blockchain-based DRM systems. Furthermore, in order for blockchain-based copyright protection applications to be widely implemented, a number of ongoing research difficulties in blockchain-based watermarking and fingerprinting systems must be answered; low computational complexity and strong robustness against potential security threats are a few of these.
• Trustless systems: Entirely decentralised content protection solutions existed before the invention of the blockchain. Trustworthiness is another important benefit of blockchain, in addition to decentralisation. The blockchain-based content protection systems considered here rely on hybrid trust models that may still involve trusted users or third parties. Therefore, it is necessary to make full use of blockchain technology and develop truly trustless copyright protection solutions.
• Security issues: The suggested hybrid trust models for blockchain-based content protection systems take into account the presence of either a trusted third party or trustworthy users. As a result, to develop really trustworthy copyright protection solutions, blockchain technology must be properly leveraged.
• Promoting the adoption of blockchain in copyright applications: Adoption would take time, and all parties involved (such as copyright owners, multimedia producers, and buyers) would have to accept additional technology advancements and security assurances. A number of research efforts have not yet properly evaluated the costs and restrictions of establishing commercial content protection systems on blockchain.

CONCLUSION

This chapter seeks to provide an overview of blockchain-based content protection solutions. We developed a taxonomy of cutting-edge blockchain-based copyright protection methods based on blockchain technology's technical capabilities, the most popular content protection mechanisms, and performance requirements. Four widely used content protection measures were introduced at the outset of the chapter, along with some background information on blockchain technology. A thorough analysis of applications for copyright protection based on blockchain technology was then provided, and these schemes were contrasted against the specified taxonomy. Additionally, several significant research challenges related to blockchain technology and content protection strategies were covered, and several potential paths for future investigation were suggested. Researchers must take each of these aspects into account while developing and putting into practice a new blockchain-based content protection system. We hope this survey will serve as a primary resource for finding the most pertinent information about the fusion of content protection techniques and blockchain technology.

REFERENCES

Arnold, M., Schmucker, M., & Wolthusen, W. D. (2003). Techniques and Applications of Digital Watermarking and Content Protection.

Fan, C. (2020). Performance evaluation of blockchain systems: A systematic survey. IEEE, 8, 126927–126950.

Fotiou, N., & Polyzos, G. C. (2016). Decentralized name-based security for content distribution using blockchains. In IEEE Conference on Computer Communications Workshops (pp. 415–420). IEEE.

Kawase, Y., & Kasahara, S. (2017). Transaction-confirmation time for Bitcoin: A queueing analytical approach to blockchain mechanism. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 10591 LNCS.

Kim, A., & Kim, M. (2020). A study on blockchain-based music distribution framework: Focusing on copyright protection. In International Conference on ICT Convergence (pp. 1921–1925). IEEE. 10.1109/ICTC49870.2020.9289184

Kokoris-Kogias, E., Jovanovic, P., & Gasser, L. (2018). OmniLedger: A secure, scale-out, decentralized ledger via sharding. In IEEE Symposium on Security and Privacy (pp. 583–598). IEEE.

Ledwaba, L. P. I. (2021). Smart microgrid energy market: Evaluating distributed ledger technologies for remote and constrained microgrid deployments. MDPI, 10(6), 714.

Namasudra, S., Deka, G. C., Johri, P., Hosseinpour, M., & Gandomi, A. H. (2021). The revolution of blockchain: State-of-the-art and research challenges. Archives of Computational Methods in Engineering, 28(3), 1497–1515. doi:10.1007/s11831-020-09426-0

Nawari, N. O., & Ravindran, S. (2019). Blockchain technologies in BIM workflow environment. In Computing in Civil Engineering 2019: Visualization, Information Modeling, and Simulation - Selected Papers from the ASCE International Conference on Computing in Civil Engineering 2019 (pp. 343–352). ASCE. 10.1061/9780784482421.044

Poon, J., & Dryja, T. (2016). The Bitcoin Lightning Network: Scalable off-chain instant payments.

Qureshi, A., Megías, D., & Rifa-Pous, H. (2015). Framework for preserving security and privacy in peer-to-peer content distribution systems. Elsevier, 42(3), 1391–1408.

Qureshi, A., & Megías Jiménez, D. (2020). Blockchain-based multimedia content protection: Review and open challenges. Applied Sciences. MDPI.

Ruffing, T., Moreno-Sanchez, P., & Kate, A. (2014). CoinShuffle: Practical decentralized coin mixing for Bitcoin. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 8713 LNCS (Part 2), 345–364.

Sasson, E. B., Chiesa, A., Garman, C., Green, M., Miers, I., Tromer, E., & Virza, M. (2014). Zerocash: Decentralized anonymous payments from Bitcoin. IEEE. https://ieeexplore.ieee.org/abstract/document/6956581/ (accessed January 3, 2023).

Shih, F. Y. (2017). Digital Watermarking and Steganography: Fundamentals and Techniques (2nd ed.). Taylor and Francis. doi:10.1201/9781315121109

Stefan, K., & Fabien, A. P. (2000). Information Hiding Techniques for Steganography and Digital Watermarking.

Xie, J. (2019). A survey of blockchain technology applied to smart cities: Research issues and challenges. IEEE, 21(3), 2794–2830.

Yaga, D., Mell, P., Roby, N., & Scarfone, K. (2019). Blockchain Technology Overview. NIST.


Chapter 9

Integration of Blockchain and Mobile Edge Computing

Aarti Sharma
University Institute of Engineering and Technology, Thanesar, India

Mamtesh Nadiyan
National Institute of Technology, Kurukshetra, India

Seema Sabharwal
Government P.G. College for Women, India

ABSTRACT

This chapter begins with the fundamentals of blockchain and MEC. Integrating new technologies like blockchain and MEC is seen as a potential paradigm for managing the voluminous amounts of data produced by today's pervasive mobile devices and subsequently powering intelligent services. Blockchain technology can boost the safety of existing MEC systems through decentralized, immutable, secure, private, and service-efficient smart contracts. The underlying blockchains fall into three broad categories: public blockchains, consortium blockchains, and private blockchains. Moreover, this chapter discusses the classification of security threats and current defence mechanisms. Potential solutions to MEC's main security challenges are then discussed. Following that, the authors present a classification to assist developers of various architectures in selecting an appropriate platform for specific applications, as well as insights into potential research directions. Finally, the authors present key blockchain and MEC convergence features, followed by some conclusions.

DOI: 10.4018/978-1-6684-6864-7.ch009 Copyright © 2023, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


INTRODUCTION

As the internet of things advances, billions of devices are becoming connected to the network. There are more devices connected to the system than ever before, but it is the data volume, not the number of devices, that is growing exponentially. Due to the coordination between devices throughout the network, data traffic is increasing dramatically. According to estimates, by 2022 every person will produce about 2 GB of data per day (Bhat et al., 2020). This is a significant amount of data, but the data collected by devices will dwarf it: a connected aeroplane is estimated to generate five terabytes of data per day, a connected hospital produces three terabytes, a smart factory produces three petabytes, and an autonomous vehicle can produce four terabytes. Such data cannot simply be sent to the cloud for analysis as it is generated; even with a fast uplink, the available bandwidth cannot carry such a large volume of data. These factors encourage academics and businesses to develop next-generation cloud computing technologies such as Mobile Edge Computing (MEC).

This chapter first discusses the fundamental principles of blockchain and the MEC system. After that, the integration of blockchain and MEC systems is presented, including its requirements and importance. Furthermore, several key components based on blockchain technology are discussed. The chapter also discusses challenges and future research directions together with real-life applications, and outlines key convergence features of blockchain and MEC integration.

A. FUNDAMENTAL PRINCIPLE OF BLOCKCHAIN AND MEC SYSTEM

Over the years, cloud computing has evolved into an essential component of data processing. However, cloud servers deployed globally must process enormous amounts of data, and response times and QoS degradation grow with the user's distance from the cloud. Moreover, computing time is significantly affected by the performance of the user's device. As we move towards next-generation computing technologies, edge devices will be able to store, compute, and analyse data. A wide range of applications can be created using 5G networks and blockchain technology, including trade and finance, smart cities, smart hospitals, supply chain management, the internet of vehicles, the energy internet, digital asset management, and the industrial internet. A 5G network integrated with blockchain technology offers the following technical benefits (Mafakheri et al., 2018):

• Blockchain smart contracts can be used to control IoT devices via data capitalization methods. New studies have proposed using smart-contract-based transactions to exchange data commodities for the industrial IoT. Access to equipment can be controlled and scheduled via blockchain platforms. In IoT devices, authentication technologies can be implemented through blockchain over fibre and 5G networks ("Survey on Blockchain for Internet of Things," 2019).
• Each blockchain network can operate independently in different slices. To improve 5G network spectrum utilisation for blockchain data, slice technology can be combined with virtualization resource mapping or network function virtualization (NFV).
• Blockchain technology has the potential to be integrated into 5G networks and mobile edge computing (MEC) applications such as distributed resource allocation. To effectively reduce network congestion and maximise terminal node users' service quality experience, predefined rules are implemented on mobile edge computing networks using blockchain smart contracts.

BLOCKCHAIN-BASED 5G NETWORK

A substantial portion of computing power is required for blockchain, and MEC helps optimise blockchain deployment in 5G networks (Zhang et al., 2019). Figure 1 depicts a platform for network architecture based on 5G wireless networks and blockchain. The platform collects data in real time using IoT, and all IoT devices are linked to sensors via 5G networks. There are four layers to the platform (Xiong et al., 2018): device perception, data processing, the network core layer, and data preservation. Modules in the device perception layer monitor device positioning, energy status, device abnormalities, radio frequency identification (RFID), sensors, fingerprint identification, and face recognition. Through the data processing layer, the 5G mobile edge subsystem (MEC) provides distributed processing capability that collects data and accelerates its processing and analysis; by distributing task loads among core cloud computing resources, it effectively improves cloud responsiveness. High reliability, security, speed, wide coverage, low latency, and data redundancy are all provided by the network core layer. The data preservation layer validates the transaction by using the blockchain platform to match the contract to the agreed-upon trading rules. Finally, the data and events are uploaded to the blockchain platform, which can also perform 5G terminal device security scheduling. As an outcome, blockchain technology secures 5G networks (Wu et al., 2018).

Figure 1. Platform for network architecture based on a fifth-generation (5G) wireless network and blockchain

TRANSACTION PROCESS BASED ON BLOCKCHAIN

Once all available resources have been assigned and, in particular, after the edge clouds have completed their action, the intelligent terminals receive a notification and are free to continue processing transactions on the consortium blockchain network. Figure 2 shows the general layout of a blockchain platform optimized for MEC resource sharing. The RPS, third-party spectrum and computation management, identity authentication institutions, and other parts make up the blockchain. Consensus is reached through the use of smart contracts and the Solo ordering service. The architecture consists of a physical (PHY) layer at the lowest level and three logical layers above it: the application layer, the interface layer, and the blockchain layer. Programmes installed on the intelligent terminals are referred to as existing in the application layer, and the intelligent terminals use the interface layer's code to connect to the blockchain database. The blockchain layer is in charge of managing members, transactions, and contracts. Registration, authentication, authorization, encryption, and digital signatures are all part of member management's efforts to guarantee the safety and veracity of access users. Transaction management is primarily responsible for ensuring the secure and orderly transfer of resources between the two parties, in addition to recording transaction data to the global ledger, including the content of block management and the consensus process (Shae et al., 2017). Contract management usually refers to the process of encoding and automating the network integration of the many code components involved, such as requests for resources, allocation of those resources, monitoring of delivery, and so on. Lastly, the PHY layer is made up of the edge RPS clouds and the RPS themselves.

Figure 2. Blockchain platform architecture

As previously stated, the transaction information will be written to the blockchain platform’s global ledger. There are two methods for dealing with transactions based on trading characteristics: offline and online trading models. Some real-time and high-frequency trading is done using the offline trading paradigm, whereas the majority is done using the online trading model. What follows is a comprehensive analysis of both online and offline business methods. Last but not least, a delivery monitoring module identifies nodes that are unable to supply service because of an attack or some other reason.


WORKING OF BLOCKCHAIN

The blockchain is a distributed ledger that anyone can access. At its core, it is a distributed, shared, and immutable peer-to-peer ledger of all records (transactions) ever made and communicated between the participants. Miners constantly authenticate and confirm transactions using a consensus process, and the blockchain is made up of data blocks linked together through dependent hash values that have been timestamped and validated. Once timestamped and validated by miners, the transactions become irreversible (i.e., immutable).

As shown in Figure 3, a blockchain stores data in groups or segments known as blocks; a new block is added when the one before it is full. The blockchain is thus an immutable database that creates an irreversible timeline of data, and written data becomes an indelible part of this timeline that cannot be changed. Every node has a complete history of all data saved on the blockchain since its inception. Each block stores its own hash, the hash of the previous block, and a timestamp (Dorri et al., 2019). If an attacker tampers with the blockchain in any way or attempts to update the data in his own node, the hash changes, identifying the attacker's node as invalid. Because of hashing, asymmetric-key cryptography, and smart contracts, data saved on the blockchain is trustworthy and unchangeable.

Figure 3. Working of Blockchain
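As a rough illustration of the hash-linking just described, the minimal Python sketch below (an illustrative example, not taken from this chapter; the Block class and field names are hypothetical) builds a small chain in which each block stores its own hash, the previous block's hash, and a timestamp, so that tampering with any stored record invalidates every later block.

import hashlib
import json
import time

class Block:
    def __init__(self, data, prev_hash):
        self.timestamp = time.time()
        self.data = data
        self.prev_hash = prev_hash
        self.hash = self.compute_hash()

    def compute_hash(self):
        # The hash covers the timestamp, the payload, and the previous block's hash.
        payload = json.dumps({"t": self.timestamp, "d": self.data, "p": self.prev_hash},
                             sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

def is_valid(chain):
    # A chain is valid only if every stored hash still matches the block contents
    # and every block points at the hash of its predecessor.
    for i, block in enumerate(chain):
        if block.hash != block.compute_hash():
            return False
        if i > 0 and block.prev_hash != chain[i - 1].hash:
            return False
    return True

chain = [Block("genesis", "0")]
chain.append(Block("terminal A pays edge cloud B", chain[-1].hash))
chain.append(Block("edge cloud B delivers resources", chain[-1].hash))

print(is_valid(chain))        # True
chain[1].data = "tampered"    # an attacker rewrites one record
print(is_valid(chain))        # False: the stored hash no longer matches

The sketch deliberately omits consensus and networking; it only demonstrates why changing data in one node's copy immediately exposes that node as invalid.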

A smart contract is a piece of computer code that executes on a blockchain when certain criteria are met. It is used to complete a contract without the use of a middleman or the loss of time. Smart contracts, for example, could allow IoT devices to receive secure software upgrades. Now that we’ve established the context, let’s look at how edge computing and blockchain can be applied in real-world scenarios.

KEY COMPONENTS OF A BLOCKCHAIN TECHNOLOGY

Normally, a blockchain platform is formed by five components, as shown in Figure 4:
a. Distributed Ledger
b. Peer-to-Peer Network (P2P)
c. Consensus Mechanism
d. Cryptography
e. Virtual Machine

a. Distributed Ledger •

Definition

An electronic ledger is primarily a database that contains and is constantly updated with all transactions. It consists of a series of blocks, or groups of transactions, that are cryptographically linked together. That is to say, the cryptographic identities

from the prior block will be incorporated into the subsequent block. So, the integrity of the whole chain depends on each block, and problems in any of the blocks that came before it will affect the integrity of all the blocks that came after it (Novotny et al., 2018). •

Highlights

A ledger eliminates the need for centralized processing and validation of transactions. When the stakeholders reach an agreement, data records are only stored in the ledger. Each participant will receive one copy of the ledger with all updated records. A ledger provides a verifiable and trackable history of all information stored on a specific data set in chronological order. b. Peer-to-peer network – P2P •

Definition

A P2P network is a decentralized communication paradigm that operates independently of servers and other intermediary nodes. Each participant in a peerto-peer (P2P) network can take on the role of either a client or a server. Assuming all goes well with the network launch, each user will have access to their own personal copy of the distributed ledger. After that, you can use it as a file-sharing and storage system without resorting to an external provider. •

Highlights

On a blockchain network, each node can act as both a client and a server to other nodes in order to provide and control data. Decentralizing database and management rights removes the intermediary in traditional models, allowing members to exchange information directly with one another. All data records are copied by all nodes to ensure system continuity and to limit single point failures (SPOF) and denial of service (DoS). Improving data and validation method availability aids the system in avoiding information loss or inability to verify. •

Classification
◦ Unstructured P2P Network
◦ Structured P2P Network
◦ Hybrid P2P Network

Figure 4. Key components of a blockchain technology

c. Consensus Mechanism •

Definition

The consensus mechanism specifies sets of rules that must be followed by nodes in the peer-to-peer network in order for them to agree on which transactions are legitimate and eligible for inclusion in the blockchain. The consensus mechanism determines the current state of the blockchain. •

Highlights

To achieve the desired agreement on a single data value or a single network status, ensure that the entire system is fault-tolerant. Make it possible for all participants to contribute to the safety and security of the blockchain network. Prevent doublespending on the Blockchain platform for cryptocurrency transactions. •

Classification

Each type of Blockchain will have a different consensus mechanism. Currently, there are two types of consensus mechanisms most commonly used:





• Proof-of-Work (PoW): The PoW algorithm is operated by miners (nodes) competing to solve a cryptographic problem in order to generate the next block. The first miner to find the solution reaches consensus, is allowed to add its block to the blockchain network, and receives the corresponding reward. However, these problems are often complex and require miners to have high computing power.
• Proof-of-Stake (PoS): To simplify the mining process, proof of stake is used when tokens need to be verified. The PoS rule requires miners to prove ownership of a percentage of the stake in order to perform the corresponding percentage of mining (validation) activity. This saves energy and operating costs.

d. Cryptography



Definition

This aspect ensures the safety, integrity, and veracity of information stored in the ledger or transmitted between nodes. Cryptography has developed encryption methods that are impossible to break by building on a base of mathematics (particularly probability theory) and game theory knowledge. •

Classification There are two main types of encryption methods:



Symmetric Encryption: It is a form of encryption to secure data, in which the encryption and decryption of data use the same key, as shown in Figure 5.

Figure 5. Symmetric encryption

Since the key is used to decrypt the data, it should be kept secret. Therefore, when using a symmetric key, the sender and receiver need a mechanism to exchange keys before exchanging data. •
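As a small illustration of the symmetric case, the sketch below uses the Fernet recipe from the third-party Python cryptography package (an assumption made for illustration; the chapter does not prescribe any particular library). The single key object must be shared between sender and receiver, which is exactly the key-exchange problem noted above.

from cryptography.fernet import Fernet

key = Fernet.generate_key()     # this one key must be exchanged secretly beforehand
f = Fernet(key)

token = f.encrypt(b"resource-sharing transaction payload")   # sender side
plain = f.decrypt(token)                                      # receiver side, same key
assert plain == b"resource-sharing transaction payload"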

Asymmetric Encryption: is a form of encryption to secure data in which the encryption and decryption of data use two different keys, as shown in Figure 6. A public key is the encryption key, which can be widely distributed and used as a form of authentication. A private key, also known as a decryption key, is required for security purposes in order to preserve the privileges of the receiver.

Figure 6. Asymmetric encryption



Relevant Techniques

Blockchain address: is represented as a long string of alphanumeric characters, which is publicly shared so other users can send transactions. Each blockchain address will be generated from a public key. This public key is generated from a private key that serves as a mechanism to prove ownership of the public key (or, in other words, the blockchain address). When performing an interactive transaction with the Blockchain network, the user will use the private key to sign a digital signature, proving that the user is the owner of the valid Blockchain address in the transaction (Brincat et al., 2019). Digital Signature: is an encrypted string of characters sent with the original data of the transaction on the blockchain platform. To create a digital signature, the user will use a private key to encrypt (thus creating a digital signature) the data contained in the transaction sent to the recipient. Remember that the secret key used for this encryption is

the same secret key that generates the sender’s Blockchain address. The digital signature will change if the transaction data used for encryption changes, or if the same data is used but with a different user’s private key. Hash function: is the process of converting an unlimited amount of input data and creating a fixed length of output data. Hash functions are often used to protect the integrity of data. Users can verify the validity of a transaction by comparing the hash value of the transaction on the application with the hash value of the transaction on the block explorer.
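The interplay between the private key, the digital signature, and the hash function can be sketched in a few lines of Python. The example below assumes the third-party ecdsa package and the SECP256k1 curve purely for illustration; the chapter itself does not mandate a specific curve or library.

import hashlib
from ecdsa import SigningKey, SECP256k1, BadSignatureError

sk = SigningKey.generate(curve=SECP256k1)   # private key, kept by the owner
vk = sk.get_verifying_key()                 # public key, shared openly

# A blockchain-style address can be derived by hashing the public key.
address = hashlib.sha256(vk.to_string()).hexdigest()[:40]

tx = b"caching provider X serves requester Y for 10 minutes"
signature = sk.sign(tx)                     # signing with the private key

try:
    vk.verify(signature, tx)                # anyone can verify with the public key
    print("valid transaction from", address)
except BadSignatureError:
    print("signature rejected")

# Changing the transaction data invalidates the signature, as described above.
try:
    vk.verify(signature, b"tampered transaction")
except BadSignatureError:
    print("tampered data detected")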

e. Virtual Machine •

Definition

A computer system can be “run” inside a programme called a virtual machine. CPU, RAM, and VM space are all included. A virtual machine (VM) performs the same functions as a traditional computer, such as storing information, running software, and contributing to the operation of a blockchain network. •

Ethereum Virtual Machine (EVM)

The Ethereum virtual machine is used to ensure that transactions processed on completely different environments and computer configurations will always create the same results on the Ethereum platform. Essentially, an EVM is a machine that processes smart contracts running on Ethereum. Nodes participating in the Ethereum system process transactions received through the EVM. Any transaction that wants to change the status of the network must go through the EVM process. EVM is just a virtual machine, but many copies are made. Each node participating in the execution of the same transactions owns a copy of the EVM to ensure the consistency of the computation.

MEC-BASED SYSTEM
• Multi-access edge computing, as defined by ETSI, is an IT service environment that offers cloud computing capabilities at the edge of cellular networks and, more generally, at the edge of any network (Wen et al., 2018).
• It is an IT service environment and cloud computing concept that takes place at the edge of a cellular network, known as mobile edge computing (MEC).

• Currently, MEC is also referred to as Multi-Access Edge Computing (MAEC). Cloud computing (CC) is enhanced by MEC when it is brought to the edge of the network. Previously called "cloudlet technology", this technology is used at the Internet's edge.
• For the telco cloud, MAEC is the next logical step in edge computing. Compute capability is delivered at the edge of the network, literally on the same infrastructure as the network itself.
• MEC provides improved offloading techniques and is characterised by low latency and high bandwidth. Mobile edge computing allows applications and services to be deployed, and content to be stored and computed, in close proximity to mobile users through a highly distributed computing environment.

Architecture of MEC

A mobile network architecture was developed in 2014 by an industry specification group (ISG) within ETSI (the European Telecommunications Standards Institute) (Sabella et al., 2016). MEC is the architecture outlined by ETSI, as shown in Figure 7. •

Mobile edge host (ME host)

The object consists of two main components: a) Mobile Edge Platform (MEP); and b) Virtualization Infrastructure Manager (VIM). •



MEP: Mobile Edge uses MEP services to run apps and allows MD apps to access MEP services. Services for ME are provided by the platform as well as by the apps. MEP’s main function is to provide data transit between applications and the network. VIM: It can handle the virtual resources of ME apps. VIM handles tasks that include allocating resources, storing data, and managing networks. As a result of this systematization, VIM will track software images for quicker access to applications, and those images will be stored in VIM for faster access. In the event that those resources cannot function, tracking them is imperative. A server collects data from resources during fault maintenance and performance analysis. The goal is to virtualize resources by focusing on infrastructure. ◦◦ ME applications (ME Apps)

ME host allows ME apps to run as virtual machines on top of a virtualized infrastructure. Mobile apps connect to the MEP through a locus point to make use

of the platform’s services. The resources and services ME apps require as well as their latency constraints are described. •

ME platform manager (MEPM)

The MEP element and ME app lifecycle management are both under the purview of MEPM, which functions as a single entity. In addition, there are standards for ME apps and service management demands. The management of an application’s lifecycle includes methods for app deployment and termination as well as suggestions related to application events. In order to manage services and policies, it is necessary to have configuration, authorization, and traffic rules in place. These three things are used to fix data transfer problems. Rules are organized, traffic is filtered, applications are supported, and areas are reconfigured using the locus points that connect the MEP and the MEPM. Furthermore, they consistently plan application relocations and approve changes. The management of application lifecycles, app policies, and the upkeep of current information on various ME services offered in the ME system are all covered by the locus of control between the MER and the MEPM. •

Mobile edge Ruler (MER)

It plays a crucial part in the ME model. It is capable of observing how the entire ME network utilizes its resources and capabilities. As a result of its inclusion of ME hosts, the utilization of the resources and services that each host can access, initiated apps, and the topography of the NW, MER maintains data on the entire ME model. Moreover, it manages ME apps and the techniques used to support them. It includes checking the legitimacy and integrity of the application, validating the policies, and collecting the most widely used apps. The steps for granting VIM permissions and how to handle specific applications are laid out in the MER. •

Customer Facing Service gateway (CFS)

Third parties can use it as an access point. The CFS portal oversees the selection, provisioning, and sequencing of ME apps. The creator parties use this gateway to access their generated ME apps that are displayed in the operator’s ME system. The CFS gateway gives users the option to choose apps based on their interests and gives them access to the apps they have chosen. Additionally, this gateway is capable of providing customers with business-related information. •

The Client's application lifecycle management proxy (Client's app LCM proxy)

It is an LCM proxy feature that has to do with end users and applications. A request for facilities is used to initiate and end applications. The LCM proxy enables the transfer of applications from the ME system to the cloud server or from the cloud server to the ME system. Earlier, for promotional actions, this proxy was used to authorize requests. When and where a user wants to gain access to a particular network, it can only be accessed through mobile permissions.

REQUIREMENTS OF INTEGRATION OF BLOCKCHAIN AND MOBILE EDGE COMPUTING a) Authentication: It’s crucial to verify these entities’ authentication in edge computing environments with numerous interacting service providers, infrastructures, and services. Even though they are members of separate security domains, this is required to build safe communication routes between the components of edge ecosystems. b) Adaptability: As technology advances, so do the number of devices and the complexity of apps, particularly blockchain applications used on devices with limited resources. In order to allow objects or nodes to freely connect to or leave the network, the integrated system of blockchain and edge computing should be able to accommodate a fluctuating number of end users and jobs with varied levels of complexity. c) Network Security: Due to their heterogeneity and attack sensitivity, edge computing networks are very concerned about network security. Blockchain needs to be integrated into edge computing networks in order to replace cumbersome key management in some communication protocols, make massively distributed edge server maintenance simple, and improve control plane monitoring to thwart malicious behavior (Luo et al., 2020). d) Data integrity: The upkeep and assurance of data’s accuracy and consistency across its entire life-cycle are known as data integrity. Utilizing the wealth of edge computing’s distributed storage resources, duplicating data over a network of edge servers, and using a blockchain-based architecture for data integrity service in a completely decentralized environment significantly reduce the likelihood of data integrity violations. Therefore, both data owners and data consumers need a more trustworthy way to verify the integrity of their data. e) Verifiable Computation: Verifiable computation makes it possible to outsource computation to a few unreliable clients while preserving accurate results. Without being limited by the scalability of blockchain, edge computing outsourcing can scale to massive quantities of computations.

INTEGRATION OF BLOCKCHAIN AND MEC Many MEC devices openly share their resources or content with no regard for personal privacy. The combination of blockchain and MEC has the potential to create a secure and private MEC system (Bhattacharya et al., 2019).

Blockchain for Edge Caching The rapid progress of IoT and wireless technologies is driving the exponential rise of data and content. The MEC strategy places dispersed computing and caching resources in close proximity to users to facilitate enormous content caching and accommodate the low-latency needs of content requesters. Figure 8. Blockchain-empowered secure content caching

Therefore, data traffic and latency on backhaul networks can be reduced by processing and caching material at the network edge. A device with sufficient cache resources can be called a caching provider, increasing the caching capacity of the network edge because cutting-edge devices have limited caching resources. However, because content frequently contains sensitive personal information about the creator, devices may be wary of storing it with an untrustworthy caching provider. Blockchain's secure peer-to-peer communication between nodes makes it an attractive option for edge caching. In Figure 8, we present our proposed blockchain-based, distributed content caching architecture. Devices can play two roles in this content caching system: A caching requester is a device with limited cache capabilities

that requests access to a significant amount of content, whereas a caching provider is a device with ample caching resources. The placement of edge servers on base stations is deliberate. If a piece of material is successfully cached by a given caching provider, the caching requester must create a transaction record and submit it to the closest base station. Local transaction data is collected and managed by base stations. Following the conclusion of the base station consensus procedure, the transaction data is organised into blocks and stored in each base station indefinitely. These are the specific procedures:
• System initialization: Privacy can only be guaranteed if each component registers a valid identity during system initialization. Edge caching blockchains use asymmetric cryptography and a digital signature mechanism based on elliptic curves for system initialization. The only way for a gadget to get its hands on a real ID is for someone to verify its identification first. The identity is made up of a public key, a private key, and a certificate.
• Roles in edge caching: Depending on their current needs and their anticipated caching needs, devices will take on the roles of caching requesters or caching providers. Caching providers can be mobile devices that have extra caching resources and offer caching services to caching requesters.
• Caching transactions: Requesters provide information to the nearest base station about the amount of cached resources and the estimated serving time. The caching requests that are received by the base station are then sent out to the local caching providers. Providers of caching services update their customers on their current and future caching infrastructure at base stations. Each base station then uses a deep reinforcement learning algorithm to divide bandwidth between the base station and the devices, match caching supply and demand pairings among the connected devices, and discover the caching resources available from each caching provider.
• Building blocks in a caching blockchain: Over a predetermined time period, base stations gather, encrypt, and digitally sign all transaction data to guarantee their veracity and accuracy. Each block in the consortium blockchain stores the cryptographic hash of the preceding block, and the blocks themselves are used to organise the records of transactions. For each new block, the consensus algorithm (like PBFT) checks its validity. After reaching agreement, one node will take the reins and create new blocks for the network. Due to broadcasts, each base station may see the full history of transactions and compete to take the lead. Prior to beginning block construction, the consortium blockchain elects a leader who will serve in that capacity until the consensus phase is complete.
• The consensus process: The newly generated block is then broadcast to all of the nodes in the network so that the leader can check it and audit it. A consensus

is reached on the validity of the generated block by all of the base stations, and this information is then broadcast. After analysing the audit results, the leader sends the block back to the hubs for a second check. The audit results and related signatures will be used to identify compromised base stations and hold them accountable. In addition to increasing the safety of edge networks, blockchain and MEC could also make it easier for untrusted parties to share resources and cache data at the network’s periphery.

NETWORK ARCHITECTURE BASED ON INTEGRATED MEC AND BLOCKCHAIN Figure 9 demonstrates the MEC and blockchain integration architecture, which is focused on a white paper published by a Chinese blockchain and MEC technology team on 5G and blockchain integration development and application. There are three layers to the architecture: (1) Allocating, scheduling, and managing computing, storage, and networking resources are the responsibilities of the bottom MEC IaaS layer (F. Wang et al., 2017). It also schedules 5G base station fragments and can provide server resources for external blockchain systems. A separate public blockchain can be created for each piece of the puzzle. (2) The blockchain platform enhances the MEC platform by providing fundamental block-chain support functionalities, including block storage, smart contracts, and consensus, while the PaaS (Platform as a Service) layer of the MEC platform provides network and professional capacity. The MEC platform’s capabilities include an open subsystem that enables services for both carrier-level and higher-level applications. When it comes to the blockchain platform’s resources, everything is scheduled and allocated centrally. Thirdly, the system’s application service capability is contained within the SaaS (Software as a Service) layer.

Figure 9. Network architecture platform based on integrated mobile edge computing (MEC) and blockchain platform components

Figure 10 displays a 5G network deployment architecture with distinct platforms for MEC and blockchain. This situation is comparable to typical blockchain system deployment, but in order to provide smooth 5G connectivity, it requires the usage of unique devices to access the business channel. In order to establish a connection between the MEC and the blockchain platform, the modem analyses modulated data transmitted from the blockchain platform to the 5G network.

Figure 10. Platform based on MEC and blockchain as independent components

Among the features provided by the MEC are a routing system, the location of terminal devices, and sensor data feedback. The blockchain allows MEC to verify terminal devices and record significant events on the chain using an authenticity technique. MECs can control blockchain platform access permissions through blockchain platform control.

CHALLENGES OF INTEGRATION OF BLOCKCHAIN AND MOBILE EDGE COMPUTING
a) Security and privacy: Outsourcing services at the edges poses extra security and privacy problems in the combined blockchain and edge computing system. In the worst-case scenario of a node failure, off-chain solutions are still problematic when it comes to lost transactions (Yang et al., 2019; Sharma et al., 2021).
b) Self-Organization: With the expansion of edge computing nodes, network and application management will become a significant concern. A coordinated

attack could result from self-organization. Attackers may, for example, result in system slowdowns as well as reduced network connectivity and data transfer rates, or they may claim to have large amounts of data when they actually have only a small amount compared to the amount relayed to them. c). Resource Management: Numerous situations and much research have been done on resource management as the dominant strategy in networks. A multicriteria scheduler, which gathers the computing resources and schedules the activities utilizing various techniques, is necessary to distribute jobs to run on a set of computing resources properly. The development of such a multicriteria scheduler for blockchain-based integrated processing, storage, and network service optimization is a problem. d). Function integration: Edge computing integrates several infrastructure components, including servers, networks, and platform types. Data and resource management for several apps on multiple diverse platforms is difficult. In terms of data management, many storage servers use a wide variety of operating systems.

A REAL-LIFE APPLICATION OF BLOCKCHAIN AND MOBILE EDGE COMPUTING Although Bitcoin and Ethereum are the two cryptocurrencies most often associated with blockchain, it may be applied to a wide range of applications. Healthcare, industrial IoT, smart cities, and smart home automation are some sectors that gain from blockchain’s security characteristics and decentralized nature. Let’s quickly examine how blockchain technology and edge computing improve the security of patient medical records in a hospital context. A patient’s health information is taken from wearables and stored in an electronic medical card. Then, this data can be delivered encrypted to edge servers. For increased data security and secrecy, edge servers store this data on the edge blockchain. Data from the edge can be accessed by patients and authorized hospital staff considerably more quickly than data from the cloud. Any data that is not necessary for real-time analysis is sent to the cloud by edge servers. We can create a distributed and secure edge computing architecture that can support the integrity and safety of IoT data throughout its lifetime by integrating edge computing with blockchain. The adoption of edge computing use cases based on blockchain technology will increase along with the number of applications and their demand for secure, real-time data access (Y. Wang et al., 2019).

CONCLUSION We have presented an in-depth exposure to edge computing and blockchain technologies, highlighting the need to expand edge computing’s applicability and the ways in which cutting-edge technologies like blockchain, AI, IoT, and ML can contribute to the creation of a safe edge computing paradigm. Blockchain, smart contracts, consensus mechanisms, AI, and ML algorithms are all predicted to greatly improve the quality of service (QoS) and security (security) of edge computing in the near future. We also talked about many blockchain-based ways to solve the most common security problems in edge computing. We concluded by summarizing our findings and outlining numerous unanswered research questions and problems in the area of delivering effective, scalable, and trustworthy edge computing security services. Edge-blockchain frameworks for widespread cooperation, reconfiguration, energy efficiency, scalability, and adaptability require further study. There is little doubt that if we are to realize the dream of secure and reliable edge computing networking services, the gaps between edge computing, blockchain architectures, and other protocols must be closed. We intend to investigate how to use blockchain-based solutions to address the security and privacy issues that arise with edge computing and its associated applications in the future. Lightweight consensus algorithms, distributed trust approaches, and distributed throughput management strategies are all useful tools for optimizing blockchain topologies in the face of IoT edge device and app resource constraints.

REFERENCES

Bhat, S., & Sofi, I. (2020). Edge computing and its convergence with blockchain in 5G and beyond: Security, challenges, and opportunities. IEEE. https://ieeexplore.ieee.org/abstract/document/9253517/

Bhattacharya, P., Tanwar, S., Shah, R., & Ladha, A. (2019). Mobile edge computing-enabled blockchain framework: A survey. Lecture Notes in Electrical Engineering, 597, 797–809. doi:10.1007/978-3-030-29407-6_57

Brincat, A., Lombardo, A., Morabito, G., et al. (2019). On the use of blockchain technologies in WiFi networks. Computer Networks, 162, 1–9.

Dorri, A., Kanhere, S. S., & Jurdak, R. (2019). Blockchain in internet of things: Challenges and solutions, 162, 106855.

Luo, C., Xu, L., & Li, D. (2020). Edge computing integrated with blockchain technologies (pp. 268–288). Springer.

Mafakheri, B., & Subramanya, T. (2018). Blockchain-based infrastructure sharing in 5G small cell networks (pp. 313–317). IEEE. https://ieeexplore.ieee.org/abstract/document/8584920/

Novotny, P., Zhang, Q., Hull, R., et al. (2018). Permissioned blockchain technologies for academic publishing. Information Services & Use, 38(3), 159–171.

Sabella, D., Vaillant, A., Kuure, P., et al. (2016). Mobile-edge computing architecture: The role of MEC in the Internet of Things. IEEE Consumer Electronics Magazine, 5(4), 84–91.

Shae, Z., & Tsai, J. (2017). On the design of a blockchain platform for clinical trial and precision medicine. In 37th International Conference on Distributed Computing Systems (pp. 1972–1980). IEEE.

Sharma, A. (2021). Future aspects on MEC (Mobile Edge Computing): Offloading mechanism (pp. 34–39). IEEE.

Wang, F., Xu, J., Wang, X., & Cui, S. (2017). Joint offloading and computing optimization in wireless powered mobile-edge computing systems. IEEE Transactions on Wireless Communications, 17(3), 1784–1797.

Wen, Z., Yang, K., Liu, X., Li, S., et al. (2018). Joint offloading and computing design in wireless powered mobile-edge computing systems with full-duplex relaying. IEEE Access, 6, 72786–72795.

Wu, Y., Chen, X., Shi, J., Ni, K., Qian, L., Huang, L., et al. (2018). Optimal computational power allocation in multi-access mobile edge computing for blockchain. Sensors, 18(10), 3472. doi:10.3390/s18103472 PMID:30326649

Xiong, Z., Zhang, Y., & Niyato, D. (2018). When mobile blockchain meets edge computing. IEEE Communications Magazine, 56(8), 33–39.

Yang, R., Yu, F., & Si, P. (2019). Integrated blockchain and edge computing systems: A survey, some research issues and challenges. IEEE Communications Surveys & Tutorials, 21(2), 1508–1532.

Zhang, K., Zhu, Y., Maharjan, S., & Zhang, Y. (2019). Edge intelligence and blockchain empowered 5G beyond for the industrial Internet of Things. IEEE Network, 33(5), 12–19.

Chapter 10

A Review on Spatial and Transform Domain-Based Image Steganography

Divya Singla
Panipat Institute of Engineering and Technology, India

Neetu Verma
Deenbandhu Chhotu Ram University of Science and Technology, India

Sakshi Patni
Panipat Institute of Engineering & Technology, Panipat, India

ABSTRACT

Steganography is a secret way of communicating that hides the very existence of information. It conceals the message without letting anyone know that it exists. This chapter gives a brief overview of various image steganography techniques in the spatial and transform domains, with their advantages and disadvantages. The characteristics used to measure the performance of an image steganography technique are given as well. It also introduces the idea of drawing out embedded data from the cover object, called steganalysis.

INTRODUCTION

Digital image steganography contains two words: digital image and steganography. The term digital image is described as an image containing a finite number of elements, generally called pixels, each of which has a digital value of one or more bits at a particular location. The term steganography refers to the art of concealing

a message in any medium. It originates from two Greek words: steganos, meaning covered, and graphia, meaning writing. Digital image steganography refers to concealing a message in a digital image to hide its presence from the unwanted user. Steganography aims to keep the very existence of the message away from inquisitive eyes. We can use any digital medium as cover media, such as images, text files, audio, or video, to hide and carry information from one place to another. It allows two or more people to communicate silently with each other, leading to the protection of secure data. Usually, images are preferred as cover media because the human eye cannot differentiate between the values of two adjacent pixels (e.g., 244 and 245); both intensities appear to be the same. A grayscale image consists of pixels with intensity values ranging from 0 to 255. This variation in pixel intensity can be exploited to insert secure data without providing any clue to the viewer's eye: the human eye cannot differentiate between the original image and the message-embedded steganographic image. In this chapter, we will cover the history of steganography, how steganography differs from cryptography, the role of steganography, and various techniques used to hide data in an image.

Steganography is an ancient practice, used in several forms for thousands of years to keep information hidden and secure. For example:
1. The steganographic technique was first used around 440 B.C. in Greece. Histiaeus, a Greek ruler, used steganography to send secret messages through an enslaved person: he shaved the person's head, tattooed the message on the scalp, and then waited for the hair to grow back so that the message was hidden. The receiver of the message reversed the process by shaving the enslaved person's head to read the hidden message, and then replied in the same or a different form of steganography.
2. During the American Revolutionary War, both the British and American forces used invisible ink to pass secret communication. They made invisible ink from familiar sources, like vinegar, fruit juices, milk, and urine, for the hidden text. Heat or light was required to decipher these hidden messages.
3. Null ciphers were also used in secret message communication. These were unencrypted messages containing real messages embedded in ordinary-looking text, and the hidden messages were hard to spot. For example: Fishing freshwater bends and saltwater coasts rewards anyone feeling stressed. Resourceful anglers usually find masterful leapers fun and admit swordfish rank overwhelming anyday. By taking the third letter of each word, the following message emerges:

Send Lawyers, Guns, and Money

4. Today, government agencies use steganography to exchange messages secretly and reduce the risk of information leakage. Departments such as ethical hacking, banking, forensic examiners, and security agencies, and even spies, use digital steganography methods for information exchange.
5. The concept of digital steganography has gained importance in the past few years, as some recent illegitimate uses of steganography have also come to notice. In May 2011, in Berlin, a memory card was recovered from a suspected Al-Qaeda member that contained more than 100 text files hidden in a pornographic video, with information about future operations of Al-Qaeda (Robertson, Cruickshank & Lister, 2012). In October 2018, in Japan, banking trojan horses were delivered to customers using steganography (Micro, 2018).

With the adoption of wireless networks and digital media, the demand for steganography has grown fast in the last few years, as it ensures security by hiding the presence of information during transmission. We will discuss the steganographic techniques in detail in further topics. Before that, we will see how steganography is different from cryptography.

CRYPTOGRAPHY

Cryptography is a process that transforms information (a message) into a form that cannot be understood by an unintended user. It is a technique used to protect information; it does not hide the presence of a message. Some terms related to cryptography are:
Plain text: The original message or information that needs to be communicated is called plain text.
Cipher text: The transformed or coded form of the message is called cipher text. It is a scrambled form of the original text produced with the help of a secret key.
Encryption: The method of converting plain text into cipher text is called encryption. The encryption algorithm needs two things for the conversion:
1. The message (plain text)
2. The encryption key

So, with the plain text X and the encryption key K, the encryption algorithm produces the cipher text Y:

Y = E(X, K)

Decryption: The process of recovering the plain text from the cipher text is called decryption. The intended recipient, after receiving the cipher text Y and being in possession of the key K, is able to reverse the process:

X = D(Y, K)

An intruder having Y, but not having the key K, is not able to recover the message. Cryptographic algorithms are based on two general principles:
1. Substitution
2. Transposition
In substitution, the letters of the plaintext are replaced by other letters, symbols, or numbers. In transposition, the letters of the plaintext are rearranged: some sort of permutation is applied over the plaintext to convert it into an unintelligible form.
Based on the number of keys, cryptography is further divided into two categories:
1. Symmetric Key Cryptography: The same key is used by both the sender and the receiver for encryption and decryption of the message. Thus, the key needs to be exchanged secretly between the sender and the receiver.
2. Asymmetric Key Cryptography: Different keys are used by the sender and the receiver, i.e., one key is used by the sender for encryption and a different key is used by the receiver for decrypting the message; this is therefore referred to as two-key cryptography, public-key cryptography, or asymmetric key cryptography. The two keys used for encryption and decryption are termed the public key and the private key. The private key is known only to the owner. The sender encrypts the message with the recipient's public key and sends the message over the network; the intended receiver gets it and decrypts it with its own private key.
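A minimal Python sketch of the asymmetric case is shown below, assuming the third-party cryptography package and RSA with OAEP padding purely for illustration (the chapter does not prescribe a particular algorithm): the sender encrypts with the recipient's public key U, and only the holder of the private key P can recover the plaintext.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Recipient B generates the key pair: P (private) stays with B, U (public) is shared.
P = rsa.generate_private_key(public_exponent=65537, key_size=2048)
U = P.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

X = b"Hello, I am here"
Y = U.encrypt(X, oaep)        # Y = E(X, U), computed by sender A
X_back = P.decrypt(Y, oaep)   # X = D(Y, P), computed by recipient B
assert X_back == X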

Let us suppose some sender A produces a plaintext message X for recipient B. B has a pair of keys: a public key U and a private key P. The key U is known publicly, and the key P is known only to B. With the message X and the public key U, A produces the ciphertext Y with an encryption algorithm:

Y = E(X, U)

Here, E is an encryption technique used to convert the message into an unreadable form. The intended recipient, after receiving the cipher text Y and holding his private key P, is able to reverse the transformation:

X = D(Y, P)

Here, D is a decryption technique used to convert the message back into a readable form.

Table 1. Difference between cryptography and steganography

Criterion | Steganography | Cryptography
Objective | Conceals the presence of the message | Hides the content of the message
Imperceptibility | Yes (hard to identify the presence by the human visual system) | No (the output is directly visible)
Output type | Stego-object; depends on the medium (text, image, audio, video) | Cipher text
Key requirement | Technique dependent | Must
Extraction complexity | Complex | Easy to detect the presence of communication but laborious to decode the cipher text
Challenges | High hiding capacity, hiding the presence, robustness, tamper resistance | Key management, complexity of the encryption algorithm
Example | - | Plain text: Hello, I am here; Cipher text: z#5@!f Y WV td! ^

CHARACTERISTICS OF A STEGANOGRAPHY IMAGE

The image is considered suitable for steganography based on the following four characteristics (Stoyanova, 2015; Roy et al., 2016; Kadhim et al., 2019).
(a) Capacity: It is the payload (portion) of confidential data that can be successfully embedded into a cover image without leaving a visible impression behind.
(b) Fidelity or imperceptibility: It is defined as the discernible characteristic of the steganographic image obtained after inserting the message into the cover image. A high fidelity value indicates better visual quality of the stego image, and thus it is one of the fundamental needs of any image steganography technique. The metric used to measure fidelity is the Peak Signal-to-Noise Ratio (PSNR), shown in equation (1) and measured in decibels (dB). PSNR measures the degree of distortion in the steganographic image after embedding the message, as compared to the original cover image. The formula used to calculate PSNR is defined as follows:

PSNR = 10 log10 (MAX^2 / MSE) dB      (1)

where MAX is the maximum pixel value (MAX = 1 for a grayscale image, MAX = 255 for an RGB image). To calculate the value of PSNR we need the Mean Square Error (MSE), as shown in equation (2):

MSE = (1 / (M × N)) Σ_{i=1}^{M} Σ_{j=1}^{N} (x_ij − y_ij)^2      (2)

where M and N are the horizontal and vertical pixel dimensions of the cover image, and x_ij and y_ij are the pixel values in the cover and the stego image, respectively.
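A direct NumPy translation of equations (1) and (2) is sketched below (an illustrative helper, not code from the chapter), computing MSE and PSNR between a cover and a stego image stored as 8-bit arrays.

import numpy as np

def psnr(cover, stego, max_value=255.0):
    # Equation (2): mean squared error over all M x N pixels.
    diff = cover.astype(np.float64) - stego.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")   # identical images
    # Equation (1): PSNR in decibels.
    return 10.0 * np.log10((max_value ** 2) / mse)

cover = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
stego = cover.copy()
stego ^= np.random.randint(0, 2, cover.shape, dtype=np.uint8)   # flip some LSBs
print("PSNR after LSB-style changes: %.2f dB" % psnr(cover, stego))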

(c) Security: The level of security is defined by measuring the relative entropy between the original image and the stego image. A lower relative entropy between the original and stego images contributes to a higher level of security, as the changes are undetectable to the naked human eye. If the probability distributions of the cover and stego image are denoted by P_C and P_S respectively, then the relative entropy is calculated as shown in equation (3):

D(P_C || P_S) = Σ P_C log (P_C / P_S)      (3)
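Equation (3) can be estimated from the grey-level histograms of the two images; the small NumPy sketch below is one illustrative way to do so (the smoothing constant eps is an assumption added to avoid division by zero, not something the chapter specifies).

import numpy as np

def relative_entropy(cover, stego, eps=1e-12):
    # Histograms over the 256 grey levels, normalized to probability distributions.
    p_c, _ = np.histogram(cover, bins=256, range=(0, 256), density=True)
    p_s, _ = np.histogram(stego, bins=256, range=(0, 256), density=True)
    p_c = p_c + eps
    p_s = p_s + eps
    # Equation (3): D(P_C || P_S) = sum of P_C * log(P_C / P_S)
    return float(np.sum(p_c * np.log(p_c / p_s)))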

(d) Robustness: The strength of a steganography technique against attack is termed robustness. It is a fundamental requirement of a steganography technique to be robust against steganalysis and also against attacks such as scaling, rotation, and other transformations, since transformations applied to a stego-image may reduce its quality and cause the secret message to be lost.

Figure 1. Classification of steganography techniques

STEGANOGRAPHIC TECHNIQUES CLASSIFICATION Classification on the Basis of Cover Format/Carrier Medium Based on the cover or host data used to hide the data, the steganography technique is classified as: 1. Image Steganography: In Image steganography, an image is explored as a carrier that contains secret data. 2. Audio Steganography: In this, an audio file is used as cover media to embed the secret messages, generally employed through phase coding, post encoder techniques, spread spectrum, amplitude coding, and low-bit encoding for inserting secret information (Djebbar.et.al, 2012). 3. Video Steganography: In this technique, a video stream consists of a sequence of images accompanied by audio, and is used as a medium to embed data through transform domain, format-based, and substitution methods (Sadek. et.al, 2015). 4. Text Steganography: It is a technique of hiding a text message inside another text as a cover message through techniques like line-shift encoding, featurespecific encoding, and word-shift encoding (Liu.et.al, 2015). 5. Network Protocol Steganography: Secret message can be embedded in the header fields of TCP/IP (type of service (ToS), Fragment offset, Identification), thus acting as a carrier of secret communication (Murdoch.et.al, 2005). Image steganography is widely popular due to the ease of use of multimedia communication, done through various low-cost and hand-held devices (i.e., smart mobiles, laptops, IP cameras) and a lot of social media applications (i.e., linkedIn, WhatsApp, Snapchat, Facebook, Twitter). Images are also having a high frequency of redundant data as a result; a user can smoothly immerse their secret data through various different steganographic techniques.

Figure 2. Image steganography

Classification Based on Embedding Method

Based on the embedding process, image steganography is classified into three types:
1. Spatial domain image steganography
2. Transform domain image steganography
3. Adaptive domain image steganography

4.2.1 Spatial Domain Image Steganography: This approach works on the two-dimensional matrix of intensity values that represents an image and directly manipulates the image pixels. It conceals the secret data by replacing selected bits of the cover image with the bit values of the secret message.

Least Significant Bit Technique

This is the most frequently used technique for embedding a secret message in an image, popularly known as LSB. It is very easy and allows a large amount of information to be concealed in a cover image without any noticeable distortion of the image. The least significant bit, the bit at the rightmost position, is used for embedding the data. The selection of pixels for concealing is done by a random number generator (stego-key). Because only the last bit of a pixel is changed, the original pixel value and the embedded pixel intensity are not substantially different; thus, changes made to the cover image are not noticeable. To a computer, an image is just a matrix of 0's and 1's representing the intensity value of each pixel.

In grayscale images, each pixel requires one byte to store values ranging from 0 to 255, representing the intensity or the amount of light that the pixel carries. However, for colored images, each pixel requires three bytes to store the RGB value; each pixel is a mix of these three colors. The 24-bit (RGB) color images are considered better for concealing information because of their size (Gupta et al., 2012; Kutade et al., 2015; Gedkhaw et al., 2018; Kadhim et al., 2019; Chhabra & Singh, 2022). For example, to embed the letter "B" in a grayscale image, the cover pixels are represented in binary format as follows:

11010101 00101101 10001001 10110110
00101111 11011011 10011011 00010000

The binary representation of B is 01000010, and each of its bits is immersed in the rightmost bit of one pixel value:

11010100 00101101 10001000 10110110
00101110 11011010 10011011 00010000

Here, when B was embedded into the grid, only 4 bits needed to be changed. On average, only half of the bits in an image will need to be modified to hide a secret message.

An LSB-based Embedding Algorithm
Input: cover image C, message m
for j = 1 to Length(C), do
    Sj ← Cj
for i = 1 to Length(m), do
    Compute the index ji where the ith message bit of m is to be stored
    LSB(Sji) ← mi
End for
Output: stego image S

An LSB-based Extracting Algorithm
Input: stego image S, message length Length(m)
for i = 1 to Length(m), do
    Compute the index ji where the ith message bit of m is stored
    mi ← LSB(Sji)
End for
Output: message m
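For concreteness, the short NumPy sketch below is one possible runnable counterpart of the two algorithms above (the function names and the simple sequential pixel order are illustrative assumptions; the chapter's version selects the pixel indices with a stego-key).

import numpy as np

def lsb_embed(cover, message_bits):
    stego = cover.copy().ravel()
    # Overwrite the least significant bit of the first len(message_bits) pixels.
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & 0xFE) | bit
    return stego.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    flat = stego.ravel()
    return [int(flat[i] & 1) for i in range(n_bits)]

cover = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
bits = [0, 1, 0, 0, 0, 0, 1, 0]              # the letter "B" from the example above
stego = lsb_embed(cover, bits)
assert lsb_extract(stego, len(bits)) == bits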

The message bits are extracted from the selected pixels and lined up to reconstruct the original message. Only the set of pixels containing the message needs to be selected from the stego-image, which can be done using the same sequence of steps as during the embedding process. The LSB (Least Significant Bit) method is the easiest and fastest method to embed a secret message into cover photos. However, it has certain disadvantages:
1. Compressing the image may change or destroy the secret data bits.
2. The smallest change in the cover image may destroy the hidden data.
3. Only small messages can be embedded.
Various modifications have been applied to this basic LSB technique to improve performance. Rafrastara et al. (2019) propose a three-bit embedding pattern (using the second, third, and fourth bits), which produces a better result than the two-bit embedding method. Talukder et al. (2022) propose a four-step method for image encryption and decryption which enhances the robustness of the LSB technique. Manindra et al. (2022) use a combination of steganography and cryptography to enhance the security of the information to be hidden: the message is first encrypted using public-key cryptography and then embedded in a cover image using LSB.

Pixel Value Difference (PVD) Methods

In this method, the pixel value difference between two adjacent pixels is explored to decide how many secret bits can be embedded (Wu & Tsai, 2003). The difference between two adjacent pixels is calculated by

di = Pi+1 − Pi

The di value ranges from −255 to 255. A low value of di (close to 0) denotes a smoother region, and an extreme value (close to −255 or 255) denotes a noisy region. The fundamental concept of this method is to find the noisy areas to hide the secret message, as human vision tolerance is high in noisy areas (Abbood et al., 2018; Chhabra & Singh, 2020). In the method, the complete cover image is first scanned from left to right in raster-scan order and then segregated into non-overlapping, consecutive pixel blocks

of size two, taken in a zig-zag fashion. Further, in each block, the difference between the two adjacent pixel values is calculated to decide the number of embedding bits, and the difference values are arranged in ranges to form a table. The number of bits embedded in each block is decided by the formula

bit_length = log2(upper_r − lower_r + 1)

where the range width must be a power of 2. Then a sub-stream k of width equal to bit_length is selected and embedded in the block. The higher the difference, the more secret message bits can be concealed in the pixel pair. The selection of range intervals in the range table is based on the tolerance of human eyes. After that, a new value of the difference, d'', is computed based on equation (4).

d'' = lower_r + b,        for d >= 0
d'' = −(lower_r + b),     for d < 0      (4)

Here d >= 0, thus d'' = 32 + (01100)2 = 32 + (12)10 = (44)10.
Step 5: The new pixel values are (Pi'', Pi+1'') = (Pi − ceil(n/2), Pi+1 + floor(n/2)) when d is even, where n = d'' − d.

Here n = 44 − 34 = 10; thus the stego pixel pair is (155 − 5, 189 + 5) = (150, 194).

With the PVD method we can embed a large amount of secret data into images with higher optical invisibility compared to the LSB substitution method. However, the major issues are:
1. Histogram equalization of the original and stego images will reveal the existence of a secret message.
2. A general image contains more smooth areas than noisy areas, so most of the secret message bits will be concealed in the ranges with lower values.
Vishnu et al. (2020) propose a more secure steganography technique by using an edge detection algorithm with PVD; the Canny algorithm detects the edges, which can then be used to embed the data. Thanekar et al. (2013) propose a technique to make the PVD method immune to the histogram equalization method.

4.2.2 Transform Domain Image Steganography: This is the other technique of embedding data in an image. An image is a composition of low- and high-frequency components: the plain and smooth areas are the low-frequency components, whereas the sharp and edge areas are the high-frequency components. The domain-specific characteristics of the cover image are exploited to conceal secure information. The cover image is decomposed by an appropriate type of transformation, chosen according to the application, to obtain the transformation coefficients, and these coefficients are modified to embed the secret data in the cover image. Various transforms are used to conceal the data depending on the application, namely the Discrete Cosine Transform (DCT), the Discrete Fourier Transform (DFT), and the Discrete Wavelet Transform (DWT) in the frequency domain. This technique is more robust than the spatial domain, as the secret information is less susceptible to change due to compression, cropping, and other image processing operations like scaling and rotation.
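The range-table logic of equation (4) can be sketched in a few lines of Python; the code below is an illustrative toy, and the Wu-Tsai style range table shown here is a common choice assumed for the example rather than taken from the chapter.

import math

# Illustrative range table: [lower, upper] intervals whose widths are powers of two.
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def embed_pair(p1, p2, bits):
    d = p2 - p1
    lower, upper = next(r for r in RANGES if r[0] <= abs(d) <= r[1])
    t = int(math.log2(upper - lower + 1))           # bit_length for this range
    b = int(bits[:t], 2)                            # secret sub-stream as an integer
    d_new = lower + b if d >= 0 else -(lower + b)   # equation (4)
    n = d_new - d
    # Split the required change over the two pixels, as in the worked example.
    return p1 - math.ceil(n / 2), p2 + math.floor(n / 2), t

print(embed_pair(155, 189, "01100"))    # (150, 194, 5): matches the example above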

Discrete Cosine Transform (DCT)

The Discrete Cosine Transform is a notably effective technique for converting an image from one domain (spatial) to another (frequency). It is the transform underlying JPEG images, a widely used format on the internet nowadays (Tsai et al., 2014). In this method, the DCT coefficients are altered to store the private data. In the DCT-based steganography technique, the cover image is

fractionated into uncorrelated pixels block of size 8*8. Thus, by using the DCT equation shown below, DCT coefficients corresponding to each block are obtained (Zhang.et.al, 2015). Given a two-dimensional image f (x, y) of size M*N, its DCT Z (k, l) is calculated by formula shown in equation (7-9),

Z(k, l) = α(k) α(l) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) cos[(2x + 1)πk / (2M)] cos[(2y + 1)πl / (2N)]      (7)

Where,

α(k) = sqrt(1/M) for k = 0, and α(k) = sqrt(2/M) for 1 <= k <= M − 1      (8)

α(l) = sqrt(1/N) for l = 0, and α(l) = sqrt(2/N) for 1 <= l <= N − 1      (9)

The DCT Coefficient thus obtained is quantized by using the default quantization table. The private message is then concealed into the DCT coefficients using combination coding. Various combination coding technique is used to embed the secret data, like (Subhedar.et.al, 2014) embedding data into the eigenvalue of quantized DCT coefficients. The method proposed in (Chhabra.et.al, 2020) is based on the fact that many DCT quantized coefficients are closer to zero, so neighbour coefficients having a value of zero are selected to embed the secret data. Some also replace the LSB of DCT Coefficients with secret data. Secret messages quantized in DCT coefficients are then coded by using a combination of Huffman and runlength coding. The Stego image can then be transferred through unsecured means without human vision recognition and knowledge. DCT steganography divides the image into low, middle and high-frequency components. After quantization the higher frequency components of DCT blocks are often becomes zero thus are better places for data hiding. Low frequency components are less resistant to noise than high frequency components.

An exact reverse procedure is followed to retrieve the payload at the receiver end. During extraction the Stego-image is again fractioned into uncorrelated continuous pixel blocks of size 8*8, and each block of the steganographic-image is modified using a discrete cosine transform. This method is suitable only for embedding smaller messages, as embedding large message changes the DCT coefficient of each block, thus degrading the image. The main disadvantage of using DCT as image steganography is that the image generated after steganography is only stored in JPEG format. Secondly, there will be high quantization error during translation because of long basis functions, thus can affect the accuracy of Stego-image. (Zhang.et.al,2021) proposed a robust steganography using DCT in lossy channels. (Vakani.et.al,2021) proposes a DCT based Adaptive scaling technique to achieve the high payload imperceptibility without relenting the Stego-Quality. So, other frequency transformation techniques are used to inflate the steganographic techniques, namely, Wavelet and integer wavelet transforms. While comparing with DCT, wavelet transform carries more power, which we are going to study in the succeeding topic.
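As an illustration of the general idea (not the exact scheme surveyed in this chapter), the sketch below hides one bit in the least significant bit of a quantized mid-frequency DCT coefficient of an 8x8 block, assuming SciPy's dctn/idctn routines are available; the coefficient position and quantization step are arbitrary choices for the example.

import numpy as np
from scipy.fft import dctn, idctn

def embed_bit_in_block(block, bit, pos=(4, 3), q=16):
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    c = int(round(coeffs[pos] / q))           # quantized mid-frequency coefficient
    c = (c & ~1) | bit                        # overwrite its least significant bit
    coeffs[pos] = c * q
    return np.clip(np.round(idctn(coeffs, norm="ortho")), 0, 255).astype(np.uint8)

def extract_bit_from_block(block, pos=(4, 3), q=16):
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    return int(round(coeffs[pos] / q)) & 1

block = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
stego_block = embed_bit_in_block(block, 1)
print(extract_bit_from_block(stego_block))    # usually 1; pixel rounding can occasionally flip it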

Discrete Wavelet Transform (DWT) The wavelets are the small wave of varying frequency that lies in the time domain and used as basis function to represent spikes and discontinuation in a compact way. Wavelet transform of a digital image gives both time and frequency determination. Wavelets are created by the movement and amplifications of mother wavelets (Baby. et.al, 2015). The sub band images are created by the series of down sampling and filtering operation. The 2D- Haar DWT involves two operations as an initial step which is performed horizontally and vertically. Initially the image pixels are scan righthanded horizontally and arithmetic operation of addition and subtraction is performed in adjacent pixels and the sum is stored in left sub bands and difference is stored right sub bands respectively. This iteration goes on until the entire image pixels are processed. The difference pixels are thus high frequency pixels whereas the sum pixels are low frequency.


Figure 3. Horizontal operations

In the same way, the pixels are then scanned vertically from top to bottom; addition and subtraction are performed on adjacent pixels, and the sum and difference are stored in the top and bottom halves respectively.

Figure 4. Vertical operations

In this way, the DWT divides the image into four frequency sub-bands: LL, LH, HL and HH. The high-frequency components hold the detail information (edges and textures) of the image, while the low-frequency component holds an approximation of the image. For level n+1 of the decomposition, the LL sub-band of the previous level n is used as the input. The high-frequency components are used to conceal the secret message in the image so that it is hidden from human vision; the coefficients of the chosen sub-band are modified according to the secret message bits. The secret data can also be embedded in the LL portion, but this leads to distortion in the stego image. The DWT provides good image fidelity and non-recoverability of the hidden data because its coefficients represent both frequency and spatial domain features. A minimal sketch of the one-level decomposition described above is given below.
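As an illustration of the horizontal and vertical sum/difference passes just described, the sketch below computes a one-level 2D Haar decomposition. The averaging by two and the LL/LH/HL/HH naming follow one common convention and are assumptions here; in practice a library such as PyWavelets would normally be used.

```python
# One-level 2D Haar decomposition by row-wise then column-wise sum/difference.
# Assumes a grayscale image with even height and width; names are illustrative.
import numpy as np

def haar_dwt2(image):
    x = image.astype(float)
    # Horizontal pass: sums form the low band (left), differences the high band (right).
    low_h  = (x[:, 0::2] + x[:, 1::2]) / 2.0
    high_h = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # Vertical pass on each band: sums on top, differences at the bottom.
    LL = (low_h[0::2, :] + low_h[1::2, :]) / 2.0    # approximation
    LH = (low_h[0::2, :] - low_h[1::2, :]) / 2.0    # low-pass horizontally, high-pass vertically
    HL = (high_h[0::2, :] + high_h[1::2, :]) / 2.0  # high-pass horizontally, low-pass vertically
    HH = (high_h[0::2, :] - high_h[1::2, :]) / 2.0  # high-pass in both directions
    return LL, LH, HL, HH
```

For a deeper decomposition, the same function would simply be applied again to the LL output, matching the multi-level scheme described above.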


The stego image produced through the DWT is of better quality than that produced through the DCT, without interference due to artifacts. It also provides high compression ratios and is therefore a better method for hiding confidential data. The implementation of the DWT is straightforward, and the resources and computation time required are reduced. However, this technique needs extra supplementary data to attain reversibility. (Chauhan et al., 2022) presented a review of existing algorithms used in the detection of fake videos, and (Mstafa et al., 2017) propose a DCT- and DWT-based video steganography built on multiple object tracking and error-correcting codes that improves the imperceptibility, security and capacity of video steganography.

4.2.2.3 Integer Wavelet Transform (IWT)

The IWT is an efficient approach that provides lossless decomposition of an image; it is an invertible integer-to-integer transform. When an image is processed through the IWT, four matrices are obtained, where the low-frequency (LL) matrix is just a smaller version of the original input image, as shown in the figure below, whereas in the DWT it is slightly distorted.

Figure 5. (a) Original image (b) One-level IWT, LL band

In the DWT, perfect reconstruction of the original image becomes difficult because the output no longer consists of integers (the coefficients are fractional); truncation of the floating-point pixel values may therefore destroy the concealed information and cause the data-hiding system to fail. In the IWT, by contrast, the output is composed entirely of integers, which allows perfect reconstruction of the original image. Lifting operations such as floor and ceiling functions are employed to obtain integer coefficients, so the inverse operation is also possible. The error percentage is very low in the IWT because integer coefficients are used in the embedding process. If the original image I is U pixels high and V pixels wide and the grey level of the pixel at (i, j) is denoted by I(i, j) (Weng et al., 2017; Miri et al., 2018; Chhabra et al., 2019), its IWT coefficients can be calculated as in equations (10)–(13) and the inverse IWT as in equations (14)–(17).


The IWT coefficients are given as:

LL_{i,j} = \left\lfloor \frac{I_{2i,2j} + I_{2i+1,2j}}{2} \right\rfloor    (10)

HL_{i,j} = I_{2i+1,2j} - I_{2i,2j}    (11)

LH_{i,j} = I_{2i,2j+1} - I_{2i,2j}    (12)

HH_{i,j} = I_{2i+1,2j+1} - I_{2i,2j}    (13)

The inverse IWT transform is given as:

I_{2i,2j} = LL_{i,j} - \left\lfloor \frac{HL_{i,j}}{2} \right\rfloor    (14)

I_{2i+1,2j} = LL_{i,j} + \left\lfloor \frac{HL_{i,j}+1}{2} \right\rfloor    (15)

I_{2i,2j+1} = I_{2i+1,2j} + LH_{i,j} - HL_{i,j}    (16)

I_{2i+1,2j+1} = I_{2i,2j+1} + HH_{i,j} - LH_{i,j}    (17)

where 1 ≤ i ≤ U/2 and 1 ≤ j ≤ V/2. The IWT coefficients thus obtained are used to hide the secret bits in the steganographic system. The secret data can be embedded in the LSBs of the coefficients, or the color codes can be altered to hide the secret data in the cover image before the color codes pass through the lossless IWT. A small sketch of this forward and inverse integer transform is given below.
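The forward relations (10)–(13) and inverse relations (14)–(17) translate directly into integer arithmetic; the following is a minimal sketch assuming a grayscale image with even height and width, with illustrative function names.

```python
# Forward and inverse 2x2 integer Haar transform following equations (10)-(17).
# Assumes an integer grayscale image with even height and width.
import numpy as np

def iwt2(I):
    I = I.astype(np.int64)
    a, b = I[0::2, 0::2], I[1::2, 0::2]      # I(2i,2j) and I(2i+1,2j)
    c, d = I[0::2, 1::2], I[1::2, 1::2]      # I(2i,2j+1) and I(2i+1,2j+1)
    LL = (a + b) // 2                         # eq. (10), floor division
    HL = b - a                                # eq. (11)
    LH = c - a                                # eq. (12)
    HH = d - a                                # eq. (13)
    return LL, HL, LH, HH

def inverse_iwt2(LL, HL, LH, HH):
    a = LL - HL // 2                          # eq. (14)
    b = LL + (HL + 1) // 2                    # eq. (15)
    c = b + LH - HL                           # eq. (16)
    d = c + HH - LH                           # eq. (17)
    out = np.empty((2 * LL.shape[0], 2 * LL.shape[1]), dtype=np.int64)
    out[0::2, 0::2], out[1::2, 0::2] = a, b
    out[0::2, 1::2], out[1::2, 1::2] = c, d
    return out
```

Because only integer additions, subtractions and floor divisions are involved, `inverse_iwt2(*iwt2(img))` reproduces `img` exactly, which is the reversibility property emphasised above.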


In spite of being a reversible, lossless and robust transform, the IWT also suffers from low payload capacity and weak defence against geometrical attacks. Other transforms, such as the dual-tree complex wavelet transform (DT-CWT) and the complex wavelet transform (CWT), have been proposed to overcome these disadvantages.

STEGANALYSIS

Steganalysis is the technique of detecting and extracting the hidden message embedded in a stego image by steganographic techniques (Boroumand et al., 2018; You et al., 2020; Dalal et al., 2021; Singh et al., 2021). It is used to evaluate the strength of a steganography technique and plays a significant role in applications such as digital forensics and law enforcement. Steganalysis requires knowledge of the characteristics and features of the cover object. A forensic expert uses steganalysis to detect the existence of hidden data, or to extract the hidden information if it is present in a medium, for instance for national security purposes. The main task in steganalysis is classification: deciding whether a received file contains embedded data or not. It can be done in two ways:

1. Signature Steganalysis: The system searches for a particular pattern or signature of a well-known steganographic technique. The underlying idea is that the embedded secret bits create characteristic repeated patterns, for example in the histogram; this information is used to detect and extract the hidden data.

2. Statistical Steganalysis: Statistical parameters are used to determine whether any data is hidden. A mathematical model is used to evaluate the characteristics of the suspected stego image instead of guessing; a minimal sketch of one such statistical test follows this list.
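As one concrete example of the statistical approach, the sketch below applies the classical chi-square test to the pairs of pixel values that LSB replacement tends to equalise. This particular test is not taken from the chapter; the bin-population threshold and the interpretation of the p-value are assumptions.

```python
# Classical chi-square ("pairs of values") test for sequential LSB replacement.
# A p-value close to 1 suggests the LSB plane is likely to carry a payload.
import numpy as np
from scipy.stats import chi2

def chi_square_lsb_probability(gray_image):
    hist, _ = np.histogram(gray_image.ravel(), bins=256, range=(0, 256))
    even, odd = hist[0::2].astype(float), hist[1::2].astype(float)
    expected = (even + odd) / 2.0
    mask = expected > 5                 # keep only well-populated value pairs
    dof = int(mask.sum()) - 1
    if dof < 1:
        return 0.0                      # not enough data to decide
    stat = np.sum((even[mask] - expected[mask]) ** 2 / expected[mask])
    return 1.0 - chi2.cdf(stat, dof)
```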

Table 2. Characteristics comparison of discussed techniques

Technique                      Capacity   Fidelity   Security   Robustness
Least Significant Bit          High       Medium     Low        Low
Pixel Value Differencing       High       High       Low        Low
Discrete Cosine Transform      Low        High       Low        Low
Discrete Wavelet Transform     Medium     Medium     High       High
Integer Wavelet Transform      Low        Medium     High       High


Table 3. Advantages and disadvantages of discussed techniques

Spatial Domain

LSB
• Advantages: High payload capacity; inserting and recovering the secret data is a simple and easy task; high fidelity.
• Disadvantages: Less secure and less robust against statistical analysis and geometric transformations.

PVD
• Advantages: Satisfactory imperceptibility; better payload capacity in colored images.
• Disadvantages: Not secure or robust against transformations and statistical attacks.

Transform Domain

DCT
• Advantages: High fidelity; better visual quality of the image.
• Disadvantages: Payload capacity is not satisfactory; overly long basis functions lead to quantization error during transformation; poor robustness against attacks.

DWT
• Advantages: High security and robustness; satisfactory embedding capacity; good image quality with no interference.
• Disadvantages: Recovery of the secret data from the stego image needs considerable supplementary data.

IWT
• Advantages: Easy to recover the hidden data; highly secure.
• Disadvantages: Payload capacity is not satisfactory.

CONCLUSION

In this chapter the concept of cryptography and the difference between cryptography and steganography were discussed. We also explored the concepts and performance evaluation characteristics of image steganography, along with the different types of steganography techniques. Different embedding techniques were examined together with their advantages and disadvantages. The spatial domain techniques offer high payload capacity but are less secure against geometric transformation attacks, and only lossless image formats offer high retrieval quality; with JPEG, data may be lost due to compression. The transform domain techniques offer high imperceptibility and robustness but provide low payload capacity, so the steganographic technique can be chosen based on the requirement. The performance analysis metrics used in steganalysis were also discussed. An ideal image steganography technique should provide high visual imperceptibility, high data embedding capacity and resistance to steganalysis attacks. All the techniques discussed above have their own advantages and limitations and can be adopted based on the application.


REFERENCES Abbood, E. A., Neamah, R. M., & Abdulkadhm, S. (2018). Text in Image Hiding using Developed LSB and Random Method. International Journal of Electrical & Computer Engineering (2088-8708), 8(4). AbdelWahab, O. F., Hussein, A. I., Hamed, H. F., Kelash, H. M., Khalaf, A. A., & Ali, H. M. (2019). Hiding data in images using steganography techniques with compression algorithms. [Telecommunication Computing Electronics and Control]. TELKOMNIKA, 17(3), 1168–1175. doi:10.12928/telkomnika.v17i3.12230 Abdulla, A. A., Sellahewa, H., & Jassim, S. A. (2019). Improving embedding efficiency for digital steganography by exploiting similarities between secret and cover images. Multimedia Tools and Applications, 78(13), 17799–17823. doi:10.100711042-019-7166-7 Abdullah, D. M., Ameen, S. Y., Omar, N., Salih, A. A., Ahmed, D. M., Kak, S. F., & Rashid, Z. N. (2021). Secure data transfer over internet using image steganography. Asian Journal of Research in Computer Science, 33-52. Adi, P. W., Rahmanti, F. Z., & Abu, N. A. (2015, October). High quality image steganography on integer Haar Wavelet Transform using modulus function. In 2015 International Conference on Science in Information Technology (ICSITech) (pp. 79-84). IEEE. 10.1109/ICSITech.2015.7407781 Baby, D., Thomas, J., Augustine, G., George, E., & Michael, N. R. (2015). A novel DWT based image securing method using steganography. Procedia Computer Science, 46, 612–618. doi:10.1016/j.procs.2015.02.105 Behbahani, Y. M., Ghayour, P., & Farzaneh, A. H. (2011, November). Eigenvalue Steganography based on eigen characteristics of quantized DCT matrices. In ICIMU 2011: Proceedings of the 5th international Conference on Information Technology & Multimedia (pp. 1-4). IEEE. 10.1109/ICIMU.2011.6122769 Boroumand, M., Chen, M., & Fridrich, J. (2018). Deep residual network for steganalysis of digital images. IEEE Transactions on Information Forensics and Security, 14(5), 1181–1193. doi:10.1109/TIFS.2018.2871749 Chauhan, R., Popli, R., & Kansal, I. (2022, October). A Comprehensive Review on Fake Images/Videos Detection Techniques. In 2022 10th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO) (pp. 1-6). IEEE.


Chhabra, S., & Kumar Singh, A. (2020). Security Enhancement in Cloud Environment using Secure Secret Key Sharing. Journal of Communications Software and Systems, 16(4), 296–307. doi:10.24138/jcomss.v16i3.964 Chhabra, S., & Singh, A. K. (2016, December). Dynamic data leakage detection model based approach for MapReduce computational security in cloud. In 2016 Fifth International Conference on Eco-friendly Computing and Communication Systems (ICECCS) (pp. 13-19). IEEE. 10.1109/Eco-friendly.2016.7893234 Chhabra, S., & Singh, A. K. (2019). Dynamic hierarchical load balancing model for cloud data centre networks. Electronics Letters, 55(2), 94–96. doi:10.1049/ el.2018.5427 Chhabra, S., & Singh, A. K. (2020). A secure VM allocation scheme to preserve against co-resident threat. International Journal of Web Engineering and Technology, 15(1), 96–115. doi:10.1504/IJWET.2020.107686 Chhabra, S., & Singh, A. K. (2022). Dynamic Resource Allocation Method for Load Balance Scheduling Over Cloud Data Center Networks. arXiv preprint arXiv:2211.02352. Dalal, M., & Juneja, M. (2021). Steganography and Steganalysis (in digital forensics): A Cybersecurity guide. Multimedia Tools and Applications, 80(4), 5723–5771. doi:10.100711042-020-09929-9 Djebbar, F., Ayad, B., Meraim, K. A., & Hamam, H. (2012). Comparative study of digital audio steganography techniques. EURASIP Journal on Audio, Speech, and Music Processing, 2012(1), 1–16. doi:10.1186/1687-4722-2012-25 Gedkhaw, E., Soodtoetong, N., & Ketcham, M. (2018, September). The performance of cover image steganography for hidden information within image file using least significant bit algorithm. In 2018 18th International Symposium on Communications and Information Technologies (ISCIT) (pp. 504-508). IEEE. 10.1109/ISCIT.2018.8588011 Gupta, S., Gujral, G., & Aggarwal, N. (2012). Enhanced least significant bit algorithm for image steganography. IJCEM International Journal of Computational Engineering & Management, 15(4), 40–42. Hussain, M., Wahab, A. W. A., Ho, A. T., Javed, N., & Jung, K. H. (2017). A data hiding scheme using parity-bit pixel value differencing and improved rightmost digit replacement. Signal Processing Image Communication, 50, 44–57. doi:10.1016/j. image.2016.10.005


Kadhim, I. J., Premaratne, P., Vial, P. J., & Halloran, B. (2019). Comprehensive survey of image steganography: Techniques, Evaluations, and trends in future research. Neurocomputing, 335, 299–326. doi:10.1016/j.neucom.2018.06.075 Kutade, P. B., & Bhalotra, P. S. A. (2015). A survey on various approaches of image steganography. International Journal of Computer Applications, 109(3), 1–5. doi:10.5120/19165-0620 Liu, J., Jiao, G., & Sun, X. (2022). Feature passing learning for image steganalysis. IEEE Signal Processing Letters. Liu, Y., Yang, T., & Xin, G. (2015). Text steganography in chat based on emoticons and interjections. Journal of Computational and Theoretical Nanoscience, 12(9), 2091–2094. doi:10.1166/jctn.2015.3992 Manindra, A. P. V., & Karthikeyan, B. (2022, August). OTP Camouflaging using LSB Steganography and Public Key Cryptography. In 2022 3rd International Conference on Electronics and Sustainable Communication Systems (ICESC) (pp. 109-115). IEEE. 10.1109/ICESC54411.2022.9885542 Micro, T. (2018). Spam Campaign Targets Japan. Uses Steganography to Deliver the BEBLOH Banking Trojan. Miri, A., & Faez, K. (2018). An image steganography method based on integer wavelet transform. Multimedia Tools and Applications, 77(11), 13133–13144. doi:10.100711042-017-4935-z Mstafa, R. J., Elleithy, K. M., & Abdelfattah, E. (2017). A robust and secure video steganography method in DWT-DCT domains based on multiple object tracking and ECC. IEEE Access : Practical Innovations, Open Solutions, 5, 5354–5365. doi:10.1109/ACCESS.2017.2691581 Murdoch, S. J., & Lewis, S. (2005, June). Embedding covert channels into TCP/ IP. In International Workshop on Information Hiding (pp. 247-261). Springer. 10.1007/11558859_19 Rafrastara, F. A., Prahasiwi, R., Rachmawanto, E. H., & Sari, C. A. (2019, July). Image Steganography using Inverted LSB based on 2 nd, 3 rd and 4 th LSB pattern. In 2019 International Conference on Information and Communications Technology (ICOIACT) (pp. 179-184). IEEE. 10.1109/ICOIACT46704.2019.8938503 Rawat, D., & Bhandari, V. (2013). A steganography technique for hiding image in an image using lsb method for 24bit color image. International Journal of Computer Applications, 64(20), 15–19. doi:10.5120/10749-5625 264


Robertson, N., Cruickshank, P., & Lister, T. (2012). Documents reveal al Qaeda’s plans for seizing cruise ships, carnage in Europe. Cable News Network (CNN), 1. Roy, R., & Changder, S. (2016). Quality evaluation of image steganography techniques: A heuristics based approach. International Journal of Security and Its Applications, 10(4), 179–196. doi:10.14257/ijsia.2016.10.4.18 Sadek, M. M., Khalifa, A. S., & Mostafa, M. G. (2015). Video steganography: A comprehensive review. Multimedia Tools and Applications, 74(17), 7063–7094. doi:10.100711042-014-1952-z Singh, B., Sur, A., & Mitra, P. (2021). Steganalysis of digital images using deep fractal network. IEEE Transactions on Computational Social Systems, 8(3), 599–606. doi:10.1109/TCSS.2021.3052520 Stoyanova, V., & Zh, T. (2015). Research of the characteristics of a steganography algorithm based on LSB method of embedding information in images. Machines. Technologies. Materials (Basel), 9(7), 65–68. Subhedar, M. S., & Mankar, V. H. (2014). Current status and key issues in image steganography: A survey. Computer Science Review, 13, 95–113. doi:10.1016/j. cosrev.2014.09.001 Tabares-Soto, R., Ramos-Pollán, R., Isaza, G., Orozco-Arias, S., Ortíz, M. A. B., Arteaga, H. B. A., & Grisales, J. A. A. (2020). Digital media steganalysis. In Digital Media Steganography (pp. 259–293). Academic Press. doi:10.1016/B978-0-12819438-6.00020-7 Talukder, M. S. H., Hasan, M. N., Sultan, R. I., Rahman, M., Sarkar, A. K., & Akter, S. (2022, February). An Enhanced Method for Encrypting Image and Text Data Simultaneously using AES Algorithm and LSB-Based Steganography. In 2022 International Conference on Advancement in Electrical and Electronic Engineering (ICAEEE) (pp. 1-5). IEEE. 10.1109/ICAEEE54957.2022.9836589 Thanekar, S. A., & Pawar, S. S. (2013, December). OCTA (STAR) PVD: A different approach of image steganopgraphy. In 2013 IEEE International Conference on Computational Intelligence and Computing Research (pp. 1-5). IEEE. 10.1109/ ICCIC.2013.6724139 Tsai, Y. Y., Chen, J. T., & Chan, C. S. (2014). Exploring LSB Substitution and Pixelvalue Differencing for Block-based Adaptive Data Hiding. International Journal of Network Security, 16(5), 363–368.


Vakani, H., Abdallah, S., Kamel, I., Rabie, T., & Baziyad, M. (2021, July). Dct-in-dct: A novel steganography scheme for enhanced payload extraction quality. In 2021 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT) (pp. 201-206). IEEE. 10.1109/IAICT52856.2021.9532553 Vishnu, B., Namboothiri, L. V., & Sajeesh, S. R. (2020, March). Enhanced image steganography with PVD and edge detection. In 2020 Fourth International Conference on Computing Methodologies and Communication (ICCMC) (pp. 827-832). IEEE. 10.1109/ICCMC48092.2020.ICCMC-000153 Weng, C. Y., Huang, C. T., & Kao, H. W. (2017, August). DCT-based compressed image with reversibility using modified quantization. In International Conference on Intelligent Information Hiding and Multimedia Signal Processing (pp. 214-221). Springer. Wu, D. C., & Tsai, W. H. (2003). A steganographic method for images by pixelvalue differencing. Pattern Recognition Letters, 24(9-10), 1613–1626. doi:10.1016/ S0167-8655(02)00402-6 You, W., Zhang, H., & Zhao, X. (2020). A Siamese CNN for image steganalysis. IEEE Transactions on Information Forensics and Security, 16, 291–306. doi:10.1109/ TIFS.2020.3013204 Zhang, J., Zhao, X., He, X., & Zhang, H. (2021). Improving the robustness of JPEG steganography with robustness cost. IEEE Signal Processing Letters, 29, 164–168. doi:10.1109/LSP.2021.3129419 Zhang, Y., Luo, X., Yang, C., Ye, D., & Liu, F. (2015, August). A JPEG-compression resistant adaptive steganography based on relative relationship between DCT coefficients. In 2015 10th International Conference on Availability, Reliability and Security (pp. 461-466). IEEE. 10.1109/ARES.2015.53


Compilation of References

Abbood, E. A., Neamah, R. M., & Abdulkadhm, S. (2018). Text in Image Hiding using Developed LSB and Random Method. International Journal of Electrical & Computer Engineering (20888708), 8(4). AbdelWahab, O. F., Hussein, A. I., Hamed, H. F., Kelash, H. M., Khalaf, A. A., & Ali, H. M. (2019). Hiding data in images using steganography techniques with compression algorithms. [Telecommunication Computing Electronics and Control]. TELKOMNIKA, 17(3), 1168–1175. doi:10.12928/telkomnika.v17i3.12230 Abdulla, A. A., Sellahewa, H., & Jassim, S. A. (2019). Improving embedding efficiency for digital steganography by exploiting similarities between secret and cover images. Multimedia Tools and Applications, 78(13), 17799–17823. doi:10.100711042-019-7166-7 Abdullah, D. M., Ameen, S. Y., Omar, N., Salih, A. A., Ahmed, D. M., Kak, S. F., & Rashid, Z. N. (2021). Secure data transfer over internet using image steganography. Asian Journal of Research in Computer Science, 33-52. Abdullah, A. M. (2017). Advanced encryption standard (AES) algorithm to encrypt and decrypt data. Cryptography and Network Security, 16, 1–11. Adi, P. W., Rahmanti, F. Z., & Abu, N. A. (2015, October). High quality image steganography on integer Haar Wavelet Transform using modulus function. In 2015 International Conference on Science in Information Technology (ICSITech) (pp. 79-84). IEEE. 10.1109/ICSITech.2015.7407781 Ahmad, I., Yousaf, M., Yousaf, S., & Ahmad, M. O. (2020). Fake news detection using machine learning ensemble methods. Complexity, 2020, 1–11. doi:10.1155/2020/8885861 Ahmed, F., & Das, S. (2013). Removal of high-density salt-and-pepper noise in images with an iterative adaptive fuzzy filter using alpha-trimmed mean. IEEE Transactions on Fuzzy Systems, 22(5), 1352–1358. doi:10.1109/TFUZZ.2013.2286634 Ahn, N.-Y., & Lee, D. H. (2017). Duty to delete on Non-volatile Memory. doi:10.8080/1020190046820 Aileni, R. M., & Suciu, G. (2020). IoMT: A blockchain perspective. Decentralised Internet of Things: A Blockchain Perspective, (pp. 199-215). Semantic Scholar.


Aktas, F., Ceken, C., & Erdemli, Y. E. (2018). IoT-based healthcare frame-work for bio medical applications. Journal of Medical and Biological Engineering, 38(6), 966–979. doi:10.100740846017-0349-7 Al Omar, A., Bhuiyan, M. Z. A., Basu, A., Kiyomoto, S., & Rahman, M. S. (2019). Privacyfriendly platform for healthcare data in cloud based on blockchain environment. Future Generation Computer Systems, 95, 511–521. doi:10.1016/j.future.2018.12.044 Aladwani, T. (2019). Scheduling IoT healthcare tasks in fog computing based on their importance. Procedia Computer Science, 163, 560–569. doi:10.1016/j.procs.2019.12.138 Ali, F., El-Sappagh, S., Islam, S. R., Ali, A., Attique, M., Imran, M., & Kwak, K. S. (2021). An intelligent healthcare monitoring framework using wearable sensors and social networking data. Future Generation Computer Systems, 114, 23–43. doi:10.1016/j.future.2020.07.047 Ali, F., Islam, S. R., Kwak, D., Khan, P., Ullah, N., Yoo, S. J., & Kwak, K. S. (2018). Type-2 fuzzy ontology–aided recommendation systems for IoT–based healthcare. Computer Communications, 119, 138–155. doi:10.1016/j.comcom.2017.10.005 Almeida, D. F., Astudillo, P., & Vandermeulen, D. (2021). Three‐dimensional image volumes from two‐dimensional digitally reconstructed radiographs: A deep learning approach in lower limb CT scans. Medical Physics, 48(5), 2448–2457. doi:10.1002/mp.14835 PMID:33690903 Al-Qudsy, Z. N., Shaker, S. H., & Abdulrazzque, N. S. (2018, October). Robust blind digital 3d model watermarking algorithm using mean curvature. In International Conference on New Trends in Information and Communications Technology Applications (pp. 110-125). Springer, Cham. 10.1007/978-3-030-01653-1_7 AlShariah, N. M., Khader, A., & Saudagar, J. (2019). Detecting fake images on social media using machine learning. International Journal of Advanced Computer Science and Applications, 10(12), 170–176. doi:10.14569/IJACSA.2019.0101224 Angeletti, F., Chatzigiannakis, I., & Vitaletti, A. (2017, September). The role of blockchain and IoT in recruiting participants for digital clinical trials. In 2017 25th international conference on software, telecommunications and computer networks (SoftCOM) (pp. 1-5). IEEE. 10.23919/ SOFTCOM.2017.8115590 Antonellis, C. J. (2008). Solid state disks and computer forensics. ISSA Journal, 36–38. Anyanwu, G. O., Nwakanma, C. I., Lee, J. M., & Kim, D. S. (2023). RBF-SVM kernel-based model for detecting DDoS attacks in SDN integrated vehicular network. Ad Hoc Networks, 140, 103026. doi:10.1016/j.adhoc.2022.103026 Arce, G. R. (1998). A general weighted median filter structure admitting negative weights. IEEE Transactions on Signal Processing, 46(12), 3195–3205. doi:10.1109/78.735296 Arce, G. R., & Paredes, J. L. (2000). Recursive weighted median filters admitting negative weights and their optimization. IEEE Transactions on Signal Processing, 48(3), 768–779. doi:10.1109/78.824671 268


Arnold, M., Schmucker, M., & Wolthusen, W. D. (2003). Techniques and Applications of Digital Watermarking and Content Protection. Ashok, B. (2021). Diabetes Diagnosis using Ensemble Models in Machine Learning. [TURCOMAT]. Turkish Journal of Computer and Mathematics Education, 12(13), 177–184. Ashoub, N., Emran, A., & Saleh, H. I. (2018). NonBlind Robust 3D Object Watermarking Scheme. Arab Journal of Nuclear Sciences and Applications, 51(4), 62–71. Ashpreet, M. B. (2020). Modified Directional and Fuzzy Based Median Filter for Salt-and-Pepper Noise Reduction in Color Image. Solid State Technology, 63(5), 4033–4053. Astola, J., Haavisto, P., & Neuvo, Y. (1990). Vector median filters. Proceedings of the IEEE, 78(4), 678–689. doi:10.1109/5.54807 Astola, J., & Kuosmanen, P. (2020). Fundamentals of nonlinear digital filtering. CRC press. doi:10.1201/9781003067832 Averin A. & Averina, O. (2020). Review of Blockchain Frameworks and Platforms. 2020 International Multi-Conference on Industrial Engineering and Modern Technologies (FarEastCon), Vladivostok, Russia. . doi:10.1109/FarEastCon50210.2020.9271217 Baby, D., Thomas, J., Augustine, G., George, E., & Michael, N. R. (2015). A novel DWT based image securing method using steganography. Procedia Computer Science, 46, 612–618. doi:10.1016/j.procs.2015.02.105 Bach, L. M., Mihaljevic, B., & Zagar, M. (2018). Comparativeanalysisofblockchain consensus algorithms. In Proc. 41st Int. Conv. Inf. Commun.Technol. (MIPRO). Electron. Microelectron, (pp. 1545–1550). Semantic Scholar. Behbahani, Y. M., Ghayour, P., & Farzaneh, A. H. (2011, November). Eigenvalue Steganography based on eigen characteristics of quantized DCT matrices. In ICIMU 2011: Proceedings of the 5th international Conference on Information Technology & Multimedia (pp. 1-4). IEEE. 10.1109/ ICIMU.2011.6122769 Ben Amar, Y., Fourati Kallel, I., & Bouhlel, M. S. (2012, March). Etat de l’art de tatouage robuste des modèles 3D. In The 6th international conference SETIT, Sousse, Tunisia (pp. 21-24). Benchoufi, M., & Ravaud, P. (2017). Blockchain technology for improving clinical research quality. Trials, 18(1), 1–5. doi:10.118613063-017-2035-z PMID:28724395 Benedens, O., & Busch, C. (2000, September). Towards blind detection of robust watermarks in polygonal models. Computer Graphics Forum, 19(3), 199–208. doi:10.1111/1467-8659.00412 Bentov, I., Lee, C., & Mizrahi, A. (2014). Proof of activity: Extending Bitcoin’s proof of work via proof of stake. ACMSIGMETRICS Perform. Eval. Rev., 42(3), 34–37. doi:10.1145/2695533.2695545 Berti, J. (2009, November). Multimedia infringement and protection in the Internet age. IT Professional, 11(6), 42–45. doi:10.1109/MITP.2009.118


Beugnon, S., Itier, V., & Puech, W. (2022). 3D Watermarking. Multimedia Security 1: Authentication and Data Hiding, 219-246. Bhaskaran, K., Ilfrich, P., Liffman, D., Vecchiola, C., Jayachandran, P., Kumar, A., Lim, F., Nandakumar, K., Qin, Z., Ramakrishna, V., Teo, E. G., & Suen, C. H. (2018). Double-blind consent-driven data sharing on blockchain. In Proc. IEEE Int. Conf. Cloud Eng. (IC2E) (pp. 385–391). IEEE. 10.1109/IC2E.2018.00073 Bhat, S. & Sofi, I. (2020). Edge computing and its convergence with blockchain in 5G and beyond: security, challenges, and opportunities. IEEE. https://ieeexplore.ieee.org/abstract/ document/9253517/ Bhattacharya, P., Tanwar, S., Shah, R., & Ladha, A. (2019). Mobile edge computing-enabled blockchain framework—A survey. Lecture Notes in Electrical Engineering, 597, 797–809. doi:10.1007/978-3-030-29407-6_57 Bhushan, K., & Gupta, B. B. (2019). Network flow analysis for detection and mitigation of Fraudulent Resource Consumption (FRC) attacks in multimedia cloud computing. Multimedia Tools and Applications, 78(4), 4267–4298. doi:10.100711042-017-5522-z Biswas, M. (2020). Impulse Noise Detection and Removal Method Based on Modified Weighted Median. [IJSI]. International Journal of Software Innovation, 8(2), 38–53. doi:10.4018/ IJSI.2020040103 Biswas, M. (2022). Adaptive Threshold and Directional Weighted Median Filter-Based Impulse Noise Removal Method for Images. [IJSI]. International Journal of Software Innovation, 10(1), 1–18. Boroumand, M., Chen, M., & Fridrich, J. (2018). Deep residual network for steganalysis of digital images. IEEE Transactions on Information Forensics and Security, 14(5), 1181–1193. doi:10.1109/TIFS.2018.2871749 Botsch, M., Pauly, M., Rossl, C., Bischoff, S., & Kobbelt, L. (2006). Geometric modeling based on triangle meshes. In ACM SIGGRAPH 2006 Courses (pp. 1-es). doi:10.1145/1185657.1185839 Bourke, P. (2009). Ply-polygon file format. Dostupné. http://paulbourke. net/dataformats/ply Brincat, A., Lombardo, A., Morabito, G., Networks, S. Q.-C., & 2019, U. (2019). On the use of Blockchain technologies in WiFi networks. Elsevier, 162, 1–9. Bunker, T., Wei, M., & Swanson, S. (2012). Ming II: A ñexible platform for NAND ñash-based research. UCSD CSE. Burnett, S., & Paine, S. (2001). RSA Security’s official guide to cryptography. McGraw-Hill, Inc. Campidoglio, M., Frattolillo, F., & Landolfi, F. (2009). The multimedia protection problem: Challenges and suggestions. Proc. 4th Int. Conf. Internet Web Appl. Services, (pp. 522–526).


Cayre, F. (2004). Contributions au tatouage de maillages surfaciques 3D [Doctoral dissertation, École nationale supérieure des télécommunications]. Celebi, M. E., & Aslandogan, Y. A. (2008). Robust switching vector median filter for impulsive noise removal. Journal of Electronic Imaging, 17(4), 043006–043006. doi:10.1117/1.2991415 Cha, J., Kang, W., Chung, J., Park, K., & Kang, S. (2015). A New Accelerated Endurance Test for Terabit NAND Flash Memory Using Interference Effect. IEEE Transactions on Semiconductor Manufacturing, 28(3), 399–407. doi:10.1109/TSM.2015.2429211 Chang, D., Lin, W., & Chen, H. (2016). FastRead: Improving Read Performance for MultilevelCell Flash Memory. IEEE Transactions on Very Large Scale Integration (VLSI). Systems, 24, 2998–3002. Chang, J. Y., & Liu, P. C. (2015, August). A fuzzy weighted mean aggregation algorithm for color image impulse noise removal. In 2015 IEEE International Conference on Automation Science and Engineering (CASE) (pp. 1268-1273). IEEE. 10.1109/CoASE.2015.7294273 Chang, L. (2007). On Efficient Wear Leveling for Large Scale Flash Memory Storage Systems (Vol. 07). ACM. doi:10.1145/1244002.1244248 Chauhan, R., Popli, R., & Kansal, I. (2022, October). A Comprehensive Review on Fake Images/ Videos Detection Techniques. In 2022 10th International Conference on Reliability, Infocom Technologies and 20Optimization (Trends and Future Directions)(ICRITO) (pp. 1-6). IEEE. Chauhan, R., Popli, R., & Kansal, I. (2022, October). A Comprehensive Review on Fake Images/ Videos Detection Techniques. In 2022 10th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO) (pp. 1-6). IEEE. Chen, J., Zhan, Y., & Cao, H. (2019). Adaptive sequentially weighted median filter for image highly corrupted by impulse noise. IEEE Access: Practical Innovations, Open Solutions, 7, 158545–158556. doi:10.1109/ACCESS.2019.2950348 Chen, J., Zhan, Y., & Cao, H. (2020). Iterative deviation filter for fixed-valued impulse noise removal. Multimedia Tools and Applications, 79(33-34), 23695–23710. doi:10.100711042-02009123-x Chen, R., Li, Y., Yu, Y., Li, H., Chen, X., & Susilo, W. (2020). Blockchain-based dynamic provable data possession for smart cities. IEEE Internet of Things Journal, 7(5), 4143–4154. doi:10.1109/JIOT.2019.2963789 Chen, T., Ma, K. K., & Chen, L. H. (1999). Tri-state median filter for image denoising. IEEE Transactions on Image Processing, 8(12), 1834–1838. doi:10.1109/83.806630 PMID:18267461 Chen, Y. Y., Jan, J. K., Chi, Y. Y., & Tsai, M. L. (2009). A Feasible DRM Mechanism for BTLike P2P System. In Proceedings of the International Symposium on Information Engineering and Electronic Commerce, Ternopil, Ukraine.


Chhabra, S., & Singh, A. K. (2022). Dynamic Resource Allocation Method for Load Balance Scheduling Over Cloud Data Center Networks. arXiv preprint arXiv:2211.02352. Chhabra, S., & Kumar Singh, A. (2020). Security Enhancement in Cloud Environment using Secure Secret Key Sharing. Journal of Communications Software and Systems, 16(4), 296–307. doi:10.24138/jcomss.v16i3.964 Chhabra, S., & Singh, A. K. (2016, December). Dynamic data leakage detection model based approach for MapReduce computational security in cloud. In 2016 Fifth International Conference on Eco-friendly Computing and Communication Systems (ICECCS) (pp. 13-19). IEEE. 10.1109/ Eco-friendly.2016.7893234 Chhabra, S., & Singh, A. K. (2019). Dynamic hierarchical load balancing model for cloud data centre networks. Electronics Letters, 55(2), 94–96. doi:10.1049/el.2018.5427 Chhabra, S., & Singh, A. K. (2020). Secure VM Allocation Scheme to Preserve against CoResident Threat. International Journal of Web Engineering and Technology, 15(1), 96–115. doi:10.1504/IJWET.2020.107686 Chhabra, S., & Singh, A. K. (2021). Dynamic Resource Allocation Method for Load Balance Scheduling over Cloud Data Center Networks. Journal of Web Engineering, 20(8). doi:10.13052/ jwe1540-9589.2083 Cho, J. W., Prost, R., & Jung, H. Y. (2006). An oblivious watermarking for 3-D polygonal meshes using distribution of vertex norms. IEEE Transactions on Signal Processing, 55(1), 142–155. doi:10.1109/TSP.2006.882111 Corsini, M., Uccheddu, F., Bartolini, F., Barni, M., Caldelli, R., & Cappellini, V. (2003, October). 3D watermarking technology: Visual quality aspects. In Proc. 9th Conf. Virtual System and Multimedia, VSMM’03. Semantic Scholar. Cox, I. J., Miller, M. L., Bloom, J. A., & Honsinger, C. (2002). Digital watermarking (Vol. 53). Morgan Kaufmann. Cox, I. J., Miller, M. L., Bloom, J. A., & Honsinger, C. (2002). Digital Watermarking (Vol. 53). Morgan Kaufmann. Dai, T., Xu, Z., Liang, H., Gu, K., Tang, Q., Wang, Y., & Xia, S. T. (2017). A generic denoising framework via guided principal component analysis. Journal of Visual Communication and Image Representation, 48, 340–352. doi:10.1016/j.jvcir.2017.05.009 Dalal, M., & Juneja, M. (2021). Steganography and Steganalysis (in digital forensics): A Cybersecurity guide. Multimedia Tools and Applications, 80(4), 5723–5771. doi:10.100711042020-09929-9 Djebbar, F., Ayad, B., Meraim, K. A., & Hamam, H. (2012). Comparative study of digital audio steganography techniques. EURASIP Journal on Audio, Speech, and Music Processing, 2012(1), 1–16. doi:10.1186/1687-4722-2012-25


Dong, Y., & Xu, S. (2007). A new directional weighted median filter for removal of random-valued impulse noise. IEEE Signal Processing Letters, 14(3), 193–196. doi:10.1109/LSP.2006.884014 Dorri, A., Kanhere, S. S., & Jurdak, R. (2019). Blockchain in internet of things. Challenges and Solutions., 162, 106855. Dubey, V., & Katarya, R. (2021, May). Adaptive histogram equalization based approach for sar image enhancement: A comparative analysis. In 2021 5th International Conference on Intelligent Computing and Control Systems (ICICCS) (pp. 878-883). IEEE. 10.1109/ ICICCS51141.2021.9432287 Dubovitskaya, A., Xu, Z., Ryu, S., Schumacher, M., & Wang, F. (2017). Secure and trustable electronic medical records sharing using blockchain. Annual Symposium Proceedings - AMIA Symposium, (pp. 650). AMIA. PMID:29854130 El Abbadi, N., & Hassan, A. M., & AL-Nwany, M. M. (2013). Blind fake image detection. [IJCSI]. International Journal of Computer Science Issues, 10(4), 180. El Zein, O. M., El Bakrawy, L. M., & Ghali, N. I. (2016). A non-blind robust watermarking approach for 3D mesh models. Journal of Theoretical and Applied Information Technology, 83(3), 353. El Zein, O., El Bakrawy, M., & Ghali, N. I. (2017). A robust 3D mesh watermarking algorithm utilizing fuzzy C-Means clustering. Future Computing and Informatics Journal, 2(2), 10. doi:10.1016/j.fcij.2017.10.007 Elejla, O. E., Anbar, M., Hamouda, S., Faisal, S., Bahashwan, A. A., & Hasbullah, I. H. (2022). Deep-Learning-Based Approach to Detect ICMPv6 Flooding DDoS Attacks on IPv6 Networks. Applied Sciences (Basel, Switzerland), 12(12), 6150. doi:10.3390/app12126150 Enginoglu, S., Erkan, U., & Memiş, S. (2019). Pixel similarity-based adaptive Riesz mean filter for salt-and-pepper noise removal. Multimedia Tools and Applications, 78(24), 35401–35418. doi:10.100711042-019-08110-1 Erkan, U., & Kilicman, A. (2016). Two new methods for removing salt-and-pepper noise from digital images. scienceasia, 42(1), 28. Erkan, U., Enginoğlu, S., Thanh, D. N., & Hieu, L. M. (2020). Adaptive frequency median filter for the salt and pepper denoising problem. IET Image Processing, 14(7), 1291–1302. doi:10.1049/ iet-ipr.2019.0398 Erkan, U., & Gokrem, L. (2018). A new method based on pixel density in salt and pepper noise removal. Turkish Journal of Electrical Engineering and Computer Sciences, 26(1), 162–171. doi:10.3906/elk-1705-256 Erkan, U., Gokrem, L., & Enginoglu, S. (2018). Different applied median filter in salt and pepper noise. Computers & Electrical Engineering, 70, 789–798. doi:10.1016/j.compeleceng.2018.01.019


Erkan, U., Thanh, D. N., Enginoglu, S., & Memiş, S. (2020, June). Improved adaptive weighted mean filter for salt-and-pepper noise removal. In 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE) (pp. 1-5). IEEE. 10.1109/ ICECCE49384.2020.9179351 Fan, C. (2020). Performance Evaluation of Blockchain Systems: A Systematic Survey, 8, 126927–50. IEEE. Fan, K., Wang, S., Ren, Y., Li, H., & Yang, Y. (2018). Medblock: Efficient and secure medical data sharing via blockchain. Journal of Medical Systems, 42(8), 1–11. doi:10.100710916-0180993-7 PMID:29931655 Farrag, S., & Alexan, W. (2020). Secure 3d data hiding technique based on a mesh traversal algorithm. Multimedia Tools and Applications, 79(39), 29289–29303. doi:10.100711042-02009437-w Fotiou, N., & Polyzos, G. C. (2016). Decentralized Name-Based Security for Content Distribution Using Blockchains. IEEE Conference on Computer, and Undefined (pp. 415–20). IEEE. Frank, J., Eisenhofer, T., Schönherr, L., Fischer, A., Kolossa, D., & Holz, T. (2020, November). Leveraging frequency analysis for deep fake image recognition. In International conference on machine learning (pp. 3247-3258). PMLR. Fukami, A., Ghose, S., Luo, Y., Cai, Y., & Mutlu, O. (2017). Improving the reliability of chip-off forensic analysis of NAND flash memory devices. Digital Investigation, 20, S1–S11. doi:10.1016/j.diin.2017.01.011 Gao, S., Yu, T., Zhu, J., & Cai, W. (2019, December). T-PBFT: An EigenTrust-based practical byzantine fault tolerance consensus algorithm. China Communications, 16(12), 111–123. doi:10.23919/JCC.2019.12.008 Garg, A., Popli, R., & Sarao, B. S. (2021). Growth of digitization and its impact on big data analytics. [). IOP Publishing.]. IOP Conference Series. Materials Science and Engineering, 1022(1), 012083. doi:10.1088/1757-899X/1022/1/012083 Garg, H. (2022). A comprehensive study of watermarking schemes for 3D polygon mesh objects. International Journal of Information and Computer Security, 19(1-2), 48–72. doi:10.1504/ IJICS.2022.126753 Garg, S., Kaur, K., Kumar, N., & Rodrigues, J. J. (2019). Hybrid deep-learning-based anomaly detection scheme for suspicious flow detection in SDN: A social multimedia perspective. IEEE Transactions on Multimedia, 21(3), 566–578. doi:10.1109/TMM.2019.2893549 Gautam, C., & Tiwari, A. (2016). On the construction of extreme learning machine for one class classifier. In Proceedings of ELM-2015 Volume 1: Theory, Algorithms and Applications (I) (pp. 447-461). Springer International Publishing. 10.1007/978-3-319-28397-5_35


Gedkhaw, E., Soodtoetong, N., & Ketcham, M. (2018, September). The performance of cover image steganography for hidden information within image file using least significant bit algorithm. In 2018 18th International Symposium on Communications and Information Technologies (ISCIT) (pp. 504-508). IEEE. 10.1109/ISCIT.2018.8588011 Geier, F. (2015). The differences between SSD and HDD technology regarding forensic investigations. Gellert, A., & Brad, R. (2016). Context‐based prediction filtering of impulse noise images. IET Image Processing, 10(6), 429–437. doi:10.1049/iet-ipr.2015.0702 Genestier, P., Zouarhi, S., Limeux, P., Excoffier, D., Prola, A., Sandon, S., & Temerson, J. M. (2017). Blockchain for consent management in the ehealth environment: A nugget for privacy and security challenges. Journal of the International Society for Telemedicine and eHealth, 5, GKR-e24. Geng, X., Hu, X., & Xiao, J. (2012). Quaternion switching filter for impulse noise reduction in color image. Signal Processing, 92(1), 150–162. doi:10.1016/j.sigpro.2011.06.015 Gonzalez, R. C., & Woods, R. E. (2018). Digital Image Processing. Pearson. Gopi, R., Sathiyamoorthi, V., Selvakumar, S., Manikandan, R., Chatterjee, P., Jhanjhi, N. Z., & Luhach, A. K. (2021). Enhanced method of ANN based model for detection of DDoS attacks on multimedia internet of things. Multimedia Tools and Applications, 1–19. Grossman, R., Qin, X., & Lifka, D. (1993, April). A proof-of-concept implementation interfacing an object manager with a hierarchical storage system. In (1993) Proceedings Twelfth IEEE Symposium on Mass Storage systems (pp. 209-213). IEEE. 10.1109/MASS.1993.289758 Gubanov, Y., & Afonin, O. (2014). Recovering evidence from SSD drives: understanding TRIM, garbage collection and exclusions. Belkasoft. Gupta, S., Gujral, G., & Aggarwal, N. (2012). Enhanced least significant bit algorithm for image steganography. IJCEM International Journal of Computational Engineering & Management, 15(4), 40–42. Gupta, V., Chaurasia, V., & Shandilya, M. (2015). Random-valued impulse noise removal using adaptive dual threshold median filter. Journal of Visual Communication and Image Representation, 26, 296–304. doi:10.1016/j.jvcir.2014.10.004 Habib, M., Rasheed, S., Hussain, A., & Ali, M. (2015). Random value impulse noise removal based on most similar neighbors. In 2015 13th International Conference on Frontiers of Information Technology (FIT) (pp. 329-333). IEEE. 10.1109/FIT.2015.64 Habib, M., Hussain, A., & Choi, T. S. (2015). Adaptive threshold based fuzzy directional filter design using background information. Applied Soft Computing, 29, 471–478. doi:10.1016/j. asoc.2015.01.010


Habib, M., Hussain, A., Rasheed, S., & Ali, M. (2016). Adaptive fuzzy inference system based directional median filter for impulse noise removal. AEÜ. International Journal of Electronics and Communications, 70(5), 689–697. doi:10.1016/j.aeue.2016.02.005 Hadi, H. J., Musthaq, N., & Khan, I. U. (2021). SSD forensic: Evidence generation and forensic research on solid state drives using trim analysis. 2021 International Conference on Cyber Warfare and Security (ICCWS). IEEE. 10.1109/ICCWS53234.2021.9702989 Hamidouche, W., Farajallah, M., Sidaty, N., Assad, S. E., & Deforges, O. (2017). Real-time selective video encryption based on the chaos system in scalable HEVC extension. Signal Processing Image Communication, 58, 73–86. doi:10.1016/j.image.2017.06.007 Hao, Y., Li, Y., Dong, X., Fang, L., & Chen, P. (2018). Performance analysis of consensus algorithm in private blockchain. In Proc. IEEE Intell. Vehicles Symp. (IV), (pp. 280–285). IEEE. 10.1109/IVS.2018.8500557 Heo, G., Yang, D., Doh, I., & Chae, K. (2009). Design of blockchain system for protection of personal information in digital content trading environment. In Proc. Int. Conf. Inf. Netw. (ICOIN), (pp. 152–157). IEEE. 10.1109/ICOIN48656.2020.9016501 Hepisuthar, M. (2021). Comparative Analysis Study on SSD, HDD, and SSHD. Turkish Journal of Computer and Mathematics Education, 12(3), 3635–3641. doi:10.17762/turcomat.v12i3.1644 Hölbl, M., Kompara, M., Kamišalić, A., & Nemec Zlatolas, L. (2018). A systematic review of the use of blockchain in healthcare. Symmetry, 10(10), 470. doi:10.3390ym10100470 Hon, W., Palfreyman, J., & Tegart, M. (2016). Distributed ledger technology & cybersecurity– Improving information security in the financial sector. In Eur. Union Agency Netw. Inf. Secur., (pp. 1–36). NIH. Hsu, C. C., Zhuang, Y. X., & Lee, C. Y. (2020). Deep fake image detection based on pairwise learning. Applied Sciences (Basel, Switzerland), 10(1), 370. doi:10.3390/app10010370 Hsu, C. Y., Wang, S., & Qiao, Y. (2021). Intrusion by machine learning for multimedia platform. Multimedia Tools and Applications, 80(19), 29643–29656. doi:10.100711042-021-11100-x PMID:34248394 Hung, C. C., & Chang, E. S. (2017). Moran’s I for impulse noise detection and removal in color images. Journal of Electronic Imaging, 26(2), 023023–023023. doi:10.1117/1.JEI.26.2.023023 Hu, R., Rondao-Alface, P., & Macq, B. (2009, April). Constrained optimisation of 3D polygonal mesh watermarking by quadratic programming. In 2009 IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 1501-1504). IEEE. 10.1109/ICASSP.2009.4959880 Hussain, M., Wahab, A. W. A., Ho, A. T., Javed, N., & Jung, K. H. (2017). A data hiding scheme using parity-bit pixel value differencing and improved rightmost digit replacement. Signal Processing Image Communication, 50, 44–57. doi:10.1016/j.image.2016.10.005


Hussien, H. M., Yasin, S. M., Udzir, S. N. I., Zaidan, A. A., & Zaidan, B. B. (2019). A systematic review for enabling of develop a blockchain technology in healthcare application: Taxonomy, substantially analysis, motivations, challenges, recommendations and future direction. Journal of Medical Systems, 43(10), 1–35. doi:10.100710916-019-1445-8 PMID:31522262 Hwang, H., & Haddad, R. A. (1995). Adaptive median filters: New algorithms and results. IEEE Transactions on Image Processing, 4(4), 499–502. doi:10.1109/83.370679 PMID:18289998 Islam, M. R., Liu, S., Wang, X., & Xu, G. (2020). Deep learning for misinformation detection on online social networks: A survey and new perspectives. Social Network Analysis and Mining, 10(1), 1–20. doi:10.100713278-020-00696-x PMID:33014173 Islam, M. T., Rahman, S. M., Ahmad, M. O., & Swamy, M. N. S. (2018). Mixed Gaussianimpulse noise reduction from images using convolutional neural network. Signal Processing Image Communication, 68, 26–41. doi:10.1016/j.image.2018.06.016 ISO/IEC. (2018). Security Techniques. ISO. https://www.iso.org/standard/44381.html Jin, L., Liu, H., Xu, X., & Song, E. (2011). Color impulsive noise removal based on quaternion representation and directional vector order-statistics. Signal Processing, 91(5), 1249–1261. doi:10.1016/j.sigpro.2010.12.011 Jin, L., Xiong, C., & Li, D. (2008). Selective adaptive weighted median filter. Optical Engineering (Redondo Beach, Calif.), 47(3), 037001–037001. doi:10.1117/1.2891297 Jin, L., Xiong, C., & Liu, H. (2012). Improved bilateral filter for suppressing mixed noise in color images. Digital Signal Processing, 22(6), 903–912. doi:10.1016/j.dsp.2012.06.012 Jin, L., Zhu, Z., Song, E., & Xu, X. (2019). An effective vector filter for impulse noise reduction based on adaptive quaternion color distance mechanism. Signal Processing, 155, 334–345. doi:10.1016/j.sigpro.2018.10.007 Jin, L., Zhu, Z., Xu, X., & Li, X. (2016). Two-stage quaternion switching vector filter for color impulse noise removal. Signal Processing, 128, 171–185. doi:10.1016/j.sigpro.2016.03.025 Jiranantanagorn, P. (2019, July). High-Density Salt and Pepper Noise Filter using Anisotropic Diffusion. In 2019 16th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON) (pp. 645-648). IEEE. 10.1109/ ECTI-CON47248.2019.8955185 Julliand, T., Nozick, V., & Talbot, H. (2016). Image noise and digital image forensics. In DigitalForensics and Watermarking: 14th International Workshop, IWDW 2015, (pp. 3-17). Springer International Publishing. Kadhim, I. J., Premaratne, P., Vial, P. J., & Halloran, B. (2019). Comprehensive survey of image steganography: Techniques, Evaluations, and trends in future research. Neurocomputing, 335, 299–326. doi:10.1016/j.neucom.2018.06.075


Kalivas, A., Tefas, A., & Pitas, I. (2003, July). Watermarking of 3D models using principal component analysis. In 2003 International Conference on Multimedia and Expo. ICME’03. Proceedings (Cat. No. 03TH8698) (Vol. 1, pp. I-637). IEEE. Kallel, I. F., Bouhlel, M. S., Lapayre, J. C., & Garcia, E. (2009). Control of dermatology image integrity using reversible watermarking. International Journal of Imaging Systems and Technology, 19(1), 5–9. doi:10.1002/ima.20172 Kamau, G., Boore, C., Maina, E., & Njenga, S. (2018, May). Blockchain technology: Is this the solution to emr interoperability and security issues in developing countries?. In 2018 IST-Africa Week Conference (IST-Africa) (pp. 1). IEEE. Kan, L., Wei, Y., Hafiz Muhammad, A., Siyuan, W., Gao, L. C., & Kai, H. (2018). A multiple blockchains architecture on inter-blockchain communication. In Proc. IEEE Int. Conf. Softw. Qual., Rel. Secur. Companion (QRS-C), (pp. 139–145). IEEE. 10.1109/QRS-C.2018.00037 Kang, C. C., & Wang, W. J. (2009). Fuzzy reasoning-based directional median filter design. Signal Processing, 89(3), 344–351. doi:10.1016/j.sigpro.2008.09.003 Kang, M., Lee, W., & Kim, S. (2018). Subpage-Aware Solid State Drive for Improving Lifetime and Performance. IEEE Transactions on Computers, 67(10), 1492–1505. doi:10.1109/ TC.2018.2827033 Kaur, H., Koundal, D., & Kadyan, V. (2021). Image fusion techniques: A survey. Archives of Computational Methods in Engineering, 28(7), 4425–4447. doi:10.100711831-021-09540-7 PMID:33519179 Kawase, Y., & Kasahara, S. (2017). Transaction-Confirmation Time for Bitcoin: A Queueing Analytical Approach to Blockchain Mechanism. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 10591 LNCS. Kawase, Y., & Kasahara, S. (2017). Transaction-Confirmation Time for Bitcoin: A Queueing Analytical Approach to Blockchain Mechanism. In Queueing Theory and Network Applications (pp. 75–88). Springer. doi:10.1007/978-3-319-68520-5_5 Khan, U., An, Z. Y., & Imran, A. (2020, November). A blockchain ethereum technologyenabled digital content: Development of trading and sharing economy data. IEEE Access : Practical Innovations, Open Solutions, 8, 217045–217056. doi:10.1109/ACCESS.2020.3041317 Khaw, H. Y., Soon, F. C., Chuah, J. H., & Chow, C. O. (2019). High‐density impulse noise detection and removal using deep convolutional neural network with particle swarm optimisation. IET Image Processing, 13(2), 365–374. doi:10.1049/iet-ipr.2018.5776 Kim, A., & Kim, M. (2020). A Study on Blockchain-Based Music Distribution Framework: Focusing on Copyright Protection. International Conference on ICT Convergence (pp. 1921–25). IEEE. 10.1109/ICTC49870.2020.9289184


Kim, J. Y. (2018). A comparative study of block chain: Bitcoin· Namecoin· MediBloc. Journal of Science and Technology Studies, 18(3), 217–255. Kim, J., Lee, Y., Lee, K., Jung, T., Volokhov, D., & Yim, K. (2013). Vulnerability to flash controller for secure usb drives. J. Internet Serv. Inf. Secure, 3(3/4), 136–145. King, C., & Vidas, T. (2011). Empirical analysis of solid state disk data retention when used with contemporary operating systems (Vol. 8). Elsevier Science Publishers B. Ko, J. (2019). Variation-Tolerant WL Driving Scheme for High-Capacity NAND Flash Memory. IEEE Transactions on Very Large Scale Integration (VLSI). Systems, 27, 1828–1839. Kokoris-Kogias, E., Jovanovic, P., & Gasser, L. (2018). Omniledger: A Secure, Scale-out, Decentralized Ledger via Sharding. In IEEE Symposium and Undefined 2018. (pp. 583–98). IEEE. Kombe, C., Dida, M., & Sam, A. (2018). A review on healthcare information systems and consensus protocols in blockchain technology. Ko, S. J., & Lee, Y. H. (1991). Center weighted median filters and their applications to image enhancement. IEEE Transactions on Circuits and Systems, 38(9), 984–993. doi:10.1109/31.83870 Kumar, K. A., Preethi, G., & Vasanth, K. (2020). A study of fake news detection using machine learning algorithms. [IJTES]. Int. J. Technol. Eng. Syst., 11(1), 1–7. Kumar, N., Shukla, H., & Tripathi, R. (2017). Image Restoration in Noisy free images using fuzzy based median filtering and adaptive Particle Swarm Optimization-Richardson-Lucy algorithm. International Journal of Intelligent Engineering and Systems, 10(4), 50–59. doi:10.22266/ ijies2017.0831.06 Kuribayashi, M., & Funabiki, N. (2019). Decentralized tracing protocol for fingerprinting system. APSIPA Transactions on Signal and Information Processing, 8(1), 1–8. doi:10.1017/ATSIP.2018.28 Kutade, P. B., & Bhalotra, P. S. A. (2015). A survey on various approaches of image steganography. International Journal of Computer Applications, 109(3), 1–5. doi:10.5120/19165-0620 Lao, L., Dai, X., Xiao, B., & Guo, S. (2020). G-PBFT: A location-based and scalable consensus protocol for IoT-blockchain applications. In Proc. IEEE Int. Parallel Distrib. Process. Symp. (IPDPS), (pp. 664–673). IEEE. 10.1109/IPDPS47924.2020.00074 Lau, F., Rubin, S. H., Smith, M. H., & Trajkovic, L. (2000, October). Distributed denial of service attacks. In Smc 2000 conference proceedings. 2000 ieee international conference on systems, man and cybernetics.’cybernetics evolving to systems, humans, organizations, and their complex interactions (Vol. 3, pp. 2275-2280). IEEE. 10.1109/ICSMC.2000.886455 Ledwaba, L.P.I. (2021). Smart Microgrid Energy Market: Evaluating Distributed Ledger Technologies for Remote and Constrained Microgrid Deployments. MDPI 10(6), 714.


Lee, J., Kim, Y., Shipman, G. M., Oral, S., & Kim, J. (2013). Preemptible i/o scheduling of garbage Collection for solid state drives, Computer- Aided Design of Integrated Circuits and Systems. IEEE Transactions On, 32(2). Lee, D. H., Kim, Y. R., Kim, H. J., Park, S. M., & Yang, Y. J. (2019). Fake news detection using deep learning. Journal of Information Processing Systems, 15(5), 1119–1130. Le, K. H., Nguyen, M. H., Tran, T. D., & Tran, N. D. (2022). IMIDS: An intelligent intrusion detection system against cyber threats in IoT. Electronics (Basel), 11(4), 524. doi:10.3390/ electronics11040524 Leng, Q., Qi, H., Miao, J., Zhu, W., & Su, G. (2015). One-class classification with extreme learning machine. Mathematical Problems in Engineering, 2015. Li, W., Andreina, S., Bohli, J. M., & Karame, G. (2017). Securing proof-of-stake blockchain protocols. In Data Privacy Management, Cryptocurrencies and Blockchain Technology: ESORICS 2017 International Workshops, (pp. 297-315). Springer International Publishing. 10.1007/9783-319-67816-0_17 Li, Y., & Lyu, S. (2018). Exposing deepfake videos by detecting face warping artifacts. arXiv preprint arXiv:1811.00656. Korshunov, P., & Marcel, S. (2018). Deepfakes: a new threat to face recognition? assessment and detection. arXiv preprint arXiv:1812.08685. Afchar, D., Nozick, V., Yamagishi, J., & Echizen, I. (2018, December). Mesonet: a compact facial video forgery detection network. In 2018 IEEE international workshop on information forensics and security (WIFS) (pp. 1-7). IEEE. Li, G., Xu, X., Zhang, M., & Liu, Q. (2020). Densely connected network for impulse noise removal. Pattern Analysis & Applications, 23(3), 1263–1275. doi:10.100710044-020-00871-y Lin, P. H., Chen, B. H., Cheng, F. C., & Huang, S. C. (2016). A morphological mean filter for impulse noise removal. Journal of Display Technology, 12(4), 344–350. Liu, Q., Safavi-Naini, R., & Sheppard, N. P. (2003). Digital rights management for content distribution. In Proc. Australas. Inf. Secur. Workshop Conf. ACSW Frontiers, (pp. 49–58). ACSW. Liu, C., & Chung, P. (2011, October). A Robust Normalization Algorithm for Three Dimensional Models Based on Clustering and Star Topology. International Journal of Innovative Computing. Information and Control Volum, 7(10), 5731–5748. Liu, J., Jiao, G., & Sun, X. (2022). Feature passing learning for image steganalysis. IEEE Signal Processing Letters. Liu, Y., Yang, T., & Xin, G. (2015). Text steganography in chat based on emoticons and interjections. Journal of Computational and Theoretical Nanoscience, 12(9), 2091–2094. doi:10.1166/jctn.2015.3992 Li, Z., Liu, G., Xu, Y., & Cheng, Y. (2014). Modified directional weighted filter for removal of salt & pepper noise. Pattern Recognition Letters, 40, 113–120. doi:10.1016/j.patrec.2013.12.022


Lu, C. T., & Chou, T. C. (2012). Denoising of salt-and-pepper noise corrupted image using modified directional-weighted-median filter. Pattern Recognition Letters, 33(10), 1287–1295. doi:10.1016/j.patrec.2012.03.025 Luo, C., Xu, L., & Li, D. (2020). Edge computing integrated with blockchain technologies, 268–288. Springer. Mafakheri, B., & Subramanya, T. (2018). Blockchain-based infrastructure sharing in 5G small cell networks, 313–317. IEEE. https://ieeexplore.ieee.org/abstract/document/8584920/?casa_ token=sjhVjROQ-68AAAAA:S7kszO7r5cI8piWow7ljtME1kp20jIQ0S4lHU jqMoKCxs_Ar6utg2MH8QwbnkiS5m_9cbBN Malinski, L., & Smolka, B. (2016). Fast averaging peer group filter for the impulsive noise removal in color images. Journal of Real-Time Image Processing, 11(3), 427–444. doi:10.100711554015-0500-z Malinski, L., & Smolka, B. (2019). Fast adaptive switching technique of impulsive noise removal in color images. Journal of Real-Time Image Processing, 16(4), 1077–1098. doi:10.100711554016-0599-6 Mamoshina, P., Ojomoko, L., Yanovich, Y., Ostrovski, A., Botezatu, A., Prikhodko, P., Izumchenko, E., Aliper, A., Romantsov, K., Zhebrak, A., Ogu, I. O., & Zhavoronkov, A. (2018). Converging blockchain and next-generation artificial intelligence technologies to decentralize and accelerate biomedical research and healthcare. Oncotarget, 9(5), 5665–5690. doi:10.18632/oncotarget.22345 PMID:29464026 Manindra, A. P. V., & Karthikeyan, B. (2022, August). OTP Camouflaging using LSB Steganography and Public Key Cryptography. In 2022 3rd International Conference on Electronics and Sustainable Communication Systems (ICESC) (pp. 109-115). IEEE. 10.1109/ICESC54411.2022.9885542 Market Trends. (2022). Top 8 Best Cryptocurrencies to Invest in 2022. Analytics Insight. https:// www.analyticsinsight.net/top-8-best-cryptocurrencies-to-invest-in-2022/. Masood, S., Hussain, A., Jaffar, M. A., & Choi, T. S. (2014). Color differences based fuzzy filter for extremely corrupted color images. Applied Soft Computing, 21, 107–118. doi:10.1016/j. asoc.2014.03.006 Ma, Z., Jiang, M., Gao, H., & Wang, Z. (2018, December). Blockchain for digital rights management. Future Generation Computer Systems, 89, 746–764. doi:10.1016/j.future.2018.07.029 McHenry, K., & Bajcsy, P. (2008). An overview of 3d data content, file formats and viewers. National Center for Supercomputing Applications, 1205, 22. Megías, D. (2014). Improved Privacy-Preserving P2P Multimedia Distribution Based on Recombined Fingerprints. IEEE Transactions on Dependable and Secure Computing, 12(2), 179–189. doi:10.1109/TDSC.2014.2320712


Megías, D., & Qureshi, A. (2017). Collusion-resistant and privacy-preserving P2P multimedia distribution based on recombined fingerprinting. Expert Systems with Applications, 71, 147–172. doi:10.1016/j.eswa.2016.11.015 Menendez-Ortiz, A., Feregrino-Uribe, C., Hasimoto-Beltran, R., & Garcia-Hernandez, J. J. (2019). A survey on reversible watermarking for multimedia content: A robustness overview. IEEE Access : Practical Innovations, Open Solutions, 7, 132662–132681. doi:10.1109/ACCESS.2019.2940972 Meng, Z., Morizumi, T., Miyata, S., & Kinoshita, H. (2018). Design scheme of multimedia management system based on digital watermarking and blockchain. Proc. IEEE 42nd Annu. Comput. Softw. Appl. Conf. (COMPSAC), (pp. 359–364). IEEE. Mertz, L. (2018). (Block) chain reaction: A blockchain revolution sweeps into health care, offering the possibility for a much-needed data solution. IEEE Pulse, 9(3), 4–7. doi:10.1109/ MPUL.2018.2814879 PMID:29757744 Micheloni, R., Marelli, A., & Commodaro, S. (2010). Nand overview, From memory to systems: Inside NAND flash memories. Springer. doi:10.1007/978-90-481-9431-5 Micron Technology. (2008) Wear Leveling Techniques in NAND Flash. Micron. Micro, T. (2018). Spam Campaign Targets Japan. Uses Steganography to Deliver the BEBLOH Banking Trojan. Miri, A., & Faez, K. (2018). An image steganography method based on integer wavelet transform. Multimedia Tools and Applications, 77(11), 13133–13144. doi:10.100711042-017-4935-z Mostafa, G., & Alexan, W. (2022). A robust high capacity gray code-based double layer security scheme for secure data embedding in 3d objects. ITU Journal on Future and Evolving Technologies, 3, 1. Motwani, M., Sridharan, B., Motwani, R., & Harris, F. C. Jr. (2010, February). Tamper proofing 3D models. In 2010 International Conference on Signal Acquisition and Processing (pp. 210214). IEEE. 10.1109/ICSAP.2010.85 Mstafa, R. J., Elleithy, K. M., & Abdelfattah, E. (2017). A robust and secure video steganography method in DWT-DCT domains based on multiple object tracking and ECC. IEEE Access : Practical Innovations, Open Solutions, 5, 5354–5365. doi:10.1109/ACCESS.2017.2691581 Mukhtar, M., Bilal, M., Rahdar, A., Barani, M., Arshad, R., Behl, T., & Bungau, S. (2020). Nanomaterials for diagnosis and treatment of brain cancer: Recent updates. Chemosensors (Basel, Switzerland), 8(4), 117. doi:10.3390/chemosensors8040117 Murdoch, S. J., & Lewis, S. (2005, June). Embedding covert channels into TCP/IP. In International Workshop on Information Hiding (pp. 247-261). Springer. 10.1007/11558859_19 Mustapha, A., Khatoun, R., Zeadally, S., Chbib, F., Fadlallah, A., Fahs, W., & El Attar, A. (2023). Detecting DDoS attacks using adversarial neural network. Computers & Security, 127, 103117. doi:10.1016/j.cose.2023.103117 282


Nain, N., Jindal, G., Garg, A., & Jain, A. (2008). Dynamic thresholding based edge detection. In Proceedings of the World Congress on Engineering, (pp. 2-7). Nair, M. S., & Mol, P. A. (2013). Direction based adaptive weighted switching median filter for removing high density impulse noise. Computers & Electrical Engineering, 39(2), 663–689. doi:10.1016/j.compeleceng.2012.06.004 Nair, M. S., & Raju, G. (2012). A new fuzzy-based decision algorithm for high-density impulse noise removal. Signal, Image and Video Processing, 6(4), 579–595. doi:10.100711760-010-0186-4 Namasudra, S., Deka, G. C., Johri, P., Hosseinpour, M., & Gandomi, A. H. (2021). The Revolution of Blockchain: State-of-the-Art and Research Challenges. Archives of Computational Methods in Engineering, 28(3), 1497–1515. doi:10.100711831-020-09426-0 Nasir, J. A., Khan, O. S., & Varlamis, I. (2021). Fake news detection: A hybrid CNN-RNN based deep learning approach. International Journal of Information Management Data Insights, 1(1), 100007. doi:10.1016/j.jjimei.2020.100007 Nawari, N. O., & Ravindran, S. (2019). Blockchain Technologies in BIM Workflow Environment. Computing in Civil Engineering 2019: Visualization, Information Modeling, and Simulation Selected Papers from the ASCE International Conference on Computing in Civil Engineering 2019, (pp. 343–52). ASCE. 10.1061/9780784482421.044 Nguyen, T. T., & Reddi, V. J. (2021). Deep reinforcement learning for cyber security. IEEE Transactions on Neural Networks and Learning Systems, 1–17. doi:10.1109/TNNLS.2021.3121870 PMID:34723814 Noor, A., Zhao, Y., Khan, R., Wu, L., & Abdalla, F. Y. (2020). Median filters combined with denoising convolutional neural network for Gaussian and impulse noises. Multimedia Tools and Applications, 79(25-26), 18553–18568. doi:10.100711042-020-08657-4 Novotny, P., Zhang, Q., Hull, R., & Use, S. B.-… S. (2018). U. (2018). Permissioned blockchain technologies for academic publishing. Content.Iospress. Com, 38(3), 159–171. Ohbuchi, R., Takahashi, S., Miyazawa, T., & Mukaiyama, A. (2001, June). Watermarking 3D polygonal meshes in the mesh spectral domain. In Graphics interface, pp. 9-17. Omar, A. A., Rahman, M. S., Basu, A., & Kiyomoto, S. (2017) Medibchain: a blockchain based privacy pre-serving platform for healthcare data. In: International conference on security, privacy and anonymity in computation, communication, and storage. Springer. Pal, A. K., & Biswas, G. P. (2009). On improving Visual Quality of Remote-Sensed Earthquake Images in Proceedings of National Seminar on Recent Advances in Theoretical and Applied Seismology. MDPI. Palabaş, T., & Gangal, A. (2012, July). Adaptive fuzzy filter combined with median filter for reducing intensive salt and pepper noise in gray level images. In 2012 International Symposium on Innovations in Intelligent Systems and Applications (pp. 1-4). IEEE. 10.1109/INISTA.2012.6247003


Pattnaik, A., Agarwal, S., & Chand, S. (2012). A new and efficient method for removal of high-density salt and pepper noise through cascade decision based filtering algorithm. Procedia Technology, 6, 108–117. doi:10.1016/j.protcy.2012.10.014 Petrou, M. M., & Petrou, C. (2010). Image processing: the fundamentals. John Wiley & Sons. doi:10.1002/9781119994398 Pillai, N. (2020). Fake colorized and morphed image detection using convolutional neural network. ACCENTS Transactions on Image Processing and Computer Vision, 6(18), 8–16. doi:10.19101/ TIPCV.2020.618011 Piva, A., Bartolini, F., & Barni, M. (2002, May). Managing multimedia in open networks. IEEE Internet Computing, 6(3), 18–26. doi:10.1109/MIC.2002.1003126 Pizzolante, R., Castiglione, A., Carpentieri, B., Santis, A. D., & Castiglione, A. (2015). Reversible Multimedia Protection for DNA Microarray Images. In Proceedings of the 10th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), Krakow, Poland. Plataniotis, K., & Venetsanopoulos, A. N. (2000). Color image processing and applications. Springer-Verlag. doi:10.1007/978-3-662-04186-4 Poon, J. & Dryja, T. (2016). Scalable off-Chain Instant Payments. The Bitcoin Lightning Network. Prasad, K. M., & Bapat, R. B. (1992). The generalized moore-penrose inverse. Linear Algebra and Its Applications, 165, 59–69. doi:10.1016/0024-3795(92)90229-4 Premarathne, U., Abuadbba, A., Alabdulatif, A., Khalil, I., Tari, Z., Zomaya, A., & Buyya, R. (2016). Hybrid cryptographic access control for cloud-based EHR systems. IEEE Cloud Computing, 3(4), 58–64. doi:10.1109/MCC.2016.76 Pundir, S., Obaidat, M. S., Wazid, M., Das, A. K., Singh, D. P., & Rodrigues, J. J. (2021). MADPIIME: Malware attack detection protocol in IoT-enabled industrial multimedia environment using machine learning approach. Multimedia Systems, 1–13. doi:10.100700530-020-00743-9 Puthal, D., Malik, N., Mohanty, S. P., Kougianos, E., & Das, G. (2018, July). Everything you wanted to know about the blockchain: Its promise, components, processes, and problems. IEEE Consumer Electronics Magazine, 7(4), 6–14. doi:10.1109/MCE.2018.2816299 Qiu, Y., Gu, H., & Sun, J. (2018). Reversible watermarking algorithm of vector maps based on ECC. Multimedia Tools and Applications, 77(18), 23651–23672. doi:10.100711042-018-5680-7 Qureshi, A, & Megías Jiménez, D. (2020). Applied Sciences, and undefined 2020. BlockchainBased Multimedia Content Protection: Review and Open Challenges. MDPI. Qureshi, A, Megías, D., & Rifa-Pous. (2015). Framework for Preserving Security and Privacy in Peer-to-Peer Content Distribution Systems. Elsevier, 42(3), 1391–1408.


Qureshi, A., & Megías, D. (2019). Blockchain-based P2P multimedia content distribution using collusion-resistant fingerprinting. In Proceedings of the 11th Asia-Pacific Signal and Information Processing Association (APSIPA) Annual Summit and Conference, Lanzhou, China. Qureshi, A., Megías, D., & Rifà-Pous, H. (2014). Secure and Anonymous Multimedia Content Distribution in Peer-to-Peer Networks. In Proceedings of the 6th International Conference on Advances in Multimedia, Nice, France. Qureshi, A., Megías, D., & Rifà-Pous, H. (2016). PSUM: Peer-to-Peer Multimedia Content Distribution using Collusion-Resistant Fingerprinting. Journal of Network and Computer Applications, 66, 180–197. doi:10.1016/j.jnca.2016.03.007 Radanović, I., & Likić, R. (2018). Opportunities for use of blockchain technology in medicine. Applied Health Economics and Health Policy, 16(5), 583–590. doi:10.100740258-018-0412-8 PMID:30022440 Rafrastara, F. A., Prahasiwi, R., Rachmawanto, E. H., & Sari, C. A. (2019, July). Image Steganography using Inverted LSB based on 2 nd, 3 rd and 4 th LSB pattern. In 2019 International Conference on Information and Communications Technology (ICOIACT) (pp. 179-184). IEEE. 10.1109/ICOIACT46704.2019.8938503 Rana, S. K., & Rana, S. K. (2021). Intelligent Amalgamation of Blockchain Technology with Industry 4.0 to Improve Security. In Internet of Things (pp. 165-175). CRC Press. Rana, S. K., Kim, H. C., Pani, S. K., Rana, S. K., Joo, M. I., Rana, A. K., & Aich, S. (2021). Blockchain-based model to improve the performance of the next-generation digital supply chain. Sustainability (Basel), 13(18), 10008. doi:10.3390u131810008 Rana, S. K., & Rana, S. K. (2020). Blockchain based business model for digital assets management in trust less collaborative environment. International Journal of Computing and Digital Systems, 9, 1–11. Rana, S. K., Rana, S. K., Nisar, K., Ag Ibrahim, A. A., Rana, A. K., Goyal, N., & Chawla, P. (2022). Blockchain technology and Artificial Intelligence based decentralized access control model to enable secure interoperability for healthcare. Sustainability (Basel), 14(15), 9471. doi:10.3390u14159471 Rawat, D., & Bhandari, V. (2013). A steganography technique for hiding image in an image using lsb method for 24bit color image. International Journal of Computer Applications, 64(20), 15–19. doi:10.5120/10749-5625 Robertson, N., Cruickshank, P., & Lister, T. (2012). Documents reveal al Qaeda’s plans for seizing cruise ships, carnage in Europe. Cable News Network (CNN), 1. Roehrs, A., Da Costa, C. A., & da Rosa Righi, R. (2017). OmniPHR: A distributed architecture model to integrate personal health records. Journal of Biomedical Informatics, 71, 70–81. doi:10.1016/j.jbi.2017.05.012 PMID:28545835


Roig, B., & Estruch, V. D. (2016). Localised rank‐ordered differences vector filter for suppression of high‐density impulse noise in colour images. IET Image Processing, 10(1), 24–33. doi:10.1049/ iet-ipr.2014.0838 Rossler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., & Nießner, M. (2019). Faceforensics++: Learning to detect manipulated facial images. In Proceedings of the IEEE/ CVF international conference on computer vision (pp. 1-11). Rouhani, S., & Deters, R. (2019, April). Security, performance, and applications of smart contracts: A systematic survey. IEEE Access : Practical Innovations, Open Solutions, 7, 50759–50779. doi:10.1109/ACCESS.2019.2911031 Roy, A., & Laskar, R. H. (2016). Multiclass SVM based adaptive filter for removal of high density impulse noise from color images. Applied Soft Computing, 46, 816–826. doi:10.1016/j. asoc.2015.09.032 Roy, A., Manam, L., & Laskar, R. H. (2018). Region adaptive fuzzy filter: An approach for removal of random-valued impulse noise. IEEE Transactions on Industrial Electronics, 65(9), 7268–7278. doi:10.1109/TIE.2018.2793225 Roy, A., Singha, J., Manam, L., & Laskar, R. H. (2017). Combination of adaptive vector median filter and weighted mean filter for removal of high‐density impulse noise from colour images. IET Image Processing, 11(6), 352–361. doi:10.1049/iet-ipr.2016.0320 Roy, R., & Changder, S. (2016). Quality evaluation of image steganography techniques: A heuristics based approach. International Journal of Security and Its Applications, 10(4), 179–196. doi:10.14257/ijsia.2016.10.4.18 Ruffing, T., Moreno-Sanchez, P., & Kate, A. (2014). CoinShuffle: Practical Decentralized Coin Mixing for Bitcoin. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 8713 LNCS(PART 2), 345–64. Sabella, D., & Vaillant, A., & P., K.-I. C. (2016). Mobile-edge computing architecture: The role of MEC in the Internet of Things. IEEE, 5(4), 84–91. Sadek, M. M., Khalifa, A. S., & Mostafa, M. G. (2015). Video steganography: A comprehensive review. Multimedia Tools and Applications, 74(17), 7063–7094. doi:10.100711042-014-1952-z Sa, P. K., & Majhi, B. (2010). An improved adaptive impulsive noise suppression scheme for digital images. AEÜ. International Journal of Electronics and Communications, 64(4), 322–328. doi:10.1016/j.aeue.2009.01.005 Sasson, E. B., Chiesa, A., Garman, C., Green, M., Miers, I., Tromer, E., & Virza, M. (2014). Zerocash: Decentralized Anonymous Payments from Bitcoin. IEEE. https://ieeexplore.ieee.org/abstract/document/6956581/?casa_ token=KngizekyPsEAAAAA:gDzlzXO82l8nEmf3WzxyVR1gTtJaCNyitIFUC zBtlPpIdRrsNTzBrhg_bED0cXK-qN_zx4UpM4M (January 3, 2023).


Sathya, M., Jeyaselvi, M., Krishnasamy, L., Hazzazi, M. M., Shukla, P. K., Shukla, P. K., & Nuagah, S. J. (2021). A novel, efficient, and secure anomaly detection technique using DWUODBN for IoT-enabled multimedia communication systems. Wireless Communications and Mobile Computing, 2021, 1–12. doi:10.1155/2021/4989410 Schulte, S., De Witte, V., Nachtegael, M., Van der Weken, D., & Kerre, E. E. (2007). Histogrambased fuzzy colour filter for image restoration. Image and Vision Computing, 25(9), 1377–1390. doi:10.1016/j.imavis.2006.10.002 Schulte, S., Valerie, D. W., Nachtegael, M., Dietrich, V. D. W., & Etienne, E. K. (2006). Fuzzy two-step filter for impulse noise reduction from color images. IEEE Transactions on Image Processing, 15(11), 3567–3578. doi:10.1109/TIP.2006.877494 PMID:17076414 Shae, Z., & On, J. T. (2017). On the design of a blockchain platform for clinical trial and precision medicine, (pp. 1972–1980). 37th international conference. IEEE. Shahriar Hazari, S., & Mahmoud, Q. (2020). Improving Transaction Speed and Scalability of Blockchain Systems via Parallel Proof of Work. Future Internet, 12(8), 125. doi:10.3390/fi12080125 Shao, C., Kaur, P., & Kumar, R. (2021). An improved adaptive weighted mean filtering approach for metallographic image processing. Journal of Intelligent Systems, 30(1), 470–478. doi:10.1515/ jisys-2020-0080 Sharafaldin, I., Lashkari, A. H., & Ghorbani, A. A. (2018). Toward generating a new intrusion detection dataset and intrusion traffic characterization. ICISSp, 1, 108–116. doi:10.5220/0006639801080116 Sharma, A. (2021). Future aspects on MEC (Mobile Edge Computing): Offloading Mechanism, 34–39. IEEE. Sharma, N., & Panda, J. (2020). Statistical watermarking approach for 3D mesh using local curvature estimation. IET Information Security, 14(6), 745–753. doi:10.1049/iet-ifs.2019.0601 Sharma, N., & Panda, J. (2022). Assessment of 3D mesh watermarking techniques. Journal of Digital Forensics. Security and Law, 17(2), 2. Shih, F. Y. (2017). Digital Watermarking and Steganography: Fundamentals and Techniques, (2nd ed), 1–270. Taylor and Francis. doi:10.1201/9781315121109 Shorten, C., & Khoshgoftaar, T. M. (2019). A survey on image data augmentation for deep learning. Journal of Big Data, 6(1), 1–48. doi:10.118640537-019-0197-0 Shrestha, B., Halgamuge, M. N., & Treiblmaier, H. (2020). Using Blockchain for Online Multimedia Management: Characteristics of Existing Platforms. In Blockchain and Distributed Ledger Technology Use Cases: Applications and Lessons Learned (pp. 289–303). Springer. doi:10.1007/978-3-030-44337-5_14


Singh, A., Sethi, G., & Kalra, G. S. (2020). Spatially adaptive image denoising via enhanced noise detection method for grayscale and color images. IEEE Access: Practical Innovations, Open Solutions, 8, 112985–113002. doi:10.1109/ACCESS.2020.3003874

Singh, B., & Sharma, D. K. (2022). Predicting image credibility in fake news over social media using multi-modal approach. Neural Computing & Applications, 34(24), 21503–21517. doi:10.1007/s00521-021-06086-4 PMID:34054227

Singh, B., Sur, A., & Mitra, P. (2021). Steganalysis of digital images using deep fractal network. IEEE Transactions on Computational Social Systems, 8(3), 599–606. doi:10.1109/TCSS.2021.3052520

Singh, I., & Verma, O. P. (2021). Impulse noise removal in color image sequences using fuzzy logic. Multimedia Tools and Applications, 80(12), 18279–18300. doi:10.1007/s11042-021-10643-3

Singh, M., Kumar, R., Tandon, D., Sood, P., & Sharma, M. (2020, December). Artificial intelligence and IoT based monitoring of poultry health: A review. In 2020 IEEE International Conference on Communication, Networks and Satellite (Comnetsat) (pp. 50-54). IEEE. 10.1109/Comnetsat50391.2020.9328930

Singh, P., & Singh, K. (2013). Image encryption and decryption using blowfish algorithm in MATLAB. International Journal of Scientific and Engineering Research, 4(7), 150–154.

Siyal, A. A., Junejo, A. Z., Zawish, M., Ahmed, K., Khalil, A., & Soursou, G. (2019). Applications of blockchain technology in medicine and healthcare: Challenges and future perspectives. Cryptography, 3(1), 3. doi:10.3390/cryptography3010003

Smolka, B., & Chydzinski, A. (2005). Fast detection and impulsive noise removal in color images. Real-Time Imaging, 11(5-6), 389–402. doi:10.1016/j.rti.2005.07.003

Smolka, B., & Malinski, L. (2018). Impulsive noise removal in color digital images based on the concept of digital paths. In 2018 13th International Conference on Computer Science & Education (ICCSE) (pp. 1-6). IEEE. 10.1109/ICCSE.2018.8468771

Solak, S. (2020). High embedding capacity data hiding technique based on EMSD and LSB substitution algorithms. IEEE Access: Practical Innovations, Open Solutions, 8, 166513–166524. doi:10.1109/ACCESS.2020.3023197

SpeedGuide. (n.d.). SLC, MLC or TLC NAND for Solid State Drives? SpeedGuide. https://www.speedguide.net/faq/slc-mlc-or-tlc-nand-for-solid-state-drives-406

Sreenivasulu, P., & Chaitanya, N. K. (2014). Removal of Salt and Pepper Noise for Various Images Using Median Filters: A Comparative Study. IUP Journal of Telecommunications, 6(2).

Katzenbeisser, S., & Petitcolas, F. A. P. (2000). Information Hiding Techniques for Steganography and Digital Watermarking. Artech House.


Stoyanova, V., & Zh, T. (2015). Research of the characteristics of a steganography algorithm based on LSB method of embedding information in images. Machines. Technologies. Materials (Basel), 9(7), 65–68.

Subbiah, S., Anbananthen, K. S. M., Thangaraj, S., Kannan, S., & Chelliah, D. (2022). Intrusion detection technique in wireless sensor network using grid search random forest with Boruta feature selection algorithm. Journal of Communications and Networks (Seoul), 24(2), 264–273. doi:10.23919/JCN.2022.000002

Subhedar, M. S., & Mankar, V. H. (2014). Current status and key issues in image steganography: A survey. Computer Science Review, 13, 95–113. doi:10.1016/j.cosrev.2014.09.001

Sun, C., Tang, C., Zhu, X., Li, X., & Wang, L. (2015). An efficient method for salt-and-pepper noise removal based on shearlet transform and noise detection. AEÜ. International Journal of Electronics and Communications, 69(12), 1823–1832. doi:10.1016/j.aeue.2015.09.007

Suthar, H., & Sharma, P. (2022). An Approach to Data Recovery from Solid State Drive: Cyber Forensics. Apple Academic Press. https://www.appleacademicpress.com/advancements-in-cyber-crime-investigation-and-digital-forensics-/1119

Suthar, H., & Sharma, P. (2022). Computer Forensic: Practical Handbook. Notion Press. https://www.amazon.in/Computer-Forensic-Practical-Hepi-Suthar/dp/B0B1DZ45R4

Suthar, H., & Sharma, P. (2022). Guaranteed Data Destruction Strategies and Drive Sanitization: SSD. Research Square. doi:10.21203/rs.3.rs-1896935/v1

Tabares-Soto, R., Ramos-Pollán, R., Isaza, G., Orozco-Arias, S., Ortíz, M. A. B., Arteaga, H. B. A., & Grisales, J. A. A. (2020). Digital media steganalysis. In Digital Media Steganography (pp. 259–293). Academic Press. doi:10.1016/B978-0-12-819438-6.00020-7

Taha, A. Q., & Ibrahim, H. (2020). Reduction of Salt-and-Pepper Noise from Digital Grayscale Image by Using Recursive Switching Adaptive Median Filter. In Intelligent Manufacturing and Mechatronics: Proceedings of the 2nd Symposium on Intelligent Manufacturing and Mechatronics – SympoSIMM 2019 (pp. 32-47). Springer Singapore.

Takeuchi, K. (2009). Novel Co-Design of NAND Flash Memory and NAND Flash Controller Circuits for Sub-30 nm Low-Power High-Speed Solid-State Drives (SSD). IEEE Journal of Solid-State Circuits, 44(4), 1227–1234. doi:10.1109/JSSC.2009.2014027

Talukder, M. S. H., Hasan, M. N., Sultan, R. I., Rahman, M., Sarkar, A. K., & Akter, S. (2022, February). An Enhanced Method for Encrypting Image and Text Data Simultaneously using AES Algorithm and LSB-Based Steganography. In 2022 International Conference on Advancement in Electrical and Electronic Engineering (ICAEEE) (pp. 1-5). IEEE. 10.1109/ICAEEE54957.2022.9836589

Tanaka, M., Shiota, S., & Kiya, H. (2021). A detection method of operated fake-images using robust hashing. Journal of Imaging, 7(8), 134. doi:10.3390/jimaging7080134 PMID:34460770


Tavallaee, M., Bagheri, E., Lu, W., & Ghorbani, A. A. (2009, July). A detailed analysis of the KDD CUP 99 data set. In 2009 IEEE symposium on computational intelligence for security and defense applications. IEEE. Tayan, O., & Alginahi, Y. M. (2014). A review of recent advances on multimedia watermarking security and design implications for digital Quran computing. 2014 International Symposium on Biometrics and Security Technologies (ISBAST) (pp. 304-309). IEEE. 10.1109/ ISBAST.2014.7013139 Templeman, R., & Kapadia, A. (2012). Gangrene: exploring the mortality of ñash memory. In HotSec’12 (pp. 1–1). USENIX Association. Thanekar, S. A., & Pawar, S. S. (2013, December). OCTA (STAR) PVD: A different approach of image steganopgraphy. In 2013 IEEE International Conference on Computational Intelligence and Computing Research (pp. 1-5). IEEE. 10.1109/ICCIC.2013.6724139 Thota, A., Tilak, P., Ahluwalia, S., & Lohia, N. (2018). Fake news detection: A deep learning approach. SMU Data Science Review, 1(3), 10. Toh, K. K. V., Ibrahim, H., & Mahyuddin, M. N. (2008). Salt-and-pepper noise detection and reduction using fuzzy switching median filter. IEEE Transactions on Consumer Electronics, 54(4), 1956–1961. doi:10.1109/TCE.2008.4711258 Toh, K. K. V., & Isa, N. A. M. (2009). Noise adaptive fuzzy switching median filter for saltand-pepper noise reduction. IEEE Signal Processing Letters, 17(3), 281–284. doi:10.1109/ LSP.2009.2038769 Tsai, Y. Y., Chen, J. T., & Chan, C. S. (2014). Exploring LSB Substitution and Pixel-value Differencing for Block-based Adaptive Data Hiding. International Journal of Network Security, 16(5), 363–368. Tsirikolias, K. (2016). Low level image processing and analysis using radius filters. Digital Signal Processing, 50, 72–83. doi:10.1016/j.dsp.2015.12.001 Turkmen, I. (2016). The ANN based detector to remove random-valued impulse noise in images. Journal of Visual Communication and Image Representation, 34, 28–36. doi:10.1016/j. jvcir.2015.10.011 Uddin, M. A., Stranieri, A., Gondal, I., & Balasubramanian, V. (2020). Blockchain leveraged decentralized IoT eHealth framework. Internet of Things, 9, 100159. doi:10.1016/j.iot.2020.100159 Vakani, H., Abdallah, S., Kamel, I., Rabie, T., & Baziyad, M. (2021, July). Dct-in-dct: A novel steganography scheme for enhanced payload extraction quality. In 2021 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT) (pp. 201-206). IEEE. 10.1109/IAICT52856.2021.9532553 Venkatraman, S., & Surendiran, B. (2020). Adaptive hybrid intrusion detection system for crowd sourced multimedia internet of things systems. Multimedia Tools and Applications, 79(5-6), 3993–4010. doi:10.100711042-019-7495-6 290


Villan, M. A., Kuruvilla, A., Paul, J., & Elias, E. P. (2017). Fake image detection using machine learning. IRACST-International Journal of Computer Science and Information Technology & Security (IJCSITS). Cueva, E., Ee, G., Iyer, A., Pereira, A., Roseman, A., & Martinez, D. (2020, October). Detecting fake news on twitter using machine learning models. In 2020 IEEE MIT Undergraduate Research Technology Conference (URTC) (pp. 1-5). IEEE. Vishnu, B., Namboothiri, L. V., & Sajeesh, S. R. (2020, March). Enhanced image steganography with PVD and edge detection. In 2020 Fourth International Conference on Computing Methodologies and Communication (ICCMC) (pp. 827-832). IEEE. 10.1109/ICCMC48092.2020.ICCMC-000153 Voloshynovskiy, S., Pereira, S., Iquise, V., & Pun, T. (2001). Attack modelling: Towards a second generation watermarking benchmark. Signal Processing, 81(6), 1177–1214. doi:10.1016/S01651684(01)00039-1 Wang, F., Xu, J., Wang, X., & Wireless, S. C.-I. T. on. (2017). U. (2017). Joint offloading and computing optimization in wireless powered mobile-edge computing systems. IEEE, 17(3), 1784–1797. Wang, F., Zhou, H., Fang, H., Zhang, W., & Yu, N. (2022). Deep 3D mesh watermarking with self-adaptive robustness. Cybersecurity, 5(1), 1–14. doi:10.118642400-022-00125-w Wang, G., Li, D., Pan, W., & Zang, Z. (2010). Modified switching median filter for impulse noise removal. Signal Processing, 90(12), 3213–3218. doi:10.1016/j.sigpro.2010.05.026 Wang, G., Liu, Y., Xiong, W., & Li, Y. (2018). An improved non-local means filter for color image denoising. Optik (Stuttgart), 173, 157–173. doi:10.1016/j.ijleo.2018.08.013 Wang, G., Liu, Y., & Zhao, T. (2014). A quaternion-based switching filter for colour image denoising. Signal Processing, 102, 216–225. doi:10.1016/j.sigpro.2014.03.027 Wang, G., Zhu, H., & Wang, Y. (2015). Fuzzy decision filter for color image denoising. Optik (Stuttgart), 126(20), 2428–2432. doi:10.1016/j.ijleo.2015.06.005 Wang, K., Lavoué, G., Denis, F., & Baskurt, A. (2007). Three-dimensional meshes watermarking: Review and attack-centric investigation. International Workshop on Information Hiding. Springer. 10.1007/978-3-540-77370-2_4 Wang, K., Lavoué, G., Denis, F., Baskurt, A., & He, X. (2010, June). A benchmark for 3D mesh watermarking. In 2010 Shape Modeling International Conference (pp. 231-235). IEEE. 10.1109/ SMI.2010.33 Wang, P. (2019). Three-Dimensional NAND Flash for Vector-Matrix Multiplication. IEEE Transactions on Very Large Scale Integration (VLSI). Systems, 27, 988–991. Wang, S., Ouyang, L., Yuan, Y., Ni, X., Han, X., & Wang, F. Y. (2019). Blockchain-enabled smart contracts: Architecture, applications, and future trends. IEEE Transactions on Systems, Man, and Cybernetics. Systems, 49(11), 2266–2277. doi:10.1109/TSMC.2019.2895123


Wang, W., Hoang, D. T., Hu, P., Xiong, Z., Niyato, D., Wang, P., Wen, Y., & Kim, D. I. (2019, January). A survey on consensus mechanisms and mining strategy management in blockchain networks. IEEE Access : Practical Innovations, Open Solutions, 7, 22328–22370. doi:10.1109/ ACCESS.2019.2896108 Wang, X., & Du, S. (2011). A Non-blind Robust Watermarking Scheme for 3D Models in Spatial Domain. In Electrical Engineering and Control (pp. 621–628). Springer. doi:10.1007/978-3642-21765-4_76 Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600–612. doi:10.1109/TIP.2003.819861 PMID:15376593 Welekar, R., Karandikar, A., & Tirpude, S. (2020, May). Emotion Categorization Using Twitter. In Proceedings of International Journal (Vol. 9, No. 3). doi:10.30534/ijatcse/2020/32932020 Weng, C. Y., Huang, C. T., & Kao, H. W. (2017, August). DCT-based compressed image with reversibility using modified quantization. In International Conference on Intelligent Information Hiding and Multimedia Signal Processing (pp. 214-221). Springer. Wen, Z., Yang, K., Liu, X., Li, S., & Access, J. Z.-I. (2018). U. (2018). Joint offloading and computing design in wireless powered mobile-edge computing systems with full-duplex relaying. IEEE, 6, 72786–72795. Wirth, C., & Kolain, M. (2018). Privacy by blockchain design: A blockchainenabled GDPRcompliant approach for handling personal data,’’ in Proc. ERCIM Blockchain Workshop, Eur. Soc. Socially Embedded Technol (pp. 1–7) . EUSSET. Wubet, W. M. (2020). The deepfake challenges and deepfake video detection. International Journal of Innovative Technology and Exploring Engineering, 9(6), 9. doi:10.35940/ijitee.E2779.049620 Wu, D. C., & Tsai, W. H. (2003). A steganographic method for images by pixel-value differencing. Pattern Recognition Letters, 24(9-10), 1613–1626. doi:10.1016/S0167-8655(02)00402-6 Wu, Y., Chen, X., Shi, J., Ni, K., Qian, L., Huang, L., & Sensors, K. Z. (2018). U. (2018). Optimal computational power allocation in multi-access mobile edge computing for blockchain. Mdpi. Sensors (Basel), 18(10), 3472. doi:10.339018103472 PMID:30326649 Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., & Philip, S. Y. (2020). A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1), 4–24. doi:10.1109/TNNLS.2020.2978386 PMID:32217482 Wu, Z., Zheng, H., Zhang, L., & Li, X. (2019). Privacy-friendly Blockchain Based Data Trading and Tracking. In Proceedings of the 5th International Conference on Big Data Computing and Communications, QingDao, China. 10.1109/BIGCOM.2019.00040 Xiang, S., & Huang, J. (2007). Robust audio watermarking against the D/A and A/D conversions. arXiv preprint arXiv:0707.0397.


Xiao, L., Li, C., Wu, Z., & Wang, T. (2016). An enhancement method for X-ray image via fuzzy noise removal and homomorphic filtering. Neurocomputing, 195, 56–64. doi:10.1016/j. neucom.2015.08.113 Xiao, Y., Zeng, T., Yu, J., & Ng, M. K. (2011). Restoration of images corrupted by mixed Gaussianimpulse noise via l1–l0 minimization. Pattern Recognition, 44(8), 1708–1720. doi:10.1016/j. patcog.2011.02.002 Xia, Q. I., Sifah, E. B., Asamoah, K. O., Gao, J., Du, X., & Guizani, M. (2017). MeDShare: Trust-less medical data sharing among cloud service providers via blockchain. IEEE Access : Practical Innovations, Open Solutions, 5, 14757–14767. doi:10.1109/ACCESS.2017.2730843 Xie, J. (2019). A Survey of Blockchain Technology Applied to Smart Cities: Research Issues and Challenges. IEEE, 21(3), 2794–2830. Xiong, Z., Zhang, Y., & Niyato, D. (2018). When mobile blockchain meets edge computing. IEEE, 56(8), 33–39. Xu, J., Jia, Y., Shi, Z., & Pang, K. (2016). An improved anisotropic diffusion filter with semi-adaptive threshold for edge preservation. Signal Processing, 119, 80–91. doi:10.1016/j.sigpro.2015.07.017 Xu, J., Wang, L., & Shi, Z. (2014). A switching weighted vector median filter based on edge detection. Signal Processing, 98, 359–369. doi:10.1016/j.sigpro.2013.11.035 Yaga, D., Mell, P., Roby, N., & Scarfone, K. (2019). Blockchain Technology Overview. NIST. Yánez, W., Mahmud, R., Bahsoon, R., Zhang, Y., & Buyya, R. (2020). Data allocation mechanism for Internet-of-Things systems with blockchain. IEEE Internet of Things Journal, 7(4), 3509–3522. doi:10.1109/JIOT.2020.2972776 Yang, J., Lu, X., & Chen, W. (2022). A robust scheme for copy detection of 3D object point clouds. Neurocomputing, 510, 181–192. doi:10.1016/j.neucom.2022.09.008 Yang, R., Yu, F., & Si, P. (2019). Integrated blockchain and edge computing systems: A survey, some research issues and challenges. IEEE, 21(2), 1508–1532. Yin, K., Pan, Z., Shi, J., & Zhang, D. (2001). Robust mesh watermarking based on multiresolution processing. Computers & Graphics, 25(3), 409–420. doi:10.1016/S0097-8493(01)00065-6 Yin, L., Yang, R., Gabbouj, M., & Neuvo, Y. (1996). Weighted median filters: A tutorial. IEEE Transactions on Circuits and Systems. 2, Analog and Digital Signal Processing, 43(3), 157–192. doi:10.1109/82.486465 You, W., Zhang, H., & Zhao, X. (2020). A Siamese CNN for image steganalysis. IEEE Transactions on Information Forensics and Security, 16, 291–306. doi:10.1109/TIFS.2020.3013204 Yu, P., & Lee, C. S. (1993). Adaptive fuzzy median filter. In International Symposium on Artificial Neural Networks (pp. 25-34).


Yu, Z., Ip, H. H., & Kwok, L. F. (2003). A robust watermarking scheme for 3D triangular mesh models. Pattern Recognition, 36(11), 2603–2614. doi:10.1016/S0031-3203(03)00086-4 Zafeiriou, S., Tefas, A., & Pitas, I. (2005). Blind robust watermarking schemes for copyright protection of 3D mesh objects. IEEE Transactions on Visualization and Computer Graphics, 11(5), 596–607. doi:10.1109/TVCG.2005.71 PMID:16144256 Zhang, J., Dong, B., & Philip, S. Y. (2020, April). Fakedetector: Effective fake news detection with deep diffusive neural network. In 2020 IEEE 36th international conference on data engineering (ICDE) (pp. 1826-1829). IEEE. Zhang, P., Schmidt, D. C., White, J., & Lenz, G. (2018). “Blockchain technology use cases in healthcare. In Advances in Computers (Vol. 111). Elsevier. Zhang, X., & Poslad, S. (2018, May). Blockchain support for flexible queries with granular access control to electronic medical records (EMR). In 2018 IEEE International conference on communications (ICC) (pp. 1-6). IEEE. Zhang, X., Karaman, S., & Chang, S. F. (2019, December). Detecting and simulating artifacts in gan fake images. In 2019 IEEE international workshop on information forensics and security (WIFS) (pp. 1-6). IEEE. Marra, F., Gragnaniello, D., Cozzolino, D., & Verdoliva, L. (2018, April). Detection of gan-generated fake images over social networks. In 2018 IEEE conference on multimedia information processing and retrieval (MIPR) (pp. 384-389). IEEE. Nataraj, L., Mohammed, T. M., Chandrasekaran, S., Flenner, A., Bappy, J. H., Roy-Chowdhury, A. K., & Manjunath, B. S. (2019). Detecting GAN generated fake images using co-occurrence matrices. arXiv preprint arXiv:1903.06836. Chen, H. S., Zhang, K., Hu, S., You, S., & Kuo, C. C. J. (2021). Geo-defakehop: High-performance geographic fake image detection. arXiv preprint arXiv:2110.09795. Zhang, Y., Luo, X., Yang, C., Ye, D., & Liu, F. (2015, August). A JPEG-compression resistant adaptive steganography based on relative relationship between DCT coefficients. In 2015 10th International Conference on Availability, Reliability and Security (pp. 461-466). IEEE. 10.1109/ ARES.2015.53 Zhang, J., Zhao, X., He, X., & Zhang, H. (2021). Improving the robustness of JPEG steganography with robustness cost. IEEE Signal Processing Letters, 29, 164–168. doi:10.1109/LSP.2021.3129419 Zhang, K., Zhu, Y., & Maharjan, S., Network, Y. Z.-I. (2019). Edge intelligence and blockchain empowered 5G beyond for the industrial Internet of Things. IEEE, 33(5), 12–19. Zhang, L., Dong, W., Zhang, D., & Shi, G. (2010). Two-stage image denoising by principal component analysis with local pixel grouping. Pattern Recognition, 43(4), 1531–1549. doi:10.1016/j.patcog.2009.09.023 Zhang, M., Liu, Y., Li, G., Qin, B., & Liu, Q. (2020). Iterative scheme-inspired network for impulse noise removal. Pattern Analysis & Applications, 23(1), 135–145. doi:10.100710044-018-0762-8


Zhang, P., White, J., & Schmidt, D. C. (2017). Design of block chain-Based apps using familiar software patterns to address interoperability challenges in healthcare. In Proc. 24th Pattern Lang. Program. Conf., Ottawa, ON, Canada, . Zhan, Y. Z., Li, Y. T., Wang, X. Y., & Qian, Y. (2014). A blind watermarking algorithm for 3D mesh models based on vertex curvature. Journal of Zhejiang University SCIENCE C, 15(5), 351–362. doi:10.1631/jzus.C1300306 Zhao, F., Ma, R. C., & Ma, J. Q. (2012). An Algorithm for Salt and Pepper Noise Removal Based on Information Entropy. [). Trans Tech Publications Ltd.]. Applied Mechanics and Materials, 220, 2273–2279. doi:10.4028/www.scientific.net/AMM.220-223.2273 Zhao, J., Zong, T., Xiang, Y., Gao, L., & Beliakov, G. (2020). Robust Blockchain-Based CrossPlatform Audio Copyright Protection System Using Content-Based Fingerprint. In Web Information Systems Engineering (pp. 201–212). Springer. Zhao, X., Huang, G., Jiang, J., Gao, L., & Li, M. (2021). Research on lightweight anomaly detection of multimedia traffic in edge computing. Computers & Security, 111, 102463. doi:10.1016/j. cose.2021.102463 Zheng, W., Zheng, Z., Chen, X., Dai, K., Li, P., & Chen, R. (2019). Nutbaas: A blockchain-asa-service platform. IEEE Access : Practical Innovations, Open Solutions, 7, 134422–134433. doi:10.1109/ACCESS.2019.2941905 Zheng, Z., Xie, S., Dai, H. N., Chen, X., & Wang, H. (2018). Blockchain challenges and opportunities: A survey. International Journal of Web and Grid Services, 14(4), 352–375. doi:10.1504/IJWGS.2018.095647 Zhou, L., Wang, L., & Sun, Y. (2018). MIStore: A blockchain-based medical insurance storage system. Journal of Medical Systems, 42(8), 149. doi:10.100710916-018-0996-4 PMID:29968202 Zhuang, Y., Sheets, L. R., Chen, Y. W., Shae, Z. Y., Tsai, J. J., & Shyu, C. R. (2020). A patientcentric health information exchange framework using blockchain technology. IEEE Journal of Biomedical and Health Informatics, 24(8), 2169–2176. doi:10.1109/JBHI.2020.2993072 PMID:32396110 Zhuo, L., Tan, S., Zeng, J., & Lit, B. (2018, November). Fake colorized image detection with channel-wise convolution based deep-learning framework. In 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) (pp. 733736). IEEE. 10.23919/APSIPA.2018.8659761 Zhu, W., Luo, C., Wang, J., & Li, S. (2011). Multimedia cloud computing. IEEE Signal Processing Magazine, 28(3), 59–69. doi:10.1109/MSP.2011.940269 Zhu, Z., Jin, L., Song, E., & Hung, C. C. (2018). Quaternion switching vector median filter based on local reachability density. IEEE Signal Processing Letters, 25(6), 843–847. doi:10.1109/ LSP.2018.2808343


About the Contributors

Sumit Kumar Mahana is pursuing his Ph.D. at the National Institute of Technology, Kurukshetra. He completed his B.Tech. (Computer Engineering) and M.Tech. (Software Engineering) at Kurukshetra University, Kurukshetra. He also qualified the National Eligibility Test (NET) conducted by the Central Board of Secondary Education (CBSE) in 2017. He has been in the teaching profession for more than 12 years and has several research publications to his credit. His research interests include image processing, cryptography, and multimedia security.

Surjit Singh received the Ph.D. degree in computer engineering from the National Institute of Technology, Kurukshetra, Haryana, India. He is currently an Assistant Professor in the Computer Science and Engineering Department, Thapar Institute of Engineering and Technology, Patiala, India. Before joining TIET, Patiala, he worked as a faculty member at NIT Kurukshetra. He has four books to his credit and has published many research papers in journals of high repute, including IEEE Transactions, Elsevier, Springer, Wiley, and Taylor & Francis. He has done extensive research work in the areas of WSN, SDN, blockchain technologies, the Internet of Things, and cloud computing.

***

Ashpreet received her B.Tech. degree in Information Technology from Kurukshetra University, Kurukshetra, Haryana, India in 2006. From August 2008 to September 2013, she was a Lecturer at Shri Krishan Institute of Engineering & Technology, Kurukshetra, India. She completed her M.Tech. in Computer Science & Engineering from Kurukshetra University, Kurukshetra, India in 2014. She also worked as a Teaching Assistant in the Department of Computer Applications at the National Institute of Technology, Kurukshetra, and has completed her Ph.D. in Computer Engineering with a specialization in image processing from the National Institute of Technology, Kurukshetra, India. Her research interests include image processing, soft computing, deep learning, and machine learning.


Sakshi Chhabra received her Bachelor's degree in Computer Applications from Punjab University, Chandigarh, in 2012 and her Master's degree in Computer Applications in 2015 (India). She completed her Ph.D. in 2020 at the National Institute of Technology, Kurukshetra, in the Department of Computer Applications, where she is currently working as an Assistant Professor. Her main research interests include cloud computing, load balancing, and information security. She has published research papers in SCI- and Scopus-indexed journals and in international conferences.

Ahmed Grati is a researcher specialized in image processing, digital watermarking and data hiding, and multimedia authentication.

Imen Kallel is a research member at ESSE (Advanced Electronic Systems and Sustainable Energy), ENET'COM, Tunisia. Her research interests include image processing, digital watermarking and data hiding, multimedia authentication, human-machine interaction (HMI), computer vision, and environmental health and agriculture.

Isha Kansal is working as an Assistant Professor at Chitkara University, Punjab.

Priyanka Sharma is currently working as Professor (IT) and Dean (Research & Publications) at Rashtriya Raksha University. She has also worked as I/C Director (Research & Development), Raksha Shakti University, and as Head of the IT and TC Department and Director of SITAICS. She has more than 22 years of teaching, administrative, and research experience at the PG level. Her academic qualifications include a Master in Computer Applications, a Ph.D. and D.Sc. in Computer Science, and a certificate program in computer languages and cyber law from Symbiosis University. More than 10 Ph.D.s have been awarded under her supervision and 5 are in progress at present. She is an empaneled member at many universities, such as GTU, BISAG, DDIT, MSU, and others, and has assessed Ph.D. theses and synopses for universities such as Laurentian University (Canada), KIIT, GTU, Banasthali Vidhyapith, and C. V. Raman Global University. Her research interests are in cyber security, artificial intelligence, and cyber law.

Ashutosh Kumar Singh is working as a Professor and Head at the National Institute of Technology, Kurukshetra, India. He has more than 15 years of research and teaching experience at various universities in India, the UK, and Malaysia. Prior to this appointment, he worked as an Associate Professor and Head of the Department of Electrical and Computer Engineering in the School of Engineering, Curtin University Australia (offshore campus, Malaysia); as Sr. Lecturer and Deputy Dean (Research and Graduate Studies) in the Faculty of Information Technology, University Tun Abdul Razak, Kuala Lumpur, Malaysia; as a Post-Doc RA in the Department of Computer Science, University of Bristol; in the Faculty of Information Science and Technology, Multimedia University, Malaysia; and as Sr. Lecturer in the Electronics and Communication Department at NIST, India. He obtained his Ph.D. degree in Electronics Engineering from the Indian Institute of Technology (BHU), India, completed a Post-Doc in the Department of Computer Science, University of Bristol, UK, and is a Chartered Engineer from the UK. His research areas include web technology, big data, verification, synthesis, and the design and testing of digital circuits. He has published more than 170 research papers in journals, conferences, and news magazines in these areas.

Divya Singla is currently pursuing her Ph.D. in the Department of Computer Science & Engineering at Deenbandhu Chhotu Ram University of Science and Technology, Murthal, Sonipat. She has more than 5 years of teaching and research experience. She received both her Bachelor of Engineering and Master of Engineering degrees from Kurukshetra University. She has published many research papers in reputed journals and conferences and delivered many expert lectures. Her areas of expertise are image processing and biometric authentication.

Hepi Suthar is currently a Ph.D. research scholar in the School of Information Technology, AI & Cyber Security at Rashtriya Raksha University, Gujarat. She received her Diploma and B.E. degree in Computer Science and Engineering from Government Engineering College, Gujarat, and her M.Tech. in Cyber Security & Incident Response from Gujarat Forensic Science University, Gujarat. She worked in a GTU-affiliated college for nearly five years and at universities such as Marwadi University, Rajkot, Gujarat. Her current research interests are cyber security, digital forensics, malware analysis and reverse engineering, the dark and deep web, threat management, and data privacy compliance. She has published a comparative study of SSD, HDD, and SSHD (2019), a book chapter on an approach to data recovery from SSDs (cyber forensics), and a book on computer forensics (2022).

Amina Taktak is a researcher specialized in image processing, digital watermarking and data hiding, and multimedia authentication.

Neetu Verma is currently working as an Assistant Professor in the Department of Computer Science & Engineering at Deenbandhu Chhotu Ram University of Science and Technology, Murthal, Sonipat. She has more than 12 years of teaching and research experience. She received both her Bachelor of Engineering and Master of Engineering degrees from MDU Rohtak and completed her Ph.D. at DCRUST, Murthal. She has published many research papers in reputed journals and conferences and delivered many expert lectures. Her areas of expertise are wireless sensor networks, data science, and compiler design. She has successfully guided numerous M.Tech. scholars.


Index

3D Object Watermarking 1-2, 9-10, 33-34

A
AI 176, 188, 218, 239
Attacks 1-2, 6-12, 14-16, 24-25, 27, 29-33, 38-40, 53-55, 78, 134, 137, 166, 209-210, 247, 260-261

B
Blockchain 118-122, 124, 127-131, 133-135, 137-142, 145-154, 156-157, 159-162, 165-172, 197-205, 212-229, 232-240

C
Classification 7, 9-10, 40-41, 54, 120, 122-123, 179, 187-188, 191, 201, 218, 225-227, 247-249, 260
Cloud Computing 38-39, 41, 54-55, 171, 219-220, 229
Color Image 79-80, 82-83, 107-111, 114-116, 264
Computation 6, 95, 105, 108, 132, 171, 211, 218, 221, 229, 232, 252, 258
Computer Forensic 75, 77-78
Content Protection 118, 121-124, 131, 135-138, 197-198, 206-207, 211-216
Controller 59, 77-78, 199
Copyright 1-2, 5, 8, 11, 20, 37, 142, 197-198, 201, 206, 208, 210-216
Copyright Protection 1-2, 5, 8, 20, 37, 142, 197, 201, 206, 210-216
Cryptocurrency 118, 122, 127-131, 133, 155-156, 197, 203-204, 226
Cryptography 32, 34, 124, 168, 172, 203, 207-208, 223-224, 227, 234, 241-245, 251, 261, 264
Cyber Forensic 56, 60, 75
Cyber Security 75, 78, 195

D
DDoS 38-42, 46, 53-55, 165, 213
Decentralized Content Protection 214
Denoising 79-83, 85-86, 90, 95, 108, 110, 113-117
Digital Content 126, 139-140, 144, 198, 208-210
Digital Watermarking 1-2, 5, 7, 35, 118, 126, 139, 141, 208-210, 212, 215, 217

E
Edge Computing 40, 55, 218-220, 224, 229-230, 232, 236-240
E-Healthcare 144, 146, 150, 154-155, 168
Encryption 7, 34, 36, 123-126, 137, 139, 147-148, 208-212, 214, 221, 227-229, 243-245, 251
Ensemble 38-39, 42-43, 45, 50, 53, 109, 180, 193
Extreme Learning Machine 38, 40, 44, 54

F
Fake Image/Video 173-174
Filter 79, 81, 83-86, 108-117, 178
Fingerprint 126, 133, 142, 210, 212, 220
Forensics 36, 59-60, 75-78, 112, 173-175, 178, 186, 193, 196, 243, 260, 262-263, 266
Forgery 147, 174, 178, 193

I
Interactive Environment 144
IoT 39, 54, 119, 145, 147, 154-156, 168-169, 172, 195, 220, 224, 233, 238-239

M
Machine Learning 39-40, 53-55, 109, 174, 178-179, 186, 192-195
Median 39, 79, 81, 83-86, 108-117, 129
Morphed Images 173-174
Multimedia 2, 11, 34-35, 38-41, 53-55, 110, 114-115, 118-119, 121-123, 126-127, 131-133, 135-142, 173-175, 194, 197-198, 201, 203, 206-216, 248, 262-266
Multimedia Content Protection 118, 122, 131, 135-136, 197, 212, 214, 216

N
NAND Flash Memory 56, 58-59, 76-78
Network 11, 34, 39-41, 53-55, 85-86, 112-114, 117, 119-121, 127, 129, 133-135, 140, 142, 148-149, 151, 153-158, 162, 166-168, 177, 179-180, 189-191, 193-196, 198-204, 206, 212-213, 216, 219-222, 224-238, 240, 244, 248, 262, 265
Noise 5, 7-10, 19, 23-28, 31-32, 79-86, 92-100, 102-103, 105-106, 108-117, 209, 246, 251, 255
Normalization 12, 17, 19, 24, 32, 35, 42, 86

R
Redundancy 2, 18, 33, 220
Robustness 1, 5-6, 8-14, 16, 18-19, 24-28, 30-33, 36, 126, 137, 141, 179, 209-210, 212, 214, 247, 251, 254, 261, 266

S
SATA Interface 57-58
Security 1, 5, 7-8, 10, 34-36, 55, 76-78, 118-119, 121-125, 127, 129, 133-134, 137, 140, 142, 145, 152, 161, 165, 167-168, 170-171, 193, 195-196, 198, 200, 203-204, 206, 208-209, 211-216, 218, 220, 226, 228, 232, 237-239, 243, 247, 251, 258, 260, 262-263, 265-266
Solid State Drive 57, 60, 62-63, 66, 68, 74-75, 77
Spatial Domain 1, 5, 8-9, 11, 33, 37, 83, 241, 249, 254, 257, 261
SSD 56-57, 59-64, 66, 68, 70-78
Steganalysis 241, 247, 260-266
Steganography 217, 241-243, 245-249, 251, 254-256, 258, 260-266

T
Transform Domain 248-249, 254, 261
Triangular Polygon Meshes 1