Beamforming in Medical Ultrasound Imaging (ISBN 9819975271, 9789819975273)

This book deals with the concept of medical ultrasound imaging and discusses array signal processing in ultrasound.


English Pages [368] Year 2023


Table of contents:
Preface
Contents
1 Principles of Ultrasound Imaging
1.1 Introduction
1.2 Wave Equation
1.3 Impedance, Reflection, and Transmission
1.4 Scattering and Attenuation
1.5 Speckle
1.6 Time Gain Compensation
1.7 Transducer
1.7.1 Array of Transducers
1.7.2 Structural Characteristics of the Linear Array
1.8 Frame Rate
1.9 Image Display Steps
1.10 Scanning Modes
1.10.1 A-Mode Scanning
1.10.2 B-Mode Scanning
1.10.3 M-Mode Scanning
1.11 Different B-Mode Imaging Systems
1.11.1 Conventional B-Mode Imaging
1.11.2 Synthetic Transmit Aperture
1.11.3 Plane Wave Imaging
1.11.4 Diverging Wave Imaging
1.12 Evaluation Metrics
1.12.1 Axial/Lateral Resolution
1.12.2 Full-Width at Half Maximum
1.12.3 Contrast Ratio
1.12.4 Contrast-to-Noise Ratio
1.12.5 Speckle Signal to Noise Ratio
1.12.6 Generalized Contrast-to-Noise Ratio
1.13 Conclusion
References
2 Array Transducers and Beamforming
2.1 Beam Pattern
2.1.1 Beam Pattern Parameters
2.2 Focusing
2.3 Beamforming
2.3.1 Focusing in Transmission/Reception
2.3.2 Non-adaptive Beamforming
2.3.3 Adaptive Beamforming
2.4 Conclusion
References
3 Beamforming Algorithms in Medical Ultrasound Imaging: State-of-the-Art in Research
3.1 Introduction
3.2 Limitations and Requirements of Applying Beamforming Algorithms to Medical Ultrasound Imaging
3.3 Methods to Overcome the Specific Limitations of the Medical Ultrasound Data
3.3.1 Overcoming the Limited Data Restriction
3.3.2 Overcoming the Near-Field Restriction
3.3.3 Overcoming the Broadband Restriction
3.4 Resolution Enhancement Using Minimum Variance Beamforming
3.5 Contrast Improvement Algorithms
3.5.1 Coherence-Based Beamformers
3.5.2 Contrast Improvement in Minimum Variance Beamforming
3.6 Robustness Improvement of Adaptive Beamformers
3.6.1 Minimum Variance Parameters: Diagonal Loading Factor and Subarray Length
3.6.2 Forward-Backward Minimum Variance Algorithm
3.6.3 Amplitude and Phase Estimation Algorithm
3.6.4 Modified Amplitude and Phase Estimation Algorithm
3.6.5 Amplitude and Phase Estimation Plus Wiener Post-Filter Algorithm
3.7 Low-Complexity Minimum Variance Beamforming Algorithms
3.7.1 Low-Complexity Adaptive Beamformer Using Pre-determined Weights
3.7.2 Beamspace Adaptive Beamforming
3.7.3 Beamspace Method Based on Discrete Cosine Transform
3.7.4 Beamspace Method Based on Modified Discrete Cosine Transform
3.7.5 Minimum Variance Beamformer Based on Principal Component Analysis
3.7.6 Minimum Variance Beamformer Based on Legendre Polynomials
3.7.7 Decimated Minimum Variance Beamformer
3.7.8 Minimum Variance Method Based on QR Decomposition
3.7.9 Subspace Minimum Variance Beamformer
3.7.10 Low-Complexity Minimum Variance Beamformer Using Structured Covariance Matrix
3.7.11 Iterative Minimum Variance Algorithm
3.7.12 Dominant Mode Rejection-Based Minimum Variance Algorithm
3.7.13 Low-Complexity Minimum Variance Algorithm Based on Generalized Sidelobe Canceler
3.8 User Parameter-Free Minimum Variance-Based Algorithms
3.8.1 User-Independent Techniques to Determine the Diagonal Loading Factor
3.8.2 User-Independent Techniques to Determine the Subarray Length
3.8.3 User-Independent Techniques for Temporal Averaging
3.8.4 User-Independent Technique to Determine the Threshold Value of the Eigenspace-Based Minimum Variance Beamformer
3.9 Conclusion
References
4 Phase Aberration Correction
4.1 Introduction
4.2 Phase Aberration Correction Based on Phase Term/Time Delay Estimation
4.2.1 Nearest Neighbor Cross-Correlation Method
4.2.2 Beamsum Correlation Method
4.2.3 Nearest Neighbor Cross-Correlation Combined with Beamsum Correlation Method
4.2.4 Filtered Normalized Cross-Correlation Method
4.2.5 Speckle Brightness Maximization Method
4.2.6 Sum of Absolute Differences Minimization Method
4.2.7 Time-Reversal Method
4.2.8 Adaptive Parallel Receive Compensation Method
4.2.9 Modified Beamformer Output Method
4.2.10 Multi-lag Cross-Correlation Method
4.2.11 Common-Midpoint Cross-Correlation Method
4.2.12 Adaptive Scaled Covariance Matrix Method
4.2.13 Continuous Estimation Methods
4.2.14 Phase Aberration Correction Method Combined with the Minimum Variance Beamformer
4.2.15 Coherent Multi-transducer Imaging Method
4.2.16 Phase Aberration Correction Using Blind Calibration of Array
4.3 Phase Aberration Correction Based on Sound Velocity Estimation
4.3.1 Image-Based Algorithm
4.3.2 An Efficient Sound Velocity Estimation Algorithm
4.3.3 Minimum Average Phase Variance Method
4.3.4 Sound Velocity Estimation in Dual-Layered Medium Using Deconvolution-Based Method
4.3.5 Local Sound Speed Estimation Using Average Sound Velocity
4.3.6 Spatial Domain Reconstruction Algorithm
4.3.7 Tomographic Sound Velocity Reconstruction and Estimation Based on Eikonal Equation
4.3.8 Local Sound Velocity Estimation Based on Pulse-Echo Imaging
4.4 Phase Aberration Correction Based on Image Reconstruction Algorithm
4.4.1 Dual Apodization with Cross-Correlation Method
4.4.2 Singular Value Decomposition Beamformer
4.5 Conclusion
References
5 Harmonic Imaging and Beamforming
5.1 Introduction
5.2 Harmonic Signal Extraction Methods
5.2.1 Pulse Inversion Algorithm
5.2.2 Amplitude Modulation Algorithm
5.2.3 Harmonic Beamforming
5.2.4 Multitone Nonlinear Coding
5.2.5 Coded Excitation Method Using Chirp Pulse
5.2.6 Coded Excitation Method Using Multiple Chirp Pulses
5.2.7 Coded Excitation Method Using Dual-Frequency Chirp Pulses
5.2.8 Excitation Method Using Mixed-Frequency Sine Pulses
5.2.9 Total Least Square Filtering Method
5.2.10 Adaptive Pulse Excitation Selection Method
5.3 Image Construction Techniques Based on Harmonic Imaging
5.3.1 Harmonic Imaging Combined with Delay-Multiply-and-Sum Algorithm
5.3.2 Harmonic Imaging Combined with Fundamental Imaging Based on Minimum Variance Algorithm
5.3.3 Harmonic Imaging Combined with a Phase Aberration Correction Algorithm
5.3.4 Harmonic Imaging Combined with Synthetic Aperture Sequential Beamforming
5.3.5 Harmonic Imaging Combined with Multiplane-Wave Compounding Method
5.3.6 Harmonic Imaging Combined with Attenuation Coefficient Estimation Method
5.3.7 Harmonic Imaging Using Dual-Frequency Transducers
5.4 Conclusion
References
6 Ultrafast and Synthetic Aperture Ultrasound Imaging
6.1 Ultrafast Imaging
6.1.1 Compounded Plane Wave Imaging
6.1.2 Focusing
6.1.3 Compounding
6.1.4 Introducing the PICMUS Datasets in CPWI
6.1.5 Image Quality Improvement in Compounded Plane Wave Imaging
6.1.6 Frame Rate Improvement in Compounded Plane Wave Imaging
6.2 Synthetic Aperture Ultrasound Imaging
6.2.1 Synthetic Transmit Aperture
6.2.2 Recursive Synthetic Transmit Aperture
6.2.3 Sparse Synthetic Transmit Aperture
6.2.4 Synthetic Receive Aperture
6.2.5 Virtual Source
6.3 Image Reconstruction Algorithms in Synthetic Aperture Imaging
6.3.1 Wavenumber Algorithm
6.3.2 Range-Doppler Algorithm
6.4 Compressive Sensing and Beamforming
6.4.1 Compression
6.4.2 Reconstruction
6.4.3 Compressive Sensing in Plane Wave Imaging
6.5 Conclusion
References
7 Ongoing Research Areas in Ultrasound Beamforming
7.1 Point Target Detection Using Ultrasound Imaging
7.1.1 Point Detection Based on Bayesian Information Criterion
7.1.2 Point Detection Based on Coherence Estimation and Covariance Matrix Analysis
7.1.3 Point Detection Based on Wavelet Coefficient
7.1.4 Point Detection Using Multilook
7.1.5 Point Detection Based on Phase Coherence Filtering
7.1.6 Other Point Detection Techniques
7.2 Deep Learning in Medical Ultrasound Imaging
7.2.1 Principles of Deep Neural Network
7.2.2 Challenges and Limitations
7.2.3 Applications of Deep Neural Network in Image Generation Process
7.2.4 Deep Neural Network-Based Beamforming
7.3 Super-Resolution Ultrasound Imaging
7.3.1 Principles of Super-Resolution Ultrasound Imaging
7.3.2 Performance Improvement of Super-Resolution Ultrasound Imaging
7.4 Conclusion
References


Springer Tracts in Electrical and Electronics Engineering

Babak Mohammadzadeh Asl Roya Paridar

Beamforming in Medical Ultrasound Imaging

Springer Tracts in Electrical and Electronics Engineering Series Editors Brajesh Kumar Kaushik, Department of Electronics and Communication Engineering, Indian Institute of Technology Roorkee, Roorkee, Uttarakhand, India Mohan Lal Kolhe, Faculty of Engineering and Sciences, University of Agder, Kristiansand, Norway

Springer Tracts in Electrical and Electronics Engineering (STEEE) publishes the latest developments in Electrical and Electronics Engineering - quickly, informally and with high quality. The intent is to cover all the main branches of electrical and electronics engineering, both theoretical and applied, including:

• Signal, Speech and Image Processing
• Speech and Audio Processing
• Image Processing
• Human-Machine Interfaces
• Digital and Analog Signal Processing
• Microwaves, RF Engineering and Optical Communications
• Electronics and Microelectronics, Instrumentation
• Electronic Circuits and Systems
• Embedded Systems
• Electronics Design and Verification
• Cyber-Physical Systems
• Electrical Power Engineering
• Power Electronics
• Photovoltaics
• Energy Grids and Networks
• Electrical Machines
• Control, Robotics, Automation
• Robotic Engineering
• Mechatronics
• Control and Systems Theory
• Automation
• Communications Engineering, Networks
• Wireless and Mobile Communication
• Internet of Things
• Computer Networks

Within the scope of the series are monographs, professional books or graduate textbooks, edited volumes as well as outstanding PhD theses and books purposely devoted to support education in electrical and electronics engineering at graduate and post-graduate levels.

Review Process: The proposal for each volume is reviewed by the main editor and/or the advisory board. The books of this series are reviewed in a single blind peer review process.

The Ethics Statement for this series can be found in the Springer standard guidelines here: https://www.springer.com/us/authors-editors/journal-author/journal-author-helpdesk/before-you-start/before-you-start/1330#c14214.

Babak Mohammadzadeh Asl · Roya Paridar

Beamforming in Medical Ultrasound Imaging

Babak Mohammadzadeh Asl Department of Biomedical Engineering Tarbiat Modares University Tehran, Iran

Roya Paridar Department of Biomedical Engineering Tarbiat Modares University Tehran, Iran

ISSN 2731-4200  ISSN 2731-4219 (electronic)
Springer Tracts in Electrical and Electronics Engineering
ISBN 978-981-99-7527-3  ISBN 978-981-99-7528-0 (eBook)
https://doi.org/10.1007/978-981-99-7528-0

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Paper in this product is recyclable.

The first author dedicates this book to Maryam, Kimia, and Parsa. The second author dedicates this book to her parents and sibling.

Preface

Achieving images of the internal parts of the human body without any invasive operation has been made possible by the ultrasound imaging technique. In the last 50 years, tremendous progress has been made in the field of medical ultrasound imaging, and it has become a popular method. Today, medical ultrasound imaging is widely used in practice; specialists diagnose diseases according to the results obtained from ultrasound imaging, and they even treat some diseases using ultrasound-based techniques. Despite the many developments and advances made in ultrasound data processing, there are still many open areas in this field that call for continued research and study. In particular, by processing the signals received in ultrasound imaging with different algorithms, an image of the considered medium can be reconstructed. The better the quality and clarity of the resulting image, the less error the doctor will make in image interpretation and diagnosis. Therefore, the performance of the algorithm used to generate the medical image is important, and researchers are still trying to develop efficient algorithms that produce images of the best possible quality. Another important topic is the computational complexity of image reconstruction algorithms; many improved algorithms that yield high-quality images suffer from high computational complexity. In other words, it is not possible to use them in real-time imaging, so it is essential to provide methods that reduce the computational complexity of such algorithms. Another indispensable point is the aberration induced in the received signal, a natural phenomenon in the ultrasound imaging process that degrades image quality and can even cause information loss. Providing solutions to overcome this limitation is also a challenge that must be addressed. Beyond these cases, there exist many other challenges and limitations in medical ultrasound imaging, many of which are discussed in this book. So far, our team has conducted many studies on the topics and challenges mentioned above and obtained valuable achievements, which are collected and explained in detail in this book. While we continue our studies and research in this regard, much other research has been done by other groups; this book also deals with these achievements and collects comprehensive and useful information, especially in the field of improved image reconstruction algorithms.

Briefly, in this book, we aim to explain in detail the principles of ultrasound imaging in medicine, present and categorize the challenges faced during data processing, and discuss their corresponding solutions. We also discuss new areas of beamforming in ultrasound imaging and provide a basis for continuing research for those interested in working in this field. This book is recommended for students and researchers in biomedical engineering who study array signal processing, since they can benefit from the detailed treatment of medical data processing and the various beamforming techniques explained in this book. It will also be useful for those whose studies include any ultrasound-based topic. As the book covers recent advances in medical ultrasound signal processing, it provides the reader with useful information about the studies that have been done so far; therefore, it also contains valuable information for researchers looking to develop new methods to improve the performance of ultrasound imaging. We sincerely hope that this book will be useful for the readers and provide a suitable platform for presenting new ideas.

Tehran, Iran

Babak Mohammadzadeh Asl Roya Paridar

Acknowledgements

We would like to thank the many researchers in the field of ultrasound signal processing; we have tried to acknowledge their contributions in the book. We also want to thank the editor and reviewers for their time in improving the book and their help in its publication.


Chapter 1

Principles of Ultrasound Imaging

Abstract In this chapter, the principles of ultrasound imaging are explained: the necessity of the ultrasound imaging technique in medicine, some basic concepts encountered in the imaging process, the different types of arrays used for imaging, and the different imaging modes are discussed. Since the main focus of the book is on B-mode imaging, the chapter continues by describing different B-mode imaging systems. Finally, the chapter concludes by defining some common evaluation parameters used for the quantitative evaluation of image reconstruction algorithms.

Keywords Medical ultrasound · Array characteristics · Attenuation · B-mode imaging · Quantitative evaluation

1.1 Introduction

The basic principles of ultrasound (US) imaging in medicine were first inspired by SONAR and RADAR techniques, in which reflected echoes are used to detect whether an object exists underwater (in SONAR imaging) and to acquire detailed information about the imaged region (in RADAR imaging). Since the 1930s, a wide range of applications of medical US imaging has been developed (Chan and Perlas 2011). US applications in the medical field are categorized into two main parts: diagnosis and therapy. Brain imaging was among the first diagnostic US examinations successfully performed on a human being (Edler and Lindström 2004). Several improvements have been made since then, and this imaging technique has gradually become one of the most commonly used imaging methods. US imaging has some valuable benefits that have attracted a lot of attention: its non-ionizing nature, cost-efficiency, portability, and the possibility of imaging soft tissues are some of the advantages of this modality. Using this technique, it is possible to image moving organs inside the body, e.g., the heart; moreover, tumor detection in tissues and real-time imaging are achievable. In US imaging, an acoustic wave with a frequency beyond the human auditory range is used to perform the imaging process. The employed frequency lies in the range 20 kHz ≤ f ≤ 1 GHz and is determined by the intended application; specifically, the 1–15 MHz range is used for diagnostic applications. After the wave is emitted toward the tissue, a part of it is reflected back, carrying useful information. Recording and processing the reflected waves reveals characteristics of the tissue that are not normally visible, which helps the physician diagnose and even treat the disease. The acoustic waves, or equivalently the pressure waves, are regular vibrations of molecules; the wave propagates in the medium and makes its adjacent molecules vibrate, and this vibration transmission continues in a similar way, so the wave propagates through the environment. Acoustic wave generation is performed using piezoelectric transducers, which vibrate in proportion to the applied electrical voltage. The acoustic waves are emitted consecutively into the medium (e.g., the tissue). Note that the transducers usually both emit and receive the waves; therefore, the time between two consecutive emissions should be long enough for a transmitted wave to travel through the tissue, reflect, and reach the transducer before the next emission.

1.2 Wave Equation

The acoustic wave can be interpreted as the generation of periodic compression and expansion pressure in the medium. The pressure wave propagates through the volume of the medium, i.e., in three dimensions. Specifically, consider the pressure wave in the $x$ direction, $p(x, t)$. The pressure wave equation is formulated as follows:

$$\frac{\partial^2}{\partial x^2} p(x, t) - \rho_0 K \frac{\partial^2}{\partial t^2} p(x, t) = 0, \qquad (1.1)$$

where $K$ and $\rho_0$ denote the compressibility constant and the average density of the medium, respectively. The solution of the above equation is considered as follows:

$$p(x, t) = p_0 \cos(\omega t - kx), \qquad (1.2)$$

where $p_0$ denotes the amplitude of the propagated acoustic wave, $\omega = 2\pi f$ and $k = 2\pi f / c$ denote the temporal and spatial frequencies, respectively, and $c$ is the speed of sound. The spatial frequency is known as the wavenumber. Note that the wavelength $\lambda$ is defined as the spatial distance between two consecutive wavefronts, which depends on frequency and sound speed; more precisely, $c = \lambda f$. Accordingly, the wavenumber can also be expressed in terms of the wavelength as $k = 2\pi / \lambda$. Substituting the solution stated in (1.2) into the wave equation presented in (1.1), one obtains the following relation between the temporal and spatial frequencies:

$$k = \omega \sqrt{\rho_0 K}. \qquad (1.3)$$
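As a quick numerical illustration of these relations, the following is a minimal sketch; the 5 MHz center frequency and the 1540 m/s soft-tissue sound speed are common textbook values assumed here, not parameters prescribed by this chapter.

```python
import math

c = 1540.0   # assumed speed of sound in soft tissue (m/s)
f = 5e6      # assumed transducer center frequency (Hz)

wavelength = c / f                       # lambda = c / f
wavenumber = 2 * math.pi / wavelength    # k = 2*pi/lambda = 2*pi*f/c

print(f"wavelength = {wavelength * 1e3:.3f} mm")   # ~0.308 mm
print(f"wavenumber = {wavenumber:.1f} rad/m")
```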

1.3 Impedance, Reflection, and Transmission

Consider an acoustic plane wave traveling through a boundary between two mediums with different impedances; this is known as the incident wave. In such a situation, some part of the wave is reflected back toward the first medium, known as the reflected wave, and some part of it is transmitted through the second medium, which we call the transmitted wave. Figure 1.1a shows the schematic of the incident and transmitted waves in the described situation. The distance between the projections of the wavefronts on the boundary of the two mediums stays constant and is denoted as $d$. Also, Fig. 1.1b shows the incident, reflected, and transmitted waves and their corresponding angles, denoted as $\theta_i$, $\theta_r$, and $\theta_t$, respectively. We have:

.

λ1 . sin θi

(1.4)

Note that the wavelengths in the first and the second mediums are denoted as $\lambda_1$ and $\lambda_2$, respectively, as shown in Fig. 1.1. From the above equation, and considering that the medium is the same for both the incident and reflected waves, one can conclude that the incident and reflected angles are equal, i.e., $\theta_r = \theta_i$.

Fig. 1.1 a The schematic of the wave traveling through the boundary of two mediums, and b the angles corresponding to incident, reflected, and transmitted waves


To keep the parameter $d$ constant at the boundary of the two mediums, the following relation should be met:

$$\frac{\sin \theta_i}{\sin \theta_t} = \frac{\lambda_1}{\lambda_2} = \frac{c_1}{c_2}. \qquad (1.5)$$

This relation is called the refraction phenomenon. It shows that the direction of the wave changes after it reaches the boundary between two mediums with different sound velocities. It can be seen from (1.5) that the propagation angle in medium 2 decreases when $c_1 > c_2$, and vice versa. It is worth noting that $\theta_r = \theta_t = 0$ for the case in which $\theta_i = 0$. The amount of reflected echo depends on the impedance parameter, $Z$. The impedance of a medium is obtained from the following equation:

$$Z = \frac{p}{u}, \qquad (1.6)$$

where $u$ denotes the particle velocity. An equivalent form of the above equation is $Z = \rho_0 c$. The impedance of each medium is different. As mentioned before, when a traveling wave reaches the boundary between two mediums, some part of the wave is reflected back into the existing medium, and some other part is transmitted through the next medium. The reflection coefficient is defined as the ratio between the reflected pressure wave and the incident pressure wave, i.e., $R = p_r / p_i$, which can be obtained according to the following equation:

$$R = \frac{Z_2 \cos \theta_i - Z_1 \cos \theta_t}{Z_1 \cos \theta_t + Z_2 \cos \theta_i}. \qquad (1.7)$$

The above equation is the general formulation of the reflection coefficient. Now, consider the special case in which $\theta_i = \theta_t = 0$. In such a situation, the reflection coefficient simplifies as follows:

$$R = \frac{Z_2 - Z_1}{Z_1 + Z_2}. \qquad (1.8)$$

The following conclusions are drawn from the obtained reflection coefficient (a small numerical sketch of these cases follows the list):

• If $Z_1 = Z_2$, the reflection coefficient is zero, indicating that no wave is reflected back into the first medium.
• If $Z_2 > Z_1$, the reflection coefficient is a positive value, $R > 0$.
• If $Z_2 < Z_1$, the reflection coefficient is a negative value, which is interpreted as a $180^\circ$ change in the phase of the reflected wave compared to the incident wave.
• As the difference between the impedances of the two mediums increases, the reflection coefficient increases. In particular, if there is a great difference between the impedances of the two mediums, almost all of the wave is reflected back, and almost no part of the wave is transmitted through the second medium.
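The sketch below evaluates (1.8) for a few interfaces; the acoustic impedance values (in MRayl) are illustrative, commonly quoted textbook figures assumed here, not values taken from this chapter.

```python
# Normal-incidence pressure reflection coefficient, Eq. (1.8).
def reflection_coefficient(z1: float, z2: float) -> float:
    return (z2 - z1) / (z1 + z2)

# Impedances in MRayl: illustrative assumptions, not from this book.
interfaces = {
    "soft tissue -> soft tissue (matched)": (1.60, 1.60),
    "soft tissue -> bone": (1.60, 7.80),
    "soft tissue -> air": (1.60, 0.0004),
}

for name, (z1, z2) in interfaces.items():
    r = reflection_coefficient(z1, z2)
    # |R|^2 is the fraction of incident power reflected at the boundary.
    print(f"{name}: R = {r:+.3f}, reflected power fraction = {r**2:.3f}")
```

The matched case returns $R = 0$ (no echo), while the tissue/air case gives $R \approx -1$, i.e., nearly total reflection with a phase inversion, which is why coupling gel is needed between the probe and the skin.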

1.4 Scattering and Attenuation

Scattering refers to reflected waves that propagate along many different directions in the medium, with a smaller amplitude than the reflected wave described in the previous section. It occurs for two main reasons: (i) the boundary between two mediums is not flat, and (ii) the dimensions of the particles of the medium are much smaller than the wavelength (Chan and Perlas 2011). This phenomenon is common in medical ultrasound imaging; the boundaries between different tissues are almost never flat, and very small particles, such as blood cells, are found in abundance in the imaging environment. Note that reflection and scattering are different. Figure 1.2 shows the schematic of the reflection phenomenon at flat surfaces, as well as the scattering that occurs at non-flat surfaces and small-scale particles. Scattering results in a reduction of the amplitude of the wave, which is known as attenuation. In addition to scattering, some other factors cause wave attenuation, the most common of which is absorption: due to the viscosity of the tissue, part of the wave intensity is converted to heat. Taking the attenuation phenomenon into account, the pressure wave is expressed as below:

$$p(x, t) = p_0 e^{-\alpha x} \cos(\omega t - kx). \qquad (1.9)$$

It can be seen that the wave energy is diminished exponentially due to the attenuation effect. The parameter $\alpha$ in (1.9) denotes the loss factor, which is directly proportional to the frequency; as the frequency increases, the attenuation increases. In other words, the intensity of the echo is damped more, and consequently, the penetration depth of the wave decreases. Therefore, in cases where imaging of deep regions is of interest, the wave frequency should not be high. On the other hand, according to $c = \lambda f$, as the frequency decreases, the wavelength $\lambda$ increases and, consequently, the resolution of the resulting image degrades (note that a medium with constant $c$ is considered). Generally, there is a trade-off between the penetration depth and the resolution of the reconstructed image obtained from the imaging process.

Fig. 1.2 Schematic of a the reflected wave in flat tissue, b the scattering in non-flat tissue, and c the scattering in a very small particle


Fig. 1.3 Schematic of US imaging procedure

The loss factor unit is equivalent to $1/(\mathrm{cm} \cdot \mathrm{MHz})$; however, it is usually expressed in decibels. According to the explanations presented so far, the general form of the procedure performed in US imaging of a medium can be pictured as the schematic shown in Fig. 1.3. The propagated wave is considered to be planar; initially, the propagated wave is spherical, but as it travels farther, its wavefront approaches a plane. In other words, the propagated wave is planar in the far field. More explanations about how to use the far-field approximation in US imaging are presented in Chap. 3.
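To make the penetration–resolution trade-off concrete, the following minimal sketch evaluates the exponential decay of (1.9) in decibel form for several frequencies; the 0.5 dB/(cm·MHz) soft-tissue attenuation slope is a commonly quoted textbook value assumed here, not a figure from this chapter.

```python
import numpy as np

alpha_db = 0.5                          # assumed attenuation slope, dB/(cm*MHz)
depths_cm = np.array([2.0, 5.0, 10.0])  # imaging depths to evaluate

for f_mhz in (2.0, 5.0, 10.0):
    # Round-trip loss in dB (factor 2 for the go-and-return path).
    loss_db = alpha_db * f_mhz * 2.0 * depths_cm
    amplitude = 10.0 ** (-loss_db / 20.0)   # relative echo amplitude
    print(f"f = {f_mhz:4.1f} MHz: loss at {depths_cm} cm = {loss_db} dB, "
          f"relative amplitude = {np.round(amplitude, 4)}")
```

Doubling the frequency doubles the loss in dB at every depth, which is the quantitative form of the statement that higher frequencies trade penetration for resolution.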

1.5 Speckle

In US imaging, the received echo is usually contaminated with artifacts that negatively affect data interpretation. Speckle is one of the most common artifacts appearing in the signals received from different types of tissue. Speckle originates from the scatterers, which are randomly positioned in the medium. Each scatterer reflects the incident wave along all directions, as mentioned before, which we call scattering. The interference of these scattered echoes from different scatterers creates a granular pattern named speckle (Wells and Halliwell 1981). Note that the interference of the scattered echoes can be either constructive or destructive, both of which create speckle artifacts. Speckle covers the whole background of the final reconstructed image and negatively affects the diagnosis; specifically, it may also disrupt the edge shape of a cyst in the tissue. To see how the speckle pattern appears in the reconstructed image, consider Fig. 1.4. In this figure, the reflected echoes obtained from a designed cyst phantom, shown in Fig. 1.4a, are simulated and processed to construct the corresponding image.


Fig. 1.4 a The schematic of a designed cyst phantom, and b the corresponding reconstructed image. The speckle effect can be seen in the reconstructed image

As can be seen from the reconstructed image shown in Fig. 1.4b, the speckle pattern covers the background. Moreover, the circular shape of the cyst is not well retrieved due to the presence of the speckle artifact. Speckle degrades the contrast of the image; in other words, it makes the discrimination and separation of an object against its background difficult. Note that in cases where the locations of the scatterers are fixed in a medium, the generated speckle is independent of scan time; more precisely, a similar speckle pattern is generated for different scan times. Therefore, imaging the medium from different angles and averaging the results suppresses the speckle successfully. Different speckle reduction techniques, including filters and modified algorithms, have been proposed to improve the contrast of the reconstructed images. Some of these techniques are discussed in Chap. 3.
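To illustrate the statistics behind this granular pattern, the following is a minimal sketch that sums the contributions of many random-phase scatterers per pixel; for fully developed speckle, the envelope is Rayleigh-distributed and its signal-to-noise ratio approaches about 1.91, a standard result assumed here rather than taken from this chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each pixel receives the coherent sum of many scatterer echoes with
# random amplitudes and uniformly random phases (fully developed speckle).
n_pixels, n_scatterers = 100_000, 50
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_pixels, n_scatterers))
amps = rng.rayleigh(1.0, size=(n_pixels, n_scatterers))
field = np.sum(amps * np.exp(1j * phases), axis=1)

envelope = np.abs(field)
snr = envelope.mean() / envelope.std()
print(f"speckle SNR = {snr:.2f} (theory for Rayleigh speckle: ~1.91)")
```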

1.6 Time Gain Compensation

To see the effect of attenuation on the intensity of the received wave, consider Fig. 1.5, in which a phantom consisting of three point targets with equal intensity is designed. The echoes reflected from the point targets are depicted in Fig. 1.5b. It can be seen that although the intensities of the point targets are equal, the corresponding amplitudes of the received echoes are not. The deeper the target is placed, the weaker the amplitude of the corresponding received echo. This is due to two main reasons: (i) as the wave reaches the first point target, a part of the signal is reflected back from the target, and another part of the wave, weakened compared to the initial wave, continues going through the medium; and (ii) attenuation, due to the different reasons explained earlier, results in further amplitude suppression of the wave.


Fig. 1.5 a The schematic of the transducer position relative to the imaging phantom, and b the intensity plot of the received signal obtained from the transducer

Fig. 1.6 Two different TGC patterns for amplifying the received echo

To compensate for the attenuation caused by the increasing depth of the imaging region, time gain compensation (TGC) is usually performed. TGC is an amplifier that amplifies the received echo as a function of time; as the time index increases, the received echo is amplified more. Different TGC patterns, both linear and nonlinear, can be applied to compensate for the attenuation. A simple TGC pattern is a linearly increasing gain with a constant slope of 1 dB/s. This pattern is shown in Fig. 1.6a. However, in practice, the echoes received from the farthest distances are not amplified, and only the middle range of the received signal is usually considered for attenuation compensation, as shown in Fig. 1.6b. Some other nonlinear patterns also exist, which can be used for manipulating a specific structure, e.g., the heart valve (Christensen 1988).
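A minimal sketch of a linear TGC applied to received samples follows; the sampling rate, the echo data, and the slope value used in the toy example are illustrative assumptions (the text's simple pattern uses a 1 dB/s slope, and practical slopes are system-dependent).

```python
import numpy as np

def apply_linear_tgc(rf: np.ndarray, fs: float, slope_db_per_s: float) -> np.ndarray:
    """Amplify each sample by a gain that grows linearly with receive time."""
    t = np.arange(rf.size) / fs                   # receive time of each sample (s)
    gain = 10.0 ** (slope_db_per_s * t / 20.0)    # dB -> linear amplitude gain
    return rf * gain

# Toy example: three equal targets whose echoes decay with depth.
fs = 40e6                                         # assumed sampling rate (Hz)
rf = np.zeros(4000)
rf[[500, 1800, 3500]] = [1.0, 0.6, 0.3]           # deeper targets -> weaker echoes
compensated = apply_linear_tgc(rf, fs, slope_db_per_s=2.0e5)  # illustrative slope
print(np.round(compensated[[500, 1800, 3500]], 3))
```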


1.7 Transducer

A transducer acts as both the transmitter and the receiver in diagnostic US imaging. In transmit mode, it emits acoustic waves into the environment; in receive mode, it receives the echoes reflected from the environment. This process should be carried out such that the whole medium is scanned, and in this regard, different scanning methods can be imagined for the transducer. Figure 1.7 shows two main scanning modes. In linear scanning, shown in Fig. 1.7a, the transducer translates along a specific line and performs the imaging. Alternatively, the scanning can be performed by angularly rotating the transducer while its position is fixed, as shown in Fig. 1.7b. Note that other types of scanning can be obtained by combining these two modes; for instance, the transducer can rotate angularly after translating along a flat line, with this process repeated continuously along a specified direction to scan the whole medium.

1.7.1 Array of Transducers

To avoid mechanically moving or angularly rotating the transducer in order to complete the scan, an array of elements is put together to perform the US imaging. The array transducer configurations can be categorized into three main types:

1. A one-dimensional (1D) array of transducers, in which a number of elements are positioned along one dimension.
2. A two-dimensional (2D) array of elements, in which the transducers are distributed on a plane. The configuration in this case can be circular or rectangular.
3. A three-dimensional (3D) array of transducers, in which a volumetric shape is formed by distributing the elements along each dimension of the configuration space.

Fig. 1.7 The schematic of the a linear translation and b angular rotation of the transducer


Fig. 1.8 The schematic of the a linear and b convex array of the transducers

Fig. 1.9 The schematic of the illumination pattern using a sequential-linear array, b phased-linear array and c convex array

Of the configurations mentioned above, the 1D array is the most commonly used in practice, and it is the main focus of this book. In particular, 1D arrays are divided into two main types, linear and convex, as shown in Fig. 1.8. Using an array transducer, the need for mechanical movement of the transducer is eliminated, and therefore the imaging process is sped up. The area illuminated by each type of array transducer is different; the convex array produces a pie-shaped illumination pattern, while the linear array creates a rectangular-shaped pattern. Note that it is also possible to produce the pie-shaped pattern with a linear array transducer by adjusting the transmission time of each element. Figure 1.9 shows the illumination patterns obtained using linear and convex arrays. The rectangular pattern obtained with the linear array is called the sequential-linear array, while the pie-shaped pattern obtained with the same array is known as the phased-linear array. It can be seen from the schematics that the pie-shaped pattern corresponding to the phased-linear and convex arrays covers a wider field of view than the rectangular pattern. This property makes it suitable for cardiac imaging.


1.7.2 Structural Characteristics of the Linear Array

Generally, the linear array transducer is the most common in practice due to its simple hardware structure. The quality of the image reconstructed from the echoes received by the array transducer depends on the array characteristics. Figure 1.10 shows the parameters to be considered in linear array design. As can be seen from the figure, there is a space between every two consecutive elements, which is known as the kerf. As the kerf increases, an error may occur in determining the angle of the reflected echoes; therefore, to prevent such an artifact, the kerf should be smaller than a specific value. In addition to the kerf, the length of the array ($L_a$), or equivalently the number of elements used in the linear array, affects the resulting image; as $L_a$ increases, the image quality improves, and a wider region is covered for imaging. The distance between the centers of two consecutive elements is known as the array pitch, and the following relation holds between this parameter and the array length:

$$L_a = \text{number of elements} \times \text{pitch}.$$

Also, from Fig. 1.10, one can see that the following relationship is established between the parameters:

$$\text{pitch} = \text{element width} + \text{kerf}.$$

The center of the linear array is usually taken to be at the origin, which leads to a simpler computational process. In such a case, the position of each element, $x_i$, is formulated as below:

$$x_i = \left( i - \frac{N - 1}{2} \right) \times \text{pitch}, \quad i = 0, 1, \ldots, N - 1, \qquad (1.10)$$

where $N$ denotes the number of array elements. In summary, the array length and the spacing between the elements are important structural parameters that determine how the array geometry affects the imaging process. This issue is discussed in more detail in Chap. 2 of this book.

Fig. 1.10 The schematic of a linear array
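A minimal sketch of (1.10), computing element center positions for an assumed 128-element array with a 0.3 mm pitch (both illustrative values, not parameters from this chapter):

```python
import numpy as np

n_elements = 128   # assumed element count
pitch = 0.3e-3     # assumed pitch (m)

i = np.arange(n_elements)
x = (i - (n_elements - 1) / 2) * pitch   # Eq. (1.10): aperture centered on origin

print(f"aperture length L_a = {n_elements * pitch * 1e3:.1f} mm")
print(f"first/last element at {x[0] * 1e3:.2f} mm / {x[-1] * 1e3:.2f} mm")
```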


1.8 Frame Rate

It is well understood that US imaging is based on the pulse-echo process, in which an acoustic wave is transmitted from the transducer through the medium, and the echoes reflected from the medium are recorded by the same transducer. This process is repeated consecutively. Note that the time interval between two consecutive transmissions should be adjusted such that all the reflected echoes due to the previous transmission reach the transducer. The frame rate of the imaging highly depends on this time interval. To derive the frame rate equation, one should obtain the pulse repetition interval (PRI) first. To this end, consider the following equation:

$$2d = ct, \qquad (1.11)$$

where $d$ denotes the distance between the transducer and the reflection point in the medium, and $t$ denotes the time instance. Due to the round-trip path of the traveling wave, the distance between the medium and the transducer is doubled, as shown in the above equation. If the maximum depth of the imaging medium is $d_{max}$, the PRI is obtained as below:

$$\text{PRI} \ge \frac{2 d_{max}}{c}. \qquad (1.12)$$

From the above equation, one can see that the time interval between two consecutive pulses should be at least twice the maximum depth of the imaging medium divided by the sound speed. The pulse repetition frequency (PRF) of the US imaging system is obtained from the PRI as follows:

$$\text{PRF} = \frac{1}{\text{PRI}}. \qquad (1.13)$$

Note that PRF is expressed in Hz. The number of required transmissions to achieve an image with acceptable quality is denoted as $N_t$. The frame rate (FR) of the imaging system is then obtained as below:

$$\text{FR} = \frac{\text{PRF}}{N_t}. \qquad (1.14)$$

It can be seen from the above equation that the FR decreases as the number of required emissions increases. It is concluded that there is a trade-off between image quality and FR. Also, the FR and the maximum penetration depth of the imaging are inversely related; this follows from (1.12) to (1.14).
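To make the chain (1.12)-(1.14) concrete, the following small Python helper (a hypothetical utility, not from the text) computes the maximum PRF and frame rate for a given imaging depth and number of transmissions, assuming a typical soft-tissue sound speed of 1540 m/s:

```python
def frame_rate(d_max, n_transmissions, c=1540.0):
    """d_max: maximum imaging depth in meters; c: sound speed in m/s.
    Returns (maximum PRF in Hz, maximum FR in frames/s)."""
    pri = 2.0 * d_max / c                # minimum PRI, (1.12)
    prf = 1.0 / pri                      # (1.13)
    return prf, prf / n_transmissions    # (1.14)

# e.g., 10 cm depth and 128 scan lines:
# prf, fr = frame_rate(0.10, 128)   # prf = 7700 Hz, fr is about 60 fps
```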


1.9 Image Display Steps

A 2D cross-sectional image of the medium is the result of US imaging using the array transducers. To visualize such an image, some processing steps should be applied to the reflected echoes after they are acquired. The diagram of these processing steps is illustrated in Fig. 1.11. As can be seen from the figure, the received echoes are amplified and digitized before going through the PC for further processing and image display. Note that the reflected echoes are weak and may be contaminated by noise. Therefore, the amplifier used to amplify the echoes must be low-noise. After amplification, the received echoes are digitized, which is shown as the A/D block in Fig. 1.11. These two primary steps can be considered the pre-processing steps of the image display procedure. The main processing steps performed on the digitized signal are discussed separately in the following.

Demodulator In the first processing step of image display, the frequency spectrum of the signal is shifted to the origin, which is known as the demodulation operation. In other words, the carrier frequency is removed from the received echoes. Note that by demodulating the signal, the phase information is lost, and only the amplitude information remains. As the visualized image contains only the amplitude information, there is no need to preserve the phase information of the signal.

TGC After the demodulation process, the attenuation of the signal due to the distance increment is compensated using the TGC system, as previously discussed in Sect. 1.6. As shown in Fig. 1.6, the TGC rate is considered to be 1 dB/(s·MHz). However, one should note that different tissues have different attenuation coefficients. Therefore, the attenuation rate is usually set empirically by the operator to obtain a better result.

Log-compression The display monitor on which the final image is going to be displayed has a specific dynamic range (DR). Note that DR is defined as below:

Fig. 1.11 The diagram of image reconstruction steps


$$\text{DR} = 20 \log\left( \frac{I_{max}}{I_{min}} \right), \qquad (1.15)$$

where $I_{max}$ and $I_{min}$ denote the maximum and minimum amplitude of the image to be displayed, respectively. In other words, DR is interpreted as the difference between the maximum and minimum amplitude that can be displayed by the display monitor. If the DR of the signal is high compared to the DR of the display monitor, the signal is clipped when the image is displayed, and therefore, the information of the displayed signal is incomplete. To overcome this problem, the DR of the signals should be reduced to the DR of the display monitor. This can be achieved by logarithmically compressing the signal; performing this compression is known as log-compression. Note that as the amplitude of the received signal increases, the DR of the signal increases. This means that the log-compression process is always required. Also, note that applying TGC to the signals compresses the DR to some extent; however, the resulting DR is still high compared to the DR of the display monitor. By compressing the signal logarithmically, the weaker signals are amplified more, and the stronger signals are amplified less.

Thresholding The last processing step to display the image is thresholding. In this step, a threshold is simply applied to the log-compressed signal in order to remove the echoes that are at the noise level. To schematically show the output of each processing step of visualizing the final image, consider Fig. 1.12. As expected, the input signal is demodulated first. Then, TGC is applied, and the farther signal is amplified more than the closer signal. Finally, log-compression and thresholding operations are applied,

Fig. 1.12 A simple example of image reconstruction steps. The red-dotted line represents the threshold


respectively. It can be seen from the figure that the signal whose amplitude is lower than the selected threshold is removed, while the signals with amplitudes higher than the selected threshold are preserved for display, as shown in Fig. 1.12.
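The display chain described above can be sketched in a few lines of Python. The following is a minimal illustration (the function name and default values are assumptions for demonstration, and TGC is omitted for brevity), using the Hilbert transform for envelope detection:

```python
import numpy as np
from scipy.signal import hilbert

def display_pipeline(rf, dr_db=60.0, threshold_db=-50.0):
    """Hypothetical sketch of the display chain: envelope detection
    (demodulation), log-compression to a target dynamic range, thresholding."""
    env = np.abs(hilbert(rf))               # demodulation: envelope of the RF line
    env /= env.max()                        # normalize so the peak sits at 0 dB
    img = 20.0 * np.log10(env + 1e-12)      # log-compression, cf. (1.15)
    img = np.clip(img, -dr_db, 0.0)         # limit to the monitor's DR
    img[img < threshold_db] = -dr_db        # remove echoes near the noise level
    return img
```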

1.10 Scanning Modes

Depending on the imaging setup, different types of imaging systems, based on a single transducer or an array of transducers, can operate in different scanning modes. In this section, different US imaging modes are discussed. The scanning modes are divided into the following categories:

• Amplitude mode, known as A-mode scanning,
• Brightness mode, known as B-mode scanning,
• Motion mode, known as M-mode scanning, which corresponds to imaging a moving object.

In the following, brief explanations of each of the mentioned scanning strategies are provided.

1.10.1 A-Mode Scanning

A-mode is the oldest scanning strategy, in which a single transducer is used to transmit a pulse toward the tissue and receive the reflected echo to present 1D amplitude information. A simple schematic of this scanning strategy is depicted in Fig. 1.13. The transducer element switches between the transmitter and receiver modes constantly. The pulser block sends the pulse command. After transmitting the pulse, the reflected echoes from the medium are acquired. Then, the attenuation is compensated using the TGC system. Finally, the 1D plot of the received signal is displayed. From the time intervals between the displayed echoes, one can obtain some information about the inner structure of the tissue. Specifically, the displayed echo corresponding to the time instance $t_1$ shown in the schematic of Fig. 1.13 results from the following process: the pulse is transmitted through the medium and reaches the boundary between the tissues with impedances $Z_1$ and $Z_2$. A part of the incident pulse is reflected back to the transducer, and another part of it continues through the tissue with impedance $Z_2$. The reflected echo travels the distance $2x_1$ and reaches the transducer at time $t_1$. Knowing that the speed of sound in the tissue with impedance $Z_1$ is $c_1$, it can be concluded that $t_1 = 2x_1/c_1$. Therefore, having the time instance of the echo, $t_1$, the distance $x_1$ is obtained. The width of the tissue with impedance $Z_2$ is obtained in a similar manner. Note that a high-amplitude echo appears at the beginning, as depicted in the schematic of Fig. 1.13. This is due to the fact that there is a considerable difference


Fig. 1.13 The schematic of A-mode scanning. $c_1$ and $c_2$ denote the sound speed in two different parts of the tissue, which are separated by different colors

between the impedances of the transducer and the environment. Also, note that the amplitude of the received echoes decreases with distance. This arises for two main reasons: (i) the farther echoes travel longer distances, which leads to amplitude attenuation, and (ii) at the boundary of two tissues with different impedances, a large percentage of the incident wave is reflected, and only the remainder enters the second tissue. More precisely, the wave transmitted into the second tissue is weak compared to that in the first tissue. The disadvantage of the A-mode scanning strategy is that its field of view is limited to a single line. Nevertheless, it is applicable in cases such as ophthalmology, e.g., to detect whether a foreign object exists in the human eye.
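As a worked example of the round-trip relation $t_1 = 2x_1/c_1$ used above, the following hypothetical helper converts an echo arrival time into the depth of the reflecting boundary:

```python
def boundary_depth(t_echo, c=1540.0):
    """Round-trip relation t = 2x/c: depth of a reflecting boundary
    from the echo arrival time (c in m/s, t in seconds)."""
    return c * t_echo / 2.0

# e.g., an echo arriving at t1 = 52 us in soft tissue (c1 about 1540 m/s):
# x1 = boundary_depth(52e-6)   # about 0.040 m = 4 cm
```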

1.10.2 B-Mode Scanning

A 2D image cannot be obtained using the A-mode scanning strategy. To overcome this limitation, B-mode scanning is used, which can be interpreted as juxtaposing several A-mode scan lines. This can be achieved by moving the transducer (linearly or angularly). As can be seen from Fig. 1.13, the display monitor of A-mode scanning includes two axes: the time (or depth) axis and the amplitude axis. To form a 2D image, the amplitude axis is discarded. Instead, the amplitude of the received echo is converted to brightness; as the amplitude of the received echo increases, the brightness of the corresponding echo increases. Note that the transducer is moving to scan the medium. Therefore, to display the corresponding brightness, the B-mode imaging system should be designed such that the transducer position is known; the specified brightness is displayed based on the corresponding position of the transducer for each scan line, and consequently, the 2D image is obtained. To this end, the beam steering block is added to the scanning system, as shown in Fig. 1.14, to specify the position of the transducer.


Fig. 1.14 The schematic of B-mode scanning. The dashed block shows the difference between B-mode and A-mode scanning

In this way, the vertical axis is changed compared to A-mode scanning. In other words, the two axes of the display monitor specify the position, while the amplitude information appears as brightness. After scanning all the scan lines, the final 2D image is constructed and displayed. A single element or an array of elements can be used to perform the B-mode scanning process. In B-mode imaging, a cross-sectional image of the medium is obtained. This scanning mode is used extensively in medicine. Some applications of this technique include fetal imaging, examination of gynecological diseases, abdominal imaging (such as the kidney and liver), breast imaging, which plays an important role in cancer diagnosis, and heart imaging.

1.10.3 M-Mode Scanning

This scanning strategy is used to obtain information about the motion pattern of a medium. It can be efficiently used for heart valve imaging. M-mode scanning can be considered a combination of the A-mode and B-mode scanning strategies; similar to A-mode scanning, it acquires the received echoes along a single line, and similar to B-mode scanning, the amplitude of the received echo is converted to brightness information. The motion information of the medium is specified along the horizontal axis of the display monitor. Figure 1.15 shows an example of the M-mode scanning result of a medium consisting of a moving object. Note that in M-mode scanning, a single line is scanned iteratively. At each iteration, the brightness information is depicted in a new line of the display monitor, and therefore, the motion pattern of the object is revealed. From the obtained M-mode image, one can obtain the speed of the moving target ($v_0$ in Fig. 1.15).


Fig. 1.15 a The schematic of a moving medium, and b its corresponding M-mode image

1.11 Different B-Mode Imaging Systems

B-mode imaging has extensive applications in medicine, as mentioned before. The general concept of conventional B-mode imaging was described in the previous section. It is worth investigating different types of B-mode imaging systems and examining and comparing their performances. B-mode imaging systems using a linear array of transducers are divided into the following categories:

• Conventional B-mode imaging,
• Synthetic Transmit Aperture (STA) imaging,
• Plane Wave Imaging (PWI),
• Diverging Wave Imaging (DWI).

Depending on the requirements of the imaging, such as a high frame rate, imaging of deep tissue, or a high-resolution image, each of these techniques can be used. In the following, each category is discussed separately.

1.11.1 Conventional B-Mode Imaging

The schematic of the conventional B-mode imaging is depicted in Fig. 1.16. In this imaging system, a linear array of $N$ elements is used to perform the US imaging. The scanning process is performed in a way that a limited number of adjacent transducers (…

> MV behavior with the subarray length

By increasing the subarray length in the adaptive MV algorithm, the resolution of the reconstructed image is improved. However, this improvement comes at the expense of robustness degradation. Setting the subarray length to one ($L = 1$), the output is equivalent to that of the non-adaptive DAS beamformer.

To investigate the behavior of the MV algorithm for different values of the diagonal loading factor, Fig. 3.7 is presented, in which the simulation is performed for loading factors of $1/100L$, $1/10L$, $1/L$, and $10/L$. The subarray length is fixed ($L = N/2$) for a fair comparison. It can be seen from the resulting images that the image resolution is degraded as the parameter $\xi$ increases, as expected.

Fig. 3.7 The reconstructed images obtained from the MV algorithm with a $\xi = 1/100L$, b $\xi = 1/10L$, c $\xi = 1/L$ and d $\xi = 10/L$. A 96-element array is used to perform the simulation. The subarray length is considered to be $L = N/2$. The figures are shown with the dynamic range of 60 dB


Fig. 3.8 The lateral variations of the simulated point target shown in Fig. 3.7

Fig. 3.9 The FWHM plot for different values of subarray length and diagonal loading factor. The solid black graph with square marks corresponds to Fig. 3.5, in which the FWHM is calculated for different values of subarray length. The dashed graph with circle marks corresponds to Fig. 3.7, in which this evaluation metric is computed for different values of $\xi$. The FWHM of the DAS method is shown as a reference

Moreover, the lateral variations plot corresponding to Fig. 3.7 is depicted in Fig. 3.8 to show the main lobe width of the reconstructed point target for different values of $\xi$. Again, it can be seen that the resolution of the point target approaches that of the DAS method as $\xi$ increases. This conclusion was previously drawn from (2.37). To further evaluate the results quantitatively, the FWHM plot for different values of $\xi$ is depicted in Fig. 3.9 (the dashed line). A similar conclusion is also drawn from this evaluation metric. The following key point is obtained from the discussion made so far:



> MV behavior with the loading factor parameter

Diagonal loading makes the MV algorithm more robust. As the loading factor increases, the robustness of the algorithm improves. However, the resolution of the resulting image is degraded. As $\xi \to \infty$, the MV algorithm approaches the non-adaptive DAS beamformer.

It has been shown that the adaptive MV algorithm outperforms the non-adaptive DAS method in terms of resolution. However, as the imaging medium becomes more complex, the quality of the image obtained from the MV beamformer may have some shortcomings; e.g., it may suffer from the signal cancellation phenomenon. In Sect. 3.6, we will show that in such cases, the performance of the MV algorithm is highly dependent on the adjusted parameters. In other words, by correctly setting the parameters, the MV algorithm performs well enough to obtain a desirable image. In addition to high-resolution 2D B-mode reconstruction, the MV algorithm and its modified versions (discussed in the following sections) can also be used in other medical fields, such as 3D US imaging and adaptive estimation of the Doppler spectrum in blood flow measurement (Avanji et al. 2013; Majd and Asl 2020; Makouei and Asl 2020). One should note that the transducer parameters (such as the number of elements, sampling frequency, and type of the excitation pulse) affect the quality of the resulting image (Avanji et al. 2013); in particular, the higher the number of array elements, the better the quality of the resulting image. However, this improvement is achieved at the expense of higher computational complexity. Also, it has been shown that when a linear frequency modulation chirp (instead of a sine wave) is used as the excitation pulse, the SNR of the received signal increases, and consequently, a better-quality reconstructed image is achieved (Izadi et al. 2015). In addition, a variety of techniques to further improve the performance of the MV algorithm in terms of image quality have been developed, which are discussed in the following sections.
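To make the preceding discussion concrete, the following minimal NumPy sketch (the function name and interface are our own, not from a specific toolbox) computes an MV-beamformed sample from time-delayed aperture data, using subarray spatial smoothing and a trace-scaled diagonal loading term as discussed above:

```python
import numpy as np

def mv_sample(x, L, xi=0.01):
    """x: time-delayed aperture data at one pixel, shape (N,), complex;
    L: subarray length; xi: relative diagonal loading factor."""
    N = len(x)
    # spatially smoothed covariance over the N - L + 1 subarrays, cf. (3.13)
    R = np.zeros((L, L), dtype=complex)
    for l in range(N - L + 1):
        sub = x[l:l + L]
        R += np.outer(sub, sub.conj())
    R /= (N - L + 1)
    R += xi * np.trace(R).real / L * np.eye(L)   # diagonal loading
    a = np.ones(L)                               # steering vector (delays applied)
    w = np.linalg.solve(R, a)
    w /= (a.conj() @ w)                          # distortionless response: w^H a = 1
    # beamform each subarray with the MV weights and average the outputs
    return np.mean([w.conj() @ x[l:l + L] for l in range(N - L + 1)])
```

Increasing L sharpens the response at the cost of robustness, while increasing xi pushes the result toward the DAS output, mirroring the two key points above.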

3.5 Contrast Improvement Algorithms

The resolution of reconstructed medical US images is improved using the MV algorithm. Speckle is a kind of noise that unavoidably exists in the received signals, as discussed in Sect. 1.5. Speckle reduction makes the targets more clearly visible in the final image. However, in some cases, such as imaging a cystic medium in medical US imaging, maintaining the speckle statistics in the background of the reconstructed image is required for better detection. The MV beamforming method decreases the intensity of the speckle in the reconstructed image. In other words, not only does it fail to improve the contrast of the image, it actually makes the contrast worse than that of the non-adaptive DAS algorithm. To overcome this limitation and retain the speckle


statistics similar to the non-adaptive DAS algorithm, temporal averaging is used, as discussed earlier in Sect. 3.3.1.2. In this regard, for the $l$th subarray, temporal averaging is performed over $2K+1$ time-delayed samples, such that the following samples are used to apply the temporal averaging at time index $k$:

$$\left[ x^{(l)}(k-K), \; \cdots, \; x^{(l)}(k), \; \cdots, \; x^{(l)}(k+K) \right]. \qquad (3.17)$$

Inspired by temporal averaging as well as spatial smoothing, the sample covariance matrix shown in (3.13) is updated as below:

$$\hat{R}(k) = \frac{1}{(N-L+1)(2K+1)} \sum_{n=-K}^{K} \sum_{l=1}^{N-L+1} x^{(l)}(k+n) \, x^{(l)}(k+n)^H, \qquad (3.18)$$

which allows the speckle statistics to be maintained while the resolution is improved. Temporal averaging is usually performed with the number of samples equal to twice the transmitted pulse length. More precisely, if the pulse length is $L_P$, temporal averaging is performed with $K \ge L_P$. To see the effect of temporal averaging on retaining the speckle statistics of the image, a cyst phantom is designed; a large number of scatterers with equal amplitude are uniformly distributed in a $20 \times 10 \times 40$ mm³ volume. A circular region with a radius of 3 mm at the center of the volume is considered as the cyst. The amplitude inside the cyst is set to zero. The parameters of the imaging system are similar to those of the system described in the previous section. The resulting images are presented in Fig. 3.10. The reconstructed image obtained from the DAS algorithm is also shown as a reference for comparing the background speckle resulting from the MV algorithm. As can be seen from the figure, using the adaptive MV algorithm without temporal averaging, the background speckle is constructed as a collection of several point-like sources; the homogeneity of the regions around the cyst phantom is lost. Also, it is qualitatively clear that the intensity, or equivalently, the contrast of the image is degraded compared to the DAS algorithm; the CNR evaluation metric, previously introduced in Sect. 1.12, is obtained as 5.14 dB and −0.51 dB for the DAS algorithm and the MV technique without temporal averaging, respectively. The regions inside and outside the cyst considered for calculating the CNR metric are shown with the dashed circles in Fig. 3.10a. The calculated CNR values show that the contrast of the MV algorithm is degraded by about 5.6 dB compared to the non-adaptive DAS beamformer. Using temporal averaging in addition to spatial smoothing to estimate the covariance matrix, and consequently the weight vector, the resulting image is improved considerably in terms of contrast compared to the case in which this technique is not applied, as shown in Fig. 3.10c. Note that using the MV algorithm, the circular form of the cystic region is better constructed compared to the DAS algorithm. Also, the intensity inside the cyst is suppressed more successfully using the adaptive MV algorithm. Therefore, after maintaining the background speckle by temporal averaging, the contrast of the MV algorithm is expected to be improved;


Fig. 3.10 The reconstructed images obtained from a DAS, b MV without temporal averaging, and c MV with temporal averaging ($K = 11$). The images are shown with the dynamic range of 50 dB. The dashed circles demonstrate the considered inside and outside regions of the cyst to calculate the CNR evaluation metric

Fig. 3.11 The lateral variations plot corresponding to Fig. 3.10. The two vertical dotted lines show the interval in which the cyst is located

the CNR value of the MV algorithm with temporal averaging is obtained as 4.67 dB, indicating that the contrast is improved compared to the case in which temporal averaging is not applied. The lateral variations plot of the reconstructed images at a depth of 45 mm, the line that passes through the middle of the cyst, is depicted in Fig. 3.11. A similar conclusion is drawn from the demonstrated lateral variations. This reveals the effective role of the temporal averaging technique in retaining the speckle statistics and improving the contrast of the resulting image. Other techniques also exist to further improve the image contrast. In the following, coherence-based beamformers will be discussed. Then, the minimum variance-based algorithms that have been developed to improve image contrast will be introduced.
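A direct transcription of (3.18) into NumPy might look as follows (a hypothetical helper; X is assumed to hold the time-delayed channel data, with column k corresponding to the current imaging point, and k ± K within bounds):

```python
import numpy as np

def smoothed_covariance(X, k, L, K):
    """Covariance estimate of (3.18): spatial smoothing plus temporal averaging.
    X: delayed channel data, shape (N, T); k: current time index."""
    N = X.shape[0]
    R = np.zeros((L, L), dtype=complex)
    for n in range(-K, K + 1):              # temporal averaging over 2K+1 samples
        for l in range(N - L + 1):          # spatial smoothing over subarrays
            sub = X[l:l + L, k + n]
            R += np.outer(sub, sub.conj())
    return R / ((N - L + 1) * (2 * K + 1))
```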


3.5.1 Coherence-Based Beamformers

In this section, different coherence-based beamformers that can be used to improve the image contrast compared to the non-adaptive DAS algorithm are explained in detail.

3.5.1.1 Coherence Factor

Coherence Factor (CF) is a sidelobe-suppressing method that weights the beamformed data adaptively to improve the image contrast (Wang et al. 2007). Using the CF method, the unwanted sidelobes are suppressed while the main lobe is maintained. The CF weighting is formulated as below:

$$CF(k) = \frac{\left| \sum_{i=1}^{N} x_i(k) \right|^2}{N \sum_{i=1}^{N} |x_i(k)|^2}. \qquad (3.19)$$

It can be interpreted as the ratio of the main lobe energy to the total energy. The coefficients obtained using CF are calculated in a way that emphasizes the in-phase signals while suppressing the out-of-phase signals. Therefore, the contrast is expected to improve using this adaptive weighting; high values of CF correspond to samples whose magnitude should be retained. In other words, high values of CF are assigned to samples for which the focusing is performed well. Conversely, low values of CF correspond to the unwanted sidelobes, or equivalently, the weakly focused samples. Therefore, they are suppressed by multiplying them by low-value weights. It can be concluded from the above explanations that, according to the quality of the focusing process, the CF method weights the beamformed data adaptively to improve the image contrast. The output of CF is multiplied by the beamformed data, such as that of DAS, to further suppress the sidelobes:

$$y_{DAS+CF}(k) = CF(k) \times y_{DAS}(k).$$

The implementation steps of the CF weighting are schematically presented in Fig. 3.12. In this schematic, the output of the process which results in the numerator of (3.19) is shown as "Numerator". Similarly, the output of the process that results in the denominator of (3.19) is shown as "Denominator". Using the CF weighting process, one can take advantage of contrast improvement with low computational complexity. Two other weighting methods, which can be regarded as modified versions of the CF weighting method, are discussed in the following. All of these methods aim to enhance the contrast of the reconstructed images.
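Because (3.19) involves only two sums over the aperture, the CF weight is inexpensive to compute. A minimal sketch (function name is ours) for one pixel could be:

```python
import numpy as np

def coherence_factor(x):
    """CF of (3.19) for the delayed aperture samples x of one pixel, shape (N,)."""
    den = len(x) * np.sum(np.abs(x)**2)
    return np.abs(np.sum(x))**2 / den if den > 0 else 0.0

# applying the weight to the DAS output, cf. the DAS+CF relation above:
# y_das_cf = coherence_factor(x) * np.sum(x)
```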


Fig. 3.12 The schematic of the CF weighting implementation steps

3.5.1.2 Generalized Coherence Factor

Using the CF method, the image quality is improved, as will be shown in the following simulation results. However, note that this weighting method is appropriate for imaging phantoms consisting of point targets. In cases in which the data is speckle-generating, such as clinical breast imaging, the CF weighting method does not perform well enough; in speckle-generating data, some part of the main lobe is located in the low-frequency regions. Using the CF method, this part of the main lobe is suppressed along with the unwanted sidelobes, and therefore, the output is underestimated. To better understand this issue, it is helpful to define the CF weighting method in the spatial frequency domain. Consider the vector $x_{1:N}(k)$, the entries of which consist of the time-delayed signals of the elements at time instance $k$:

$$x_{1:N}(k) = \left[ x_1(k), x_2(k), \cdots, x_N(k) \right].$$

Taking the Fourier transform of the above vector, its Fourier spectrum with the spatial frequency index $l \in \left\{ -\frac{N}{2}, \ldots, \frac{N}{2} - 1 \right\}$ is obtained, i.e., $p_l(k)$, each entry of which is denoted as $p(l, k)$. The CF weighting method expressed in (3.19) can then be reformulated as below based on the defined frequency spectrum:

$$CF(k) = \frac{|p(0, k)|^2}{\sum_{l=-N/2}^{N/2-1} |p(l, k)|^2}. \qquad (3.20)$$

From the above equation, it can be seen that the CF weights are obtained from the ratio of the energy corresponding to the on-axis component of the signal (or equivalently, the DC component) to the total energy of the spectrum. To prevent underestimation due to the presence of a part of the main lobe in the low-frequency regions, it is desirable to modify (3.20) such that a low-frequency range is considered


Fig. 3.13 The schematic of the GCF weighting implementation steps

instead of only the DC component of the spectrum. Such a modification is known as the Generalized CF (GCF), the weights of which are obtained from the following equation (Hu et al. 2022):

$$GCF(k) = \frac{\sum_{l=-M_0}^{M_0} |p(l, k)|^2}{\sum_{l=-N/2}^{N/2-1} |p(l, k)|^2}. \qquad (3.21)$$

In the above equation, $M_0$ denotes the cut-off frequency. It is clear that for $M_0 = 0$, the GCF technique reduces to the CF method. The processing steps of the GCF weighting method are depicted in Fig. 3.13. To see the performance of the CF and GCF methods qualitatively, consider Fig. 3.14. It can be seen from the figure that the weights obtained from the CF method considerably suppress the sidelobes and improve the contrast (shown as DAS+CF in Fig. 3.14b). The GCF method also improves the image contrast (shown as DAS+GCF in Fig. 3.14c). However, comparing DAS+CF and DAS+GCF, it can be seen that the CF method outperforms GCF in point target phantoms, as mentioned earlier. Figure 3.15 demonstrates the lateral variations plot of the reconstructed images corresponding to Fig. 3.14 for better visualizing the performance of the algorithms. It can be seen that the CF method improves both the resolution and the contrast of the non-adaptive DAS beamformer. To show the superiority of GCF over CF in cases in which the data is speckle-generating, the simulation is performed on the cyst phantom, and the results are presented in Fig. 3.16. It can be clearly seen that the reconstructed image obtained from the CF weighting method is not desirable; the intensity of the background speckle is degraded due to the underestimation phenomenon explained earlier. The GCF method, the resulting image of which is shown in Fig. 3.16c, outperforms the CF method, and the speckle statistics are retained more successfully. In


Fig. 3.14 The reconstructed images of the point target obtained from a the non-adaptive DAS beamformer, b DAS with the CF weighting method, and c DAS with the GCF weighting method ($M_0 = 1$). The images are shown with the dynamic range of 60 dB

Fig. 3.15 The lateral variations of the simulated point target shown in Fig. 3.14

Fig. 3.16 The reconstructed images of the simulated cyst phantom obtained from a the non-adaptive DAS beamformer, b DAS+CF, and c DAS+GCF ($M_0 = 1$). The images are shown with the dynamic range of 50 dB. The dashed circles demonstrate the considered inside and outside regions of the cyst to calculate the CNR evaluation metric

Fig. 3.16, the GCF method is applied with $M_0 = 1$. By changing the cut-off frequency, the quality of the reconstructed image is affected. To illustrate this, the image of the simulated cyst phantom is reconstructed for three different values of $M_0$, and the result is shown in Fig. 3.17. It can be concluded that as $M_0$ increases, the reconstructed image tends toward the DAS result shown in Fig. 3.16a. Also, Fig. 3.18 shows the lateral variations plot corresponding to the simulated cyst phantom for the line that passes through the middle of the cystic region. It can be seen that the CF weighting method considerably


Fig. 3.17 The reconstructed images of the simulated cyst phantom obtained from the DAS+GCF method using a $M_0 = 1$, b $M_0 = 2$ and c $M_0 = 3$. The images are shown with the dynamic range of 50 dB. The dashed circles demonstrate the considered inside and outside regions of the cyst to calculate the CNR evaluation metric

Fig. 3.18 The lateral variations plot corresponding to Figs. 3.16 and 3.17

decreases the intensity inside the cyst; this is evident from the suppression of the amplitude of the graph in the region where the cyst is located. However, the area outside the cystic region is not constructed well enough. The convergence of the GCF method toward the non-adaptive DAS beamformer as $M_0$ increases can also be seen from the lateral variations plot. To quantitatively evaluate the performance of the GCF algorithm, the CNR parameter is calculated for the reconstructed images, and the corresponding graph is shown in Fig. 3.19. Considering the qualitative and quantitative results together, one can conclude that GCF outperforms CF in terms of contrast improvement in speckle-generating imaging mediums.
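In the spatial-frequency form of (3.21), the GCF weight only requires an FFT across the aperture. A minimal sketch (function name is ours; the FFT shift places the index l = 0 at the array center) could be:

```python
import numpy as np

def generalized_coherence_factor(x, M0):
    """GCF of (3.21). x: delayed aperture samples (N,); M0: cut-off frequency."""
    N = len(x)
    p = np.fft.fftshift(np.fft.fft(x))       # spectrum indices -N/2 .. N/2-1
    c = N // 2                                # position of l = 0 after the shift
    low = np.sum(np.abs(p[c - M0:c + M0 + 1])**2)   # low-frequency energy
    total = np.sum(np.abs(p)**2)                    # total spectral energy
    return low / total if total > 0 else 0.0
```

Setting M0 = 0 recovers the CF of (3.20), in line with the discussion above.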

3.5.1.3 Scaled Coherence Factor

In order to improve the performance of the CF method in terms of speckle preservation in low-SNR cases, one can use the modified SNR-dependent weighting method known as the scaled CF (ScCF) algorithm (Wang and Li 2014). This weighting method is expressed as below:

$$ScCF(k) = \frac{CF(k)}{CF(k) + \eta(k)\left[1 - CF(k)\right]}, \qquad (3.22)$$


Fig. 3.19 The calculated CNR value of the reconstructed cyst phantom corresponding to Figs. 3.16 and 3.17

where $\eta(k)$ is an estimate of the SNR, obtained by using the sigmoid function as below:

$$\eta(k) = \frac{N-1}{2N}\left\{ 1 - \tanh\left[ \alpha\left( \frac{E_S(k)}{E_N(k)} - \beta \right) \right] \right\} + \frac{1}{N}. \qquad (3.23)$$

As the hyperbolic tangent function ($\tanh(\cdot)$) ranges from $-1$ to $1$, it can be concluded that $1/N \le \eta(k) \le 1$. In the above equation, $E_S$ and $E_N$ denote the signal power and noise power, respectively, which are obtained using the Fourier spectrum of the received signals (similar to GCF). Also, $\alpha$ and $\beta$ are constant parameters that determine the contrast variations of the image. In particular, for a small value of $\alpha$, contrast enhancement will be limited, whereas some artifacts will appear in the reconstructed image for a high value of this parameter. The obtained weighting coefficient is finally multiplied by the output of the DAS beamformer as below:

$$y_{DAS+ScCF}(k) = ScCF(k) \times y_{DAS}(k), \qquad (3.24)$$

and an improved image is obtained accordingly. The limitation of the ScCF algorithm is that the constant parameter $\beta$ is not capable of separating the desired component of the signal from the noise in low-SNR cases (i.e., in the presence of a high level of noise). Also, $E_S$ and $E_N$ are obtained according to the cut-off frequency $M_0$, where the low-frequency components correspond to $E_S$ and the remaining high-frequency components correspond to $E_N$. Considering $M_0$ to be a constant value for different imaging points leads to performance degradation of the ScCF algorithm. In order to overcome the limitations mentioned above, the adaptive ScCF (AScCF) algorithm was proposed in Lan et al. (2022), in which the parameters $M_0$ and $\beta$ are determined adaptively. In this algorithm, a modified GCF method is


used in which the cut-off frequency varies linearly with the imaging depth $z$. More precisely, we have:

$$GCF'(k) = \frac{\sum_{l=-M_0(k)}^{M_0(k)} |p(l, k)|^2}{\sum_{l=-N/2}^{N/2-1} |p(l, k)|^2}, \qquad (3.25)$$

where $M_0(k) = \mathrm{round}\left[ \frac{z}{z_{max}} M_{max} \right]$, in which $z_{max}$ and $M_{max}$ are the maximum imaging depth and the maximum considered value of the cut-off frequency, respectively. The weighting coefficient obtained from the above equation differs from the one obtained from the conventional GCF due to the variations of $M_0$. By using (3.25), the adaptive value of the parameter $\beta$ is formulated as below:

$$\beta'(k) = \left[ 1 - GCF'(k) \right] \times M_{max}. \qquad (3.26)$$

By using the above equation, a large value of $\beta'(k)$ is obtained for the regions corresponding to noise. Also, for speckle regions, the obtained $\beta'(k)$ value is moderately small. As can be seen from (3.26), the maximum value of $\beta'(k)$ depends on the parameter $M_{max}$. Therefore, it is necessary to re-estimate the signal power (by adjusting an appropriate value for the cut-off frequency) in order to prevent dark artifacts. To achieve this, the modified adaptive cut-off frequency, denoted as $M_0'(k)$, is calculated as below:

$$M_0'(k) = \mathrm{round}\left[ GCF'(k) \times M_{max} \right] \qquad (3.27)$$

for each imaging point $k$. In particular, by analyzing the above equation, one can conclude that $M_0'(k) = 0$ for noisy regions. Therefore, the ratio $E_S/E_N$ will be a small value, and consequently, a large value for $\eta$ (which is an estimate of the SNR) will be achieved. Also, for speckle regions, the obtained $M_0'(k)$ will be a moderate value. By using (3.27), the modified value of the adaptive parameter $\eta(k)$, denoted as $\eta'(k)$, is defined as below:

$$\eta'(k) = \frac{1}{2}\left\{ 1 - \tanh\left[ \alpha\left( \frac{E_S'(k)}{E_N'(k)} - \beta'(k) \right) \right] \right\}. \qquad (3.28)$$

Note that in the above equation, $E_S'(k)$ and $E_N'(k)$ are the signal power and noise power obtained using the modified adaptive value of $M_0'(k)$. By substituting (3.28) into (3.22), the AScCF weighting coefficient is obtained as below:

$$AScCF(k) = \frac{CF(k)}{CF(k) + \eta'(k)\left[1 - CF(k)\right]}. \qquad (3.29)$$


Finally, by multiplying the obtained output by the DAS beamformed data, an improved-quality reconstructed image is achieved:

$$y_{DAS+AScCF}(k) = AScCF(k) \times y_{DAS}(k). \qquad (3.30)$$

3.5.1.4 Phase Coherence Factor

Another way to reduce the sidelobe level, as well as the main lobe width of the reconstructed image, is to apply the phase coherence factor (PCF) weighting method (Camacho et al. 2009). This method can be applied to the DAS beamformed data and considerably improves the quality of the resulting image without a significant computational burden. The instantaneous phase of the time-delayed received signal of the $i$th element coming from a specific point source is expressed as below:

$$\phi_i(k) \approx \omega k T_S + \frac{2\pi p_{x_i}}{\lambda}\left( \sin\theta_i - \sin\theta_0 \right), \qquad (3.31)$$

where $\omega$ and $T_S$ denote the angular frequency and the sampling period, respectively. Also, $p_{x_i}$ is the $x$ position of the $i$th element. It is assumed that the considered point source is located at the angle $\theta_0$. Assume that the linear array is located on the y-axis and is symmetric with respect to the origin. Also, assume that the element spacing is $\lambda/2$ to prevent grating lobes. Therefore, we have:

$$p_{x_i} = \left( i - \frac{N+1}{2} \right) \frac{\lambda}{2}, \quad 1 \le i \le N. \qquad (3.32)$$

According to (3.31) and (3.32), the instantaneous phase equation is rewritten as below:

$$\phi_i(k) \approx \omega k T_S + \pi \left( i - \frac{N+1}{2} \right) \left( \sin\theta_i - \sin\theta_0 \right). \qquad (3.33)$$

Analyzing the above equation, it can be seen that the first term on the right-hand side of the instantaneous phase equation is constant. However, the second term of (3.33) changes according to the element position and the angle $\theta$; for each element, at the focus point, i.e., $\theta = \theta_0$, the second term is zero and the instantaneous phase is a constant value along the aperture. If $\theta \ne \theta_0$, the second term, and consequently the instantaneous phase, changes according to the element position, or equivalently, its index $i$. It can be concluded that the instantaneous phase tells us whether the signal comes from the focus point or not. The standard deviation of the instantaneous phase at time index $k$, denoted as $\sigma(\phi(k))$, is an appropriate metric to evaluate this. The calculated standard deviation is interpreted as an estimate of the phase diversity. One problem may arise during the standard deviation calculation: in cases in which the instantaneous phase


crosses from $\pi$ to $-\pi$, a jump or discontinuity occurs. This leads to an increase in the value of $\sigma(\phi(k))$ even though the phases of the aperture data are the same around the discontinuity. This causes the calculated standard deviation to deviate from its correct value. To overcome this problem, another set of phases (known as the auxiliary phase set) is considered as below:

$$\phi_i^A(k) = \begin{cases} \phi_i(k) + \pi & \text{for } \phi_i(k) < 0 \\ \phi_i(k) - \pi & \text{for } \phi_i(k) > 0. \end{cases} \qquad (3.34)$$

The standard deviation of the resulting phase set is also calculated and denoted as $\sigma(\phi^A(k))$. The estimate of the phase diversity is then obtained according to the following equation:

$$pd(k) = \min\left[ \sigma(\phi(k)), \; \sigma(\phi^A(k)) \right]. \qquad (3.35)$$

According to the explanations stated above, the PCF is defined as below:

$$PCF(k) = \max\left[ 0, \; 1 - \frac{\gamma}{\sigma_0} \, pd(k) \right], \qquad (3.36)$$

where $\sigma_0 = \pi/\sqrt{3}$ denotes the nominal standard deviation, and $\gamma$ is a constant parameter that determines the sensitivity of the algorithm to the out-of-focus signals. The "max" function is used to make sure that the resulting weighting factor is in the range $[0, 1]$. According to the above equation, it can be seen that the PCF value equals 1 for $pd(k) = 0$. This occurs when the received signals are coming from the focus point. For out-of-focus data, the corresponding standard deviation increases, and consequently, the PCF value approaches zero. The value of the parameter $\gamma$ determines the suppression level of the out-of-focus data; the higher the value of $\gamma$, the faster the PCF value decreases to zero. The obtained PCF is multiplied by the DAS beamformed data as a weighting factor as below:

$$y_{DAS+PCF}(k) = PCF(k) \times y_{DAS}(k), \qquad (3.37)$$

which significantly improves the contrast of the resulting image. One should note that the instantaneous phase can also be obtained from the aperture data; the signal can be analytically represented as below:

$$x_i(k) = X_{I_i}(k) + j X_{Q_i}(k), \qquad (3.38)$$

where $X_{I_i}(k)$ and $X_{Q_i}(k)$ denote the in-phase and quadrature components of the received signal corresponding to the $i$th element, respectively. The instantaneous phase is obtained as below:

$$\phi_i(k) = \tan^{-1}\left( \frac{X_{Q_i}(k)}{X_{I_i}(k)} \right). \qquad (3.39)$$


Note that the quadrature component of the signal can be obtained by a Hilbert transformation of the real RF data.
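A compact sketch of the PCF weight for one pixel, following (3.34)-(3.36), is shown below (function name is ours; the input is assumed to be the analytic aperture data obtained beforehand by a Hilbert transform of each channel's RF trace, cf. (3.38)-(3.39)):

```python
import numpy as np

def phase_coherence_factor(x, gamma=1.0):
    """x: complex (analytic) delayed aperture samples at one pixel, shape (N,)."""
    phi = np.angle(x)                                    # instantaneous phases
    phi_a = np.where(phi < 0, phi + np.pi, phi - np.pi)  # auxiliary set, (3.34)
    pd = min(np.std(phi), np.std(phi_a))                 # phase diversity, (3.35)
    sigma0 = np.pi / np.sqrt(3)                          # nominal standard deviation
    return max(0.0, 1.0 - gamma * pd / sigma0)           # PCF, (3.36)
```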

3.5.1.5 Sign Coherence Factor

The sign coherence factor (SCF) can be interpreted as a special case of the PCF, obtained by representing the signal phase with a sign bit. From the phase variations, one can determine whether the signal originated from the focal point or not, as mentioned earlier. Also, one should note that the phase term depends on the signal polarity. This makes it possible to represent the signal phase using a sign bit. The phase interval is divided into two parts: $(-\pi/2, \pi/2]$ and $[-\pi, -\pi/2] \cup (\pi/2, \pi]$. If the phases of the received signals fall in only one of the defined intervals, one can conclude that the signals are fully coherent. In this regard, the sign bit for the received signal associated with the $i$th element is defined as below:

$$b_i(k) = \begin{cases} -1 & \text{for } x_i(k) < 0 \\ +1 & \text{for } x_i(k) \ge 0. \end{cases} \qquad (3.40)$$

Using the defined sign bit, the phase term of the received signals is represented and their coherency is determined accordingly. The variance of the sign bits is calculated as below:

$$\sigma^2(k) = \frac{M \sum_{i=1}^{M} b_i^2(k) - \left( \sum_{i=1}^{M} b_i(k) \right)^2}{M^2}. \qquad (3.41)$$

It is well known that $\sum_i b_i^2(k) = M$. Therefore, (3.41) is simplified as below:

$$\sigma^2(k) = 1 - \left( \frac{1}{M} \sum_{i=1}^{M} b_i(k) \right)^2. \qquad (3.42)$$

One should note that $0 \le \sigma^2(k) \le 1$. Therefore, the SCF is obtained by modifying the PCF presented in (3.36) such that the "max" operation is omitted and $\sigma_0 = \gamma = 1$ is substituted. Consequently, the SCF is obtained according to the following equation:

$$SCF(k) = 1 - \sigma = 1 - \sqrt{1 - \left[ \frac{1}{M} \sum_{i=1}^{M} b_i(k) \right]^2}. \qquad (3.43)$$

In the case in which the signals are fully coherent, $\sigma(k)$ tends to zero, and therefore, $SCF(k) = 1$, indicating that the SCF reaches its maximum value when the signs of the received signals are similar. In contrast, in the case in which the received


signals are not coherent, i.e., the sign bits are not the same, $\sigma(k)$ tends to one. Therefore, $SCF \approx 0$, and the out-of-focus signals are suppressed accordingly. The generalized formulation of the SCF is expressed as below:

$$SCF^p(k) = \left| 1 - \sigma \right|^p = \left| 1 - \sqrt{1 - \left[ \frac{1}{M} \sum_{i=1}^{M} b_i(k) \right]^2} \; \right|^p, \qquad (3.44)$$

where the parameter $p$ is included to adjust the sensitivity. As this parameter increases, the effect of the obtained SCF is strengthened, which results in more sidelobe suppression and main lobe width reduction. Similar to the PCF, the SCF is also used as a weighting factor and multiplied by the DAS beamformed data as below:

$$y_{DAS+SCF}(k) = SCF(k) \times y_{DAS}(k), \qquad (3.45)$$

which results in image quality improvement compared to the standard DAS beamformer.
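Because the SCF only requires the signs of the samples, its implementation is very cheap. A minimal sketch of (3.40)-(3.44) for one pixel (function name is ours; the input is assumed to be real-valued delayed RF data) could be:

```python
import numpy as np

def sign_coherence_factor(x, p=1.0):
    """x: real-valued delayed RF samples at one pixel, shape (M,)."""
    b = np.where(x < 0, -1.0, 1.0)            # sign bits, (3.40)
    sigma = np.sqrt(1.0 - np.mean(b)**2)      # std of the sign bits, (3.42)
    return np.abs(1.0 - sigma)**p             # generalized SCF, (3.44)
```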

3.5.1.6 Dynamic Phase Coherence Factor

It has been shown that as the value of the parameter $\gamma$ in the PCF algorithm increases, better noise suppression is achieved and the image contrast is improved. However, the performance of the algorithm is degraded in terms of speckle preservation. Indeed, the speckle pattern is over-suppressed for large values of $\gamma$. In contrast, the speckle is better preserved for small values of this parameter, at the expense of contrast degradation. It is therefore desirable for the parameter $\gamma$ to change adaptively, such that its value increases for noise and clutter regions, while a smaller value is assigned to it for main lobe and speckle regions. To this end, a dynamic PCF (DPCF) technique was proposed in Wang et al. (2022) to calculate the parameter $\gamma$ automatically. In this regard, the ratio of the noise energy $E_N$ to the total energy $E_T$ is used according to the following equation:

$$r(k) = \frac{E_N}{E_T} = \frac{\sum_{i=-M'/2}^{M'/2-1} |p(i, k)|^2 - \sum_{i=-M_0}^{M_0} |p(i, k)|^2}{\sum_{i=-M'/2}^{M'/2-1} |p(i, k)|^2}, \qquad (3.46)$$

where $p(i, k)$ is the Fourier spectrum of the signal (as presented in Sect. 3.5.1.2), and $M'$ denotes the length of the frequency spectrum. In order to account for the depth of the imaging point ($z$) in calculating $\gamma$, a depth-dependent parameter $\rho(z)$ is also defined as below:


$$\rho(z) = \beta \times \tanh\left( 3z/z_{max} \right), \qquad (3.47)$$

where $\beta$ denotes the maximum constant value for the parameter $\gamma$. From the above equation, it can be seen that $0 \le \rho(z) \le \beta$. Also, it can be seen that the value of the parameter $\rho(z)$ increases with the imaging depth. Based on (3.46) and (3.47), the adaptive value of $\gamma(k, z)$ is defined as below:

$$\gamma(k, z) = \rho(z) \times |r(k)|^3. \qquad (3.48)$$

In particular, for received signals corresponding to hyperechoic targets, in which the desired components of the signals are dominant, the calculated $r(k)$ will be a small value according to (3.46). In contrast, for received signals corresponding to anechoic targets, the value of the calculated $r(k)$ will be increased, since the noise and clutter components are dominant. Also, by appropriate selection of the cut-off frequency $M_0$ in (3.46), the desired signals corresponding to the speckle can be preserved. Therefore, it can be concluded that by using (3.48), the obtained value of $\gamma(k, z)$ for hyperechoic targets as well as the speckle will be small, while a greater value will be obtained for anechoic targets. In order to make the DPCF method more robust, it was proposed in Wang et al. (2022) to use the subarray averaging technique; the subarray $x^{(l)}(k) \in \mathbb{C}^{L \times 1}$ is considered, and its corresponding phase standard deviation, as well as the auxiliary phase standard deviation, are calculated as $\sigma(\phi^{(l)}(k))$ and $\sigma(\phi^{A(l)}(k))$, respectively. Then, the subarray DPCF is expressed as below:

$$w^{(l)}(k, z) = \max\left[ 0, \; 1 - \frac{\gamma(k, z) \times \min\left[ \sigma(\phi^{(l)}(k)), \; \sigma(\phi^{A(l)}(k)) \right]}{\pi/\sqrt{3}} \right]. \qquad (3.49)$$

Finally, by averaging over the subarray DPCF weights, we have:

$$w_{DPCF}(k, z) = \frac{1}{N - L + 1} \sum_{l=1}^{N-L+1} w^{(l)}(k, z). \qquad (3.50)$$

The obtained DPCF coefficient is combined with the normalized amplitude standard deviation of the reflected echoes ($\sigma_0$), and the result is used as the weighting factor applied to the DAS beamformed data as below:

$$y_{DPCF}(k, z) = (1 - \sigma_0)^2 \times w_{DPCF}(k, z) \times y_{DAS}(k, z). \qquad (3.51)$$

Consequently, an improved contrast image is achieved while the speckle pattern is also preserved.
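The adaptive part of DPCF, i.e., the computation of $\gamma(k, z)$ via (3.46)-(3.48), can be sketched as follows (function name and the default beta are our assumptions, not values from the text):

```python
import numpy as np

def dpcf_gamma(x, z, z_max, M0, beta=45.0):
    """Hypothetical sketch of the adaptive gamma in DPCF, (3.46)-(3.48).
    x: complex delayed aperture samples (N,); z: pixel depth; M0: cut-off."""
    p = np.fft.fftshift(np.fft.fft(x))
    c = len(x) // 2
    E_T = np.sum(np.abs(p)**2)                     # total spectral energy
    E_S = np.sum(np.abs(p[c - M0:c + M0 + 1])**2)  # low-frequency (signal) energy
    r = (E_T - E_S) / E_T                          # noise-to-total ratio, (3.46)
    rho = beta * np.tanh(3.0 * z / z_max)          # depth weighting, (3.47)
    return rho * np.abs(r)**3                      # adaptive gamma, (3.48)
```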


3.5.1.7 Short-Lag Spatial Coherence

The short-lag spatial coherence (SLSC) technique is a noise reduction algorithm developed to reduce image clutter (Dahl et al. 2011). The SLSC method uses the values of short distances, or equivalently, short lags, of the spatial coherence of the backscattered echoes. The spatial covariance function at time instance $n$, defined as a function of the lag $m$ between the RF signals, is expressed as below:

$$\hat{C}(m) = \frac{1}{N - m} \sum_{i=1}^{N-m} \sum_{n=n_1}^{n_2} x_i(n) \, x_{i+m}(n). \qquad (3.52)$$

In the above equation, $[n_1, n_2]$ denotes a small kernel considered for averaging the covariance. Usually, the size of this kernel is set to one wavelength. The spatial covariance function is normalized using the variances of the signals $x_i(n)$ and $x_{i+m}(n)$, which results in the spatial correlation at lag $m$ as below:

$$\hat{R}(m) = \frac{1}{N - m} \sum_{i=1}^{N-m} \frac{\sum_{n=n_1}^{n_2} x_i(n) \, x_{i+m}(n)}{\sqrt{\left( \sum_{n=n_1}^{n_2} x_i^2(n) \right) \left( \sum_{n=n_1}^{n_2} x_{i+m}^2(n) \right)}}. \qquad (3.53)$$

The obtained spatial correlation is used to calculate the SLSC metric. The main differences in coherence between different targets mostly occur in the short-lag region. Therefore, the SLSC value at time instance $n$ is achieved by integrating the spatial coherence over the first $M$ lags as below:

$$R_{SLSC}(n) = \sum_{m=1}^{M} \hat{R}(m). \qquad (3.54)$$

The parameter $M$ is usually set to a value between 1 and 30% of the transmit aperture. Using the SLSC algorithm as in (3.54), the final image is reconstructed directly, rather than by weighting the original image (as in CF and SCF).
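A minimal sketch of (3.52)-(3.54) for one pixel is given below (function name and interface are ours; X is assumed to hold real-valued delayed channel data, and [n1, n2) is the averaging kernel around the pixel's time index):

```python
import numpy as np

def slsc_pixel(X, M, n1, n2):
    """X: delayed channel data, shape (N, T) (channels x time samples)."""
    N = X.shape[0]
    R_slsc = 0.0
    for m in range(1, M + 1):                 # short lags only
        a = X[:N - m, n1:n2]                  # channels i
        b = X[m:, n1:n2]                      # channels i + m
        num = np.sum(a * b, axis=1)
        den = np.sqrt(np.sum(a**2, axis=1) * np.sum(b**2, axis=1))
        R_slsc += np.mean(num / np.maximum(den, 1e-12))   # R_hat(m), (3.53)
    return R_slsc                                          # (3.54)
```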

3.5.1.8 Wiener Filter

The performance of the CF weighting method is degraded in speckle-generating mediums, as shown in Fig. 3.16b; the region outside the cyst is overcompensated for sidelobe suppression, and therefore, the contrast of the resulting image is negatively affected. To overcome this limitation, the GCF was proposed and discussed in the previous section. Another solution to deal with such situations and increase the robustness of the algorithm is to apply the Wiener post-filter (Nilsen and Holm 2010). The coefficients of the Wiener post-filter are obtained by minimizing the


mean square error (MSE) of the output. To formulate this, note that the received signal includes the desired signal $s$ as well as the interference and noise:

$$x(k) = s(k) + n(k) + i(k),$$

where $n(k)$ and $i(k)$ denote the noise and interference components of the received signal, respectively, at time instance $k$. The problem based on MSE minimization is written as below:

$$H_{wiener}(k) = \underset{H}{\mathrm{argmin}} \; E\left[ \left| s(k) - H \, w(k)^H x(k) \right|^2 \right]. \qquad (3.55)$$

In the above problem, $w(k)$ is a weight vector obtained from any beamforming algorithm satisfying $w(k)^H a = 1$, such as the DAS or MV algorithms. In particular, if the weight vector of the DAS beamformer is considered, then we have $w(k) = \frac{1}{N} \times \vec{1}$ in (3.55). Solving the above problem, the Wiener post-filter is obtained as below:

$$H_{wiener}(k) = \frac{|s(k)|^2}{w(k)^H R(k) w(k)}, \qquad (3.56)$$

where $|s(k)|^2$ denotes the desired signal power. Assuming that the signal and interfering components are uncorrelated, the denominator of the above equation can be separated into the desired signal power and the interference plus noise power. Therefore, (3.56) is rewritten as below:

$$H_{wiener}(k) = \frac{|s(k)|^2}{|s(k)|^2 + w^H(k) R_{i+n}(k) w(k)}. \qquad (3.57)$$

In practice, neither the desired signal nor the interference plus noise components are explicitly known. In other words, the accurate values of $|s(k)|^2$ and $R_{i+n}$ are not available. Therefore, they should be estimated. The desired signal power is approximated using the beamformed data obtained from either the DAS or MV algorithms:

$$|\hat{s}(k)|^2 = |y(k)|^2. \qquad (3.58)$$

In the above equation, $y(k)$ can be considered as the output of the DAS algorithm ($y_{DAS}(k)$) or the adaptive MV technique ($y_{MV}(k)$). To estimate the interference plus noise power, the covariance matrix of the interference plus noise should be estimated, the result of which is denoted as $\hat{R}_{i+n}$. The approximation is given as follows:


$$\hat{R}_{i+n}(k) = \frac{1}{(2K+1)(N-L+1)} \sum_{n=-K}^{K} \sum_{l=1}^{N-L+1} \left[ x^{(l)}(k-n) - y(k) \right] \left[ x^{(l)}(k-n) - y(k) \right]^H, \qquad (3.59)$$

where $x^{(l)}(k)$ denotes the $l$th subarray according to (3.12). Note that the signal and interfering components are inherently correlated in medical US imaging, as mentioned before. To synthetically decorrelate the signal and interference, subarray division is performed, and consequently, the covariance matrix is estimated. This process is discussed in detail in Sect. 3.3.1.1. Also, note that by subtracting the approximated desired signal from the total received signal, i.e., $x(k) - y(k)$, the interference plus noise component is obtained. Again, $y(k)$ can be considered as the DAS or MV beamformed data. To fairly compare the Wiener post-filter and the CF weighting method, $K = 0$ is considered in (3.59), since only one snapshot is used in the CF method. Using the approximated components shown in (3.58) and (3.59), the Wiener post-filter is obtained as follows:

$$\hat{H}_{wiener}(k) = \frac{|\hat{s}(k)|^2}{|\hat{s}(k)|^2 + w^H(k) \hat{R}_{i+n}(k) w(k)}. \qquad (3.60)$$

Finally, the obtained $H_{wiener}$ is multiplied by the output of the considered beamformer to obtain an image with improved contrast. In particular, applying the Wiener post-filter to the DAS beamformed data, we have:

$$y_{DAS+Wiener}(k) = H_{wiener}(k) \times y_{DAS}(k). \qquad (3.61)$$
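To illustrate (3.58)-(3.61), the following minimal NumPy sketch (interface and names are ours; K = 0 as in the comparison above) applies the Wiener post-filter to the DAS output of one pixel:

```python
import numpy as np

def wiener_das_pixel(x, L):
    """Hypothetical sketch of (3.58)-(3.61) with K = 0.
    x: time-delayed aperture data at one pixel, shape (N,), complex;
    L: subarray length used for the covariance estimate."""
    N = len(x)
    y = np.mean(x)                          # DAS output; |y|^2 estimates |s|^2, (3.58)
    w = np.ones(L) / L                      # DAS-type weights with w^H a = 1
    R = np.zeros((L, L), dtype=complex)     # interference-plus-noise covariance, (3.59)
    for l in range(N - L + 1):
        d = x[l:l + L] - y
        R += np.outer(d, d.conj())
    R /= (N - L + 1)
    s2 = np.abs(y)**2
    H = s2 / (s2 + np.real(w.conj() @ R @ w))   # post-filter coefficient, (3.60)
    return H * y                                 # weighted output, (3.61)
```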

Figure 3.20 shows the reconstructed images of the simulated point target obtained using the CF weighting method and the Wiener post-filter. Comparing Fig. 3.20a, c, it can be seen that by applying the Wiener post-filter to the DAS beamformed data, the image contrast is improved and the noise around the reconstructed point target is suppressed. However, it is clear that the CF weighting method suppresses the noise level more successfully than the Wiener post-filter; this can be seen qualitatively by comparing the DAS+CF and DAS+Wiener results shown in Fig. 3.20b, c. The lateral variations plots corresponding to the resulting images are also depicted in Fig. 3.21 to better compare the performance of the CF method and the Wiener post-filter in terms of sidelobe suppression and contrast improvement. Although the performance of the CF weighting method seems better than that of the Wiener post-filter, one should note that the Wiener post-filter is more robust in low-SNR situations than CF. To better understand this issue, let us write the CF as a special case of the Wiener post-filter; CF is defined as the ratio of the coherent summation (CS) and the incoherent summation (ICS) of the signals received by the array elements, as shown in (3.19). The general formulation of this method is considered as below:


Fig. 3.20 The reconstructed images of the simulated point target obtained from a DAS, b DAS+CF, and c DAS+Wiener methods. The images are shown with the dynamic range of 60 dB

Fig. 3.21 The lateral variations plot of the reconstructed images shown in Fig. 3.20

$$\begin{aligned} CF(k) &= \frac{CS(k)}{ICS(k)} = \frac{CS(k)}{CS(k) + \left[ICS(k) - CS(k)\right]} \\ &= \frac{\left|\sum_{i=1}^{N} x_i(k)\right|^2}{\left|\sum_{i=1}^{N} x_i(k)\right|^2 + N \sum_{i=1}^{N} \left|x_i(k) - \bar{x}(k)\right|^2} \\ &= \frac{\left|\sum_{i=1}^{N} x_i(k)\right|^2}{\left|\sum_{i=1}^{N} x_i(k)\right|^2 + N w^H(k) R_{i+n}(k) w(k)}, \end{aligned} \qquad (3.62)$$

where $\bar{x}(k) = \frac{1}{N}\sum_{i} x_i(k)$ and $w(k) = \frac{1}{N} \times \vec{1}$. Assuming spatially white noise on the array elements, we have $R_{i+n} = \hat{\sigma}^2 I$, where $I \in \mathbb{C}^{N \times N}$ is the identity matrix. The desired signal power is estimated with $\left| \sum_{i=1}^{N} x_i(k) \right|^2$, as can be seen from the


Fig. 3.22 The reconstructed images of the simulated cyst phantom obtained from a DAS, b DAS+CF, and c DAS+Wiener methods. The images are shown with the dynamic range of 50 dB. The dashed circles demonstrate the considered inside and outside regions of the cyst to calculate the sSNR evaluation metric

Fig. 3.23 The lateral variations plot of the reconstructed images shown in Fig. 3.22

numerator of (3.62). From the new formulation shown in (3.62), CF can be interpreted as the Wiener post-filter in which spatially white noise is assumed. In this weighting method, the noise power is overestimated by assigning the coefficient $N$ to it. However, one should note that the spatially white noise assumption may not always be met. Also, in low-SNR cases, the performance of the CF method is degraded due to the overestimation of the noise. In contrast, the Wiener post-filter is robust compared to the CF method and has a better performance in such cases. To evaluate the performances of the CF method and the Wiener post-filter, consider Fig. 3.22. From the reconstructed images shown in this figure, one can conclude the superiority of the Wiener post-filter over the CF method. Note that the CF weighting method suppresses the intensity inside the cystic region more successfully than the Wiener post-filter, as shown in Fig. 3.23. However, it overcompensates for the sidelobe suppression outside the cystic region and does not successfully retrieve the speckle statistics of the imaging medium. To quantitatively evaluate the performance of the mentioned algorithms, the sSNR evaluation metric is used; the sSNR values of the DAS, DAS+CF, and DAS+Wiener algorithms are obtained as 1.84, 1.23, and 1.52, respectively. This indicates that the SNR of the Wiener post-filter is improved compared to the CF algorithm.



Based on the discussion so far, the following key point summarizes the comparison between the Wiener post-filter and the CF weighting method:



> Wiener post-filter versus CF

The Wiener post-filter achieves less contrast improvement than the CF weighting method. However, it is more robust in low-SNR cases. Generally, comparing the Wiener post-filter and the CF method, a trade-off between robustness and image contrast is observed.

3.5.1.9 Delay Multiply and Sum Algorithm

The delay multiply and sum (DMAS) beamformer is a non-linear method that aims to improve the image contrast (Matrone et al. 2014). In the non-adaptive DAS beamformer, the time-delayed signals are summed together to form the image, as shown in (2.14). In contrast, in the non-linear DMAS algorithm, as its name implies, the delayed signals are first mutually coupled and multiplied prior to the summation process. This mutual multiplication can be interpreted as the cross-correlation between the signals received by the array elements. The multiplication is performed between all pairs of signals, the total number of which is:

$$\binom{N}{2} = \frac{N(N-1)}{2}. \tag{3.63}$$

Note that auto-multiplication is not considered in the mutual multiplication of the delayed signals. The DMAS beamformed data is formulated as below:

$$y_{DMAS}(k) = \sum_{i=1}^{N-1}\sum_{j=i+1}^{N} x_i(k)\,x_j(k). \tag{3.64}$$

From the above equation, it can be seen that $i \neq j$, which implies that auto-multiplication is excluded, as mentioned earlier. By multiplying the signals mutually, the dimension of the received signal is squared; volt turns into volt². To compensate for this, the sign and the square root of the delayed signals are used to perform the mutual multiplication. As a result, the dimension of the output is not changed, and the sign of the signals is preserved. More precisely, for the delayed signal corresponding to the $i$th element, we have:

$$\hat{x}_i(k) = \mathrm{sign}\left(x_i(k)\right)\sqrt{\left|x_i(k)\right|}. \tag{3.65}$$



Fig. 3.24 The schematic of DMAS implementation steps

The DMAS output presented in (3.64) is, therefore, rewritten as below:

$$y_{DMAS}(k) = \sum_{i=1}^{N-1}\sum_{j=i+1}^{N} \hat{x}_i(k)\,\hat{x}_j(k). \tag{3.66}$$

The processing steps of the DMAS algorithm are schematically shown in Fig. 3.24. Mutual multiplication is interpreted as the cross-correlation between the received signals, as mentioned before. The cross-correlation evaluates the coherence of the signals: if the input signals are highly coherent, the output is amplified; conversely, if the inputs have low coherence, the output is weakened. Accordingly, the contribution of the unwanted signals, or equivalently, the sidelobe level, is suppressed, and the contrast of the resulting image is expected to be improved using the DMAS algorithm. It is worth noting that the summation in the DMAS algorithm is performed over $\frac{N(N-1)}{2}$ samples, compared to the $N$ samples of the non-adaptive DAS method; this, in turn, leads to SNR improvement. Also, note that the output spectrum of the DMAS algorithm contains two frequency components; this is due to the multiplication of signals with the same frequency content. If the central frequency of the received signal is $f_0$, the DMAS beamformed data includes the frequency components $f_0 - f_0 = 0$ (a DC component) and $f_0 + f_0 = 2f_0$. Once the processing steps of the DMAS algorithm are performed according to (3.66), the $2f_0$ frequency component is extracted using a bandpass filter, which removes the DC component from the beamformed data. To evaluate the performance of the DMAS beamformer, Fig. 3.25 presents the reconstructed images of the simulated point target; the corresponding lateral variations plot is depicted in Fig. 3.26. It can be seen that the DMAS beamformer suppresses the sidelobes more successfully than both the non-adaptive DAS method and the adaptive MV algorithm.
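A minimal Python sketch of the per-sample DMAS core (3.65)-(3.66) is given below; it is illustrative only, and the subsequent bandpass filtering around $2f_0$ is omitted:

```python
import numpy as np

def dmas_sample(x):
    """DMAS output for one time sample; x is the length-N vector of
    time-delayed RF channel samples."""
    xh = np.sign(x) * np.sqrt(np.abs(x))   # signed square root, (3.65)
    y = 0.0
    for i in range(len(x) - 1):            # all N(N-1)/2 element pairs, (3.66)
        for j in range(i + 1, len(x)):
            y += xh[i] * xh[j]
    return y
```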



Fig. 3.25 The reconstructed images of the simulated point target using a DAS, b MV and c DMAS algorithms. $K = 0$, $L = N/2$ and $\xi = \frac{1}{100L}$ are considered for the MV algorithm. The images are shown with the dynamic range of 60 dB

Fig. 3.26 The lateral variations plot corresponding to Fig. 3.25

To evaluate the performance of the DMAS algorithm on a speckle-generating medium, the simulation is performed on the cyst phantom, and the result is shown in Fig. 3.27. It can be seen that the DMAS algorithm suppresses the intensity inside the cyst more successfully than the DAS and MV methods. This can be seen more clearly from the lateral variations plot of the simulated cyst phantom shown in Fig. 3.28. The CNR evaluation metric is also calculated, and a value of 2.88 dB is obtained for the non-linear DMAS algorithm. The CNR of the DMAS algorithm is 2.26 dB lower than that of DAS; the reason is that the DMAS algorithm renders the background speckle pattern darker than DAS does. One can conclude that the main limitation of the non-linear DMAS algorithm is CNR degradation. Furthermore, comparing MV and DMAS, the CNR value of the MV algorithm is better by about 1.79 dB. This can also be concluded qualitatively; comparing Fig. 3.27b, c, it can be seen that the speckle statistics of the image background are better retrieved using the MV algorithm (due to the temporal averaging technique). Generally, it is concluded that the non-linear DMAS beamformer improves the contrast of the reconstructed images. The non-adaptive DMAS algorithm suffers from high computational complexity, which negatively affects the frame rate. To tackle this problem, one can take advantage of the efficient implementation technique proposed in Ramalli et al. (2017).



Fig. 3.27 The reconstructed images of the simulated cyst phantom using a DAS, b MV and c DMAS algorithms. $K = 11$, $L = N/2$ and $\xi = \frac{1}{100L}$ are considered for the MV algorithm. The images are shown with the dynamic range of 50 dB. The dashed circles demonstrate the considered inside and outside regions of the cyst used to calculate the CNR evaluation metric

Fig. 3.28 The lateral variations plot corresponding to Fig. 3.27

In this regard, the sign and the square root of the received signal corresponding to each element are first calculated separately according to (3.65). Then, the DMAS algorithm presented in (3.66) is reformulated as below:

$$y_{DMAS}(k) = \frac{1}{2}\left[\left(\sum_{i=1}^{N}\hat{x}_i(k)\right)^2 - \sum_{i=1}^{N}\left|x_i(k)\right|\right], \tag{3.67}$$

which reduces the computational complexity from $O(N^2)$ to $O(N)$ compared to (3.66).
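The identity behind (3.67) follows from $\hat{x}_i^2(k) = |x_i(k)|$ and can be verified numerically; the short sketch below (illustrative, random data) checks the pairwise form (3.66) against the $O(N)$ form (3.67):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(96)                       # one sample of channel data
xh = np.sign(x) * np.sqrt(np.abs(x))              # signed square root, (3.65)

naive = sum(xh[i] * xh[j]                         # pairwise DMAS, (3.66)
            for i in range(len(x) - 1) for j in range(i + 1, len(x)))
fast = 0.5 * (xh.sum() ** 2 - np.abs(x).sum())    # efficient form, (3.67)
assert np.isclose(naive, fast)
```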

3.5.1.10 Delay Weight Multiply and Sum Algorithm

To further improve the quality of the reconstructed image compared to DMAS in terms of both resolution and contrast, a new algorithm, known as delay weight multiply and sum (DwMAS), was proposed in Vayyeti and Thittai (2020). In this algorithm, a weighting window, for example the Hanning function, is applied to the mutually multiplied signals.



The window coefficients that multiply the signals are raised to the power of $r$. More precisely, we have:

$$\tilde{x}_{i,j}(k) = x_i(k)\,x_j(k)\left[w_i(k)\,w_j(k)\right]^r, \tag{3.68}$$

where $\tilde{x}_{i,j}(k)$ denotes the weighted mutually multiplied signal corresponding to the $i$th and $j$th elements, and $w_i(k)$ and $w_j(k)$ are the weighting coefficients of the $i$th and $j$th elements, respectively. Similar to the DMAS beamformer, the sign and the square root of the coupled signal are calculated according to the following equation in order to preserve its sign and dimensionality:

$$\hat{x}_{i,j}(k) = \mathrm{sign}\left[\tilde{x}_{i,j}(k)\right]\sqrt{\left|\tilde{x}_{i,j}(k)\right|}. \tag{3.69}$$

Finally, the DwMAS beamformed data is obtained as below:

$$y_{DwMAS}(k) = \sum_{i=1}^{N-1}\sum_{j=i+1}^{N}\hat{x}_{i,j}(k). \tag{3.70}$$

Note that the parameter $r$ should be appropriately adjusted in order to achieve a good-quality reconstructed image. The weighting window of the DwMAS algorithm is symmetric; in particular, the weighting coefficient applied to the first element is the same as the one applied to the last element, i.e., $w_1(k) = w_N(k)$. This may not be appropriate, since the received signals of two close elements (e.g., $x_{N-1}(k)$ and $x_N(k)$) are more correlated than those of elements farther apart (e.g., $x_1(k)$ and $x_N(k)$). To overcome this limitation, the delay Euclidean-weighted multiply and sum (DewMAS) beamformer was developed in Vayyeti and Thittai (2021a), in which the Euclidean distance between elements is used to apply an appropriate weight to the received signals. In this algorithm, (3.68) is rewritten as below:

$$\tilde{x}_{i,j}(k) = x_i(k)\,x_j(k)\left[w_i(k)\,w_j(k)\right]^{|i-j|}, \tag{3.71}$$

and the weighting coefficients are updated accordingly. The remainder of the DewMAS processing steps is similar to the DwMAS beamformer (i.e., (3.69) and (3.70)). The DewMAS algorithm improves the image contrast compared to DwMAS. In addition to DewMAS, another modified version of the DwMAS algorithm was developed in Vayyeti and Thittai (2021b), in which the parameter $r$ is optimally selected for each imaging point. This algorithm is known as the delay optimally-weighted multiply and sum (DowMAS) beamformer, which compensates for the variation of the beam pattern across imaging points.
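A compact Python sketch of the DwMAS pair weighting (3.68)-(3.70) is shown below; the Hanning window and the value $r = 2$ are illustrative assumptions, and replacing the exponent $r$ by $|i - j|$ gives the DewMAS variant of (3.71):

```python
import numpy as np

def dwmas_sample(x, r=2.0):
    """DwMAS output for one time sample of the delayed channel vector x."""
    N = len(x)
    w = np.hanning(N)                               # symmetric weighting window
    y = 0.0
    for i in range(N - 1):
        for j in range(i + 1, N):
            t = x[i] * x[j] * (w[i] * w[j]) ** r    # weighted pair, (3.68)
            y += np.sign(t) * np.sqrt(np.abs(t))    # (3.69)-(3.70)
    return y
```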


3.5.1.11 High Order Delay Multiply and Sum Algorithm

The p-DMAS algorithm is a beamforming method in which the received signals are demodulated to simplify the signal processing compared to DMAS. In the p-DMAS algorithm, the amplitude of the time-delayed signal is scaled by the $p$th root while its phase remains unchanged; after the summation operation, the dimensionality of the result is restored by the $p$th power (Shen and Hsieh 2019; Shen 2020). Note that the p-DMAS algorithm can be performed either in the RF or the baseband domain; when applied to RF data, the channel phase is preserved by using the sign operation, and when the baseband signal is used, the channel phase is preserved by keeping the phase component of the signal unchanged. In particular, consider the received signal of the $i$th element in the baseband domain as $z_i = a_i e^{j\phi_i}$. The amplitude-scaled version of the signal $z_i$ is obtained by taking the $p$th root of the signal amplitude. Then, the p-DMAS algorithm is applied to the amplitude-scaled data as below:

$$y_{p\text{-}DMAS}(n) = \left(\frac{1}{N}\sum_{i=1}^{N}\sqrt[p]{a_i}\,e^{j\phi_i}\right)^p. \tag{3.72}$$

The signal coherence can be tuned using the adjustable parameter $p$; as the value of this parameter increases, more sidelobe suppression is achieved and the main lobe width decreases. To obtain a relationship between the p-DMAS and DAS beamformers, consider the following equation:

$$y_{p\text{-}DMAS} = y_{DAS}\left(\frac{\frac{1}{N}\sum_{i=1}^{N}\sqrt[p]{a_i}\,e^{j\phi_i}}{\left(\frac{1}{N}\sum_{i=1}^{N}a_i\,e^{j\phi_i}\right)^{1/p}}\right)^{p}. \tag{3.73}$$

It can be concluded from the above equation that the p-DMAS algorithm is equivalent to a DAS beamformer weighted by the $(p-1)$th power of the phase coherence among the channel data (for uniform channel amplitudes, the bracketed ratio in (3.73) equals the phase coherence raised to the power $(p-1)/p$, so the overall weighting is its $(p-1)$th power).
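The baseband form of (3.72) is a one-liner; the Python sketch below (illustrative) applies the $p$th-root amplitude scaling while keeping the channel phase, then restores the dimensionality:

```python
import numpy as np

def p_dmas_sample(z, p=2.0):
    """p-DMAS output for one sample of the delayed baseband channel vector z."""
    a, phi = np.abs(z), np.angle(z)
    s = np.mean(a ** (1.0 / p) * np.exp(1j * phi))   # amplitude-scaled beamsum
    return s ** p                                    # restore dimensionality, (3.72)
```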

3.5.1.12 Short-Lag Delay Multiply and Sum Algorithm

Although the DMAS algorithm improves the image contrast, in speckle-generating datasets it produces dark artifacts in the reconstructed image and disrupts the background speckle. This limitation negatively affects lesion detectability in US imaging. Moreover, the computational complexity of this algorithm is high due to the cross-correlation calculation over the received signals of all the array elements. To alleviate this problem, in Matrone and Ramalli (2018), it was proposed to limit the cross-correlation calculation between the received signals of the array elements to a maximum lag. The resulting algorithm is known as the short-lag DMAS (SL-DMAS) technique, which is formulated as below:

$$y_{SL\text{-}DMAS}(k) = \sum_{l=1}^{S}\sum_{i=1}^{N-l}\mathrm{sign}\left[x_i(k)\,x_{i+l}(k)\right]\sqrt{\left|x_i(k)\,x_{i+l}(k)\right|}, \tag{3.74}$$

where $S = 1, \cdots, N-1$ is the maximum lag. The quality of the resulting image depends on the chosen lag $S$; the image contrast is better preserved for a greater value of $S$, while the image resolution, as well as the SNR, improves for a smaller value of this parameter.
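A short Python sketch of (3.74) is given below (illustrative); only element pairs up to the maximum lag $S$ contribute:

```python
import numpy as np

def sl_dmas_sample(x, S=8):
    """SL-DMAS output for one time sample of the delayed channel vector x."""
    N, y = len(x), 0.0
    for l in range(1, S + 1):                        # short lags only
        prod = x[:N - l] * x[l:]                     # products x_i * x_{i+l}
        y += np.sum(np.sign(prod) * np.sqrt(np.abs(prod)))
    return y
```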

3.5.1.13 Correlation-Based Modified Delay Multiply and Sum Algorithm

In Esmailian and Asl (2022), a new beamformer was proposed in which the DMAS and CF methods are modified and combined with each other. This combinatorial algorithm, known as the modified DMAS (MDMAS) method, improves both the resolution and the contrast of the resulting image compared to DMAS. It also prevents the production of dark artifacts in speckle-generating mediums. In the MDMAS method, the mutual multiplication process is performed adaptively based on the cross-correlation between the received signals. Consider the time-delayed signals corresponding to the $i$th and $j$th elements as $X_i, X_j \in \mathbb{C}^{N_x\times N_y}$ (the imaging medium is considered as a region consisting of $N_x \times N_y$ pixels). The cross-correlation $\rho(X_i, X_j)$ between these two signals is obtained according to the following equation:

$$\rho(X_i, X_j) = \frac{\mathrm{cov}(X_i, X_j)}{\sigma(X_i)\,\sigma(X_j)}, \tag{3.75}$$

where $\mathrm{cov}(X_i, X_j)$ denotes the covariance between $X_i$ and $X_j$, and $\sigma(X_i)$ represents the standard deviation of $X_i$. The calculated cross-correlation lies in the range $[-1, 1]$. In the case in which $\rho(X_i, X_j) = 0$, the signals $X_i$ and $X_j$ are uncorrelated; if $\rho(X_i, X_j) > 0$ or $\rho(X_i, X_j) < 0$, the signals are positively or negatively correlated, respectively. If the cross-correlation of two signals exceeds a pre-determined threshold value $\tau_{const}$, the corresponding signals contribute to the mutual multiplication, i.e., we have:

$$y_{MDMAS}(n) = \sum_{i=1}^{N-1}\sum_{j=i+1}^{N}\hat{x}_i(n)\,\hat{x}_j(n), \quad \text{subject to: } \rho(X_i, X_j) \ge \tau_{const}. \tag{3.76}$$



By doing so, the contribution of the low-coherence signals is removed, and consequently, the image resolution is improved. However, one should note that the total number of multiplied signals is reduced compared to the DMAS algorithm; therefore, the image contrast is expected to degrade in comparison with DMAS. To compensate for this degradation, it was suggested in Esmailian and Asl (2022) to take advantage of the modified CF (MCF) weighting method as below:

$$MCF(i) = \frac{\left|\sum_{j\in E} x_j(n)\right|^2}{L_E\sum_{j\in E}\left|x_j\right|^2}, \quad E: \rho(X_i, X_j) \ge \tau_{const}. \tag{3.77}$$

In the above equation, $L_E$ denotes the number of signals for which the constraint $\rho(X_i, X_j) \ge \tau_{const}$ is met. It can be seen that the MCF (and consequently, the value of $L_E$) must be calculated for each element separately. Finally, the weighted MDMAS (WMDMAS) algorithm is obtained by combining (3.76) and (3.77) as below:

$$y_{WMDMAS}(n) = \sum_{i=1}^{N-1} MCF(i)\sum_{j=i+1}^{N}\hat{x}_i(n)\,\hat{x}_j(n), \quad \text{subject to: } \rho(X_i, X_j) \ge \tau_{const}. \tag{3.78}$$

Similar to the DMAS algorithm, the output of the WMDMAS beamformer includes a DC as well as a $2f_0$ frequency component. Therefore, a bandpass filter is required to remove the DC component.
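The correlation gating of (3.75)-(3.76) can be sketched in a few lines of Python (illustrative; X holds one row per element, flattened over the $N_x \times N_y$ grid, and the threshold value is arbitrary):

```python
import numpy as np

def mdmas_image(X, tau=0.5):
    """MDMAS beamsum (3.76); rows of X are the delayed per-element images."""
    N = X.shape[0]
    rho = np.corrcoef(X)                          # pairwise rho(X_i, X_j), (3.75)
    Xh = np.sign(X) * np.sqrt(np.abs(X))          # signed square root, (3.65)
    y = np.zeros(X.shape[1])
    for i in range(N - 1):
        for j in range(i + 1, N):
            if rho[i, j] >= tau:                  # gating constraint of (3.76)
                y += Xh[i] * Xh[j]
    return y                                      # bandpass around 2*f0 follows
```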

3.5.1.14 Other Delay Multiply and Sum-Based Algorithms

Some other DMAS-based beamformers have also been developed, in addition to the ones mentioned so far, to achieve a high-quality reconstructed image (KaramFard and Asl 2017; Madhavanunni and Panicker 2022; Eslami et al. 2021; Ziksari et al. 2022). For instance, in Madhavanunni and Panicker (2023), Madhavanunni and Panicker (2022), an algorithm known as beam multiply and sum (BMAS) was proposed, the goal of which is to improve the contrast of the DMAS algorithm while preserving its resolution. Moreover, the BMAS algorithm reduces the computational complexity compared to the DMAS beamformer. In this algorithm, a subarray and a stride with lengths $L$ and $L_s$, respectively, are defined, and the array is divided into $N_d$ subarrays according to these parameters. Next, the non-adaptive DAS algorithm is applied to each of the subarrays. Finally, the obtained outputs are mutually multiplied (similar to DMAS) to yield the BMAS beamformed data. More precisely, denoting the DAS beamformed data corresponding to the $l$th subarray as $y_{DAS}^{(l)}(k)$, the BMAS algorithm is formulated as below:

$$y_{BMAS}(k) = \sum_{i=1}^{N_d-1}\sum_{j=i+1}^{N_d} y_{DAS}^{(i)}(k)\,y_{DAS}^{(j)}(k). \tag{3.79}$$

Note that, similar to the DMAS algorithm, a bandpass filter with the center frequency of $2f_0$ is applied to the output of the BMAS algorithm to remove the DC component. Another DMAS-based algorithm, known as DMAS3, was suggested in Eslami et al. (2021), in which the dimension of the summation in the DMAS algorithm is increased from 2 to 3. By doing so, the number of synthetic observations is increased by a factor of $N/3$ compared to DMAS, and therefore a more robust algorithm against noise is achieved. The so-called DMAS3 algorithm is formulated as below:

$$y_{DMAS3}(k) = \sum_{i=1}^{N}\sum_{\substack{j=1\\ j\neq i}}^{N}\sum_{\substack{m=1\\ m\neq i,\, m\neq j}}^{N}\hat{x}_i(k)\,\hat{x}_j(k)\,\hat{x}_m(k). \tag{3.80}$$

The DMAS algorithm can also be combined with the CF method, the result of which is known as the generalized DMAS (gDMAS) weighting factor; it leads to image quality improvement compared to the conventional CF method (Ziksari et al. 2022). In the gDMAS algorithm, the coherent summation of CF is replaced with the spatial coherence of the received signals obtained from DMAS. More precisely, we have:

$$gDMAS(k) = \frac{\frac{1}{2}\left[\left(\sum_{i=1}^{N}\hat{x}_i(k)\right)^2 - \sum_{i=1}^{N}\left|x_i(k)\right|\right]}{\sqrt{\sum_{i=1}^{N}\left[x_i(k)\right]^2}}, \tag{3.81}$$

where (3.67) is used to speed up the computation of the numerator. The calculated weighting factor is finally multiplied with the beamformed data in order to achieve the final reconstructed image.

3.5.2 Contrast Improvement in Minimum Variance Beamforming

In this section, different modified versions of the adaptive MV algorithm that have been developed to further improve the image contrast are discussed.

3.5.2.1 Minimum Variance Beamforming Combined with Coherence Weighting

It is well known that the adaptive MV algorithm improves the image resolution, as discussed in Sect. 3.4. To improve the contrast while retaining the good resolution of MV, it was proposed to combine the MV algorithm with the CF weighting method (Asl and Mahloojifar 2009). The output of the MV-based method combined with CF is obtained as below:

$$\hat{y}_{MV+CF}(k) = \frac{CF(k)}{N-L+1}\sum_{l=1}^{N-L+1} w^H(k)\,x^{(l)}(k). \tag{3.82}$$

The processing steps are straightforward: the output of the MV method is first obtained according to the process described in Sect. 3.4; then, the weights of the CF technique are obtained as in (3.19); the combined output is finally obtained by multiplying the results of the two previous steps. This combinatorial technique, denoted MV+CF, is used to reconstruct both the simulated point target and the cyst phantom. The results corresponding to the simulated point target are shown in Fig. 3.29, and the lateral variations plot of the reconstructed point target for the different algorithms is shown in Fig. 3.30. For the images obtained from the MV-based algorithm, $L = N/2$, $\xi = \frac{1}{100L}$, and $K = 0$ are considered. The obtained results show that the image obtained from MV+CF benefits from the improved resolution of MV, while the contrast is further improved due to the CF weighting. The MV+CF method is also evaluated on the simulated cyst phantom, and the result is shown in Fig. 3.31. For the images obtained from the MV-based algorithm, $L = N/2$ and $\xi = \frac{1}{100L}$ are considered. Temporal averaging is necessary to maintain the speckle statistics of the image, as mentioned in Sect. 3.3.1.2; therefore, the temporal averaging technique is performed with $K = 11$ to obtain the reconstructed images of the simulated cyst phantom.
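A compact Python sketch of the MV+CF pipeline for one pixel is given below. It assumes the standard subaperture MV weights $w = \hat{R}^{-1}a/(a^H\hat{R}^{-1}a)$ with steering vector $a = \vec{1}$ for delayed data, and a diagonal-loading convention $\xi\,\mathrm{tr}(\hat{R})/L$ that is an illustrative assumption rather than the book's exact (3.14):

```python
import numpy as np

def mv_cf_sample(x, L=None):
    """MV+CF output (3.82) for one time sample of the delayed channel vector x."""
    N = len(x)
    L = L or N // 2
    subs = np.stack([x[l:l + L] for l in range(N - L + 1)])  # overlapping subarrays
    R = np.einsum('li,lj->ij', subs, subs.conj()) / (N - L + 1)  # sample covariance
    R = R + (1.0 / (100 * L)) * np.trace(R) / L * np.eye(L)  # assumed loading rule
    a = np.ones(L)
    Ria = np.linalg.solve(R, a)
    w = Ria / (a.conj() @ Ria)                               # MV weights
    y_mv = np.mean(subs @ w.conj())                          # subaperture-averaged output
    cf = np.abs(x.sum()) ** 2 / (N * np.sum(np.abs(x) ** 2)) # CF, (3.19)
    return cf * y_mv                                         # MV+CF, (3.82)
```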

Fig. 3.29 The reconstructed images of the simulated point target obtained from a DAS, b MV and c MV+CF. The images are shown with the dynamic range of 60 dB



Fig. 3.30 The lateral variations plot of the simulated point target corresponding to Fig. 3.29

Fig. 3.31 The reconstructed images of the simulated cyst phantom obtained from a DAS, b MV, c MV+CF and d MV+GCF ($M_0 = 1$). The images are shown with the dynamic range of 50 dB. The dashed circles demonstrate the considered inside and outside regions of the cyst used to calculate the CNR evaluation metric

In addition to Fig. 3.31, it can be seen from Fig. 3.32 that the area inside the cyst is suppressed more successfully using the MV+CF method compared to the pure MV algorithm. According to the results obtained in Figs. 3.29 and 3.31, the performance of the MV+CF method is generally improved in terms of contrast enhancement. However, Fig. 3.31 qualitatively shows that the background speckle is degraded compared to DAS and MV, as discussed in Sect. 3.5.1.2. In particular, the CNR value of the MV+CF method is 1.09 dB, which is about 4.05 dB and 3.58 dB lower than DAS and MV, respectively. To restore the background intensity in speckle-generating mediums, one can instead apply the GCF weighting, the result of which is denoted MV+GCF and demonstrated in Fig. 3.31d. In this figure, the cut-off frequency is considered to be one, i.e., $M_0 = 1$. Comparing Fig. 3.31c, d, the background speckle is retrieved more successfully using the MV+GCF method. Moreover, the lateral variations plot of MV+GCF is depicted in Fig. 3.32. Using this technique, the CNR value is improved by about 2.3 dB compared to MV+CF.

3.5.2.2 Eigenspace-Based Minimum Variance Algorithm

The eigenspace-based MV (EIBMV) beamforming algorithm was proposed to improve the image contrast while retaining the good resolution of the MV algorithm (Asl and Mahloojifar 2010, 2009).



Fig. 3.32 The lateral variations plot of the simulated cyst phantom corresponding to Fig. 3.31

In other words, using the EIBMV technique, the resolution and contrast of the resulting image are improved simultaneously. It can also be used for a better estimation of the blood velocity (Makouei and Asl 2020). This adaptive method is based on the eigendecomposition of the estimated covariance matrix; the EIBMV weights are obtained by projecting the MV weights onto a subspace resulting from the eigenstructure of the estimated covariance matrix. In this regard, the covariance matrix is calculated according to (3.18) and then decomposed into its eigenvalues and eigenvectors as below:

$$\hat{R} = V\Lambda V^H, \tag{3.83}$$

where $\Lambda \in \mathbb{C}^{L\times L}$ is a diagonal matrix whose diagonal entries consist of the eigenvalues of the covariance matrix in descending order:

$$\Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_L \end{bmatrix}, \quad \lambda_i \ge \lambda_j \ \text{for } i < j.$$

Also, $V \in \mathbb{C}^{L\times L}$ denotes the eigenvectors corresponding to the obtained eigenvalues; more precisely, $V = [v_1, v_2, \cdots, v_L]$, where $v_i \in \mathbb{C}^{L\times 1}$ denotes the orthonormal eigenvector of $\lambda_i$. To suppress the sidelobes and improve the contrast of the image, a limited number of the largest eigenvalues are selected, and the eigenvectors corresponding to them are separated to construct the signal subspace $E_s$ as below:

$$E_s = \left[v_1, v_2, \cdots, v_{Num}\right], \tag{3.84}$$

where "Num" denotes the number of selected eigenvalues and, correspondingly, of the separated eigenvectors. The resulting $E_s$ includes information about the main lobe, while the information about the sidelobes is reduced considerably. This is due to the fact that only the "Num" eigenvalues that can effectively represent the signal subspace remain, and the eigenvalues that do not contain much information about the main lobe are discarded. After obtaining the matrix $E_s$, the EIBMV weights are obtained from the following equation:

$$w_{EIBMV}(k) = E_s E_s^H\, w(k), \tag{3.85}$$

where $w(k)$ is the weight vector obtained using the MV algorithm according to (3.15). Finally, the EIBMV beamformed output is achieved using (3.16) by replacing $w$ with the newly obtained eigenspace-based weights $w_{EIBMV}$ as below:

$$\hat{y}_{EIBMV}(k) = \frac{1}{N-L+1}\sum_{l=1}^{N-L+1} w_{EIBMV}^H(k)\,x^{(l)}(k). \tag{3.86}$$

The schematic of the implementation steps of the EIBMV algorithm is depicted in Fig. 3.33; the processing added by EIBMV in comparison with the basic MV algorithm is shown with dashed lines. The number of selected eigenvalues depends on the pre-defined threshold $\delta$, and different values of $\delta$ result in different performances of the EIBMV algorithm. In the following, the effect of this parameter is evaluated.
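The subspace projection of (3.83)-(3.85) takes only a few lines; the Python sketch below (illustrative) keeps the eigenvectors whose eigenvalues exceed $\delta = \lambda_{max}/2$ and projects a given MV weight vector onto them:

```python
import numpy as np

def eibmv_weights(R, w_mv, delta_ratio=0.5):
    """Project MV weights onto the signal subspace of R, (3.83)-(3.85)."""
    lam, V = np.linalg.eigh(R)                 # Hermitian eigendecomposition
    lam, V = lam[::-1], V[:, ::-1]             # descending order, as in (3.83)
    Es = V[:, lam >= delta_ratio * lam[0]]     # signal subspace, (3.84)
    return Es @ (Es.conj().T @ w_mv)           # projected weights, (3.85)
```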

Fig. 3.33 The schematic of EIBMV implementation steps. The added processing steps compared to the MV beamforming are shown with dashed lines



Fig. 3.34 The reconstructed images of the simulated point target obtained from a DAS, b MV and c EIBMV with $\delta = \lambda_{max}/2$. No temporal averaging is performed in the MV-based methods, and $L = N/2$. The images are shown with the dynamic range of 60 dB

Fig. 3.35 The lateral variations plot of the simulated point target corresponding to Fig. 3.34

Figure 3.34 shows the reconstructed images obtained from the DAS, MV, and EIBMV methods; the result of the non-adaptive DAS beamformer is included for better comparison. Using the EIBMV method, the good resolution of the MV algorithm is achieved while the contrast is improved compared to MV. The same can be observed in Fig. 3.35: the sidelobes are suppressed more efficiently than with the basic MV algorithm. Decomposing the estimated covariance matrix into its eigenvalues and eigenvectors, the largest eigenvalue is denoted as $\lambda_{max}$. The threshold $\delta$ is set to $\lambda_{max}/2$ for the reconstructed image shown in Fig. 3.34; the eigenvectors corresponding to the eigenvalues larger than $\lambda_{max}/2$ are selected to construct the signal subspace shown in (3.84). The threshold value may vary for each time index, since the largest eigenvalue $\lambda_{max}$ differs. One can also apply other weighting methods to the EIBMV beamformer to further suppress the noise level; for instance, in Shamekhi et al. (2020), it was suggested to combine the SCF weighting method (Sect. 3.5.1.5) with the EIBMV algorithm to achieve improved image quality. To investigate the effect of the threshold $\delta$ on the resulting image, different values of this parameter are evaluated, and the result is shown in Fig. 3.36. The lateral variations plot of the simulated point targets in Fig. 3.36 shows that the constant value $\delta = 10$ leads to more efficient sidelobe suppression than the other values. Comparing $\delta = \lambda_{max}/2$ and $\delta = \lambda_{max}/10$, the sidelobe suppression is slightly degraded by decreasing the value of $\delta$.



Fig. 3.36 The lateral variations plot of the reconstructed images obtained from the MV algorithm as well as the EIBMV method using different values of the selected threshold. $L = N/2 = 48$ is considered to reconstruct the images

This is due to the fact that as the threshold value decreases, more eigenvectors are selected to construct the signal subspace shown in (3.84), and consequently, to obtain the weight vector; the resulting signal subspace then contains information about the sidelobes as well as the main lobe. Generally, it can be concluded that the performance of the EIBMV beamformer approaches that of the MV algorithm as the threshold value $\delta$ decreases. To evaluate the performance of the EIBMV method on speckle-generating mediums, consider Fig. 3.37. Qualitatively, the EIBMV algorithm outperforms MV; the edges of the cystic region are reconstructed more clearly for both $L = N/2$ and $L = N/4$ compared to the corresponding MV results. In particular, Fig. 3.38 shows the lateral variations plot for the reconstructed images shown in Fig. 3.37 with $L = N/2$. It can be clearly seen that the intensity inside the cyst is suppressed more efficiently than with the basic MV algorithm, demonstrating the superiority of EIBMV in terms of contrast enhancement.

3.5.2.3 Forward-Backward Minimum Variance Algorithm

The forward-backward MV (FBMV) algorithm is another modified version of the MV algorithm, the goal of which is to improve the contrast of the resulting image compared to the MV beamforming method (Asl and Mahloojifar 2011). As discussed earlier, the more observations are available, the more accurately the covariance matrix is estimated; and the more accurate the estimate, the better the quality of the resulting image. The focus of the FBMV algorithm is therefore to obtain a more accurate estimated covariance matrix than the MV technique. This is achieved by increasing the number of observations through performing the subarray division process in the backward as well as the forward direction.



Fig. 3.37 The reconstructed images of the simulated cyst phantom obtained from a MV with $L = N/2$, b EIBMV with $L = N/2$, c MV with $L = N/4$ and d EIBMV with $L = N/4$. Temporal averaging is performed with $K = 11$ in all cases. The images are shown with the dynamic range of 50 dB

Fig. 3.38 The lateral variations plot of the simulated cyst phantom corresponding to Fig. 3.37a, b. The lateral variations plot of the DAS method is also depicted for better comparison

The covariance matrix corresponding to the new set of subarrays is obtained accordingly. In this algorithm, the covariance matrix is estimated according to (3.13), which we call the forward estimation. Furthermore, the subarray division and the covariance matrix calculation are also performed in the backward direction, which is known as the backward estimation. In other words, the backward estimation can be imagined as left-right flipping of the array, as shown in Fig. 3.39, and then dividing it into $N-L+1$ overlapping subarrays. The subarray obtained in this manner is denoted as $\tilde{x}^{(l)}(k) = \left[x_{N-l+1}(k), x_{N-l}(k), \cdots, x_{N-l-L+2}(k)\right]^T \in \mathbb{C}^{L\times 1}$, where $l = 1, \cdots, N-L+1$. The covariance matrix corresponding to this new set of subarrays is estimated according to (3.13), except that the subarray vector $x^{(l)}$ is replaced with $\tilde{x}^{(l)}$; the resulting estimated covariance matrix is denoted as $\hat{R}_B(k)$.



Fig. 3.39 The schematic of subarray division in the backward direction used in the FBMV algorithm. The array consists of $N$ elements, and the subarray length is $L$

To obtain a more accurate estimate, the forward and backward estimated covariance matrices are averaged, resulting in the forward-backward estimated covariance matrix:

$$\hat{R}_{FB}(k) = \frac{\hat{R}(k) + \hat{R}_B(k)}{2}. \tag{3.87}$$

The estimated forward-backward covariance matrix can also be formulated as below:

$$\hat{R}_{FB}(k) = \frac{1}{2}\left(\hat{R}(k) + J\hat{R}(k)^T J\right), \tag{3.88}$$

where $J \in \mathbb{C}^{L\times L}$ denotes the anti-diagonal (exchange) matrix, the anti-diagonal entries of which equal 1. Note that (3.87) and (3.88) are equivalent. After the estimation of the covariance matrix is completed, the weight vector of the FBMV algorithm is obtained according to (3.15) by replacing $\hat{R}$ with $\hat{R}_{FB}$ from (3.87) or (3.88); the obtained weight vector is denoted as $w_{FB}(k)$. Finally, the FBMV beamformed data is obtained similarly to the other MV-based methods as below:

$$\hat{y}_{FB}(k) = \frac{1}{N-L+1}\sum_{l=1}^{N-L+1} w_{FB}^H(k)\,x^{(l)}(k). \tag{3.89}$$
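Equation (3.88) is convenient in code because the backward estimate never has to be formed explicitly; a minimal Python sketch (illustrative) is given below:

```python
import numpy as np

def forward_backward(R):
    """Forward-backward covariance estimate, (3.88)."""
    J = np.fliplr(np.eye(R.shape[0]))   # exchange (anti-diagonal) matrix
    return 0.5 * (R + J @ R.T @ J)
```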

The schematic of the FBMV algorithm is shown in Fig. 3.40; the processing steps added in comparison with the MV technique are depicted with dashed lines. Note that using the FBMV algorithm, the covariance matrix is estimated using a single temporal sample; no temporal averaging is performed to obtain the forward and backward covariance matrices.



Fig. 3.40 The schematic of FBMV implementation steps. The added processing steps compared to the MV beamforming are shown with dashed lines

Also, the diagonal loading technique is not used to achieve a robust covariance matrix. It has been shown that the FBMV algorithm improves the robustness against the mismatch between the assumed and the actual steering vectors compared to MV, without using the diagonal loading technique (Asl and Mahloojifar 2011). This is due to the participation of the backward estimation in addition to the forward estimation, which increases the number of observations and consequently yields a more robust output. Figure 3.41 demonstrates the reconstructed images obtained from the non-adaptive DAS beamformer as well as the MV and FBMV methods. It can be seen that the performance of the FBMV method is comparable with the MV algorithm in terms of resolution. The lateral variations plot of the simulated point target is also shown in Fig. 3.42. The calculated FWHM values of the reconstructed images obtained from the MV and FBMV algorithms are 0.14 mm and 0.13 mm, respectively. To evaluate the performance of the FBMV algorithm on a speckle-generating medium, the simulation is performed on the cyst phantom, and the results are presented in Fig. 3.43. Without the temporal averaging technique, the MV beamformer suffers from poor contrast, as shown in Fig. 3.43b. In contrast, the FBMV beamformer retrieves the speckle statistics of the background and improves the image contrast without temporal averaging. Comparing Fig. 3.43c, d, the performance of the FBMV algorithm is comparable with that of the MV method with temporal averaging. This can also be seen from the lateral variations plot of the simulated cyst phantom shown in Fig. 3.44.



Fig. 3.41 The reconstructed images of the simulated point target using a DAS, b MV and c FBMV algorithms. $K = 0$ and $L = N/2$ are considered for the MV-based algorithms. Diagonal loading is applied to the MV algorithm with $\xi = \frac{1}{100L}$, and no diagonal loading is applied to the FBMV method. The images are shown with the dynamic range of 60 dB

Fig. 3.42 The lateral variations plot corresponding to Fig. 3.41

Fig. 3.43 The reconstructed images of the simulated cyst phantom using a DAS, b MV ($K = 0$), c MV ($K = 11$), and d FBMV algorithms. $L = N/2$ and $\xi = \frac{1}{100L}$ are considered for the MV algorithm. For the FBMV algorithm, $L = N/2$, and neither diagonal loading nor temporal averaging is applied. The images are shown with the dynamic range of 50 dB. The dashed circles demonstrate the considered inside and outside regions of the cyst used to calculate the CNR evaluation metric

The calculated CNR of the FBMV beamformer is obtained as 2.94 dB; it can be concluded that FBMV leads to a CNR improvement of about 3.4 dB compared to the MV algorithm without temporal averaging (Fig. 3.43b). This indicates the superiority of the FBMV method compared to MV (with $K = 0$) in terms of contrast enhancement. Generally, it is concluded that using the FBMV beamformer, the resolution is slightly degraded compared to the MV algorithm.



Fig. 3.44 The lateral variations plot corresponding to Fig. 3.43. $K = 11$ is considered for the MV algorithm

Instead, the FBMV beamformer results in robustness improvement compared to the MV method. This issue will be discussed in more detail in Sect. 3.6.

3.5.2.4 Covariance Matrix-Based Adaptive Weighting Combined with Minimum Variance Algorithm

In order to improve the contrast performance of the MV algorithm, a covariance matrix-based statistical beamforming (CMSB) was proposed in Wang et al. (2022), which is obtained from the ratio of the mean to the standard deviation of a modified covariance matrix. In this technique, the subarray length varies adaptively for each imaging point, and the corresponding covariance matrix is estimated accordingly. The obtained covariance matrix is then further processed to generate a weighting coefficient known as the covariance matrix-based statistical factor (CMSF). A detailed explanation of this adaptive weighting method follows. The amplitude standard deviation of the received signal $x(k)$ for the $k$th imaging point is obtained as below:

$$\sigma(k) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left[x_i(k) - \bar{x}(k)\right]^2}, \tag{3.90}$$

where $\bar{x}(k)$ is the average value of the signal $x(k)$. Using the above equation, the normalized reciprocal of the amplitude standard deviation, $\sigma'(k)$, is obtained as follows:

$$\sigma'(k) = \mathcal{N}\left[\frac{1}{\sqrt{\sigma(k)}}\right], \tag{3.91}$$

where $\mathcal{N}[\cdot]$ denotes the normalization operation. Then, the adaptive subarray length for the $k$th imaging point, $L(k)$, is obtained according to the following equation:

$$L(k) = \left\lfloor \sigma'(k)\,L_{max}\right\rfloor. \tag{3.92}$$

In the above equation, $\lfloor\cdot\rfloor$ denotes the downward rounding operation, and $L_{max}$ is the maximum subarray length. In the case in which $L(k) \le 2$, the subarray length is set to 2. The covariance matrix corresponding to the subarray with the length $L(k)$ obtained from (3.92) is estimated as below:

$$\hat{R}(k) = \frac{1}{N-L(k)+1}\sum_{l=1}^{N-L(k)+1}\hat{x}^{(l)}(k)\left(\hat{x}^{(l)}(k)\right)^H, \tag{3.93}$$

where $\hat{x}^{(l)}(k) \in \mathbb{C}^{L(k)\times 1}$ is the $l$th subarray with the adaptive length $L(k)$. The next step is to apply the rotary averaging and diagonal reducing techniques. More precisely, by applying the rotary averaging to the matrix $\hat{R}(k)$, we have:

$$\bar{R}(k) = \frac{1}{4}\left[\hat{R}(k) + \hat{R}^T(k) + J\hat{R}(k)J + J\hat{R}^T(k)J\right]. \tag{3.94}$$

The diagonal reducing technique is then applied to the output of the above equation, and the modified covariance matrix is obtained as below:

$$\tilde{R}(k) = \bar{R}(k) - \xi\,\bar{R}(k)\,I. \tag{3.95}$$

It can be seen from the above equation that the loading factor is subtracted from the covariance matrix by the diagonal reducing technique. Finally, the weighting factor $w_{CMSF}(k)$ is obtained from the following equation:

$$w_{CMSF}(k) = \frac{\mathrm{E}\!\left[\tilde{R}(k)\right]}{\sqrt{\frac{1}{L^2(k)}\sum_{i=1}^{L(k)}\sum_{j=1}^{L(k)}\left(\tilde{R}_{i,j}(k) - \mathrm{E}\!\left[\tilde{R}(k)\right]\right)^2}}, \tag{3.96}$$

where $\mathrm{E}\!\left[\tilde{R}(k)\right] = \frac{1}{L^2(k)}\sum_{i=1}^{L(k)}\sum_{j=1}^{L(k)}\tilde{R}_{i,j}(k)$ is the average value of the covariance matrix $\tilde{R}(k)$, and $\tilde{R}_{i,j}(k)$ represents the entry in the $i$th row and $j$th column of the covariance matrix. Applying the obtained weighting factor to the MV beamformed data, we have:

$$y_{MV+CMSF}(k) = w_{CMSF}(k)\times y_{MV}(k). \tag{3.97}$$

Using the CMSB method, the subarray length corresponding to an off-axis signal takes a small value, while a greater value is assigned for regions corresponding to incoherent noise. Moreover, the subarray length corresponding to the speckle region is smaller than the one obtained for coherent signals. Accordingly, the obtained $w_{CMSF}$ corresponding to incoherent noise is small; in other words, the contribution of incoherent noise is suppressed. Also, the obtained $w_{CMSF}$ for the speckle region is greater than the one obtained for clutter, and consequently, the speckle is preserved better than the clutter. In the CMSF weighting method, as the value of the diagonal reducing factor increases, the performance of the algorithm improves in terms of resolution and speckle preservation; however, the contrast of the reconstructed image degrades. In order to simultaneously improve the resolution and contrast of the reconstructed image, it is desirable to adaptively calculate the value of $\xi$ for each imaging point. In this regard, the adaptive reducing factor $\xi(k)$, based on the CF weighting method, is used in Wang et al. (2022) as below:

$$\xi(k) = CF^{\sigma'(k)}(k)\times\xi_{max}, \tag{3.98}$$

where $\xi_{max}$ denotes the maximum value of the diagonal reducing factor, and $CF(k)$ is obtained according to (3.19). Considering the adaptive reducing factor, (3.95) is rewritten as below:

$$\tilde{R}_{ac}(k) = \bar{R}(k) - \xi(k)\,\bar{R}(k)\,I. \tag{3.99}$$

According to the obtained covariance matrix $\tilde{R}_{ac}(k)$, the new weighting coefficient, known as the covariance matrix-based adaptive weighting (CMSAW), is obtained as below:

$$w_{CMSAW}(k) = \frac{\mathrm{E}\!\left[\tilde{R}_{ac}(k)\right]}{\sqrt{\frac{1}{L^2(k)}\sum_{i=1}^{L(k)}\sum_{j=1}^{L(k)}\left(\tilde{R}_{ac_{i,j}}(k) - \mathrm{E}\!\left[\tilde{R}_{ac}(k)\right]\right)^2}}, \tag{3.100}$$

where $\mathrm{E}\!\left[\tilde{R}_{ac}(k)\right]$ denotes the average value of the matrix $\tilde{R}_{ac}(k)$. Finally, the obtained weighting coefficient is applied to the output of the MV beamformer similarly to (3.97); consequently, a reconstructed image of improved quality is achieved in which the noise level is suppressed while the speckle is preserved. To better understand how the CMSAW method improves the image resolution and contrast while also preserving the speckle, note that in the main lobe regions, the CF method yields a large value, while the obtained CF value is small for incoherent noise. Likewise, the value of $\sigma'(k)$, obtained according to (3.91), is large for main lobe regions and small for incoherent noise regions. This causes the adaptive $\xi(k)$ presented in (3.98) to be large for main lobe regions and smaller for incoherent noise regions. Therefore, better noise suppression occurs in incoherent noise regions, improving the image contrast, while the image resolution improves in main lobe regions. Furthermore, by using (3.98), the calculated $\xi(k)$ corresponding to the speckle region is greater than the one obtained for incoherent noise and smaller than that of main lobe regions. As a result, the obtained $w_{CMSAW}$ associated with the speckle is close to (or greater than) that of the main lobe, which leads to speckle preservation.
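The mean-to-standard-deviation ratio in (3.96)/(3.100) is straightforward once the modified covariance matrix is available; the Python sketch below (illustrative, with the adaptive $L(k)$, rotary averaging, and diagonal reducing steps assumed already done) computes the factor itself:

```python
import numpy as np

def cmsf_weight(R_mod):
    """Mean-to-standard-deviation weighting factor of (3.96)/(3.100),
    applied to a modified covariance matrix R_mod."""
    m = R_mod.mean()                           # average of all matrix entries
    s = np.sqrt(np.mean((R_mod - m) ** 2))     # entry-wise standard deviation
    return m / s
```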

3.5.2.5 Adaptive Subarray Coherence-Based Post-Filter Algorithm

In order to further improve the contrast of the MV algorithm, and also to increase the ability of the EIBMV and FBMV algorithms to separate two close point targets, a new algorithm known as the adaptive subarray coherence-based post-filter (ASCBP) was proposed in Eslami and Asl (2022). In this algorithm, the EIBMV, FBMV, and an adaptive coherence-based filtering method are combined to obtain the final weighting coefficient. The generalized form of a post-filter can be formulated as below:

$$h(k) = \frac{E_S(k)}{E_S(k) + \eta\, E_N(k)}, \tag{3.101}$$

in which the parameter $\eta$ can be considered either a constant or an adaptively varying value. In the ASCBP algorithm, this parameter is adaptive and is obtained as the ratio of the input SNR to the output SNR. This comes from the concept of the array gain (AG), defined as the ratio of the output SNR to the input SNR; indeed, $\eta(k) = (AG)^{-1}$ is considered in ASCBP. Assuming that the desired signal and the noise are uncorrelated, the output SNR is formulated as below:

$$SNR_{output}(k) = \frac{\left|\sum_{i=1}^{N} w_i^S(k)\right|^2 E_S(k)}{\sum_{i=1}^{N}\left|w_i^N(k)\right|^2 E_N(k)}, \tag{3.102}$$

where $w_i^S(k)$ and $w_i^N(k)$ denote the weighting coefficients corresponding to the desired signal and noise, respectively. As the input SNR is defined as $SNR_{input}(k) = E_S(k)/E_N(k)$, (3.102) can be rewritten as below:

$$SNR_{output}(k) = SNR_{input}(k)\times\frac{\left|\sum_{i=1}^{N} w_i^S(k)\right|^2}{\sum_{i=1}^{N}\left|w_i^N(k)\right|^2}. \tag{3.103}$$

Therefore, $\eta(k)$ is obtained according to the following equation:

$$\eta(k) = \frac{SNR_{input}}{SNR_{output}} = \frac{\sum_{i=1}^{N}\left|w_i^N(k)\right|^2}{\left|\sum_{i=1}^{N} w_i^S(k)\right|^2}. \tag{3.104}$$

Inspired by the EIBMV and FBMV algorithms, the signal and noise weights are obtained as below:

$$w^S(k) = E_s E_s^H\, w_{FBMV}(k) = w_{ESB\text{-}FBMV}(k), \qquad w^N(k) = E_n E_n^H\, w_{FBMV}(k), \tag{3.105}$$

where $E_s$ is the signal subspace (as presented in (3.84)) and $E_n$ denotes the noise subspace. Using the obtained adaptive $\eta$, the ASCBP is obtained as below:

$$H_{ASCBP}(k) = \frac{CS(k)}{CS(k) + \eta(k)\left(ICS(k) - CS(k)\right)}, \tag{3.106}$$

where:

$$CS(k) = \frac{1}{N-L+1}\sum_{l=1}^{N-L+1}\left|w_{ESB\text{-}FBMV}^H(k)\,x^{(l)}(k)\right|^2, \qquad ICS(k) = \frac{1}{N-L+1}\sum_{l=1}^{N-L+1}\left[\frac{1}{L}\left(x^{(l)}(k)\right)^H x^{(l)}(k)\right]. \tag{3.107}$$

The obtained coefficient is applied to the combination of the EIBMV and FBMV weights, i.e., $w_{ESB\text{-}FBMV}(k)$, and the new weight is obtained as below:

$$w_{ESB\text{-}FBMV\text{-}ASCBP}(k) = H_{ASCBP}(k)\times w_{ESB\text{-}FBMV}(k). \tag{3.108}$$

Finally, the ASCBP beamformed data is obtained according to the following equation:

$$y_{ASCBP}(k) = \frac{1}{N-L+1}\sum_{l=1}^{N-L+1} w_{ESB\text{-}FBMV\text{-}ASCBP}^H(k)\,x^{(l)}(k). \tag{3.109}$$

In order to further improve the image contrast and speckle quality, it was proposed in Eslami and Asl (2022) to apply a 2D moving average filter to (3.106), known as the square neighborhood (SN) ASCBP, $H_{SN\text{-}ASCBP}(k)$, obtained according to the following equation:

$$H_{SN\text{-}ASCBP}(k) = \frac{1}{(2N_w+1)^2}\sum_{i=-N_w}^{N_w}\sum_{j=-N_w}^{N_w} H_{ASCBP}(x_k+i,\, z_k+j), \tag{3.110}$$

where $(x_k, z_k)$ denotes the $(x, z)$ coordinates of the $k$th imaging point, and $N_w$ is a constant that determines the length of the 2D window. The final weight is then obtained by replacing $H_{ASCBP}(k)$ with (3.110) in (3.108).
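The core of the post-filter, (3.104)-(3.107), can be sketched compactly in Python (illustrative; the signal/noise subspace weights ws, wn and the stack of subarray vectors are assumed to be available from the EIBMV/FBMV steps):

```python
import numpy as np

def ascbp_factor(ws, wn, subs):
    """ASCBP post-filter H(k) from (3.104)-(3.106); subs holds one subarray
    vector x^(l) per row."""
    eta = np.sum(np.abs(wn) ** 2) / np.abs(np.sum(ws)) ** 2     # (3.104)
    L = subs.shape[1]
    cs = np.mean(np.abs(subs @ ws.conj()) ** 2)                 # CS(k), (3.107)
    ics = np.mean(np.sum(np.abs(subs) ** 2, axis=1)) / L        # ICS(k), (3.107)
    return cs / (cs + eta * (ics - cs))                         # (3.106)
```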

3.5.2.6 Generalized Sidelobe Canceler Combined with Eigenspace Wiener Filter

In Yang et al. (2022), the generalized sidelobe canceler (GSC) algorithm was combined with the eigenspace-Wiener post-filter, the result of which is known as GSC-ESW, in order to improve the performance of the MV algorithm in terms of both resolution and contrast. The GSC algorithm is presented first; then, the eigenspace-Wiener post-filter and its combination with the GSC algorithm are explained. The GSC method is one way to implement the MV algorithm in which the constrained minimization problem is converted into an unconstrained one (Li et al. 2017); it also prevents the signal cancellation phenomenon. The schematic of this algorithm is presented in Fig. 3.45. The weight vector is divided into two parts: the adaptive weight vector $w_a(k)$ and the fixed weight vector $w_q$. The fixed weight vector is used to estimate the desired signal, while the adaptive one is used to suppress the noise and interference. According to the schematic shown in Fig. 3.45, the lower path, corresponding to the adaptive weight vector, includes the blocking matrix $B \in \mathbb{C}^{L\times(L-1)}$. This matrix should be selected so as to prevent the desired signal from entering the lower path, i.e., $B^H a = 0$. Considering the divided form of the weight vector, the output of the GSC is expressed as below:

$$y_{GSC}(k) = \left[w_q - Bw_a(k)\right]^H x(k), \tag{3.111}$$

where $w_q$ is fixed, as mentioned earlier, and $w_a(k)$ is obtained from the following unconstrained minimization problem:

$$\min_{w_a(k)}\ \left[w_q - Bw_a(k)\right]^H\hat{R}(k)\left[w_q - Bw_a(k)\right], \tag{3.112}$$

which results in the following solution:

$$w_a(k) = \left(B^H\hat{R}(k)B\right)^{-1}B^H\hat{R}(k)\,w_q. \tag{3.113}$$

Also, we have $w_q = a\left(a^H a\right)^{-1}$.

Fig. 3.45 The schematic of the GSC algorithm
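A minimal Python sketch of the GSC split (3.111)-(3.113) is given below; the steering vector $a = \vec{1}$ (delayed data) and the difference-matrix blocking choice are common illustrative options satisfying $B^H a = 0$, not choices prescribed by the source:

```python
import numpy as np

def gsc_sample(x_sub, R):
    """GSC output (3.111) for one subarray sample x_sub and covariance R."""
    L = len(x_sub)
    a = np.ones(L)
    wq = a / (a.conj() @ a)                          # fixed branch, w_q = a (a^H a)^-1
    B = np.eye(L, L - 1) - np.eye(L, L - 1, k=-1)    # columns e_i - e_{i+1}, so B^H a = 0
    wa = np.linalg.solve(B.conj().T @ R @ B,
                         B.conj().T @ R @ wq)        # adaptive branch, (3.113)
    return (wq - B @ wa).conj() @ x_sub              # (3.111)
```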



For the GSC method, the estimated covariance matrix can be eigendecomposed according to (3.83). The span of the eigenvectors of the matrix $\hat{R}$ is divided into two parts: the signal subspace $V_s$, which includes the eigenvectors associated with the $s$ largest eigenvalues, and the noise subspace $V_p$, which includes the remaining eigenvectors (corresponding to the smaller eigenvalues). According to the above explanation, the estimated covariance matrix can be reformulated as below:

$$\hat{R} = V_s\Lambda_s V_s^H + V_p\Lambda_p V_p^H = \hat{R}_s + \hat{R}_p, \tag{3.114}$$

where $\hat{R}_s$ and $\hat{R}_p$ denote the covariance matrices corresponding to the signal subspace and the noise subspace, respectively. Also, $V_s$ and $V_p$ are the eigenvectors corresponding to the signal and noise subspaces, respectively (the interference-plus-noise component is denoted as $p$ for brevity). Considering (3.114), the output power of the beamformer can be written as below:

$$\left|y_{GSC}(k)\right|^2 = \left[w^H(k)x(k)\right]\left[w^H(k)x(k)\right]^H = w^H(k)\hat{R}(k)w(k) = w^H(k)\hat{R}_s(k)w(k) + w^H(k)\hat{R}_p(k)w(k). \tag{3.115}$$

It can be seen that the output power of the beamformer is decomposed into the signal and noise powers; that is, $w^H(k)\hat{R}_s(k)w(k) = |s(k)|^2$ and $w^H(k)\hat{R}_p(k)w(k) = |p|^2$. Therefore, the Wiener post-filter, constructed according to (3.57), is rewritten as below:

$$H_{wiener}(k) = \frac{w^H(k)\hat{R}_s(k)w(k)}{w^H(k)\hat{R}_s(k)w(k) + w^H(k)\hat{R}_p(k)w(k)}, \tag{3.116}$$

which is obtained by eigendecomposing the covariance matrix; accordingly, it is named the eigenspace-Wiener post-filter. In order to improve the performance of the GSC, the eigenspace-Wiener post-filter was used as an adaptive weighting in Yang et al. (2022). More precisely, the weight of the GSC-ESW algorithm is obtained as below:

$$w_{GSC\text{-}ESW}(k) = H_{wiener}\,w(k) = \frac{w^H(k)\hat{R}_s(k)w(k)}{w^H(k)\hat{R}_s(k)w(k) + w^H(k)\hat{R}_p(k)w(k)}\,w(k). \tag{3.117}$$

Considering the above adaptive weight, the output power of the beamformer is obtained as follows:

$$\left|y_{GSC\text{-}ESW}(k)\right|^2 = w_{GSC\text{-}ESW}^H(k)\hat{R}(k)\,w_{GSC\text{-}ESW}(k) = H_{wiener}^2\,w^H(k)\hat{R}(k)w(k) = w^H(k)\hat{R}_s(k)w(k) - \frac{w^H(k)\hat{R}_p(k)w(k)}{1 + \dfrac{w^H(k)\hat{R}_p(k)w(k)}{w^H(k)\hat{R}_s(k)w(k)}}. \tag{3.118}$$

Comparing the above equation with (3.115), the difference lies in the second term, which corresponds to the noise power. Since the denominator of the second term in (3.118) is greater than 1, the noise power is smaller than in (3.115). Therefore, the output of the GSC-ESW method is closer to the desired signal.

3.5.2.7 Generalized Sidelobe Canceler Based on Cross Subaperture Averaging

In order to improve the resolution and contrast of the GSC algorithm, a cross subaperture averaging technique was proposed in Yang et al. (2021) to estimate the covariance matrix; in this technique, the covariance between every pair of subarrays is calculated. More precisely, instead of (3.13), the new covariance matrix $\hat{R}_C(k)$ is estimated according to the following equation:

$$\hat{R}_C(k) = \frac{1}{(N-L+1)^2}\sum_{i=1}^{N-L+1}\sum_{j=1}^{N-L+1} x^{(i)}(k)\left[x^{(j)}(k)\right]^H. \tag{3.119}$$

Note that in the conventional covariance matrix estimation, i.e., (3.13), the diagonal entries of the sub-covariance matrices $x^{(l)}(k)\left[x^{(l)}(k)\right]^H$ are always positive, while the other entries can be positive or negative. Therefore, by summing the sub-covariance matrices, the final estimated covariance matrix becomes close to the identity matrix, especially for a large number of subarrays; this causes the performance of the GSC algorithm to approach that of the non-adaptive DAS beamformer. By estimating the covariance matrix according to (3.119), all the entries of the estimated matrix, including the diagonal entries, can be positive or negative, which overcomes the problem of accumulating the diagonal entries. Furthermore, in Yang et al. (2022), it was proposed to use the forward-cross-backward (FCB) subaperture averaging method in order to avoid the need for diagonal loading and achieve a more robust estimation. In this regard, the final covariance matrix is estimated similarly to (3.88), in which the matrix $\hat{R}(k)$ is replaced with $\hat{R}_C(k)$:

$$\hat{R}_{FCB}(k) = \frac{1}{2}\left(\hat{R}_C(k) + J\hat{R}_C(k)^T J\right). \tag{3.120}$$

3.6 Robustness Improvement of Adaptive Beamformers

115

3.6 Robustness Improvement of Adaptive Beamformers As discussed earlier, the minimization problem of the adaptive MV beamformer is written such that the magnitude gain of the signal received from the desired direction is maintained while the signals received from other directions are suppressed simultaneously, i.e., (2.26). Then, the weight vector is adaptively obtained based on the considered constraints. Obtaining and performing the weight vector with the consideration of minimizing the variance of the interference plus noise, signal cancellation may occur. This phenomenon is due to the correlation between the desired and interfering signals, which leads to the underestimation of the signal amplitude. Signal cancellation is a common concern in medical US imaging when the adaptive MV algorithm is going to be used to construct the images. This topic was previously discussed in Sect. 2.3.3.4 in detail. Inaccurate estimation of the covariance matrix is another reason for degrading the performance of the MV algorithm, which can also result in signal cancellation. To deal with this issue, tricks that increase the robustness of the algorithm must be applied. In this section, techniques that lead to a more accurate estimation of the covariance matrix, or equivalently, increasing the robustness of the adaptive MV algorithm, are discussed.

3.6.1 Minimum Variance Parameters: Diagonal Loading Factor and Subarray Length The diagonal loading technique is one of the most commonly used techniques to reduce the signal cancellation effect. The principles of this technique were explained in Section 2.3.3.3, and it was proved that it makes a trade-off between the robustness and the image resolution. To evaluate the performance of the diagonal loading technique on the robustness of the reconstructed image, three equal-amplitude point targets are simulated, each of them being 1.5 .mm apart from the other. The adaptive MV algorithm is performed with different values of the parameter .ξ corresponding to the diagonal loading technique. Note that the diagonal loading factor is applied according to (3.14). The resulting images are shown in Fig. 3.46a1–d1. Also, the corresponding lateral variations plot is presented in Fig. 3.47a. The signal cancellation phenomenon can be clearly seen in the case in which the diagonal loading is not applied, i.e.,.ξ = 0; the amplitude of the reconstructed point target is underestimated. This issue can be concluded from the lateral variations plot shown in Fig. 3.47a, and also, the intensity of the reconstructed point targets located on either side of the middle point target shown in Fig. 3.46a1. Applying the diagonal loading technique, the effect of the signal cancellation is suppressed. As the value of the parameter .ξ increases, the robustness is improved. However, the resolution of the resulting image is degraded; the FHWM evaluation metric is increased for about 0.02 .mm, 0.04 .mm, 1 , and 0.2 .mm by using the diagonal loading technique with the parameters .ξ = 100L 1 1 .ξ = , and .ξ = L , respectively, compared to the case in which no diagonal load10L

116

3 Beamforming Algorithms in Medical Ultrasound …

Fig. 3.46 The reconstructed images of the simulated point targets obtained from different values of the parameter.ξ while the subarray length is constant (first row), and different values of the subarray length while the parameter .ξ is constant (second row). Note that the adaptive MV algorithm with . L = 1, i.e., part (d2), is equivalent to the non-adaptive DAS algorithm. The images are shown with the dynamic range of 50 .dB

ing is performed (.ξ = 0). This confirms the trade-off between the resolution and the robustness of the algorithm using the diagonal loading technique. The behavior of the image resolution for different values of the parameter .ξ is depicted in Fig. 3.48a. Although the resolution of the MV algorithm is degraded as the robustness is increased, note that the resolution of the resulting image is still better than the non-adaptive DAS beamformer. Subarray length is another parameter that affects the robustness of the adaptive MV algorithm; the shorter the subarray length, the higher the robustness of the algorithm. This improvement is at the expense of resolution degradation, as discussed previously in Sect. 3.4 of this book. Figure 3.46a2–d2 demonstrate the reconstructed images obtained from the adaptive MV beamformer using different values of subarray length. Note that in the case in which . L = 1, i.e., the reconstructed image shown in Fig. 3.46d2, the result is equivalent to the non-adaptive DAS beamformer. It can be seen from the figures that by decreasing the subarray length, the signal cancellation effect is suppressed. Considering the reconstructed images as well as the corresponding lateral variations plot shown in Fig. 3.47b, it can be concluded that among different values of the subarray length, . L = N2 − 8 = 40 outperforms others; signal cancellation is overcome successfully while the resolution is not significantly degraded. The quantitative evaluation also shows that the FWHM value of the reconstructed image obtained from . L = 40 is increased for about 0.01 .mm compared to . L = N /2. While the FWHM degradation is about 0.04 .mm and 0.15 .mm compared to . L = N /2 when the subarray length is considered as . L = N /3 and . L = N /4, respectively. Figure 3.48b shows the behavior of the FWHM value, or equivalently, the image resolution, by changing the subarray length. It can be seen

3.6 Robustness Improvement of Adaptive Beamformers

117

Fig. 3.47 The lateral variations plot corresponding to Fig. 3.46. a corresponds to the reconstructed images with . L = N /2 and different values of .ξ (. N = 96). Also, b corresponds to the reconstructed images with .ξ = 0 and different values of .L

from the demonstrated graph that the subarray length equal to a little bit smaller than half the number of elements results in the best performance in terms of resolution while the signal cancellation effect is reduced simultaneously, as shown in Fig. 3.47b. This conclusion has already been made in Austeng et al. (2009). In the simulation corresponding to the results presented here, . N = 96. Therefore, half the number of elements results in . N /2 = 48, and a bit smaller than the obtained value is considered to be .40. Comparing the calculated FWHM of the DAS method and the adaptive MV beamformer using different values of the subarray length, it is concluded that the performance of the adaptive MV beamformer is improved compared to the DAS algorithm in all cases. As the subarray length reaches a single element (. L = 1), the performance of the algorithm would be equivalent to the DAS algorithm. Therefore, the following key point is obtained from the evaluations performed in this section:

118

3 Beamforming Algorithms in Medical Ultrasound …

Fig. 3.48 The calculated FWHM value of the non-adaptive DAS beamformer as well as the adaptive MV algorithm using a different values of the parameter .ξ , and b different values of the subarray length



> MV vs. DAS

As the robustness of the MV algorithm increases, the resolution will be degraded. However, the resolution of the resulting image is still better compared to the nonadaptive DAS algorithm. The worst resolution of the MV algorithm is obtained for . L = 1, which results in the reconstructed image equivalent to the DAS algorithm.

3.6.2 Forward-Backward Minimum Variance Algorithm The FBMV algorithm, which is previously discussed in Sect. 3.5.2.3, was proposed to improve the image contrast. The superiority of this algorithm can be mainly seen from the reconstructed images obtained from the speckle-generating mediums such as Fig. 3.43, in which the image with an acceptable contrast is generated using the FBMV algorithm without performing the temporal averaging technique. As mentioned earlier, participation of the backward estimation in addition to the forward estimation, the sample covariance matrix is estimated using more observations, and therefore, the algorithm is expected to be more robust compared to the basic MV algorithm. To see this, the simulated three point targets shown in Fig. 3.46 are also processed using the FBMV algorithm, and the corresponding lateral variations plot is presented in Fig. 3.49. Comparing the FBMV beamformer and the basic MV algorithm in which no diagonal loading is applied, it can be seen that the FBMV algorithm can successfully suppress the signal cancellation effect. This indicates the robustness of the FBMV algorithm.

3.6 Robustness Improvement of Adaptive Beamformers

119

Fig. 3.49 The lateral variations plot corresponding to the simulated point targets using DAS, MV and FBMV. . L = M/2 and . K = 0 are considered for the MV-based algorithms

3.6.3 Amplitude and Phase Estimation Algorithm The amplitude and phase estimation (APES) algorithm is a MV-based algorithm that is developed to increase the robustness of the MV algorithm. In this regard, the minimization problem of the basic MV algorithm is modified; it is tried to make the output of the MV beamformed data as close as possible to a plane wave with the wavenumber of .k x (Blomberg et al. 2009; Holfort et al. 2008). More precisely, note that the output of the MV algorithm is obtained as (3.16). Minimizing the difference between the obtained output and the plane wave with the wavenumber of .k x , the following problem is written:

.

min

w(k),αPW

M−L+1 Σ | | 1 |w H (k)x (l) (k) − αPW e jkx xl |2 , M − L + 1 l=1

subject to w H (k)a = 1.

(3.121)

In the above minimization problem, .xl denotes the .x coordinate of the .lth element, x (l) (k) is the .lth subarray at time index .k which is defined in (3.12), and .αPW is the complex amplitude of the desired plane wave which is an unknown parameter. As it can be seen from the above problem, there exist two unknowns: .αPW and .w(k). To obtain the solution, the minimization problem with respect to .αPW is first considered, and the problem is solved using the Lagrangian multiplier method. This results in the following solution:

.

αˆ

. PW

= w H (k)G(k x ),

(3.122)

where:

.

G(k x ) =

M−L+1 Σ 1 x (l) (k)e− jkx xl . M − L + 1 l=1

(3.123)

120

3 Beamforming Algorithms in Medical Ultrasound …

Inserting (3.123) into (3.121), the minimization problem is updated as below: .

ˆ min w H (k) Q(k)w(k), w(k)

subject to w H (k)a = 1,

(3.124)

ˆ ˆ = R(k) − G(k x )G(k x ) H . Again, using the Lagrangian multiplier where . Q(k) method, the optimum weight of the above minimization problem is obtained as below: wAPES (k) =

.

−1 ˆ Q(k) a . H −1 ˆ a Q(k) a

(3.125)

It can be seen that the weight vector obtained from the APES minimization problem is similar to the one obtained from the MV algorithm, with this difference that the sample covariance matrix is modified; the covariance matrix of the interference plus ˆ ˆ is estimated and replaced with the array covariance matrix (. R(k)) noise (. Q(k)) which is used in the basic MV algorithm. It has been shown that the APES algorithm improves the robustness and suppresses the signal cancellation effect. The reason is that the covariance matrix of interference plus noise is estimated in this algorithm. In other words, the desired signal does not exist in the estimated covariance matrix ˆ . Q(k). However, it should be noted that this achievement is at the expense of a slight degradation of the resolution.

3.6.4 Modified Amplitude and Phase Estimation Algorithm According to the discussion performed in the previous section, it is concluded that the APES algorithm improves the amplitude resolution compared to the MV beamformer. However, the spatial resolution is slightly degraded using the APES algorithm. The modified APES (MAPES) algorithm, as its name implies, is the modified version of the APES method in which the goal is to take advantage of both the good spatial resolution of the MV beamformer and the good amplitude resolution of the APES method. In this regard, a parameter, denoted as .η, is involved in the estimated covariance matrix . Qˆ that makes a trade-off between the MV and APES algorithms ˜ and is written (Mohammadzadeh 2016). The new covariance matrix is denoted as . Q, as below: .

˜ ˆ Q(k) = R(k) − ηG(k x )G H (k x ).

(3.126)

Replacing . Q˜ with the modified covariance matrix in (3.125), the weight vector corresponding to the MAPES algorithm is obtained as below:

3.6 Robustness Improvement of Adaptive Beamformers

wMAPES (k) =

.

−1 ˜ Q(k) a . H −1 ˜ a Q(k) a

121

(3.127)

From (3.126), one can see that for .η = 0, the resulting weight vector would be equivalent to the basic MV algorithm. Also, for .η = 1, the performance of the MAPES algorithm reaches the APES method. It is obvious that the performance of the MAPES algorithm varies by changing the parameter .η; as the value of the parameter .η increases, the amplitude resolution would be improved. However, the spatial resolution would be degraded. In contrast, the spatial resolution would be improved as .η gets closer to zero at the expense of the amplitude resolution degradation. A constant value for the parameter .η can be considered during the image formation process. However, to take advantage of the MAPES algorithm at its best, it is expected to change the value of .η adaptively; it is desirable to increase the value of the parameter .η around the strong scatterers. Therefore, the performance of the algorithms will be close to the APES algorithm and the signal cancellation phenomenon is prevented. Also, in areas farther from the strong scatterers, it is desirable to decrease the value of the parameter .η to achieve a good spatial resolution of the MV algorithm. According to these descriptions, it can be concluded that the coefficients of the CF algorithm can be matched well to assign the appropriate values to the parameter .η adaptively. According to (3.126) and (3.19), the covariance matrix of the MAPES is, therefore, written as below: .

˜ ˆ Q(k) = R(k) − C F(k)G(k x )G H (k x ) ⎛ |Σ |2 ⎞ | | M x (k) | ⎟ | i=1 i ⎜ H ˆ = R(k) − ⎝ Σ M ⎠ G(k x )G (k x ). M i=1 |xi (k)|2

(3.128)

3.6.5 Amplitude and Phase Estimation Plus Wiener Post-Filter Algorithm In the APES+wiener beamformer (Deylami and Asl 2016), as its name implies, the APES algorithm is combined with the wiener post-filter method. Using this technique, the resolution and contrast of the resulting image would be improved compared to the APES algorithm. The wiener post-filter is a post-processing weighting step in which the scaling factor is calculated and multiplied by the weight vector, as can be seen from (3.55). On one hand, in order to perform the wiener post-filter method, the estimated covariance matrix of the interference plus noise is required. On the other hand, in Sect. 3.6.3, it is observed that in the APES algorithm, the covariance matrix of the interference plus noise (i.e., . Q) is estimated and used to obtain the weight vector. In the APES+wiener beamformer, it is proposed to perform the estimated covariance matrix which is obtained in APES to calculate the coefficients of the

122

3 Beamforming Algorithms in Medical Ultrasound …

wiener post-filter. The weight vector is updated accordingly. More specifically, by combining the estimated covariance matrix of the APES algorithm and the wiener post-filter, (3.57) which shows the formulation of the wiener post-filter, is rewritten as below: .

HAPES (k) =

|˜s (k)|2 H |˜s (k)| + w APES (k) Q(k)wAPES (k) 2

,

(3.129)

where .|˜s (k)|2 denotes the power of the desired signal which is approximated from the following equation: .

H |˜s (k)|2 = w APES (k)Rs (k)wAPES (k).

(3.130)

In the above equation, . Rs (k) is the estimated covariance matrix of the desired signal ˆ − Q(k). Note that (3.129) is the modified which is considered as . Rs (k) = R(k) version of (3.57) in which the weight vector of the APES algorithm is used, and also, the estimated covariance matrix of the interference plus noise is replaced with . Q(k). Once the coefficients of the modified wiener post-filter are calculated according to (3.129), the weight vector of the APES+wiener beamformer is obtained as below: wAPES+wiener (k) = HAPES (k)wAPES (k).

.

(3.131)

Briefly, it can be said that in the APES+wiener beamformer, the covariance matrix of the interference plus noise, as well as the covariance matrix of the desired signal, are estimated using the APES algorithm. Then, the estimated parameters are used to obtain the coefficients of the wiener post-filter. Finally, the output beamformed data is obtained as below: y

. APES+wiener

(k) =

M−L+1 Σ 1 H wAPES+wiener (k)x (l) (k). M − L + 1 l=1

(3.132)

3.7 Low-Complexity Minimum Variance Beamforming Algorithms The results obtained from the MV-based algorithms confirm that this adaptive beamformer improves the resolution and contrast of the constructed image compared to the data-independent DAS algorithm. However, this improvement is achieved at the expense of high computational complexity. Therefore, high-quality real-time imaging is challenging. The major computational load of the MV algorithm is related to the estimation of the covariance matrix and its inversion. In order to accurately determine the computational complexity of the MV algorithm, it is desired to count its floating point operations (flops), which are presented in Table 3.1. Note that a

3.7 Low-Complexity Minimum Variance Beamforming Algorithms

123

Table 3.1 The order of computational complexity of different parts of the MV algorithm Calculations

Flops

Covariance matrix

.

Weight vector Final output

.N

(

)

L2 5 2 + 2N + 2 2 3 2 . L + L + 2L 3

L

+ 3L − 2

flop refers to a basic mathematical operation including a summation, subtraction, multiplication, or division (Golub and Van Loan 1996). Flops that are required to estimate the covariance matrix, calculate the weight vector, and obtain the final output are determined according to (3.13), (3.15), and (3.16), respectively. Considering the total flops presented in Table 3.1, it can be concluded that the covariance matrix ˆ ∈ C L×L estimation using the spatial smoothing, and also, its inversion which is .R associated with the weight vector calculation, require the computational complexity with the order of . O(L 2 ) and . O(L 3 ), respectively. Implementing the MV algorithm on the graphics processing unit (GPU) significantly accelerates the computation time of this algorithm (Fath et al. 2012). Moreover, the computational complexity of the MV algorithm is also affected by its adjusted parameters; in particular, it is obvious that decreasing the subarray length speeds up the MV processing. However, as the subarray length decreases, the quality of the resulting image is degraded. This issue is discussed in Sect. 3.4. To reduce the computational complexity of the MV algorithm while its good performance is retained, several techniques have been performed. In this section, these techniques are going to be discussed in more detail.

3.7.1 Low-Complexity Adaptive Beamformer Using Pre-determined Weights To reduce the computational complexity of the MV beamformer, a low-complexity adaptive (LCA) method was proposed in which the time-consuming processing steps of calculating the weight vector are removed (Synnevåg et al. 2011). In this method, some pre-determined weight vectors are designed and used to obtain the reconstructed image. The weight vectors, the number of which is denoted as . P, are designed by considering the response of the MV weights in a common medical US scenario. The goal of designing the pre-determined weight vectors is to eliminate the high-computational processing steps performed in the MV algorithm. In all of the pre-determined weight vectors, the condition .w(k) H a = 1 is met. This can be achieved by simply normalizing the weight vector. To construct the image at each time index .k, the weight vector is selected among . P pre-determined weight vectors such that the variance of the output beamformer is minimized. More precisely, the optimum weight vector is selected based on the following minimization problem:

124

3 Beamforming Algorithms in Medical Ultrasound … .

{| |2 } min E |w¯ Hp (k)x(k)| , p

p = {1, · · · , P},

(3.133)

where .w¯ p (k) denotes the . pth pre-determined weight vector. It is well-known that the variance of the output beamformer should be estimated. The estimation is performed from the following equation: σˆ 2 (k) =

. y LCA

K Σ 1 |yLCA (k + n)|2 , 2K + 1 n=−K

(3.134)

where . yLCA (k) = w¯ p (k) H x(k) is the output of the LCA method at time index .k. It can be seen from the above equation that the variance can be estimated by temporal averaging over .2K + 1 samples. For . K = 0, the estimation is performed using a single sample .k. The computational complexity of the LCA algorithm is . P − 1 times the computational burden of the DAS algorithm which is considerably lower than the adaptive MV beamformer. In the case in which the temporal averaging is used to estimate the variance of the output, the computational complexity of the algorithm is not increased (compared to the case in which . K = 0) if the .2K + 1 last values of the output are stored for all . p. In such a case, the variance estimation is updated as below: ( ) 1 |yLCA (k − K )|2 − |yLCA (k + K + 1)|2 . 2K + 1 (3.135)

σˆ 2 (k + 1) = σˆ y2LCA (k) −

. y LCA

The pre-determined weight vectors play an important role in the performance of the LCA algorithm. The pre-determined weight vectors usually include some classical windows, such as Kaiser and rectangular windows, in which a trade-off exists between the main lobe width and sidelobe level. Furthermore, by increasing the weighting toward the aperture edges, a new weight vector is generated in which the resolution of the rectangular window is pushed. In addition to the Kaiser window, its inverse version is also used as below: w p (k) =

.

1 , w K (k)

(3.136)

where .w K (k) ∈ C N ×1 denotes the Kaiser window. By doing so, the main lobe width would be improved compared to the rectangular window. However, this improvement is at the expense of a higher sidelobe level. Also, some sets of weight vectors are designed by experience; for instance, a window with an asymmetric response can better define the edges compared to the symmetric one. Designing the asymmetric window is done by shifting the peak of the response of the Kaiser window as below: w p (k) = w K (k)e− j2πφk .

.

In the above equation, the shift is determined by the parameter .φ.

(3.137)

3.7 Low-Complexity Minimum Variance Beamforming Algorithms

125

According to Synnevåg et al. (2011), twelve pre-determined weight vectors, including a rectangular window and various Kaiser windows with different parameters, are considered for the LCA method (. P = 12). The more the variety of the predetermined weight vectors, the better the performance of the LCA method. However, this improvement is at the expense of higher computational complexity. It has been shown that temporal averaging is not necessary to construct the point targets, i.e., an acceptable quality image comparable with the MV algorithm would be obtained using | the LCA |method with . K = 0. In such a case, the variance of the output equals .|w¯ p (k) H x(k)| according to (3.134). However, to construct the images corresponding to the speckle-generating mediums, temporal averaging is required to make the speckle more homogeneous, and also, increase the total brightness. The drawback of the LCA method compared to the MV algorithm is the limited number of pre-determined weight vectors which cannot cover all the possible scenarios.

3.7.2 Beamspace Adaptive Beamforming In the beamspace (BS) algorithm, the data is transformed from the element space to the beam space. Then, the MV processing steps are performed on the transformed data. By transforming the data to the beam space, it is possible to select a limited number of beams that potentially cover the areas containing the interference. Therefore, the covariance matrix would be estimated successfully. This property is not achieved in the element space without the performance degradation of the MV algorithm. The goal of using the BS algorithm is to reduce the dimensions of the estimated covariance matrix, and therefore, speed up the processing while the performance of the algorithm is comparable with the MV beamformer (Nilsen and Hafizovic 2009). The BS beamformer is defined as the process in which the outputs of multiple beamformers are combined. In this algorithm, it is necessary to specify which beams should be selected for the combination. Indeed, in this process, it is as if the outputs of multiple DAS beamformers that are steered in different directions are combined. Transforming the data from the element space to the beam space is performed using the so-called Butler matrix . B ∈ C N ×N , the entries of which are obtained as below: 1 b(i, j) = √ e− j2πi j/N . N

.

(3.138)

In the above relation, .b(i, j) denotes the entry of the Butler matrix corresponding to its .ith row and . jth column. The Butler matrix is equivalent to the normalized version of the N-point discrete Fourier transform (DFT) matrix. The columns of the Butler matrix are orthogonal to each other. Therefore, it is concluded that . B H B = I. Multiplying the Butler matrix with the received signal of the N-element array for each time index .k, the weighted beam is obtained as below:

126

3 Beamforming Algorithms in Medical Ultrasound … .

x B S (k) = Bx(k).

(3.139)

Now, the minimization problem of the BS algorithm is written based on the weighted beams: .

min w H B S (k)R B S (k)w B S (k),

w B S (k)

subject to w H B S (k)a B S = 1,

(3.140)

where: .

{ } R B S (k) = E x B S (k)x H B S (k) (3.141)

Also, we have . a B S = Ba. It can be seen that the above minimization problem is similar to the minimization problem of the MV algorithm shown in (2.26). The optimum weight vector is, therefore, obtained similarly to the MV algorithm which results in the following equation: w B S (k) =

.

R−1 B S (k)a B S

−1 aH B S R B S (k)a B S

.

(3.142)

In the case where all of the beams contribute to obtaining the optimum weight vector, the result would be equivalent to the weight vector obtained from the MV beamformer. To see this, note that.w B S (k) = Bw(k), and the Butler matrix is unitary. Therefore, we have: { } w H (k)R(k)w(k) = w H (k)E x(k)x H (k) w(k) { } = w H (k)E B H Bx(k)x H (k)B H B w(k) } { = w H (k)B H E Bx(k)x H (k)B H Bw(k)

.

= wH B S (k)R B S (k)w B S (k). Moreover, we have: w H (k)a = w H (k)B H Ba

.

= w B S (k)a B S . These two equalities presented above prove the claim. To reduce the computational complexity and speed up the processing while the performance of the BS algorithm is comparable with the MV beamformer, some beams should be removed from the vector . x B S . By doing so, the information corresponding to the main response axes is omitted and the remaining beams, the number of which is denoted as. N B S , will correspond to the directions in which minimizing the

3.7 Low-Complexity Minimum Variance Beamforming Algorithms

127

Fig. 3.50 Diagram of the BS algorithm processing steps

interference is of interest. Removing the beams corresponding to the main response axes is performed during the transformation process. More precisely, some rows of the Butler matrix are removed and the resulting matrix is written as below: .

]T [ B˜ = b1 , b2 , · · · , b N B S ,

(3.143)

˜ Considering where . B˜ ∈ C N B S ×N , and . biT ∈ C N ×1 denotes .ith row of the matrix . B. . N B S < N , the computational complexity of inversing the covariance matrix will be reduced from . O(N 3 ) to . O(N B3 S ). The schematic of the processing steps of the BS algorithm is depicted in Fig. 3.50. Generally, it can be said that the smaller the value of. N B S , the higher the processing speed. To determine which beams should be retained to keep the performance of the algorithm close to the MV beamformer, a priori knowledge about the spatial distribution of the interference is required. In US imaging, it is known that the received signals corresponding to the focal point are coherent. Also, the received signals of the farther areas from the focal point are incoherent. It can be concluded that the lowfrequency components in the spatial frequency domain correspond to the received signals obtained from the focal point. Therefore, a few first rows of the Butler matrix that correspond to the low-frequency components are desired to be maintained in order to perform the dimensionality reduction effectively. Furthermore, investigating the energy of the transmit-receive beamfan, i.e., the production of the transmit and receive beam patterns, is a good candidate to see which beams are expected to be maintained to reduce the computational complexity while the performance of the algorithm is comparable with the MV beamformer. The energy of the transmitreceive of the .ith beam is obtained from the following equation: { .

E(i) =

π/2

−π/2

|BPT X (θ )BP R X (θ )|2 dθ, i = 0, 1, · · · ,

N , 2

(3.144)

where .BPT X (θ ) and .BP R X (θ ) denote the beampattern of the transmission and reception, respectively, which can be considered as below:

128

3 Beamforming Algorithms in Medical Ultrasound …

BPT X (θ ) =

.

BP R X (θ ) =

sin

(πN

sin(θ )

)

(2 ) , sin π2 sin(θ ) ]) ( [ sin π2N sin(θ ) − 2iN ]) . ( [ sin π2 sin(θ ) − 2iN

It is assumed that the elements participating in the transmission and reception are similar and equal. N . The normalized cumulative transmit-receive energy is calculated as below: Σi j=1 E( j) .Cum{E(0 : i)} = Σ (3.145) . N /2 j=1 E( j) Analyzing the above equation, one can obtain that by using only the first few beams, a large percentage of the transmit-receive energy is conserved. Therefore, it is concluded that the first few beams that have a major effect on the quality of the reconstructed image are preserved, and the remaining beams are removed to speed up the processing. Note that similar to the MV beamformer, in the BS algorithm, spatial smoothing is necessary to be applied for estimating the covariance matrix . R B S . More precisely, the transformation process is applied to the divided subarrays with the length of . L and the spatially smoothed covariance matrix is estimated accordingly. The resulting covariance matrix is shown as . Rˆ B S . In such a case, the dimensions of the Butler matrix would be as . B˜ ∈ C N B S ×L . Also, diagonal loading and temporal averaging can be used. It can be concluded that by using the BS method, the dimensions of the data being processed are reduced. Generally, we have: ˜ w B S (k) ∈ C N B S ×1 = Bw(k),

.

ˆ Rˆ B S (k) ∈ C N B S ×N B S = B˜ R(k), ˜ a ∈ C N B S ×1 = Ba.

(3.146)

One should note that the beampatterns in the BS method are not symmetric. For this reason, an odd number should be assigned to the parameter . N B S in order to have a symmetric beampattern. To better understand this issue, pay attention to Fig. 3.51 which shows the beampatterns of the first three beams in the beamspace (. N B S = 3). As can be seen from the figure, the first beam is steered in the direction of zero degree. The first beam, together with the second and third beams that are placed on both sides of the first beam, lead to the production of a symmetric beampattern.

3.7 Low-Complexity Minimum Variance Beamforming Algorithms

129

Fig. 3.51 Beampatterns of first three beams corresponding to the BS algorithm

3.7.3 Beamspace Method Based on Discrete Cosine Transform In the BS method discussed in Sect. 3.7.2, it has been concluded that the performance of the algorithm is degraded when an even number of beams is selected to perform the transformation, e.g.,. N B S = 2. This issue is due to the asymmetry of the beampattern. To overcome this limitation, the BS method based on the discrete cosine transform (DCT) can be used (Deylami and Asl 2017); in this method, the DCT matrix is used as the basis to perform the transformation instead of the DFT (or Butler) matrix. Accordingly, a few first low-frequency components of the DCT matrix (i.e., the . N B S first columns of the DCT matrix) are maintained to reduce the dimensions of the data in beam space. The DCT basis matrix is denoted as .C the entries of which are calculated as below: ⎧ ⎨ √1 for i = 0, 0 ≤ j ≤ (N − 1) N ) ( .c(i, j) = π( j+ 21 )i 2 ⎩ √ cos for 1 ≤ i ≤ (N − 1), 0 ≤ j ≤ (N − 1). N N (3.147) where .c(i, j) denotes the entry corresponding to the .ith row and . jth column of the DCT matrix. Note that .C is a unitary matrix, i.e., we have .C C H = I. The processing steps are exactly similar to the BS method discussed in Sect. 3.7.2 except that the Butler matrix . B is replaced with the DCT matrix .C. Using the DCT matrix as the basis, the beampattern of the beams in this beam space would be symmetric. Therefore, the performance of the algorithm would not be degraded for an even number of selected beams. In particular, . N B S = 2 can be selected to perform the transformation, and therefore, the computational complexity would be reduced. To better see the symmetrical property of the DCT-based BS method, pay attention to Fig. 3.52. Comparing Figs. 3.51 and 3.52, one can see that by using the DCT-based BS method, the beamspace will be symmetric even for. N B S = 2

130

3 Beamforming Algorithms in Medical Ultrasound …

Fig. 3.52 Beampatterns of beams 1-3 corresponding to DCT-based BS method

in contrast to that of the DFT-based BS method. Moreover, note that the coefficients of the DCT matrix are real while the coefficients of the DFT matrix are complex. Therefore, the processing speed of the DCT-based method is higher compared to the DFT-based one. Considering . N B S = 2 in the BS method using the DCT basis matrix, the estimated covariance matrix in the beam space will have the dimensions of .2 × 2. Inverting the resulting covariance matrix can be performed by changing the entries of the main diagonal and multiplying the entries of the sub diagonal by .−1, the computational complexity of which is negligible. Generally, using the DCT basis matrix, the BS method preserves more energy in lower dimensions compared to the case in which the Butler matrix is used.

3.7.4 Beamspace Method Based on Modified Discrete Cosine Transform Using the DCT basis matrix in the BS method, the noise beams have a higher peak magnitude compared to the signal beam. In other words, the beams with the main response axes in non-zero directions have a higher magnitude compared to the ones in the zero direction. To deal with this limitation, in Vaidya and Srinivas (2020) it has been proposed to use a kind of DCT basis matrix which results in the lower magnitude noise beams compared to the magnitude of the signal beam. By modifying the DCT basis matrix presented in (3.147), the modified DCT matrix .C˜ ∈ C N ×N is constructed the entries of which is obtained as below: ( ) ( π ) πi j cos , 0 ≤ i, j ≤ N − 1. (3.148) .c(i, ˜ j) = sin 2N N (π ) is the normalization factor that is considered in order In the above equation, .sin 2N to set the magnitude of the signal beam to one. One should note that the defined matrix

3.7 Low-Complexity Minimum Variance Beamforming Algorithms

131

C˜ is not unitary since its rows are not orthogonal. Therefore, the transformation matrix is redefined as below:

.

.

[ H ]− 21 , T = C˜ C˜ C˜

(3.149)

and consequently, we have . T T H = I. Reducing some rows of the transformation matrix . T and keeping only . N B S rows of it, the transformation process results in . N B S number of DAS beams, each of which is focused on a specific direction, as discussed in Sect. 3.7.2. Consider the transformed data vector . x B S using the transformation matrix defined in (3.149). The first entry of . x B S includes the signal, and the remaining corresponds to the noise. The transformation matrix . T assigns a lower magnitude to the noise beams compared to the signal beam, as can be concluded from (3.148). Note that similar to the DCT matrix defined in Sect. 3.7.3, the modified DCT transformation matrix is real. This reduces the computational complexity compared to the case in which the basis matrix is complex such as the Butler matrix discussed in Sect. 3.7.2. The drawback of the modified DCT matrix compared to the DCT matrix presented in (3.147) is that it results in a higher level of sidelobes. However, this problem can be overcome using the sidelobe suppression techniques of the MV beamformer. The processing steps of the BS algorithm using the newly defined transformation matrix is similar to the BS method discussed in Sect. 3.7.2 except that the Butler matrix . B is replaced with the modified DCT transformation matrix defined in (3.149).

3.7.5 Minimum Variance Beamformer Based on Principal Component Analysis In the principal component analysis (PCA)-based method, the MV weight vector is not calculated directly; rather, it is approximated by a linear combination of a few selected dominant principal components (Kim et al. 2014). In this method, a set of weight vectors are obtained offline using the MV beamformer. These predetermined weight vectors are used to obtain the principal components. Consider . Q pre-determined weight vectors are acquired. The covariance matrix of the predetermined weight vectors, known as . Rw (k) ∈ C L×L , is calculated as below:

.

Rw (k) =

Q ][ ]H 1 Σ[ wq (k) − μ(k) wq (k) − μ(k) , Q q=1

(3.150)

where.wq (k) ∈ C L×1 is the.qth weight vector, and.μ(k) ∈ C L×1 is the mean vector of the pre-determined weight vectors. The obtained covariance matrix is decomposed as below:

132

3 Beamforming Algorithms in Medical Ultrasound … .

Rw = VΛ−1 V H ,

(3.151)

where .Λ = diag [λ1 , · · · , λ L ] ∈ C L×L is a diagonal matrix consisting of the eigenvalues of the covariance matrix in descending order, .V = [v1 , v2 , · · · , v L , ] ∈ C L×L is a matrix consisting of the eigenvectors corresponding to the obtained eigenvalues, and .vi ∈ C L×1 denotes the orthonormal eigenvector of the eigenvalue .λi which is known as .ith principal component. The time index .k is ignored in the above equation and henceforth for simple writing. Note that these calculations are performed offline. Therefore, no matter how much the computational burden is borne so far. The matrix .V is full rank. Therefore, the weight vector can be considered as the linear combination of this matrix as below: w = Vβ,

(3.152)

.

where .β ∈ C L×1 is a vector. Having the matrix .V, obtaining the optimum weight vector is equivalent to obtaining the vector .β. In such a case, the MV minimization problem is reformulated as below: .

min β H Rˆ P β, β

subject to β H a P = 1.

(3.153)

In the above problem, . a P = V H a. Writing the estimated covariance matrix . Rˆ p in the matrix from, we have: .

Rˆ p =

1 UU H, N −L +1

(3.154)

] [ where.U = u1 , · · · , u N−L+1 ∈ C L×(N −L+1) , and.ul = V H x (l) ∈ C L×1 . Recall that the vector . x (l) is the .lth subarray the entries of which are defined in (3.12). Temporal averaging is not considered in the estimated covariance matrix expressed above, although it can be applied. The solution of the minimization problem presented in (3.153) is obtained similar to the MV beamformer as below: −1

β=

.

Rˆ p a p −1

a Hp Rˆ p a p

.

(3.155)

The goal is to reduce the computational complexity of the MV beamformer. In this regard, it is observed that the first few columns of the matrix .V, or equivalently, the first few principal components are sufficient to express most of the weight space of the MV beamformer. Therefore, a few principal components (. N p ) are retained and the remaining are removed. The new matrix, some columns of which are removed, is obtained as below:

3.7 Low-Complexity Minimum Variance Beamforming Algorithms .

[ ] ˜ = v1 , · · · , v N p , V

133

(3.156)

˜ ∈ C L×N p and . N p < L. Replacing the matrix .V with .V ˜ in (3.152), we have: where .V .

˜ ˜ β, w˜ = V

(3.157)

where: β˜ =

.

−1 R˜ p a˜ p

∈ C N p ×1 ,

−1 a˜ Hp R˜ p a˜ p N p ×1 ˜H

a˜ p = V a ∈ C , 1 H R˜ p = U˜ U˜ ∈ C N p ×N p , N −L +1 ] [ ˜ (l) ∈ C N p ×1 . U˜ = u˜ 1 , · · · , u˜ N−L+1 , u˜ l = Vx By reducing the dimensions of the estimated covariance matrix from . L × L to . N p × N p , the computational complexity of the processing will be reduced to . O(N p3 ). Finally, the output of the PCA-based MV algorithm is obtained without directly calculating the weight vector .w˜ as below: y

. PCA

NΣ −L+1 1 H = β˜ u˜ l . N − L + 1 l=1

(3.158)

According to (3.150), the mean vector of the pre-determined weight vectors is subtracted from them before estimating the covariance matrix. This is a pre-processing step that is necessary for the standard PCA. By doing so, the DC component is ˜ will removed, and therefore, the summation of the entries along each column of .V H ˜ = 1 would not be met. This be equivalent to zero. Consequently, the constraint .β˜ V √ problem is overcome by adding a DC component equivalent to .1/ L to the entries ˜ Note that adding this DC component increases the error due to dimensional of .V. reduction. However, it still leads to an acceptable result comparable with the MV algorithm. If the pre-determined weight vectors are acquired accurately, the PCA-based MV algorithm results in a better estimation compared to the BS method which is discussed in Sect. 3.7.2. However, note that in some conditions, the BS method is more robust compared to the MV algorithm. This causes some of the estimated pixel values to differ significantly from the MV beamformer. As the pre-determined weight vectors are obtained from the medium which is closer to the actual one, the result of the PCA-based MV method would be more accurate, especially in the cases where . N p is small. In other words, it is desirable to acquire the training set as similar as possible to the actual data. If the training set, or equivalently, the pre-determined weight vectors are not close to the actual one, the performance of the algorithm would be degraded

134

3 Beamforming Algorithms in Medical Ultrasound …

but is still acceptable; in any condition, it is trying to obtain .β˜ in the way that the variance of the interference plus noise is minimized, as stated in (3.153). That is, the best possible solution is calculated in each condition. Therefore, the difference between the pixel values obtained from the PCA-based MV and the standard MV would be minimized.

3.7.6 Minimum Variance Beamformer Based on Legendre Polynomials So far, it is concluded that to reduce the computational complexity of the MV beamformer, one simple way is to transform the received data into a new space and reduce the dimensions of the estimated covariance matrix, as used in the BS method, as an example. In Bae et al. (2016), it has been proposed to use the Legendre polynomials to construct the basis matrix and perform the transformation. The processing steps are exactly similar to the BS method except that the transformation matrix is replaced with the new one which is expressed based on the Legendre Polynomials: .

] [ P = p0 , · · · , p L−1 ,

(3.159)

where. P ∈ C L×L is the Legendre Polynomials basis. The. jth column of the matrix. P ]T [ is defined as . p j = p0 j , p1 j , · · · , p(L−1) j ∈ C L×1 the entries of which is obtained according to the following equation:

.

pi j =

j Σ

i k ck j .

(3.160)

k=0

In the above equation, . pi j denotes the .ith entry corresponding to . jth column of . p j . The parameter .ck j is determined by the Gram-Schmidt orthonormalization process. For instance, the entries of the vector . p1 , that corresponds to the second column of the Legendre Polynomials basis defined in (3.159) is obtained according to (3.160) as below: ⎡ ⎤ ⎡ ⎤ 0 p01 ⎢ p11 ⎥ ⎢ ⎥ c01 + c11 ⎢ ⎥ ⎢ ⎥ . p1 = ⎢ (3.161) ⎥. .. ⎥ = ⎢ .. ⎣ . ⎦ ⎣ ⎦ . p(L−1)1

c01 + (L − 1)c11

To reduce the dimensions of the estimated covariance matrix, and consequently, speed up the processing, a few columns of the Legendre Polynomials basis is retained to perform the transformation, and the remaining are removed. The number of columns that are maintained is denoted as . Q. Note that . Q ≤ L. Similar to the BS method, a few first components of the Legendre Polynomials basis are considered to be used in

3.7 Low-Complexity Minimum Variance Beamforming Algorithms

135

order to dimensionally reduce the estimated covariance matrix. Therefore, the computational complexity of the MV method is reduced to . O(Q 3 ). It has been shown that considering . Q = 1, the performance of the algorithm reaches the non-adaptive DAS beamformer. The advantage of using the Legendre Polynomials-based MV method compared to the BS and PCA-based methods is that the entries of the Legendre Polynomials matrix are real. This is while the entries of the transformation matrices corresponding to the other two methods are complex. Therefore, the computational complexity of the transformation and the subaperture averaging process would be reduced using the Legendre Polynomials-based MV method. Moreover, the beampattern of each column of the matrix . P is symmetric compared to the BS and the PCA-based methods. This results in sharper and more symmetrical points being reconstructed for an even number of selected columns of the transformation matrix (i.e., . Q). In other words, the matrix . P can suppress the interference close to the focal point more efficiently and symmetrically, even for . Q = 2.

3.7.7 Decimated Minimum Variance Beamformer Similar to the BS method presented in Sect. 3.7.2, the idea of the decimated MV (DMV) algorithm is to reduce the dimensions of the covariance matrix in order to seep up the processing. The difference is that in the DMV method, the decimation process is performed in the array domain instead of beam space (Sakhaei 2015). In other words, no transformation matrix is used to transform the received data to another space. It is known that the low-frequency components in the frequency space correspond to the received signals from the focal point. To suppress the highfrequency components of the received signals, and also, suppress the noise, the received signal is first filtered. Then, the filtered signal is decimated according to the specified decimation factor. To filter the received signal, consider a tap weight vector with the length of . J , . f ∈ C J ×1 where . J < N . The filtered signal is denoted as . x f ∈ C(N −J +1)×1 and formulated as below (time index .k is not considered for simple writing): .

x f = Fx,

(3.162)

where . F ∈ C(N −J +1)×J is the convolution matrix for the tap weight vector . f . Note that the filter length affects the resolving capability; as the length of the filter increases, the spatial length of the filtered array would be decreased which degrades the resolving capability. However, to suppress the high-frequency components as well as reduce the aliasing effect due to the decimation process, the filter should be applied to the received signals. It is important to adjust the filter length appropriately such that the good performance of the algorithm remains while the aliasing effect is negligible.

136

3 Beamforming Algorithms in Medical Ultrasound …

After the decimation filter is applied to the received signal, the filtered signal is decimated to the length of . Nd which is denoted as . x d ∈ C Nd ×1 . If the decimation +1)−1 which is an integer value, the relation between factor is considered as .r = (N −J Nd −1 the vectors . x f and . x d is written as below: x

. d,m

= x f,(m−1)(r +1) ,

(3.163)

where .xd,m denotes the .mth entry of the vector . x d . Similarly, .x f,a represents the .ath entry of the filtered signal . x f . The above equation can be rewritten in the matrix form as below: .

xd = I d x f ,

(3.164)

where . I d ∈ C Nd ×(N −J +1) is the identity matrix some columns of which are decimated. Each entry of the decimated identity matrix corresponding to .ith row and . jth column is denoted as . I d (i, j) and is obtained as below: { I d (i, j ) =

1 0

j = (i − 1)r + 1 . otherwise

The optimization problem of the MV beamformer is now written based on the decimated received signal as below: .

min wdH Rˆ d wd , wd

subject to w dH ad = 1,

(3.165)

where . ad ∈ C Nd ×1 is the steering vector the entries of which are reduced. Also, ˆ d ∈ C Nd ×Nd is the estimated covariance matrix of the decimated received signals. .R Solving the above minimization problem similar to the MV algorithm, the following optimum weight vector would be obtained: −1

wd =

.

Rˆ d ad −1

adH Rˆ d ad

.

(3.166)

In the MV method, it is seen that the linear array is divided into . N − L + 1 overlapping subarrays to estimate the covariance matrix. In the DMV method, this process is also performed on the filtered array; after the received signal is filtered (producing a .(N − J + 1) × 1 vector), subarray division is performed which results in .(N − J + 1) − L + 1 number of overlapping subarrays. Then, the decimation proNd ×1 . Finally, cess is performed on each subarray, each of which is denoted as. x (l) d ∈C the covariance matrix is estimated as below:

3.7 Low-Complexity Minimum Variance Beamforming Algorithms

.

Rˆ d =

1 (N − J + 1) − L + 1

(N −JΣ +1)−L+1

( )H x (l) x (l) . d d

137

(3.167)

l=1

Also, diagonal loading and temporal averaging can be used to estimate the covariance matrix. Using the DMV method, the computational complexity of the MV algorithm reduces to . O(Nd3 ). It is known that the beampattern of the received signal includes a main lobe and a series of sidelobes. The sidelobes closest to the main lobe have the largest amplitude compared to other sidelobes. If at least two high-amplitude sidelobes on both sides of the main lobe are null in the adaptive algorithms, the performance would be improved compared to the non-adaptive DAS beamformer. Therefore, having three degrees of freedom (one to maintain the power of the desired signal, .w H a = 1, and the others to place null on the high-amplitude sidelobes close to the main lobe) is sufficient to improve the image quality compared to DAS. Considering this issue, . Nd = 3 in the DMV method improves the image quality compared to the DAS algorithm. Of course, considering . Nd = 5, it is possible to place null in four close sidelobes. Clearly, in such a case, the performance of the DMV algorithm will be improved. However, the computational complexity will be increased. The drawback of the DMV method is that its performance is degraded by increasing the imaging depth. This is due to the fact that a large part of the signal is discarded. Therefore, the algorithm would not be able to resolve the close points in deep areas. Consequently, the resolution would be degraded considerably.

3.7.8 Minimum Variance Method Based on QR Decomposition Another way to reduce the computational complexity of the MV algorithm is to use the QR decomposition-based method (Park et al. 2016). Similar to the previous methods discussed to reduce the computational complexity, in the QR-based method, a transformation matrix denoted as . T is performed to reduce the dimensions of the received signals in another space. The .lth subarray in the transformation domain is expressed as: .

z (l) (k) = T (k)x (l) (k).

(3.168)

The minimization problem of the MV beamformer is rewritten based on the received data in the transformed domain as below: .

min w zH (k) Rˆ z (k)w z (k), wz

subject to w zH (k)a z = 1,

(3.169)

138

3 Beamforming Algorithms in Medical Ultrasound …

where . a z = T a. Also, . Rˆ z (k) is the covariance matrix in the transform domain that is estimated as below: .

Rˆ z (k) =

1 z (l) (k)z (l) (k) H . N −L +1

(3.170)

Solving the minimization problem expressed in (3.169), we have: −1

w z (k) =

.

Rˆ z (k)a z −1

a zH Rˆ z (k)a z

.

(3.171)

In the QR-based method, the goal is to make the estimated covariance matrix . Rˆ z (k) a scalar value. In such a case, inverting the covariance matrix will have no computational burden, and the weight vector can be simply calculated as below: w z (k) =

.

az . H az az

(3.172)

To this end, the subarray data matrix is defined as .U = [u1 , · · · , u L ] ∈ C(N −L+1)×L , [ ]T where.ul = x N −L+l (k), x N −L+l−1 (k), · · · , xl (k) ∈ C(N −L+1)×1 for.l = 1, · · · , L. More precisely, we have: ⎤ x N −L+1 (k) x N −L+2 (k) · · · x N (k) ⎢ x N −L (k) x N −L+1 (k) · · · x N −1 (k)⎥ ⎥ ⎢ . ⎥ . U(k) = ⎢ .. .. .. .. ⎦ ⎣ . . . . x2 (k) · · · x L (k) (N −L+1)×L x1 (k) ⎡

(3.173)

It can be seen that each row of the matrix .U(k) consists of a subarray. Note that the covariance matrix estimated from the subarrays in array space can be expressed based on the matrix .U(k) as below: .

ˆ R(k) =

1 U H (k)U(k), N −L +1

(3.174)

which is equivalent to (3.13). Applying the QR decomposition to the matrix .U, we have: .

U(k) = Q(k)S(k),

(3.175)

where . Q ∈ C(N −L+1)×L and . S ∈ C L×L are the orthogonal and the upper triangular matrices, respectively. Transposing (3.175), we have: .

U T (k) = ST (k) Q T (k).

(3.176)

3.7 Low-Complexity Minimum Variance Beamforming Algorithms

139

Now, multiplying both sides of (3.176) by . S−T (k), the following relation would be obtained: .

Q T (k) = S−T (k)U T (k).

(3.177)

Note that . ST (k) is a lower triangular matrix, the diagonal entries of which are nonzero. Therefore, this matrix is invertible. In the above equation,.U T (k) ∈ C L×(N −L+1) is obtained by transposing (3.173). Also, the matrix . Q T (k) is expressed as below: ⎤ q1,1 (k) q1,2 (k) · · · q1,N −L+1 (k) ⎢ q2,1 (k) q2,2 (k) · · · q2,N −L+1 (k) ⎥ ⎥ ⎢ Q T (k) = ⎢ . ⎥ .. .. .. ⎦ ⎣ .. . . . q L ,1 (k) q L ,2 (k) · · · q L ,N −L+1 (k) [ (1) ] = z (k), z (2) (k), · · · , z (N −L+1) (k) . ⎡

.

(3.178)

According to (3.173), (3.177) and (3.178), it can be concluded that the matrix . Q T (k) is interpreted as the transformed subarray data matrix. Therefore, . S−T (k) is set as the transformation matrix, i.e., . T (k) = S−T (k). Using the obtained transformation matrix, the estimated covariance matrix. Rˆ z (k) would be scalar. This is achieved using the QR decomposition principle. In particular, we have . a z = S−T (k)a. However, inverting the transformation matrix . S−T (k) is not desirable; the forward substitution technique can be used according to the following equation to compute . a z : .

ST (k)a z = a.

(3.179)

Finally, the beamformed data is obtained from the following equation: y

. QR-MV

(k) =

NΣ −L+1 1 w zH (k)z (l) (k), N − L + 1 l=1

(3.180)

where . z (l) (k) is obtained as (3.178). Also, the weight vector .w z (k) is calculated according to (3.172). The processing steps of the QR-based MV algorithm are summarized below: 1. Constructing the subarray data matrix defined in (3.173). 2. Obtaining the matrix . S(k) that satisfies .U(k) = Q(k)S(k). Then, obtaining the lower triangular matrix . ST (k) according to the obtained . S(k). Note that obtaining the orthogonal matrix . Q is not needed. 3. Obtaining the transformed steering vector according to (3.179). 4. Obtaining the transformed subarray data according to (3.168) with the consideration of . T (k) = S−T (k). 5. Obtaining the weight vector according to (3.172). 6. Calculating the beamformed data according to (3.180).

140

3 Beamforming Algorithms in Medical Ultrasound …

Using the discussed QR-based method, the computational complexity of the process is reduced to . O(L 2 ) compared to the basic MV beamformer. Mathematically, the QR-based method is equivalent to the basic MV algorithm; no approximation is made to achieve the beamformed data using the QR-based method. Therefore, the performance of this method is exactly the same as the MV algorithm while the computational complexity is reduced. The drawback of the QR-based method is that the temporal averaging and diagonal loading techniques cannot be applied to the estimated covariance matrix . Rˆ z (k). This issue leads to the poor performance of the QR-based method in cases such as imaging the speckle-generating mediums in which the temporal averaging technique is necessary to construct the speckle pattern homogeneously. Furthermore, as the diagonal loading technique cannot be applied to the covariance matrix, the algorithm may not be robust against mismatch errors.

3.7.9 Subspace Minimum Variance Beamformer Different high-speed MV-based methods have been introduced so far, most of which use a transformation matrix to perform the dimensional reduction process and speed up the algorithm. It is well-known that the computational complexity of the MV algorithm can also be reduced by decreasing some elements of the array directly. However, this leads to the quality degradation of the resulting image. To reduce the computational complexity of the MV bemaformer, a subspace (SS)-based method is proposed in Deylami and Asl (2016) in which the dimensional reduction is performed in array space. In this method, the covariance matrix of the received signals is estimated according to (3.13). Then, some rows of the estimated covariance matrix are eliminated. This method is discussed in more detail in the following. Assume that the imaging medium consists of .η sources. In such a case, at least .η independent observations are required to discriminate the sources. According to this concept, some rows of the estimated covariance matrix are removed, and only .η number of them is retained in the SS-based method. The dimensionally reduced covariance matrix is denoted as . Rˆ η (k) ∈ Cη×L . Note that the resulting covariance matrix is non-square. Therefore, there is no closed-form solution for the weight vector based on (3.15). To overcome this problem, note that if the following equation is satisfied, the power of the interference would be minimized while the amplitude of the desired signal is retained (the noise component is ignored): .

Rˆ η (k)w(k) = caη ,

(3.181)

where. aη ∈ Cη×1 is a vector of ones. Also,.c is a constant parameter. Note that.η < L. In order to obtain the weight vector, the covariance matrix is decomposed based on the QR decomposition principle: .

Rˆ η (k) H = Q(k)S(k),

(3.182)

3.7 Low-Complexity Minimum Variance Beamforming Algorithms

141

where . Q(k) ∈ C L×η and . S(k) ∈ Cη×η are the orthogonal and the upper triangular matrices, respectively. The weight vector can be considered as below: w(k) = Q(k)u(k),

.

(3.183)

More precisely, the weight vector .w(k) is considered as the projection of the vector .u(k) ∈ Cη×1 to the higher dimension vector .w ∈ C L×1 . This is achieved using the matrix . Q(k), the columns of which span the signal plus interference subspace. According to (3.181)–(3.183), the weight vector is obtained from the following equation: ( H )−1 .w(k) = c Q(k) S (k) aη . (3.184) Three issues remain that need to be addressed: • The parameter .c is unknown. • It is not clear which rows of the estimated covariance matrix should be removed. • The number of sources in the imaging medium, .η, is unknown. For the first issue, note that the constant parameter .c should be adjusted such that the constraint .w(k) H a = 1 is satisfied. To address the second issue, note that the on-axis signals are mainly concentrated on zero angles. It can be concluded that the signals received from the center element of the linear array are more prominent compared to the ones received from the side ˆ are kept elements. Therefore, .η middle rows of the estimated covariance matrix . R(k) and the rest are removed. To address the last issue, one should note that the number of sources is not available in practice. Therefore, the parameter .η should be estimated such that the rows of the covariance matrix are removed as much as possible while the performance of the algorithm is comparable with the MV beamformer; the number of retained rows is expected to be less near the strong scatterers since the number of dominant sources is reduced in such a region. In contrast, the number of rows is expected to be more around the areas farther away from the strong scatterers since multiple sources exist in such regions. This definition matches the GCF weighting factor discussed in Sect. 3.5.1.2; based on the coherency of the received signals, this weighting factor assigns a value in the range of .[0, 1] to the output beamformed data. Therefore, to estimate the parameter .η, one can take advantage of the GCF weighting factor. To this end, the following equation is expressed: η

. map

= ηmin + (1 − GC F) × (ηmax − ηmin ) ,

(3.185)

where .ηmap denotes the number of necessary rows. Also, .ηmin and .ηmax are the minimum and maximum allowable rows for the covariance matrix to be retained, respectively. It can be seen from the equation that for .GC F = 0, the parameter .ηmap would be equivalent to the maximum allowable number of rows; this indicates that the noncoherent regions (such as background speckle parts of the image) consist of more

142

3 Beamforming Algorithms in Medical Ultrasound …

number of sources, and therefore, more rows of the covariance matrix should be retained. Also, for .GC F = 1 which is obtained near the strong scatterers, we have .ηmap = ηmin . Definitely, if the number of rows is considered to be more than the obtained .ηmap , the performance of the SS-based method will get closer to the MV beamformer. However, the computational complexity will also increase. Also, if the number of rows is considered to be less than the obtained.ηmap , the processing steps of the algorithm would be sped up. However, the decreased computational complexity is at the expense of the performance degradation of the algorithm.

3.7.10 Low-Complexity Minimum Variance Beamformer Using Structured Covariance Matrix Theoretically, the covariance matrix of the spatially stationary signals is Toeplitz. The structure of the Toeplitz matrix is such that the entries of each sub-diagonal have the same values. For instance, consider a Toeplitz matrix with the dimensions of . L × L as below: ⎡ ⎤ a0 a1 a2 · · · a L−1 ⎢ . ⎥ ⎢ a−1 a0 a1 . . . .. ⎥ ⎢ ⎥ ⎢ ⎥ .. . .⎢ a ⎥ . a a a 2 ⎥ ⎢ −2 −1 0 ⎢ . ⎥ . . . .. .. .. a ⎦ ⎣ .. 1 a−L+1 · · · a−2 a−1 a0 L×L The key point is that the computational complexity required to invert such a matrix is . O(L 2 ). This can be done using the algorithms such as the Trench method (Trench 1964). In medical US imaging, the received signals are concentrated on the focal point. Therefore, the stationarity assumption can be considered for the received signals in medical US imaging. Accordingly, an attempt is made to establish the Toeplitz structure on the covariance matrix of the delayed received signals (Asl and Mahloojifar 2012). This is achieved by averaging over the entries of each sub-diagonal of the covariance matrix and replacing the resulting value with the entries. More precisely, the estimated covariance matrix obtained according to (3.13) is modified such that the entries of its .dth diagonal (. R˜ d (k)) are replaced with their mean value as below:

.

R˜ d (k) =

L−d 1 Σ ˆ Rl,l+d (k), d = 0, 1, · · · , L − 1. L − d l=1

(3.186)

˜ The resulting covariance matrix, which has the Toeplitz structure, is denoted as. R(k). The new covariance matrix is used to calculate the weight vector and obtain the output of the beamformer. It has been shown that using the structured covariance matrix, the performance of the MV algorithm is retained while the computational complexity is reduced.

3.7 Low-Complexity Minimum Variance Beamforming Algorithms

143

3.7.11 Iterative Minimum Variance Algorithm It is well-known that the main computational complexity of the MV algorithm is the inversion operation of the covariance matrix. In the iterative MV algorithm, an attempt is made to remove this operation. This is achieved using an iterative method, as its name implies (Deylami and Asl 2018). In the following, this algorithm is discussed in more detail. The MV minimization problem is considered as (2.26). The problem is going to be solved iteratively using the steepest gradient method. To this end, the objective function of the problem is written as below: .

( ) ˆ L(w, λ) = w H (k) R(k)w(k) + λ w H (k)a − 1 .

(3.187)

Taking the derivation of the objective function with respect to the weight vector.w(k), we have: ∂L ˆ = 2 R(k)w(k) + λa. (3.188) . ∂w According to the following iterative process, the objective function would be minimized: ( ) ˆ .w i+1 (k) = w i (k) − μ R(k)w (3.189) i (k) + λa , where .wi (k) denotes the weight vector obtained from .ith iteration. Also, the parameter .μ is the step size of the iterative process. Applying the hermitian operation to both sides of the above equation, multiplying them by . a, and considering the constraint H .w(k) a = 1, the following equation would be obtained: ( )H ˆ wiH (k)a − μ R(k)w a = 1. i (k) + λa

.

(3.190)

The above equation is used to obtain the parameter .λ. Then, the obtained value is replaced in (3.189), and the following equation is finally obtained for updating the weight vector at each iteration: ] [ ( )−1 H ] [ ( )−1 ˆ I − μ R(k) wi (k) + a a H a wi+1 (k) = I − a a H a a .

.

(3.191)

It can be seen that the inversion operation of the covariance matrix does not appear during the processing. The processing steps of the iterative algorithm are summarized below: 1. The covariance matrix is estimated according to (3.13). 2. An initial value is set for the weight vector .w0 (k), and also, the parameter .μ. 3. The weight vector is updated iteratively according to (3.191) until the algorithm ]T [ ] [ converges according to the criteria . wi+1 (k) − wi (k) wi+1 (k) − wi (k) ≤ ε. Note that .ε is a small value.

144

3 Beamforming Algorithms in Medical Ultrasound …

Therefore, the computational complexity of the iterative algorithm reduces to O(Niter L 2 ) where . Niter denotes the number of iterations. Any initial weight vector that satisfies the condition .w0 (k) H a = 1 leads to the final solution. One should note that the initial weight vector affects the convergence speed; As the initial vector .w0 (k) is closer to the optimal solution, the algorithm will converge faster. Therefore, by selecting the initial vector at each imaging point appropriately, a more efficient performance would be achieved in terms of computational complexity; the optimum weight vectors of the adjacent imaging points are not much different from each other. One way to initialize the iterative algorithm efficiently is to set the optimum weight vector of one imaging point as the initial weight vector for the next one, i.e., .w 0 (k + 1) = w(k). This speeds up the iterative algorithm. Also, instead of letting the algorithm be converged, one can consider a fixed number of iterations to control the processing time. However, this leads to the performance of the algorithm being degraded compared to the MV algorithm.

.

3.7.12 Dominant Mode Rejection-Based Minimum Variance Algorithm To reduce the computational complexity of the MV algorithm, another approach based on the dominant mode rejection (DMR) algorithm has been proposed in which the covariance matrix is estimated with a considerably lower computational complexity (Asl and Deylami 2018). It is well-known that in medical US imaging with the focusing process on each imaging point, there is no need to inverse the full covariance matrix since we have only a limited number of dominant modes. Based on this issue, a few first largest dominant modes are used to estimate the covariance matrix; consider the eigendecomposition of the estimated covariance matrix according to (3.83) which can be rewritten as below:

.

Rˆ =

L Σ

λi vi viH ,

(3.192)

i=1

where .vi ∈ C L×1 is the eigenvector corresponding to the eigenvalue .λi . In the DMRbased MV algorithm, it is proposed to use the first . Dm largest eigenvalues and their corresponding eigenvectors to estimate the covariance matrix. The first largest eigenvalues are mostly due to the strong reflected waves. Also, the smaller ones are due to the weaker reflected waves. The covariance matrix can then be rewritten based on this division: Dm L Σ Σ ˆ= .R λi vi viH + λi vi viH . (3.193) i=1

i=Dm +1

3.7 Low-Complexity Minimum Variance Beamforming Algorithms

145

In the DMR-based MV algorithm, an attempt is made to approximate the covariance ˜ in which the . Dm largest eigenvalues and their corresponding matrix, denoted as . R, eigenvectors are the same as . Rˆ while the power is preserved. The approximated covariance matrix is expressed as below: ˜= .R

Dm Σ

λi vi viH



L Σ

vi viH

i=Dm +1

i=1

[

] ) Dm ( Σ λi − α =α I+ vi viH , α i=1

(3.194)

where: α=

.

1 L − Dm

L Σ

λi

i=1Dm +1

[ ] Dm { } Σ 1 ˆ λi . = trace R − L − Dm i=1

(3.195)

It can be seen that the approximated covariance matrix is achieved by averaging over the . L − Dm small eigenvalues of . Rˆ while its . Dm largest eigenvalues are retained. It can be concluded from (3.194) and (3.195) that the{. Dm} largest eigenvalues of . Rˆ and their corresponding eigenvectors, and also, .trace Rˆ are required to estimate the covariance matrix using the DMR-based method. As it was previously seen in (2.24), inversion of the covariance matrix is required to obtain the weight vector, the computational complexity of which is high. The inverse of the approximated covariance matrix . R˜ is obtained as below: [ ] ) Dm ( Σ λ − α −1 i −1 H ˜ =α vi vi . .R (3.196) I− λi i=1 Applying the diagonal loading technique to the covariance matrix to increase the robustness of the algorithm, we have: [

] ) Dm ( Σ λ − α i ˜ + ξ I = (α + ξ ) I + vi viH .R α + ξ i=1 ⎡ ⎤ ) Dm ( L L Σ Σ Σ λ − α i = (α + ξ ) ⎣ vi viH + (0)vi viH ⎦ vi viH + α + ξ i=1 i=1 i=D +1 m

= (α + ξ )( A + B),

(3.197)

146

3 Beamforming Algorithms in Medical Ultrasound …

where: ⎡

1 L ⎢0 Σ ⎢ .A = vi viH = V ⎢ . ⎣ .. i=1

0 1 .. .

··· ··· .. .

⎤ 0 0⎥ ⎥ .. ⎥ .⎦

0 0 ··· 1

VH ,

(3.198)

L×L

and:

.

B=

) Dm ( L Σ Σ λi − α vi viH + (0)vi viH α + ξ i=1 i=Dm +1 ⎤ ⎡ λ1 −α

⎢ ⎢ ⎢ ⎢ ⎢ ⎢ = V⎢ ⎢ ⎢ ⎢ ⎢ ⎣

α+ξ

λ2 −α α+ξ

..

.

λ Dm −α α+ξ

⎥ ⎥ ⎥ ⎥ ⎥ ⎥ VH . ⎥ ⎥ ⎥ 0 ⎥ .. ⎥ . ⎦ 0 L×L

(3.199)

According to (3.197)–(3.199), we have: ⎤

⎡ λ1 +ξ ⎢ ⎢ ⎢ ⎢ ⎢ ˜ + ξ I = (α + ξ ) × V ⎢ .R ⎢ ⎢ ⎢ ⎢ ⎢ ⎣

α+ξ

λ2 +ξ α+ξ

..

.

λ Dm +ξ α+ξ

⎥ ⎥ ⎥ ⎥ ⎥ ⎥ VH . ⎥ ⎥ ⎥ 1 ⎥ .. ⎥ . ⎦ 1 L×L

(3.200)

Accordingly, the inverse of the resulting covariance matrix is obtained as below: ⎤

⎡ α+ξ

[ .

R˜ + ξ I

]−1

⎢ ⎢ ⎢ ⎢ ⎢ ⎢ −1 = (α + ξ ) × V ⎢ ⎢ ⎢ ⎢ ⎢ ⎣

λ1 +ξ

α+ξ λ2 +ξ

..

.

α+ξ λ Dm +ξ

⎥ ⎥ ⎥ ⎥ ⎥ ⎥ V H . (3.201) ⎥ ⎥ ⎥ 1 ⎥ .. ⎥ . ⎦ 1 L×L

3.7 Low-Complexity Minimum Variance Beamforming Algorithms

Note that . λα+ξ =1− i +ξ

[ .

R˜ + ξ I

]−1

λi −α . λi +ξ

147

Therefore, (3.201) is rewritten as below:

⎡ 1− ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ −1 = (α + ξ ) × V ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣ ⎛

λ1 −α λ1 +ξ

⎤ 1−

[

..

. 1−

λ Dm −α λ Dm +ξ

⎡ λ1 −α

⎜ ⎢ ⎜ ⎢ ⎜ ⎢ ⎜ ⎢ ⎜ ⎢ ⎜ ⎢ −1 H ⎜ = (α + ξ ) × ⎜V IV − V ⎢ ⎢ ⎜ ⎢ ⎜ ⎢ ⎜ ⎢ ⎜ ⎢ ⎝ ⎣

= (α + ξ )−1 I −

λ2 −α λ2 +ξ

λ1 +ξ

λ2 −α λ2 +ξ

] ) Dm ( Σ λi − α vi viH . λi + ξ

..

.

λ Dm −α λ Dm +ξ

⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ VH ⎥ ⎥ ⎥ 1 ⎥ .. ⎥ . ⎦ 1 L×L ⎤ ⎞ ⎥ ⎟ ⎥ ⎟ ⎥ ⎟ ⎥ ⎟ ⎥ ⎟ ⎥ ⎟ H⎟ ⎥ V ⎟ ⎥ ⎥ ⎟ ⎥ ⎟ 0 ⎥ ⎟ ⎥ ⎟ .. ⎦ ⎠ . 0 L×L

(3.202)

i=1

Inserting the obtained inversed covariance matrix into (2.24), the weight vector of the DMR-based MV method is achieved as below: Σ Dm ( H ) βi vi a vi a − i=1 .w DMR = (3.203) Σ Dm | H |2 , 2 |a| − βi |v a| i=1

where .βi =

i

λi −α . According to (3.202) and (3.195), it can be seen that the . Dm λi +ξ { }

largest

eigenvalues and their corresponding eigenvectors, and also, .trace Rˆ are used to estimate the covariance matrix, and there is no need to inverse the full covariance ˆ The computational complexity of the DMR-based MV method is. O(L Dm ), matrix. R. which is considerably reduced compared to the MV algorithm. It has been shown that . Dm = 2 leads to performance similar to the MV algorithm while the computational complexity is decreased. As the parameter . Dm increases, the performance of the DMR-based method is not significantly changed.

148

3 Beamforming Algorithms in Medical Ultrasound …

3.7.13 Low-Complexity Minimum Variance Algorithm Based on Generalized Sidelobe Canceler To reduce the computational complexity of the MV algorithm, it was proposed in Deylami and Asl (2019) to use the partial GSC structure that has been discussed in Sect. 3.5.2.6. In this regard, a transformation matrix . T ∈ C(L−1)×Ng where . N g ≤ (L − 1) is defined to reduce the dimensions of the data. Designing this matrix such that the performance of the algorithm is not degraded is of importance. Using the transformation matrix, we have . Bˆ = BT , the dimensions of which is . L × N g . Then, the blocking matrix of the GSC algorithm is replaced with . Bˆ to perform the processing steps. ˆ various choices To select the matrices . B and . T , or equivalently, the matrix . B, exist. One way is to obtain the mentioned matrix and the weight vector .wa (k) jointly from the minimization problem presented in (3.112). However, this method is computationally expensive. Other transformation matrices that are discussed in the previous sections, such as the DFT matrix presented in Sect. 3.7.2, or the Legendre polynomiˆ Moreover, some als presented in Sect. 3.7.6, can also be used to define the matrix . B. matrices such as Griffiths and Jim, Dolph-Chebychev, Taylor series, and the waveletˆ In particular, based blocking matrices can be used to adjust the blocking matrix . B. the DCT basis matrix which is discussed in Sect. 3.7.3 is considered in Deylami and Asl (2019) to design the blocking matrix. The matrix is constructed according to (3.147), i.e., we have . Bˆ = C. The number of columns of the blocking matrix is considered as . N g = 2. More precisely, two degrees of freedom are considered here, and the blocking matrix is defined as . Bˆ = [b0 , b1 ] ∈ C L×2 where . bi ∈ C L×1 with .i = {0, 1} is the .(i + 1)th column of the DCT matrix. Note that using the DCT basis matrix, the computational complexity of the algorithm is reduced by considering 2 degrees of freedom while the performance of the algorithm is comparable with the MV beamformer. This issue was previously discussed in Sect. 3.7.3. In such a case, the weight vector of the algorithm is written as below: wGSC (k) = wq − wa0 (k)b0 − wa1 (k)b1 .

.

(3.204)

[ In the above equation, the adaptive weight vector is considered as .wa (k) = wa0 (k), ]T wa1 (k) ∈ C2×1 . Therefore, replacing . B with . Bˆ in the minimization problem presented in (3.112), and considering the reduced dimensions of the weight vector, the problem is rewritten as below: .

min

wa0 (k),wa1 (k)

[ ]H ] [ ˆ wq − wa0 (k)b0 − wa1 (k)b1 R(k) wq − wa0 (k)b0 − wa1 (k)b1 . (3.205)

The above equation is solved and the unknowns are obtained by taking the derivatives with respect to .wa0 (k) and .wa1 (k) which results in the following equation:

3.7 Low-Complexity Minimum Variance Beamforming Algorithms

[ .

] [ H ˆ b R(k)b wa0 (k) 0 = 0H ˆ wa1 (k) b1 R(k)b 0

ˆ b0H R(k)b 1 H ˆ b1 R(k)b1

]−1 [

] ˆ b0H R(k)w q . ˆ b1H R(k)w q

149

(3.206)

ˆ To specify the computational complexity of the above equation, the term . biH R(k)b j should be expanded. Considering that the spatial smoothing, the forward-backward, and the diagonal loading techniques are performed to estimate the covariance matrix, we have: H . bi

[ N −L+1 ] Σ 1 (l) (l) H (l) (l) H H ˆ bj x (k)x (k) + x˜ (k) x˜ (k) R(k)b × bi j = N −L +1 l=1 ( ) ˆ + ξ.trace R(k) biH I b j N −L+1 ) (( ) H ) ( ) (( ) H )] Σ [( 1 biH x (l) (k) x (l) b j + biH x˜ (l) (k) bj × x˜ (l) N −L +1 l=1 ( ) ˆ (3.207) + ξ.trace R(k) biH b j ,

=

˜ where . x(k) corresponds to the backward estimation of the covariance matrix as defined in Sect. 3.5.2.3. Each entry of (3.206) is obtained according to (3.207), ˆ which reduces the computational complexity. More precisely, . biH R(k)b j is replaced H (l) (l) by . bi x (k) or . x (k)b j which results in the complexity reduction from . O(2L 2 ) to . O(2L). The computational complexity of each part of (3.207) is summarized in Table 3.2. It can be concluded that the overall computational complexity of the GSC-based method with the consideration of . N g = 2 reaches . O(4L(N g + 1)(N − L + 1) + 2N ) which is reduced in comparison with the MV beamformer. Temporal averaging can also be used to estimate the covariance matrix. If the calculations of each time index .k are stored, then the computational complexity of the GSC-based method would not be increased by using temporal averaging. All the low-complexity MV-based algorithms that are presented in this section are summarized in Table 3.3.

Table 3.2 Computational complexity of the low-complexity GSC-based MV method Calculation Computational complexity H (l) x (k)

. bi

.2L

x˜ (k) ( (l) ) H . x (k) wq ( )H (l) ˜ (k) wq . x ( ) ˆ .trace R(k)

.2L

× N g (N − L + 1) × N g (N − L + 1)

.2L

× (N − L + 1)

.2L

× (N − L + 1)

Total

.4L(N g

H (l)

. bi

.2N

+ 1)(N − L + 1) + 2N

150

3 Beamforming Algorithms in Medical Ultrasound …

Table 3.3 The order of computational complexity of low-complexity MV algorithms Algorithm Order Description Pre-determined weights

. O(P N )

Beamspace (BS)

. O(N B S )

BS based on DCT

. O(N B S )

BS based on modified DCT

. O(N B S )

MV based on PCA

. O(N p )

BS based on Legendre Polynomials Decimated MV

. O(Q

3

3

3

3

3)

. O(Nd )

3

2)

MV based on QR decomposition

. O(L

Subspace MV

. O(η

MV based on Structured covariance matrix

. O(L

Iterative MV

. O(Niter L

MV based on DMR

. O(L Dm )

MV based on GSC

. O(N )

2 L) 2)

2)

Selecting the suitable window among . P pre-determined weight vectors Reducing the data dimensions by performing a transformation on it using an . N B S row Butler matrix Reducing the data dimensions by performing a transformation on it using an . N B S row DCT matrix Reducing the data dimensions by performing a transformation on it using an . N B S row modified DCT matrix Reducing the data dimensions using . N p dominant principal components of the pre-designed MV weight vectors Reducing the data dimensions using the first . Q coefficients of the Legendre Polynomials Reducing the data dimensions to . Nd in array space Obtaining a transformation matrix to turn the covariance matrix into a unitary matrix, and therefore, avoid inverting it Reducing the dimensions of the covariance matrix by keeping its .η rows Converting the covariance matrix into a Toeplitz to reduce the computational complexity of the inversion process Obtaining the weight vector using an iterative method with . Niter number of iterations Estimating the covariance matrix using . Dm first largest dominant modes Separating the weight vector into adaptive and non-adaptive components using the GSC structure, and obtaining the adaptive component using the dimensionally reduced data

3.8 User Parameter-Free Minimum Variance-Based Algorithms Using the adaptive MV algorithm, one can take advantage of the improved resolution and contrast of the reconstructed image compared to the non-adaptive DAS beamformer. In the MV algorithm, the covariance matrix is estimated according to (3.13) in order to obtain the weight vector. The more accurate the covariance matrix

3.8 User Parameter-Free Minimum Variance-Based Algorithms

151

is estimated, the better the performance of the algorithm. The covariance matrix is estimated using the spatial smoothing technique in which the array is divided into . N − L + 1 overlapping subarrays with the length of . L, as discussed in Sect. 3.3.1.1. The question may arise that how to obtain the best value for the parameter . L? It has been shown that the value of this parameter significantly affects the performance of the algorithm; the smaller the subarray length, the more robust the algorithm. However, the resolution would be degraded. Conversely, the greater the subarray length, the better the resolution of the resulting image. However, this improvement is achieved at the expense of robustness degradation. Furthermore, to improve the image contrast in speckle-generating mediums, temporal averaging over .2K + 1 samples is performed in addition to the spatial smoothing to estimate the covariance matrix according to (3.18). Adjusting an appropriate value for the parameter . K is also a concern. The estimated covariance matrix is diagonally loaded using the loading factor .ξ to overcome the signal cancellation problem and improve the image robustness which is discussed in detail in Sect. 2.3.3.3. The value of the parameter .ξ also affects the performance of the MV algorithm. Generally, it can be said that the dependency of the parameters . L, . K , and .ξ on the operator is the drawback of the MV algorithm that negatively affects its performance. Adaptively changing these parameters appropriately for each time index and making them user-independent is a great achievement in improving the MV algorithm. In this section, the techniques that are provided to overcome this limitation are discussed. More precisely, attempts made to make the parameters .ξ , . L, and . K independent of the user are discussed in the following sections separately.

3.8.1 User-Independent Techniques to Determine the Diagonal Loading Factor To make the loading factor .ξ user-independent, and consequently, improve the performance of the MV algorithm in terms of both resolution and robustness, different techniques have been proposed. In this section, these algorithms are going to be discussed separately.

3.8.1.1

Forward-Backward Minimum Variance Algorithm

The FBMV algorithm that was previously discussed in Sect. 3.5.2.3 is one way to overcome the limitation of the dependency of the loading factor on the operator. As stated earlier, the backward estimation, as well as the forward estimation, is used to obtain a more accurate covariance matrix. In this algorithm, the covariance matrix is estimated according to (3.88), and it has been shown that it is robust well enough without applying the diagonal loading technique. In other words, the user-dependent loading factor .ξ is removed from the calculations in the FBMV algorithm. Therefore,

152

3 Beamforming Algorithms in Medical Ultrasound …

the challenge of the dependency of the parameter .ξ on the operator is overcome in this algorithm; it can be concluded that the FBMV algorithm is a user parameter-free algorithm with respect to the loading factor.

3.8.1.2

Variable Loading Method

In the diagonal loading technique, a constant parameter (denoted as .ξ ) is added to the diagonal entries of the estimated covariance matrix. As stated earlier, this parameter is user-dependent and affects the result of the MV beamformed data. To analyze the effect of the loading value from another aspect, consider the[ eigendecomposition of ] L×L ∈ C , where . Λ = diag λ , · · · , λ the covariance matrix as. Rˆ = V R Λ R V H R R R 1 L R is the diagonal entries of which include the eigenvalues, and.V R = [ the diagonal ]matrixL×L in which .v Ri ∈ C L×1 denotes the eigenvector corresponding v R1 , · · · , v R L ∈ C to the .ith eigenvalue. The weight vector of the MV algorithm presented in (2.27) can be rewritten based on the eigendecomposition of the estimated covariance matrix as below: L ( )−1 Σ vH Ri a vR . w = Rˆ + ξ I a= λ +ξ i i=1 Ri

.

(3.208)

It can be seen from the above equation that for a higher amount of the eigenvalue, i.e., .λ Ri ≫ ξ , we have . λ R 1+ξ ≈ λ1R , indicating that the diagonal loading factor does i i not affect the obtained weight significantly. However, for smaller eigenvalues, that correspond to the noise components, the absence of the loading factor (.ξ = 0) causes an amplified value to be assigned to the weight vector, and therefore, the performance of the MV algorithm will be disrupted in terms of robustness against the mismatch errors (Zhuang et al. 2016). The loading factor .ξ limits the inversed eigenvalue and makes the MV algorithm less sensitive against a small error. In other words, the algorithm would be robust. However, the resolution of the resulting image would be negatively affected, as previously discussed in Sect. 3.6.1. To deal with this issue, it is desirable to change the loading factor adaptively according to the values of .λ Ri s. In this regard, the variable loading (VL) algorithm has been proposed in which the loading value corresponding to each eigenvalue of the estimated covariance matrix is distinct (Gu et al. 2006). The optimization problem of the MV algorithm presented in (2.26) is modified as below: .

ˆ min w H Rw, w

subject to w H a = 1 −1

w H Rˆ w ≤ T,

(3.209)

3.8 User Parameter-Free Minimum Variance-Based Algorithms

153

−1

in which the constraint .w H Rˆ w ≤ T is added to the minimization problem. Considering the newly added constraint, the variable loading values are assigned to the eigenvalues of the estimated covariance matrix adaptively. Solving the above minimization problem, the optimum weight vector denoted as .w V L is obtained as below: ( 2 ) )−1 −1 −1 ˆ Rˆ + ξ˜ I Rˆ + ξ˜ Rˆ a Ra = = , ( ) ( 2 )−1 −1 −1 H H ˆ ˆ ˆ ˆ ˜ ˜ R+ξR R +ξI a a a Ra (

wV L

.

(3.210)

−1

where .ξ˜ is a parameter that should be adjusted such that .w H Rˆ w = T . Considering that the eigenvector matrix is unitary, i.e., .V H R V R = I, the obtained weight vector can be rewritten based on the eigendecomposition of the estimated covariance matrix as below: wV L =

L Σ

.

i=1

vH a ) v Ri . (Ri λ Ri + ξ˜ /λ Ri

(3.211)

Note that the denominator of (3.210) is not considered in (3.211) since it is a constant and can be ignored. The weight vector obtained from the VL method indicates that the loading factor changes adaptively based on the eigenvalues of the estimated covariance matrix; according to (3.211), it can be seen that for higher values of .λ Ri , the effect of the loading factor reaches zero. Also, for smaller values of .λ Ri , the effect of the loading factor increases. More precisely, the loading factor corresponding to the noise components, or equivalently, the smaller amount of the eigenvalues of the covariance matrix is increased to make the algorithm less influenced by the mismatch errors. Therefore, using the VL method, the robustness of the MV beamformer is expected to be increased. Choosing an appropriate value for the parameter .ξ˜ is of importance. If .λ Ri = ξ , a 3 .d B difference between applying and not applying the loading factor .ξ is obtained. More precisely, we have: | | | 1 1 1 || | . | λ − λ + ξ | = 2λ , for λ Ri = ξ. Ri Ri Ri

(3.212)

The parameter .ξ˜ is desirable to be adjusted such that the VL method reaches the same threshold. In other words, the parameter .ξ˜ is set in the way that the following equality is met: | | | | | 1 | 1 |= 1 , ) ( − .| |λ | | Ri λ Ri + ξ˜ /λ Ri | 2λ Ri

(3.213)

154

3 Beamforming Algorithms in Medical Ultrasound …

which results in .ξ˜ = λ2Ri . Considering the obtained result, and also, taking .ξ = λ Ri into account, it can be concluded that .ξ˜ = ξ 2 would be an appropriate choice. The output of the beamformer is finally obtained according to (3.16) with this difference that the new weight vector expressed in (3.211) is used for weighting the subarrays.

3.8.1.3

Shrinkage Method

To make the loading factor independent of the user, another method known as the shrinkage method has been proposed in which a priori knowledge about the covariance matrix is used to perform the estimation more accurately (Stoica et al. 2008). In the shrinkage method, the covariance matrix is considered as a linear combination of the estimated covariance matrix . Rˆ that is obtained according to (3.13), and the identity matrix . I. This technique has been applied to medical US imaging to obtain a more accurate and robust estimation of the covariance matrix by changing the loading factor adaptively (Liu et al. 2015). The improved covariance matrix . R˜ is expressed as below: .

ˆ R˜ = α I + β R,

(3.214)

where .α and .β are the shrinkage parameters. The improved covariance matrix expressed in the above equation results in a more accurate estimation compared ˆ One should note that the constraint .α, β > 0 should be met for the shrinkage to . R. parameters to ensure that . R˜ > 0. Dividing both sides of (3.214) to the parameter .β, we have: .

α R˜ ˆ = I + R. β β

(3.215)

It can be seen that the scaled covariance matrix is the same as the conventional diagonal loading technique defined in (2.35) in which the loading factor .ξ is replaced with .α/β. To achieve the covariance matrix based on the shrinkage method, the parameters .α and .β should be obtained. These parameters are obtained by minimizing the mean ˜ More precisely, the parameters .α and .β are obtained by square error (MSE) of . R. solving the following problem: .

] [ min E || R˜ − R||2 , α,β

(3.216)

] ( ) [ where .E || R˜ − R||2 = MSE R˜ , and .||.|| denotes the Euclidean norm. To solve ( ) the above minimization problem and obtain the shrinkage parameters, .MSE R˜ is expanded inspired by (3.214) as below:

3.8 User Parameter-Free Minimum Variance-Based Algorithms

155

[|| ||2 ] [ ] || || E || R˜ − R||2 = E ||α I + β Rˆ − R|| [|| ( )||2 ] || || = E ||α I − (1 − β)R + β Rˆ − R || [||( )||2 ] || || = ||α I − (1 − β)R||2 + β 2 E || Rˆ − R ||

.

= α 2 L − 2α(1 − β)trace {R} + (1 − β)2 ||R||2 [||( )||2 ] || || + β 2 E || Rˆ − R || .

(3.217)

According to (3.216) and (3.217), the MSE minimization problem is rewritten as below: [ ( ]) 2 2 2 2 ˆ − R||2 . . min α L − 2α(1 − β)trace {R} + (1 − β) ||R|| + β E || R α,β

(3.218) Taking the derivative of the objective function expressed in (3.218) with respect to the unknown parameters .α and .β separately, and equating them to zero, we have: αL − (1 − β)trace {R} = 0,

.

[|| ||2 ] || || αtrace {R} − (1 − β) ||R||2 + βE || Rˆ − R|| = 0.

(3.219) (3.220)

According to the above equations, the optimum values of the shrinkage parameters are obtained as below: α

. opt

βopt

(1 − βopt )trace {R} , L η = , η+ρ =

(3.221) (3.222)

where: trace2 {R} η = ||R||2 − , L] [|| ||2 || || ρ = E || Rˆ − R|| .

.

(3.223) (3.224)

According to (3.221)–(3.224), it can be seen that the optimum values of the shrinkage parameters depend on the covariance matrix . R. However, the exact covariance matrix is unavailable. Consequently, the covariance-dependent parameters should ˆ In other words, the parameter .αopt shown in be estimated by replacing . R with . R. (3.221) is estimated as below:

156

3 Beamforming Algorithms in Medical Ultrasound …

αˆ =

{ } ˆ (1 − β)trace Rˆ

.

L

.

(3.225)

Also, parameters .η and .ρ shown in (3.223) and (3.224) that are used to obtain .βˆ should be estimated. The parameter .η is estimated as below: { } || ||2 trace2 Rˆ || || . .η ˆ = || Rˆ || − L

(3.226)

To estimate the parameter .ρ, the .mth column of . R and . Rˆ are denoted as . r m and . rˆ m , respectively. Then, (3.224) is estimated as below: ρˆ =

L Σ

.

[|| ||2 ] E || rˆ m − r m || ,

(3.227)

m=1

where: NΣ −L+1 1 ∗ ˆm = .r x (l) xl+m−1 , N − L + 1 l=1

(3.228)

where .(.)∗ denotes the complex conjugate operation. Equation (3.227) is expanded according to (3.228) as below: ⎡|| ||2 ⎤ NΣ −L+1 || || 1 || || ∗ x (l) xl+m−1 − r m || ⎦ . .ρ ˆ= E ⎣|| || N − L + 1 || m=1 l=1 L Σ

(3.229)

As stated earlier, the exact covariance matrix . R, and consequently, the vector . r m is not available. Therefore, . r m in (3.229) should also be estimated, and the estimation is completed according to the following equation: [

] NΣ −L+1 || (l) ∗ ||2 1 || || x xl+m−1 − rˆ m .ρ ˆ= (N − L + 1)2 l=1 m=1 [ ] L NΣ −L+1 Σ || (l) ||2 | ∗ |2 || ||2 1 1 || || || x || |x | = l+m−1 − rˆ m N − L + 1 m=1 N − L + 1 l=1 L Σ

NΣ −L+1 || ||2 || (l) ||4 1 1 || ˆ || || x || − = || R|| . (N − L + 1)2 l=1 N −L +1

(3.230)

3.8 User Parameter-Free Minimum Variance-Based Algorithms

157

According to (3.226) and (3.230), the parameter .βˆ is estimated as below: βˆ =

.

ηˆ . ηˆ + ρˆ

(3.231)

Finally, the modified covariance matrix using the adaptive loading factor is estimated based on the obtained shrinkage parameters as below: .

ˆ R˜ = αˆ I + βˆ R.

(3.232)

It can be seen from (3.232) that the loading factor is applied to the covariance matrix adaptively without the need to be set by the user. The weight vector of the shrinkage method is obtained similar to (2.24) except that the modified covariance matrix . R˜ is used which results in the following equation:

wShrinkage

.

( )−1 αˆ I + βˆ Rˆ a = = ( )−1 . −1 a H R˜ a a H αˆ I + βˆ Rˆ a −1 R˜ a

(3.233)

Finally, the output of the beamformer is constructed according to (3.16) by replacing the weight vector with the one obtained using the shrinkage method, i.e., (3.233).

3.8.1.4

Modified Shrinkage Method

The shrinkage method in which the loading factor changes automatically is discussed in Sect. 3.8.1.3 in detail. To further improve the covariance matrix estimation, some modifications have been performed on the shrinkage method; in Salari and Asl (2018), ˆ In other words, it has been suggested to apply temporal averaging to the parameter .ρ. (3.230) is modified as below: ρˆ

. M

−L+1 K NΣ Σ || (l) ||4 1 || x (n)|| 2 2 (2K + 1) (N − L + 1) n=−K l=1 || || || ˆ ||2 K R(n) || || Σ 1 . − N − L + 1 n=−K (2K + 1)

=

(3.234)

This leads to a more accurate estimation of the shrinkage parameters, and con˜ As stated in (3.232), the covariance sequently, the estimated covariance matrix . R. ˆ matrix. R is modified by combining it linearly with the identity matrix. I which is interpreted as the covariance matrix of the white noise. Another modification has been performed on the shrinkage method in Salari and Asl (2021) in which the identity matrix is replaced with the inverse of the covariance matrix . Rˆ the result of which is named

158

3 Beamforming Algorithms in Medical Ultrasound …

the modified-shrinkage method. More precisely, in the modified-shrinkage method, the estimated covariance matrix of the shrinkage method expressed in (3.232) is rewritten as below: .

R˜ = αˆ M Rˆ

−1

ˆ + βˆM R.

(3.235)

˜ the modifiedSimilar to the shrinkage method, by minimizing the MSE of . R, ˆ shrinkage parameters .αˆ M and .β M would be obtained as:

αˆ M

.

{( ] ) −1 H (1 − βˆM )trace2 Rˆ Rˆ , = || −1 ||2 || ˆ || || R ||

(3.236)

and: βˆM =

.

ηˆ M , ηˆ M + ρˆ M

(3.237)

where .ρˆ M is obtained according to (3.234). Also, for .ηˆ M , we have: 2

ηˆ

. M

{(



) −1 H

|| ||2 trace || || = || Rˆ || − || −1 ||2 || ˆ || || R ||



] .

(3.238)

Using the modified-shrinkage method, the covariance matrix would be estimated more accurately and results in an improved resolution while the robustness is also increased. Using the estimated covariance matrix based on the modified-shrinkage method, i.e., (3.235), the weight vector is obtained as below:

wM-Shrinkage

.

( )−1 −1 αˆ M Rˆ + βˆM Rˆ a . = ( ) −1 −1 a H αˆ M Rˆ + βˆM Rˆ a

(3.239)

The obtained weight vector can be rewritten based on the VL method discussed in Sect. 3.8.1.2. In other words, the VL and the modified-shrinkage methods are combined, which results in the VL with modified-shrinkage (VL-MSh) method. Inspired by (3.211), and also, according to (3.239), the VL-MSh weight vector is obtained as below: wVL-MSh =

L Σ

.

i=1

viH a vi . ( ) αˆ M /λ Ri + βˆM λ Ri

(3.240)

3.8 User Parameter-Free Minimum Variance-Based Algorithms

159

It can be seen from the obtained weight vector that the loading factor is applied to the eigenvalues of the estimated covariance matrix adaptively, which improves the robustness as well as the resolution of the resulting image. Finally, the obtained weight vector presented in (3.240) is replaced in (3.16), and the VL-MSh beamformed data is achieved accordingly.

3.8.2 User-Independent Techniques to Determine the Subarray Length Subarray length is another user-dependent parameter in the MV algorithm. As previously discussed in Sect. 3.6.1, this parameter considerably affects the performance of the algorithm; the greater the subarray length, the better the resolution. However, the resolution improvement is achieved at the expense of robustness degradation. Accurate choosing this parameter such that it changes adaptively for each time index results in performance improvement of the algorithm in terms of both resolution and robustness. In this regard, in Salari and Asl (2021), a technique is provided for the automated selection of the subarray length based on the CF weighting method. The CF weighting is defined as the ratio of the main lobe energy to the total energy, as discussed earlier in Sect. 3.5.1.1. Calculating the CF coefficients according to (3.19), values in the range of .[0, 1] would be obtained. It is known that the greater values of CF correspond to the samples in which the focusing is performed well. Also, smaller values of CF correspond to the sidelobes region. According to the behavior of the CF, the following function can be expressed for the subarray length at time index .k: ( .

L(k) =

) N − 1 × C F(k) + 1, 2

(3.241)

in which a linear relationship is established between the CF coefficient and the subarray length. Note that the output of (3.241) should be rounded to make sure that the obtained value is an integer. To construct the regions where the strong reflectors do not exist, such as the background speckle of the imaging medium, it is desirable to decrease the subarray length since a high resolution is not required; instead, high contrast is of importance that can be achieved by decreasing the parameter . L. One should note that the CF value corresponding to such a case is small. Therefore, it is desirable to decrease the subarray length as the CF value decreases. Also, the greater values of the CF correspond to the strong reflectors in which a high resolution is required. This can be achieved by increasing the subarray length. Consequently, the subarray length is expected to be increased as the CF value increases. The defined function for adaptively obtaining the subarray length presented in (3.241) fulfills the expected conditions. In particular, consider the time instance in which .C F = 1. According to (3.241), the subarray length would be equivalent to its maximum possible value, i.e.,. L = N /2, and consequently, a good resolution would be obtained.

160

3 Beamforming Algorithms in Medical Ultrasound …

Fig. 3.53 Subarray length changes for linear, sinusoid and quadratic functions. . N = 128 is considered in this figure

One should note that to adapt the subarray length with the CF coefficients, other functions can also be considered. The sinusoid and quadratic functions are examples of this function that can establish a relationship between the subarray length and the CF coefficients as below: ( ) (π ) N . L(k) = (3.242) − 1 × sin C F(k) + 1, 2 2 ( ) N L(K ) = (3.243) − 1 × C F 2 (k) + 1. 2 Figure 3.53 shows the behavior of the subarray length for three different functions. It has been obtained that the best result would be achieved using the linear function as (3.241) (Salari and Asl 2021). To remove the dependency of the subarray length on the user and change this parameter adaptively at each time instance.k, another function has also been provided in Lan et al. (2021) based on the GCF weighting factor. The GCF weighting method is the generalized version of the CF method and is discussed in Sect. 3.5.1.2. This weighting factor is defined as the ratio between the low-frequency energy components with the cut-off frequency of . M0 to the total energy according to (3.21). The subarray length can be considered as a function of the GCF as below: [

] N . L(k) = round |1 − 2 × GC F(k)| × , 2 n

(3.244)

where .n is the order of scaling the gradient of the change in adaptive subarray length. As it can be seen from the above equation, the greater the parameter .n, the smaller the subarray length for a fixed value of GCF. Therefore, it can be concluded that

3.8 User Parameter-Free Minimum Variance-Based Algorithms

161

in the cases in which high contrast is of importance, the parameter .n is expected to be high. In contrast, in the cases where a high resolution is required, this parameter should be selected small.

3.8.3 User-Independent Techniques for Temporal Averaging As discussed earlier in Sect. 3.5, the temporal averaging technique is used to retain the speckle statistics of the beamformed data according to (3.18). In this technique, temporal averaging over .2K + 1 samples is performed, in which the parameter . K is adjusted by the operator. To remove the dependency of the parameter . K on the user, a function for this parameter has been provided in Salari and Asl (2021) based on the CF weighting method. It is known that to construct the speckle-generating regions with improved contrast, temporal averaging with a higher value of . K is required. Also, to construct the strong reflectors, the parameter . K is desired to be small. According to these explanations, and also, according to the behavior of the CF weighting coefficients, a relationship between the parameter . K and the CF can be established as below: )] [ (π C F(k) , . K (k) = round 2L p cos (3.245) 2 where . L p denotes the pulse length. The maximum possible value for the parameter K equals two times the transmitted pulse length, as stated earlier. Therefore, (3.245) is designed such that the maximum possible value of . K , which occurs for .C F = 0, is obtained as .2L p . Using (3.245), the number of samples over which the temporal averaging should be performed is obtained adaptively, and the performance of the resulting image would be improved in terms of contrast. In addition to the cosine function defined as (3.245), other functions can also be established to make a relationship between the parameter . K and the CF coefficients. The following equations are examples of these functions:

.

.

[ ] K (k) = − 2L p × C F(k) + 2L p [ ] [ ] K (k) = 2L p × C F 2 (k) − 4L p × C F(k) + 2L p .

The behavior of the parameter . K for these different functions is depicted in Fig. 3.54. It has been shown that the cosine function expressed in (3.245) results in a better performance in terms of both resolution and contrast (Salari and Asl 2021).

162

3 Beamforming Algorithms in Medical Ultrasound …

Fig. 3.54 Behavior of the parameter . K for three different functions. . N = 128 is considered in this figure

3.8.4 User-Independent Technique to Determine the Threshold Value of the Eigenspace-Based Minimum Variance Beamformer In Sect. 3.5.2.2, the EIBMV algorithm was presented, and it was shown that improved noise suppression is achieved compared to the MV algorithm by eigendecomposing the estimated covariance matrix and using the eigenvectors corresponding to “Num” largest eigenvalues. The eigenvectors are selected using a pre-determined threshold .δ, which is a fixed value for all imaging points. One should note that as a smaller value is assigned to .δ, the background speckle will be preserved more successfully, and dark artifacts will be prevented. However, the noise reduction would be degraded. It is expected to determine the threshold value .δ adaptively such that the speckle pattern preservation and noise reduction can be achieved simultaneously; in particular, a small value of .δ should be considered for speckle regions that surround a cyst. Also, a large value should be assigned to .δ for inside the cystic region. The amplitude of the received signals corresponding to the speckle pattern does not vary significantly along the array. In contrast, the hyperechoic targets generate large transient amplitude changes along the array. It can be concluded that the standard deviation of the signal amplitude corresponding to the speckle region is smaller compared to that one obtained from hyperechoic targets. In Lan et al. (2021), it was proposed to use the standard deviation of the convolution of the received signals in order to adaptively determine the threshold value .δ(k); the convolution process improves the SNR, and consequently, reduces the noise effect in standard deviation estimation. The normalized standard deviation obtained from the convolution of the received signal is used to estimate .δ(k). More precisely, in the first step, the received

3.9 Conclusion

163

signal . x(k) ∈ C N ×1 is zero-padded, and the result is denoted as . xˆ (k) ∈ C(2N −1)×1 . Then, the convolution sequence of . xˆ (k), which is denoted as . x conv (k), is obtained as below: .

{ } x conv (k) = IFFT FFT( xˆ (k)) × FFT( xˆ (k)) .

(3.246)

It can be seen from the above equation that the convolution process is performed in the frequency domain. The standard deviation of . x conv (k) is calculated as below: [ | | | .σ (k) =

2N −1 Σ 1 [x conv (k, i) − x¯conv (k)]2 , 2N − 1 i=1

(3.247)

where . x conv (k, i) denotes .ith entry of . x conv (k) in .kth time index. Also, .x¯conv (k) represents the mean value of . x conv (k). If we consider the estimated standard deviations for all of the imaging points as .σ , the adaptive threshold value .δ is obtained as below: δ=N

.

( ) 1 , σ

(3.248)

where .N(.) denotes the normalization operation; that is, the obtained .δ will be in the range of .[0, 1].

3.9 Conclusion The analysis of the adaptive beamformers and how to apply them to medical US data was the main focus of this chapter; in particular, the limitations of using the adaptive beamformers on medical US data were raised, and solutions to overcome them were explained in detail. For instance, it was observed that by applying the focusing process on the received data using time delays, the near-field limitation of medical US data is solved. As a result, using the adaptive beamformers that are originally developed for far-field applications is provided. Also, it was observed that the limited data restriction is overcome by using the spatial smoothing technique. In the next section of this chapter, i.e., Sect. 3.4, applying the data-dependent MV algorithm was discussed, and the improvement of the final image resolution compared to the non-adaptive DAS beamformer was achieved. Next, in Sect. 3.5 of this chapter, different techniques to improve the contrast of the reconstructed image were examined in detail. The CF weighting method and its modified versions, and also, EIBMV and FBMV algorithms as the MV-based techniques are examples of these methods. Moreover, the non-linear DMAS algorithm in which the cross-correlation between the received signals of the array elements are calculated, considerably improves the image contrast compared to the conventional DAS beamformer.

164

3 Beamforming Algorithms in Medical Ultrasound …

There usually exist mismatches between the assumed and actual model parameters. Therefore, it is necessary to develop techniques in order to increase the robustness of the MV algorithm against these mismatches. In Sect. 3.6 of this chapter, different methods that increase the robustness of the MV algorithm were discussed. It was shown that the loading factor and the subarray length are two parameters that affect the robustness of this algorithm. In addition, algorithms such as the FBMV, APES, and the wiener post-filter method are among the algorithms that are developed in this regard. Although the MV algorithm improves the image resolution compared to the DAS algorithm, this improvement is at the expanse of a severe increase in computational complexity. In Sect. 3.7 of this chapter, solutions to overcome this limitation and reduce the computational complexity of the MV algorithm were discussed. In most of the low-computational complexity methods, an attempt is made to reduce the dimensions of the covariance matrix through data transformation. This can be achieved by using a transformation basis. The BS method and its modified versions are examples in this regard. It is also possible to reduce the computational complexity of the MV algorithm using other techniques except data transformation; rewriting the estimated covariance matrix such that its inversion incurs less computational burden (MV algorithm based on structured covariance matrix), reducing the data dimensions in the spatial domain (decimated MV algorithm), and also, the iterative MV algorithm are among the techniques that have been developed so far. A summary of the reviewed algorithms is presented in Table 3.3. Finally, Sect. 3.8 of this chapter was dedicated to the user-independent techniques for determining the parameters of the MV algorithm. It is well-known that a common solution to increase the robustness of the MV algorithm is to use the diagonal loading technique in which a loading factor is added to the diagonal entries of the covariance matrix. This factor is adjusted manually and depends on the user. In order to achieve a better performance of the MV algorithm, it is desired to set the loading factor adaptively based on the input data. In this section, this topic was discussed, and the available techniques to make this parameter user-independent were examined. In addition to the loading factor, solutions to make other parameters of the MV algorithm user-independent, such as the subarray length, were also presented.

References Ali ME, Schreib F (1992) Adaptive single snapshot beamforming: a new concept for the rejection of nonstationary and coherent interferers. IEEE Trans Signal Process 40(12):3055–3058 Asl BM, Mahloojifar A (2009) Contrast enhancement of adaptive ultrasound imaging using eigenspace-based minimum variance beamfoming. In: 2009 IEEE international ultrasonics symposium. IEEE, pp 349–352 Asl BM, Deylami AM (2018) A low complexity minimum variance beamformer for ultrasound imaging using dominant mode rejection. Ultrasonics 85:49–60

References

165

Asl BM, Mahloojifar A (2009) Minimum variance beamforming combined with adaptive coherence weighting applied to medical ultrasound imaging. IEEE Trans Ultrason Ferroelectr Freq Control 56(9):1923–1931 Asl BM, Mahloojifar A (2010) Eigenspace-based minimum variance beamforming applied to medical ultrasound imaging. IEEE Trans Ultrason Ferroelectr Freq Control 57(11):2381–2390 Asl BM, Mahloojifar A (2011) Contrast enhancement and robustness improvement of adaptive ultrasound imaging using forward-backward minimum variance beamforming. IEEE Trans Ultrason Ferroelectr Freq Control 58(4):858–867 Asl BM, Mahloojifar A (2012) A low-complexity adaptive beamformer for ultrasound imaging using structured covariance matrix. IEEE Trans Ultrason Ferroelectr Freq Control 59(4):660– 667 Austeng A, Jensen AC, Synnevaag J-F, Holm S (2009) Image amplitude estimation with the minimum variance beamformer. In: 2009 IEEE international ultrasonics symposium. IEEE, pp 2355– 2358 Avanji SAI, Far AM, Asl BM (2013) Investigation of the effects of transducer parameters on adaptive mv beamformers in medical ultrasound applications. In: 2013 21st Iranian conference on electrical engineering (ICEE). IEEE, pp 1–6 Avanji SAI, Mahloojifar A, Asl BM (2013) Adaptive 3d mv beamforming in medical ultrasound imaging. In: 2013 20th Iranian conference on biomedical engineering (ICBME). IEEE, pp 81–86 Bae M, Park SB, Kwon SJ (2016) Fast minimum variance beamforming based on legendre polynomials. IEEE Trans Ultrason Ferroelectr Freq Control 63(9):1422–1431 Bethel R, Shapo B, Van Trees H (2002) Single snapshot spatial processing: Optimized and constrained. In: Sensor array and multichannel signal processing workshop proceedings. IEEE, pp 508–512 Blomberg AE, Holfort IK, Austeng A, Synnevåg J-F, Holm S, Jensen JA (2009) APES beamforming applied to medical ultrasound imaging. In: 2009 IEEE international ultrasonics symposium. IEEE, pp 2347–2350 Camacho J, Parrilla M, Fritsch C (2009) Phase coherence imaging. IEEE Trans Ultrason Ferroelectr Freq Control 56(5):958–974 Christensen DA (1988) Ultrasonic bioinstrumentation. Wiley Dahl JJ, Hyun D, Lediju M, Trahey GE (2011) Lesion detectability in diagnostic ultrasound with short-lag spatial coherence imaging. Ultrason imaging 33(2):119–133 Deylami AM, Asl BM (2016) Amplitude and phase estimator combined with the wiener postfilter for medical ultrasound imaging. J Med Ultrason 43(1):11–18 Deylami AM, Asl BM (2016) Low complex subspace minimum variance beamformer for medical ultrasound imaging. Ultrasonics 66:43–53 Deylami AM, Asl BM (2017) A fast and robust beamspace adaptive beamformer for medical ultrasound imaging. IEEE Trans Ultrason Ferroelectr Freq Control 64(6):947–958 Deylami AM, Asl BM (2018) Iterative minimum variance beamformer with low complexity for medical ultrasound imaging. Ultrasound Med Biol 44(8):1882–1890 Deylami AM, Asl BM (2019) High resolution minimum variance beamformer with low complexity in medical ultrasound imaging. Ultrasound Med Biol 45(10):2805–2818 Eslami L, Asl BM (2022) Adaptive subarray coherence based post-filter using array gain in medical ultrasound imaging. Ultrasonics 126:106808 Eslami L, Makouei F, Ziksari MS, Karam SAS, Asl BM (2021) A new extension of dmas ultrasound nonlinear beamformer using the third degree terms with low computational complexity. In: 2021 IEEE international ultrasonics symposium (IUS). IEEE, , pp 1–4 Esmailian K, Asl BM (2022) Correlation-based modified delay-multiply-and-sum beamforming applied to medical ultrasound imaging. 
Computer methods and programs in biomedicine, p 107171 Fathi Y, Mahloojifar A, Asl BM (2012) Gpu-based adaptive beamformer for medical ultrasound imaging. In: 2012 19th Iranian Conference of Biomedical Engineering (ICBME). IEEE, pp 113– 117

166

3 Beamforming Algorithms in Medical Ultrasound …

Golub GH, Van Loan CF (1996) Matrix computations, johns hopkins u. Johns Hopkins University Press, Baltimore, MD, Math, Sci Gu J, Wolfe PJ (2006) Robust adaptive beamforming using variable loading. In: Fourth IEEE workshop on sensor array and multichannel processing, 2006. IEEE, pp 1–5 Holfort IK, Gran F, Jensen JA (2008) Investigation of sound speed errors in adaptive beamforming. In: 2008 IEEE ultrasonics symposium. IEEE, pp 1080–1083 Holfort IK, Gran F, Jensen JA (2009) Broadband minimum variance beamforming for ultrasound imaging. IEEE Trans Ultrason Ferroelectr Freq Control 56(2):314–325 Holm S, Synnevag J-F, Austeng A (2009) Capon beamforming for active ultrasound imaging systems. In: 2009 IEEE 13th digital signal processing workshop and 5th IEEE signal processing education workshop. IEEE, pp 60–65 Hu C-L, Li C-J, Cheng I-C, Sun P-Z, Hsu B, Cheng H-H, Lin Z-S, Lin C-W, Li M-L (2022) Acoustic-field beamforming-based generalized coherence factor for handheld ultrasound. Appl Sci 12(2):560 Izadi SA, Mahloojifar A, Asl BM (2015) Weighted capon beamformer combined with coded excitation in ultrasound imaging. J Med Ultrason 42(4):477–488 Jensen JA (1996) Field: a program for simulating ultrasound systems. In: 10th nordicbaltic conference on biomedical imaging, vol 4, supplement 1, PART 1: 351–353 Jensen JA, Svendsen NB (1992) Calculation of pressure fields from arbitrarily shaped, apodized, and excited ultrasound transducers. IEEE Trans Ultrason Ferroelectr Freq control 39(2):262–267 KaramFard SS, Asl BM (2017) 2-stage delay-multiply-and-sum beamforming for breast cancer detection using microwave imaging. In: 2017 Iranian conference on electrical engineering (ICEE). IEEE, pp 101–106 Kim K, Park S, Kim J, Park S-B, Bae M (2014) A fast minimum variance beamforming method using principal component analysis. IEEE Trans Ultrason Ferroelectr Freq Control 61(6):930–945 Lan Z, Zheng C, Wang Y, Peng H, Qiao H (2021) Adaptive threshold for eigenspace-based minimum variance beamformer for dark region artifacts elimination. IEEE Trans Instrum Meas 70:1–16 Lan Z, Zheng C, Peng H, Qiao H (2022) Adaptive scaled coherence factor for ultrasound pixel-based beamforming. Ultrasonics 119:106608 Li J, Chen X, Wang Y, Shi Y, Yu D (2017) Generalized sidelobe canceler beamforming applied to medical ultrasound imaging. Acoust Phys 63(2):229–236 Liu H-L, Zhang Z-H, Liu D-Q (2015) Adaptive diagonal loaded minimum variance beamforming applied to medical ultrasound imaging. J Central South Univ 22(5):1826–1832 Madhavanunni A, Panicker MR (2023) Beam multiply and sum: a tradeoff between delay and sum and filtered delay multiply and sum beamforming for ultrafast ultrasound imaging. Biomed Signal Process Control 85:104807 Madhavanunni A, Panicker MR (2022) Lesion detectability and contrast enhancement with beam multiply and sum beamforming for non-steered plane wave ultrasound imaging. In: 2022 IEEE 19th international symposium on biomedical imaging (ISBI). IEEE, pp 1–4 Majd SMMT, Asl BM (2020) Adaptive spectral doppler estimation based on the modified amplitude spectrum capon. IEEE Trans Ultrason Ferroelectr Freq Control 68(5):1664–1675 Makouei F, Asl BM (2020) Subspace-based blood power spectral capon combined with wiener postfilter to provide a high-quality velocity waveform with low mathematical complexity. Ultrasound Med Biol 46(7):1783–1801 Makouei F, Asl BM (2020) Adaptive transverse blood velocity estimation in medical ultrasound: a simulation study. 
Ultrasonics 108:106209 Mann J, Walker W (2002) A constrained adaptive beamformer for medical ultrasound: initial results. In: 2002 IEEE ultrasonics symposium, 2002. Proceedings, vol 2. IEEE, pp 1807–1810 Matrone G, Ramalli A (2018) Spatial coherence of backscattered signals in multi-line transmit ultrasound imaging and its effect on short-lag filtered-delay multiply and sum beamforming. Appl Sci 8(4):486 Matrone G, Savoia AS, Caliano G, Magenes G (2014) The delay multiply and sum beamforming algorithm in ultrasound b-mode medical imaging. IEEE Trans Med Imaging 34(4):940–949

References

167

Mohammadzadeh Asl B (2016) Combining the APES and minimum-variance beamformers for adaptive ultrasound imaging. Ultrason Imaging 38(4):239–253
Nilsen C-I, Hafizovic I (2009) Beamspace adaptive beamforming for ultrasound imaging. IEEE Trans Ultrason Ferroelectr Freq Control 56(10):2187–2197
Nilsen C-IC, Holm S (2010) Wiener beamforming and the coherence factor in ultrasound imaging. IEEE Trans Ultrason Ferroelectr Freq Control 57(6):1329–1346
Park J, Wi S-M, Lee JS (2016) Computationally efficient adaptive beamformer for ultrasound imaging based on QR decomposition. IEEE Trans Ultrason Ferroelectr Freq Control 63(2):256–265
Ramalli A, Scaringella M, Matrone G, Dallai A, Boni E, Savoia AS, Bassi L, Hine GE, Tortoli P (2017) High dynamic range ultrasound imaging with real-time filtered-delay multiply and sum beamforming. In: 2017 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–4
Sakhaei SM (2015) A decimated minimum variance beamformer applied to ultrasound imaging. Ultrasonics 59:119–127
Salari A, Asl BM (2021) User parameter-free minimum variance beamformer in medical ultrasound imaging. IEEE Trans Ultrason Ferroelectr Freq Control 68(7):2397–2406
Salari A, Asl BM (2018) Adaptive beamforming with automatic diagonal loading in medical ultrasound imaging. In: 2018 25th national and 3rd international Iranian conference on biomedical engineering (ICBME). IEEE, pp 1–6
Sasso M, Cohen-Bacrie C (2005) Medical ultrasound imaging using the fully adaptive beamformer. In: Proceedings (ICASSP'05), IEEE international conference on acoustics, speech, and signal processing, 2005, vol 2. IEEE, pp ii–489
Shamekhi S, Periyasamy V, Pramanik M, Mehrmohammadi M, Asl BM (2020) Eigenspace-based minimum variance beamformer combined with sign coherence factor: application to linear-array photoacoustic imaging. Ultrasonics 108:106174
Shan T-J, Kailath T (1985) Adaptive beamforming for coherent signals and interference. IEEE Trans Acoust Speech Signal Process 33(3):527–536
Shen C-C (2020) A study of double-stage DMAS and p-DMAS for their relation in baseband ultrasound beamforming. Biomed Signal Process Control 60:101964
Shen C-C, Hsieh P-Y (2019) Ultrasound baseband delay-multiply-and-sum (BB-DMAS) nonlinear beamforming. Ultrasonics 96:165–174
Stoica P, Li J, Zhu X, Guerci JR (2008) On using a priori knowledge in space-time adaptive processing. IEEE Trans Signal Process 56(6):2598–2602
Synnevåg J-F, Nilsen C-I, Holm S (2007) P2B-13 speckle statistics in adaptive beamforming. In: 2007 IEEE ultrasonics symposium proceedings. IEEE, pp 1545–1548
Synnevag J, Austeng A, Holm S (2005) Minimum variance adaptive beamforming applied to medical ultrasound imaging. Proc IEEE Ultrason Symp 2:1199–1202
Synnevag JF, Austeng A, Holm S (2007) Adaptive beamforming applied to medical ultrasound imaging. IEEE Trans Ultrason Ferroelectr Freq Control 54(8):1606–1613
Synnevåg J-F, Austeng A, Holm S (2011) A low-complexity data-dependent beamformer. IEEE Trans Ultrason Ferroelectr Freq Control 58(2):281–289
Trench WF (1964) An algorithm for the inversion of finite Toeplitz matrices. J Soc Ind Appl Math 12(3):515–522
Vaidya AS, Srinivas M (2020) A low-complexity and robust minimum variance beamformer for ultrasound imaging systems using beamspace dominant mode rejection. Ultrasonics 101:105979
Vayyeti A, Thittai AK (2021) Weighted non-linear beamformers for low cost 2-element receive ultrasound imaging system. Ultrasonics 110:106293
Vayyeti A, Thittai AK (2021) Optimally-weighted non-linear beamformer for conventional focused beam ultrasound imaging systems. Sci Rep 11(1):1–14
Vayyeti A, Thittai AK (2020) A filtered delay weight multiply and sum (F-DwMAS) beamforming for ultrasound imaging: preliminary results. In: 2020 IEEE 17th international symposium on biomedical imaging (ISBI). IEEE, pp 312–315


Viola F, Walker WF (2005) Adaptive signal processing in medical ultrasound beamforming. Proc IEEE Ultrason Symp 4:1980–1983
Viola F, Ellis MA, Walker WF (2007) Time-domain optimized near-field estimator for ultrasound imaging: initial development and results. IEEE Trans Med Imaging 27(1):99–110
Wang Y-H, Li P-C (2014) SNR-dependent coherence-based adaptive imaging for high-frame-rate ultrasonic and photoacoustic imaging. IEEE Trans Ultrason Ferroelectr Freq Control 61(8):1419–1432
Wang Z, Li J, Wu R (2005) Time-delay- and time-reversal-based robust Capon beamformers for ultrasound imaging. IEEE Trans Med Imaging 24(10):1308–1322
Wang S-L, Chang C-H, Yang H-C, Chou Y-H, Li P-C (2007) Performance evaluation of coherence-based adaptive imaging using clinical breast data. IEEE Trans Ultrason Ferroelectr Freq Control 54(8):1669–1679
Wang Y, Wang Y, Liu M, Lan Z, Zheng C, Peng H (2022) Minimum variance beamforming combined with covariance matrix-based adaptive weighting for medical ultrasound imaging. BioMed Eng OnLine 21(1):1–24
Wang Y, Zheng C, Wang Y, Feng S, Liu M, Peng H (2022) An adaptive beamformer based on dynamic phase coherence factor for pixel-based medical ultrasound imaging. Technology and Health Care, no. Preprint, pp 1–25
Yang J, Li J, Chen X, Xi J, Cai H, Wang Y (2021) Cross subaperture averaging generalized sidelobe canceler beamforming applied to medical ultrasound imaging. Appl Sci 11(18):8689
Yang J, Chen X, Cai H, Wang Y (2022) Generalized sidelobe canceler beamforming combined with eigenspace-Wiener postfilter for medical ultrasound imaging. Technology and Health Care, no. Preprint, pp 1–12
Yang J, Chen X, Cai H, Wang Y (2022) Generalized sidelobe canceler beamforming with an improved covariance matrix estimation for medical ultrasound imaging. In: 2021 international conference on optical instruments and technology: optoelectronic measurement technology and systems, vol 12282. SPIE, pp 307–314
Zhuang J, Ye Q, Tan Q, Ali AH (2016) Low-complexity variable loading for robust adaptive beamforming. Electron Lett 52(5):338–340
Ziksari MS, Asl BM, Ingram M, D'hooge J (2022) A new adaptive imaging technique using generalized delay multiply and sum factor. In: 2022 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–4

Chapter 4

Phase Aberration Correction

Abstract This chapter deals with the correction of phase aberration, a common phenomenon in abdominal imaging. The phase aberration correction methods are divided into three categories: correction based on time delay estimation, correction based on sound velocity estimation, and correction based on the image reconstruction algorithm. The techniques belonging to each category are then discussed separately in detail.

Keywords Inhomogeneous medium · Phase aberration · Velocity estimation · Cross-correlation · Robustness

4.1 Introduction

To construct the image, the received signals are dynamically time delayed such that they are focused at each focal point. In other words, by applying time delays to the received signals, the signal originating from a point source reaches the elements of the array at the same time. If the imaging medium is homogeneous, i.e., the sound speed does not vary, focusing is performed successfully and an accurately reconstructed image is obtained. In practice, however, the imaging medium is in most cases inhomogeneous, which causes variations in the arrival times of the received signals; phase variations therefore occur across the array face. The presence of a fat layer in muscle tissue is one example. Note that both the transmission and the reception processes are affected by these phase variations. In such a case, the ability to distinguish between on-axis and off-axis signals is lost, and focusing under the assumption of a constant sound speed produces a degraded image due to the phase aberration error. Focusing is correct only if the signals received from a point source reach the array elements simultaneously; the degree to which this coincidence is violated is a measure of the phase error. Usually, the phase error caused by the inhomogeneity of the imaging medium is considered equivalent to the presence of a phase screen in front of the array


during the imaging process. The phase error generated in this scenario is very close to the one produced when imaging through the abdominal wall. The phase error, also known as the arrival time error, severely degrades the image accuracy and should be minimized. Different methods have been proposed to estimate the phase error or, equivalently, to correct the phase aberration and improve the image accuracy. The existing phase aberration correction algorithms can be divided into three main categories:

• Phase aberration correction based on phase term/time delay estimation. Most of the existing phase aberration correction algorithms belong to this category. In these algorithms, the phase aberration is corrected by directly estimating the aberrant phase term or the time delays. The estimated phase term/time delay is applied to the received signals, and a beamforming technique is then used to obtain a reconstructed image in which the effect of the phase aberration is reduced. Most of the algorithms in this category perform the estimation iteratively.
• Phase aberration correction based on sound velocity estimation. In these algorithms, the sound velocity is estimated and used to apply accurate time delays to the received signals. The algorithms in this category also follow an iterative procedure.
• Phase aberration correction based on the image reconstruction algorithm. In these algorithms, the phase aberration is corrected by modifying the image reconstruction algorithm itself; in other words, an attempt is made to reduce the effect of the phase aberration in the reconstructed image.

The resolution, bias, and jitter metrics are commonly used to evaluate the performance of phase aberration correction algorithms. The resolution metric has been discussed and used to evaluate the performance of the algorithms in the previous sections. To define the bias and jitter metrics, consider $\Delta(i)$ and $\hat{\Delta}(i)$ as the true and estimated time delays, respectively, for the $i$th sample. The bias metric is expressed as below:

$$\text{bias} = \frac{1}{M} \sum_{i=1}^{M} \left[ \hat{\Delta}(i) - \Delta(i) \right], \tag{4.1}$$

where $M$ denotes the number of samples. It can be seen from the above definition that the bias is the mean difference between the estimated and true time delays. The jitter metric is expressed as below:

$$\text{jitter} = \sqrt{ \frac{1}{M} \sum_{i=1}^{M} \left[ \left( \hat{\Delta}(i) - \Delta(i) \right) - \text{bias} \right]^{2} }. \tag{4.2}$$


In other words, the jitter metric is the standard deviation of the delay estimation errors. In the following, the different phase aberration correction algorithms are discussed in more detail.
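As a brief illustration, the two metrics can be computed directly from their definitions (4.1)–(4.2). The sketch below assumes the true and estimated delays are available as NumPy arrays; the function name is ours, not from the literature.

```python
import numpy as np

def delay_error_metrics(true_delays, est_delays):
    """Bias (4.1) and jitter (4.2): the mean and the standard deviation
    of the time-delay estimation errors over M samples."""
    err = np.asarray(est_delays) - np.asarray(true_delays)
    bias = err.mean()
    jitter = np.sqrt(np.mean((err - bias) ** 2))
    return bias, jitter
```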

4.2 Phase Aberration Correction Based on Phase Term/Time Delay Estimation

This section is dedicated to the phase aberration correction algorithms based on direct estimation of the phase term of the aberrated signal or of the arrival times corresponding to each receiving element.

4.2.1 Nearest Neighbor Cross-Correlation Method

To quantify the phase distribution and, consequently, correct the phase aberration due to the inhomogeneity of the imaging medium, the cross-correlation between the received signals of two adjacent elements can be used (O'Donnell and Flax 1988). Consider $s_i(n)$ and $s_{i+1}(n)$ as the signals received by the $i$th and $(i+1)$th elements, respectively. The cross-correlation between this signal pair is written as below:

$$A(k) = \sum_{n=-L_c/2}^{L_c/2-1} s_i(n)\, s_{i+1}(n + k), \tag{4.3}$$

where $L_c$ denotes the total number of samples participating in the correlation computation. The peak position of the cross-correlation between adjacent signals represents the arrival time difference between the two elements. More precisely, the arrival time difference is obtained from the peak of the cross-correlation function as below:

$$\hat{\Delta t}_m = k_0\, \Delta t_0, \tag{4.4}$$

where $k_0$ denotes the lag at which the cross-correlation peaks and $\Delta t_0$ is the sampling period of the received signal. By accumulating the obtained element-by-element arrival time differences (time shifts) along the array according to (4.4), the phase aberration can be estimated. In particular, the phase aberration corresponding to element $n$ is obtained as below:

$$\hat{\varphi}_n = \sum_{i=0}^{n-1} \hat{\Delta t}_i. \tag{4.5}$$


The phase of the $n$th element is finally estimated by unwrapping the measured phase aberration as follows:

$$\hat{\tau}_n = \hat{\varphi}_n - \frac{n}{N-1} \sum_{i=0}^{N-2} \hat{\Delta t}_i, \tag{4.6}$$

where the second term on the right-hand side removes the linear term of the phase profile. Consequently, an estimate of the phase error due to the inhomogeneity of the imaging medium is obtained and applied to the received signals to compensate for the defocusing. By subtracting the estimated phase errors from the signals received by the array elements, the errors are said to be time-reversed. This method is known as the nearest neighbor cross-correlation (NNCC) technique. One should note that this technique can also be used in imaging moving targets to decrease the motion error.

If a point-like source is present in the imaging medium, the NNCC method results in an accurate estimate of the phase error. In most cases, however, no point-like source is available and the imaging medium is speckle-generating; the NNCC method then cannot reach the desired accuracy. To analyze this issue, note that the received signals from a medium with distributed scatterers can be divided into two components: the first component represents random variations of the arrival time caused by the interference pattern generated by the scatterers, while the second corresponds to the propagation effect and carries the true information about the arrival time errors (Flax and O'Donnell 1988). To obtain the arrival time errors, the estimation should be performed over a large segment of the scatterer distribution; doing so removes the effect of the first component, i.e., the random variations of the arrival time. Nevertheless, the phase error will still be underestimated, and the larger the true phase error, the more severe the underestimation becomes. To tackle this limitation, an iterative process exploits the underestimated phase error to improve the beam quality and reduce the phase aberration. More precisely, the cross-correlation is performed iteratively: at each iteration, the phase error is estimated using the NNCC method and applied to the data, which yields a somewhat improved beam compared to the initial case. The modified data are then used to estimate the phase error with the NNCC method again, and the resulting estimate is applied to the data obtained from the previous iteration. This process continues until the improvement between two consecutive iterations is negligible. By using the NNCC method iteratively, the residual underestimated phase error is reduced at each iteration, and an accurate phase estimate can be achieved even when the received signals originate from a speckle-generating medium.




> Phase aberration estimation using NNCC method

The NNCC method is a cross-correlation-based method for estimating the phase aberration caused by the inhomogeneity of the imaging medium. When the NNCC method is applied to signals from a speckle-generating medium, the estimation process must be applied iteratively in order to efficiently decrease the underestimated phase error. Moreover, a large segment of the scatterer distribution, at least as long as the range resolution of the imaging system (Flax and O'Donnell 1988), should be used in the process.
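A minimal sketch of the NNCC procedure of (4.3)–(4.6) follows, assuming the channel data are stored as one RF trace per element and that integer-sample lags suffice; sub-sample interpolation, windowing, and the iterative refinement described above are omitted for brevity, and the sign convention of the lag should be checked against the acquisition setup.

```python
import numpy as np

def nncc_phase_profile(rf, fs):
    """Aberration delay profile via NNCC, eqs. (4.3)-(4.6).

    rf : (N, L) array, one RF trace per element (already delayed with
         the nominal geometric delays); fs : sampling frequency [Hz].
    """
    n_elem, n_samp = rf.shape
    pair_shifts = np.zeros(n_elem - 1)
    for i in range(n_elem - 1):
        # peak lag of the adjacent-pair cross-correlation, eqs. (4.3)-(4.4)
        xc = np.correlate(rf[i + 1], rf[i], mode="full")
        k0 = np.argmax(xc) - (n_samp - 1)
        pair_shifts[i] = k0 / fs
    # accumulate the pair-wise shifts along the array, eq. (4.5)
    phi = np.concatenate(([0.0], np.cumsum(pair_shifts)))
    # remove the linear trend across the aperture, eq. (4.6)
    n = np.arange(n_elem)
    return phi - n * pair_shifts.sum() / (n_elem - 1)
```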

4.2.2 Beamsum Correlation Method

The NNCC phase aberration correction method results in high correlation coefficients and, consequently, a low error variance in phase estimation. However, when the received signals correspond to a speckle-generating medium, underestimation occurs, as discussed earlier, and the estimation becomes biased. Note that the signals reflected from a point target radiate as spherical waves; by applying time delays at the receiver and performing the focusing process, the waveform is summed coherently across the array (Krishnan et al. 1997). For speckle-generating media, however, the transmitted wave insonifies several scatterers. The backscattered signals of the insonified scatterers interfere with one another, resulting in a waveform with less curvature than that of a point target. In such a case, applying time delays at the receiver and focusing yields a biased phase estimate, since there is a mismatch between the applied delays and the actual waveforms. Clearly, as the transmitted wave insonifies a larger area, the bias of the estimated phase grows. To reduce this bias compared to the NNCC method, the beamsum correlation method has been proposed (Rigby 2000). In the beamsum method, a summation is performed across the array signals to construct a fixed reference signal as follows:

$$b(k) = \sum_{i=1}^{N} s_i(k). \tag{4.7}$$

The correlation between the obtained reference signal and the received signal of each element is then calculated to estimate the phase aberration. Using the fixed reference signal, the arrival time error is estimated for each element directly, without accumulating the arrival time differences along the array as is done in the NNCC method. Consequently, the beamsum correlation method reduces the underestimation compared to the NNCC method. Its disadvantage, however, is that the correlation between the beamsum and the received signal of each element is lower than the correlation between the adjacent signals used in the NNCC method. In particular, consider the correlation between the beamsum obtained from (4.7) and the signal received by the $i$th element:

$$\sum_{k=-L_c/2}^{L_c/2-1} b^{*}(k)\, s_i(k) = \sum_{n=1}^{N} \sum_{k=-L_c/2}^{L_c/2-1} s_n^{*}(k)\, s_i(k). \tag{4.8}$$

As stated in the above equation, the beamsum correlation for one element is equivalent to the average correlation between that element and all the other elements of the array. The greater the spatial separation of the array elements, the lower the correlation between their signals. It can be concluded from (4.8) that, for the $i$th element, the beamsum correlation is lower than the correlation between that element and its adjacent one. This makes the estimation error variance higher than in the NNCC method.

According to the above discussion, using the correlation function (4.8) directly to estimate the phase error does not lead to a good estimation: strong reflectors dominate the summation of the correlations and affect the beam steering. Also, the error variance is increased in the speckle-generating regions of the imaging medium. To overcome this limitation, the estimation should not vary significantly from one beam to another. By normalizing (4.8), we have

$$C_i = \frac{\sum_k b^{*}(k)\, s_i(k)}{\sqrt{\sum_k |b(k)|^{2}\, \sum_k |s_i(k)|^{2}}}, \tag{4.9}$$

where $C_i$ denotes the normalized correlation sum for the $i$th element. In particular, if the signals of the active elements are identical, i.e., they are perfectly focused from a point target, the parameter $C_i$ tends to one. Using (4.9), a weighting is applied to the observations to reduce the error variance. Similar to the NNCC method, the beamsum correlation method can be performed iteratively to obtain an accurate estimation. Empirically, the difference between the estimated phase errors becomes negligible after 3–4 iterations (Rigby 2000).
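The following sketch illustrates one pass of the beamsum idea of (4.7)–(4.9) under the same data layout as before; the normalized coefficient of (4.9) is indicated in a comment but not used for gating, integer-sample shifts are assumed, and all names are ours.

```python
import numpy as np

def beamsum_correlation(rf, fs, n_iter=4):
    """Beamsum phase estimation, eqs. (4.7)-(4.9), iterated a few times
    (Rigby 2000). rf : (N, L) channel data; fs : sampling frequency."""
    rf = rf.astype(float).copy()
    n_elem, n_samp = rf.shape
    total = np.zeros(n_elem)
    for _ in range(n_iter):
        b = rf.sum(axis=0)                       # fixed reference, eq. (4.7)
        shifts = np.zeros(n_elem, dtype=int)
        for i in range(n_elem):
            xc = np.correlate(b, rf[i], mode="full")
            shifts[i] = np.argmax(xc) - (n_samp - 1)
            # the normalized coefficient C_i of eq. (4.9) could be used
            # here to weight or reject unreliable estimates (omitted)
        for i in range(n_elem):
            rf[i] = np.roll(rf[i], shifts[i])    # apply correction
        total += shifts / fs                     # accumulate over passes
    return total
```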

4.2.3 Nearest Neighbor Cross-Correlation Combined with Beamsum Correlation Method

As discussed in Sects. 4.2.1 and 4.2.2, the high correlation coefficients of the NNCC method result in a reduced error variance compared to the beamsum method; however, when the received signals correspond to a speckle-generating medium, the underestimated errors lead to a biased estimation. In contrast, in the beamsum correlation method, the lateral extent of the signals contributing to the beamsum is reduced. This restricts the spatial frequencies in the correlation and therefore reduces the bias error, but the low correlation coefficients of the beamsum correlation method increase the error variance. It has been proposed to combine these two phase aberration correction methods so as to take advantage of the low variance error of the NNCC method and the low bias error of the beamsum method simultaneously (Krishnan et al. 1997). In this combined method, a threshold is applied to the correlation coefficient between the beamsum and each element. For each element whose correlation coefficient exceeds the selected threshold, the phase estimate of the beamsum correlation method is retained. The remaining intervals, corresponding to elements whose correlation coefficients fall below the threshold, are phase estimated using the NNCC method. One should note that the estimates obtained from the NNCC method are unwrapped between the reference points provided by the beamsum correlation method; that is, they are linearly fitted to the beamsum estimates at the two surrounding reference points. To better understand, consider Fig. 4.1 as an example of phase aberration correction using the combination of the NNCC and beamsum correlation methods. As can be seen, the beamsum correlation coefficients are first obtained, as shown in Fig. 4.1a. The phase errors of the elements whose correlation coefficients exceed the selected threshold are estimated using the beamsum correlation method, and the remaining ones are estimated using the NNCC method, as shown

Fig. 4.1 a Beamsum correlation coefficients with the threshold used in the combined method, b the phase errors estimated with the beamsum correlation method, c the phase errors estimated with the NNCC method in the interval between elements 15–30 (the unwrapped estimated phase is also shown with the dashed plot), and d the final estimated phase aberration for elements 10–50


in Fig. 4.1b. In particular, the estimated phase errors corresponding to the interval between elements 15–30 are shown in Fig. 4.1c. After unwrapping the obtained phase profile and fitting it to the beamsum output, the result is used to fill in the final estimated phase error, as shown in Fig. 4.1d. In this combined method, the threshold value plays an important role, and it is desirable to change it adaptively to achieve an accurate estimation. When the beamsum correlation coefficient is low, the transmitted beam is wide; in such a case, a low threshold (relative to the mean value of the beamsum correlation coefficients) should be assigned in order to prevent bias in the correlations of the NNCC method. In contrast, high beamsum correlation coefficients are associated with a narrow transmitted beam; in such cases, the bias of the correlations in the NNCC method is reduced, and a higher threshold is desirable. One should note that the beamsum correlation coefficients increase at each iteration; therefore, the threshold value is expected to increase at each iteration as well.
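The selection logic of the combined method can be summarized compactly; the sketch below assumes the two per-element estimates and the beamsum correlation coefficients have already been computed, and it omits the linear re-fitting of the NNCC segments between the beamsum reference points.

```python
import numpy as np

def combine_estimates(tau_beamsum, tau_nncc, coeffs, threshold):
    """Keep the beamsum estimate where its correlation coefficient exceeds
    the threshold; elsewhere fall back to the NNCC estimate.  In the full
    method the NNCC segments are additionally re-anchored (linearly fitted)
    between the surrounding beamsum reference points."""
    return np.where(coeffs >= threshold, tau_beamsum, tau_nncc)
```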

4.2.4 Filtered Normalized Cross-Correlation Method

The performance of the normalized cross-correlation (NCC) algorithm is not good enough when the received data suffers from low SNR. In particular, in the STA imaging method discussed in Sect. 1.11.2, the entire imaging volume is insonified by a single-element emission, and the image is reconstructed by processing the corresponding backscattered echoes. The SNR of the received signal is low in this imaging technique compared to conventional B-mode imaging, since the energy of the wave propagated by a single element is low. Therefore, the NCC algorithm does not lead to successful phase aberration correction in STA imaging. To tackle this limitation, a filtered-NCC (F-NCC) technique was developed in Monjazebi and Xu (2021) to achieve the desired performance of the NCC algorithm even at low SNR in speckle regions. Similar to the NNCC method, the phase aberration correction process is performed iteratively in this algorithm to achieve a non-aberrated reconstructed image. Consider the received signal of the $i$th element as $s_i(t)$. In the F-NCC algorithm, a 2D Fourier transform is applied in both the aperture and temporal directions, and the result is denoted as $S_k(f)$. Then, a low-pass filter is applied along the Fourier-transformed aperture direction, which considerably suppresses the noise and clutter. Another low-pass filter is designed and applied along the Fourier-transformed temporal direction to reduce the noise of the RF signal and improve the quality of the final reconstructed image. Denoting the former filter as $M_k$ and the latter as $N(f)$, the filtered data is expressed as below:

$$S'_k(f) = S_k(f) \cdot M_k \cdot N(f). \tag{4.10}$$


Once the filters $M_k$ and $N(f)$ are applied to the data, the result is inverse Fourier transformed in both the temporal and aperture directions, yielding a wavefront with suppressed noise and increased SNR. Finally, the NCC algorithm is applied to the filtered data to correct the phase aberration and improve the image quality. Note that filtering the data according to (4.10) preserves the shape of the wavefront while considerably suppressing the signal noise. In other words, applying the filters does not change the phase deviation in the received data; only the SNR is improved, which makes the NCC algorithm more efficient.
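A possible realization of the 2D filtering step (4.10) is sketched below using ideal (brick-wall) low-pass filters; the cut-off values are illustrative assumptions, not the filters designed in Monjazebi and Xu (2021).

```python
import numpy as np

def fncc_prefilter(rf, ap_cutoff=0.25, t_cutoff=0.5):
    """2-D Fourier-domain low-pass filtering of eq. (4.10).

    rf : (N, L) channel data (aperture x time); ap_cutoff and t_cutoff are
    normalized cut-off frequencies in (0, 0.5], illustrative choices only.
    """
    n_elem, n_samp = rf.shape
    S = np.fft.fft2(rf)                              # aperture x time FFT
    f_ap = np.fft.fftfreq(n_elem)                    # aperture frequencies
    f_t = np.fft.fftfreq(n_samp)                     # temporal frequencies
    M = (np.abs(f_ap) <= ap_cutoff).astype(float)[:, None]   # M_k
    N_f = (np.abs(f_t) <= t_cutoff).astype(float)[None, :]   # N(f)
    return np.real(np.fft.ifft2(S * M * N_f))        # filtered channel data
```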

4.2.5 Speckle Brightness Maximization Method

The average speckle brightness ($\overline{SB}$), used as a quality factor, can serve to correct the phase aberration (Nock et al. 1989). The average speckle brightness is formulated as below:

$$\overline{SB} = C \times \sqrt{ \iint R^{2}(\theta, t)\, dt\, d\theta }, \tag{4.11}$$

where $C$ is a constant and $R(\theta, t)$ denotes the reflection amplitude corresponding to time $t$ and lateral dimension $\theta$. When there is no phase aberration, the point spread function (PSF) contains a large peak; in the presence of phase aberration, the peak of the PSF is reduced and the off-axis signal contribution is increased. Accordingly, the value of $\overline{SB}$ decreases in the presence of phase aberration and is larger when there is none. In other words, the greater the phase aberration, the darker the speckle regions of the image. Inspired by (4.11), one can adjust the time delays of the received signals so as to increase the speckle brightness and, consequently, correct the phase aberration. Since almost all imaging media are speckle-generating in medical US imaging, the factor $\overline{SB}$, which evaluates the speckle, is a good candidate. In this method, known as the speckle brightness maximization algorithm, the phase aberration correction is performed iteratively: at each step, depending on whether $\overline{SB}$ increases or decreases, a specific constant value is added to or subtracted from the time delay of the element obtained in the previous step, and this process is performed for each element. The advantages of this algorithm are that no a priori information about the target is required and that it is simple and relatively insensitive to noise. One should note that if neither adding nor subtracting the constant value increases $\overline{SB}$, the delay remains unchanged.
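The greedy search can be sketched as follows; `brightness_of` is a hypothetical user-supplied callback that re-beamforms the data with the trial delays and returns the average speckle brightness of (4.11), and the step size and number of passes are illustrative assumptions.

```python
import numpy as np

def maximize_speckle_brightness(delays, brightness_of, step, n_passes=5):
    """Per-element delay search that only accepts moves increasing the
    average speckle brightness of eq. (4.11)."""
    delays = np.asarray(delays, dtype=float).copy()
    best = brightness_of(delays)
    for _ in range(n_passes):
        for n in range(delays.size):
            for sign in (+1.0, -1.0):
                trial = delays.copy()
                trial[n] += sign * step
                sb = brightness_of(trial)
                if sb > best:            # keep only improving moves;
                    best, delays = sb, trial
                    break                # otherwise the delay is unchanged
    return delays
```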


4.2.6 Sum of Absolute Differences Minimization Method

Another way to estimate and correct the phase error is to minimize the sum of absolute differences (SAD) between the received signals of two adjacent elements (Karaman et al. 1993). Consider two adjacent elements $n-1$ and $n$. The SAD of the received samples for these two elements is written as below:

$$\varepsilon(r_n^i) = \sum_{k=1}^{K} \left| s_n^i(k) - s_{n-1}^i(k + r_n^i) \right|, \tag{4.12}$$

where $i$ denotes the $i$th scan angle. In the above equation, the received samples within a temporal window of length $K$ are considered, and $r_n^i$ denotes the shift index whose value minimizing (4.12) is taken as the optimum solution. Once the shift index is obtained according to this so-called SAD minimization method, the relative phase aberration (relative aberration delay) is calculated as follows:

$$\varphi_n^i = \Delta t_0 \cdot r_n^i - \gamma_n^i, \tag{4.13}$$

where $\gamma_n^i$ is the relative focusing delay (corresponding to the $n$th element and $i$th scan angle). Then, a summation is performed over the obtained relative phase aberrations as below:

$$\tau_n^i = \sum_{j=1}^{n} \varphi_j^i, \tag{4.14}$$

and the phase aberration is estimated accordingly. One should note that with the SAD minimization method, the scattering effect may not cancel out of the estimated phase aberration because of the non-uniformity of the aberration pattern; the estimation is therefore not accurate enough. To overcome this limitation, a weighted estimation can be used. For the $i$th scan angle, the weight coefficient of the $n$th element, denoted as $w_n^i$, is obtained as below:

$$w_n^i = \frac{1}{K} \sum_{k=1}^{K} \left| s_n^i(k) \right|. \tag{4.15}$$

It can be seen from the above equation that the weight coefficient of element $n$ is obtained by averaging the amplitudes of its received samples. Finally, the weighted phase aberration is estimated according to the following equation:

$$\tau_{n,\text{weighted}} = \frac{\sum_{i=1}^{I} w_n^i\, \tau_n^i}{\sum_{i=1}^{I} w_n^i}, \tag{4.16}$$


in which $I$ scan angles are used to perform the averaging. The advantage of the SAD minimization method is its low computational burden. One should note, however, that the SAD minimization algorithm operates on the sampled received signals of the array; consequently, its estimation accuracy is limited by the sampling rate.
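A compact sketch of the shift search (4.12) and the angle-weighted averaging (4.15)–(4.16) follows; a circular shift is used for brevity, whereas a windowed, non-circular comparison would be used in practice, and all function names are ours.

```python
import numpy as np

def sad_shift(s_prev, s_n, max_shift):
    """Shift index minimizing the sum of absolute differences, eq. (4.12)."""
    shifts = list(range(-max_shift, max_shift + 1))
    costs = [np.abs(s_n - np.roll(s_prev, -r)).sum() for r in shifts]
    return shifts[int(np.argmin(costs))]

def weighted_aberration(tau, w):
    """Weighted average over I scan angles, eqs. (4.15)-(4.16).
    tau, w : (I, N) arrays of per-angle estimates and weights."""
    return (w * tau).sum(axis=0) / w.sum(axis=0)
```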

4.2.7 Time-Reversal Method

In many phase aberration correction algorithms, such as the NNCC method, the amplitude of the signal, which may also be affected by the inhomogeneity of the imaging medium, is not corrected. To achieve a more accurate estimation, however, it is desirable to minimize both the amplitude and phase errors. The time-reversal algorithm overcomes this limitation (Fink et al. 1989): the inhomogeneous medium is insonified, and the backscattered echoes are recorded by the array. Due to the inhomogeneity of the imaging medium, the received signals are distorted. The distorted received signal is time-reversed and backpropagated toward the imaging medium; by doing so, the re-emitted wave is focused on the imaging target. In other words, transmit focusing is performed using the time-reversal algorithm, which results in an improved quality image. However, the good performance of this algorithm is limited to cases in which a point-like source exists in the imaging medium.

4.2.8 Adaptive Parallel Receive Compensation Method

As stated earlier, the performance of the time-reversal algorithm degrades when no point-like source can be found in the imaging medium. To overcome this limitation, one can use a two-step processing technique and compensate for the amplitude and phase errors adaptively. In the first step, the cross-correlation-based algorithm previously discussed in Sect. 4.2.2 is used to estimate the main phase aberration and compensate for the phase error. In the second step, an adaptive process removes the remaining phase aberration and also estimates the amplitude error. This technique is known as the adaptive parallel receive compensation algorithm (PARCA) (Krishnan et al. 1996). PARCA overcomes the beamforming limitations that are not addressed by the cross-correlation-based algorithm. In PARCA, an attempt is made to model the sidelobe distribution at a beam angle using parallel receive; more precisely, the sidelobe interactions are predicted and their contributions are removed from the on-axis part of the signal. In this regard, consider the measured source as $s \in \mathbb{C}^{n \times 1}$ for a transmit beam angle. The convolution-based model is expressed as below:

$$B a = s, \tag{4.17}$$

where $a \in \mathbb{C}^{m \times 1}$ and $B \in \mathbb{C}^{n \times m}$ denote the complex source vector and the complex system matrix, respectively, and $m$ is the number of sources. Assuming that the $i$th source lies in the direction $\theta_i$, the $i$th column of $B$ is the dynamically focused receive beam pattern focused in that direction (Li et al. 1993). The dynamically focused receive beam pattern is approximated for each range by Fourier transforming the aperture function, in which the weight of an active element equals one and the weight of an inactive element equals zero. One should note that the number of active elements and their positions differ from one imaging point (angle and range) to another; the matrix $B$ is therefore different for each imaging point. To achieve the goal of PARCA, the source profile should be estimated. In this regard, the minimization of the estimation error $e = s - Ba$ is written as below:

$$\min_{a} \|e\|_2, \quad \text{subject to } (s + e) \in \text{Range}(B). \tag{4.18}$$

The problem is that an exact structure for the matrix $B$ is not available, since the beam pattern changes with the amplitude and phase aberrations. To tackle this problem, the total least-squares (TLS) model is used: in addition to the measured source, disturbances are also considered for the complex matrix, and the problem is rewritten in the following form:

$$\min_{E, e} \left\| [E \mid e] \right\|_F, \quad \text{subject to } (s + e) \in \text{Range}(B + E), \tag{4.19}$$

where $E \in \mathbb{C}^{n \times m}$ is the disturbance matrix added to the complex matrix, and $\|\cdot\|_F$ denotes the Frobenius norm. In the above problem, the matrix $E$ and the vector $e$ are juxtaposed to form the augmented matrix $[E \mid e]$. Solving the above problem yields the minimal disturbances. Then, any source vector satisfying the following relation is a solution of the TLS:

$$(B + E)\, a = s + e. \tag{4.20}$$

By using the TLS model to rewrite the problem as (4.19), the beam pattern and the source are allowed to vary for each angle and range. This allows the contribution of the sources to the measured signal to be specified accurately. As a result, it is possible to identify the contribution of the off-axis signal and remove its effect from the measured signal. In conclusion, by using PARCA after the cross-correlation-based algorithm, the contrast of the reconstructed image is improved compared to using the NNCC algorithm alone. Moreover, no point reflector is needed in the imaging medium to obtain the improved result with this two-step algorithm. However, one should note that PARCA needs all $n$ Nyquist samples of the received beam in order to correct the phase aberration; this algorithm therefore cannot be implemented in real time due to the high number of received beams.
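For a single imaging point, the TLS problem (4.19)–(4.20) admits the classical SVD solution sketched below; this is the generic TLS solve, not the full PARCA pipeline, and the function name is ours.

```python
import numpy as np

def tls_sources(B, s):
    """Total least-squares solve of (B + E) a = s + e, eqs. (4.19)-(4.20),
    via the SVD of the augmented matrix [B | s]."""
    n, m = B.shape
    C = np.hstack([B, s.reshape(-1, 1)])
    _, _, Vh = np.linalg.svd(C)
    v = Vh[-1].conj()             # right singular vector of the smallest
    return -v[:m] / v[m]          # singular value gives the TLS solution
```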

4.2.9 Modified Beamformer Output Method

The modified beamformer output (MBFO) method is another approach developed to estimate the arrival time variations in an inhomogeneous medium. In this algorithm, the non-aberrated signal of the $n$th element is estimated through a correlation process between the received signal and the output of a modified beamformer over the elements of the array. Detailed explanations of how to perform the estimation using the MBFO algorithm follow. The received signal of the $n$th element in the frequency domain can be expressed as $y_n = s_n f_n$, where $s_n$ denotes the non-aberrated signal and $f_n$ is the phase screen through which the wave propagates in the inhomogeneous medium. It is assumed that the phase screen is independent of the scatterers. Note that $y_n$, $f_n$ and $s_n$ are frequency-dependent, which is omitted from the notation for simplicity. Assuming a zero-mean Gaussian distribution for the signal, all of its statistical information is available. In this regard, the cross-spectrum of the $n$th and $m$th elements of the array is expressed as below (Måsøy et al. 2004):

$$R_{mn} = E\left[ y_m^{*} y_n \right] = E\left[ (s_m f_m)^{*} (s_n f_n) \right] = s_m^{*} s_n F_{mn}, \tag{4.21}$$

where $F_{mn} = E\left[ f_m^{*} f_n \right]$. Inspired by the above equation, the non-aberrated signal $s_n$ is obtained as below:

$$s_n = a_n e^{j\theta_n} = \frac{R_{mn}}{F_{mn}\, a_m}\, e^{j\theta_m}. \tag{4.22}$$

It is assumed that the magnitude of $F_{mn}$ is known, while its phase component is unknown. For this reason, (4.22) is rewritten as below:

$$s_n = \frac{R_{mn}\, e^{j(\theta_m - \theta_{F_{mn}})}}{|F_{mn}|\, a_m} \equiv \frac{R_{mn}}{|F_{mn}|\, s_m^{*}}. \tag{4.23}$$

In other words, the phase component of $F_{mn}$ (due to the refraction occurring at the boundary between two layers) is considered to be absorbed into the phase component of $s_m$. As can be seen from (4.23), $R_{mn}$ is required in order to estimate the non-aberrated signal. However, this parameter is not available in practice; it is therefore estimated by averaging over $K$ realizations as below:

$$\hat{R}_{mn} = \frac{1}{K} \sum_{k=1}^{K} y_m^{*}(k)\, y_n(k). \tag{4.24}$$

It has been shown that the variance of the magnitude of $\hat{R}_{mn}$ is obtained as below (Priestley 1981):

$$\sigma^{2}_{|\hat{R}_{mn}|} \sim \frac{1}{2K}\, |R_{mn}|^{2} \left( 1 + \frac{1}{|w_{mn}|^{2}} \right). \tag{4.25}$$

Also, the variance of the phase of $\hat{R}_{mn}$ is formulated as follows:

$$\sigma^{2}_{\angle \hat{R}_{mn}} \simeq \frac{1}{2K} \left( \frac{1}{|w_{mn}|^{2}} - 1 \right), \tag{4.26}$$

where $w_{mn} = R_{mn}/\sqrt{R_{mm} R_{nn}}$. It can be seen from (4.25) and (4.26) that the variance of $\hat{R}_{mn}$ increases as $|w_{mn}|$ decreases. Therefore, a weighted averaging method is used as below to estimate the non-aberrated signal:

$$\hat{s}_n = \sum_{m=1}^{N} W_{mn}\, \frac{\hat{R}_{mn}}{|F_{mn}|\, \hat{s}_m^{*}}, \quad n = 1, \ldots, N, \tag{4.27}$$

where

$$W_{mn} = \frac{|\hat{w}_{mn}|^{2}}{\sum_{m=1}^{N} |\hat{w}_{mn}|^{2}}. \tag{4.28}$$

Note that $\hat{w}_{mn} = \hat{R}_{mn}/\sqrt{\hat{R}_{mm} \hat{R}_{nn}}$. Equation (4.27), which is used to estimate the non-aberrated

signal, is updated iteratively according to the following equation until it converges:

$$\hat{s}_n^{(q+1)} = \hat{s}_n^{(q)} + C \left[ \hat{s}_n^{(q)} - \sum_{m=1}^{N} W_{mn}\, \frac{\hat{R}_{mn}}{|F_{mn}| \left( \hat{s}_m^{*} \right)^{(q)}} \right], \tag{4.29}$$

where $(\cdot)^{(q)}$ denotes the $q$th iteration step and $C$ is a constant. The amplitude and phase of the obtained signal $\hat{s}_n$, denoted as $|\hat{s}_n|$ and $\hat{\theta}_n$, respectively, are taken as the final estimated values. Finally, the arrival time variation corresponding to the $n$th element is obtained as below:

$$\tau_n = \frac{1}{\omega_0} \left( \hat{\theta}_n - \frac{1}{N} \sum_{m=1}^{N} \hat{\theta}_m \right), \tag{4.30}$$

where $\omega_0$ is the transmitted pulse frequency. By substituting (4.24) into (4.27), we have

$$\hat{s}_n = \sum_{m=1}^{N} W_{mn}\, \frac{1}{K} \sum_{k=1}^{K} y_m^{*}(k)\, y_n(k)\, \frac{1}{|F_{mn}|\, \hat{s}_m^{*}} = \frac{1}{K} \sum_{k=1}^{K} y_n(k)\, \hat{b}_n^{*}(k), \tag{4.31}$$

where

$$\hat{b}_n(k) = \sum_{m=1}^{N} y_m(k)\, \frac{W_{mn}}{|F_{mn}|\, \hat{s}_m}. \tag{4.32}$$

It can be concluded that the non-aberrated signal estimate corresponding to the $n$th element is the correlation of the received signal with the modified beamformer output $\hat{b}_n(k)$, as the name of the algorithm implies.
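A heavily simplified fixed-point sketch of the MBFO estimator follows; it assumes $|F_{mn}|$ is known, initializes the signal estimate with ones, and uses the plain fixed-point form of the update, which corresponds to one particular choice of the step constant $C$ in (4.29). All names are ours.

```python
import numpy as np

def mbfo_delays(Y, F_mag, omega0, n_iter=20):
    """Fixed-point sketch of the MBFO estimator, eqs. (4.24)-(4.30).

    Y : (N, K) complex per-channel spectra over K realizations;
    F_mag : (N, N) assumed-known |F_mn|; omega0 : pulse frequency [rad/s].
    """
    N, K = Y.shape
    R = Y.conj() @ Y.T / K                        # cross-spectra, eq. (4.24)
    d = np.sqrt(np.real(np.diag(R)))
    w = R / np.outer(d, d)                        # w_mn
    W = np.abs(w)**2 / (np.abs(w)**2).sum(axis=0, keepdims=True)  # (4.28)
    s = np.ones(N, dtype=complex)                 # initial guess
    for _ in range(n_iter):
        # weighted estimate of the non-aberrated signal, eq. (4.27)
        s = (W * (R / (F_mag * np.conj(s)[:, None]))).sum(axis=0)
    theta = np.angle(s)
    return (theta - theta.mean()) / omega0        # arrival times, eq. (4.30)
```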

4.2.10 Multi-lag Cross-Correlation Method

Another phase aberration correction algorithm, known as the multi-lag cross-correlation method, has been developed to estimate the phase profile (Liu and Waag 1994). In this algorithm, the time shifts between an element and its neighboring elements are obtained using the cross-correlations between them. The obtained time shifts are then used in a least-squares system of equations, from which the phase profile is estimated. More precisely, let the vector $d$ contain the estimated time shifts. The following equation is formulated, the solution of which yields the estimated phase profile:

$$A \varphi = d, \tag{4.33}$$

where $A$ and $\varphi$ denote the model matrix and the sought phase profile vector, respectively. The solution to the above problem is obtained according to the following equation:

$$\hat{\varphi} = \left( A^{T} A \right)^{-1} A^{T} d. \tag{4.34}$$


Note that by increasing the number of neighboring elements, or equivalently, the number of correlations, the computational complexity of the multi-lag cross-correlation algorithm will be increased (Ivancevich et al. 2009).
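Given the model matrix and the measured shifts, the solve of (4.34) is a one-liner; the helper that builds $A$ from element pairs is our own illustrative construction (one row per pair, encoding the difference of the two phases), not the exact structure of Liu and Waag (1994).

```python
import numpy as np

def build_model_matrix(n_elem, max_lag):
    """One row per (i, i+l) pair: phi[i+l] - phi[i] = measured shift."""
    rows = []
    for lag in range(1, max_lag + 1):
        for i in range(n_elem - lag):
            r = np.zeros(n_elem)
            r[i], r[i + lag] = -1.0, 1.0
            rows.append(r)
    return np.array(rows)

def multilag_phase_profile(A, d):
    """Least-squares solve of eqs. (4.33)-(4.34); lstsq is used instead of
    forming (A^T A)^{-1} explicitly (A has a constant-offset null space,
    so the minimum-norm solution is returned)."""
    phi, *_ = np.linalg.lstsq(A, d, rcond=None)
    return phi
```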

4.2.11 Common-Midpoint Cross-Correlation Method

The common-midpoint cross-correlation algorithm is another way to estimate the phase profile (Li 1997). The information in the complete data set received from each transmit element is redundant: for an array of $N$ elements, $N$ independent images of the same imaging target can be obtained, one per transmit element. The idea behind this algorithm is to use a subset of the complete data corresponding to a common midpoint, known as the common-midpoint gather, to estimate the phase profile. For the transmit element $s$ and the receive element $g$, the following are defined (Haun et al. 2004):

$$m = \frac{g + s}{2}, \qquad h = \frac{g - s}{2}, \tag{4.35}$$

where $m$ and $h$ denote the midpoint and offset parameters, respectively. Considering $s_{s,g}(t)$ as the received signal associated with the $s$th transmit element and $g$th receive element, the data subset $s_{m-h, m+h}(t)$ is the common-midpoint gather (Haun et al. 2004). The phase profile is estimated by cross-correlating the common-midpoint gathers. Note that the common-midpoint gathers are highly correlated even in the presence of phase aberration; this makes the common-midpoint-based algorithms more robust than algorithms based on the signals of a focal point. Also, there is no need to perform the common-midpoint cross-correlation algorithm iteratively. However, obtaining the phase profiles for different steering angles is difficult with this algorithm, and this phase aberration correction technique suffers from a high computational complexity.

4.2.12 Adaptive Scaled Covariance Matrix Method

Another way to correct the phase aberration is to use the adaptive scaled covariance matrix (ASCM) method (Silverstein and Ceperley 2003). As its name implies, the sample covariance matrix of the time-delayed data is used to estimate the phase aberration. Consider the vector $x(k) \in \mathbb{C}^{N \times 1}$ as the time-delayed signal of the $N$ elements (corresponding to the $k$th resolution cell), which is phase aberrated due to the inhomogeneity of the imaging medium. The sample covariance matrix of $x(k)$ is estimated according to the following equation:

$$\hat{R} = \frac{1}{K} \sum_{k} x(k)\, x^{H}(k). \tag{4.36}$$

It can be seen from the above equation that the sample covariance matrix is estimated by averaging over the correlations of the (independent) resolution cells, the number of which is assumed to be $K$. The entry in the $m$th row and $n$th column of the matrix $\hat{R}$ is denoted as $\hat{R}(m, n)$. The sample covariance matrix can be modeled as below:

$$\hat{R}(m, n) \cong \left| \hat{R}(m, n) \right| e^{j[\varphi(m) - \varphi(n)]}, \tag{4.37}$$

| | | |ˆ n)| = Ccons , W (m, n) | R(m,

(4.38)

where .Ccons is a constant. Then, the obtained scaling factor is used in the following phase aberration model: Σ .

ˆ W (m, n) R(m, n) =

n

Σ

| | | |ˆ n)| e j[φ(m)−φ(n)] W (m, n) | R(m,

n

= Ccons e jφ(m)

Σ

e− jφ(n) ,

(4.39)

n

which is inspired by (4.37) and (4.38). The important key point is that the above model results in creating a constant phase reference that is independent of the selected element to estimate its phase aberration. This constant phase reference is specified as below: Σ − jφref .|A|e ≡ Ccons e− jφ(n) . (4.40) n

Accordingly, (4.39) is rewritten as below: Σ .

ˆ W (m, n) R(m, n) = |A|e j[φ(m)−φref ] .

(4.41)

n

By placing the scaling factor in the above equation, the aberrant phase .φ(m) will be estimated and applied to the phase aberrated data (by multiplying its complex conjugate with the data), and consequently, phase aberration correction will be performed. ˆ n)|−1 in Silverstein and Ceperley (2003), The scaling factor is considered as .| R(m, that is, the inverse of the amplitude of the estimated covariance matrix. Note that the reference phase .φref is obtained by performing a summation over all channels,

186

4 Phase Aberration Correction

as can be seen from (4.40). Therefore, this process suffers from high-computational complexity. To tackle this problem, one can use a sub-optimal estimation such that only a few columns of the covariance matrix are used to perform the computations. Although this sub-optimal method negatively affects the performance of the ASCM algorithm a bit, however, the computational burden is considerably improved compared to the initial case. Obtaining the adaptive scaling factor according to the explanation performed above to estimate the phase aberration leads to a high-computational burden. To overcome this limitation and speed up the estimation process, the lookup table scaled covariance matrix (LSCM) algorithm can be used; in contrast to the ASCM algorithm, in the LSCM method, the scaling factor is obtained from a lookup table and independent of the received data. The estimation process of the aberrant phase is similar to the one performed in the ASCM algorithm, and the difference is only in obtaining the scaling factor. More information about the selected lookup table can be found in Silverstein and Ceperley (2003). The performance of the scaled covariance matrix-based algorithm is good in relatively homogeneous mediums; in the cases in which the imaging medium leads to refraction of the beam, such as breast imaging, this algorithm cannot successfully compensate for the phase aberration.

4.2.13 Continuous Estimation Methods One of the limitations of the time delay estimation techniques is that the accuracy of the estimation depends on the sampling frequency. To tackle this limitation, one can use continuous estimation using polynomials; by continuously representing the discrete signal, an accurate estimation will be obtained. In the following, different continuous estimation methods are discussed. Spline-based Algorithm The advantage of using polynomials to continuously represent the signal is that a smaller sampling frequency can be used while good performance is achieved. In particular, the cubic spline, as a commonly used polynomial in signal processing, is used to represent the signal. Then, by using a pattern-matching function, an appropriate time delay is estimated (Pinton and Trahey 2006). More precisely, the reference signal is considered a pattern. Then, an attempt is made to achieve a time delay for which the delayed signal is matched well with the pattern by minimizing/maximizing a matching function. A detailed explanation is presented in the following. A continuous piecewise polynomial representation of the signal .s(t) using the 3-order cubic spline is expressed as below: s (iΔt0 ≤ t < (i + 1)Δt0 ) = f i (t) = ai t 3 + bi t 2 + ci t + di ,

.

(4.42)

where .ai , .bi , .ci and .di are the coefficients of the cubic spline polynomials for the ith sample. In the first step, the reference and time-delayed signals are represented

.

4.2 Phase Aberration Correction Based on Phase Term/Time Delay Estimation

187

using the cubic spline polynomial according to the above equation. Consider the continuous representations of the reference and time-delayed signals as . pi (t) and .qi (t), respectively, for the .ith sample. We have .

pi (t) = ai t 3 + bi t 2 + ci t + di , qi (t) = ki t 3 + li t 2 + m i t + n i .

(4.43)

In the second step, a pattern-matching function is considered to obtain the time delay. One of the most commonly used pattern-matching functions is the sum of square errors (SSE), which is defined as below: { .

SS E(t) =

T /2

−T /2

[ p(τ ) − q(t + τ )]2 dτ,

(4.44)

where.T denotes the sampling interval. Considering the continuous representations of the reference and time-delayed signals that are obtained as (4.43), the SSE definition is rewritten as below: ] [ [ 2] 5 6 . SS E(t) = t T ki + t 5 [2T ki li ] + t 4 T li2 + 2T ki m i + T 3 ki2 4 [ ] 5 1 + t 3 −2T di ki + 2T li m i + 2T ki n i − T 3 bi ki + T 3 ki li 6 3 [ 1 1 1 + t 2 −2T di li + T m i2 + 2T li n i − T 3 ci ki − T 3 bi li + T 3li2 2 6 2 ] 3 3 + T 3 ki m i − T 5 ai ki + T 5 ki2 40 16 [ 1 1 1 + t −2T di m i + 2T m i n i − T 3 di ki − T 3 ci li − T 4 bi m i 2 3 6 ] 1 3 3 5 1 5 1 5 1 3 + T li m i + T ki n i − T bi ki − T ai li + T ki li + C, 2 2 40 20 8 (4.45) where 1 3 2 1 3 1 T ci + T bi di − T 3 di li 12 6 6 1 3 2 1 3 1 3 1 1 3 − T ci m i + T m i − T bi n i + T li n i + T 5 bi2 6 12 6 6 80 1 5 1 5 1 52 1 1 5 + T ai ci − T ci ki − T bi li + T li − T 5 ai m i 40 40 40 80 40 1 7 2 1 7 1 7 2 1 T ai − T ai ki + T ki + T 5 ki m i + 40 448 224 448

C = T di2 − 2T di n i + T n i2 +

.

(4.46)

188

4 Phase Aberration Correction

Displacement within . L c samples of the reference and time-delayed signals is calculated by considering a summation over . L c piecewise polynomials according to the following equation:

.

SS E(t) =

Lc { Σ i=1

T /2 −T /2

[ pi (τ ) − qi (t + τ )]2 dτ.

(4.47)

The time.t that minimizes the above function is the optimum time delay between these two signals. This optimum value is achieved by taking a derivative of the function . SS E(t) with respect to .t and setting the result equal to zero. Different pattern-matching functions can be used to estimate the time delay. The normalized cross-correlation (NCC) function is another pattern-matching function which is defined as below: { T /2 −T /2 pi (τ )qi (t + τ )dτ (4.48) . N CC(t) = / . { T /2 2 { T /2 2 −T /2 pi (τ )dτ −T /2 qi (t + τ )dτ Similar to the SSE function, the continuous representations of the reference and time-delayed signals . pi (t) and .qi (t) are substituted in the NCC formulation, a summation is performed over . L c samples, and finally, the optimum time delay value is estimated; the time that maximizes the NCC function presented in (4.48) is the optimal value. It is worth noting that the order of the cubic spline polynomial affects the estimation; the accuracy of the estimation will be improved by increasing the order of the polynomial. However, this accuracy improvement is at the expense of increasing the computational complexity. In Viola and Walker (2005), it was proposed to continuously represent the reference signal and keep the time-delayed signal discrete in order to perform the continuous estimation. The SSE between the reference signal . pi (t) and the discretetime-delayed signal .s(i) is then defined as below:

.

SS E(t) =

Lc Σ

[ pi (t) − s(i)]2 ,

(4.49)

i=1

and the optimal time delay is estimated by finding the time $t$ that minimizes the above equation. Although the performance of this technique is somewhat degraded in terms of bias and jitter, the computational complexity of solving (4.49) is much lower than when continuous representations are used for both the reference and time-delayed signals, i.e., (4.47).

Sample Tracking Method

In the continuous estimation method discussed above, the pattern-matching function is formulated over $L_c$ samples. In other words, a window of length $L_c$ is considered, and the time delay is estimated for the $L_c$

samples included in that window. The estimation can also be performed in a way that does not require windowing; instead, the time delays are estimated for each sample separately. This technique is known as the sample tracking (ST) method (Zahiri-Azar and Salcudean 2008) and results in a denser estimate than the window-based algorithm. In the following, this algorithm is discussed in more detail. In the ST algorithm, the reference signal is continuously represented by the cubic spline polynomial (i.e., $p_i(t)$), while the time-delayed signal $s(i)$ is kept in its discrete form. For each sample $i$, the displacement of $s(i)$ with respect to the reference $p_i(t)$ is estimated by finding the time $t$ for which $p_i(t) = s(i)$ holds; that is, the time delay is estimated as the root of $p_i(t) - s(i)$ in the interval $[0, T]$:

$$\hat{\Delta}(i) = \left\{ t \in [0, T] \mid p_i(t) = s(i) \right\}. \tag{4.50}$$

For negative time shifts, for which the estimate lies in the interval $[-T, 0]$, the above equation is applied with the polynomial coefficients $a_{i-1}$ to $d_{i-1}$; more precisely, the coefficients of the reference signal are shifted by one sample, and the root of $p_{i-1}(t) - s(i)$ is sought in $[0, T]$, which is equivalent to estimating in the interval $-T + [0, T] = [-T, 0]$. In the ST algorithm, some samples may yield more than one root, while others may yield no root at all; such cases are known as incorrect roots. To preserve the performance of the algorithm, these incorrect roots should be removed, for instance with a non-linear filter such as a median filter (Zahiri-Azar and Salcudean 2008). To better understand the difference between the ST algorithm and the method discussed earlier in this section, consider (4.49): with $L_c = 1$ it reduces to $[p_i(t) - s(i)]^{2}$, and taking the derivative with respect to $t$ and setting the result equal to zero yields $p_i(t) = s(i)$, exactly the equation used in the ST algorithm. In general, for high-SNR signals, the ST algorithm outperforms the window-based algorithm in terms of bias and jitter. For a more accurate estimate, a two-step process combining the two methods can be used (Zahiri-Azar and Salcudean 2008): the window-based algorithm first provides a coarse estimate of the time delay, and the ST algorithm is then applied to its output to obtain the fine estimate.

Zero-Crossing Tracking Method

In addition to the ST algorithm, another tracking method known as zero-crossing tracking (ZCT) can be used to perform the estimation (Zahiri-Azar and Salcudean 2008). In this algorithm, the zero-crossings of the reference and time-delayed signals are obtained; the distance between the positions of the obtained zero-crossings is

the estimated time delay. More precisely, the reference and time-delayed signals are continuously represented using the cubic spline polynomial (i.e., $p_i(t)$ and $q_i(t)$ are obtained), and the zero-crossings associated with each signal are found. Let $j$ index the obtained zero-crossings. The positions corresponding to the signals $p_i(t)$ and $q_i(t)$, denoted as $Z_p(j)$ and $Z_q(j)$, respectively, are obtained as below:

$$Z_p(j) = t\big|_{p_i(t) = 0}, \qquad Z_q(j) = t\big|_{q_i(t) = 0}. \tag{4.51}$$

The difference between these values, or equivalently, the distance between the zero-crossing positions of the reference and time-delayed signals, is the estimated time delay:

$$\hat{\Delta}(j) = Z_p(j) - Z_q(j). \tag{4.52}$$

Note that it is assumed that both signals $p_i(t)$ and $q_i(t)$ yield the same zero-crossing index $j$. However, a zero-crossing of the reference signal may be missed, or a new zero-crossing may appear in the time-delayed signal, in which case this assumption no longer holds and the ZCT algorithm suffers from an increasing bias. To tackle this problem, in Zahiri-Azar and Salcudean (2008) it was proposed to modify the ZCT algorithm by rewriting (4.52) as below:

$$\hat{\Delta}(j) = Z_p(j + n) - Z_q(j + m), \tag{4.53}$$

where $n$ and $m$ are known as the correction factors. By using the correction factors, the added or removed zero-crossings are taken into account. Indeed, in this modified ZCT algorithm, positive jumps larger than $\lambda/4$ are attributed to the appearance of a new zero-crossing, and negative jumps (again, larger than $\lambda/4$) to the removal of a zero-crossing. The modified ZCT algorithm improves the estimation performance compared to the initial ZCT method.
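To make the continuous estimation idea concrete, the sketch below represents the reference trace with a cubic spline and minimizes the SSE of (4.49) against the discrete delayed trace; the search interval is an illustrative assumption, and the spline's extrapolation at the record edges is ignored.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

def spline_delay(reference, delayed, dt):
    """Sub-sample delay between two traces, in the spirit of eq. (4.49)
    (Viola and Walker 2005): continuous reference, discrete delayed trace."""
    t = np.arange(reference.size) * dt
    p = CubicSpline(t, reference)
    sse = lambda tau: np.sum((p(t - tau) - delayed) ** 2)
    res = minimize_scalar(sse, bounds=(-5 * dt, 5 * dt), method="bounded")
    return res.x   # estimated delay in the same units as dt
```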

4.2.14 Phase Aberration Correction Method Combined with the Minimum Variance Beamformer

The adaptive MV algorithm has been discussed previously, and it has been shown that it can successfully improve the image quality compared to the non-adaptive DAS beamformer. However, its sensitivity to sound velocity errors greatly limits its performance. Although different techniques have been developed to make the MV algorithm robust against these errors, as discussed in Sect. 3.6, there is a trade-off between robustness and image quality. In Ziksari and Asl (2017, 2015), the performance of the MV algorithm was evaluated in the presence of phase aberration, and it was proposed to use phase aberration correction techniques to increase the robustness of this algorithm against phase aberrations. In that study, the phase aberration is corrected in both the transmission and reception steps: in the first step, a phase aberration correction algorithm is applied to the received signals and the phase aberration profile is estimated; in the second step, the estimated profile is applied inversely to the system to correct the transmit phase aberration. Consequently, the phase aberration remains only in the received signals, where it is corrected by applying a phase aberration correction algorithm such as the NNCC method. Once the phase aberration is corrected in both transmission and reception, applying the MV algorithm efficiently yields a reconstructed image of high quality. Also, in Chau et al. (2018), the severe sensitivity of the MV algorithm and of its modified version, the EIBMV algorithm, to phase aberration was investigated, and combining these algorithms with a phase aberration correction algorithm was confirmed as an efficient way to reduce this sensitivity.

4.2.15 Coherent Multi-transducer Imaging Method

In Peralta et al. (2019), a correlation-based technique, known as coherent multi-transducer US imaging, was developed for phase aberration correction. In this algorithm, two linear arrays that cover a common field of view are synchronized and considered to lie in the same plane (i.e., $y = 0$). Plane wave imaging is performed using these two linear arrays. The time-delayed and beamformed data obtained with the DAS beamformer, in which the $i$th and $j$th arrays perform the emission and reception, respectively, is denoted $y_{\mathrm{DAS}_{i,j}}(t; \alpha)$ and obtained from the following equation:

$$y_{\mathrm{DAS}_{i,j}}(t; \alpha) = \sum_{k=1}^{N} T_i R_j(k, t - \tau_k), \tag{4.54}$$

where $\alpha$ denotes the emission angle, and $\tau_k$ is the calculated time delay corresponding to the $k$th element of the receiving array. Also, the pair of the $i$th transmit array and $j$th receive array is denoted $T_i R_j$. The final beamformed image is obtained by compounding all possible imaging scenarios between the arrays:

$$y_{\mathrm{DAS}}(t; \alpha) = y_{\mathrm{DAS}_{1,1}}(t; \alpha) + y_{\mathrm{DAS}_{1,2}}(t; \alpha) + y_{\mathrm{DAS}_{2,1}}(t; \alpha) + y_{\mathrm{DAS}_{2,2}}(t; \alpha). \tag{4.55}$$

In order to correct the phase aberration and improve the image quality, the beamforming parameters $\mathcal{P}$ must be optimized. The sought parameters include the sound velocity ($c$), the rotation angles of the arrays ($\theta_1, \theta_2$), and the translation vectors ($\boldsymbol{r}_1, \boldsymbol{r}_2$); that is, $\mathcal{P} = \{c, \theta_i, \boldsymbol{r}_i\}$ for $i = 1, 2$. To obtain the optimum values of these parameters, the following function is maximized:


$$\mathcal{X}(\mathcal{P}) = \sum_{k=1}^{N} \Big\{ \mathrm{NCC}\left[T_1 R_1(t; \mathcal{P}), T_2 R_1(t; \mathcal{P})\right] w_{1,1} w_{2,1} + \mathrm{NCC}\left[T_1 R_2(t; \mathcal{P}), T_2 R_2(t; \mathcal{P})\right] w_{1,2} w_{2,2} \Big\}, \tag{4.56}$$

where $w_{i,j}$ denotes the weighting coefficients applied to the received signals, and NCC stands for normalized cross-correlation. Note that (4.56) is written for a single imaging point. By maximizing the normalized cross-correlation between the received signals of the elements, the optimum values of the parameters $\mathcal{P}$ are obtained, and consequently, the phase aberration correction is performed.

4.2.16 Phase Aberration Correction Using Blind Calibration of the Array

In van der Meulen et al. (2021), a blind calibration technique was proposed to correct the phase aberration. This algorithm tries to find a transfer function for the aberration layer. Consider $\boldsymbol{y} \in \mathbb{C}^{N \times 1}$ and $\boldsymbol{s} \in \mathbb{C}^{K \times 1}$ as the received signal of the $N$-element array and the transmit signal, respectively, for a single temporal frequency bin. Note that the transmit array is assumed to consist of $K$ elements. The received signal is formulated as a linear function of the transmit signal according to the following model:

$$\boldsymbol{y} = \boldsymbol{G}\boldsymbol{s}. \tag{4.57}$$

In the above equation, $\boldsymbol{G} = \boldsymbol{G}^{(r)} \mathrm{diag}(\boldsymbol{x}) \boldsymbol{G}^{(t)} \in \mathbb{C}^{N \times K}$, where each entry of the matrix $\boldsymbol{G}^{(r)}$, denoted $g^{(r)}(n, p)$, represents the Green's function of the transmitted wave from the $p$th pixel to the $n$th element. Similarly, each entry of the matrix $\boldsymbol{G}^{(t)}$, denoted $g^{(t)}(p, k)$, is the Green's function from the $k$th source to the $p$th pixel. Also, the $p$th entry of the vector $\boldsymbol{x} \in \mathbb{C}^{N_x N_y \times 1}$ denotes the intensity of the $p$th pixel of the image (which consists of $N_x \times N_y$ pixels). In the presence of phase aberration, another transfer function is added to the model; a virtual array is considered in front of the aberration layer, and it is assumed that there exists a transfer function from the virtual array to the real one. In this case, (4.57) is rewritten as

$$\boldsymbol{y} = \boldsymbol{H}\boldsymbol{G}\boldsymbol{s}, \tag{4.58}$$

where $\boldsymbol{H} \in \mathbb{C}^{N \times N}$ is the so-called mapping function from the virtual array to the real one. In order to calibrate the array, the matrix $\boldsymbol{H}$ should be estimated, taking into account that no prior knowledge about the vector $\boldsymbol{x}$ is available. In this regard, two sets of measurements, without and with the presence of the


aberration layer, are performed. First, consider $E$ transmissions using the $K$-element transmit array; the transmit matrix is denoted $\boldsymbol{S} = [\boldsymbol{s}(1), \ldots, \boldsymbol{s}(E)] \in \mathbb{C}^{K \times E}$. Then, inspired by (4.58), the measurements are modeled as

$$\boldsymbol{Y}_1 = \boldsymbol{H}\boldsymbol{G}\boldsymbol{S}, \tag{4.59}$$

where $\boldsymbol{Y}_1 \in \mathbb{C}^{N \times E}$. The second set of measurements is performed in a similar manner, with the difference that a phase screen is placed in front of the array, which results in phase aberration. In this case, the measurements are modeled as

$$\boldsymbol{Y}_2 = \boldsymbol{H}\boldsymbol{\Theta}\boldsymbol{G}\boldsymbol{S}, \tag{4.60}$$

where $\boldsymbol{\Theta} = \mathrm{diag}(\boldsymbol{\theta}) \in \mathbb{C}^{N \times N}$, and $\boldsymbol{\theta} \in \mathbb{C}^{N \times 1}$ denotes the phase shifts applied to the receiving array. To perform the calibration, or equivalently, to obtain the matrix $\boldsymbol{H}$, (4.59) and (4.60) are combined according to the following relation:

$$\boldsymbol{Z}\boldsymbol{Y}_1 = \boldsymbol{Y}_2 \;\Rightarrow\; \boldsymbol{Z} = \boldsymbol{Y}_2 \boldsymbol{Y}_1^{\dagger} = \boldsymbol{H}\boldsymbol{\Theta}\boldsymbol{G}\boldsymbol{S}\left(\boldsymbol{G}\boldsymbol{S}\right)^{\dagger}\boldsymbol{H}^{-1} = \boldsymbol{H}\boldsymbol{\Theta}\boldsymbol{H}^{-1}, \tag{4.61}$$

where $(\cdot)^{\dagger}$ denotes the pseudo-inverse operation. Note that the above equation assumes that $\boldsymbol{G}\boldsymbol{S}$ is a full-rank matrix, so that $\boldsymbol{G}\boldsymbol{S}(\boldsymbol{G}\boldsymbol{S})^{\dagger} = \boldsymbol{I}$. Also, note that the obtained matrix $\boldsymbol{Z}$ is independent of $\boldsymbol{s}$ and $\boldsymbol{x}$. Since the eigenvalues and eigenvectors of the matrix $\boldsymbol{Z}$ are $\boldsymbol{\theta}$ and $\boldsymbol{H}$, respectively, the aberration matrix is obtained by eigendecomposing the matrix $\boldsymbol{Z}$ without any prior information about $\boldsymbol{\theta}$, under the constraint that the eigendecomposition is unique. This constraint is satisfied by an appropriate design of the phase screen such that each delay of the vector $\boldsymbol{\theta}$ is unique (van der Meulen et al. 2021). This process should be repeated for all temporal frequency bins. From the above explanations, one can conclude that the calibration is performed by casting the problem as finding the eigenvectors of the matrix $\boldsymbol{Z}$. Note that in order to achieve an accurate solution, the matrix $\boldsymbol{Y}_1$ in (4.59) must be well-conditioned; otherwise, the estimation will not be performed correctly.
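A minimal numerical sketch of the calibration step in (4.61) follows: it builds $\boldsymbol{Z} = \boldsymbol{Y}_2\boldsymbol{Y}_1^{\dagger}$ from two synthetic measurement sets and recovers the phase screen as the eigenvalues of $\boldsymbol{Z}$. All dimensions, the random matrices, and the phase screen are assumptions made purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
N, K, E = 8, 8, 12   # receive elements, transmit elements, transmissions (assumed)

G = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
S = rng.standard_normal((K, E)) + 1j * rng.standard_normal((K, E))
H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
theta = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, N))   # distinct phase delays

Y1 = H @ G @ S                        # (4.59): measurements without the screen
Y2 = H @ np.diag(theta) @ G @ S       # (4.60): measurements with the screen

Z = Y2 @ np.linalg.pinv(Y1)           # (4.61); valid when GS has full row rank
eigvals, eigvecs = np.linalg.eig(Z)   # eigvals ~ theta, eigvecs ~ columns of H
print(np.allclose(np.sort(np.angle(eigvals)), np.sort(np.angle(theta)), atol=1e-6))

Because the eigendecomposition is unique only up to ordering and scaling, the recovered phases are compared after sorting; this mirrors the uniqueness constraint on the phase screen discussed above.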


4.3 Phase Aberration Correction Based on Sound Velocity Estimation

As mentioned at the beginning of this chapter, another way to compensate for the phase aberration and improve the image quality is to estimate the sound velocity in different parts of the imaging medium (rather than the time delay or the aberrant phase of the received data). In the following, the algorithms based on sound speed estimation are presented.

4.3.1 Image-Based Algorithm

In Yoon et al. (2012), a phase aberration correction algorithm, known as the image-based algorithm, was developed in which the beamformed data is used to perform the estimation. In this algorithm, a focusing quality factor (FQF) is defined, and the sound speed that yields the optimal value of the FQF is selected. As different parts of an inhomogeneous medium have different sound speeds, the sound velocity is estimated separately for each region of interest (ROI) in the image-based algorithm. It is well-known that the slope of the PSF plot of a beamformed signal obtained with the true sound velocity is steeper than that obtained with an incorrectly estimated sound velocity. Consequently, the average gradient of the edge region is a good candidate for the FQF definition; as the average gradient of the edge region increases, the estimated sound velocity gets closer to its true value. Note that the defined FQF depends on the edge visibility. Therefore, an edge detection algorithm is required to calculate this factor. However, the problem is that US images are contaminated with speckle noise, which makes conventional masking techniques inefficient for edge detection. Therefore, the speckle texture should be removed before performing edge detection. In this regard, the Laplacian pyramid-based non-linear diffusion (LPND) technique has been proposed, which is strongly capable of removing the speckle pattern while preserving the edges. Although the speckle should be removed in order to obtain the FQF, one should note that in the speckle regions, the high-density speckle texture is itself useful for calculating this factor. This implies that the LPND technique cannot be used over the whole imaging region to obtain the FQF; rather, it can only be applied to regions where a point-like target exists. For this reason, in Yoon et al. (2012), it was proposed to use the envelope signal instead of its Laplacian pyramid transform in order to obtain the FQF, since by using the envelope signal, the effect of non-linear processing is minimized. In such a case, the non-linear anisotropic diffusion equation is considered as below:


$$\begin{aligned} \frac{\partial I}{\partial t} &= \mathrm{div}\left\{ g\left(\left\|\nabla \left(G(\sigma_a, \sigma_l) * I\right)\right\|\right) \nabla I \right\}, \\ g\left(\|\nabla J\|\right) &= \exp\left[-\left(\frac{\|\nabla J\|}{k}\right)^2\right], \\ k &= \mathrm{mean}\left(\|\nabla I\|\right), \end{aligned} \tag{4.62}$$

where $*$ represents convolution, $\nabla$ and $\mathrm{div}\{\cdot\}$ denote the gradient and divergence operators, respectively, $g(\cdot)$ is the diffusivity function, $I$ is the image intensity, and $k$ is the gradient threshold, defined as the mean value of the gradient of the image intensity. Also, $J = G(\sigma_a, \sigma_l) * I$ is the filtered image intensity, where $G$ denotes the anisotropic Gaussian filter characterized by the standard deviations in the axial ($\sigma_a$) and lateral ($\sigma_l$) directions. By solving the above equation, the edge visibility is obtained, which is equivalent to the sum of the gradient energy of the edge regions of the image.

Algorithm 1 Image-based method to perform PAC.

Input: N_s pre-determined sound velocities c ∈ C^{N_s×1}, received signals, original image.
Output: Sound-velocity-corrected reconstructed image (X_corrected).
1: ROI selection
2: for i = 1 : N_s do
3:   beamforming using the ith pre-determined sound velocity
4:   FQF(i) = edge detection and edge visibility identification
5: end for
6: i_opt = argmax_i {FQF(i)}
7: c_estimated = c(i_opt)
8: X_corrected = reconstructed image using c_estimated
9: Return X_corrected

The pseudo-code of the image-based algorithm is presented in Algorithm 1. It can be seen that the beamformed data is generated for each ROI using $N_s$ pre-determined sound velocities (from 1400 to 1600 m/s with a step size of 10 m/s in Yoon et al. (2012)). Then, edge detection and edge visibility identification are performed on the beamformed data to calculate the FQF corresponding to the considered sound velocity. Finally, the sound speed that maximizes the FQF is selected as the optimal value.
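A compact sketch of the sweep in Algorithm 1 is given below; `beamform` and `edge_visibility` are hypothetical stand-ins for the system-specific beamformer and the LPND/gradient-based FQF of Yoon et al. (2012), and the velocity grid matches the range quoted above.

import numpy as np

def estimate_sound_speed(rf_data, beamform, edge_visibility,
                         c_grid=np.arange(1400.0, 1601.0, 10.0)):
    # Sweep candidate speeds, beamform each ROI, score by the FQF, keep the best.
    fqf = [edge_visibility(beamform(rf_data, c)) for c in c_grid]
    return c_grid[int(np.argmax(fqf))]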

4.3.2 An Efficient Sound Velocity Estimation Algorithm

In Cho et al. (2009), an efficient algorithm based on sound velocity estimation was proposed in which the estimation is performed by processing only a few scan lines.


Therefore, the computational complexity of this technique is low. Consider the IQ signal corresponding to the $i$th receiving element:

$$g_i(t) = \mathrm{LPF}\left[s_i(t)e^{-j\omega_0 t}\right] = A_i(t + \tau_i)e^{-j\omega_0 \tau_i},$$

where $\omega_0$ denotes the carrier frequency, $A_i(t + \tau_i)$ represents the signal amplitude of the $i$th receiving element, and LPF stands for low-pass filtering. Also, $\tau_i$ is the time delay corresponding to the $i$th element, which depends on the sound velocity as well as the distance between the element and the considered imaging point. In particular, let the position of the $i$th element be $(x_i, y_i)$, and assume a reflector located at depth $Z$ and lateral distance $X_d$ from the central axis of the beam, i.e., at coordinate $(X_d, Z)$. In such a case, the time delay $\tau_i$ is obtained as

$$\tau_i = \frac{\sqrt{(X_d - x_i)^2 + (Z - y_i)^2} - Z}{c}. \tag{4.63}$$

To perform the phase aberration correction according to the algorithm developed in Cho et al. (2009), the parameters $X_d$ and $c$ are set such that the following cost function is maximized:

$$\mathcal{X}(c, X_d; z = z_0) = \left| \sum_{i=1}^{N} \frac{g_i(t_0 - \tau_i)}{\left|g_i(t_0 - \tau_i)\right|} e^{j\omega_0 \tau_i} \right|, \tag{4.64}$$

and the optimum sound velocity is estimated accordingly for the depth $z = z_0$ ($t = t_0$). Note that the positions of the reflectors that generate a sufficiently strong echo signal are not known. For this reason, an interval in the depth direction of the imaging region is considered when maximizing the above cost function. Also, in order to reduce the effect of noise, pixels whose corresponding echoes are smaller than a specified threshold are removed and not included in the estimation procedure. Optimizing (4.64) is performed for a small number of scan lines (1–5 scan lines according to Cho et al. 2009), and the values obtained for each scan line are averaged to achieve the final estimated sound velocity.

4.3.3 Minimum Average Phase Variance Method

The minimum average phase variance (MAPV) algorithm, an iterative method, can be used to estimate the sound velocity of the imaging medium (Yoon et al. 2011). In this algorithm, the processing is performed on the beamformed data; first, an initial sound velocity $c_{\text{init}}$ is used to calculate the time delays corresponding to each element of the array. By applying the obtained time delays to the received signals, the beamformed data is obtained. Assume that the


beamformed data consists of $L$ scanlines and $P$ focal points. The beamformed data corresponding to the $l$th scanline and $p$th focal point is expressed as

$$\boldsymbol{y}(l, p) = \left[y_1(l, p), \ldots, y_N(l, p)\right], \tag{4.65}$$

where the length of the vector $\boldsymbol{y}(l, p) \in \mathbb{C}^{N \times 1}$ equals the number of array elements. If the quadrature and in-phase components of the delayed signal $y_i(l, p)$ are denoted $Q_i(l, p)$ and $I_i(l, p)$, respectively, the phase of the data in (4.65) is obtained as

$$\boldsymbol{\phi}(l, p) = \mathrm{phase}\left\{\boldsymbol{y}(l, p)\right\} = \frac{1}{2\pi}\left[\tan^{-1}\left(\frac{Q_1(l, p)}{I_1(l, p)}\right), \ldots, \tan^{-1}\left(\frac{Q_N(l, p)}{I_N(l, p)}\right)\right]. \tag{4.66}$$

The variance of the signal phase is calculated as

$$\sigma^2(l, p) = \mathrm{var}\left\{\boldsymbol{\phi}(l, p)\right\}. \tag{4.67}$$

The phase variance is calculated for each scanline and focal point, and the average value of the results is calculated according to

$$\sigma^2_{\text{avg}} = \frac{1}{LP}\sum_{l=1}^{L}\sum_{p=1}^{P} \sigma^2(l, p). \tag{4.68}$$

The sound velocity that minimizes the average phase variance is taken as the optimal value. More precisely, we have

$$\hat{c} = \underset{c = c + c_{\text{const}}}{\arg\min}\left(\sigma^2_{\text{avg}}\right). \tag{4.69}$$

It can be seen from the above minimization problem that in each iteration, the sound velocity is updated by adding a constant value $c_{\text{const}}$ to the sound velocity of the previous iteration.
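A minimal sketch of the MAPV criterion follows, assuming a hypothetical beamformer `delay_and_stack` that returns per-channel IQ data of shape (L scanlines, P focal points, N channels) for a given candidate speed; the candidate grid replaces the incremental update of (4.69) for clarity.

import numpy as np

def mapv_speed(rf_data, delay_and_stack, c_candidates):
    best_c, best_var = None, np.inf
    for c in c_candidates:
        y = delay_and_stack(rf_data, c)        # (L, P, N) complex samples
        phi = np.angle(y) / (2.0 * np.pi)      # per-channel phase, (4.66)
        avg_var = np.var(phi, axis=-1).mean()  # (4.67)-(4.68)
        if avg_var < best_var:
            best_c, best_var = c, avg_var
    return best_c                               # minimizer of (4.69)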

4.3.4 Sound Velocity Estimation in a Dual-Layered Medium Using a Deconvolution-Based Method

In Shin et al. (2010), a deconvolution-based algorithm was proposed to estimate the sound velocity of a medium consisting of two layers. The sound velocities of the upper and lower layers are denoted $c_1$ and $c_2$, respectively. Assume that the array surface is parallel to the boundary between the layers. In such a case, only a small fraction of the incident wave is reflected from the boundary of the two layers, and


most of it passes into the second layer (Shin et al. 2010). Therefore, the reflected wave can simply be ignored. According to the explanations in Sect. 1.3, the following relation holds:

$$\frac{\sin\theta_1}{c_1} = \frac{\sin\theta_2}{c_2}, \tag{4.70}$$

where $\theta_1$ denotes the incident angle in the first layer, and $\theta_2$ is the transmitted angle from the first layer into the second layer. Assume that the transducer is located at $(x_0, z_0)$, the reflector (to be imaged) is located at $(x_r, z_r)$, and the point at which the incident wave reaches the boundary is $(x_b, z_b)$; the square of the above relation can then be rewritten as

$$\frac{(x_b - x_0)^2}{c_1^2\left[(z_b - z_0)^2 + (x_b - x_0)^2\right]} = \frac{(x_r - x_b)^2}{c_2^2\left[(z_r - z_b)^2 + (x_r - x_b)^2\right]}. \tag{4.71}$$

Assuming $x_0 = z_0 = 0$ (for simplification) and manipulating (4.71), the following quartic equation is obtained:

$$a_0 + a_1 x_b + a_2 x_b^2 + a_3 x_b^3 + a_4 x_b^4 = 0. \tag{4.72}$$

Considering $c = c_1/c_2$, the coefficients in (4.72) are defined as

$$\begin{aligned} a_0 &= -c^2 z_b^2 x_r^2, \\ a_1 &= 2 x_r z_b^2 c^2, \\ a_2 &= (z_r - z_b)^2 + x_r^2 - c^2\left(z_b^2 + x_r^2\right), \\ a_3 &= -2 x_r a_4, \\ a_4 &= 1 - c^2. \end{aligned} \tag{4.73}$$

It is assumed that all of the coordinates used in (4.72) are known except the parameter $x_b$. To estimate the sound velocities $c_1$ and $c_2$, (4.72) is first solved for a unique value of $x_b$ under the constraint that the obtained value must be real and lie between the reflector and the transducer. Then, the sound velocities are estimated for each layer: the upper (first) layer is considered, and the corresponding sound velocity $c_1$ is estimated using the deconvolution algorithm. The obtained $c_1$ is then treated as the gold standard and used to obtain the sound velocity of the bottom (second) layer, $c_2$. The reader can refer to Shin et al. (2010) for detailed information about the deconvolution-based algorithm. This algorithm can be extended to more than two layers. Note that the limitation of this method is the assumption that the axial position of the boundary between the two layers is known.
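The root-finding step of (4.72)–(4.73) can be sketched with numpy.roots as follows; the geometry values are illustrative assumptions, and the real-root-between-transducer-and-reflector constraint is applied exactly as described above.

import numpy as np

def boundary_crossing(xr, zr, zb, c1, c2):
    # Coefficients of (4.73) with the transducer at the origin (x0 = z0 = 0).
    c = c1 / c2
    a0 = -c**2 * zb**2 * xr**2
    a1 = 2.0 * xr * zb**2 * c**2
    a4 = 1.0 - c**2
    a2 = (zr - zb)**2 + xr**2 - c**2 * (zb**2 + xr**2)
    a3 = -2.0 * xr * a4
    roots = np.roots([a4, a3, a2, a1, a0])   # highest-order coefficient first
    # Keep the real root lying between the transducer (x = 0) and the reflector.
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real[(real >= min(0.0, xr)) & (real <= max(0.0, xr))]

print(boundary_crossing(xr=5e-3, zr=40e-3, zb=20e-3, c1=1540.0, c2=1480.0))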


4.3.5 Local Sound Speed Estimation Using Average Sound Velocity

Recently, a local sound speed estimation method for a medium consisting of several layers was developed in Ali et al. (2021). Assume a single element of the array is located at $(x, 0)$. The total distance from the element to the focal point $(x_f, z_f)$ is denoted $d_t(x)$, which is a function of $x$. In a two-layer medium, a fraction of the distance is traversed in the first layer and the remainder in the second layer. The distance traversed in the first layer is denoted $d_1(x) = \alpha_1 d_t(x)$; similarly, the distance traversed in the second layer is denoted $d_2(x) = \alpha_2 d_t(x)$. Here, $\alpha_1$ and $\alpha_2$ are the ratios of the traversed distances in the first and second layers, respectively, with $\alpha_1 + \alpha_2 = 1$. The time it takes for the wave to travel the total distance $d_t(x)$ is

$$\tau(x) = \frac{d_1(x)}{c_1} + \frac{d_2(x)}{c_2} = d_t(x)\left(\frac{\alpha_1}{c_1} + \frac{\alpha_2}{c_2}\right), \tag{4.74}$$

where $c_1$ and $c_2$ denote the sound velocities of the first and second layers of the medium, respectively. The average sound velocity $c_{\text{avg}}$ is defined by

$$\frac{1}{c_{\text{avg}}} = \frac{\alpha_1}{c_1} + \frac{\alpha_2}{c_2}. \tag{4.75}$$

Using the average sound velocity, (4.74) is rewritten as $\tau(x) = d_t(x)/c_{\text{avg}}$, which is the ideal case. Extending the imaging medium to $L > 2$ layers, the average sound velocity is reformulated as

$$\frac{1}{c_{\text{avg},L}} = \sum_{i=1}^{L} \frac{\alpha_i}{c_i}, \tag{4.76}$$

where $\sum_i \alpha_i = 1$, and $c_{\text{avg},L}$ denotes the average sound velocity of a medium consisting of $L$ layers. In the case in which all the layers have equal thickness, we have $\alpha_i = 1/L$, and (4.76) becomes

$$\frac{1}{c_{\text{avg},L}} = \frac{1}{L}\sum_{i=1}^{L} \frac{1}{c_i}. \tag{4.77}$$

By using the obtained $c_{\text{avg},L}$, the sound velocity of each layer can be estimated. In particular, the sound velocity of the $L$th layer is estimated according to the following equation:


$$\frac{1}{c_L} = \frac{L}{c_{\text{avg},L}} - \frac{L-1}{c_{\text{avg},L-1}}. \tag{4.78}$$

As can be seen, the average sound velocity is required to perform the estimation. In order to obtain a good estimate of $c_{\text{avg},L}$, one can use different metrics such as the CF or SLSC metrics, which were discussed in Sects. 3.5.1.1 and 3.5.1.7, respectively. In particular, in Ali et al. (2021), it was proposed to compute the CF separately for a range of sound velocities (from 1460 m/s to 1620 m/s). Then, for each candidate sound velocity, the generated CF image is averaged over the lateral direction. The sound velocity that yields the maximum averaged CF value is taken as the average sound velocity, denoted $c^{\text{raw}}_{\text{avg},L}$, where $L \in [L_{\min}, \ldots, L_{\max}]$. Note that the sound velocity variations along the lateral direction are assumed to be negligible in the layered medium, and therefore, only the variations along the depth (axial) direction are estimated.

One should note that the obtained $c^{\text{raw}}_{\text{avg},L}$ is contaminated with noise, which negatively affects the local sound velocity estimation. Therefore, the noise effect should be minimized. In this regard, a smoothing procedure is applied to the average sound velocity before estimating the local sound velocity; the raw average sound velocity vector is $\boldsymbol{c}^{\text{raw}}_{\text{avg}} = \left[c^{\text{raw}}_{\text{avg},L_{\min}}, \ldots, c^{\text{raw}}_{\text{avg},L_{\max}}\right]^T \in \mathbb{C}^{(L_{\max}-L_{\min}+1)\times 1}$. The smoothed average sound velocity vector, denoted $\boldsymbol{c}_{\text{avg}} = \left[c_{\text{avg},L_{\min}}, \ldots, c_{\text{avg},L_{\max}}\right]^T \in \mathbb{C}^{(L_{\max}-L_{\min}+1)\times 1}$, is obtained from the following equation:

$$\boldsymbol{c}_{\text{avg}} = \left(\boldsymbol{I} + \lambda \boldsymbol{D}^T \boldsymbol{D}\right)^{-1} \boldsymbol{c}^{\text{raw}}_{\text{avg}}. \tag{4.79}$$

In the above equation, $\lambda$ is the regularization constant, and $\boldsymbol{D}$ is the roughening matrix, defined as (Ali et al. 2021)

$$\boldsymbol{D} = \begin{bmatrix} 1 & -3 & 3 & -1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & -3 & 3 & -1 & 0 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \ddots & \ddots & & \vdots \\ 0 & \cdots & 0 & 1 & -3 & 3 & -1 & 0 \\ 0 & \cdots & 0 & 0 & 1 & -3 & 3 & -1 \end{bmatrix}_{(L_{\max}-L_{\min}-2)\times(L_{\max}-L_{\min}+1)}. \tag{4.80}$$

Once the average sound velocity is estimated, (4.78) is used to obtain the local sound velocity of each layer in the medium.
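A minimal sketch of this smoothing-then-peeling procedure follows, assuming the raw averaged-CF speed profile `c_avg_raw` has one entry per layer starting at the first layer (so the layer index runs from 1) and that the regularization constant is a hand-picked assumption.

import numpy as np

def roughening_matrix(n):
    # Banded matrix with rows [1, -3, 3, -1], as in (4.80); requires n >= 4.
    D = np.zeros((n - 3, n))
    for i in range(n - 3):
        D[i, i:i + 4] = [1.0, -3.0, 3.0, -1.0]
    return D

def layer_speeds(c_avg_raw, lam=10.0):
    n = len(c_avg_raw)
    D = roughening_matrix(n)
    # Smooth the averaged profile as in (4.79).
    c_avg = np.linalg.solve(np.eye(n) + lam * D.T @ D, np.asarray(c_avg_raw))
    L = np.arange(1, n + 1)   # layer indices, assumed to start at 1
    # Peel off the per-layer speeds via (4.78).
    inv_cL = L / c_avg - np.concatenate(([0.0], L[:-1] / c_avg[:-1]))
    return 1.0 / inv_cL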

4.3.6 Spatial Domain Reconstruction Algorithm

Using displacements along different propagation wave paths, a forward problem can be defined. Then, by solving the inverse problem in the spatial domain, the sound


velocity at each imaging point is obtained; in other words, the sound velocity map is reconstructed. This algorithm, known as the spatial domain reconstruction method, has been used in Jaeger et al. (2015), Sanabria et al. (2018), and Rau et al. (2019) to estimate the sound velocity in PWI. The forward problem is defined as

$$\boldsymbol{\tau} = \boldsymbol{L}\boldsymbol{\sigma}, \tag{4.81}$$

where $\boldsymbol{\tau} = [\tau_1, \ldots, \tau_M]^T \in \mathbb{C}^{M \times 1}$ denotes the relative time delay vector for $M$ measurements, and $\boldsymbol{\sigma} = 1/\boldsymbol{c} \in \mathbb{C}^{N'_x N'_y \times 1}$ is the slowness vector (inverse of the sound velocity) for each cell of the considered $N'_x \times N'_y$ spatial grid. Also, $\boldsymbol{L} \in \mathbb{C}^{M \times N'_x N'_y}$ is the differential path matrix that relates the slowness to the time delay. In particular, the $(m, n)$th entry of this matrix is the cumulative round-trip path of $P$ different emissions from the transmission source through the $n$th cell, weighted by the measurement weight associated with the $m$th measurement, i.e., $\sum_{p=1}^{P} w_{m,p} d_{p,n}$. It can be observed that the matrix $\boldsymbol{L}$ depends on the imaging geometry. One should note that the differential path matrix is ill-conditioned due to the relative measurements. Therefore, in order to solve the forward problem in (4.81) and obtain the vector $\boldsymbol{\sigma}$, an $\ell_1$-norm regularization constraint is added, and the modified minimization problem is expressed as

$$\hat{\boldsymbol{\sigma}} = \underset{\boldsymbol{\sigma}}{\arg\min}\; \|\boldsymbol{\tau} - \boldsymbol{L}\boldsymbol{\sigma}\|_1 + \lambda \|\boldsymbol{D}\boldsymbol{\sigma}\|_1. \tag{4.82}$$

As can be seen from the above minimization problem, the $\ell_1$-norm constraint $\|\boldsymbol{D}\boldsymbol{\sigma}\|_1$ is added to the initial problem, where $\boldsymbol{D}$ is the first-order differentiation matrix, or equivalently, the smoothing matrix. The regularization constant $\lambda$ determines the contribution of the added constraint. As mentioned in Sect. 1.11.3, an imaging point is insonified from different angles in PWI. In particular, consider PWI performed from two different angles $\theta_1$ and $\theta_2$. In such a case, an imaging point is insonified along two different paths and emission angles. The weights of the matrix $\boldsymbol{L}$ are set by subtracting the traversed paths ($d_{p,n}$) of the propagated waves at the angles $\theta_1$ and $\theta_2$. Note that by using multiple emissions from different angles, stacking the corresponding measurements in (4.82), and solving the resulting problem, a more robust solution is obtained. This algorithm has also been used in DWI, discussed in Sect. 1.11.4, to obtain the sound velocity map of the imaging medium (Rau et al. 2021).
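Since a full $\ell_1$ solver is beyond the scope of a short sketch, the illustration below deliberately replaces both $\ell_1$ terms of (4.82) with $\ell_2$ norms (a Tikhonov surrogate), which admits a closed-form solution; $\boldsymbol{L}$, $\boldsymbol{D}$, and $\boldsymbol{\tau}$ are assumed to be supplied by the imaging geometry, and the true $\ell_1$ problem would require an iterative solver.

import numpy as np

def reconstruct_slowness(L, tau, D, lam=1e-2):
    # argmin_sigma ||tau - L sigma||_2^2 + lam ||D sigma||_2^2
    # (an l2 surrogate of (4.82), not the original l1 formulation).
    return np.linalg.solve(L.T @ L + lam * D.T @ D, L.T @ tau)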


4.3.7 Tomographic Sound Velocity Reconstruction and Estimation Based on the Eikonal Equation

In Ali et al. (2022), a tomographic sound velocity reconstruction method was developed based on the forward problem in (4.81). In this study, to obtain the solution of the forward problem, $\boldsymbol{\tau} - \boldsymbol{L}\boldsymbol{\sigma}$ is first modeled using a Normal distribution with mean $\boldsymbol{0}$ and covariance $\boldsymbol{R}$. The covariance matrix $\boldsymbol{R}$ is diagonal, with entries that depend on the path length between the element and the imaging point. Note that the variances of the observations are scaled with the propagation distance; this accounts for the SNR reduction caused by the spreading of the wave along the propagation path. Then, a prior model is considered for $\boldsymbol{\sigma}$: a Normal distribution with mean $\boldsymbol{\sigma}_0 = 1/1540$ and covariance $\boldsymbol{Q}$ is used to model the slowness vector. With these models, the problem is written in least-squares form as

$$\min_{\boldsymbol{\sigma}} \left\{ \frac{1}{2}(\boldsymbol{\tau} - \boldsymbol{L}\boldsymbol{\sigma})^T \boldsymbol{R}^{-1} (\boldsymbol{\tau} - \boldsymbol{L}\boldsymbol{\sigma}) + \frac{1}{2}(\boldsymbol{\sigma} - \boldsymbol{\sigma}_0)^T \boldsymbol{Q}^{-1} (\boldsymbol{\sigma} - \boldsymbol{\sigma}_0) \right\}, \tag{4.83}$$

which is obtained by maximizing the posterior likelihood. The solution to the above problem is

$$\boldsymbol{\sigma} = \boldsymbol{\sigma}_0 + \boldsymbol{Q}\boldsymbol{L}^T \left(\boldsymbol{L}\boldsymbol{Q}\boldsymbol{L}^T + \boldsymbol{R}\right)^{-1} (\boldsymbol{\tau} - \boldsymbol{L}\boldsymbol{\sigma}_0). \tag{4.84}$$

To obtain the local sound velocity using the above solution, it is broken into two parts:

$$\left(\boldsymbol{L}\boldsymbol{Q}\boldsymbol{L}^T + \boldsymbol{R}\right)\boldsymbol{\zeta} = \boldsymbol{\tau} - \boldsymbol{L}\boldsymbol{\sigma}_0, \tag{4.85}$$

$$\boldsymbol{\sigma} = \boldsymbol{\sigma}_0 + \boldsymbol{Q}\boldsymbol{L}^T \boldsymbol{\zeta}, \tag{4.86}$$

where the parameter $\boldsymbol{\zeta}$ is obtained from (4.85) and substituted into (4.86), updating the prior $\boldsymbol{\sigma}_0$. Once the sound velocity distribution is estimated according to (4.85) and (4.86), the phase aberration correction process should be performed. In this regard, the Eikonal equation can be used; considering the sound velocity of the $n$th imaging point, $c(x_n, z_n)$, as a known value, the travel time from the $i$th element located at $(x_i, 0)$ to the considered imaging point satisfies

$$\sqrt{\left(\frac{\partial \tau}{\partial x_n}\right)^2 + \left(\frac{\partial \tau}{\partial z_n}\right)^2} = \frac{1}{c(x_n, z_n)}, \tag{4.87}$$

subject to the boundary condition $\tau(x_i, 0) = 0$. The above equation is written for each element of the array, and the time delay corresponding to each imaging point is obtained accordingly. By solving the Eikonal equation using the fast marching


method, as suggested in Ali et al. (2022), the inhomogeneity of the imaging medium is taken into account and the phase aberration in the received signals is corrected.
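The posterior-mean update of (4.84)–(4.86) is pure linear algebra and can be sketched as below; all inputs ($\boldsymbol{L}$, $\boldsymbol{R}$, $\boldsymbol{Q}$, $\boldsymbol{\tau}$, $\boldsymbol{\sigma}_0$) are synthetic assumptions, and the subsequent Eikonal solve (e.g., by fast marching) is intentionally omitted.

import numpy as np

def posterior_slowness(L, tau, R, Q, sigma0):
    # (4.85): solve for the intermediate vector zeta.
    zeta = np.linalg.solve(L @ Q @ L.T + R, tau - L @ sigma0)
    # (4.86): update the prior mean with the data-driven correction.
    return sigma0 + Q @ L.T @ zeta

Solving (4.85) as a linear system, rather than forming the explicit inverse of (4.84), is the usual numerically preferable route for this kind of Gaussian update.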

4.3.8 Local Sound Velocity Estimation Based on Pulse-Echo Imaging

In Jakovljevic et al. (2018), an algorithm was proposed in which the local sound velocity estimation is performed using pulse-echo US imaging (in contrast to the tomographic reconstruction method of Sect. 4.3.7, which is based on through-transmission). In this technique, uniform sampling in time (not space) is assumed for the received signals. The total traversed path of the propagated wave between an element and a specific focal point is expressed as

$$d_t = \sum_{i=1}^{S} d_i. \tag{4.88}$$

Considering $T_s$ as the sampling period, we have $d_i = c_i T_s$, where $c_i$ denotes the sound velocity of the $i$th segment of the imaging region, and $S$ is the total number of segments. Note that the segment lengths $d_i$ are not necessarily equal, since the sound velocity of each segment may differ. The above equation can be rewritten as

$$c_{\text{avg}}\, S\, T_s = \sum_{i=1}^{S} c_i T_s, \qquad c_{\text{avg}} = \frac{1}{S}\sum_{i=1}^{S} c_i. \tag{4.89}$$

From the above equation, it can be concluded that the average sound velocity $c_{\text{avg}}$ equals the mean of the local sound velocities along the propagation path. Generalizing (4.89), the following system of equations is obtained:

$$\boldsymbol{c}_{\text{avg}} = \boldsymbol{A}\boldsymbol{c}_l + \boldsymbol{\varepsilon}, \tag{4.90}$$

where $\boldsymbol{c}_l$ is the vector of local sound velocities of each segment, and $\boldsymbol{\varepsilon}$ denotes the error vector. Also, $\boldsymbol{A}$ is the lower triangular matrix

$$\boldsymbol{A} = \begin{bmatrix} 1 & & & \\ \frac{1}{2} & \frac{1}{2} & & \\ \vdots & \vdots & \ddots & \\ \frac{1}{S} & \frac{1}{S} & \cdots & \frac{1}{S} \end{bmatrix}. \tag{4.91}$$


Note that the measurements corresponding to the average sound velocity are obtained from a similar wave propagation direction. Therefore, the lower triangular assumption holds for the matrix $\boldsymbol{A}$. If the measurements are obtained from different propagation directions, no explicit definition can be provided for this matrix. In order to estimate the local sound velocities using (4.90), the average sound velocity $\boldsymbol{c}_{\text{avg}}$ must be obtained first. Different algorithms can be used for this purpose. Here, the arrival times of neighboring elements are used (as suggested in Jakovljevic et al. (2018)); in particular, if the reflected echoes correspond to a speckle-generating medium, the signals of two elements $m$ and $n$ ($|m - n| \le 5$), denoted $s_{x_m}(t)$ and $s_{x_n}(t)$, respectively, are almost similar, and the following relation can be considered:

$$s_{x_m}(t) = s_{x_n}(t + \delta_{x_m, x_n}) + \varepsilon_{x_m, x_n}, \tag{4.92}$$

where $\delta_{x_m, x_n}$ and $\varepsilon_{x_m, x_n}$ denote the time delay between elements $m$ and $n$ and the noise, respectively. The F-NCC method described in Sect. 4.2.4 is used to obtain the time delay. In a homogeneous medium, the relationship between the sound velocity and the arrival time for the element located at $(x, 0)$ and the focal point $(x_f, z_f)$ is expressed as

$$t(x) = \frac{\sqrt{(x - x_f)^2 + z_f^2}}{c}, \qquad t^2(x) = p_1 x^2 + p_2 x + p_3, \tag{4.93}$$

where

$$p_1 = \frac{1}{c^2}, \qquad p_2 = -\frac{2x_f}{c^2}, \qquad p_3 = \frac{x_f^2 + z_f^2}{c^2}.$$

One can conclude from the above equation that the average sound velocity can be estimated by fitting a parabola to the square of the arrival time profile. Once the average sound velocity is obtained, the local sound velocities $\boldsymbol{c}_l$ are estimated according to (4.90). The gradient descent method, an iterative algorithm, can be used to solve this problem efficiently.
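A minimal sketch of the two-stage estimate follows, assuming measured arrival times `t` over element positions `x` for each depth, with the per-depth average speeds stacked into `c_avg`; for illustration, the gradient-descent solver of the original work is replaced here by a direct linear solve.

import numpy as np

def average_speed_from_arrivals(x, t):
    # Fit t^2(x) = p1 x^2 + p2 x + p3 and read off c = 1/sqrt(p1), per (4.93).
    p1, _, _ = np.polyfit(x, t**2, 2)
    return 1.0 / np.sqrt(p1)

def local_from_average(c_avg):
    # Invert c_avg = A c_l with the lower triangular averaging matrix of (4.91).
    S = len(c_avg)
    A = np.tril(np.ones((S, S))) / np.arange(1, S + 1)[:, None]
    return np.linalg.solve(A, np.asarray(c_avg))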


4.4 Phase Aberration Correction Based on Image Reconstruction Algorithm

In contrast to the phase aberration correction algorithms discussed so far, some algorithms do not attempt to correct the phase aberration of the received data directly, as mentioned before. More precisely, the phase aberration or time delay is not estimated in such algorithms; rather, an attempt is made to reduce the destructive effects of the phase aberration on the resulting image. The CF algorithm discussed in Sect. 3.5.1.1 and its modified versions are examples in this regard. In the following, other existing phase aberration correction algorithms that are based on a reconstruction algorithm are discussed.

4.4.1 Dual Apodization with Cross-Correlation Method

The so-called dual apodization with cross-correlation (DAX) algorithm belongs to the third category of phase aberration correction algorithms; by using it, the quality of the resulting image is improved in the presence of phase aberration (Seo and Yen 2009). In the DAX algorithm, two PSFs with two different apodizations are constructed from the received signals of the array. The PSFs have a similar main lobe but differ in their sidelobes and clutter. The cross-correlation between these two PSFs provides useful information about their main lobe as well as their clutter and sidelobes: the cross-correlation corresponding to the main lobe takes a high value, while a low value (near zero) is assigned to the clutter part of the signals. It follows that by using the calculated cross-correlation between the constructed PSFs as weighting coefficients, the contrast of the resulting image is improved. The pseudo-code of the DAX algorithm is presented in Algorithm 2. The calculated cross-correlation goes through a thresholding block: coefficients that are higher than a constant threshold $\varepsilon$ are preserved, while coefficients smaller than this threshold are replaced with $\varepsilon$. Accordingly, the weighting coefficients are generated. In order to achieve better performance and reduce the effect of artifacts that may appear in the reconstructed images of the DAX algorithm, the generated weighting coefficients can be further processed with a median filter. By applying the generated weighting coefficients to the DAS beamformed data, a reconstructed image with improved contrast is achieved. In the pseudo-code shown in Algorithm 2, the received signals of $K$ consecutive elements are summed in alternating groups of $K$ (a pattern named $K$-$K$ alternating) in order to create two groups of signals (RX1 and RX2). As the value of the alternating pattern increases, the contrast of the image around the transmit focus improves; however, some dark artifacts appear over other parts of the image. To overcome this limitation and improve the performance of the algorithm, the adaptive DAX can be used: a larger value is assigned to the alternating pattern for transmit focus regions, and as the reconstructed region moves away from the transmit focus region (i.e., closer to or further from the array), this value decreases (Seo and Yen 2009).


Algorithm 2 Processing steps of the DAX method.

Input: x(n) ∈ C^{N×1}, N, K, ε.
Output: DAX beamformed output for the nth imaging point (y_DAX(n)).
1: Initialize: i = 1, RX1(n) = 0, RX2(n) = 0
2: while i ≤ N do
3:   RX1(n) = RX1(n) + x_i(n) + ··· + x_{i+K−1}(n)
4:   RX2(n) = RX2(n) + x_{i+K}(n) + ··· + x_{i+2K−1}(n)
5:   i ← i + 2K
6: end while
7: y_DAS(n) = RX1(n) + RX2(n)
8: cc(n) = cross-correlation between RX1(n) and RX2(n)
9: if cc(n) < ε then
10:   cc(n) ← ε
11: end if
12: y_DAX(n) = y_DAS(n) × cc(n)
13: Return y_DAX(n)
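For intuition, a minimal array-wide sketch of Algorithm 2 is given below; the channel-data layout (channels × samples), the short-window zero-lag normalized cross-correlation, and the parameter values are assumptions, and the optional median filtering of the weights is omitted.

import numpy as np

def dax_weighted_output(x, K=8, eps=1e-3, win=33):
    # x: delayed channel data, shape (N_channels, N_samples).
    N = x.shape[0]
    groups = (np.arange(N) // K) % 2           # K-K alternating pattern
    rx1 = x[groups == 0].sum(axis=0)           # first apodized sum
    rx2 = x[groups == 1].sum(axis=0)           # complementary apodized sum
    das = rx1 + rx2                            # DAS output (step 7)
    # Short-window normalized cross-correlation at zero lag, per sample.
    kernel = np.ones(win)
    num = np.convolve(rx1 * rx2, kernel, mode="same")
    den = np.sqrt(np.convolve(rx1**2, kernel, mode="same") *
                  np.convolve(rx2**2, kernel, mode="same"))
    cc = num / np.maximum(den, 1e-12)
    cc = np.maximum(cc, eps)                   # thresholding (steps 9-11)
    return das * cc                            # weighted output (step 12)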

The disadvantage of the DAX algorithm is that when the phase aberration in the received data is severe, the performance of the algorithm is degraded; in addition to reduced contrast improvement, the algorithm produces cysts with unusual shapes and therefore negatively affects diagnosis. To tackle this problem, one can combine the DAX technique with one of the phase aberration correction algorithms that directly compensate for the phase aberration. In particular, in Shin and Yen (2012), it was proposed to combine the DAX algorithm with the NNCC algorithm presented in Sect. 4.2.1. As the contrast improvement is achieved in different ways in these two algorithms, this combined algorithm is expected to yield better contrast than either algorithm used alone. In this hybrid approach, the NNCC algorithm is used in the first step to maintain the coherence of the received signals associated with an inhomogeneous medium; then, the DAX algorithm is applied to the output of the first step in order to reduce the clutter and improve the image contrast. Moreover, a modified version of the DAX algorithm, known as multi-apodization with cross-correlation (MAX), can also be used to further suppress the clutter artifacts (Shin et al. 2014). In MAX, multiple pairs of apodizations (instead of a single pair) are applied to the time-delayed signals, and their corresponding NCCs are calculated separately. Then, the final weighting coefficients are obtained by averaging the resulting NCC coefficients. For a 4-4 alternating pattern, as an example, four different apodization pairs are defined in the MAX algorithm. Furthermore, the phase apodization with cross-correlation (PAX) algorithm, another modified version of DAX, has been developed in which phase shifts are applied to the time-delayed signals (Shin and Yen 2013). More precisely, a pair of complementary sinusoidal coefficients are


defined and applied to RX1 and RX2. Using the complementary sinusoidal coefficients, grating lobes (away from the main lobe) with different characteristics are generated for each signal pair. Therefore, improved contrast weighting coefficients are achieved using the NCC. The location and magnitude of the generated grating lobes can be adjusted by appropriately setting the sinusoidal coefficients, such as the spatial frequency. One should note that the output of the NCC process differs for different spatial frequency values. To further improve the algorithm's robustness against strong clutter, it was proposed in Shin et al. (2016) to average the calculated NCCs over $N_f$ different values of the spatial frequency. This algorithm, a modified version of the PAX method, is called multiphase apodization with cross-correlation (MPAX). Also, in Wang et al. (2023), it was proposed to use square-wave phase apodizations instead of sinusoidal ones. By doing so, more extended grating lobes are covered in each signal pair compared to MPAX, and therefore, the clutter level is suppressed more successfully. This algorithm, known as improved MPAX (IMPAX), also reduces the computational complexity compared to MPAX; to achieve a wide coverage of grating lobes in MPAX, more apodization pairs must be used, which negatively affects the computational complexity and data storage. In contrast, the IMPAX algorithm produces a more extended coverage of grating lobes than MPAX using a similar number of parameters. Note that for all of the DAX-based techniques, a 2D NCC over a specific number of samples along both the axial and lateral directions can be used (instead of a 1D NCC). Consequently, the artifacts arising from the random speckle are successfully suppressed, and there is no need to apply a 2D median filter after the thresholding process.

4.4.2 Singular Value Decomposition Beamformer

Recently, a singular value decomposition (SVD) beamformer has been developed to perform the phase aberration correction on plane wave and diverging wave imaging systems (Bendjador et al. 2020, 2021). In this algorithm, a compound matrix containing the beamformed images corresponding to the different emissions is constructed and decomposed to obtain the non-aberrated image. Before going through this SVD-based algorithm, the reader is encouraged to refer to Sects. 6.1.1–6.1.3 for basic information about the principles of plane wave imaging. The received signal can be written in the frequency domain as

$$\boldsymbol{S}(\omega) = \boldsymbol{H}(\omega)\boldsymbol{\Gamma}(\omega)\,{}^{t}\boldsymbol{H}(\omega)\boldsymbol{E}(\omega), \tag{4.94}$$

where ${}^{t}\boldsymbol{H}(\omega)$ represents the frequency domain propagation impulse response from the transmitter to the scatterer. Similarly, $\boldsymbol{H}(\omega)$ represents the frequency domain propagation impulse response from the scatterer to the receiver. Also, the imaging


medium is modeled by the matrix $\boldsymbol{\Gamma}(\omega)$, and $\boldsymbol{E}(\omega)$ is the Fourier transform of the emitted signal for each element. The frequency dependence of the defined matrices is omitted hereafter for notational simplicity. By defining a propagation model $\boldsymbol{H}_0$ that is as close as possible to the real propagation $\boldsymbol{H}$, the reconstructed image $\boldsymbol{I} \in \mathbb{C}^{N_x N_y \times 1}$ corresponding to the transmission vector $\boldsymbol{E}$ can be expressed as

$$\boldsymbol{I} = {}^{t}\boldsymbol{H}_0^* \boldsymbol{H}\boldsymbol{\Gamma}\,{}^{t}\boldsymbol{H}\boldsymbol{E}. \tag{4.95}$$

Note that the reconstructed image is assumed to consist of $N_x \times N_y$ pixels. If $\boldsymbol{H}_0 = \boldsymbol{H}$, then ${}^{t}\boldsymbol{H}_0^* \boldsymbol{H}$ equals an identity matrix, and the above equation simplifies to $\boldsymbol{I} = \boldsymbol{\Gamma}\,{}^{t}\boldsymbol{H}\boldsymbol{E}$; that is, the reconstructed image is equivalent to the pixel-by-pixel multiplication of $\boldsymbol{\Gamma}$ with the amplitude and phase of ${}^{t}\boldsymbol{H}\boldsymbol{E}$, which encodes the travel-time differences of the wave propagation toward the pixels. In conventional B-mode imaging, several focused emissions are performed, each of which generates one scan line of the image. The $i$th transmit vector is denoted $\boldsymbol{E}_i = {}^{t}\boldsymbol{H}_{0,p}^*$, where $\boldsymbol{H}_{0,p}$ represents the $p$th column of the propagation model $\boldsymbol{H}_0$; indeed, ${}^{t}\boldsymbol{H}_{0,p}^*$ is the transmit vector focused on the $p$th pixel. Therefore, the reconstructed image for each transmission is obtained as

$$\boldsymbol{I}_m = \mathrm{diag}(\boldsymbol{I}) = \mathrm{diag}\left({}^{t}\boldsymbol{H}_0^* \boldsymbol{H}\boldsymbol{\Gamma}\,{}^{t}\boldsymbol{H}\,{}^{t}\boldsymbol{H}_0^*\right). \tag{4.96}$$

In plane wave imaging, $M$ different emissions from different angles are used. In this case, in order to express the reconstructed image in the form of (4.96), it is necessary to define a matrix $\boldsymbol{P} \in \mathbb{C}^{N_x N_y \times M}$ that represents the change of basis between the plane wave space and the array element space. Using the matrix $\boldsymbol{P}$, the final reconstructed image is obtained as

$$\boldsymbol{I}_m = \mathrm{diag}(\boldsymbol{I}_{PW}) = \mathrm{diag}\left({}^{t}\boldsymbol{H}_0^* \boldsymbol{H}\boldsymbol{\Gamma}\,{}^{t}\boldsymbol{H}\boldsymbol{P}\,{}^{t}\boldsymbol{P}^*\,{}^{t}\boldsymbol{H}_0^*\right). \tag{4.97}$$

The key point is to define the compound matrix $\boldsymbol{R} \in \mathbb{C}^{N_x N_y \times M}$ consisting of the $M$ beamformed images. In other words, this matrix includes the beamformed data corresponding to the different emissions, whose summation yields the final reconstructed image. The compound matrix is defined as

$$\boldsymbol{R} = \left({}^{t}\boldsymbol{H}_0^* \boldsymbol{H}\boldsymbol{\Gamma}\,{}^{t}\boldsymbol{H}\boldsymbol{P}\right) \circ \left({}^{t}\boldsymbol{H}_0^* \boldsymbol{P}^*\right), \tag{4.98}$$

where $\circ$ denotes the Hadamard product. Consider the diagonal matrix $\boldsymbol{A}$ as the aberration matrix applied to the propagation model. As a result, the above equation is rewritten as

$$\tilde{\boldsymbol{R}} = \left({}^{t}\boldsymbol{H}_0^* \boldsymbol{H}\boldsymbol{\Gamma}\,{}^{t}\boldsymbol{H}_0\boldsymbol{P}\boldsymbol{A}\right) \circ \left({}^{t}\boldsymbol{H}_0^* \boldsymbol{P}^*\right) = \left(\left({}^{t}\boldsymbol{H}_0^* \boldsymbol{H}\boldsymbol{\Gamma}\,{}^{t}\boldsymbol{H}_0\boldsymbol{P}\right) \circ \left({}^{t}\boldsymbol{H}_0^* \boldsymbol{P}^*\right)\right)\boldsymbol{A}. \tag{4.99}$$


The above equation can be written as $\tilde{\boldsymbol{R}} = \boldsymbol{B}\boldsymbol{A}$, where $\boldsymbol{B} = \left({}^{t}\boldsymbol{H}_0^* \boldsymbol{H}\boldsymbol{\Gamma}\,{}^{t}\boldsymbol{H}_0\boldsymbol{P}\right) \circ \left({}^{t}\boldsymbol{H}_0^* \boldsymbol{P}^*\right)$. The $i$th column of the matrix $\boldsymbol{B}$ represents a non-aberrated image corresponding to the $i$th emission. To achieve the non-aberrated image, the coherence of the reflected echoes across the different emissions must be maximized. In this regard, we first define the angular coherence matrix $\boldsymbol{C}_{\text{angular}} = {}^{t}\boldsymbol{B}^* \boldsymbol{B}$. In the presence of phase aberration, the coherence along the anti-diagonal of the matrix $\boldsymbol{C}_{\text{angular}}$ decreases, which results in image quality degradation. With this principle in mind, the phase aberration correction proceeds as follows. To obtain the non-aberrated image, the matrix $\boldsymbol{A}$ must be estimated; the phase aberration corrected image is denoted $\boldsymbol{B}\boldsymbol{A}^*$. According to the explanations above, the angular covariance should be optimized in order to achieve the phase aberration corrected image. Therefore, the problem is to find the vector $\boldsymbol{a} = \mathrm{diag}(\boldsymbol{A}^*)$ such that the angular covariance of the image $\boldsymbol{B}\boldsymbol{a}$ is maximized. It has been shown in Bendjador et al. (2020) that the first singular vector of the matrix $\boldsymbol{B}$ maximizes the angular covariance. More precisely, the solution is obtained by computing the singular value decomposition of the matrix $\boldsymbol{B}$ as $\boldsymbol{U}\boldsymbol{S}\,{}^{t}\boldsymbol{V}^*$ and selecting the first singular vector, i.e., $\boldsymbol{a} = \mathrm{diag}(\boldsymbol{A}^*) = \boldsymbol{v}_1$. By selecting the first singular vector of the matrix $\boldsymbol{B}$, the images are filtered and their phase components are corrected using $\boldsymbol{A}^*$. This algorithm is simple, and there is no need for an iterative procedure to reach the optimum result.
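A minimal numerical sketch of the correction step follows, assuming the compound matrix (pixels × emissions) is already beamformed and stored as a complex array; in line with the text, the first right singular vector is used as the estimate of $\mathrm{diag}(\boldsymbol{A}^*)$, and compounding with it yields the coherence-maximizing image.

import numpy as np

def svd_aberration_correction(R):
    # R: complex compound matrix of shape (num_pixels, M emissions).
    U, s, Vh = np.linalg.svd(R, full_matrices=False)
    a = Vh[0].conj()   # estimate of a = diag(A*): the first right singular vector
    return R @ a       # compounded, phase-corrected image B a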

4.5 Conclusion

In this chapter, PAC techniques have been addressed. It is well-known that due to the inhomogeneity of the imaging medium, the sound velocity changes as the wave travels. Applying a beamforming process that assumes a constant sound velocity does not result in a well-focused image. In this chapter, the different PAC techniques were divided into three main categories, each of which was explained separately. Briefly, it was observed that the PAC process can be performed by estimating the signal phase or the sound velocity, or by applying modifications to the image reconstruction algorithm. One of the most commonly used methods in this regard is the NNCC method.

References

Ali R, Telichko AV, Wang H, Sukumar UK, Vilches-Moure JG, Paulmurugan R, Dahl JJ (2021) Local sound speed estimation for pulse-echo ultrasound in layered media. IEEE Trans Ultrason Ferroelectr Freq Control 69(2):500–511
Ali R, Brevett T, Hyun D, Brickson LL, Dahl JJ (2022) Distributed aberration correction techniques based on tomographic sound speed estimates. IEEE Trans Ultrason Ferroelectr Freq Control 69(5):1714–1726


Bendjador H, Décombas-Deschamps S, Burgio MD, Sartoris R, Van Beers B, Vilgrain V, Deffieux T, Tanter M (2021) The SVD beamformer with diverging waves: a proof-of-concept for fast aberration correction. Phys Med Biol 66(18):18LT01 Bendjador H, Deffieux T, Tanter M (2020) The SVD beamformer: Physical principles and application to ultrafast adaptive ultrasound. IEEE Trans Med Imaging 39(10):3100–3112 Chau G, Dahl J, Lavarello R (2018) Effects of phase aberration and phase aberration correction on the minimum variance beamformer. Ultrasonic Imaging 40(1):15–34 Cho M, Kang L, Kim J, Lee S (2009) An efficient sound speed estimation method to enhance image resolution in ultrasound imaging. Ultrasonics 49(8):774–778 Fink M, Prada C, Wu F, Cassereau D (1989) Self focusing in inhomogeneous media with time reversal acoustic mirrors. In: Proceedings of the IEEE Ultrasonics Symposium, pp 681–686. IEEE Flax S, O’Donnell M (1988) Phase-aberration correction using signals from point reflectors and diffuse scatterers: basic principles. IEEE Trans Ultrasonics Ferroelectr Freq Control 35(6):758– 767 Haun MA, Jones DL, Oz W (2004) Overdetermined least-squares aberration estimates using common-midpoint signals. IEEE Trans Med Imaging 23(10):1205–1220 Ivancevich NM, Dahl JJ, Smith SW (2009) Comparison of 3-D multi-lag cross-correlation and speckle brightness aberration correction algorithms on static and moving targets. IEEE Trans Ultrasonics Ferroelectr Freq Control 56(10):2157–2166 Jaeger M, Held G, Peeters S, Preisser S, Grünig M, Frenz M (2015) Computed ultrasound tomography in echo mode for imaging speed of sound using pulse-echo sonography: proof of principle. Ultrasound Med Biol 41(1):235–250 Jakovljevic M, Hsieh S, Ali R, Chau Loo Kung G, Hyun D, Dahl JJ (2018) Local speed of sound estimation in tissue using pulse-echo ultrasound: model-based approach. J Acoust Soc Am 144(1):254–266 Karaman M, Atalar A, Koymen H, O’Donnell M (1993) A phase aberration correction method for ultrasound imaging. IEEE Trans Ultrasonics Ferroelectr Freq Control 40(4):275–282 Krishnan S, Rigby K, O’donnell M (1997) Improved estimation of phase aberration profiles. IEEE Trans Ultrasonics Ferroelect Freq Control 44(3):701–713 Krishnan S, Li P-C, O’Donnell M (1996) Adaptive compensation of phase and magnitude aberrations. IEEE Trans Ultrasonics Ferroelectr Freq Control 43(1):44–55 Li Y (1997) Phase aberration correction using near-field signal redundancy. I. Principles [ultrasound medical imaging]. IEEE Trans Ultrasonics Ferroelectr Freq Control 44(2):355–371 Li P-C, Flax SW, Ebbini ES, O’Donnell M (1993) Blocked element compensation in phased array imaging. IEEE Trans Ultrasonics Ferroelectr Freq Control 40(4):283–292 Liu D-L, Waag RC (1994) Time-shift compensation of ultrasonic pulse focus degradation using least-mean-square error estimates of arrival time. J Acoust Soc Am 95(1):542–555 Måsøy S-E, Angelsen B, Varslot T (2004) Estimation of ultrasound wave aberration with signals from random scatterers. J Acoust Soc Am 115(6):2998–3009 Monjazebi D, Xu Y (2021) Estimating phase aberration from noisy radiofrequency data of a single frame of synthetic aperture ultrasound image. arXiv:2106.11094 Nock L, Trahey GE, Smith SW (1989) Phase aberration correction in medical ultrasound using speckle brightness as a quality factor. J Acoust Soc Am 85(5):1819–1833 O’donnell M, Flax S (1988) Phase aberration measurements in medical ultrasound: human studies. 
Ultrasonic imaging, vol 10, no 1, pp 1–11 Peralta L, Gomez A, Hajnal JV, Eckersley RJ (2019) Coherent multi-transducer ultrasound imaging in the presence of aberration. In: Medical imaging 2019: ultrasonic imaging and tomography, vol 10955. SPIE, pp 152–161 Pinton GF, Trahey GE (2006) Continuous delay estimation with polynomial splines. IEEE Trans Ultrasonics Ferroelectr Freq Control 53(11):2026–2035 Priestley MB (1981) Spectral analysis and time series: univariate series, vol 1. Academic Press


Rau R, Schweizer D, Vishnevskiy V, Goksel O (2019) Ultrasound aberration correction based on local speed-of-sound map estimation. In: 2019 IEEE international ultrasonics symposium (IUS). IEEE, pp 2003–2006 Rau R, Schweizer D, Vishnevskiy V, Goksel O (2021) Speed-of-sound imaging using diverging waves. Int J Comput Assist Radiol Surg 16(7):1201–1211 Rigby KW (2000) Real-time correction of beamforming time delay errors in abdominal ultrasound imaging. In: Medical imaging 2000: ultrasonic imaging and signal processing, vol 3982. International Society for Optics and Photonics, pp 342–353 Sanabria SJ, Ozkan E, Rominger M, Goksel O (2018) Spatial domain reconstruction for imaging speed-of-sound with pulse-echo ultrasound: simulation and in vivo study. Phys Med Biol 63(21):215015 Seo CH, Yen JT (2009) Evaluating the robustness of dual apodization with cross-correlation. IEEE Trans Ultrasonics Ferroelectr Freq Control 56(2):291–303 Shin J, Yen JT (2013) Clutter suppression using phase apodization with cross-correlation in ultrasound imaging. In: 2013 IEEE international ultrasonics symposium (IUS). IEEE, pp 793–796 Shin J, Chen Y, Nguyen M, Yen JT (2014) Robust ultrasonic reverberation clutter suppression using multi-apodization with cross-correlation. In: 2014 IEEE international ultrasonics symposium. IEEE, pp 543–546 Shin J, Yen JT (2012) Synergistic enhancements of ultrasound image contrast with a combination of phase aberration correction and dual apodization with cross-correlation. IEEE Trans Ultrasonics Ferroelectr Freq Control 59(9):2089–2101 Shin H-C, Prager R, Gomersall H, Kingsbury N, Treece G, Gee A (2010) Estimation of speed of sound in dual-layered media using medical ultrasound image deconvolution. Ultrasonics 50(7):716–725 Shin J, Chen Y, Malhi H, Yen JT (2016) Ultrasonic reverberation clutter suppression using multiphase apodization with cross correlation. IEEE Trans Ultrasonics Ferroelectr Freq Control 63(11):1947–1956 Silverstein SD, Ceperley DP (2003) Autofocusing in medical ultrasound: the scaled covariance matrix algorithm. IEEE Trans Ultrasonics Ferroelectr Freq Control 50(7):795–804 van der Meulen P, Coutino M, Kruizinga P, Bosch JG, Leus G (2021) Blind calibration for arrays with an aberration layer in ultrasound imaging. In: 2020 28th European signal processing conference (EUSIPCO). IEEE, pp 1269–1273 Viola F, Walker WF (2005) A spline-based algorithm for continuous time-delay estimation using sampled data. IEEE Trans Ultrasonics Ferroelectr Freq Control 52(1):80–93 Wang P, Cuhen J, Shen Y, Li Q, Tong L, Li X (2023) Low complexity adaptive ultrasound image beamformer combined with improved multiphase apodization with cross-correlation. Ultrasonics 107084 Yoon C, Lee Y, Chang JH, Song T-K, Yoo Y (2011) In vitro estimation of mean sound speed based on minimum average phase variance in medical ultrasound imaging. Ultrasonics 51(7):795–802 Yoon C, Seo H, Lee Y, Yoo Y, Song T-K, Chang JH (2012) Optimal sound speed estimation using modified nonlinear anisotropic diffusion to improve spatial resolution in ultrasound imaging. IEEE Trans Ultrasonics Ferroelectr Freq Control 59(5):905–914 Zahiri-Azar R, Salcudean SE (2008) Time-delay estimation in ultrasound echo signals using individual sample tracking. IEEE Trans Ultrasonics Ferroelectr Freq Control 55(12):2640–2650 Ziksari MS, Asl BM (2015) Phase aberration correction in minimum variance beamforming of ultrasound imaging. In: 2015 23rd Iranian conference on electrical engineering. 
IEEE, pp 23–26 Ziksari MS, Asl BM (2017) Combined phase screen aberration correction and minimum variance beamforming in medical ultrasound. Ultrasonics 75:71–79

Chapter 5

Harmonic Imaging and Beamforming

Abstract This chapter is dedicated to harmonic imaging. Briefly, the received echo signals in ultrasound imaging include the fundamental and higher-order frequency components; by using the higher-order frequency components, an image with better resolution can be reconstructed. There are various methods for extracting the higher-order frequency components from the received signal, which are explained in this chapter. The chapter concludes with image reconstruction algorithms based on harmonic imaging. Keywords Harmonic imaging · Frequency component · Dual frequency · Resolution improvement · Pulse inversion

5.1 Introduction

During wave propagation through the tissue, harmonic components with frequencies equal to integer multiples of the frequency of the transmitted wave (known as the fundamental frequency) are generated. The reason is that as the wave propagates in the medium, it constantly vibrates the tissue (expansion and contraction), which leads to wavefront distortion and, consequently, harmonic generation. More precisely, consider the propagation speed in the medium as a function of the particle velocity $u$ (Averkiou 2000):

$$c = c_0 + \beta u,$$

where $\beta$ denotes the nonlinearity coefficient, and $c_0$ is a constant sound velocity such as 1540 m/s. This nonlinearity distorts the propagated wave along the propagation path; the positive peak of the wave, corresponding to the parts of the imaging medium with a higher value of $u$ (or equivalently, the contracted areas), speeds up relative to the negative peak. This can be better understood by comparing Fig. 5.1a and b, which show schematics of the wave propagation for the fundamental and harmonic components,


Fig. 5.1 The schematic of (a) the fundamental and (b) the generated harmonic waves

respectively. This phenomenon results in energy shifting from the fundamental component to the higher ones, i.e., harmonic generation. If the fundamental frequency is denoted $f_0$, the frequency components of the generated harmonics are $2f_0$, $3f_0$, $4f_0$, and so on. From the above explanations, one can conclude that harmonics are generated during wave propagation due to the non-linear behavior of the imaging medium. Inspired by the harmonic components generated during wave propagation, harmonic imaging was developed, in which the backscattered echoes with the frequency component $2f_0$ (or higher) are used to construct the image. By using this technique, one can take advantage of the improved resolution and contrast of the resulting image. The advantage of harmonic imaging was initially observed using contrast agents (microbubbles), which are widely used in vascular imaging; this method is known as contrast harmonic imaging, in which microbubbles are used to perform the US imaging. As the injected microbubbles in the imaged vessel are insonified, they constantly expand and contract. Due to this phenomenon, as well as the impedance difference between the microbubbles and the surrounding tissue, harmonic components are generated that can be processed to identify the microbubbles. Note that identification of the microbubbles is helpful in diagnosis and treatment. From a practical point of view, only the lowest harmonic frequency, i.e., $2f_0$, is usually used in addition to the fundamental frequency. This is due to the limited frequency range of the receiving elements. Also, a high-frequency signal is rapidly attenuated during propagation in the tissue, as discussed in Sect. 1.4; as a result, the penetration depth decreases dramatically for higher frequency components. Furthermore, one should note that the amplitude of the harmonic component is proportional to the square of the amplitude of the fundamental component; that is, a 5 dB reduction in the intensity of the fundamental frequency beam results in a 10 dB reduction in the intensity of the second harmonic signal. As the frequency of the generated harmonics increases, the amplitude of the corresponding signals is further suppressed, which leads to SNR degradation. Therefore, harmonic imaging is performed using the received signal with the frequency $2f_0$.


Nonetheless, higher harmonic components have occasionally been used to further improve image quality; in Bouakaz and De Jong (2003), the harmonic frequencies up to the fifth harmonic component are combined into a single component, the result of which is known as the superharmonic component. This combination is performed using a wide-band filter. It has been shown that by using the superharmonic component, supplementary information can be obtained from the imaged tissue. Note that the signal energy decreases as the harmonic frequency increases, which negatively affects the result; however, because superharmonic imaging combines the harmonic components, the resulting signal energy is increased. Harmonic imaging has several advantages over conventional imaging, in which the fundamental signal is used to construct the image (Tranquart et al. 1999; Zhao et al. 2003; Szopinski et al. 2003); in particular, by using this technique in cardiac imaging, the noise and clutter of the resulting image are reduced, and therefore a clearer image of the heart is obtained. Harmonic imaging is also well suited to imaging inhomogeneous tissues, since it is less affected by inhomogeneity and, consequently, less defocused. Furthermore, it performs well in identifying the edges of the imaged medium. As mentioned earlier, the amplitude of the generated harmonic is reduced compared to the fundamental signal; one can conclude that the sidelobes of the nonlinear beams (which generate the harmonics) are much lower than those of the linear ones (which generate the fundamental signal). Therefore, another advantage of harmonic imaging is CNR improvement. Finally, as a higher frequency component yields better resolution, harmonic imaging also improves the lateral resolution in addition to the contrast. To perform harmonic imaging, it is necessary to remove the fundamental frequency component and process the harmonic component of the received signals. To this end, one can use a band-pass filter to filter out the fundamental signal. It is well known that as the frequency bandwidth of the transmitted signal increases, the axial resolution of the resulting image improves. However, the overlap between the fundamental and harmonic components of the received signal also increases with the bandwidth of the transmitted signal; consequently, the fundamental and harmonic components cannot be well separated, which degrades the performance of harmonic imaging. To tackle this problem, a narrower bandwidth must be used, which degrades the axial resolution. Therefore, one drawback of harmonic imaging is axial resolution degradation compared to conventional imaging. Also, in comparison with conventional imaging, the penetration depth is decreased in harmonic imaging, since the amplitude of the received harmonic signal is suppressed. Although the penetration depth is decreased, the tissue structure is better delineated thanks to the use of a higher frequency component. To overcome the former limitation, different techniques, such as the pulse inversion (PI) algorithm, can be used to extract the pure harmonic component even when a wide frequency bandwidth is used. Also, for the latter limitation, synthetic aperture sequential beamforming (SASB) can be used. Each of these solutions is discussed in detail in the following.

5.2 Harmonic Signal Extraction Methods

The harmonic component of the received signal can be extracted using a high-pass or band-pass filter, as mentioned earlier. However, the overlap between the fundamental and harmonic components prevents the filtering process from successfully removing the fundamental component from the received signal. Generally, the filtering method is efficient when the frequency components do not overlap, a condition that cannot be met without compromising the axial resolution. Therefore, other remedies are needed to tackle this problem. The most common methods are the following.

5.2.1 Pulse Inversion Algorithm

The pulse inversion (PI) algorithm is one of the most commonly used techniques in harmonic imaging, in which the fundamental frequency component is removed from the received signal and the harmonic component is retained for further processing. In this algorithm, two wide-band pulses that are $180^\circ$ apart in phase are used for transmission. The fundamental components of the received signals corresponding to these transmitted pulses are similar in amplitude with a $180^\circ$ phase difference. In contrast, the harmonic components of the two received signals, which are a consequence of the nonlinearity of the imaging tissue, do not undergo this phase inversion. Therefore, by summing the received signals corresponding to these two transmitted pulses, the fundamental component (and the other odd frequency components) is removed, while the second harmonic component (and the other even harmonic components) is retained. In this way, the pure harmonic component can be extracted even if a wide-band pulse is used; consequently, the PI algorithm overcomes the degraded axial resolution of harmonic imaging. The drawback of the PI technique is that tissue motion between two consecutive transmissions negatively affects the result. Also, the frame rate is decreased compared to the simple filtering method. Although the PI algorithm filters out the fundamental frequency component of the signal, the signal noise is not removed and is even amplified in some cases. To tackle this problem, a quadratic kernel can be used in combination with the PI algorithm to further suppress the noise level of the combined signal while the second harmonic component is retained (Al-Mistarihi and Ebbini 2005).
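To make the cancellation mechanism concrete, the following minimal sketch propagates two phase-inverted pulses through a toy memoryless quadratic nonlinearity and sums the echoes; the model $x + a x^2$ and all parameters are illustrative assumptions, not taken from the source.

```python
import numpy as np

# Minimal sketch of pulse inversion: propagate two phase-inverted pulses
# through a toy memoryless quadratic nonlinearity and sum the echoes.
# The model y = x + a*x**2 and every parameter here are illustrative.
fs, f0, a = 50e6, 2e6, 0.3          # sampling rate, fundamental, NL coefficient
t = np.arange(0, 4e-6, 1 / fs)
p1 = np.hanning(t.size) * np.sin(2 * np.pi * f0 * t)   # transmitted pulse
p2 = -p1                                               # 180-degree inverted pulse

nonlinear = lambda x: x + a * x**2                     # stand-in for tissue
s_pi = nonlinear(p1) + nonlinear(p2)                   # PI sum: odd terms cancel

f = np.fft.rfftfreq(t.size, 1 / fs)
spec = lambda x: np.abs(np.fft.rfft(x))
print(f"f0 bin of s1:      {spec(nonlinear(p1))[np.argmin(abs(f - f0))]:.2f}")
print(f"f0 bin of PI sum:  {spec(s_pi)[np.argmin(abs(f - f0))]:.2e}")   # ~0
print(f"2f0 bin of PI sum: {spec(s_pi)[np.argmin(abs(f - 2*f0))]:.2f}")
```

The sum equals $2 a p_1^2(t)$, whose spectrum contains only DC and second-harmonic content, which is why the fundamental bin collapses while the $2 f_0$ bin survives.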


5.2.2 Amplitude Modulation Algorithm

Another way to extract the harmonic component from the received signal is the so-called amplitude modulation algorithm. In this algorithm, two pulses that differ in amplitude are used for transmission (Mor-Avi et al. 2001); consider the amplitude of the first pulse to be $\alpha > 1$ times that of the second one. Denoting the received signals associated with the first and second pulses as $s_1$ and $s_2$, respectively, the echo of the second pulse is scaled by $\alpha$ and subtracted from that of the first, i.e., $s_1 - \alpha s_2$. Consequently, the fundamental component is canceled out of the received signal, and the harmonic components, which are related to the nonlinearity of the tissue, are extracted. With the amplitude modulation algorithm, some odd harmonic components are also preserved in addition to the second harmonic (an even harmonic component). The limitations of this technique are similar to those of the PI method. The combination of the PI and amplitude modulation techniques has also been developed in order to enhance the second and third harmonic components while the fundamental component is suppressed from the received signal (Eckersley et al. 2005).
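A similarly minimal sketch illustrates the amplitude modulation idea; the quadratic nonlinearity and all parameters are, again, illustrative assumptions:

```python
import numpy as np

# Toy amplitude modulation: transmit a full- and a reduced-amplitude pulse,
# then form s1 - alpha*s2 to cancel the linear (fundamental) part. The
# nonlinearity y = x + a*x**2 stands in for the tissue response.
fs, f0, alpha, a = 50e6, 2e6, 2.0, 0.3
t = np.arange(0, 4e-6, 1 / fs)
p = np.hanning(t.size) * np.sin(2 * np.pi * f0 * t)

nonlinear = lambda x: x + a * x**2
s1 = nonlinear(p)                  # echo of the full-amplitude pulse
s2 = nonlinear(p / alpha)          # echo of the reduced-amplitude pulse
residual = s1 - alpha * s2         # linear terms cancel exactly
# What remains is the quadratic term a*(1 - 1/alpha)*p**2:
print("deviation from predicted harmonic residual:",
      np.max(np.abs(residual - a * (1 - 1 / alpha) * p**2)))   # ~0
```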

5.2.3 Harmonic Beamforming

In order to extract the harmonic component and remove the fundamental component of the signal, the harmonic beamforming technique can be used, in which a modification is applied to conventional beamforming (Trucco and Bertora 2003, 2006). In this method, a single transmission pulse is used. As shown previously, the received signals are aligned by applying time delays to them, and the beamformed data is constructed by summing the time-delayed signals as below:

$$y(t) = \sum_{i=1}^{N} w_i\, s_i(t - \Delta t_i). \qquad (5.1)$$

In the above equation, if the weighting coefficients are set to a uniform function, the resulting beamformer is the non-adaptive DAS algorithm presented in (2.14). In the harmonic beamforming algorithm, the term $\Delta t_{H_i} = i/(2 f_0)$ is added to the calculated time delay, and the modified beamformed data is obtained as below:

$$y(t) = \sum_{i=1}^{N} w_i\, s_i(t - \Delta t_i - \Delta t_{H_i}). \qquad (5.2)$$


By delaying the signals in this way, the received signals at frequency $f_0$ corresponding to the $i$th and $(i+1)$th elements are time-shifted by half a period; in other words, the $f_0$ signals are phase-shifted by $180^\circ$, while the $2 f_0$ signals of the same elements are shifted by a full period. As a result, summing the modified delayed signals filters the fundamental component out of the beamformed data, while the signals at frequency $2 f_0$ remain aligned; consequently, the summation maintains the second harmonic signal. When a wide-band signal is used, the performance of the harmonic beamforming method degrades due to the overlap between the $f_0$ and $2 f_0$ components. In such a case, the term $\Delta t_{H_i}$ applied in (5.2) is changed to one of the following three forms (Trucco and Bertora 2003):

$$\Delta t_{H_i} = \mathrm{rem}(i, 2)\,\frac{1}{2 f_0},$$

$$\Delta t_{H_i} = \left[\mathrm{rem}(i + 1, 3) - 1\right]\frac{1}{2 f_0},$$

$$\Delta t_{H_i} = (-1)^{\frac{i+1}{2}}\,\mathrm{rem}(i, 2)\,\frac{1}{2 f_0}. \qquad (5.3)$$

In the above equation, "rem" stands for the remainder. Using these forms, the sign of the delay alternates, which prevents the fundamental component from leaking into the second harmonic component of the signal. However, the drawback of this technique is that the fundamental frequency component of the signal is not reduced at the beginning and the end of the pulse. To deal with this problem and partially overcome it, one can use a smooth pulse.
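The following sketch illustrates (5.2) on synthetic narrowband element signals, assuming the focusing delays $\Delta t_i$ have already been applied; the array size, frequencies, and harmonic amplitude are illustrative:

```python
import numpy as np

# Sketch of harmonic beamforming, eq. (5.2): on top of the usual focusing
# delays, element i receives an extra delay i/(2*f0), so the f0 components of
# neighbouring elements end up half a period apart and cancel in the sum,
# while the 2*f0 components stay aligned. Element signals are synthetic CW
# tones, assumed already focused (Delta_t_i applied).
fs, f0, N = 100e6, 2e6, 16
t = np.arange(0, 8e-6, 1 / fs)

def element_signal(delay):
    # Each focused element sees the same f0 + 2*f0 mixture, here time-shifted
    # by the extra harmonic delay only.
    return (np.sin(2 * np.pi * f0 * (t - delay))
            + 0.2 * np.sin(2 * np.pi * 2 * f0 * (t - delay)))

y = sum(element_signal(i / (2 * f0)) for i in range(1, N + 1)) / N

f = np.fft.rfftfreq(t.size, 1 / fs)
spec = np.abs(np.fft.rfft(y))
print("f0 bin:  ", spec[np.argmin(abs(f - f0))])        # ~0: fundamental cancelled
print("2f0 bin: ", spec[np.argmin(abs(f - 2 * f0))])    # preserved second harmonic
```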

5.2.4 Multitone Nonlinear Coding

In order to take advantage of the entire bandwidth of the transducer, one can use a dual-frequency transmit pulse in harmonic imaging; in Nowicki et al. (2007), it was proposed to use transmission pulses with the dual frequencies $f_1$ and $f_2 = 2 f_1$ to illuminate the imaging medium. In this regard, four different dual-frequency pulses can be defined as below:

$$P_{pp} = \sin(2\pi f_1 t) + \sin(2\pi f_2 t),$$
$$P_{pn} = \sin(2\pi f_1 t) - \sin(2\pi f_2 t),$$
$$P_{np} = -\sin(2\pi f_1 t) + \sin(2\pi f_2 t),$$
$$P_{nn} = -\sin(2\pi f_1 t) - \sin(2\pi f_2 t),$$


where the dependence of $P$ on time and on the spatial position of the propagation source is omitted from the notation for brevity. Among the above-defined pulses, two are selected and transmitted consecutively toward the medium, and their corresponding backscattered echoes are combined. This technique, known as multitone nonlinear coding (MNC), is similar to the PI algorithm, with the difference that the PI method uses only a single pair of transmit pulses (with a phase shift of $180^\circ$), whereas six different pairs of transmit pulses can be formed in the MNC method:

$$P_{ppnn} = P_{pp} + P_{nn}, \quad P_{pnnp} = P_{pn} + P_{np},$$
$$P_{ppnp} = P_{pp} + P_{np}, \quad P_{pnnn} = P_{pn} + P_{nn},$$
$$P_{pppn} = P_{pp} + P_{pn}, \quad P_{nnnp} = P_{nn} + P_{np}.$$

When the two pairs $P_{ppnn}$ and $P_{pnnp}$ are selected for imaging from the above-defined pairs, the MNC method resembles the PI algorithm in that the two transmitted pulses have opposite polarities; these two cases are therefore named the multitone PI (MPI) algorithm. Using these two cases, both the second and third harmonic components of the received signal are extracted. It has been shown that the MNC algorithm improves the signal gain compared to the PI method.

5.2.5 Coded Excitation Method Using Chirp Pulse

Coded excitation improves the SNR in harmonic imaging. In this technique, a weighted chirp pulse, whose frequency changes linearly with time, is used for transmission (Song et al. 2010). Note that other pulses (e.g., Golay codes; Shen and Shi 2011) can also be used as the transmit pulse. The advantage of the chirp pulse is that its bandwidth is tunable. The chirp pulse is formulated as below:

$$c(t) = w(t) \cos\left[2\pi\left(f_0 t + \frac{\alpha}{2} t^2\right)\right], \qquad (5.4)$$

where $\alpha = B/T$ denotes the chirp rate, determined by the bandwidth $B$ and the time duration $T$ of the waveform, and $w(t)$ is the weighting function. The weighting is applied in order to reduce the sidelobes; the selected weighting affects the sidelobe level as well as the spatial resolution (Tanabe et al. 2011). As mentioned before, the received signal includes the fundamental ($s_f$), second harmonic ($s_{2f}$), and higher harmonic ($s_{>2f}$) components, and can be expressed as below:

$$s(t) = s_f(t) + s_{2f}(t) + s_{>2f}(t). \qquad (5.5)$$

In the coded excitation method, a compression process is performed to improve the SNR of the signal, and the harmonic component is decoded selectively. The compression is performed by convolving a second harmonic decoding function, denoted $c_{2f}(t)$, with the received signal. The second harmonic decoding function is defined as below:

$$c_{2f}(t) = w(t) \cos\left[2\pi\left((2 f_0) t + \alpha t^2\right)\right]. \qquad (5.6)$$

The compression is then performed according to the following equation:

$$s_c(t) = \left[s_f(t) + s_{2f}(t) + s_{>2f}(t)\right] * c_{2f}(t) = \left[s_f(t) * c_{2f}(t)\right] + \left[s_{2f}(t) * c_{2f}(t)\right] + \left[s_{>2f}(t) * c_{2f}(t)\right]. \qquad (5.7)$$

If the function $c_{2f}(t)$ is independent of the fundamental and the higher harmonic ($>2$) components of the received signal, in other words, if there is no overlap between the second harmonic component and the others, the above convolution can be approximated as $s_{2f}(t) * c_{2f}(t)$. However, if a wide-band transmit pulse is used, this approximation does not hold, and image artifacts increase. In such a case, one of the second harmonic extraction methods, such as filtering or the PI algorithm, can be used (Song et al. 2010). In particular, when PI is used, the two transmit pulses are first defined as below:

$$c_{t,PI_1}(t) = w(t) \cos\left[2\pi\left(f_0 t + \frac{\alpha}{2} t^2\right)\right],$$
$$c_{t,PI_2}(t) = -w(t) \cos\left[2\pi\left(f_0 t + \frac{\alpha}{2} t^2\right)\right], \qquad (5.8)$$

and their corresponding received signals are recorded, expressed as below:

$$s_{PI_1}(t) = s_{PI,f}(t) + s_{PI,2f}(t) + s_{PI,>2f}(t),$$
$$s_{PI_2}(t) = -s_{PI,f}(t) + s_{PI,2f}(t) - s_{PI,>2f}(t). \qquad (5.9)$$

Summing the received signals formulated above yields $s_{PI}(t) \approx 2 s_{PI,2f}$, in which the second harmonic component is successfully separated from the other components. Performing the compression process on the output of the PI algorithm, we have

$$s_{PI,c} = 2 s_{PI,2f} * c_{2f}(t), \qquad (5.10)$$


which results in high-resolution and high-SNR harmonic imaging. Note that the harmonic components generated from a chirp transmit pulse are also chirps; one of the properties of the chirp pulse is that it maintains the coded phase relationship during propagation. Consequently, it is possible to compress the harmonic components of the received signal corresponding to a chirp transmit pulse.
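The sketch below combines the PI extraction of (5.8)-(5.9) with the compression of (5.10), using a toy quadratic nonlinearity as a stand-in for the tissue; the chirp parameters and nonlinearity coefficient are illustrative assumptions:

```python
import numpy as np

# Sketch of coded harmonic imaging, eqs. (5.8)-(5.10): two inverted chirp
# transmissions, a toy quadratic nonlinearity, PI summation to isolate the
# second harmonic, then compression with the decoding chirp of eq. (5.6).
fs, f0, B, T, a = 50e6, 2e6, 1e6, 10e-6, 0.2
alpha = B / T                                   # chirp rate
t = np.arange(0, T, 1 / fs)
w = np.hanning(t.size)                          # sidelobe-reducing weighting

chirp = w * np.cos(2 * np.pi * (f0 * t + 0.5 * alpha * t**2))
nonlinear = lambda x: x + a * x**2
s_pi = nonlinear(chirp) + nonlinear(-chirp)     # = 2*a*chirp**2: 2nd harmonic

c2f = w * np.cos(2 * np.pi * (2 * f0 * t + alpha * t**2))   # eq. (5.6)
compressed = np.convolve(s_pi, c2f, mode="same")
# The 10-us coded pulse collapses into a short high-SNR peak whose width
# sets the axial resolution.
peak = np.argmax(np.abs(compressed))
print("peak position (us):", t[peak] * 1e6)
print("peak-to-mean ratio:",
      np.abs(compressed[peak]) / np.mean(np.abs(compressed)))
```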

5.2.6 Coded Excitation Method Using Multiple Chirp Pulses

In Tanabe et al. (2011), it was proposed to use multiple chirp pulses for transmission. In this technique, the spectra of the chirp pulses are selected such that the overlap between the fundamental and harmonic components of the received signals is prevented. Therefore, the generated harmonics can be separated from the fundamental component by simply filtering the received signal, and consequently, harmonic imaging becomes less sensitive to tissue motion than with the PI method. In this algorithm, $N_c$ chirp pulses with different starting frequencies are used:

$$c_i(t) = w_i(t) \cos\left[2\pi\left(f_i t + \frac{\alpha}{2} t^2\right)\right], \quad 0 \le t \le T/N_c, \qquad (5.11)$$

where $f_i$ denotes the starting frequency of the $i$th pulse, obtained as $f_i = f_0 + \frac{B(i-1)}{N_c}$, and $T/N_c$ is the time duration of each emission. Each pulse is emitted separately, and its corresponding harmonic component is extracted using a band-pass filter. Then, the output of each filtered received signal is compressed. In this regard, for each transmit pulse, the second harmonic decoding function is defined as below:

$$c_{2f_i}(t) = w_i(t) \cos\left[2\pi\left((2 f_i) t + \alpha t^2\right)\right], \quad 0 \le t \le T/N_c. \qquad (5.12)$$

Then, the compressed signal $s_{mc}$ is finally obtained according to the following equation:

$$s_{mc}(t) = \sum_{i=1}^{N_c} \left[s_i(t) * c_{2f_i}(t)\right]. \qquad (5.13)$$

Consequently, data with improved spatial resolution and SNR are achieved.

5.2.7 Coded Excitation Method Using Dual-Frequency Chirp Pulses

On the one hand, it is well known that by using a dual-frequency transmit signal, the entire bandwidth of the element can be used. On the other hand, it has been observed that coded excitation using a chirp pulse leads to SNR improvement. In order to achieve both of these advantages simultaneously, a dual-frequency chirp excitation pulse can be used (Shen and Lin 2012); consider a dual-frequency chirp pulse with frequency components $f_0$ and $2 f_0$ as below:

$$c_{UU11}(t) = \cos\left[2\pi\left(f_0 t + \frac{\alpha}{2} t^2\right)\right] + \cos\left[2\pi\left(2 f_0 t + \frac{\alpha}{2} t^2\right)\right]. \qquad (5.14)$$

In the signal expressed above, the chirp frequency increases from $f_0 - B/2$ to $f_0 + B/2$ in the $f_0$ chirp, and from $2 f_0 - B/2$ to $2 f_0 + B/2$ in the $2 f_0$ chirp; the chirp pulse is said to be up-sweeping. Accordingly, the index UU11 is assigned to the dual-frequency chirp pulse defined in (5.14). As mentioned before, the second harmonic component of the received signal is proportional to the square of its fundamental component. Consider the linear and nonlinear second- and third-order components of the received signal as below:

$$s(t) = \sum_n a_n x^n(t) = a_1 x(t) + a_2 x^2(t) + a_3 x^3(t).$$

Now, considering the nonlinear second-order component of the signal, the square of the defined $c_{UU11}(t)$ signal is obtained as below:

$$c^2_{UU11}(t) = 1 + \cos\left[2\pi(f_0 t)\right] + 0.5 \cos\left[2\pi(2 f_0 t + \alpha t^2)\right] + \cos\left[2\pi(3 f_0 t + \alpha t^2)\right] + 0.5 \cos\left[2\pi(4 f_0 t + \alpha t^2)\right]. \qquad (5.15)$$

It can be seen that the second harmonic component of the signal is obtained as an up-sweeping chirp, and therefore the compression process can be performed on it in order to preserve the axial resolution. However, the $f_0$ frequency component, which corresponds to the frequency difference between the two emitted pulses, appears as a single-frequency component; therefore, it cannot be compressed, and it leads to axial resolution degradation in $f_0$ harmonic imaging. To overcome this problem, it is necessary to select the $2 f_0$ chirp such that the generated $f_0$ component is not a single frequency; rather, it should have a bandwidth similar to that of the $2 f_0$ harmonic component. In this regard, a new dual-frequency chirp signal, denoted $c_{UU13}(t)$, is defined as below:

$$c_{UU13}(t) = \cos\left[2\pi\left(f_0 t + \frac{\alpha}{2} t^2\right)\right] + \cos\left[2\pi\left(2 f_0 t + \frac{3\alpha}{2} t^2\right)\right]. \qquad (5.16)$$

Comparing (5.14) and (5.16), it can be noted that the bandwidth of the $2 f_0$ chirp is tripled relative to the $f_0$ chirp. Similarly, the square of the nonlinear second-order component of the signal, with (5.16) taken into consideration, is

$$c^2_{UU13}(t) = 1 + \cos\left[2\pi(f_0 t + \alpha t^2)\right] + 0.5 \cos\left[2\pi(2 f_0 t + \alpha t^2)\right] + \cos\left[2\pi(3 f_0 t + 2\alpha t^2)\right] + 0.5 \cos\left[2\pi(4 f_0 t + 3\alpha t^2)\right]. \qquad (5.17)$$


It can be seen from the above equation that the $f_0$ component is obtained as an up-sweeping chirp with bandwidth $2B$, the same as the bandwidth of the $2 f_0$ component. Consequently, the compression process can be applied to it, and axial resolution degradation is prevented. The problem with the newly defined $c_{UU13}(t)$ signal is that the $3 f_0$ component has a wide bandwidth, i.e., $4B$; generally, this component is out of the transducer band. However, its low-frequency part overlaps with the other components ($f_0$ and $2 f_0$), which degrades the quality of the resulting harmonic image. To tackle this limitation, the transmit dual-frequency chirp signal is modified as below, denoted $c_{UD11}(t)$:

$$c_{UD11}(t) = \cos\left[2\pi\left(f_0 t + \frac{\alpha}{2} t^2\right)\right] + \cos\left[2\pi\left(2 f_0 t - \frac{\alpha}{2} t^2\right)\right], \qquad (5.18)$$

where the $2 f_0$ chirp is defined with a bandwidth similar to that of the $f_0$ chirp, but in down-sweeping form. Accordingly, the index UD11 is assigned to the updated transmit chirp. The square of the nonlinear second-order component of the signal in this case is, therefore,

$$c^2_{UD11}(t) = 1 + \cos\left[2\pi(f_0 t - \alpha t^2)\right] + 0.5 \cos\left[2\pi(2 f_0 t + \alpha t^2)\right] + \cos\left[2\pi(3 f_0 t)\right] + 0.5 \cos\left[2\pi(4 f_0 t - \alpha t^2)\right]. \qquad (5.19)$$

It can be seen that the $3 f_0$ component appears as a single-frequency signal; its bandwidth is so small that the overlapping phenomenon is prevented. Note that in comparison with the signal $c_{UU13}(t)$, the transmitted $2 f_0$ component of $c_{UD11}(t)$ is changed to a down-sweeping chirp, while its bandwidth remains similar to that of the $f_0$ component. It is concluded that by using the modified dual-frequency chirp signal $c_{UD11}(t)$, the $2 f_0$ and $3 f_0$ components are successfully separated from the received signal (Shen and Lin 2012). Despite the improved performance obtained with the dual-frequency chirp signal $c_{UD11}(t)$, it can still generate lateral sidelobes; the overlap between the second harmonic components of the signals and the effect of the fourth-order component of the signal on the compression process are two factors that contribute to the lateral sidelobes. To overcome this limitation, a modified method known as range sidelobe inversion (RSI) was proposed in Shen et al. (2014). In the dual-frequency chirp transmit signal $c_{UD11}(t)$ presented in (5.18), the phases of the $f_0$ and $2 f_0$ chirps are taken to be zero. If the phases of the $f_0$ and $2 f_0$ chirps are instead considered as $\phi_1$ and $\phi_2$, respectively, (5.18) is modified as below:

$$c_{mUD11}(t) = \cos\left[2\pi f_0 t + \pi\alpha t^2 + \phi_1\right] + \cos\left[2\pi(2 f_0) t - \pi\alpha t^2 + \phi_2\right], \qquad (5.20)$$

and consequently, the corresponding square of the nonlinear second-order component of the signal is obtained as below:


$$c^2_{mUD11}(t) = 1 + \cos\left[2\pi(f_0 t - \alpha t^2) + (\phi_2 - \phi_1)\right] + 0.5 \cos\left[2\pi(2 f_0 t + \alpha t^2) + 2\phi_1\right]. \qquad (5.21)$$

The components above the second harmonic are excluded since they are out of the passband of the transducer. It is well known that the leakage of one frequency component into the imaging band of another results in sidelobe generation, and the polarity of the lateral sidelobes depends on the interference phase; in particular, in $f_0$ imaging, the interference caused by the $2 f_0$ harmonic component produces sidelobes with the phase $2\phi_1$. In the RSI method, when $f_0$ imaging is of interest, another transmit pulse in addition to (5.20) is used, in which the phase of the generated $2 f_0$ component is changed to $2\phi_1 + \pi$ while the $f_0$ component remains unchanged. This new transmit pulse is named $c_{RSI_1}(t)$ and is designed as below:

$$c_{RSI_1}(t) = \cos\left[2\pi f_0 t + \pi\alpha t^2 + \left(\phi_1 + \frac{\pi}{2}\right)\right] + \cos\left[2\pi(2 f_0) t - \pi\alpha t^2 + \left(\phi_2 + \frac{\pi}{2}\right)\right]. \qquad (5.22)$$

Also, when $2 f_0$ imaging is of interest, the interference due to the $f_0$ component generates sidelobes with the phase $(\phi_2 - \phi_1)$, as can be seen from (5.21). In such a case, the second transmit dual-frequency signal is designed as below:

$$c_{RSI_2}(t) = \cos\left[2\pi f_0 t + \pi\alpha t^2 + \phi_1\right] + \cos\left[2\pi(2 f_0) t - \pi\alpha t^2 + (\phi_2 + \pi)\right], \qquad (5.23)$$

which is designed to generate the $f_0$ harmonic component with a phase equal to $(\phi_2 - \phi_1 + \pi)$. Depending on whether $f_0$ or $2 f_0$ imaging is intended, the transmit signal (5.22) or (5.23), along with the signal $c_{mUD11}(t)$ presented in (5.20), is used to perform the emission, and the corresponding received signals are combined; more precisely, the PI method is applied using the newly defined transmit pulses. Consequently, the lateral sidelobes caused by the interference of the unwanted components are removed. Note that in $f_0$ imaging, the received signals corresponding to the transmit pulses (5.20) and (5.22) are summed, while in $2 f_0$ imaging, the received signals of the emissions (5.20) and (5.23) are subtracted to achieve the desired result.
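Before moving on, the component structure claimed in (5.15) and (5.19) can be checked numerically; the short sketch below squares the two dual-frequency chirps and verifies the predicted expansions term by term (all parameters are illustrative):

```python
import numpy as np

# Numeric check of the squared-chirp expansions (5.15) and (5.19): build the
# dual-frequency chirps, square them, and verify that the predicted cosine
# expressions reproduce the square to machine precision.
fs, f0, B, T = 100e6, 2e6, 1e6, 20e-6
alpha = B / T
t = np.arange(0, T, 1 / fs)
cos2pi = lambda arg: np.cos(2 * np.pi * arg)

# eq. (5.14) and its predicted square, eq. (5.15):
c_uu11 = cos2pi(f0 * t + 0.5 * alpha * t**2) + cos2pi(2 * f0 * t + 0.5 * alpha * t**2)
pred_uu11 = (1 + cos2pi(f0 * t) + 0.5 * cos2pi(2 * f0 * t + alpha * t**2)
             + cos2pi(3 * f0 * t + alpha * t**2)
             + 0.5 * cos2pi(4 * f0 * t + alpha * t**2))

# eq. (5.18) and its predicted square, eq. (5.19):
c_ud11 = cos2pi(f0 * t + 0.5 * alpha * t**2) + cos2pi(2 * f0 * t - 0.5 * alpha * t**2)
pred_ud11 = (1 + cos2pi(f0 * t - alpha * t**2) + 0.5 * cos2pi(2 * f0 * t + alpha * t**2)
             + cos2pi(3 * f0 * t) + 0.5 * cos2pi(4 * f0 * t - alpha * t**2))

print("max |c_UU11^2 - (5.15)|:", np.max(np.abs(c_uu11**2 - pred_uu11)))  # ~1e-15
print("max |c_UD11^2 - (5.19)|:", np.max(np.abs(c_ud11**2 - pred_ud11)))  # ~1e-15
# Note the f0 term: a pure tone for UU11 but an alpha*t^2 chirp for UD11,
# and conversely the 3*f0 term, which is a pure tone only for UD11.
```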

5.2.8 Excitation Method Using Mixed-Frequency Sine Pulses

In Karlinsky and Ilovitsh (2022), an excitation pulse consisting of two sine wavefronts with different frequencies $f_1$ and $f_2$ is used in contrast harmonic imaging, in which microbubbles are injected into the blood vessel to perform the imaging process. This method is similar to the coded excitation technique using dual-frequency chirp pulses, except that sine waves are used instead of chirp pulses. The excitation pulse is formulated as below:

$$P(t) = w_1 \cos(2\pi f_1 t) + w_2 \cos(2\pi f_2 t). \qquad (5.24)$$

The nonlinear second-order component of the signal is obtained according to the following equation:

$$P^2(t) = w_0 + w_1 \cos(2\pi f_1 t) + w_2 \cos(2\pi f_2 t) + w_3 \cos(2\pi(2 f_1) t) + w_4 \cos(2\pi(2 f_2) t) + w_5 \cos(2\pi(f_1 + f_2) t) + w_6 \cos(2\pi(f_2 - f_1) t) + \ldots \qquad (5.25)$$

It can be seen that in addition to the fundamental and second harmonic components of the received signal, the two components $f_1 + f_2$ and $f_2 - f_1$ are also generated. The presence of these two components, on top of the second harmonic component, amplifies the microbubble signal; consequently, the quality of the resulting image is improved compared to the case in which a single-frequency pulse is used for emission. Note that the component $f_1 + f_2$ usually lies outside the passband of the transducer; therefore, only the component $f_2 - f_1$ is taken into account.

5.2.9 Total Least Square Filtering Method

In contrast harmonic imaging, the second harmonic component of the microbubbles may be contaminated by the second harmonic component generated in the tissue, which degrades the image quality. To separate the second harmonic component of the microbubbles from that of the tissue, a total least square (TLS) method was developed in Zhu et al. (2021); in this method, the second harmonic components of the received signal before and after the contrast agent injection are used as the reference and input signals, respectively. The problem is expressed as below:

$$(S_I + \Delta_I)\, C = S_O + \Delta_O, \qquad (5.26)$$

where $C$ is the optimal coefficient, $S_I \in \mathbb{C}^{s \times m}$ denotes the input signal, and $S_O \in \mathbb{C}^{s \times n}$ represents the second harmonic component of the received signal before the microbubble injection. Also, $\Delta_I$ and $\Delta_O$ denote the noise associated with the signals $S_I$ and $S_O$, respectively. The coefficient $C$ is obtained from the following minimization problem:

$$C_{\mathrm{opt}} = \min_{C} \left\| \left[\Delta_I \;\; \Delta_O\right] \right\|_F, \qquad (5.27)$$


Fig. 5.2 The schematic of the TLS method

where $\|\cdot\|_F$ denotes the Frobenius norm. The above minimization problem cannot be solved directly since $\Delta_I$ is unknown. Therefore, to obtain the optimum solution of (5.27), the SVD of the matrix $S = [S_I \;\; S_O] \in \mathbb{C}^{s \times (m+n)}$ is used; we have $S = U \Sigma V^T$, where $\Sigma \in \mathbb{C}^{(m+n) \times (m+n)}$ is a diagonal matrix consisting of the singular values of $S$, and the matrices $U \in \mathbb{C}^{s \times (m+n)}$ and $V \in \mathbb{C}^{(m+n) \times (m+n)}$ contain the singular vectors of $S$. The coefficient $C$ is obtained from the entries of $V$ as below:

$$C_{\mathrm{opt}} = \begin{bmatrix} v_{1,m+1} & \cdots & v_{1,m+n} \\ \vdots & & \vdots \\ v_{i,m+1} & \cdots & v_{i,m+n} \\ \vdots & & \vdots \\ v_{m,m+1} & \cdots & v_{m,m+n} \end{bmatrix} \begin{bmatrix} v_{m+1,m+1} & \cdots & v_{m+1,m+n} \\ \vdots & & \vdots \\ v_{m+i,m+1} & \cdots & v_{m+i,m+n} \\ \vdots & & \vdots \\ v_{m+n,m+1} & \cdots & v_{m+n,m+n} \end{bmatrix}^{-1}. \qquad (5.28)$$

Finally, the second harmonic component of the tissue is estimated as $\hat{S}_O \approx S_I C_{\mathrm{opt}}$. As the input signal $S_I$ includes the second harmonic components of both the tissue and the microbubbles, the second harmonic component of the microbubbles can be obtained by subtracting the output of the TLS model from $S_I$, i.e., $S_I - \hat{S}_O$. The schematic of the described TLS method is illustrated in Fig. 5.2.
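A minimal sketch of the TLS computation follows; it builds a synthetic $S_I$, $S_O$ pair related by a known matrix, perturbs both with noise, and recovers the coefficient from the right singular vectors of $[S_I \; S_O]$. The closed form used below is the classical TLS expression, which carries a minus sign that may differ from the sign convention of (5.28); dimensions and noise levels are arbitrary assumptions.

```python
import numpy as np

# Sketch of the TLS coefficient of eqs. (5.26)-(5.28): recover a known
# mapping C_true from noisy data via the SVD of the stacked matrix [S_I S_O].
rng = np.random.default_rng(1)
s, m, n = 500, 6, 4
C_true = rng.standard_normal((m, n))
S_I = rng.standard_normal((s, m))
S_O = S_I @ C_true
S_I_noisy = S_I + 0.02 * rng.standard_normal((s, m))     # Delta_I
S_O_noisy = S_O + 0.02 * rng.standard_normal((s, n))     # Delta_O

_, _, Vt = np.linalg.svd(np.hstack([S_I_noisy, S_O_noisy]))
V = Vt.T
C_opt = -V[:m, m:] @ np.linalg.inv(V[m:, m:])            # classical TLS solution
print("recovery error:",
      np.linalg.norm(C_opt - C_true) / np.linalg.norm(C_true))

# In the imaging application, S_I @ C_opt estimates the tissue harmonic, and
# S_I - S_I @ C_opt the microbubble component (that subtraction needs m == n).
```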

5.2.10 Adaptive Pulse Excitation Selection Method

In the PI method (as well as other methods where multiple pulses are used to extract the second harmonic component), matching the excitation pulse to the imaging tissue improves the performance of harmonic imaging in terms of image quality and robustness. More precisely, it is desirable to select the excitation pulse adaptively according to the imaging medium. In this regard, an adaptive pulse selection method was developed in Ménigot and Girault (2016); in this method, an optimization problem is expressed as below:

$$\mathbf{w}_{\mathrm{opt}} = \operatorname*{argmax}_{\mathbf{w}} \left[C(\mathbf{w})\right], \qquad (5.29)$$


where $\mathbf{w} \in \mathbb{C}^{S \times 1}$ is the transmit signal (with $S$ samples), and $C(\mathbf{w})$ is the contrast resolution cost function to be maximized. By solving the above maximization problem, the optimal excitation pulse is obtained without any prior information about the imaging tissue. This method is an iterative process whose general steps are as follows: in each iteration, $M$ waves (as well as their $180^\circ$ phase-shifted versions) are randomly selected and used to perform harmonic imaging; this step explores the imaging tissue. Then, for each of the $M$ received signals, whose second harmonic components are extracted using the PI algorithm, the contrast resolution metric is calculated. At the end of the iteration, the wave that yields the maximum value of the contrast resolution metric is selected as the optimum wave for harmonic imaging. Note that a very large variety of initial waves must be defined in this adaptive excitation wave selection method, so the computational burden of this optimization problem is very high. In order to reduce the computational complexity and speed up the processing, it was suggested in Ménigot and Girault (2016) to use a genetic algorithm.

5.3 Image Construction Techniques Based on Harmonic Imaging

Once the second harmonic component of the received signal has been extracted using one of the existing algorithms, such as the PI method, the extracted signal must be further processed to achieve the final harmonic image. The non-adaptive DAS beamformer is one of the most commonly used algorithms in this regard, but other image formation algorithms can also be used to obtain the reconstructed image. In particular, it was previously observed that the MV algorithm improves the image resolution compared to the DAS algorithm when applied to the fundamental component of the received signal; one can use this data-dependent algorithm in harmonic imaging in order to take advantage of the benefits of both techniques simultaneously (Näsholm et al. 2011). In the following, other existing techniques used in harmonic imaging are discussed.

5.3.1 Harmonic Imaging Combined with Delay-Multiply-and-Sum Algorithm

As previously mentioned in Sect. 3.5.1.9, the nonlinear DMAS algorithm generates a $2 f_0$ frequency component due to the multiplication of the received signals; this process can be interpreted as the generation of an artificial second harmonic component. In Matrone et al. (2018), it was proposed to combine this technique with harmonic imaging; in this method, the second harmonic component of the received signal is extracted using a band-pass filter, the DMAS algorithm is performed on the filtered signal, and finally the output of the DMAS algorithm is filtered using a band-pass filter with a central frequency of $4 f_0$, from which the reconstructed image is obtained. It has been shown that this DMAS-based harmonic imaging method improves the image contrast and suppresses the noise level better than conventional harmonic imaging in which the DAS algorithm is used to obtain the reconstructed image. Note that by using the DMAS algorithm, it is also possible to obtain the reconstructed images corresponding to the fundamental, second, third, and fourth harmonic components of the received signal and combine them; in such a case, the received signal is first filtered so that the fundamental and second harmonic components are included, the pre-processed signal then goes through the DMAS processing steps, and finally the information associated with the different frequency components is obtained using band-pass filters with central frequencies of $f_0$, $2 f_0$, $3 f_0$, and $4 f_0$. This technique is known as mixed-signal imaging (Matrone et al. 2018). The combination is performed by enveloping, normalizing, and finally summing the beamformed signals.
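The filtered-DMAS pipeline can be sketched on synthetic, already-aligned channel data as below; the filter orders, bandwidths, and noise level are illustrative assumptions, and SciPy is assumed to be available:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Sketch of filtered-DMAS harmonic imaging (Matrone et al. 2018): band-pass
# the channels around 2*f0, apply DMAS (signed-root pairwise products), then
# band-pass the output around 4*f0. Channel data here is synthetic.
fs, f0, N = 50e6, 2e6, 16
t = np.arange(0, 8e-6, 1 / fs)
rng = np.random.default_rng(0)
# Already-delayed (aligned) channels: fundamental + weak 2nd harmonic + noise.
ch = (np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)
      + 0.1 * rng.standard_normal((N, t.size)))

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

ch_h = bandpass(ch, 1.5 * f0, 2.5 * f0)          # isolate the 2*f0 component

# DMAS: sum signed square roots of all distinct channel-pair products.
y = np.zeros(t.size)
for i in range(N - 1):
    prod = ch_h[i] * ch_h[i + 1:]
    y += np.sum(np.sign(prod) * np.sqrt(np.abs(prod)), axis=0)

y_out = bandpass(y, 3.5 * f0, 4.5 * f0)          # keep 4*f0 content of DMAS output
print("output RMS:", np.sqrt(np.mean(y_out**2)))
```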

5.3.2 Harmonic Imaging Combined with Fundamental Imaging Based on Minimum Variance Algorithm

As discussed earlier, the improved resolution of harmonic imaging compared to fundamental imaging comes at the expense of SNR reduction and, consequently, penetration depth degradation. In contrast, the image reconstructed using the fundamental component of the received signal improves the penetration depth, but its resolution is inferior to that of harmonic imaging. In Varnosfaderani and Asl (2019), it was proposed to reach an optimum reconstructed image by combining the fundamental and harmonic images. This combination is performed adaptively, inspired by the MV principle, and is known as the mixing-together MV (MTMV) algorithm. Consider the fundamental and the second harmonic components of the received signal as $x_f(n) \in \mathbb{C}^{N \times 1}$ and $x_h(n) \in \mathbb{C}^{N \times 1}$, respectively. The weighted summation of these two components is expressed as below:

$$x(n) = \eta_f \left| \mathbf{w}_f^H x_f(n) \right| + \eta_h \left| \mathbf{w}_h^H x_h(n) \right|, \qquad (5.30)$$

where $\mathbf{w}_f(n) \in \mathbb{C}^{N \times 1}$ and $\mathbf{w}_h(n) \in \mathbb{C}^{N \times 1}$ denote the weight vectors corresponding to $x_f(n)$ and $x_h(n)$, respectively. Also, the parameters $\eta_f$ and $\eta_h$ are constants that determine the contribution of each component; note that $\eta_f + \eta_h = 1$ and $\eta_f, \eta_h \ge 0$. In order to make it possible to combine the fundamental and harmonic components, the signals corresponding to both components should be normalized, since the amplitude of the fundamental component is higher than that of the harmonic component; without normalization, the effect of the harmonic component would be suppressed during the combination.


In the MTMV algorithm, an attempt is made to optimize the weight vectors $\mathbf{w}_f(n)$ and $\mathbf{w}_h(n)$ as well as the parameters $\eta_f$ and $\eta_h$ such that the output power of the interference plus noise corresponding to the fundamental and harmonic components is minimized while the amplitude of the desired signal remains unchanged. In other words, we have

$$\min_{\mathbf{w}_i,\, \eta_i} \; \sum_i \left[\eta_i^2\, \mathbf{w}_i^H(n)\, \hat{R}_i(n)\, \mathbf{w}_i(n)\right],$$

$$\text{subject to} \quad \mathbf{w}_i^H \mathbf{a} = 1, \quad \eta_i \ge 0, \quad \sum_i \eta_i = 1, \qquad (5.31)$$

i

ˆ i denotes the estimated covariance matrix of the .ith compofor .i ∈ { f, h}, and . R nent. The first constraint of the above optimization problem (i.e., .wiH a = 1) can be rewritten according to the following equation: .η f .1 + ηh .1 = 1 [ ] H η f w f (n) + ηh w hH (n) a = 1.

(5.32)

Also, in order to further simplify the problem, $\eta_i$ and $\mathbf{w}_i(n)$ (for $i \in \{f, h\}$) are combined into a single complex vector whose optimum value is sought; that is, $\dot{\mathbf{w}}_i(n) = \eta_i \mathbf{w}_i(n)$ is considered, and (5.31) is rewritten as below:

$$\min_{\dot{\mathbf{w}}_i} \; \sum_i \left[\dot{\mathbf{w}}_i^H(n)\, \hat{R}_i(n)\, \dot{\mathbf{w}}_i(n)\right],$$

$$\text{subject to} \quad \sum_i \dot{\mathbf{w}}_i^H(n)\, \mathbf{a} = 1, \quad \dot{\mathbf{w}}_i^H(n)\, \mathbf{a} \ge 0, \qquad (5.33)$$

.

−1

˙ iH (n) = w

.

ˆ (n) aH R [ i −1 ] , i ∈ { f, h}, Σ ˆ i (n) a aH i R

(5.34)

which results in image improvement in terms of both resolution and contrast compared to the case where either component (fundamental or harmonic) is used alone. Note that the combination could also be performed by simply averaging the fundamental and harmonic images; however, the resulting image would not be optimal in that case, since each image, including its weaknesses, would contribute equally.
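The closed form (5.34) is straightforward to evaluate; the sketch below does so on synthetic covariance estimates and checks the combined distortionless constraint (the covariance construction and diagonal loading are illustrative assumptions):

```python
import numpy as np

# Sketch of the MTMV closed form, eq. (5.34): given covariance estimates for
# the fundamental (f) and harmonic (h) components, the combined weights are
# w_i = R_i^{-1} a / sum_j(a^H R_j^{-1} a). Covariances here are synthetic.
rng = np.random.default_rng(0)
N = 16
a = np.ones(N, dtype=complex)                    # steering vector (pre-steered data)

def sample_cov(scale):
    X = scale * rng.standard_normal((N, 4 * N))
    return X @ X.T / (4 * N) + 0.01 * np.eye(N)  # loaded sample covariance

R = {"f": sample_cov(1.0), "h": sample_cov(0.3)}
denom = sum(a.conj() @ np.linalg.solve(R[i], a) for i in R)
w = {i: np.linalg.solve(R[i], a) / denom for i in R}    # eq. (5.34)

# The unit-gain constraint of (5.31)-(5.33) is split across both components:
print("sum_i w_i^H a =", sum(w[i].conj() @ a for i in w))   # ~1
```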

5.3.3 Harmonic Imaging Combined with a Phase Aberration Correction Algorithm

Apart from the phase aberration correction algorithms discussed in Chap. 4, one can use harmonic imaging to improve the image quality and reduce the effect of phase aberration; it is well known that the harmonic components of the signal are generated during wave propagation through the tissue. That is, the harmonic components of the signal travel through the tissue only once, whereas the fundamental component travels the same path twice. Therefore, the phase aberration effect on the harmonic components of the received signal is smaller than on the fundamental component; the harmonic components are less affected by phase aberration, which results in a better-focused image reconstructed from an inhomogeneous tissue. To further improve the image quality and remove the phase aberration completely when imaging tissues with severe aberrating layers, one can use harmonic imaging together with a phase aberration correction algorithm. For instance, in Shin and Yen (2013), it was proposed to combine harmonic imaging with the DAX algorithm (discussed in Sect. 4.4.1) to compensate for the weak performance of the DAX technique in the presence of strong aberration. In this technique, the second harmonic component is extracted from the received signal in the first step, and the result is processed using the DAX algorithm in the second step. In such a case, the aberration effect is considerably suppressed thanks to the first processing step; consequently, the remaining aberration is successfully corrected using the DAX algorithm, and a well-focused reconstructed image results.

5.3.4 Harmonic Imaging Combined with Synthetic Aperture Sequential Beamforming

To improve the penetration depth as well as the SNR of harmonic imaging, one can take advantage of the synthetic aperture sequential beamforming (SASB) method, which can be interpreted as a modified version of the STA technique previously discussed in Sect. 1.11.2. In the SASB imaging method, a virtual source is generated by focusing at a focal point using a number of array elements (a subarray). The generated virtual source propagates spherical waves into deeper regions of the tissue. This improves the imaging depth compared to the conventional imaging method, in which the real array on the surface of the imaging medium is considered as the transmission source (Kortbek et al. 2013). The schematic of the generated virtual source in SASB is demonstrated in Fig. 5.3.

Fig. 5.3 The schematic of wave propagation from the generated virtual source in SASB

Fig. 5.4 The schematic of the first processing step of SASB

Consider the subarray length as $L$, and assume that the array consists of $N$ elements. Considering $N - L + 1$ subarrays and generating a virtual source for each subarray, an array of $N - L + 1$ virtual sources is created, by using which the image resolution is improved compared to the conventional imaging process. Furthermore, the image resolution becomes less dependent on the imaging depth, as will be shown in the following. The SASB imaging method is divided into two main steps. In the first step, once the virtual sources are generated for each subarray, beamforming is performed using a focal point in both transmission and reception (Hemmsen et al. 2012). This process produces a series of scanlines, each of which corresponds to a virtual source. As only a scanline is generated and stored for each virtual source (not the whole volume of the imaging medium), the memory requirement is considerably lower than that of the STA imaging technique discussed in Sect. 1.11.2. The schematic of the first step of the SASB imaging method is demonstrated in Fig. 5.4. Each sample corresponding to an imaging point is represented in multiple scanlines; in other words, multiple scanlines contain information about the same imaging point. Therefore, in order to obtain a high-resolution image, the scanlines that contain information about a common imaging point should be compounded. The second step of the SASB imaging method is to combine these scanlines. In the following, each step is discussed in more detail.


First Step

In the first step of SASB, each virtual source is created by applying time delays to the corresponding subarray. In the schematic shown in Fig. 5.5, the subarray length is 7, and the virtual source is denoted as $f$. The propagated wave from the virtual source covers a limited area of the imaging medium, determined by the so-called opening angle:

$$\alpha = 2 \arctan\left(\frac{1}{2 F_\#}\right), \qquad (5.35)$$

where $F_\#$ denotes the f-number. The time it takes for the wave propagated from the transmit point $e$ to reach the imaging point $p$ and be reflected back to the receiving element $r$ via the virtual source is calculated as below:

$$t(r) = \frac{1}{c}\left(d_1 + 2 d_2 + d_3\right), \qquad (5.36)$$

where $d_1$ is the distance between the transmitting element $e$ and the point $f$, $d_2$ denotes the distance between the point $f$ and the imaging point $p$, and $d_3$ is the distance between the point $f$ and the receiving element $r$. These parameters are shown in Fig. 5.5. Indeed, the distance traveled by the propagated wave from the transmit point $e$ to the imaging point $p$ through the virtual source equals $d_1 + d_2$, and the return path from the imaging point $p$ to the receiving element $r$ (again, through the virtual point) equals $d_2 + d_3$. Note that the central element of the subarray is always considered as the transmitting element. In the case in which the imaging point is above the virtual source, the time delay in (5.36) is rewritten as below:

$$t(r) = \frac{1}{c}\left(d_1 - 2 d_2 + d_3\right).$$

Fig. 5.5 The schematic of the wave propagation path to obtain the fixed/dynamic receive focusing time delays


The beamforming described above uses a fixed receive focus. In the case where dynamic receive focusing is considered, the time delay is calculated according to the following equation:

$$t(r) = \frac{1}{c}\left(d_1 \pm d_2 + d_4\right), \qquad (5.37)$$

where $d_4$ denotes the distance between the imaging point $p$ and the receiving element $r$. In the above equation, the sign of $\pm$ is selected depending on the position of the point $p$ relative to the point $f$; as in the fixed receive focusing technique, if the imaging point is above the virtual source, the negative sign is selected, and otherwise the positive sign is used. The parameter $d_3$, which depends only on the positions of the receiving elements, is independent of the imaging point $p$, as can also be seen from the schematic in Fig. 5.5. In contrast, the value of the parameter $d_4$ changes for different positions of the point $p$. Therefore, the computational complexity of the fixed receive focusing technique is considerably lower than that of the dynamic receive focusing technique.
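The first-stage delay computation of (5.35)-(5.37) can be sketched for a single subarray as below; the geometry (element pitch, focal depth, imaging point) consists of illustrative assumptions:

```python
import numpy as np

# Sketch of the SASB first-stage delays, eqs. (5.35)-(5.37): fixed- and
# dynamic-focus delays via the virtual source f, for a point below the focus.
c, pitch, L = 1540.0, 0.3e-3, 7                 # speed, element pitch, subarray size
elems_x = (np.arange(L) - L // 2) * pitch       # subarray element x-positions (z = 0)
f_pt = np.array([0.0, 20e-3])                   # virtual source (focal point)
F_num = f_pt[1] / (L * pitch)
alpha = 2 * np.arctan(1 / (2 * F_num))          # opening angle, eq. (5.35)
print(f"opening angle: {np.degrees(alpha):.1f} deg")

def dist(p, q):
    return float(np.hypot(p[0] - q[0], p[1] - q[1]))

e = np.array([0.0, 0.0])                        # transmit point: subarray centre
p = np.array([1e-3, 35e-3])                     # imaging point (below the focus)
sign = 1.0 if p[1] > f_pt[1] else -1.0          # flipped for points above the focus
for r_x in elems_x:                             # receive across the subarray
    r = np.array([r_x, 0.0])
    d1, d2, d3 = dist(e, f_pt), dist(f_pt, p), dist(f_pt, r)
    t_fix = (d1 + sign * 2 * d2 + d3) / c       # eq. (5.36), fixed receive focus
    d4 = dist(p, r)
    t_dyn = (d1 + sign * d2 + d4) / c           # eq. (5.37), dynamic receive focus
    print(f"rx at {r_x*1e3:+.2f} mm: fixed {t_fix*1e6:.3f} us,"
          f" dynamic {t_dyn*1e6:.3f} us")
```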

Second Step

Once the scanlines are generated and stored in the first processing step, it is necessary to compound the scanlines that contain information about the same imaging point; the scanlines that fulfill this condition must first be identified. The following equation gives the number of scanlines to be compounded, $K(z)$ (Kortbek et al. 2013):

$$K(z) = \frac{L(z)}{\Delta} = \frac{2\left|z - z_f\right| \tan\left(\frac{\alpha}{2}\right)}{\Delta}, \qquad (5.38)$$

where the opening angle $\alpha$ is obtained according to (5.35), $\Delta$ denotes the spacing between two consecutive virtual sources, $z_f$ is the depth of the virtual source, and $z$ is the depth of the imaging point. Note that $K(z)$ is a function of the depth $z$; from the above equation, one can conclude that the number of scanlines participating in the second processing step increases with the depth of the imaging point. This makes the resolution of the reconstructed image less dependent on the imaging depth. Once the value of $K(z)$ is determined, the second-step beamforming is performed, and the value corresponding to the imaging point $p$ is obtained according to the following equation:

$$y_{\mathrm{SASB}}(p) = \sum_{i=1}^{K(z)} w_i(p)\, s_i\!\left(t_{s_i}(p)\right), \qquad (5.39)$$

where $w_i$ denotes the weighting coefficient, and $s_i(t_{s_i}(p))$ is the received sample corresponding to the time delay of the $i$th scanline, i.e., $t_{s_i}$. For each scanline $i$, the beamforming time delay is obtained from the propagation path of the wave from the transmit point $e_i$ to the imaging point $p$ and its return through the same path. More precisely, if the distance between the transmit point $e_i$ and the position of the virtual source $f_i$ corresponding to the $i$th scanline is denoted as $d_{1i}$, and the distance between the point $f_i$ and the imaging point is denoted as $d_{2i}$, the time delay is formulated as below:

$$t_{s_i}(p) = \frac{2}{c}\left(d_{1i} \pm d_{2i}\right). \qquad (5.40)$$

Similar to the first step, the sign $\pm$ in the above equation depends on the position of the imaging point with respect to the virtual source. Note that in the SASB method, fixed receive focusing is performed. If the imaging process is instead performed with dynamic receive focusing, the resulting technique is known as the bidirectional pixel-based focusing (BiPBF) method. The BiPBF method improves the image quality compared to the SASB imaging technique; however, the memory requirement increases, since the RF data corresponding to all of the scanlines is required to perform dynamic focusing for each imaging point, and its computational complexity is also higher than that of the SASB method. Note that the word "bidirectional" refers to the fact that each virtual source created at a certain depth of the imaging medium emits a wavefront in two directions, forward and backward with respect to the virtual source; this can also be seen in the schematic of virtual source generation shown in Fig. 5.3. This principle makes it possible to perform synthetic focusing on imaging points lying in both the forward and backward directions of the virtual source. The general processing steps of BiPBF are similar to those of the SASB imaging method, except that dynamic receive focusing is used instead of fixed receive focusing; therefore, a detailed explanation of this technique is omitted.

Application to Harmonic Imaging

Harmonic imaging using SASB has been proposed in order to take advantage of the improved lateral resolution of harmonic imaging while the penetration depth, as well as the SNR of the final image, is also improved thanks to the SASB method (Hemmsen et al. 2014; Du et al. 2011). Also, in Bae and Jeong (2000), Bae et al. (2008), it was proposed to combine harmonic imaging with the BiPBF technique to take advantage of both methods simultaneously. Note that in comparison with harmonic imaging combined with the BiPBF method, the advantage of combining harmonic imaging with the SASB method is its lower computational complexity and smaller memory requirement. Although the MV algorithm improves the resolution of the reconstructed image compared to the commonly used DAS method, one should note that this data-dependent beamformer is sensitive to noise; when the SNR of the received signal is low, the estimation of the covariance matrix is not accurate enough, and consequently, the performance of the algorithm is degraded. As the second harmonic component of the received signal suffers from low SNR compared to the fundamental component, the performance of the MV algorithm is expected to be disturbed in harmonic imaging. In Varnosfaderani et al. (2018), it was proposed to combine the BiPBF method and the adaptive MV algorithm in harmonic imaging in order to overcome this limitation. More precisely, by using the BiPBF method, the signal SNR (including the second harmonic component), as well as the penetration depth, is improved; then, by applying the MV algorithm to the improved-SNR data, the quality of the resulting image is improved in terms of resolution compared to the case where the MV algorithm is combined with the SASB method.

5.3.5 Harmonic Imaging Combined with Multiplane-Wave Compounding Method

As previously discussed in Sect. 1.11.3, PWI is an ultrafast imaging method that significantly improves the frame rate compared to conventional B-mode imaging; the principles of this imaging system are explained in detail in Chap. 6 of this book. To further improve the SNR of PWI, the multiplane-wave (MW) compounding method has been developed; in this modified imaging technique, multiple plane waves are emitted quasi-simultaneously, with short time intervals, in each transmission (Tiran et al. 2015). Although the SNR of the resulting image is improved by the MW imaging method, the long transmit pulses induce reverberation artifacts in the received signal, which negatively affect the reconstructed image. Considering that harmonic imaging is able to successfully suppress reverberation artifacts, it can be combined with the MW imaging method in order to overcome this limitation and achieve an image of better quality than either imaging technique alone (Gong et al. 2016). In the MW imaging method, the transmitted plane waves are coded using $\pm 1$ coding factors together with the Hadamard matrix, which modifies the polarities of the transmitted pulses. For each emission, according to the sign of the coding factor (plus or minus one) assigned to the plane waves, their corresponding received signals are added or subtracted, and the result is considered as the received signal of that emission. However, with this structure it is not possible to extract the second harmonic component of the received signal using the PI algorithm; inverting the polarity of the transmit pulse in the MW imaging method inverts only the polarity of the fundamental signal, while the polarity of the second harmonic component remains unchanged, and consequently, the PI method becomes ineffective. In this regard, in Gong et al. (2016), it was proposed to modify the MW imaging method such that a time delay $\tau_d$ is used instead of the coding factor $\pm 1$; in other words, the coding factors are applied as temporal delays rather than polarity inversions. The factors $\{-1, +1\}$ are substituted by the time delays $\{0, \tau_d\}$ (the value $0$ is used when no temporal delay is applied). Consequently, the PI technique can be used in the MW imaging method to obtain the second harmonic component. Note that conventionally, the delay $\tau_d$ equals $T/2$; however, when harmonic imaging is to be combined with the MW technique, $\tau_d = T/4 = 1/(4 f_0)$ is considered, a value equivalent to applying a time delay of $T/2$ to the harmonic component (with central frequency $2 f_0$). By modifying the time delay in this way, the second harmonic component is extracted accurately. Assume that $I$ plane waves are transmitted in each emission. The received signal of the $j$th emission, i.e., $m_j(t)$, is formulated as below (Gong et al. 2016):

$$m_j(t) = \sum_{i=1}^{I} s_{i,j}\left(t - \tau(i)\right), \qquad (5.41)$$

where $s_{i,j}(t)$ denotes the received signal corresponding to the $i$th transmitted plane wave in the $j$th emission. If the time delay $\tau_d$ is applied to the transmitted plane wave, then $\tau(i) = \tau_d$ in the above equation; if no time delay is applied, $\tau(i) = 0$. The Fourier transform of (5.41) is expressed as below:

$$M_j(f) = \sum_{i=1}^{I} S_i(f)\, e^{-j 2\pi f \tau(i)}. \qquad (5.42)$$

Considering $A_i(f) = e^{-j 2\pi f \tau(i)}$ as the coding factor, the above equation can be rewritten as below:

$$A S = M, \qquad (5.43)$$

where the entries of the matrix $A$ are filled using the coding factor $A_i(f)$. Considering (5.43), the received signals $S$ can be obtained as $S = A^{-1} M$. Finally, by applying an inverse Fourier transform to the obtained $S$, the second harmonic component of the received signal is obtained for each emission. The reconstructed image is achieved by applying a beamformer to the obtained second harmonic components and compounding the results of all emissions.
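The sketch below implements the delay-coded forward model (5.41)-(5.42) and the per-frequency decoding (5.43) for $I = 2$ plane waves on toy band-limited signals; the usable band and all parameters are assumptions made for illustration:

```python
import numpy as np

# Sketch of the delay-encoded multiplane-wave decoding of eqs. (5.41)-(5.43):
# with I = 2 plane waves per emission and delay codes tau in {0, T/4}, the
# per-frequency coding matrix A, with entries exp(-j*2*pi*f*tau(i)), is
# inverted bin by bin to recover each plane wave's response. Signals are
# band-limited so every decoded bin is well conditioned.
fs, f0, I, n = 50e6, 2e6, 2, 1024
tau_d = 1 / (4 * f0)                            # T/4 delay (half period at 2*f0)
taus = np.array([[0.0, 0.0],                    # emission 1: both waves undelayed
                 [0.0, tau_d]])                 # emission 2: second wave delayed
f = np.fft.rfftfreq(n, 1 / fs)
band = (f > 0.5 * f0) & (f < 3 * f0)            # assumed usable transducer band

rng = np.random.default_rng(0)
S_true = np.fft.rfft(rng.standard_normal((I, n)), axis=1) * band
A = np.exp(-2j * np.pi * f[None, None, :] * taus[:, :, None])  # (emission, wave, bin)
M = np.einsum("jik,ik->jk", A, S_true)          # eq. (5.42): emission spectra

S_rec = np.zeros_like(S_true)
for k in np.where(band)[0]:                     # eq. (5.43): S = A^{-1} M, per bin
    S_rec[:, k] = np.linalg.solve(A[:, :, k], M[:, k])
print("max spectral error:", np.abs(S_rec - S_true).max())   # ~1e-12
```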

5.3.6 Harmonic Imaging Combined with Attenuation Coefficient Estimation Method

Attenuation coefficient estimation is used to evaluate the fat content in liver US imaging. The fundamental frequency component of the received signal is typically used to perform the attenuation estimation. However, the reverberation artifact that arises in the body wall during the imaging procedure contaminates the fundamental component of the received signal with noise and, consequently, induces errors in the attenuation estimate. To reduce the reverberation effect and improve the estimation accuracy, the second harmonic component of the received signal can be used. In particular, in Gong et al. (2020), an attenuation coefficient estimation method known as the reference frequency method was proposed to be combined with harmonic imaging to improve the accuracy of the results. In this combinatorial technique, the power of each frequency component ($f_i$) is normalized by its adjacent frequency component ($f_{i-1}$) in order to remove the effects of dependencies such as focusing and TGC. Then, the attenuation estimate is obtained from the decay slope of the normalized frequency power relative to each second harmonic frequency component (i.e., $f_i$). This process is repeated for different frequency components, and the average value of the attenuation coefficient estimates is obtained accordingly. It has been shown that this technique leads to a more robust and accurate estimation compared to the case where the estimation is performed using the fundamental component of the received signal. Note that the discussed attenuation estimation method is sensitive to noise; if the signal is contaminated with noise, the accuracy of the estimation is negatively affected. To overcome this limitation, it is desirable to perform a noise suppression technique on the received signal before attenuation estimation (Gong et al. 2021); consider the received signal corresponding to the frequency component $f_i$ and the depth $z$ as below:

$$s(f_i, z) = s_0(f_i, z) + n(f_i, z), \qquad (5.44)$$

where $s_0$ and $n$ denote the noise-free signal and the additive noise, respectively. The power spectrum of the received signal is obtained according to the following equation:

$$Y(f_i, z) = s(f_i, z)\, s^*(f_i, z) = \left[s_0(f_i, z) + n(f_i, z)\right]\left[s_0(f_i, z) + n(f_i, z)\right]^* = s_0(f_i, z)\, s_0^*(f_i, z) + n(f_i, z)\, n^*(f_i, z) + s_0(f_i, z)\, n^*(f_i, z) + s_0^*(f_i, z)\, n(f_i, z), \qquad (5.45)$$

which can be simplified as below by assuming that the pure signal and the noise are independent:

$$Y(f_i, z) = s_0(f_i, z)\, s_0^*(f_i, z) + n(f_i, z)\, n^*(f_i, z) = S_0(f_i, z) + N(f_i, z). \qquad (5.46)$$

If the power spectrum of the noise is known, the noise-free signal can easily be recovered by subtracting the power spectrum of the noise from the power spectrum of the received signals. In Gong et al. (2021), it was suggested to record the system noise using a reception performed with the same system settings, including the selected beamformer, apodization, TGC, and other gain settings. The power spectrum of the resulting system noise, denoted as $\hat{N}(f_i, z)$, is then obtained, and we have

$$\hat{S}_0(f_i, z) = S(f_i, z) - \hat{N}(f_i, z), \qquad (5.47)$$


and the noise-suppressed signal is estimated accordingly. Once the noise-free signal is estimated, it is used to estimate the attenuation coefficient as explained above, resulting in a more accurate estimation.
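To make the chain of (5.44)–(5.47) together with the reference frequency normalization concrete, the following is a minimal sketch; the function name, the array layout, and the final calibration from slope to attenuation coefficient (omitted here) are assumptions for illustration, not details taken from Gong et al. (2020, 2021).

```python
import numpy as np

def normalized_power_slope(Y, N_hat, z, i):
    """Noise-suppressed, spectrum-normalized decay slope for component f_i.
    Y, N_hat: (F, Z) power spectra of the received signal and the system noise,
    sampled over F frequency bins and Z depths z."""
    S0 = np.clip(Y - N_hat, 1e-12, None)          # Eq. (5.47), kept positive
    ratio_db = 10 * np.log10(S0[i] / S0[i - 1])   # normalize f_i by neighbor f_{i-1}
    slope, _ = np.polyfit(z, ratio_db, 1)         # decay slope versus depth
    return slope                                  # averaged over several i in practice
```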

5.3.7 Harmonic Imaging Using Dual-Frequency Transducers

To benefit from the advantages of harmonic imaging in intravascular imaging, in Lee and Chang (2019) it was proposed to use two transducers with different central frequencies, a configuration known as dual-frequency intravascular US imaging. In this method, a transducer with the central frequency $f_0$ is used to transmit the signal into the imaging medium, while the other, with the central frequency $2f_0$, is used to receive the generated second harmonic component. Each transducer has its own spectrum, and as a result, the limitation regarding the overlap between the fundamental and second harmonic components of the received signal is overcome. In other words, without the need to increase the frequency bandwidth, the axial resolution is preserved, and the fundamental as well as the second harmonic component of the received signal can be obtained simultaneously by filtering. Using the dual-frequency intravascular US imaging method, it is even possible to combine the images corresponding to each sub-band ($f_0$ and $2f_0$) to obtain an image of improved quality. In such a case, if the signals obtained from each sub-band are uncorrelated, their combination (using a simple averaging technique) leads to an image of improved contrast. However, if this condition is not met, weighting must be applied to the received signals of each sub-band. Assuming that there exist $N_{\mathrm{sub}}$ different sub-bands, the weighting coefficients associated with the $n$th sub-band can be obtained using the root-mean-square (RMS) of each signal according to Lee and Chang (2019) as below:

$$\mathrm{RMS}_n = \sqrt{\frac{1}{S}\sum_{i=1}^{S}\left[\sum_{j=1}^{L} x_{j,n}(i)\right]^2}, \qquad (5.48)$$

where $S$ denotes the number of samples in each axial direction, $x_{j,n}(i)$ is the $i$th sample of the $j$th scanline corresponding to the $n$th sub-band, and $L$ represents the total number of scanlines. Once the RMS is calculated according to (5.48), the weighting coefficient of the $n$th sub-band is obtained as follows:

$$w_n = \frac{\displaystyle\prod_{\substack{k=1 \\ k \neq n}}^{N_{\mathrm{sub}}} \mathrm{RMS}_k}{\displaystyle\sum_{n=1}^{N_{\mathrm{sub}}}\left[\prod_{\substack{k=1 \\ k \neq n}}^{N_{\mathrm{sub}}} \mathrm{RMS}_k\right]}. \qquad (5.49)$$


Other dual-frequency imaging structures have been developed for intravascular US imaging (Kim et al. 2008, 2017; Ma et al. 2015; Lee et al. 2018). For instance, in Lee et al. (2018), a structure consisting of three elements was presented, where two elements (with the central frequency $f_0$) perform the emission and the third one (with the central frequency $2f_0$) receives the second harmonic component. It is worth noting that in intravascular imaging, the transducer enters a vessel of small diameter. Therefore, the transducer must be small enough to pass easily through the curvature of the vessel. Harmonic imaging using multi-element transducers therefore imposes size limitations in intravascular imaging. Different solutions have been devised to solve this problem (e.g., Sung et al. 2020). Such studies are based on transducer fabrication, which is outside the scope of this book and will not be discussed further.

5.4 Conclusion

In this chapter, harmonic imaging was discussed. It was observed that the second harmonic component of the received signals can be used to achieve an image of improved resolution. The main challenge in this field is how to extract and separate the second harmonic component from the received signal, and the existing techniques were reviewed in this regard. One of the most common methods to extract the second harmonic component is the well-known PI technique, in which two successive excitation pulses with a $180°$ phase difference are used. Another limitation of harmonic imaging is that the SNR, as well as the penetration depth, is degraded. In this chapter, it was concluded that the SASB method can efficiently overcome these limitations and improve the performance of harmonic imaging in terms of SNR and imaging depth.

References

Al-Mistarihi MF, Ebbini ES (2005) Quadratic pulse inversion ultrasonic imaging (QPI): detection of low-level harmonic activity of microbubble contrast agents [biomedical applications]. In: Proceedings (ICASSP'05), IEEE international conference on acoustics, speech, and signal processing, 2005, vol 2. IEEE, pp ii–1009
Averkiou MA (2000) Tissue harmonic imaging. In: 2000 IEEE ultrasonics symposium. Proceedings. An international symposium (Cat. No. 00CH37121), vol 2. IEEE, pp 1563–1572
Bae M-H, Lee H-W, Park SB, Yoon R-Y, Jeong MH, Kim DG, Jeong M-K, Kim Y-G (2008) A new ultrasonic synthetic aperture tissue harmonic imaging system. In: 2008 IEEE ultrasonics symposium. IEEE, pp 1258–1261
Bae M-H, Jeong M-K (2000) A study of synthetic-aperture imaging with virtual source elements in B-mode ultrasound imaging systems. IEEE Trans Ultrason Ferroelectr Freq Control 47(6):1510–1519
Bouakaz A, De Jong N (2003) Native tissue imaging at superharmonic frequencies. IEEE Trans Ultrason Ferroelectr Freq Control 50(5):496–506
Du Y, Rasmussen J, Jensen H, Jensen JA (2011) Second harmonic imaging using synthetic aperture sequential beamforming. In: 2011 IEEE international ultrasonics symposium. IEEE, pp 2261–2264
Eckersley RJ, Chin CT, Burns PN (2005) Optimising phase and amplitude modulation schemes for imaging microbubble contrast agents at low acoustic power. Ultrasound Med Biol 31(2):213–219
Gong P, Song P, Chen S (2016) Delay-encoded harmonic imaging (DE-HI) in multiplane-wave compounding. IEEE Trans Med Imaging 36(4):952–959
Gong P, Zhou C, Song P, Huang C, Lok U-W, Tang S, Watt K, Callstrom M, Chen S (2020) Ultrasound attenuation estimation in harmonic imaging for robust fatty liver detection. Ultrasound Med Biol 46(11):3080–3087
Gong P, Song P, Huang C, Lok U-W, Tang S, Zhou C, Yang L, Watt KD, Callstrom M, Chen S (2021) Noise suppression for ultrasound attenuation coefficient estimation based on spectrum normalization. IEEE Trans Ultrason Ferroelectr Freq Control 68(8):2667–2674
Hemmsen MC, Hansen PM, Lange T, Hansen JM, Hansen KL, Nielsen MB, Jensen JA (2012) In vivo evaluation of synthetic aperture sequential beamforming. Ultrasound Med Biol 38(4):708–716
Hemmsen MC, Rasmussen JH, Jensen JA (2014) Tissue harmonic synthetic aperture ultrasound imaging. J Acoust Soc Am 136(4):2050–2056
Karlinsky KT, Ilovitsh T (2022) Ultrasound frequency mixing for enhanced contrast harmonic imaging of microbubbles. IEEE Trans Ultrason Ferroelectr Freq Control
Kim HH, Cannata JM, Liu R, Chang JH, Silverman RH, Shung KK (2008) 20 MHz/40 MHz dual element transducers for high frequency harmonic imaging. IEEE Trans Ultrason Ferroelectr Freq Control 55(12):2683–2691
Kim J, Lindsey BD, Li S, Dayton PA, Jiang X (2017) Dual-frequency transducer with a wideband PVDF receiver for contrast-enhanced, adjustable harmonic imaging. In: Health monitoring of structural and biological systems 2017, vol 10170. SPIE, pp 166–173
Kortbek J, Jensen JA, Gammelmark KL (2013) Sequential beamforming for synthetic aperture imaging. Ultrasonics 53(1):1–16
Lee J, Chang JH (2019) Dual-element intravascular ultrasound transducer for tissue harmonic imaging and frequency compounding: development and imaging performance assessment. IEEE Trans Biomed Eng 66(11):3146–3155
Lee J, Shin E-J, Lee C, Chang JH (2018) Development of dual-frequency oblong-shaped-focused transducers for intravascular ultrasound tissue harmonic imaging. IEEE Trans Ultrason Ferroelectr Freq Control 65(9):1571–1582
Ma J, Martin KH, Li Y, Dayton PA, Shung KK, Zhou Q, Jiang X (2015) Design factors of intravascular dual frequency transducers for super-harmonic contrast imaging and acoustic angiography. Phys Med Biol 60(9):3441
Matrone G, Ramalli A, Tortoli P, Magenes G (2018) Experimental evaluation of ultrasound higher-order harmonic imaging with filtered-delay multiply and sum (F-DMAS) non-linear beamforming. Ultrasonics 86:59–68
Ménigot S, Girault J-M (2016) Optimization of contrast resolution by genetic algorithm in ultrasound tissue harmonic imaging. Ultrasonics 71:231–244
Mor-Avi V, Caiani EG, Collins KA, Korcarz CE, Bednarz JE, Lang RM (2001) Combined assessment of myocardial perfusion and regional left ventricular function by analysis of contrast-enhanced power modulation images. Circulation 104(3):352–357
Näsholm SP, Austeng A, Jensen AC, Nilsen C-IC, Holm S (2011) Capon beamforming applied to second-harmonic ultrasound experimental data. In: 2011 IEEE international ultrasonics symposium. IEEE, pp 2217–2220
Nowicki A, Wójcik J, Secomski W (2007) Harmonic imaging using multitone nonlinear coding. Ultrasound Med Biol 33(7):1112–1122
Shen C-C, Lin C-H (2012) Chirp-encoded excitation for dual-frequency ultrasound tissue harmonic imaging. IEEE Trans Ultrason Ferroelectr Freq Control 59(11):2420–2430
Shen C-C, Shi T-Y (2011) Third harmonic transmit phasing for SNR improvement in tissue harmonic imaging with Golay-encoded excitation. Ultrasonics 51(5):554–560
Shen C-C, Peng J-K, Wu C (2014) Range side lobe inversion for chirp-encoded dual-band tissue harmonic imaging [correspondence]. IEEE Trans Ultrason Ferroelectr Freq Control 61(2):376–384
Shin J, Yen JT (2013) Effects of dual apodization with cross-correlation on tissue harmonic and pulse inversion harmonic imaging in the presence of phase aberration. IEEE Trans Ultrason Ferroelectr Freq Control 60(3):643–649
Song J, Kim S, Sohn H-Y, Song T-K, Yoo YM (2010) Coded excitation for ultrasound tissue harmonic imaging. Ultrasonics 50(6):613–619
Sung JH, Jeong EY, Jeong JS (2020) Intravascular ultrasound transducer by using polarization inversion technique for tissue harmonic imaging: modeling and experiments. IEEE Trans Biomed Eng 67(12):3380–3391
Szopinski KT, Pajk AM, Wysocki M, Amy D, Szopinska M, Jakubowski W (2003) Tissue harmonic imaging: utility in breast sonography. J Ultrasound Med 22(5):479–487
Tanabe M, Yamamura T, Okubo K, Tagawa N (2011) Tissue harmonic imaging with coded excitation. Ultrasound Imaging
Tiran E, Deffieux T, Correia M, Maresca D, Osmanski B-F, Sieu L-A, Bergel A, Cohen I, Pernot M, Tanter M (2015) Multiplane wave imaging increases signal-to-noise ratio in ultrafast ultrasound imaging. Phys Med Biol 60(21):8549
Tranquart F, Grenier N, Eder V, Pourcelot L (1999) Clinical use of ultrasound tissue harmonic imaging. Ultrasound Med Biol 25(6):889–894
Trucco A, Bertora F (2006) Harmonic beamforming: performance analysis and imaging results. IEEE Trans Instrum Meas 55(6):1965–1974
Trucco A, Bertora F (2003) Harmonic beamforming: a new approach to removing the linear contribution from harmonic imaging. In: IEEE symposium on ultrasonics, 2003, vol 1. IEEE, pp 457–460
Varnosfaderani MHH, Asl BM (2019) Minimum variance based fusion of fundamental and second harmonic ultrasound imaging: simulation and experimental study. Ultrasonics 96:203–213
Varnosfaderani MHH, Asl BM, Faridsoltani S (2018) An adaptive synthetic aperture method applied to ultrasound tissue harmonic imaging. IEEE Trans Ultrason Ferroelectr Freq Control 65(4):557–569
Zhao B-W, Tang F-G, Shou J-D, Xu H-S, Lu J-H, Fan M-Y, Fan X-M, Pan M (2003) Comparison study of harmonic imaging (HI) and fundamental imaging (FI) in fetal echocardiography. J Zhejiang Univ Sci A 4(3):374–377
Zhu J, Zhang Y, Zhang K, Lang X (2021) Improved second harmonic imaging of ultrasound contrast agents based on total least-squares adaptive filtering. In: 2021 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–4

Chapter 6

Ultrafast and Synthetic Aperture Ultrasound Imaging

Abstract In this chapter, ultrafast and synthetic aperture ultrasound imaging are discussed. In particular, plane wave imaging, an ultrafast ultrasound technique that has attracted considerable attention because it provides real-time imaging, is explained. Moreover, the synthetic transmit aperture imaging technique and its modified versions (e.g., diverging wave imaging) are explained, together with a variety of algorithms developed for image reconstruction in plane wave imaging as well as synthetic aperture-based imaging.

Keywords Ultrafast · Coherent compounding · Plane wave imaging · Synthetic aperture imaging · Frame rate · Signal-to-noise ratio · Beamforming

6.1 Ultrafast Imaging

In conventional US imaging, the total volume of the imaging medium is illuminated using sequential line-by-line scanning. Then, by processing the backscattered echoes of each scanline separately and juxtaposing the results, the final B-mode image is obtained. This technique is schematically shown in Fig. 1.16. Using conventional B-mode imaging, the frame rate is limited to about 30–40 frames per second. To use US imaging in a wider range of applications, such as cardiac imaging (e.g., tracking the movements of heart valves) or flow imaging, the frame rate needs to be increased. In other words, ultrafast imaging should be developed to provide a sufficiently high frame rate and, consequently, enable new applications of US imaging. In this regard, it was suggested to insonify a broader region of the imaging medium using a single element and to process several scanlines simultaneously. This approach, also known as parallel imaging, increases the frame rate compared to conventional B-mode imaging. Synthetic transmit aperture (STA) imaging is a common example of parallel imaging that was briefly discussed in Sect. 1.11.2. In STA imaging, all of the array elements are used to receive the backscattered echoes resulting from a single emission. By processing the received beams in parallel, the final image is obtained at a higher frame rate. The problem with this parallel imaging technique is its low SNR; the transmission is performed using a single element, which delivers less transmission energy than several elements together. To overcome this limitation, in the 1990s it was proposed to use a group of elements instead of a single element to illuminate the medium. In 1992, plane wave imaging (PWI) was developed by Fink et al. to further increase the frame rate to thousands of frames per second (Fink 1992). In PWI, a wide unfocused beam is used to insonify the whole volume of the imaging medium at once. Since all of the elements are used in emission, the low-SNR problem encountered in STA imaging is solved. Similar to STA imaging, parallel processing of the acquired data in PWI improves the frame rate and provides ultrafast imaging. PWI was originally developed for real-time imaging of the transient propagation of mechanical shear waves in human tissue. This is known as transient elastography, which is used to evaluate the viscoelastic characteristics of the tissue. PWI was also used for the first time to diagnose breast cancer in an in-vivo clinical study (Bercoff et al. 2003). The performance of PWI using a single emitted plane wave is good in transient elastography, and the frame rate is high enough to trace displacements at the nanometer scale. However, a degraded-quality image is obtained when reconstructing the B-mode image of a speckle-generating medium. The weak image quality is due to using only a single plane wave to insonify the medium; the unfocused transmitted wave introduces artifacts into the reconstructed image. More precisely, the image contrast is degraded using a single plane wave. In 2009, Montaldo et al. proposed compounded PWI (CPWI) to improve image quality, in which PWI is performed using several plane waves from different angles (Montaldo et al. 2009). The final high-resolution image is then achieved by coherently compounding the low-resolution images obtained from the different angles. One should note that incoherent summation is also used in US imaging, in which the compounding operation is performed on the intensity images in order to smooth the speckle noise. In contrast, in coherent summation, the compounding operation is performed on the beamformed received data, or equivalently, the complex data including the phase information as well as the acoustic intensity.

6.1.1 Compounded Plane Wave Imaging

To perform CPWI, a set of unfocused plane waves with different tilt angles is used to insonify the imaging medium separately. The tilt angles of the plane waves are obtained according to the following equation:

$$\theta_i = \arcsin\left(\frac{i\lambda}{N \times \mathrm{Pitch}}\right), \quad i = -\frac{N}{2}, \ldots, \frac{N}{2} - 1, \qquad (6.1)$$


Fig. 6.1 Schematic of a PWI, b CPWI, and c an example of a steered plane wave in CPWI by applying time delays to the array elements

where $\theta_i$ denotes the angle corresponding to the $i$th plane wave. According to the above equation, $N$ plane waves with different angles are defined in CPWI. However, not all of the defined plane waves are used to perform the imaging; a limited number of plane waves, whose angles usually lie in the range of $\pm 20°$, are selected to illuminate the medium. More precisely, plane waves with the maximum overlap between them in the imaging field are used to perform the imaging. It follows that plane waves with smaller angles are good candidates; in particular, the plane wave with $\theta = 0°$ is always included among the selected plane waves. Figure 6.1a shows the schematic of PWI, in which a single plane wave with $\theta = 0°$ is emitted from the array to insonify the medium. CPWI, in which a series of tilted plane waves is used to perform the imaging, is schematically shown in Fig. 6.1b. In this imaging technique, the different tilted plane waves are generated by applying time delays to the array elements; Fig. 6.1c is presented as an example. This figure also shows that appropriate time delays are required to steer the transmitted plane wave. Consider $\boldsymbol{x}_p = [x_{p_1}, \ldots, x_{p_N}] \in \mathbb{C}^{N \times 1}$ as the spatial positions of the elements (e.g., $x_{p_1}$ denotes the position of the first element). Also, assume that the array elements are located at $z = 0$, i.e., $\boldsymbol{z}_p = \boldsymbol{0} \in \mathbb{C}^{N \times 1}$. The time delays required to form the $i$th plane wave are calculated as below:

$$[\tau_1, \ldots, \tau_N] = \frac{[x_{p_1}, \ldots, x_{p_N}]\,\sin\theta_i}{c}. \qquad (6.2)$$

It can be concluded that a set of time delays is obtained for each plane wave. After applying the calculated time delays to the array elements, the steered plane wave is emitted toward the imaging medium to insonify its whole volume. Then, the backscattered echoes are recorded using all of the elements. The next step is to perform beamforming on the recorded signals to obtain the image. This is done by applying dynamic focusing to the received signals, as discussed in the next section.
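A minimal NumPy sketch of (6.1)–(6.2) is given below; the function name and the centered element layout are assumptions for this example, and an even element count is assumed. In practice, a constant offset is usually added so that all transmit delays are non-negative.

```python
import numpy as np

def plane_wave_tx_delays(n_elem, pitch, wavelength, c):
    """Tilt angles (Eq. 6.1) and per-element transmit delays (Eq. 6.2)."""
    i = np.arange(-n_elem // 2, n_elem // 2)               # i = -N/2, ..., N/2 - 1
    angles = np.arcsin(i * wavelength / (n_elem * pitch))  # Eq. (6.1)
    x_p = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch   # element positions on z = 0
    delays = x_p[None, :] * np.sin(angles[:, None]) / c    # Eq. (6.2), one row per angle
    return angles, delays
```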

6.1.2 Focusing

Consider the $i$th tilted plane wave emitted from the array elements, as shown in Fig. 6.2. The time it takes for the plane wave to reach a specific point (the $n$th imaging point) with coordinates $(x_n, z_n)$ is obtained as below:

$$\tau_{n_{TX}}(\theta_i) = \frac{z_n \cos\theta_i + x_n \sin\theta_i}{c}. \qquad (6.3)$$

Also, the time it takes for the backscattered echoes from this point to reach the $i$th element is obtained as follows:

$$\tau_{n_{RX}}\left(x_{p_i}\right) = \frac{\sqrt{z_n^2 + (x_n - x_{p_i})^2}}{c}. \qquad (6.4)$$

The total round-trip time for the considered point is finally obtained from (6.3) and (6.4) as below:

$$\tau_n\left(\theta_i, x_{p_i}\right) = \tau_{n_{TX}}(\theta_i) + \tau_{n_{RX}}\left(x_{p_i}\right), \qquad (6.5)$$

which is used to perform the beamforming process on the received data.
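A short sketch of the travel-time computation in (6.3)–(6.5) follows; the function name is an assumption for this example.

```python
import numpy as np

def round_trip_delay(xn, zn, theta_i, x_p, c):
    """Round-trip travel time of Eq. (6.5) for one imaging point (xn, zn).
    x_p: (N,) element positions; returns one delay per receiving element."""
    tau_tx = (zn * np.cos(theta_i) + xn * np.sin(theta_i)) / c  # Eq. (6.3)
    tau_rx = np.sqrt(zn**2 + (xn - x_p)**2) / c                 # Eq. (6.4)
    return tau_tx + tau_rx                                      # Eq. (6.5)
```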

6.1.3 Compounding

The received signal associated with the $i$th element is denoted as $\boldsymbol{s}_i \in \mathbb{C}^{M \times 1}$. In PWI, the echoes reflected from the insonified medium are recorded using all of the array elements, as stated earlier. Consequently, a 2D dataset is obtained as $S(:, j) \in \mathbb{C}^{M \times N}$

Fig. 6.2 Schematic of the round-trip travel time for a plane wave with angle $\theta_i$ for the $n$th imaging point


Fig. 6.3 The acquisition schematic of CPWI, which results in 3D data $\mathcal{S} \in \mathbb{C}^{M \times N \times N_P}$

for the $j$th plane wave emission. Consider that CPWI is performed using $N_P$ different plane waves. Therefore, the acquired data resulting from CPWI is a 3D tensor, denoted as $\mathcal{S} = [S(:, 1), \ldots, S(:, N_P)] \in \mathbb{C}^{M \times N \times N_P}$. Figure 6.3 schematically shows the 3D tensor obtained from CPWI for $\theta \in [-\theta_0, \theta_0]$ with step size $\Delta$. Prior to compounding, the dynamic focusing process should be performed on the received data of each transmission separately. Dynamic focusing in reception, which results in the final DAS beamformed data, is discussed in Sect. 2.3.2. According to the dynamic focusing process and the travel time obtained in (6.5), for each transmission, i.e., $\mathcal{S}\{j\} = S(:, j)$, we have

$$y_{\mathrm{DAS}}^{(\theta_j)}(n) = \sum_{i=1}^{N} s_i(n - \tau, j) = \sum_{i=1}^{N} x_i(n, j), \qquad (6.6)$$

where $x_i(n, j)$ denotes the time-delayed received signal associated with the $j$th transmitted plane wave and the $i$th element at time index $n$. Note that for time index $n$, we have $\boldsymbol{s}(n, j) = [s_1(n, j), \ldots, s_N(n, j)]^T \in \mathbb{C}^{N \times 1}$. To clarify the difference between the symbols used to express the received data obtained from CPWI, see Fig. 6.4.


Fig. 6.4 Tensor $\mathcal{S}$, matrix $S(:, j)$, and vector $\boldsymbol{s}(n, j)$ representations of the data acquired from CPWI ($j \le N_P$)

Fig. 6.5 Processing steps of CPWI. For each emission, the beamformed data are mapped on an $N_x \times N_y$ grid. By coherently compounding the low-resolution images, a high-resolution image with dimensions $N_x \times N_y$ is obtained

By applying the dynamic focusing process to the received data of each transmitted plane wave, $N_P$ low-resolution images are obtained. To achieve a high-resolution image, which is the goal of CPWI, the low-resolution images are coherently compounded. More precisely, we have

$$y_{\text{2D-DAS}}(n) = \frac{1}{N_P}\sum_{j=1}^{N_P} y_{\mathrm{DAS}}^{(\theta_j)}(n) = \frac{1}{N_P}\sum_{j=1}^{N_P}\sum_{i=1}^{N} x_i(n, j). \qquad (6.7)$$

Once the compounded data is obtained according to the above equation, the post-processing steps previously discussed in Sect. 1.9 are applied to the result, and the final image is achieved. The general processing steps of CPWI are shown in Fig. 6.5. Each imaging point at which the focusing process is performed is equivalent to an image pixel. More precisely, the imaging region is considered as a grid of $N_x \times N_y$ pixels, each of which is dynamically focused in order to obtain the beamformed data. Figure 6.6 shows the reconstructed image of a single point target using CPWI as an example. As can be seen from the figure, the images obtained from a single plane wave suffer severely from low contrast. Also, the unfocused emitted plane wave leads to a poor-resolution image. In contrast, by coherently compounding the low-quality images, a high-quality image is achieved in terms of both contrast and resolution; the speckle generated by the background scatterers is independent for each transmission, as previously discussed in Sect. 1.5. Therefore, by averaging the results obtained from different emissions (or equivalently, different angles), the speckle noise is reduced in the final image and the contrast is improved. Moreover, the focusing is performed synthetically by coherently compounding the data associated with different angles, so the resolution is also improved.

Fig. 6.6 The reconstructed image of a single point target obtained from CPWI. LRI: low-resolution image, HRI: high-resolution image
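The complete CPWI pipeline of (6.3)–(6.7) can be sketched as below; the function name, the nearest-neighbor time interpolation, and the explicit (slow) pixel loop are assumptions made for readability.

```python
import numpy as np

def cpwi_das(rf, angles, x_p, grid_x, grid_z, c, fs):
    """DAS-beamformed, coherently compounded CPWI image (Eqs. 6.3-6.7).
    rf: (N_P, N, M) channel data; returns an (N_z, N_x) image."""
    img = np.zeros((grid_z.size, grid_x.size))
    elem = np.arange(x_p.size)
    for rf_j, th in zip(rf, angles):                              # loop over emissions
        for ix, xn in enumerate(grid_x):
            for iz, zn in enumerate(grid_z):
                tau_tx = (zn * np.cos(th) + xn * np.sin(th)) / c  # Eq. (6.3)
                tau_rx = np.sqrt(zn**2 + (xn - x_p)**2) / c       # Eq. (6.4)
                idx = np.round((tau_tx + tau_rx) * fs).astype(int)
                idx = np.clip(idx, 0, rf_j.shape[1] - 1)
                img[iz, ix] += rf_j[elem, idx].sum()              # Eq. (6.6)
    return img / len(angles)                                      # Eq. (6.7)
```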

6.1.4 Introducing the PICMUS Datasets in CPWI

In CPWI, the selected emission angles and the number of plane waves are two parameters that affect image quality. The plane wave imaging challenge in medical ultrasound (PICMUS) has been developed to provide a consistent dataset of simulated and experimental phantoms in CPWI (Liebgott et al. 2016). This dataset is public and can be found at https://www.creatis.insa-lyon.fr/Challenge/IEEE_IUS_2016/. The PICMUS dataset is used in this chapter for a fair evaluation and comparison of the performance of different algorithms in CPWI. PICMUS contains two simulated and two experimental datasets. The simulated phantoms


Table 6.1 Simulation and experimental parameters for PICMUS datasets

Parameter                        Value
Sampling frequency               20.832 MHz
Center frequency                 5.2 MHz
Excitation                       2.5 cycles
Bandwidth                        67%
Element height                   5 mm
Element width                    0.27 mm
Pitch                            0.3 mm
Number of elements               128
Sound speed (for simulation)     1540 m/s

are designed using the Field II MATLAB toolbox (Jensen 1996). The imaging setup corresponding to the simulation is presented in Table 6.1. For the experimental datasets, the acquisition is performed using a Verasonics Vantage 256 scanner and an L11 probe. The simulation parameters presented in Table 6.1 are matched to the experimental imaging setup. The simulation and experimental phantoms used in PICMUS are briefly described below:

• The first simulated phantom corresponds to point targets distributed over an anechoic background and is known as the simulation resolution phantom.
• The second simulated phantom corresponds to 9 equally sized anechoic cysts distributed over a speckled background. This dataset is known as the simulation contrast phantom.
• The first experimental phantom corresponds to wire targets as well as an anechoic cyst distributed over a speckled background and is known as the experimental resolution phantom.
• The second experimental phantom consists of anechoic cysts of different sizes and a wire target. This dataset is known as the experimental contrast phantom.

75 plane waves with emission angles ranging from $-16°$ to $+16°$ with a step size of $0.432°$ are used to insonify the medium and perform the CPWI. The reconstructed images corresponding to each of the phantoms described above, using all 75 plane waves, are presented in Fig. 6.7a1–d1. The images are reconstructed according to the procedure described in Sects. 6.1.2 and 6.1.3. Also, apodization using the Hanning function is applied to the beamformed data in order to improve the image contrast.

Fig. 6.7 The reconstructed images corresponding to the datasets provided by PICMUS. The first row (a1–d1) corresponds to CPWI, where 75 plane waves are used to achieve a high-resolution image. The second row (a2–d2) shows the result of PWI, where only a single plane wave ($\theta = 0°$) is used to obtain the image. The DAS algorithm is used to construct the images. All images are shown with a dynamic range of 60 dB

CPWI has been developed to improve the image quality compared to PWI, where only a single plane wave (with $\theta = 0°$) is used to perform the imaging, as stated earlier. To compare the performance of PWI and CPWI on the PICMUS datasets, the reconstructed images of the PICMUS datasets using a single plane wave are also presented in Fig. 6.7a2–d2. The image quality degradation of PWI, in terms of both resolution and contrast, is clearly visible in comparison with CPWI. One should note that the image quality improvement in CPWI comes at the expense of a lower frame rate; the frame rate depends on the number of required emissions, as stated in (1.14). In CPWI, the number of emissions is $N_P$ times that used in PWI, so the frame rate is decreased by a factor of $N_P$ compared to PWI. However, the frame rate is still significantly improved compared to conventional B-mode imaging. To see the performance of CPWI for different values of $N_P$, consider Fig. 6.8a1–d1, in which the images of the simulation resolution phantom are constructed using 3, 7, 11, and 15 plane waves. As expected, performing CPWI with more emitted plane waves improves the reconstructed image in terms of resolution and contrast.

One should note that in Fig. 6.8a1–d1, the tilted plane waves are selected uniformly over the range $-16°$ to $+16°$ to perform the compounded imaging. More precisely, plane waves with equal angular distances are selected to construct the images; for instance, in Fig. 6.8a1, three tilted plane waves with the angles $[-16°, 0°, +16°]$, i.e., with $16°$ angular spacing between the plane waves, are used to obtain the final image. To see how the emission angles affect the quality of the compounded image, they are also selected differently: consecutive plane waves are used to obtain the final image. In particular, in the case in which three plane waves are used for compounding, the angles $[-0.432°, 0°, 0.432°]$ are selected, and the results are presented in Fig. 6.8a2–d2. Comparing the images of the first and second rows in Fig. 6.8, it can be seen that the quality of the resulting image depends on the selected emission angles.


Fig. 6.8 The reconstructed images of the simulation resolution phantom provided by PICMUS using (a1, a2) 3 plane waves, (b1, b2) 7 plane waves, (c1, c2) 11 plane waves, and (d1, d2) 15 plane waves. In the first row, tilted plane waves with equal angular distances are selected to construct the images. In the second row, tilted plane waves are selected consecutively to construct the images. The DAS algorithm is used to construct the images. All images are shown with a dynamic range of 60 dB

Fig. 6.9 The lateral variations corresponding to Fig. 6.8 for 3 and 15 tilted plane waves at the depth of 30 mm

Qualitatively, it can be concluded that using smaller angles (near $0°$), the image contrast is better than in the case where a wider range of angles is used to perform CPWI. To better visualize the results, the lateral variation plots for 3 and 15 plane waves, i.e., Fig. 6.8a1, a2 and d1, d2, are shown in Fig. 6.9. The results show that using consecutive plane waves (smaller angles) increases the main lobe width compared to selecting plane waves with equal angular spacing.


6.1.5 Image Quality Improvement in Compounded Plane Wave Imaging

In CPWI, dynamic focusing is first applied to the received data associated with each transmitted plane wave using the conventional DAS beamformer. Next, the beamformed datasets, or equivalently, the resulting low-resolution images, are uniformly weighted and coherently compounded to obtain the final high-resolution image, as shown schematically in Fig. 6.5. This process results in an image of improved quality compared to PWI, in which only a single plane wave is used to insonify the medium. However, further quality improvement remains a challenge in CPWI. In this regard, it has been shown that different algorithms can replace the non-adaptive DAS beamformer presented in (6.6) to improve the quality of the low-resolution image for each transmission. In other words, using different beamforming algorithms in the receiver direction improves the image quality. Moreover, different weighted compounding approaches, or equivalently, different beamformers in the plane wave direction, have been suggested instead of the simple uniform weighting stated in (6.7) in order to further improve the image resolution and contrast. Accordingly, the processing of the received data in CPWI consists of two main parts: applying a beamformer in the receiver direction and applying another in the plane wave direction. A diagram of the described processing steps is illustrated in Fig. 6.10. The original CPWI obtained according to (6.7) is also known as 2D-DAS, since the non-adaptive DAS beamformer is used in both the receiver and plane wave directions. The different beamformers used in either the receiver or the plane wave direction to improve image quality are discussed in the following sections.

Fig. 6.10 General processing steps in CPWI. A beamformer in the receiver (Rx) direction is used to obtain the low-resolution images (LRIs). The final high-resolution image (HRI) is achieved by applying another beamformer in the plane wave (PW) direction. $N_P$ different tilted plane waves are used to perform the imaging

6.1.5.1 Delay-Multiply-and-Sum Algorithm in Compounded Plane Wave Imaging

The non-linear DMAS algorithm has already been discussed in detail in Sect. 3.5.1.9. It has been shown that this algorithm improves the contrast of the reconstructed image compared to the non-adaptive DAS beamformer due to the cross-correlation between the received signals. This algorithm has been used in CPWI to achieve an improved quality image compared to the original 2D-DAS beamformer. As schematically demonstrated in Fig. 6.10, the final high-resolution image is constructed by applying a beamformer (or two different beamformers) in both the receiver and plane wave directions. Accordingly, the non-linear DMAS algorithm has been used in different directions, each of which is discussed separately in the following.

Delay-Multiply-and-Sum Algorithm in Receiver Direction

In Matrone et al. (2016a, b), it was proposed to improve the image quality by involving the non-linear DMAS algorithm in the image reconstruction process. In this algorithm, the uniform weighting used in the receiver direction of the original 2D-DAS method is replaced with the DMAS beamformer. The resulting algorithm is named Rx-DMAS since the DMAS algorithm is applied in the receiver direction. In this algorithm, the time-delayed data associated with each emission are first coherently summed over the plane waves. In particular, for time instance $n$, we have $\boldsymbol{y}(n) = [y_1(n), \ldots, y_N(n)]^T \in \mathbb{C}^{N \times 1}$, where

$$y_i(n) = \sum_{j=1}^{N_P} x_i(n, j). \qquad (6.8)$$

By coherently compounding the data over the plane waves, the output is synthetically focused. Then, the DMAS algorithm is applied to the focused data in the receiver direction as below:

$$y_{\text{Rx-DMAS}}(n) = \sum_{i=1}^{N-1}\sum_{j=i+1}^{N} \operatorname{sign}\left[y_i(n)\, y_j(n)\right] \sqrt{\left|y_i(n)\, y_j(n)\right|}, \qquad (6.9)$$

and the final high-resolution image is obtained accordingly.

Fig. 6.11 Reconstructed images obtained from the Rx-DMAS algorithm corresponding to a simulation resolution phantom, b simulation contrast phantom, c experimental resolution phantom, and d experimental contrast phantom using 75 plane wave emissions. The images are shown with a dynamic range of 60 dB. The box and circles shown in (a) and (b) are used for quantitative evaluation

Figure 6.11 shows the reconstructed images of the PICMUS data obtained from the Rx-DMAS algorithm. Comparing the results presented in Figs. 6.7a1–d1 and 6.11, it can be qualitatively seen that the DMAS-based algorithm results in an improved image compared to the original 2D-DAS beamformer in terms of both resolution and contrast. To evaluate the resolution improvement quantitatively, the FWHM of the reconstructed point target at a depth of 30 mm is calculated for the original 2D-DAS and Rx-DMAS algorithms, yielding 0.61 mm and 0.5 mm, respectively. This shows the superiority of the DMAS-based algorithm over the non-adaptive DAS beamformer in terms of resolution. Furthermore, using the Rx-DMAS algorithm, the edges of the cysts are reconstructed more sharply than with the 2D-DAS algorithm. This can be seen, for example, in the reconstructed image of the simulation contrast phantom shown in Fig. 6.11b. Also, the gCNR evaluation metric is obtained as 0.98 and 0.99 for the reconstructed images of the simulation contrast phantom obtained from the 2D-DAS and Rx-DMAS algorithms, respectively, confirming the contrast improvement of the DMAS-based method. One should note that the inside of the anechoic cysts looks darker using the DMAS-based algorithm, which improves the image contrast. However, the intensity of the background speckle is suppressed compared to that of the 2D-DAS algorithm. In other words, using the Rx-DMAS algorithm, some dark regions appear in the background speckle, which negatively affects the contrast. This can be seen more clearly in Fig. 6.11c and d. Generally, the contrast of the DMAS-based algorithm improves over the 2D-DAS method if sufficient emissions are used for compounding.

Delay-Multiply-and-Sum Algorithm in Plane Wave Direction

In Go et al. (2018), it was proposed to use the DMAS algorithm in the plane wave direction and perform the compounding process with consideration of the cross-correlation between the low-resolution images. As the DMAS beamformer is applied in the plane wave direction, this algorithm is known as PW-DMAS. The low-resolution images are first generated using the standard DAS beamformer according to (6.6). In the second step, the DMAS algorithm is applied to the obtained low-resolution images, and (6.7) is modified as below:

$$y_{\text{PW-DMAS}}(n) = \sum_{i=1}^{N_P-1}\sum_{j=i+1}^{N_P} \operatorname{sign}\left[y_{\mathrm{DAS}}^{(\theta_i)}(n)\, y_{\mathrm{DAS}}^{(\theta_j)}(n)\right] \sqrt{\left|y_{\mathrm{DAS}}^{(\theta_i)}(n)\, y_{\mathrm{DAS}}^{(\theta_j)}(n)\right|}. \qquad (6.10)$$

Using the PW-DMAS algorithm, the image quality is improved compared to the 2D-DAS beamformer. Note that the efficient implementation of the DMAS algorithm previously discussed in Sect. 3.5.1.9 can also be used to reduce the computational complexity. In particular, to implement the so-called PW-DMAS algorithm, the signed square root of the DAS beamformed data corresponding to each emission is first obtained as below:

$$\hat{y}_{\mathrm{DAS}}^{(\theta_j)}(n) = \operatorname{sign}\left[y_{\mathrm{DAS}}^{(\theta_j)}(n)\right] \sqrt{\left|y_{\mathrm{DAS}}^{(\theta_j)}(n)\right|}. \qquad (6.11)$$

Then, the PW-DMAS algorithm presented in (6.10) is reformulated as below:

$$y_{\text{PW-DMAS}}(n) = \frac{1}{2}\left[\left(\sum_{j=1}^{N_P} \hat{y}_{\mathrm{DAS}}^{(\theta_j)}(n)\right)^2 - \sum_{j=1}^{N_P} \left(\hat{y}_{\mathrm{DAS}}^{(\theta_j)}(n)\right)^2\right]. \qquad (6.12)$$
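A compact sketch of the efficient formulation in (6.11)–(6.12) is shown below; the function name and the per-pixel vector layout are assumptions for this example.

```python
import numpy as np

def pw_dmas(y_das):
    """Efficient PW-DMAS (Eqs. 6.11-6.12).
    y_das: (N_P,) DAS outputs y^(theta_j)(n) of one pixel over all emissions."""
    y_hat = np.sign(y_das) * np.sqrt(np.abs(y_das))    # Eq. (6.11)
    return 0.5 * (y_hat.sum()**2 - (y_hat**2).sum())   # Eq. (6.12)
```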

Delay-Multiply-and-Sum Algorithm in Receiver and Plane Wave Directions

Both the Rx-DMAS and PW-DMAS algorithms improve the image quality compared to the basic DAS algorithm. One should note that the final reconstructed image can be further improved by combining these two DMAS-based algorithms; the result is known as 2D-DMAS. In the 2D-DMAS algorithm, the low-resolution images are constructed using the DMAS algorithm, and the DMAS algorithm is then reused to compound the resulting images obtained from the first step. The 2D-DMAS algorithm based on the baseband signal, or equivalently, the 2D baseband DMAS (BB-DMAS) technique, has been developed in Shen and Hsieh (2019) for CPWI (a basic description of this algorithm can be found in Sect. 3.5.1.11). To use this technique, a 2D data matrix is obtained for each time instance; one dimension of the data matrix corresponds to the array elements, and the other dimension is associated with the emission angles. In such a case, the baseband data take the form $z_{ik} = a_{ik} e^{j\phi_{ik}}$, which corresponds to the $i$th channel ($i = 1, \ldots, N$) and the $k$th emission ($k = 1, \ldots, N_P$). The 2D-DMAS algorithm using the described baseband signals is obtained according to the following equation:

$$y_{\text{2D-DMAS}}(n) = \left(\frac{1}{N_P N}\sum_{k=1}^{N_P}\sum_{i=1}^{N} \sqrt[p]{a_{ik}}\; e^{j\phi_{ik}}\right)^p. \qquad (6.13)$$

By using this technique, signal coherence is obtained by directly applying the DMAS algorithm to the data matrix. In other words, each entry of the data matrix is amplitude-scaled by taking the $p$th root of the amplitude without changing the phase component; the dimensionality of the output is then restored by raising it to the $p$th power, as mentioned earlier. One should note that the baseband signals can also be used in the other DMAS-based algorithms discussed earlier. In particular, the Rx-DMAS algorithm using the baseband signals is formulated as below:

$$y_{\text{Rx-DMAS}}(n) = \left(\frac{1}{N}\sum_{i=1}^{N} \sqrt[p]{c_i}\; e^{j\alpha_i}\right)^p, \quad c_i e^{j\alpha_i} = \frac{1}{N_P}\sum_{k=1}^{N_P} z_{ik}. \qquad (6.14)$$

Similarly, applying the PW-DMAS algorithm to the data matrix using the baseband signals, we have

$$y_{\text{PW-DMAS}}(n) = \left(\frac{1}{N_P}\sum_{k=1}^{N_P} \sqrt[p]{b_k}\; e^{j\beta_k}\right)^p, \quad b_k e^{j\beta_k} = \frac{1}{N}\sum_{i=1}^{N} z_{ik}. \qquad (6.15)$$
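A minimal sketch of the baseband formulation in (6.13) follows; the function name and the `(N, N_P)` layout of the baseband matrix are assumptions for this example.

```python
import numpy as np

def bb_dmas_2d(z, p=2.0):
    """2D baseband DMAS (Eq. 6.13). z: (N, N_P) complex baseband samples z_ik
    for one pixel; p is the user-chosen scaling order."""
    rooted = np.abs(z)**(1.0 / p) * np.exp(1j * np.angle(z))  # pth-root amplitude, keep phase
    return np.mean(rooted)**p                                 # restore dimensionality
```

The Rx-DMAS and PW-DMAS variants of (6.14)–(6.15) follow by first averaging `z` over the emission or element dimension, respectively, before the same root-and-power operation.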

Each of the DMAS-based algorithms results in an image of enhanced quality compared to the original 2D-DAS algorithm, in which the non-adaptive DAS beamformer is used in both the receiver and plane wave directions.

Regional-Lag Signed Delay Multiply and Sum Algorithm

Inspired by the GCF and SL-DMAS algorithms previously discussed in Sects. 3.5.1.2 and 3.5.1.12, respectively, a DMAS-based algorithm known as the regional-lag signed DMAS (rsDMAS) technique was developed in Yan et al. (2021) for ultrafast US imaging. In the rsDMAS algorithm, the lag value is selected adaptively after the received signals are compounded in the plane wave direction. The array is divided into subarrays, and the SL-DMAS algorithm is performed on each subarray; the purpose of the subarray division is to reduce the computational complexity of the algorithm. Also, note that the adaptive selection of the lag value is formulated based on the behavior of the output of the sign-DMAS (sDMAS) algorithm, in which the sign of the desired signal is preserved by using the sign of the DAS beamformed data as below:

$$y_{\text{sDMAS}}(n) = \operatorname{sign}\left[y_{\mathrm{DAS}}(n)\right] \left|y_{\mathrm{DMAS}}(n)\right|. \qquad (6.16)$$

The sDMAS algorithm was initially proposed in Kirchner et al. (2018) to use the DMAS algorithm efficiently in photoacoustic image formation; there, the goal was to prevent the suppression of desired signal components caused by band-pass filtering. In the following, the processing steps of the rsDMAS algorithm are explained in detail. In the rsDMAS algorithm, the received signals are first compounded in the plane wave direction to achieve synthetic focusing. The result is denoted as $\boldsymbol{y}(n) = [y_1(n), \ldots, y_N(n)]^T \in \mathbb{C}^{N \times 1}$, the entries of which are obtained according to (6.8). Then, the GCF is calculated from the obtained $\boldsymbol{y}(n)$ according to (3.21). It has been shown that the adaptive effect of the sDMAS technique varies with the lag parameter (Yan et al. 2021); as the lag value increases, the resolution of the sDMAS algorithm improves, while for smaller lag values the image speckle is better preserved (at the expense of resolution degradation). A signal associated with a large GCF value lies in the main lobe; in this case, it is desirable to suppress the interferences to better preserve the point target. A signal corresponding to a small GCF value is contaminated with noise and interference, and should therefore be suppressed in order to improve the image contrast. Finally, a signal corresponding to a moderate GCF value corresponds to speckle and should be preserved. It is well known that in imaging regions where speckle exists, high resolution is not required, but high contrast is necessary. For this reason, in the rsDMAS algorithm, the GCF value together with two thresholds $\alpha$ and $\beta$ is used to separate different areas of the imaging region. The maximum lag value for the different imaging areas is calculated from the following equation:

$$\mathrm{lag}_{\max} = \begin{cases} \operatorname{round}\left[GCF(n) \times (N - 1)\right] & \text{if } \alpha < GCF < \beta \\ N - 1 & \text{else.} \end{cases} \qquad (6.17)$$

According to the above equation, the maximum lag value of $N - 1$ is selected for the signals corresponding to strong reflectors in order to achieve high resolution. The next step is the subarray division, which reduces the computational complexity of the rsDMAS algorithm. In this step, the array is divided into $N - L + 1$ subarrays, where $L$ equals the lag value obtained from (6.17). Once the subarray division is performed, the DMAS algorithm is applied to each subarray; the output of the $l$th subarray is denoted as $y_{l,\mathrm{DMAS}}(n)$. By averaging and normalizing the obtained outputs, we have

$$y_{\text{rDMAS}}(n) = \frac{N}{N - L + 1} \cdot \frac{\sum_{l=1}^{N-L+1} y_{l,\mathrm{DMAS}}(n)}{\sum_{l=1}^{L} (L - l)}. \qquad (6.18)$$

Note that the normalization is performed to make the contributions of different maximum lags comparable; for different imaging regions, the parameter $L$ (or equivalently, the maximum lag value) is different. By multiplying the averaged and normalized output by $N$, the output amplitude is brought to the same scale as DAS. Finally, the sDMAS algorithm is applied to the output of (6.18) to improve the performance of the algorithm as below:

$$y_{\text{rsDMAS}}(n) = \operatorname{sign}\left[y_{\mathrm{DAS}}(n)\right] \left|y_{\mathrm{rDMAS}}(n)\right|. \qquad (6.19)$$
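The region-dependent lag selection of (6.17) can be sketched as below; the function name and the threshold values are illustrative assumptions, not the settings tuned in Yan et al. (2021).

```python
def regional_max_lag(gcf, n_elem, alpha=0.2, beta=0.8):
    """Maximum lag of Eq. (6.17) for one pixel, from its GCF value."""
    if alpha < gcf < beta:                 # speckle-like region
        return int(round(gcf * (n_elem - 1)))
    return n_elem - 1                      # strong reflector or noise region
```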

6.1.5.2 Minimum Variance Algorithm in Compounded Plane Wave Imaging

The MV algorithm is discussed in detail in Sect. 3.4, where it is shown that this data-dependent algorithm improves the resolution of the reconstructed image compared to the non-adaptive DAS beamformer. Using the MV algorithm in CPWI, an improved reconstructed image in terms of resolution is therefore expected compared to the original 2D-DAS method. As demonstrated in Fig. 6.10, a beamformer is applied in the receiver and plane wave directions, and the MV algorithm can be used in either of them to obtain an image of improved quality. Each case is discussed in the following.


Minimum Variance Algorithm in Plane Wave Direction

In Austeng et al. (2011), the MV algorithm is used to adaptively compound the low-resolution images, and it has been shown that the resolution of the final reconstructed image is improved compared to the case in which a simple uniform weighting is used for compounding. More precisely, after the DAS beamformer is applied in the receiver direction according to (6.6) and the low-resolution images are obtained, the final high-quality image is reconstructed by using the MV algorithm in the plane wave direction as below:

$$y_{\text{PW-MV}}(n) = \sum_{j=1}^{N_P} w_{PW_j}(n)\, y_{\mathrm{DAS}}^{(\theta_j)}(n) = \boldsymbol{w}_{PW}^H(n)\, \boldsymbol{y}_{\mathrm{DAS}}^{(\theta)}(n), \qquad (6.20)$$

where $\boldsymbol{w}_{PW}(n) = [w_{PW_1}(n), \ldots, w_{PW_{N_P}}(n)]^T \in \mathbb{C}^{N_P \times 1}$ denotes the weight vector associated with the $n$th time instance, obtained from the MV algorithm. Also, $\boldsymbol{y}_{\mathrm{DAS}}^{(\theta)}(n) = \left[y_{\mathrm{DAS}}^{(\theta_1)}(n), \ldots, y_{\mathrm{DAS}}^{(\theta_{N_P})}(n)\right]^T \in \mathbb{C}^{N_P \times 1}$ is the time-delayed signal corresponding to each transmission at time instance $n$. Since the MV algorithm is applied in the plane wave direction, the resulting modified algorithm is named PW-MV.

Fig. 6.12 Reconstructed images obtained from the PW-MV algorithm corresponding to a simulation resolution phantom, b simulation contrast phantom, c experimental resolution phantom, and d experimental contrast phantom using 75 plane wave emissions. The images are shown with a dynamic range of 60 dB. The box and circles shown in (a) and (b) are used for quantitative evaluation. $L = N/3$ and $\xi = 1/(100L)$ are considered to construct the images. Also, for (b)–(d), temporal averaging with $K = 11$ is used

Figure 6.12 shows the results of the PICMUS datasets obtained from the PW-MV algorithm. In particular, the FWHM value of the point target at a depth of 30 mm in Fig. 6.12a is 0.13 mm, indicating that the resolution is improved by about 0.37 mm compared to the Rx-DMAS algorithm, the result of which is shown in Fig. 6.11a. Also, the CNR evaluation metric for the simulation contrast phantom shown in Fig. 6.12b is 5.6 dB. Compared with the CNR value of about 5.48 dB obtained from the 2D-DAS algorithm, it can be concluded that the contrast of the PW-MV algorithm is comparable with that of the original 2D-DAS algorithm. More precisely, the MV-based algorithm successfully preserves the background speckle of the reconstructed image.

Minimum Variance Algorithm in Receiver and Plane Wave Directions

In Rindal and Austeng (2016), it was proposed to use the MV algorithm to obtain the low-resolution images associated with each transmission; in other words, the MV algorithm is applied in the receiver direction. By doing so, the quality of each low-resolution image is improved compared to the original case in which the standard DAS beamformer is used. In this algorithm, for the received data associated with each emission, we have

$$y_{\mathrm{MV}}^{(\theta_j)}(n) = \boldsymbol{w}_{Rx}^H(n)\, \boldsymbol{x}(n, j), \qquad (6.21)$$

where $\boldsymbol{w}_{Rx}(n) = [w_{Rx_1}(n), \ldots, w_{Rx_N}(n)]^T \in \mathbb{C}^{N \times 1}$ is the weight vector obtained from the MV algorithm for each transmission, and $\boldsymbol{x}(n, j) = [x_1(n, j), \ldots, x_N(n, j)]^T$. After the low-resolution images are obtained according to the above equation, the MV algorithm is used again to perform the coherent compounding. More precisely, the final high-resolution image is obtained according to the following equation:

$$y_{\text{2D-MV}}(n) = \sum_{j=1}^{N_P} w_{PW_j}(n)\, y_{\mathrm{MV}}^{(\theta_j)}(n) = \boldsymbol{w}_{PW}^H(n)\, \boldsymbol{y}_{\mathrm{MV}}^{(\theta)}(n), \qquad (6.22)$$

where $\boldsymbol{y}_{\mathrm{MV}}^{(\theta)}(n) = \left[y_{\mathrm{MV}}^{(\theta_1)}(n), \ldots, y_{\mathrm{MV}}^{(\theta_{N_P})}(n)\right]^T \in \mathbb{C}^{N_P \times 1}$. Also, $\boldsymbol{w}_{PW}(n) \in \mathbb{C}^{N_P \times 1}$ is the weight vector obtained from the MV algorithm, the $j$th entry of which corresponds to the $j$th emission. In other words, adaptive weighting is performed to coherently compound the images. To obtain both weight vectors $\boldsymbol{w}_{Rx}(n)$ and $\boldsymbol{w}_{PW}(n)$, spatial smoothing is necessary to synthetically decorrelate the received signals. This MV-based algorithm is named the 2D-MV beamformer since the MV algorithm is performed in both the receiver and plane wave directions.
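A hedged sketch of the MV weight computation used in either direction is given below; the diagonal loading scheme and the all-ones steering vector (valid for time-aligned data) are common choices, and the function name is an assumption for this example.

```python
import numpy as np

def mv_weights(snapshots, delta=1e-2):
    """MV weights from spatially smoothed snapshots (cf. Sect. 3.4).
    snapshots: (K, L) subarray snapshots; returns (L,) weights."""
    K, L = snapshots.shape
    R = np.einsum('ki,kj->ij', snapshots, snapshots.conj()) / K  # sample covariance
    R = R + delta * np.trace(R).real / L * np.eye(L)             # diagonal loading
    a = np.ones(L)                                               # steering vector
    Ria = np.linalg.solve(R, a)
    return Ria / (a.conj() @ Ria)                                # w = R^{-1}a / (a^H R^{-1} a)
```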

6.1.5.3 Joint Transmitting-Receiving Adaptive Beamformer

In the coherent compounding of the data obtained from different emissions, the correlation information between the emissions is not considered. The joint transmitting-receiving (JTR) adaptive algorithm has been developed to tackle this limitation and obtain the reconstructed image with consideration of the correlation between different plane waves (Zhao et al. 2015). In this algorithm, the processing is performed jointly on the 2D data matrix obtained from different emissions, rather than on 1D data vectors associated with each emission separately. In the JTR algorithm, weight vectors are calculated in both the receive and plane wave directions. The outputs are then combined to achieve a 2D adaptive weighting function for the compounding process. In other words, the array-domain weight vector and the frame-domain weight vector are combined to apodize the 2D data matrix and obtain the beamformed output. The JTR adaptive algorithm outperforms algorithms in which the correlation between different emissions is not taken into account.

Multi-Wave Receiving Aperture Beamforming

Consider the time-delayed received data at time instance $n$ as below:

$$X(n) = \begin{bmatrix} x_{1,1}(n) & x_{1,2}(n) & \cdots & x_{1,N}(n) \\ x_{2,1}(n) & x_{2,2}(n) & \cdots & x_{2,N}(n) \\ \vdots & \vdots & \ddots & \vdots \\ x_{N_P,1}(n) & x_{N_P,2}(n) & \cdots & x_{N_P,N}(n) \end{bmatrix} = \begin{bmatrix} \boldsymbol{x}_{Rx,1}^T(n) \\ \boldsymbol{x}_{Rx,2}^T(n) \\ \vdots \\ \boldsymbol{x}_{Rx,N_P}^T(n) \end{bmatrix}, \qquad (6.23)$$

where $\boldsymbol{x}_{Rx,i}(n) = [x_{i,1}, \ldots, x_{i,N}]^T \in \mathbb{C}^{N \times 1}$ denotes the time-delayed data obtained from the array elements for the $i$th emission. Every row in (6.23), which represents the array data from a single emission, is known as the receiving aperture. Also, every column in (6.23), which represents the received data of a single element from different emissions, is known as the transmitting aperture. The MV algorithm can be used in the receiver direction to obtain an image of improved resolution, as discussed in Sect. 6.1.5.2. Although this technique achieves better resolution than the original 2D-DAS method, it severely suffers from high computational complexity; at each time instance $n$, the covariance matrix must be estimated and inverted to obtain the weight vector for each emission. That is, the inversion of the covariance matrix, which carries a high computational load, is performed $N_P$ times. To reduce the computational complexity of this process, one can use the multi-wave approach; the resulting algorithm is known as multi-wave receiving aperture beamforming (MRB). In CPWI, the plane waves used to insonify the imaging medium have similar amplitudes but different steering directions, and the signals received from each emission are acquired from a fixed location. Therefore, it is possible to average the estimated covariance matrices associated with each transmission. In other words, a multi-wave receiving covariance matrix, denoted as $\hat{R}_{Rx}(n)$, can be estimated as below:

$$\hat{R}_{Rx}(n) = \frac{1}{N_P (N - L_1 + 1)} \sum_{i=1}^{N_P} \sum_{l=1}^{N - L_1 + 1} \boldsymbol{x}_{Rx,i}^{(l)}(n) \left(\boldsymbol{x}_{Rx,i}^{(l)}(n)\right)^H, \qquad (6.24)$$

where $\boldsymbol{x}_{Rx,i}^{(l)}(n) = [x_{i,l}(n), x_{i,l+1}(n), \ldots, x_{i,l+L_1-1}(n)]^T \in \mathbb{C}^{L_1 \times 1}$ denotes the $l$th subarray, of length $L_1$, corresponding to the $i$th emission. The weight vector $\boldsymbol{w}_{Rx}(n) \in \mathbb{C}^{L_1 \times 1}$ is obtained from the estimated covariance matrix in (6.24), and the beamformed data of the MRB is obtained from the following equation:

$$y_{Rx}(n) = \frac{1}{N_P (N - L_1 + 1)} \sum_{i=1}^{N_P} \sum_{l=1}^{N - L_1 + 1} \boldsymbol{w}_{Rx}^H(n)\, \boldsymbol{x}_{Rx,i}^{(l)}(n). \qquad (6.25)$$

It can be seen from the above equation that the beamformed outputs of the $N_P$ emissions are averaged to produce the final output. One should note that to obtain the weight vector in MRB, the estimated covariance matrix is inverted only once (instead of $N_P$ times), which reduces the computational complexity of the algorithm.

Transmitting Aperture Weighting

In the transmitting aperture weighting method, the weighting process is performed along the transmitting aperture, as its name implies. The processing steps are similar to those of the MRB, except that the plane wave direction is considered instead of the receiver direction. Simply put, in the transmitting aperture weighting algorithm, the procedure performed in the MRB is applied to the transpose of the data matrix presented in (6.23) to obtain the reconstructed image. Consider the transposed data matrix:

$$X^T(n) = \begin{bmatrix} x_{1,1}(n) & x_{2,1}(n) & \cdots & x_{N_P,1}(n) \\ x_{1,2}(n) & x_{2,2}(n) & \cdots & x_{N_P,2}(n) \\ \vdots & \vdots & \ddots & \vdots \\ x_{1,N}(n) & x_{2,N}(n) & \cdots & x_{N_P,N}(n) \end{bmatrix} = \begin{bmatrix} \boldsymbol{x}_{Tx,1}^T(n) \\ \boldsymbol{x}_{Tx,2}^T(n) \\ \vdots \\ \boldsymbol{x}_{Tx,N}^T(n) \end{bmatrix}, \qquad (6.26)$$

where $\boldsymbol{x}_{Tx,j}(n) = [x_{1,j}, \ldots, x_{N_P,j}]^T \in \mathbb{C}^{N_P \times 1}$ denotes the transmitting aperture corresponding to the $j$th element. To obtain the transmitting aperture weighting, spatial smoothing is necessary to estimate the covariance matrix; the difference between the emission angles is small, and therefore the received data associated with different emissions are correlated. By performing the spatial smoothing, the covariance matrix is estimated as below:

$$\hat{R}_{Tx}(n) = \frac{1}{N (N_P - L_2 + 1)} \sum_{j=1}^{N} \sum_{l=1}^{N_P - L_2 + 1} \boldsymbol{x}_{Tx,j}^{(l)}(n) \left(\boldsymbol{x}_{Tx,j}^{(l)}(n)\right)^H, \qquad (6.27)$$

[ ]T where . x (l) ∈ C L 2 ×1 denotes the .lth T x, j (n) = xl, j (n), xl+1, j (n), . . . , xl+L 2 −1, j (n) subarray with the length of . L 2 corresponding to . jth element. Note that . L 2 ≤ N P /2 should be considered to achieve a good performance. The weight vector .w T x (n) ∈ C L 2 ×1 is obtained from the estimated covariance matrix presented in (6.27). The beamformed outputs of different transmitting apertures are averaged and apodized using the obtained weight vector to obtain the final beamformed data: Σ 1 N (N P − L 2 + 1) j=1 N

y (n) =

. Tx

−L 2 +1 N PΣ l=1

w THx (n)x (l) T x, j .

(6.28)

6.1 Ultrafast Imaging

263

It can be seen from the above equation that the non-adaptive DAS beamformer is used in the receiving aperture. Also, the adaptive beamformer is used to compound the low-resolution images. One should note that in the transmitting aperture weighting method, the covariance matrix is estimated based on the raw data, not the low-resolution images obtained from the first step of the process. This reveals the difference between the transmitting aperture weighting method and the MV-based algorithm presented in Sect. 6.1.5.2. Joint Transmitting-Receiving Adaptive Beamforming In JTR beamforming, receiving, and transmitting aperture weightings are combined to obtain the final high-resolution image. More precisely, considering the data matrix . X(n) presented in (6.23), the covariance matrices in both directions of the receiver and plane wave are estimated using (6.24) and (6.27), and the weight vectors .w Rx (n) and .w T x (n) are obtained accordingly. In the JTR algorithm, spatial smoothing is performed jointly in both receiver and plane wave directions to estimate the covariance matrix; the data matrix presented in (6.23) is divided into some overlapping sub-blocks with the dimensions of . L 1 × L 2 , and the covariance matrix is estimated as below: .

1 (N − L 1 + 1)(N P − L 2 + 1) ⎡ xi, j (n) N PΣ −L 2 +1 N −L Σ1 +1 ⎢ x ⎢ i+1, j (n) × ⎢ .. ⎣ .

ˆ X(n) =

i=1

⎤ xi, j+1 (n) · · · xi, j+L 1 −1 (n) xi+1, j+1 (n) · · · xi+1, j+L 1 −1 (n) ⎥ ⎥ ⎥. .. .. .. ⎦ . . . xi+L 2 −1, j (n) xi+L 2 −1, j+1 (n) · · · xi+L 2 −1, j+L 1 −1 (n) (6.29)

j=1

It can be seen from the above equation that the array division is performed in both directions. Finally, the JTR output is obtained according to the following equation: y

. JTR

( H )T ˆ (n) = w H . Rx (n) X(n) w T x (n)

(6.30)

The above equation implies that the weighting process is performed jointly on the row data in both directions of the receiver and the plane wave. One should note that the following equality is met for the weight vectors: L2 Σ L1 ( Σ .

j

)

wTi x w Rx =

i=1 j=1

L2 Σ i=1

=

L2 Σ

⎛ ⎝wTi x

L1 Σ

⎞ w Rx ⎠ j

j=1

( i ) wT x .1 = 1,

(6.31)

i=1

which indicates that the weighting vectors are distortionless. The JTR algorithm achieves an improved resolution compared to the MV algorithm discussed in

264

6 Ultrafast and Synthetic Aperture Ultrasound Imaging

Sect. 6.1.5.2. Also, the JTR method preserves the speckle statistics better compared to the MRB and transmitting aperture weighting methods. According to the above explanations, the processing steps of the JTR algorithm for each time instance .n are summarized below: 1. The time delays of the received signals are calculated for each emission separately, and the data matrix is constructed. ˆ Rx (n) and . R ˆ T x (n) are calculated using (6.24) and 2. The covariance matrices . R (6.27). 3. The weight vectors.w Rx (n) and.w T x (n), corresponding to the covariance matrices estimated in the previous step, are calculated. 4. The array division technique is performed, and the covariance matrix is estimated according to (6.29). 5. The beamformed output of the JTR algorithm is achieved using (6.30).

6.1.5.4

Spatial Coherence Approach to Minimum Variance Beamformer

As discussed in Sect. 3.3, the received signals in medical US imaging are highly correlated. To artificially decorrelate the received signals, it is necessary to apply the spatial smoothing technique. However, this technique affects the performance of the MV algorithm; as the subarray length decreases, the quality of the reconstructed image reaches the non-adaptive DAS beamformer, which suffers from low resolution. In contrast, as the subarray length increases, the robustness of the algorithm will be decreased. In CPWI, a MV distortionless response (MVDR)-based beamformer has been developed in which the weight vector is calculated without the need for spatial smoothing (Nguyen and Prager 2018); this algorithm is described as a spatial filter that decorrelates the received signals of the array elements. More precisely, the covariance matrix is estimated using the approximation of the correlation between the signals, which is a measure of the similarity between the backscattered echoes corresponding to the array elements. The MVDR-based algorithm can be applied along the rows or columns of the data matrix presented in (6.23). Accordingly, two MVDR-based algorithms have been introduced in which the weighting process is performed in different directions. Applying the algorithm along the direction of the rows and columns of the data matrix is known as the receive and transmit MVDR algorithm, respectively. Each of the MVDR-based algorithms will be discussed in the following. Transmit Minimum Variance Distortionless Response Algorithm The input of the algorithm is considered as .v(n) = [v1 (n), . . . , v N (n)]T ∈ C N ×1 , where v j (n) =

NP Σ

.

i=1

xi, j (n), for j = 1, . . . , N .

(6.32)

6.1 Ultrafast Imaging

265

It can be seen from the above equation that each entry of the input vector .v(n) is obtained by summing the signals of the . jth element over different emissions. The defined input vector should be decorrelated using a matrix that is equivalent to the second-order statistics of .v(n). In this regard, a new set of snapshots are generated from the input vector. The newly generated snapshots are obtained by different combinations of the backscattered data. One should note that [ the second-order ]T statistics of the new snapshots, which are denoted as . pk (n) = pk,1 , . . . , pk,N (n) ∈ C N ×1 , should be similar to that of .v(n). For the newly defined vector . pk (n), we have NP Σ 1 . pk, j (n) = xi, j (n). N P − 1 i=1,i/=k

(6.33)

Similar to .v(n), the newly defined vector . pk (n) consists of the compounded data, with the difference that the signal corresponding to the .kth emission is not included. To analyze the statistical similarity between .v(n) and . pk (n), pay attention to the following explanation: the vector .v(n) is constructed by coherent compounding the signals of different emissions. It is well-known that the compounding process is equivalent to synthetically focusing on a focal point (Montaldo et al. 2009). If each plane wave is considered as a sphere pulse, it can be assumed that the propagated wave originates from a pseudo-phased array consisting of . N P elements. This assumption is met for each imaging point of the medium. The van Cittert-Zernike (VCZ) theorem is used to relate the statistics of the approximated covariance matrix of the vector .v(n) among its components to the profile of the source intensity (Mallart and Fink 1991); the spatial coherence between two received wavefronts which are observed at two different points, or equivalently, two element positions .x p1 and .x p2 , is described using the VCZ theorem. This is equivalent to the Fourier transform of the source pattern obtained in the spatial frequency, which depends on the distance between .x p1 and .x p2 . In the covariance matrix estimation of the vector . pk (n), the spatial coherence among the components of the vector is approximated similarly to the covariance matrix estimation of .v(n). This approximation is made by the similar pseudo-phased array consisting of . N P elements, with the difference that the .kth element is not activated. If . N P is great enough, this difference will be small, and therefore, one can assume that the second-order statistics of the vectors .v(n) and . pk (n) are similar. Generating the vector .v k (n) for each emission (.k = 1 · · · , N P ), the covariance matrix is estimated according to the following equation:

.

NP Σ ˆ Tx-MVDR = 1 p (n) pkH (n) + ξ I, R N P k=1 k

(6.34)

where the index Tx-MVDR stands for transmit MVDR. The diagonal loading approach is also applied to the estimated covariance matrix in order to achieve a robust estimation. Consequently, the covariance matrix is estimated using a new

266

6 Ultrafast and Synthetic Aperture Ultrasound Imaging

strategy without performing the spatial smoothing technique. After the covariance matrix is estimated, the weight vector .w Tx-MVDR (n) is calculated according to (2.27). Finally, the output of the transmit MVDR beamformer is obtained as below: Algorithm 6.1 Processing steps of transmit MVDR method. 1: 2: 3: 4: 5:

Input: data matrix obtained from N P emissions and N receive elements. Output: Tx-MVDR beamformed output for nth imaging point (yTx-MVDR (n)). v(n) = signal summation over all emissions for each element for k = 1 : N P do pk (n) = signal summation over all emissions, except k th emission for each element end for ˆ Tx-MVDR (n) = covariance matrix estimation using (6.34) R ˆ −1 R Tx-MVDR (n)a ˆ −1 R Tx-MVDR (n)a H yTx-MVDR (n) = w Tx-MVDR (n) × v(n) Return yTx-MVDR (n)

6: w Tx-MVDR (n) = 7: 8:

aH

y

. Tx-MVDR

H (n) = wTx-MVDR (n)v(n).

(6.35)

The pseudo-code corresponding to this MVDR-based algorithm is presented in Algorithm 6.1. In the transmit MVDR algorithm, the snapshots are generated using the data compounded over different emissions. Therefore, this algorithm is also known as the data compounded among transmit MVDR (DCT-MVDR). Receive Minimum Variance Distortionless Response Algorithm In the receive MVDR algorithm, the input vector is considered as the summation of the data over the received signals for a single emission. More precisely, we have [ ]T . u(n) = u 1 (n), . . . , u N P (n) ∈ C N P ×1 , where u (n) =

N Σ

. i

xi, j (n), for i = 1, . . . , N P .

(6.36)

j=1

The received signals are obtained from the same source, and therefore, a correlation exists between them. By using this MVDR-based algorithm, this correlation should be removed for each emission. However, the entries of the input vector .u(n) are associated with different emissions. Inspired by the acoustic reciprocity theorem, it is still possible to consider the receive MVDR algorithm as a decorrelation. In the acoustic reciprocity theorem, the role of the transmit and receive changes in the spatial frequency context (Bottenus and Üstüner 2015). From this aspect, the vector . u(n) is considered as the backscattered echoes received from the pseudo-phased array consisting of . N P elements. Therefore, the receive MVDR beamformer is used to decorrelate the received signals obtained from these . N P elements.

6.1 Ultrafast Imaging

267

Similar to the transmit MVDR algorithm, a set of new snapshots are generated for each element in the receive MVDR algorithm, each of which is denoted as . sk (n) = [ ]T sk,1 (n), . . . , sk,N P (n) ∈ C N P ×1 , where s (n) =

N Σ

. k,i

xi, j (n).

(6.37)

j=1, j/=k

It can be seen that in the newly generated snapshots . sk (n), the signal associated with the .kth element is not included in compounding. Based on the acoustic reciprocity theorem, .u(n) or . sk (n) can be considered as the backscattered echoes from the center of the corresponding beams which are generated from the linear array. Therefore, the VCZ theorem can be used to analyze the similarity between the statistics of . u(n) and . s k (n). The analysis is similar to the one performed for the transmit MVDR algorithm, except that the spatial frequency in the coherence function is determined by the difference between two steering angles instead of the distance between two element positions. After the new snapshots (. sk (n), for .k = 1, . . . , N ) are generated, the covariance matrix is estimated as below: N 1 Σ ˆ sk (n)skH (n) + ξ I, . R Rx-MVDR (n) = N k=1

(6.38)

and the weight vector .wRx-MVDR (n) is obtained accordingly. The beamformed output of the receive MVDR algorithm is obtained according to the following equation: y

. Rx-MVDR

H (n) = wRx-MVDR (n)u(n).

(6.39)

The pseudo-code of this algorithm is presented in Algorithm 6.2. In the receive MVDR algorithm, the snapshots are generated by compounding the data over the array elements, as mentioned earlier. Therefore, this algorithm is also known as the data compounded on receive MVDR (DCR-MVDR) beamformer. 6.1.5.5

Weighting Factors in Compounded Plane Wave Imaging

Weighting factors, such as CF and GCF, have been previously discussed in Sect. 3.5, and it has been shown that using these methods, one can successfully reduce the noise level and improve the image contrast. These weighting methods can also be used in CPWI for the same purpose either in the receiver or plane wave directions. For instance, the CMSB method, which has been presented in Sect. 3.5.2.4, was applied in the plane wave direction in Wang et al. (2022) in order to tackle the trade-off between the image contrast and speckle preservation. Some other CF-based weighting methods have been recently developed to further improve the image quality in CPWI. In the following, these methods are going to be discussed.

268

6 Ultrafast and Synthetic Aperture Ultrasound Imaging

Algorithm 6.2 Processing steps of receive MVDR method. 1: 2: 3: 4: 5:

Input: data matrix obtained from N P emissions and N receive elements. Output: Rx-MVDR beamformed output for nth imaging point (yRx-MVDR (n)). u(n) = signal summation over all elements for each emission for k = 1 : N do sk (n) = signal summation over all elements, except kth element for each emission end for ˆ Rx-MVDR (n) = covariance matrix estimation using (6.38) R ˆ −1 R Rx-MVDR (n)a ˆ −1 R Rx-MVDR (n)a H yRx-MVDR (n) = w Rx-MVDR (n) × Return yRx-MVDR (n)

6: w Rx-MVDR (n) = 7: 8:

aH

u(n)

Generalized Coherence Factor in Compounded Plane Wave Imaging The GCF algorithm has been described in Sect. 3.5.1.2, and it has been shown that by using this weighting method, the performance of the algorithm will be improved compared to the standard DAS algorithm in terms of contrast. To perform this technique in CPWI, one can construct the low-resolution images corresponding to each emission (the first processing step shown in Fig. 6.10). Then, the GCF is used to obtain the [ ]T (θ N ) (θ1 ) (n), . . . , yDASP (n) ∈ appropriate weights. More precisely, consider . y(n) = yDAS C N P ×1 as a vector consisting of the low-resolution images obtained according to (6.6) at time instance .n. The GCF coefficients are obtained according to (3.21) using the vector . y(n) as an input. Finally, the improved quality image would be achieved by applying the GCF coefficients to the final reconstructed image (Wang et al. 2018). More precisely, consider the output of the original 2D-DAS method which is obtained according to (6.7). Applying the adaptive weights obtained from the GCF method to the reconstructed image, we have y

. 2D-DAS+GCF

(n) = GC F(n) × y2D-DAS (n),

(6.40)

which results in an improved contrast image compared to the 2D-DAS algorithm. Normalized Autocorrelation Factor in Compounded Plane Wave Imaging A weighing method known as the normalized autocorrelation factor (NAF) has been developed in CPWI to improve the quality of the reconstructed image (Wang et al. 2018). In this technique, the weighting coefficients are obtained based on the first and second-order statistics of the signals, and also, the correlation between them. Consider . N P reconstructed low-resolution images that are obtained from the nonadaptive DAS beamformer, i.e., (6.6). For each time instance .n, the processing steps of the NAF algorithm are performed to obtain its corresponding weighting factor; [ ]T (θ N ) (θ1 ) (n), . . . , yDASP (n) , the mean .μ(n) ˆ and variance .σˆ 2 (n) for the vector . y(n) = yDAS values are calculated as below: μ(n) ˆ =

.

NP 1 Σ yi (n), N P i=1

(6.41)

6.1 Ultrafast Imaging

269

and:

.

σˆ 2 =

NP [ ]2 1 Σ yi (n) − μ(n) ˆ , N P i=1

(6.42)

(θi ) where . yi (n) = yDAS (n). The obtained mean and variance parameters are equivalent to the first and second-order statistics of the input beamformed signal, respectively. Also, the autocorrelation function is calculated from the following equation:

.

Rˆ r (n) =

NΣ P −r 1 yi (n)yi+r (n), r = 1, . . . , N P − 1. N P − r i=1

(6.43)

In the above equation, the parameter .r is known as the lag number which is used to tune the performance of the NAF algorithm; the greater the value of this parameter, the better the resolution of the reconstructed image, since the incoherent sources are reduced. Also, a smaller value of this parameter improves the image contrast. The NAF coefficient is then obtained according to the following equation: [ .

N AF(n) =

]2 Rˆ r (n) − μ(n) ˆ σˆ 2 (n)

.

(6.44)

Similar to other weighting factors, the output of the NAF coefficient is multiplied by the final image to improve the image quality: y

. 2D-DAS+NAF

(n) = N AF(n) × y2D−D AS (n).

(6.45)

According to the above explanations, it can be found that the correlation between different emission angles is obtained in the NAF technique. If the correlation between the emissions is high, the NAF value tends to be a great value. Also, if the signals are incoherent, or equivalently, in the case in which the correlation between them is small, the value of the NAF will be decreased. Figure 6.13 shows the explained processing steps of the NAF technique schematically. United Wiener Post-filter in Compounded Plane Wave Imaging The wiener post-filter, which is discussed in Sect. 3.5.1.8, can also be used in CPWI to obtain an improved contrast image; the weighting coefficients are calculated for each emission using (3.60), and the low-resolution images are weighted accordingly. Then, the obtained low-resolution images are coherently compounded to achieve the final image. Although this weighting factor is applicable in sidelobe suppression and contrast improvement of the reconstructed image, it results in some dark artifacts in the resulting image due to over-suppression. Therefore, it can be concluded that the wiener post-filter method cannot fully utilize the spatial coherence of the signals. Also, as the process is applied to the signal vectors received by each element for each

270

6 Ultrafast and Synthetic Aperture Ultrasound Imaging

Fig. 6.13 The schematic of the processing steps of the NAF method

transmission separately, it suffers from high-computational complexity. Furthermore, the spatial coherence between different transmission angles is not considered, and the weighting process is performed based on the spatial coherence in each transmission separately. This leads to performance degradation of the wiener post-filter method. To tackle the mentioned limitations, a united wiener (u-wiener) post-filter technique has been developed (Qi et al. 2021). In the u-wiener post-filter method, the noise and signal powers are estimated based on the data matrix rather than the signal vectors received by each element. More precisely, the data obtained from all of the emissions are used to calculate the weighting coefficients (at each time instance), which improves the performance of the algorithm in terms of contrast and robustness. Moreover, as the process is performed once on the data matrix, the computational complexity will be reduced compared to the conventional case where each signal vector is processed individually. Consider the data matrix presented in (6.23). The united signal power of this data matrix at time instance .n is expressed as below: |2 | | | NP Σ N Σ | | 1 2 xi, j (n)|| . . Psignal (n) = | x(n)| ¯ = || | | N P N i=1 j=1

(6.46)

Also, the united variance is obtained according to the following equation: ˆ 2u (n) .σ

=

1 NP N

NP Σ N Σ |2 | |xi, j (n) − x(n) ¯ | .

(6.47)

i=1 j=1

In the original 2D-DAS beamformer, the final image is reconstructed by performing a summation over all of the elements in all emissions. This process can be interpreted as the DAS algorithm which is applied to. N P × N elements. Based on this, the united weighting vector is defined as.w u (n) = N P1 N 1 (instead of. N1 1). Considering this issue, and also, according to (6.47), the united noise power is estimated as . Pnoise (n) = wuH (n)σˆ u (n)Iw u (n). After the united signal and noise powers are estimated, the u-wiener post-filter is obtained similar to (3.57) as below:

6.1 Ultrafast Imaging . Hu-wiener (n)

=

271

Psignal (n) Psignal (n) + Pnoise (n)

= | | 1 | NP N

| |2 | 1 ΣNP ΣN | | N P N i=1 j=1 xi, j (n)| | ( |2 ) . | ΣNP ΣN |2 1 1 ΣNP ΣN | | x (n) + (n) − x(n) ¯ x | i, j i, j j=1 j=1 i=1 i=1 NP N NP N

(6.48) In the above equation, the numerator represents the united coherent sum. Also, the second term of the denominator in (6.48) represents the united incoherent sum. Consequently, the relation between the CF and u-wiener post-filter can be expressed similarly to (3.62). The obtained u-wiener post-filter is multiplied with the 2D-DAS beamformed data to achieve the final weighted output: y

. 2D-DAS+uwiener

(n) = Huwiener (n) × y2D-DAS (n).

(6.49)

2D Mean Adaptive Zero-Crossing Factor in Compounded Plane Wave Imaging Recently, a new weighting method known as the two-dimensional mean adaptive zero-crossing factor (TMAZF) was developed in Tan et al. (2022) to improve the resolution and contrast of the reconstructed image. This weighting factor is obtained according to the zero-crossing points that are determined based on the polarities corresponding to a specific imaging point over different emissions. Consider the [ ]T (θ N ) (θ1 ) (n) · · · , yDASP (n) ∈ C N P ×1 associated with the .nth imaging vector . y(n) = yDAS point. If the adjacent entries of this vector have different polarities, it can be concluded that a virtual zero exists between them which is known as the zero-crossing point and is identified as below: (θ

)

(θi ) i+1 z (n) = yDAS (n).yDAS (n), i = 1, · · · N P − 1.

. i

(6.50)

The zero-crossing variable is then expressed according to the following equation: { c (n) =

. i

1 z i (n) ≤ 0 0 z i (n) > 0 (θ

)

i = 1, · · · N P − 1.

(6.51)

(θi ) i+1 In the case in which . yDAS (n) and . yDAS (n) have different polarities, .ci (n) = 1 will be obtained. Also, if these two parameters have the same polarity, .ci (n) = 0 will be [ ]T obtained. The zero-crossing array is constructed as . c(n) = c1 (n), · · · c N P −1 (n) ∈ C(N P −1)×1 . This array is divided into a number of subarrays. Note that the subarray length is determined adaptively based on the variance of the vector . c(n) which is denoted as .σc2 (n); the closer this value is to zero, one can conclude that a large percentage of the vector . c(n) consists of .0 or .1. Also, as the value of the variance 2 .σc (n) increases, it can be concluded that the ratio of the number of zeros and ones in the vector . c(n) gets closer to 50%. The subarray length is obtained according to the following equation:

272

6 Ultrafast and Synthetic Aperture Ultrasound Imaging

.

L(n) =

( )] [ (l1 − l2 ) [ ] [ ] × σc2 (n) − min σc2 (n) + l2 , 2 2 max σc (n) − min σc (n)

(6.52)

where .l1 and .l2 are two constants between 1 and . N P − 1. It can be seen from the above equation that as the value of.σc2 (n) decreases, the subarray length decreases too (note that.l1 > l2 ). Considering the overlap between subarrays,. N P − L(n) subarrays can be constructed for the vector . c(n). In such a case, the .mth subarray is defined as the summation over the entries of the vector . c(n) that are included in this subarray as follows: y (n) =

m+L(n)−1 Σ

. m

cl (n).

(6.53)

l=m

If . ym (n) = 0, it can be concluded that the .mth subarrays’ polarities are the same, and therefore, the coherence is higher in this subarray. Accordingly, the function . ym, (n) is expressed as below: { 1 ym (n) = 0 , . ym (n) = (6.54) 0 ym (n) > 0. According to the above equation, the adaptive zero-crossing factor is defined as below: [ | N PΣ −L(n) | 1 | y , (n). (6.55) . AZ F(n) = N P − L(n) m=1 m Then, the result is used as the weighting factor to improve the image contrast which is applied to the 2D-DAS beamformed data as below: y

. 2D-DAS+AZF

(n) = AZ F(n) × y2D-DAS (n).

(6.56)

In order to further improve the image contrast, a 2D moving mean filter is applied to the weighting factor presented in (6.55), and the resulting factor which is known as TMAZF is formulated as below:

.

T M AZ F(n) =

K S Σ Σ 1 AZ F(n x + q, n y + r ), (2S + 1)(2K + 1) r =−S q=−K

(6.57)

where .n x and .n y represent the coordinate of the .nth imaging point. Also, two integers S and . K are related to the mean filter. Finally, the improved quality image is achieved by applying the TMAZF to the output of the 2D-DAS algorithm as below:

.

y

. 2D-DAS+TMAZF

(n) = T M AZ F(n) × y2D-DAS (n).

(6.58)

6.1 Ultrafast Imaging

6.1.5.6

273

Singular Value Decomposition Sidelobe Reduction Beamformer in Compounded Plane Wave Imaging

It is well-known that multiple plane waves from different angles are emitted toward the imaging medium in CPWI. The sidelobes corresponding to different emissions are uncorrelated. Therefore, one can use a coherence-based singular value decomposition (SVD) filter before the coherent compounding process in order to suppress the sidelobe levels and improve the image contrast (Guo et al. 2016). Applying time delays to the received signals corresponding to each emission, and also, juxtaposing the results leads to a 3D tensor with the dimensions of . N x × N y × N P , where . N x and . N y denote the spatial samples in the array and depth directions, respectively. Before applying the SVD filter, dimension reduction is performed to prevent high-computational complexity; in this process, the angular sequence is transformed into a 2D spatio-angular matrix with the dimensions of . N x N y × N P , the schematic of which is illustrated in Fig. 6.14. Then, the SVD filter is used to decompose the reduced dimension matrix which is denoted as . P. More precisely, we have

.

P = UΛV H =

Σ

λi ui × v i ,

(6.59)

i

where.Λ ∈ C Nx N y ×N P is a diagonal matrix, the.ith entry of which is denoted as.λi . The singular values are stored in this matrix. Also, .U ∈ C Nx N y ×Nx N y and . V ∈ C N P ×N P are orthonormal matrices. The .ith column of the matrix .U which is denoted as N N ×1 . ui ∈ C x y , represents a 2D image as . I i ∈ C Nx ×N y . Also, the .ith column of . V which is denoted as .v i ∈ C N P ×1 represents the .ith angular signal. Therefore, it can be concluded that the SVD filter decomposes the matrix . P as the summation of separable images which are characterized by vectors .ui . Each of the vectors that represent the 2D images is modulated by its corresponding angular signals, i.e., .v i . According to the SVD processing, the beamformed data corresponding to . jth emission can be formulated as below: Σ . P(x, z, j) = λi I i (x, z)v i ( j), (6.60) i

Fig. 6.14 The schematic of dimension reduction in SVD sidelobe reduction beamformer

274

6 Ultrafast and Synthetic Aperture Ultrasound Imaging

where (x,z) denotes the specific spatial sample corresponding to the.xth array element and .zth depth directions. Decomposing the date using the SVD filter, it can be found that the information associated with the main lobe components is represented in a few first singular values and their corresponding singular vectors. This is due to the fact that the high spatio-angular coherence leads to the on-axis pixels to exhibit the same angular profile. Also, the sidelobe information is represented by the last singular values and their corresponding singular vectors. Therefore, it is possible to suppress the sidelobe levels by considering only a few first singular values and their corresponding singular vectors as below:

.

P (x, z, j) = f

β Σ

λi I i (x, z)v i ( j),

(6.61)

i=1

where .β is a threshold to select a few first singular values. Once the SVD filter is applied to the data, the result is again transformed into the original dimensions, which is denoted as . P f ∈ C Nx ×N y ×N P . The high-resolution image is finally obtained by compounding the frames as below: y

. SVD

(x, z) =

NP Σ

P f (x, z, i).

(6.62)

i=1

6.1.5.7

Combinatorial Algorithms

In the previous sections, it has been shown that to achieve an improved quality image in CPWI, different modified algorithms can be used; in particular, to improve the image resolution, MV-based algorithms are applied in the receiver or plane wane directions. Different available cases associated with the MV algorithm are discussed in Sect. 6.1.5.2. Also, to improve the image contrast, algorithms such as the non-linear DMAS and the weighting factors are used to suppress the sidelobes and reduce the noise level more successfully. Recently, combinatorial algorithms have been developed in CPWI in which different algorithms are applied among the receiver and plane wave directions to take advantage of resolution and contrast enhancement simultaneously. In the following, a list of related works will be discussed. Minimum Variance Algorithm Combined with Coherence Factor in Compounded Plane Wave Imaging In Hashemseresht et al. (2022), the adaptive MV algorithm has been proposed to be combined with the CF weighting method in the plane wave direction to improve both the resolution and contrast of the final reconstructed image. In this algorithm, lowresolution images are constructed using the standard DAS algorithm in the receiver direction. In order to compound the resulting low-resolution images, the MV algorithm is used according to Austeng et al. (2011), with the difference that the CF weighing method is also applied to the MV beamformed data to further suppress

6.1 Ultrafast Imaging

275

the noise level. More precisely, the final beamformed data is obtained according to (6.20). Besides, the CF coefficients are calculated in the plane wave direction according to (3.19), and we have |Σ |2 | N P (θi ) | | i=1 yDAS (n)| .C FP W (n) = |2 . Σ N P || (θi ) | N P i=1 |yDAS (n)|

(6.63)

The output of the CF-based MV algorithm in the plane wave direction is finally obtained according to the following equation: y

. PW-MV+CF

(n) = C FP W (n) × yPW-MV (n),

(6.64)

which results in further sidelobe suppression in the improved resolution image obtained from the PW-MV beamformer. The schematic of the processing steps of this combinatorial algorithm is illustrated in Fig. 6.15. Improved Minimum Variance Algorithm Combined with Generalized Coherence Factor in Compounded Plane Wave Imaging In Deylami et al. (2016), it has been proposed to use the EIBMV algorithm combined with the GCF weighting method to improve the image resolution and contrast simultaneously. In this combinatorial algorithm, the GCF technique is performed to insist on the coherent parts of the images which leads to contrast improvement. Also, using the EIBMV algorithm, the weight vector obtained from the MV beamformer is projected using a transformation matrix which is obtained by eigendecomposing the estimated covariance matrix. This results in more sidelobe suppression compared to the basic MV algorithm while a good resolution is maintained. One can refer to Sect. 3.5.2.2 to find a more detailed explanation of this adaptive beamformer. In the considered combinatorial algorithm, the EIBMV beamformer combined with the

Fig. 6.15 The schematic of the processing steps of the PW-MV beamformer combined with the CF algorithm

276

6 Ultrafast and Synthetic Aperture Ultrasound Imaging

GCF technique is applied in the receiver direction; inspired by (6.21) and (3.85), we have y

(θ j )

. EIBMV

[ ]H (n) = Es EsH w H x(n, j), Rx (n)

(6.65)

where .Es EsH w H Rx (n) denotes the weight vector obtained from the EIBMV algorithm. Also, the vector .Es contains the selected eigenvalues and is used to project the weight vector .w Rx (n). Besides, for each emission, the GCF technique is applied similarly to (3.21) as below: Σ M0

.

l=−M

| p(l, j, n)|2

l=−N /2

| p(l, j, n)|2

GC F (θ j ) (n) = Σ N /2−10

,

(6.66)

where . p(l, j, n) denotes the DFT of the received signal (which is performed along the spatial domain) associated with .lth element and . jth emission at time instance .n. The calculated coefficients are applied to the EIBMV beamformed data as below: y

(θ j )

. EIBMV+GCF

[ ] (θ j ) (n) = 1 + GC F (θ j ) (n) × yEIBMV (n).

(6.67)

It can be seen from the above equation that a constant value of 1 is added to the coefficients obtained from the GCF method; this modification is applied to the standard GCF method to overcome the concern about the time instances in which the resulting image is multiplied with the GCF coefficients near 0. Once the low-resolution images are obtained according to this combinatorial algorithm, coherent compounding is performed by simply summing the results over the plane wave direction as below: y

. EIBMV+GCF

(n) =

NP 1 Σ (θ j ) yEIBMV+GCF (n), N P j=1

(6.68)

and the final image is obtained accordingly. Although the quality of the resulting image will be considerably improved using this algorithm, however, its computational complexity is very high which negatively affects the frame rate. 2D-Minimum Variance Algorithm Combined with Generalized Coherence Factor in Compounded Plane Wave Imaging In order to improve the image resolution in CPWI, the adaptive MV algorithm can be used in both directions of the receive and plane wave, as mentioned earlier. To take advantage of the improved resolution of the 2D-MV algorithm while the computational complexity is reduced, the JTR algorithm is a good candidate which is discussed in Sect. 6.1.5.3. To further improve the contrast of the reconstructed image obtained from the JTR algorithm, in Qi et al. (2018), it has been proposed to use this beamformer combined with the GCF weighting method. The computational complexity of this combinatorial algorithm is lower compared to the one which is reported

6.1 Ultrafast Imaging

277

in Deylami et al. (2016). In the JTR beamformer, the spatial smoothing technique is performed on the data matrix rather than the signal vector. In other words, submatrix division is used instead of subarray division. In the combinatorial technique presented in Qi et al. (2018), it has been proposed to calculate 2D-GCF coefficients for each submatrix generated from the JTR beamformer. Inspired by (6.30), consider the output of a single submatrix obtained from the JTR algorithm as below: ⎡ .

y1,1 (n) y2,1 (n) .. .

⎢ ⎢ Y (l) (n) = ⎢ ⎣

y1,2 (n) y2,2 (n) .. .

··· ··· .. .

y1,N P −L 1 +1 (n) y2,N P −L 1 +1 (n) .. .

⎤ ⎥ ⎥ ⎥. ⎦

(6.69)

y N −L 2 +1,1 (n) y N −L 2 +1,2 (n) · · · y N −L 2 +1,N P −L 1 +1 (n) In order to apply the GCF method to the above submatrix, one can rearrange the elements in one row and calculate the FFT of the result. However, the resulting coefficients are not sufficiently accurate since the submatrix includes the signals associated with different emissions. In order to increase the accuracy of the GCF method, the coefficients can be obtained for each emission separately. However, the computational complexity will be increased. To overcome this limitation, a 2D-GCF method can be used in which the 2D spatial spectrum of the submatrix is calculated as below: .

[ ] P( f ) = FFT Y (l) (n) ⎡ p1,1 (n) ⎢ p2,1 (n) ⎢ =⎢ .. ⎣ .

p1,2 (n) p2,2 (n) .. .

··· ··· .. .

p1,N P −L 1 +1 (n) p2,N P −L 1 +1 (n) .. .

⎤ ⎥ ⎥ ⎥. ⎦

(6.70)

p N −L 2 +1,1 (n) p N −L 2 +1,2 (n) · · · p N −L 2 +1,N P −L 1 +1 (n)

Then, expanding the standard 1D-GCF formula presented in (3.21) to the 2D-GCF, we have |2 Σ M1 Σ M2 | | | l1 =0 l2 =0 pl1 ,l2 (n) , (6.71) . GC F2D (n) = Σ 2 | P(n)| where . M1 and . M2 denote the cut-off frequencies in the receiver and plane wave directions, respectively. It can be seen from (6.71) that in the 2D-GCF method, the ratio of the low-frequency energy to the total energy is calculated to obtain the weighting coefficients. Finally, the obtained 2D-GCF weighting coefficients are applied to the corresponding submatrix as below: y

. JTR+2D-GCF

(n) = GC F2D (n) ×

Σ[

] Y (l) (n) ,

(6.72)

which results in contrast improvement compared to the basic JTR beamformer. The described process is repeated for each submatrix.

278

6 Ultrafast and Synthetic Aperture Ultrasound Imaging

Minimum Variance Algorithm Combined with DMAS Beamformer in Compounded Plane Wave Imaging In Ziksari and Asl (2020), the adaptive MV algorithm is combined with the non-linear DMAS beamformer in CPWI to achieve an improved image in terms of both resolution and contrast. In this algorithm, which is known as the MV-DMAS technique, the MV beamformer is used in the receiver direction to obtain low-resolution images. Then, the DMAS beamformer is used in the plane wave direction to perform the compounding process. The performance of the MV-DMAS algorithm is improved by taking advantage of the good resolution of the MV beamformer as well as the high contrast of the DMAS technique. In the MV-DMAS algorithm, the MV algorithm is applied in the receiver direction according to (6.21). To apply the DMAS algorithm in the plane wave direction and perform compounding, one should note that the baseband signals can be used; according to the discussion made in Sect. 6.1.5.1, in order to use the baseband signals, demodulation, and phase rotation should be performed before applying the algorithm. Then, the result is amplitude-scaled by . pth rooting the amplitude of the signal while the phase remains unchanged. Considering . p = 2, we have / (θi ) .y ˆMV (n) = si (n)e jφi (n) , (6.73) where .si (n) denotes the signal amplitude corresponding to .ith emission at time instance .n. Using (6.73), the DMAS algorithm is used in the plane wave direction as below: ⎤ ⎡( )2 NP NP ( ) Σ Σ 2 1⎣ (θi ) (θi ) yˆMV . yMV-DMAS (n) = yˆMV (n) − (n) ⎦ , (6.74) 2 i=1 i=1 and the final image is reconstructed accordingly. Note that by implementing the DMAS algorithm as (6.74), the computational complexity is reduced from . O(N P2 ) to. O(N P ) compared to the conventional implementation scenario presented in (3.66). This combinatorial algorithm with a different implementation process is also presented in Ziksari and Asl (2023). In this method, the Tx-MVDR technique, together with the BS method, is used instead of the conventional MV algorithm. The goal of using the BS method is to reduce the dimensions of the estimated covariance matrix, and therefore, increase the processing speed, as explained in detail in Sect. 3.7.2. Also, to take advantage of the DMAS algorithm (similar to the previously introduced MV-DMAS method), the spatial autocorrelation of the received signals is used to obtain the final reconstructed image. More precisely, instead of the vector .v, the entries of which are obtained by coherent summation of the received signals according to (6.32), an alternative vector is used which is obtained by applying the DMAS algorithm on the time-delayed signals in the plane wave direction. If the new vector is denoted as .v DMAS , the output of the mentioned method can be obtained as H .w BS-Tx-MVDR (n)v BS-DMAS (n), which is inspired by (6.35). Here, the subscript “BS” is due to the usage of the BS method.

6.1 Ultrafast Imaging

279

A variety of combinatorial algorithms can be imagined in CPWI besides the ones that are presented above in order to improve image quality. In particular, one can combine the SCF and DMAS algorithms to take advantage of both of these algorithms; a similar work was done in Luo et al. (2022) in CPWI of a wedge twolayer medium, where the DMAS algorithm was used in the Rx direction in the first step. Then the low-resolution images were compounded using the SCF technique. Also, in Zhang and Wang (2023), a modified PCF weighting method is used in combination with the DMR algorithm (presented in Sect. 3.7.12) to improve the lateral resolution and the image contrast.

6.1.6 Frame Rate Improvement in Compounded Plane Wave Imaging The quality of the reconstructed image in CPWI can be improved by a coherent combination of different plane waves. The key point is that increasing the number of plane waves in CPWI negatively affects the frame rate. Of course, one should note that the frame rate is still improved compared to the conventional B-mode imaging. In general, it can be said that there is a compromise between the reconstructed image quality and the frame rate in CPWI. The solutions devised to reduce this compromise, or equivalently, to achieve high-quality and high-frame-rate properties simultaneously, can be divided into two general categories: (i) reducing the number of emissions, and (ii) developing a fast image reconstruction algorithm. Reducing the number of emissions (the first category) leads to an increase in the speed of the imaging process, as well as an increase in data acquisition speed. In this regard, various studies have been carried out so far. For instance, a technique is presented in Afrakhteh and Behnam (2021) in which, relying on the concept of tensor completion, a limited number of plane waves are considered randomly for imaging. Since some of the emissions are known, and some others are unknown, the tensor consisting of the received signals will be an incomplete tensor. Then, by using the tensor completion algorithm, the incomplete tensor will be completed, and the coherent compounding process can be performed. Also, by using the known emissions, the unknown ones can be estimated via appropriate interpolation techniques, such as what was done in Jalilian et al. (2023). The principles of tensor completion can be used in another way to increase the processing speed; in such a way, a limited number of imaging points is considered to be known in the reconstruction process, and the rest is unknown. Then, the beamformed data corresponding to the known imaging points are obtained by using an appropriate algorithm. Finally, the values corresponding to the unknown imaging points are estimated using the tensor completion method. Such a study has been done in Paridar and Asl (2023a), and the processing speed increment while maintaining image quality has been shown. In addition, image reconstruction algorithms that lead to a higher quality compared to DAS (such as DMAS and other improved algorithms mentioned so far in this book)

280

6 Ultrafast and Synthetic Aperture Ultrasound Imaging

can also be used. By using such algorithms, the quality of the resulting image is almost preserved for fewer emissions. Techniques based on the compressive sensing theory are also included in this category. Moreover, an efficient method, known as the convolutional-based DAS algorithm, has recently been developed in Paridar and Asl (2023c) to reduce the required emissions while maintaining image quality. In this method, a limited number of emissions are selected according to a sparse pattern; the angular interval, in which the desired quality image is achieved by emitting . N P plane waves, is down-sampled using two different sampling factors. The optimum values of the sampling factors are determined according to the beam pattern associated with the case where . N P emissions are used. By down-sampling the angle interval using each sampling factor, two subsets will be obtained, each containing a limited number of emissions. Accordingly, the total number of selected emissions for CPWI will be reduced to . N R < N P . It has been proven that by processing the data corresponding to each subset using the DAS algorithm and convolving the obtained outputs, the reconstructed image with the quality equivalent to the case of using . N P emissions will be achieved. In particular, by applying the convolutional-based DAS algorithm to the PICMUS data, the required emissions are reduced from 75 to 16 (while maintaining the image quality). For the second category, i.e., providing a fast image reconstruction algorithm, techniques such as the low-complexity MV algorithms that are discussed in Sect. 3.7 can be used. Also, since the processing speed is higher in the frequency domain compared to the time domain, the image reconstruction process can be performed in the frequency domain (Garcia et al. 2013; Chen et al. 2023a, b; Zhang et al. 2023). Recently, data processing has also been developed in the Radon domain, and by using it, the computational time can potentially be reduced (Schwab and Lopata 2023). In addition to the mentioned items, techniques based on deep learning also have a significant effect on increasing the speed of data processing and improving image quality. This issue will be further explored in Sect. 7.2.

6.2 Synthetic Aperture Ultrasound Imaging As mentioned before, the focusing process can be done synthetically instead of lineby-line in medical ultrasound imaging. In other words, the synthetic aperture focusing method can be used in which the energy of the propagated wave originating from a single element is spanned to the entire imaging region, and the backscattered echoes corresponding to all scan lines are received at once. This technique is basically used in radar applications, which is called synthetic aperture radar (SAR) imaging. The initial idea of using this scenario in the field of medicine was to optimize the imaging system in terms of hardware complexity, and also, improve the frame rate. Over time, a variety of synthetic aperture-based imaging methods have been proposed. In this section, each of them will be briefly discussed. Note that the synthetic aperture imaging scenario can also be used in 3D applications, such as 3D blood flow measurement (Makouei et al. 2020). This is known as the 3D synthetic aperture imaging

6.2 Synthetic Aperture Ultrasound Imaging

281

method and is achieved by using a 2D array. As the main focus of this book is on US imaging using a linear array (1D), investigating the contents of 3D imaging, which is the result of using a 2D array, is neglected in the following.

6.2.1 Synthetic Transmit Aperture To overcome the frame rate limitation of the conventional B-mode imaging, a parallel reception process can be performed instead of a single reception (which is done in conventional B-mode imaging). That is, one can receive several scan lines for each emission and speed up the data acquisition process (which is also known as the multiple-line acquisition (MLA) scenario). To this end, the width of the transmitted beam should be increased to cover a wider area in the imaging medium. This is achieved by reducing the subaperture length in transmission (Tong et al. 2012). The principle of the STA is based on this issue in which the widest possible beam is produced and radiated through the imaging medium using a single element. Consequently, in reception, all the scan lines are received in parallel. However, due to the low energy of a single element that is used to insonify the medium, the SNR of the reconstructed image will be significantly degraded. To overcome this limitation, the transmission-reception process is repeated for several elements, and by combining the outputs of all the emissions, an improved SNR image will be produced (Jensen et al. 2006; Rostamikhanghahi and Sakhaei 2023). The schematic of this imaging technique, in its classical form, is demonstrated in Fig. 1.17. Similar to CPWI, the received signals of all emissions in STA will be a 3D tensor; the resulting tensor in STA is considered as . S ∈ C M×N ×N E , where . N E ≤ N denotes the required number of emissions. Dynamic focusing operation is necessary to be applied to the received signals of each emission. Then, by combining the focused signals corresponding to different emissions, the final reconstructed image will be achieved. How to apply the dynamic focusing process in STA will be investigated in the following.

6.2.1.1

Focusing

In STA, the imaging process is done with the consideration of a focal point which is denoted as . F with the coordinates of .(x F , z F ). According to the considered focal point, the focusing process is done synthetically. In order to perform dynamic focusing, the round-trip time of the propagated wave should be calculated for each imaging point. Consider the . jth element is used to insonify the medium. Also, assume that the focusing is performed for .nth imaging point. In this case, the distance between . jth transmit element and the focal point . F is denoted as .d1 . Also, the distance between the point . F and the imaging point .n is denoted as .d2 , and the distance between the imaging point .n and .ith receiving element equals .d3 (see Fig. 6.16). More precisely, we have

282

6 Ultrafast and Synthetic Aperture Ultrasound Imaging

Fig. 6.16 Schematic of the round-trip travel time for . jth emission and .nth imaging point in STA. The images are shown for both cases in which the imaging point .n is deeper (left image), and also, shallower (right image) compared to the focal point . F

d =

. 1

d2 = d3 =

/ / /

(x p j − x F )2 + z 2F , (x F − xn )2 + (z F − z n )2 , (xn − x pi )2 + z n2 .

(6.75)

Note that the linear array is assumed to be positioned on the z-axis, i.e., .z pi = 0 for .i = 1, · · · N . The time delay will be different depending on whether the imaging point .n is located at a deeper or shallower position compared to the focal point . F; in the case in which the imaging point .n is deeper compared to the point . F, the time delay is calculated as below (Vignon and Burcher 2008): τ (i, j, x F , z F ) =

.

1/ d1 + d2 + d3 . c

(6.76)

Also, for the case in which the imaging point .n is in a shallower depth compared to the point . F, we have τ (i, j, x F , z F ) =

.

1/ d1 − d2 + d3 . c

(6.77)

Consequently, for . jth emission, the DAS beamformed data is obtained as below:

6.2 Synthetic Aperture Ultrasound Imaging

y

( j)

. DAS

(n) =

N Σ

283

si (n − τ (i, j, x F , z F ), j)

i=1

=

N Σ

xi (n, j).

(6.78)

i=1

Similar to CPWI, the final beamformed image is obtained by combining the outputs of all the emissions according to the following equation: y

. DAS

(n) =

NE 1 Σ ( j) y (n) N E i=1 DAS

NE Σ N 1 Σ = xi (n, j). N E j=1 i=1

(6.79)

By combining the outputs of all the emissions according to (6.79), the best possible image quality of the non-adaptive DAS beamformer will be achieved. The frame rate of the STA imaging system is obtained as below: FR =

.

P RF . NE

(6.80)

It can be seen that the frame rate is degraded by increasing the number of emissions.

6.2.2 Recursive Synthetic Transmit Aperture The recursive STA imaging technique was developed in Nikolov et al. (1999) in order to improve the frame rate of the STA imaging system. In this imaging method, the high-resolution image is not achieved after performing the emission process for all the elements; rather, after each emission, the high-resolution image is obtained. Note that in STA, the outputs of all emissions are combined to produce a high-resolution image. In order to produce the next high-resolution image, all the information gathered from the previous emissions is discarded, and the imaging process is repeated again. However, in recursive STA, assuming that the medium is stationary, the information of the previous emissions is used to produce a new frame. In an . N -element linear array, the first high-resolution image is obtained after . N emissions. The .(N + 1)th emission is then done using the first element, and its corresponding reflected echoes are received by all the elements. That is, the data acquisition process is repeated in STA. In the recursive STA method, in order to produce a high-resolution image, the output of the new emission, i.e., .(N + 1)th emission, is replaced with the one that is obtained from the old emission, i.e., the first emission. This process is done recursively. The schematic of the recursive STA method is demonstrated in Fig. 6.17

284

6 Ultrafast and Synthetic Aperture Ultrasound Imaging

Fig. 6.17 Schematic of the recursive STA imaging system for a 3-element linear array. The first high-resolution image is obtained by using all the emissions. Next high-resolution images are obtained after each emission

for a 3-element array. One should note that only the first high-resolution image is obtained after performing the emission process of all the elements, as can be seen in Fig. 6.17. The reason is that from the beginning of the imaging process, there are no previous emissions to be used. The relationship between the new emission, denoted by.n new , and its corresponding old emission, denoted by.n old , is determined according to the following equation:

n

. old

=

mod (n new − 1, N ) + 1,

(6.81)

where . mod (a, b) represents the remainder of dividing .a and .b.

6.2.3 Sparse Synthetic Transmit Aperture To improve the frame rate of STA, the emission process can be done for fewer elements, not all of them. Such a scenario is known as the sparse STA imaging method. As its name implies, the synthetic aperture becomes sparse in this scenario. Another advantage of the sparse STA method is that the motion artifact is reduced due to the reduced data acquisition time. Also, this imaging system requires less memory. The more sparse the transmit aperture, the higher the frame rate. An important issue in the sparse STA method is how to make the array sparse. Down-sampling the array

6.2 Synthetic Aperture Ultrasound Imaging

285

elements leads to increasing the element spacing. This results in grating lobe generation in the final reconstructed image. Also, if the number of elements is reduced by removing a few first and last elements of the array, the resolution of the reconstructed image will be degraded. In sparse STA, the sparse array should be designed such that the mentioned artifacts are prevented. To realize this issue, the elements are selected in such a way that the grating lobes that are produced in transmit and receive beampatterns cancel out each other. That is, the produced grating lobes of transmit and receive beampatterns should not match together so as not to reinforce each other. Consider the effective aperture as the convolution of the transmit and receive aperture functions; if the transmit and receive aperture functions are denoted as .aT x and .a Rx , respectively, the effective aperture .a E is obtained as below: a (x) = aT x (x) ∗ a Rx (x).

. E

(6.82)

The sparse array is designed in such a way that the resulting effective aperture is equivalent to the one that is obtained from the full array, i.e., the case where all the array elements are used. Various sparse arrays have been designed to fulfill this condition. For instance, considering the element spacing equal .λ/2 in a full array, it was shown in Lockwood et al. (1996); Lockwood and Foster (1995) that if the sparse array in transmit with the element spacing of .nλ/2, and the sparse array in receive with the element spacing of .(n + 1)λ/2 are designed, the resulting effective aperture is equivalent to the one that is obtained from the full array, i.e., with the element spacing of .λ/2. In order to smooth the resulting effective aperture, an appropriate apodization can be used. Different sparse arrays have been presented with the aim of reducing the number of elements as much as possible while preserving the effective aperture. By generalizing the sparse STA on 2D arrays, 3D ultrasound imaging with less hardware complexity and data acquisition time will be effectively possible.

6.2.4 Synthetic Receive Aperture

Assuming that the highest hardware complexity of an ultrasound imaging system lies in the receiver part, the synthetic receive aperture (SRA) method was proposed to reduce the hardware complexity (Nock and Trahey 1992; Trahey and Nock 1992). In this method, $N$ elements of the array are activated at each emission and propagate a focused wave toward a limited region of the imaging medium. Because of the focusing in transmission, the beam width is small, and it is therefore not possible to receive multiple scan lines in parallel; that is, the SRA method is not efficient for frame rate improvement. In this imaging system, one or a few elements are activated to receive the backscattered echoes, and by activating different receiving elements, the receive aperture is synthesized. Note that in SRA, the emission process is repeated for a fixed angle until all the elements have received the reflected echoes associated with that emission angle. The schematic of this imaging system is depicted in Fig. 6.18.

Fig. 6.18 Schematic of the SRA imaging system for an $N$-element linear array

Assuming that the number of scan lines required to display the image equals $N_t$, and the number of emissions equals the number of elements (i.e., $N$), the frame rate in SRA is obtained as

$$FR = \frac{PRF}{N \times N_t}. \tag{6.83}$$

It can be seen that with the SRA method, the frame rate is degraded compared to STA, whose frame rate is given in (6.80). One can use more than one element to receive the reflected echoes; in other words, a subaperture consisting of $N_{Rx}$ elements can be considered for the reception process. Note that this scenario falls into the category of MLA systems, since for each emission, multiple scan lines are acquired simultaneously. In such a case, the number of emissions is reduced from $N$ to $N/N_{Rx}$, and the frame rate becomes

$$FR = \frac{PRF \times N_{Rx}}{N \times N_t}. \tag{6.84}$$

It can be seen that the frame rate of this MLA imaging system is improved compared to the basic SRA method. Also, if $N = N_{Rx}$, i.e., if a limited number of elements is used both in transmit and receive to obtain each scan line, SRA becomes equivalent to the conventional B-mode imaging method. There exists another imaging technique, called synthetic transmit-receive aperture (STRA), in which not all the elements are used in the reception process; more precisely, the number of receiving elements is greater than one and less than $N$.


This imaging technique can be implemented in different ways (e.g., Karaman and O'Donnell 1998; Karaman et al. 1998). Similar to SRA, the purpose of STRA is to reduce the hardware complexity of the receiver part.
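As a quick numerical illustration of the frame-rate expressions (6.83) and (6.84), the following sketch compares them (the PRF, array size, and scan-line count below are arbitrary example values):

```python
# Frame-rate comparison of basic SRA, Eq. (6.83), and SRA with an N_rx-element
# receive subaperture (MLA case), Eq. (6.84). All values are illustrative.
PRF = 5000.0     # pulse repetition frequency [Hz]
N = 128          # number of array elements (= number of emissions per line in SRA)
Nt = 128         # number of scan lines in the image
N_rx = 8         # receive-subaperture size in the MLA variant

fr_sra = PRF / (N * Nt)
fr_mla = PRF * N_rx / (N * Nt)
print(f"basic SRA: {fr_sra:.3f} fps, MLA-SRA (N_rx={N_rx}): {fr_mla:.3f} fps")
```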

6.2.5 Virtual Source

The main problem of the STA method with few emissions is its low SNR. Although this limitation can be addressed by repeating the imaging process $N_E$ times, as stated in the description of STA, the frame rate that was supposed to be improved is negatively affected. To tackle this problem, i.e., to improve the frame rate while preventing SNR degradation, wave propagation from a single element is replaced by propagation from multiple elements. By doing so, the azimuthal extent of the transmitted wave is limited compared to the basic STA method; in other words, the number of scan lines received in parallel is reduced. Ultrasound imaging based on a virtual source originates from this use of a limited number of elements in STA, as discussed in the following.

By using a subaperture consisting of a limited number of elements, the emission can be emulated as originating from a point source in front of or behind the subaperture. The point source from which the emission is emulated is called the virtual source; in fact, the focal point of the subaperture is considered a virtual source of ultrasound energy. The virtual source is denoted as $V$ with coordinates $(x_V, z_V)$. Depending on the intended location of the virtual source $V$, a different imaging scenario results: if the emission is emulated with the virtual source behind the subaperture, the DWI scenario is obtained (see Sect. 1.11.4), whereas if the virtual source is positioned in front of the subaperture, the SASB method is obtained (see Sect. 1.11.2). The waveforms produced in these two scenarios are different, as schematically demonstrated in Fig. 6.19. When the virtual source is positioned behind the array (i.e., DWI), the subaperture elements are defocused with respect to a central source located behind the subaperture, which leads to the generation of a diverging wave. When the virtual source is positioned in front of the array (i.e., SASB), the generated wave converges before reaching the focal point and diverges after passing through it. Each of these scenarios is achieved by applying appropriate time delays to the array elements. Note that diverging wave generation is not limited to linear arrays; non-focused diverging waves can also be generated with convex arrays (by applying appropriate time delays) (Liang and Wang 2023).

6.2.5.1 Synthetic Aperture Sequential Beamforming

Fig. 6.19 Schematic of the generated waveform with the virtual source behind (left image) and in front of (right image) the array

To perform imaging with the SASB method, the appropriate time delays should be calculated and applied to the array elements. For each subaperture of length $L$ (an $L$-element subaperture), the virtual source at point $V$ is created by applying the following time delay to the $i$th transmit element:

$$\tau_i = \frac{1}{c}\left[z_V - \sqrt{z_V^2 + \left(x_{p_i} - x_V\right)^2}\,\right]. \tag{6.85}$$

Note that the $x$-coordinate of the virtual source $V$ equals the $x$-coordinate of the central element of the subaperture. By processing the received signals of the emissions corresponding to different subapertures, the final image is obtained; detailed explanations of the focusing process in reception and of producing the reconstructed image are given in Sect. 5.3.4 of this book. The advantage of placing the virtual source in front of the array is that the constructed virtual array is closer to the imaging medium, which increases the imaging depth and also improves the SNR in deeper regions.
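A minimal sketch of the SASB transmit delays of (6.85) is given below (NumPy; the speed of sound, pitch, subaperture length, and virtual-source depth are illustrative assumptions):

```python
import numpy as np

c = 1540.0                                   # assumed speed of sound [m/s]
pitch = 0.3e-3                               # assumed element pitch [m]
L = 33                                       # assumed subaperture length (odd, so a central element exists)
x_p = (np.arange(L) - (L - 1) / 2) * pitch   # element x-coordinates of one subaperture
x_v, z_v = 0.0, 20e-3                        # virtual source on the central-element axis, 20 mm deep

# Transmit delays of Eq. (6.85): the wave from every element arrives at the
# virtual (focal) point simultaneously.
tau = (z_v - np.sqrt(z_v**2 + (x_p - x_v)**2)) / c
tau -= tau.min()                             # shift so all delays are non-negative (a common convention)
print(tau * 1e9)                             # delays in nanoseconds
```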

6.2.5.2 Diverging Wave Imaging

Assume that a subaperture of length $L$ is used to generate a diverging wave. The $z$-coordinate of the virtual source is determined by the subaperture length and the angular aperture $\alpha$ as follows (Zhang et al. 2016):

$$z_V = -\frac{L}{2\tan(\alpha)}. \tag{6.86}$$

Fig. 6.20 Schematic of the DWI technique. The emission is emulated with the virtual source $V$ behind the array

Note that $z_V \le 0$. Also, note that the $x$-coordinate of the virtual source lies within the $x$-coordinates of the array elements. To perform the DWI technique, the transmit time delays applied to the elements differ from those calculated for PWI. More precisely, the $i$th element of the subaperture should be time-delayed according to the following equation instead of (6.2), in order to generate a diverging wave instead of a plane wave:

$$\tau_i = \frac{1}{c}\left(\sqrt{\left(x_{p_i} - x_V\right)^2 + z_V^2} + z_V\right), \tag{6.87}$$

where the last term inside the parentheses, i.e., $z_V$, is an offset. Time delays are applied to each subaperture according to the above equation; the schematic of this scenario is presented in Fig. 6.20. In order to apply dynamic focusing in reception and obtain a high-quality image, the round-trip travel time of the propagated wave should be calculated for each emission. In this regard, the round-trip travel time between the $i$th element and the $n$th imaging point is calculated as

$$\tau = \frac{1}{c}\left(\sqrt{(x_n - x_V)^2 + (z_n - z_V)^2} + z_V + \sqrt{\left(x_n - x_{p_i}\right)^2 + z_n^2}\right). \tag{6.88}$$

By calculating the time delay for each imaging point and combining the results obtained from different emissions, the final reconstructed image is achieved; the general processing steps of image reconstruction are similar to those of the CPWI method. Note that DWI can be interpreted as a bridge between STA and PWI, which is discussed in detail in Sect. 6.1. Also, note that the STA, DWI, and PWI methods all belong to the MLA category, since several scan lines are received for each emission in these imaging techniques. As MLA systems can receive multiple scan lines per emission, the frame rate is improved; such scenarios can therefore be used in cases such as 3D imaging of the heart, where generating a volume image requires the collection of many scan lines.
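The following sketch evaluates the DWI quantities (6.86)-(6.88) for one subaperture (all parameter values, the imaging point, and the element index are illustrative assumptions):

```python
import numpy as np

c, pitch, L = 1540.0, 0.3e-3, 64             # assumed medium/array parameters
alpha = np.deg2rad(30)                       # assumed angular aperture of the diverging wave
x_p = (np.arange(L) - (L - 1) / 2) * pitch   # subaperture element x-coordinates
x_v = 0.0
z_v = -L * pitch / (2 * np.tan(alpha))       # virtual source behind the array, Eq. (6.86)

# Transmit delays of Eq. (6.87); z_v acts as an offset, so the delays start at zero.
tau_tx = (np.sqrt((x_p - x_v)**2 + z_v**2) + z_v) / c

# Round-trip time of Eq. (6.88) for one imaging point and one receiving element.
x_n, z_n = 2e-3, 30e-3                       # an illustrative imaging point
i = 10                                       # an illustrative receive element index
tau_rt = (np.sqrt((x_n - x_v)**2 + (z_n - z_v)**2) + z_v
          + np.sqrt((x_n - x_p[i])**2 + z_n**2)) / c
print(tau_tx.max() * 1e9, tau_rt * 1e6)      # [ns] and [us]
```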


6.3 Image Reconstruction Algorithms in Synthetic Aperture Imaging

The image reconstruction algorithms developed to obtain the final image in synthetic aperture-based imaging systems fall into two categories: time domain and frequency domain algorithms. One of the most commonly used time domain algorithms is the well-known DAS beamformer. In addition to this non-adaptive algorithm, all the other time domain algorithms previously described in Sect. 6.1.5 can also be used in the STA technique. Note that the time domain algorithms reconstruct the image pixel by pixel, which involves a high computational burden. To address this limitation, the image reconstruction can be performed in the frequency domain. In particular, the range-doppler algorithm (RDA), the chirp scaling algorithm (CSA), and the wavenumber algorithm are frequency domain methods widely used in SAR imaging. In the following, some of these frequency domain algorithms that are used in medical image reconstruction are discussed.

6.3.1 Wavenumber Algorithm

In the RDA and CSA, some simplifying approximations are made during the reconstruction process, whereas the wavenumber algorithm yields the exact solution. Moreover, in this algorithm, the defined forward problem is solved by interpolation in the frequency domain; in terms of computational complexity, the wavenumber algorithm is therefore very efficient among the mentioned frequency domain methods. By applying some modifications to this algorithm, it has become possible to use it in medical STA imaging (Stepinski 2007; Hunter et al. 2008; Moghimirad et al. 2015, 2016b). The wavenumber algorithm briefly comprises three main steps: a 3D FFT of the received signals, interpolation onto a wavenumber grid, and a 3D IFFT of the interpolated data. Note that the wavenumber algorithm needs the received signals in the spatio-temporal frequency domain; as the spatial dimensions are 2D, the spatio-temporal frequency domain is obtained by a 3D FFT of the received signals. In the following, this algorithm is explained in more detail.

Suppose the imaging region consists of a number of scatterers, and consider the position of the $n$th scatterer to be $(x_n, z_n)$. The scatterer distribution is written as

$$f(x, z) = \sum_n a_n\, \delta\left(x - x_n,\, z - z_n\right), \tag{6.89}$$

where $a_n$ denotes the amplitude of the $n$th scatterer. If the transmitted signal is denoted as $p(t)$, the received echo corresponding to transmitting element $u$ and receiving element $v$ is formulated as

$$S(\omega, u, v) = P(\omega) \sum_n S_n(\omega, u, v), \tag{6.90}$$

where $P(\omega)$ is the Fourier transform of $p(t)$, and $S_n(\omega, u, v)$ is the received signal corresponding to the $n$th scatterer, which is expressed as

$$S_n(\omega, u, v) = B(\omega, \theta_{1,n})\, B(\omega, \theta_{2,n})\, G(\omega, r_{1,n})\, G(\omega, r_{2,n}). \tag{6.91}$$

In the above equation, $B(\omega, \theta)$ denotes the beampattern of the element, $\theta_{1,n}$ and $\theta_{2,n}$ are the angles between the transmitting/receiving elements and the $n$th scatterer, and $G(\omega, r)$ denotes the Green function. Also, $r_{1,n}$ and $r_{2,n}$ are the distances between the transmitting/receiving elements and the $n$th point scatterer. By applying the IFFT to (6.90), the received signal in the time domain is obtained as

$$s(t, u, v) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S(\omega, u, v)\, \exp(j\omega t)\, d\omega. \tag{6.92}$$

To define a simplified forward problem, and consequently solve it efficiently, it is assumed that the transmitted signal is an impulse, i.e., $p(t) = \delta(t)$, and that the element beampatterns are equal to 1. Under these assumptions, the simplified forward problem is expressed as

$$S(\omega, u, v) = \iint f(x, z)\, G(\omega, x - u, z)\, G(\omega, x - v, z)\, dx\, dz. \tag{6.93}$$

The reconstructed image, i.e., the solution to the above forward problem, is obtained by isolating the scatterer distribution $f(x, z)$. To this end, the 2D Green function is considered as below (Hunter et al. 2008):

$$G(\omega, x, z) = \frac{-j}{4\pi} \int_{-\infty}^{\infty} \frac{\exp\!\left(jk_x x - j|z|\sqrt{k^2 - k_x^2}\right)}{\sqrt{k^2 - k_x^2}}\, dk_x, \tag{6.94}$$

where $k = \omega/c$ is the wavenumber. By substituting the above equation into (6.93) and Fourier transforming with respect to $u$ and $v$, we have

$$S(\omega, k_u, k_v) = \frac{-F\!\left(k_u + k_v,\ \sqrt{k^2 - k_u^2} + \sqrt{k^2 - k_v^2}\right)}{(4\pi)^2\, \sqrt{k^2 - k_u^2}\, \sqrt{k^2 - k_v^2}}, \tag{6.95}$$

where $k_u$ and $k_v$ are the wavenumbers corresponding to transmitting element $u$ and receiving element $v$, respectively, and $F(\cdot)$ represents the Fourier transform operator. For the coordinate transformation, the wavenumbers $k_x$ and $k_z$ are defined as

$$k_x = k_u + k_v, \tag{6.96}$$
$$k_z = \sqrt{k^2 - k_u^2} + \sqrt{k^2 - k_v^2}. \tag{6.97}$$

Indeed, the wavenumbers $k_u$ and $k_v$ are mapped to $k_x$ and $k_z$; that is, the data are interpolated onto the wavenumber grid. By applying this interpolation, the 2D Fourier transform of the scatterer distribution, i.e., $F(k_x, k_z)$, is established. More precisely, we have

$$F(k_x, k_z) = -(4\pi)^2\, \mathcal{S}^{-1}\!\left\{ \sqrt{k^2 - k_u^2}\, \sqrt{k^2 - k_v^2}\; S(\omega, k_u, k_v) \right\}, \tag{6.98}$$

where $\mathcal{S}^{-1}\{\cdot\}$ is the Stolt mapping, which interpolates the data onto the wavenumber grid. Finally, by applying the 2D-IFFT to the above equation, the reconstructed image is achieved. The wavenumber algorithm has also been generalized to STA imaging systems based on virtual sources (Moghimirad et al. 2016a), where it has been shown that an image with quality similar to that of the time domain DAS beamformer can be obtained with much less computational complexity.
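To make the three steps concrete, the sketch below implements a deliberately simplified monostatic 2D variant of the FFT-interpolate-IFFT pipeline on placeholder data; the multistatic mapping of (6.96)-(6.97) follows the same pattern with one extra spatial axis. This is an illustrative sketch under the stated simplifications, not the exact algorithm of the cited works:

```python
import numpy as np

# Monostatic 2D sketch of the wavenumber (Stolt) algorithm: FFT, interpolation
# onto a wavenumber grid, IFFT. All parameters and the random data are placeholders.
c, fs, pitch, nt, nx = 1540.0, 40e6, 0.3e-3, 512, 128
s_tx = np.random.randn(nt, nx)                  # placeholder echo data (time x element)

S = np.fft.fft2(s_tx)                           # to the (omega, k_x) domain
omega = 2 * np.pi * np.fft.fftfreq(nt, 1 / fs)
kx = 2 * np.pi * np.fft.fftfreq(nx, pitch)
order = np.argsort(omega)                       # np.interp needs ascending abscissae

kz = omega / (c / 2)                            # uniform target k_z grid
F = np.zeros_like(S)
for j in range(nx):                             # Stolt mapping, one k_x column at a time:
    # resample from the omega grid using omega(k_z) = (c/2)*sqrt(kx^2 + kz^2);
    # components mapped outside the sampled band are clamped by np.interp (crude,
    # but acceptable for a sketch).
    om_needed = (c / 2) * np.sign(kz) * np.sqrt(kx[j] ** 2 + kz ** 2)
    F[:, j] = (np.interp(om_needed, omega[order], S[order, j].real)
               + 1j * np.interp(om_needed, omega[order], S[order, j].imag))

image = np.fft.ifft2(F).real                    # back to (z, x)
print(image.shape)
```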

6.3.2 Range-Doppler Algorithm

With the wavenumber algorithm, data processing is done in the spatio-temporal Fourier domain. Spatio-temporal Fourier transforming requires loading a large amount of data at once, since the number of time samples of the received signals is high; in addition, zero-padding is required to avoid errors in the frequency domain interpolation. The wavenumber algorithm therefore requires a large memory. In the RDA, by contrast, data processing is done in the spatial Fourier domain: the processing in the spatial and temporal dimensions is performed independently, and as a result, the memory requirement is roughly halved compared to the wavenumber algorithm.

The RDA was primarily developed for monostatic signal processing, where the transmitting and receiving elements are the same. The general process of this algorithm includes spatial Fourier transforming the received signal to generate the range-doppler data, applying the range-cell migration correction (RCMC) operation to the range-doppler domain data, applying a matched filter, and finally inverse Fourier transforming the result (Jakovljevic et al. 2022, 2021). In order to use the RDA with multistatic signals (such as signals received from the STA imaging system), some modifications should be applied, although the general processing steps remain the same. In the following, this algorithm is described in more detail.

Consider the received signal in the multistatic model corresponding to a scatterer located at depth $R_0$:

$$s(t, u, v) = A_0\, w_r\!\left(t - \frac{R_u(x_{p_u})}{c} - \frac{R_v(x_{p_v})}{c}\right) \exp\!\left[-j2\pi f_0 \frac{R_u(x_{p_u})}{c}\right] \exp\!\left[-j2\pi f_0 \frac{R_v(x_{p_v})}{c}\right], \tag{6.99}$$

where $w_r$ is the signal envelope along the range dimension, and $A_0$ is the signal amplitude. Note that the above received-signal model corresponds to the $u$th transmitting element and the $v$th receiving element. Also, we have

$$R_u(x_{p_u}) = \sqrt{R_0^2 + x_{p_u}^2}, \tag{6.100}$$
$$R_v(x_{p_v}) = \sqrt{R_0^2 + x_{p_v}^2}. \tag{6.101}$$

The first step of the RDA is to apply a 2D-FFT along the spatial dimensions, which results in

$$S(t, k_u, k_v) = A_0\, w_r\!\left(t - \frac{R_u(k_u)}{c} - \frac{R_v(k_v)}{c}\right) \exp\!\left[-j\pi \frac{2 f_0 R_0}{c} \sqrt{1 - \left(\frac{k_u c}{f_0}\right)^2}\,\right] \exp\!\left[-j\pi \frac{2 f_0 R_0}{c} \sqrt{1 - \left(\frac{k_v c}{f_0}\right)^2}\,\right], \tag{6.102}$$

where

$$R_u(k_u) = \frac{R_0}{\sqrt{1 - \left(\frac{k_u c}{f_0}\right)^2}}, \tag{6.103}$$
$$R_v(k_v) = \frac{R_0}{\sqrt{1 - \left(\frac{k_v c}{f_0}\right)^2}}. \tag{6.104}$$

Note that in the range-doppler domain, the waveforms corresponding to targets at a similar depth overlap, i.e., the range cells migrate. To correct this phenomenon, the RCMC operation is applied in the range-doppler domain to align the range envelopes ($w_r$). The RCMC operation for targets located at the same depth can be applied jointly, which increases the processing speed compared to the DAS algorithm, where the migration is corrected for each waveform separately. The RCMC operation is performed by interpolation along time, according to (6.103) and (6.104). Once the RCMC operation is finished, the next step is to apply a 2D matched filter in the $k_u$ and $k_v$ dimensions, with the aim of removing the azimuth phase of the signal. Finally, by applying a 2D-IFFT to the resulting signal, the reconstructed image is achieved.
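The sketch below illustrates the migration term of (6.103) and an RCMC-style resampling along time on placeholder range-doppler data; for simplicity, a monostatic-style round-trip shift is assumed, and all parameters are illustrative:

```python
import numpy as np

# Range-cell migration per Eq. (6.103) and an RCMC-style time resampling.
c, f0, R0 = 1540.0, 5e6, 30e-3
ku = np.linspace(-0.8 * f0 / c, 0.8 * f0 / c, 128)   # azimuth wavenumbers (kept below f0/c)
Ru = R0 / np.sqrt(1.0 - (ku * c / f0) ** 2)          # migrated range per Eq. (6.103)

# RCMC: shift each azimuth-frequency column so its range envelope lines up at R0.
fs, nt = 40e6, 1024
t = np.arange(nt) / fs
data = np.random.randn(nt, ku.size)                  # placeholder range-doppler data
shift = 2 * (Ru - R0) / c                            # monostatic-style round-trip delay excess
rcmc = np.stack([np.interp(t + shift[j], t, data[:, j]) for j in range(ku.size)], axis=1)
print(rcmc.shape)
```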


6.4 Compressive Sensing and Beamforming

In signal processing, the first step is to sample the input analog signal and convert it into a discrete-time signal. The sampling should be done in such a way that the prominent information of the signal is preserved, so that the original signal can be recovered. According to the Nyquist theorem, a sampling rate of at least twice the maximum signal frequency ($f_s \ge 2 f_{max}$) ensures that this requirement is met, and the original signal can then be recovered correctly from the samples. Note that the Nyquist theorem assumes the signal is band-limited. After the sampling process is applied and the discrete-time signal is obtained, compression is performed by discarding a large amount of the data and retaining only a limited number of samples; this step is known as the encoding process. Compression is used to optimize data storage and also to transmit the data. One should note that the sampling process itself (at the Nyquist rate) can be considered a form of compression. The reconstruction process is then performed to recover the original signal from the encoded data.

The problem is that, in most cases, acquisition at the Nyquist sampling rate leads to a huge amount of stored data, and data transmission becomes problematic as the number of stored samples increases. Therefore, a compression method is required in which the sampling rate is much lower than the Nyquist rate while the original signal can still be reconstructed successfully from the limited number of retained samples. Using the compressive sensing (CS) theorem, only a few samples of the data are retained for reconstruction, and the rest are discarded. Compression according to the CS theorem has a wide range of applications, such as audio and video data storage, and it has been shown that no information is lost during the reconstruction process. Using the CS method, only the desired samples are acquired during data acquisition, instead of compressing after the entire dataset has been received; this, in turn, speeds up the data acquisition process.

The CS theorem is based on a priori knowledge of the sparsity of the data: assuming that the data is sparse, or that a sparse representation can be defined for it using a transformation basis, the sampling requirement becomes far less restrictive than that of the Nyquist theorem, so far fewer samples than the Nyquist rate are needed. CS is also known as sparse sampling, since the sampling is performed on sparse data. If the data is non-sparse, a transformation basis (if available) is used to map the data onto a sparse domain, and the CS method is applied to the transformed data. After the data is compressed, reconstruction is performed to recover the signal of interest. US images can be reconstructed based on the CS theorem, which is referred to as the CS-based method in this book. From the above explanations, it can be concluded that signal processing based on the CS theorem includes two main parts: compression (encoding) and reconstruction (decoding). In the following sections,


these two main steps of the CS-based method are discussed separately, with the goal of processing medical US data.

6.4.1 Compression

In medical US imaging, the signals received from the array elements ($\mathbf{s}_i \in \mathbb{C}^{M \times 1}$, $1 \le i \le N$) are used to obtain the reconstructed image of the medium $\mathbf{Y} \in \mathbb{C}^{N_x \times N_y}$. The reshaped beamformed data is denoted as $\mathbf{y}_r \in \mathbb{C}^{N_x N_y \times 1}$ and used in the following formulations. The beamformed data $\mathbf{y}_r$ is represented as

$$\mathbf{y}_r = \Psi\, \tilde{\mathbf{y}}_r, \tag{6.105}$$

where $\Psi \in \mathbb{C}^{N_x N_y \times N_x N_y}$ is the transformation matrix, each column of which represents a transform-domain basis vector; $\Psi$ is also known as the sparsifying basis. Also, $\tilde{\mathbf{y}}_r \in \mathbb{C}^{N_x N_y \times 1}$ is the sparse representation of the (beamformed) signal of interest $\mathbf{y}_r$. According to the above equation, a non-sparse signal is represented as a sparse signal using a transformation matrix; the DFT, DCT, and wavelet bases are examples of transformation matrices used to create a sparse representation of non-sparse data. Assuming only $k$ entries of the sparse vector $\tilde{\mathbf{y}}_r$ are non-zero, $\mathbf{y}_r$ is called $k$-sparse in the basis $\Psi$ ($k \ll N_x N_y$). If the original signal is inherently sparse, we have $\Psi = \mathbf{I}$, i.e., $\mathbf{y}_r = \tilde{\mathbf{y}}_r$. Using the CS method, the forward problem that relates the received signals to the beamformed data is written as

$$\mathbf{s}_i = \Phi(i)\, \mathbf{y}_r + \mathbf{n}, \tag{6.106}$$

where $\Phi(i) \in \mathbb{C}^{M \times N_x N_y}$ is known as the measurement matrix. Since the received signal is contaminated with noise, an additive noise term $\mathbf{n} \in \mathbb{C}^{M \times 1}$ is also included in the forward problem. According to (6.105) and (6.106), the forward problem is rewritten as

$$\mathbf{s}_i = \Phi(i)\, \Psi\, \tilde{\mathbf{y}}_r + \mathbf{n} = \mathbf{A}(i)\, \tilde{\mathbf{y}}_r + \mathbf{n}, \tag{6.107}$$

where $\mathbf{A}(i) = \Phi(i)\Psi$. It can be seen from the above equation that the CS forward problem is written in terms of the sparse representation of the data, and the recovery of the beamformed data is performed by solving this forward problem. One problem that may arise is that two different $k$-sparse signals $\mathbf{y}_{r_1}$ and $\mathbf{y}_{r_2}$ are both mapped to the same compressed data by the matrix $\Phi$. To prevent this, the measurement matrix $\Phi$ should be designed to satisfy the restricted isometry property (RIP) (Candes et al. 2006), defined as

$$(1 - \epsilon_k)\, \big\|\mathbf{y}_{r_1} - \mathbf{y}_{r_2}\big\|_2^2 \;\le\; \big\|\Phi\, \mathbf{y}_{r_1} - \Phi\, \mathbf{y}_{r_2}\big\|_2^2 \;\le\; (1 + \epsilon_k)\, \big\|\mathbf{y}_{r_1} - \mathbf{y}_{r_2}\big\|_2^2, \tag{6.108}$$

where $\epsilon_k$ is a small value. The RIP states that the distance between any two $k$-sparse signals should be well preserved in the measurement domain; in other words, $\Psi$ and $\Phi$ should not be able to sparsely represent each other. Based on this condition, the measurement matrix is designed for a fixed $\Psi$. The RIP condition requires the matrices $\Psi$ and $\Phi$ to be incoherent, and different matrices can be designed for $\Phi$ that meet it.
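Since the RIP is combinatorial to verify directly, a common practical proxy is the mutual coherence of the columns of $\mathbf{A} = \Phi\Psi$. The following sketch checks this proxy for a random Gaussian $\Phi$ and a DCT $\Psi$ (all sizes and the choice of matrices are illustrative assumptions, not the design used in the book):

```python
import numpy as np

# Proxy check for incoherence: mutual coherence of the columns of A = Phi @ Psi.
n, m = 256, 64
i = np.arange(n)
Psi = np.sqrt(2.0 / n) * np.cos(np.pi * (i[:, None] + 0.5) * i[None, :] / n)
Psi[:, 0] /= np.sqrt(2.0)                        # orthonormal DCT-II basis (columns)

rng = np.random.default_rng(0)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian measurement matrix

A = Phi @ Psi
A /= np.linalg.norm(A, axis=0)                   # unit-norm columns
coherence = np.max(np.abs(A.T @ A) - np.eye(n))  # largest off-diagonal inner product
print("mutual coherence of A:", coherence)       # closer to 0 favors sparse recovery
```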

6.4.2 Reconstruction

Using the CS-based method in medical US imaging (Liebgott et al. 2013; Shen et al. 2012), it is possible to achieve a reconstructed image of well-preserved quality using a reduced number of elements. The data acquisition process therefore speeds up, and the power consumption is reduced. In particular, the CS-based method has been used in PWI (Ozkan et al. 2017; Szasz et al. 2016; Schiffner et al. 2012), where the reconstructed image is obtained by solving the forward problem presented in (6.106) through the following least-squares minimization problem:

$$\mathbf{y}_r^{\ast} = \underset{\mathbf{y}_r}{\operatorname{argmin}}\, \big\|\Phi(i)\, \mathbf{y}_r - \mathbf{s}_i\big\|_2^2, \tag{6.109}$$

where $\mathbf{y}_r$ is the $k$-sparse vector defined in (6.105). The above minimization problem states that the optimum solution is the one that minimizes the noise component. One should note that (6.109) does not lead to a closed-form solution, since $M \ll N_x N_y$, which means the measurements are few compared to the unknown parameters. To overcome this limitation and make the CS problem solvable, a new constraint is added to the minimization problem:

$$\mathbf{y}_r^{\ast} = \underset{\mathbf{y}_r}{\operatorname{argmin}}\, \big\|\Phi(i)\, \mathbf{y}_r - \mathbf{s}_i\big\|_2^2 + \alpha \big\|\mathbf{y}_r\big\|_0, \tag{6.110}$$

where the $\ell_0$-norm regularization term is the newly added constraint, and $\alpha$ is a constant parameter that balances the added constraint against the data-fit term. The general formulation of the $\ell_p$-norm of the vector $\mathbf{y}_r$ is

$$\big\|\mathbf{y}_r\big\|_p = \left( \sum_{n=1}^{N_x N_y} \big|\mathbf{y}_r(n)\big|^p \right)^{1/p}. \tag{6.111}$$


According to the above equation, the $\ell_0$-norm regularization term represents a sparsity constraint that counts the non-zero entries of the vector. With this constraint, however, the minimization problem presented in (6.110) is non-convex, and solving it is NP-hard. To tackle this problem, the sparsity constraint is approximated by replacing the $\ell_0$-norm regularization term with the $\ell_1$-norm, yielding the modified minimization problem

$$\mathbf{y}_r^{\ast} = \underset{\mathbf{y}_r}{\operatorname{argmin}}\, \big\|\Phi(i)\, \mathbf{y}_r - \mathbf{s}_i\big\|_2^2 + \alpha \big\|\mathbf{y}_r\big\|_1. \tag{6.112}$$

It has been shown that the $\ell_1$-norm is an appropriate approximation of the $\ell_0$-norm regularization term (Jiang et al. 2014). Several MATLAB toolboxes are available for obtaining the optimum solution of the modified minimization problem, e.g., Sturm (1999). Once the optimum solution $\mathbf{y}_r^{\ast}$ is obtained, the result is reshaped into an $N_x \times N_y$ matrix, yielding the beamformed data.
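Outside such toolboxes, (6.112) can also be solved with a generic proximal-gradient (ISTA) iteration. The sketch below is such a generic solver on a tiny synthetic example; it is not the toolbox or implementation used in the book:

```python
import numpy as np

def ista(Phi, s, alpha, n_iter=200):
    """Minimal ISTA sketch for Eq. (6.112): min ||Phi y - s||_2^2 + alpha ||y||_1."""
    L = np.linalg.norm(Phi, 2) ** 2              # ||Phi||^2; Lipschitz handling below
    y = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ y - s)             # half the gradient of the data term
        z = y - grad / L                         # gradient step (step size 1/(2L) overall)
        y = np.sign(z) * np.maximum(np.abs(z) - alpha / (2 * L), 0.0)  # soft threshold
    return y

# Tiny synthetic demo: recover a 5-sparse vector from 64 random measurements.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256))
y_true = np.zeros(256); y_true[rng.choice(256, 5, replace=False)] = 1.0
s = Phi @ y_true
print(np.linalg.norm(ista(Phi, s, alpha=0.1) - y_true))
```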

6.4.3 Compressive Sensing in Plane Wave Imaging

From the above explanations, it is clear that the CS-based algorithm improves the data acquisition speed and does not need several emissions to obtain a high-quality image; it is therefore potentially applicable to ultrafast imaging. In particular, in Ozkan et al. (2017), the CS-based algorithm was applied to reconstruct images in PWI, and it was shown that a single plane wave is sufficient to achieve the best possible performance of the algorithm; this is evaluated in the next section of this book. The CS-based algorithm has also recently been combined with the adaptive MV algorithm to take advantage of both techniques (Paridar and Asl 2023b). In this combined algorithm, the CS method is used in the Rx direction to obtain the beamformed images associated with different emissions; then, in order to further improve the image quality (in terms of resolution) compared to the CS-based algorithm with a single emission, the adaptive MV algorithm is applied in the plane wave direction to coherently compound the beamformed images. As a result, the quality of the final reconstructed image is improved in terms of both contrast and resolution, inheriting the strengths of the CS and MV algorithms. Besides, in Goudarzi et al. (2022), the inverse problem of (6.107) was solved differently using the alternating direction method of multipliers in order to provide a flexible framework; the formulation was then extended, and the regularization term of the conventional minimization problem (presented in (6.112)) was modified to obtain better speckle preservation. The CS-based algorithm can also be efficiently used to identify point-like targets, such as microcalcifications, during the imaging process; this topic is discussed in more detail in Sect. 7.1 of this book.


To evaluate the performance of the CS-based method in PWI, the PICMUS data is used; the received signals associated with a single transmitted plane wave ($\theta_i = 0^\circ$) are considered, and the reconstruction process is performed. Different measurement matrices can be designed, as mentioned before. Here, the entries of the measurement matrix are filled according to the following relation:

$$\phi_{mn}(i) = \begin{cases} 1 & \text{if } \left|\tau_n(x_{p_i}) - t_m\right| < \Delta t \\ 0 & \text{otherwise}, \end{cases} \tag{6.113}$$

where $\phi_{mn}(i)$ denotes entry $(m, n)$ of the matrix $\Phi(i)$, $t_m$ is the temporal measurement associated with the $m$th sample of the received signal, and $\Delta t$ is the temporal interval between two consecutive received samples. The round-trip travel time $\tau_n(x_{p_i})$ is obtained according to (6.5) for $\theta_i = 0^\circ$. The measurement matrix of (6.113) has been used in medical US as well as in photoacoustic imaging (Ozkan et al. 2017; Zhang et al. 2013).

Figure 6.21a and b show the reconstructed images of the simulation resolution phantom obtained from CPWI using the original 2D-DAS beamformer (as a reference) and from PWI using the CS-based method. Applying the CS-based method to the received signals of a single transmission ($\theta = 0^\circ$) yields an image that is improved in terms of both resolution and contrast compared to CPWI using the non-adaptive DAS algorithm, in which 75 tilted plane waves are compounded. This improvement is due to the newly added sparsity constraint, $\|\mathbf{y}_r\|_1$. The simulation resolution phantom is inherently sparse, and therefore $\Psi = \mathbf{I}$. CPWI with the CS-based method and 75 tilted plane waves is also performed in addition to PWI, and the result is presented in Fig. 6.21c; to obtain this figure, the CS-based method is applied to the received signals of each transmitted plane wave, and the results are coherently summed to form the compounded image.

Fig. 6.21 Reconstructed images of the simulation resolution phantom using different processing methods; a CPWI using the 2D-DAS algorithm, b PWI using the CS-based method, and c CPWI using the CS-based method. $\alpha = 0.05$ is considered for the CS-based algorithm. The images are shown with a dynamic range of 60 dB

Comparing Fig. 6.21b and c, no significant improvement occurs as the number of plane waves increases; in other words, with the CS-based method, most of the improvement is obtained with far fewer than 75 plane waves. To better visualize this, the lateral variation plots for different numbers of plane waves are compared in Fig. 6.22. Figure 6.22a, which shows the lateral variations corresponding to Fig. 6.21a and b at a depth of 35 mm, confirms that the CS-based method using only a single plane wave improves the main lobe width and sidelobe levels compared to the CPWI image obtained with the DAS method. From Fig. 6.22b, it can be seen that with the CS-based method, CPWI with 75 plane waves improves on PWI: the sidelobes are smoother and better suppressed than with a single plane wave. The same conclusion follows from comparing Fig. 6.21b and c; however, the improvement is not considerable. Finally, Fig. 6.22c and d show the lateral variation plots associated with the CS-based reconstructions using 1, 5, and 75 plane waves, plotted together for easier comparison. No significant improvement is achieved by increasing the number of tilted plane waves; only a few transmissions (one or two) are therefore sufficient to obtain a reconstructed image of acceptable quality with the CS-based method, which in turn reduces the computational complexity of the process.

Fig. 6.22 Lateral variations of the simulation resolution phantom shown in Fig. 6.21 at the depth of 35 mm. Comparing the lateral variations obtained from a CPWI (75 plane waves) using the 2D-DAS algorithm and PWI using the CS-based method, b PWI and CPWI (75 plane waves) using the CS-based method, c PWI and CPWI (5 plane waves) using the CS-based method, and d CPWI using the CS-based method with 5 and 75 plane waves

The constant parameter $\alpha$ in the minimization problem (6.112) determines the contribution of the $\ell_1$-norm regularization term, as stated earlier, and the performance of the CS-based algorithm varies with it: as $\alpha$ increases, the sparsity term becomes more prominent, which results in stronger sidelobe suppression and contrast improvement; beyond a certain value, however, the result becomes over-sparse, and some information is lost from the reconstructed image. To show how this parameter affects the result, Fig. 6.23 presents the algorithm applied to the simulation resolution phantom for three different values of $\alpha$. Increasing the parameter suppresses the sidelobe level more efficiently, as seen even more clearly in the lateral variations shown in Fig. 6.24, but the point targets in the deeper regions of the phantom disappear.

Fig. 6.23 Reconstructed images of the simulation resolution phantom using the CS-based method with a $\alpha = 0.05$, b $\alpha = 0.1$, and c $\alpha = 0.2$. The images are shown with a dynamic range of 60 dB

The simulation resolution phantom of the PICMUS dataset is inherently sparse; the CS-based method can therefore be applied to it successfully without a transformation matrix. If the data is non-sparse, a sparse representation must be found in order to use the CS-based algorithm and obtain the final image, as mentioned earlier; alternatively, one can reduce the contribution of the $\ell_1$-norm constraint by assigning a small value to the parameter $\alpha$ in (6.112). Figure 6.25 shows the reconstructed images of the speckle-generating PICMUS datasets obtained from the CS-based method using different values of $\alpha$; as the contribution of the sparsity term decreases, the information of the non-sparse dataset is better preserved.
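Closing this section, the sketch below fills the binary measurement matrix of (6.113) for a single element at $\theta_i = 0^\circ$; the element position, the pixel grid, and the round-trip expression $\tau_n = (z_n + \sqrt{(x_n - x_{p_i})^2 + z_n^2})/c$ (standing in for (6.5), which is not reproduced here) are assumptions of this sketch:

```python
import numpy as np

# Sketch of the binary measurement matrix of Eq. (6.113) for one element and
# normal plane-wave incidence. All parameters and the grid are illustrative.
c, fs, M = 1540.0, 40e6, 2048
x_pi = 0.0                                        # assumed element x-position [m]
t = np.arange(M) / fs                             # temporal samples t_m
dt = 1.0 / fs                                     # sample interval Delta t

# Illustrative 32x32 pixel grid, flattened to Nx*Ny columns.
xg, zg = np.meshgrid(np.linspace(-5e-3, 5e-3, 32), np.linspace(10e-3, 20e-3, 32))
tau = (zg + np.sqrt((xg - x_pi) ** 2 + zg ** 2)).ravel() / c  # assumed round-trip time

Phi = (np.abs(tau[None, :] - t[:, None]) < dt).astype(float)  # M x (Nx*Ny), per Eq. (6.113)
print(Phi.shape, Phi.sum(axis=0).mean())          # roughly 1-2 nonzeros per column
```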


Fig. 6.24 Lateral variations plot corresponding to Fig. 6.23

Fig. 6.25 Reconstructed images of (a1–a3) simulation contrast, (b1–b3) experimental resolution, and (c1–c3) experimental contrast phantoms using the CS-based method with different values of $\alpha$. The images are shown with a dynamic range of 60 dB

6.5 Conclusion

In this chapter, the principles of ultrafast PWI were discussed. It was observed that this technique makes it possible to image fast-moving objects such as the human heart. To overcome the low quality of the image reconstructed from PWI, a coherent combination of several plane waves resulting from different emissions was developed, known as CPWI. Various image formation techniques for improving the quality of the reconstructed images in CPWI were then presented in detail. The DMAS algorithm, MV and its modified versions, and weighting coefficients such as the CF method are examples of these improved image reconstruction techniques, which can be applied in either the PW or the Rx direction; combinatorial techniques in which different algorithms are used in each direction were also discussed.

In addition to PWI, the principles of the STA imaging technique and its modified versions were discussed. Most of the modified versions of STA are developed to improve its frame rate. It was also stated that all the time domain image reconstruction algorithms used in PWI can be used in the STA imaging method as well. Moreover, the wavenumber and range-doppler algorithms were discussed as two common frequency domain image reconstruction techniques that speed up the image reconstruction process.

The CS algorithm was also introduced, and its performance was evaluated in CPWI; it improves on the conventional 2D-DAS algorithm in terms of noise suppression. Since the number of required samples in this algorithm is much smaller than the Nyquist rate, the CS algorithm also improves data storage. It was observed that with the CS algorithm, a limited number of plane waves (1 or 2) is sufficient to achieve the best possible performance. More precisely, when the CS algorithm is used to obtain the reconstructed image of each emission, increasing the number of emissions does not significantly improve the final image. It is therefore concluded that the processing speed of CPWI is improved by using the CS algorithm, since fewer emissions are required.

References

Afrakhteh S, Behnam H (2021) Coherent plane wave compounding combined with tensor completion applied for ultrafast imaging. IEEE Trans Ultrason Ferroelectr Freq Control 68(10):3094–3103
Austeng A, Nilsen C-IC, Jensen AC, Näsholm SP, Holm S (2011) Coherent plane-wave compounding and minimum variance beamforming. In: 2011 IEEE international ultrasonics symposium. IEEE, pp 2448–2451
Bercoff J, Chaffai S, Tanter M, Sandrin L, Catheline S, Fink M, Gennisson J, Meunier M (2003) In vivo breast tumor detection using transient elastography. Ultrasound Med Biol 29(10):1387–1396
Bottenus N, Üstüner KF (2015) Acoustic reciprocity of spatial coherence in ultrasound imaging. IEEE Trans Ultrason Ferroelectr Freq Control 62(5):852–861
Candes EJ, Romberg JK, Tao T (2006) Stable signal recovery from incomplete and inaccurate measurements. Commun Pure Appl Math 59(8):1207–1223
Chen Y, Kong Q, Xiong Z, Mao Q, Chen M, Lu C (2023a) Improved coherent plane-wave compounding using sign coherence factor weighting for frequency-domain beamforming. Ultrasound Med Biol 49(3):802–819
Chen Y, Xiong Z, Kong Q, Ma X, Chen M, Lu C (2023b) Circular statistics vector for improving coherent plane wave compounding image in Fourier domain. Ultrasonics 128:106856
Deylami AM, Jensen JA, Asl BM (2016) An improved minimum variance beamforming applied to plane-wave imaging in medical ultrasound. In: 2016 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–4
Fink M (1992) Time reversal of ultrasonic fields. I. Basic principles. IEEE Trans Ultrason Ferroelectr Freq Control 39(5):555–566
Garcia D, Le Tarnec L, Muth S, Montagnon E, Porée J, Cloutier G (2013) Stolt's f-k migration for plane wave ultrasound imaging. IEEE Trans Ultrason Ferroelectr Freq Control 60(9):1853–1867
Go D, Kang J, Yoo Y (2018) A new compounding method for high contrast ultrafast ultrasound imaging based on delay multiply and sum. In: Proceedings of the 2018 IEEE international ultrasonics symposium (IUS), Kobe, Japan, pp 22–25
Goudarzi S, Basarab A, Rivaz H (2022) Inverse problem of ultrasound beamforming with denoising-based regularized solutions. IEEE Trans Ultrason Ferroelectr Freq Control 69(10):2906–2916
Guo W, Wang Y, Yu J (2016) A sidelobe suppressing beamformer for coherent plane wave compounding. Appl Sci 6(11):359
Hashemseresht M, Afrakhteh S, Behnam H (2022) High-resolution and high-contrast ultrafast ultrasound imaging using coherent plane wave adaptive compounding. Biomed Signal Process Control 73:103446
Hunter AJ, Drinkwater BW, Wilcox PD (2008) The wavenumber algorithm for full-matrix imaging using an ultrasonic array. IEEE Trans Ultrason Ferroelectr Freq Control 55(11):2450–2462
Jakovljevic M, Michaelides R, Biondi E, Herickhoff C, Hyun D, Zebker H, Dahl J (2021) Adaptation of a range-doppler algorithm to multistatic signals from ultrasound arrays. In: 2021 IEEE international geoscience and remote sensing symposium (IGARSS). IEEE, pp 3269–3272
Jakovljevic M, Michaelides R, Biondi E, Hyun D, Zebker HA, Dahl JJ (2022) Adaptation of range-doppler algorithm for efficient beamforming of monostatic and multistatic ultrasound signals. IEEE Trans Ultrason Ferroelectr Freq Control 69(11):3165–3178


Jalilian H, Afrakhteh S, Iacca G, Demi L (2023) Increasing frame rate of echocardiography based on a novel 2D spatio-temporal meshless interpolation. Ultrasonics 131:106953
Jensen JA (1996) Field: a program for simulating ultrasound systems. In: 10th Nordic-Baltic conference on biomedical imaging, vol 4, supplement 1, part 1. Citeseer, pp 351–353
Jensen JA, Nikolov SI, Gammelmark KL, Pedersen MH (2006) Synthetic aperture ultrasound imaging. Ultrasonics 44:e5–e15
Jiang X, Zeng W-J, Yasotharan A, So HC, Kirubarajan T (2014) Minimum dispersion beamforming for non-Gaussian signals. IEEE Trans Signal Process 62(7):1879–1893
Karaman M, O'Donnell M (1998) Subaperture processing for ultrasonic imaging. IEEE Trans Ultrason Ferroelectr Freq Control 45(1):126–135
Karaman M, Bilge HS, O'Donnell M (1998) Adaptive multi-element synthetic aperture imaging with motion and phase aberration correction. IEEE Trans Ultrason Ferroelectr Freq Control 45(4):1077–1087
Kirchner T, Sattler F, Gröhl J, Maier-Hein L (2018) Signed real-time delay multiply and sum beamforming for multispectral photoacoustic imaging. J Imaging 4(10):121
Liang S, Wang L (2023) A study of wide unfocused wavefront for convex-array ultrasound imaging. Ultrasonics 107080
Liebgott H, Prost R, Friboulet D (2013) Pre-beamformed RF signal reconstruction in medical ultrasound using compressive sensing. Ultrasonics 53(2):525–533
Liebgott H, Rodriguez-Molares A, Cervenansky F, Jensen JA, Bernard O (2016) Plane-wave imaging challenge in medical ultrasound. In: 2016 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–4
Lockwood G, Foster FS (1995) Design of sparse array imaging systems. In: 1995 IEEE ultrasonics symposium proceedings. An international symposium, vol 2. IEEE, pp 1237–1243
Lockwood GR, Li P-C, O'Donnell M, Foster FS (1996) Optimizing the radiation pattern of sparse periodic linear arrays. IEEE Trans Ultrason Ferroelectr Freq Control 43(1):7–14
Luo L, Tan Y, Li J, Zhang Y, Gao X (2022) Wedge two-layer medium ultrasonic plane wave compounding imaging based on sign multiply coherence factor combined with delay multiply and sum beamforming. NDT & E Int 127:102601
Makouei F, Asl BM, Thurmann L, Tomov BG, Stuart MB et al (2020) 3-D synthetic aperture high volume rate tensor velocity imaging using 1024 element matrix probe. In: 2020 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–4
Mallart R, Fink M (1991) The van Cittert-Zernike theorem in pulse echo measurements. J Acoust Soc Am 90(5):2718–2727
Matrone G, Savoia AS, Magenes G (2016a) Filtered delay multiply and sum beamforming in plane-wave ultrasound imaging: tests on simulated and experimental data. In: 2016 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–4
Matrone G, Savoia AS, Caliano G, Magenes G (2016b) Ultrasound plane-wave imaging with delay multiply and sum beamforming and coherent compounding. In: 2016 38th annual international conference of the IEEE engineering in medicine and biology society (EMBC). IEEE, pp 3223–3226
Moghimirad E, Hoyos CAV, Mahloojifar A, Asl BM, Jensen JA (2015) Fourier beamformation of multistatic synthetic aperture ultrasound imaging. In: 2015 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–4
Moghimirad E, Hoyos CAV, Mahloojifar A, Asl BM, Jensen JA (2016a) Synthetic aperture ultrasound Fourier beamformation using virtual sources. IEEE Trans Ultrason Ferroelectr Freq Control 63(12):2018–2030
Moghimirad E, Mahloojifar A, Mohammadzadeh Asl B (2016b) Computational complexity reduction of synthetic-aperture focus in ultrasound imaging using frequency-domain reconstruction. Ultrason Imaging 38(3):175–193
Montaldo G, Tanter M, Bercoff J, Benech N, Fink M (2009) Coherent plane-wave compounding for very high frame rate ultrasonography and transient elastography. IEEE Trans Ultrason Ferroelectr Freq Control 56(3):489–506


Nguyen NQ, Prager RW (2018) A spatial coherence approach to minimum variance beamforming for plane-wave compounding. IEEE Trans Ultrason Ferroelectr Freq Control 65(4):522–534
Nikolov S, Gammelmark K, Jensen JA (1999) Recursive ultrasound imaging. In: 1999 IEEE ultrasonics symposium proceedings. International symposium (Cat. No. 99CH37027), vol 2. IEEE, pp 1621–1625
Nock LF, Trahey GE (1992) Synthetic receive aperture imaging with phase correction for motion and for tissue inhomogeneities. I. Basic principles. IEEE Trans Ultrason Ferroelectr Freq Control 39(4):489–495
Ozkan E, Vishnevsky V, Goksel O (2017) Inverse problem of ultrasound beamforming with sparsity constraints and regularization. IEEE Trans Ultrason Ferroelectr Freq Control 65(3):356–365
Paridar R, Asl BM (2023a) Ultrafast plane wave imaging using tensor completion-based minimum variance algorithm. Ultrasound Med Biol 49(7):1627–1637
Paridar R, Asl BM (2023b) Plane wave ultrasound imaging using compressive sensing and minimum variance beamforming. Ultrasonics 127:106838
Paridar R, Asl BM (2023c) Frame rate improvement in ultrafast coherent plane wave compounding. Ultrasonics 107136
Qi Y, Wang Y, Yu J, Guo Y (2018) 2-D minimum variance based plane wave compounding with generalized coherence factor in ultrafast ultrasound imaging. Sensors 18(12):4099
Qi Y, Wang Y, Wang Y (2021) United Wiener postfilter for plane wave compounding ultrasound imaging. Ultrasonics 113:106373
Rindal OMH, Austeng A (2016) Double adaptive plane-wave imaging. In: 2016 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–4
Rostamikhanghahi H, Sakhaei SM (2023) Synthetic aperture ultrasound imaging through adaptive integrated transmitting-receiving beamformer. Ultrason Imaging 45(3):101–118
Schiffner M, Jansen T, Schmitz G (2012) Compressed sensing for fast image acquisition in pulse-echo ultrasound. Biomed Eng/Biomedizinische Technik 57(SI-1-Track-B)
Schwab H-M, Lopata R (2023) A Radon diffraction theorem for plane wave ultrasound imaging. J Acoust Soc Am 153(2):1015–1026
Shen C-C, Hsieh P-Y (2019) Two-dimensional spatial coherence for ultrasonic DMAS beamforming in multi-angle plane-wave imaging. Appl Sci 9(19):3973
Shen M, Zhang Q, Li D, Yang J, Li B (2012) Adaptive sparse representation beamformer for high-frame-rate ultrasound imaging instrument. IEEE Trans Instrum Meas 61(5):1323–1333
Stepinski T (2007) An implementation of synthetic aperture focusing technique in frequency domain. IEEE Trans Ultrason Ferroelectr Freq Control 54(7):1399–1408
Sturm JF (1999) Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim Methods Softw 11(1–4):625–653
Szasz T, Basarab A, Kouamé D (2016) ℓ1-norm regularized beamforming in ultrasound imaging. In: 2016 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–3
Tan Y, Luo L, Li J, Zhang Y, Gao X, Peng J (2022) Two-dimensional mean adaptive zero-crossing factor weighted ultrasound plane wave imaging. J Nondestr Eval 41(2):1–17
Tong L, Gao H, Choi HF, D'hooge J (2012) Comparison of conventional parallel beamforming with plane wave and diverging wave imaging for cardiac applications: a simulation study. IEEE Trans Ultrason Ferroelectr Freq Control 59(8):1654–1663
Trahey GE, Nock LF (1992) Synthetic receive aperture imaging with phase correction for motion and for tissue inhomogeneities. II. Effects of and correction for motion. IEEE Trans Ultrason Ferroelectr Freq Control 39(4):496–501
Vignon F, Burcher MR (2008) Capon beamforming in medical ultrasound imaging with focused beams. IEEE Trans Ultrason Ferroelectr Freq Control 55(3):619–628
Wang Y, Zheng C, Peng H, Zhang C (2018) Coherent plane-wave compounding based on normalized autocorrelation factor. IEEE Access 6:36927–36938
Wang Y, Zheng C, Peng H, Wang Y (2022) High-quality coherent plane-wave compounding using enhanced covariance-matrix-based statistical beamforming. Appl Sci 12(21):10973


Yan X, Qi Y, Wang Y, Wang Y (2021) Regional-lag signed delay multiply and sum beamforming in ultrafast ultrasound imaging. IEEE Trans Ultrason Ferroelectr Freq Control 69(2):580–591
Zhang X, Wang Q (2023) Improving lateral resolution and contrast by combining coherent plane-wave compounding with adaptive weighting for medical ultrasound imaging. Ultrasonics 132:106972
Zhang Y, Wang Y, Zhang C (2013) Efficient discrete cosine transform model-based algorithm for photoacoustic image reconstruction. J Biomed Opt 18(6):066008
Zhang M, Varray F, Besson A, Carrillo RE, Viallon M, Garcia D, Thiran J-P, Friboulet D, Liebgott H, Bernard O (2016) Extension of Fourier-based techniques for ultrafast imaging in ultrasound with diverging waves. IEEE Trans Ultrason Ferroelectr Freq Control 63(12):2125–2137
Zhang X, Xu Y, Wang N, Jiao Y, Cui Y (2023) A novel approach to tele-ultrasound imaging: compressive beamforming in Fourier domain for ultrafast ultrasound imaging. Appl Sci 13(5):3127
Zhao J, Wang Y, Zeng X, Yu J, Yiu BY, Alfred C (2015) Plane wave compounding based on a joint transmitting-receiving adaptive beamformer. IEEE Trans Ultrason Ferroelectr Freq Control 62(8):1440–1452
Ziksari MS, Asl BM (2020) Minimum variance combined with modified delay multiply-and-sum beamforming for plane-wave compounding. IEEE Trans Ultrason Ferroelectr Freq Control 68(5):1641–1652
Ziksari MS, Asl BM (2023) Fast beamforming method for plane wave compounding based on beamspace adaptive beamformer and delay-multiply-and-sum. Ultrasound Med Biol 49(5):1164–1172

Chapter 7

Ongoing Research Areas in Ultrasound Beamforming

Abstract This chapter, as the last chapter of the book, presents ongoing research areas in ultrasound beamforming. It deals with the application of ultrasound imaging to point target detection (e.g., kidney stones and microcalcifications), as well as the various techniques presented in this regard. Furthermore, this chapter gives a brief introduction to deep learning and its applications in ultrasound data processing, which is the basis of many studies today. Finally, it ends with a brief explanation of super-resolution ultrasound imaging and its applications.

Keywords Adaptive beamforming · Point detection · Super-resolution · Deep learning · Neural network

7.1 Point Target Detection Using Ultrasound Imaging

In many applications of US imaging, point target detection in the tissue is helpful in the diagnosis and treatment of diseases. In particular, microcalcifications are small calcium deposits in the breast tissue, at the scale of 0.1–1 mm, that appear as point targets in the reconstructed images. The size, number, and distribution pattern of these microcalcifications are used to determine whether a breast lesion is malignant or benign (Ouyang et al. 2019; Thon et al. 2021); identifying these point targets in the reconstructed image is therefore an important factor in the diagnosis and treatment of breast cancer. Kidney stone imaging, biopsy needle tracking, and contrast microbubbles are other important examples in which detecting point targets is of importance. Identifying point targets in medical US imaging is challenging due to the presence of the background speckle. One way to improve point detection is to perform high-frequency US imaging (with a central frequency above 7 MHz), which yields a better-resolution image and consequently better identification of small point targets; however, the penetration depth of high-frequency signals is limited compared to low-frequency signals. To overcome this problem, it has been proposed to use the second-order ultrasound field (SURF) imaging technique, in which dual-frequency bands are used

in transmission (Fl et al. 2017). In this technique, the low-frequency pulse is used as the manipulation pulse, and the high-frequency pulse as the imaging pulse: the high-frequency pulse images the tissue while it is under the influence of the manipulation pulse. However, controlling the time interval between the low-frequency and high-frequency pulses during transmission is challenging. Time-reversal-based techniques can also be used to identify point targets. In the time-reversal algorithm, an unfocused plane wave is propagated toward the imaging medium, and the corresponding reflected echoes return to the array; the received signals are time-reversed by the array and re-transmitted toward the medium, which makes the propagated wave more focused than the initial transmitted plane wave. By performing this process iteratively, the wave becomes focused on the point target. This technique and its modified versions have been used to obtain the positions of coherent targets and separate them from an inhomogeneous background. The advantage of this technique is that the focusing is performed automatically, without the need for a propagation model (Robert et al. 2006; Labyed and Huang 2011, 2013); its drawback is that it is easily affected by noise. The photoacoustic imaging technique, as a hybrid imaging modality, and ultrasound elastography, which maps the stiffness of the tissue, have also been used to identify microcalcifications (Ko et al. 2014; Kim et al. 2014). Apart from imaging techniques, a variety of processing algorithms have been proposed to identify point targets in speckle-generating media. In particular, the CS-based algorithm can be used to recover point targets, given its properties discussed in Sect. 6.4.1. In the following, the related works are discussed separately.

7.1.1 Point Detection Based on Bayesian Information Criterion

In order to identify point targets within background speckle, sparse priors based on a selection criterion can be used in the beamforming process. In Szasz et al. (2016), an algorithm was developed to identify and reinforce the point targets in the beamformed data, in which the detection is performed by minimizing the Bayesian information criterion (BIC). The BIC is extensively used in speech processing; it creates a trade-off between the data attachment and the sparsity constraint. The resulting algorithm is referred to as the US-BIC method in this book. This iterative algorithm consists of two main steps, identifying the potential reflectors and validating the identified point targets based on the BIC cost function, which are performed iteratively until the algorithm converges. The US-BIC algorithm is described in more detail below.

The RF image produced by the US-BIC method is considered as $\mathbf{S} \in \mathbb{C}^{N \times M}$, where $N$ and $M$ denote the number of scanlines and the number of samples in each scanline, respectively. For simplicity, the number of scanlines is taken equal to the number of array elements. The matrix $\mathbf{S}$ is modeled as

$$S(x, n) = \sum_{k=1}^{K_s} a_k\, h_k(x - x_k,\, n - n_k), \tag{7.1}$$

where $x$ and $n$ denote the lateral and axial directions, respectively, and $(x_k, n_k)$ denotes the position of the $k$th point target ($k = 1, \ldots, K_s$). Also, $a_k$ is the amplitude of the point target, and $K_s$ denotes the number of point targets to be detected by the US-BIC algorithm. In (7.1), $h_k(x, n)$ is the reflected echo corresponding to the $k$th point target; this parameter is unknown and must be estimated. Considering the described model, the two steps of the US-BIC algorithm are performed as follows.

First Step: Identification
In the $k$th iteration, the first step of the algorithm, i.e., identifying the point targets, is performed for each RF scanline as

$$n_k = \underset{n}{\operatorname{argmax}} \left( \left| \hat{s}_i(n) \right| \right), \qquad a_k = \left| \hat{s}_i(n_k) \right|,$$
$$h_k(x_i, n) = \hat{S}(x_i, n) \circ w_h(n), \qquad n = \left[ n_k - \frac{\tau_{pulse}\, f_s}{2},\, \ldots,\, n_k + \frac{\tau_{pulse}\, f_s}{2} \right], \tag{7.2}$$

where $\hat{S}(x_i, n)$ denotes the DAS beamformed data, $\hat{s}_i(n) \in \mathbb{C}^{M \times 1}$ is the $i$th RF scanline extracted from $\hat{S}(x_i, n)$, $w_h$ is the Hanning window, and $\circ$ represents the Hadamard product. Also, $\tau_{\text{pulse}}$ and $f_s$ are the pulse length and sampling frequency, respectively. The point targets detected up to the $k$th iteration are accumulated as

$$s_i^{(k)} \triangleq S^{(k)}(x_i, n) = \sum_{p=1}^{k} a_p\, h_p(x_i, n), \tag{7.3}$$

where $(\cdot)^{(k)}$ denotes the $k$th iteration; in particular, $s_i^{(k)}$ is the $i$th RF scanline of the image $S^{(k)}(x, n)$. In the identification step, the DAS beamformed data is used as the initial input. Note that the SNR of the beamformed data is higher than that of the raw data; the beamformed data is therefore used in the identification process (instead of the raw data) to make it less sensitive to low SNR.

Second Step: Validation
After the identification step, the BIC cost function is calculated; it balances the data attachment against the sparsity of the reflectors. This criterion evaluates the similarity between the distribution predicted by the statistical model and the true one.


The statistical model, estimated by the maximum likelihood method, is denoted $g(v_n|\hat{\theta})$, where $v_n$ represents the observations and $\hat{\theta}$ is the estimate of the unknown parameter. The BIC criterion is then defined as

$$BIC(n) = -2\log\left[g(v_n|\hat{\theta})\right] + p\log(n). \tag{7.4}$$

Adapting the BIC to ultrasound imaging, the BIC cost function in the $k$th iteration is defined as

$$f_{\text{cost}}(k) = \log\left(\left\| s_i^{(k)}\mathbf{1}^T - y_d^{(i)} \right\|_2^2\right) + \alpha\left[k\log(S)\right], \tag{7.5}$$

where $y_d^{(i)}$ denotes the focused raw data corresponding to the $i$th scanline. The first term on the right-hand side represents the data attachment, and the second term is the sparsity constraint. Similar to the CS-based method presented in (6.112), the parameter $\alpha$ in the BIC cost function balances the data attachment against the sparsity constraint. The two steps above are repeated for each scanline until the BIC cost function increases with respect to the previous iteration, i.e., $f_{\text{cost}}(k+1) > f_{\text{cost}}(k)$. In other words, the US-BIC algorithm estimates the point targets of the image through an iterative procedure that minimizes the BIC cost function. Note that because the identification of point targets is performed for each RF scanline independently, the algorithm may overestimate the number of point targets; furthermore, the spatial coherence between neighboring scanlines is not considered during identification. To overcome this limitation, one can identify a set of point targets across all RF scanlines at each iteration; in other words, the BIC cost function in (7.5) can be generalized from 1D to 2D. A simplified sketch of the per-scanline procedure follows.
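The following Python/NumPy sketch illustrates the two-step loop for a single scanline. It is a simplified 1-D rendition of (7.2)-(7.5) under several assumptions: the data-attachment term is computed against a single focused trace rather than the full channel matrix, and all function and variable names (`us_bic_scanline`, `pulse_samples`, `alpha`) are illustrative, not the authors' implementation.

```python
import numpy as np

def us_bic_scanline(s_hat, y_d, pulse_samples, alpha, max_iter=50):
    """Simplified 1-D sketch of the US-BIC loop, Eqs. (7.2)-(7.5), for one
    beamformed RF scanline s_hat; y_d is the focused raw-data trace."""
    M = len(s_hat)
    residual = s_hat.astype(float).copy()
    model = np.zeros(M)                    # accumulated estimate, Eq. (7.3)
    w = np.hanning(pulse_samples)
    prev_cost = np.inf
    for k in range(1, max_iter + 1):
        # Identification, Eq. (7.2): pick the strongest remaining reflector
        n_k = int(np.argmax(np.abs(residual)))
        lo = max(0, n_k - pulse_samples // 2)
        hi = min(M, lo + pulse_samples)
        h_k = np.zeros(M)
        h_k[lo:hi] = residual[lo:hi] * w[:hi - lo]   # windowed echo estimate
        model += h_k
        residual -= h_k
        # Validation, Eq. (7.5): data attachment plus sparsity penalty
        cost = np.log(np.sum((model - y_d) ** 2)) + alpha * k * np.log(M)
        if cost > prev_cost:               # BIC increased: undo and stop
            model -= h_k
            break
        prev_cost = cost
    return model
```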

7.1.2 Point Detection Based on Coherence Estimation and Covariance Matrix Analysis

In Huang et al. (2014), a detection method has been proposed based on coherence estimation and covariance matrix analysis. The CF weighting method was previously discussed in Sect. 3.5.1.1. According to (3.19), the CF coefficients can be rewritten as

$$CF(n) = \frac{\left|\frac{1}{N}\sum_{i=1}^{N} x_i(n)\right|^2}{\frac{1}{N}\sum_{i=1}^{N} |x_i(n)|^2}. \tag{7.6}$$


It is well known that the numerator of the CF formula is the coherent summation of the delayed signals ($I_c(n)$), defined as the average coherent intensity across the array elements. The denominator of (7.6) can be expanded as

$$\frac{1}{N}\sum_{i=1}^{N} |x_i(n)|^2 = \frac{1}{N}\sum_{i=1}^{N} |\Delta x_i(n)|^2 + \left|\frac{1}{N}\sum_{i=1}^{N} x_i(n)\right|^2, \tag{7.7}$$

where $\frac{1}{N}\sum_{i=1}^{N} |\Delta x_i(n)|^2$ is defined as the incoherent summation ($I_{\text{inc}}(n)$). For the $i$th element, $\Delta x_i(n)$ is given by

$$\Delta x_i(n) = x_i(n) - \frac{1}{N}\sum_{j=1}^{N} x_j(n).$$

Accordingly, the CF can be rewritten as

$$CF(n) = \frac{I_c(n)}{I_c(n) + I_{\text{inc}}(n)} = \frac{1}{1 + \frac{I_{\text{inc}}(n)}{I_c(n)}}. \tag{7.8}$$

Note that when the channel data are completely coherent, i.e., $I_{\text{inc}}(n) = 0$, we have $x_1(n) = x_2(n) = \ldots = x_N(n)$. The value obtained from the CF method quantifies how much brighter a point is compared with its surroundings; in particular, $CF(n) = 1$ indicates maximum brightness, resulting from complete coherence of the channel data. Now, consider the covariance matrix, or equivalently, the correlation matrix of the time-delayed signals at time instance $n$:

$$R(n) = \frac{1}{2K+1}\sum_{k=-K}^{K} x(n-k)\,x^H(n-k), \tag{7.9}$$

where $x(n) = [x_1(n), \ldots, x_N(n)]^T \in \mathbb{C}^{N \times 1}$. Note that temporal averaging over $2K+1$ samples is performed to obtain the covariance matrix $R(n)$. Since $R(n)$ is positive semidefinite, its eigenvalues are real and non-negative. Denoting the $i$th eigenvalue of the covariance matrix by $\lambda_i(n)$, with $\lambda_i(n) \geq \lambda_{i+1}(n)$, we have

$$\operatorname{trace}\{R(n)\} = \sum_{i=1}^{N} R_{ii}(n) = \sum_{i=1}^{N} \lambda_i(n). \tag{7.10}$$

According to the above equation, the dominance of the first eigenvalue, $ev_{\text{dom}}(n)$, is defined as


$$ev_{\text{dom}}(n) = \frac{1}{1 - \dfrac{\lambda_1(n)}{\operatorname{trace}\{R(n)\}}}. \tag{7.11}$$

Algorithm 7.1 Point detection using the parameters $CF(n)$ and $ev_{\text{dom}}(n)$
1: Inputs: $x(n) \in \mathbb{C}^{N \times 1}$, $K$, $\tau$. Output: binary map of point targets $O$.
2: for each imaging point $n$ do
3:    Calculate $CF(n)$ according to (7.6)
4:    Calculate $R(n)$ according to (7.9)
5:    $\operatorname{trace}\{R(n)\} = \sum_{i=1}^{N} \lambda_i(n)$
6:    $ev_{\text{dom}}(n) = 1\big/\left(1 - \lambda_1(n)/\operatorname{trace}\{R(n)\}\right)$
7:    $Mult(n) = ev_{\text{dom}}(n) \times CF(n)$
8:    if $Mult(n) \geq \tau$ then $o(n) = 1$ else $o(n) = 0$ end if
9: end for
10: Return $O$

If $\lambda_i(n) = 0$ for $i \geq 2$, the trace of $R(n)$ equals $\lambda_1(n)$, and consequently $ev_{\text{dom}}(n) = \infty$. More precisely, $ev_{\text{dom}}(n)$ tends to infinity as the rank of the covariance matrix approaches one; otherwise, it takes a finite value. Note that if there exist $n_1, n_2 \in [n-K, n+K]$ such that $x(n_1)$ and $x(n_2)$ are linearly independent, the rank of the covariance matrix is greater than one, and $ev_{\text{dom}}(n)$ will therefore be small. Conversely, a large value of $ev_{\text{dom}}(n)$ indicates that a scaling factor $a(p)$ exists such that $x(p) \approx a(p)x(n)$ for $p \in [n-K, n+K]$, implying that a dominant point target exists around $n$. Using the two channel-data-based parameters $CF(n)$ and $ev_{\text{dom}}(n)$, a coherence-based point target detector has been developed, whose pseudo-code is presented in Algorithm 7.1. As the pseudo-code shows, a thresholding process is applied to the likelihood map, and the point targets are identified via a binary map of the thresholded output; the threshold is denoted by $\tau$ in Algorithm 7.1. To reduce the dependency of the parameter $ev_{\text{dom}}(n)$ on the transmit focus, the algorithm can be modified so that the CF coefficient is first compared with a threshold $\tau_1$, since the CF plays the more prominent role in identifying point targets. If $CF(n) \geq \tau_1$, a point target is identified. If $CF(n) < \tau_1$, the two constraints $CF(n) \geq \tau_2$ and $ev_{\text{dom}}(n) \geq \tau_3$ are checked, where $\tau_2$ and $\tau_3$ are threshold values; if both are met simultaneously, a point target is identified. The pseudo-code of this modified algorithm is presented in Algorithm 7.2.


Algorithm 7.2 Point detection using the modified version of Algorithm 7.1
1: Inputs: $x(n) \in \mathbb{C}^{N \times 1}$, $K$, $\tau_1$, $\tau_2$, $\tau_3$. Output: binary map of point targets $O$.
2: for each imaging point $n$ do
3:    Initialize $o_1(n), o_2(n), o_3(n) = 0$
4:    Calculate $CF(n)$ according to (7.6)
5:    Calculate $R(n)$ according to (7.9)
6:    $\operatorname{trace}\{R(n)\} = \sum_{i=1}^{N} \lambda_i(n)$
7:    $ev_{\text{dom}}(n) = 1\big/\left(1 - \lambda_1(n)/\operatorname{trace}\{R(n)\}\right)$
8:    if $CF(n) \geq \tau_1$ then $o_1(n) = 1$ end if
9:    if $CF(n) \geq \tau_2$ then $o_2(n) = 1$ end if
10:   if $ev_{\text{dom}}(n) \geq \tau_3$ then $o_3(n) = 1$ end if
11:   $o(n) = o_1(n) + [o_2(n) \times o_3(n)]$
12: end for
13: Return $O$
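A compact NumPy rendition of Algorithm 7.2 may help make the thresholding logic concrete. The array layout and the small constants guarding against division by zero are assumptions of this sketch, not part of the published method.

```python
import numpy as np

def detect_points(x, K, tau1, tau2, tau3):
    """Sketch of Algorithm 7.2. x is an (N, T) array of time-delayed
    channel signals (N elements, T samples); returns a binary map o."""
    N, T = x.shape
    o = np.zeros(T, dtype=int)
    for n in range(K, T - K):
        col = x[:, n]
        # Coherence factor, Eq. (7.6)
        cf = np.abs(col.mean()) ** 2 / (np.mean(np.abs(col) ** 2) + 1e-12)
        # Temporally averaged covariance matrix, Eq. (7.9)
        win = x[:, n - K:n + K + 1]
        R = (win @ win.conj().T) / (2 * K + 1)
        # Eigenvalue dominance, Eq. (7.11); eigvalsh sorts ascending
        lam = np.linalg.eigvalsh(R)
        ev_dom = 1.0 / max(1.0 - lam[-1] / (np.trace(R).real + 1e-12), 1e-12)
        # Thresholding logic of Algorithm 7.2
        if cf >= tau1 or (cf >= tau2 and ev_dom >= tau3):
            o[n] = 1
    return o
```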

7.1.3 Point Detection Based on Wavelet Coefficient

In Hverven et al. (2017), a coherence-based wavelet shrinkage algorithm has been proposed to suppress the speckle noise and improve the detection of point targets such as microcalcifications. The algorithm is inspired by techniques used in SONAR to separate coherent point targets from an incoherent noisy background; its pseudo-code is given in Algorithm 7.3. The input image is first divided into two sub-images, or equivalently, two different looks: the original image is transformed into the frequency domain by taking its 2D-FFT, and the frequency-domain image is then divided into two sets using a 2D random block grid. The size of the grid blocks should be adjusted such that the information of the point targets is spread over several blocks, while the background speckle is correlated on a scale much smaller than the block size. As a result, the point targets persist in both complementary looks, while the coherence between the speckle of the two looks is small. After the looks are obtained from the original image, the wavelet coefficients of each look (as well as of the original image) are calculated separately. The wavelet transform is applied to de-noise the images while preserving resolution: the image is decomposed into high- and low-frequency components. From the high-pass filtered image, three detail images are generated, each indicating directional local changes of the image, while the low-pass filtered image is further downscaled into an approximation image. This process can be repeated over multiple decomposition levels.


Since only a few wavelet coefficients are needed to describe the point targets, thresholding the wavelet coefficients of the original image and performing an inverse wavelet transform on the result yields a reconstructed image with a suppressed noise level in which the information of the point targets is preserved. To obtain an optimal threshold value, consider the wavelet transforms of the two looks, $W_1$ and $W_2$. The coherence between them for the $p$th realization is

$$C(x, y, l, p) = \frac{\left|\sum_{m,n=-\frac{A-1}{2}}^{\frac{A-1}{2}} W_1 W_2^{*}\right|}{\sqrt{\sum_{m,n=-\frac{A-1}{2}}^{\frac{A-1}{2}} |W_1|^2}\;\sqrt{\sum_{m,n=-\frac{A-1}{2}}^{\frac{A-1}{2}} |W_2|^2}}, \tag{7.12}$$

where $W_i = W_i(x + m\Delta x,\, y + n\Delta y,\, l,\, p)$, in which $\Delta x$ and $\Delta y$ denote the pixel dimensions, $(x, y)$ is the pixel position in the wavelet-transformed image, and $l$ is the decomposition level. A sliding window of dimension $A \times A$ is used to calculate the coherence of each pixel; the larger $A$, the lower the variance, but at the cost of degraded spatial resolution in the final image. The coherence metric measures the similarity between the two looks: the lower the coherence between the wavelet coefficients, the more successfully the noise level is suppressed. Note that multiple looks with statistically independent noise realizations are generated from the original image; the coherence estimation in (7.12) is performed for each realization, and the results are averaged to obtain the final threshold value. Assuming that $P$ realizations are generated from the original image, the coherences between the two looks are averaged over the $P$ realizations as

$$C_{\text{avg}}(x, y, l) = \frac{1}{P}\sum_{p=1}^{P} |C(x, y, l, p)|, \tag{7.13}$$

and the optimum threshold is obtained accordingly. This threshold is used to preserve the point targets while suppressing the noise of the reconstructed image. More precisely,

$$C_{\text{th}}(x, y, l) = \begin{cases} 1 & \text{if } C_{\text{avg}}(x, y, l) > t_{\max} \\ 0 & \text{if } C_{\text{avg}}(x, y, l) < t_{\min} \\ \dfrac{C_{\text{avg}}(x, y, l) - t_{\min}}{t_{\max} - t_{\min}} & \text{otherwise.} \end{cases} \tag{7.14}$$

This implies that where the averaged coherence is below $t_{\min}$, the threshold coefficient is set to zero; where it exceeds $t_{\max}$, the threshold coefficient is set to one; otherwise, a linear transition assigns the value. The resulting threshold coefficients are then multiplied with the wavelet transform of the original image, and taking the inverse wavelet transform of the result yields the final improved image, as summarized in Algorithm 7.3.


Algorithm 7.3 Point detection using wavelet coefficients
1: Inputs: original image $X$; $P$, $A$, $\Delta x$, $\Delta y$, $t_{\min}$, $t_{\max}$. Output: improved image $X_{\text{new}}$.
2: $X_f = \text{2D-FFT}(X)$
3: for $p = 1 : P$ do
4:    Divide $X_f$ into two sets $X_1$ and $X_2$ using a 2D random block grid
5:    $W_1 = $ wavelet transform of $X_1$
6:    $W_2 = $ wavelet transform of $X_2$
7:    $C(x, y, l, p) = $ coherence between $W_1$ and $W_2$ according to (7.12)
8: end for
9: $C_{\text{avg}}(x, y, l) = \frac{1}{P}\sum_{p=1}^{P} |C(x, y, l, p)|$
10: Compute $C_{\text{th}}(x, y, l)$ from $C_{\text{avg}}(x, y, l)$ according to (7.14)
11: $W = $ wavelet transform of $X$
12: $W_{\text{th}} = W \times C_{\text{th}}$
13: $X_{\text{new}} = $ inverse wavelet transform of $W_{\text{th}}$
14: Return $X_{\text{new}}$

316

7 Ongoing Research Areas in Ultrasound Beamforming

can be obtained with or without overlapping the frequency bandwidth. In the NMF multilook technique, the looks are weighted based on prior knowledge about the response of the point target, as mentioned earlier. The response of the imaging system is described using the PSF. In particular, to obtain the point target response in the 1D case, we start with the PSF of a single point target at the center and shift the phase with respect to each pixel position. More precisely, consider the complex amplitude of a point target as .C = C0 e jφ at the center of the resolution cell. Now, consider the point target at the distance of .n 0 . The sinc response of such a point target is expressed as below: ) ( 4π f 2B(n − n 0 ) jφ − c 0 n 0 , sinc .apoint (n) = C 0 e e (7.15) c where . f 0 and . B denote the central frequency and the frequency bandwidth, respectively. The output of the multilook process is . L looks with equal bandwidths of (l) . B L = B/L over the central frequencies of . f c for .l = 1, . . . , L. The response to each look is formulated as below: )} { ( { } f − f c(l) (l) . (7.16) .a (n) = IFFT FFT apoint (n) rect BL Accordingly, the point target response would be obtained for each look as below: ) ( (l) 4π f 4π f BL 2B L (n − n 0 ) jφ − j c 0 n 0 j cc (n−n 0 ) C0 e e . .a (n) = e sinc B c (l)

(7.17)

Note that in the above equation, a wider sinc is generated compared to the case in which the bandwidth equals . B. This results in resolution degradation to .c/2B L . The obtained point target response shown in (7.17) can be expanded to the 2D version by calculating the response of the point target in another direction in a similar manner and multiplying the result with the one presented in (7.17). After the point target response is obtained, the NMF multilook technique is formulated. In this regard, the generalized likelihood ratio test (GLRT) is used. Consider the sublook vector . y(n) ∈ C L×1 for time instance .n. The NMF decision rule with respect to the threshold .thr is written as below: | | H | a (n)R−1 (n) y(n)|2 ][ ] > thr. .NMF( y(n)) = [ y H (n)R−1 (n) y(n) a H (n)R−1 (n)a(n)

(7.18)

In the above equation, . a(n) ∈ C L×1 denotes the theoretical point scatterer vector, and . R(n) ∈ C L×L is the sublook covariance matrix which is used to describe the correlation among the speckle samples that belong to different looks. If the looks are independent and non-overlapping, we have. R(n) = I. The NMF multilook algorithm is a normalized technique due to the denominator of (7.18). Assuming . R(n) = I, (7.18) is simplified as below:

7.1 Point Target Detection Using Ultrasound Imaging

317

| | H | a (n) y(n)|2 ][ ] > thr, for R(n) = I. .NMF( y(n)) = [ y H (n) y(n) a H (n)a(n)

(7.19)

If uniform weighting is considered for the coherent summation, i.e., . a(n) = 1, the resulting algorithm is named the multilook coherence factor (MLCF) technique which is expressed as below: |2 |Σ | | L | | H | l=1 yl (n)| |1 y(n)|2 ] = ΣL > thr. .MLCF( y(n)) = [ y H (n) y(n) L L l=1 |yl (n)|2

(7.20)

It can be seen from the above equation that, by considering . a(n) = 1, the nominator equals the coherent summation over different looks. Also, the denominator of the MLCF technique is expressed as the incoherent summation over different looks, as shown in (7.20). The computational complexity of the MLCF algorithm is considerably decreased compared to the NMF method, since there is no need to have a priori knowledge about the point target response. The output of the MLCF can be defined as a 2D coherence factor since the spatial frequencies are divided along two directions and the coherence over the resulting looks is calculated. The difference between the conventional CF, which has been previously discussed in Sect. 3.5.1.1, and the newly defined MLCF is that the spatial frequency areas are overlapping in the CF method while they are non-overlapping in MLCF. The obtained NMF and MLCF presented in (7.18) and (7.20) can be used as weighting factors; the final image in which the NMF is used as a weighting factor is known as NMF weighted (NMFW) image, and is formulated as below: | L | |Σ | | | .NMFW( y(n)) = NMF( y(n)). | yl (n)| > thr. | |

(7.21)

l=1

Similarly, the MLCF weighted (MLCFW) image is obtained by considering the MLCF as a weighting factor according to the following equation: | L | |Σ | | | .MLCFW( y(n)) = MLCF( y(n)). | yl (n)| > thr. | |

(7.22)

l=1

It has been proposed in Thon et al. (2022) to perform the pre-whitening process to the input image before going through the point detection algorithm. This preprocessing method improves the image contrast. The pre-whitening technique is a decorrelation transformation in which a set of random variables with a known covariance matrix are converted into a new set of random variables such that their covariance matrix is an identity matrix. This process is known as the whitening process since the input vector is transformed into a white noise vector. By using this transformation, the frequency amplitudes are uplifted to the same average level. Pre-whitening the

318

7 Ongoing Research Areas in Ultrasound Beamforming

image before going through the main process leads to better performance in terms of noise reduction and contrast improvement. To apply the whitening transformation, the 2D-FFT of the image is considered first. Then, the average of its smoothed amplitude is calculated. Finally, the result is inversed, and the whitened image is obtained accordingly. The pseudo-code of the discussed technique is shown in Algorithm 7.4. It can be seen that the input image is pre-whitened in the first step to further suppress the noise level. Then, the multilook process is performed to obtain . L different looks. Finally, the improved image in which the point targets are successfully identified is achieved by processing the looks using one of the multilook-based algorithms presented in (7.18)–(7.22).

7.1.5 Point Detection Based on Phase Coherence Filtering In Bilodeau et al. (2022), it has been proposed to use phase coherence filtering in order to better identify the point-like reflectors. Consider the measured signal corresponding to .eth transmission and .r th reception as below: .

M er (t) =

Σ

P i .Ser (X i , t),

(7.23)

i=1

where . P i denotes the (real) reflection coefficient for a target located at the coordinate . X i = (xi , z i ), and . Ser (X i , t) is the backscattered signal of the position . X i corresponding to .eth transmission and .r th reception. Note that it is assumed that . P i is real. The signals . Ser (X i , t) can be estimated using the reference signals . Sref er (X i , t) that are pre-determined for each emission-reception pair, and for each imaging point . O with the coordinates of . X o = (xo , z o ). The estimation is performed by correlating the reference signals with the measured signal. M er (t). In particular, the cross-correlation between the reference and measured signals for a single emission-reception pair and an imaging point . O is formulated as below:

Algorithm 7.4 Point detection using pre-whitening and multilook 1: 2: 3: 4: 5: 6: 7: 8:

Input: original image (X), L. Output: Multilook’s output (X new ). X f = FFT(X) X f w = whitening )X f ( X w = IFFT X f w dividing X f w into L subsets IFFT each subset in order to obtain L sublooks constructing the sublook vector y(n) ∈ C L×1 for n th imaging point applying a multilook method using one of the equations (7.18)–(7.22) Return X new

7.1 Point Target Detection Using Ultrasound Imaging

.

[ ] C er (X o , t) = M er (t) ∗ Sref er (X o , t) =

{

∞ −∞

319

M er (τ )Sref er (X o , τ − t)dτ,

(7.24)

where .∗ denotes the cross-correlation operation. According to (7.23), the above equation is rewritten as below: .

[ ] Σ [ ] C er (X o , 0) = P o Ser (X o , 0) ∗ Sref P i Ser (X i , 0) ∗ Sref er (X o , 0) + er (X o , 0) . i/=o

(7.25) Note that by considering .t = 0, the measured and reference signals will be aligned with the same time reference. The imaging metric is obtained by summing the above equation over all pairs of emission-reception as below: .

I(X o ) =

ΣΣ e

C er (X o , 0) = P o

r

+

Σ i/=o

ΣΣ[ e

Pi

Ser (X o , 0) ∗ Sref er (X o , 0)

r

ΣΣ[ e

]

] Ser (X i , 0) ∗ Sref er (X o , 0) . (7.26)

r

Ideally, for the imaging point at. X o , the reflection coefficient of a potential reflector is expected to be represented at the coordinate . X o , i.e., . I(X o ) = P o . In the above equation, the first term denotes the cross-correlation between the signals, which represent the backscattered signals of the imaging point [ the corresponding reference ] Σ . O, Σ and signals. In the ideal case, it is expected that . e r Ser (X o , 0) ∗ Sref er (X o , 0) = 1. Also, the second term in (7.26) denotes the cross-correlation between the reference signals of the imaging point . O, and all other imaging points except . O. This incoherent sum results in ] In the ideal case, it [ the reconstructed image. Σsome artifacts Σ Σ in (X , 0) = 0. Obtaining a set is expected to have . i/= O P i e r Ser (X i , 0) ∗ Sref o er of reference signals and a filtering cross-correlation operation is necessary to achieve a reconstruction algorithm as close as possible to the ideal case, i.e., . I(X o ) = P o . Assuming that the reference signal is a Dirac delta function at the time .t0 , i.e., ref . Ser (X o , t) = δ(t0 − t), we have I

. Ref

(X o ) = =

ΣΣ{ e

r

e

r

ΣΣ

= Po

∞ −∞

M er (t0 )

ΣΣ e

M er (τ )δ(t0 − τ )dτ

r

Ser (X o , to ) +

Σ i/= O

Pi

ΣΣ e

Ser (X i , to ).

(7.27)

r

The above equation is equivalent to the case in which all the signals are involved to synthetically focus on the considered imaging point. In other words, all of the available information is used to obtain the imaging metric. The obtained imaging metric in (7.27) is considered as the standard case for comparison.

320

7 Ongoing Research Areas in Ultrasound Beamforming

The cross-correlation process can be performed in the frequency domain. More precisely, (7.24) is written in the frequency domain as below: .

[ { }] C er (X o ) = IFFT FFT {M er (t)}∗ FFT Sref er (X o , t) .

(7.28)

Cross-correlation operation in the frequency domain is known as the generalized cross-correlation (GCC). Using the GCC, is it possible to introduce a frequencydependent filter . W (ω), where .ω is the angular frequency. To obtain a value for each imaging point, the correlation coefficients are considered without any delay, i.e., with the consideration of .t = 0. In such a case, the GCC is modified as below: { { . C er (X o ) = M ∗er (ω)Sref (X , ω).W (ω)dω = Φer (X o , ω).W (ω)dω, (7.29) o er ω

ω

ref ref ∗ where } o , ω) = M er (ω)Ser (X o , ω). In the above equation, . Ser (X o , ω) = FFT { ref .Φer (X Ser (X o , t) , and . M er (ω) = FFT {M er (t)}. In Bilodeau et al. (2022), a filter known as the Theoretical (T) filter has been introduced to improve the performance of the imaging metric in terms of point target detection. This filter is defined as below:

.

1 W T (ω) = | | . ref | S (X o , ω)|2 er

To better evaluate the effect of the T filter, first, consider the case in which no filter is applied, i.e., . W (ω) = 1. This case is known as the Excitelet imaging metric, and the resulting output is denoted as . I E xc (X o ) for the imaging point . O. Assuming that the reference signal is a good approximation of the propagated signal, the following equation is obtained according to (7.23) and (7.29): { .

C Exc (X o ) = P o

ω

|Ser (X o , ω)|2 dω +

Σ i/= O

{ Pi

ω

S∗er (X i , ω)Ser (X o , ω)dω. (7.30)

By summing the above result over all the emission-reception pairs, . I Exc (X o ) will be obtained as below: ΣΣ{ |Ser (X o , ω)|2 dω . I Exc (X o ) = P o +

Σ i/= O

e

Pi

r

ω

ΣΣ{ e

r

ω

S∗er (X i , ω)Ser (X o , ω)dω.

(7.31)

In the ideal case in which . I Exc (X o ) = P o , the output will be independent of the pixel position and the array directivity. However, it can be seen from the above equation

7.1 Point Target Detection Using Ultrasound Imaging

321

that in the Excitelet case, the reflection coefficients are negatively affected by a term that depends on the propagation model. Again, assuming that the reference signal is a good approximation of the propagation model, the T filter is used, and the following imaging metric is obtained: Σ Σ { M ∗ (ω)Sref (X o , ω) er er . I Exc-T (X 0 ) = |2 dω | ref | | ω S (X o , ω) e r er { ∗ ΣΣ Ser (X o , ω)Ser (X o , ω) dω Po = | | ref | S (X o , ω)|2 ω e r er Σ Σ Σ { S∗ (X i , ω)Ser (X o , ω) er Pi dω + | | ref | S (X o , ω)|2 ω e r i/= O er Σ Σ Σ { S∗ (X i , ω)Ser (X o , ω) er 2 = N Po + dω. Pi | | ref | S (X o , ω)|2 ω e r i/= O

(7.32)

er

In the above equation, the first term corresponds to the reflection coefficients of the considered imaging point. The phase content of the signal . Ser (X o , ω) depends on either the emission-reception pair or the considered imaging point, which leads to a random phasor distribution. Therefore, the second term is negligible compared to the first term in (7.32). The frequency band is determined based on the application; as the low-frequency components are removed, the resulting image would be better resolved. However, this improvement is achieved at the expense of increasing the background level. To obtain a metric that does not impair the amplitude of the resulting image while it identifies the presence of a reflector, it has been proposed to modify the imaging metrics . I Ref (X o ) and . I Exc-t (X o ) as below: ρ . I Ref (X o )

({ = I Ref (X o )

e

j[

Σ Σ e

r

∠Φer (X o ,ω)]

)ρ dω

ω

,

(7.33)

and: ρ . I Exc-T (X o )

({ = I Exc-T (X o )

e ω

j[

Σ Σ e

r

∠Φer (X o ,ω)]

)ρ dω

,

(7.34)

where .∠(.) denotes the angle of a complex number. In (7.33) and (7.34), the terms inside the parentheses on the right-hand side of the equations denote the phase coherence filter. For .ρ = 0, the outputs of the modified imaging metrics would be equivalent to the original ones. To identify the point-like reflectors (as well as the specular reflectors), .ρ > 1 should be selected. One can use .ρ < 1 if preserving the speckle is of interest.

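For one imaging point, the phase coherence weighting of (7.33) can be sketched as follows, given frequency-domain arrays of the measured and reference signals. Taking the magnitude of the integrated unit phasor and omitting normalization are interpretive assumptions of this sketch, and all names are illustrative.

```python
import numpy as np

def phase_coherence_metric(M, S_ref, rho=2.0):
    """Sketch of Eqs. (7.29) and (7.33) for one imaging point.
    M, S_ref: (E, R, F) arrays of measured / reference spectra
    (E emissions, R receptions, F frequency bins)."""
    phi = np.conj(M) * S_ref                      # Phi_er(X_o, w) per pair
    base = np.real(phi.sum())                     # unfiltered metric, W(w) = 1
    total_phase = np.angle(phi).sum(axis=(0, 1))  # sum_e sum_r angle(Phi)
    pcf = np.abs(np.exp(1j * total_phase).sum())  # integrated unit phasor
    return base * pcf ** rho
```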

7.1.6 Other Point Detection Techniques

Point Detection Using Short-Lag Spatial Coherence
The SLSC weighting method was previously discussed in Sect. 3.5.1.7. Using this technique in CPWI, images correlated with the ultrasound wavefront phase are generated across the array; the wavefront phase is obtained from the signals that are time-delayed and compounded over the plane waves. Simply put, the time-delayed signals are first compounded along the plane wave direction, and the SLSC technique is then applied to the compounded result along the receiver direction. This has been shown to improve the quality of the reconstructed image. Once the image is reconstructed with the SLSC method, the edges of the point targets are identified using an automated segmentation technique, which leads to better point detection (Tierney et al. 2019). The SLSC method has also been used in the STA imaging system (previously discussed in Sect. 1.11.2) with multiple transmissions, where it yields a reconstructed image improved in both resolution and contrast (Matrone et al. 2021); this improves the identification of point targets such as microcalcifications, as well as of the biopsy needle.

Point Detection Using Mid-Lag Spatial Coherence
The mid-lag spatial coherence (MLSC) is another technique for identifying point targets. It is similar to the SLSC technique except that the order of operations is rearranged: the SLSC method is applied along the receiver direction prior to compounding. In other words, $N_P$ low-resolution images are obtained using the SLSC method; they are then coherently compounded, yielding a final image in which spurious correlations are suppressed (Tierney et al. 2019).

Point Detection Using Aperture Domain Model Image Reconstruction
The aperture domain model image reconstruction (ADMIRE) is a model-based technique in which the effects of the different sources that degrade the image are modeled; strongly scattering sources, such as kidney stones, and sources of reverberation noise are examples. By modeling the different sources with ADMIRE, the received wavefront can be decomposed into its points of origin, and only the signals originating from the region of interest are used to construct the image, resulting in a high-quality image. In CPWI, the time-delayed signals are first summed along the plane wave direction, the ADMIRE decomposition is performed on the compounded result, and the improved image is obtained by a final summation along the receiver direction (Tierney et al. 2019).

Point Detection Using MicroPure™ Technique
The MicroPure™ technique improves the visualization of point targets compared with conventional US B-mode imaging (Park et al. 2016). The algorithm is based on speckle suppression: the intensity of each pixel is compared with that of its surrounding pixels. More precisely, we have


$$I_{\text{New}}(x, y) = I(x, y) - \frac{1}{N_s}\sum_{i=1}^{N_s} C_i, \tag{7.35}$$

where $I(x, y)$ denotes the pixel intensity at location $(x, y)$, and $I_{\text{New}}(x, y)$ is the new pixel intensity obtained by subtracting the average intensity of the $N_s$ surrounding pixels from the original pixel intensity; $C_i$ represents the intensity of the $i$th surrounding pixel. Using (7.35), bright point targets are extracted from the inhomogeneous background. The MicroPure™ image is finally obtained by overlaying the extracted bright point targets on the US B-mode image (shown in dark blue), which improves the visualization of the point targets. In particular, it has been shown that this technique identifies more microcalcifications than high-frequency B-mode imaging.
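Equation (7.35) is simply a local mean subtraction. A minimal sketch using SciPy follows; the square neighborhood (and its inclusion of the center pixel) is an assumption, since the exact MicroPure™ neighborhood definition is not specified here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(img, size=9):
    """Eq. (7.35): subtract the neighborhood mean from each pixel.
    uniform_filter's window includes the center pixel, a slight
    deviation from summing only the surrounding pixels."""
    return img - uniform_filter(img.astype(float), size=size)
```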

Point Detection Using Correlation Technique
The acoustic impedance of a point target (e.g., a microcalcification) is considerably higher than that of soft tissue; consequently, the intensity of the echo reflected by a point target is also higher than that originating from soft tissue. Furthermore, a point target creates a blocking effect, suppressing the echoes returning from the region behind it. Based on these characteristics, regions that generate high-intensity echoes and are associated with acoustic shadows are identified as point targets in conventional B-mode imaging. However, if the point targets are very small, these features do not appear in the final image, and the targets are missed. To tackle this limitation, a correlation technique has been proposed in Taki et al. (2012): the deterioration of the cross-correlation between adjacent scanlines caused by wavefront changes is used for detection. More precisely, the illuminated regions of two adjacent US beams overlap, so the received signals of two adjacent scanlines are correlated. If the illuminated region contains a small point target, the US pulse wavefront changes considerably at the target location, reducing the cross-correlation between the two adjacent scanlines. This cross-correlation deterioration can therefore be used to infer the presence of a point target.

7.2 Deep Learning in Medical Ultrasound Imaging

Achieving a high-quality, high-frame-rate image is an important challenge in medical US imaging. In particular, it has been shown that the adaptive MV algorithm and its modified versions can improve image quality, but at the expense of increased computational complexity and, consequently, a reduced frame rate. Although a variety of algorithms have been developed to tackle this problem (see Sect. 3.7), further reducing the computational complexity of the MV algorithm remains a challenge.


One simple way to speed up data acquisition and increase the frame rate is to reduce the number of data channels; however, doing so significantly degrades image quality. In addition, the simplifying assumptions made when calculating the weight vector, as well as the assumption of a constant speed of sound throughout the tissue, affect the accuracy of the output. Deep learning methods can be used to overcome these limitations. Deep learning has a wide range of applications in medical US imaging, such as microcalcification identification, lesion detection, classification, segmentation, image reconstruction, and standard plane selection in fetal screening. In particular, for classification of medical US images, deep learning methods have been shown to be less affected by low image quality than conventional classification methods. Likewise, for segmentation in cases where tissue boundaries are unclear due to artifacts, deep learning achieves better performance than conventional segmentation techniques. A deep neural network (DNN) is a deep learning system capable of learning features automatically and robustly; DNNs have been successfully used to model complex problems and serve as a valuable aid to diagnosis in medical applications. In the following, the principles of DNNs and their different applications in the medical US field are discussed. It is assumed that the reader is familiar with the general concepts of deep learning; the reader is encouraged to consult Chollet (2021) before proceeding.

7.2.1 Principles of Deep Neural Network

Instead of starting from a fixed model and generating the output corresponding to input data, neural networks learn a model from a series of training data and their expected outputs. Such networks consist of several layers; in each layer, a set of features is extracted from the US data and fed to the next layer as input. That is, features are extracted in several stages: early layers extract more general, low-level features, while deeper layers produce more specialized, high-level features. Figure 7.1 schematically shows the general process followed in a DNN. As the figure shows, a network consists of a series of layers chained together. After the input data passes through the layers, the network produces an output, which is compared with the ground truth, i.e., the label of the input data. This comparison is made using a loss function (e.g., the cross-entropy function), which produces a loss value. Finally, the optimizer uses the loss value to update the parameters of the network. Adaptive moment estimation (ADAM) is one of the optimizers commonly used for neural networks in medical US applications.


Fig. 7.1 The schematic of the process of a DNN

General steps of a deep learning-based method are summarized below:

1. designing an appropriate neural network,
2. preparing the training dataset to feed the network,
3. performing the training process and assigning suitable values to the parameters of the network,
4. evaluating the performance of the network on the evaluation dataset and refining the parameters obtained from the previous step.

The learning process can be performed in a supervised or unsupervised manner, and each model is further divided into different categories. In particular, supervised learning includes two main categories, the convolutional neural network (CNN) and the recurrent neural network (RNN), while unsupervised learning includes two main families, the autoencoder (AE) and the restricted Boltzmann machine (RBM). A brief explanation of each of these networks follows.

Convolutional Neural Network
As one of the supervised models, the CNN is the most common model used in US data processing. Convolutional layers are the main components of CNNs and are widely used in detection and segmentation applications. These layers can be interpreted as a series of filters that represent the data in a more useful form; in other words, convolutional layers transform the input into another domain from which more useful information can be extracted. In a CNN, each layer is a function that applies the following transformation to its input to produce its output:

$$\text{output} = \sigma\left[(w \odot \text{input}) + b\right], \tag{7.36}$$
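As a toy illustration of (7.36), the sketch below implements a single layer, with ReLU chosen arbitrarily as $\sigma$; sizes and names are illustrative assumptions.

```python
import numpy as np

def layer(x, w, b):
    """Eq. (7.36): weighted input plus bias, passed through an
    activation function (ReLU assumed here for illustration)."""
    return np.maximum(0.0, w @ x + b)
```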

where $\odot$ denotes the dot product, and $w$ and $b$ are the weight and bias, respectively; the weights and biases are called the network parameters. Also, $\sigma(\cdot)$ denotes an activation function.


Generally, CNN models consist of a series of convolutional and pooling layers, ending with fully connected (dense) layers. Activation functions (e.g., ReLU or softmax) are used to apply a non-linear transformation to the output of the convolutional layers. In convolutional layers, features are extracted locally; an advantage of this model is that the extracted features are translation-invariant. Using CNNs, it is possible to acquire detailed information about the data at hand and make accurate predictions.

Recurrent Neural Network
The RNN is another supervised learning model, often used for sequential data processing. In such models, information from previous steps is injected into the current step via hidden units, which also help prevent the vanishing-gradient phenomenon. In contrast to CNNs, RNNs have memory and are used when the order of the data is meaningful; forecasting weather conditions from the past few days' observations is a common example. In medical US imaging, RNNs can be used for tasks such as US video processing, where the video is interpreted as an ordered sequence of frames (Chen et al. 2017). They have also been used for automatic segmentation of prostate US images and for image quality improvement (Yang et al. 2017; Lei et al. 2023). Nevertheless, this type of network is rarely used in medical US applications; most designed networks are based on the CNN.

AutoEncoder
Networks can also be trained in an unsupervised manner, in which case the inputs have no labels; that is, the expected output for each input is not available. An unsupervised learning model attempts to extract a set of features from the input data that can successfully distinguish between inputs belonging to different classes. The AE is an unsupervised method consisting of a hidden layer that acts as an encoder, with fewer neurons than the input; this hidden layer maps the input to a latent space. The output layer, known as the decoder, reconstructs the mapped features to generate an output similar to the input; in other words, the decoder inversely maps the data represented in the latent space. The AE model is applicable in tasks such as signal de-noising, where a noise-free version of the signal is not available.

Restricted Boltzmann Machine
The RBM is a stochastic neural network belonging to the unsupervised learning methods; the outputs of its neurons follow the Boltzmann distribution. An RBM can essentially be regarded as an AE. It consists of two layers, visible and hidden, with bidirectional connections between them; that is, there is no specific direction in the connections between neurons. The parameters of RBM networks are optimized by maximizing a likelihood function. To use RBMs in medical US imaging, several RBMs are stacked to form a deep network, called a deep RBM (DRBM). Owing to the large number of layers, high-level features are generated and, consequently, the performance of the network is improved.


DRBMs are usually used for segmentation or classification (Zhang et al. 2016). Overall, compared with supervised models, unsupervised learning is not widely used in medical US imaging.

7.2.2 Challenges and Limitations

There are two main challenges in applying deep learning-based methods in the medical US field: small datasets and noise in the received signals. Regarding the first issue, obtaining an accurate model requires training the network on a very large dataset, but large datasets are rarely available in medical applications. In particular, annotated data are required to feed the network, and annotation is time-consuming and requires expertise, making such datasets scarce and expensive; furthermore, some diseases are rare, so large amounts of corresponding data simply cannot be collected. Regarding the second issue, the received US signals are severely contaminated with artifacts, which also appear in the reconstructed image and affect diagnosis; techniques for de-noising US data are therefore important. In the following, available deep learning-based solutions to each of these challenges are discussed separately.

7.2.2.1 Small Dataset

To overcome the lack of a large dataset, one can generate new samples from the limited available data; that is, an augmentation process can be applied to enlarge the training dataset. Augmentation can be performed by rotating, cropping, zooming, and translating the original images. However, the variety of samples produced this way is limited, and a further drawback is that the size of the augmented images changes. Other ways to augment the data and effectively increase the amount of training material are transfer learning and the generative adversarial network (GAN), each of which is discussed below.

Transfer Learning
To improve network performance, one can use existing pre-trained neural networks whose parameters have been optimized on a very large dataset; using such a network is known as transfer learning. AlexNet, VGG-Net, ResNet, Inception-Net, and Xception-Net are examples of pre-trained networks, each with a unique architecture. Figure 7.2 shows the architecture of the VGG-16 network as an example; the VGG-Net is a simple and commonly used pre-trained network


Fig. 7.2 The architecture of VGG-16 network

trained using the ImageNet dataset. Detailed information about this publicly available dataset can be found at https://www.image-net.org/. In Canziani et al. (2016), different pre-trained networks are compared in terms of important metrics such as accuracy and power consumption. Pre-trained networks have been widely used in DNN applications for medical US imaging; Yu et al. (2017), Liu et al. (2019), Nair et al. (2019), Mei et al. (2021), and Vasile et al. (2021) are some examples. Note that low-level features, such as edges and corners, are common to all applications, from object detection in natural scenes to lesion detection in a medical US image; low-level features learned from non-medical datasets can therefore be reused in medical applications. High-level features, however, are specific to the training data, and applying high-level features learned from non-medical data to medical datasets degrades network performance. To improve performance in this situation, the last few (deeper) layers of the pre-trained network can be unfrozen and re-optimized on the target data. This is known as fine-tuning, which can cover only the last few layers (shallow tuning) or more layers (deep tuning); the choice depends on how different the target data are from the data used to train the pre-trained network. Fine-tuning generates high-level features matched to the dataset at hand and thereby improves network performance.

Generative Adversarial Network
A GAN consists of two main parts, a generator network and a discriminator network; a schematic is shown in Fig. 7.3. The generator is fed a random input and generates an artificial (fake) image from it. The generated fake image, together with real images, is then given to the discriminator, which performs a binary classification of its input, i.e., decides whether an image is real or fake. During GAN training, the discriminator pushes the generator to improve and produce images close to real ones; training continues until a balance between the generator and discriminator is reached. GANs are usually more difficult to train than traditional networks. Using GANs for training data augmentation offers the advantage of producing diverse datasets with consistent image sizes. This approach has previously been used to augment medical US images (Tom and Sheet 2018).


Fig. 7.3 The architecture of GAN

7.2.2.2 Data Artifacts

To enhance medical US datasets, de-noising techniques are needed to suppress the noise level and aid diagnosis. Several non-deep-learning methods exist for this purpose, and various filtering methods have been developed to operate on the US image; however, most filtering methods suffer from high computational complexity. It has been shown that deep learning-based methods can successfully de-noise and enhance US data. In particular, a modified version of the GAN (explained in the previous section), known as the super-resolution GAN (SRGAN), can be used (Ledig et al. 2017); in this case, the input of the network is a low-resolution image instead of a random vector. Another approach is the U-Net, explained below.

U-Network
The U-Net is a popular deep learning network widely used to remove noise from an input image (Lan and Zhang 2020). As its name implies, the network has a U-shaped structure, the architecture of which is shown in Fig. 7.4. The U-Net is a convolutional network consisting of two parts, an encoder and a decoder: in the encoder, the input is down-sampled through four stages, while in the decoder, the encoder's output is up-sampled to yield an output of the same size as the input. Skip connections between corresponding layers are applied to prevent the vanishing-gradient phenomenon. The application of the U-Net is not limited to de-noising; it can be used in many parts of US data processing. Many studies employ the U-Net for segmentation of US images, and it has even been used to generate the B-mode image directly from the raw data (Ren 2018).


Fig. 7.4 The schematic of the U-Net

7.2.3 Applications of Deep Neural Network in Image Generation Process

Consider the general processing steps for constructing an image from the received signal, shown in Fig. 7.5. A DNN can replace, or be used within, any of the blocks shown in this figure; each possible case is discussed separately below.

Case 1: Directly on the received signal
To reduce the data rate and increase the frame rate, one can reduce the number of data channels, or equivalently, down-sample the elements of the array. However, as mentioned before, reducing the channel data degrades image quality. By taking advantage of DNNs, one can use reduced channel data and recover the missing data with high accuracy; consequently, a reconstructed image equivalent to the one obtained with the full channel count can be achieved using only a few channels. Studies along these lines (Peretz and Feuer 2020; Yoon et al. 2018) have shown that, using a DNN, the number of array elements can be reduced while the quality of the reconstructed image is preserved.

Fig. 7.5 The schematic of the general processing steps of image generation from the raw data


Fig. 7.6 The schematic of the general processing steps of image generation in the case in which the DNN is used directly on the received signal

Fig. 7.7 The schematic of the general processing steps of image generation in the case in which the DNN is used on the frequency domain data

It is even possible to down-sample at a rate below the Nyquist criterion when receiving and storing the backscattered echoes, and to recover the missing samples using a DNN (Perdios et al. 2017). The schematic of this case is depicted in Fig. 7.6.

Case 2: On the frequency domain data
The DNN can replace the pre-processing step of the reconstruction procedure (shown in Fig. 7.5) to remove the off-axis signal contribution. The schematic of this case is shown in Fig. 7.7, in which the noise suppression is performed in the frequency domain. Studies of this kind (Luchies and Byram 2018; Zhuang and Chen 2019) have shown that a DNN can successfully remove the off-axis signal contribution and improve the image contrast.

Case 3: Instead of the beamformer
As shown previously, the MV beamformer is an adaptive algorithm in which the weight vector is updated according to the data received at each time index, and it suffers from high computational complexity. A DNN can be used to adaptively update the weight vector, as shown schematically in Fig. 7.8. The advantage of using a DNN in this way is that the processing of the MV algorithm is sped up (Luijten et al. 2020); moreover, the algorithm becomes more robust than when the MV minimization problem is solved analytically. Dynamic focusing operations can also be performed using a DNN.

Case 4: On the beamformed image (post-processing)
The DNN can be used in the post-processing step to suppress speckle noise; the schematic of this case is shown in Fig. 7.9.


Fig. 7.8 The schematic of the general processing steps of image generation in the case in which the DNN is used instead of the adaptive beamformer to calculate the weight vector

Fig. 7.9 The schematic of the general processing steps of image generation in the case in which the DNN is used on the beamformed image

To suppress the noise, a series of filtering techniques is conventionally used, most of which carry a high computational burden. Replacing conventional filtering methods with a DNN increases the processing speed and more reliably yields a high-quality image (Mishra et al. 2018), whereas conventional speckle reduction methods do not always achieve both noise suppression and quality improvement. It has been shown that in CPWI, the beamformed images corresponding to plane waves emitted at different angles are compounded to improve image quality. The DNN can be used in this step as a post-processing block, so that with only a few emissions, a reconstructed image equivalent to one obtained from many emissions is achieved (Zhang et al. 2018; Gao et al. 2023). Various modified algorithms that improve the image quality obtained from CPWI can be found in Sect. 6.1.5; a network can be trained such that, using only a few plane waves, it generates a high-quality reconstructed image similar to those improved algorithms. As a post-processing method, the DNN is also used for medical image segmentation, where the input and output of the network are, respectively, the beamformed image and the segmented image. A wide range of DNN applications in medical US imaging involves segmentation of the reconstructed images, e.g., Tang et al. (2021); Behboodi and Rivaz (2019).

Case 5: End-to-end
According to the schematic shown in Fig. 7.10, the DNN can also be used as an end-to-end network: the input of the network is the received signal, and the output is the reconstructed image with the desired quality. Studies of this kind include Nguon et al. (2022), Goudarzi and Rivaz (2022), and Nair et al. (2018).



Fig. 7.10 The schematic of using the end-to-end DNN to obtain the reconstructed image directly from the raw data

7.2.4 Deep Neural Network-Based Beamforming

As the foundation of this book is the principles of beamforming techniques, it is worth specifically presenting some DNN-based methods that involve beamformers. In this regard, some recently developed DNN-based beamformers are briefly explained in the following.

DNN for spatial coherence estimation in coherence-based beamformers
In addition to the adaptive MV beamformer, spatial coherence is exploited in some other algorithms with the aim of improving image quality. In particular, in Sect. 3.5.1.7, it was shown that in the SLSC technique, the spatial coherence of the time-delayed data is used to obtain an image with improved contrast. One should note that in the SLSC algorithm, the spatial coherence is calculated for each imaging point, which makes the algorithm time-consuming. To speed up the SLSC algorithm, one can use the GPU to perform the process in parallel; however, some computational simplifications must then be made, which induce errors in the obtained spatial coherence. To overcome this limitation, deep learning is a good candidate: by replacing the conventional spatial coherence calculation with a DNN, a high-accuracy result is achieved without any simplification, and the computational complexity of the resulting beamforming network is reduced. In Wiacek et al. (2020), such a study was performed, the goal of which was to avoid repeating the spatial coherence calculation for each imaging point and thereby speed up the SLSC algorithm. The architecture of the proposed network, known as CohereNet, is depicted in Fig. 7.11. As can be seen from the figure, the designed network consists of four fully connected (FC) layers and ends with a pooling layer. The network is fed with the time-delayed signal as input, and its output is the spatial coherence, equivalent to (3.53) presented in Sect. 3.5.1.7. Once the output of CohereNet is obtained, a summation is performed according to (3.54), and consequently, the SLSC image is achieved.
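As a rough illustration, the following is a minimal sketch of the conventional per-pixel spatial-coherence computation in the spirit of (3.53)-(3.54) — the operation that CohereNet learns to replace. The kernel length and maximum lag are illustrative assumptions:

```python
# Minimal sketch of the conventional spatial-coherence value of one imaging
# point, in the spirit of (3.53)-(3.54). `x` is the time-delayed channel data
# for that point: shape (n_elements, n_samples).
import numpy as np

def slsc_value(x: np.ndarray, max_lag: int = 10) -> float:
    coherence = []
    for m in range(1, max_lag + 1):
        # normalized correlation between all element pairs separated by lag m
        num = np.sum(x[:-m] * x[m:], axis=1)
        den = np.sqrt(np.sum(x[:-m] ** 2, axis=1) * np.sum(x[m:] ** 2, axis=1))
        coherence.append(np.mean(num / (den + 1e-12)))  # (3.53)-style R(m)
    return float(np.sum(coherence))                     # (3.54)-style sum over lags

rng = np.random.default_rng(1)
pixel_value = slsc_value(rng.standard_normal((64, 16)))  # one SLSC pixel
```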

DNN for estimation of the MV weight vectors
As mentioned before, the DNN can be used to estimate the weight vector of the adaptive MV algorithm. By doing so, one benefits from the low computational complexity of the beamforming as well as a more robust estimation compared to the analytical approach. Such a study was performed in Luijten et al. (2020); a network was designed, the input of which is the time-delayed signal and the output of which is the MV weight vector. The architecture of the designed network is presented in Fig. 7.12.


Fig. 7.11 The architecture of the CohereNet for spatial coherence estimation. Figure from Wiacek et al. (2020), licensed under CC BY 4.0

Fig. 7.12 The architecture of the DNN for estimation of the MV weight vectors

As can be seen from the figure, the proposed network consists of several FC layers. An activation function and dropout are also used, respectively, to perform a non-linear mapping on the extracted features and to prevent over-fitting. The good performance of the designed DNN was demonstrated by comparing the resulting image with the one obtained from the conventional MV algorithm.

DNN instead of the adaptive MV beamformer
One of the prominent applications of using a DNN in place of beamforming is associated with the adaptive MV algorithm; the advantage is that the computational complexity of the process is considerably reduced while the quality of the resulting image is equivalent to that obtained from the MV algorithm. In such a network, the input is the time-delayed signal, and the output is the MV beamformed data. In Goudarzi et al. (2020), it was proposed to use the well-known MobileNet-V2 and train the network to produce the MV beamformed data; a detailed explanation of MobileNet-V2 can be found in Sandler et al. (2018). The difference between this study and the one performed in Luijten et al. (2020) is that the output of the network is the MV beamformed data, not the weight vector obtained from this algorithm. Simson et al. (2019) is another example, in which a convolutional network was designed for the same purpose.
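For illustration, a minimal sketch of an FC network that maps time-delayed channel data to per-channel apodization weights is given below; the layer sizes and dropout rate are assumptions for illustration, not the exact architecture of Luijten et al. (2020):

```python
# A minimal sketch (illustrative sizes, not the published architecture) of an
# FC network that maps time-delayed channel data to adaptive apodization
# weights, which then replace the analytically computed MV weights.
import torch
import torch.nn as nn

n_elements = 128

weight_net = nn.Sequential(
    nn.Linear(n_elements, 256), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(256, 256), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(256, n_elements),          # one weight per channel
)

x = torch.randn(1024, n_elements)        # time-delayed data, one row per pixel
w = weight_net(x)                        # estimated weight vectors
beamformed = (w * x).sum(dim=-1)         # weighted sum replaces the MV output
```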



DNN instead of other MV-based beamformers
Other improved beamformers based on the adaptive MV algorithm can also be targeted to produce high-quality images, and different studies have been conducted in this regard. For instance, in Zhou et al. (2021), a hybrid GAN model was proposed in which a high-quality image, equivalent to the EIBMV algorithm presented in Sect. 3.5.2.2, is obtained from the time-delayed data. The proposed network is known as fast beamforming thanks to its low computational complexity. The generator part of the hybrid GAN model consists of two branches; feature extraction is performed on the time-delayed signal in one branch and on the DAS beamformed data in the other. The outputs of these two branches are then integrated to generate a high-quality image, using a so-called fusion block. The architecture of the generator of the hybrid GAN is depicted in Fig. 7.13. As can be seen from the figure, the time-delayed signal is used as the input of the first branch, which yields a weight vector equivalent to the one obtained from the EIBMV algorithm. In other words, the weight calculation step of the EIBMV algorithm is replaced with the first branch of the hybrid GAN model.

Fig. 7.13 The architecture of the generator part of the hybrid GAN which can be used instead of the EIBMV beamformer



Fig. 7.14 The architecture of the discriminator part of the hybrid GAN which can be used instead of the EIBMV beamformer

According to (3.86), by multiplying the obtained weight vector with the corresponding time-delayed data, a high-quality image is achieved. The second branch is used to tackle a problem one may face when dealing with a large volume of input data: in that case, extracting detailed features from the redundant information may fail. To overcome this problem, the DAS log-compressed beamformed data is used as the input of the second branch of the hybrid GAN model, so that conceptual features are generated from the low-quality input image. The outputs of the first and second branches are complementary, as their integration contains more comprehensive information than either one alone. The fused image, which is the output of the fusion block shown in Fig. 7.13, together with the real image (obtained from the conventional EIBMV algorithm), are used as the inputs of the discriminator network of the hybrid GAN model. The architecture of the discriminator network is depicted in Fig. 7.14. As previously discussed, the output of the discriminator network is a binary classification (real/fake image), which also helps the generator to update its parameters. One should note that the attention block used in the hybrid GAN model consists of convolutional, pooling, and up-sampling layers; this block helps the network to obtain a high-quality image. A detailed explanation of the attention block is presented in Zhou et al. (2021).

DNN for data construction
In Khan et al. (2020), a DNN was designed to produce a good-quality image in cases where the channel data is down-sampled with a fixed or variable pattern. In other words, the goal was to preserve the image quality when the number of array elements is reduced. The designed CNN-based encoder-decoder is capable of constructing the beamformed image without the need for any beamforming algorithm. The input of the network is the time-delayed signal. Note that in both the fixed and variable down-sampling modes, the two central elements of the array are included in the remaining channels. In this study, to avoid a large memory requirement, instead of feeding the network with the entire data, it was proposed to train the network for several limited depths (e.g., 3 depths) separately. Another study was performed in Kumar et al. (2020) to construct a complete dataset from the received signals of a sparse array in which not all of the elements are active. In such a case, the data corresponding to inactive elements are generated using the DNN. This process can be interpreted as reducing the element spacing.



Fig. 7.15 The network architecture to generate complete RF data from the received signals of a limited number of active elements. Figure from Kumar et al. (2020), licensed under CC BY 4.0

Therefore, one can say that the application of such networks is to suppress the grating lobes, as claimed in Kumar et al. (2020). The architecture of the designed network is demonstrated in Fig. 7.15. As can be seen from the figure, the network consists of 16 blocks, each of which includes an activation function, a concatenation process, and convolutional layers. The input and output of the proposed network are, respectively, the RF data corresponding to a sparse array and to a complete array (all elements active). Furthermore, in Xiao et al. (2022), a CNN was designed in which the input is the received signals corresponding to the odd elements, and the output is the received signals corresponding to the even elements. The output of the network is combined with its input, and consequently, the complete received signal is achieved. The designed network is an encoder-decoder consisting of a series of convolutional and deconvolutional layers.

DNN for image quality enhancement
The purpose of using a DNN instead of beamforming is not just to increase the speed of the process; some algorithms do not carry much computational complexity while still improving image quality. In such a case, one can aim for further quality improvement by using a DNN in place of the desired algorithm. In particular, consider the SCF algorithm, previously discussed in Sect. 3.5.1.5. By using this weighting method in CPWI, the quality of the reconstructed image is improved while the computational complexity is not much increased (in contrast to the MV algorithm). In Rothlübbers et al. (2020), a network was designed to be used instead of the conventional SCF process (a sketch of which is given below). The goal was to achieve a reconstructed image with a quality equivalent to the one obtained from several SCF-weighted plane waves. A simple network consisting of some fully connected layers was proposed, the input of which is the time-delayed signal and the output of which is the weighting coefficients.
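As a rough sketch of the operation such a network emulates, one common sign-coherence-factor formulation is given below; the exponent p is an illustrative assumption:

```python
# A minimal sketch of one common sign-coherence-factor (SCF) formulation that
# networks such as the one in Rothlübbers et al. (2020) learn to emulate.
import numpy as np

def scf_weight(x: np.ndarray, p: float = 1.0) -> float:
    """x: time-delayed channel samples for one pixel, shape (n_elements,)."""
    b = np.sign(x)                            # polarity of each channel sample
    sigma = np.sqrt(1.0 - np.mean(b) ** 2)    # spread of the sign bits
    return float(np.abs(1.0 - sigma) ** p)    # 1 when all signs agree, 0 otherwise

rng = np.random.default_rng(2)
x = rng.standard_normal(64)                   # stand-in channel data
pixel = scf_weight(x) * x.sum()               # SCF-weighted DAS output for one pixel
```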



Another use of neural networks for image quality enhancement is obtaining a high-resolution reconstructed image from its low-resolution equivalent. In this regard, in Zhang et al. (2023), a neural network was proposed in which the input is a low-resolution beamformed image consisting of a low number of imaging points and a high noise level, and the output is a high-resolution beamformed image.

DAS-based DNN approach for flexible arrays
In cases where the array has a flexible structure, the non-adaptive DAS algorithm may apply wrong time delays to the received data. To tackle this problem and apply the DAS algorithm correctly to any array geometry, a deep learning method can be used; that is, it is possible to design a network that calculates the time delays according to the array structure, without assuming a fixed geometry (e.g., linear), and consequently produces a reconstructed image with the desired accuracy. In this regard, in Huang et al. (2021), three different neural networks were proposed for the case in which the geometry of the array is not known. The proposed networks are alternatives to the DAS algorithm; more precisely, the DAS beamformed data is obtained directly from the raw data. One of the proposed networks is the well-known U-Net, the architecture of which is demonstrated in Fig. 7.4. The two other networks are based on GANs: in one, the generator is a U-Net and the discriminator is a convolutional network; the other is called Cycle-GAN, which consists of two generators and two discriminators. In Cycle-GAN, one generator is used to produce an image from the raw data, and the other performs the reverse task, i.e., generating the raw data from its corresponding input image. The discriminators of Cycle-GAN are also convolutional networks whose output is a binary classification, as explained earlier. For more information about this GAN-based network, please refer to Zhu et al. (2017). The architectures of the proposed networks are similar to Figs. 7.3 and 7.4, and therefore, they are not shown again here.
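The following minimal sketch illustrates what such networks must learn implicitly: with the element positions treated as free coordinates, geometry-aware DAS receive delays can be computed for any array shape. All values (element layout, sound speed, sampling rate, imaging point) are illustrative:

```python
# Minimal sketch of geometry-aware DAS receive delays: because the element
# positions enter only as coordinates, the same computation serves a linear
# or a flexed array. A plane wave at normal incidence is assumed for transmit.
import numpy as np

c = 1540.0                                  # sound speed (m/s)
fs = 40e6                                   # sampling rate (Hz)
elements = np.stack([np.linspace(-19e-3, 19e-3, 128),
                     np.zeros(128)], axis=1)   # (x, z) per element; any shape works
point = np.array([5e-3, 30e-3])             # imaging point (x, z)

rx_dist = np.linalg.norm(elements - point, axis=1)  # element-to-point distances
tof = (point[1] + rx_dist) / c              # transmit (z/c) plus receive times
sample_idx = np.round(tof * fs).astype(int) # per-channel sample to pick for DAS
```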



DNN to perform focused transmit
It is well known that in plane wave imaging, an unfocused wave is transmitted into the medium, which increases the frame rate but degrades the image quality. In Qi et al. (2020), a simple network was designed that maps the unfocused signal of a single plane wave to a focused transmit signal. The proposed network consists of a series of FC layers. By applying a beamformer to the output of the designed network, an image equivalent to the focused imaging technique is reconstructed. The designed network operates in the frequency domain; that is, similar to Fig. 7.7 presented in Sect. 7.2.3, the RF data should be transformed into the frequency domain to feed the network. With the same purpose, in Zuo et al. (2021), a DNN was proposed for plane wave imaging to perform focusing using a single plane wave. Note that in CPWI, synthetic focusing is performed by compounding the received data obtained from different emissions. In this study, a GAN model was used in which the input is the received data corresponding to a single plane wave, and the output is the combined data from several plane waves. In the proposed model, the generator network is based on the U-Net, both the input and output of which are RF data. The discriminator network consists of a series of convolutional layers which check the similarity between the RF data generated by the generator network and the desired one (compounded RF data). Many other neural networks have been developed in this field, where efforts are made to improve the network's performance and achieve enhanced results (Wasih et al. 2023; Seo et al. 2023).

DNN to map the sound speed
The sound speed is usually assumed to be constant in the beamforming process. However, in practice, due to the inhomogeneity of the imaging medium, the sound speed varies across the medium; therefore, assigning a constant value to the sound speed degrades the quality of the resulting image. In Kim et al. (2021), a network based on U-Net, known as Delay-Net, was designed to overcome this shortcoming. The output of this network is a delay matrix that includes the time of flight corresponding to each element of the array; the resulting delay matrix is used to apply time delays to the received signals.

End-to-end DNN methods
Apart from the examples mentioned so far regarding different applications of using a DNN instead of beamforming, an end-to-end network can also be considered, in which the input is the raw data and the output is a beamformed image; in other words, the data is fed to the network without calculating or applying any time delays. Several such networks have been designed, the goal of which is to obtain high quality and a high frame rate simultaneously. In this section, some of the related works are briefly described. In Nguon et al. (2022), an end-to-end network was designed in which the DAS algorithm is replaced with the considered DNN; that is, applying the time delays and the non-adaptive apodization to the raw data is done by the DNN. By log-compressing and interpolating the output of the network, the final image is visualized. The network considered in this study is based on U-Net, which was previously discussed in Sect. 7.2.2.2. Also, in Li et al. (2020), an AE-based network was proposed for CPWI, inspired by the well-known VGG-13 network and the U-Net architecture. Using GoogLeNet and U-Net models, a new architecture was developed in Lu et al. (2022), in which GoogLeNet is applied to the U-Net model to train the network. The ground-truth images in this study were the beamformed images obtained from compounding several plane waves. The developed network consists of a series of encoder and decoder blocks; each encoder block contains some convolutional layers and activation functions, and each decoder block includes an up-sampling operation, an activation function, and three convolutional layers. Furthermore, the input data first goes through a resize block, which consists of an up-sampling operation and two convolutional layers. Detailed information on the designed network can be found in Lu et al. (2022). In Goudarzi and Rivaz (2022), a sharp Gaussian function was considered as the ideal PSF, and the ground-truth image was obtained as the convolution of this PSF with the tissue reflectivity function. Then, a DNN was designed to map the input RF data of a single plane wave to the ground-truth image.



In the designed DNN, the wavelet transform is used in both the encoder and decoder parts of the U-Net model; it has been claimed that the wavelet transform better recovers the high-frequency content of the image. In this study, it was proposed to use a Daubechies mother wavelet with four vanishing moments.
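For illustration, and assuming the PyWavelets package is available, the following minimal sketch shows the Daubechies-4 analysis/synthesis pair that plays the role of the down/up-sampling steps inside such a wavelet U-Net:

```python
# A minimal sketch, assuming the PyWavelets (pywt) package, of the Daubechies
# analysis/synthesis pair used in place of down/up-sampling in a wavelet U-Net.
import numpy as np
import pywt

rng = np.random.default_rng(3)
image = rng.standard_normal((256, 256))            # stand-in beamformed image

cA, (cH, cV, cD) = pywt.dwt2(image, 'db4')         # analysis: 4 sub-bands
recon = pywt.idwt2((cA, (cH, cV, cD)), 'db4')      # synthesis: inverse transform
assert np.allclose(recon, image)                   # exact for even-sized inputs
```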

7.3 Super-Resolution Ultrasound Imaging

The intensity of the backscattered echoes in US imaging depends on the acoustic impedance changes between tissue layers; the greater the difference between the acoustic impedances, the stronger the intensity of the corresponding received echo. In particular, identifying the structure of vessels as well as measuring blood perfusion using US imaging is problematic: due to the small acoustic impedance changes in a blood vessel, the intensity of the corresponding received echo is smaller than that of the surrounding tissue. This problem is aggravated when imaging small-diameter vessels, on the scale of a micrometer, is of interest. To overcome this limitation, a contrast agent consisting of many microbubbles is injected into the area under imaging; the vessel structure (or other measurements) is then obtained by revealing the microbubbles inside the vessel. This contrast-enhanced technique is known as super-resolution US (SRUS) imaging. More precisely, SRUS imaging is defined as an imaging technique that is capable of achieving a resolution beyond the diffraction limit and of distinguishing point targets separated by less than $\lambda/2$ (for example, at a center frequency of 5 MHz in soft tissue with $c \approx 1540$ m/s, $\lambda/2 \approx 0.15$ mm). This imaging technique is used in different applications such as visualizing the microvascular structure, identifying the presence of a defect such as a thrombus, measuring blood flow at the micrometer scale in organs such as the liver and kidney, determining the direction as well as the speed of blood flow in a vessel, identifying abnormal patterns of blood flow (Doppler imaging), and early diagnosis of cancer.
From the above explanations, it follows that SRUS imaging is realized by using microbubbles. Microbubbles are small particles (up to 7 µm in diameter) whose cores are filled with gas and which are encapsulated in shells of materials such as phospholipids. The gas cores of microbubbles result in a large impedance difference with respect to the surrounding tissue, and consequently, strongly backscattered acoustic waves (Abou-Elkacem et al. 2015; Lindner 2004). The amplitude of the resulting echo is such that even a single microbubble within the vessel volume can be detected using the US imaging system. Note that the microbubbles are removed from the circulatory system after a while; this happens through a series of mechanisms, such as microbubble absorption by the phagocyte system. Microbubbles can also be destroyed by transmitting acoustic waves with a pressure higher than that used for imaging. The contrast agent is also used in other imaging modalities, including positron emission tomography (PET), single photon emission computed tomography (SPECT), and magnetic resonance imaging (MRI), to visualize the vascular structure.



In contrast to PET and SPECT, in which the time interval between the contrast agent injection and data acquisition is more than one hour, SRUS imaging takes only a few minutes. The reason is that in SRUS imaging, the received signal corresponding to microbubbles traveling through the blood vessel is quickly separated from the background signal; this is achieved by using a harmonic signal extraction method, previously discussed in Sect. 5.2. To further speed up the imaging process, the fast PWI technique has been suggested for SRUS imaging (Kusunose and Caskey 2018; Sabuncu et al. 2023). Also, in terms of spatial resolution, SRUS imaging outperforms PET and SPECT and is comparable with MRI (Dayton and Rychak 2007).

7.3.1 Principles of Super-Resolution Ultrasound Imaging

The general processing steps of SRUS imaging are schematically illustrated in Fig. 7.16. Note that SRUS imaging is also known as US localization microscopy. In the following, each processing step is discussed separately.

7.3.1.1 Data Acquisition and Detection

Once the microbubbles are injected into the region of interest, the first step is to perform emissions and acquire the backscattered echoes. Then, the received signals are beamformed for further processing, which is denoted as the detection process; the non-adaptive DAS beamformer is usually used in this regard. One should note that the received data includes the echoes reflected from the microbubbles as well as from the surrounding tissue. To improve the performance of SRUS imaging, it is desirable to extract and separate the received signal corresponding to the microbubbles from that of the surrounding tissue (Cherin et al. 2019). This can be achieved by performing harmonic imaging and using one of the harmonic signal extraction methods previously discussed in Sect. 5.2. It has been observed that by using the second harmonic component of the received signal, the contrast of the microbubbles is improved compared to the surrounding tissue. However, the second harmonic component of the signal from the surrounding tissue is also included in the received data, which negatively affects the performance of SRUS imaging. To tackle this problem, sub-harmonic as well as ultra-harmonic imaging techniques have been suggested (Needles et al. 2009; Dayton and Rychak 2007; Wang et al. 2016).

Fig. 7.16 Processing steps of SRUS imaging



Apart from the harmonic-based methods, other contrast-enhancing techniques can be used to extract the received signals corresponding to microbubbles, as listed below:
• Singular value decomposition (SVD),
• Destruction-replenishment technique,
• Acoustic radiated force,
• Adaptive beamformer.

In the following, each of the above methods is described separately.

Singular Value Decomposition
A simple way to extract the data corresponding to microbubbles from the surrounding tissue is to use the SVD filtering method (Brown et al. 2019). The singular vectors corresponding to the small singular values contain information about the moving objects. As microbubbles move along the blood vessel, it follows that by applying a singular value decomposition to the beamformed data, it is possible to separate the signals of the microbubbles from the stationary surrounding tissue (a sketch is given below). Note that the computational complexity of this process is high, which negatively affects the frame rate.
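A minimal sketch of this SVD clutter filtering on the Casorati matrix is given below; the number of discarded components is an illustrative assumption that is normally tuned per dataset:

```python
# Minimal sketch of SVD clutter filtering: frames are flattened into the
# columns of a Casorati matrix, the largest singular components (slowly
# varying tissue) are zeroed, and the remainder retains the moving bubbles.
import numpy as np

rng = np.random.default_rng(4)
n_frames, nz, nx = 100, 64, 64
frames = rng.standard_normal((n_frames, nz, nx))    # stand-in beamformed frames

casorati = frames.reshape(n_frames, -1).T           # shape (pixels, frames)
U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
s_filtered = s.copy()
s_filtered[:5] = 0.0                                # drop dominant (tissue) components
bubbles = (U * s_filtered) @ Vt                     # filtered Casorati matrix
bubble_frames = bubbles.T.reshape(n_frames, nz, nx)
```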

Destruction-Replenishment Technique
A few minutes after the contrast agent injection, some of the microbubbles adhere to the vessel wall and the desired markers, and others circulate freely within the vessel. In this state, US imaging is performed and the resulting beamformed data is denoted as $I_{\mathrm{pre}}(x)$. Then, a high-energy destructive pulse is emitted into the imaged vessel in order to destroy the microbubbles adhered to the vessel wall. US imaging is performed again in this state, where no microbubble remains in the considered vessel; the resulting beamformed data is denoted as $I_{\mathrm{post}}(x)$. By subtracting the data corresponding to these two states, one can distinguish the signal associated with the targeted microbubbles from the other, unwanted signals (Abou-Elkacem et al. 2015). More precisely, the desired signal is obtained according to the following equation:
$$I_{\mathrm{desired}}(x) = I_{\mathrm{pre}}(x) - I_{\mathrm{post}}(x). \tag{7.37}$$

The disadvantage of this method is that the time required to perform the process is relatively long.

Acoustic Radiated Force
Another method to improve SRUS imaging is to use the acoustic radiated force, which is faster than the destruction-replenishment technique. It has been shown that by using the acoustic force, more microbubbles adhere to the vessel wall (Rychak et al. 2007). One can use this property to separate the backscattered echoes corresponding to microbubbles from the unwanted signals: after the contrast agent injection, the acoustic force is radiated into the considered vessel; consequently, the number of microbubbles adhered to the vessel wall increases, and SRUS imaging becomes more efficient.

7.3 Super-Resolution Ultrasound Imaging

343

In this case, denote by $s_{\mathrm{sat}}$ the received signal when the number of microbubbles adhered to the vessel wall reaches its maximum value. Once the acoustic force is terminated, some microbubbles diffuse back into the blood circulation, and only a few of those adhered to the vessel wall remain; the received signal in this case, containing the residual adhered microbubbles, is denoted as $s_{\mathrm{res}}$. If the initial received signal before the contrast agent injection is denoted as $s_{\mathrm{init}}$, the residual-to-saturation ratio is calculated as below (Abou-Elkacem et al. 2015):
$$RSR = \left( \frac{s_{\mathrm{res}} - s_{\mathrm{init}}}{s_{\mathrm{sat}} - s_{\mathrm{init}}} \right) \times 100, \tag{7.38}$$

which is used to evaluate the performance of the acoustic radiated force method.

Adaptive Beamformer
In addition to the destruction-replenishment technique and the acoustic radiated force method, in which modifications are applied to the imaging process itself, an adaptive beamformer can also be used as a software method to improve the performance of SRUS imaging. In particular, the MV algorithm or the CF weighting method can be used instead of the non-adaptive DAS beamformer to obtain beamformed data with improved resolution and contrast (Tasbaz and Asl 2021a; Diamantis et al. 2018); consequently, the performance of the isolation and localization steps is improved (a sketch of the CF weighting is given below). In Yan et al. (2022a), the performance of the adaptive beamformer was evaluated and compared with the DAS algorithm in 3D SRUS imaging. It was shown that by using an adaptive beamformer, more microbubbles are reconstructed in each frame; this is due to the fact that the main lobe width of the reconstructed microbubble is reduced compared to the case where the non-adaptive DAS beamformer is used, and therefore, the number of mislocalized microbubbles is reduced. Also, in Hyun et al. (2017), the SLSC algorithm (described in Sect. 3.5.1.7) was proposed for SRUS imaging and was shown to be more robust against noise than the conventional DAS beamformer. Note that the discussed methods can be combined to improve the performance of SRUS imaging; for instance, harmonic imaging can be used together with the acoustic radiated force method, and the corresponding received signal can additionally be processed using the adaptive MV algorithm.
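For illustration, a minimal sketch of the CF weighting mentioned above is given below; it scales the DAS output of each pixel by the ratio of coherent to total channel energy:

```python
# Minimal sketch of coherence-factor (CF) weighted DAS for one pixel:
# the DAS sum is scaled by coherent energy / total channel energy.
import numpy as np

def cf_das(x: np.ndarray) -> float:
    """x: time-delayed channel samples for one pixel, shape (n_elements,)."""
    das = x.sum()
    cf = np.abs(das) ** 2 / (x.size * np.sum(np.abs(x) ** 2) + 1e-12)
    return float(cf * das)

rng = np.random.default_rng(5)
pixel = cf_das(rng.standard_normal(64))   # incoherent data -> heavily suppressed
```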

7.3.1.2 Isolation

In the isolation step, the detected microbubbles that are either closer to each other than a certain limit or adhered together are removed from the beamformed image.



Fig. 7.17 Reconstructed images of the simulated microbubbles before (left) and after (right) the isolation process

The reason is that if microbubbles adhere together, they interfere in the corresponding binary image, and consequently, a single point (microbubble) is identified from this image instead of multiple points. As a result, the accuracy of the next step, i.e., the localization process in which the centers of the microbubbles are found, is reduced. In this step, the binary image of the input beamformed data is first constructed. This is achieved by comparing the values of the image pixels to a pre-defined threshold: the value 1 is assigned to pixels whose intensities are greater than the selected threshold, and the value 0 otherwise. This can be interpreted as a segmentation process. The regions in the binary image that are larger than a standard limit denote adhered microbubbles, and therefore, they are removed from the image (a sketch of this step follows below). Note that the standard limit is determined based on a standard PSF, which is represented using the backscattered signals corresponding to a single point. Consequently, only a few microbubbles remain in each frame after the isolation step. Using a diluted contrast agent increases the distance between microbubbles, and therefore, the isolation process and the removal of adhered microbubbles are performed more efficiently. Figure 7.17 shows the reconstructed images of the simulated microbubbles before and after the isolation step; it can be seen that the adhered microbubbles and those that are too close to each other are removed after the isolation process.
It is also possible to perform the isolation process by subtracting consecutive frames. This method, known as the differential imaging technique, is simple and can be easily implemented (Desailly et al. 2013). By using the differential imaging technique, the moving microbubbles remain in each frame. However, the drawback of this technique is that stationary as well as slow-moving microbubbles cannot be well identified, and disrupted microbubbles also remain. Furthermore, this technique suffers from errors due to tissue motion, which limits the visualization of microbubbles in the imaged vessel.
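A minimal sketch of the threshold-label-filter form of the isolation step is given below; the threshold and region-size limit are illustrative assumptions derived in practice from the standard PSF:

```python
# Minimal sketch of the isolation step: binarize the beamformed image,
# label connected regions, and discard regions larger than a PSF-derived
# size limit (adhered/overlapping microbubbles).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(6)
image = rng.random((128, 128))                      # stand-in envelope image

binary = image > 0.99                               # pre-defined threshold
labels, n_regions = ndimage.label(binary)           # connected-component labeling
sizes = ndimage.sum(binary, labels, index=np.arange(1, n_regions + 1))
keep = np.isin(labels, np.flatnonzero(sizes <= 4) + 1)  # drop oversized regions
isolated = binary & keep                            # surviving microbubbles
```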


7.3.1.3 Localization

The final processing step of SRUS imaging is the localization process, in which the position of each microbubble is obtained. Consider the reconstructed image corresponding to a single microbubble; it can be regarded as the convolution of the PSF of the system with the microbubble. Therefore, the center of the PSF of each microbubble is taken as its location (Viessmann et al. 2013). To achieve this, the weighted centroid method over each segment in the binary image is usually applied to the beamformed data (a sketch is given at the end of this subsection). The output of the localization process is demonstrated in Fig. 7.18, where red dots specify the identified centers of the microbubbles. Other methods, such as fitting a Gaussian function to the reconstructed data, can also be used to estimate the locations of microbubbles from the beamformed image (O'Reilly and Hynynen 2013; Xavier et al. 2022). One should note that due to the use of a diluted contrast agent, and also the isolation process, only a few microbubbles exist in each frame. Therefore, several frames have to be acquired and processed, and the super-resolution image is finally constructed by summing over the reconstructed frames. The imaging speed is thus negatively affected, since a large number of frames is required to obtain the final image. In order to tackle this problem and reconstruct the vessel structure using a small number of frames, in Tasbaz and Asl (2021b), it was proposed to accumulate the binary images of each frame without a localization process; the identified microbubbles themselves, rather than their central locations, are accumulated over different frames. In the proposed method, a beamformer with improved resolution is necessary in order to improve the resolution of the reconstructed microbubbles.

Fig. 7.18 Reconstructed images of the simulated microbubbles: the left figure demonstrates the isolated microbubbles, and the right figure shows the output of the localization process. The center of each isolated microbubble is specified using a red dot



It has been shown in Tasbaz and Asl (2021b) that the adaptive EIBMV algorithm combined with the CF weighting method results in a satisfying reconstructed image in terms of both resolution and contrast. By using this method, the imaging process is sped up, since only a limited number of frames is required, and the localization step is removed from the process. However, blood flow measurement is not possible with this method; it can only be used to identify the vessel structure. If the estimation of the blood velocity or the direction of the blood flow is of interest, a tracking process should additionally be performed on the localized data; in particular, this is achieved by calculating the cross-correlation between two consecutive frames over a window of specific length centered on the position of each microbubble.
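For completeness, a minimal sketch of the weighted-centroid localization step is given below, using the underlying image intensities as the centroid weights:

```python
# Minimal sketch of the localization step: the intensity-weighted centroid of
# each isolated region in the binary map is taken as the microbubble position.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(7)
image = rng.random((128, 128))                       # stand-in envelope image
binary = image > 0.995                               # isolated binary map
labels, n_regions = ndimage.label(binary)

centers = ndimage.center_of_mass(image, labels,      # intensity-weighted centroids
                                 index=np.arange(1, n_regions + 1))
# `centers` is a list of (axial, lateral) sub-pixel positions, one per bubble.
```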

7.3.2 Performance Improvement of Super-Resolution Ultrasound Imaging

As mentioned earlier, in order to perform the localization process accurately, the distance between microbubbles should be greater than a specific threshold ($\geq \lambda$). Otherwise, their corresponding PSFs overlap, and consequently, the centers of the microbubbles cannot be found accurately. This problem can be overcome by using a diluted contrast agent. However, as the number of microbubbles per frame is then reduced, several frames are required to generate the super-resolution image, and the time taken for data acquisition and processing increases. Moreover, due to the long duration of the imaging process, motion may occur in the imaged tissue, which induces errors in the final reconstructed image. Chest movement during breathing, the motion of the beating heart, and small contractions and expansions of some muscles in the human body are some factors that induce motion errors during data acquisition. In this section, different techniques to tackle this problem are examined.

Two-Stage Motion Correction Method
In Harput et al. (2018), a two-step motion estimation method based on image registration was proposed. In this method, an affine registration is performed in the first step in order to estimate the errors caused by global motion, including translation and rotation (Rueckert et al. 1999). Then, in the second step, a non-rigid registration technique is applied to the output of the previous registration step; in this stage, the residual errors caused by local motions are estimated. The non-rigid image registration is performed using the free-form deformation model based on B-splines (Lee et al. 1997). In the registration process, an attempt is made to relate each imaging point (contrast-enhanced image) to its corresponding reference imaging point (B-mode image). More precisely, by performing the registration on reference frames, a transformation matrix is obtained, which is applied to the contrast-enhanced frames to compensate for the motion errors, and consequently, to perform the isolation and localization processes more efficiently. The schematic of the general processing steps of this method is presented in Fig. 7.19.



Fig. 7.19 General processing steps of the two-stage image registration method for motion correction. Figure from Harput et al. (2018), licensed under CC BY 3.0

Correlation-Based Motion Estimation Method
An efficient way to compensate for motion errors is to exploit the spatial correlation between the B-mode images (Taghavi et al. 2021; Jensen et al. 2019; Foiret et al. 2017). In this technique, one frame is considered as the reference image. Then, the cross-correlation between the reference image and the other images is calculated, and the motion in both the axial and lateral directions is estimated from the calculated cross-correlation for each frame. In cases where the motion is not uniform over the entire volume of the tissue (e.g., the kidney), each image is divided into some (overlapping) blocks, and the motion estimation is performed for each block. Note that the block size should be selected such that the motion inside the considered block is spatially invariant; also, the frame rate should be high enough that the fastest motion is captured within frames. In Christensen-Jeffries et al. (2014), it was proposed to use the cross-correlation method to remove the frames contaminated by breathing artifacts: if the cross-correlation value associated with a frame is less than an empirically adjusted threshold, it is removed from the process. In addition to motion artifact reduction, this method also improves the frame rate, since the process is performed on fewer frames. The cross-correlation-based method was also used in Hingot et al. (2017) to estimate the tissue motion in the case where ultrafast PWI is used for SRUS imaging. Assume that $M$ emissions from different angles are used to obtain each frame; it follows that in the ultrafast SRUS imaging scenario, a series of $M$ (beamformed) blocks is acquired. In this imaging method, the SVD filtering method is first applied to the beamformed images in order to extract the information associated with the tissue; this is achieved by filtering out the singular vectors corresponding to the small singular values. For each block, the filtered output corresponding to the $i$th emission is denoted as $I_i(x, z)$ for the position $(x, z)$. Also, the central beamformed image of each block is considered as the reference image, represented by $I_r(x, z)$. Considering that a displacement $(\Delta x, \Delta z)$ occurs between the two images $I_i(x, z)$ and $I_r(x, z)$, the cross-correlation between them is expressed as below:
$$\mathrm{corr} = \frac{\mathcal{F}\{I_i\} \circ \mathcal{F}^{*}\{I_r\}}{\left| \mathcal{F}\{I_i\} \circ \mathcal{F}^{*}\{I_r\} \right|}, \tag{7.39}$$



where $\mathcal{F}\{\cdot\}$ denotes the Fourier transform, and $\circ$ represents the Hadamard product. By inverse Fourier transforming the above equation, a peak appears at the position $(\Delta x, \Delta z)$; therefore, the displacement can be estimated and used to compensate for the motion error. More precisely, the displacement is calculated from the following maximization problem:
$$(\Delta x, \Delta z) = \underset{(x,z)}{\operatorname{argmax}} \left( \mathcal{F}^{-1}\{\mathrm{corr}\} \right). \tag{7.40}$$
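A minimal sketch of this phase-correlation displacement estimate, following (7.39)-(7.40), is given below:

```python
# Minimal sketch of phase-correlation shift estimation, (7.39)-(7.40): the
# normalized cross-power spectrum is inverse-transformed and the peak location
# gives the displacement between the two frames.
import numpy as np

def estimate_shift(ref: np.ndarray, img: np.ndarray) -> tuple:
    cross = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    corr = cross / (np.abs(cross) + 1e-12)            # (7.39)
    peak = np.unravel_index(np.argmax(np.abs(np.fft.ifft2(corr))), ref.shape)
    # indices beyond the midpoint correspond to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, ref.shape))

rng = np.random.default_rng(8)
a = rng.random((64, 64))
b = np.roll(a, shift=(3, -5), axis=(0, 1))            # known displacement
print(estimate_shift(a, b))                           # -> (3, -5)
```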

The described cross-correlation process is performed for each block (within blocks). A similar procedure is also performed to estimate the displacement between blocks, with the difference that the cross-correlation is calculated between the reference image of each block and that of the first block. The estimated displacements are then used to compensate for the motion errors in the positions of the microbubbles. In the next step, the SVD filtering method is again applied to the beamformed data, this time to remove the tissue information and preserve the signals corresponding to the microbubbles. Then, the localization process is performed to identify the positions of the microbubbles, and the previously estimated displacements are applied to the localized images to correct the motion error. Note that the motion correction is performed in both directions, within and between blocks.

Deconvolution-Based Method
To speed up the SRUS imaging process, a deconvolution-based method was proposed in Yu et al. (2018), which consists of two main steps: deconvolution and spatiotemporal interframe correlation (STIC). In the first step, a deconvolution process is performed in order to separate each microbubble from its neighbors; more precisely, by using the deconvolution method, it is possible to separate microbubbles at small distances from each other. Therefore, there is no need to remove the frames in which microbubbles are located at distances less than the considered threshold, and consequently, the data acquisition time is reduced. The Richardson-Lucy (RL) deconvolution method, a non-linear iterative process, was proposed to be used in Yu et al. (2018); it is formulated as below:
$$i^{(k+1)} = i^{(k)} \left[ \mathrm{corr}\left\{ h, \frac{h \otimes i^{(0)} + n}{h \otimes i^{(k)}} \right\} \right], \tag{7.41}$$

where $\otimes$ denotes the convolution operator, $i^{(k)}$ denotes the image obtained from the $k$th iteration, $\mathrm{corr}\{a, b\}$ is the correlation between the functions $a$ and $b$, $h$ is the PSF of the system, and $n$ represents noise. Note that $(h \otimes i^{(0)} + n)$ is known as the blurred image in the RL deconvolution method.
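A minimal sketch of the RL iteration (7.41) is given below, assuming a known Gaussian PSF and using FFT-based convolution (correlation is implemented as convolution with the flipped PSF):

```python
# Minimal sketch of the Richardson-Lucy iteration (7.41): iterative update of
# the estimate from the blurred observation and a known PSF.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, h, n_iter=25):
    est = np.full_like(blurred, blurred.mean())       # flat initial estimate
    h_flip = h[::-1, ::-1]                            # corr{h, .} = conv with flipped h
    for _ in range(n_iter):
        ratio = blurred / (fftconvolve(est, h, mode='same') + 1e-12)
        est = est * fftconvolve(ratio, h_flip, mode='same')   # (7.41)
    return est

# toy example: two close point scatterers blurred by a Gaussian PSF
y, x = np.mgrid[-7:8, -7:8]
h = np.exp(-(x**2 + y**2) / 8.0); h /= h.sum()
scene = np.zeros((64, 64)); scene[30, 30] = scene[30, 36] = 1.0
blurred = fftconvolve(scene, h, mode='same')
recovered = richardson_lucy(blurred, h)               # peaks re-separate
```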



Although the data acquisition time is reduced by using the deconvolution method, the received data is still contaminated by very fast motion, such as heart movement. To tackle this limitation and remove such errors from the data, the second processing step, i.e., the STIC method, is applied to the output of the first step. In the re-alignment STIC method, the acquired frames over several cardiac cycles are synchronized according to the periodic changes of the signal. A cardiac cycle consists of the systole and diastole phases, and a much larger number of microbubbles is detected in the systole phase than in the diastole phase; considering this, it follows that by applying a low-pass filter to the signal corresponding to the flowing microbubbles, one can estimate the cardiac cycle. Once the images are re-aligned according to the described method, the super-resolution image of a single cardiac cycle is finally achieved by integrating the re-aligned images.

Non-local Means Filtering Method
In Song et al. (2017), it was proposed to use the non-local means (NLM) filtering method to separate the signal corresponding to microbubbles from the unwanted signals. In this method, an image registration method based on phase correlation is first applied to the acquired frames in order to estimate and eliminate the motion due to breathing. Then, the consecutive frames are mutually subtracted to remove the contribution of the surrounding tissue as well as of the stationary microbubbles. Although the signal associated with the moving microbubbles (which are of interest) remains in this state, the result still suffers from noise caused by different factors, including the residual tissue motion; by using the NLM filtering method, this residual noise is efficiently suppressed. Consider two patches $i$ and $j$ of pixels $x(N_i)$ and $x(N_j)$ in a search window of size $I$, and assume that the patches $i$ and $j$ are centered on the targeted pixels $x_i$ and $x_j$, respectively. In the NLM filtering method, the value of the targeted pixel $x_i$ is updated according to the following equation:
$$x_i = \sum_{j \in I} w(i, j)\, x_j, \tag{7.42}$$

where $w(i, j)$ denotes the weighting coefficient, which is calculated as below:
$$w(i, j) = \frac{1}{D_i}\, e^{-\frac{\| x(N_i) - x(N_j) \|_2^2}{c_s^2}}. \tag{7.43}$$

It can be seen from the above equation that the weighting coefficient $w$ is obtained from the patches $x(N_i)$ and $x(N_j)$. In the above equation, $D_i$ denotes the normalization factor, and $c_s$ is a constant that determines the smoothness of the filtered output. Detailed explanations of the NLM filtering method can be found in Coupé et al. (2009).



Compressive Sensing-Based Algorithms
To achieve accurate localization and identification at higher microbubble concentrations, the compressive sensing algorithm has been recommended (Van Sloun et al. 2017; Yan et al. 2022b). By using this algorithm, more microbubbles can be identified in each frame; in other words, it provides super-resolution imaging with fewer frames, and therefore, the speed of the imaging process is increased. To use the compressive sensing algorithm, the forward problem based on the acquired data $s$ is first defined as below:
$$s = \Phi y, \tag{7.44}$$

where $\Phi$ denotes the measurement matrix, and the distribution of microbubbles is represented by $y$. Considering the above equation, and also considering that the microbubbles are sparsely distributed over the imaging grid, an $\ell_0$-norm regularization term is added to the problem, and the resulting minimization problem is formulated as below:
$$y_{\mathrm{opt}} = \underset{y}{\operatorname{argmin}}\; \| \Phi y - s \|_2^2 + \alpha \| y \|_0, \tag{7.45}$$

where $\alpha$ is the regularization parameter. By solving the above optimization problem, the distribution of microbubbles is extracted. For detailed information about the principles of the compressive sensing algorithm, please refer to Sect. 6.4.
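Since the $\ell_0$ problem (7.45) is combinatorial, practical implementations often relax it to an $\ell_1$ penalty; the following minimal sketch solves that relaxation with ISTA (soft-thresholded gradient steps). The problem sizes, step size, and regularization weight are illustrative:

```python
# Minimal sketch of sparse recovery for (7.44): the l0 problem (7.45) is
# relaxed to an l1 penalty and solved with ISTA (iterative soft thresholding).
import numpy as np

rng = np.random.default_rng(9)
n_meas, n_grid = 80, 200
Phi = rng.standard_normal((n_meas, n_grid)) / np.sqrt(n_meas)   # measurement matrix
y_true = np.zeros(n_grid); y_true[[20, 77, 150]] = [1.0, -0.8, 0.6]  # sparse bubbles
s = Phi @ y_true                                     # acquired data, (7.44)

alpha = 0.02                                         # regularization weight
step = 1.0 / np.linalg.norm(Phi, 2) ** 2             # gradient step size
y = np.zeros(n_grid)
for _ in range(500):
    z = y - step * (Phi.T @ (Phi @ y - s))           # gradient step on data term
    y = np.sign(z) * np.maximum(np.abs(z) - step * alpha, 0.0)  # soft threshold
```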

Adaptive Grayscale Mapping Algorithm
In the destruction-replenishment algorithm, the contribution of the surrounding tissue is largely removed from the signal corresponding to microbubbles by subtracting the images obtained before and after applying the destructive pulse, according to (7.37). However, some artifacts may appear in the resulting image due to misregistration of the images $I_{\mathrm{pre}}$ and $I_{\mathrm{post}}$, which causes ambiguity in the identification of microbubbles: if the misregistration is such that the object does not overlap in the image $I_{\mathrm{pre}}$, a positive artifact is generated; if the object does not overlap in the image $I_{\mathrm{post}}$, a negative artifact results. In Shu et al. (2018), a post-processing algorithm known as the adaptive grayscale mapping technique was developed in order to identify and suppress such artifacts. In this algorithm, the brightness $b(i, j)$ and darkness $d(i, j)$ moments for the pixel at axial and lateral positions $i$ and $j$, respectively, are calculated as below:
$$b(i, j) = \sum_{m=i-k}^{i+k} \sum_{n=j-k}^{j+k} \{ I_{\mathrm{desired}}(m, n) > 0 \}, \tag{7.46}$$
$$d(i, j) = \sum_{m=i-k}^{i+k} \sum_{n=j-k}^{j+k} \{ I_{\mathrm{desired}}(m, n) < 0 \}. \tag{7.47}$$

According to the obtained moments, each pixel value of the image $I_{\mathrm{desired}}$ is remapped, and the result is denoted as $G$: if $I_{\mathrm{desired}}(i, j) > 0$ and $b(i, j) > d(i, j)$, then the value of the corresponding pixel is remapped as below:
$$G(i, j) = I_{\mathrm{desired}} \times \frac{b(i, j) - d(i, j)}{b(i, j)}. \tag{7.48}$$



Also, if $I_{\mathrm{desired}}(i, j) < 0$ and $b(i, j) < d(i, j)$, then the remapping is performed according to the following equation:
$$G(i, j) = I_{\mathrm{desired}} \times \frac{d(i, j) - b(i, j)}{d(i, j)}. \tag{7.49}$$

Note that when neither of the two conditions is met, the corresponding pixel value is assumed to be noise and is replaced with zero.

Microbubble Uncoupling via Transmit Excitation
In Kim et al. (2022), a technique known as microbubble uncoupling via transmit excitation (MUTE) was developed, which performs an accurate localization process at a high concentration of microbubbles and consequently improves the frame rate. In this technique, two consecutive excitations, with and without a null, are used to perform the imaging process. More precisely, in the first step, a Gaussian-shaped pulse, known as the non-null excitation, is used; in the second step, a doughnut-shaped pulse is used as the null excitation. Then, the beamformed images are constructed from the received signals corresponding to these two consecutive excitations. Finally, by subtracting the obtained images, the microbubbles in the null regions are suppressed and the microbubble in the non-null region is preserved; indeed, the microbubble corresponding to the null region is uncoupled from its surrounding ones. A simple schematic of the MUTE technique is demonstrated in Fig. 7.20.

Deep Learning-Based Algorithms
One way to speed up the processing steps of SRUS imaging is to use deep learning-based methods; with these, it is possible to identify and localize more microbubbles in each frame, even when their PSFs overlap. Therefore, the number of frames required to achieve the desired image is reduced. In a deep learning-based method, a network is designed and trained in order to obtain an appropriate model. Then, the obtained model is used to identify the positions of the microbubbles and, consequently, to achieve the super-resolution image. In order to train the network and obtain the optimal parameters of the model, a dataset with a high concentration of contrast agent in which the positions of the microbubbles are known is used. Note that network training is a time-consuming process. However, once the training is finished, the resulting model can be used in different imaging scenarios and takes very little time, on the order of milliseconds, to obtain the solution (e.g., the localized image or the final super-resolution image).

Fig. 7.20 Schematic of the MUTE technique. Figure from Kim et al. (2022), licensed under CC BY 4.0



In particular, in Youn et al. (2019, 2020), a neural network with the well-known U-Net structure was designed, in which the input is the beamformed data corresponding to a few frames, and the output is the identified point targets with high intensity (in the presence of overlap between their PSFs). Also, in Nehme et al. (2018), a fully convolutional neural network was designed to achieve a super-resolution image directly from the received signal. Moreover, a 3D convolutional neural network was developed in Brown et al. (2020) to perform spatio-temporal filtering on the images and segment the microbubbles rapidly. Many similar deep learning-based studies have been conducted to improve the SRUS imaging process (Van Sloun et al. 2019; Liu et al. 2020; Chen et al. 2021; Blanken et al. 2022; Chen et al. 2022). Nevertheless, the application of deep learning-based methods in SRUS imaging is still in its infancy, and further improvement can be achieved by exploiting the potential capabilities of this approach. Basic explanations of the deep learning method, how to use this technique in data processing, and its benefits over conventional data processing methods are presented in Sect. 7.2 of this book.

7.4 Conclusion

In this chapter, point target detection in US imaging was discussed. It was observed that point target detection is important in applications such as imaging microcalcifications or identifying kidney stones. In such cases, preserving the background speckle is not of interest; rather, the main focus is on identifying strong reflectors with high accuracy. Point target detection techniques such as the multilook method as well as the wavelet coefficient-based algorithm were investigated.
The next section of this chapter was dedicated to a brief introduction to DL, which has a wide range of applications in US data processing. The architectures of some common networks, such as VGG and U-Net, as well as their applications in different parts of US data processing, were reviewed. It was concluded that by designing and training an appropriate network, the trained model can be used to obtain the desired solution at high speed; in particular, it is possible to design a network and substitute it for the MV algorithm. By doing so, the high computational complexity of the adaptive MV algorithm is bypassed while an image of the same quality is produced. Another scenario of using DL in CPWI, which has attracted a lot of attention, is to generate an image with a quality equivalent to combining several plane waves by using only a single plane wave.
In Sect. 7.3 of this chapter, as the last section, the general processing steps of SRUS imaging were studied, in which a contrast agent consisting of microbubbles is used to reveal the structure of microvessels and to obtain other information such as blood flow. It was observed that the general processing steps of SRUS imaging, once the data acquisition is performed, include isolation and localization, which ultimately lead to the estimation of the positions of the microbubbles.



Also, it was observed that the adaptive MV algorithm can be used in SRUS imaging to identify microbubbles that are close to each other better than the conventional DAS algorithm does.

References

Abou-Elkacem L, Bachawal SV, Willmann JK (2015) Ultrasound molecular imaging: moving toward clinical translation. Eur J Radiol 84(9):1685–1693
Behboodi B, Rivaz H (2019) Ultrasound segmentation using U-net: learning from simulated data and testing on real data. In: 2019 41st annual international conference of the IEEE engineering in medicine and biology society (EMBC). IEEE, pp 6628–6631
Bilodeau M, Quaegebeur N, Berry A, Masson P (2022) Correlation-based ultrasound imaging of strong reflectors with phase coherence filtering. Ultrasonics 119:106631
Blanken N, Wolterink JM, Delingette H, Brune C, Versluis M, Lajoinie G (2022) Super-resolved microbubble localization in single-channel ultrasound RF signals using deep learning. IEEE Trans Med Imaging
Brown J, Christensen-Jeffries K, Harput S, Zhang G, Zhu J, Dunsby C, Tang M-X, Eckersley RJ (2019) Investigation of microbubble detection methods for super-resolution imaging of microvasculature. IEEE Trans Ultrason Ferroelectr Freq Control 66(4):676–691
Brown KG, Ghosh D, Hoyt K (2020) Deep learning of spatiotemporal filtering for fast super-resolution ultrasound imaging. IEEE Trans Ultrason Ferroelectr Freq Control 67(9):1820–1829
Canziani A, Paszke A, Culurciello E (2016) An analysis of deep neural network models for practical applications. arXiv:1605.07678
Chen H, Wu L, Dou Q, Qin J, Li S, Cheng J-Z, Ni D, Heng P-A (2017) Ultrasound standard plane detection using a composite neural network framework. IEEE Trans Cybern 47(6):1576–1586
Chen X, Lowerison M, Dong Z, Sekaran NC, Huang C, Chen S, Fan TM, Llano DA, Song P (2021) Localization free super-resolution microbubble velocimetry using a long short-term memory neural network. bioRxiv
Chen X, Lowerison MR, Dong Z, Han A, Song P (2022) Deep learning-based microbubble localization for ultrasound localization microscopy. IEEE Trans Ultrason Ferroelectr Freq Control 69(4):1312–1325
Cherin E, Yin J, Forbrich A, White C, Dayton PA, Foster FS, Démoré CE (2019) In vitro superharmonic contrast imaging using a hybrid dual-frequency probe. Ultrasound Med Biol 45(9):2525–2539
Chollet F (2021) Deep learning with Python. Simon and Schuster
Christensen-Jeffries K, Browning RJ, Tang M-X, Dunsby C, Eckersley RJ (2014) In vivo acoustic super-resolution and super-resolved velocity mapping using microbubbles. IEEE Trans Med Imaging 34(2):433–440
Coupé P, Hellier P, Kervrann C, Barillot C (2009) Nonlocal means-based speckle filtering for ultrasound images. IEEE Trans Image Process 18(10):2221–2229
Dayton PA, Rychak JJ (2007) Molecular ultrasound imaging using microbubble contrast agents. Front Biosci Landmark 12(13):5124–5142
Desailly Y, Couture O, Fink M, Tanter M (2013) Sono-activated ultrasound localization microscopy. Appl Phys Lett 103(17):174107
Diamantis K, Anderson T, Butler MB, Villagómez-Hoyos CA, Jensen JA, Sboros V (2018) Resolving ultrasound contrast microbubbles using minimum variance beamforming. IEEE Trans Med Imaging 38(1):194–204
Fl E, Solberg S, Kvam J, Myhre OF, Brende OM, Angelsen B (2017) In vitro detection of microcalcifications using dual band ultrasound. In: 2017 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–4

354

7 Ongoing Research Areas in Ultrasound Beamforming

Foiret J, Zhang H, Ilovitsh T, Mahakian L, Tam S, Ferrara KW (2017) Ultrasound localization microscopy to image and assess microvasculature in a rat kidney. Sci Rep 7(1):1–12 Gao J, Xu L, Zou Q, Zhang B, Wang D, Wan M (2023) A progressively dual reconstruction network for plane wave beamforming with both paired and unpaired training data. Ultrasonics 127:106833 Goudarzi S, Rivaz H (2022) Deep reconstruction of high-quality ultrasound images from raw planewave data: A simulation and in vivo study. Ultrasonics 125:106778 Goudarzi S, Asif A, Rivaz H (2020) Ultrasound beamforming using mobilenetv2. In: 2020 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–4 Harput S, Christensen-Jeffries K, Brown J, Li Y, Williams KJ, Davies AH, Eckersley RJ, Dunsby C, Tang M-X (2018) Two-stage motion correction for super-resolution ultrasound imaging in human lower limb. IEEE Trans Ultrason Ferroelectr Freq Control 65(5):803–814 Hingot V, Errico C, Tanter M, Couture O (2017) Subwavelength motion-correction for ultrafast ultrasound localization microscopy. Ultrasonics 77:17–21 Huang S-W, Robert J-L, Radulescu E, Vignon F, Erkamp R (2014) Beamforming techniques for ultrasound microcalcification detection. In: 2014 IEEE international ultrasonics symposium. IEEE, pp 2193–2196 Huang X, Bell MAL, Ding K (2021) Deep learning for ultrasound beamforming in flexible array transducer. IEEE Trans Med Imaging 40(11):3178–3189 Hverven SM, Rindal OMH, Hunter AJ, Austeng A (2017) Point scatterer enhancement in ultrasound by wavelet coefficient shrinkage. In: 2017 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–4 Hyun D, Abou-Elkacem L, Perez VA, Chowdhury SM, Willmann JK, Dahl JJ (2017) Improved sensitivity in ultrasound molecular imaging with coherence-based beamforming. IEEE Trans Med Imaging 37(1):241–250 Jensen JA, Andersen SB, Hoyos CAV, Hansen KL, Sørensen CM, Nielsen MB (2019) Tissue motion estimation and correction in super resolution imaging. In: 2019 IEEE international ultrasonics symposium (IUS). IEEE, pp 1107–1110 Khan S, Huh J, Ye JC (2020) Adaptive and compressive beamforming using deep learning for medical ultrasound. IEEE Trans Ultrason Ferroelectr Freq Control 67(8):1558–1572 Kim Y-M, Kim M-G, Oh S-H, Jung G-I, Bae H-M (2021) Learning based approach for speedof-sound adaptive Rx beamforming. In: 2021 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–4 Kim GR, Kang J, Kwak JY, Chang JH, Kim SI, Youk JH, Moon HJ, Kim MJ, Kim E-K (2014) Photoacoustic imaging of breast microcalcifications: a preliminary study with 8-gauge corebiopsied breast specimens. PLoS ONE 9(8):e105878 Kim J, Lowerison MR, Sekaran NVC, Kou Z, Dong Z, Oelze ML, Llano DA, Song P (2022) Improved ultrasound localization microscopy based on microbubble uncoupling via transmit excitation. IEEE Trans Ultrason Ferroelectr Freq Control 69(3):1041–1052 Ko KH, Jung HK, Kim SJ, Kim H, Yoon JH (2014) Potential role of shear-wave ultrasound elastography for the differential diagnosis of breast non-mass lesions: preliminary report. Eur Radiol 24(2):305–311 Kumar V, Lee P-Y, Kim B-H, Fatemi M, Alizad A (2020) Gap-filling method for suppressing grating lobes in ultrasound imaging: Experimental study with deep-learning approach. IEEE Access 8:76276–76286 Kusunose J, Caskey CF (2018) Fast, low-frequency plane-wave imaging for ultrasound contrast imaging. Ultrasound Med Biol 44(10):2131–2142 Labyed Y, Huang L (2011) Detecting small targets using windowed time-reversal music imaging: A phantom study. 
In: 2011 IEEE international ultrasonics symposium. IEEE, pp 1579–1582 Labyed Y, Huang L (2013) Super-resolution ultrasound imaging using a phase-coherent music method with compensation for the phase response of transducer elements. IEEE Trans Ultrason Ferroelectr Freq Control 60(6):1048–1060 Lan Y, Zhang X (2020) Real-time ultrasound image despeckling using mixed-attention mechanism based residual unet. IEEE Access 8:195327–195340

References

355

Ledig C, Theis L, Huszár F, Caballero J, Cunningham A, Acosta A, Aitken A, Tejani A, Totz J, Wang Z, Shi W (2017) Photo-realistic single image super-resolution using a generative adversarial network. In: Proceedings of the IEEE conference on computer vision and pattern recognition. IEEE, pp 4681–4690 Lee S, Wolberg G, Shin SY (1997) Scattered data interpolation with multilevel B-splines. IEEE Trans Visual Comput Graphics 3(3):228–244 Lei Z, Gao S, Hasegawa H, Zhang Z, Zhou M, Sedraoui K (2023) Fully complex-valued gated recurrent neural network for ultrasound imaging. IEEE Trans Neural Netw Learn Syst Lindner JR (2004) Microbubbles in medical imaging: current applications and future directions. Nat Rev Drug Discov 3(6):527–533 Liu X, Zhou T, Lu M, Yang Y, He Q, Luo J (2020) Deep learning for ultrasound localization microscopy. IEEE Trans Med Imaging 39(10):3064–3078 Liu T, Xu M, Zhang Z, Dai C, Wang H, Zhang R, Shi L, Wu S (2019) Direct detection and measurement of nuchal translucency with neural networks from ultrasound images. In: Smart ultrasound imaging and perinatal, preterm and paediatric image analysis. Springer, pp 20–28 Li Z, Wiacek A, Bell MAL (2020) Beamforming with deep learning from single plane wave RF data. In: 2020 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–4 Lu J-Y, Lee P-Y, Huang C-C (2022) Improving image quality for single-angle plane wave ultrasound imaging with convolutional neural network beamformer. IEEE Trans Ultrason Ferroelectr Freq Control 69(4):1326–1336 Luchies AC, Byram BC (2018) Deep neural networks for ultrasound beamforming. IEEE Trans Med Imaging 37(9):2010–2021 Luijten B, Cohen R, de Bruijn FJ, Schmeitz HA, Mischi M, Eldar YC, van Sloun RJ (2020) Adaptive ultrasound beamforming using deep learning. IEEE Trans Med Imaging 39(12):3967–3978 Matrone G, Bell MAL, Ramalli A (2021) Spatial coherence beamforming with multi-line transmission to enhance the contrast of coherent structures in ultrasound images degraded by acoustic clutter. IEEE Trans Ultrason Ferroelectr Freq Control 68(12):3570–3582 Mei Y, Jin H, Yu B, Wu E, Yang K (2021) Visual geometry group-unet: deep learning ultrasonic image reconstruction for curved parts. J Acoust Soc Am 149(5):2997–3009 Mishra D, Chaudhury S, Sarkar M, Soin AS (2018) Ultrasound image enhancement using structure oriented adversarial network. IEEE Signal Process Lett 25(9):1349–1353 Nair AA, Tran TD, Reiter A, Bell MAL (2018) A deep learning based alternative to beamforming ultrasound images. In: 2018 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, pp 3359–3363 Nair AA, Tran TD, Reiter A, Bell MAL (2019) A generative adversarial neural network for beamforming ultrasound images: invited presentation. In: 2019 53rd Annual conference on information sciences and systems (CISS). IEEE, pp 1–6 Needles A, Couture O, Foster F (2009) A method for differentiating targeted microbubbles in real time using subharmonic micro-ultrasound and interframe filtering. Ultrasound Med Biol 35(9):1564–1573 Nehme E, Weiss LE, Michaeli T, Shechtman Y (2018) Deep-storm: super-resolution singlemolecule microscopy by deep learning. Optica 5(4):458–464 Nguon LS, Seo J, Seo K, Han Y, Park S (2022) Reconstruction for plane-wave ultrasound imaging using modified u-net-based beamformer. Comput Med Imaging Graph 98:102073 O’Reilly MA, Hynynen K (2013) A super-resolution ultrasound method for brain vascular mapping. 
Med Phys 40(11):110701 Ouyang Y, Zhou Z, Wu W, Tian J, Xu F, Wu S, Tsui P-H (2019) A review of ultrasound detection methods for breast microcalcification. Math Biosci Eng 16(4):1761–1785 Park AY, Seo BK, Cho KR, Woo OH (2016) The utility of MicroPure.T M ultrasound technique in assessing grouped microcalcifications without a mass on mammography. J Breast Cancer 19(1):83–86 Perdios D, Besson A, Arditi M, Thiran J-P (2017) A deep learning approach to ultrasound image recovery. In: 2017 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–4

356

7 Ongoing Research Areas in Ultrasound Beamforming

Peretz N, Feuer A (2020) Deep learning applied to beamforming in synthetic aperture ultrasound. arXiv:2011.10321 Qi Y, Guo Y, Wang Y (2020) Image quality enhancement using a deep neural network for plane wave medical ultrasound imaging. IEEE Trans Ultrason Ferroelectr Freq Control 68(4):926–934 Ren J (2018) From RF signals to b-mode images using deep learning Robert J-L, Burcher M, Cohen-Bacrie C, Fink M (2006) Time reversal operator decomposition with focused transmission and robustness to speckle noise: application to microcalcification detection. J Acoust Soc Am 119(6):3848–3859 Rothlübbers S, Strohm H, Eickel K, Jenne J, Kuhlen V, Sinden D, Günther M (2020) Improving image quality of single plane wave ultrasound via deep learning based channel compounding. In: 2020 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–4 Rueckert D, Sonoda LI, Hayes C, Hill DL, Leach MO, Hawkes DJ (1999) Nonrigid registration using free-form deformations: application to breast MR images. IEEE Trans Med Imaging 18(8):712– 721 Rychak JJ, Klibanov AL, Ley KF, Hossack JA (2007) Enhanced targeting of ultrasound contrast agents using acoustic radiation force. Ultrasound Med Biol 33(7):1132–1139 Sabuncu S, Javier Ramirez R, Fischer JM, Civitci F, Yildirim A (2023) Ultrafast background-free ultrasound imaging using blinking nanoparticles. Nano Lett Sandler M, Howard A, Zhu M, Zhmoginov A, Chen L-C (2018) Mobilenetv2: Inverted residuals and linear bottlenecks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4510–4520 Seo J, Nguon LS, Park S (2023) Vascular wall motion detection models based on long short-term memory in plane-wave-based ultrasound imaging. Phys Med Biology 68(7):075005 Shu J, Hyun D, Abou-Elkacem L, Willmann J, Dahl J (2018) Adaptive grayscale mapping to improve molecular ultrasound difference images. In: 2018 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–3 Simson W, Göbl R, Paschali M, Krönke M, Scheidhauer K, Weber W, Navab N (2019) End-to-end learning-based ultrasound reconstruction. arXiv:1904.04696 Song P, Trzasko JD, Manduca A, Huang R, Kadirvel R, Kallmes DF, Chen S (2017) Improved super-resolution ultrasound microvessel imaging with spatiotemporal nonlocal means filtering and bipartite graph-based microbubble tracking. IEEE Trans Ultrason Ferroelectr Freq Control 65(2):149–167 Szasz T, Basarab A, Kouamé D (2016) Strong reflector-based beamforming in ultrasound medical imaging. Ultrasonics 66:111–124 Taghavi I, Andersen SB, Hoyos CAV, Nielsen MB, Sørensen CM, Jensen JA (2021) In vivo motion correction in super-resolution imaging of rat kidneys. IEEE Trans Ultrason Ferroelectr Freq Control 68(10):3082–3093 Taki H, Sakamoto T, Yamakawa M, Shiina T, Sato T (2012) Small calcification depiction in ultrasonography using correlation technique for breast cancer screening. In: Acoustics 2012 Tang J, Zou B, Li C, Feng S, Peng H (2021) Plane-wave image reconstruction via generative adversarial network and attention mechanism. IEEE Trans Instrum Meas 70:1–15 Tasbaz R, Asl BM (2021a) Improvement of microbubbles localization using adaptive beamforming in super-resolution ultrasound imaging. In: 2021 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–4 Tasbaz R, Asl BM (2021b) Super-resolution ultrasound imaging with low number of frames enhanced by adaptive beamforming. In: 2021 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–4 Thon SH, Hansen RE, Austeng A (2021) Detection of point scatterers in medical ultrasound. 
IEEE Trans Ultrason Ferroelectr Freq Control Thon SH, Hansen RE, Austeng A (2022) Point detection in ultrasound using prewhitening and multilook optimization. IEEE Trans Ultrason Ferroelectr Freq Control Thon SH, Austeng A, Hansen RE (2023) Point detection in textured ultrasound images. Ultrasonics 131:106968

References

357

Tierney JE, Schlunk SG, Jones R, George M, Karve P, Duddu R, Byram BC, Hsi RS (2019) In vitro feasibility of next generation non-linear beamforming ultrasound methods to characterize and size kidney stones. Urolithiasis 47(2):181–188 Tom F, Sheet D (2018) Simulating patho-realistic ultrasound images using deep generative networks with adversarial learning. In: 2018 IEEE 15th international symposium on biomedical imaging (ISBI 2018). IEEE, pp 1174–1177 Van Sloun RJ, Solomon O, Bruce M, Khaing ZZ, Eldar YC, Mischi M (2019) Deep learning for super-resolution vascular ultrasound imaging. In: ICASSP 2019-2019 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, pp 1055–1059 Van Sloun RJ, Solomon O, Eldar YC, Wijkstra H, Mischi M (2017) Sparsity-driven super-resolution in clinical contrast-enhanced ultrasound. In: 2017 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–4 Vasile CM, Udri¸stoiu AL, Ghenea AE, Popescu M, Gheonea C, Niculescu CE, Ungureanu AM, ¸ Droca¸s AI, Gruionu LG, Gruionu G, Iacob AV, Alexandru DO (2021) Intelligent Udri¸stoiu S, diagnosis of thyroid ultrasound imaging using an ensemble of deep learning methods. Medicina 57(4):395 Viessmann O, Eckersley R, Christensen-Jeffries K, Tang M-X, Dunsby C (2013) Acoustic superresolution with ultrasound and microbubbles. Phys Med Biol 58(18):6447 Wang Z, Martin KH, Huang W, Dayton PA, Jiang X (2016) Contrast enhanced superharmonic imaging for acoustic angiography using reduced form-factor lateral mode transmitters for intravascular and intracavity applications. IEEE Trans Ultrason Ferroelectr Freq Control 64(2):311–319 Wasih M, Ahmad S, Almekkawy M (2023) A robust cascaded deep neural network for image reconstruction of single plane wave ultrasound rf data. Ultrasonics 132:106981 Wiacek A, González E, Bell MAL (2020) Coherenet: a deep learning architecture for ultrasound spatial correlation estimation and coherence-based beamforming. IEEE Trans Ultrason Ferroelectr Freq Control 67(12):2574–2583 Xavier A, Alarcón H, Espíndola D (2022) Characterization of direct localization algorithms for ultrasound super-resolution imaging in a multibubble environment: A numerical and experimental study. IEEE Access 10:49991–49999 Xiao D, Pitman WM, Yiu, BY, Chee AJ, Alfred C (2022) Minimizing image quality loss after channel count reduction for plane wave ultrasound via deep learning inference. IEEE Trans Ultrason Ferroelectr Freq Control Yang X, Yu L, Wu L, Wang Y, Ni D, Qin J, Heng P-A (2017) Fine-grained recurrent neural networks for automatic prostate segmentation in ultrasound images. In: Proceedings of the AAAI conference on artificial intelligence, vol 31 Yan J, Wang B, Riemer K, Hansen-Shearer J, Lerendegui M, Toulemonde M, Rowlands CJ, Weinberg PD, Tang M-X (2022a) 3D super-resolution ultrasound with adaptive weight-based beamforming. arXiv:2208.12176 Yan J, Zhang T, Broughton-Venner J, Huang P, Tang M-X (2022b) Super-resolution ultrasound through sparsity-based deconvolution and multi-feature tracking. IEEE Trans Med Imaging Yoon YH, Khan S, Huh J, Ye JC (2018) Efficient b-mode ultrasound image reconstruction from sub-sampled RF data using deep learning. IEEE Trans Med Imaging 38(2):325–336 Youn J, Ommen ML, Stuart MB, Thomsen EV, Larsen NB, Jensen JA (2020) Detection and localization of ultrasound scatterers using convolutional neural networks. 
IEEE Trans Med Imaging 39(12):3855–3867 Youn J, Ommen ML, Stuart MB, Thomsen EV, Larsen NB, Jensen JA (2019) Ultrasound multiple point target detection and localization using deep learning. In: 2019 IEEE international ultrasonics symposium (IUS). IEEE, pp 1937–1940 Yu Z, Tan E-L, Ni D, Qin J, Chen S, Li S, Lei B, Wang T (2017) A deep convolutional neural network-based framework for automatic fetal facial standard plane recognition. IEEE J Biomed Health Inf 22(3):874–885 Yu J, Lavery L, Kim K (2018) Super-resolution ultrasound imaging method for microvasculature in vivo with a high temporal accuracy. Sci Rep 8(1):1–11

358

7 Ongoing Research Areas in Ultrasound Beamforming

Zhang Q, Xiao Y, Dai W, Suo J, Wang C, Shi J, Zheng H (2016) Deep learning based classification of breast tumors with shear-wave elastography. Ultrasonics 72:150–157 Zhang X, Li J, He Q, Zhang H, Luo J (2018) High-quality reconstruction of plane-wave imaging using generative adversarial network. In: 2018 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–4 Zhang F, Luo L, Li J, Peng J, Zhang Y, Gao X (2023) Ultrasonic adaptive plane wave high-resolution imaging based on convolutional neural network. NDT & E International, p 102891 Zhou Z, Guo Y, Wang Y (2021) Ultrasound deep beamforming using a multiconstrained hybrid generative adversarial network. Med Image Anal 71:102086 Zhu J-Y, Park T, Isola P, Efros AA (2017) Unpaired image-to-image translation using cycleconsistent adversarial networks. In: Proceedings of the IEEE international conference on computer vision, pp 2223–2232 Zhuang R, Chen J (2019) Deep learning based minimum variance beamforming for ultrasound imaging. In: Smart ultrasound imaging and perinatal, preterm and paediatric image analysis. Springer, pp 83–91 Zuo H, Liu S, Peng B (2021)Image quality enhancement using an improved deep neural network for single plane wave beamforming. In: 2021 IEEE international conference on systems, man, and cybernetics (SMC). IEEE, pp 3030–3034