Contemporary Algorithms: Theory and Applications. Volume I
ISBN 1685079946, 9781685079949

This book provides different avenues to study algorithms. It also brings new techniques and methodologies to problem solving in computational sciences, engineering, scientific computing, and medicine.


Table of contents:
Mathematics Research Developments
Contemporary Algorithms: Theory and Applications
Contents
Glossary of Symbols
Preface
Chapter 1. Ball Convergence for High Order Methods
1. Introduction
2. Local Convergence Analysis
3. Numerical Examples
4. Conclusion
References
Chapter 2. Continuous Analogs of Newton-Type Methods
1. Introduction
2. Semi-local Convergence I
3. Semi-local Convergence II
4. Conclusion
References
Chapter 3. Initial Points for Newton's Method
1. Introduction
2. Semi-local Convergence Result
3. Main Result
4. On the Convergence Region
5. A Priori Error Bounds and Quadratic Convergence of Newton's Method
6. Local Convergence
7. Numerical Examples
8. Conclusion
References
Chapter 4. Seventh Order Methods
1. Introduction
2. Local Convergence Analysis
3. Numerical Example
4. Conclusion
References
Chapter 5. Third Order Schemes
1. Introduction
2. Ball Convergence
3. Numerical Examples
4. Conclusion
References
Chapter 6. Fifth and Sixth Order Methods
1. Introduction
2. Ball Convergence
3. Numerical Examples
4. Conclusion
References
Chapter 7. Sixth Order Methods
1. Introduction
2. Ball Convergence
3. Conclusion
References
Chapter 8. Extended Jarratt-Type Methods
1. Introduction
2. Convergence Analysis
3. Conclusion
References
Chapter 9. Multipoint Point Schemes
1. Introduction
2. Local Convergence
3. Numerical Examples
4. Conclusion
References
Chapter 10. Fourth Order Methods
1. Introduction
2. Convergence
3. Numerical Examples
4. Conclusion
References
Chapter 11. Inexact Newton Algorithm
1. Introduction
2. Convergence of NA
3. Numerical Examples
4. Conclusion
References
Chapter 12. Halley's Method
1. Introduction
2. Convergence of HA
3. Conclusion
References
Chapter 13. Newton's Algorithm for Singular Systems
1. Introduction
2. Convergence of NA
3. Conclusion
References
Chapter 14. Gauss-Newton Algorithm
1. Introduction
2. Semi-Local Convergence
3. Local Convergence
4. Conclusion
References
Chapter 15. Newton's Algorithm on Riemannian Manifolds
1. Introduction
2. Convergence
3. Conclusion
References
Chapter 16. Gauss-Newton-Kurchatov Algorithm for Least Squares Problems
1. Introduction
2. Convergence of GNKA
3. Conclusion
References
Chapter 17. Uniqueness of the Solution of Equations in Banach Space: I
1. Introduction
2. Convergence
3. Conclusion
References
Chapter 18. Uniqueness of the Solution of Equations in Banach Space: II
1. Introduction
2. Convergence
3. Conclusion
References
Chapter 19. Convergence of Newton's Algorithm for Sections on Riemannian Manifolds
1. Introduction
2. Convergence
3. Conclusion
References
Chapter 20. Newton Algorithm on Lie Groups: I
1. Introduction
2. Two Versions of NA
2.1. The Differential of the Map F
3. Conclusion
References
Chapter 21. Newton Algorithm on Lie Groups: II
1. Introduction
2. Convergence Criteria
3. Conclusion
References
Chapter 22. Two-Step Newton Method under L-Average Conditions
1. Introduction
2. Semi-Local Convergence of TSNM
3. Conclusion
References
Chapter 23. Unified Methods for Solving Equations
1. Introduction
2. Ball Convergence
3. Numerical Examples
4. Conclusion
References
Chapter 24. Eighth Convergence Order Derivative Free Method
1. Introduction
2. Ball Convergence
3. Conclusion
References
Chapter 25. m-Step Methods
1. Introduction
2. Local Convergence
3. Numerical Examples
4. Conclusion
References
Chapter 26. Third Order Schemes for Solving Equations
1. Introduction
2. Ball Convergence
3. Numerical Examples
4. Conclusion
References
Chapter 27. Deformed Newton Method for Solving Equations
1. Introduction
2. Local Convergence of Method (27.3)
3. Semi-local Convergence of Method (27.3)
4. Numerical Examples
5. Conclusion
References
Chapter 28. On the Newton-Kantorovich Theorem
1. Introduction
2. Convergence Analysis
3. Concluding Remarks and Applications
4. Conclusion
References
Chapter 29. Kantorovich-Type Extensions for Newton Method
1. Introduction
2. Semi-Local Convergence for Newton-Like Methods
3. Numerical Examples
4. Conclusion
References
Chapter 30. Improved Convergence for the King-Werner Method
1. Introduction
2. Convergence Analysis of King-Werner-Type Methods (30.2) and (30.3)
3. Numerical Examples
4. Conclusion
References
Chapter 31. Extending the Applicability of King-Werner-Type Methods
1. Introduction
2. Majorizing Sequences for King-Werner-Type Methods (31.3) and (31.4)
3. Convergence Analysis of King-Werner-Type Methods (31.3) and (31.4)
4. Numerical Examples
5. Conclusion
References
Chapter 32. Parametric Efficient Family of Iterative Methods
1. Introduction
2. Convergence Analysis of Method (32.2)
3. Numerical Examples
4. Conclusion
References
Chapter 33. Fourth Order Derivative Free Scheme with Three Parameters
1. Introduction
2. Convergence
3. Numerical Examples
4. Conclusion
References
Chapter 34. Jarratt-Type Methods
1. Introduction
2. Convergence Analysis
3. Conclusion
References
Chapter 35. Convergence Radius of an Efficient Iterative Method with Frozen Derivatives
1. Introduction
2. Convergence for Method (35.2)
3. Numerical Examples
4. Conclusion
References
Chapter 36. Efficient Sixth Convergence Order Methods under Generalized Continuity
1. Introduction
2. Local Convergence
3. Numerical Examples
4. Conclusion
References
Chapter 37. Fifth Order Methods under Generalized Conditions
1. Introduction
2. Local Analysis
3. Numerical Examples
4. Conclusion
References
Chapter 38. Two Fourth Order Solvers for Nonlinear Equations
1. Introduction
2. Convergence
3. Numerical Examples
4. Conclusion
References
Chapter 39. Kou's Family of Schemes
1. Introduction
2. Local Analysis
3. Numerical Examples
4. Conclusion
References
Chapter 40. Multi-Step Steffensen-Like Methods
1. Introduction
2. Semi-Local Convergence
3. Conclusion
References
Chapter 41. Newton-Like Scheme for Solving Inclusion Problems
1. Introduction
2. Semi-Local Convergence
3. Conclusion
References
Chapter 42. Extension of Newton-Secant-Like Method
1. Introduction
2. Majorizing Sequences
3. Convergence for Method (42.2)
4. Conclusion
References
Chapter 43. Inexact Newton-Like Method for Inclusion Problems
1. Introduction
2. Convergence of INLM
3. Conclusion
References
Chapter 44. Semi-Smooth Newton-Type Algorithms for Solving Variational Inclusion Problems
1. Introduction
2. Preliminaries
3. Convergence
4. Conclusion
References
Chapter 45. Extended Inexact Newton-Like Algorithm under Kantorovich Convergence Criteria
1. Introduction
2. Convergence
3. Conclusion
References
Chapter 46. Kantorovich-Type Results Using Newton's Algorithms for Generalized Equations
1. Introduction
2. Convergence
3. Conclusion
References
Chapter 47. Developments of Newton's Method under Hölder Conditions
1. Introduction
2. Convergence
3. Conclusion
References
Chapter 48. Ham-Chun Fifth Convergence Order Solver
1. Introduction
2. Ball Convergence
3. Numerical Examples
4. Conclusion
References
Chapter 49. A Novel Method Free from Derivatives of Convergence Order
1. Introduction
2. Convergence
3. Example
4. Conclusion
References
Chapter 50. Newton-Kantorovich Scheme for Solving Generalized Equations
1. Introduction
2. Background
3. Convergence Analysis
4. Conclusion
References
About the Authors
Christopher I. Argyros
Samundra Regmi
Ioannis K. Argyros
Dr. Santhosh George
Index


Mathematics Research Developments

No part of this digital document may be reproduced, stored in a retrieval system or transmitted in any form or by any means. The publisher has taken reasonable care in the preparation of this digital document, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained herein. This digital document is sold with the clear understanding that the publisher is not engaged in rendering legal, medical or any other professional services.

Mathematics Research Developments

Non-Euclidean Geometry in Materials of Living and Non-Living Matter in the Space of the Highest Dimension
Gennadiy Zhizhin (Author)
2022. ISBN: 978-1-68507-885-0 (Hardcover); 979-8-88697-064-7 (eBook)

Frontiers in Mathematical Modelling Research
M. Haider Ali Biswas and M. Humayun Kabir (Editors)
2022. ISBN: 978-1-68507-430-2 (Hardcover); 978-1-68507-845-4 (eBook)

Mathematical Modeling of the Learning Curve and Its Practical Applications
Charles Ira Abramson and Igor Stepanov (Authors)
2022. ISBN: 978-1-68507-737-2 (Hardcover); 978-1-68507-851-5 (eBook)

Partial Differential Equations: Theory, Numerical Methods and Ill-Posed Problems
Michael V. Klibanov and Jingzhi Li (Authors)
2022. ISBN: 978-1-68507-592-7 (Hardcover); 978-1-68507-727-3 (eBook)

Outliers: Detection and Analysis
Apra Lipi, Kishan Kumar, and Soubhik Chakraborty (Authors)
2022. ISBN: 978-1-68507-554-5 (Softcover); 978-1-68507-587-3 (eBook)

More information about this series can be found at https://novapublishers.com/productcategory/series/mathematics-research-developments/

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros and Santhosh George

Contemporary Algorithms: Theory and Applications. Volume I

Copyright © 2022 by Nova Science Publishers, Inc. https://doi.org/10.52305/IHML8594 All rights reserved. No part of this book may be reproduced, stored in a retrieval system or transmitted in any form or by any means: electronic, electrostatic, magnetic, tape, mechanical photocopying, recording or otherwise without the written permission of the Publisher. We have partnered with Copyright Clearance Center to make it easy for you to obtain permissions to reuse content from this publication. Simply navigate to this publication’s page on Nova’s website and locate the “Get Permission” button below the title description. This button is linked directly to the title’s permission page on copyright.com. Alternatively, you can visit copyright.com and search by title, ISBN, or ISSN. For further questions about using the service on copyright.com, please contact: Copyright Clearance Center Phone: +1-(978) 750-8400 Fax: +1-(978) 750-4470 E-mail: [email protected].

NOTICE TO THE READER The Publisher has taken reasonable care in the preparation of this book, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained in this book. The Publisher shall not be liable for any special, consequential, or exemplary damages resulting, in whole or in part, from the readers’ use of, or reliance upon, this material. Any parts of this book based on government reports are so indicated and copyright is claimed for those parts to the extent applicable to compilations of such works. Independent verification should be sought for any data, advice or recommendations contained in this book. In addition, no responsibility is assumed by the Publisher for any injury and/or damage to persons or property arising from any methods, products, instructions, ideas or otherwise contained in this publication. This publication is designed to provide accurate and authoritative information with regard to the subject matter covered herein. It is sold with the clear understanding that the Publisher is not engaged in rendering legal or any other professional services. If legal or any other expert assistance is required, the services of a competent person should be sought. FROM A DECLARATION OF PARTICIPANTS JOINTLY ADOPTED BY A COMMITTEE OF THE AMERICAN BAR ASSOCIATION AND A COMMITTEE OF PUBLISHERS. Additional color graphics may be available in the e-book version of this book.

Library of Congress Cataloging-in-Publication Data

ISBN: 978-1-68507-994-9 (eBook)

Published by Nova Science Publishers, Inc., New York

The first author dedicates this book to his beloved grandparents Jolanda, Mihallaq, Anastasia and Konstantinos.

The second author dedicates this book to his mother Madhu Kumari Regmi and father Moti Ram Regmi.

The third author dedicates this book to his wonderful children Christopher, Gus, Michael, and lovely wife Diana.

The fourth author dedicates this book to his lovely wife Rose.


Glossary of Symbols

et al.              et alii (and others)
etc.                et cetera
i.e.                id est (that is)
iff                 if and only if
e.g.                exempli gratia (for example)
w.r.t.              with respect to
resp.               respectively
≠                   non-equality
∅                   empty set
∈, ∉                belongs to, does not belong to
⇒                   implication
⇔                   if and only if
max                 maximum
min                 minimum
sup                 supremum (least upper bound)
inf                 infimum (greatest lower bound)
for all n           for all n ∈ N
R^n                 real n-dimensional space
C^n                 complex n-dimensional space
X × Y, X × X = X²   Cartesian product space of X and Y
e_1, ..., e_n       the coordinate vectors of R^n
x = (x_1, ..., x_n)^T   column vector with components x_i
x^T                 the transpose of x
{x_n}_{n≥0}         sequence of points from X
‖·‖                 norm on X
‖·‖_p               L^p norm
|·|                 absolute value symbol
/·/                 norm symbol of a generalized Banach space X
U(x_0, R)           open ball {z ∈ X : ‖x_0 − z‖ < R}
Ū(x_0, R)           closed ball {z ∈ X : ‖x_0 − z‖ ≤ R}
U(R) = U(0, R)      ball centered at the zero element of X and of radius R
U, Ū                open, closed balls, respectively, with no particular reference to X, x_0 or R
I                   identity matrix/operator
L                   linear operator
L^{-1}              inverse linear operator
M = {m_ij}          matrix, 1 ≤ i, j ≤ n
M^{-1}              inverse of M
det M or |M|        determinant of M
∑                   summation symbol
∏                   product of factors symbol
∫                   integration symbol
∈                   element inclusion
⊂, ⊆                strict and non-strict set inclusion
∀                   for all
∪, ∩                union, intersection
A − B               difference between sets A and B
F : D ⊆ X → Y       an operator with domain D included in X and values in Y
F′(x), F″(x)        first, second Fréchet derivatives of F evaluated at x

Preface

The book provides different avenues to study algorithms. It also brings new techniques and methodologies to problem solving in computational sciences, engineering, scientific computing, and medicine (imaging, radiation therapy), to mention a few. A plethora of universally applicable algorithms is presented in a sound analytical way. The chapters are written independently of each other, so they can be understood without reading earlier chapters, although some knowledge of analysis, linear algebra, and some computing experience are required. The organization and content of the book cater to senior undergraduate and graduate students, researchers, practitioners, professionals, and academicians in the aforementioned disciplines. It can also be used as a reference book and includes numerous references and open problems.

Chapter 1

Ball Convergence for High Order Methods

1. Introduction

In this chapter, the problem of approximating a locally unique solution $x^*$ of the equation
$$F(x) = 0, \tag{1.1}$$
is analyzed, where $F : D \subseteq X \rightarrow Y$ is a Fréchet-differentiable operator, $X, Y$ are Banach spaces and $D$ is a convex subset of $X$. Newton-like methods are widely used for finding solutions of (1.1). These methods are usually studied based on semi-local and local convergence. The semi-local convergence analysis uses the information around an initial point to give conditions ensuring the convergence of the iterative procedure, while the local one uses the information around a solution to find estimates of the radii of the convergence balls [5, 6, 25, 32]. Third-order methods such as Euler's, Halley's, super-Halley's, and Chebyshev's [1]-[38] require the evaluation of the second derivative $F''$ at each step, which in general is very expensive. That is why many authors have used higher-order multi-point methods [1]-[38]. In this chapter, we present the local convergence analysis of some methods defined for each $n = 0, 1, 2, \cdots$ by
$$\begin{aligned}
y_n &= x_n - F'(x_n)^{-1}F(x_n),\\
z_n &= x_n - 2\,(F'(y_n) + F'(x_n))^{-1}F(x_n),\\
x_{n+1} &= z_n - F'(y_n)^{-1}F(z_n),
\end{aligned} \tag{1.2}$$
$$\begin{aligned}
y_n &= x_n - \frac{2}{3}F'(x_n)^{-1}F(x_n),\\
z_n &= x_n - \frac{1}{2}(3F'(y_n) - F'(x_n))^{-1}(3F'(y_n) + F'(x_n))F'(x_n)^{-1}F(x_n),\\
x_{n+1} &= z_n - 2\,(3F'(y_n) - F'(x_n))^{-1}F(z_n),
\end{aligned} \tag{1.3}$$
and
$$\begin{aligned}
y_n &= x_n - F'(x_n)^{-1}F(x_n),\\
z_n &= y_n + F'(x_n)^{-1}(F'(x_n) - F'(y_n))(F'(x_n) - 3F'(y_n))^{-1}F(x_n),\\
x_{n+1} &= z_n + F'(x_n)^{-1}(F'(x_n) + F'(y_n))(F'(x_n) - 3F'(y_n))^{-1}F(z_n),
\end{aligned} \tag{1.4}$$

where $x_0$ is an initial point. These methods were studied in [13, 14, 16], respectively, in the special case when $X = Y = \mathbb{R}$. The convergence orders of these methods are five, six, and six, respectively. These methods require two function evaluations, two derivative evaluations, and two inverses of derivatives at each step. The convergence of these methods was given under hypotheses reaching up to the sixth derivative of the operator $F$. Therefore, these hypotheses limit the applicability of these methods, although only the first derivative appears in them.

As a motivational example, let us define the function $f$ on $D = [-\frac{1}{2}, \frac{5}{2}]$ by
$$f(x) = \begin{cases} x^3 \ln x^2 + x^5 - x^4, & x \neq 0,\\ 0, & x = 0. \end{cases}$$
Choose $x^* = 1$. We have that
$$f'(x) = 3x^2 \ln x^2 + 5x^4 - 4x^3 + 2x^2, \quad f'(1) = 3,$$
$$f''(x) = 6x \ln x^2 + 20x^3 - 12x^2 + 10x,$$
$$f'''(x) = 6 \ln x^2 + 60x^2 - 24x + 22.$$
Then, the function $f'''$ is unbounded on $D$ (the term $6\ln x^2$ tends to $-\infty$ as $x \to 0$). Notice that, in particular, there is a plethora of iterative methods for approximating solutions of nonlinear equations defined on $\mathbb{R}$ or $\mathbb{C}$ [1]-[33]. These results show that if the initial point $x_0$ is sufficiently close to the solution $x^*$, then the sequence $\{x_n\}$ converges to $x^*$. But how close to the solution $x^*$ should the initial guess $x_0$ be? These local results give no information on the radius of the convergence ball for the corresponding method. We address this question for method (1.2) in Section 2; the same technique can be used for the other methods. In the present chapter, we only use hypotheses on the first Fréchet derivative; this way we expand the applicability of these methods. We present a local convergence analysis for these methods under unified conditions, using only the first Fréchet derivative of the function $F$. Let $U(v, \rho)$, $\bar{U}(v, \rho)$ stand, respectively, for the open and closed balls in $X$ with center $v \in X$ and of radius $\rho > 0$.

The common set of conditions is given by $(\mathcal{C})$:

$(\mathcal{C}_1)$ $F : D \subset X \rightarrow Y$ is a Fréchet-differentiable operator; there exist
$(\mathcal{C}_2)$ $x^* \in D$ such that $F(x^*) = 0$ and $F'(x^*)^{-1} \in L(Y, X)$;
$(\mathcal{C}_3)$ $L_0 > 0$ such that for each $x \in D$, $\|F'(x^*)^{-1}(F'(x) - F'(x^*))\| \le L_0\|x - x^*\|$;
$(\mathcal{C}_4)$ $L > 0$ such that for each $x, y \in D$, $\|F'(x^*)^{-1}(F'(x) - F'(y))\| \le L\|x - y\|$;
$(\mathcal{C}_5)$ $M \ge 1$ such that for each $x \in D$, $\|F'(x^*)^{-1}F'(x)\| \le M$; and
$(\mathcal{C}_6)$ $\bar{U}(x^*, r) \subseteq D$, for some $r > 0$ which may change from method to method.

The same set of conditions $(\mathcal{C})$ can be used for other methods [1]-[38]. The rest of the chapter is organized as follows: the local convergence of these methods is given in Section 2, whereas the numerical examples are given in the concluding Section 3.
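To make the iterations above concrete, here is a minimal NumPy sketch (ours, not from the book) of method (1.2) for a system $F : \mathbb{R}^m \to \mathbb{R}^m$; the function name `method_1_2`, the tolerance, and the starting point are illustrative assumptions, and methods (1.3) and (1.4) follow the same pattern with their own substeps.

```python
import numpy as np

def method_1_2(F, dF, x0, tol=1e-12, max_iter=50):
    """Sketch of method (1.2): two derivative evaluations and two
    linear solves (inverses) per full step, as described above."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Jx = dF(x)
        y = x - np.linalg.solve(Jx, Fx)             # y_n = x_n - F'(x_n)^{-1} F(x_n)
        Jy = dF(y)
        z = x - 2.0 * np.linalg.solve(Jy + Jx, Fx)  # z_n = x_n - 2 (F'(y_n) + F'(x_n))^{-1} F(x_n)
        x = z - np.linalg.solve(Jy, F(z))           # x_{n+1} = z_n - F'(y_n)^{-1} F(z_n)
    return x

# Illustrative run on Example 1 of Section 3, where x* = (0, 0, 0)^T:
F = lambda w: np.array([np.exp(w[0]) - 1.0,
                        (np.e - 1.0) / 2.0 * w[1] ** 2 + w[1],
                        w[2]])
dF = lambda w: np.diag([np.exp(w[0]), (np.e - 1.0) * w[1] + 1.0, 1.0])
print(method_1_2(F, dF, [0.1, 0.1, 0.1]))
```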

2. Local Convergence Analysis

We present the local convergence analysis of methods (1.4), (1.3) and (1.2), respectively, in this section. It is convenient for the local convergence analysis that follows to define some scalar functions and parameters. First, we define the radius of the convergence ball for method (1.4). Define the function $g_1$ on the interval $[0, \frac{1}{L_0})$ by
$$g_1(t) = \frac{Lt}{2(1 - L_0 t)}$$
and the parameter
$$r_1 = \frac{2}{2L_0 + L}.$$
Then, we have that $g_1(r_1) = 1$ and $0 \le g_1(t) < 1$ for each $t \in [0, r_1)$. Define functions $p$ and $h_p$ on the interval $[0, \frac{1}{L_0})$ by
$$p(t) = \frac{L_0(1 + 3g_1(t))t}{2}$$
and
$$h_p(t) = p(t) - 1.$$
We get that $h_p(0) = -1 < 0$ and $h_p(t) \to \infty$ as $t \to \frac{1}{L_0}^-$. It follows from the intermediate value theorem that the function $h_p$ has zeros in the interval $(0, \frac{1}{L_0})$. Denote by $r_p$ the smallest such zero. Moreover, define functions $g_2$ and $h_2$ on the interval $[0, r_p)$ by
$$g_2(t) = g_1(t) + \frac{L_0 M(1 + g_1(t))t}{2(1 - L_0 t)(1 - p(t))}$$
and
$$h_2(t) = g_2(t) - 1.$$
Then, we have that $h_2(0) = -1 < 0$ and $h_2(t) \to \infty$ as $t \to r_p^-$. It follows that the function $h_2$ has a smallest zero in the interval $(0, r_p)$, denoted by $r_2$. Finally, define functions $g_3$ and $h_3$ on the interval $[0, r_p)$ by
$$g_3(t) = \left(1 + \frac{M(2 + (1 + g_1(t))t)}{2(1 - L_0 t)(1 - p(t))}\right)g_2(t)$$
and
$$h_3(t) = g_3(t) - 1.$$
We obtain that $h_3(0) = -1 < 0$ and $h_3(t) \to \infty$ as $t \to r_p^-$. It follows that the function $h_3$ has a smallest zero in the interval $(0, r_p)$, denoted by $r_3$. Notice that
$$h_3(r_2) = \frac{M(2 + (1 + g_1(r_2))r_2)}{2(1 - L_0 r_2)(1 - p(r_2))}\,g_2(r_2) > 0,$$
since $1 - L_0 r_2 > 0$ and $1 - p(r_2) > 0$. Hence, we conclude that $r_3 < r_2$. Set
$$r = \min\{r_1, r_2, r_3\}. \tag{1.5}$$
Then, we have that for each $t \in [0, r)$
$$0 \le g_1(t) < 1, \tag{1.6}$$
$$0 \le g_2(t) < 1, \tag{1.7}$$
and
$$0 \le g_3(t) < 1. \tag{1.8}$$
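The radii above are easy to evaluate numerically. The following hedged Python sketch (our own helper names `bisect_root` and `radii_method_1_4`, not the book's) computes $r_1$ in closed form and approximates $r_p$, $r_2$, $r_3$ for method (1.4) by bisection on $h_p$, $h_2$, $h_3$, which are negative at $0$ and tend to $+\infty$ at the right ends of their intervals.

```python
def bisect_root(h, lo, hi, iters=200):
    # Assumes h(lo) < 0 < h(hi); locates the sign change by bisection.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def radii_method_1_4(L0, L, M):
    """Compute r1, rp, r2, r3 and r = min{r1, r2, r3} for method (1.4)."""
    g1 = lambda t: L * t / (2.0 * (1.0 - L0 * t))
    r1 = 2.0 / (2.0 * L0 + L)
    p = lambda t: L0 * (1.0 + 3.0 * g1(t)) * t / 2.0
    rp = bisect_root(lambda t: p(t) - 1.0, 0.0, (1.0 - 1e-12) / L0)
    g2 = lambda t: g1(t) + L0 * M * (1.0 + g1(t)) * t / (
        2.0 * (1.0 - L0 * t) * (1.0 - p(t)))
    r2 = bisect_root(lambda t: g2(t) - 1.0, 0.0, rp * (1.0 - 1e-12))
    g3 = lambda t: (1.0 + M * (2.0 + (1.0 + g1(t)) * t) / (
        2.0 * (1.0 - L0 * t) * (1.0 - p(t)))) * g2(t)
    r3 = bisect_root(lambda t: g3(t) - 1.0, 0.0, rp * (1.0 - 1e-12))
    return r1, rp, r2, r3, min(r1, r2, r3)
```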

Next, we show the local convergence result for method (1.4) under the $(\mathcal{C})$ conditions and using the preceding notation.

Theorem 1. Suppose that the conditions $(\mathcal{C})$ hold with $r$ given by (1.5). Then, the sequence $\{x_n\}$ generated for $x_0 \in U(x^*, r) - \{x^*\}$ by method (1.4) is well defined, remains in $U(x^*, r)$ for each $n = 0, 1, 2, \cdots$ and converges to $x^*$. Moreover, the following estimates hold:
$$\|y_n - x^*\| \le g_1(\|x_n - x^*\|)\|x_n - x^*\| < \|x_n - x^*\| < r, \tag{1.9}$$
$$\|z_n - x^*\| \le g_2(\|x_n - x^*\|)\|x_n - x^*\| < \|x_n - x^*\|, \tag{1.10}$$
and
$$\|x_{n+1} - x^*\| \le g_3(\|x_n - x^*\|)\|x_n - x^*\| < \|x_n - x^*\|, \tag{1.11}$$
where the "$g$" functions are defined before Theorem 1. Furthermore, if there exists $T \in [r, 2/L_0)$ such that $\bar{U}(x^*, T) \subseteq D$, then $x^*$ is the only solution of equation $F(x) = 0$ in $\bar{U}(x^*, T)$.

Proof. We shall show estimates (1.9)-(1.11) by using mathematical induction. Using the hypothesis $x_0 \in U(x^*, r) - \{x^*\}$, $(\mathcal{C}_3)$ and (1.5), we have that
$$\|F'(x^*)^{-1}(F'(x_0) - F'(x^*))\| \le L_0\|x_0 - x^*\| < L_0 r < 1. \tag{1.12}$$
It follows from (1.12) and the Banach lemma on invertible operators [5, 6, 25] that $F'(x_0)^{-1} \in L(Y, X)$ and
$$\|F'(x_0)^{-1}F'(x^*)\| \le \frac{1}{1 - L_0\|x_0 - x^*\|}. \tag{1.13}$$
Hence, $y_0$ is well defined by the first substep of method (1.4) for $n = 0$. Using $(\mathcal{C}_2)$, (1.6) and (1.13), we get
$$\begin{aligned}
\|y_0 - x^*\| &\le \|x_0 - x^* - F'(x_0)^{-1}F(x_0)\|\\
&\le \|F'(x_0)^{-1}F'(x^*)\| \left\|\int_0^1 F'(x^*)^{-1}\big(F'(x^* + \theta(x_0 - x^*)) - F'(x_0)\big)(x_0 - x^*)\,d\theta\right\|\\
&\le \frac{L\|x_0 - x^*\|^2}{2(1 - L_0\|x_0 - x^*\|)}\\
&= g_1(\|x_0 - x^*\|)\|x_0 - x^*\| < \|x_0 - x^*\| < r,
\end{aligned} \tag{1.14}$$
which shows (1.9) for $n = 0$ and $y_0 \in U(x^*, r)$. We shall show that $(F'(x_0) - 3F'(y_0))^{-1} \in L(Y, X)$. In view of $(\mathcal{C}_2)$, (1.5) and (1.14), we get in turn
$$\begin{aligned}
\|(-2F'(x^*))^{-1}[F'(x_0) - F'(x^*) + 3(F'(x^*) - F'(y_0))]\|
&\le \frac{1}{2}\big[\|F'(x^*)^{-1}(F'(x_0) - F'(x^*))\| + 3\|F'(x^*)^{-1}(F'(y_0) - F'(x^*))\|\big]\\
&\le \frac{L_0}{2}\big(\|x_0 - x^*\| + 3\|y_0 - x^*\|\big)\\
&\le \frac{L_0}{2}\big(1 + 3g_1(\|x_0 - x^*\|)\big)\|x_0 - x^*\|\\
&= p(\|x_0 - x^*\|) < 1.
\end{aligned} \tag{1.15}$$
It follows from (1.15) that $(F'(x_0) - 3F'(y_0))^{-1} \in L(Y, X)$ and
$$\|(F'(x_0) - 3F'(y_0))^{-1}F'(x^*)\| \le \frac{1}{2(1 - p(\|x_0 - x^*\|))}. \tag{1.16}$$
Hence, $z_0$ and $x_1$ are well defined. We can write by $(\mathcal{C}_1)$ and $(\mathcal{C}_2)$ that
$$F(x_0) = F(x_0) - F(x^*) = \int_0^1 F'(x^* + \theta(x_0 - x^*))(x_0 - x^*)\,d\theta. \tag{1.17}$$
Notice that $\|x^* + \theta(x_0 - x^*) - x^*\| = \theta\|x_0 - x^*\| \le \|x_0 - x^*\| < r$, that is, $x^* + \theta(x_0 - x^*) \in U(x^*, r)$. Using (1.17) and $(\mathcal{C}_5)$, we get that
$$\|F'(x^*)^{-1}F(x_0)\| = \left\|\int_0^1 F'(x^*)^{-1}F'(x^* + \theta(x_0 - x^*))(x_0 - x^*)\,d\theta\right\| \le M\|x_0 - x^*\|. \tag{1.18}$$
It follows from the second substep of method (1.4) for $n = 0$, (1.5), (1.6), (1.13), (1.14), (1.16) and (1.18) that
$$\begin{aligned}
\|z_0 - x^*\| &\le \|y_0 - x^*\| + \|F'(x_0)^{-1}F'(x^*)\|\,\|F'(x^*)^{-1}(F'(x_0) - F'(y_0))\|\\
&\qquad\times \|(F'(x_0) - 3F'(y_0))^{-1}F'(x^*)\|\,\|F'(x^*)^{-1}F(x_0)\|\\
&\le g_1(\|x_0 - x^*\|)\|x_0 - x^*\| + \frac{ML_0(1 + g_1(\|x_0 - x^*\|))\|x_0 - x^*\|^2}{2(1 - L_0\|x_0 - x^*\|)(1 - p(\|x_0 - x^*\|))}\\
&= g_2(\|x_0 - x^*\|)\|x_0 - x^*\| < \|x_0 - x^*\| < r,
\end{aligned} \tag{1.19}$$
which shows (1.10) for $n = 0$ and $z_0 \in U(x^*, r)$. Then, using (1.5), (1.8), (1.13), (1.18) (for $x_0 = z_0$) and (1.19), we obtain that
$$\begin{aligned}
\|x_1 - x^*\| &\le \|z_0 - x^*\| + \|F'(x_0)^{-1}F'(x^*)\|\big(\|F'(x^*)^{-1}(F'(x_0) - F'(x^*))\|\\
&\qquad + \|F'(x^*)^{-1}(F'(y_0) - F'(x^*))\|\big)\,\|(F'(x_0) - 3F'(y_0))^{-1}F'(x^*)\|\,\|F'(x^*)^{-1}F(z_0)\|\\
&\le \|z_0 - x^*\| + \frac{[2 + (1 + g_1(\|x_0 - x^*\|))\|x_0 - x^*\|]\,M\|z_0 - x^*\|}{2(1 - L_0\|x_0 - x^*\|)(1 - p(\|x_0 - x^*\|))}\\
&= \left[1 + \frac{M[2 + (1 + g_1(\|x_0 - x^*\|))\|x_0 - x^*\|]}{2(1 - L_0\|x_0 - x^*\|)(1 - p(\|x_0 - x^*\|))}\right]\|z_0 - x^*\|\\
&\le \left[1 + \frac{M[2 + (1 + g_1(\|x_0 - x^*\|))\|x_0 - x^*\|]}{2(1 - L_0\|x_0 - x^*\|)(1 - p(\|x_0 - x^*\|))}\right]g_2(\|x_0 - x^*\|)\|x_0 - x^*\|\\
&= g_3(\|x_0 - x^*\|)\|x_0 - x^*\| < \|x_0 - x^*\| < r,
\end{aligned} \tag{1.20}$$
which shows (1.11) for $n = 0$ and $x_1 \in U(x^*, r)$. Hence, by simply replacing $x_0, y_0, z_0, x_1$ by $x_k, y_k, z_k, x_{k+1}$ in the preceding estimates, we arrive at estimates (1.9)-(1.11). Using the estimate $\|x_{k+1} - x^*\| < \|x_k - x^*\| < r$, we deduce that $x_{k+1} \in U(x^*, r)$ and $\lim_{k \to \infty} x_k = x^*$. To show the uniqueness part, let $Q = \int_0^1 F'(y^* + \theta(x^* - y^*))\,d\theta$ for some $y^* \in \bar{U}(x^*, T)$ with $F(y^*) = 0$. Using $(\mathcal{C}_3)$, we get that
$$\|F'(x^*)^{-1}(Q - F'(x^*))\| \le \int_0^1 L_0\|y^* + \theta(x^* - y^*) - x^*\|\,d\theta \le L_0\int_0^1 (1 - \theta)\|x^* - y^*\|\,d\theta \le \frac{L_0 T}{2} < 1. \tag{1.21}$$

It follows from (1.21) and the Banach lemma on invertible operators that $Q$ is invertible. Finally, from the identity $0 = F(x^*) - F(y^*) = Q(x^* - y^*)$, we deduce that $x^* = y^*$.

The conclusions of Theorem 1 hold for method (1.3) and method (1.2), respectively, if we define $r$ (i.e., the functions $g_1, g_2, g_3$, etc.) in a suitable way. For method (1.3) we have that
$$g_1(t) = \frac{1}{2(1 - L_0 t)}\left(Lt + \frac{2M}{3}\right),\ t \in [0, \tfrac{1}{L_0}),$$
$$r_1 = \frac{2(1 - \frac{M}{3})}{2L_0 + L} \quad \text{for } M < 3,$$
$$p(t) = \frac{L_0}{2}(1 + 3g_1(t))t, \quad h_p(t) = p(t) - 1,\ t \in [0, \tfrac{1}{L_0}),$$
$$g_2(t) = \frac{1}{2(1 - L_0 t)}\left(L + \frac{3ML_0(1 + g_1(t))}{2(1 - L_0 t)}\right)t, \quad h_2(t) = g_2(t) - 1,\ t \in [0, r_p),$$
$$g_3(t) = \left(1 + \frac{M}{1 - p(t)}\right)g_2(t),\ t \in [0, r_p),$$
and
$$h_3(t) = g_3(t) - 1.$$
We have again that $r_2 < r_p$, $r_3 < r_p$ and $r_3 < r_2$. Set
$$r = \min\{r_1, r_3\}. \tag{1.22}$$

Then, we have the following local convergence result for method (1.3) under the conditions $(\mathcal{C})$ and using the preceding notation.

Theorem 2. Suppose that the conditions $(\mathcal{C})$ hold for $r$ given by (1.22) and $M \in [1, 3)$. Then, the conclusions of Theorem 1 hold, but with method (1.3) replacing method (1.4) and using the "$g$" functions as defined above Theorem 2.

Finally, for method (1.2) we define
$$g_1(t) = \frac{Lt}{2(1 - L_0 t)},\ t \in [0, \tfrac{1}{L_0}), \quad r_1 = \frac{2}{2L_0 + L},$$
$$p(t) = L_0(1 + g_1(t))t, \quad h_p(t) = p(t) - 1,\ t \in [0, \tfrac{1}{L_0}),$$
$$\bar{p}(t) = L_0 g_1(t)t, \quad h_{\bar{p}}(t) = \bar{p}(t) - 1,\ t \in [0, \tfrac{1}{L_0}),$$
$$g_2(t) = \frac{1}{2(1 - L_0 t)}\big(L + ML_0(1 + g_1(t))\big)t, \quad h_2(t) = g_2(t) - 1,\ t \in [0, r_p),$$
$$g_3(t) = \left(1 + \frac{M}{1 - \bar{p}(t)}\right)g_2(t),\ t \in [0, r_p),$$
and
$$h_3(t) = g_3(t) - 1.$$
We have that $r_3 < r_2$ and $r_p < r_{\bar{p}}$. Set
$$r = \min\{r_1, r_3\}. \tag{1.23}$$
Then, we have the following local convergence result for method (1.2) under the conditions $(\mathcal{C})$ and using the preceding notation.

Theorem 3. Suppose that the conditions $(\mathcal{C})$ hold for $r$ given by (1.23). Then, the conclusions of Theorem 1 hold, but with method (1.2) replacing method (1.4) and using the "$g$" functions as defined above Theorem 3.

Remark 1.

1. In view of $(\mathcal{C}_3)$ and the estimate
$$\|F'(x^*)^{-1}F'(x)\| = \|F'(x^*)^{-1}(F'(x) - F'(x^*)) + I\| \le 1 + \|F'(x^*)^{-1}(F'(x) - F'(x^*))\| \le 1 + L_0\|x - x^*\|,$$
condition $(\mathcal{C}_5)$ can be dropped and $M$ can be replaced by $M(t) = 1 + L_0 t$.

2. The results obtained here can be used for operators $F$ satisfying autonomous differential equations [3] of the form $F'(x) = P(F(x))$, where $P$ is a continuous operator. Then, since $F'(x^*) = P(F(x^*)) = P(0)$, we can apply the results without actually knowing $x^*$. For example, let $F(x) = e^x - 1$. Then, we can choose $P(x) = x + 1$.

3. The radius $r_1$ was shown by us to be the convergence radius of Newton's method [5, 6],
$$x_{n+1} = x_n - F'(x_n)^{-1}F(x_n) \quad \text{for each } n = 0, 1, 2, \cdots, \tag{1.24}$$
under the conditions $(\mathcal{C}_1)$-$(\mathcal{C}_4)$. It follows from the definition of $r$ that the convergence radius $r$ of method (1.2) cannot be larger than the convergence radius $r_1$ of the second-order Newton's method (1.24). As already noted in [5, 6], $r_1$ is at least as large as the convergence ball given by Rheinboldt [34],
$$r_R = \frac{2}{3L}. \tag{1.25}$$
In particular, for $L_0 < L$ we have that $r_R < r_1$ and
$$\frac{r_R}{r_1} \to \frac{1}{3} \quad \text{as} \quad \frac{L_0}{L} \to 0.$$
That is, our convergence ball $r_1$ is at most three times larger than Rheinboldt's. The same value for $r_R$ was given by Traub [37].

4. It is worth noticing that method (1.2) does not change when we use the conditions of Theorem 1 instead of the stronger conditions used in [13, 14, 16]. Moreover, we can compute the computational order of convergence (COC), defined by
$$\xi = \ln\left(\frac{\|x_{n+1} - x^*\|}{\|x_n - x^*\|}\right) \bigg/ \ln\left(\frac{\|x_n - x^*\|}{\|x_{n-1} - x^*\|}\right),$$
or the approximate computational order of convergence (ACOC),
$$\xi_1 = \ln\left(\frac{\|x_{n+1} - x_n\|}{\|x_n - x_{n-1}\|}\right) \bigg/ \ln\left(\frac{\|x_n - x_{n-1}\|}{\|x_{n-1} - x_{n-2}\|}\right).$$
This way we obtain in practice the order of convergence in a way that avoids bounds involving estimates higher than the first Fréchet derivative of the operator $F$.
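For instance, the COC and ACOC above can be estimated from stored iterates; the sketch below (ours, scalar case for simplicity, with absolute values standing in for norms) is one way to do it.

```python
import math

def coc(xs, x_star):
    """COC xi from the last three errors |x_n - x*| (x* must be known)."""
    e = [abs(x - x_star) for x in xs[-3:]]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

def acoc(xs):
    """ACOC xi_1 from the last four iterates; needs no knowledge of x*."""
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 4, len(xs) - 1)]
    return math.log(d[2] / d[1]) / math.log(d[1] / d[0])
```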

3. Numerical Examples

We present numerical examples in this section.

Example 1. Let $X = Y = \mathbb{R}^3$, $D = \bar{U}(0, 1)$, $x^* = (0, 0, 0)^T$. Define the function $F$ on $D$ for $w = (x, y, z)^T$ by
$$F(w) = \left(e^x - 1,\ \frac{e - 1}{2}y^2 + y,\ z\right)^T.$$
Then, the Fréchet derivative is given by
$$F'(w) = \begin{pmatrix} e^x & 0 & 0\\ 0 & (e - 1)y + 1 & 0\\ 0 & 0 & 1 \end{pmatrix}.$$
We have that $L_0 = e - 1$, $L = M = e$. The parameters are given in Table 1.1.

Table 1.1. Parameters of methods (1.2)-(1.4)

Methods/parameters    (1.4)     (1.3)     (1.2)
r1                    0.3249    0.3249    0.3249
rp                    0.3116    0.1021    0.3116
r2                    0.1491    0.0530    0.1933
r3                    0.0577    0.0258    0.1665
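Assuming the `radii_method_1_4` helper sketched in Section 2, the (1.4) column of Table 1.1 can be checked numerically:

```python
import math

r1, rp, r2, r3, r = radii_method_1_4(L0=math.e - 1.0, L=math.e, M=math.e)
print(round(r1, 4), round(rp, 4), round(r2, 4), round(r3, 4))
# expected to be close to the (1.4) column: 0.3249, 0.3116, 0.1491, 0.0577
```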

Example 2. Returning to the motivational example from the introduction of this chapter, we have for $x^* = 1$ that $L_0 = L = 146.6629073$ and $M = 101.5578008$. The parameters are given in Table 1.2.

Table 1.2. Parameters of methods (1.2)-(1.4)

Methods/parameters    (1.4)     (1.3)     (1.2)
r1                    0.0045    0.0045    0.0045
rp                    0.0041    0.0001    0.0041
r2                    0.0001    0.0072    0.0002
r3                    0.0063    0.0001    0.0002

Example 3. Let $X = Y = C[0, 1]$, the space of continuous functions defined on $[0, 1]$ equipped with the max norm, and let $D = U(0, 1)$. Define the function $F$ on $D$ by
$$F(\varphi)(x) = \varphi(x) - 5\int_0^1 x\,\theta\,\varphi(\theta)^3\,d\theta. \tag{1.26}$$
We have that
$$F'(\varphi)(\xi)(x) = \xi(x) - 15\int_0^1 x\,\theta\,\varphi(\theta)^2\,\xi(\theta)\,d\theta, \quad \text{for each } \xi \in D.$$
Then, we get that $x^* = 0$, $L_0 = 7.5$, $L = M = 15$. The parameters are given in Table 1.3.

Table 1.3. Parameters of methods (1.2)-(1.4)

Methods/parameters    (1.4)     (1.3)     (1.2)
r1                    0.0667    0.0667    0.0667
rp                    0.0667    0.0055    0.0667
r2                    0.1124    0.0007    0.0197
r3                    0.0010    0.0004    0.0189

4. Conclusion

We present a local convergence analysis for some high convergence order methods to approximate a locally unique solution of an operator equation in a Banach space setting. The methods were shown to be of order five and six when the operator equation is defined on the m-dimensional Euclidean space [13, 14, 16]. The order of convergence was shown using hypotheses up to the sixth Fréchet derivative of the operator involved, although only the first derivative appears in these methods. In the present chapter, we only use hypotheses on the first Fréchet derivative. This way the applicability of these methods is expanded. Moreover, we present a radius of convergence, a uniqueness result, and computable error bounds based on Lipschitz constants. Numerical examples are also presented in this chapter.

References

[1] Abbasbandy S., Improving Newton-Raphson method for non-linear equations by modified Adomian decomposition method, Appl. Math. Comput. 145 (2003), 887-893.
[2] Adomian G., Solving Frontier Problems of Physics: The Decomposition Method, Kluwer Academic Publishers, Dordrecht, 1994.
[3] Ahmad F., Hussain S., Mir N. A., Rafiq A., New sixth-order Jarratt method for solving nonlinear equations, Int. J. Appl. Math. Mech. 5(5) (2009), 27-35.
[4] Amat S., Hernández M. A., Romero N., A modified Chebyshev's iterative method with at least sixth order of convergence, Appl. Math. Comput. 206(1) (2008), 164-174.
[5] Argyros I. K., Convergence and Application of Newton-type Iterations, Springer, 2008.


[6] Argyros I. K., Hilout S., A convergence analysis for directional two-step Newton methods, Numer. Algor. 55 (2010), 503-528.
[7] Bhalekar S., Daftardar-Gejji V., Convergence of the new iterative method, Int. J. Differential Equations 2011 (2011), 1-10, Article ID 989065.
[8] Bruns D. D., Bailey J. E., Nonlinear feedback control for operating a nonisothermal CSTR near an unstable steady state, Chem. Eng. Sci. 32 (1977), 257-264.
[9] Candela V., Marquina A., Recurrence relations for rational cubic methods I: The Halley method, Computing 44 (1990), 169-184.
[10] Candela V., Marquina A., Recurrence relations for rational cubic methods II: The Chebyshev method, Computing 45(4) (1990), 355-367.
[11] Chun C., Some improvements of Jarratt's method with sixth-order convergence, Appl. Math. Comput. 190(2) (2007), 1432-1437.
[12] Chun C., Iterative methods improving Newton's method by the decomposition method, Comput. Math. Appl. 50 (2005), 1559-1568.
[13] Cordero A., Hueso J. L., Martínez E., Torregrosa J. R., A modified Newton-Jarratt's composition, Numer. Algor. 55 (2010), 87-99.
[14] Cordero A., Hueso J. L., Martínez E., Torregrosa J. R., Increasing the convergence order of an iterative method for nonlinear systems, Appl. Math. Lett. 25 (2012), 2369-2374.
[15] Daftardar-Gejji V., Jafari H., An iterative method for solving non-linear functional equations, J. Math. Anal. Appl. 316 (2006), 753-763.
[16] Esmaeli H., Ahmadi M., Solving systems of nonlinear equations using an efficient iterative algorithm, submitted for publication.
[17] Ezquerro J. A., Hernández M. A., Recurrence relations for Chebyshev-type methods, Appl. Math. Optim. 41(2) (2000), 227-236.
[18] Ezquerro J. A., Hernández M. A., New iterations of R-order four with reduced computational cost, BIT Numer. Math. 49 (2009), 325-342.
[19] Ezquerro J. A., Hernández M. A., On the R-order of the Halley method, J. Math. Anal. Appl. 303 (2005), 591-601.
[20] Gutiérrez J. M., Hernández M. A., Recurrence relations for the super-Halley method, Computers Math. Applic. 36(7) (1998), 1-8.
[21] Ganesh M., Joshi M. C., Numerical solvability of Hammerstein integral equations of mixed type, IMA J. Numer. Anal. 11 (1991), 21-31.
[22] He J. H., A new iteration method for solving algebraic equations, Appl. Math. Comput. 135 (2003), 81-84.


[23] Hernández M. A., Chebyshev's approximation algorithms and applications, Computers Math. Applic. 41(3-4) (2001), 433-455.
[24] Hernández M. A., Salanova M. A., Sufficient conditions for semilocal convergence of a fourth-order multipoint iterative method for solving equations in Banach spaces, Southwest J. Pure Appl. Math. (1) (1999), 29-40.
[25] Kantorovich L. V., Akilov G. P., Functional Analysis, Pergamon Press, Oxford, 1982.
[26] Magreñán A. A., Different anomalies in a Jarratt family of iterative root-finding methods, Appl. Math. Comput. 233 (2014), 29-38.
[27] Magreñán A. A., A new tool to study real dynamics: The convergence plane, Appl. Math. Comput. 248 (2014), 23-35.
[28] Noor M. A., Noor K. I., Three-step iterative methods for non-linear equations, Appl. Math. Comput. 183 (2006), 322-327.
[29] Noor M. A., Some iterative methods for solving non-linear equations using homotopy perturbation method, Int. J. Comp. Math. 87 (2010), 141-149.
[30] Noor M. A., Waseen M., Al M. A., New iterative techniques for solving non-linear equations.
[31] Parhi S. K., Gupta D. K., Semilocal convergence of a Stirling-like method in Banach spaces, Int. J. Comput. Methods 7(02) (2010), 215-228.
[32] Petković M. S., Neta B., Petković L., Džunić J., Multipoint Methods for Solving Nonlinear Equations, Elsevier, 2013.
[33] Ren H., Wu Q., Bi W., New variants of Jarratt method with sixth-order convergence, Numer. Algorithms 52(4) (2009), 585-603.
[34] Rheinboldt W. C., An adaptive continuation process for solving systems of nonlinear equations, in: Mathematical Models and Numerical Methods (Tikhonov A. N. et al., eds.), pub. 3 (19), 129-142, Banach Center, Warsaw, Poland.
[35] Sharma J. R., Gupta P., An efficient fifth-order method for solving systems of nonlinear equations, Comput. Math. Appl. 67 (2014), 591-601.
[36] Soleymani F., Lotfi T., Bakhtiari P., A multi-step class of iterative methods for nonlinear systems, Optim. Lett. 8 (2014), 1001-1015.
[37] Traub J. F., Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, New Jersey, USA, 1964.
[38] Wang X., Kou J., Semi-local convergence of a class of modified super-Halley methods in Banach spaces, J. Optim. Theory Appl. 153 (2012), 779-793.

Chapter 2

Continuous Analogs of Newton-Type Methods

1. Introduction

Let $B_1, B_2$ denote Banach spaces and let $D \subseteq B_1$ be an open, convex set. One of the most interesting and challenging problems in numerical analysis is finding a solution $x^*$ of the equation
$$F(x) = 0, \tag{2.1}$$
where $F : D \longrightarrow B_2$ is a differentiable mapping in the sense of Fréchet. This is the case since problems from diverse disciplines reduce to (2.1) by mathematical modeling [1]-[8], [17, 24, 26]. The solution $x^*$ is needed in closed form, but this is attainable only in some instances. Hence, authors develop iterative methods converging to $x^*$ if certain convergence criteria are satisfied [1]-[30]. In this chapter, we show convergence of the method defined for each $n = 0, 1, 2, \ldots$ by
$$\begin{aligned}
x_{2n+1} &= x_{2n} - \tau_n F'(x_{2n})^{-1}F(x_{2n}),\\
x_{2n+2} &= x_{2n+1} - F'(x_{2n+1})^{-1}F(x_{2n+1}),
\end{aligned} \tag{2.2}$$

where {τn } is a real sequence chosen to force convergence of the sequence {xn } to x∗ . Our convergence results extend the usage method (2.2) in cases not covered before, and for Banach space valued mappings. The layout of the rest of the chapter is: The convergence is given in Section 2 and Section 3.

2.

Semi-local Convergence I

We need to state a local result for reasons of comparison [29,30], when B1 = B2 = R, and D = [a, b]. Theorem 4. Let a, b ∈ R, a < b and let F : [a, b] −→ R. Suppose: F(x) ∈ C4 [a, b]

(2.3)

14

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al. F 0 (x) 6= 0, x ∈ (a, b)

(2.4)

equation (2.1) has a unique solution x∗ ∈ (a, b)

(2.5)

0 < τn < 1

(2.6)

M2 |F(xn )| 1 < , (F 0 (xn ))2 2

(2.7)

for some sequence of numbers an :=

M2 = sup |F 00 (x)|

(2.8)

F 0 (x) 6== 0,

(2.9)

x∈Br (x∗ )

F 00 (x) 6= 0 preserves sign in Br (x∗ ) = {x : |F(x)| < r}.

(2.10) ∗

Then, sequence {xn } generated by method (2.2) converges bilaterally to x with order of convergence at least two which is increased up to four when τn −→ 1 as n −→ ∞. Suitable choice of parameters τn depending on an have been given in [12, 29, 30]. There are some setbacks with the application of Theorem 4: (i) We do not know how to choose the initial point x0 other than x0 ∈ (a, b) which may be a very large interval. (ii) No computable error bounds on the distances |xn − x∗ | are given. (iii) The point x∗ must be the only root of equation (2.1) in (a, b). (iv) The hypothesis on an is not easy to verify in practice. Next, we show how to eliminate these setbacks by introducing some scalar functions and parameters. ¯ ρ) denote respectively the open and closed intervals in R with center Let U(z, ρ), U(z, z ∈ R and of radius ρ > 0. Define parameter R by R = sup{t ∈ [a, b] : U(x∗ ,t) ∈ [a, b]}.

(2.11)

Let w0 : [0, ∞) −→ [0, ∞) be a continuous and nondecreasing function with w0 (0) = 0. Assume w0 (t) = 1 (2.12) has a minimal positive solution r0 . Let w : [0, r0 ) × [0, r0) −→ [0, ∞), v : [0, r0) −→ [0, ∞) be continuous and nondecreasing functions with w(0, 0) = 0. Define functions a and h on the interval [0, r0) by R w(R, R) 01 v(θt)dθt 1 a(t) = − (1 − w0 (t))2 2 and

h(t) = g(t) − 1.

Continuous Analogs of Newton-Type Methods

15

1 We have that h(0) = − < 0 and h(t) −→ ∞ ast −→ r0−. It then follows from the intermedi2 ate value theorem that function F has roots in the interval (0, r0). Denote by r∗ the smallest such zero. Then, we have that for each t ∈ [0, r∗) 1 0 ≤ a(t) < . 2

(2.13)

Using the preceding notation, we can show the following local convergence result for the method (2.2). Proposition 1. Suppose:(2.3), (2.5), (2.6) and (2.9) (except F 0 (x) 6= 0) hold for some r ∈ [0, R]. There exist a root x∗ of equation (2.1) and function w0 : [0, ∞) −→ [0, ∞) continuous and nondecreasing with w0 (0) = 0 such that for each x ∈ U(x∗ , R) F 0 (x∗ ) 6= 0,

(2.14)

|F 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))| ≤ w0 (|x − x∗ |).

(2.15)

|F 0 (x∗ )−1 (F 0 (x) − F 0 (y))| ≤ w(|x − x∗ |, |y − x∗ |)|x − y|

(2.16)

|F 0 (x∗ )−1 F 0 (x)| ≤ v(|x − x∗ |)

(2.17)

max{r, r∗ } ≤ R.

(2.18)

Let U0 = U(x∗ , R) ∩ U(x∗ , r0 ). There exist functions w : [0, r0) × [0, r0 ) −→ [0, ∞), v : [0, r0) −→ [0, ∞) continuous and nondecreasing such that for each x, y ∈ U0

and 1 Then, the following assertions hold: F 0 (xn ) 6= 0, an < , sequence {xn } is well defined for 2 x0 ∈ U(x∗ , r∗ ) − {x∗ } and converges to x∗ . Moreover, if there exists r1∗ ≥ r∗ such that Z 1 0

w0 (θr1∗ )dθ < 1,

(2.19)

then the limit point x∗ is the only solution of equation F(x) = 0 in U1 = U(x∗ , R) ∩U(x∗ , r1∗ ). Proof. The proof is based on mathematical induction. Let x ∈ U(x∗ , r∗ ) − {x∗ }. Using the definition of r∗ and (2.15), we have that kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ w0 (kx − x∗ k) ≤ w0 (r∗ ) ≤ w0 (r0 ) < 1.

(2.20)

It follows from the Banach lemma on invertible operators [17] and (2.20) that F 0 (x) 6= 0 and kF 0 (x)−1 F 0 (x∗ )k ≤

1 . 1 − w0 (kx − x∗ k)

(2.21)

We can write by (2.14) F(x) = F(x) − F(x∗ ) =

Z 1 0

F 0 (x∗ + θ(x − x∗ ))(x − x∗ )dθ.

(2.22)

16

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Notice that kx∗ + θ(x − x∗ ) − x∗ k = θkx − x∗ k < r∗ , so x∗ + θ(x − x∗ ) ∈ U(x∗ , r∗ ) for each θ ∈ [0, 1]. Then, by (2.20) and (2.22), we get that kF 0 (x∗ )−1 F(x)k ≤

Z 1 0

v(θkx − x∗ k)dθkx − x∗ k.

(2.23)

We also have by (2.16) and (2.18) that M2 := w(r, r).

(2.24)

Using the definition of sequence {an }, (2.13), (2.21), (2.23) and (2.24) we obtain in turn that for kx0 − x∗ k ≤ t < r∗ M2 kF(xk )k w(t,t) 01 v(θt)dθt 1 ≤ < , ak = (F 0 (xk ))2 (1 − w0 (t))2 2 R

(2.25)

which shows (2.7). Hence, the conclusions of Proposition 1 hold with the exception of the uniqueness part. Let y∗ ∈ U1 with F(y∗ ) = 0. Define Q =

(2.15) and (2.19) we have in turn that

Z 1 0

F 0 (x∗ + θ(y∗ − x∗ ))dθ. By

kF (x ) (Q − F (x ))k ≤

Z 1

w0 (θky∗ − x∗ k)dθ



Z 1

w0 (θr1∗ )dθ < 1,

0

∗ −1

0



0

0

(2.26)

so Q is invertible. Then, from the identity 0 = F(y∗ ) − F (x∗ ) = Q(y∗ − x∗ ), we conclude that x∗ = y∗ . So far we showed that Theorem 4 can be weakened and the setbacks (i)-(iv) be handled using Proposition 1. However, still (2.3) is a setback. As a motivational example consider function F(x) = x3 log x2 + x5 − x4 Then, we have q = 1, and F 0 (x) = 3x2 log x2 + 5x4 − 4x3 + 2x2 , F 00 (x) = 6x logx2 + 20x3 − 12x2 + 10x, F 000 (x) = 6 logx2 + 60x2 = 24x + 22.

1 3 Obviously F 000 (x) is not bounded on [− , ]. 2 2 That is why, next we present a different local convergence analysis in a Banach space setting using only hypotheses on the first Fr´echet derivative. We consider the equation F(x) = 0,

(2.27)

Continuous Analogs of Newton-Type Methods

17

where F is a Fr´echet-differentiable operator defined on a convex subset D of a Banach space B1 with values in a Banach space B2 . As with the earlier approach, let R, r0 , w0 , w, v be as before. Moreover, suppose |1 − τn |v(0) < 1 for each n = 0, 1, 2, . . ..

(2.28)

Define functions g2n , h2n on the interval [0, r0) by g2n =

R1 0

w((1 − θ)t)dθ + |1 − τn | 1 − w0 (t)

R1 0

v(θt)dθ

and h2n (t) = g2n (t) − 1.

By (2.28), we have h2n (0) = |1 − τn|v(0) − 1 < 0 and h2n (t) −→ ∞ as t −→ r0− . Denote by r2n the smallest zero of functions h2n on the interval (0, r0 ), respectively. Moreover, define functions g2n+1, h2n+1 on the interval [0, r0 ) by g2n+1(t) =

R1 0

w((1 − θ)t)dθ 1 − w0 (t)

and h2n+1 (t) = g2n+1 (t) − 1.

We have h2n+1 (0) = −1 < 0 and h2n+1 (t) −→ ∞ as t −→ r0−. Denote by r2n+1 the smallest zeros of functions h2n+1 on the interval (0, r0), respectively. Define a radius of convergence r r = min{ri } for each n = 0, 1, 2, . . .. (2.29) Suppose that r > 0.

(2.30)

0 ≤ gi (t) < 1 for each i = 0, 1, 2, . . ..

(2.31)

We have that for each t ∈ [0, r),

Next, we present the local convergence analysis of the method (2.2) using the preceding notation for Banach space valued operators. Theorem 5. Let F : D ⊂ B1 −→ B2 be a continuous Fr´echet differentiable operator. Let also x∗ , w0 , w, v be as in Proposition 1 with sequence {τn } satisfying (2.28). Moreover, suppose that F 0 (x∗ ) is invertible, the radius of convergence r given in (2.29) satisfies (2.30) and ¯ ∗ , r∗ ) ⊆ D. U(x

(2.32)

Then, sequence {xn } generated for x0 ∈ U(x∗ , r) − {x∗ } by method (2.2) is well defined in U(x∗ , r), remains in U(x∗ , r) for each n = 0, 1, 2, . . ., and converges to x∗ so that kx2n+1 − x∗ k ≤ g2n (kx2n − x∗ k)kx2n − x∗ k ≤ kx2n − x∗ k < 1

(2.33)

18

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

and kx2n+2 − x∗ k ≤ g2n+1 (kx2n+1 − x∗ k)kx2n+1 − x∗ k ≤ kx2n+1 − x∗ k < r,

(2.34)

where the function “g” are defined previously. Moreover, if there exists p ≥ r such that Z 1 0

w0 (θp)dθ < 1,

(2.35)

¯ ∗ , p). then the limit point x∗ is the only solution of equation F(x) = 0 in U2 = D ∩ U(x Proof. Let x ∈ U(x∗ , r). Using (2.12), (2.15) and (2.29), we have that F 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ w0 (kx − x∗ k) ≤ w0 (r0 ) < w0 (r) < 1, so F 0 (x) is invertible and kF 0 (x)−1 F 0 (x∗ )k ≤

1 . 1 − w0 (kx − x∗ k)

(2.36)

In particular (2.36) holds for x = x0 and x1 is well defined by the first sub-step of method (2.2). We can write x1 − x∗ = x0 − x∗ − F 0 (x0 )−1 F(x0 ) + (1 − τ0 )F 0 (x0 )−1 F(x0 ).

(2.37)

Using (2.13), (2.16), (2.17), (2.36) and (2.37), we get in turn that ∗

0

−1 0



kx1 − x k ≤ kF (x0 ) F (x )kk

Z 1

0 −1 0

F 0 (x∗ )−1 (F 0 (x∗ + θ(x0 − x∗ )) − F 0 (x0 ))(x0 − x∗ )dθk

+|1 − τ0 |kF 0 (x0 ) F (x∗ )kkF 0 (x∗ )−1 F(x0 )k R1

R1

w((1 − θ)kx0 − x∗ k)dθkx0 − x∗ k + |1 − τ0 | ≤ 1 − w0 (kx0 − x∗ k) ∗ ∗ = g0 (kx0 − x k)kx0 − x k ≤ kx0 − x∗ k < r, 0

0

v(θkx0 − x∗ kdθkx0 − x∗ k (2.38)

which shows (2.33) for n = 0 and x1 ∈ U(x∗ , r). Similarly x2 is well defined, since x1 ∈ U(x∗ , r), F 0 (x1 ) is invertible, and we can write by the second sub-step of method (2.2) for n = 0 that x2 − x∗ = x1 − x∗ − F 0 (x1 )−1 F(x1 ). (2.39) Then, again by (2.13), (2.16), (2.17), (2.36) and (2.37), we obtain in turn that kx2 − x∗ k ≤ kF 0 (x1 )−1 F 0 (x∗ )k ×k

Z 1 0

F 0 (x∗ )−1 (F 0 (x∗ + θ(x1 − x∗ )) − F 0 (x1 ))(x1 − x∗ )dθk

R1

w((1 − θ)kx1 − x∗ k)dθkx1 − x∗ k 1 − w0 (kx1 − x∗ k) = g1 (kx1 − x∗ k)kx1 − x∗ k ≤ kx1 − x∗ k < r,



0

(2.40)

Continuous Analogs of Newton-Type Methods

19

which shows (2.34) for n = 0 and x2 ∈ U(x∗ , r). The induction is completed, if we replace x0 , x1 by x2k , x2k+1 in the preceding estimates. Then, from kx2k+1 − x∗ k ≤ ckx2k − x∗ k, c = g2k (kx0 − x∗ k) ∈ [0, 1)

(2.41)

kx2k+2 − x∗ k ≤ dkx2k+1 − x∗ k, d = g2k+1(kx0 − x∗ k) ∈ [0, 1)

(2.42)





we deduce that lim xk = x and x2k+1, x2k+2 ∈ U(x , r). The uniqueness part is shown in

Proposition 1.

k−→∞

There is also the following local convergence result in the literature which however does not provide a computable radius of convergence, error bounds, or uniqueness results. Proposition 2. [29,30] There exists ε > 0 such that for any initial approximation x0 with kx0 − x∗ k ≤ ε, method (2.2) converges to x∗ provided that τn ∈ (0, 2) for each n = 0, 1, 2, . . ..

3.

Semi-local Convergence II

We first state a well known semi-local convergence result for method yn+1 = yn − τn F 0 (yn )−1 F(yn )

(2.43)

for Banach space valued operators. Theorem 6. Assume that (i) kF 00 (x)k ≤ M for some M ≥ 0 and each x ∈ D, (ii) F 0 (x)−1 exists for each x ∈ D, (iii) kF 0 (y0 )−1 k ≤ β (iv) kF 0 (y0 )−1 F(y0 )k ≤ η, a0 = Mβη 0 −1 0 −1 (v) yn ∈ D, MkF √ (yn ) kkF (yn ) F(yn )k ≤ an < 2 for each n = 0, 1, 2, . . . and τn ∈ In = −1 + 1 + 4an (0, ) ⊆ (0, 2). an

Then, the sequence (2.43) converges to a solution x∗ of equation (2.1). Clearly, by setting yn = xn , method (2.10) becomes method (2.43). Therefore, method (2.10) converges under the hypotheses of Theorem 6 provided that τn = 1, when x2n+2 (i.e. y2n+2 ) is computed, where as τn ∈ In otherwise. Hence, we can study the convergence of method (2.43) instead of the convergence of method (2.10). The setbacks of the semi-local convergence are similar to the ones for the local convergence of method (2.43): Next, we modify Theorem 6 in two different ways: Proposition 3. Let F : D ⊂ B1 −→ B2 be Fr´echet differentiable. Suppose: (1) F 0 (y0 ) is invertible for some y0 ∈ D and

20

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

(2) kF 0 (y0 )−1 (F 0 (y) − F 0 (y0 ))k ≤ K0 ky − y0 k for kF 0 (y0 )−1 F(y0 )k ≤ η, each y ∈ D, and 1 some K0 > 0. Let D0 = D ∩U(y0 , ). K0 (3) kF 0 (y0 )−1 (F 0 (y) − F 0 (x))k ≤ Kky − xk for each x, y ∈ D0 , and some K > 0. (4) yn ∈ D0 , KkF 0 (yn )−1 F 0 (y0 )kkF 0 (yn )−1 F(yn )k ≤ a0n < 2, n = 0, 1, 2, . . ., a00 = Kη and τn ∈ In0 ⊆ (0, 2). Then, the sequence (2.43) converges to a solution x∗ of equation (2.1). Proof. Let x ∈ U(y0 ,

1 ). Then, by (i) and (ii) K0 kF 0 (y0 )−1 (F 0 (x) − F 0 (y0 ))k ≤ K0 kx − y0 k < 1.

(2.44)

It follows from (2.44) and the Banach lemma on invertible operators that F 0 (x) is invertible so (i) of Theorem 6 holds (in particular for x = yn ). Condition (3) can replace the stronger condition (i) in the proof of Theorem 6. The rest of the proof follows from the proof of Theorem 6 applied to the operator F 0 (y0 )−1 F and K, a0n, replacing M and an respectively. Concerning the uniqueness of the solution, we have: Proposition 4. Under the hypotheses of Proposition 4, further suppose that there exists 1 ρ≥ such that K0 K0 ρ < 1, (2.45) then, the limit point x∗ is the only solution of equation (2.1) in D1 = D ∩ U¯ (y0 , ρ). Proof. The existence of the solution x∗ is established in Proposition 4. Let y∗ ∈ D1 with ∗

F(y ) = 0. Define linear operator Q by Q =

Z 1 0

Proposition 4 and (2.45), we have that 0

−1

0

kF (y0 ) (Q − F (y0 ))k ≤

Z 1 0

≤ K0 ≤

F 0 (x∗ + θ(y∗ − x∗ ))dθ. Then, using (2) of

K0 kx∗ + θ(y∗ − x∗ ) − y0 kdθ

Z 1 0

[(1 − θ)kx∗ − y0 k + θky∗ − y0 k]dθ

K0 1 ( + ρ) < 1, 2 K0

so Q is invertible. Proposition 4 can be presented in a more general setting along the lines of Proposition 1. Let R0 be defined by R0 = sup{t ≥ 0 : U(y0 ,t) ⊆ D}. (2.46) Let w0 , r0 , w, v be as in the local case. Proposition 5. Let F : D ⊆ B1 −→ B2 be continuously Fr´echet-differentiable. Suppose :

Continuous Analogs of Newton-Type Methods

21

(1) F 0 (y0 ) is invertible for some y0 ∈ D and kF 0 (y0 )−1 F(y0 )k ≤ η (2) kF 0 (y0 )−1 (F 0 (y) − F 0 (y0 ))k ≤ w0 (ky − y0 k) for each y ∈ D (3) kF 0 (y0 )−1 (F 0 (y) − F 0 (x))k ≤ w(ky − xk) for each x, y ∈ D0 = D ∩U(y0 , r0 ) (4) kF 0 (y0 )−1 F 0 (y)k ≤ v(ky − y0 k) for each y ∈ D0 w(kyn − yn−1 k)(η + 01 v(θkyn − y0 k)dθkyn − y0 k ≤ a1n < 2, n = (5) yn ∈ D0 , (1 − w0 (kyn − y0 k))2 0, 1, 2, . . ., a10 = w(wr0 )η and τn ∈ In1 ⊆ (0, 2). Then, the sequence (2.43) converges to a solution x∗ of equation (2.1). R

Moreover, if there exists r1 ≥ r0 such that Z 1 0

w0 (θr1 )dθ < 1,

(2.47)

¯ 0 , r1 ). then, the limit point x∗ is the only solution of equation F(x) = 0 in D1 = D ∩ U(y Proof. We have by (1) and (4) that F(yn ) = F(y0 ) + (F(yn ) − F(y0 )) = F(y0 ) +

Z 1 0

F 0 (y0 + θ(yn − y0 ))dθ(yn − y0 )

so kF 0 (y0 )−1 F(yn )k ≤ kF 0 (y0 )−1 F(y0 )k +k

Z 1 0

≤ η+

F 0 (y0 )−1 F 0 (y0 + θ(yn − y0 ))dθ(yn − y0 )k

Z 1 0

v(θkyn − y0 k)dθkyn − y0 k.

(2.48)

Therefore, KkF 0 (yn )−1 F 0 (y0 )kkF 0 (yn )−1 F(yn )k := a0n ≤ w(kyn − yn−1 k)kF 0 (yn )−1 F 0 (y0 )k2 kF 0 (y0 )−1 F(yn )k ≤

w(kyn − yn−1 k)(η + 01 v(θkyn − y0 k)dθkyn − y0 k (1 − w0 (kyn − y0 k))2

≤ a2n < 2.

R

(2.49)

22

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Remark 2. Condition (5) in Proposition 5 can be replaced as follows: Define functions ϕ and ψ on the interval [0, r0) by w(2t)(η + 01 v(θt)dθt) ψ(t) = (1 − w0 (t))2 R

and ψ(t) = ϕ(t) − 2. We have that ψ(0) = −2 < 0 and ψ(t) −→ ∞ as t −→ r0−. Denote by r¯0 the smallest zero of function ψ on the interval (0, r0). Define D10 = D ∩U(y0 , r¯0 ). We have the estimates a0n



w(kyn − y0 k + ky0 − yn−1 k)(η + 01 v(θkyn − y0 k)dθkyn − y0 k (1 − w0 (kyn − y0 k))2



a1n

R

w(2¯r0 )(η + 01 v(θ¯r0 )dθ¯r0 ) := < 2, (1 − w0 (¯r0 ))2 R

so D10 , r¯0 , a1n can replace D0 , r0 , a0n , respectively in Proposition 5 so that condition (5) dropped is satisfied. Remark 3. A plethora of choices for sequence {τn } can also be found in the literature [12,29,30].

4.

Conclusion

In this chapter, we present new convergence results for continuous analogs of Newton-type methods for solving equations containing Banach space-valued mappings. The usage of the center Lipschitz together with the notion of the restricted convergence region lead to a finer analysis of these methods.

References [1] Amat S., Busquier S., and Plaza S., Review of some iterative root-finding methods from a dynamical point of view. Scientia, 10(3):35, 2004. [2] Argyros I. K., Computational theory of iterative methods, volume 15. Elsevier, 2007. [3] Argyros I. K., On the semilocal convergence of a fast two-step Newton method. Revista Colombiana de Matematicas, 42(1):15-24, 2008. [4] Argyros I. K., George S., Thapa N., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-I, Nova Publishes, NY, 2018. [5] Argyros I. K., George S., Thapa N., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-II, Nova Publishes, NY, 2018.

Continuous Analogs of Newton-Type Methods

23

[6] Argyros I. K. and Hilout S., Weaker conditions for the convergence of newtons method. Journal of Complexity, 28(3):364-387, 2012. [7] Argyros I. K., Magr´en˜ an A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018. [8] Argyros I. K., Magr´en˜ an A. A., Iterative methods and their dynamics with applications, CRC Press, New York, USA, 2017. [9] Budzko D., Cordero A., Torregrosa J. R., Modifications of Newton’s method to extend the convergence domain, SeMA, 66(2014), 43-53. [10] Bi W., Ren H., and Wu Q., Three-step iterative methods with eight-order convergence for solving nonlinear equations, J. Comput. Appl. Math., 225(2009) 105-112. [11] C˜atinas E., A survey on the high convergence orders and computational convergence orders of sequences, Appl. Math. Comput., 343 (2019) 1-20. [12] Cordero A., Hueso J. L., Martinez E. and Torregrosa J. R., New modifications of Potra-Pt´ak’s method with optimal fourth and eighth orders of convergence, J. Comput. Appl. Math., 234(2010) 2969-2976. [13] Cordero A., Torregrosa J. R. and Vassileva M. P., Three-step iterative methods with optimal eight order convergence, J. Comput. Appl. Math., 235(2011) 3189-3194. [14] Ezquerro J. A., Hernandez M. A., Romero N., and Velasco A. I., Improving the domain of starting points for secant-like methods, Appl. Math. Comput., 219(2012) 3677-3692. [15] Fang L. and He G., Some modifications of Newton’s method with higher-order convergence for solving nonlinear equations, J. Comput. Appl. Math., 228(2009) 296303. [16] Hernandez M. A. and Salanova M. A., Modification of the Kantorovich assumptions for semilocal convergence for the Chebyshev method, Comput. Appl. Math., 126(2000) 131-143. [17] Kantlorovich L. V. and Akilov G. P., Functional analysis, Pergamon Press, 1982. [18] Magre˜na´ n A. A. and Argyros I. K., Improved convergence analysis for Newton-like methods. Numerical Algorithms, 71(4):811-826, 2016. [19] Magre˜na´ n A. A. and Argyros I. K., Two-step newton methods. Journal of Complexity, 30(4):533-553, 2014. [20] Pavaloiu I. and Catinas E., Bilateral approximations for some Aitken-SteffensenHermite type methods of order three, Appl. Math. Comput., 217(2011) 5838-5846. [21] Potra F. A. and Pt´ak V., Nondiscrete induction and iterative processes, volume 103. Pitman Advanced Publishing Program, 1984.

24

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[22] Rheinboldt W. C., An adaptive continuation process for solving systems of nonlinear equations, Polish Academy of Science, Banach Ctr. Publ. 3 (1978), no. 1, 129–142. [23] Siyyan H. I., Shatnavi M. T., and Al-Subaihi I. A., A new one-parameter family of iterative methods with eight order of convergence for solving nonlinear equations, Inter. J. Pure Appl. Math., 84(2013) 451-461. [24] Sharma J. R. and Arora H., On efficient weighted-Newton’s methods for solving systems of nonlinear equations, Appl. Math. Comput., 222(2013) 497-506. [25] Thukral R. and Petkovic M. S., A family of three-point methods of optimal order for solving nonlinear equations, J. Comput. Appl. Math., 233(2010) 2278-2284. [26] Traub J. F., Iterative methods for the solution of equations. American Mathematical Soc., 1982. [27] Wang X. and Zhang T., A new family of Newton-type iterative methods with and without memory for solving nonlinear equations, Calcolo 51(2014) 1-15. [28] Weerakoon S. and Fernando T. G. I., A variant of Newton’s method with accelerated third-order convergence, Applied Mathematics Letters, 13(8) (2000) 87-93. [29] Zhanlav T. and Chuluunbaatar O., Some iteration methods with high order convergence for nonlinear equation, Bulletin of PFUR, Series Mathematics.Information sciences. Physics, 4 (2009) 47-55. [30] Zhanlav T., Note on the cubic decreasing region of the Chebyshev method, J. Comput. Appl. Math., 235(2010) 341-344.

Chapter 3

Initial Points for Newton’s Method 1.

Introduction

Let B1 , B2 stand for Banach spaces and D ⊂ B1 be a nonempty, convex and open set. By L B(B1, B2), we denote the space of bounded linear operators from B1 into B2. Let also ¯ d) = {y ∈ B1 : kx − yk ≤ d}. From now on by S(x, d) = {y ∈ B1 : kx − yk < d} and S(x, differentiable, we mean differentiable in the Fr´echet sense. Problems from many disciplines can be formulated using mathematical modeling [1]-[12] as an equation of the form F(x) = 0,

(3.1)

where F : D −→ B2 is a continuously differentiable operator. Solution of equation (3.1) can be found in closed form only in special cases. That is why iterative methods are utilized to produce a sequence converging to a solution x∗ of equation (3.1) under some sufficient convergence conditions [1]-[12]. We study the convergence of Newton’s method defined for each n = 0, 1, 2, . . . by xn+1 = xn − F 0 (xn )−1 F(xn ) (3.2)

where x0 ∈ D is an initial point, since it is considered the most important quadratically convergent to x∗ method. In what follows, we present a short survey of the convergence results, then we show how to extend the convergence region even further. The first semi-local convergence result for Newton’s method in Banach spaces was given by Kantorovich [10] under the following conditions:

(A1) There exists Γ0 = [F 0 (x0 )]−1 ∈ L B(B2, B1 ), for some x0 ∈ D, with kΓ0 k ≤ β and kΓ0 F(x0 )k ≤ η, (A2) kF 00 (x)k ≤ M for x ∈ D. 1 (A3) h = Mβη ≤ . 2 Theorem 7. (The Newton-Kantorovich theorem,[10]) Let F : D ⊆ B1 −→ B2 be a twice continuously differentiable operator. Assume that conditions (A1)–(A3) are satisfied. If √ 1 − 1 − 2h B(x0 , s∗ ) ⊂ D, where s∗ = η, then Newton’s sequence, given by (3.2) and starth ing at x0 , converges to a solution x∗ of the equation F(x) = 0 and xn , x∗ ∈ S(x0 , s∗ ), for all n ∈ N.

26

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

As you can see in Theorem 7, the Newton-Kantorovich theorem, we use the information around the initial point x0 , (A1), a condition on the operator involved F, (A2), and a condition for the parameters introduced in the previous two conditions to give criteria ensuring the convergence, (A3). A very important problem in the study of iterative methods is to locate starting points x0 such that the sequence {xn } is convergent. The set of these starting points is that we call the convergence domain, which is small in general so it is important to enlarge the convergence domain without additional hypotheses. Notice that the convergence domain of the method is connected with the domain of parameters associated with the semi-local convergence conditions required to obtain the convergence of the method. In this case, for each value of M that is fixed by condition (A2), the condition required to the operator F in the domain of definition D, the domain of parameters associated with conditions (A1)–(A3) is:   1 . (3.3) TK (M) = x0 ∈ D : Mβη ≤ 2 On the other hand, Huang proposed in [9] an alternative to condition (A2) that does not consist of relaxing the condition on the operator F and imposed a condition on F that leads to a modification, not a restriction, of the convergence domain. In particular, Huang proposes that F 00 is Lipschitz continuous in D and proves the semi-local convergence of Newton’s method can be proved under the following conditions: (B1) There exists Γ0 = [F 0 (x0 )]−1 ∈ L B(B2, B1 ), for some x0 ∈ D, with kΓ0 k ≤ β and kΓ0 F(x0 )k ≤ η; moreover, kF 00 (x0 )k ≤ M0 .

(B2) kF 00 (x) − F 00 (y)k ≤ Lkx − yk for x, y ∈ D.

3 (B3) 3β2 ηL2 + 3β2 M0 L + β3 M03 ≤ β2 M02 + 2βL 2 .

In this case, for each value of L that is fixed by condition (B2), the domain of parameters associated with conditions (B1)–(B3) is:    23 2 2 2 3 3 2 2 TH (L) = x0 ∈ D : 3β ηL + 3β M0 L + β M0 ≤ β M0 + 2βL (3.4)

But, if we pay attention to the proof of Huang in [9], we see that F 00 (x) doesn’t need to be Lipschitz continuous in the entire domain D, since it is enough that F 00 (x) is Lipschitz continuous only at x0 . This observation was made by Guti´errez in [8], where (B2) is replaced by kF 00 (x) − F 00 (x0 )k ≤ L0 kx − x0 k for x ∈ D, which is a center condition at the starting point x0 . Taking into account this, Guti´errez obtains a semi-local convergence result for Newton’s method under the following conditions: (B1) There exists Γ0 = [F 0 (x0 )]−1 ∈ L B(Y, X), for some x0 ∈ D, with kΓ0 k ≤ β and kΓ0 F(x0 )k ≤ η; moreover, kF 00 (x0 )k ≤ M0 .

(C2) kF 00 (x) − F 00 (x0 )k ≤ L0 kx − x0 k for x ∈ D.

(C3) 3β2 ηL20 + 3β2 M0 L0 + β3 M03 ≤ β2 M02 + 2βL0

 32

.

Initial Points for Newton’s Method

27

Notice that L0 ≤ L, so that Huang’s result is relaxed by Guti´errez in [8] by using condition (C2) instead of (B2). In this case, for each value of L0 , that is fixed by condition (C2), the domain of parameters associated with conditions (B1), (C2) and (C3) is:    32 2 2 2 3 3 2 2 DG (L0 ) = x0 ∈ D : 3β ηL0 + 3β M0 L0 + β M0 ≤ β M0 + 2βL0 (3.5)

Notice that, in this situation, from condition (C2), the convergence domain for Newton’s method consists of a single point, x0 , or it is an empty set, and Newton’s method is then never convergent. Observe that condition (C3) is (B3) with L0 instead of L. To avoid this problem that presents the last condition, we use in this chapter a center condition for the second Fr´echet derivative of the operator F involved on an auxiliary point xe in the following way: (D2) kF 00 (x) − F 00 (e x)k ≤ e Lkx − xek for x ∈ D,

once the point xe ∈ D is fixed. So, we obtain a convergence domain that is not reduced to a point or the empty set, since a nonempty set of possible starting points can be found. In this chapter, following the idea of extending the region of starting points, we try to reduce the value of parameter e L to obtain a larger convergence region. For this, our idea is to restrict the domain D by means of considering condition (D2) for x ∈ D0 with D0 ⊂ D. Moreover, as a condition on the starting point x1 , we keep a condition centered at x0 , which allows us to sharpen the bounds and relax the condition (B3). Our chapter extends earlier work by us given in [5]. We also present the local convergence of Newton’s method not given in [5] using the same idea. The layout of the rest of the chapter is: Section 2 contains the semi-local convergence, whereas Section 3 presents the local convergence of Newton’s method. Numerical examples appear in Section 4.

2.

Semi-local Convergence Result

To prove the semi-local convergence of Newton’s method, we follow Kantorovich’s technique and use the concept of majorizing sequence. A scalar sequence {tn} is a majorizing sequence of {xn } if kxn − xn−1 k ≤ tn −tn−1 , for all n ∈ N. From the last inequality, it follows the sequence {tn } is nondecreasing. Moreover, it is easy to check that if {tn } converges to t ∗ < +∞, there exists x∗ ∈ X such that x∗ = lim xn and kx∗ − xn k ≤ t ∗ − tn , for n = 0, 1, 2, . . . n

Then, the interest of the majorizing sequence is that the convergence of the sequence {xn } in the Banach space X is deduced from the convergence of the scalar sequence {tn }. From the concept of majorizing sequence, Kantorovich proves the Newton-Kantorovich theorem given in Theorem 7. For the last, a majorizing sequence is constructed from conditions (A1)–(A2) of the Newton-Kantorovich theorem, by applying Newton’s method, s0 = 0,

sn+1 = Np (sn ) = sn −

p(sn ) , p0 (sn )

n ≥ 0,

28

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

to Kantorovich’s polynomial

M 2 s η s − + . (3.6) 2 β β √ √ 1 + 1 − 2h 1 − 1 − 2h ∗∗ ∗ Note that (3.6) has two positive solutions s = η and s = η such h h 1 that s∗ ≤ s∗∗ if h = Mβη ≤ . Moreover, we consider p(s) in some interval [0, s0] taking 2 into account that s∗ ≤ s∗∗ < s0 . p(s) =

3.

Main Result

We present a semi-local convergence result for Newton’s method under a center condition of type (D2) with a restricted domain by using the technique of majorizing sequences, such as it appears in [4]. For this, we suppose the following conditions: (E1) There exists xe ∈ D such that kx0 − e xk = γ, where x0 ∈ D, and kF 00 (e x)k ≤ δ. There exists the operator Γ0 = [F 0 (x0 )]−1 ∈ L B(B2 , B1 ), with kΓ0 k ≤ β and kΓ0 F(x0 )k ≤ η. Moreover, there exists K0 > 0, such that kF 0 (x) − F 0 (x0 )k ≤ K0 kx − x0 k for x ∈ D.   1 00 00 (E2) kF (x) − F (e x)k ≤ `1 kx − xek for x ∈ D0 := D ∩ B x1 , − η , proβK0 vided that βK0 η < 1. (E3) Define the scalar function ψ1 by ψ1 (t) =

`1 3 δ1 2 t η t + t − + , 6 2 β β

(3.7)

where δ1 = max{δ + γ`1 , K0 }. There exists α1 , unique positive root of ψ01 (t) = 0 with ψ1 (α1 ) ≤ 0. 1 (E4) α1 + η ≤ . βK0 Next, we construct a majorizing sequence from conditions (E1)–(E3) by applying Newton’s method ψ1 (t 1 ) 1 (3.8) t01 = 0, tn+1 = Nψ1 (tn1 ) = tn1 − 0 n1 , n ≥ 0, ψ1 (tn ) Note that (3.7) has two positive zeros t1∗ and t1∗∗ such that t1∗ ≤ t1∗∗ if ψ1 (α1 ) ≤ 0, where α1 is the unique positive root of ψ01 (t) = 0. Moreover, we consider ψ1 (t) in some interval [0,t 0] taking into account that t1∗ ≤ t1∗∗ < t 0 . Theorem 8. Let F : D ⊆ B1 −→ B2 be a twice continuously Fr´echet differentiable operator defined on a nonempty open convex domain D of a Banach space B1 with values in a Banach space B2 . Suppose that conditions (E1)–(E4) are satisfied and S(x0 ,t1∗ ) ⊂ D, where t1∗ is the smallest positive zero of polynomial (3.7). Then, Newton’s sequence, defined in (3.2) and starting at x0 , converges to a solution x∗ of the equation F(x) = 0 and xn , x∗ ∈ S(x0 ,t1∗ ), for all n ∈ N. In addition, kx∗ − xn k ≤ t1∗ − tn1 for n ≥ 0, where {tn1 } is defined in (3.8). Moreover, the solution x∗ is unique in S(x0 ,t1∗∗ ) ∩ D if t1∗ < t1∗∗ or in S(x0 ,t1∗ ) if t1∗∗ = t1∗ .

Initial Points for Newton’s Method

29

1 − η). The rest of the proof βK0 1 follows as the corresponding one in [4] by noticing that {xn } ⊆ S(x1 , − η) for each βK0 n = 1, 2, . . .. Proof. Notice that (E4) implies that S(x1 , α1 ) ⊆ S(x1 ,

Remark 4. We have that D0 ⊆ D so ˜ `1 ≤ L. and δ1 ≤ δ0 ,

˜ K0 } . Thereso convergence analysis is finer that the one in [8,9] where δ0 = max{δ + γL, fore, the new semi-local The center-Lipschitz condition on F 0 can be replaced by the center-Lipschitz condition on derivative F 00 : kF 00 (x) − F 00 (x0 )k ≤ L0 kx − x0 k for x ∈ D

call this condition together with kF 00 (x0 )k ≤ M0 and the rest of the conditions in (E1) as (E1’). Denote by λ the unique positive solution of equation βL0 2 t + βM0 t = 1. 2 Set D1 = D ∩ S(x1 , λ − η) (E2’) kF 00 (x) − F 00 (x)k ˜ ≤ `2 kx − xk ˜ for x ∈ D1 . (E3’) Define the scalar function ψ2 by ψ2 (t) =

`2 3 δ1 2 t η t + t − + , 6 2 β β

where δ1 = max{δ + γ`2 , M0 }. The unique positive root α2 of ψ02 (t) = 0 satisfies ψ2 (α2 ) ≤ 0. (E4’) α2 + η ≤ λ. Based on (E1’)-(E3’) define Newton’s method as in (3.8) but with ψ2 replacing ψ1 , i.e., 2 t02 = 0, tn+1 = Nψ2 (tn2 ) = tn2 −

ψ2 (tn2 ) , n ≥ 0. ψ02 (tn2 )

Denote by t2∗ and t2∗∗ the two positive solutions of ψ2 (t) = 0 with t2∗ ≤ t1∗∗ . Then, we arrived at:

30

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Theorem 9. Under the conditions (E1’)-(E4’), the conclusions of Theorem 2.1 hold with {tn2 },t2∗ ,t2∗∗ replacing {tn1 },t1∗ ,t1∗∗ , respectively. Proof. Notice that we have for each x ∈ S(x0 , ρ) kF (x) − F (x0 )k = k

Z 1

F 00 (x0 + τ(x − x0 ))dτ(x − x0 )k

≤ k

Z 1

[F 00 (x0 + τ(x − x0 )) − F 00 (x0 )]dτ(x − x0 )k

0

0

0

0

+kF 00 (x0 )(x − x0 )k L0 kx − x0 k2 + M0 kx − x0 k < 1, ≤ 2 so F 0 (x)−1 is invertible. The rest follows as in [8,9] with kF 0 (x)−1 k ≤

β 1 − β( L20 kx − x0 k2 + M0 kx − x0 k)

replacing kF 0 (x)−1 k ≤

4.

β . 1 − βK0 kx − x) k

On the Convergence Region

By Huang [9] and Guti´errez [8], a sufficient condition to satisfy (E3) is: 3β2 η`21 + 3β2 δ0 `1 + β3 δ30 ≤ β2 δ20 + 2β`1

 32

.

Then, we obtain the following domain of parameters    32 2 2 2 3 3 2 2 T (`1 ) = x0 ∈ D : 3β η`1 + 3β δ0 `1 + β δ0 ≤ β δ0 + 2β`1

(3.9)

(3.10)

associated with Theorem 8 in [5]. The new results are:

3β2 η`22 + 3β2 δ1 `2 + β3 δ31 ≤ β2 δ21 + 2β`2

 32

.

As a consequence, we obtain the following region of parameters    32 2 2 2 3 3 2 2 T (`1 ) = x0 ∈ D : 3β η`2 + 3β δ1 `2 + β δ1 ≤ β δ1 + 2β`2 .

(3.11)

(3.12)

Initial Points for Newton’s Method

5.

31

A Priori Error Bounds and Quadratic Convergence of Newton’s Method

From the following theorem provides some a priori error estimates for Newton’s method, we deduce the quadratic convergence of the method under conditions (E1)–(E3). The proof of the theorem follows Ostrowski’s technique [12] and is analogous to that given in [5]. Notice first that if ψ1 (t) has two real zeros t1∗ and t1∗∗ such that 0 < t1∗ ≤ t1∗∗, we can then write   `1 ψ1 (t) = t + ε (t1∗ − t)(t1∗∗ − t) 6 with t1∗ 6=

6ε 6ε and t1∗∗ 6= . `1 `1

Theorem 10. Suppose that conditions (E1)–(E3) are satisfied and ψ1 (α1 ) ≤ 0, where α1 is a positive root of ψ01 (t) = 0 and ψ1 is given in (3.7). (a) If t1∗ < t1∗∗ and t1∗ >

6ε , then `1 n

n

(t1∗∗ − t1∗ )∆2 (t1∗∗ − t1∗ )θ2 ∗ ≤ t − t ≤ , n n n 1 P − θ2 Q0 − ∆2

n ≥ 0,

t1∗ t1∗ `1 t1∗∗ − 6ε `1 (2t1∗ − t1∗∗ ) + 6ε P, ∆ = Q , P = , Q = and provided 0 0 t1∗∗ t1∗∗ `1 t ∗ + 6ε `1t ∗ + 6ε that θ < 1 and ∆ < 1.

where θ =

(b) If t1∗ = t1∗∗ and t1∗ >

12ε , then `1   `1 t1∗ − 6ε n ∗ t1∗ ∗ t ≤ t − t ≤ , n 1 1 `1 t1∗ − 12ε 2n

n ≥ 0.

Proof. Let t1∗ < t1∗∗ and denote an = t1∗ − tn and bn = t1∗∗ − tn for all n ≥ 0. Then ψ1 (tn ) =

1 (`1 tn + 6ε) an bn , 6

and an+1 = t1∗ − tn+1 = t1∗ − tn +

ψ01 (tn ) =

`1 1 an bn − (`1tn + 6ε) (an + bn ) 6 6

ψ1 (tn ) a2n (`1 bn − 6ε − `1 tn ) = . ψ01 (tn ) `1 an bn − (`1 tn + 6ε) (an + bn )

an+1 a2 (`1 bn − (`1tn + 6ε)) = n2 and taking into account function d(t) = bn+1 bn (`1 an − (`1tn + 6ε)) `1 t1∗∗ − 6ε − 2`1 t , P ≤ min{d(t);t ∈ [0,t1∗]} = d(0) and Q0 = max{d(t);t ∈ [0,t1∗]} = d(t1∗ ) `1 t1∗ − 6ε − 2`1 t it follows  2  2 an an+1 an P ≤ ≤ Q0 . bn bn+1 bn From

32

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

In addition,  2n+1 n+1 an+1 ∆2 2n+1 −1 a0 ≤ Q0 = bn+1 b0 Q0

and

 2n+1 n+1 an+1 θ2 2n+1 −1 a0 ≥P . = bn+1 b0 P

Taking then into account that bn+1 = (t1∗∗ − t1∗ ) + an+1 , it follows: (t1∗∗ − t1∗ )θ2 n+1 P − θ2

n+1

n+1

≤ t1∗ − tn+1

(t ∗∗ − t1∗ )∆2 ≤ 1 · n+1 Q0 − ∆2

If t1∗ = t1∗∗ , then an = bn and an+1 = As a consequence,



an (`1 an − (`1 tn + 6ε)) . `1 an − 2 (`1 tn + 6ε)

 an `1 t1∗ − 6ε an ≤ an+1 ≤ and ∗ `1t1 − 12ε 2 

`1 t1∗ − 6ε `1t1∗ − 12ε

n+1

t1∗ ≤ t1∗ − tn+1 ≤

t1∗ 2n+1

.

From the last theorem, it follows that the convergence of Newton’s method is quadratic if t1∗ < t1∗∗ and linear if t1∗ = t1∗∗ . Replace conditions (E1)-(E4), `1 ,t1∗,t2∗∗ ,tn1 by (E1)’-(R4)’, `2 ,t2∗ ,t2∗∗ ,t)2n to obtain the corresponding results for Newton’s method under prime conditions.

6.

Local Convergence

The local convergence of Newton’s method using hypotheses similar to the ones for the semi-local convergence is given in this section. To achieve this we use two different sets of conditions. First, we use conditions (H): (h1) There exists x∗ ∈ D such that F(x∗ ) = 0, F 0 (x∗ )−1 ∈ L (B2, B1 ) with kF 0 (x∗ )−1 k ≤ β∗ , and x˜ ∈ D, kF 00 (x) ˜ ≤ δ, kx∗ − xk ˜ ≤ γ1 . Moreover, there exists a ≥ 0 such that kF 0 (x) − F 0 (x∗ )k ≤ akx − x∗ k, x ∈ D. Set D1 = D ∩ S(x∗ ,

1 ). aβ∗

(h2) kF 00 (x) − F 00 (x)k ˜ ≤ Nkx − xk, ˜ x ∈ D1 (h3) S(x∗ , ρ1 ) ⊂ D, where ρ1 is the only positive solution of equation ξ1 (t) = 0, where N δ ξ1 (t) = β∗ t 2 + β∗ (Nγ1 + + a)t − 1. 2 2

Initial Points for Newton’s Method

33

¯ ∗ , ρ∗ ). (h4) There exists ρ∗ ≥ ρ1 such that aρ∗ β∗ < 2. Set D2 = D ∩ S(x Theorem 11. Under the hypotheses (H), {xn } ⊂ S(x∗ , ρ1 ), lim xn = x∗ and x∗ is the only n−→∞

solution of equation F(x) = 0 in the set D2 , provided that x0 ∈ S(x∗ , ρ1 ) − {x∗ }. Proof. The proof is based on the mathematical induction and the estimates kF 0 (xn )−1 k ≤

β∗ , 1 − aβ∗ kxn − x∗ k

(3.13)

kxn+1 − x∗ k = k − F 0 (xn )−1 (F(x∗ ) − F(xn ) − F 0 (xn )(x∗ − xn ) = k − F 0 (xn )−1

Z 1 0

= k − F 0 (xn )−1 [

F 00 (xn + τ(x∗ − xn ))(x∗ − xn )2 (1 − τ)dτk

Z 1 0

∗ (F 00 (xn + τ(x∗ − xn )) − F 00 (x))(x ˜ − xn )2 (1 − τ)dτ

1 + F 00 (x(x ˜ ∗ − x)]k ˜ 2 ≤ kF 0 (xn )−1 k[

Z 1 0

kF 00 (xn + τ(x∗ − xn )) − F 00 (x)k(1 ˜ − τ)dτ

1 ∗ ˜ − xn k2 + kF 00 (x)k]kx 2 ∗k [N( kxn−x + kx∗ − xk) ˜ + 12 δ]kxn − x∗ k2 2 ≤ 1 − aβ∗ kxn − x∗ k ∗ ≤ θ0 kxn − x k < ρ1 , kx0 −x∗ k 2

+ γ1 ) + 12 δ)kx0 − x∗ k ∈ [0, 1) so lim xn = x∗ . Let G = where θ0 = n−→∞ 1 − aβ∗ kx0 − x∗ k τ(y∗ − x∗ ))dτ for y∗ ∈ D2 . Then, we get N(

kF 0 (x∗ )−1 kkG − F 0 (x∗ )k ≤ β∗ a

Z 1 0

(3.14) Z 1

F 0 (x∗ +

0

τky∗ − x∗ kdτ

≤ aβ∗ ρ∗ < 1

(3.15)

so x∗ = y∗ by the identity 0 = F(y∗ ) − F(x∗ ) = G(y∗ − x∗ ), since G−1 ∈ L (B2 , B1 ). Another set of conditions (Q): ¯ (q1) =(h1), kF 00 (x∗ ) ≤ δ¯ and kF 00 (x) − F 00 (x∗ )k ≤ bkx − x∗ k, x ∈ D D3 = D ∩ S(x∗ , ρ), β∗ b 2 ¯ − 1. where ρ¯ is the only solution of equation t + β∗ δt 2 ¯ − xk, (q2) kF 00 (x) − F 00 (x)k ˜ ≤ Nkx ˜ x ∈ D3 (q3) S(x∗ , ρ2 ) ⊂ D. where ρ2 is the only solution of equation ξ2 (t) = 0, where ξ2 (t) = β∗ (

N¯ b 2 1 ¯ 1 )t − 1. + )t + β∗ ( δ + δ¯ + Nγ 2 2 2

34

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Theorem 12. Under the hypotheses (Q), {xn } ⊂ S(x∗ , ρ2 ), lim xn = x∗ and x∗ is the only n−→∞

solution of equation F(x) = 0 in the set D3 , provided that x0 ∈ S(x∗ , ρ2 ) − {x∗ }. Proof. As in Theorem 11, but we use the estimates kF 0 (xn )−1 k ≤

β∗ ¯ 1 − β∗ (δkxn − x∗ k + b2 kxn − x∗ k2 )

instead of (3.13). Then, with this modification, we get again kxn+1 − x∗ k ≤ θ1 kxn − x∗ k < ∗ ¯ kxn−x k + γ1 ) + 1 δ)kx0 − x∗ k N( 2 2 ρ2 , where θ1 = ∈ [0, 1) so lim xn = x∗ . The uniqueness b ∗ 2 ¯ ∗ n−→∞ 1 − (δkx0 − x k + 2 kx0 − x k ) ∗ part follows again from (3.15), since if we consider y ∈ S(x∗ , ρ2 ), we get again kxn+1 − y∗ k ≤ θ1 kxn − y∗ k, so lim xn = y∗ but lim xn = x∗ , so x∗ = y∗ . n−→∞

7.

n−→∞

Numerical Examples

¯ 0 , 1 − µ), x0 = x˜ = 1, and µ ∈ I = [ 1 , 1). Define Example 4. Let B1 = B2 = R, D = S(x 2 function F on D by F(x) = x3 − µ. (3.16) 1 1 We get β = , η = (1 − µ), M = 6(2 − µ), `2 = δ = M0 = L0 = L = L˜ = 6, K0 = 3(3 − 3 3 2 1 1 2 µ), `1 = (−2µ + 5µ + 6), γ = 0. Then, Mβη ≤ for µ ∈ I1 = [ , 1), (B3) and (C3) 3−µ 2 2 are satisfied for µ ∈ I2 = I3 = [0, 0.7431], (3.9) and (3.11) are satisfied for µ ∈ I4 = I5 = [0, 1.4087]. But µ ∈ [0, 1), so I4 and I5 must be chosen as I4 = I5 = I, the best we can hope for. Notice that even for a simple academic example there are infinitely many choices of µ for which (A3)-(C3) are not satisfied. Hence, for these choices, the old results do not guarantee convergence of Newton’s method but our results do. Example 5. Let B1 = B2 = R3 , D = S(0, 1), x∗ = (0, 0, 0)T and define Q on D by Q(x) = Q(x1 , x2 , x3 ) = (ex1 − 1,

e−1 2 x2 + x2 , x3 )T . 2

(3.17)

For the points u = (u1 , u2 , u3 )T , the Fr´echet derivative is given by   u1 e 0 0 Q0 (u) =  0 (e − 1)u2 + 1 0  . 0 0 1

Using the norm of the maximum of the rows and (c2)-(c4) and since Q0 (x∗ ) = diag(1, 1, 1), 1 we have β∗ = δ = 1, γ1 = 0, a = e − 1, δ¯ = 1 = b = e − 1, N¯ = N = e e−1 . Then, we obtain that ρ1 = 0.3896, ρ2 = 0.4401. 2 The old radii given independently by Rheinboldt [13] and Traub [14] are rT = rR = 3M where M = e, so rT = rR = 0.2453 < ρ1 < ρ2 . Hence, more initial points are available under our approach. The new error bounds are tighter too.

Initial Points for Newton’s Method

8.

35

Conclusion

This chapter aims to provide a semi-local as well as a local convergence analysis for Newton’s method using the second derivative and Banach space-valued operators. We obtain a further extension than in our earlier works by locating a ball centered at the first iterate instead of the initial point. The iterate remains in the ball centered at the first iterate, and the new Lipschitz constants are at least as tight as the old Lipschitz constants. This modification leads to a finer convergence analysis than before in the semi-local convergence as well as the local convergence case too. We also include the local convergence case not covered before. These improvements are obtained using special cases of the old constants, so no additional effort is required. Hence, we extend the applicability of Newton’s method. Numerical examples show the superiority of new results over the old ones.

References [1] Argyros I. K., George S., Thapa N., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-I, Nova Publishes, NY, 2018. [2] Argyros I. K., George S., Thapa N., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-II, Nova Publishes, NY, 2018. [3] Argyros I. K., Magr´en˜ an A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018. [4] Argyros I. K., Magr´en˜ an A. A., Iterative methods and their dynamics with applications, CRC Press, New York, USA, 2017. [5] Argyros I. K., Equerro J. A., Hernandez M. A., Magr´en˜ an A. A., Extending the domain of starting points for Newton’s method under conditions on the second derivative, J. Comput. Appl. Math. [6] Ezquerro J. A., Gonz´alez D. and Hern´andez M. A., Majorizing sequences for Newton’s method from initial value problems, J. Comput. Appl. Math., 236 (2012) 2246– 2258. ´ A., Starting points for [7] Ezquerro J. A., Hern´andez-Ver´on M. A. and Magre˜na´ n A. Newton’s method under a center Lipschitz condition for the second derivative, J. Comput. Appl. Math., 330 (2018) 721–731. [8] Guti´errez J. M., A new semilocal convergence theorem for Newton’s method, J. Comput. Appl. Math., 79 (1997) 131–145. [9] Huang Z., A note on the Kantorovich theorem for Newton method, J. Comput. Appl. Math., 47 (1993) 211–217.

36

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[10] Kantorovich L. V., The majorant principle and Newton’s method, Dokl. Akad. Nauk. SSSR, 76 (1951) 17–20 (in Russian). [11] Kantorovich L. V. and Akilov G. P., Functional analysis, Pergamon Press, Oxford, 1982. [12] Ostrowski A. M., Solution of equations and systems of equations, Academic Press, New York, 1966. [13] Rheinboldt W.C., An adaptive continuation process for solving systems of nonlinear equations, Polish Academy of Science, Banach Ctr. Publ. 3 (1978), no. 1, 129–142. [14] Traub J. F., Iterative methods for the solution of equations, AMS Chelsea Publishing, 1982.

Chapter 4

Seventh Order Methods 1.

Introduction

Finding a solution x∗ of the equation

G (x) = 0,

(4.1)

where G : Ω −→ B2 is Fr´echet differentiable operator is an important problem due to its wide application in many fields [1]-[24]. Here and below Ω ⊂ B1 be nonempty, open, and B1, B2 be Banach spaces. This chapter is devoted to the study of the seventh order [20] given (in B1 = B2 = Ri ) as yn = xn − [wn , xn ; G ]−1G (xn) zn = yn −Cn−1 G (yn )

xn+1 = zn − D−1 n G (zn ),

(4.2)

where Cn = 3I − [wn , xn ; G ]([yn, xn ; G ] + [yn, wn ; G ])[wn, xn ; G ]−1,

Dn = [zn , yn ; G ]−1([wn , xn ; G ] + [yn , xn ; G ] − [zn , xn ; G ])[wn, xn ; G ]−1 and un = xn + δG (xn ), δ ∈ R. Methods (4.2) was studied in [20] using conditions on eight order derivative, and Taylor series (although these derivatives do not appear in solver (4.2)) and obtained convergence order seven in [20]. The hypotheses on eight order derivatives limit the usage of the solver (4.2). 1 3 As an academic example: Let B1 = B2 = R, Ω = [− , ]. Define G on Ω by 2 2

G (x) = x3 logx2 + x5 − x4 Then, we have x∗ = 1, and

G 0 (x) = 3x2 logx2 + 5x4 − 4x3 + 2x2 , G 00(x) = 6x logx2 + 20x3 − 12x2 + 10x,

38

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

G 000(x) = 6 logx2 + 60x2 = 24x + 22. Obviously G 000(x) is not bounded on Ω. So, the convergence of solver (4.2) not guaranteed by the analysis in [11,12,13]. Other problems with the usage of solver (4.2) are: no information on how to choose x0 ; bounds on kxn − x∗ k and information on the location of x∗ . All these are addressed in this chapter by only using conditions on the first derivative and in the more general setting of Banach space valued operators. That is how we expand the applicability of the solver (4.2). To avoid the usage of the Taylor series and high convergence order derivatives, we rely on the computational order of convergence (COC) or the approximate computational order of convergence (ACOC) [1,2,4]. The layout of the rest of the chapter includes the local convergence in Section 2 and the example in Section 3.

2.

Local Convergence Analysis

Let α ≥ 0, β ≥ 0, δ ∈ R − {0}, γ = max{α|δ|, β} and consider function ϕ0 : [0, ∞) × [0, ∞) −→ [0, ∞) continuous, increasing with ϕ0 (0, 0) = 0. Assume equation ϕ0 (βt,t) = 1 (4.3) has a minimal positive solution ρ0 . Consider functions ϕ, ϕ1 : [0, ∞) × [0, ∞) −→ [0, ∞) continuous, increasing with ϕ(0, 0) = 0. Define functions ϕ¯ 1 , ϕ¯ 1 on the interval [0, ρ0) by ϕ¯ 1 (t) =

ϕ(|δ|αt,t) 1 − ϕ0 (βt,t)

and ϕ¯ 1 (t) = ϕ¯ 1 (t) − 1.

By these definitions ϕ¯ 1 (0) = −1 and ϕ¯ 1 (t) −→ ∞ with t −→ ρ− 0 . Then, the intermediate value theorem assures the existence of at least one solution of equation ϕ¯ 1 (t) = 0 in (0, ρ0 ). Denote by r1 the minimal such solution. Define functions g1 , g2 , ϕ¯ 2 , ϕ¯ 2 on the interval [0, ρ0 ) by ϕ(βt + ϕ¯ 1 (t)t,t)ϕ¯ 1(t) g1 (t) = , 1 − ϕ0 (βt,t) g2 (t) =

α(ϕ1 (βt + ϕ¯ 2 (t)t, 0) + ϕ1(βt + ϕ¯ 1 (t)t, α|δ|t))ϕ¯ 1(t) , (1 − ϕ0 (βt,t))2 ϕ¯ 2 (t) = g1 (t) + g2 (t)

and ϕ¯ 2 (t) = ϕ¯ 2 (t) − 1.

By these definitions ϕ¯ 2 (0) = −1 and ϕ¯ 2 (t) −→ ∞ with t −→ ρ− . Denote by r2 the minimal solution of equation ϕ¯ 2 (t) = 0. Assume equation p(t) = 1 (4.4)

Seventh Order Methods

39

has a minimal positive solution ρ p , where ¯ 1(t)t). p(t) = ϕ0 (ϕ¯ 2 (t)t, ϕ Set ρ1 = min{ρ0 , ρ p }.

Define functions h1 , h2 , ϕ¯ 3 , ϕ¯ 3 on the interval [0, ρ1 ) by h1 (t) = h2 (t) =

αϕ(0, ϕ¯ 1 (t)t)ϕ¯ 2(t) , 1 − p(t)

αϕ1 (ϕ¯ 1 (t)t + ϕ¯ 2 (t)t, 0)ϕ¯ 2(t) , (1 − ϕ0 (βt,t))(1 − p(t))

ϕ¯ 3 (t) = h1 (t) + h2 (t) and ϕ¯ 3 (t) = ϕ¯ 3 (t) − 1.

We have by these definitions ϕ¯ 3 (0) = −1 and ϕ¯ 3 (t) −→ ∞ with t −→ ρ− 1 . Denote by r3 the ¯ minimal solution of equation ϕ3 (t) = 0 on the interval (0, ρ1). Define a radius of convergence r by r = min{r j }, j = 1, 2, 3. (4.5) Then, we have 0 ≤ ϕ0 (βt,t) < 1 0 ≤ p(t) < 1

(4.6) (4.7)

0 ≤ ϕ¯ j (t) < 1

(4.8)

and for all t ∈ [0, r). The following conditions (D) are used in the local convergence analysis that follows: (d1) G : Ω −→ B2 is continuous, [., .; G ] : Ω × G −→ B2 is a divided difference of order one, and there exists x∗ ∈ Ω such that G (x∗ ) = 0 and G 0 (x∗ )−1 ∈ L (B2, B1 ). (d2) ϕ0 : [0, ∞) × [0, ∞) is continuous, increasing with ϕ0 (0, 0) = 0 such that for x, y ∈ Ω kG 0(x∗ )−1 ([x, y; G ] − G 0(x∗ ))k ≤ ϕ0 (kx − x∗ k, ky − x∗ k). Set Ω0 = Ω ∩ S(x∗ , ρ0 ) where ρ0 is given in (4.3. Here and below S(x, η) stand for ¯ η) stand for the closure of the open ball in B1 with center x and radius η > 0 and S(x, S(x, η). (d3) There exist continuous and increasing functions ϕ, , ϕ1 : [0, ρ0) × [0, ρ0 ) −→ [0, ∞) with ϕ(0, 0) = ϕ1 (0, 0) = 0 such that for each x, y, z, w ∈ Ω0 kG 0 (x∗ )−1 ([y, x; G ] − [z, x∗; G ])k ≤ ϕ(ky − zk, kx − x∗ k),

40

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

G 0 (x∗)−1([y, x; G ] − [z, w; G ])k ≤ ϕ1 (ky − zk, kx − wk) and for α ≥ 0, β ≥ 0 and x ∈ Ω0 kG (x)k ≤ α, kI + δ[x, x∗ ; G ]k ≤ β. ¯ ∗ , γr) ⊆ Ω, where r is given in (4.5), ρ p , ρq and ρ1 given previously exist, and (d4) S(x γ = max{α|δ|, β}. (d5) There exists R¯ ≥ r such that ¯ < 1 or ϕ0 (R, ¯ 0) < 1. ϕ0 (0, R) ¯ ∗ , R). ¯ Set Ω1 = Ω ∩ S(x Next, we present the local convergence analysis of the method (4.2) based on the preceding notation and conditions (D). Theorem 13. Under the conditions (D) further suppose that x0 ∈ S(x∗ , r) − {x∗}. Then, the following items hold {xn } ⊂ S(x∗ , r) (4.9) lim xn = x∗ ,

(4.10)

kyn − x∗ k ≤ ϕ¯ 1 (kxn − x∗ k)kxn ∗ −x∗ k ≤ kxn − x∗ k < r,

(4.11)

kzn − x∗ k ≤ ϕ¯ 2 (kxn − x∗ k)kxn ∗ −x∗ k ≤ kxn − x∗ k,

(4.12)

kxn+1 − x∗ k ≤ ϕ¯ 3 (kxn − x∗ k)kxn ∗ −x∗ k ≤ kxn − x∗ k,

(4.13)

n−→∞

and x∗ is the only solution of equation G (x) = 0 in the set Ω1 , where functions ϕ¯ j , j = 1, 2, 3, and Ω1 are defined previously. Proof. The proof is based on mathematical induction. Let x, y ∈ S(x∗ , r). Then, using (d1), (4.5), (4.6), and (d3), we have in turn kG 0(x∗ )−1 ([x + δG (x), x; G ] − G 0(x∗ ))k ≤ ϕ0 (kx + δG (x) − x∗ k, kx − x∗ k) ≤ ϕ0 (βkx − x∗ k, kx − x∗ k)

≤ ϕ0 (βr, r) < 1,

(4.14)

which together with the Banach lemma on invertible operators [14] show [x +

G (x), x; G ]−1 ∈ L (B2, B1), and k[x + G (x), x; G ]−1G 0(x∗ )k ≤

1 , 1 − ϕ0 (βkx − x∗ k, kx − x∗ k)

(4.15)

where we also used kx + δG (x) − x∗ k ≤ k(I + δ[x, x∗ ; G ])(x − x∗ )k ≤ βkx − x∗ k ≤ βr

(4.16)

Seventh Order Methods

41

so x + δG (x) ∈ S(x∗ , βr) ⊆ S(x∗ , γr) ⊆ Ω. The point y0 is well defined by the first sub step of method (4.2) for n = 0. Using (4.5), (4.8) (for j = 1), (d3), (4.15), and the first sub-step of method (4.2) for n = 0, we get in turn that ky0 − x∗ k = kx0 − x∗ − [u0 , x0 ; G ]−1G (x0 )k ≤ k[u0 , x0 ; G ]−1G 0(x∗ )k

×kG 0 (x∗ )−1 ([u0, x0 ; G ] − [x0 , x∗ ; G ])kkx0 − x∗ k ϕ(αkx0 − x∗ k, kx0 − x∗ k)kx0 − x∗ k ≤ 1 − ϕ0 (βkx0 − x∗ k, kx0 − x∗ k) = ϕ¯ 1 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k < r,

(4.17)

showing y0 ∈ S(x∗ , r) and (4.11) for n = 0. We need the estimates obtained using the definition of C0 , (d3) and (4.15): k[w0 , x0 ; G ]−1([w0 , x0 ; G ] − [y0, x∗ ; G ])(y0 − x∗ )k

≤ k[w0 , x0 ; G ]−1G 0(x∗ )kkG 0(x∗ )−1 ([w0 , x0 ; G ] − [y0, x∗ ; G ])kk(y0 − x∗ )k ϕ(kw0 − x∗ + x∗ − y0 k, kx0 − x∗ k)ky0 − x∗ k ≤ 1 − ϕ0 (βkx0 − x∗ k, kx0 − x ∗ k) ϕ(βkx0 − x∗ k + ϕ¯ 1 (kx0 − x∗ k)kx0 − x∗ k, kx0 − x∗ k) ≤ 1 − ϕ0 (βkx0 − x∗ k)kx0 − x∗ k, kx0 − x∗ k) ×ϕ¯ 1 (kx0 − x∗ k, kx0 − x∗ k) = g1 (kx0 − x∗ k)kx0 − x∗ k

(4.18)

and C0 [w0, x0 ; G ]−1G (y0 )

= = =

=

=

(3I − [w0 , x0 ; G ]−1([y0 , x0 ; G ] + [y0, w0; G ])[w0, x0 ; G ]−1G (y0 )

[w0, x0 ; G ]−1(3[w0 , x0 ; G ] − [y0, x0 ; G ] − [y0, w0 ; G ])[w0, x0 ; G ]−1G (y0 ) [I + [w0 , x0 ; G ]−1([w0, x0 ; G ] − [y0, x0 ; G ]) +[w0 , x0 ; G ]−1([w0, x0 ; G ] − [y0, w0; G ])] ×[w0 , x0 ; G ]−1G (y0 ) (4.19)

[w0, x0 ; G ]−1G (y0 ) +([w0 , x0 ; G ]−1([w0, x0 ; G ] − [y0, x0 ; G ]) +[w0 , x0 ; G ]−1([w0, x0 ; G ] − [y0, w0; G ]))[w0, x0 ; G ]−1G (y0 ) [w0, x0 ; G ]−1G (y0 ) + T0 ,

(4.20)

so kT0 k = k([w0 , x0 ; G ]−1G 0 (x∗ ))

×[G 0(x∗ )−1 ([w0 , x0 ; G ] − [y0, x0 ; G ])

+(G 0(x∗ )−1 ([w0 , x0 ; G ] − [y0 , w0 ; G ])]

×([w0 , x0 ; G ]−1G 0(x∗ ))(G 0(x∗ )−1 G (y0))k αϕ1 (kw0 − y0 k, 0) + ϕ1 (kw0 − y0 k), kx0 − w0 k)ϕ¯ 1 (kx0 − x∗ k)kx0 − x∗ k ≤ (1 − ϕ0 (βkx0 − x∗ k, kx0 − x∗ k))2 αϕ1 (βkx0 − x∗ k + ϕ¯ 1 (kx0 − x∗ k)kx0 − x∗ k, 0) ≤ (1 − ϕ0 (βkx0 − x∗ k, kx0 − x∗ k))2

42

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al. ϕ1 (βkx0 − x∗ k + ϕ¯ 1 (kx0 − x∗ k)kx0 − x∗ k, α|δ|kx0 − x∗ k) (1 − ϕ0 (βkx0 − x∗ k, kx0 − x∗ k))2 ×ϕ¯ 1 (kx0 − x∗ k)kx0 − x∗ k = g2 (kx0 − x∗ k)kx0 − x∗ k. +

(4.21)

Then, by the second sub-step of the method (4.2), we have z0 − x∗ ≤ ky0 − x0 −C0 [w0 , x0 ; G ]−1G (y0)

= y0 − x∗ − [w0 , x0 ; G ]−1G (y0 ) + T0 .

Then, by (4.5), (4.8) (for j = 2), (a3), and (4.19)-(4.21), we get in turn that kz0 − x∗ k = k(y0 − x∗ − [w0 , x0 ; G ]−1G (y0)) + T0 k

≤ ky0 − x∗ − [w0 , x0 ; G ]−1G (y0 )k + kT0 ||

≤ (g1 (kx0 − x∗ k) + g2 (kx0 − x∗ k))kx0 − x∗ k = ϕ¯ 2 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k,

(4.22)

so z0 ∈ S(x∗ , r), and (4.12) holds for n = 0. Next, we must show [z0 , y0 ; G ]−1 ∈ L (B2 , B1 ), so that x1 will then be well defined by the third substep of method (4.2). Using (a2), we have in turn that kG 0(x∗ )−1 ([z0, y0 ; G ] − G 0(x∗ ))k

≤ ϕ0 (kz0 − x∗ k, ky0 − x∗ k) ≤ ϕ0 (ϕ¯ 2 (kx0 − x∗ k)kx0 − x∗ k, ϕ¯ 1 (kx0 − x∗ k)kx0 − x∗ k)

= p(kx0 − x∗ k) < 1,

(4.23)

so [z0 , y0 ; G ]−1L (B2 , B1 ), k[z0, y0 ; G ]−1G 0(x∗ )k ≤

1 . 1 − p(kx0 − x∗ k)

(4.24)

As before we need the estimates kz0 − x∗ − [z0 , y0 ; G ]−1G (z0)k

= k([z0 , y0 ; G ]−1G 0(x∗ ))(G 0(x∗ )−1 ([z0, y0 ; G ] − [z0, x∗ ; G ])(z0 − x∗ )k ϕ(0, ky0 − x∗ k)ϕ¯ 2 (kx0 − x∗ k)kx0 − x∗ k ≤ 1 − p(kx0 − x∗ k) αϕ(0, ϕ¯ 1 (kx0 − x∗ k)kx0 − x∗ k)ϕ¯ 2 (kx0 − x∗ k)kx0 − x∗ k ≤ 1 − p(kx0 − x∗ k) = h1 (kx0 − x∗ k)kx0 − x∗ k,

(4.25)

Seventh Order Methods

43

and k([z0, y0 ; G ]−1G 0(x∗ ))(G 0(x∗ )−1 ([y0 , x0 ; G ] − [z0, x0 ; G ]))

([w0 , z0 ; G ]−1G 0(x∗ ))(G 0(x∗ )−1 G (z0))k

≤ k[z0 , y0 ; G ]−1G 0(x∗ )k

×kG 0(x∗ )−1 ([y0 , x0 ; G ] − [z0 , x0 ; G ])k

k[w0 , x0 ; G ]−1G 0(x∗ )kkG 0(x∗ )−1 G (z0)k αϕ1 (k(y0 − x∗ ) + (x∗ − z0 )k, 0)ϕ¯ 2(kx0 − x∗ k)kx0 − x∗ k ≤ (1 − ϕ0 (βkx0 − x∗ k, kx0 − x∗ k))(1 − p(kx0 − x∗ k)) αϕ1 (ϕ¯ 1 (kx0 − x∗ k)kx∗ − z0 k + ϕ¯ 2 (kx0 − x∗ k)kx0 − x∗ k, 0)ϕ¯ 2 (kx0 − x∗ k)kx0 − x∗ k ≤ (1 − ϕ0 (βkx0 − x∗ k, kx0 − x∗ k))(1 − p(kx0 − x∗ k)) = h2 (kx0 − x∗ k)kx0 − x∗ k. (4.26) Then, by the third sub-step of method (4.2), (4.5), (4.8) (for j = 3), (4.25), and (4.26), we get x1 − x∗ = z0 − x∗ − [z0 , y0 ; G ]−1D0 [w0 , x0 ; G ]−1G (z0),

(4.27)

so kx1 − x∗ k ≤ kz0 − x∗ − [z0 , y0 ; G ]−1G (z0)k +k[z0 , y0 ; G ]−1G (x∗ )k

×kG 0(x∗ )−1 ([y0 , x0 ; G ] − [z0 , x0 ; G ])k

×k[w0 , x0 ; G ]−1G 0(x∗ )kkG 0(x∗ )−1 G (z0)k ≤ h1 (kx0 − x∗ k)kx0 − x∗ k + h2 (kx0 − x∗ k)kx0 − x∗ k = ϕ¯ 3 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k,

(4.28)

so x1 ∈ S(x∗ , r) and (4.13) holds for n = 0. The induction is finished, if x0 , y0 , z0 , x1 are replaced by xi , yi , zi, xi+1 in the preceding estimates. It then follows from the estimate kxi+1 − x∗ k ≤ ckxi − x∗ k < r,

(4.29)

that lim xi = x∗ and xi+1 ∈ S(x∗ , r), where c = ϕ¯ 3 (kx0 − x∗ k) ∈ [0, 1). Let T = [x∗ , y∗ ; G ], i−→∞

where y∗ ∈ Ω1 with G (y∗ ) = 0. In view of (d2) and (d5)

kG 0(x∗ )−1 (T − G 0 (x∗ ))k ≤ ϕ0 (0, ky∗ − x∗ k) ¯ < 1, ≤ ϕ0 (0, R) so T −1 ∈ L (B2, B1 ). Finally, using the identity 0 = G (x∗) − G (y∗ ) = T (x∗ − y∗ ), we get x∗ = y∗ .

(4.30)

44

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Remark 5. (a) The local results can be used for projection solvers such as Arnoldi’s solver, the generalized minimum residual solver(GMREM), the generalized conjugate solver(GCM) for combined Newton/finite projection solvers, and in connection to the mesh independence principle in order to develop the cheapest and most efficient mesh refinement strategy [1]-[5]. (b) It is worth noticing that solvers (4.2) is not changing when we use the conditions of the preceding Theorem instead of the stronger conditions used in [12]. Moreover, we can compute the computational order of convergence (COC) defined as     kxn+1 − x∗ k kxn − x∗ k / ln ξ = ln kxn − x∗ k kxn−1 − x∗ k or the approximate computational order of convergence (ACOC) [9,10]     kxn+1 − xn k kxn − xn−1 k ξ1 = ln / ln . kxn − xn−1 k kxn−1 − xn−2 k This way, we obtain in practice the order of convergence, but no higher-order derivatives are used.

3.

Numerical Example

We present the following example to test the convergence criteria. We define the divided difference, by [x, y; F ] =

Z 1 0

F 0 (y + θ(x − y))dθ,

and use δ = 1 in all examples. Example 6. Let E1 = E2 = R3 , Ω = U(0, 1), x∗ = (0, 0, 0)T and define F on Ω by

F (x) = F (u1, u2, u3) = (eu1 − 1,

e−1 2 u2 + u2 , u3 )T . 2

For the points u = (u1 , u2 , u3 )T , the Fr´echet derivative is given by  u1  e 0 0 F 0 (u) =  0 (e − 1)u2 + 1 0  . 0 0 1

(4.31)

Using the norm of the maximum of the rows x∗ = (0, 0, 0)T and since F 0 (x∗ ) = diag(1, 1, 1), e−1 1 1 1 1 we get ϕ0 (s,t) = (s + t), ϕ(s,t) = (e e−1 s + (e − 1)t), ϕ1(s,t) = e e−1 (s + t), α = 2 2 2 1 1 e−1 e−1 α(t) = e t or α = e , β = β(t) = 2 + ϕ(t,t). r1 = 0.239228, r2 = 0.121821, r3 = 0.159746.

Seventh Order Methods

45

¯ 1). Define function F on Ω by Example 7. Let E1 = E2 = C[0, 1], Ω = U(0, F(w)(x) = w(x) − 5

Z 1

xθw(θ)3 dθ.

0

Then, the Fr´echet-derivative is given by F 0 (w(ξ))(x) = ξ(x) − 15

Z 1 0

xθw(θ)2 ξ(θ)dθ, for each ξ ∈ Ω.

1 15 15 (s +t), ϕ(s,t) = (15s + 7.5t), ϕ1(s,t) = (s +t), α = α(t) = 4 2 2 15t or α = 15, β = β(t) = 2 + ϕ(t,t). Then, the radius of convergence are given by Then, we have ϕ0 (s,t) =

r1 = 0.00782289, r2 = 0.000435555r3 = 0.000143867. Example 8. Returning back to the motivational example given at the introduction of this 1 study, we get ϕ0 (s,t) = ϕ(s,t) = ϕ1 (s,t) = (96.662907)(s + t), α = α(t) = 1.0631t, β = 2 β(t) = 2 + ϕ(t,t). Then, the radius of convergence are given by r1 = 0.00464539, r2 = 0.000199677, r3 = 0.00152711.

4.

Conclusion

We study a seventh convergence order solver introduced earlier on the i−dimensional Euclidean space for solving systems of equations. We use hypotheses only on the divided differences of order one in contrast to the earlier study using hypotheses on derivatives reaching up to order eight although these derivatives do not appear on the solver. This way, we expand the applicability of the solver and in the more general setting of Banach space valued operators. Numerical examples complement the theoretical results.

References [1] Amiri A., Cardero A., Darvishi M. T., Torregrosa J. R., Stability analysis of a parametric family of seventh-order iterative methods for solving nonlinear systems, Appl. Math. Comput., 323, (2018), 43-57. [2] Argyros I. K., Ezquerro J. A., Guti´errez J. M., Her´nandez M. A., Hilout S., On the semilocal convergence of efficient Chebyshev-Secant -type solvers, J. Comput. Appl. Math., 235(2011), 3195-3206. [3] Argyros I. K., Ren H., Efficient Steffensen-type algorithms for solving nonlinear equations, Int. J. Comput. Math., 90, (2013), 691-704. [4] Argyros I. K., George S., Thapa N., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-I, Nova Publishes, NY, 2018.

46

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[5] Argyros I. K., George S., Thapa N., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-II, Nova Publishes, NY, 2018. [6] Argyros I. K. and Hilout S., Weaker conditions for the convergence of Newton’s solver. Journal of Complexity, 28(3):364-387, 2012. [7] Cordero A., Hueso J. L., Mart´ınez E., Torregrosa J. R., A modified Newton-Jarratt’s composition, Numer. Algor., 55, (2010), 87-99. [8] Cordero A. and Torregrosa J. R., Variants of Newton’s method using fifth-order quadrature formulas, Appl. Math. Comput., 190, (2007), 686-698. [9] Cordero A., Hueso J. L., Mart´ınez E., Torregrosa J. R., A modified Newton-Jarratt’s composition, Numer. Algor., 55, (2010), 87-99. [10] Grau-S´anchez Noguera M. M., Amat S., On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative solvers, J. Comput. Appl. Math., 237,(2013), 363-372. [11] Lotfi T., Bakhtiari P., Cordero A., Mahdiani K., Torregrosa J. R., Some new efficient multipoint iterative solvers for solving nonlinear systems of equations, Int. J. Comput. Math., 92, (2015), 1921-1934. [12] Madhu K., Babajee D. K. R., Jayaraman J., An improvement to double-step Newton solver and its multi-step version for solving system of nonlinear equations and its applications, Numer. Algor., 74,(2017), 593-607. [13] Magre˜na´ n A. A. and Argyros I. K., Improved convergence analysis for Newton-like solvers. Numerical Algorithms, 71(4):811-826, 2016. [14] Magre˜na´ n A. A. and Argyros I. K., Two-step Newton solvers. Journal of Complexity, 30(4):533-553, 2014. [15] Ostrowski A. M., Solution of equations and systems of equations, Academic Press, New York, 1960. [16] Ortega J. M., Rheinboldt W. C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970. [17] Potra F. A. and Pt´ak V., Nondiscrete induction and iterative processes, volume 103. Pitman Advanced Publishing Program, 1984. [18] Steffensen J. F., Remarks on iteration, Skand, Aktuar, 16 (1993), 64-72. [19] Sharma J. R., Arora H., Improved Newton-like solvers for solving systems of nonlinear equations, SeMA J., 74,2(2017), 147-163. [20] Sharma J. R., Arora H., An efficient derivative-free iterative method for solving systems of nonlinear equations, Appl. Anal. Discrete Math., 7, (2013), 390-403.

Seventh Order Methods

47

[21] Sharma J. R., Arora H., A novel derivative-free algorithm with seventh order convergence for solving systems of nonlinear equations, Numer. Algor., 67, (2014), 917933. [22] Sharma J. R., Gupta P. K., An efficient fifth-order solver for solving systems of nonlinear equations, Comput. Math. Appl. 67, (2014), 591–601. [23] Rheinboldt W. C., An adaptive continuation process for solving systems of nonlinear equations, In Mathematical models and numerical solvers (Tikhonov A. N. et al. eds.) pub.3, (1977), 129-142 Banach Center, Warsaw Poland. [24] Traub J. F., Iterative solvers for the solution of equations, AMS Chelsea Publishing, 1982.

Chapter 5

Third Order Schemes 1.

Introduction

In this chapter, we compare three third-order convergence schemes for approximating a solution x∗ of the nonlinear equation F(x) = 0.

(5.1)

Here F : D ⊂ B1 → B2 is a continuously differentiable nonlinear operator between the Banach spaces B1 and B2 , and D stands for an open non empty convex compact set of B1 . The three schemes are: Homeir Scheme [28]: Defined for n = 0, 1, 2, . . . by 1 yn = xn − F 0 (xn )−1 F(xn ) 2 xn+1 = xn − F 0 (yn )−1 F(xn )

(5.2)

Noor-Waseem Scheme [27]: Defined for n = 0, 1, 2, . . . by yn = xn − F 0 (xn )−1 F(xn )

xn+1 = xn − 4A−1 n F(xn )

(5.3)

and Cordero- Torregrosa Scheme [13]:Defined for n = 0, 1, 2, . . . by yn = xn − F 0 (xn )−1 F(xn )

xn+1 = xn − 3B−1 n F(xn )

(5.4)

2xn + yn 3xn + yn xn + yn xn + 3yn )+F 0 (yn ) and Bn = 2F 0 ( )−F 0 ( )+2F 0 ( ). where An = 3F 0 ( 3 4 2 4 The analysis of these methods uses assumptions on the fourth-order derivative of F. The assumptions on fourth-order derivatives reduce the applicability of schemes (5.2)–(5.4). For 1 3 example: Let B1 = B2 = R, D = [− , ]. Define f on D by 2 2  3 s logs2 + s5 − s4 , s 6= 0 f (s) = 0, s = 0.

50

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Then, we get f 0 (s) = 3s2 log s2 + 5s4 − 4s3 + 2s2 ,

f 00 (s) = 6s logs2 + 20s3 − 12s2 + 10s, f 000 (s) = 6 logs2 + 60s2 − 24s + 22,

and s∗ = 1. Obviously f 000 (s) is not bounded on D. Hence, the convergence of schemes (5.2) – (5.4) are not guaranteed by the earlier analysis. In the present chapter we use only assumptions on the first derivative to prove our results. Without any additional conditions [1]-[32], we obtained: a larger radius needed on the method of convergence (i.e. more initial points), tighter upper bounds on kxk − x∗ k (i.e. fewer iterates to achieve the desired error tolerance). The rest of the chapter is organized as follows. The convergence analysis of schemes (5.2)– (5.4) are given in Section 2 and examples are given in Section 3.

2.

Ball Convergence

We present the ball convergence analysis of scheme (5.2), scheme (5.3) and scheme (5.4), respectively using real functions and parameters. Let S = [0, ∞). Let ω0 be a continuous and increasing function in S with values in S with ω0 (0) = 0. Assume equation ω0 (s) − 1 = 0, (5.5) has a least positive zero denoted by r0 . Let S0 = [0, r0). Consider real functions ω, ω1 on S0 continuous and increasing with ω(0) = 0. Define functions g01 and h01 on S0 by g01 (s) =

R1 0

R1

ω((1 − τ)s)dτ + 12 1 − ω0 (s)

0

ω1 (τs)dτ

and h01 (s) = g01 (s) − 1. where function ω on S1 is continuous and increasing with ω(0) = 0. The Intermediate Value Theorem (IVT) is employed to establish the existence of zeros for some equations. Suppose 1 ω1 (0) − 1 < 0. (5.6) 3 By the definitions and (5.6), h01 (0) < 0 and h01 (s) −→ ∞ with s −→ r0− . Denote by R1 the least zero of equation h01 (s) = 0 in (0, r0). Suppose that equation ω0 (g01 (s)s) − 1 = 0

(5.7)

has a least positive zero in (0, r0) denoted by r1 . Define functions g2 and h2 on [0, r1 ) as g2 (s)

= g01 (s) +

(ω0 (s) + ω0 (g01 (s)s)) 01 ω1 (τs)dτ (1 − ω0 (s))(1 − ω0 (g1 (s)s)) R

Third Order Schemes

51

and h2 (s) = g2 (s) − 1.

Then, again h2 (0) = −1 and h2 (s) −→ ∞ as s −→ r1− . Denote by R2 the least zero of equation h2 (s) = 0 on (0, r1). Define a radius of convergence R given by R = min{R1 , R2 }.

(5.8)

In view of these definitions, we have for s ∈ [0, R) 0 ≤ ω0 (s) < 1

(5.9)

0 ≤ ω0 (g01 (s)s) < 1

(5.10)

0 ≤ g01 (s) < 1

(5.11)

0 ≤ g2 (s) < 1.

(5.12)

and ¯ a) be its closure. We shall use Moreover, define U(x, a) = {y ∈ B1 : kx − yk < a} and U(x, the notation en = kxn − x∗ k, for all n = 0, 1, 2, . . .. The conditions (A) shall be used. (A1) F : D −→ B2 is continuously differentiable and there exists a simple solution x∗ of equation F(x) = 0. (A2) There exists a continuous and increasing function ω0 From S into itself with ω0 (0) = 0 such that for all x ∈ D kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ ω0 (kx − x∗ k). Set D0 = D ∩U(x∗ , r0 ). (A3) There exist continuous and increasing functions ω, ω1 from S0 into S with ω(0) = 0 such that for each x, y ∈ D0 kF 0 (x∗ )−1 (F 0 (y) − F 0 (x))k ≤ ω(ky − xk). and kF 0 (x∗ )−1 F 0 (x)k ≤ ω1 (kx − x∗ k). ¯ ∗ , R) ⊂ D, where R is defined in (5.9). (A4) U(x (A5) There exists R∗ ≥ R such that Z 1 0

ω0 (τR∗ )dτ < 1.

Set D1 = D ∩ U¯ (x∗ , R∗ ). Next, the local convergence result for scheme (5.2) follows.

52

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Theorem 14. Under the conditions (A) further consider choosing x0 ∈ U(x∗ , R) − {x∗ }. Then, sequence {xn } exists, stays in U(x∗ , R) with lim xn = x∗ . Moreover, the following n−→∞ estimates hold true kyn − x∗ k ≤ g1 (en )en ≤ en < R, (5.13) and kxn+1 − x∗ k ≤ g2 (en )en ≤ en ,

(5.14)

with “g” functions are introduced earlier, and R defined by (5.8). Furthermore, x∗ is the only solution of equation F(x) = 0 in the set D1 given in (A5). Proof. Consider x ∈ U(x∗ , R) − {x∗ }. By (A1) and (A2) kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ ω0 (kx − x∗ k) < ω0 (R) ≤ 1, so by lemma of Banach on invertible operators [28] F 0 (x)−1 ∈ L(B2 , B1 ) with kF 0 (x)−1 F 0 (x∗ )k ≤

1 . 1 − ω0 (kx − x∗ k)

(5.15)

Setting x = x0 , we obtain scheme (5.2) (first sub-step for n = 0) that y0 exists. Then, using scheme (5.2) (first sub-step for n = 0), (A1), (5.9), (A3), (5.19) and (5.11) ky0 − x∗ k = kx0 − x∗ − F 0 (x0 )−1 F(x0 )k ≤ kF 0 (x0 )−1 F 0 (x∗ )kk

Z 1 0

F 0 (x∗ )−1 (F 0 (x∗ + τ(x0 − x∗ )) − F 0 (x0 ))(x0 − x∗ )dτk

1 + kF 0 (x0 )−1 F 0 (x∗ )kkF 0 (x∗ )−1 F(x0 )k 2 R1 1R1 0 ω((1 − τ)e0 )dτe0 + 2 0 ω1 (τe0 )dτe0 ≤ 1 − ω0 (e0 ) = g01 (e0 )e0 ≤ e0 < R,

(5.16)

so y0 ∈ U(x∗ , R) and (5.13) is true for n = 0. We see that F 0 (y0 )−1 , x1 exist and kF 0 (y0 )−1 F 0 (x0 )k ≤ ≤

1 1 − ω0 (ky0 − x∗ k) 1 . 1 − ω0 (g1 (e0 )e0 )

(5.17)

Moreover, by (5.16) and (5.17), we get kx1 − x∗ k ≤ kx0 − x∗ − F 0 (x0 )−1 F(x0 )

+(F 0 (x0 )−1 − F 0 (y0 )−1 )F(x0 )k

≤ ky0 − x∗ k + kF 0 (x0 )−1 (F 0 (y0 ) − F 0 (x0 ))F 0 (y0 )−1 F(x0 )k   ω0 (e0 ) + ω0 (ky0 − x∗ k) 0 ≤ g1 (e0 ) + e0 (1 − ω0 (e0 ))(1 − ω0 (ky0 − x∗ k)) ≤ g2 (e0 )e0 ≤ e0 ,

(5.18)

Third Order Schemes

53

so x1 ∈ U(x∗ , R) and (5.16) is true for n = 0. Hence, estimates (5.13) and (5.14) are true for n = 0. Suppose (5.13) and (5.14) are true for j = 0, 1, 2, . . ., n−1, then by switching x0 , y0 , x1 by x j , y j , x j+1 in the previous estimates, we immediately obtain that these estimates hold for j = n, completing the induction. Moreover, by the estimate kxn+1 − x∗ k ≤ λe0 < R,

(5.19)

with λ = g2 (e0 ) ∈ [0, 1), we obtain lim xn = x∗ and xn+1 ∈ U(x∗ , R). Let u ∈ D2 with n−→∞

F(u) = 0. Set G =

Z 1 0

0

F (u + τ(x∗ − u))dτ. In view of (A2) and (A6) we get

kF (x∗ ) (G − F (x∗ ))k ≤

Z 1

ω0 ((1 − τ)kx∗ − uk)dτ



Z 1

ω0 (τR∗ )dτ < 1,

0

−1

0

0

0

(5.20)

so the invertability of G and the estimate 0 = F(x∗ ) − F(u) = G(x∗ − u)

(5.21)

we conclude that x∗ = u. The ball convergence of scheme (5.3) is given in an analogous way. But the g, h functions are replaced by the g, ¯ h¯ functions, respectively as follows. g¯1 (s) =

R1 0

ω((1 − τ)s)dτ ¯ , h1 = g¯1 (s) − 1, 1 − ω0 (s)

(3ω0 ( 2s+g3¯1(s)s ) + 4ω0 (s) + ω0 (g¯1 (s)s)) g¯2 (s) = g¯1 (s) + 4(1 − p(s))(1 − ω0(s))

R1 0

ω1 (τs)dτ

,

h¯ 2 (s) = g¯2 (s) − 1,

where

1 (2 + g¯1 (s)s) p(s) = (3ω0 ( ) + ω0 (g¯1 (s)s)). 4 3 The corresponding radius of convergence is R¯ = min{R¯ 1 , R¯ 2 }.,

(5.22)

(5.23)

where R¯ 1 , R¯ 2 are the least positive zeros of equation h¯ 1 (s) = 0 and h¯ 2 (s) = 0, respectively. We also use the estimates ¯ kyn − x∗ k ≤ g¯1 (en )en ≤ en < R,

kxn+1 − x∗ k = kyn − x∗ + (F 0 (xn )−1 − 4A−1 n )F(xn )k

= kyn − x∗ + F 0 (xn )−1 (An − 4F 0 (xn ))A−1 n F(xn )k,   2xn + yn kAn − 4F 0 (xn )k = k3 F 0 ( ) − F 0 (x∗ ) 3

54

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

≤ k(4F 0 (x∗ ))−1An k = ≤ 0 kA−1 n F (x∗ )k ≤

+4(F 0 (x∗ ) − F 0 (xn )) + (F 0 (yn ) − F 0 (x∗ ))k   2en + kyn − x∗ k + 4ω0 (en ) + ω0 (kyn − x∗ k), 3ω0 3 1 2xn + yn [3kF 0 (x∗ )−1 (F 0 ( ) − F 0 (x∗ ))k 4 3 +kF 0 (x∗ )−1 (F 0 (yn ) − F 0 (x∗ ))k] ¯ < 1, p(en ) ≤ p(R) 1 , 4(1 − p(en ))

leading to kxn+1 − x∗ k ≤ g¯2 (en )en ≤ en . Hence, we arrive at the ball convergence result for scheme (5.3). ¯ further consider choosTheorem 15. Under the conditions (A) (with R replaced by R) ¯ − {x∗ }. Then, sequence {xn } exists, stays in U(x∗ , R) ¯ with lim xn = x∗ . ing x0 ∈ U(x∗ , R) n−→∞ Moreover, the following estimates hold true ¯ kyn − x∗ k ≤ g¯1 (en )en ≤ en < R,

(5.24)

kxn+1 − x∗ k ≤ g¯2 (en )en ≤ en ,

(5.25)

and with “g” ¯ functions introduced earlier and R¯ defined by (5.23). Furthermore, x∗ is the only solution of equation F(x) = 0 in the set D2 given in (A5). Finally, for scheme (5.4), we have g˜1 (s) = g¯1 (s), h˜ 1(s) = h¯ 1 (s), R˜ 1 = R¯ 1 , Then, we use the identity xn+1 − x∗ = xn − x∗ − F 0 (xn )−1 F(xn ) + (F 0 (xn )−1 − 3B−1 n )F(xn ) = yn − x∗ + F 0 (xn )−1 (Bn − 3F 0 (xn ))B−1 n F(xn ),

leading to ˜ kxn+1 − x∗ k ≤ g˜2 (en )en ≤ en < R, where we also used the estimates 3xn + yn ) − F 0 (x∗ )) 4 +(F 0 (x∗ ) − F 0 (xn ))] xn + 3yn +[(F 0 ( ) − F 0 (x∗ )) + (F 0 (x∗ ) − F 0 (xn ))] 4 xn + 3yn xn + yn +[(F 0 ( ) − F 0 (x∗ )) + (F 0 (x∗ ) − F 0 ( )], 4 2

Bn − 3F 0 (xn ) = 2[(F 0 (

Third Order Schemes so 3xn + yn − x∗ k) + ω0 (en )) 4 xn + 3yn − x∗ k) + ω0 (en )) +ω0 (k 4 xn + 3yn xn + yn − x∗ k) + ω0 (k − x∗ k) +ω0 (k 4 2 3en + kyn − x∗ k 2[ω0 ( ) + ω0 (en )] 4 en + 3kyn − x∗ k +ω0 ( ) + ω0 (en ) 4 en + 3kyn − x∗ k en + kyn − x∗ k +ω0 ( ) + ω0 ( ) 4 2 q(en ), 3s + g˜1 (s)s 2(ω0 ( ) + ω0 (s)) 4 s + 3g˜1 (s)s +ω0 ( + ω0 (s) 4 s + 3g˜1 (s)s s + g˜1 (s)s +ω0 ( ) + ω0 ( ), 4 2  1 0 3xn + yn −1 kF (x∗ ) 2(F 0 ( ) − F 0 (x∗ )) 3 4 xn + yn +(F 0 (x∗ ) − F 0 ( )) 2  0 xn + 3yn 0 + 2(F ( ) − F (x∗ )) 4 3xn + yn 1 [2ω0 (k − x∗ k) 3 4 xn + yn xn + 3yn +ω0 (k − x∗ k) + 2ω0 (k − x∗ k)] 2 4 q0 (en ),

kF 0 (x∗ )−1 (Bn − 3F 0 (xn ))k ≤ 2(ω0 (k



≤ q(s) =

k(3F 0 (x∗ ))−1 (Bn − F 0 (x∗ ))k =

≤ ≤ where q0 (s) =

1 3s + g˜1 (s)s (2ω0 ( ) 3 4 s + g˜1 (s)s s + 3g˜1 (s)s +ω0 ( ) + 2ω0 ( )), 2 4

so 0 kB−1 n F (x∗ )k ≤

In view of the preceding estimates

1 . 3(1 − q0 (en ))

q(en ) 01 ω1 (τen )dτ kxn+1 − x∗ k ≤ [g˜1 (en ) + ]en 3(1 − ω0 (en ))(1 − q0 (en )) ˜ ≤ g˜2 (en )en ≤ en < R, R

55

56

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

where

q(s) 01 ω1 (τs)dτ g˜2 (s) = g˜1 (s) + 3(1 − ω0 (s))(1 − q0(s)) R

and

h˜ 2 (s) = g˜2 (s) − 1.

The radius of convergence R˜ is given as

R˜ = min{R˜ 1 , R˜ 2 },

(5.26)

where R˜ 1 , R˜ 2 are the least positive zero of equations h˜ 1 (s) = 0, h˜ 2 (s) = 0, respectively. Hence, we arrive at the ball convergence result for scheme (5.4). ˜ further consider choosTheorem 16. Under the conditions (A) (with R replaced by R) ˜ − {x∗ }. Then, sequence {xn } exists, stays in U(x∗ , R) ˜ with lim xn = x∗ . ing x0 ∈ U(x∗ , R) n−→∞ Moreover, the following estimates hold true ˜ kyn − x∗ k ≤ g˜1 (en )en ≤ en < R,

(5.27)

kxn+1 − x∗ k ≤ g˜2 (en )en ≤ en ,

(5.28)

and with “g” ˜ functions introduced earlier and R defined by (5.26). Furthermore, x∗ is the only solution of equation F(x) = 0 in the set D2 given in (A5). Remark 6. We can compute [24] the computational order of convergence (COC) defined by     kxn+1 − x∗ k kxn − x∗ k ξ = ln / ln kxn − x∗ k kxn−1 − x∗ k or the approximate computational order of convergence     kxn+1 − xn k kxn − xn−1 k ξ1 = ln / ln . kxn − xn−1 k kxn−1 − xn−2 k

This way, we obtain in practice the order of convergence without resorting to the computation of higher-order derivatives appearing in the method or the sufficient convergence criteria usually appearing in the Taylor expansions for the proofs of those results.

3.

Numerical Examples

Example 9. Let us consider a system of differential equations governing the motion of an object and given by H10 (x) = ex , H20 (y) = (e − 1)y + 1, H30 (z) = 1 with initial conditions H1 (0) = H2 (0) = H3 (0) = 0. Let H = (H1 , H2 , H3 ). Let B1 = B2 = ¯ 1), x∗ = (0, 0, 0)T . Define function H on D for w = (x, y, z)T by R3 , D = U(0, H(w) = (ex − 1,

e−1 2 y + y, z)T . 2

Third Order Schemes

57

The Fr´echet-derivative is defined by 

 ex 0 0 H 0 (v) =  0 (e − 1)y + 1 0  . 0 0 1

1

1

Notice that using the (A) conditions, we get ω0 (t) = (e − 1)t, ω(t) = e e−1 t, ω1 (t) = e e−1 . The radii are R1 = 0.040264470919538206117316292420583, R2 = 0.11193052205987233382877832355007, R¯1 = 0.38269191223238574472986783803208, R¯ 2 = 0.11577276293755123237616544429329, R˜2 = 0.083850634950170241377342961186514.

Example 10. Let B1 = B2 = C[0, 1], the space of continuous functions defined on [0, 1] be equipped with the max norm. Let D = U(0, 1). Define function H on D by H(ϕ)(x) = ϕ(x) − 5

Z 1

xθϕ(θ)3 dθ.

(5.29)

0

We have that 0

H (ϕ(ξ))(x) = ξ(x) − 15

Z 1 0

xθϕ(θ)2 ξ(θ)dθ, for each ξ ∈ D.

Then, for x∗ = 0, we get ω0 (t) = 7.5t, ω(t) = 15t and ω1 (t) = 2. Then the radii are R1 = 0, R2 = 0.022326558814471354763586674607723, R¯ 1 = 0.066666666666666666666666666666667, R¯ 2 = 0.023344251192317290455324751974331, R˜ 2 = 0.0068165592653785947799272015856786. Example 11. Returning back to the motivational example at the introduction of this chapter, we have ω0 (t) = ω(t) = 96.6629073t and ω1 (t) = 2. The parameters for method (5.2) are R1 = 0, R2 = 0.0018077804647045109167485810175435, R¯ 1 = 0.0068968199414654552878434223828208, R¯ 2 = 0.0020165977852019635260805152654484, R˜ 2 = 0.00011832992987423140507710628277493. Remark 7. Condition (5.6) is violated in Example 3.2 and Example 3.3. That is why R1 = 0.

58

4.

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Conclusion

There are a plethora of third convergence order schemes for solving equations. These schemes are developed using different techniques based on algebraic or geometrical considerations. But the order of convergence is determined by assuming the existence of at least the fourth derivative for the operator involved. Moreover, under these techniques no estimates on the upper error bounds on the distances or results on the uniqueness of the solution based on Lipschitz or H¨older type conditions are provided. Hence, the usefulness of these schemes is very restricted. We address all these concerns by using only the first derivative, which is appearing on these schemes and under the same set of conditions. Moreover, we provide a computable ball comparison between these schemes. This way, we expand the applicability of these schemes under weaker conditions. Numerical experiments are conducted to find the convergence balls and test the criteria of convergence.

References [1] Amat S., Busquier S. and Negra M., Adaptive approximation of nonlinear operators, Numer. Funct. Anal. Optim. 25 (2004) 397–405. [2] Amat S., Argyros I. K., Busquier S., and Magre˜na´ n A. A., Local convergence and the dynamics of a two-point four parameter Jarratt-like method under weak conditions, Numer. Algor., (2017), DOI: 10.1007/s11075-016-0152-5. [3] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. company, New York (2007). [4] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-III, Nova Publishes, NY, 2019. [5] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2019. [6] Argyros I. K., George S., Magre˜na´ n A. A., Local convergence for multi-point- parametric Chebyshev-Halley-type methods of higher convergence order. J. Comput. Appl. Math. 282, 215-224 (2015). [7] Argyros I. K., Magre˜na´ n A. A., Iterative methods and their dynamics with applications, CRC Press, New York, USA, 2017. [8] Argyros I. K., Magre˜na´ n A. A., A study on the local convergence and the dynamics of Chebyshev-Halley-type methods free from second derivative. Numer. Algorithms 71, 1-23, (2015). [9] Alzahrani A. K. H., Behl R., Alshomrani A. S., Some higher-order iteration functions for solving nonlinear models. Appl. Math. Comput. 334, 80–93 (2018).

Third Order Schemes

59

[10] Babajee D. K. R., Dauhoo M. Z., Darvishi M. T., Karami A., Barati A., Analysis of two Chebyshev-like third order methods free from second derivatives for solving systems of nonlinear equations. J. Comput. Appl. Math. 233, 2002–2012 (2010). [11] Behl R., Cordero A., Motsa S. S., Torregrosa J. R., Stable high-order iterative methods for solving nonlinear models. Appl. Math. Comput. 303, 70-88 (2017). [12] Choubey N., Panday B., Jaiswal J. P., Several two-point with memory iterative methods for solving nonlinear equations. Afrika Matematika 29, 435–449 (2018). [13] Cordero A., Torregrosa J. R., Variants of Newton’s method for functions of several variables. Appl. Math. Comput. 183, 199–208 (2006). [14] Cordero A., Hueso J. L., Mart´ınez E., Torregrosa J. R., A modified Newton-Jarratt’s composition. Numer. Algor. 55, 87-99 (2010). [15] Cordero A., Torregrosa J. R., Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 190, 686–698 (2007). [16] Cordero A., Hueso J. L., Mart´ınez E., Torregrosa J. R., Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 25, 2369–2374 (2012). [17] Darvishi M. T., Barati A., Super cubic iterative methods to solve systems of nonlinear equations. Appl. Math. Comput. 188, 1678–1685 (2007). [18] Esmaeili H., Ahmadi M., An efficient three-step method to solve system of non linear equations. Appl. Math. Comput. 266, 1093–1101, (2015). [19] Fang X., Ni Q., Zeng M., A modified quasi-Newton method for nonlinear equations. J. Comput. Appl. Math. 328, 4458 (2018). [20] Fousse L., Hanrot G., Lefvre V., Plissier P., Zimmermann P., MPFR: a multipleprecision binary floating-point library with correct rounding. ACM Trans. Math. Softw. 33(2), 15 (2007). [21] Homeier H. H. H., A modified Newton method with cubic convergence: the multivariate case. J. Comput. Appl. Math. 169, 161–169 (2004). [22] Jay L. O., A note on Q-order of convergence. BIT 41, 422–429 (2001). [23] Lotfi T., Bakhtiari P., Cordero A., Mahdiani K., Torregrosa J. R., Some new efficient multipoint iterative methods for solving nonlinear systems of equations. Int. J. Comput. Math. 92, 1921–1934 (2015). [24] Magre˜na´ n A. A., Different anomalies in a Jarratt family of iterative root finding methods, Appl. Math. Comput. 233, (2014), 29-38. [25] Magre˜na´ n A. A., A new tool to study real dynamics: The convergence plane, Appl. Math. Comput. 248, (2014), 29-38.

60

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[26] McNamee J. M., Numerical Methods for Roots of Polynomials. Part I, Elsevier, Amsterdam (2007) Noguera M.: Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 218, 2377–2385 (2011). [27] Noor M. A., Waseem M., Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 57, 101–106 (2009)19. [28] Ortega J. M., Rheinboldt W. C., Iterative Solutions of Nonlinear Equations in Several Variables, Academic Press, New York, USA (1970). [29] Ostrowski A. M., Solution of Equation and Systems of Equations. Academic Press, New York (1960). [30] Sharma J. R., Sharma R., Bahl A., An improved Newton-Traub composition for solving systems of nonlinear equations. Appl. Math. Comput. 290, 98–110 (2016). [31] Sharma J. R., Arora H., Improved Newton-like methods for solving systems of nonlinear equations. SeMA 74, 147–163 (2017). [32] Sharma J. R., Arora H., Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 35, 269-284 (2016).

Chapter 6

Fifth and Sixth Order Methods 1.

Introduction

In this chapter, we extended the applicability of fifth and sixth-order methods using conditions on the first derivative of the operator involved. Our technique can be used to make comparisons between other methods of the same order. We extend the applicability of popular fifth-sixth order iterative methods for approximating a solution λ of the nonlinear equation Q(x) = 0, (6.1) where Q : D ⊂ X → Y is a continuously differentiable nonlinear operator between the Banach spaces X and Y, and D stand for an open non empty convex compact set of X. The fifth order methods we are interested is: Cordero et al.[9]: yn = xn − Q0 (xn )−1 Q(xn ) zn = xn − 2A−1 n Q(xn )

xn+1 = zn − Q0 (yn )−1 Q(zn ),

(6.2)

where An = Q0 (xn ) + Q0 (yn ) and the sixth order method we are interested is: Cordero et al.[9]: yn = xn − Q(xn )−1 Q(xn )

zn = yn − Q0 (xn )−1 (2I − Q0 (yn )Q0 (xn )−1 )Q(yn)

xn+1 = zn − Q0 (yn )−1 Q(zn ).

(6.3)

Convergence of method (6.2) is proved using Taylor expansions involving sixth-order derivatives, and that of method (6.3) is using Taylor expansions involving seventh order derivatives, not on these methods. The hypotheses involving higher-order derivatives limit 1 3 the applicability of these methods. For example: Let X = Y = R, D = [− , ]. Define f on 2 2 D by  3 s logs2 + s5 − s4 , s 6= 0 f (s) = 0, s = 0.

62

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Then, we get f 0 (s) = 3s2 log s2 + 5s4 − 4s3 + 2s2 ,

f 00 (s) = 6s logs2 + 20s3 − 12s2 + 10s, f 000 (s) = 6 logs2 + 60s2 − 24s + 22,

λ = 1. Obviously f 000 (s) is not bounded on D. Hence, the convergence of these methods is not guaranteed by the earlier analyses. Moreover, no upper error estimates on kxn − λk or results on the uniqueness of λ are given in earlier studies. In this chapter, we address these concerns. The same convergence order is obtained using COC or ACOC (to be precise in Remark 8) that depend only on the first derivative and then iterates. Hence, we also extend the applicability of these methods. Our technique can be used to compare other methods [1]-[16] along the same lines. The rest of the chapter is organized as follows. The convergence analysis of these methods is given in Section 2. Examples are given in Section 3.

2.

Ball Convergence

It is helpful to develop real functions and constants. Let B = [0, ∞). Suppose there exist ω0 : B −→ B nondecreasing and continuous so that ω0 (s) − 1 = 0,

(6.4)

has a least positive solution ρ0 . Set B0 = [0, ρ0). Suppose there exist functions ω : B0 −→ B and ω1 : B0 −→ B nondecreasing and continuous. Consider functions g1 and h1 on B0 as g1 (s) =

R1 0

ω((1 − τ)s)dτ 1 − ω0 (s)

and h1 (s) = g1 (s) − 1. Suppose equation h1 (s) = 0

(6.5)

p(s) − 1 = 0

(6.6)

has a least solution R1 ∈ (0, ρ0 ). Suppose equation has a minimal solution ρ p ∈ (0, ∞), where 1 p(s) = (ω0 (s) + ω0 (g1 (s)s)). 2 Set B1 = [0, ρ1 ) and ρ1 = min{ρ0 , ρ p }. Define functions g2 and h2 on B1 as g2 (s) = g1 (s) + q(s)

Z 1 0

ω1 (τs)dτ

Fifth and Sixth Order Methods

63

and h2 (s) = g2 (s) − 1, where q(s) = Suppose equation

ω0 (s) + ω0 (g1 (s)s) . 2(1 − ω0 (s))(1 − p(s)) h2 (s) = 0

(6.7)

ω0 (g2 (s)s) = 0,

(6.8)

ω0 (g2 (s)s) = 0

(6.9)

has a least solution R2 ∈ (0, ρ1 ). Suppose equation

have minimal solutions ρ2 ∈ (0, ρ0 ), ρ3 ∈ (0, ρ1 ), respectively. min{ρ1 , ρ2 , ρ3 }. Define functions b, g3 and h3 on B2 as

Set B2 = [0, ρ), ρ =

g3 (s) = [g1 (g2 (s)s) ω0 (g1 (s)s) + ω0 (g2 (s)s)) 01 ω1 (τg2 (s)s)dτ ]g2 (s) + (1 − ω0 (g1 (s)s))(1 − ω0(g2 (s)s)) R

and h3 (s) = g3 (s) − 1. Suppose equation h3 (s) = 0

(6.10)

has a least solution R3 ∈ (0, ρ). We shall show that R = min{R j }, j = 1, 2, 3.

(6.11)

is a radius of convergence for method (6.2). By these definitions, we have 0 ≤ ω0 (s) < 1

(6.12)

0 ≤ ω0 (g1 (s)s) < 1

(6.13)

0 ≤ ω0 (g2 (s)s) < 1

(6.14)

0 ≤ g j (s) < 1,

(6.15)

and for all s ∈ [0, R). From now we assume ρ j , = 0, 1, 2, 3 exist. The symbol U(x, γ) is standing ¯ γ) is the closure of for a ball with center x ∈ X and of radius γ > 0. Then, the ball U(x, U(x, γ). Let en = kxn − λk, for all n = 0, 1, 2, . . .. Our ball convergence requires hypotheses (A): (A1) Q : D −→ Y is differentiable in the Fr´echet sense and a simple solution λ of equation (6.1) exists.

64

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

(A2) There exists function ω0 : B −→ B nondecreasing and a continuous so that for all x∈D kQ0 (λ)−1 (Q0(x) − Q0 (λ))k ≤ ω0 (kx − λk). Set D0 = D ∩U(λ, ρ0 ).

(A3) There exist functions ω : B0 −→ B, ω1 : B0 −→ B nondecreasing and continuous so that for each x, y ∈ D0 kQ0(λ)−1 (Q0 (y) − Q0 (x))k ≤ ω(ky − xk) and kQ0 (λ)−1 Q0 (x) ≤ ω1 (kx − λk). ¯ R) ⊂ D, where R is defined by (6.11). (A4) U(λ, (A5) There exists R4 ≥ R such that

Z 1 0

ω0 (τR4 )dτ < 1.

Set D1 = D ∩ U¯ (λ, R4). These definitions and hypotheses help us show the ball convergence of the method (6.2). Theorem 17. Suppose hypotheses (A) hold with x0 ∈ U(λ, R) − {λ}. Then, sequence {xn } generated by (6.2) is well defined,{xn } ⊂ U(λ, R) and lim n −→ ∞xn = λ, so that kyn − λk ≤ g1 (en )en ≤ en < R,

(6.16)

kzn − λk ≤ g2 (en )en ≤ en ,

(6.17)

kxn+1 − λk ≤ g3 (en )en ≤ en ,

(6.18)

where λ is the only solution of equation (6.1) in the set D1 given in (A5). Proof. Let v ∈ U(λ, R) − {λ}. In view of (6.11), (6.12), (A1) and (A2), we get kQ0 (λ)−1 (Q0(v) − Q0 (λ))k ≤ ω0 (kv − λk) ≤ ω0 (R) < 1, leading together with a result by Banach on invertible operators [2] to kQ0 (v)−1 Q0 (λ)k ≤

1 , 1 − ω0 (kv − λk)

(6.19)

and to the existence of y0 , since v = x0 , F; (x0 ) is invertible by (6.19). Using (6.11), (6.15) (for i = 1), (A1), (6.19)( for v = x0 ) (A3) and the first substep of method (6.2), we have ky0 − λk = kx0 − λ − Q0 (x0 )−1 Q(x0 )k ≤ kQ0 (x0 )−1 Q(λ)kk

Z 1 0

Q0 (λ)−1 (Q0(λ + τ(x0 − λ)) − Q0 (x0 ))dτ(x0 − λ)k

R1

ω((1 − τ)kx0 − λk)dτkx0 − λk 1 − ω0 (kx0 − λk) ≤ g1 (e0 )e0 ≤ e0 < R, ≤

0

(6.20)

Fifth and Sixth Order Methods

65

verifying (6.16) and y0 ∈ U(λ, R) − {λ} if n = 0. The existence of z0 depends on the invertibility of A) . By (6.11), (A2), (6.20), we obtain k(2Q0 (λ))−1(Q0 (x0 ) + Q0 (y0 ) − 2Q0 (λ))k 1 (kQ0(λ)−1 (Q0 (x0 ) − Q0 (λ))k + kQ0(λ)−1 (Q0 (y0 ) − Q0 (λ))k) ≤ 2 1 ≤ (ω0 (e0 ) + ω0 (ky0 − λk)) = p(e0 ) < 1, 2 so 0 kA−1 0 Q (λ)k ≤

1 . 2(1 − p(e0 ))

(6.21)

Then, we can write by the second substep of method (6.2) z0 − λ = (x0 − λ − Q0 (x0 )−1 Q(x0 ))

+(Q0 (x0 )−1 − 2A−1 0 )Q(x0 )

= (x0 − λ − Q0 (x0 )−1 Q(x0 ))

+Q0 (x0 )−1 (Q0 (y0 ) − Q0 (x0 ))A−1 0 Q(x0 ).

(6.22)

It follows by (6.11), (6.15) (for j = 2), (6.19) (for v = x0 ) and (6.20)-(6.22) kz0 − λk ≤ [g1 (e0 )e0 + kQ0 (x0 )−1 Q0 (λ)kkQ0(λ)−1 (Q0(y0 ) − Q0 (λ))k

0 0 −1 +kQ0 (λ)−1 (Q“(x0 ) − Q0 (λ))kkA−1 0 Q (λ)kkQ (λ) Q(x0 )k

≤ [g1 (e0 ) + q(e0 )

Z 1 0

ω1 (τe0 )dτ]e0

≤ g2 (e0 )e0 ≤ e0 ,

(6.23)

verifying (6.17) and z0 ∈ U(λ, R) if n = 0, where we used the estimate 0 kQ0 (x0 )−1 (A0 − 2Q0 (x0 ))A−1 0 Q (λ)k

0 = kQ0 (x0 )−1 (Q0 (y0 ) − Q0 (x0 ))A−1 0 Q (λ)k

≤ kQ0 (x0 )−1 Q0 (λ)k[kQ0(λ)−1 (Q0 (y0 ) − Q0 (λ))k 0 +kQ0 (λ)−1 (Q0(x0 ) − Q0 (λ))k]kA−1 0 Q (λ)k ω0 (ky0 − λk) + ω0 (e0 ) ≤ ≤ q(e0 ). 2(1 − ω0 (e0 ))(1 − p(e0 ))

(6.24)

By the third substep of method (6.2) and (6.19) (for v = y0 , z0 ), (6.11), (6.15) (for j = 3), (6.20), (6.23), we can have kx1 − λk = k(z0 − λ − Q0 (z0 )−1 Q(y0 ))

+Q0 (z0 )−1 (Q0 (y0 ) − Q0 (z0 ))Q0(y0 )−1 Q(z0 )k

≤ [g1 (kz0 − λk)

(ω0 (ky0 − λk) + ω0 (e0 )) 01 ω1 (τkz0 − λk)dτ + ]kz0 − λk (1 − ω0 (ky0 − λk))(1 − ω0 (kz0 − λk)) ≤ g3 (e0 )e0 ≤ e0 , R

(6.25)

66

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

verifying (6.18) and x1 ∈ U(λ, R), if n = 0. Hence, the induction for (6.16)-(6.18) is done for n = 0. Suppose (6.16)-(6.18) hold for all m = 0, 1, 2, . . ., n−1. By the preceding calculations we show (6.16)-(6.18) hold for all n, completing the induction. Next by the estimation kxm+1 − λk ≤ µkxm − λk < R,

(6.26)

where µ = g3 (e0 ) ∈ [0, 1), we get xm+1 ∈ U(λ, R) and lim xm = λ. Consider u ∈ D1 with m−→∞

Q(u) = 0 and set T =

Z 1 0

0

Q (λ + τ(u − λ))dτ. Then, by (A2) and (A5), we have

kQ0 (λ)−1 (T − Q0 (λ))k ≤

Z 1 0

ω0 (τku − λk)dτ ≤

Z 1 0

ω0 (τR4 )dτ < 1,

so λ = u, by the invertibility of T and the estimation 0 = Q(u) − Q(λ) = T (u − λ). Concerning the convergence of method (6.3) we work in a similar way. The function g1 and h1 are same but the rest differ as g¯2 (s) = [g1 (g1 (s)s) + a(a)

Z 1 0

ω1 (τg1 (s)s)dτ]g1(s),

h¯ 2 (s) = g¯2 (s) − 1, where a(s) = Suppose equations

(ω0 (s) + ω0 (g1 (s)s))2 . (1 − ω0 (g1 (s)s))(1 − ω0(s)) ω0 (g1 (s)s) − 1 = 0, ω0 (g¯2 (s)s) − 1 = 0

have minimal solutions ρ2 ∈ (0, ρ1 ), ρ3 ∈ (0, ρ) Suppose equation h¯ 2 (s) = 0 has a minimal positive solution R¯ 2 . G¯ 3 (s) = [g1 (g¯2 (s)s) (ω0 (g1 (s)s) + ω0 (g¯2 (s)s)) 01 ω1 (τg¯2 (s)s)dτ + (1 − ω0 (g1 (s)s))(1 − ω0(g¯2 (s)s)) R

barg2(s) and

h¯ 3 (s) = g¯3 (s) − 1. Suppose that equation h¯ 3 (s) = 0

Fifth and Sixth Order Methods

67

has a minimal positive solution R¯ 3 . The parameters R¯ = min{R1 , R¯ 2 , R¯ 3 } is a radius of convergence for method (6.3). These real functions and parameters are being motivated by kzn − λk = kyn − λ − Q0 (yn )−1 Q(yn )k

+(Cn Q0 (λ))Q0(λ)−1 Q(yn )k

≤ [g1 (kyn − λk) + a(en )

Z 1 0

ω1 (τkyn − λk)dτ]kyn − λk

¯ ≤ g¯2 (en )en ≤ en < R, kQ0 (yn )−1 Q0 (λ)k ≤

1 , 1 − ω0 (kyn − λk)

kCn k = k(Q0 (yn )−1 − Q0 (xn )−1 ) + Q0 (xn )−1 (Q0 (yn ) − Q0 (xn ))Q0 (xn )−1 k = kQ0 (yn )−1 (Q0(xn ) − Q0 (yn ))Q0(xn )−1

−Q0 (xn )−1 (Q0 (xn ) − Q00 (yn ))Q0(xn )−1 k

= kQ0 (yn )−1 (Q0(xn ) − Q0 (yn ))Q0(xn )−1 (Q0 (xn ) − Q0 (yn ))k, so (ω0 (en ) + ω0 (kyn − λk))2 (1 − ω0 (kyn − λk))(1 − ω0 (en )) ≤ q(en ),

kCn Q0 (λ)k ≤

and kQ0 (zn )−1 Q0 (λ)k ≤

1 , 1 − ω0 (kzn − λk)

kxn+1 − λk = k(zn − λ − Q0 (xn )−1 Q(xn ))

+Q0 (zn )−1 (Q0(yn ) − Q0 (zn ))Q0 (yn )−1 Q(zn )k

(ω0 (en ) + ω0 (kzn − λk)) 01 ω1 (τkzn − λk)dτ ≤ [g1 (kzn − λk) + ]kzn − λk (1 − ω0 (kyn − λk))(1 − ω0 (kzn − λk)) = g¯3 (en )en ≤ en . R

Hence, we arrive at: Theorem 18. Suppose that hypotheses (A) hold but with R2 , R3 , R, g2, h2 , g3 , and h3 ¯ g¯2 , h¯ 2 , g¯3 and h¯ 3 , respectively. Then, the conclusion of Theorem 17 repalced by R¯ 2 , R¯ 3 , R, but for method (6.3). 

68

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Remark 8. We can find the convergence order by resorting to the computational order of convergence (COC) defined by     kxn − λk kxn+1 − λk / ln ξ = ln kxn − λk kxn−1 − λk or the approximate computational order of convergence     kxn+1 − xn k kxn − xn−1 k / ln . ξ1 = ln kxn − xn−1 k kxn−1 − xn−2 k This way, we obtain in practice the order of convergence without resorting to the computation of higher-order derivatives appearing in the method or the sufficient convergence criteria usually appearing in the Taylor expansions for the proofs of these results.

3.

Numerical Examples

Example 12. Define F on D = R by F(x) = sinx. Then, we have for λ = 0, w0 (s) = w(s) = t and ω1 (s) = 1 The radii are given in Table 1. Table 6.1. Method (6.2) R1 = 0.66666666666666666666666666666667 R2 = 0.41099846165511449980201064136054 R3 = 0.38061243296706831484854660629935 R = 0.38061243296706831484854660629935

Method (6.3) R¯ 1 = 0.66666666666666666666666666666667 R¯ 2 = 18.694218294614323383484588703141 R¯ 3 = 0.62712033715840798109297793416772 R¯ = 0.62712033715840798109297793416772

Example 13. Let us consider a system of differential equations governing the motion of an object and given by F10 (x) = ex , F20 (y) = (e − 1)y + 1, F30 (z) = 1 with initial conditions F1 (0) = F2 (0) = F3 (0) = 0. Let F = (F1 , F2 , F3 ). Let X = Y = R3 , D = ¯ 1), λ = (0, 0, 0)T . Define function F on D for w = (x, y, z)T by U(0, Q(w) = (ex − 1,

e−1 2 y + y, z)T . 2

The Fr´echet-derivative is defined by 

 ex 0 0 Q0 (v) =  0 (e − 1)y + 1 0  . 0 0 1

1

1

Notice that using the (H) conditions, we get ω0 (s) = (e − 1)s, ω(s) = e e−1 s, ω1 (s) = e e−1 . The radii are given in Table 2.

Fifth and Sixth Order Methods

69

Table 6.2. Method (6.2) R1 = 0.38269191223238574472986783803208 R2 = 0.19947194729105893751253120171896 R3 = 0.17427817666013167841043696171255 R = 0.17427817666013167841043696171255

Method (6.3) R¯ 1 = 0.38269191223238574472986783803208 R¯ 2 = 9.1758104155079323049903905484825 R¯ 3 = 0.29183576779092096353807050945761 R¯ = 0.29183576779092096353807050945761

Example 14. Let X = Y = C[0, 1], the space of continuous functions defined on [0, 1] be equipped with the max norm. Let D = U(0, 1). Define function F on D by Q(ϕ)(x) = ϕ(x) − 5

Z 1

xθϕ(θ)3 dθ.

(6.27)

0

We have that Q0 (ϕ(ξ))(x) = ξ(x) − 15

Z 1 0

xθϕ(θ)2 ξ(θ)dθ, for each ξ ∈ D.

Then, we get that λ = 0, so ω0 (s) = 7.5s, ω(s) = 15s and ω1 (s) = 2. Then the radii are given in Table 3. Table 6.3. Method (6.2) R1 = 0.066666666666666666666666666666667 R2 = 0.036171526149577457043271522252326 R3 = 0.013662763496254719253197862371962 R = 0.013662763496254719253197862371962

Method (6.3) R¯ 1 = 0.066666666666666666666666666666667 R¯ 2 = 0.015798570567889115567883351332057 R¯ 3 = 0.015685229391061268622298285890793 R¯ = 0.015685229391061268622298285890793

Example 15. Returning back to the motivational example at the introduction of this chapter, we have ω0 (s) = ω(s) = 96.6629073s and ω1 (s) = 2. The parameters for method (6.2) are given in Table 4. Table 6.4. Method (6.2) R1 = 0.0068968199414654552878434223828208 R2 = 0.0033813318557919264800704084450444 R3 = 0.00020918263730481401478818181960406 R = 0.00020918263730481401478818181960406

Method (6.3) R¯ 1 = 0.0068968199414654552878434223828208 R¯ 2 = 2.3642325170766502751007465121802 R¯ 3 = 0.00020970859620603537966550267146459 R¯ = 0.00020970859620603537966550267146459

70

4.

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Conclusion

Different techniques are used to develop iterative methods. Moreover, a different set of criteria usually based on the seventh derivative are needed in the ball convergence of sixthorder methods. Then, these methods are compared using numerical examples. But we do not know: if the results of those comparisons are true if the examples change; the largest radii of convergence; error estimates on kxn − λk and uniqueness results that are computable. We address these concerns using only the first derivative and a common set of criteria. Numerical experiments are used to test the convergence criteria and further validate the theoretical results. Our technique can be used to make comparisons between other methods of the same order.

References [1] Amat S., Argyros I. K., Busquier S., Hern´andez-Ver´on M. A., Mart´ınez E., On the local convergence study for an efficient k-step iterative method, J. Comput. Appl. Math. 343 (2018), 753-761. [2] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [3] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-III, Nova Publishes, NY, 2019. [4] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2019. [5] Argyros I. K., Magre˜na´ n A. A., Iterative method and their dynamics with applications, CRC Press, New York, USA, 2017. [6] Argyros I. K., Sharma D., Parhi S. K., On the local convergence of WeerakoonFernando method with ω- continuous derivative in Banach spaces, SeMA J. (2020), DOI: https://doi.org/10.1007/s40324-020-00217-y. [7] Argyros I. K., Convergence and Application of Newton-type Iterations, Springer, (2008). [8] Chun C., Iterative methods improving Newtons method by Adomian decomposition method, Comput. Math. Appl. 50 (2005), 1559-1568. [9] Cordero A., Ezquerro J. A., Hernandez-Veron M. A., On the local convergence of a fifth-order iterative method in Banach spaces, J. Math. 46 (2014), 53-62. [10] Khan W. A., Noor K. I., Bhatti K., Ansari F. A., A new fourth order Newton-type method for solution of system of nonlinear equations, Appl. Math. Comput. 270 (2015), 724-730.

Fifth and Sixth Order Methods

71

[11] Mart´ınez E., Singh S., Hueso J. L., Gupta D. K., Enlarging the convergence domain in local convergence studies for iterative methods in Banach spaces, Appl. Math. Comput. 281 (2016), 252-265. [12] Singh S., Gupta D. K., Badoni R. P., Mart´ınez E., Hueso J. L., Local convergence of a parameter based iteration with Holder continuous derivative in Banach spaces, Calcolo 54 (2) (2017), 527-539. [13] Sharma D., Parhi S. K., On the local convergence of modified Weerakoons method in Banach spaces, J. Anal. (2019), DOI: https://doi.org/10.1007/s41478-019-00216-x. [14] Sharma D., Parhi S. K., Local convergence of a higher-order method in Banach spaces, Canad. J. Appl. Math. 2 (2020), no. 1, 68-80. [15] Petkovi´c M. S., Neta B., Petkovi´c L., D˜zuni´c D., Multipoint methods for solving nonlinear equations, Elsevier, (2013). [16] Traub J. F., Iterative Methods for Solution of Equations, Prentice-Hall, Englewood Cliffs, (1964).

Chapter 7

Sixth Order Methods 1.

Introduction

In this chapter, we compare two sixth order methods for approximating a solution x∗ of the nonlinear equation F(x) = 0, (7.1) where F : Ω ⊂ X → Y is a continuously differentiable nonlinear operator between the Banach spaces X and Y, and Ω stand for an open non empty convex compact set of X. The sixth order methods we are interested are: 2 yn = xn − F 0 (xn )−1 F(xn ) 3 5 0 0 2 −1 0 −1 zn = xn − [12(A−1 n F (xn )) − 9An F (xn ) + I]F (xn ) F(xn ), 2 xn+1 = zn + (2F 0 (xn )−1 − 3A−1 n )F(zn ),

(7.2)

and 2 yn = xn − F 0 (xn )−1 F(xn ) 3 3 −1 0 0 −1 zn = xn − [9A−1 n F (xn ) + (An F (xn )) 2 13 − I]F 0 (xn )−1 F(xn ), 2 xn+1 = zn + (2F 0 (xn )−1 − 3A−1 n )F(zn ),

(7.3)

where, An = F 0 (xn ) + F 0 (yn ). These methods use similar information; derived based on different techniques, whose convergence has been shown using Taylor expansions involving the seventh order derivative, not on these methods [1]. The hypotheses involving the seventh derivatives limit the 1 3 applicability of these methods. For example: Let X = Y = R, Ω = [− , ]. Define f on Ω 2 2 by  3 s logs2 + s5 − s4 , s 6= 0 f (s) = 0, s = 0.

74

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Then, we get f 0 (s) = 3s2 log s2 + 5s4 − 4s3 + 2s2 ,

f 00 (s) = 6s logs2 + 20s3 − 12s2 + 10s, f 000 (s) = 6 logs2 + 60s2 − 24s + 22.

Obviously f 000 (s) is not bounded on Ω. Hence, the convergence of methods (7.2) and (7.3) are not guaranteed by the earlier analysis. Moreover, in the case of these methods no computable convergence radii, upper error estimates on kxn − x∗ k or results on the uniqueness of x∗ are given. Furthermore, a performance comparison is provided between them using numerical examples. Hence, we do not know in advance based on the same set of hypotheses for which method we can obtain a larger radius of convergence (i. e., more initial points x0 ); tighter error estimates on kxn − x∗ k (i.e. fewer iterates to obtain the desired error tolerance) and best information on the location of the solution. In this chapter, we address these concerns. The same convergence order is obtained using COC or ACOC (to be precise in Remark 9) that depend only on the first derivative and then iterates. Hence, we also extend the applicability of these methods. Our technique can be used to compare other methods [1]-[27] along the same lines. The rest of the chapter is organized as follows. The convergence analysis of methods (7.2) and (7.6) are given in Section 2.

2.

Ball Convergence

Let S = [0, ∞). Consider function ω0 defined on S with values in S continuous and increasing such that equation ω0 (s) − 1 = 0, (7.4) has a least positive solution ρ0 . Moreover, consider functions ω and ω1 defined on [0, ρ0) with values in S such that the equation h1 (s) = g0 (s) − 1 = 0

(7.5)

has a least solution in (0, ρ0), denoted by R1 , where g0 (s) =

R1 0

R1

ω((1 − τ)s)dτ + 13 1 − ω0 (s)

0

ω1 (τs)dτ

.

Suppose equation p(s) − 1 = 0 has a least solution in (0, ρ0) denoted by ρ p , where 1 p(s) = (ω0 (s) + ω0 (g0 (s)s)). 2 Set ρ = min{ρ0 , ρ p }.

(7.6)

Sixth Order Methods

75

Suppose equation h2 (s) := g2 (s) − 1 = 0.

(7.7)

has a least solution in (0, ρ) denoted by R2 , where g2 (s) = g(s) + where g(s) = and a(s) =

R1 0

3a(s) 01 ω1 (τs)dτ , 1 − ω0 (s) R

ω((1 − τ)s)dτ 1 − ω0 (s)

 ω0 (s) + ω0 (g0 (s)s) 2 1 − p(s)   7 ω0 (s) + ω0 (g0 (s)s) + 4. + 4 1 − p(s) 1 4



Suppose equation ω0 (g2 (s)s) − 1 = 0

(7.8)

has a least solution in (0, ρ) denoted by ρ1 . Suppose equation h3 (s) := g3 (s) − 1 = 0

(7.9)

has a least solution in (0, ρ1) denoted by R3 , where   ω1 (s) + ω0 (g0 (s)s) + ω0 (g2 (s)s) g3 (s) = g(g2 (s)s) + 2(1 − ω0 (g2 (s)s))(1 − p(s)) Z 1  3 ω1 (g0 (s)s) + ω1 (τg2 (s)s)dτ g2 (s). 2 (1 − ω0 (s))(1 − p(s)) 0 Define a radius of convergence R be as R = min{Rm }, m = 1, 2, 3.

(7.10)

0 ≤ ω0 (s) < 1

(7.11)

0 ≤ ω0 (g2 (s)s) < 1

(7.12)

0 ≤ p(s) < 1

(7.13)

0 ≤ gm (s) < 1,

(7.14)

If follows

and for all s ∈ [0, R). Define T (x, a) = {y ∈ X : kx − yk < a} and T¯ (x, a) be its closure for a > 0. We shall use the notation en = kxn − x∗ k, for all n = 0, 1, 2, . . .. The conditions (A ) shall be used:

76

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

(A1 ) F : Ω −→ Y is differentiable and there exists a simple solution x∗ of equation F(x) = 0. (A2 ) There exists a continuous and increasing function ω0 : S −→ S such that for all x ∈ Ω kF 0 (x∗ ∗)−1 (F 0 (x) − F 0 (x∗ ))k ≤ ω0 (kx − x∗ k). Set Ω0 = Ω∩, ρ0 ). (A3 ) There exists a continuous and increasing functions ω, ω1 such that for each x, y ∈ Ω0 kF 0 (x∗ )−1 (F 0 (y) − F 0 (x))k ≤ ω(ky − xk) and kF 0 (x∗ )−1 F 0 (x)k ≤ ω1 (kx − x∗ k). (A4 ) T¯ (x∗ , R) ⊂ Ω, and ρ0 , ρ p and Rm exist, where R is defined by (7.8) and (A5 ) There exists R4 ≥ R such that Z 1 0

ω0 (τR4 )dτ < 1.

Set Ω1 = Ω ∩ T¯ (x∗ , ρ0 ). Next, the ball convergence of method (7.2) is given. Theorem 19. Suppose conditions (A ) hold, and choose x0 ∈ T (x∗ , R) − {x∗ }. Then, sequence {xn } generated by method (7.2) exists, stays in T (x∗ , R) and converges to x∗ . Moreover, the following assertions hold kyn − x∗ k ≤ g1 (kxn − x∗ k)kxn − x∗ k ≤ kxn − x∗ k < R,

(7.15)

kzn − x∗ k ≤ g2 (kxn − x∗ k)kxn − x∗ k ≤ kxn − x∗ k,

(7.16)

kxn+1 − x∗ k ≤ g3 (kxn − x∗ k)kxn − x∗ k ≤ kxn − x∗ k.

(7.17)

and Here, the functions gm are introduced earlier, and R is defined in (7.10). Furthermore, x∗ is the only solution of equation (7.1) in the set Ω1 . Proof. Items (7.15)-(7.17) shall be shown using induction. Let x ∈ T (x∗ , R) − {x∗ }. By, (7.10), (7.11), (A1 ) and (A2 ), we get kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ ω0 (kx − x∗ k) < ω0 (R) ≤ 1, so kF 0 (x)−1 F 0 (x∗ )k ≤

1 . 1 − ω0 (kx − x∗ k)

(7.18)

(7.19)

by the Banach result on invertible operators [21]. Moreover, we have y0 exist by method (7.2) (first sub-step for n = 0). We can write F(x) = F(x) − F(x∗ ) =

Z 1 0

F 0 (x + τ(x − x∗ ))dτ(x − x∗ ),

(7.20)

Sixth Order Methods

77

so by the second condition in (A3 ) kF 0 (x∗ )−1 F 0 (x)k ≤

Z 1 0

ω1 (τkx − x∗ k)dτkx − x∗ k.

(7.21)

Therefore, by (7.10), (7.14) (for m = 1), (A1 ), (A3 ), (7.19) (for v = x0 ) and (7.21) (for x = x0 ) and method (7.2), we get ky0 − x∗ k ≤ kF 0 (x0 )−1 F 0 (x∗ )kk

Z 1 0

F 0 (x∗ )−1 (F 0 (x∗ + τ(x0 − x∗ ))

1 −F 0 (x0 ))dτ(x0 − x∗ )k + kF 0 (x0 )−1 F(x∗ )kkF 0 (x∗ )−1 F 0 (x0 )k 3 R1 R [ 0 ω((1 − τ)kx0 − x∗ k)dτ + 13 01 ω1 (τkx0 − x∗ k)dτ]kx0 − x∗ k ≤ 1 − ω0 (kx0 − x∗ k) ≤ g1 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k < R, (7.22) so (7.15)holds for n = 0 and y0 ∈ T (x∗ , R). We can write

z0 − x∗ = x0 − x∗ − F 0 (x0 )−1 F(x0 ) − 3A¯ 0 F 0 (x0 )−1 F(x0 ),

(7.23)

where 1 0 2 −1 0 A¯ 0 = 4(A−1 0 F (x0 )) − 3A0 F (x0 ) + I 2 1 2 1 −1 0 −1 0 = 4(A0 F (x0 ) − I) − 7(A0 F (x0 ) − I) − 4I. 2 2 By (7.10), (7.13), (7.19) (for v = x0 ), (A2 ) and (7.22), we get

≤ ≤ ≤ ≤

k(2F 0 (x∗ ))−1 (F 0 (x0 ) + F 0 (y0 ) − 2F 0 (x∗ ))k 1 (kF 0 (x∗ )−1 (F 0 (x0 ) − F 0 (x∗ ))k 2 +kF 0 (x∗ )−1 (F 0 (y0 ) − F 0 (x∗ ))k) 1 (ω0 (kx0 − x∗ k) + ω0 (ky0 − x∗ k)) 2 1 (ω0 (kx0 − x∗ k) + ω0 (g1 (kx0 − x∗ k)kx0 − x∗ k)) 2 p(kx0 − x∗ k) < 1,

so k(F 0 (x0 ) + F 0 (y0 ))−1F 0 (x∗ )k ≤ Moreover,

1 . 2(1 − p(kx0 − x∗ k)

1 1 0 0 −1 0 0 kA−1 0 F (x0 ) − Ik = kA0 (F (x0 ) − (F (x0 ) + F (y0 )))k 2 2 1 −1 0 = kA (F (x0 ) − F 0 (y0 ))k 2 0 1 −1 0 ≤ kA F (x∗ )k[kF 0 (x∗ )−1 (F 0 (x0 ) − F 0 (x∗ ))k 2 0 +kF 0 (x∗ )−1 (F 0 (y0 ) − F 0 (x∗ ))k] ω0 (kx0 − x∗ k) + ω0 (g0 (kx0 − x∗ k)kx0 − x∗ k) , ≤ 4(1 − p(kx0 − x∗ k))

(7.24)

(7.25) (7.26)

78

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

so 2 ω (kx − x k) + ω (g (kx − x k)kx − x k) 0 0 ∗ 0 0 0 ∗ 0 ∗ kA¯ 0 k ≤ 1 − p(kx0 − x∗ k) 7 ω0 (kx0 − x∗ k) + ω0 (g0 (kx0 − x∗ k)kx0 − x∗ k) + +4 4 1 − p(kx0 − x∗ k)) = a(kx0 − x∗ k). 1 4



(7.27)

Furthermore, we get from these calculations and (7.14) (for m = 2) kz0 − x∗ k ≤ k(x0 − x∗ − F 0 (x0 )−1 F(x0 ))

+3kA0 kkF 0 (x0 )−1 F 0 (x∗ )kkF 0 (x∗ )−1 F(x0 )k

3a(kx0 − x∗ k) 01 ω1 (τkx0 − x∗ k)dτ ≤ [g(kx0 − x∗ k) + ]kx0 − x∗ k 1 − ω0 (kx0 − x∗ k) = g2 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k, (7.28) R

so (7.16) hold for n = 0, z0 ∈ T (x∗ , R), and (7.19) holds for n = 0. Then, by method (7.2) (third sub-step for n = 0), (7.20), (7.14) (for m = 3), and (7.28), we have in turn x1 − x∗ = z0 − x∗ − F 0 (z0 )−1 F(z0 ) + A¯ 0 F(z0 ),

(7.29)

where A¯ 0 is well defined. We need an estimate A¯ 0 = F 0 (z0 )−1 + 2F 0 (x0 )−1 − 3A−1 0

0 −1 = F 0 (z0 )−1 (A0 − F 0 (z0 ))A−1 − A−1 0 + 3(F (x0 ) 0 ) 0 −1 0 0 0 −1 = F (z0 ) (F (x0 ) + (F (y0 ) − F (z0 )))A0

+3F 0 (x0 )−1 (A0 − F 0 (x0 ))A−1 0 ,

so ω1 (kx0 − x∗ k) + ω0 (ky0 − x∗ k) + ω0 (kz0 − x∗ k) 2(1 − ω0 (kz0 − x∗ k))(1 − p(kx0 − x∗ k)) 3 ω1 (ky0 − x∗ k) + 2 (1 − ω0 (kx0 − x∗ k))(1 − p(kx0 − x∗ k)) ≤ b(kx0 − x∗ k),

kA¯ 0 k ≤

(7.30)

so kx1 − x∗ k ≤ [g(g2(kx0 − x∗ k)kx0 − x∗ k) + b(kx0 − x∗ k) = g3 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k,

Z 1 0

ω1 (τkz0 − x∗ k)dτ]kz0 − x∗ k (7.31)

so (7.17) holds for n = 0 and x1 ∈ T (x∗ , R). The induction for (7.15)– (7.17) is done for n = 0. Suppose estimates (7.15)-(7.17) hold for all i ≤ n − 1. By switching x0 , y0 , z0 , x1 by xi , yi , zi, xi+1 in the preceding calculations, we see that (7.15)-(7.17) hold for all n. Then, in view of the estimate kxi+1 − x∗ k ≤ rkxi − x∗ k < R, (7.32)

Sixth Order Methods

79

where r = g3 (kx0 − x∗ k) ∈ [0, 1), we conclude that xi+1 ∈ T (x∗ , R) and lim xi = x∗ . Let i−→∞

x∗∗ ∈ Ω1 with F(x∗∗ ) = 0. By the definition of Ω1 , (A2 ), (A5 ) and for Q =

Z 1

τ(x∗ − x∗∗ ))dτ, we obtain

kF 0 (x∗ )−1 (Q − F 0 (x∗ ))k ≤

Z 1 0

ω0 ((1 − τ)kx∗ − x∗∗ k)dτ ≤

Z 1 0

0

F 0 (x∗∗ +

ω0 (τR4 )dτ < 1,

so x∗ = x∗∗ , follows by the invertibility of Q and the identity 0 = F(x∗ ) − F (x∗∗ ) = Q(x∗ − x∗∗ ). Concerning the convergence of method (7.3), we set g¯0 = g0 , g¯ = g, g¯ 1 = g1 ,  2 9 ω0 (s) + ω0 (g¯0 (s)s) 16 1 − p(s) R1 ω1 (τs)dτ 5 ω0 (s) + ω0 (g¯0 (s)s) + +1 0 , 8 1 − p(s) 1 − ω0 (s)

ω0 (s) + ω0 (g¯1 (s)s) g2 (s) = g(s) ¯ + 1 − ω0 (s)

h¯ 2 (s) = g¯2 (s) − 1, 

ω1 (s) + ω0 (g¯0 (s)s) + ω0 (g¯2 (s)s) 2(1 − ω0 (g¯2 (s)s))(1 − p(s)) Z 1 3 ω1 (g¯2 (s)s) + ω1 (τg¯2 (s)s)dτ]g¯2(s) 2 (1 − ω0 (s))(1 − p(s)) 0

g¯3 (s) = [g( ¯ g¯2 (s)s) +

and h¯ 3 (s) = g¯3 (s) − 1. Suppose equations h¯ m (s) = g¯m (s) − 1 = 0

have least positive solutions denoted by R¯ m . Define a radius of convergence R¯ as R¯ = min{R¯ m }. Then, using (7.22), (7.31) and zn − x∗ = xn − x∗ − Bn F 0 (xn )−1 F(xn ), where instead of (7.28), 3 −1 0 15 0 −1 Bn = 9A−1 − I n F (xn ) + (An F (xn )) 2 2 3 13 −1 0 −1 0 −1 −1 0 2 = (An F (xn )) [9(An F (xn )) − An F (xn ) + I] 2 2 1 5 1 0 2 −1 0 = F 0 (xn )−1 An [9(A−1 n F (xn ) − I) + (An F (xn ) − I) − I], 2 2 2

(7.33)

80

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

so, we get kBn k ≤

ω0 (kxn − x∗ k) + ω0 (kyn − x∗ k) 1 − ω0 (kxn − x∗ k) +

9 16



ω0 (kxn − x∗ k) + ω0 (kyn − x∗ k) 1 − p(kxn − x∗ k)

2

5 (ω0 (kxn − x∗ k) + ω0 (kyn − x∗ k)) + 1), 8 1 − p(kxn − x∗ k)

and consequently kzn − x∗ k ≤ g¯2 (kxn − x∗ k)kxn − x∗ k ≤ kxn − x∗ k. ¯ where R, g0 , g, gm, hm are replaced by R, ¯ g¯0 , g, Consider conditions ((A ) as (A), ¯ g¯m, h¯ m , respectively. ¯ the conclusions of Theorem 19 hold but for method Theorem 20. Under the conditions (A) (7.3) and the aforementioned changes. Remark 9. (a) We can find the convergence order by resorting to the computational order of convergence (COC) defined by     kxn+1 − x∗ k kxn − x∗ k / ln ξ = ln kxn − x∗ k kxn−1 − x∗ k or the approximate computational order of convergence     kxn+1 − xn k kxn − xn−1 k ξ1 = ln / ln . kxn − xn−1 k kxn−1 − xn−2 k This way we obtain in practice the order of convergence without resorting to the computation of higher order derivatives appearing in the method or in the sufficient convergence criteria usually appearing in the Taylor expansions for the proofs of those results. (b) Clearly, if g3 (s) ≤ g¯3 (s) =⇒ R¯ ≤ R

g¯3 (s) ≤ g3 (s) =⇒ R ≤ R¯ and corresponding error estimates are tighter too.

3.

Conclusion

In this literature, a different set of criteria are used, based on the seventh derivative for convergence of sixth-order methods. Comparisons of these methods are done using numerical examples. But we do not know: if the results of those comparisons are true if the examples change; the largest radii of convergence; error estimates on the distance between the iterate and solution, and computable uniqueness results. We address these concerns using only the first derivative and a common set of criteria. Our technique can be used to make comparisons between other methods of the same order.

Sixth Order Methods

81

References [1] Alzahrani A. K. H., Bhel R., Alshomrani A., Some higher order iteration functions for solving nonlinear models, Appl. Math. Comput., 334, (2018), 80-93. [2] Amat S., Busquier S., Guti´errez J. M., Geometrical constructions of iterative functions to solve nonlinear equations. J. Comput. Appl. Math. 157, 197-205 (2003). [3] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [4] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-III, Nova Publishes, NY, 2019. [5] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2019. [6] Argyros I. K., George S., Magre˜na´ n A. A., Local convergence for multi-point- parametric Chebyshev-Halley-type method of higher convergence order. J. Comput. Appl. Math. 282, 215-224 (2015). [7] Argyros I. K., Magre˜na´ n A. A., Iterative method and their dynamics with applications, CRC Press, New York, USA, 2017. [8] Argyros I. K., Magre˜na´ n A. A., A study on the local convergence and the dynamics of Chebyshev-Halley-type methods free from second derivative. Numer. Algorithms 71, 1-23, (2015). [9] Cordero A., Torregrosa J. R., Variants of Newton’s method for functions of several variables. Appl. Math. Comput. 183, 199–208 (2006). [10] Cordero A., Torregrosa J. R., Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 190, 686–698 (2007). [11] Cordero A., Mart´ınez E., Torregrosa J. R., Iterative methods of order four and five for systems of nonlinear equations. Appl. Math. Comput. 231, 541–551 (2009). [12] Darvishi M. T., Barati A., A fourth-order method from quadrature formulae to solve systemsof nonlinear equations. Appl. Math. Comput. 188, 257–261 (2007). [13] Frontini M., Sormani E., Some variant of Newton’s method with third-order convergence. Appl. Math. Comput. 140, 419–426 (2003). [14] Frontini M., Sormani E., Third-order methods from quadrature formulae for solving systems of nonlinear equations. Appl. Math. Comput. 149, 771–782 (2004). [15] Gautschi W.: Numerical Analysis: An Introduction. Birkh¨auser, Boston (1997).

82

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

´ Noguera M., On the computational efficiency index and [16] Grau-S´anchez M., Grau A., some iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 236, 1259–1266 (2011). [17] Guti´errez J. M., Hern´andez M. A., A family of Chebyshev–Halley type methods in Banach spaces. Bull. Aust. Math. Soc. 55, 113–130 (1997). [18] Homeier H. H. H., A modified Newton method for root finding with cubic convergence. J. Comput. Appl. Math. 157, 227–230 (2003). [19] Homeier H. H. H., A modified Newton method with cubic convergence: the multivariable case. J. Comput. Appl. Math. 169, 161–169 (2004)13. Kelley C. T.: Solving Nonlinear Equations with Newton’s Method. SIAM, Philadelphia (2003). [20] Noor M. A., Wassem M., Some iterative methods for solving a system of nonlinear equations. Appl. Math. Comput. 57, 101–106 (2009). [21] Ortega J. M., Rheinboldt W. C., Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York (1970). [22] Ostrowski A. M., Solution of Equations and Systems of Equations. Academic Press, New York(1966). [23] Ozban A. Y., Some new variants of Newton’s method. Appl. Math. Lett. 17, 677–682 (2004). [24] Traub J. F., Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs (1964). [25] Sharma J. R., Guha R. K., Sharma R., An efficient fourth order weighted-Newton method for systems of nonlinear equations, Numer. Algor. (2013) 62:307–323. [26] Weerakoon S., Fernando T. G. I., A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 13, 87–93 (2000). [27] Wolfram S., The Mathematica Book, 5th edn. Wolfram Media (2003).

Chapter 8

Extended Jarratt-Type Methods 1.

Introduction

In this chapter, we consider Jarratt-type methods of order six for approximating a solution p of the nonlinear equation H(x) = 0. (8.1) Here H : Ω ⊂ X → Y is a continuously differentiable nonlinear operator between the Banach spaces X and Y, and Ω stand for an open non empty convex compact set of X. The Jarratttype methods of order six we are interested in is defined as follows [12]: 2 yn = xn − H 0 (xn )−1 H(xn ) 3 23 9 zn = xn − [ I − H 0 (xn )−1 H 0 (yn )(3I − H 0 (xn )−1 H(xn ))] 8 8 ×H 0 (xn )−1 H(xn )) 1 xn+1 = zn − (5I − 3 − H 0 (xn )−1 )H(yn ))H 0 (xn )−1 H(zn )). 2

(8.2)

The convergence of the above method has been shown using Taylor expansions involving the seventh order derivative not on these methods, of H. The hypotheses involving the seventh derivatives limit the applicability of these methods. For example: Let 1 3 B1 = B2 = R, Ω = [− , ]. Define f on Ω by 2 2  3 s logs2 + s5 − s4 , s 6= 0 f (s) = 0, s = 0. Then, we get f 0 (s) = 3s2 log s2 + 5s4 − 4s3 + 2s2 ,

f 00 (s) = 6s logs2 + 20s3 − 12s2 + 10s, f 000 (s) = 6 logs2 + 60s2 − 24s + 22.

Obviously f 000 (s) is not bounded on Ω. Hence, the convergence of methods (8.2) is not guaranteed by the earlier analysis [1]-[12].

84

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

In this chapter, we obtained the same convergence order using COC or ACOC (to be precise in Remark 10) that depend only on the first derivative and then iterates. Hence, we extended the applicability of the methods. Our technique can be used to compare other methods along the same lines. The rest of the chapter is organized as follows. The convergence analysis of the method (8.2) is given in Section 2.

2.

Convergence Analysis

Let S := [0, ∞). Assume there exists a continuous and increasing function ω0 defined on the interval S with values in itself such that the equation ω0 (s) − 1 = 0,

(8.3)

has a least positive solution called r0 . Set S0 = [0, r0). Assume there exist functions ω, ω1 defined on S0 with values in S. Define functions g1 and h1 on S0 as R1 1R1 0 ω0 ((1 − τ)s)dτ + 3 0 ω1 (τs)dτ g1 (s) = . 1 − ω0 (s) and

h1 (s) = g1 (s) − 1. Assume equation h1 (s) = 0.

(8.4)

has a least solution in (0, r0) denoted by R1 . Moreover, define functions g2 and h2 on S0 as b(s) 01 ω1 (τs)dτ g2 (s) = g(s) + 1 − ω0 (s) R

and

h2 (s) = g2 (s) − 1, where g(s) = and

R1 0

ω((1 − τ)s)dτ ω0 (s) + ω0 (g1 (s)s) , a(s) = 1 − ω0 (s) 1 − ω0 (s) 9 3 15 b(s) = a(s)2 + a(s) + . 8 4 8

Equation h2 (s) = 0 has a least solution in (0, r0) called R2 . Assume equation ω0 (g2 (s)s) − 1 = 0

(8.5)

(8.6)

Extended Jarratt-Type Methods

85

has a least solution r1 . Set r = min{r0 , r1 } and S1 = [0, r). Define functions g3 and h3 on S1 as (ω0 (g2 (s)s) + ω0 (s)) 01 ω1 (τg2 (s)s)dτ g3 (s) = [g(g2(s)s) + (1 − ω0 (s))(1 − ω0 (g2 (s)s)) R

3 a(s) 01 ω1 (g2 (s)s) + ]g2 (s) 2 1 − ω0 (s) R

and h3 (s) = g3 (s) − 1. Assume equation h3 (s) = 0

(8.7)

has a least solution in (0, r) called R3 . Define a radius of convergence R for method (8.2) as R = min{Rm }, m = 1, 2, 3..

(8.8)

0 ≤ ω0 (s) < 1

(8.9)

It follows 0 ≤ ω0 (g2 (s)s) < 1

(8.10)

0 ≤ gm (s) < 1,

(8.11)

and ¯ ) denote open and closed balls in X, respectively with hold for all s ∈ [0, R). Let U(x, γ), U(x, radius γ > 0 and center x ∈ X. We shall use the notation en = kxn − pk, for all n = 0, 1, 2, . . .. The following conditions (A ) are considered in our analysis. (A1 ) F : Ω −→ Y is differentiable in the Fr´echet sense and there exists a simple solution p of equation (8.1). (A2 ) There exists a continuous and increasing function ω0 : S0 −→ S such that for all x ∈ Ω kH 0 (p)−1 (H 0(x) − H 0 (p))k ≤ ω0 (kx − pk). Set Ω0 = Ω ∩U(p, r0 ). (A3 ) There exists a continuous and increasing functions ω : S0 −→ S, ω1 : S0 −→ S such that for all x, y ∈ Ω0 kH 0 (p)−1 (H 0 (y) − H 0 (x))k ≤ ω(ky − xk), H 0 (p)−1 H 0 (x)k ≤ ω1 (kx − pk). ¯ (A4 ) U(p, R) ⊂ Ω, and r0 , r1 exist and radius R is defined by (8.8). (A5 ) There exists T ≥ R such that Z 1 0

Set Ω1 = Ω ∩ U¯ (p, T ).

ω0 (τT )dτ < 1.

86

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Theorem 21. Assume conditions (A ) hold. Choose starter x0 ∈ U(p, R) − {p}. Then, sequence {xn } generated by method (8.2) is well defined, {xn } ∈ U(p, R), and lim xn = p. n−→∞ In addition, (i) the following items hold kyn − pk ≤ g1 (en )en ≤ en < R,

(8.12)

kzn − pk ≤ g2 (en )en ≤ en ,

(8.13)

kxn+1 − pk ≤ g3 (en )en ≤ en ,

(8.14)

and with functions gm given earlier and radius R defined by (8.8); and (ii) p is the only solution of equation (8.1) in the set Ω1 given in (A5 ). Proof. Let v ∈ U(p, R). In view of (8.8), (8.9), (A1 ) and (A2 ), we get kH 0 (p)−1(H 0 (v) − H 0 (p))k ≤ ω0 (kv − pk) < ω0 (R) ≤ 1, so kH 0 (v)−1H 0 (p)k ≤

1 , 1 − ω0 (kv − pk)

(8.15)

(8.16)

by a lemma of Banach on invertible operators [2]. We also have that y0 , z0 and x1 exist by method (8.2). By the first sub-step of method (8.2) for n = 0, we can write 1 y0 − p = (x0 − p − H 0 (x0 )−1 H(x0 )) + H 0 (x0 )−1 H(x0 ). 3

(8.17)

But then, using (8.8), (8.11) (for m = 1), A1 ), (A3 ), (8.16) (for v = x0 ), (8.17) and the triangle inequality, we obtain ky0 − pk ≤ kH 0(x0 )−1 H 0 (p)kk 0

−1

Z 1 0

+kH (x0 ) H(x0 )kk

H 0 (p)−1 (H 0 (p + τ(x0 − p)) − H 0 (x0 ))dτ(x0 − p)k

Z 1 0

H 0 (p + τ(x0 − p))dτ(x0 − p)k

1 1 0 ω((1 − τ)kx0 − pk)dτ + 3 0 ω1 (τkx0 − pk)dτ]kx0 − pk ≤ 1 − ω0 (kx0 − pk) ≤ g1 (kx0 − pk)kx0 − pk ≤ kx0 − pk < R,

[

R1

R

(8.18)

verifying (8.12) for n = 0 and y0 ∈ U(p, R). Then, by the second sub-step of method (8.2) for n = 0, we can write. z0 − p = x0 − p − H 0 (p)−1 H(x0 ) + M0 H 0 (p)−1H(x0 ), where M0 = −

15 9 I + Q0 (3I − Q0 ), for Q0 = H 0 (p)−1 H 0 (y0 ). But then 8 8 9 15 M0 = − Q20 + 3Q0 − I 8 8 9 15 = − (Q0 − I)2 + 3(Q0 − I) − I, 8 8

(8.19)

Extended Jarratt-Type Methods

87

and kQ0 − Ik = kH 0 (x0 )−1 [(H 0(y0 ) − H 0 (p)) + (H 0 (p) − H 0 (x0 ))]k ω0 (e0 ) + ω0 (ky0 − pk) ≤ 1 − ω0 (e0 ) ω0 (e0 ) + ω0 (g1 (e0 )e0 ) ≤ 1 − ω0 (e0 ) = a(e0 ), 15 9 kM0 k ≤ a(e0 )2 + 3a(e0 ) + = b(e0 ) 8 8 and kz0 − pk = k(x0 − p − H 0 (x0 )−1 H(x0 ))k

+kM0 kkH 0(x0 )−1 H 0 (p)kkH 0(p)−1 H(x0 )k

b(e0 ) 01 ω1 (τe0 )dτ ≤ [g(e0 ) + ]e0 1 − ω0 (e0 ) = g2 (e0 )e0 ≤ e0 , R

(8.20)

so (8.13) hold for n = 0 and z0 ∈ U(p, R). Next, we can write by the third sub-step of method (8.2) x1 − p

= (z0 − p − H 0 (z0 )−1 H(z0 )) + (H 0 (z0 )−1 − H 0 (x0 )−1 )H(z0 ) 3 − (I − Q0 )H 0 (x0 )−1 H(z0 ) 2 = (z0 − p − H 0 (z0 )−1 H(z0 )) + H 0 (z0 )−1 [(H 0 (x0 ) − H 0 (p)) + (H 0 (p) − H 0 (z0 ))H(z0 ) 3 − (I − Q0 )H 0 (x0 )−1 H(z0 ), 2

so (ω0 (kz0 − pk) + ω0 (kx0 − pk)) 01 ω1 (τkz0 − pk)dτ kx1 − pk ≤ [g(kz0 − pk) + (1 − ω0 (kx0 − pk))(1 − ω0 (kz0 − pk)) R

3 a(e0 ) 01 ω1 (τkz0 − pk)dτ + ]kz0 − pk 2 1 − ω0 (e0 ) = g3 (e0 )e0 ≤ e0 , R

(8.21)

completing the induction for estimations (8.12) -(8.14) for n = 0. Replace x0 , y0 , z0 , x1 by xi , yi , zi, xi+1 in the above calculations for i = 1, 2, . . ., n − 1 to complete the induction for estimations (8.12)-(8.14). Then, by the estimation kxi+1 − pk ≤ qkxi − pk < R,

(8.22)

where q = g3 (e0 ) ∈ [0, 1), we get xi+1 ∈ U(p, R) and lim xi = p. Consider v∗ ∈ Ω1 satisfyi−→∞

ing equation (8.1) and let B =

Z 1 0

kH 0 (p)−1 (B − H 0 (p))k ≤

0

H (p + τ(v∗ − p))dτ. Using (A2 ) and (A5 ), we have Z 1 0

ω0 ((1 − τ)kv∗ − pk)dτ ≤

Z 1 0

ω0 (τT )dτ < 1,

88

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

so p = v∗ , by the existence of B−1 and the estimate 0 = H(v∗ ) − H(p) = B(v∗ − p). Remark 10. We can find the convergence order by resorting to the computational order of convergence (COC) defined by     kxn − pk kxn+1 − pk / ln ξ = ln kxn − pk kxn−1 − pk or the approximate computational order of convergence     kxn+1 − xn k kxn − xn−1 k ξ1 = ln / ln . kxn − xn−1 k kxn−1 − xn−2 k This way, we obtain in practice the order of convergence without resorting to the computation of higher-order derivatives appearing in the method or in the sufficient convergence criteria usually appearing in the Taylor expansions for the proofs of those results.

3.

Conclusion

In earlier studies of Jarratt-type methods convergence order, six was shown using assumptions up to the seventh derivative of the operator involved. These assumptions on derivatives, not appearing in these methods limit the applicability of these methods. We address these concerns using only the first derivative, which only appears in these methods.

References [1] Amat S., Busquier S., Guti´errez J. M., Geometrical constructions of iterative functions to solve nonlinear equations. J. Comput. Appl. Math. 157, 197-205 (2003). [2] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Company, New York (2007). [3] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2019. [4] Argyros I. K., Magre˜na´ n A. A., Iterative method and their dynamics with applications, CRC Press, New York, USA, 2017. [5] Cordero A., Mart´ınez E., Torregrosa J. R., Iterative methods of order four and five for systems of nonlinear equations. J. Comput. Appl. Math. 231, 541–551 (2009). [6] Cordero A., Hueso J. L., Mart´ınez E., Torregrosa J. R., A modified Newton–Jarratt’s composition. Numer. Algor. 55, 87–99 (2010). [7] Cordero A., Hueso J. L., Mart´ınez E., Torregrosa J. R., Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 25, 2369–2374 (2012).

Extended Jarratt-Type Methods

89

[8] Darvishi M. T., Barati A., A fourth-order method from quadrature formulae to solve systems of nonlinear equations. Appl. Math. Comput. 188, 257–261 (2007). [9] Frontini M., Sormani E., Third-order methods from quadrature formulae for solving systems of nonlinear equations. Appl. Math. Comput. 149, 771–782 (2004). [10] Grau-S´anchez Noguera M. M., Amat S., On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 237, 363–372 (2013). [11] Grau-S´anchez M., Peris J. M., Guti´errez J. M.: Accelerated iterative methods for finding solutions of a system of nonlinear equations. Appl. Math. Comput. 190, 1815–1823 (2007). [12] Sharma J. R., Arrora H., Efficient Jarratt-like methods for solving systems of nonlinear equations, Calcolo, 1(2014), 193-210, DOI 10.1007/s10092-013-0097-1.

Chapter 9

Multipoint Point Schemes 1.

Introduction

In this chapter, we consider an efficient multipoint iterative schemes for approximating a solution x∗ of the nonlinear equation Q(x) = 0,

(9.1)

where Q : D ⊂ X → Y is a continuously differentiable nonlinear operator between the Banach spaces X and Y, and D stand for an open non empty convex compact set of X. The 3(m − 1), m ≥ 3 order scheme [18] we are interested is: y1k = xk − ϕ1 (xk )−1 Q(xk )

y2k = xk − ϕ2 (xk , y1k )−1 Q(xn ),

y3k = y2k − ϕ(xk , y1k )Q(y2k ) .. . ym k+1

=

(9.2)

ykm−1 − ϕ(xk , yk1 )Qykm−1 ),

where, ϕ1 (xk ) = Q0 (xk ), 1 ϕ2 (xk , y1k ) = (Q0(xk ) + Q0 (y1k )) 2 and

7 3 ϕ(xk , y1k ) = I − 4Q0 (xk )−1 Q0 (y1k ) + (Q0(xk )−1 Q0 (y1k ))2 , 2 2 These schemes are of order 3(m − 1), (m ≥ 3), using (m − 1) vector function evaluations, two frozen derivatives per iteration. The convergence of the scheme has been shown using Taylor expansions involving 3m − 2 order derivatives, not on these methods [1]. The hypotheses involving the higher-order derivatives limit the applicability of these methods. For 1 3 example: Let X = Y = R, D = [− , ]. Define f on D by 2 2  3 s logs2 + s5 − s4 , s 6= 0 f (s) = 0, s = 0.

92

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Then, we get f 0 (s) = 3s2 log s2 + 5s4 − 4s3 + 2s2 ,

f 00 (s) = 6s logs2 + 20s3 − 12s2 + 10s, f 000 (s) = 6 logs2 + 60s2 − 24s + 22,

x∗ = 1. Obviously f 000 (s) is not bounded on D. Hence, the convergence of scheme (9.2) is not guaranteed by the earlier analyses. Moreover, no computable convergence radii, upper error estimates on kxn − x∗ k, or results on the uniqueness of x∗ are given in earlier studies. Furthermore, a performance comparison is provided between them using numerical examples. Hence, we do not know in advance based on the same set of hypotheses for which scheme we can obtain a larger radius of convergence (i. e., more initial points x0 ); tighter error estimates on kxn − x∗ k (i.e. fewer iterates to obtain the desired error tolerance) and best information on the location of the solution. In this chapter, we address these concerns. The same convergence order is obtained using COC or ACOC (to be precise in Remark 11) that depend only on the first derivative and then iterates. Hence, we also extend the applicability of the scheme. Our technique can be used to compare other schemes [1]-[28] along the same lines. The rest of the chapter is organized as follows. The convergence analysis of the scheme (9.2) is given in Section 2, and examples are given in Section 3.

2.

Local Convergence

Let M = [0, ∞). Assume there exist continuous and increasing function ω0 : M −→ M such that equation ω0 (s) − 1 = 0, (9.3) has a least positive solution r0 . Set M0 = [0, r0). Assume there exists continuous and increasing function ω : M0 −→ M. Define functions g(1) and h(1) on M0 as (1)

g (s) =

R1 0

ω((1 − τ)s)dτ . 1 − ω0 (s)

and h(1) (s) = g(1)(s) − 1. Assume equation h(1)(s) = 0

(9.4)

p(s) − 1 = 0

(9.5)

has a least solution R1 ∈ (0, r0). Assume equation has a least positive solution r p , where 1 p(s) = (ω0 (s) + ω0 (g(1)(s)s)). 2

Multipoint Point Schemes

93

Set r = min{r0 , r p } and M1 = [0, r). Define functions g(2) and h(2) on M1 by (ω0 (s) + ω0 (g(1)(s)s)) 01 ω1 (τs)dτ . g (s) = g (s) + 4(1 − p(s))(1 − ω0 (s)) (2)

R

(1)

Assume h(2)(s) := g(2)(s) − 1 = 0. has a least solution R2 ∈ (0, r). Let a : M1 −→ M be given as a(s) =

3 2

ω0 (s) + ω0 (g(1)(s)s) 1 − ω0 (s)

+

ω0 (s) + ω0 (g(1)(s)s) 1 − ω0 (s)

(9.6)

!2 !

5 + . 2

Define functions ci , g(i), h(i), i = 3, 4, . . ., m on (0, r) as ci (s) = 1 +

a(s)

R1 0

ω1 (τg(i−1)(s)s)dτ , 1 − ω0 (s)

g(i) (s) = ci−1 (s)ci−2 (s) . . .c3 (s)g(2)(s) and h(i) (s) = g(i) (s) − 1. Assume equations h(i) (s) = 0

(9.7)

has least solution Ri ∈ (0, r). Define a radius of convergence R R = min{R j }, j = 1, 2, 3, . . ., m.

(9.8)

0 ≤ ω0 (s) < 1

(9.9)

0 ≤ p(s) < 1

(9.10)

0 ≤ g(i) (s) < 1,

(9.11)

By the definition of R

and ¯ δ) be its closure for all s ∈ [0, R). Consider the ball U(x, δ) = {y ∈ X : kx − yk < δ} and U(x, for δ > 0. We shall use the notation en = kxn − x∗ k, for all n = 0, 1, 2, . . .. The following conditions (H) are introduced: (H1) F : D −→ Y is differentiable and there exists a simple solution x∗ of equation (9.1).

94

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

(H2) There exists a continuous and increasing function ω0 : M −→ M such that for all x∈D kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ ω0 (kx − x∗ k). Set D0 = D ∩U(x∗ , r0 ).

(H3) There exists a continuous and increasing functions ω : M0 −→ M such that for each x, y ∈ D0 kF 0 (x∗ )−1 (F 0 (y) − F 0 (x))k ≤ ω(ky − xk). Set D1 = D ∩U(x∗ , r).

(H6) There exists a function ω1 : M1 −→ M such that for all x ∈ D1 kF 0 (x∗ )−1 F 0 (x)k ≤ ω1 (kx − x∗ k).

¯ ∗ , R) ⊂ D, r0 , r p exist, where R is defined by (9.8). (H7) U(x (H8) There exists R∗ ≥ R such that

Z 1 0

ω0 (τR∗ )dτ < 1.

Set D2 = D ∩ U¯ (x∗ , R∗ ).

Next, we have the items to present a local convergence analysis of the method (9.2). Theorem 22. Assume conditions (H) hold, and start with x0 ∈ T (x∗ , R) − {x∗ }. Then, sequence generated by (9.2) exists, stays in U(x∗ , R) and converges to x∗ . Moreover, the following items hold kyik − x∗ k ≤ g(1) (ek )ek ≤ ek < R, (9.12)

and

m kxk+1 − x∗ k = kym k − x∗ k ≤ g (ek )ek ≤ ek .

(9.13)

Furthermore, x∗ is the only solution of equation (9.1) in the set D2 given in (H6). Proof. Let z ∈ U(x∗ , R) − {x∗ }. Using (9.8), (9.9), (H1) and (H2):

kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ ω0 (kx − x∗ k) < ω0 (R) ≤ 1,

(9.14)

leading to kF 0 (x)−1 F 0 (x∗ )k ≤

1 , 1 − ω0 (kx − x∗ k)

(9.15)

by a lemma due to Banach for invertible operators [20]. We also see that y10 exists in view of the first sub-step of method (9.2) for k = 0. Then by the same sub-step, (9.8), (9.11) (for j = 1), (H1), (H3) and (9.14) (for z = x0 ), we get that ky10 − x∗ k = kx0 − x∗ − F 0 (x0 )−1 F(x0 )k ≤ kF 0 (x0 )−1 F 0 (x∗ )kk

Z 1 0

F 0 (x∗ )−1 (F 0 (x∗ + τ(x0 − x∗ ))

−F 0 (x0 ))dτ(x0 − x∗ )k ≤

R1 0

ω((1 − τ)kx0 − x∗ k)dτ 1 − ω0 (kx0 − x∗ k)

≤ g(1)(e0 )e0 ≤ e0 < R,

(9.16)

Multipoint Point Schemes

95

so (9.12)holds for i = 1 and y10 ∈ U(x∗ , R). To show the existence of y20 we need the invertibility of F 0 (x0 ) + F 0 (y0 ). Indeed, by (9.8), (9.9), (H2) and (9.16),

≤ ≤ ≤ ≤

k(4F 0 (x∗ ))−1 [2(F 0 (x0 ) + F 0 (y0 ) − 4F 0 (x∗ ))k 1 [2kF 0 (x∗ )−1 (F 0 (x0 ) − F 0 (x∗ ))k 4 +2kF 0 (x∗ )−1 (F 0 (y0 ) − F 0 (x∗ ))k) 1 (ω0 (kx0 − x∗ k) + ω0 (ky10 − x∗ k)) 2 1 (ω0 (e0 ) + ω0 (g(1)(e0 )e0 )) 2 p(e0 ) < 1,

so kϕ1 (x0 , y10 )−1 F 0 (x∗ )k ≤

1 . 4(1 − p(e0 )

(9.17)

(9.18)

Then, by (9.8), (9.11) (for i = 2), (9.14) (for z = x0 ), method (9.2) (second substep), (H4), (9.16), ky20 − x∗ k ≤ k(x0 − x∗ − F 0 (x0 )−1 F(x0 ))

+k(F 0 (x0 )−1 − 2(F 0 (x0 ) + F(y10 ))F(x0 )k

(ω0 (kx0 − x∗ k) + ω0 (ky10 − x∗ k)) 01 ω1 (τkx0 − x∗ k)dτ ≤ [g (e0 ) + ]e0 4(1 − p(e0 ))(1 − ω0 (e0 )) (1)

R

= g(2)(e0 |)e0 ≤ e0 ,

(9.19)

verifying (9.12) for i = 2 and y20 ∈ U(x∗ , R), where we also used kF 0 (x0 )−1 − 2(F 0 (x0 ) + F 0 (y10 ))−1 k

= F 0 (x0 )−1 [(F 0 (x0 ) − F 0 (x∗ )) + (F 0 (x∗ ) − F 0 (y10 ))](F 0 (x0 ) + F 0 (y10 ))−1 k ω0 (e0 ) + ω0 (ky10 − x∗ k) ≤ . (9.20) 4(1 − p(e0 ))(1 − ω0 (e0 )) By the third substep of method (9.2), we can first write y30 − x∗ = y20 − x∗ − ϕ(x0 , y10 )F 0 (x0 )−1 F(y20 ). But for A0 = F 0 (x) )−1 F 0 (y10 ), we have

(9.21)

96

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

7 kϕ(x0 , y10 )k = k I − 4F 0 (x0 )−1 F 0 (y10 ) 2 3 + (F 0 (x0 )−1 F 0 (y10 ))2 k 2 3 5 = k (A0 − I)2 − (A0 − I) − Ik 2 2 3 5 kA0 − Ik2 + kA0 − Ik + ≤ 2 2  2 3 ω0 (ky10 − x∗ k) + ω0 (e0 ) ≤ 2 1 − ω0 (e0 ) ω0 (g(1)(e0 )e0 ) + ω0 (e0 ) 5 + 1 − ω0 (e0 ) 2 = a(e0 ), +

(9.22)

where we also used kA0 − Ik = kF 0 (x0 )−1 F 0 (x∗ )F 0 (x∗ )−1

×[(F 0 (y10 ) − F 0 (x∗ )) + (F 0 (X∗ ) − F 0 (x0 ))]k ω0 (ky10 − x∗ k) + ω0 (e0 ) ≤ . 1 − ω0 (e0 )

(9.23)

Then, by (9.21)-(9.23) and the triangle inequality we have ky30 − x∗ k

R1

ω1 (τky20 − x∗ k)dτ 2 ]ky0 − x∗ k 1 − ω0 (e0 ) ≤ g3 (e0 )e0 ≤ e0 ,

≤ [1 +

a(e0 )

0

(9.24)

verifying (9.12) for i = 3 and y30 ∈ U(x∗ , R). Similarly, we show (9.12) and (9.13) for all i and k. Next, in view of the estimate kxk+1 − x∗ k ≤ λkxk − x∗ k < R,

(9.25)

where λ = gm (e0 ) ∈ [0, 1), we deduce xk+1 ∈ U(x∗ , R) and lim xk = x∗ . Let q ∈ D1 with F(q) = 0. Let T =

Z 1 0

k−→∞

0

F (x∗ + τ(q − x∗ ))dτ. In view of (H2) and (H6), we obtain

kF 0 (x∗ )−1 (T − F 0 (x∗ ))k ≤

Z 1 0

ω0 ((1 − τ)kq − x∗ k)dτ ≤

Z 1 0

ω0 (τR∗ )dτ < 1,

so x∗ = q, follows by the invertibility of T and the identity 0 = F(q) − F(x∗ ) = T (q − x∗ ). Remark 11. We can find the convergence order by resorting to the computational order of convergence (COC) defined by     kxn+1 − x∗ k kxn − x∗ k ξ = ln / ln kxn − x∗ k kxn−1 − x∗ k

Multipoint Point Schemes

97

or the approximate computational order of convergence     kxn+1 − xn k kxn − xn−1 k ξ1 = ln / ln . kxn − xn−1 k kxn−1 − xn−2 k This way, we obtain in practice the order of convergence without resorting to the computation of higher-order derivatives appearing in the method or the sufficient convergence criteria usually appearing in the Taylor expansions for the proofs of these results.

3.

Numerical Examples

Example 16. Let us consider a system of differential equations governing the motion of an object and given by F10 (x) = ex , F20 (y) = (e − 1)y + 1, F30 (z) = 1 with initial conditions F1 (0) = F2 (0) = F3 (0) = 0. Let F = (F1 , F2 , F3 ). Let X = Y = R3 , D = ¯ 1), x∗ = (0, 0, 0)T . Define function F on D for w = (x, y, z)T by U(0, F(w) = (ex − 1,

e−1 2 y + y, z)T . 2

The Fr´echet-derivative is defined by  ex 0 0 F 0 (v) =  0 (e − 1)y + 1 0  . 0 0 1 

1

1

Notice that using the (H) conditions, we get ω0 (s) = (e − 1)s, ω(s) = e e−1 s, ω1 (s) = e e−1 . The radii are R1 = 0.38269191223238574472986783803208 R2 = 0.43806655902644081601593484265322 R3 = 0.11902163179758018518583639888675 = R. Example 17. Let X = Y = C[0, 1], the space of continuous functions defined on [0, 1] be equipped with the max norm. Let D = U(0, 1). Define function F on D by F(ϕ)(x) = ϕ(x) − 5

Z 1

xθϕ(θ)3 dθ.

(9.26)

0

We have that 0

F (ϕ(ξ))(x) = ξ(x) − 15

Z 1 0

xθϕ(θ)2 ξ(θ)dθ, for each ξ ∈ D.

Then, we get that x∗ = 0, so ω0 (s) = 7.5s, ω(s) = 15s and ω1 (s) = 2. Then the radii are R1 = 0.066666666666666666666666666666667 R2 = 0.051844688088604318210173005354591 R3 = 0.016904077942269182810441918718425 = R.

98

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Example 18. Returning back to the motivational example at the introduction of this chapter, we have ω0 (s) = ω(s) = 96.6629073s and ω1 (s) = 2. The parameters for method (9.2) are R1 = 0.0068968199414654552878434223828208 R2 = 0.006019570514339506010770275423738 R3 = 0.0019514330131939221137787887627724 = R.

4.

Conclusion

In this chapter, we consider an efficient multipoint iterative scheme for solving the equation Q(x) = 0. Usually these schemes are compared using numerical examples. But we do not know: if the results of those comparisons are true if the examples change; the largest radii of convergence; error estimates on the distance between the iterate and solution, and computable uniqueness results. We address these concerns using only the first derivative and a common set of criteria. Numerical experiments are used to test the convergence criteria and further validate the theoretical results. Our technique can be used to make comparisons between other schemes of the same order.

References [1] Alzahrani A. K. H., Bhel R., Alshomrani A., Some higher order iteration functions for solving nonlinear models, Appl. Math. Comput., 334, (2018), 80-93. [2] Amat S., Busquier S., Guti´errez J. M., Geometrical constructions of iterative functions to solve nonlinear equations. J. Comput. Appl. Math. 157, 197-205 (2003). [3] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [4] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-III, Nova Publishes, NY, 2019. [5] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2019. [6] Argyros I. K., George S., Magre˜na´ n A. A., Local convergence for multi-point- parametric Chebyshev-Halley-type method of higher convergence order. J. Comput. Appl. Math. 282, 215-224 (2015). [7] Argyros I. K., Magre˜na´ n A. A., Iterative method and their dynamics with applications, CRC Press, New York, USA, 2017. [8] Argyros I. K., Magre˜na´ n A. A., A study on the local convergence and the dynamics of Chebyshev-Halley-type methods free from second derivative. Numer. Algorithms 71, 1-23, (2015).

Multipoint Point Schemes

99

[9] Cordero A., Torregrosa J. R., Variants of Newton’s method for functions of several variables. Appl. Math. Comput. 183, 199–208 (2006). [10] Cordero A., Torregrosa J. R., Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 190, 686–698 (2007). [11] Cordero A., Mart´ınez E., Torregrosa J. R., Iterative methods of order four and five for systems of nonlinear equations. Appl. Math. Comput. 231, 541–551 (2009). [12] Darvishi M. T., Barati A., A fourth-order method from quadrature formulae to solve systemsof nonlinear equations. Appl. Math. Comput. 188, 257–261 (2007). [13] Frontini M., Sormani E., Some variant of Newton’s method with third-order convergence. Appl. Math. Comput. 140, 419–426 (2003). [14] Frontini M., Sormani E., Third-order methods from quadrature formulae for solving systems of nonlinear equations. Appl. Math. Comput. 149, 771–782 (2004). [15] Gautschi W., Numerical Analysis: An Introduction. Birkh¨auser, Boston (1997). ´ Noguera M., On the computational efficiency index and [16] Grau-S´anchez M., Grau A., some iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 236, 1259–1266 (2011). [17] Guti´errez J. M., Hern´andez M. A., A family of Chebyshev–Halley type methods in Banach spaces. Bull. Aust. Math. Soc. 55, 113–130 (1997). [18] Hommeier H. H. H., A modified Newton method for root finding with cubic convergence. J. Comput. Appl. Math. 157, 227–230 (2003). [19] Hommeier H. H. H., A modified Newton method with cubic convergence: the multivariable case. J. Comput. Appl. Math. 169, 161–169 (2004)13. Kelley C. T., Solving Nonlinear Equations with Newton’s Method. SIAM, Philadelphia (2003). [20] Lofti T., Bakhtiari P., Cordero A., Mahdiani K., Torregrosa J. J., Some new efficient iterative methods for solving systems of equations, Intern. J. Computer Mathematics, 92, 9, (2015), 1921-1934. [21] Noor M. A., Wassem M., Some iterative methods for solving a system of nonlinear equations. Appl. Math. Comput. 57, 101–106 (2009). [22] Ortega J. M., Rheinboldt W. C., Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York (1970). [23] Ostrowski A. M., Solution of Equations and Systems of Equations. Academic Press, New York (1966). [24] Ozban A. Y., Some new variants of Newton’s method. Appl. Math. Lett. 17, 677–682 (2004).

100

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[25] Traub J. F., Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs (1964). [26] Sharma J. R., Guha R. K., Sharma R., An efficient fourth order weighted-Newton method for systems of nonlinear equations, Numer. Algor. (2013) 62:307–323. [27] Weerakoon S., Fernando T. G. I., A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 13, 87–93 (2000). [28] Wolfram S., The Mathematica Book, 5th edn. Wolfram Media (2003).

Chapter 10

Fourth Order Methods In this chapter we compare the ball convergence of two competing iterative methods for solving the equation G(x) = 0. Usually these methods are compared using numerical examples. But we do not know: if the results of those comparisons are true if the examples change; the largest radii of convergence; error estimates on distance between the iterate and solution, and uniqueness results that are computable. We address these concerns using only the first derivative and a common set of criteria. Numerical experiments are used to test the convergence criteria and further validate the theoretical results. Our technique can be used to make comparisons between other methods of the same order.

1.

Introduction

In this chapter, we compare the local convergence of two competing iterative methods for approximating a solution λ of the nonlinear equation G(x) = 0,

(10.1)

where Q : D ⊂ X → Y is a continuously differentiable nonlinear operator between the Banach spaces X and Y, and D stand for an open non empty convex compact set of X. The fourth order methods we are in interested are: Cordero et al.[13]: 2 yn = xn − G(xn )−1 G(xn ) 3 3 Bn G0 (xn )−1 G(xn ), xn+1 = xn − G0 (xn )−1 G(zn) + A−1 4 n Babajee et al.[9]: 2 yn = xn − G(xn )−1 G(xn ) 3 xn+1 = xn − A¯ n B¯ −1 n G(zn ). where, α ∈ R or α ∈ C, Cn = G0 (xn )−1 (G0(yn ) − G0 (xn )), 3 An = I + ( + α)Cn , Bn = Cn + αCn2 , 2

(10.2)

(10.3)

102

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al. C¯n = G0 (xn )−1 G0 (yn ) − I, G0 (xn ) + G“(yn ) B¯ = 2

and

1 3 A¯ n = I − Cn + Cn2 4 4 The convergence of these methods was shown using Taylor expansions involving fifth order derivatives not on these methods [9]-[13]. The hypotheses involving fifth order derivatives 1 3 limit the applicability of these methods. For example: Let X = Y = R, D = [− , ]. Define 2 2 f on D by  3 s logs2 + s5 − s4 , s 6= 0 f (s) = 0, s = 0. Then, we get f 0 (s) = 3s2 log s2 + 5s4 − 4s3 + 2s2 ,

f 00 (s) = 6s logs2 + 20s3 − 12s2 + 10s, f 000 (s) = 6 logs2 + 60s2 − 24s + 22,

and λ = 1. Obviously f 000 (s) is not bounded on D. Hence, the convergence of these methods are not guaranteed by the earlier analyses. In earlier studies, no computable convergence radii, upper error estimates on kxn − λk or results on the uniqueness of λ are given. Moreover, performance comparison is provided between them using numerical examples. Hence, we do not know in advance based on the same set of hypotheses for which method we can obtain larger radius of convergence (i. e., more initial points x0 ); tighter error estimates on kxn − λk (i.e. fewer iterates to obtain a desired error tolerance) and best information on the location of the solution. In this chapter, we address these concerns. The same convergence order is obtained using COC or ACOC (to be precise in Remark 12) that depend only on the first derivative and the iterates. Hence, we also extend the applicability of these methods. Our technique can be used to compare other methods [1]-[26] along the same lines. Rest of the chapter is organized as follows. The convergence analysis of method (10.2) and (10.3) are given in Section 2 and examples are given in Section 3.

2.

Convergence

A series of scalar functions and parameters is developed useful for the convergence analysis first of method (10.2). Let S = [0, ∞). Suppose there exist function ω0 : S −→ S continuous and nondecreasing such that equation ω0 (s) − 1 = 0, (10.4) has a least positive solution r1 . Set S0 = [0, r1 ).

Fourth Order Methods

103

Suppose there exist functions ω : S0 −→ S and ω1 : S0 −→ S continuous and nondecreasing. Define functions g1 and h1 on the interval S0 by R1 0

g1 (s) = and

R1

ω((1 − τ)s)dτ + 31 1 − ω0 (s)

0

ω1 (τs)dτ

,

h1 (s) = g1 (s) − 1.

Assume equation

h1 (s) = 0

(10.5)

p(s) − 1 = 0

(10.6)

has a least solution R1 ∈ (0, r1). Suppose equation has a least positive solution r p , where

ω0(s) + 3ω0 (g1 (s)s)) 3 . p(s) = ( + |α|)( 2 1 − ω0 (s)

Set r = min{r1 , r p }. Define functions g, q, g2 and h2 on [0, r) as g(s) = q(s) = (1 + |α|

R1 0

ω((1 − τ)s)dτ , 1 − ω0 (s)

ω0 (s) + ω0 (g1 (s)s) ω0 (s) + ω0 (g1 (s)s) )( ), 1 − ω0 (s) 1 − ω0 (s)

3q(s) 01 ω1 (τs)dτ g2 (s) = g(s) + 4(1 − p(s))(1 − ω0 (s)) R

and

h2 (s) = g2 (s) − 1. Suppose that equation h2 (s) = 0

(10.7)

R = min{R1 , R2 }.

(10.8)

has a least solution R2 ∈ (0, r). Set It shall be shown that R is a radius of convergence for method (10.2). We have by these definitions that for all s ∈ [0, R) 0 ≤ ω0 (s) < 1 0 ≤ p(s) < 1,

and

(10.9) (10.10)

0 ≤ g1 (s) < 1

(10.11)

0 ≤ g2 (s) < 1.

(10.12)

¯ δ) stand for open and closed ball in X of center x ∈ X and radius The sets U(x, δ), U(x, δ > 0, respectively. From now on we assume r1 , r p , R1 , R2 as previously defined exist. We consider conditions (H) for the study of both methods:

104

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

(H1) F : D −→ Y is differentiable in the Fr´echet sense and λ is a simple solution of equation (10.1). (H2) There exists a continuous and nondecreasing function ω0 : S −→ S such that for all x∈D kG0 (λ)−1 (G0(x) − G0 (λ))k ≤ ω0 (kx − λk). Set D0 = D ∩U(λ, r1 ). (H3) There exists a continuous and nondecreasing functions ω : S0 −→ S, ω1 : S0 −→ S such that for each x, y ∈ D0 kG0(λ)−1 (G0 (y) − G0 (x))k ≤ ω(ky − xk) and kG0 (λ)−1 G0 (x) ≤ ω1 (kx − λk). ¯ R) ⊂ D, where R is defined by (10.8). (H4) U(λ, (H5) There exists R∗ ≥ R such that Z 1 0

ω0 (τR∗ )dτ < 1.

Set D1 = D ∩ U¯ (λ, R∗). Next, the local convergence result for method (10.2) is introduced. Theorem 23. Suppose that the conditions (H) hold, and choose x0 ∈ U(λ, R) − {λ}. Then, lim xn = λ which uniquely solves equation (10.1) in the set D1 given in (H5). n−→∞

Proof. Let v ∈ U(λ, R) − {λ}. Then, by (10.9), (H1) and (H2), we obtain kG0 (λ)−1 (G0(v) − G0 (λ))k ≤ ω0 (kv − λk) ≤ ω0 (R) < 1, so kG0 (v)−1 G0 (λ)k ≤

1 , 1 − ω0 (kv − λk)

(10.13)

by a result by Banach on invertible operators [21] and y0 exists. Using method (10.2), (10.8), (10.11), (H3) and (10.13) for x0 = λ,, we get 1 ky0 − λk = kx0 − λ − G0 (x0 )−1 G(x0 ) + G0 (x0 )−1 G(x0 )k 3 0

−1

0

≤ kG (x0 ) G (λ)k[k

Z 1 0

G0 (λ)−1 (G0(λ + τ(x0 − λ)) − G0 (x0 ))dτ(x0 − λ)k

1 1 0 −1 0 + kG (λ) G (λ + τ(x0 − λ))dτ(x0 − λ)k 3 0 R1 1R1 0 ω((1 − τ)kx0 − λk)dτ + 3 0 ω1 (τkx0 − λk)dτ ≤ kx0 − λk 1 − ω0 (kx0 − λk) ≤ g1 (e0 )e0 ≤ e0 < R, Z

(10.14)

Fourth Order Methods

105

implying y0 ∈ U(λ, R). Next, we need the estimates kC0 k = k(G0 (x0 )−1 G0 (λ))G0(λ)−1 [(G0(y0 ) − G0 (λ)) + (G0 (λ) − G0 (x0 )]k ω0 (ky0 − λk) + ω0 (kx0 − λk) , (10.15) ≤ 1 − ω0 (kx0 − λk) 3 kA0 − Ik ≤ ( + |α|)kC0 k ≤ p(kx0 − λk) < 1, 2 so kA−1 0 k≤ and

1 , 1 − p(e0 )

(10.16)

kB0 k = kC0 (I + αC0 )k

≤ kC0 kkI + αC0 k

≤ kC0 k(1 + |α|kC0k) ≤ q(e0 ).

(10.17)

In view of (10.8, (10.12), (10.14)-(10.17) and method (10.2) kx1 − λk = k(x0 − λ − G0 (x0 )−1 G(x0 )) 3 B0 (G0(x0 )−1 F 0 (λ))(G0(λ)−1 G(x0 ))k + A−1 4 0 R 3q(e0 ) 01 ω1 (τe0 )dτ ≤ [g(e0 ) + ]e0 4(1 − p(e0 ))(1 − ω0 (e0 )) ≤ g2 (e0 )e0 ≤ e0 ,

(10.18)

implying x1 ∈ U(λ, R). Suppose estimates (10.14) and (10.18) hold for all j = 0, 1, 2, . . ., n− 1. Then, by repeating the preceding calculations, we have that the induction for estimates kyn − λk ≤ g1 (en )en ≤ en < R,

(10.19)

kxn+1 − λk ≤ g2 (en )en ≤ en < R,

(10.20)

and is completed. Then by (10.20), we have kxn+1 − λk ≤ γen

(10.21)

where γ = g2 (e0 ) ∈ [0, 1), leading to xn+1 ∈ U(λ, R) and lim xn = λ. Next, set M = n−→∞

Z 1 0

get

G0 (d + τ(µ − λ))dτ for some µ ∈ D1 with G(µ) = 0. In view of (H2) and (H5), we 0

−1

0

kG (λ) (M − G (λ))k ≤

Z 1 0

ω0 (τkµ − λk)dτ ≤

Z 1 0

ω0 (τR∗ )dτ < 1,

so λ = µ, follows since M −1 exists and 0 = G(µ) − G(λ) = M(µ − λ).

106

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Next, we study the convergence of method (10.3) in an analogous way but some functions are defined differently. Suppose that equation a(s) − 1 = 0 (10.22) has a least positive solution ra , where 1 a(s) = (ω0 (s) + ω0 (g1 (s)s)). 2 Set r¯ = min{r1 , ra }. Define functions g¯2 and h¯ 2 on [0, r¯) as b(s) 01 ω1 (τs)dτ g¯2 (s) = g(s) + (1 − ω0 (s))(1 − a(s)) R

and h¯ 2 (s) = g¯2 (s) − 1. Suppose that equation h¯ 2 (s) = 0

(10.23)

R¯ = min{R1 , R¯ 2 }.

(10.24)

has a least solution R¯ 2 ∈ (0, r¯). Define

Parameter R¯ shall be shown to be a radius of convergence for method (10.3). By these ¯ definitions and for all s ∈ [0, R) 0 ≤ ω0 (s) < 1, 0 ≤ a(s) < 1, 0 ≤ g1 (s) < 1 and 0 ≤ g¯2 (s) < 1. These scalar functions are motivated by the estimates kG0 (λ)−1 (

G0 (xn ) + G0 (yn ) − G0 (λ))k 2

1 (kG0 (λ)−1(G0 (xn ) − G0 (λ))k 2 +kG0 (λ)−1(G0 (yn ) − G0 (λ))k) 1 ≤ (ω0 (en ) + ω0 (kyn − λk)) ≤ a(en ) < 1, 2 ≤

so 0 kB¯ −1 n G (λ)k ≤

1 , 1 − a(en )

(10.25)

Fourth Order Methods

= =



≤ ≤

kG0 (λ)−1(B¯ n − G0 (xn )An )k 1 3 G0 (xn ) + G0 (yn ) − G0 (xn )(I − Cn + Cn2 )]k kG0 (λ)−1[ 2 4 4 1 0 −1 0 kG (λ) [(G (yn ) − G0 (xn )) 2 +G0 (xn )(G0(xn )−1 G0 (yn ) − I) 3 − G0 (xn )(G0 (xn )−1 G0 (yn ) − I)Cn ]k 2 1 [ω0 (en ) + ω0 (kyn − λk) 2 1 + (ω0 (en ) + ω0 (kyn − λk)) 2 3 ω0 (en ) + ω0 (kyn − λk) + (ω0 (en ) + ω0 (kyn − λk))( )] 2 1 − ω0 (en ) ω0 (en ) + ω0 (kyn − λk) 3 (1 + )(ω0 (en ) + ω0 (kyn − λk)) 4 1 − ω0 (en ) v(en ).

107

(10.26)

Consequently, we have for method (10.3) ¯ kyn − λk ≤ g1 (en )en ≤ en < R,

(10.27)

and kxn+1 − λk = k(xn − λ − G0 (xn )−1 G(xn )) +(G0 (xn )−1 − An B¯ −1 n )G(xn )k

= k(xn − λ − G0 (xn )−1 G(xn )) 0 0 −1 +G0 (xn )−1 (B¯ n − G0 (xn )An )(B−1 n G (λ))(G (λ) G(xn )k

≤ kxn − λ − G0 (xn )−1 G(xn )k

+kG0 (xn )−1 G(xn )kkG0(λ)−1 (B¯ n − G0 (xn )An )k 0 0 −1 ×kB¯ −1 n G (λ)kkG (λ) G(xn )k

b(en ) 01 ω1 (τen )dτ ]en ≤ [g(en ) + (1 − ω0 (en ))(1 − a(en )) = g¯2 (en )en ≤ en . R

(10.28)

Hence, we arrive at: ¯ g¯2 , respecTheorem 24. Suppose that the (H) conditions hold with R, g2 replaced by R, tively. Then, the conclusion of Theorem 23 hold for method (10.3).  Remark 12. We can find the convergence order by resorting to the computational order of convergence (COC) defined by     kxn+1 − x∗ k kxn − x∗ k ξ = ln / ln kxn − x∗ k kxn−1 − x∗ k

108

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

or the approximate computational order of convergence     kxn+1 − xn k kxn − xn−1 k / ln . ξ1 = ln kxn − xn−1 k kxn−1 − xn−2 k This way we obtain in practice the order of convergence without resorting to the computation of higher order derivatives appearing in the method or in the sufficient convergence criteria usually appearing in the Taylor expansions for the proofs of these results.

3.

Numerical Examples

Example 19. Let us consider a system of differential equations governing the motion of an object and given by F10 (x) = ex , F20 (y) = (e − 1)y + 1, F30 (z) = 1 with initial conditions F1 (0) = F2 (0) = F3 (0) = 0. Let F = (F1 , F2 , F3 ). Let X = Y = R3 , D = ¯ 1), λ = (0, 0, 0)T . Define function F on D for w = (x, y, z)T by U(0, G(w) = (ex − 1,

e−1 2 y + y, z)T . 2

The Fr´echet-derivative is defined by 

 ex 0 0 G0 (v) =  0 (e − 1)y + 1 0  . 0 0 1

1

1

Notice that using the (H) conditions, we get ω0 (s) = (e − 1)s, ω(s) = e e−1 s, ω1 (s) = e e−1 . The radii are R1 = 0.15440695135715407082521721804369 R2 = 0.04776497066561702364850816593389 = R ¯ R¯ 2 = 0.025499970727653292063008549916958 = R. Example 20. Let X = Y = C[0, 1], the space of continuous functions defined on [0, 1] be equipped with the max norm. Let D = U(0, 1). Define function F on D by G(ϕ)(x) = ϕ(x) − 5

Z 1

xθϕ(θ)3 dθ.

(10.29)

0

We have that G0 (ϕ(ξ))(x) = ξ(x) − 15

Z 1 0

xθϕ(θ)2 ξ(θ)dθ, for each ξ ∈ D.

Then, we get that λ = 0, so ω0 (s) = 7.5s, ω(s) = 15s and ω1 (s) = 2. Then the radii are R1 = 0.022222222222222222222222222222222 R2 = 0.0095545226733687524389715406414325 = R ¯ R¯ 2 = 0.0048252059434510516031102689282761 = R.

Fourth Order Methods

109

Example 21. Returning back to the motivational example at the introduction of this chapter, we have ω0 (s) = ω(s) = 96.6629073s and ω1 (s) = 2. The radii are R1 = 0.0022989399804884849513875177962063 R2 = 0.00077363854221430502534906370470935 = R ¯ R¯ 2 = 0.00039442284310697633617492918745029 = R.

4.

Conclusion

Different techniques are used to develop iterative methods. Moreover, different set of criteria usually based on the fifth derivative are needed in the ball convergence of four order methods. Then, these methods are compared using numerical examples. But we do not know: if the results of those comparisons are true if the examples change; the largest radii of convergence; error estimates on kxn − x∗ k and uniqueness results that are computable. We address these concerns using only the first derivative and a common set of criteria. Numerical experiments are used to test the convergence criteria and further validate the theoretical results. Our technique can be used to make comparisons between other methods of the same order.

References [1] Alzahrani A. K. H., Bhel R., Alshomrani A., Some higher order iteration functions for solving nonlinear models, Appl. Math. Comput., 334, 80-93, (2018). [2] Amat, S., Busquier, S., Guti´errez J. M., Geometrical constructions of iterative functions to solve nonlinear equations. J. Comput. Appl. Math., 157, 197-205, (2003). [3] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York, (2007). [4] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-III, Nova Publishes, NY, (2019). [5] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, (2020). [6] Argyros I. K., George S., Magre˜na´ n A. A., Local convergence for multi-point- parametric Chebyshev-Halley-type method of higher convergence order. J. Comput. Appl. Math., 282, 215-224, (2015). [7] Argyros I. K., Magre˜na´ n A. A., Iterative method and their dynamics with applications, CRC Press, New York, USA, 2017.

110

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[8] Argyros I. K., Magre˜na´ n A. A., A study on the local convergence and the dynamics of Chebyshev-Halley-type methods free from second derivative. Numer. Algor., 71, 1-23, (2015). [9] Babajee D. K. R., Cordero A., Soleymani F., Torregrosa J. R., On a novel fourth order algorithm for solving systems of equations, J. Appl. Analysis, Hidawi Publ., 2012, Article ID 165452. [10] Cordero A., Feng L., Magre˜na´ n A. A., Torregrosa J. R., A new fourth order family for solving nonlinear problems and its dynamics, J. Math. Chemistry, 53, 893-910, (2015). [11] Cordero A., Torregrosa J. R., Variants of Newton’s method for functions of several variables. Appl. Math. Comput., 183, 199–208, (2006). [12] Cordero A., Torregrosa J. R., Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput., 190, 686–698, (2007). [13] Cordero A., Mart´ınez, E., Torregrosa J. R., Iterative methods of order four and five for systems of nonlinear equations. Appl. Math. Comput., 231, 541–551, (2009). [14] Darvishi M. T., Barati A., A fourth-order method from quadrature formulae to solve systemsof nonlinear equations. Appl. Math. Comput., 188, 257–261, (2007). [15] Frontini M., Sormani E., Some variant of Newton’s method with third-order convergence. Appl. Math. Comput., 140, 419–426, (2003). [16] Frontini M., Sormani E., Third-order methods from quadrature formulae for solving systems of nonlinear equations. Appl. Math. Comput., 149, 771–782, (2004). ´ Noguera M., On the computational efficiency index and [17] Grau-S´anchez M., Grau A., some iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math., 236, 1259–1266, (2011). [18] Guti´errez J. M., Hern´andez M. A., A family of Chebyshev–Halley type methods in Banach spaces. Bull. Aust. Math. Soc., 55, 113–130, (1997). [19] Hommeier H. H. H., A modified Newton method for root finding with cubic convergence. J. Comput. Appl. Math., 157, 227–230, (2003). [20] Hommeier H. H. H., A modified Newton method with cubic convergence: the multivariable case. J. Comput. Appl. Math. 169, 161–169, (2004), 13. Kelley C. T., Solving Nonlinear Equations with Newton’s Method. SIAM, Philadelphia, (2003). [21] Ortega J. M., Rheinboldt W. C., Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York, (1970). [22] Ostrowski A. M., Solution of Equations and Systems of Equations. Academic Press, New York, (1966).

Fourth Order Methods

111

[23] Ozban A. Y., Some new variants of Newton’s method. Appl. Math. Lett., 17, 677– 682, (2004). [24] Traub J. F., Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs, (1964). [25] Sharma J. R., Guha R. K., Sharma R., An efficient fourth order weighted-Newton method for systems of nonlinear equations, Numer. Algor., 62,307–323, (2013). [26] Weerakoon S., Fernando T. G. I., A variant of Newton’s method with accelerated third-orderconvergence. Appl. Math. Lett., 13, 87–93, (2000).

Chapter 11

Inexact Newton Algorithm 1.

Introduction

The convergence region for algorithms is not large (in general) restricting their utilization of them to solve equations. That is why we develop a technique that without additional conditions extends the utilization of these algorithms. The technique is so general that it can be used for algorithms other than Newton’s. Let B1 , B2 stand for Banach spaces, D ⊂ B1 be open, and H : D −→ B2 be differentiable (continuously in the Fr´echet sense). Moreover, define the inexact Newton’s method (INA) by xn+1 = xn + Rn , (11.1) where {Rn } ⊂ B1 is a residual sequence. We have now the tools to solve the equation G(x) = 0.

(11.2)

A plethora of applications from nonlinear programming and other disciplines can turn looking like (11.2) using Mathematical modeling [1]-[15]. But solution x∗ of inclusion (11.2) is rarely attainable in a closed-form. That is why to find x∗ we rely on the limit of the sequence {xn } generated by INA. A semi-local result for INA was given by Ferreira in the elegant paper [10] using majorant functions. But the convergence domain is not large (in general) limiting the applicability of these results. That is why a certain technique is developed that determines a smaller set than D, (on which the earlier majorants are defined), D0 ⊂ D with also containing the iterates. But using D0 instead of D tighter majorants are obtained resulting in new and finer semi-local results than in [10]. In particular, the advantages include: enlarged convergence region, weaker convergence criteria and tighter error estimates (on kxn − x∗ k). The novelty of the chapter is also that the new results are obtained without additional conditions because the new majorant functions are specializations of the one in [10]. The technique is so general that it can be utilized to expand the applicability of other methods. The rest of the chapter includes the semi-local convergence of NA in Section 2, example in Section 3, and the conclusions in Section 4.

114

2.

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Convergence of NA

We develop certain Lipschitz type majorants needed in our semi-local convergence analysis of INA. Ler δ > 0. Definition 1. Suppose: there exists a continuously differentiable majorant function w0 : [0, δ) −→ R such that for all x ∈ U(z, δ) ⊂ D kG0(x0 )−1 [G0 (x) − G0 (z)]k ≤ w00 (kx − zk) − w00 (0).

(11.3)

Suppose that equation w00 (s) − w00 (0) − 1 = 0

(11.4)

has a minimal solution δ0 ∈ (0, δ]. Set D0 = D ∩U(z, δ0 ). Definition 2. Suppose: there exists a continuously differentiable function w : [0, δ0 ) −→ R such that for all x, y ∈ U(z, δ0 ) kG0(x0 )−1 [G0 (y) − G0 (x)]k ≤ w0 (ky − xk + kx − zk) − w0 (kx − zk).

(11.5)

Definition 3. Suppose: there exists a continuously differentiable function w1 : [0, δ) −→ R such that for all x, y ∈ U(z, δ) kG0 (x0 )−1 [G0(y) − G0 (x)]k ≤ w01 (ky − xk + kx − zk) − w01 (kx − zk).

(11.6)

Remark 13. These definitions imply w00 (s) ≤ w01 (s)

(11.7)

w0 (s) ≤ w01 (s)

(11.8)

and for each s ∈ [0, δ0 ] (since D0 ⊆ D). From now on we suppose w00 (s) ≤ w0 (s) for each s ∈ [0, δ0 ].

(11.9)

If not, the following results hold for w¯ replacing w, where w¯ denotes the largest of w0 , w on the interval [0, δ0 ]. Notice that (11.6) implies (11.3) and (11.5). Therefore, (11.3) and (11.5) are not additional conditions to (11.6) (used in [10]). The major deviation from the work in [10] is in the crucial observation concerning the estimate given in [10] using (11.6) kG0(z)−1 G0 (x0 )k ≤

−1 w01 (s)

(11.10)

for all x ∈ U(z, s), 0 < s < δ. But instead we use the actually needed (11.3) to obtain the tighter estimate −1 −1 kG0 (z)−1G0 (x0 )k ≤ 0 ≤ 0 (11.11) w0 (s) w (s) for all x ∈ U(z, s), 0 < s < δ0 . The modification together with (11.9) and the replacement of w1 with w in the rest of the estimates in the proof of Theorem 2.1 in [10] leads to the following extensions of them. Notice also that w = w(D0 , w0 , z) and w1 = w1 (D, z).

Inexact Newton Algorithm

115

The following conditions shall be used: (c1) w0 (0) > 0, w(0) > 0, w00 (0) = w0 (0) = −1; (c2) w00 , w0 are convex and strictly increasing; (c3) w(s) = 0 for some s ∈ (0, δ0 ); (c4) w(s) < 0 for some s ∈ (0, δ0 ); and (c5) kG0(x0 )−1 (−G(z))k ≤ w(0) = η. Conditions (11.3), (11.4), (11.5) and (c1)-(c5) are called the conditions (C) under which the following semi-local convergence results for INA are presented next. Theorem 25. Suppose: conditions (C) hold. Then, the following assertions hold: Let 1 , τ¯ := sup{s ∈ [0, ρ) : w(s) < 0}. β := sup −w(s), t∗ := min w(0) s∈[0,ρ) Take 0 ≤ δ
0, w(0) > 0, w00 (0) = w0 (0) = −1; w000 (0) > 0, w00 (0) > 0; (c2) w000 , w00 are convex and strictly increasing in (0, ρ0); (c3) w(s∗ ) = 0 for some s∗ ∈ (0, ρ0); and w0 (s∗ ) < 0.

Halley’s Method

121

(c4) kF 0 (x0 )−1 F(x0 )k ≤ w(0) and kF 0 (x0 )−1 F 00 (x0 )k ≤ w00 (0). Next, we present the semi-local convergence of HA. Theorem 26. Suppose that (12.3), (12.5), conditions (C) hold and ρ0 given in (12.4) exists. Then, the sequence {xn } generated by HA exists, stays in U(x0 , s∗ ) and converges to a ¯ 0 , s∗ ) of equation F(x) = 0, which is the only solution in U(x0 , s0∗ ), where solution x∗ ∈ U(x 0 s∗ = sup{s ∈ (s∗ , ρ0 ) : w0 (s) ≤ 0}. Moreover, the following estimates hold for each n = 0, 1, 2, . . .  3 en , en+1 := kx∗ − xn+1 k ≤ (s∗ − sn+1 ) s∗ − sn  00  w (s∗ )2 2D−w00 (s∗ ) 3 en+1 ≤ + en 3w0 (s∗ )2 −9w0 (s∗ ) and



en en+1 ≤ (s∗ − sn+1 ) sn+1 − sn

where

3

,

s0 = 0, sn+1 = sn − [I − Lw (sn )]−1w0 (sn )−1 w(sn ), 1 Lw (s) = w0 (s)−1 w00 (s)w0 (s)−1w(s). 2

Remark 16. If w0 = w = w1 , our Theorem 26 specializes to [11, Theorem 4.1]. Otherwise, it constitute an extension. In particular, notice that the semi-local convergence criteria in [11] are w1 (t∗ ) = 0 and w01 (t∗) < 0. But then, since w(0) > 0, w(t∗ ) ≤ w1 (t∗ ) = 0, w0 (t∗ ) ≤ w01 (t∗ ) < 0, so there exists s∗ ≤ t∗ so that w(s∗ ) = 0 and w1 (s∗ ) = 0. That is the convergence criteria in [11] imply the new ones but not necessarily vice versa. The same extensions hold for the error estimates if we compare {sn } with {tn } given as t0 = 0, tn+1 = tn − [I − Lw1 (tn )]−1w01 (tn )−1 w1 (tn ), 1 Lw1 (s) = w01 (s)−1 w001 (s)w01 (s)−1 w1 (s). 2 Let us consider some popular specializations Lipschitz case [1]-[6]: Choose for `0 ≤ ` ≤ `1 : β `0 w0 (s) = δ − s + s2 + s3 , 2 6 ` β w(s) = δ − s + s2 + s3 , 2 6

122

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

and

`1 β w1 (s) = δ − s + s2 + s3 . 2 6

The convergence criteria are: p 2(β + 2 β2 + 2`) p δ ≤ δ0 := 3(β + 2 β2 + 2`)2

and

Notice again that

p 2(β + 2 β2 + 2`1 ) p δ ≤ δ1 := . 3(β + 2 β2 + 2`1 )2

(12.12)

(12.13)

δ < δ1 =⇒ δ < δ0 but not necessarily vice versa, unless if ` = `1 . Smale-Wang case [16,17]: Choose for γ0 ≤ γ ≤ γ1 : w0 (s) = δ − s +

γ0 s2 , 1 − γ0 s

w(s) = δ − s +

γs2 , 1 − γs

w1 (s) = δ − s +

γ1 s2 . 1 − γ1 s

and

The sufficient convergence criteria are

√ α0 := δγ ≤ 3 − 2 2 and

√ α1 := δγ1 ≤ 3 − 2 2.

Then, again, we have

√ √ α1 ≤ 3 − 2 2 =⇒ α0 ≤ 3 − 2 2

but not necessarily vice versa unless, if γ = γ1 . Examples where `0 < ` < `1 or γ0 < γ < γ1 can be found in [1]-[6].

3.

Conclusion

The convergence region Halley’s Method (HM) is small, and the error estimates are pessimistic in general. We develop a technique through which we find a subset of the original set also containing the iterates. But in this case, the majorant functions are tighter leading to finer semi-local convergence analysis of HM without additional conditions. The advantages are an Extended convergence region, weaker convergence criteria, tighter error estimates, and more precise information on the solution.

Halley’s Method

123

References [1] Argyros I. K., On the Newton-Kantorovich hypothesis for solving equations, J. Comput. Appl. Math., 169, 315-332, (2014). [2] Argyros I. K., Ball convergence theorem for Halley’s method in Banach space, J. Appl. Math. Comput., 38, 453-465, (2012). [3] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020. [4] Argyros I. K., George S., On the complexity of extending the convergence region for Traub’s method, Journal of Complexity 56 (2020) 101423, https://doi.org/10.1016/j.jco.2019.101423. [5] Argyros I. K., Magre˜na´ n A. A., Iterative method and their dynamics with applications, CRC Press, New York, USA, 2017. [6] Ezquerro J. A., Hern´andez M. A., On the R-order of the Halley method, J. Math. Anal. Appl., 303, 591-601, (2005). [7] Ferreira O. P., Svaiter B. F., Kantorovich’s majorants principle for Newton’s method. Comput. Optim. Appl. 42, 213–229 (2009). [8] Gutie´rrez J. M., Hern´andez M. A., Newton’s method under weak Kantorovich conditions. IMA J. Numer. Anal. 20, 521-532 (2000). [9] Han D., Wang X., The error estimates of Halley’s method. Numer. Math. JCU Engl. Ser. 6, 231’240 (1997). [10] Hern´andez M. A., Romero N., On a characterization of some Newton-like methods of R-order at least three. J. Comput. Appl. Math. 183, 53-66 (2005). [11] Hern´andez M. A., Romero N., Toward a unified theory for third R-order iterative methods for operators with unbounded second derivative. Appl. Math. Comput. 215, 2248-2261 (2009). [12] Kantorovich L. V., Akilov G. P., Functional Analysis. Pergamon Press, Oxford (1982). [13] Ling Y., Xu X., On the semi-local convergence behavior for Halley’s method, Comput. Optim. Appl., February, 19, (2014). [14] Potra F. A., Pt´ak V., Nondiscrete Induction and Iterative Processes, Number 103 in Research Notes in Mathematics. Wiley, Boston (1984). [15] Proinov P. D., New general convergence theory for iterative processes and its applications to Newton- Kantorovich type theorems. J. Complexity 26, 3-42 (2010).

124

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[16] Smale S., Newton’s method estimates from data at one point. In: Ewing, R., Gross, K., Martin, C. (eds.) The Merging of Disciplines: New Directions in Pure, Applied and Computational Mathematics, pp. 185-196. Springer, New York (1986). [17] Wang X., Convergence on the iteration of Halley family in weak conditions. Chin. Sci. Bull. 42, 552-555 (1997). [18] Ye X., Li C.: Convergence of the family of the deformed Euler-Halley iterations under the Hlder condition of the second derivative. J. Comput. Appl. Math. 194, 294-308 (2006).

Chapter 13

Newton’s Algorithm for Singular Systems 1.

Introduction

Let E1 , E2 denote Euclidean spaces with dimE1 = i, dimE2 = j, (i, j are natural numbers), D ⊆ E1 be convex and open and F : D −→ E2 be a differentiable (continuously) operator in the Fr´echet sense. By I we denote the identity on E1 and ΠA denote the projection of A ⊂ E1 . Let x0 ∈ E1 and ρ > 0, then U(x0 , ρ) stands for the open ball of radius ρ and center x0 . We consider Newton’s Algorithm (NA) given for x0 ∈ D and each n = 0, 1, 2, . . . as xn+1 = xn − F 0 (xn )+F(xn )

(13.1)

to produce a sequence {xn } converging to a limit point x∗ satisfying F 0 (x∗ )+F(x∗ ) = 0,

(13.2)

where, M + is a linear operator M + : E2 −→ E1 (or a i × j matrix) satisfying MM +M = A, M +MM + = M + , (MM +) = MM + , (M + M)∗ = M + M

(13.3)

where M ∗ is the conjugate of M. In this case M + is the Moore-Penrose inverse of M [5]. The convergence analysis given by several authors has produced a convergence region that is not large in general limiting the applicability of NA [1]-[14]. That is why we developed a technique that extends this region, and improves the estimates on kxn − x∗ k without additional conditions. This development is realized since we determine a subset of D also containing the sequence {xn }. But this way, the majorant functions are tighter leading to the aforementioned extensions. The novelty of our chapter is that these developments use majorant functions which are specializations of the old ones in [14]. The rest of the chapter includes the semi-local convergence of NA in Section 2 and conclusions in Section 3.

126

2.

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Convergence of NA

We present two auxiliary results whose proofs can be found in [14]. Lemma 1. Let M1 and M2 be two i × j matrices with rankM1 = rankM2 = a and kM1+kkM2 − M1 k < 1. Then kM2+ − M1+ k ≤ c

kM1+kkM2 − M1 k , 1 − kM1+ kkM2 − M1 k

where  √  1+ 3   √ 2 c= 2    1

ifa < min{i, j}, ifa = min{i, j}(i 6= j), ifa = i = j.

√ 1+ 3 Remark 17. We focus only on those cases where a < min{i, j}. That is c = . The 2 full rank case, i.e., a = min{i, j}, can be studied in the similar way. Lemma 2. Let M1 and M2 be two i × j matrices with rank(M1 + M2 ) ≤ rankM1 = a and kM1+kkM2 k < 1. Then, rank(M1 + M2 ) = a and k(M1 + M2 )+ ≤

kM1+k . 1 − kM1+ kkM2 k

Let L0 , L, L1 positive and nondecreasing functions. Definition 7. Suppose that i = j. Let ρ > 0 and suppose there exists x ∈ D such that F 0 (x0 )−1 exists. (a) If +

0

0

kF(x0 ) kkF (x) − F (x0 )k ≤

Z kx−x0k 0

L0 (s)ds, x ∈ U(x0 , ρ),

(13.4)

then, F 0 is said to satisfy center-Lipschitz condition with L0 average in U(x0 , ρ). Suppose that equation L0 (s)s − 1 = 0 has a least solution ρ0 ∈ (0, ρ]. (b) If kF(x0 )+kkF 0 (x0 )−1 (F 0 (x) − F 0 (y))k ≤

Z kx−x0 k+ky−xk

L(s)ds,

(13.5)

kx−x0k

holds for each xy ∈ U(x0 , ρ0 ) and y ∈ U(x0 , ρ0 − kx − x0 k), then F 0 is said to satisfy restricted- center-Lipschitz condition in inscribed sphere with L average in U(x0 , ρ0 ).

Newton’s Algorithm for Singular Systems

127

(c) If +

0

0

kF(x0 ) kkF (x) − F (y)k ≤

Z ky−xk+kx−x0k kx−x0 k

L1 (s)ds,

(13.6)

holds for each xy ∈ U(x0 , ρ) and y ∈ U(x0 , ρ − kx − x0 k), then F 0 is said to satisfy center-Lipschitz condition in inscribed sphere with L1 average in U(x0 , ρ). Remark 18. From the definitions, we know that if F 0 satisfies center Lipschitz condition in the inscribed sphere with L1 average in U(x0 , ρ), then it satisfies center Lipschitz condition with L0 average and the restricted one with L average. Moreover, we have (since ρ0 ≤ ρ) L0 (s) ≤ L1 (s)

(13.7)

L(s) ≤ L1 (s)

(13.8)

and for all s ∈ [0, ρ0). Then, it follows from (13.7) and (13.8) that L1 used in [14] can be replaced by L0 where it is needed and by L elsewhere. Hence, due to this observation the proofs of the following extended results are omitted. Lemma 3. Suppose that F 0 satisfies (13.4), (13.5), x ∈ U(x0 , ρ) satisfies rankF 0 (x) ≤ rankF 0 (x0 ) = a and

Z kx−x0 k 0

L0 (s)ds < 1. Then, the following assertions hold

(a) rankF 0 (x) = q. 1 (b) kF (x)k ≤ kF (x0 )k + 0 kF (x0 )+ k 0

0

(c) kF 0 (x)+ k ≤

1−

kF 0 (x0 )+ k

R kx−x0 k 0

L0 (s)ds

Z kx−x0k 0

L0 (s)ds.

.

Lemma 4. Suppose that F 0 satisfies (13.4), (13.5), and

Z ρ 0

L0 (s)ds < 1. Let x ∈ U(x0 , ρ)

and y ∈ U(x, ρ0 − kx − x0 k) be such that rankF 0 (x) ≤ rankF 0 (x0 ) = a and rankF 0 (y) ≤ rankF 0 (x0 ) = a. Then, √ R ky−xk L(kx − x0 k + s)ds 1 + 5 kF 0 (x)+k2 kF 0 (x0 )+k−1 0 kF (y) − F (x) k ≤ . R ky−xk 2 1 − kF 0 (x)+ kkF 0 (x0 )+k−1 L(kx − x0 k + s)ds 0

+

0

+

0

Lemma 5. Suppose that F 0 satisfies (13.5), and x, y ∈ U(x0 , ρ) be such that kx − x0 k + ky − xk ≤ ρ0 . Then, the following assertions hold (a) kF 0 (x)(y − x) + F (x) − F(y)k ≤ (b) kF(x) − F(y)k ≤

1 0 kF (x0 )+k

1 0 kF (x0 )+k

Z ky−xk 0

Z ky−xk 0

(ky − xk − s)L(kx − x0 k + s)ds.

(ky − xk − s)L(kx − x0 k + s)ds + kF 0 (x)kky − xk.

128

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Definition 8. Let T := {ξ ∈ E 1 : F 0 (ξ)+F(ξ) = 0}.

(13.9)

In general, when F is a singular system, NA may converge to a point in T rather than a solution of the equation F = 0. It is convenient to introduce some parameters. α = kF 0 (x0 )+ kkF 0 (x0 )k, β = kF 0 (x0 )+ kkF(x0 )k, γ1 = γ3 =

Z β 0

Z β 0

L(2β + s)ds, γ2 =

(β − s)L(2β + s)ds, γ4 =

Z 2β 0

Z 2β 0

L0 (s)ds,

(2β − s)L(2β + s)ds,

√ γ3 1 + 5 γ1 (β + βγ2 + 3βα + γ3 + γ4 ) q= + . (β(1 − γ2 ) 2 β(1 − γ2 )(1 − γ1 − γ2 ) Next, we present the main semi-local result.

Theorem 27. Suppose that F 0 satisfies (13.4), (13.5) and, rankF 0 (x) ≤ rankF 0 (x0 ) for each x ∈ U(x0 , 2β). If 1 γ1 + γ2 < 1 and q ≤ , (13.10) 2 then the sequence {xn } developed by NA converges to a point ξ in T, which is defined by (13.9) and the following items hold  n 1 kxn+1 − xn k ≤ kx1 − x0 k, n = 1, 2, . . ., 2 x0 − ξk ≤ 2kx1 − x0 k. Remark 19. (a) If `0 = ` = `1 our result reduce to the ones in [14]. Otherwise they constitute an improvement. In particular, let γ11 + γ12 < 1 and

1 2 be the sufficient convergence criteria given in [14]( where L0 , L are L1 ). Then, by (13.7) and (13.8), we have q1 ≤

γ11 + γ12 < 1 =⇒ γ1 + γ2 < 1 q1 ≤

1 1 =⇒ q ≤ 2 2

and q ≤ q1 justifying the extension claims made in the introduction.

Newton’s Algorithm for Singular Systems

129

(b) Our results can further specialize to extend the results in [14] in the Lipschitz case, if we choose L0 , L, L1 be (with L0 < L < L1 or L ≤ L0 ≤ L1 ) constant functions or in the Smale-Wang case, if we choose L0 (s) =

2γ 2γ0 , L(s) = 3 (1 − γ0 s) (1 − γs)3

and L1 (s) =

2γ1 (1 − γ1 s)3

with (γ0 ≤ γ ≤ γ1 or γ ≤ γ0 ≤ γ1 ). Examples where these double inequalities are strict can be found in [1]-[4].

3.

Conclusion

The convergence region of Newton’s algorithm for solving singular systems with constant rank derivatives is extended without additional conditions.

References [1] Argyros I. K. and Hilout S., Weaker conditions for the convergence of newtons method. Journal of Complexity, 28(3):364-387, 2012. [2] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-III, Nova Publishes, NY, 2020. [3] Argyros I. K., George S., On the complexity of extending the convergence region for Traub’s method, Journal of Complexity 56 (2020) 101423, https://doi.org/10.1016/j.jco.2019.101423. [4] Argyros I. K., Magre˜na´ n A. A., Iterative method and their dynamics with applications, CRC Press, New York, USA, 2017. [5] Ben-Israel A., Greville T. N. E., Generalized Inverses: Theory and Applications, John Wiley, New York, 1974; second ed., Springer-Verlag, New York, 2003. [6] Dedieu J. P., Kim M., Newton’s method for analytic systems of equations with constant rank derivatives, J. Complexity 18 (2002) 187–209. [7] Guti´errez J. M., A new semilocal convergence theorem for Newton method, J. Comput. Appl. Math. 79 (1997) 131–145. [8] H¨aubler W. M., A Kantorovich-type convergence analysis for the Gauss–Newtonmethod, Numer. Math. 48 (1986) 119–125. [9] Hu N. C., Shen W. P., Li C., Kantorovich’s type theorems for systems of equations with constant rank derivatives, J. Comput. Appl. Math. (2007), doi:10.1016/j.cam.2007.07.006.

130

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[10] Kim M. H., Computational complexity of the Euler type algorithms for the roots of polynomials, thesis, CUNY, January 1986. [11] Li C., Zhang W. H., Jin X. Q., Convergence and uniqueness properties of Gauss– Newton’s method, Comput. Math. Appl. 47 (2004) 1057–1067. [12] Ortega J., Rheinboldt V., Numerical Solutions of Nonlinear Problems, SIAM, Philadelphia, 1968. [13] Smale S., Newton’s method estimates from data at one point, in R. Ewing, K. Gross, C. Maring (Eds.), The Merging of Disciplines: New Directions in Pure, Applied and Computational Mathematics, Springer-Verlag, New York, 1986, pp. 185–196. [14] Xu X., Li C., Convergence criterion of Newton’s method for singular systems with constant rank derivatives, J. Math. Anal. Appl. 345 (2008) 689–701.

Chapter 14

Gauss-Newton-Algorithm 1.

Introduction

The GNA for x0 ∈ D is developed as x0 ∈ D, xn+1 = xn − F 0 (xn )+ F(xn )

(14.1)

for generating a sequence {xn } that approximates a zero x∗ of F 0 (.)+F(.) under certain criteria on the initial data. Here F : D ⊂ Ri −→ R j is Fr´echet differentiable and T + := F 0 (.)+ is the Moore-Penrose [1]-[17] inverse of F 0 satisfying T T + T = T, T +T T + = T + , (T T + )∗ = T T + and (T + T )∗ = T + T,

(14.2)

where T : Ri −→ R j is a linear operator (equivalently a j × i matrix), T + : R j −→ Ri and T ∗ is the adjoint of T. The convergence region of iterative algorithms is not large in general for both the local and the semi-local convergence cases. Moreover, the error estimates are pessimistic (in general). A technique is developed that determines a stricter than the original region [11] containing the iterates of GNA. Then, the l− average functions are tighter than the ones used before leading to a finer convergence analysis of GNA. The novelty of this technique is that no additional conditions are specializations of the old ones. This technique is so general that it can be used to extend other algorithms too.

2.

Semi-Local Convergence

Let ρ0 = sup{s ≥ 0 : U(x0 , s) ⊆ D}. Let also `0 , `, `1 be nondecreasing functions on [0, ρ], ρ ∈ (0, ρ0 ) with values in [0, ∞). The following average Lipschitz-type conditions play a role in the semi-local convergence of GNA. Definition 9. We say that F 0 satisfies the center-modified `0 −average Lipschitz condition on U(x0 , ρ) if kF 0 (x0 )+ kkF 0 (x) − F 0 (x0 )k ≤ holds for all x ∈ U(x0 , ρ).

Z kx−x0 k 0

`0 (s)ds

(14.3)

132

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al. Suppose that equation

Z s 0

`0 (z)dz − 1 = 0

(14.4)

has a least solution s¯ ∈ (0, ρ0 ]. Set D0 = D ∩U(x0 , s). ¯ Definition 10. We say that F 0 satisfies the modified-restricted `−average Lipschitz condition on U(x0 , s) ¯ if kF 0 (x0 )+ kkF 0 (y) − F 0 (x)k ≤

Z kx−x0 k+ky−xk kx−x0k

`(s)ds

(14.5)

holds for all x, y ∈ U(x0 , s) ¯ with kx − x0 k + ky − xk < s. ¯ Definition 11. We say that F 0 satisfies the modified-restricted `1 −average Lipschitz condition on U(x0 , ρ) if 0

+

0

0

kF (x0 ) kkF (y) − F (x)k ≤

Z kx−x0k+ky−xk kx−x0 k

`1 (s)ds

(14.6)

holds for all x, y ∈ U(x0 , ρ) with kx − x0 k + ky − xk < ρ. Remark 20. In view of these definitions `0 (s) ≤ `1 (s)

(14.7)

`(s) ≤ `1 (s)

(14.8)

`0 (t) ≤ `(t) for all t ∈ [0, s]. ¯

(14.9)

and hold for all s ∈ [0, s]. ¯ We suppose from now on that

Otherwise, we replace `0 , ` in the results that follow, by `¯ where this function stands for the largest of `0 , ` on the interval [0, s]. ¯ It is also convenient to define functions on [0, ρ0 ] for η > 0 and µ ∈ [0, 1) by ϕ0µ (s) = η − (1 − µ)s +

Z s 0

ϕµ (s) = η − (1 − µ)s + and ϕ1µ (s) = η − (1 − µ)s +

`0 (z)(s − z)dz,

Z s 0

Z s 0

(14.10)

`(s)(s − z)dz

(14.11)

`1 (s)(s − z)dz.

(14.12)

We have by (14.7)-(14.9) that ϕ0µ (s) ≤ ϕµ (s) ≤ ϕ1µ (s)

(14.13)

Gauss-Newton-Algorithm

133

hold for all s ∈ [0, s]. ¯ By the intermediate value theorem, ϕ0µ (0) = ϕµ (0) = ϕ1µ (0) = η > 0 and ϕ1µ (λ) = 0 for any solution λ of equation ϕ1µ (s) = 0, we have by (14.13) that equations ˜ too with λ0 ≤ λ ˜ ≤ λ. Notice that (14.6) used in [11] ϕ0 (s) = ϕ(s) = 0 has solutions λ0 , λ implies (14.3) and (14.5), but not vice versa. Condition (14.6) was used in [11] to obtain the estimate kF 0 (x)+ k ≤ −(ϕ10 )0 (kx − x0 k)−1 kF 0 (x0 )+ k.

(14.14)

But if we use the weaker and actually needed (14.3) we obtain the tighter estimate kF 0 (x)+k ≤ −(ϕ00 )0(kx − x0 k)−1 kF 0 (x0 )+k

≤ −(ϕ10 )0(kx − x0 k)−1 kF 0 (x0 )+k.

(14.15)

Hence, ϕ00 can replace ϕ01 for these estimates, and ϕ can replace ϕ1 for the rest of the estimates in the proofs in [11] to obtain the finer semi-local convergence analysis claimed in the introduction. In order to present these results in this extended setting, we still need to define rµ := sup{s ∈ (0, ρ) : βµ = (1 − µ)rµ − and δ= and

 Z   rµ ≥

where r¯µ ∈ [0, ρ] satisfies

  rµ =

Z rµ 0

 rµ

Z 0 rµ 0

Z s 0

Z rµ 0

`(z)dz ≤ 1 − µ}

(14.16)

`(z)(rµ − z)dz,

ρ, if δ < 1 − µ r¯µ , if δ ≥ 1 − µ `(z)zdz, if δ < 1 − µ `(z)zdz, if δ ≥ 1 − µ,

`(z)dz = 1 − µ.

Moreover, we define the scalar sequence developed by sµ,0 = 0, sµ,n+1 = sµ,n −

ϕµ (sµ,n ) ϕ10 (sµ,n )

(14.17)

for all n = 0, 1, 2, . . .. From now on we shall also use the conditions kF 0 (y)+ (I − F 0 (x)F 0 (x)+)F(x)k ≤ κkx − yk for all x, y ∈ D0

(14.18)

rank(F 0 (x)) ≤ rank(F 0 (x0 )) for all x ∈ D0 .

(14.19)

and We also suppose that F 0 satisfies the four standard Moore-Penrose axioms (14.2). Furthermore, we need the standard auxiliary results [16].

134

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Proposition 6. Let M0 and M be i × j matrices. Assume that 1 ≤ rank(M0 ) ≤ rank(M) and kM +kkM0 − Mk < 1. Then rank(M0 ) = rank(M) and kM0+ k ≤

kM +k . 1 − kM + kkM0 − Mk

Proposition 7. Suppose that β ≤ βµ . Then, the following items hold. (i) The function ϕµ is strictly decreasing on [0, rµ] and has exact one zero s∗µ in [0, rµ] satisfying β < s∗µ . (ii) The sequence {sµ,n } defined by (14.17) is strictly increasing and converges to s∗µ . Proposition 8. Suppose that 0 < r ≤ r0 satisfies U(x0 , r) ⊆ D and that DF satisfies (14.3). Then, for each x ∈ U(x0 , r), rank(DF(x)) = rank(DF(x0 )) and DF(x)+ k ≤ −ϕ00 (kx − x0 k)−1 kDF(x0 )+ k. Let +

β := kDF(x0 ) F(x0 )k and µ0 = κ(1 −

Z β

`(z)dz).

0

Theorem 28. Let µ ≥ µ0 . Suppose that ¯ 0 , s∗µ ) ⊆ D β ≤ βµ and U(x and that DF satisfies (14.3) and (14.4) on U(x0 , s∗µ ). Let {xn } be the sequence generated by ¯ 0 , s∗µ ) and GNA with initial point x0 . Then, {xn } converges to a zero x∗ of DF(.)+F(.) in U(x for each n ≥ 0, the following items hold. kxn − x∗ k ≤ s∗µ − sµ,n and kxn+1 − xn k ≤ sµ,n+1 − sµ,n . Corollary 1. Suppose that ¯ 0 , s∗µ ) ⊆ D β ≤ βµ and U(x

and that DF satisfies (14.3) and (14.4) on U(x0 , s∗µ ). Let {xn } be the sequence generated by ¯ 0 , s∗µ ) and GNA with initial point x0 . Then, {xn } converges to a zero x∗ of DF(.)+F(.) in U(x for each n ≥ 0, the following items hold with µ = κ........... kxn − x∗ k ≤ s∗µ − sµ,n and kxn+1 − xn k ≤ sµ,n+1 − sµ,n .

Gauss-Newton-Algorithm

135

Theorem 29. Suppose that DF(y)+ (IRi − DF(x)DF(x)+F(x)k = 0 for any x, y ∈ D0 (that if κ = 0). Suppose that ¯ 0 , s∗µ ) ⊆ D β ≤ βµ and U(x and that DF satisfies (14.3) and (14.4) on U(x0 , s∗µ ). Let {xn } be the sequence generated by ¯ 0 , s∗µ ) and GNA with initial point x0 . Then, {xn } converges to a zero x∗ of DF(.)+F(.) in U(x for each n ≥ 0, the following items hold. kxn − x∗ k ≤ s∗0 − sn , kxn+1 − xn k ≤ sn+1 − sn and kxn+1 − xn k ≤



 sn+1 − sn kxn − xn−1 k. sn − sn−1

Corollary 2. Let x0 ∈ D be such that DF(x0 ) is full row rank. Suppose that ¯ 0 , s∗µ ) ⊆ D β ≤ βµ and U(x and that DF satisfies (14.3) and (14.4) on U(x0 , s∗µ ). Let {xn } be the sequence generated by ¯ 0 , s∗µ ) and GNA with initial point x0 . Then, {xn } converges to a zero x∗ of DF(.)+F(.) in U(x for each n ≥ 0, the following items hold.   sn+1 − sn + kDF(x0 ) F(xn )k ≤ kDF(x0 )+ F(xn−1 )k. sn − sn−1

3.

Local Convergence

Recall that r0 is defined by (14.16) for µ = 0. Through the whole section, let x∗ ∈ D be such that F 0 (x∗ ) = 0 and DF(x∗ ) 6= 0. Furthermore, we shall assume that U(x∗ , r0 ) ⊆ D and rank(DF(x)) ≤ rank(DF(x∗ )) for any x ∈ D0 . The following auxiliary result estimates the quantity kDF(x0 )+F(x0 )k. Lemma 6. Let 0 < r ≤ r0 . Suppose that DF satisfies (14.3) and (14.5) on U(x∗ , r). Then, for each x0 ∈ U(x∗ , r), rank(DF(x0 )) = rank(DF(x∗ )) and +

kDF(x0 ) F(x0 )k ≤

kx0 − x∗ k +

R kx0 −x∗ k 0

1−

`(z)(z − kx0 − x∗ k)dz

R kx∗0 k 0

`0 (z)dz

.

136

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Lemma 7. Suppose that DF satisfies (14.3) and (14.5) on U(x∗ , r0 ). Let x0 ∈ U(x∗ , r0 ) and let L : [0, ρ − kx0 − x∗ k) −→ R be defined by ¯ = `(z)

`(u + kx0 − x∗ k

1−

R kx0 −x∗ k 0

`0 (z)dz

for each z ∈ [0, ρ − kx0 − x∗ k).

Then, the following items hold.

(i) rk ≤ r¯k + kx0 − x∗ k ≤ r0 , where r¯k is given by (14.6) with `¯ and κ in place of ` and µ. ¯ (ii) DF satisfies the modified `−average Lipschitz condition on U(x0 , r0 − kx0 − x∗ k). Define the function ψk on [0, rk] by ψk (s) = bk − (2 − κ)s + κ(rk − s)

Z s 0

`(z)dz + 2

Z s 0

`(z)(s − z)dz

for each s ∈ [0, rk]. Lemma 8. ψk is a strictly decreasing continuous function on [0, rk], and has exact one zero bκ r¯k in [0, rk] and it satisfies < r¯k < rk . 2−κ Theorem 30. Suppose that DF satisfies (14.3) and (14.5) on U(x∗ , r0 ). Let x0 ∈ U(x∗ , r) and let {xn } be the sequence generated by GNA with initial point x0 . Then, {xn } converges to a zero of DF(.)+F(.). Corollary 3. Suppose that DF satisfies (14.3) and (14.5) on U(x∗ , r0 ). Let x0 ∈ U(x∗ , r¯0 ) and let {xn } be the sequence generated by GNA with initial point x0 . Then, {xn } converges to a zero of DF(.)+F(.). Corollary 4. Suppose that DF satisfies (14.3) and (14.5) on U(x∗ , r0 ). Let x0 ∈ U(x∗ , r¯0 ) and let {xn } be the sequence generated by GNA with initial point x0 . Then, {xn } converges to a zero of F(.). Remark 21. If `0 = ` = `1 , in our results reduce to the ones in [11] for both the local and the semi-local convergence case. Otherwise, they constitute an extension with advantages as already stated previously. These advantages are arrived at without additional conditions since the computation of `1 requires that of `0 and ` as special cases. The Kantorovich and Smale-Wang cases can also be extended immediately, since they constitute specializations of the results of the last two sections [11,14,15,17]. Examples where (14.3) , (14.5) and (14.6) are strict can be found in [1]-[5].

4.

Conclusion

The convergence region of iterative algorithms is not large in general for both the local and the semi-local convergence cases. Moreover, the error estimates are pessimistic (in general). A technique is developed that determines a stricter than the original region containing the iterates of the Gauss-Newton-Algorithm (GNA). Then, the l− average functions are tighter than the ones used before leading to a finer convergence analysis of GNA. The novelty of this technique is that no additional conditions are specializations of the old ones. This technique is so general that it can be used to extend other algorithms too.

Gauss-Newton-Algorithm

137

References [1] Argyros I. K., On an extension of the mesh-independence principle for operator equations in Banach spaces, Appl. Math. Lett., 9(3), 1-7, (1996). [2] Argyros I. K., On the Newton-Kantorovich hypothesis for solving equations, J. Comput. Appl. Math. 169 (2004) 315–332. [3] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [4] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020. [5] Argyros I. K., George S., On the complexity of extending the convergence region for Traub’s method, Journal of Complexity 56 (2020) 101423, https://doi.org/10.1016/j.jco.2019.101423. [6] Ben-Israel A., Grenville T. N. E., Generalized Inverses: Theory and Applications, John Wiley, New York, 1974; 2nd edition, Springer-Verlag, New York, 2003. [7] Dedieu J. P., Kim M. H., Newton’s method for analytic systems of equations with constant rank derivatives, J. Complexity 18 (2002) 187–209. [8] Ezquerro J. A., Hern´andez M. A., Generalized differentiability conditions for Newton’s method, IMA J. Numer. Anal. 22 (2002) 187–205. [9] Ezquerro J. A., Hern´andez M. A., On an application of Newton’s method to nonlinear operators with w -conditioned second derivative, BIT 42 (2002) 519–530. [10] Gragg W. B., Tapia R. A., Optimal error bounds for the Newton-Kantorovich theorems, SIAM J. Numer. Anal. 11 (1974) 10–13. [11] Li C., Hu N., Wang J., Convergence behaviour of Gauss-Newton’s method and extensions of the Smale point estimate theory, J. Complexity, 26, 268-295, (2010). [12] Guti´errez J. M., Hernndez M. A., Newton’s method under weak Kantorovich conditions, IMA J. Numer. Anal. 20 (2000) 521–532. [13] H¨aubler W. M., A Kantorovich-type convergence analysis for the Gauss–Newtonmethod, Numer. Math. 48 (1986) 119–125. [14] Kantorovich L. V., Akilov G. P., Functional Analysis (Howard L. Silcock, Trans.), second edition, Pergamon Press, Oxford, 1982, (in Russian). [15] Smale S., Newton’s method estimates from data at one point, in: R. Ewing, K. Gross, C. Martin (Eds.), The Merging of Disciplines: New Directions in Pure, Applied and Computational Mathematics, Springer-Verlag, New York, 1986, pp. 185–196.

138

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[16] Stewart G. W., Sun J. G., Matrix Perturbation Theory, Academic Press, New York, 1990. [17] Xu X. B., Li C., Convergence criterion of Newton’s method for singular systems with constant rank derivatives, J. Math. Anal. Appl. 345 (2008) 689–701.

Chapter 15

Newton’s Algorithm on Riemannian Manifolds 1.

Introduction

We determine a singularity z∗ for a vector field X(z) = 0,

(15.1)

where M is a Riemannian manifold, D ⊆ M is an open set, and X : D −→ T M is a differentiable (continuously ) vector field. Kantorovich-type semi-local convergence conditions have been given in [1]-[11] for Newton’s algorithm (NA) [5] defined for all m = 0, 1, 2, . . . by zm+1 = expzm (−∇X(zm ))−1 X(zm) (15.2) to generate a sequence {zm} converging to a singularity z∗ of X given in (15.1). These results generalized and extended earlier ones [5]. But the convergence region of NA is small in general. We present a new finer semi-local convergence analysis for NA/

2.

Convergence

The following majorant conditions are needed in the semi-local convergence analysis of NA. Let D ⊂ M be an open set and ρ > 0 be a given parameter. Set S = [0, ρ). Definition 12. A continuous differentiable function w0 : S −→ R is called a center majorant function at z0 ∈ D, for a continuously differentiable vector field (CDVF) X : D −→ T M with respect to Gn (z0 , ρ), if ∇X(z0 ) is invertible, U(z0 , ρ) ⊂ D, and k∇X(z0 )−1 [Pλ,1,0 ∇X(z)Pλ,0,1 − ∇X(z0 )]k ≤ w00 (d(z0 , z)) − w00 (0),

(15.3)

for all z ∈ Gn (z0 , ρ). Suppose that w00 (t) − w00 (0) − 1 = 0 has a least solution ρ0 ∈ (0, ρ]. Set S0 = [0, ρ0 ).

(15.4)

140

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Definition 13. A continuously differentiable function w : S0 −→ R is called a restricted majorant function at z0 ∈ D, for a CDVF X : D −→ T M with respect to Gn (z0 , ρ), if ∇X(z0 ) is invertible, and k∇X(z0 )−1 [Pλ,β,0 ∇X(λ(β))Pλ,α,β − Pλ,α,0 ∇X(λ(α))]k ≤ w0 (l[λ, 0, β]) − w0(l[λ, 0, α]), (15.5) for all λ ∈ Gn (z0 , ρ0 ) with α, β ∈ Dom{λ} and 0 ≤ α ≤ β. Definition 14. A continuously differentiable function w1 : S0 −→ R is called a majorant function at z0 ∈ D, for a CDVF X : D −→ T M with respect to Gn (z0 , ρ), if ∇X(z0 ) is invertible, and k∇X(z0 )−1 [Pλ,β,0 ∇X(λ(β))Pλ,α,β − Pλ,α,0 ∇X(λ(α))]k ≤ w01 (l[λ, 0, β]) − w01 (l[λ, 0, α]), (15.6) for all λ ∈ Gn (z0 , ρ0 ) with α, β ∈ Dom{λ} and 0 ≤ α ≤ β. Remark 22. These definitions imply w00 (t) ≤ w01 (t)

(15.7)

w0 (t) ≤ w01 (t)

(15.8)

w00 (t) ≤ w0 (t) for all t ∈ [0, ρ0).

(15.9)

and for all t ∈ [0, ρ0], since ρ0 ≤ ρ. We also suppose that

Otherwise, we replace w0 , w by w¯ in the result that follow, with this function standing for the largest of w0 , w on the interval [0, ρ0 ). Notice that (15.6) implies (15.3) and (15.5) but not necessarily vice versa, if w0 = w = w1 . Condition (15.6) was used in [5]. Then, the following estimate was obtained k∇X(λ(s))−1Pλ,0,s ∇X(z0 )k ≤

1 |w01 (l[λ, 0, s])|



1 |w01 (t)|

(15.10)

for l[λ, 0, s] ≤ t. But if we use the actually weaker and needed condition (15.3), we obtain the tighter bound k∇X(λ(s))−1Pλ,0,s ∇X(z0 )k ≤

1 1 1 ≤ ≤ . |w00 (l[λ, 0, s])| |w00 (t)| |w0 (l[λ, 0, s])|

(15.11)

Hence, (15.11) can replace (15.10) and w can replace w1 in the proofs of the results in [5]. Hence, we arrived at the main semi-local convergence result for NA. Theorem 31. Let M, D, ρ, ρ0, w0 , w, z0 be as previously. Further, suppose (15.3), (15.5), and (c1) w0 (0) > 0, w(0) > 0 and w00 (0) = w0 (0) = −1;

Newton’s Algorithm on Riemannian Manifolds

141

(c2) w00 , w0 are convex and strictly increasing; (c3) w(t) = 0 for some t ∈ (0, ρ0). (c4) w(t) < 0 for some t ∈ (0, ρ0). (c5) k∇X(z0 )−1 X(z0 )k ≤ w(0) = η. Set 4 = sup{−w(t) : t ∈ [0, ρ0 ) and choose r ∈ [0, h(t) =

1 |w0 (r)|

4 ] and define h : [0, ρ0 − r) −→ R by 2

[w(t + r) + 2r].

Then, the following items hold: h has the smallest zero for s∗,ρ ∈ (0, R − ρ), the sequences generated by Newton’s method for solving the equation X(z) = 0 and the equation h(t) = 0 with any starting point z0 ∈ U(z0 , ρ) and s0 = 0, respectively, zm+1 = expzm (−∇X(zm )−1 X(zm )), sm+1 = sm −

h(sm) , m = 0, 1, 2, . . . h0 (sm)

are well defined, {zm} is contained in U(z0 , s∗,ρ ) and {sm } is strictly increasing, is contained in [0, s∗,ρ), and converging to s∗,ρ. Moreover, {zm } and {sm } satisfy the inequalities d(zm , zm+1 ) ≤ sm+1 − sm , m = 0, 1, 2, . . . d(zm , zm+1 ) ≤

D− h0 (s∗,ρ ) sm+1 − sm 2 d(s , s ) ≤ d(zm−1 , zm)2 , m = 0, 1, 2, . . . m−1 m (sm − sm−1 )2 −2h0 (s∗,ρ)

and {zm } converges to z∗ ∈ U(z0 , s∗,ρ) such that X(z∗ ) = 0. Furthermore, {zm} and {sm} satisfy the inequalities 1 d(zm , z∗ ) ≤ s∗,ρ − sm , s∗,ρ − sm−1 ≤ (s∗,ρ − sm ), m = 0, 1, 2, . . . 2 the convergence of {zm} and {sm} to z∗ and s∗,ρ respectively are Q− quadratic as d(zm+1 , z∗ ) D− h0 (s∗,ρ) ≤ , m−→∞ d(zk , z∗ )2 −2h0 (s∗,ρ ) lim

0 ≤ s∗,ρ − sm+1 ≤

D−h0 (s∗,ρ ) D− h0 (s∗,ρ) ( (s∗,ρ − sm )2 , m = 0, 1, 2, . . . −2h0 (s∗,ρ) −2h0 (s∗,ρ )

¯ where s∗,ρ ≤ θ¯ := sup{t ∈ [s∗,ρ , ρ) : w(t) ≤ and z∗ is the unique singularity of X in U(z0 , θ), 0}. Remark 23. If w0 = w = w1 , then Theorem 31 reduces to Theorem 1 in [5]. Otherwise it constitute an improvement with advantages as stated in the introductions. Examples where (15.7)-(15.9) are strict can be found in [1]-[4].

142

3.

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Conclusion

Our new idea of the restricted convergence region is used to extend the semi-local convergence of Newton’s algorithm for determining a singularity of a differentiable vector field on a Riemannian manifold. The advantages include weaker convergence criteria and tighter error estimates on the distances involved than in earlier studies. No additional conditions are used since the new majorant functions are specializations of the ones used in earlier studies.

References [1] Argyros I. K., On an extension of the mesh-independence principle for operator equations in Banach spaces, Appl. Math. Lett., 9(3), 1-7, ( 1996). [2] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [3] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020. [4] Argyros I. K., George S., On the complexity of extending the convergence region for Traub’s method, Journal of Complexity 56 (2020) 101423, https://doi.org/10.1016/j.jco.2019.101423 [5] Bittencourt T., Ferreira O. P., Kantorovich’s theorem on Newton’s method under majorant condition on Riemannian manifolds, J. Glob. Optimiz., October, 2016. [6] Ferreira O. P., Gonalves M. L. N., Oliveira P. R., Convergence of the Gauss-Newton method for convex composite optimization under a majorant condition. SIAM J. Optim. 23(3), 17571783 (2013). [7] Manton J. H., A framework for generalising the Newton method and other iterative methods from Euclidean space to manifolds. Numer. Math. 129(1), 91125 (2015). [8] Owren B., Welfert B., The Newton iteration on Lie groups. BIT 40(1), 121145 (2000). [9] Wang J. H., Convergence of Newtons method for sections on Riemannian manifolds. J. Optim. Theory Appl. 148(1), 125145 (2011). [10] Wang J.-H., Huang S., Li C., Extended Newtons method for mappings on Riemannian manifolds with values in a cone. Taiwan. J. Math. 13(2B), 633656 (2009). [11] Wang J.-H., Yao J.-C., Li C., Gauss-Newton method for convex composite optimizations on Riemannian manifolds. J. Global Optim. 53(1), 528 (2012).

Chapter 16

Gauss-Newton-Kurchatov Algorithm for Least Squares Problems 1.

Introduction

The region of accessibility of the Gauss-Newton-Kurchatov Algorithm (GNKA) for solving least squares problems is extended without additional conditions than the earlier studies. Tighter error estimations are also obtained. The new majorant functions are special cases of the ones used before. We are interested in finding numerically solutions x∗ of the nonlinear least squares problem 1 min F(x)T F(x), (16.1) x∈Ω 2 where F + H : Ω ⊆ Ri −→ R j ( j ≥ i) is a continuous mapping and Ω is an open and convex set. Moreover, F is a continuously differentiable whereas H is a continuous mapping. The point x∗ is obtained as the limit of a sequence {xn } generated by GNKA defined for x−1 , x0 ∈ Ω [5] and each n = 0, 1, 2, . . . by xn+1 = xn − (ATn An )−1 ATn (F(xn ) + H(xn )),

(16.2)

where An = F 0 (xn ) + H(2xn − xn−1 , xn−1 ), with H(., .) : Ω × Ω −→ L(H, H) is a divided difference of order one on Ω × Ω for H [2]. Specializations of GNKA have been studied extensively under generalized Lipschitz -type conditions [1]-[5]. In particular, Shakhno et al [5] generalized earlier results using majorant functions. But in these results the set containing initial points (or region of accessibility) that guarantee convergence of GNKA to x∗ is small in general. That is why we develop a technique to extend this region and improve on the upper estimations of kxn − x∗ k. The technique can be used to extend the applicability of other algorithms [1]-[4].

2.

Convergence of GNKA

Sufficient conditions and the rate of local convergence of the iterative process (16.2) are given in the next theorem. We use the Euclidean norm for which kD1 − D2 k = kDT1 − DT2 k,

144

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

where D1 , D2 ∈ Ri×i holds. Let D∗1 = F 0 (x∗ ) + H(x∗ , x∗ ). Theorem 32. Suppose that problem (16.1) has a solution x∗ ∈ Ω, and the inverse operation (DT1 D∗1 )−1 exists, such that k((D∗1)T D∗1 )−1 k ≤ β. On the subset Ω, the Fr´echet derivative F 0 satisfies the center radius Lipschitz, and radius Lipschitz condition with L0 , L average, respectively kF 0 (x) − F 0 (x∗ )k ≤ kF 0 (x) − F 0 (xθ )k ≤

Z ρ(x) θρ(x)

Z ρ(x)

L0 (u)du,

(16.3)

L(u)du, xθ = x∗ + θ(x − x∗ ), 0 ≤ θ ≤ 1,

(16.4)

0

the function H has the first and second order divided difference, and kH(x, y) − H(x∗ , x∗ )k ≤ kH(x, y) − H(u, v)k ≤

Z kx−yk 0

M0 (u)du

Z kx−uk+ky−vk

M(u)du

(16.5) (16.6)

0

kH(u, x, y) − H(v, x, y)k ≤

Z ku−vk

N(u)du

(16.7)

0

for all x, y, u, v ∈ Ω, ρ(x) = kx − x∗ k; L0 , L, M0 , M and N are positive nondecreasing functions on [0, 2R], R > 0. Furthermore, kF 0 (x∗ ) + H(x∗ )k ≤ η, kF 0 (x∗ ) + H(x∗ , x∗ )k ≤ α Z ρ  Z 2ρ η L(u)du + M(u)du < 1 ρ 0 0 and U(x∗ , 3ρ∗ ) ⊆ Ω, where ρ∗ is the unique positive zero of the function q given by   Z s Z 2s Z 2s q(s) = β α + L0 (u)du + M0 (u)du + 2s N(u)du 0 0 0   Z s Z s Z 2s 1 L(u)udu + M(u)du + 2s N(u)du × s 0 0 0   Z s Z 2s Z 2s + 2α + L0 (u)du + M0 (u)du + 2s N(u)du 0 0 0 Z s  Z 2s Z 2u × L0 (u)du + M0 (u)du + 2s N(u)du 0 0 0 Z s   Z 2s Z 2s 1 + L(u)du + M(u)du + 2s N(u)du η − 1. (16.8) s 0 0 0 Then, for x−1 , x0 ∈ U(x∗ , ρ∗ ), the iterative process {xn }, n = 0, 1, 2, . . . generated by GNKA is well defined, remains in U(x∗ , ρ∗ ), and converges to x∗ . Moreover, the following items hold for all n ≥ 0. kxn+1 − x∗ k ≤ c1 kxn − x∗ k + c2 kxn − xn−1 k2 +c3 kxn − x∗ k2 + c4 kxn − x∗ kkxn − xn−1 k2 ,

(16.9)

Gauss-Newton-Kurchatov Algorithm for Least Squares Problems where

η c1 = g(ρ∗ ) ρ∗

Z

ρ∗

L(u)du +

0

Z 2ρ∗

145



M(u)du ,

0

2ρ∗ η N(u)du, c2 = g(ρ∗ ) 2ρ∗ 0  Z  Z ρ∗ η 1 ρ∗ c3 = g(ρ∗ ) L(u)udu + M(u)du , ρ∗ ρ∗ 0 0

Z

c4 = g(ρ∗ )

η 2ρ∗

Z 2ρ∗

N(u)du,

0

g(s) = β(1 − β[T0 (s) + α]T (s) − α])−1 , T0 (s) = α +

Z s

T (s) = α +

0

L0 (u)du +

Z s 0

L(u)du +

Z 2s 0

Z 2s

M0 (u)du + 2s M(u)du + 2s

0

Z 2s

N(u)du,

0

Z 2s

N(u)du.

0

Corollary 5. The convergence of GNKA with zero residual is quadratic. Remark 24.

(a) It follows from the different Lipschitz conditions (16.3)-(16.6) that L0 (t) ≤ L(t)

(16.10)

M0 (t) ≤ M(t).

(16.11)

and Hence, L0 , M0 can replace L, M in the estimations that can be used (instead of L, M, respectively). Notice that (16.3) and (16.5) are special of L, M, so no additional conditions are needed to obtain the advantages already stated previously. If L0 = L and M0 = M our results coincide with the ones in [5, Theorem 1]. (b) If L0 , L, M0 , M, N are constant functions, our results clearly extend the ones in [5]. Concrete examples where (16.10) and (16.11) are strict can be found in [1]-[4].

3.

Conclusion

We extended the region of accessibility of GNKA for solving nonlinear least-squares problems under generalized Lipschitz conditions. The quadratic convergence order of the method for problems with zero residual was determined.

References [1] Argyros I. K., On an extension of the mesh-independence principle for operator equations in Banach spaces, Appl. Math. Lett., 9(3), 1-7, ( 1996).

146

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[2] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [3] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020. [4] Argyros I. K., George S., On the complexity of extending the convergence region for Traub’s method, Journal of Complexity 56 (2020) 101423, https://doi.org/10.1016/j.jco.2019.101423. [5] Shakhno S. M., Yarmola H. P., Convergence of the Gauss Newton Kantrovich method, Carpathian Math. Publ., 6, 1, 12-22, (2020).

Chapter 17

Uniqueness of the Solution of Equations in Banach Space: I 1.

Introduction

The study of the existence as well as the uniqueness for a solution x∗ of equation F(x) = 0,

(17.1)

where F : Ω ⊂ B1 −→ B2 is Fr´echet differentiable, B1 , B2 are Banach spaces and Ω is open and convex is very important in computational sciences, since many problems reduce to (17.1) using Mathematical modeling [1]-[8]. But x∗ in closed form is attainable only in special cases. That is why researchers and practitioners develop algorithms generating sequences approximating x∗ under certain conditions on the initial data. The most popular such algorithm is undoubtedly Newton’s (NA) defined for x0 ∈ Ω by xn+1 = xn − F 0 (xn )−1 F(xn ).

(17.2)

There are a plethora of local and semi-local results for NA [1]-[8] based on various conditions on F 0 . But what these results have in common are a small radius of convergence, pessimistic estimations on kxn − x∗ k and not the best possible uniqueness of x∗ information (under the conditions used). That is why, we developed a technique that addresses all these three concerns, leading to more initial points, fewer iterations to attain a predetermined error tolerance, and more precise information on the location of the solution x∗ . The novelty of the new technique is twofold: (1) The improvements are achieved under the same conditions and computational effort as in previous works [1]-[4], and (2) The technique is so general that it can be used to extend the applicability of other algorithms along the same lines. In particular, we extend the results in [5]. This is done in Section 2.

2.

Convergence

Let x∗ be a simple solution of equation (17.1) and set R := sup{s ≥ 0 : U(x∗ , s) ⊆ Ω}. Let K0 , K, K1 denote positive integrable functions. Set Ω∗ = U(x∗ , R). The following concepts are needed:

148

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Definition 15. (i) It is said that F 0 (x∗ )−1 F 0 satisfies the center radius Lipschitz condition on Ω∗ , if for each x ∈ Ω∗ kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤

Z e(x) 0

K0 (s)ds,

(17.3)

where e(x) = kx − x∗ k. Suppose that equation K0 (s)s − 1 = 0

(17.4)

has a least solution s0 ∈ (0, R]. Set Ω0 = U(x∗ , s0 ). (ii) It is said that F 0 (x∗ )−1 F 0 satisfies the center restricted Lipschitz condition on Ω0 , if for each x ∈ Ω0 , θ ∈ [0, 1], xθ = x∗ + θ(x − x∗ ) 0

−1

0

0

kF (x∗ ) (F (x) − F (xθ ))k ≤

Z e(x)

K(s)ds.

(17.5)

θe(x)

(iii) It is said that F 0 (x∗ )−1 F 0 satisfies the center radius Lipschitz condition on Ω∗ , if for each x ∈ Ω∗ , θ ∈ [0, 1], xθ = x∗ + θ(x − x∗ ) kF 0 (x∗ )−1 (F 0 (x) − F 0 (xθ ))k ≤

Z e(x) θe(x)

K1 (s)ds.

(17.6)

Remark 25. It follows from s0 ≤ R

(17.7)

K0 (s) ≤ K1 (s)

(17.8)

K(s) ≤ K1 (s)

(17.9)

that and for each s ∈ (0, s0]. Notice that (17.6) (used in [5]) implies (17.3) and (17.5) but not vice versa. Moreover, condition (17.6) was used in [5, Lemma 2.1] to obtain the estimate kF 0 (x)−1 F 0 (x∗ )k ≤

1 1−

R e(x) 0

K1 (s)ds

.

(17.10)

It turns out that the weaker condition (17.3) can be used to obtain the tighter estimate kF 0 (x)−1 F 0 (x∗ )k ≤

1 1−

R e(x) 0

K0 (s)ds

.

(17.11)

Then, using (17.11) instead (17.10) and (17.4) instead of (17.6) we can simply reproduce the proofs and results in [8] but extended. More precisely, we have the local convergence result for NA.

Uniqueness of the Solution of Equations in Banach Space: I

149

Theorem 33. Suppose: conditions (17.3), (17.5) hold and so given in (17.4) exists; equation Z t Z t K(s)ds + t K0 (s)ds − t = 0 (17.12) 0

0

has a least solution s∗ ∈ (0, s0). Then, for x0 ∈ U(x∗ , s∗ ), NA converges to x∗ and for each n = 0, 1, 2, . . . n kxn − x∗ k ≤ p2 −1 kx0 − x∗ k < s∗ , (17.13) where p ∈ [0, 1) and is given by p=

R e(x0 ) 0

e(x0 )(1 −

K(s)sds

R e(x0 ) 0

K0 (s)ds)

.

(17.14)

Concerning the uniqueness of the solution x∗ , we have Proposition 9. Suppose: conditions (17.3) holds and equation Z t 0

K0 (s)(t − s)ds − t = 0

(17.15)

has a least solution s1 ∈ (0, R). Then, x∗ is the only solution of equation (17.1) in U(x∗ , s1 ). Remark 26. If K0 = K = K1 , then the preceding results specialize to the ones in [8]. Otherwise, they constitute an improvement with advantages as stated previously. The rest of the results in [8] (which are specializations in the Lipschitz and Smale-Wang condition [5]) are extended too. We leave the details to the motivated reader. Concrete examples, where (17.7)-(17.9) are strict can be found in [3,4].

3.

Conclusion

The uniqueness of the solution of Banach space-valued equations is extended using Newton’s algorithm and generalized conditions.

References [1] Argyros I. K., On an extension of the mesh-independence principle for operator equations in Banach spaces, Appl. Math. Lett., 9(3), 1-7, ( 1996). [2] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [3] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020. [4] Argyros I. K., George S., On the complexity of extending the convergence region for Traub’s method, Journal of Complexity 56 (2020) 101423, https://doi.org/10.1016/j.jco.2019.101423.

150

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[5] Smale S., Newton’s method estimates from data at one point. In The Merging of Disciplines: New Directions in Pure, Applied and Computational Mathematics, Ewing, R., Gross, K., Martin, C. eds, New York: Spring-Verlag, 185-196 (1986). [6] Traub J. F., Wozniakowski H., Convergence and complexity of Newton iteration. J. Assoc. For Comp. Math., 29(2), 250-258 (1979). [7] Wang X., Convergence of Newton’s Method and inverse function in Banach space, Math. Comput., 68, 169-186 (1999). [8] Wang X., Convergence of Newton’s method and uniqueness of the solution of equations in Banach space.IMA J. Numer. Anal., 20, 123-134, (2000).

Chapter 18

Uniqueness of the Solution of Equations in Banach Space: II 1.

Introduction

We continue the extensions initiated previously, but K0 , K, and K1 need not be nondecreasing. However, in this case, the convergence order decreases

2.

Convergence

We extend the results in [9] along the lines of our previous chapter. Theorem 34. Suppose: conditions (18.1) and (18.3) hold and s0 given in (18.2) exists (see our earlier chapter); equation Z t 1 K0 (s)ds − = 0 (18.1) 2 0 has a least solution s∗ ∈ (0, s0). Then, for x0 ∈ U(x∗ , s∗ ), NA converges to x∗ so that for each n = 0, 1, 2, . . . kxn − x∗ k ≤ pn kx0 − x∗ k < s∗ , (18.2) where p ∈ [0, 1) and is given by p=

R e(x0 ) 0

1−

K(s)sds

R e(x0) 0

K0 (s)ds

.

(18.3)

Additionally, suppose: function K α for each α ∈ [0, 1] defined by K α (s) = s1−αK(s)

(18.4)

K0 (s) ≤ K(s)

(18.5)

is decreasing, and for each s ∈ [0, s0); equation 1 t

Z t 0

(t + s)K(s)ds − 1 = 0

152

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

has a least solution s¯ ∈ (0, s0 ]. Then, for x0 ∈ U(x∗ , s), ¯ NA converges and for each n = 0, 1, 2, . . . kxn − x∗ k ≤ pβn kx0 − x∗ k < s¯ where

n−1

βn =

∑ (1 + α) j . j=0

In the next case using only (18.1) the radius of convergence decreases. Theorem 35. Suppose: condition (18.1) holds and s0 given in (18.2) exists (see previous chapter); equation Z t 1 (18.6) K(s)ds − = 0 2 0 has a least solution s¯∗ ∈ (0, s0 ). Then, for x0 ∈ U(x∗ , s¯∗ ), NA converges to x∗ so that (18.2) holds with R e(x ) 2 0 0 K0 (s)sds . p¯ = R e(x ) 1 − 0 0 K0 (s)ds

Additionally, suppose: function K α is defined by (18.4) and (18.5). Then, the following estimates hold ¯ kxn − x∗ k ≤ pβn kx0 − x∗ k < s, ¯

where

(1 + α)n − 1 β¯ n = . α

Remark 27. Comments similar to the previous chapter can be made. Relevant work which is extended here can be found in [1]-[9].

3.

Conclusion

The uniqueness of the solution of Banach space-valued equations is extended using Newton’s algorithm and weaker generalized conditions.

References [1] Argyros I. K., On an extension of the mesh-independence principle for operator equations in Banach spaces, Appl. Math. Lett., 9(3), 1-7, ( 1996). [2] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [3] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020.

Uniqueness of the Solution of Equations in Banach Space: II

153

[4] Argyros I. K., George S., On the complexity of extending the convergence region for Traub’s method, Journal of Complexity 56 (2020) 101423, https://doi.org/10.1016/j.jco.2019.101423. [5] Smale S., Newton’s method estimates from data at one point. In The Merging of Disciplines: New Directions in Pure, Applied and Computational Mathematics, Ewing, R., Gross, K., Martin, C. eds, New York: Spring-Verlag, 185-196 (1986). [6] Traub J. F., Wozniakowski H., Convergence and complexity of Newton iteration. J. Assoc. For Comp. Math., 29(2), 250-258 (1979). [7] Wang X., Convergence of Newton’s Method and inverse function in Banach space, Math. Comput., 68, 169-186 (1999). [8] Wang X. Convergence of Newton’s method and uniqueness of the solution of equations in Banach space. IMA J. Numer. Anal., 20, 123-134, (2000). [9] Wang X.H., Li C. Convergence of Newton’s Method and Uniqueness of the Solution of Equations in Banach Spaces II. Acta Math Sinica 19, 405-412 (2003).

Chapter 19

Convergence of Newton’s Algorithm for Sections on Riemannian Manifolds 1.

Introduction

A plethora of optimization problems requires the setting of a Riemannian manifold [8]. In particular, the singular points of sections on Riemannian manifolds can be found using an iterative algorithm, preferably Newton’s (NA), since closed-form solutions can be found only in special cases. Let R denote a Riemannian manifold, µ be a C1 − section and q0 ∈ R [10]. NA is defined by for initial point q0 for µ for n = 0, 1, 2, . . . by qn+1 = expqn (−Dµ (qn )−1 µ(qn )).

(19.1)

A semi-local convergence was given in [10] using L− average conditions, which generalized and extended earlier results [1]-[12]. But, the convergence domain is small in general, the error estimates kqn − q∗ k on the distances pessimistic, and the information on the uniqueness ball for q∗ is not very precise- restricting the applicability of NA. To address these concerns, we determine a more precise region (inside the one used in [10]) which also contains the iterates qn . But on this restricted region, the L− average functions are tighter leading to weaker sufficient convergence criteria; tighter estimates on the kqn − q∗ k and more precise information on the location of the solution. These advantages are obtained without additional conditions since the new L−average functions are specializations of the ones in [10].

2.

Convergence

We refer the reader to [8] for the details of the standard concepts introduced in this chapter. Let L0 , L, L1 stand for positive integrable functions on [10,11,12], where ρ > 0. We need certain definitions of L−average Lipschitz conditions.

156

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Definition 16. Let 0 < r < ρ. Let q∗ ∈ R be such that Dµ (q∗ )−1 exists. Then Dµ (q∗ )−1 Dµ is said to satisfy the center Lipschitz condition with the L0 − average on U(q( , r) if and only if for each q ∈ U(q∗ , r), we have −1

kDµ (q∗ ) (Pq∗ ,q Dµ (q)Pq,q∗ − Dµ (q∗ ))k ≤

Z d(q∗ ,q) 0

L0 (s)ds.

(19.2)

We have the following result on the uniqueness ball of singular points of sections on Riemannian manifolds. Theorem 36. Suppose equation Z r 0

L0 (s)(r − s)ds − r = 0

(19.3)

has a least solution ρ0 ∈ (0, ρ), µ(q∗ ) = 0 and Dµ (q∗ )−1 Dµ satisfies (19.2) the L0 − average on U(q∗ , ρ0 ) is the unique singular point of µ on U(q∗ , ρ0 ). Proof. Simply replace L1 by L0 used in the proof of Theorem 3.1 in [10], where L1 satisfies (19.2) on U(q∗ , ρ). Definition 17. Let 0 < r ≤ ρ. Let q0 ∈ R be such that Dµ (q0 )−1 exists. Then, Dµ (q0 )−1 Dµ is said to satisfy the k−piece restricted L−average Lipschitz condition on U(q0 , ρ0 ) if and only if for any m points q1 , q2 , . . .qm ∈ U(q0 , r) and for any geodesics gi connecting qi , qi+1 m−1

with i = 0, 1, . . ., m − 1, g0 a minimizing geodesic connecting q0 , q1 , and

∑ l(gi) < ρ0 , we

i=0

have kDµ (q0 )−1 Pg0 ,q0 ,q1 . . .

Pgm−2 ,qm−1 ,qm (Pgm−1 ,qm−1 ,qm Dµ (qm) ×Pgm−1 ,qm ,qm−1 − Dµ (qm−1 )k



Z ∑m−1 l(gi ) i=0 ∑m−2 i=0 l(gi )

L(s)ds.

(19.4)

Remark 28. (i) Clearly, the (m + 1)− piece L−average Lipschitz condition implies the m−piece L−average Lipschitz condition. (ii) The 1-piece L−average Lipschitz condition is equivalent to the center Lipschitz condition with the L0 −average. (iii) Condition (19.4) was used in [10] but with L1 (on U(q0 , ρ)) replacing L0 (on U(q0, ρ0 )). Then, we have L0 (s) ≤ L1 (s) for each s ∈ [0, ρ0].

(19.5)

L(s) ≤ L1 (s) for each s ∈ [0, ρ0 ].

(19.6)

L0 (s) ≤ L1 (s) for each s ∈ [0, ρ0].

(19.7)

and Hence, L0 , L can replace L1 in all the results in [10]. We also suppose

Otherwise L¯ the maximum of L0 , L on [0, ρ0] replaces L1 is all the results in [10].

Convergence of Newton’s Algorithm for Sections on Riemannian Manifolds

157

Let r0 > 0 and b > 0 be such that Z r0

L(s)ds = 1 and b =

0

Z r0

(19.8)

L(s)sds.

0

Proposition 10. Let q0 ∈ R be such that Dµ (q0 )−1 exists. Suppose that β := kDµ (q0 )−1 µ(q0 )k ≤ b

(19.9)

and that Dµ (q0 )−1 Dµ satisfies the 2−piece L−average Lipschitz condition on U(q0 , r0 ). Then, NA with initial point q0 is well defined and converges to a singular point q∗ of µ on ¯ 0 , r0 ). U(q The following lemma estimates the value of the quantity kDµ (q0 )−1 Dµ k, which will be used in the proof of the main theorem of this section. Lemma 9. Let 0 < r ≤ r0 and let q0 ∈ U(q∗ , r). Suppose that Dµ (q∗ )−1 Dµ satisfies the center Lipschitz condition with the L−average on U(q∗ , r). Then Dµ (p0 )−1 exists, kDµ (q0 )−1 Pq0 ,q∗ Dµ (q∗ )k ≤ −1

kDµ (p0 ) µ(q0 )k ≤

d(q0 , q∗ ) +

0

1− Z s 0

1−

0

R d(q0,q∗ )

Lemma 10. Let ϕ : [0, r0) −→ R be defined by ϕ(s) = b − 2s + 2

1 R d(q0 ,q∗ )

L(s)(s − d(q0 , q∗ ))ds

R d(q0,q∗ ) 0

L0 (s)ds

L0 (s)ds

.

L(t − s)ds for each s ∈ [0, r0],

where b is given by (19.8). Then, ϕ is strictly decreasing on [0, r0], and has exactly one zero r¯0 in [0, r0 ] satisfying b < r¯0 < r0 . 2 Next, we present the main theorem of this section, which shows that the radius of the convergence ball of Newton’s method is independent of the sectional curvature of the underlying Riemannian manifold. Theorem 37. Suppose that µ(q∗ ) = 0 and that Dµ (q∗ )−1 Dµ satisfies the 3-piece L−average Lipschitz condition on U(q∗ , r0 ). If d(q∗ , q0 ) ≤ r¯0 with r¯0 given by Lemma 10, then the sequence {qn } generated by NA with initial point q0 is well defined and converges to q∗ . Remark 29. If L0 = L = L1 , then our results reduce to the ones in [10]. Otherwise, they constitute an improvement with pre-stated advantages. The specializations of the Kantorovich, as well as the Smale-Wang theory given in [1,2],[7]-[12], immediately are also extended in our new setting. Examples where (19.5)- (19.7) are strict can be found in [3][6].

158

3.

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Conclusion

Finer semi-local convergence analysis of Newton’s Algorithm for sections on Riemannian manifolds is presented based on our idea of the restricted convergence region and without additional conditions.

References [1] Adler R., Dedieu J. P., Margulies J., Martens M., Shub M., Newton method on Riemannian manifolds and a geometric model for human spine. IMA J. Numer. Anal. 22, 1-32 (2002). [2] Alvarez F., Bolte J., Munier J., A unifying local convergence result for Newton’s method in Riemannian manifolds. Found. Comput. Math. 8, 197-226 (2008). [3] Argyros I. K., On an extension of the mesh-independence principle for operator equations in Banach spaces, Appl. Math. Lett., 9(3), (1996), 1-7. [4] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [5] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020. [6] Argyros I. K., George S., On the complexity of extending the convergence region for Traub’s method, Journal of Complexity 56 (2020) 101423, https://doi.org/10.1016/j.jco.2019.101423. [7] Dedieu J. P., Priouret P., Malajovich G., Newton’s method on Riemannian manifolds: covariant alpha theory. IMA J. Numer. Anal. 23, 395-419 (2003). [8] DoCarmo M. P., Riemannian Geometry. Birkhauser, Boston (1992). [9] Ferreira O. P., Lucambio Prez L. R., Nemeth S. Z., Singularities of monotone vector fields and an extragradient-type algorithm. J. Glob. Optim 31, 133-151 (2005). [10] Wang J. H., Convergence of Newton’s method for sections on Riemannian manifold, J. Optim. Theory Appl., 148, (2011), 125-145. [11] Li C., Wang J. H., Convergence of Newton’s method and uniqueness of zeros of vector fields on Riemannian manifolds. Sci. China Ser. A 48(11), 1465-1478 (2005). [12] Li C., Wang J. H., Newton’s method for sections on Riemannian manifolds: generalized covariant α-theory. J. Complex. 24, 423-451 (2008).

Chapter 20

Newton Algorithm on Lie Groups: I 1.

Introduction

In the important study by Owren and Welfert [11], the semi-local convergence of two versions of NA was studied under uniform boundedness (UB) of the inverse of the differential involved for Lie group valued operators. UB limits the applicability of NA. That is why in Part I, we present basic definitions and convergence results under (UB) but in Part II, we extend the applicability of NA by dropping UB and replacing it with a weaker one. Let G denote a Lie group [12].

2. 2.1.

Two versions of NA The Differential of the Map F

The differential of F at a point z ∈ G is a map Fz0 : T G|z −→ Th |F(z) ∼ = h defined as Fy0 (Mz ) =

d |t=0 F(z.exp(tL0z−1 (Mz)) dt

(20.1)

for any tangent vector Mz ∈ T G|z to the manifold G at z. The image by Fz0 of a tangent vector My is obtained by first identifying Mz with an element v ∈ g via left multiplication. Then, Fz0 (Mz) is obtained as F(zy) − F(z) Fz0 (Mz ) = lim . (20.2) t−→0 t The differential Fz0 can be expressed via a function dFz : h −→ h given by dFz = (F ◦ Lz )0 = Fz0 ◦ L0z .

(20.3)

Hence, we get d |t=0 F(z.exp(ty)). (20.4) dt Using formula (20.1), NA on Lie group G may proceed as follows: given z0 ∈ G, we first determine the differential Fz00 according to (20.1). Then, find Mz0 ∈ T G|z0 satisfying the equation Fz00 (Mz0 ) + F(z0 ) = 0, dFz (y) = Fz0 (L0z y) =

160

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

and finally update z0 by z1 = z0 .exp(L0z−1 (Mz0 )). In view of (20.3), the algorithm can be 0 written as Version 1: • Given zn ∈ G, determine dFzn according to (20.4). • Find yn ∈ h such that

dFzn (yn ) + F(zn ) = 0.

(20.5)

• Compute zn+1 = zn .exp(yn ). Concerning the second version, notice that the function F˜ = F ◦ Lz ◦ exp( is a map from h to h. In particular, the differential d F˜vn (y) of F˜ at vn is a linear map defined by d ˜ n + ty). f F˜vn (y) = |t=0 F(v dt

(20.6)

If h is finite dimensional, then the standard Newton procedure can be applied to the problem ˜ F(v) = 0. This lead to the following algorithm: Version 2: • Given zn ∈ G, such that zn = y.exp(vn ), determine d F˜vn according to (20.6). • Find yn ∈ h such that

˜ n ) = 0. d F˜vn (yn ) + F(v

(20.7)

• Compute vn+1 = vn + yn and zn+1 = z.exp(vn+1 ). For a detailed comparison between the two versions, see [11]. Next, we present the semi-local convergence of (20.7) (see Theorem 4.5 in [11]), followed by the convergence of (20.5) (see Theorem 4.6 in [11]). Theorem 38. Suppose: (i) There exists a constant τ such that 0 < τ ≤ ρ and a constant γ ≥ 0 such that kdFy.exp(u) − dFy k ≤ γkuk for all y ∈ U(z, r) and u ∈ h such that kuk ≤ γ ( local Lipschitz condition); (ii) The map dFz is one-to-one and dFz−1 is bounded with constant η, i.e., kdFz−1 k ≤ η. Then, there exists a constant σ > 0 and two functions h1 (t) and h2 (t), analytic for |t| ≤ (where δ and µ are as in Lemma ), with h1 (0) = 2h2 (0) = 1, such that, if z0 ∈ U(z, σ) (a) zn ∈ U(z, σ) for all n ≥ 1 (the sequence {zn }n≥0 is well define);

δ µ

Newton Algorithm on Lie Groups: I

161

(b) z−1 zn = exp(vn ), lim vn = 0 and n−→∞

kvn+1 k ≤ η(γh1 (µσ)) + kdFz kµh2 (µσ))kvnk2 for all n ≥ 0, so, the sequence {zn }n≥0 converges quadratically to z. Theorem 39. Suppose: (i) There exists a constant τ such that 0 < τ ≤ ρ and a constant γ ≥ 0 such that kdFy.exp(u) − dFy k ≤ γkuk for all y ∈ U(z, r) and u ∈ h such that kuk ≤ γ ( local Lipschitz condition); (ii) The map dFz is one-to-one and dFz−1 is bounded with constant η, i.e., kdFz−1 k ≤ η. Then, there exists a constant σ > 0 such that, if z0 ∈ U(z, σ) (a) zn ∈ U(z, σ) for all n ≥ 1 (the sequence {zn }n≥0 is well define); (b) z−1 zn = exp(vn ), lim vn = 0 and n−→∞

kvn+1 k ≤ βγh(3µσ))kvnk2 for all n ≥ 0, where h is as in Lemma 4.1 in [11] i.e., the sequence {zn }n≥0 converges quadratically to z. In the next work, as noted previously, condition (ii) is dropped, and the applicability of NA is extended.

3.

Conclusion

Semi-local convergence of Newton’s Algorithm (NA) is presented for solving problems from numerical linear algebra, such as eigenvalue problems and continuous-time differential equations.

References [1] Abraham R., Marsden J. E., Ratiu T., Manifolds, Tensor Analysis, and Applications, Springer-Verlag, New York, 1980. [2] Adler R., Dedieu J. P., Margulies J., Martens M., Shub M., Newton method on Riemannian manifolds and a geometric model for human spine. IMA J. Numer. Anal. 22, 1-32 (2002). [3] Alvarez F., Bolte J., Munier J., A unifying local convergence result for Newton’s method in Riemannian manifolds. Found. Comput. Math. 8, 197-226 (2008).

162

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[4] Argyros I. K., On an extension of the mesh-independence principle for operator equations in Banach spaces, Appl. Math. Lett., 9(3), ( 1996), 1-7. [5] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [6] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020. [7] Argyros I. K., George S., On the complexity of extending the convergence region for Traub’s method, Journal of Complexity 56 (2020) 101423, https://doi.org/10.1016/j.jco.2019.101423. [8] Crouch P. E., Grossman R., Numerical integration of ordinary differential equations on manifolds, J. Nonlinear Sci., 3 (1993) pp. 133. [9] Golub G., Van Loan C. F., Matrix Computations, 3rd ed., Johns Hopkins University Press, Baltimore, MD, 1996. [10] Iserles A., Solving linear ordinary differential equations by exponentials of iterated commutators, Numer. Math., 45 (1984), pp. 183199. [11] Owren B., Welfert B., The Newton iteration on Lie groups, BIT, 40, 1 (2000), 121145. [12] Varadarajan V. S., Lie Groups, Lie Algebras and their Representations, GTM no. 102, Springer-Verlag, New York, 1984. [13] Warner F. W., Foundations of Differentiable Manifolds and Lie Groups, GTM no. 94, Springer-Verlag, New York, 1983.

Chapter 21

Newton Algorithm on Lie Groups: II 1.

Introduction

Convergence analysis for NA on Lie groups was given in [9] using the Kantorovich theory. We extend the applicability of NA without additional conditions and use the terminology of the previous chapter. To achieve this, we need some preliminary results. Definition 18. Let r > 0, let x0 ∈ G and M be a mapping from G to L (H). Then, M is said to satisfy (i) The central L0 −Lipschitz condition on U(x0 , r) if kM(x.exp(v)) − M(x0 )k ≤ L0 kuk

(21.1)

holds for all v ∈ H and x ∈ U(x0 , r); (ii) The restricted L− Lipschitz continuous condition on U(x0 ,

1 ) if L0

kM(x.exp(v)) − M(x)k ≤ Lkuk for all v ∈ H and x ∈ U(x0 ,

(21.2)

1 1 ) with kvk + d(x, x0 ) < ; L0 L0

(iii) The L1 − Lipschitz continuous condition on U(x0 , r) if kM(x.exp(v)) − M(x)k ≤ L1 kuk

(21.3)

for all v ∈ H and x ∈ U(x0 , r) with kvk + d(x, x0 ) < r. 1 and let x0 ∈ G be such that dFx−1 exists. Suppose that dFx−1 F 0 0 L0 satisfies the L− Lipschitz condition on U(x0 , r). Let x ∈ U(x0 , r) be such that there exists

Lemma 11. Let 0 < r ≤

k

k ≥ 1 and v0 , v1 , . . ., vk ∈ H satisfying x = x0 .exp(v0 ). . . .exp(vk ) and dFx−1 exists and kdFx−1 dFx0 k ≤

1 1 − L0 (∑ki=0 kvi k)

.

∑ kvik < r. Then,

i=0

(21.4)

164

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Remark 30.

(a) Clearly L0 ≤ L1

(21.5)

L ≤ L1

(21.6)

L0 ≤ L.

(21.7)

and hold. We assume from now on Otherwise the results that follow hold with L0 replacing L. (b) Lemma 11 improves Lemma 2.2 in [9], where (21.4) was shown using L1 instead of L0 . But in view of (21.5) the new estimate (21.4) is tighter, weaker and actually needed. Hence, this estimate can be used and L instead of L1 in the proofs of the results in [9]. This is done in Section 2.

2.

Convergence Criteria

As in the previous chapter, we define NA with initial point x0 for F on a Lie group as follows: xn+1 = xn .exp(−dFx−1 F(xn )) for all n = 0, 1, . . .. (21.8) n Let b > 0 and ` > 0. The quadratic majorizing function g, which was used in Kantorovich and Akilov [5] and Wang [8], is defined by ` g(t) = t 2 − t + b for all t ≥ 0. 2

(21.9)

Let {sn } denote the sequence generated by Newton’s method with initial value s0 = 0 for g, i.e., sn+1 = sn − g0 (sn )−1 g(sn ) for all n = 0, 1, . . .. (21.10) 1 Assume that h := `b ≤ . Then g has two zeros ρ1 and ρ2 given by 2 √ √ 1 − 1 − 2h 1 + 1 − 2h ρ1 = and ρ1 = . ` `

(21.11)

Moreover, {sn } is monotonic increasing and convergence to ρ1 and satisfies that ρ1 − s n = where

ξ2 n

n −1

−1 j ξ ∑2j=0

ρ1 for all n = 0, 1, 2, . . .,

√ 1 − 1 − 2h √ ξ= . 1 + 1 + 2h

(21.12)

(21.13)

Recall that F : G −→ H is a C1 mapping. In the remainder of this section we always assume that x0 ∈ G is such that dFx−1 exists and set b := kdFx−1 F(x0 )k. 0 0

Newton Algorithm on Lie Groups: II

165

Theorem 40. Suppose that dFx−1 dF satisfies the L− Lipschitz condition on Cr1 (x0 ) and 0 that 1 (21.14) h = `b ≤ . 2 Then, the sequence {xn } generated by NA with initial point x0 is well defined and converges to a zero x∗ of F. Moreover, for each n = 0, 1, 2, . . . the following assertions hold: ρ(xn+1 , xn ) ≤ kdFx−1 F(xn )k ≤ sn+1 − sn , n ρ(xn , x∗ ) ≤

ξ2

n −1

n

−1 j ξ ∑2j=0

(21.15)

ρ1 .

(21.16)

The rest is devoted to an estimate of the convergence domain of Newton’s method on G around a zero x∗ of F. Below we shall always assume that x∗ ∈ G is such that dFx−1 exists. ∗ Lemma 12. Let 0 < r ≤ z1 , z2 , . . .z j ∈ H satisfying

1 and let x0 ∈ U(x∗ , r) be such that there exist j ≥ 1 and `0 x0 = x∗ .exp(z1 ) . . .exp(z j )

(21.17)

j −1 and ∑ kzi k < r. Suppose that dFx−1 exists and ∗ dF satisfies (21.1) and (21.2). Then dFx 0 i=1

j

kdFx−1 F(x0 )k ≤ 0

j

(2 + ` ∑i=1 kzi k) ∑i=1 kzi k j

2(1 − `0 ∑i=1 kzi k)

.

(21.18)

1 . Suppose that F(x∗ ) = 0 and that dFx−1 ∗ dF satisfies (21.1) and 4`0 3r (21.2) condition on U(x∗ , ). Let x0 ∈ U(x∗ , r). Then, the sequence {xn } generated (1 − `0 r by NA with initial point x0 is well defined and converges quadratically to a zero y∗ of F and 3r ρ(x∗ , y∗ ) < . 1 − `0 r Theorem 41. Let 0 < r ≤

In particular, by taking r =

1 in Theorem 41 the following corollary is obtained. 4`

Corollary 6. Suppose that F(x∗ ) = 0 and that dFx−1 ∗ dF satisfies the L− Lipschitz condition 1 1 on U(x∗ , ). Let x0 ∈ U(x∗ , ). Then, the sequence {xn } generated by NA with initial `0 4` 1 point x0 is well defined and converges quadratically to a zero y∗ of F with ρ(x∗ , y∗ ) < . `0 Corollary 7. Suppose that F(x∗ ) = 0 and that dFx−1 ∗ dF satisfies (21.1) and (21.2) on 1 1 U(x∗ , ). Let d > 0 be the largest number such that U(e, d) ⊆ exp(B(0, )) and let r = `0 `0 d 1 ∗ ∗ min{ , }. Let us write N(x , r) := x .exp(B(0, r)). Then, for each x0 ∈ N(x∗ , r), 3 + `0 d 4`0 the sequence {xn } generated by NA with initial point x0 is well defined and converges quadratically to x∗ .

166

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Corollary 8. Let G be a compact connected Lie group that is equipped with a bi-invariant 1 . Suppose that F(x∗ ) = 0 and that dFx−1 Riemannian metric. Let 0 < r ≤ ∗ dF satisfies 4`0 3r ). Let x0 ∈ U(x∗ , r). Then, the sequence {xn } generated (21.1) and (21.2) on U(x∗ , 1 − `0 r by NA with initial point x0 is well defined and converges quadratically to x∗ . In particular, taking r =

1 in Corollary 8, one has the following result. 4`0

Corollary 9. Let G be a compact connected Lie group that is equipped with a bi-invariant Riemannian metric. Suppose that F(x∗ ) = 0 and that dFx−1 ∗ dF satisfies `− Lipschitz con1 1 ). Then, the sequence {xn } generated by NA with dition on on U(x∗ , ). Let x0 ∈ U(x∗ , `0 4`0 initial point x0 is well defined and converges quadratically to x∗ . Remark 31. If `0 = ` = `1 , our results correspond to the ones in [9]. Otherwise they constitute an improvement. For example the sufficient convergence criterion in [9] is given by 1 h1 = b`1 ≤ . 2 But then, we have 1 1 h1 ≤ =⇒ h ≤ 2 2 but not necessarily vice versa. The rest of the results in [9] are extended immediately using our approach. Examples where (21.5)-(21.7) are strict can be found in [1]-[4]. Our technique can be used to extend the applicability of other iterative methods in an analogous way [1]-[9].

3.

Conclusion

The semi-local convergence of Newton’s Algorithm (NA) is extended without additional conditions as in previous chapters.

References [1] Argyros I. K., On an extension of the mesh-independence principle for operator equations in Banach spaces, Appl. Math. Lett., 9(3), (1996), 1-7. [2] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [3] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020.

Newton Algorithm on Lie Groups: II

167

[4] Argyros I. K., George S., On the complexity of extending the convergence region for Traub’s method, Journal of Complexity 56 (2020) 101423, https://doi.org/10.1016/j.jco.2019.101423. [5] Kantorovich L. V., Akilov G. P., Functional Analysis. Oxford: Pergamon, (1982). [6] Li C., Wang J. H., Dedieu J. P., Smale’s point estimate theory for Newton’s method on Lie groups. J. Complex., 25, (2009), 128-151. [7] Owren B., Welfert B., The Newton iteration on Lie groups. BIT Numer. Math., 40, (2000), 121-145. [8] Wang J. H., Li C., Kantorovich’s theorem for Newton’s method on Lie groups. J. Zhejiang Univ. Sci. A, 8, (2007), 978-986. [9] Wang J. H., Kantorovich’s theorems for Newton’s method for mappings and optimization problems on Lie groups, IMA J. Numer. Anal., 31, (2011), 322-347.

Chapter 22

Two-Step Newton Method under L− Average Conditions 1.

Introduction

Let B1 , B2 denote Banach spaces, Ω ⊆ B1 be convex and open. We are concerned with the problem of finding a locally unique solution x∗ of equation F(x) = 0,

(22.1)

where F : Ω −→ B2 is continuously Fr´echet differentiable according to Fr´echet. Solving (22.1) is of extreme importance, since many applications reduce to solving (22.1). We resort to iterative methods through which a sequence is generated converging to x∗ under certain conditions on the initial data. Newton’s is without a doubt the most popular method for solving (22.1). It is quadratically convergent [9]. To increase the convergence order multi-step methods have been developed [1]-[21]. In particular, we study the semi-local convergence of the two-step Newton method (TSNM) defined for x0 ∈ Ω and all n = 0, 1, 2, . . . by yn = xn − F 0 (xn )−1 F9xn )

xn+1 = yn − F 0 (xn )−1 F(yn ).

(22.2)

The local, as well as semi-local convergence of TSNM, has been studied extensively under Lipschitz, H¨older and other generalized continuity conditions [1]-[21]. The convergence domain is not large, the error estimates on kxn − x∗ kare pessimistic, and the information concerning the uniqueness of the solution is not the best possible in general. We address all these concerns and present a finer semi-local convergence for TSNM without additional conditions. We present our technique using the concept of the L-average continuity condition [8,18]. But this technique is so general that it can also be used under other continuity conditions [1]-[7] and on other methods along the same lines [1]-[21].

2.

Semi-Local Convergence of TSNM

Let x0 ∈ Ω be such that the inverse of F 0 (x0 ) exists. Consider ρ > 0 such that U(x0 , ρ) ⊂ Ω. Let L0 , L and L1 be positive integrable functions. We need the following types of L-average

170

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

continuity. Definition 19. It is said that F 0 satisfies the center-L0 -average Lipschitz condition on U(x0 , ρ) if for all x ∈ U(x0 , ρ) kF 0 (x0 )−1 (F 0 (x) − F 0 (x0 ))k ≤

Z kx−x0 k 0

L0 (s)ds.

(22.3)

Suppose that equation L0 (s)s − 1 = 0

(22.4)

has a least solution ρ0 ∈ (0, ρ]. Set Ω0 = Ω ∩U(x0 , ρ0 ). Definition 20. It is said that F 0 satisfies the restricted-L-average Lipschitz condition on Ω0 with kx − x0 k + ky − xk < ρ0 kF 0 (x0 )−1 (F 0 (y) − F 0 (x))k ≤

Z kx−x0k+ky−xk

L(s)ds.

(22.5)

kx−x0 k

Definition 21. It is said that F 0 satisfies the L1 -average Lipschitz condition on U(x0 , ρ) for all x, y ∈ U(x0 , ρ) with kx − x0 k + ky − xk < ρ 0

−1

0

0

kF (x0 ) (F (y) − F (x))k ≤

Z kx−x0k+ky−xk kx−x0 k

L1 (s)ds.

(22.6)

Remark 32. It follows from these definitions and ρ0 ≤ ρ

(22.7)

L0 (s) ≤ L1 (s)

(22.8)

L(s) ≤ L1 (s)

(22.9)

that and for all s ∈ [0, ρ0 ). We suppose from now on that L0 (s) ≤ L(s) for all s ∈ [0, ρ0 ).

(22.10)

Otherwise replace L by L¯ in the rest that follow, where this function is the largest of L0 and L on the interval [0, ρ0). Notice that (22.6) (is used in the studies involving average Lipschitz continuity) implies (22.3) and (22.5) but not vice versa. Hence, L0 , L can replace L1 in the semi-local convergence of TSNM (or other methods using (22.3)) to extend its applicability due to (22.8)-(22.10). It is also convenient to introduce the real functions on [0, ρ) for η ≥ 0 by ϕ0 (s) = η − s +

Z s

L0 (t)(s − t)dt,

(22.11)

0

ϕ(s) = η − s +

Z s

L(t)(s − t)dt,

(22.12)

0

Two-Step Newton Method under L− Average Conditions and ϕ0 (s) = η − s +

Z s 0

L1 (t)(s − t)dt.

171

(22.13)

Then, we have ϕ0 (s) ≤ ϕ(s) ≤ ϕ1 (s)

(22.14)

ϕ00 (s) ≤ ϕ0 (s) ≤ ϕ01 (s)

(22.15)

0 ≤ ϕ000 (s) ≤ ϕ00 (s) ≤ ϕ001 (s),

(22.16)

and where ϕ0 (s) = −1+

Z s 0

L(t)dt and ϕ00 (s) = L(s) for all s ∈ [0, ρ0 ). If we use (22.6) we obtain kF 0 (x)−1 F 0 (x0 )k ≤ −

1 . ϕ01 (kx − x0 k)

(22.17)

But if we use the weaker and actually needed (22.3), we obtain instead the tighter kF 0 (x)−1 F 0 (x0 )k ≤ −

1 ϕ00 (kx − x0 k)

≤−

1 ϕ0 (kx − x0 k)

.

(22.18)

Next, we present some standard results from convex analysis (extended from single to two step) [8,18] involving the properties of function ϕ and the corresponding scalar method for u0 and all n = 0, 1, . . ., vn = un − ϕ0 (un )−1 ϕ(un )

un+1 = vn − ϕ0 (un )−1 ϕ(vn ). Define b=

Z ρ0

L(s)sds.

(22.19)

(22.20)

0

The proofs of the results that follow are omitted as straightforward extensions of the proofs for the single-step Newton method [8,18]. Lemma 13. The following hold: (i) If 0 < η < b, then function ϕ is decreasing on [0, ρ0 ], increasing on [ρ0 , ρ] and ϕ(η) > 0, ϕ(ρ0 ) = η − b < 0, ϕ(ρ) = η > 0; ϕ have unique solutions ρ∗ , ρ∗∗ in [0, ρ0], [ρ0 , ρ], respectively satisfying η < ρ∗
0 ϕ0 (ρ∗ )

where en =

kyn − xn k kxn − x∗ k ≤ en , vn − un ρ∗ − u n

e0n 1 ϕ00 (ρ∗ ) ϕ00 (ρ∗ ) 2 , e = 1 − (ρ − t ), e = 1 + (ρ∗ − tn ) and ∗ n n e2n n 2ϕ0 (ρ∗ ) 2ϕ0 (ρ∗ )

(9) If 2+ then

where δ =

2

ϕ00 (ρ∗ ) . ϕ0 (ρ∗ )

ρ∗ ϕ00 (ρ∗ ) ρ∗ L(ρ∗ ) > 0 ⇔ 2− , Rρ ϕ0 (ρ∗ ) 1 − 0 ∗ L0 (s)ds

1 2 − ρ∗ δ kxn+1 − x∗ k ≤ δ kxn − x∗ k3 , 2 2 + ρ∗ δ

Two-Step Newton Method under L− Average Conditions

173

Remark 33. (a) If L0 = L = L1 , the results reduce to the ones where only (22.6) is used. Otherwise, they constitute an improvement with advantages as already stated. (b) Clearly, the results can specialize to the usual cases, when L is a constant function 2γ L , (Smale(Kantorovich case [9]), so ϕ(s) = η − s + s2 or when L(s) = 2 (1 − γs)3 γs2 Wang case [18-20]), so ϕ(s) = η − s + . 1 − γs

3.

Conclusion

We present an extended semi-local convergence analysis for a two-step Newton method under L-average continuity conditions to solve Banach space-valued operator equations.

References [1] Appell J., De Pascale E., Evkhuta N. A., Zabrejko P. P., On the two step Newton method for the solution of nonlinear operator equations, Math. Nachr., 172(1), (1995), 5-14. [2] Argyros I. K., A new semilocal convergence theorem for Newton’s method in Banach space using hypotheses on the second Fr´echet derivative, J. Comput. Appl. Math., 130(1-2), (2001), 369-373. [3] Argyros I. K., On an extension of the mesh-independence principle for operator equations in Banach spaces, Appl. Math. Lett., 9(3), (1996), 1-7. [4] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [5] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020. [6] Argyros I. K., George S., On the complexity of extending the convergence region for Traub’s method, Journal of Complexity 56 (2020) 101423, https://doi.org/10.1016/j.jco.2019.101423. [7] Argyros I. K., Khattri S. K., Weak convergence conditions for the Newton’s method in Banach space using general majorizing sequences, Appl. Math. Comput., 263, (2015), 59-72. [8] Argyros I. K., Hilout S., Extending the applicability of the Gauss Newton method under average Lipschitz type conditions, Numer. Algor., 58(1), (2011), 23-52. [9] Kantorovich L. V., Akilov G. P., Functional Analysis. Oxford: Pergamon, (1982). [10] Deuflhard P., Heindl G., Affine invariant convergence theorems for Newton’s method and extensions to related methods, SIAM J. Numer. Anal., 16(1), (1979), 1-10.

174

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[11] Guti´errez J. M., Hern´andez M. A., Newton’s method under weak Kantorovich conditions, IMA J. Numer., 20(4), (2000), 521-532. [12] Ezquerro J. A., Hern´andez M. A., Generalized differentiability conditions for Newton’s method, IMA J. Numer. Anal., 22(4), (2002), 519-530. [13] Ezquerro J. A., Hern´andez M. A., An improvement of the region of accessibility of Chebyshev’s method from Newton’s method, Math. Comput., 78(267), (2009), 1613-1627. [14] Ferreira O. P., Svaiter B. F., Kantorovich’s majorants principle for Newton’s method, Comput. Optim. Appl., 42(2), (2009), 213-229. [15] Magre˜na´ n A. A., Argyros I. K., Two step Newton methods, J. Complexity, 30(4), (2014), 533-553. [16] Potra F. A., On Q-order and R-order of convergence, J. Optim. Theory. Appl., 63(3), (1989), 415-431. [17] Smale S., Newton’s method estimates from data at one point, in R.Ewing, K. Gross, C. Martin (EDs.), The merging of Disciplines: New Directions in Pure, Applied and Computational Mathematics, Springer-Verlag, New York, 1986, 185-196. [18] Wang J. H., Convergence of Newton’s method and inverse functions theorem in Banach space, Math. Comput., 19(1999), 169-186. [19] Wang J. H., Li C., Kantorovich’s theorem for Newton’s method on Lie groups. J. Zhejiang Univ. Sci. A, 8, (2007), 978-986. [20] Wang J. H., Kantorovich’s theorems for Newton’s method for mappings and optimization problems on Lie groups, IMA J. Numer. Anal., 31, (2011), 322-347. [21] Zabrejko P. P., Nguen D. F., The majorant method in the theory of Newton Kantorovich approximations and the Ptak error estimates, Numer. Funct. Anal. Optim., 9(5-6), (1987), 671-684.

Chapter 23

Unified Methods for Solving Equations 1.

Introduction

Let X,Y denote Banach spaces, and D ⊂ X stand for an open and convex set. We are concerned with the problem of approximating a locally unique solution x∗ of the equation F(x) = 0,

(23.1)

where F : D −→ Y is a differentiable operator according to Fr´echet. A plethora of applications can be written in the form (23.1) using mathematical modeling [1]-[40]. Therefore, it is very important to determine x∗ . Most solution methods for these equations are iterative since finding x∗ in closed form can be done only in special cases. Recently, there has been a surge in the development of high convergent (higher than two) order methods based on different geometrical or algebraic motivations. What most of these approaches have in common is the fact that derivatives of order higher than one are needed for the proofs which however do not appear in these methods. Moreover, no computable error bounds on kxn − x∗ k or uniqueness of the solution results are provided. We are motivated by these concerns. In particular, we extend the applicability of these methods using only the first derivative that actually appear on these methods. Computable error bounds as well as uniqueness results based on the ω− continuity of operator F 0 . Our technique is very general, so it can be applied to extend the applicability of many methods. Next, we demonstrate our technique on the two step method yn = xn − F 0 (xn )−1 F(xn )

xn+1 = xn − A−1 n Bn

(23.2)

m

k

i=1

j=1

where x0 ∈ D an initial point, An = ∑ ci F 0 (αixn +bi yn ) and Bn = ∑ d j F(γ j xn +δ j yn ). Here, m, k are given natural numbers and {ci }, {αi }, {bi}, i = 1, 2, . . ., m, {d j }, {γ j }, {δ j }, j = 1, 2, . . ., k are scalar sequences chosen so that lim xn = x∗ . Some advantages of method n−→∞ (23.2) over other special cases of it were also reported in [1]-[40].

176

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

The technique is given in Section 2, and the numerical experiments The fourth order of convergence when X = Y = Rs for method (23.2) was shown in [37] under the criteria αi + bi = 1, γ j + δ j = 1, (23.3) m

∑ ci 6= 0,

(23.4)

i=1

and some additional conditions on these sequences. Moreover, the existence of derivatives up to order five are needed. 1 3 For example: Let X = Y = R, D = [− , ]. Define f on D by 2 2 f (s) = s3 logs2 + s5 − s4 Then, we have x∗ = 1, and f 0 (s) = 3s2 log s2 + 5s4 − 4s3 + 2s2 , f 00 (s) = 6x logs2 + 20s3 − 12s2 + 10s, f 000 (s) = 6 logs2 + 60s2 − 24s + 22.

Obviously f 000 (s) is not bounded on D. So, the convergence of these methods is not guaranteed by the analysis in these papers. That is why we present a ball convergence based only on the first derivative that only appears on the method (23.2). As in [37], we assume from now on that (23.4) holds too, but we use conditions less restrictive than (23.3). Our technique is introduced in the next section, and the numerical examples in Section 3.

2.

Ball Convergence

It is convenient for the ball convergence of method (23.2) that follows to develop some real functions and parameters. Set T = [0, ∞). Suppose that there exists function w0 : T −→ T continuous and nondecreasing such that equation w0 (s) − 1 = 0 (23.5)

has a minimal positive solution denoted by ρ0 . Set T0 = [0, ρ0 ). Suppose that there exists function w : T0 −→ T continuous and nondecreasing such that for R1 w((1 − θ)t)dθ , h1 (t) = g1 (t) − 1 (23.6) g1 (t) = 0 1 − w0 (t) the equation h1 (t) = 0 m

has a minimal solution r1 ∈ (0, ρ0 ). Set α = | ∑ ci |. Define function p on [0, ρ0 ) by i=1

p(t) =

1 m ∑ |ci |w0((|αi| + |bi|g1(t))t). α i=1

Unified Methods for Solving Equations

177

Suppose that equation p(t) − 1 = 0

(23.7)

has a minimal solution ρ p ∈ (0, ρ0 ). Set ρ = min{ρ0 , ρ p } and T1 = [0, ρ). Define functions q, g2 , h2 on T1 by k

q(t) =

∑ |d j | j=1

Z 1 0

w1 (θ(|γ j| + |δ j |g1 (t))t)dθ(|γ j| + |b j |g1 (t)), g2 (t) =

and

ϕ(t) + q(t) α(1 − p(t))

h2 (t) = g2 (t) − 1, m

where w1 is as function w and ϕ(t) = ∑ |ci |w1 ((|αi| + |bi |g1 (t))t). i=1

Suppose that equation h2 (t) = 0

(23.8)

r = min{r1 , r2 },

(23.9)

has a minimal solutions r2 ∈ (0, ρ). We shall show that is a radius of convergence for method (23.2). These definitions imply that for each t ∈ [0, r) 0 ≤ w0 (t) < 1

(23.10)

0 ≤ p(t) < 1

(23.11)

0 ≤ g1 (t) < 1

(23.12)

0 ≤ g2 (t) < 1.

(23.13)

and ¯ µ) denote the open and closed balls in X with center x ∈ X and of The sets U(x, µ), U(x, radius µ > 0. The following convergence criteria (A) shall be used. (a1) There exists a simple solution x∗ ∈ D of equation F(x) = 0. (a2) There exists a function w0 : T −→ T continuous and nondecreasing such that for all x∈D kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ w0 (kx − x∗ k). Set D0 = D ∩U(x∗ , ρ0 ). (a3) There exists a functions w : T0 −→ T continuous and nondecreasing such that for each x, y ∈ D0 kF 0 (x∗ )−1 (F 0 (y) − F 0 (x))k ≤ w(ky − xk).

178

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

(a4) There exists a functions w1 : T1 −→ T continuous and nondecreasing such that for each x ∈ D1 kF 0 (x∗ )−1 F(x)k ≤ w1 (kx − x∗ k). ¯ ∗ , r) ⊂ D, where r is defined by (23.9). (a5) U(x (a6) There exists r¯ ≥ r such that

Z 1 0

w0 (θ¯r)dθ < 1.

Set D2 = D ∩ U¯ (x∗ , r¯). Next, the ball convergence of method (23.2) is presented using convergence criteria (A) and the preceding notation. Theorem 43. Suppose that criteria (A) hold. Then, sequence {xn } starting from x0 ∈ U(x∗ , r) − {x∗ } is well defined in U(x∗ , r), remains in U(x∗ , r) for each n = 0, 1, 2, . . . and lim xn = x∗ . Moreover, the following assertions hold for en = kxn − x∗ k n−→∞

kyn − x∗ k ≤ g1 (en )en ≤ en < r,

(23.14)

en+1 ≤ g2 (en )en ≤ en < r,

(23.15)

and where r is defined in (23.9) whereas functions g1 and g2 are given previously. Furthermore, the limit point x∗ is the only solution of equation F(x) = 0 in the set D2 given in (a6). Proof. Let v ∈ U(x∗ , r) − {x∗ }. Using (a1), (a2), (23.9) and (23.10), we get in turn that kF 0 (x∗ )−1 (F 0 (v) − F 0 (x∗ ))k ≤ w0 (kv − x∗ k) ≤ w0 (r) < 1,

(23.16)

so F 0 (v) is invertible and kF 0 (v)−1 F 0 (x∗ )k ≤

1 , 1 − w0 (kv − x∗ k)

(23.17)

by the Banach lemma on invertible operators [26]. We also see that for v = x0 , y0 is well defined by the first sub-step of method (23.2) for n = 0. Then, we can write y0 − x∗ = x0 − x∗ − F 0 (x0 )−1 F(x0 ).

(23.18)

Hence, by (23.9), (23.13), (a3), (23.17) (for v = x0 ) and (23.18), we have in turn that ky0 − x∗ k = kF 0 (x0 )−1 F 0 (x∗ )kk

Z 1 0

R1

0 w((1 − θ)e0 )dθe0 ≤ 1 − w0 (e0 ) ≤ g1 (e0 )e0 ≤ e0 < r,

[

F 0 (x∗ )−1 (F 0 (x∗ + θ(x0 − x∗ )) − F 0 (x0 )(x0 − x∗ )dθk

(23.19)

Unified Methods for Solving Equations

179

so (23.14) holds and y0 ∈ U(x∗ , r). In view of (23.9), (23.11) and (23.19), we have in turn that m

m

i=1

i=1

k( ∑ ci F 0 (x∗ ))−1 (A0 − ∑ ci F 0 (x∗ ))k ≤

1 m k ∑ ci F 0 (x∗ )−1 (F 0 (αix0 + bi y0 ) − F 0 (x∗ ))k α i=1



1 m ∑ |ci|w0(kαix0 + bi y0 − (αi + bi )x∗ k) α i=1



1 m ∑ |ci|w0(|αi|e0 + bi ky0 − x∗ k) α i=1



1 m ∑ |ci|w0((|αi| + |bi|g1(e0 ))e0) = p(e0 ) ≤ p(r) < 1, α i=1

so A0 is invertible, 0 kA−1 0 F (x∗ )k ≤

1 , α(1 − p(e0 ))

(23.20)

and x1 is well defined by the second sub-step of method (23.2) for n = 0. We need an estimate on kF 0 (x∗ )−1 A0 k ≤ ≤ ≤

m

∑ |ci|w1 (kαix0 + bi y0 − x∗ k)

i=1 m

∑ |ci|w1 (|αi|e0 + |bi|ky0 − x∗ k)

i=1 m

∑ |ci|w1 ((|αi| + |bi |g1(e0))e0) = ϕ(e0 ).

(23.21)

i=1

Similarly, we get m

kF 0 (x∗ )−1 B0 k ≤ ∑ |d j | i=1

Z 1 0

w1 ((θ(|γ j | + |δ j|g1 (e0 ))e0 )dθ(|γ j | + |δ j|g1 (e0 ))e0 = q(e0 )e0 . (23.22)

Then, from the second sub-step of method (23.2), (23.9), (23.13) and (23.19)-(23.22), we obtain in turn that e1 = kx0 − x∗ − A−1 0 B0 k −1 0 = k[A0 F (x∗ )][F‘(x∗ )−1 (A0 (x0 − x∗ ) − B0 )]k

0 −1 −1 ≤ kA−1 0 F (x∗ )k[kF‘(x∗ ) A0 ke0 + kF‘(x∗ ) B0 k] (ϕ(e0 ) + q(e0 ))e0 = g2 (e0 )e0 ≤ e0 , ≤ 1 − p(e0 )

(23.23)

so (23.15) holds for n = 0 and x1 ∈ U(x∗ , r). Hence, estimations (23.14) and (23.15) hold for n = 0. Suppose these estimations hold for all j = 0, 1, 2, . . ., n. Then, by simply replace x0 , y0 , x1 by x p , y p , x p+1 , in the preceding calculations to complete induction for (23.14) and (23.15) for all n. It then follows from

180

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

e p+1 ≤ ce p < r, c = g2 (e0 ) ∈ [0, 1),

(23.24)

that lim x p = x∗ , and x p+1 ∈ U(x∗ , r). Next, consider u ∈ D2 with F(u) = 0. Using (a2) p−→∞

and (a6), we get in turn for M =

Z 1 0

F 0 (u + θ(x∗ − u))dθ that

kF 0 (x∗ )−1 (M − F 0 (x∗ ))k



Z 1 0

w0 (θkx∗ − uk)dθ ≤

Z 1 0

w0 (θ¯r)dθ < 1,

so x∗ = u follows from the invertibility of M and the identity 0 = F(x∗ ) − F(u) = M(x∗ − u). Remark 34.

1. By (a2), and the estimate

kF 0 (x∗ )−1 F 0 (x)k = kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ )) + Ik

≤ 1 + kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ 1 + w0 (kx − x∗ k)

second condition in (a3) can be dropped, and w1 be defined as w1 (t) = 1 + w0 (t). Notice that, if w1 (t) < 1 + w0 (t), then R1 can be large (see Example 3.1). 2. The results obtained here can be used for operators G satisfying autonomous differential equations [5]-[11] of the form F 0 (x) = T (F(x)) where T is a continuous operator. Then, since F 0 (x∗ ) = T (F(x∗ )) = T (0), we can apply the results without actually knowing x∗ . For example, let F(x) = ex − 1. Then, we can choose: T (x) = x + 1. 3. The local results obtained here can be used for projection algorithms such as the Arnoldi’s algorithm, the generalized minimum residual algorithm (GMRES), the generalized conjugate algorithm(GCR) for combined Newton/finite projection algorithms, and in connection to the mesh independence principle can be used to develop the cheapest and most efficient mesh refinement strategies [5]-[11]. 4. Let w0 (t) = L0 t, and w(t) = Lt. The parameter rA = the convergence radius of Newton’s algorithm [11]

2 was shown by us to be 2L0 + L

xn+1 = xn − F 0 (xn )−1 F(xn ) for each n = 0, 1, 2, · · · under the conditions (a1)-(a3) (w1 is not used). It follows that the convergence radius R of algorithm (23.2) cannot be larger than the convergence radius rA of the second

Unified Methods for Solving Equations

181

order Newton’s algorithm. As already noted in [5]-[11] rA is at least as large as the convergence ball given by Rheinboldt [32] rT R =

2 , 3L1

where L1 is the Lipschitz constant on D, L0 ≤ L1 and L ≤ L1 . In particular, for L0 < L1 or L < L1 , we have that rT R < rA and

rT R 1 L0 → as → 0. rA 3 L1

That is our convergence ball rA is at most three times larger than Rheinboldt’s. The same value for rT R was given by Traub [33]. 5. It is worth noticing that algorithm (23.2) is not changing, when we use the conditions (A) of Theorem 43 instead of the stronger conditions used in [37]. Moreover, we can compute the computational order of convergence (COC) defined by     kxn − x∗ k kxn+1 − x∗ k µ = ln / ln kxn − x∗ k kxn−1 − x∗ k or the approximate computational order of convergence     kxn+1 − xn k kxn − xn−1 k µ1 = ln / ln . kxn − xn−1 k kxn−1 − xn−2 k This way we obtain in practice the order of convergence in a way that avoids the bounds given in [37] involving estimates up to the fifth Fr´echet derivative of the operator F.

3.

Numerical Examples

We use four examples to test our convergence conditions. Example 23. Let X = Y = R. Define F(x) = sinx. Then, we get that x∗ = 0, w0 (s) = w(s) = s and w1 (s) = 1. Table 23.1. Radius for Example 3.1 Radius r1 r2

1

ω1 (s) = e e−1 00.66667 1.40312

182

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Example 24. Let X = Y = C[0, 1], the space of continuous functions defined on [0, 1] with the max norm. Let D = U(0, 1). Define function F on D by F(ϕ)(x) = ϕ(x) − 5

Z 1

xθϕ(θ)3 dθ.

(23.25)

0

We have that F 0 (ϕ(ξ))(x) = ξ(x) − 15

Z 1 0

Then, we get that x∗ = 0, w0 (s) = w1 (s) =

xθϕ(θ)2 ξ(θ)dθ, for each ξ ∈ D. 15 s, w1 (s) = 15. This way, we have that 2

Table 23.2. Radius for Example 3.2 Radius r1 r2

15 s 2 0.08888 0.161167

ω1 (s) =

ω1 (s) = 1 + ω0 (s) 0.08888 0.140882

Example 25. Let X = Y = R3 , D = U(0, 1), x∗ = (0, 0, 0)T and define F on D by F(x) = F(u1 , u2 , u3 ) = (eu1 − 1,

e−1 2 u2 + u2 , u3 )T . 2

(23.26)

For the points u = (u1 , u2 , u3 )T , the Fr´echet derivative is given by  u1  e 0 0 F 0 (u) =  0 (e − 1)u2 + 1 0  . 0 0 1

Using the norm of the maximum of the rows and since G0 (x∗ ) = diag(1, 1, 1), we get by 1 1 conditions (A) w0 (s) = (e − 1)s, w(s) = e e−1 s, and w1 (s) = e e−1 . Table 23.3. Radius for Example 3.3 Radius r1 r2

1

ω1 (s) = e e−1 0.382692 0.760459

ω1 (s) = 1 + ω0 (s) 0.382692 0.949683

Example 26. Returning back to the motivational example at the introduction of this chapter, we have w0 (s) = w(s) = 96.662907s, w1 (s) = 1.0631. Then, we have

Unified Methods for Solving Equations

183

Table 23.4. Radius for Example 3.4 Radius r1 r2

4.

ω1 (s) = 1.0631 0.00689682 0.0143269

ω1 (s) = 1 + ω0 (s) 0.00689682 0.0109309

Conclusion

The applicability of a unifying two-step fourth convergent order method is extended using ω− continuity conditions on the Fr´echet derivative of the operator involved in contrast to earlier studies using the fifth-order derivatives. Our analysis includes computable error estimates as well as the uniqueness of the solution results not given before.

References [1] Abbasbandy S., Extended Newtons method for a system of nonlinear equations by modified Adomian decomposition method, Appl. Math. Comput. 170 (2005) 648656. [2] Adomian G., Solving Frontier Problems of Physics: The Decomposition Method, Kluwer Academic Publishers, Dordrecht, 1994. [3] Amat S., Bermudez C., Hernandez M. A., Martinez E., On an efficient k−step iterative method for nonlinear equations, J. Comput. Appl. Math., 302, (2016), 258-271. [4] Amat S., Argyros I. K., Busquier S., Hern´andez M. A., On two high-order families of frozen Newton-type methods, Numer. Linear. Algebra Appl., 25, (2018), e2126, 1–13. [5] Argyros I. K., Computational theory of iterative solvers. Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Co. New York, U.S.A, 2007. [6] Argyros I. K., Hilout S., Weaker conditions for the convergence of Newton’s method. J. Complexity 28 (2012) 364–387. [7] Argyros I. K., Magr´en˜ an A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018. [8] Argyros I. K., Magr´en˜ an A. A., Iterative methods and their dynamics with applications, CRC Press, New York, USA, 2017. [9] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007).

184

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[10] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020. [11] Argyros I. K., George S., On the complexity of extending the convergence region for Traub’s method, Journal of Complexity 56 (2020) 101423, https://doi.org/10.1016/j.jco.2019.101423. [12] Babajee D. K. R., Dauhoo M. Z., Darvishi M. T., Barati A., A note on the local convergence of iterative methods based on Adomian decomposition method and 3node quadrature rule, Appl. Math. Comput. 200 (2008) 452-458. [13] Babolian E., Biazar J., Vahidi A. R., Solution of a system of nonlinear equations by Adomian decomposition method, Appl. Math. Comput. 150 (2004) 847-854. [14] Cordero A., Torregrosa J. R., Variants of Newtons method for functions of several variables, Appl. Math. Comput. 183 (2006) 199-208. [15] Cordero A., Torregrosa J. R., Variants of Newtons method using fifth-order quadrature formulas, Appl. Math. Comput. 190 (2007) 686-698. [16] Cordero A., Martnez E., Torregrosa J. R., Iterative methods of order four and five for systems of nonlinear equations, J. Comput. Appl. Math. 231 (2009) 541-551. [17] Darvishi M. T., Barati A., A third-order Newton-type method to solve systems of nonlinear equations, Appl. Math. Comput. 187 (2007) 630-635. [18] Frontini M., Sormani E., Third-order methods from quadrature formulae for solving systems of nonlinear equations, Appl. Math. Comput. 149 (2004) 771-782. [19] Golbabai A., Javidi M., A new family of iterative methods for solving system of nonlinear algebric equations, Appl. Math. Comput. 190 (2007) 1717-1722. [20] Golbabai A., Javidi M., Newton-like iterative methods for solving system of nonlinear equations, Appl. Math. Comput. 192 (2007) 546-551. [21] Grau-Sanchez M., Peris J. M., Gutierrez J. M., Accelerated iterative methods for finding solutions of a system of nonlinear equations, Appl. Math. Comput. 190 (2007) 1815-1823. [22] Kou J., A third-order modification of Newton method for systems of nonlinear equations, Appl. Math. Comput. 191 (2007) 117-121. [23] He J. H., A new iteration method for solving algebraic equations, Appl. Math. Comput. 135 (2003) 81-84. [24] Hueso J. L., Martinez E., Torregrosa J. R., Third and fourth order iterative methods free from the second derivative for nonlinear systems, Appl. Math. Comput. 211(2009) 190-197. [25] Hueso J. L., Martinez E., Torregrosa J. R., Third order iterative methods free from second derivative for nonlinear systems, Appl. Math. Comput. 215 (2009) 58-65.

Unified Methods for Solving Equations

185

[26] Jafari H., Daftardar-Gejji V., Revised Adomian decomposition method for solving a system of nonlinear equations, Appl. Math. Comput. 175 (2006) 1-7. [27] Kaya D., El-Sayed S. M., Adomians decomposition method applied to systems of nonlinear algebraic equations, Appl. Math. Comput. 154 (2004) 487-493. [28] Magre˜na´ n A. A., Cordero A., Guti´errez J. M., Torregrosa J. R., Real qualitative behavior of a fourth-order family of iterative methods by using the convergence plane, Mathematics and Computers in Simulation, 105:49-61, 2014. [29] Magre˜na´ n A. A., Argyros I. K., Two-step Newton methods. Journal of Complexity, 30(4):533-553, 2014. [30] Noor M. A., Noor K. I., Waseem M., Decomposition method for solving system of linear equations, Engin. Math. Lett. 2 (2013) 34-41. [31] Noor M. A., Waseem M., Noor K. I., Al-Said E., Variational iteration technique for solving a system of nonlinear equations, Optim. Lett. 7 (2013) 991-1007. [32] Noor M. A., Waseem M., Noor K. I., New iterative technique for solving a system of equations, Appl. Math. Comput., 271, (2015), 446-466. [33] Noor M. A., Waseem M., Some iterative methods for solving a system of nonlinear equations, Comput. Math. Appl. 57 (2009) 101-106. ¨ [34] Ozel M., A new decomposition method for solving system of nonlinear equations, Math. Comput. Appl. 15 (2010) 89-95. ¨ [35] Ozel M., Chundang U., Sanprasert W., Single-step formulas and multi-step formulas of the integration method for solving the initial value problem of ordinary differential equation, Appl. Math. Comput. 190 (2007) 1438-1444. [36] Rheinboldt W. C., An adaptive continuation process for solving systems of nonlinear equations, In Mathematical models and numerical solvers (Tikhonov A. N. et al. eds.) pub.3, (1977), 129-142 Banach Center, Warsaw Poland. [37] Su Q., A unified model for solving a system of nonlinear equations, Appl. Math. Comput., 290, (2016), 46-55. [38] Sharma J. R., Guha R. K., Sharma R., An efficient fourth order weighted-newton method for systems of nonlinear equations, Numer. Algorithms 62 (2013), 307-323. [39] Traub J. F., Iterative solvers for the solution of equations, AMS Chelsea Publishing, 1982. [40] Waseem M., Noor M. A., Noor K. I., Efficient method for solving a system of nonlinear equations, Appl. Math. Comput., 275, (2016), 134-146.

Chapter 24

Eighth Convergence Order Derivative Free Method 1.

Introduction

We are concerned with the problem of approximating a locally unique solution x∗ of the equation F(x) = 0, (24.1) where F : D −→ X is a differentiable operator according to Fr´echet, X denotes Banach spaces, and D ⊂ X stand for an open and convex set. Many applications can be written in the form (24.1) using mathematical modeling [1]-[7]. Iterative methods are used to find an approximation for x∗ because closed-form solutions can be found only in special cases. Many high convergent (higher than two) order methods based on different geometrical or algebraic motivations are studied in the literature, and most of these methods have used derivatives of order higher than one to prove the convergence order. Moreover, no computable error bounds on kxn − x∗ k or uniqueness of the solution results are provided. We are motivated by these concerns. In particular, we extend the applicability of these methods using only the first derivative that actually appear on these methods. Computable error bounds as well as uniqueness results based on the ω− continuity of operator F 0 . Our technique is very general, so it can be applied to extend the applicability of many methods. Next, we demonstrate our technique on the an eighth order method yn = xn − v1

zn = yn − α0 v2 − (3 − 2α0 )v3 − (α0 − 2)v4

(24.2)

xn+1 = zn − α1 v5 − α2 v6 − α3 v7 − α4 v8 − α5 v9

188

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al. Tn v1 = F(xn ) Tn v2 = F(yn ) Tn v3 = Nn v2 Tn v4 = Nn v3 Tn v5 = Qn F(zn ) Tn v6 = Qn v5 Tn v7 = Qn v6 Tn v8 = Qn v7 Tn v9 = Qn v8 Tn = [xn , wn ; F], wn = xn + β0 F(xn ), Nn = [hn , yn ; F], hn = yn + β1 F(yn ), Qn = [ln , zn ; F], ln = zn + β2 F(zn ),

[., .; F] : D × D −→ L(X, X), x0 ∈ D, a divided difference of order one and α0 , α1 , . . .α5 , β0 , β1 , β2 are given numbers. The eighth order of convergence when X = Y = Rm was given in [1,2] using hypotheses reaching the ninth derivative and α1 = 4 + α5 , α2 = −6 + 4α5 , α3 = 4 + 6α5 , α4 = −1 − 4α5 and α5 is a free parameter. Some advantages of method (24.2) over other special cases of it were also reported in [1]-[48]. 1 3 For example: Let X = Y = R, D = [− , ]. Define f on D by 2 2 f (s) = s3 logs2 + s5 − s4 Then, we have x∗ = 1, and f 0 (s) = 3s2 log s2 + 5s4 − 4s3 + 2s2 , f 00 (s) = 6x logs2 + 20s3 − 12s2 + 10s, f 000 (s) = 6 logs2 + 60s2 − 24s + 22.

Obviously f 000 (s) is not bounded on D. So, the convergence of these methods is not guaranteed by the analysis in these papers. That is why we present a ball convergence based only on the first derivative that only appears on the method (24.2).

2.

Ball Convergence

It is convenient for the ball convergence of method (24.2) that follows to develop some real functions and parameters. Set M = [0, ∞). Suppose that there exists function w0 : M −→ M continuous and nondecreasing such that equation w0 (s) − 1 = 0 (24.3) has a least positive solution denoted by ρ. Set M0 = [0, ρ).

Eighth Convergence Order Derivative Free Method

189

Suppose that there exists functions w : M0 −→ M, w1 : M0 −→ M, w2 : M0 −→ M continuous and nondecreasing and parameters λ, γ, δ and µ such that for g1 (t) =

w(t) , 1 − w0 (t)

 |α0 |γw1 (t) |α0 − 2|γδw1 (t) + 1+ (1 − w0 (t))2 (1 − w0 (t))3  |2α0 − 1|γδ + g1 (t), (1 − w0 (t))2  |α1 |λ |α2 |λµ |α3 |λµ2 g3 (t) = 1+ + + 1 − w0 (t) (1 − w0 (t))2 (1 − w0 (t))3  |α5 |λµ4 |α4 |λµ3 + + g2 (t), (1 − w0 (t))4 (1 − w0 (t))5 g2 (t) =

and hi (t) = gi (t) − 1, i = 1, 2, 3, equations hi (t) = 0 have least solutions ri ∈ (0, ρ), respectively. We shall show that R = min{ri },

(24.4)

is a radius of convergence for method (24.2). It follows by these definitions that for each t ∈ [0, R) 0 ≤ w0 (t) < 1 (24.5) and 0 ≤ gi (t) < 1.

(24.6)

¯ µ) denote the open and closed balls in X with center x ∈ X and of The sets U(x, µ), U(x, radius µ > 0. The following conditions (A) are needed. (a1) F : D −→ X is a continuous operator with divided difference of order one [., .; F] : D × D −→ L(X, X), there exists a simple solution x∗ ∈ D of equation F(x) = 0. (a2) There exists a function w0 : M −→ M continuous and nondecreasing such that for all x∈D kF 0 (x∗ )−1 ([x, x + β0 F(x); F] − F 0 (x∗ ))k ≤ w0 (kx − x∗ k). Set D0 = D ∩U(x∗ , ρ), where ρ is given in (24.3). (a3) There exists a functions w1 , w2 : M0 −→ T continuous and nondecreasing and nonnegative parameters p, p1 , p2 , λ, γ, δ, µ such that for all x, y, h, z ∈ D0 kF 0 (x∗ )−1 ([x, x + β0 F(x); F] − [x, x∗ ; F])k ≤ w(kx − x∗ k), kF 0 (x∗ )−1 ([x, x + β0 F(x); F] − [y + β1 F(y), y; F])k ≤ w1 (kx − x∗ k),

190

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al. y = x − v1 .

kF 0 (x∗ )−1 ([x, x∗ ; F] − F 0 (x∗ ))k ≤ w2 (kx − x∗ k), kF 0 (x∗ )−1 [y, x∗ ; F]k ≤ γ,

kF 0 (x∗ )−1 [z, x∗; F]k ≤ λ, kF 0 (x∗ )−1 [h, y; F]k ≤ δ, kI + β0 [x, x∗ ; F]k ≤ p, kI + β1 [y, x∗; F]k ≤ p1 , kI + β2 [z, x∗; F]k ≤ p2 , where w = x + β0 F(x), h = y + β1 F(y), l = z + β2 F(z), and y, z are given by methods (24.2). ¯ ∗ , R) ¯ ⊂ D, where R¯ = max{R, pR, p1 g1 (R)R, p2g2 (R)R}. (a4) U(x (a5) There exists R∗ ≥ R such that

w2 (R∗ ) < 1.

Set D1 = D ∩ U¯ (x∗ , R∗ ). Next, we present the ball convergence of method (24.2) under the conditions (A) and the preceding notation. Theorem 44. Suppose that conditions (A) hold. Then, sequence {xn } starting from x0 ∈ U(x∗ , R) − {x∗ } and generated by method (24.2) is well defined in U(x∗ , R), remains in U(x∗ , R) for each n = 0, 1, 2, . . . and lim xn = x∗ . Moreover, the following assertions hold for en = kxn − x∗ k

n−→∞

kyn − x∗ k ≤ g1 (en )en ≤ en < R,

(24.7)

kzn − x∗ k ≤ g2 (en )en ≤ en ,

(24.8)

en+1 ≤ g3 (en )en ≤ en ,

(24.9)

and where R is defined by (24.4). Furthermore, the limit point x∗ is the only solution of equation F(x) = 0 in the set D1 given in (a5). Proof. Let v ∈ U(x∗ , R) − {x∗ }. Using (a1), (a2), (24.4) and (24.5), we get in turn that kF 0 (x∗ )−1 (F 0 (v) − F 0 (x∗ ))k ≤ w0 (kv − x∗ k) ≤ w0 (R) < 1,

(24.10)

so F 0 (v)−1 ∈ L(X, X) and kF 0 (v)−1F 0 (x∗ )k ≤

1 1 − w0 (kv − x∗ k)

(24.11)

Eighth Convergence Order Derivative Free Method

191

by the Banach lemma on invertible operators [40]. Moreover, y0 , z0 , x1 , v j , j = 1, 2, . . ., 9 are well defined by method (24.2) for n = 0. Then, we have in turn the estimates by (24.4), (24.5), (24.6), (a1), (a3) and (24.11) ky0 − x∗ k = kx0 − x∗ − T0−1 F(x0 )k

≤ k([x0 , w0 ; F]−1 F 0 (x∗ ))(F 0 (x∗ )−1 ([x0 , w0 ; F] − [x0 , x∗ ; F])(x0 − x∗ )k w0 (e0 )e0 ≤ g1 (e0 )e0 ≤ e0 < R, (24.12) ≤ 1 − w0 (e0 ) kw0 − x∗ k = kx0 − x∗ + β0 (F(x0 ) − F(x∗ ))k ≤ k(I + [x0 , x∗ ; F])(x0 − x∗ )k ≤ kI + [x0 , x∗ ; F]kkx0 − x∗ k ¯ ≤ pR ≤ R,

(24.13)

kz0 − x∗ k = ky0 − x∗ − α0 v2 − (3 − 2α0 )v3 − (α0 − 2)v4 k

= ky0 − x∗ + α0 (v2 − v3 ) + (α0 − 2)(v3 − v4 ) + (2α0 − 1)v3 k

= y0 − x∗ + α0 T0−1 (T0 − N0 )T0−1 F(y0 )

+(α0 − 2)T0−1 N0 T0−1 (T0 − N0 )T0−1 F(y0 )

−(2α0 − 1)T0−1 N0 T0−1 F(y0 )k  |α0 |w1 (e0 )γ |α0 − 2|w1 (e0 )γδ ≤ 1+ + 2 (1 − w0 (e0 )) (1 − w0 (e0 ))3  |2α0 − 1|γδ + ky0 − x∗ k ≤ g2 (e0 )e0 ≤ e0 , (1 − w0 (e0 ))2

(24.14)

kF 0 (x∗ )−1 [y0 , x∗ ; F]k ≤ γ,

kF 0 (x∗ )−1 [h0 , y0 ; F]k ≤ δ, e1 = kz0 − x∗ α1 T0−1 F(z0 ) − α2 T0−1 Q0 T0−1 F(z0 )

−1 −1 −1 −α3 T0−1 Q0 T0−1 Q0 T0−1 F(z0 ) − α4 T0−1 Q−1 0 T0 Q0 T0 Q) T0 F(z0 )

−α5 T0−1 Q0 T0−1 Q0 T0−1 Q)T0−1 Q) T0−1 F(z0 )k  |α1 |λµ |α0 |λµ ≤ 1+ + 1 − w0 (e0 ) (1 − w0 (e0 ))2 |α3 |λµ2 |α4 |λµ3 + + (1 − w0 (e0 ))3 (1 − w0 (e0 ))4  |α5 |λµ4 + g2 (e0 )e0 (1 − w0 (e0 ))5 ≤ g3 (e0 )e0 ≤ e0 , kF 0 (x∗ )−1 [z0, x∗ ; F]k ≤ λ,

(24.15)

192

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al. kF 0 (x∗ )−1 [l0 , z0 ; F]k ≤ µ, ky0 + β1 F(y0 ) − x∗ k = k(I + β1 [y0 , x∗ ; F])(y0 − x∗ )k

¯ ≤ p1 ky0 − x∗ k ≤ p1 g1 (e0 )e0 ≤ R,

and kz − x∗ + β2 F(y0 )k = k(I + β2 [z0 , x∗ ; F])(z0 − x∗ )k

¯ ≤ p2 kz0 − x∗ k ≤ p2 g2 (e0 )e0 ≤ R.

Hence, estimations (24.7)– (24.9) hold for n = 0. Suppose they hold for all m = 0, 1, 2, . . ., n. Then, by simply replace x0 , y0 , z0 , x1 by xm , ym , zm, xm+1 , in the preceding estimations, we terminate the induction for (24.7)– (24.9) for all n. Then from the estimation em+1 ≤ cem < r, c = g3 (e0 ) ∈ [0, 1),

(24.16)

we deduce that lim xm = x∗ , and xm+1 ∈ U(x∗ , R). Next, consider q ∈ D1 with F(q) = 0. mp−→∞

Then, by (a2) and (a5), for G = [x∗ , q; F], we have

kF 0 (x∗ )−1 (G − F 0 (x∗ ))k

≤ w2 (kx∗ − qk) ≤ w2 (R∗ ) < 1, so G−1 ∈ L(X, X). Then, we conclude x∗ = q from the identity 0 = F(x∗ ) − F(q) = G(x∗ − q).

3.

Conclusion

We extend the ball convergence of an eighth convergence derivative-free method for solving Banach space valued operators under ω− continuity conditions using only the first derivative in contrast to earlier works using conditions up to the ninth derivative.

References [1] Ahmad F., Soleymani F., Haghani F. K., Capizzano S. S., Higher-order derivative free iterative methods with and without memory for systems of nonlinear equations, Appl. Math. Comput., 314, (2017), 199-211. [2] Ahmad F., Tohidi E., Ullah M. Z., Carrasco J. A., Higher order multi-step Jarratt-like method for solving systems of nonlinear equations: application to PDEs and ODEs, Comput. Math. Appl. 70 (2015) 624636. [3] Abbasbandy S., Extended Newtons method for a system of nonlinear equations by modified Adomian decomposition method, Appl. Math. Comput. 170 (2005) 648656. [4] Adomian G., Solving Frontier Problems of Physics: The Decomposition Method, Kluwer Academic Publishers, Dordrecht, 1994.

Eighth Convergence Order Derivative Free Method

193

[5] Amat S., Bermudez C., Hernandez M. A., Martinez E., On an efficient k−step iterative method for nonlinear equations, J. Comput. Appl. Math., 302, (2016), 258-271. [6] Amat S., Argyros I. K., Busquier S., Hern´andez M. A., On two high-order families of frozen Newton-type methods, Numer. Linear. Algebra Appl., 25, (2018), e2126, 1–13. [7] Argyros I. K., Computational theory of iterative solvers. Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Co. New York, U.S.A, 2007. [8] Argyros I. K., Hilout S., Weaker conditions for the convergence of Newton’s method. J. Complexity 28 (2012) 364–387. [9] Argyros I. K., Magr´en˜ an A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018. [10] Argyros I. K., Magr´en˜ an A. A., Iterative methods and their dynamics with applications, CRC Press, New York, USA, 2017. [11] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [12] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020. [13] Argyros I. K., George S., On the complexity of extending the convergence region for Traub’s method, Journal of Complexity 56 (2020) 101423, https://doi.org/10.1016/j.jco.2019.101423. [14] Budzkoa D.A., Cordero A., Torregrosa J. R., New family of iterative methods based on the ErmakovKalitkin scheme for solving nonlinear systems of equations, Comput. Math. Math. Phy. 55 (2015) 19471959. [15] Babajee D. K. R., Dauhoo M. Z., Darvishi M. T., Barati A., A note on the local convergence of iterative methods based on Adomian decomposition method and 3node quadrature rule, Appl. Math. Comput. 200 (2008) 452-458. [16] Babolian E., Biazar J., Vahidi A. R., Solution of a system of nonlinear equations by Adomian decomposition method, Appl. Math. Comput. 150 (2004) 847-854. [17] Cordero A., Torregrosa J. R., Variants of Newtons method for functions of several variables, Appl. Math. Comput. 183 (2006) 199-208. [18] Cordero A., Torregrosa J. R., Variants of Newtons method using fifth-order quadrature formulas, Appl. Math. Comput. 190 (2007) 686-698. [19] Cordero A., Martnez E., Torregrosa J. R., Iterative methods of order four and five for systems of nonlinear equations, J. Comput. Appl. Math. 231 (2009) 541-551.

194

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[20] Darvishi M. T., Barati A., A third-order Newton-type method to solve systems of nonlinear equations, Appl. Math. Comput. 187 (2007) 630-635. [21] Frontini M., Sormani E., Third-order methods from quadrature formulae for solving systems of nonlinear equations, Appl. Math. Comput. 149 (2004) 771-782. [22] Grau-Snchez Noguera M. M., Amat S., On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods, J. Comput. Appl. Math. 237 (2013) 363372. [23] Golbabai A., Javidi M., A new family of iterative methods for solving system of nonlinear algebric equations, Appl. Math. Comput. 190 (2007) 1717-1722. [24] Golbabai A., Javidi M., Newton-like iterative methods for solving system of nonlinear equations, Appl. Math. Comput. 192 (2007) 546-551. [25] Grau-Sanchez M., Peris J. M. Gutierrez J. M., Accelerated iterative methods for finding solutions of a system of nonlinear equations, Appl. Math. Comput. 190 (2007) 1815-1823. [26] Kou J., A third-order modification of Newton method for systems of nonlinear equations, Appl. Math. Comput. 191 (2007) 117-121. [27] He J. H., A new iteration method for solving algebraic equations, Appl. Math. Comput. 135 (2003) 81-84. [28] Hueso J. L., Martinez E., Torregrosa J. R., Third and fourth order iterative methods free from second derivative for nonlinear systems, Appl. Math. Comput. 211(2009) 190-197. [29] Hueso J. L., Martinez E., Torregrosa J. R., Third order iterative methods free from second derivative for nonlinear systems, Appl. Math. Comput. 215 (2009) 58-65. [30] Jafari H., Daftardar-Gejji V., Revised Adomian decomposition method for solving a system of nonlinear equations, Appl. Math. Comput. 175 (2006) 1-7. [31] Kaya D., El-Sayed S. M., Adomians decomposition method applied to systems of nonlinear algebraic equations, Appl. Math. Comput. 154 (2004) 487-493. [32] Magre˜na´ n A. A., Cordero A., Guti´errez J. M., Torregrosa J. R., Real qualitative behavior of a fourth-order family of iterative methods by using the convergence plane, Mathematics and Computers in Simulation, 105:49-61, 2014. [33] Magre˜na´ n A. A., Argyros I. K., Two-step Newton methods. Journal of Complexity, 30(4):533-553, 2014. [34] Noor M. A., Noor K. I., Waseem M., Decomposition method for solving system of linear equations, Engin. Math. Lett. 2 (2013) 34-41. [35] Noor M. A., Waseem M., Noor K. I., Al-Said E., Variational iteration technique for solving a system of nonlinear equations, Optim. Lett. 7 (2013) 991-1007.

Eighth Convergence Order Derivative Free Method

195

[36] Noor M. A., Waseem M., Noor K. I., New iterative technique for solving a system of equations, Appl. Math. Comput., 271, (2015), 446-466. [37] Noor M. A., Waseem M., Some iterative methods for solving a system of nonlinear equations, Comput. Math. Appl. 57 (2009) 101-106. ¨ [38] Ozel M., A new decomposition method for solving system of nonlinear equations, Math. Comput. Appl. 15 (2010) 89-95. ¨ [39] Ozel M., Chundang U., Sanprasert W., Single-step formulas and multi-step formulas of the integration method for solving the initial value problem of ordinary differential equation, Appl. Math. Comput. 190 (2007) 1438-1444. [40] Rheinboldt W. C., An adaptive continuation process for solving systems of nonlinear equations, In Mathematical models and numerical solvers (Tikhonov A. N. et al. eds.) pub.3, (1977), 129-142 Banach Center, Warsaw Poland. [41] Su Q., A unified model for solving a system of nonlinear equations, Appl. Math. Comput., 290, (2016), 46-55. [42] Sharma J. R., Guha R. K., Sharma R., An efficient fourth order weighted-newton method for systems of nonlinear equations, Numer. Algorithms 62 (2013), 307-323. [43] Sharma J. R., Arora H., Petkovic M. S., An efficient derivative free family of fourth order methods for solving systems of nonlinear equations, Appl. Math. Comput. 235 (2014) 383393. [44] Soleymani F., Lotfi T., Bakhtiari P., A multi-step class of iterative methods for nonlinear systems, Optim. Lett. 8 (2014) 10011015. [45] Tsoulos I. G., Stavrakoudis A., On locating all roots of systems of nonlinear equations inside bounded domain using global optimization methods, Non. Anal.: Real World Appl. 11 (2010) 2465-2471. [46] Traub J. F., Iterative solvers for the solution of equations, AMS Chelsea Publishing, 1982. [47] Waseem M., Noor M. A., Noor K. I., Efficient method for solving a system of nonlinear equations, Appl. Math. Comput., 275, (2016), 134-146. [48] Wang X., Zhang T., Qian W., Teng M., Seventh-order derivative-free iterative method for solving nonlinear systems, Numer. Algorithm 70 (2015) 545-558.

Chapter 25

m−Step Methods 1.

Introduction

Let X,Y stand for Banach spaces, and D ⊂ X denote an open and convex set. We are interested in finding a solution x∗ for the equation F(x) = 0,

(25.1)

where F : D ⊆ X −→ Y is differentiable according to Fr´echet. We utilize m-step method defined for each n = 0, 1, 2, . . . for x(0) ∈ D by (n)

y1

(n)

y2

(n)

y3

(n)

y4

x(n+1)

2 = x(n) − α(n) 3 21 9 15 (n) = x − (I + β(n) − (β(n))2 + (β(n))3 )α(n) 8 2 8 5 1 (n) (n) = y2 − (3I − β(n) + (β(n))2 )γ3 2 2 5 1 (n) (n) = y3 − (3I − β(n) + (β(n))2 )γ4 2 2 .. . 5 1 (n) (n) = ym = ym−1 − (3I − β(n) + (β(n))2 )γm , 2 2

(25.2)

where F 0 (x(n))α(n) = F(x(n) ), F 0 (x(n))β(n) = F 0 (y(n) ) and

(n)

(n)

F 0 (x(n))γi = F(yi−1 ), i = 3, 4, . . ., m. Method (25.2) uses (m-1) function and two derivative evaluations per iteration. The convergence order 2m was established in [1](see also [17,20,23]), when X = Y = Rk using Taylor series and conditions on the derivatives up to order q > 2 not appearing on method (25.2) limiting its applicability.

198

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al. 1 3 For example: Let X = Y = R, D = [− , ]. Define f on D by 2 2  3 s log s2 + s5 − s4 i f s 6= 0 f (s) = 0 i f s = 0.

Then, we have x∗ = 1, and f 0 (s) = 3s2 log s2 + 5s4 − 4s3 + 2s2 , f 00 (s) = 6x logs2 + 20s3 − 12s2 + 10s, f 000 (s) = 6 logs2 + 60s2 − 24s + 22.

Obviously f 000 (s) is not bounded on D. So, the convergence of algorithm (25.2) is not guaranteed by the analysis in [1] (see also [17,20,23]). In our chapter, we extend the applicability of the method (25.2) using only the first derivative appearing on it. The convergence order 2m is recovered using COC or ACOC ( to be precise in Remark 2.2), not requiring the usage of higher than one derivative. Moreover, we provide error bounds on kx(n) − x∗ k as well as uniqueness results based on ω− continuity not given in [1](see also[17,20,23]). Our idea can be used to extend the applicability of other methods [1]-[24]. The new idea is given in Section 2, and the numerical experiments in Section 3.

2.

Local Convergence

It is convenient to introduce some real functions and parameters based on which the local convergence analysis of the method (25.2) is given in this section. Let M = [0, ∞). Suppose that there exists function w0 : M −→ M continuous and nondecreasing such that equation w0 (s) − 1 = 0 (25.3) has a least positive solution denoted by s0 . Let M0 = [0, s0). Suppose that there exist functions w, w1 : M0 −→ M continuous and nondecreasing such that for R1 1R1 0 w((1 − θ)s)dθ + 3 0 w1 (θs)dθ , ϕ1 (s) = 1 − w0 (s) ψ1 (s) = ϕ1 (s) − 1,

equation ψ1 (s) = 0 has a least solution ρ1 ∈ (0, s0 ).

(25.4)

m−Step Methods

199

Suppose that for functions λ : M0 −→ M, ϕ2 : M0 −→ M, ψ2 : M0 −→ M defined by λ(s) =

ϕ2 (s) =

2(w0 (s) + w0 (ϕ1 (s)s) 1 − w0 (s)  2 w0 (s) + w0 (ϕ1 (s)s) +5 , 1 − w0 (s) R1 0

w((1 − θ)s)dθ 1 − w0 (s)

3 (w0 (s) + w0 (ϕ1 (s)s))λ(s) + 8 (1 − w0 (s))2

R1 0

w1 (θs)dθ

3 λ(s) 01 w1 (θs)dθ , + 8 1 − w0 (s) R

equation ψ2 (s) = ϕ2 (s) − 1 = 0,

(25.5)

has a least solution ρ2 ∈ (0, s0 ). Suppose that for functions µ : M0 −→ M, ϕ3 : M0 −→ M, ψ3 : M0 −→ M defined by "  1 (w0 (s) + w0 (ϕ2 (s)s) 2 µ(s) = 2 1 − w0 (s)  w0 (s) + w0 (ϕ1 (s)s) , +3 1 − w0 (s) "R 1 0 w((1 − θ)ϕ2 (s)s)dθ ψ3 (s) = 1 − w0 (s) (w0 (s) + w0 (ϕ2 (s)s)) 01 w1 (θϕ2 (s)s)dθ + (1 − w0 (s))2 # R1 µ(s) 0 w1 (θϕ2 (s)s)dθ + ϕ2 (s), 1 − w0 (s) R

ψ3 (s) = ϕ3 (s) − 1, equation

ψ3 (s) = 0

(25.6)

has a least solution ρ3 ∈ (0, s0 ). Suppose that for functions ϕi : M0 −→ M, ψi : M0 −→ M, i = 3, 4, . . ., m defined as ψi (s) = g3 (s)m−2g2 (s)g1(s) ψi (s) = ϕi (s) − 1, equation

ψi (s) = 0

(25.7)

has a least solution ρi ∈ (0, s0). We shall show that ρ = min{ρ j }, j = 1, 2, . . ., m

(25.8)

200

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

is a radius of convergence for method (25.2). It follows by these definitions that for all s ∈ [0, ρ) 0 ≤ w0 (s) < 1, (25.9) and 0 ≤ ϕ j (s) < 1.

(25.10)

Let U(v, r) stand for an open ball in X with center v ∈ X and of radius r > 0. Moreover, ¯ r) stand for its closure. U(v, The conditions (Γ) shall be used:. (Γ1 ) There exists a simple solution x∗ ∈ D of equation F(x) = 0. (Γ2 ) There exist function w0 : M −→ M continuous and nondecreasing such that for all x∈D kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ w0 kx − x∗ k. Set D0 = D ∩U(x∗ , s0 ) provided that s0 exists and is given by (25.3). (Γ3 ) There exist functions w : M0 −→ M, w1 : M0 −→ M such that for all x, y ∈ D0 kF 0 (x∗ )−1 (F 0 (y) − F 0 (x))k ≤ w(kx − yk) and kF 0 (x∗ )−1 F 0 (x)k ≤ w1 (kx − x∗ k). ¯ ∗ , ρ) ⊂ D, where ρ exists and is given by (25.8) and (Γ4 ) U(x (Γ5 ) There exists ρ∗ ≥ ρ such that

Z 1 0

w0 (θρ∗ )dθ < 1. Let D1 = D ∩ U¯ (x∗ , ρ∗ ).

Next, we are based on conditions (Γ) and the introduced notation to present the local convergence analysis of the method (25.2). Theorem 45. Suppose conditions (Γ) hold. Then, sequence {x(n) } starting from x(0) ∈ U(x∗ , ρ) − {x∗ } and generated by method (25.2) is well defined in U(x∗ , ρ), remains in U(x∗ , ρ) and converges to x∗ . Moreover, x∗ is the only solution of equation F(x) = 0 in the set D1 given in (Γ5 ). Proof. Let v ∈ U(x∗ , ρ) − {x∗ }. Then, by (25.8), (25.9), (Γ1 ) and (Γ2 ), we obtain kF 0 (x∗ )−1 (F 0 (v) − F 0 (x∗ ))k ≤ w0 (kv − x∗ k) ≤ w0 (ρ) < 1,

(25.11)

so F 0 (v)−1 exists by the Banach lemma on invertible operators [21] and kF 0 (v)−1 F 0 (x∗ )k ≤

1 . 1 − w0 (kv − x∗ k) (0)

(25.12)

(0)

We also have by method (25.2) and (25.12) that iterates y1 , y2 , . . .x(1) are well defined. We can write by the first sub-step of method (25.2) for n = 0 that 1 (0) y1 − x∗ = x(0) − x∗ − F 0 (x(0) )−1 F(x(0) ) + F 0 (x(0))−1 F(x(0)). 3

(25.13)

m−Step Methods

201

Using (25.8), (25.10) (for j = 1), (Γ3 ), (25.12) (for v = x0) ) and (25.13), we get in turn that 1 (0) ky1 − x∗ k = kx(0) − x∗ − F 0 (x(0) )−1 F(x(0) + F 0 (x(0) )−1 F(x(0) ) 3 0 (0) −1 0 ≤ kF (x ) F (x∗ )k ×k

Z 1 0

F 0 (x∗ )−1 (F 0 (x∗ + θ(x(0) − x∗ )) − F 0 (x(0) ))dθ(x(0) − x∗ )k

1 + kF 0 (x(0))−1 F 0 (x∗ )kkF 0 (x∗ )−1 F(x(0))k " R31 # R (0) − x k)dθ + 1 1 w (θkx(0) − x k)dθ ∗ 1 ∗ 0 w((1 − θ)kx 0 3 kx(0) − x∗ k 1 − w0 (kx(0) − x∗ k)

(0)

≤ ϕ1 (kx(0) − x∗ k)kx(0) − x∗ k ≤ kx(0) − x∗ k < ρ,

(25.14)

so y1 ∈ U(x∗ , ρ). Similarly, by the second sub-step of method (25.2) for n = 0 3 (0) y2 − x∗ = x(0) − x∗ − F 0 (x(0))−1 F(x(0) − F 0 (x(0))−1 F(y(0)) 8 (0) (0) 2 0 (0) −1 ×[7I − 12β + 5(β ) ]F (x ) F(x(0) ) = x(0) − x∗ − F 0 (x(0))−1 F(x(0) ) 3 − F 0 (x(0))−1 F(y(0))δF 0 (x(0))−1 F(x(0)) 8 (0) = x − x∗ − F 0 (x(0))−1 F(x(0) 3 − (I − F 0 (x(0))−1 F(y(0)))δF 0 (x(0) )−1 F(x(0)) 8 3 0 (0) −1 + δF (x ) F(x(0) ), 8

(25.15)

where δ = 7I − 12F 0 (x(0) )−1 F(y(0) ) + (F 0 (x(0))−1 F(y(0)))2 = 7I − 12(I − F 0 (x(0) )−1 F(y(0) )) − I) +5[(I − F 0 (x(0) )−1 F(y(0))) − I]2

= 7I − 12(I − F 0 (x(0) )−1 F(y(0) )) + 12I

+5[(I − F 0 (x(0) )−1 F(y(0)))2 − 2(I − F 0 (x(0) )−1 F(y(0) )) + I]

= 5(I − F 0 (x(0) )−1 F(y(0)))2 + 2(I − F 0 (x(0) )−1 F(y(0) ))

= 5[F 0 (x(0))−1 (F 0 (x(0)) − F 0 (y(0))]2

+2F 0 (x(0))−1 (F 0 (x(0)) − F 0 (y(0)).

(25.16)

Then, we have that kδk ≤ 2

w0 (kx(0) − x∗ k) + w0 (ky(0) − x∗ k) 1 − w0 (kx(0) − x∗ k)

+5

w0 (kx(0) − x∗ k) + w0 (ky(0) − x∗ k) 1 − w0 (kx(0) − x∗ k)

≤ λ(kx(0) − x∗ k).

!2 (25.17)

202

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

By returning back to (25.15) and using (25.8), (25.10) (for j = 2) and (25.16), we get (0) ky2 − x∗ k



"R

1 (0) 0 w((1 − θ)kx − x∗ k)dθ 1 − w0 (kx(0) − x∗ k)

R1

3 (w0 (kx(0) − x∗ k) + w0(ky(0) − x∗ k))λ(kx(0) − x∗ k) 8 (1 − w0 (kx(0) − x∗ k))2 # R 3 λ(kx(0) − x∗ k) 01 w1 (θkx(0) − x∗ k)dθ + kx(0) − x∗ k 8 1 − w0 (kx(0) − x∗ k)

+



0

w1 (θkx(0) − x∗ k)dθ

g2 (kx(0) − x∗ k)kx(0) − x∗ k ≤ kx(0) − x∗ k < ρ

(25.18)

(0)

so y2 ∈ U(x∗ , ρ). Then, by the third sub-step of method (25.2) for n = 0, we can write (0)

(0)

(0)

(0)

y3 − x∗ = (y2 − x∗ − F 0 (y2 )−1 F(y2 ) (0)

(0)

+(F‘(y2 )−1 − F 0 (x(0) )−1 )F(y2 ) (0)

−MF 0 (x(0) )F(y2 ),

(25.19)

where 5 1 M = 2I − F 0 (x(0))−1 F(y(0)) + (F 0 (x(0))−1 F(y(0)))2 2 2 1 0 (0) −1 (0) [4I − 5F (x ) F(y ) + (F 0 (x(0))−1 F(y(0)))2 ] = 2 1 0 (0) −1 [(F (x ) F(y(0)) − I)2 − 3(F 0 (x(0))−1 F(y(0)) − I)] = 2 1 0 (0) −1 = [(F (x ) F(y(0)) − F 0 (x(0)))2 − 3(F 0 (x(0) )−1 (F(y(0)) − F 0 (x(0))], 2 so

 !2 1  w0 (ky(0) − x∗ k) + w0 (kx(0) − x∗ k) kMk ≤ 2 1 − w0 (kx(0) − x∗ k) # w0 (ky(0) − x∗ k) + w0 (kx(0) − x∗ k) +3 1 − w0 (kx(0) − x∗ k) ≤ µ(kx(0) − x∗ k).

Then, returning back to (25.19), we get in turn that "R (0) 1 (0) 0 w((1 − θ)ky2 − x∗ k)dθ ky3 − x∗ k ≤ (0) 1 − w0 (ky2 − x∗ k) (0)

+

(w0 (ky2 − x∗ k) + w0 (kx(0) − x∗ k))

(25.20)

R1 0

(0)

w1 (θky2 − x∗ k)dθ (0)

(1 − w0 (kx(0) − x∗ k))(1 − w0 (ky2 − x∗ k)) # R (0) µ(kx(0) − x∗ k) 01 w1 (θky2 − x∗ k)dθ (0) + ky2 − x∗ k 1 − w0 (kx(0) − x∗ k)

≤ g3 (kx(0) − x∗ k)kx(0) − x∗ k ≤ kx(0) − x∗ k < ρ.

(25.21)

m−Step Methods

203

Similarly, we get for i = 4, . . ., m (0)

(0)

kyi − x∗ k ≤ g3 (kx(0) − x∗ k)kyi−1 − x∗ k .. . (0)

≤ g3 (kx(0) − x∗ k)g3 (kx(0) − x∗ k)ky2 − x∗ k

≤ ϕi (kx(0) − x∗ k)kx(0) − x∗ k ≤ kx(0) − x∗ k < ρ, (0)

so yi

(25.22)

∈ U(x∗ , ρ). In particular, for i = m kx(1) − x∗ k ≤ ϕm (kx(0) − x∗ k)kx(0) − x∗ k ≤ ckx(0) − x∗ k,

(25.23) (0)

(0)

where c = ϕm (kx(0) − x∗ k) ∈ [0, 1). By repeating these calculations with y1 , y2 , . . ., x(1), (k) (k) replaced by y1 , y2 , . . .x(k+1), we have kx(n+1) − x∗ k ≤ ckx(n) − x∗ k < ρ,

(25.24)

so lim x(n) = x∗ and x(n+1) ∈ U(x∗ , ρ). Let q ∈ D1 with F(q) = 0. Set T = n−→∞

Z 1

θ(x∗ − q))dθ. Then, by (Γ1 ), (Γ2 ) and (Γ5 ), we obtain 0

−1

0

kF (x∗ ) (T − F (x∗ ))k ≤

Z 1 0

w0 (θkx∗ − qk)dθ ≤

Z 1 0

F 0 (q +

0

w0 (θρ∗ )dθ < 1,

so x∗ = q, since T −1 exists and 0 = F(x∗ ) − F(q) = T (x∗ − q). Remark 35.

1. By (Γ2 ), and the estimate

kF 0 (x∗ )−1 F 0 (x)k = kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ )) + Ik

≤ 1 + kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ 1 + w0 (kx − x∗ k)

second condition in (Γ3 ) can be dropped, and w1 be defined as w1 (t) = 1 + w0 (t). Notice that, if w1 (t) < 1 + w0 (t), then R1 can be larger (see Example 3.1). 2. The results obtained here can be used for operators G satisfying autonomous differential equations [2]-[8] of the form F 0 (x) = T (F(x)) where T is a continuous operator. Then, since F 0 (x∗ ) = T (F(x∗ )) = T (0), we can apply the results without actually knowing x∗ . For example, let F(x) = ex − 1. Then, we can choose: T (x) = x + 1.

204

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

3. The local results obtained here can be used for projection algorithms such as the Arnoldi’s algorithm, the generalized minimum residual algorithm (GMRES), the generalized conjugate algorithm (GCR) for combined Newton/finite projection algorithms, and in connection to the mesh independence principle can be used to develop the cheapest and most efficient mesh refinement strategies [1,17,20,23]. 4. Let w0 (t) = L0 t, and w(t) = Lt. The parameter rA = the convergence radius of Newton’s method

2 was shown by us to be 2L0 + L

xn+1 = xn − F 0 (xn )−1 F(xn ) for each n = 0, 1, 2, · · ·

(25.25)

under the conditions (Γ1 )-(Γ3 ) (w1 is not used). It follows that the convergence radius R of algorithm (25.2) cannot be larger than the convergence radius rA of the second order Newton’s algorithm (25.25). As already noted in [1,17,20,23] rA is at least as large as the convergence ball given by Rheinboldt [21] rT R =

2 , 3L1

where L1 is the Lipschitz constant on Ω, L0 ≤ L1 and L ≤ L1 . In particular, for L0 < L1 or L < L1 , we have that rT R < rA and

rT R 1 L0 → as → 0. rA 3 L1 That is our convergence ball rA is at most three times larger than Rheinboldt’s. The same value for rT R was given by Traub [24].

5. It is worth noticing that solver (25.2) is not changing, when we use the conditions (Γ) of Theorem 45 instead of the stronger conditions used in [1,17,20,23]. Moreover, we can compute the computational order of convergence (COC) defined by     kxn+1 − x∗ k kxn − x∗ k ξ = ln / ln kxn − x∗ k kxn−1 − x∗ k or the approximate computational order of convergence     kxn+1 − xn k kxn − xn−1 k ξ1 = ln / ln . kxn − xn−1 k kxn−1 − xn−2 k This way, we obtain in practice the order of convergence in a way that avoids the existence of the seventh Fr´echet derivative for operator F.

3.

Numerical Examples

Example 27. Let B1 = B2 = Ω = R. Define F(x) = sinx. Then, we get that x∗ = 0, ω0 (s) = ω(s) = s and ω1 (s) = 1. Then, we have

m−Step Methods

205

Table 25.1. Radius for Example 3.1 Radius ρ1 ρ2 ρ3

ω1 (s) = 1 0.4444 0.19211 0.176071

ω1 (s) = 1 + ω0 (s) 0.04 0.0183391 0.0168098

Example 28. Let X = Y = C[0, 1], the space of continuous functions defined on [0, 1] with the max norm. Let Ω = U(0, 1). Define function F on Ω by F(ϕ)(x) = ϕ(x) − 5

Z 1

xθϕ(θ)3 dθ.

(25.26)

0

We have that 0

F (ϕ(λ))(x) = λ(x) − 15

Z 1 0

Then, we get that x∗ = 0, ω0 (s) = ω1 (s) =

xθϕ(θ)2 λ(θ)dθ, for each λ ∈ Ω. 15 s, ω1 (s) = 2. This way, we have that 2

Table 25.2. Radius for Example 3.2 Radius ρ1 ρ2 ρ3

15 s, 2 0.0296296 0.0165949 0.014448

ω1 (s) =

ω1 (s) = 1 + ω0 (s) 0.05333 0.0244521 no solution

Example 29. Let X = Y = R3 , Ω = U(0, 1), x∗ = (0, 0, 0)T , and define F on Ω by F(x) = F(u1 , u2 , u3 ) = (eu1 − 1,

e−1 2 u2 + u2 , u3 )T . 2

(25.27)

For the points u = (u1 , u2 , u3 )T , the Fr´echet derivative is given by  u1  e 0 0 F 0 (u) =  0 (e − 1)u2 + 1 0  . 0 0 1

Using the norm of the maximum of the rows and since G0 (x∗ ) = diag(1, 1, 1), we get by 1 1 conditions (A) ω0 (s) = (e − 1)s, ω(s) = e e−1 s, and ω1 (s) = e e−1 . Then, we have Example 30. Returning back to the motivational example at the introduction of this chapter, we have ω0 (s) = ω(s) = 96.662907s, ω1 (s) = 1.0631. Then, we have

206

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al. Table 25.3. Radius for Example 3.3 Radius ρ1 ρ2 ρ3

1

ω1 (s) = e e−1 0.154407 0.0783152 0.0687962

ω1 (s) = 1 + ω0 (s) 0.229929 0.106345 0.0976131

Table 25.4. Radius for Example 3.4 Radius ρ1 ρ2 ρ3

4.

ω1 (s) = 1.0631 0.00445282 0.00192451 0.00175638

ω1 (s) = 1 + ω0 (s) 0.00413809 0.00189722 no solution

Conclusion

We extend the local convergence of an m-step method (m a natural number) for solving Banach space valued equations using only the first derivative in contrast to earlier works on the finite Euclidean space using higher than one derivative not appearing in these methods.

References [1] Abbasbandy S., Bakhtiari P., Cordero A., Torregrosa J. R., Lofti T., New efficient methods for solving equations with arbitrary even order, Appl. Math. Comput., 287288, (2016), 94-103. [2] Argyros I. K., Computational theory of iterative solvers. Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Co. New York, U.S.A, 2007. [3] Argyros I. K., Hilout S., Weaker conditions for the convergence of Newton’s method. J. Complexity 28 (2012) 364–387. [4] Argyros I. K., Magr´en˜ an A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018. [5] Argyros I. K., Magr´en˜ an A. A., Iterative methods and their dynamics with applications, CRC Press, New York, USA, 2017. [6] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007).

m−Step Methods

207

[7] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020. [8] Argyros I. K., George S., On the complexity of extending the convergence region for Traub’s method, Journal of Complexity 56 (2020) 101423, https://doi.org/10.1016/j.jco.2019.101423. [9] Babajee D. K. R., Dauhoo M. Z., Darvishi M. T., Karami A., Barati A., Analysis of two Chebyshev-like third order methods free from second derivatives for solving systems of nonlinear equations, J. Comput. Appl. Math. 233 (8) (2010) 2002-2012. [10] Babajee D. K. R., Cordero A., Soleymani F., Torregrosa J. R., On a novel fourth-order algorithm for solving systems of nonlinear equations, J. Appl. Math. 201212 pages. Article ID 165452. [11] Cordero A., Hueso J. L., Martinez E., Torregrosa J. R., A modified Newton-Jarratt’s composition, Numer. Algorithm 55 (2010) 87-99. [12] Cordero A., Torregrosa J. R., Variants of newton’s method using fifth-order quadrature formulas, Appl. Math. Comput. 190 (2007) 686-698. [13] Cordero A., Torregrosa J. R., Vassileva M. P., Increasing the order of convergence of iterative schemes for solving nonlinear systems, J. Comput. Appl. Math. 252 (2012) 86-94. [14] Esmaeili H., Ahmadi M., An efficient three-step method to solve system of nonlinear equations, Appl. Math. Comput. 266 (2015) 1093-1101. [15] Jarratt P., Some fourth order multipoint iterative methods for solving equations, Math. Comput. 20 (1966) 434-437. [16] Khan W. A., Noor K. I., Bhatti K., Ansari F., A new fourth order Newton-type methods for solution of system of nonlinear equations, Appl. Math. Comput. 270 (2015) 724-730. [17] Lotfi T., Bakhtiari P., Cordero A., Mahdiani K., Torregrosa J. R., Some new efficient multipoint iterative methods for solving nonlinear systems of equations, Int. J. Comput. Math. 92 (9) (2015) 1921-1934. [18] Magre˜na´ n A. A., Cordero A., Guti´errez J. M., Torregrosa J. R., Real qualitative behavior of a fourth-order family of iterative methods by using the convergence plane, Mathematics and Computers in Simulation, 105:49-61, 2014. [19] Magre˜na´ n A. A., Argyros I. K., Two-step Newton methods. Journal of Complexity, 30(4):533-553, 2014. [20] Rostamy B., Bakhtiari P., New efficient multipoint iterative method for solving nonlinear systems, Appl. Math. Comput. 266 (2015) 350-356.

208

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[21] Rheinboldt W. C., An adaptive continuation process for solving systems of nonlinear equations, In Mathematical models and numerical solvers (Tikhonov A. N. et al. eds.) pub.3, (1977), 129-142 Banach Center, Warsaw Poland. [22] Sharma J. R., Guha R. K., Sharma R., An efficient fourth order weighted-newton method for systems of nonlinear equations, Numer. Algorithm 62 (2013)307-323. [23] Soleymani F., Lotfi T., Bakhtiari P., A multi-step class of iterative methods for nonlinear systems, Optim. Lett. 8 (3) (2014) 1001-1015 . [24] Traub J. F., Iterative solvers for the solution of equations, AMS Chelsea Publishing, 1982.

Chapter 26

Third Order Schemes for Solving Equations 1.

Introduction

In this chapter, we compare three third order schemes, which produces sequences approaching a solution x∗ of equation F(x) = 0, (26.1) where F : Ω ⊂ X −→ Y with X,Y denoting Banach spaces, and Ω a nonempty, open convex set. We assume through out that the operator Fis continuously differentiable according to Fr´echet. The scheme, we are interested in are: yn = xn − F 0 (xn )−1 F(xn )

xn+1 = yn − F 0 (xn )−1 F(yn ),

yn = xn − 2F 0 (xn )−1 F(xn )

xn+1 = xn − 2(F 0 (xn ) + F 0 (yn ))−1 F(xn ),

(26.2)

(26.3)

and 1 yn = xn − F 0 (xn )−1 F(xn ) 2 xn+1 = xn − F 0 (yn )−1 F(xn ).

(26.4)

In particular, in [1] the dynamics and order were given when X = Y = R or C. The convergence analysis of these schemes used assumptions on the derivatives of F up to the order four. But these derivatives do not appear on these methods and also limit their applicability. 1 3 For example: Let X = Y = R, D = [− , ]. Define f on D by 2 2  3 s log s2 + s5 − s4 i f s 6= 0 f (s) = 0 i f s = 0.

.

210

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Then, we have x∗ = 1, and f 0 (s) = 3s2 log s2 + 5s4 − 4s3 + 2s2 , f 00 (s) = 6x logs2 + 20s3 − 12s2 + 10s, f 000 (s) = 6 logs2 + 60s2 − 24s + 22.

Obviously f 000 (s) is not bounded on D. So, the convergence of schemes (26.2), (26.3) and (26.4) are not guaranteed. Other concerns include the facts that no computable estimates on kxn − x∗ k or uniqueness of x∗ results are given. Hence, there is a need to address these matters using conditions on F 0 , which only appears on these schemes, and compare these schemes. The idea presented is general enough to be utilized for the extension of other schemes [2]-[22].

2.

Ball Convergence

The development of parameters and real functions is convenient for our ball convergence. Let M = [0, ∞). Suppose that there exists function ϕ0 : M −→ M continuous and nondecreasing such that equation ϕ0 (s) − 1 = 0 (26.5) has a minimal positive solution ρ0 . Let M0 = [0, ρ0 ). Suppose there exits functions ϕ : M0 −→ M, ϕ1 : M0 −→ M continuous and nondecreasing such that for R1 ϕ((1 − θ)s)dθ g1 (s) = 0 1 − ϕ0 (s) the equation

ϕ0 (g1 (s)s) − 1 = 0

(26.6)

has a least solution ρ¯ 0 ∈ (0, ρ0 ). Let (ϕ0 (s) + ϕ0 (g1 (s)s)) 01 ϕ1 (θg1 (s)s)dθ g2 (s) = (g1 (g1 (s)) + )g1 (s), (1 − ϕ0 (t))(1 − ϕ0 (g1 (s)s)) R

g¯1 (s) = g1 (s) − 1, and g¯2 (s) = g2 (s) − 1. Suppose equations g¯1 (t) = 0, g¯ 2 (s) = 0 have least solutions ρ1 , ρ2 ∈ (0, ρ0 ), respectively. We shall show that ρ = min{ρ1 , ρ2 },

(26.7)

(26.8)

Third Order Schemes for Solving Equations

211

is a radius of convergence for scheme (26.2). These definitions imply 0 ≤ ϕ0 (s) < 1

(26.9)

0 ≤ g1 (s) < 1

(26.10)

0 ≤ g2 (s) < 1,

(26.11)

and ¯ ∗ , a) = {x ∈ X : kx − x∗ k ≤ for all s ∈ [0, ρ). Define B(x∗ , a) = {x ∈ X : kx − x∗ k < a}, B(x a}, a > 0. The conditions (A) shall be used: (A1) There exists a simple solution x∗ ∈ Ω of equation F(x) = 0. (A2) There exists a continuous and nondecreasing function ϕ0 : M −→ M such that for all x∈Ω kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ ϕ0 (kx − x∗ k). Set B0 = Ω ∩ B(x∗ , ρ0 ). (A3) There exists continuous and nondecreasing functions ϕ : M0 −→ M, ϕ1 : M0 −→ M, such that for each x, y ∈ B0 kF 0 (x∗ )−1 (F 0 (y) − F 0 (x))k ≤ ϕ(ky − xk), and kF 0 (x∗ )−1 F 0 (x)k ≤ ϕ1 (kx − x∗ k), ¯ ∗ , γ) ⊆ Ω, γ > 0 to be determined. (A4) B(x (A5) There exists ρ∗ ≥ γ such that Z 1 0

ϕ0 (θρ∗ )dθ < 1.

¯ ∗ , ρ∗ ). Set B1 = Ω ∩ B(x Next, we present the local convergence analysis of the scheme (26.2) using the conditions (A) and the introduced notation. Theorem 46. Suppose the conditions (A) hold with γ = ρ. Then, sequence {xn } generated by scheme (26.2) is well defined in B(x∗ , ρ), remains in B(x∗ , ρ) for each n = 0, 1, 2, . . . and lim xn = x∗ , provided x0 ∈ U(x∗ , ρ) − {x∗ }. Moreover, x∗ is the only solution of equation n−→∞

F(x) = 0 in the set B1 given in (A5).

Proof. Let u ∈ B(x∗ , ρ) − {x∗ }. Using (26.8) and (A2), we get kF 0 (x∗ )−1 (F 0 (u) − F 0 (x∗ ))k ≤ ϕ0 (ku − x∗ k) ≤ ϕ0 (ρ) < 1, which together with Banach lemma on invertible operators [3,18] imply F 0 (u) is invertible, kF 0 (u)−1 F 0 (x∗ )k ≤

1 1 − ϕ0 (ku − x∗ k)

(26.12)

212

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

and y0 , x1 exist by scheme (26.2), if n = 0. Then, we have by the first substep of scheme (26.2) (for n = 0), (26.8), (26.9) and (26.12) for u = x0 , and e0 = kx0 − x∗ k ky0 − x∗ k ≤ kx0 − x∗ − F 0 (x0 )−1 F(x0 )k ≤ kF 0 (x0 )−1 F 0 (x∗ )k ×k

Z 1 0

F 0 (x∗ )−1 (F 0 (x∗ + θ(x0 − x∗ )) − F 0 (x0 ))(x0 − x∗ )dθk

R1

ϕ((1 − θ)e0 )dθe0 1 − ϕ0 (e0 ) ≤ g1 (e0 )e0 ≤ e0 < ρ, [



0

(26.13)

showing y0 ∈ B(x∗ , ρ). Moreover, by the second sub-step of scheme (26.2) (for n = 0), (26.8), (26.10), (A3), (26.120 (for x0 = u) and (26.13) that e1 ≤ ky0 − x∗ − F 0 (y0 )−1 F(y0 ))

+F 0 (y0 )−1 (F 0 (x0 ) − F 0 (y0 ))F 0 (x0 )−1 F(y0 ))k " # R (ϕ0 (ky0 − x∗ k) + ϕ0 (e0 )) 01 ϕ1 (θky0 − x∗ k)dθ ≤ g1 (ky0 − x∗ k) + ky0 − x∗ k (1 − ϕ0 (ky0 − x∗ k))(1 − ϕ0 (e0 )) ≤ g2 (e0 )e0 ≤ e0 ,

(26.14)

so x1 ∈ B(x∗ , ρ). By simply replacing x0 , y0 , x1 by xk , yk , xk+1 , we get kyk − x∗ k ≤ g1 (ek )ek and ek+1 ≤ g2 (ek )ek ≤ cek < ρ (26.15) where c = g2 (e0 ) ∈ [0, 1) leading to lim xk = x∗ , and xk+1 ∈ B(x∗ , ρ). Suppose b ∈ B1 is such that F(b) = 0 and Q =

Z 1 0

k−→∞

0

F (x∗ + θ(b − x∗ ))dθ. Then, by (A2) and (A3), we have

kF (x∗ ) (Q − F (x∗ ))k ≤

Z 1

ϕ0 (θkx∗ − bk)dθ



Z 1

ϕ0 (θρ∗ )dθ < 1,

0

−1

0

0

0

(26.16)

so x∗ = b by the invertibility of Q and the identity 0 = F(b) − F(x∗ ) = Q(b − x∗ ). Remark 36.

1. By (A2), and the estimate

kF 0 (x∗ )−1 F 0 (x)k = kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ )) + Ik

≤ 1 + kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ 1 + w0 (kx − x∗ k)

second condition in (A3) can be dropped, and w1 be defined as w1 (t) = 1 + w0 (t). Notice that, if w1 (t) < 1 + w0 (t), then ρ can be larger (see Example 3.1).

Third Order Schemes for Solving Equations

213

2. The results obtained here can be used for operators G satisfying autonomous differential equations [3]-[8] of the form F 0 (x) = T (F(x)) where T is a continuous operator. Then, since F 0 (x∗ ) = T (F(x∗ )) = T (0), we can apply the results without actually knowing x∗ . For example, let F(x) = ex − 1. Then, we can choose: T (x) = x + 1. 3. The local results obtained here can be used for projection schemes such as the Arnoldi’s algorithm, the generalized minimum residual algorithm (GMRES), the generalized conjugate algorithm (GCR) for combined Newton/finite projection schemes, and in connection to the mesh independence principle can be used to develop the cheapest and most efficient mesh refinement strategies [3]-[8]. 4. Let ϕ0 (t) = L0 t, and ϕ(t) = Lt. The parameter rA = the convergence radius of Newton’s scheme [2]

2 was shown by us to be 2L0 + L

xn+1 = xn − F 0 (xn )−1 F(xn ) for each n = 0, 1, 2, · · ·

(26.17)

under the conditions (A1)-(A3) (ϕ1 is not used). It follows that the convergence radius R of scheme (26.2) cannot be larger than the convergence radius rA of the second order Newton’s scheme (26.17). As already noted in [3] rA is at least as large as the convergence ball given by Rheinboldt [18] rT R =

2 , 3L1

where L1 is the Lipschitz constant on Ω, L0 ≤ L1 and L ≤ L1 . In particular, for L0 < L1 or L < L1 , we have that rT R < rA and

rT R 1 L0 → as → 0. rA 3 L1 That is our convergence ball rA is at most three times larger than Rheinboldt’s. The same value for rT R was given by Traub [20].

5. It is worth noticing that solver (26.2) is not changing, when we use the conditions (A) of Theorem 46 instead of the stronger conditions used in [1,14]. Moreover, we can compute the computational order of convergence (COC) defined by     kxn+1 − x∗ k kxn − x∗ k a = ln / ln kxn − x∗ k kxn−1 − x∗ k or the approximate computational order of convergence     kxn+1 − xn k kxn − xn−1 k b = ln / ln . kxn − xn−1 k kxn−1 − xn−2 k This way, we obtain in practice the order of convergence in a way that avoids the existence of the fourth Fr´echet derivative for operator F.

214

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Next, we study the local convergence of scheme (26.3) in an analogous way. This 1 time we define functions p, g3 , g¯3 on [0, d), where d = min{ρ0 , ρ p } by p(t) = (ϕ0 (t) + 2 ϕ0 (g1 (t)t)), R (ϕ0 (t) + ϕ0 (g1 (t)t)) 01 ϕ1 (θt)dθ , g3 (t) = g1 (t) + 2(1 − ϕ0 (t))(1 − p(t)) g¯3 (t) = g3 (t) − 1,

and equations p(t) − 1 = 0, g¯3 (t) = 0 has minimal solutions ρ p , ρ3 in (0, ρ0 ), (0, d), respectively. We shall show r = min{ρ1 , ρ3 }

(26.18)

is a radius of convergence for scheme (26.3). Functions p and g3 are motivated by the estimates k(2F 0 (x∗ ))−1(F 0 (x0 ) − F 0 (x∗ ) + F 0 (y0 ) − F 0 (x∗ ))k 1 ≤ (ϕ0 (e0 ) + ϕ0 (ky0 − x∗ k)) 2 1 ≤ (ϕ0 (e0 ) + ϕ0 (g1 (e0 ))e0 ) 2 = p(e0 ) ≤ p(r) < 1, so k(F 0 (x0 ) + F 0 (y0 ))−1 F 0 (x∗ )k ≤ and

1 2(1 − p(e0 ))

e1 = kx0 − x∗ − F 0 (x0 )−1 F(x0 ) + (F 0 (x0 )−1 − 2(F 0 (x0 ) + F 0 (y0 ))−1)F(x0 )k = kx0 − x∗ − F 0 (x0 )−1 F(x0 ) + F 0 (x0 )−1 (F 0 (y0 ) − F 0 (x0 ))(F 0 (x0 ) + F 0 (y0 ))−1 F(x0 )k (ϕ0 (e0 ) + ϕ0 (ky0 − x∗ k)) 01 ϕ1 (θe0 )dθ ≤ (g1 (e0 ) + ]e0 2(1 − ϕ0 (e0 ))(1 − p(e0 )) ≤ g3 (e0 )e0 ≤ e0 R

Hence, we arrive at: Theorem 47. Suppose conditions (A) hold with γ = r. Then, the conclusions of Theorem 46 hold but for scheme (26.3). Then, for the study of scheme (26.4) define functions g4 , g¯4 , g5 and g¯5 on [0, ρ¯ 0 ) by g4 (t) =

R1 0

R1

ϕ((1 − θ)t)dθ + 12 1 − ϕ0 (t) g¯4 = g4 (t) − 1,

0

ϕ1 (θt)dθ

,

Third Order Schemes for Solving Equations

215

(ϕ0 (t) + ϕ0 (g1 (t)t)) 01 ϕ1 (θt)dθ , g5 (t) = g1 (t) + (1 − ϕ0 (t))(1 − ϕ0(g1 (t)t)) R

g¯5 (t) = g5 (t) − 1

and suppose equations g¯4 (t) = 0, g¯5 (t) = 0 have least solutions R1 , R2 , respectively in (0, ρ¯ 0 ). We shall show that R = min{R1 , R2 }

(26.19)

is a radius of convergence for scheme (26.4). The motivation for the definition of these functions is given by the estimates 1 ky0 − x∗ k = k(x0 − x∗ − F 0 (x0 )−1 F(x0 )) + F 0 (x0 )−1 F(x0 )k 2 R1 1R1 ( 0 ϕ((1 − θ)e0 )dθ + 2 0 ϕ1 (θe0 )dθ ≤ e0 1 − ϕ0 (e0 ) ≤ g4 (e0 )e0 ≤ e0 < R and e1 = kx0 − x∗ − F 0 (x0 )−1 F(x0 ) + F 0 (x0 )−1 (F 0 (y0 ) − F 0 (x0 ))F 0 (y0 )−1 F(x0 )k (ϕ0 (e0 ) + ϕ0 (ky0 − x∗ k)) 01 ϕ1 (θe0 )dθ ≤ (g1 (e0 ) + e0 (1 − ϕ0 (e0 ))(1 − ϕ0 (ky0 − x∗ k)) ≤ g4 (e0 )e0 ≤ e0 . R

Hence, we arrive at: Theorem 48. Suppose conditions (A) hold with γ = R. Then, the conclusions of Theorem 46 hold but for scheme (26.4).

3.

Numerical Examples

We test the conditions (A). Example 31. Let B1 = B2 = Ω = R. Define F(x) = sinx. Then, we get that x∗ = 0, ϕ0 (s) = ϕ(s) = s and ϕ1 (s) = 1. Then, we have Table 26.1. Radius for Example 3.1 Radius ρ r R

ϕ1 (s) = 1 0.492932 0.410998 0.33333

ϕ1 (s) = 1 + ϕ0 (s) 0.484024 0.390953 0.325969

216

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Example 32. Let B1 = B2 = C[0, 1], the space of continuous functions defined on [0, 1] with the max norm. Let Ω = U(0, 1). Define function F on Ω by F(ϕ)(x) = ϕ(x) − 5

Z 1

xθϕ(θ)3 dθ.

(26.20)

0

We have that F 0 (ϕ(ξ))(x) = ξ(x) − 15

Z 1 0

Then, we get that x∗ = 0, ϕ0 (s) = ϕ1 (s) =

xθϕ(θ)2 ξ(θ)dθ, for each ξ ∈ D. 15 s, ϕ1 (s) = 2. This way, we have that 2

Table 26.2. Radius for Example 3.2 Radius ρ r R

ϕ1 (s) = 2 0.0576916 0.0435799 0.01444

ϕ1 (s) = 1 + ϕ0 (s) 0.0572113 0.0520337 0.0380952

Example 33. Let B1 = B2 = R3 , Ω = U(0, 1), x∗ = (0, 0, 0)T , and define F on Ω by F(x) = F(u1 , u2 , u3 ) = (eu1 − 1,

e−1 2 u2 + u2 , u3 )T . 2

(26.21)

For the points u = (u1 , u2 , u3 )T , the Fr´echet derivative is given by  u1  e 0 0 F 0 (u) =  0 (e − 1)u2 + 1 0  . 0 0 1

Using the norm of the maximum of the rows and since G0 (x∗ ) = diag(1, 1, 1), we get by 1 1 conditions (A) ϕ0 (s) = (e − 1)s, ϕ(s) = e e−1 s, and ϕ1 (s) = e e−1 . Then, we have Table 26.3. Radius for Example 3.3 Radius ρ r R

1

ϕ1 (s) = e e−1 0.254177 0.196552 0.0402645

ϕ1 (s) = 1 + ϕ0 (s) 0.277999 0.224974 0.016331

Example 34. Returning back to the motivational example at the introduction of this chapter, we have ϕ0 (s) = ϕ(s) = 96.662907s, ϕ1 (s) = 1.0631. Then, we have

Third Order Schemes for Solving Equations

217

Table 26.4. Radius for Example 3.4 Radius ρ r R

4.

1

ϕ1 (s) = e e−1 0.00504728 0.00417909 0.00328082

ϕ1 (s) = 1 + ϕ0 (s) 0.00500734 0.00403725 0.00295578

Conclusion

A comparison is given for three third order schemes for solving Banach space valued equations using conditions on the first derivative and the same set of conditions. This technique extends to others using the fourth derivative not on these schemes.

References [1] Amat S., Busquier S., Plaza S., Dynamics of a family of third order iterative methods that do not require using the second derivative, Appl. Math. Comput., 154, (2004), 735-746. [2] Amat S., Argyros I. K., Busquier S., Hern´andez M. A., On two high-order families of frozen Newton-type methods, Numer. Linear. Algebra Appl., 25, (2018), e2126, 1–13. [3] Argyros I. K., Computational theory of iterative methods. Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Co. New York, U.S.A, 2007. [4] Argyros I. K., Hilout S., Weaker conditions for the convergence of Newton’s method. J. Complexity 28 (2012) 364–387. [5] Argyros I. K., Magr´en˜ an A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018. [6] Argyros I. K., Magr´en˜ an A. A., Iterative methods and their dynamics with applications, CRC Press, New York, USA, 2017. [7] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020. [8] Argyros I. K., George S., On the complexity of extending the convergence region for Traub’s method, Journal of Complexity 56 (2020) 101423, https://doi.org/10.1016/j.jco.2019.101423. [9] Cordero A., Hueso J. L., Mart´ınez E., Torregrosa J. R., Increasing the convergence order of an iterative method for nonlinear systems, Appl. Math. Lett. 25 (2012)23692374.

218

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[10] Chun C., Neta B., Developing high order methods for the solution of systems of nonlinear equations, Appl. Math. Comput., 342, (2019), 178-190. [11] Darvishi M. T., Barati A., A fourth-order method from quadrature formulae to solve systems of nonlinear equations, Appl. Math. Comput. 188 (2007) 257-261. [12] Esmaeili H., Ahmad M., An efficient three step method to solve system of nonlinear equations, Appl. Math. Comput., 266 (2015), 1093-1101. [13] Ezquerro J. A., Hern´andez M. A., Romero N., Velasco A. I., On Steffensen’s method on Banach spaces, J. Comput. Appl. Math. 249 (2013) 9-23. [14] Homeier H. H. H., A modified Newton method with cubic convergence: the multivariable case, J. Comput. Appl. Math. 169 (2004) 161-169. [15] Hueso J. L., Martinez E., Teruel C., Convergence, efficiency and dynamics of new fourth and sixth order families of iterative methods for nonlinear systems, Comput. Appl. Math. 275 (2015) 412-420. [16] Neta B., A sixth-order family of methods for nonlinear equations, Int. J. Comput. Math. 7 (1979) 157-161. Cordero A., Hueso J. L., Martinez E. [17] Potra F. A, Ptak V., Nondiscrete induction and iterative processes, Research Notes in Mathematics, 103, Pitman Boston, M.A, 1984. [18] Rheinboldt W. C., An adaptive continuation process for solving systems of nonlinear equations, In Mathematical models and numerical solvers (Tikhonov A. N. et al. eds.) pub.3, (1977), 129-142 Banach Center, Warsaw Poland. [19] Sharma J. R., Gupta P., An efficient fifth order method for solving systems of nonlinear equations, Comput. Math. Appl. 67 (2014) 591-601. [20] Traub J. F., Iterative solvers for the solution of equations, AMS Chelsea Publishing, 1982. [21] Wang X., Kou J., Li Y., Modified Jarratt method with sixth-order convergence, Appl. Math. Lett. 22 (2009) 1798-1802. [22] Weerakoon S., Fernando T. G. I., A variant of Newton’s method with accelerated third-order convergence, Appl. Math. Lett. 13 (8) (2000) 87-93.

Chapter 27

Deformed Newton Method for Solving Equations 1.

Introduction

In this chapter, we are concerned with the problem of locating a solution x? of the nonlinear equation F(x) = 0, (27.1) where F is an operator defined on a non-empty, open convex subset D of a Banach space X with values in a Banach space Y. Many problems in Computational Sciences and other disciplines can be brought in a form like (27.1) using mathematical modeling [4]. The solutions of such equations can be rarely found in closed form. That is why most solution methods for these equations are usually iterative. If F is a differentiable operator, Newton’s method is the most used iterative method to solve (27.1), which is given by [1,2,4,5] xn+1 = xn − F 0 (xn )−1 F(xn ),

n ≥ 0, x0 ∈ D.

(27.2)

It converges quadratically to a solution of Eq. (27.1), if the initial guess is close enough to the solution. Although the Newton method is self correcting, that is to say, xn+1 depends only on F and xn , the rounding error of the previous iteration will not transmit step by step. However, if one of the iterative points in the middle is wrong, it may increase the number of iterations and even do not converge. In addition, for the larger vibration function, the two iteration points may be far apart before and after the use of the Newton method, which makes the iteration oscillate around the solution. To avoid the shortcomings, Guo Xue-Ping and Feng jing [3] proposed a deformed Newton’s method given by   x = x0 − F 0 (x0 )−1 F(x0 ),   n xn + xn−1 (27.3) , yn =  2   xn+1 = yn − F 0 (yn )−1 F(yn ), n = 1, 2, · · · , x0 ∈ D.

When the iteration is far away from the convergent orbit, the iteration will be pulled back by using method (27.3). A semi-local convergence theorem was established in [3] by using a

220

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

majorizing sequence technique. The conditions used in the main theorem of method (27.3) in [3] can be given by  0 −1  kF (x0 ) F(x0 )k ≤ β, (27.4) kF 0 (x0 )−1 (F 0 (x) − F 0 (y))k ≤ L1 kx − yk, for any x, y ∈ D,   β L1 ≤ 1/2.

Note that Condition (27.4) is the same as the one used in the Kantorovich theorem with affine invariance for Newton’s method (27.2) [1]. In this chapter, a new local convergence theorem for method (27.3) is provided, and 2 at least if the Fr´echet the radius of convergence ball of method (27.3) is proven to be 3 K1 derivative satisfies kF 0 (x? )−1 (F 0 (x) − F 0 (y))k ≤ K1 kx − yk, for any x, y ∈ D. Moreover, the radius of convergence ball of method (27.3) is proven to be if F 0 satisfies (27.5) and the center Lipschitz condition

(27.5)

2 at least K1 + 2 K?

kF 0 (x? )−1 (F 0 (x) − F 0 (x? ))k ≤ K? kx − yk, for any x, y ∈ D

(27.6)

simultaneously. The semi-local convergence is also extended by using tighter Lipschitz constants lending to an extended convergence domain and more precise error bounds on the distances kxn − x? k, kxn+1 − xn k and kyn − xn k. The advantages are obtained under the same computational effort since the new Lipschitz constants are special cases of the old Lipschitz constants. At least, some numerical examples are given to test the theoretical analysis.

2.

Local Convergence of Method (27.3)

Let x ∈ X and r > 0. Denote B(x, r) = {y ∈ X : ky − xk < r} and B(x, r) = {y ∈ X : ky − xk ≤ r}. We have: Theorem 49. Let F : D ⊆ X → Y be a Fr´echet-differentiable operator. Suppose: There exists x? ∈ D such that F(x? ) = 0 and F 0 (x? ) is invertible; There exists K? > 0 such that Condition (27.6) holds; 1 Set Do = D ∩ B(x? , ). K? There exists K = K(K? ) such that for each x, y ∈ D0 kF 0 (x? )−1 (F 0 (x) − F 0 (y))k ≤ K kx − yk and B(x? , ρ) ⊆ D, where

(27.7)

2 . (27.8) K + 2 K? Then, sequence {xn } generated by method (27.3) is well defined, remains in B(x? , ρ) and converges to x? provided that x0 ∈ B(x? , ρ). Moreover, the following estimates hold kx? − xn+1 k  kx? − yn k 2 ≤ , n = 1, 2, · · · , (27.9) ρ ρ ρ=

Deformed Newton Method for Solving Equations and

kx? − xn+1 k  kx? − x0 k 2 2 ≤ , ρ ρ

221

n +1

n = 1, 2, · · · ,

(27.10)

where, bdc denotes the biggest integer which is not bigger than d. Proof. using the hypotheses, x0 ∈ B(x? , ρ), we have from (27.3) that kI − F 0 (x? )−1 F 0 (x0 )k = kF 0 (x? )−1 (F 0 (x0 ) − F 0 (x? ))k ≤ K? kx? − x0 k 2 K? < K? ρ = < 1. K + 2 K?

(27.11)

It is follows from (27.11) and Banach lemma on invertible operators [4] that F 0 (x0 ) is invertible, and 1 . (27.12) kF 0 (x0 )−1 F 0 (x? )k ≤ 1 − K? kx? − x0 k Using (27.3), (27.7), (27.8) and (27.12), we obtain in turn that kx? − x1 k = kx? − x0 − F 0 (x0 )−1 (F(x? ) − F(x0 ))k

 = k − F 0 (x0 )−1 F(x? ) − F (x0 ) − F 0 (x0 ) (x? − x0 ) k

= k − F 0 (x0 )−1 F 0 (x? ) F 0 (x? )−1

Z 1 0

[F 0 (t x? + (1 − t) x0 ) − F 0 (x0 )] dt (x? − x0 )k

1 1 K kx? − x0 k2 ? 2 ≤ K t kx − x k dt = 0 1 − K? kx? − x0 k 0 2 (1 − K? kx? − x0 k) Kρ kx? − x0 k2 kx? − x0 k2 ≤ = ≤ kx? − x0 k < ρ, 2 − 2 K? ρ ρ ρ

Z

(27.13)

which shows that x1 is well defined, and x1 ∈ B(x? , ρ). In view of (27.3) and (27.13), we have  x1 + x0 1 k≤ kx? − x1 k + kx? − x0 k 2 2 ≤ kx? − x0 k < ρ.

kx? − y1 k = kx? −

(27.14)

Taking a similar analysis as (27.11), we get kI − F 0 (x? )−1 F 0 (y1 )k = kF 0 (x? )−1 (F 0 (y1 ) − F 0 (x? ))k ≤ K? kx? − y1 k 2 K? ≤ K? kx? − x0 k < K? ρ = < 1. K + 2 K?

(27.15)

It follows from (27.15) and the Banach lemma on invertible operators that F 0 (y1 ) is invertible, and 1 1 ≤ 1 − K? kx? − y1 k 1 − K? kx? − x0 k 1 K + 2 K? < = . 1 − K? ρ K

kF 0 (y1 )−1 F 0 (x? )k ≤

(27.16)

222

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Taking a similar analysis as (27.13), we obtain in turn that kx? − x2 k = kx? − y1 − F 0 (y1 )−1 (F(x? ) − F(y1 ))k = k − F 0 (y1 )−1 F 0 (x? ) F 0 (x? )−1

Z 1 0

[F 0 (t x? + (1 − t) y1 ) − F 0 (y1 )] dt (x? − y1 )k

1 K kx? − y1 k2 1 K t kx? − y1 k2 dt = ? 1 − K? kx − y1 k 0 2 (1 − K? kx? − y1 k) Kρ kx? − y1 k2 kx? − y1 k2 ≤ = 2 − 2 K? ρ ρ ρ ? 2 kx − x0 k ≤ ≤ kx? − x0 k < ρ, ρ

Z



(27.17)

which shows that x2 is well defined and x2 ∈ B(x? , ρ), (27.9) and (27.10) hold for n = 1. Generally, suppose k ≥ 2 is a fixed integer, xn are well defined, xn ∈ B(x? , ρ) for n = 1, 2, · · · , k, and both (27.9) and (27.10) hold for n = 1, 2, · · · , k − 1. By (27.3) and induction hypotheses, we have kx? − yk k = kx? − similarly as (27.15), we get

 1 xk + xk−1 k≤ kx? − xk k + kx? − xk−1 k < ρ. 2 2

kI − F 0 (x? )−1 F 0 (yk )k = kF 0 (x? )−1 (F 0 (yk ) − F 0 (x? ))k ≤ K? kx? − yk k 2 K? < K? ρ = < 1. K + 2 K?

(27.18)

(27.19)

It follows from (27.19) and the Banach lemma on invertible operators that F 0 (yk ) is invertible, and kF 0 (yk )−1 F 0 (x? )k ≤

1 1 K + 2 K? < = . ? 1 − K? kx − yk k 1 − K? ρ K

(27.20)

Similarly as (27.17), we have kx? − xk+1 k = kx? − yk − F 0 (yk )−1 (F(x? ) − F(yk ))k 0

= k − F (yk )

−1

0

?

0

? −1

F (x ) F (x )

Z 1 0

[F 0 (t x? + (1 − t) yk ) − F 0 (yk )] dt (x? − yk )k

1 1 K kx? − yk k2 ? 2 ≤ K t kx − y k dt = k 1 − K? kx? − yk k 0 2 (1 − K? kx? − yk k) Kρ kx? − yk k2 kx? − yk k2 ≤ = < ρ, 2 − 2 K? ρ ρ ρ

Z

(27.21)

which shows that xk+1 is well defined and xk+1 ∈ B(x? , ρ), and (27.9) holds for n = k.

Deformed Newton Method for Solving Equations

223

Moreover, in view of (27.3), (27.21) and induction hypotheses, we have kx? − xk+1 k  kx? − yk k 2  1 kx? − xk k 1 kx? − xk−1 k 2 ≤ ≤ + ρ ρ 2 ρ 2 ρ k−1 +1 b k−2 c+1 2  1  kx? − x k 2b 2 c 1  kx? − x0 k 2 2 0 ≤ + 2 ρ 2 ρ k+1 b bkc c  1  kx? − x k 2 2 1  kx? − x0 k 2 2 2 0 = + 2 ρ 2 ρ = Ak+1 .

(27.22)

The following analysis is carried out in two cases. Case 1: k is an even number, that is, there exists an integer j ≥ 1, such that k = 2 j. We have from the Definition of Ak+1 that  1  kx? − x k 2b 2k c 1  kx? − x k 2b k2 c 2  kx? − x k 2b 2k c+1 0 0 0 + = Ak+1 = 2 ρ 2 ρ ρ

(27.23)

Case 2: k is an odd number, that is, there exists an integer j ≥ 1, such that k = 2 j + 1. We have from the Definition of Ak+1 that  1  kx? − x k 2 j+1 1  kx? − x k 2 j 2 0 0 + 2 ρ 2 ρ  1  kx? − x k 2 j 1  kx? − x k 2 j 2 0 0 ≤ + 2 ρ 2 ρ  kx? − x k 2 j+1 0 = ρ  kx? − x k 2b 2k c+1 0 = . ρ

Ak+1 =

(27.24)

Combining the above cases, we have from (27.22) that kx? − xk+1 k  kx? − x0 k 2 ≤ ρ ρ

b 2k c+1

,

(27.25)

which means that (27.10) holds for n = k. Now, by induction, we have that sequence {xn } generated by method (27.3) is well defined, remains in B(x? , ρ), and estimates (27.9) and (27.10) hold for any positive integer n. Moreover, it is obvious from (27.10) that {xn } converges to x? . Concerning the uniqueness of the solution x? , we have: Proposition 11. Under the hypotheses of Theorem 49, further suppose that there exists ρ? ≥ ρ such that K? ρ? < 2 (27.26) Then, the limit point x? is the only solution of equation F(x) = 0 in D1 = D ∩ B(x? , ρ? ).

224

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Proof. Let y? ∈ D1 with F(y? ) = 0. Define operator Q by Z 1   Q= F 0 x? + θ (y? − x? ) dθ. Using (27.7), (27.6), we have in turn that 0

kF 0 (x? )−1 (Q − F 0 (x? ))k ≤

Z 1 0

≤ K?

  F 0 (x? )−1 F 0 (x? + θ (y? − x? )) − F 0 (x? )

Z 1 0

θ ky? − x? k =

K? ky? − x? k 2

K? ρ? < 1. ≤ 2 So, Q is invertible, and it follows from the identity 0 = F(y? ) − F(x? ) = Q(y? − x? )

(27.27)

(27.28)

that x? = y? . Remark 37. We have by (27.5) and (27.6) that K? ≤ K

and

K? can be arbitrarily small [1]. It also follows from (27.5) and (27.7) that K K ≤ K1

(27.29)

(27.30)

holds, since D0 ⊆ D (see also the numerical examples). It is worth noticing that the iterates of method (27.3) remain in D0 which is a more accurate location than D. Example 35. Let X = Y = R3 and D = B(0, 1). Define operator F on D by F(z) = (ez1 − 1,

e−1 2 z + z2 , z3 )T 2 2

(27.31)

for each z = (z1 , z2 , z3 )T ∈ D. Clearly, x? = (0, 0, 0)T . We have that the Fr´echet derivative of operator F is given by  z1  e 0 0 F 0 (z) =  0 (e − 1) z2 + 1 0 (27.32) 0 0 1 Then, using (27.5), (27.6) and (27.7), we obtain, K1 = e, K? = e − 1, D0 = B(0, 1

and K = e e−1 . It follows from (27.8) that ρ ≈ 0.382691912. If we use K = K1 , we get and if K = K1 = K? , then

ρ1 ≈ 0.324947231, ρ2 ≈ 0.245252961.

Notice that ρ2 < ρ1 < ρ.

1 ) e−1

Deformed Newton Method for Solving Equations

225

Example 36. Let X = Y = C[0, 1] stand for the space of continuous functions defined on the interval [0, 1]. Choose, D = B(0, 1). We shall use the max norm. Define operator F on D by F(ϕ)(x) = ϕ(x) − 5

Z 1

x τ ϕ(τ)3 dτ.

(27.33)

0

Then, the Fr´echet derivative is defined by Z 1

0

F (ϕ[v])(x) = v(x) − 15

x τ ϕ(τ)2 v(τ)dτ

0

f or each v ∈ D.

(27.34)

Clearly, we have x? (x) = 0 for each x ∈ [0, 1], K? = 7.5 and K1 = K = 15. Then, by (27.9), we get 2 1 ρ1 = ρ = = , 15 + 2 (7.5) 15 whereas

2 2 = < ρ. 3 × 15 45

ρ2 =

3.

Semi-local Convergence of Method (27.3)

We improve the semi-local convergence of method (27.3) given in [3]. Theorem 50. Let F : D ⊆ X → Y be Fr´echet-differentiable operator. Suppose: There exist x0 ∈ D, β ≥ 0 such that F 0 (x0 ) is invertible and kF 0 (x0 )−1 F(x0 )k ≤ β; There exists L0 > 0 such that for each x ∈ D kF 0 (x0 )−1 (F 0 (x) − F 0 (x0 ))k ≤ L0 kx − x0 k,

(27.35)

1 ); L0 There exists L > 0 such that for each x, y ∈ D2

set D2 = D ∩ B(x0 ,

kF 0 (x0 )−1 (F 0 (x) − F 0 (y))k ≤ L kx − yk; Lβ ≤

1 2

(27.36) (27.37)

and B(x0 , r) ⊆ D, where

(27.38)

p

1 −2Lβ . (27.39) L Then, sequence {xn }, {yn } starting from x0 and generated by method (27.3) are well defined in B(x0 , r), remain in B(x0 , r) and converge to a solution x? ∈ B(x0 , r) of equation F(x) = 0. Moreover, the following error estimates hold r=

1−

kxn+1 − xn k ≤ tn+1 − tn ,

n = 0, 1, 2, · · · ,

(27.40)

226

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al. kxn+1 − xn k ≤

where, {tn } is defined by

and h(t) is defined by

r − tn+1 (kx? − xn k + kxn − yn k), (r − tn )2

n = 1, 2, · · · ,

 t0 = 0,     t1 = t0 − h0 (t0 )−1 h(t0 ), tn + tn−1  sn = ,   2   tn+1 = sn − h0 (sn )−1 h(sn ), n = 1, 2, · · · , h(t) = β − t +

L 2 t . 2

(27.41)

(27.42)

(27.43)

Proof. Notice that the iterates lie in D2 , which is a more precise location than D used in [3]. The rest follows the proof in [3] by replacing L1 by L, where L1 is the Lipschitz constant on D (see also Remark 38). Concerning the uniqueness of the solution, we have: Proposition 12. Under the hypotheses of Theorem 50, further suppose that there exists r? ≥ r such that L0 (r + r? ) < 2. (27.44) Then, the limit point x? is the only solution in D3 = D ∩ B(x0 , r? ) of the equation F(x) = 0. Proof. Let y? ∈ D3 with F(y? ) = 0. Use (27.35), (27.44) and operator Q as defined in Proposition 11 to obtain in turn that Z 1     kF 0 (x0 )−1 Q − F 0 (x0 ) k = k F 0 (x0 )−1 F 0 (x? + θ (y? − x? )) − F 0 (x0 ) d θk 0

≤ L0 ≤

Z 1 0

[(1 − θ) kx? − x0 k + θ ky? − x0 k] d θ

L0 (r + r? ) < 1, 2

(27.45)

so Q is invertible. The rest as identical to Proposition 11 is omitted. Remark 38. In [3] the Lipschitz condition for each x, y ∈ D 0 0 kF 0 (x−1 0 ) (F (x) − F (y))k ≤ L1 kx − yk

(27.46)

for some L1 > 0 together with the Kantorovich-type semi-local convergence criterion for method (27.3) were given in (27.6). Notice that L ≤ L1 , (27.47) since D2 ⊆ D, so

L1 β ≤

1 1 =⇒ L β ≤ 2 2

(27.48)

Deformed Newton Method for Solving Equations

227

but not necessarily vice versa, unless if L = L1 . Hence, the convergence domain of method (27.3) is extended. Moreover, the new error bounds are tighter, if strict inequality holds in (27.41). Furthermore, no uniqueness results were given in [3] for the local or semi-local case. 1 Example 37. Let X = Y = R, x0 = 1 and D = B(x0 , 1 − p) for p ∈ (0, ). Define function 2 F on D by F(x) = x3 − p. (27.49) 1 Then, using the definition of β, (27.35), (27.36), (27.44) and (27.49), we get β = (1 − p), 3 4− p L0 = 3 − p, L = 2 ( ) and L1 = 2(2 − p). 3− p Notice that 1 L1 < L for each p ∈ (0, ) (27.50) 2 and √ 1 (27.51) L0 < L for each p ∈ (2 − 3, ). 2 Convergence criterion (27.44) is not satisfied, since L1 β >

1 2

1 for each p ∈ (0, ). 2

(27.52)

However, our new criterion (27.37) is satisfied, since Lβ ≤

1 2

1 for each p ∈ [0.461983163, ). 2

(27.53)

Remark 39. The improvements are obtained in this chapter under the same computational cost as in [3] since the new Lipschitz constants are special cases of the old ones.

4.

Numerical Examples

Some numerical examples are presented to justify the theoretical results. Example 38. Let X = Y = R and D = [−3, 3]. Define function F on D by F(x) = arctanx.

(27.54) (1)

Let us denote sequence {xn } generated by method (27.3) as {xn } and sequence {xn } gen(2) erated by Newton’s method (27.2) as {xn }. The two sequences are listed in Table 1 for x0 = 1.4 and Table 2 for x0 = 2. We observe from Table 1 and Table 2 that sequence {xn } generated by method (27.3) converges whereas sequence {xn } generated by Newton’s method (27.2) diverges. π π Example 39. Let X = Y = R and D = [− , ]. Define function F on D by 2 2 F(x) = sinx.

(27.55)

228

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al. (1)

Let us denote sequence {xn } generated by method (27.3) as {xn } and sequence {xn } gen(2) erated by Newton’s method (27.2) as {xn }. The two sequences are listed in Table 3 for x0 = 1.22. We observe from Table 3 that sequence {xn } generated by method (27.3) converges whereas sequence {xn } generated by Newton’s method (27.2) doesn’t converge to the solution in D, since the second step x2 doesn’t fall in D.

Table 27.1. The results for Example 38 when x0 = 1.4 n 1 2 3 4 5 6 7

(1)

{xn } generated by (27.3) -1.413618649 2.10483E-7 0.215853874 -0.00836164 -0.00826497 3.83028E-10 4.70481E-11

(2)

{xn } generated by (27.2) -1.413618649 1.450129315 -1.550625976 1.847054084 -2.893562393 8.710325847 -103.2497738

Table 27.2. The results for Example 38 when x0 = 2 n 1 2 3 4 5 6 7 8

5.

(1)

{xn } generated by (27.3) -3.535743589 0.273081655 2.106339697 -0.916092627 -0.131831416 0.091171147 5.60136E-06 -6.3138E-05

(2)

{xn } generated by (27.2) -3.535743589 13.95095909 -279.3440665 122016.9989

Conclusion

An extension of a deformed Newton’s method in Banach spaces is provided for approximating a solution of an operator equation. The deformed Newton’s method was presented to avoid some shortcomings of Newton’s method in [Guo Xue-ping, Feng jing, Convergence of a deformed Newton’s method. J. of Zhejiang University: Science Edition, 2006, 33 (4) 389-392 ]. The radius of convergence ball of the deformed Newton’s method is given under Lipschitz and center Lipschitz conditions for the Fr´echet derivative of the involved function. Moreover, the semi-local convergence is improved by using our idea of the restricted convergence domain. Numerical examples are presented to illustrate the theoretical results.

Deformed Newton Method for Solving Equations

229

Table 27.3. The results for Example 39 when x0 = 1.22 n 1 2 3 4 5 6 7

(1)

{xn } generated by (27.3) -1.512754199 0.001054476 0.186733391 -0.000276902 -0.00027104 6.85476E-12 8.29641E-13

(2)

{xn } generated by (27.2) -1.512754199 15.69675944 15.70796374 15.70796327 15.70796327 15.70796327 15.70796327

References [1] Argyros I. K., Computational theory of iterative methods, Series: Studies in Computational Mathematics 15, Editors, Chui C. K. and Wuytack L., Elservier Publ. Co. New York, USA, 2007. [2] Argyros I. K., On the Newton-Kantorovich hypothesis for solving equations, J. Comput. Appl. Math., 169 (2004) 315-332. [3] Guo Xue-Ping, Feng Jing, Convergence of a deformed Newton’s method. J. of Zhejiang University: Science Edition, 2006, 33 (4) 389-392. [4] Ortega J. M., Rheinbolt W. C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970. [5] Traub J. F., Iterative Methods for the Solution of Equations, Prentice-Hall Englewood Cliffs, New Jersey, 1994.

Chapter 28

On the Newton-Kantorovich Theorem 1.

Introduction

In [2], Argyros studied the problem of approximating a locally unique solution of the nonlinear equation F (x) = 0, (28.1) where F is a Fr´echet differentiable operator defined on an open convex subset D of a Banach space A with values in a Banach space B. In [2], the existence and uniqueness of exact solutions to an equation (28.1) was proved using Newton-Kantorovich theorem [5, Th. 6, (1, XVIII)]. Moreover, a priori and a posteriori estimates are obtained as a direct consequence of the Newton-Kantorovich theorem [3]. “Tsuchiya in [8] used this theorem to show the existence of a finite element solutions of strongly nonlinear elliptic boundary value problems. However, it is possible that the basic condition in this theorem, the so-called the Newton-Kantorovich’s hypothesis is given in [5] may not be satisfied and still Newton’s method converges [1], [3]. That is why we introduced a weaker hypothesis (see Theorem 51 that follows) originated in [1], which can always replace the Newton-Kantorovich hypothesis used in [5], [8], (see also (3.3)), and under the same computational cost. This way, we can use Newton’s method to solve a a wider range of problems than before [8]. Moreover, finer estimates on the distances involved and more precise information on the location of the solution are obtained in [1], [3]”(see [2]). As in [2], we provide examples of elliptic boundary value problems where our results apply.

2.

Convergence Analysis

¯ ρ) stand, respectively for the open and closed balls in A with center v ∈ A Let U(v, ρ), U(v, and of radius ρ > 0.

232

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

We state the version of our main semi-local convergence result in [3] needed in this study (see also [1,p. 132, Case 3 for δ = δ0 ]). Theorem 51. Let F : D ⊆ A → B be a nonlinear Fr´echet differentiable operator, where D is open, convex, and A , B are Banach spaces. Suppose: There exists a point x0 ∈ D such that the Fr´echet derivative F 0 (x0 ) ∈ L (A , B ) is an isomorphism and F (x0 ) 6= 0; there exists positive constants `0 and ` such that the following center Lipschitz and Lipschitz conditions are satisfied:



0

−1  0 F (x) − F 0 (x0 ) ≤ `0 kx − x0 k for each x ∈ D (28.2)

F (x0 )

 1

0 −1  0 (28.3) F (x) − F 0 (y) ≤ ` kx − yk for all x, y ∈ D0 = D ∩U(x0 , );

F (x0 ) `0 Setting:



η = F 0 (x0 )−1 F (x0 ) and

h1 =

Suppose further

p p 1 (4`0 + `0 + 8`0 ` + `0 `) η. 4 h1 ≤ 1;

(28.4)

and U¯ (x1 ,t ∗ − η) ⊆ D,

where, x1 = x0 − F 0 (x0 )−1 F (x0 ) , and t ∗ ≥ η is the unique least upper bound of nondecreasing majorizing sequence {tn} given by: t0 = 0, t1 = η, tn+2 = tn+1 + where, `1 =



`0 i f ` if

`1 (tn+1 − tn )2 (n ≥ 0), 2 (1 − `0 tn+1)

(28.5)

n=0 . n>0

¯ since D0 ⊆ D. Notice that in [2] (28.3) is valid on D with constant ` replaced by `¯ with ` ≤ `, However, the iterated {xn } lie in D0 which is a more precise location than D used in [2]. Then, the proof in [2] goes through in this improved setting. Let h¯ 1 , denote h1 but with `¯ replacing `. We have that h¯ 1 ≤=⇒ h1 ≤ 1. (28.6) ∗ ∗ ¯ Then equation F (x) = 0 has a solution x ∈ U (x1 ,t − η) and this solution is unique in U (x0 ,t ∗) ∩ D, if `0 = ` and h1 < 1, and U¯ (x0 ,t ∗ ) ∩ D, if `0 = ` and h1 = 1. If `0 6= ` the 1 solution x∗ is unique in U(x0 , R) provided that (t ∗ + R) `0 ≤ 1 and U (x0 , R) ⊆ D. 2 Moreover, we have the estimatekx∗ − x0 k ≤ t ∗ .

On the Newton-Kantorovich Theorem

233

We will simply use k·k if the norm of the element involved is well understood. Otherwise, we will use k·kX for the norm on a particular set X. We assume the following: (A1 ) there exist Banach spaces Z ⊆ X and U ⊆ Y such that the inclusions are continuous, and the restriction of F to Z, denoted again by F, is a Fr´echet differentiable operator from Z to U. (A2 ) For any v ∈ Z the derivative F 0 (v) ∈ L (Z,U) can be extended to F 0 (v) ∈ L (X,Y ) and it is: - center locally Lipschitz continuous at a fixed u0 ∈ Z, i.e., for any bounded convex set T ⊆ Z with u0 ∈ T there exists a positive constant c0 depending on u0 and T such that

0

F (v) − F 0 (u0 ) ≤ c0 kv − u0 k , for all v ∈ T. (28.7)

- Locally Lipschitz continuous on Z, i.e., for any bounded convex set T ⊆ Z there exists a positive constant c1 depending on T such that

0

F (v) − F 0 (w) ≤ c1 kv − wk , for all v.w ∈ T. (28.8)

(A3 ) There are Banach spaces V ⊆ Z and W ⊆ U such that the inclusions are continuous. We suppose that there exists a subset S ⊆ V for which the following holds: ”if F 0 (u) ∈ L (V,W) is an isomorphism between V and W at u ∈ S, then there exists F 0 (u) ∈ L (X,Y ), which is an isomorphism between X and Y as well”. To define discretized solutions of F (u) = 0, we introduce the finite dimensional subspaces Sd ⊆ Z and Sd ⊆ U parametrized by d, 0 < d < 1 with the following properties: (A4 ) There exists r ≥ 0 and a positive constant c2 independent of d such that kvd kZ ≤

c2 kvd kX , for all vd ∈ Sd . dr

(28.9)

(A5 ) There exists projection Πd : X → Sd for each Sd such that, if u0 ∈ S is a solution of F (u) = 0, then lim d −r ku0 − Πd u0 kX = 0 (28.10) d→0

and lim d −r ku0 − Πd u0 kZ = 0.

d→0

(28.11)

We can show the following result concerning the existence of locally unique solutions of discretized equations. Theorem 52. Assume that conditions (A1 )–(A5 ) hold. Suppose F 0 (u0 ) ∈ L (V,W ) is an isomorphism, and u0 ∈ S. Moreover, assume F 0 (u0 ) can be decomposed into F 0 (u0 ) = Q + R, where Q ∈ L (X,Y ) and R ∈ L (X,Y ) is compact. The discretized nonlinear operator Fd : Z → U is defined by Fd (u) = (I − Pd ) Q (u) + Pd F (u)

(28.12)

where I is the identity of Y, and Pd : Y → Sd is a projection such that lim kv − Pd vkY = 0, for all v ∈ Y,

d→0

(28.13)

234

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

and (I − Pd )Q (vd ) = 0, for all vd ∈ Sd .

(28.14)

Then, for sufficiently small d > 0, there exists ud ∈ Sd such that Fd (ud ) = 0, and ud is locally unique. Moreover the following estimate holds kud − Πd (u0 )k ≤ `1 ku0 − Πd (u0 )k

(28.15)

where `1 is a positive constant independent of d. Proof. The proof is similar to the corresponding one in [8,Th. 2.1, p. 126]. However, there are some crucial differences where (28.7) is used (needed) instead of condition (28.8). Step 1. We claim that there exists a positive constant c3 , independent of d, such that, for sufficiently small h > 0,

0

Fd (Πd (u0 ))vd ≥ c3 kvd k , for all vd ∈ Sd . (28.16) X Y



From (A3 ) and u0 ∈ S, F 0 (u0 ) ∈ L (X,Y ) is an isomorphism. Set B0 = F 0 (u0 )−1 . We can have in turn  Fd0 (Πd (u0 )) vd = F 0 (u0 )vd + Pd F 0 (Πd (u0 )) − F 0 (u0 ) vd (28.17)  0 − (I − Pd ) −Q + F (u0 ) vd . Since −Q + F 0 (u0 ) ∈ L (X,Y ) is compact we get by (28.13) that

 lim (I − Pd ) −Q + F 0 (u0 ) = 0. d→0

(28.18)

By (28.13) there exists a positive constant c4 such that sup kPd k ≤ c4 .

(28.19)

d>0

That is, using (28.7) we get



Pd F 0 (Πd (u0 )) − F 0 (u0 ) ≤ c0 c4 kΠd (u0 ) − u0 k .

(28.20)

Hence, by (28.11) we can have



0



Fd (Πd (u0 )) vd ≥ 1 − δ(d) kvd k , B0

where lim δ(d) = 0, and (28.16) holds with c3 = d→0

(28.21)

B−1 0 . 2

Step 2. We shall show:



lim d −r Fd0 (Πd (u0 ))−1 Fd (Πd (u0 )) = 0.

d→0

(28.22)

On the Newton-Kantorovich Theorem

235

Note that kFd (Πd (u0 ))k ≤ c4 kFd (Πd (u0 )) − Fd (u0 )k ≤ c4

Z 1 0

kGt k dt kΠd (u0 ) − u0 k

≤ c4 c5 kΠd (u0 ) − u0 k ,

(28.23)

Gt = F 0 ((1 − t) u0 + tΠd (u0 ))

(28.24)



kGt k ≤ Gt − F 0 (u0 ) + F 0 (u0 )

≤ c0 t kΠd (u0 ) − u0 k + F 0 (u0 ) ≤ c5

(28.25)

≤ c1 c2 c4 d −r kwd − vd kX

(28.26)

where and we used

where c5 is independent of d. The claim is proved. Step 3. We use our modification of the Newton-Kantorovich theorem with the following choices: A = Sd ⊆ Z, with norm d −r kwd kX , B = Sd ⊆ U with norm d −r kwd kY , x0 = Πd (u0 ) , F = Fd . Notice that kSkL(A,B) = kSkL(X,Y ) for any linear operator S ∈ L (Sd , Sd ). By Step 1, we know Fd0 (Πd (u0 )) ∈ L (Sd , Sd ) is an isomorphism. It follows from (28.8) and (A4 ) that for any wd , vd ∈ Sd ,

0

F (wd ) − F 0 (vd ) ≤ c1 c4 kwd − vd k d d Z Similarly, we get using (28.7) and (A4 ) that

0

F (wd ) − F 0 (Πd (u0 )) ≤ c1 c2 c4 d −r kwd − x0 k . d d X

Hence assumptions are satisfied with

−1 ` = c1 c2 c−1 3 c4 and `0 = c0 c2 c3 c4 .

(28.27)

From Step 2, we may take sufficiently small d > 0 such that h1 ≤ 1, where



η = d −r Fd0 (Πd (u0 ))−1 Fd (Πd (u0 )) . X

That is, assumption (28.4) is satisfied. Hence for sufficiently small d > 0 there exists a locally unique ud ∈ Sd such that Fd (ud ) = 0 and kud − Πd (u0 )kX ≤ 2d r η ≤ 2c−1 3 kFd (Πd (u0 ))kY ≤ 2c−1 3 c4 c5 ku0 − Πd (u0 )kX .

It follows (28.15) holds with `1 = 2c−1 3 c4 c5 . That completes the proof of the Theorem.

236

3.

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Concluding Remarks and Applications

Remark 40. In general, since `0 ≤ `¯ c0 ≤ c1

(28.28)

`¯ can be arbitrarily large. If `¯ = `0 our Theorem 52 reduces to the corresponding `0 Theorem 2.1 in [8,p. 126]. Otherwise our condition h¯ 1 ≤ 1 or new condition h1 ≤ are weaker than the corresponding one in [8] using the famous for its simplicity and clarity Newton-Kantorovich hypothesis holds and

h = 2 ` η ≤ 1 [5], [1].

(28.29)

h ≤ 1 =⇒ h¯ 1 ≤ 1 and h ≤ 1 =⇒ h1 ≤ 1

(28.30)

That is, but not necessarily vice versa, unless if `0 = `¯ = `. As already shown in [2], finer error estimates on the distances kud − Πd (u0 )k and a more precise information on the location of the solution are provided here and under the same computational cost since in practice the evaluation of c1 requires that of c0 . Note also that our parameter d will be smaller than the corresponding one in [8] which in turn implies fewer computations and smaller dimension subspaces Sd are used to approximate ud . This observation is very important in computational mathematics [1]. The above observations suggest that all results obtained in [8] can be improved if rewritten with weaker h1 ≤ 1 or h¯ 1 ≤ 1 instead of stronger h ≤ 1. However, we do not attempt this here (leaving this task to the motivated reader). Instead, we provide examples of nonlinear problems already reported in [8] where finite element methods apply along the lines of our theorem above. Example 40. [8] Find u ∈ H01 (J), J = (b, c) ⊆ R such that hF (u), vi =

Z  J

   g0 x, u, u0 v0 + g x, u, u0 v dx = 0, for all v ∈ H01 (J)

(28.31)

where g0 and g1 are sufficiently smooth functions from J × R × R to R.

Example 41. [8] For the N-dimensional case (N = 2, 3) let D ⊆ RN be a bounded domain with a Lipschitz boundary. Then consider the problem: find u ∈ H01 (D) such that hF (u) , vi =

Z

D

[q0 (x, u, ∇u) · ∇v + q (x, u, ∇u) · v] dx = 0, for all v ∈ H01 (D),

(28.32)

where q0 ∈ D × R × RN to R are sufficiently smooth functions. Remark 41. Since equations (28.31) and (28.32) are defined in divergence form, their finite element solutions are defined in a natural way. Finite element methods applied to nonlinear elliptic boundary value problems have also been considered by other authors [4], [5]. Finally, more details on Examples 40 and 41 can be found in [8].

On the Newton-Kantorovich Theorem

4.

237

Conclusion

Using a weaker version of the Newton-Kantorovich theorem [3], Argyros [2] provided a discretization result to find finite element solutions to elliptic boundary value problems. In this chapter, using a weaker version of the Newton-Kantorovich theorem and restricted convergence domains, we improve the results in [2]. We obtained our results under weaker and under the same computational cost. The analysis lead to finer estimates of the distances involved and more precise information on the location of the solution than before in earlier studies.

References [1] Argyros I. K., Computational theory of iterative methods, Series: Studies in Computational Mathematics, 15, Editors, Chui C. K. and Wuytack L., Elsevier Publ. Co., 2007, New York, U.S.A. [2] Argyros I. K., On the Newton-Kantorovich theorem and nonlinear finite element methods, Applicationes Mathematicae, 36, 1(2009), 75–81. [3] Argyros I. K. and Hilout S. Weaker conditions for the convergence of Newton’s method, J. Complexity, 28, (2012), 364–387. [4] Feinstauer M., Zernicek A., Finite element solution on nonlinear elliptic problems, Numer. Math. 50, (1987), 471–475. [5] Kantorovich L. V., Akilov G. P., Functional Analysis in Normed Spaces, Pergamon Press, Oxford, 1982. [6] Pousin J., Rappaz J., Consistency, stability, a priori and a posteriori errors for PetrovGalerkin’s method applied to nonlinear problems, Numer. Math., 69 (1994), 213– 231. [7] Tsuchiya T., Babuska I., A priori error estimates of finite element solutions of parametrized strongly nonlinear boundary value problems, J. Comp. Appl. Math., 79 (1997) 41–66. [8] Tsuchiya T., An application of the Kantorovich theorem to nonlinear finite element analysis, Numer. Math., 84 (1999), 121–141.

Chapter 29

Kantorovich-Type Extensions for Newton Method 1.

Introduction

Let F : Ω ⊂ B1 −→ B2 be a Fr´echet differentiable operator. Newton-like methods defined for each n = 0, 1, 2, · · · by xn+1 = xn − A] (xn )F(xn ), (29.1) where x0 ∈ Ω is an initial point are undoubtedly very popular methods for generating a sequence {xn } approximating a solution x∗ of the equation F(x) = 0.

(29.2)

Here and below A(xn ) ∈ L(B1 , B2 ) is an approximation of the Fr´echet derivative F 0 (xn ), A] (xn ) denotes an outer inverse of A(xn ) i.e., A](xn )A(xn )A](xn ) = A](xn ) and L(B1 , B2 ) denote the set of all bounded linear operators on the Banach space B1 into the Banach space B2 . The setting (29.1) includes generalized Newton methods for undetermined systems, Gauss-Newton method for nonlinear least-squares problems, and Newton-like method for nonlinear ill-posed operator equations in Banach spaces [1]-[24]. Several authors have used outer inverses or generalized inverses in the context of Newton’s method, for example, Deuflhard and Heindl [11], H¨aubler [15], Yamamoto [23] and Nashed and Chen [20]. However, in these papers except [20], the authors assume the following condition on either an outer inverse or the Moore-Penrose inverse: kF 0 (y)](I − F 0 (x)F 0 (x)])F(x)k ≤ α(x)kx − yk, α(x) ≤ α¯ < 1 for each x, y ∈ Ω. This condition is very strong and can hardly be satisfied in concrete cases [20]. Using stability and perturbation bounds for out inverses, Nashed and Chen [20] established a sharp generalization of the Kantorovich theory and the Mysovskii theory for operator equations in Banach spaces where the derivative is not necessarily invertible. In this chapter, we further improve the results of Nashed and Chen [20] using our new idea of the restricted convergence domain. That is we find a more precise domain, where the iterates {xn } lie,

240

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

leading to at least as small Lipschitz constants which in turn lead to larger convergence domains, tighter error bounds on the distances involved, and more precise information on the location of the solution. The rest of the chapter is organized as follows. Section 2 contains the semi-local convergence analysis for Newton-like method (29.1). Numerical examples are presented in Section 3.

2.

Semi-Local Convergence for Newton-Like Methods

¯ ρ) stand, respectively for the open and closed balls in B1 with center v ∈ B1 Let U(v, ρ), U(v, and of radius ρ > 0. We use the notation R (A) to denote the range of the operator A. We prove the Kantorovich-type theorem for Newton-like method (29.1). Theorem 53. Suppose F : Ω ⊂ B1 −→ B2 is a Fr´echet differentiable operator and A(x) ∈ L(B1 , B2 ). Moreover, suppose that there exist an open convex subset Ω0 of Ω, x0 ∈ Ω0 , a bounded outer inverse A] of A(= A(x0 )) and constants η, K0 > 0, M0 > 0, L ≥ 0, µ0 , `0 ∈ [0, 1) such that for all x ∈ Ω0 the following conditions hold: kA]F(x0 )k ≤ η

(29.3)

kA](A(x) − A)| ≤ Lkx − x0 k + `0 .

(29.4)

and

Moreover, suppose that for each x, y ∈ Ω1 := U(x0 ,

1 − `0 ) ∩ Ω0 L

kA] (F 0 (x) − F 0 (y))| ≤ K0 kx − yk

(29.5)

kA] (F 0 (x) − A(x))| ≤ M0 kx − x0 k + µ0

(29.6)

b0 := µ0 + `0

(29.7)

1 h0 := σ0 η ≤ (1 − b0 )2 , σ0 = max{K0 , L + M0 } 2

(29.8)

and ¯ 0 , s ∗ ) ⊂ Ω0 , U(x q where σ0 := max{K0 , M0 + L} and s∗ = (1 − b0 − (1 − b)2 − 2h0 )/σ0 . Then,

(29.9)

(i) the sequence {xn } generated by (29.1) for A(xk )] = (I + A] (A(xk) − A))−1 A] is well ¯ 0 , s∗ ) of A] F(x) = 0; defined, remains in U(x0 , s∗ ) and converges to a solution x∗ ∈ U(x

(ii) the equation A] F(x) = 0 has a unique solution in U¯ 0 ∩ {R (A]) + x0 }, where  1   U(x ¯ 0 , s ∗ ) ∩ Ω0 , i f h0 = (1 − b0 )2 2 U¯ 0 =  ¯ 0 , s∗∗ ) ∩ Ω0 , i f h0 < 1 (1 − b0 )2  U(x 2

Kantorovich-Type Extensions for Newton Method

241

R (A] ) + x0 := {x + x0 : x ∈ R (A])} and ∗∗

s = (1 − b0 + (iii) the following estimates hold

q

(1 − b0 )2 − 2h0 )/σ) .

(29.10)

kxn+1 − xn k ≤ sn+1 − sn

(29.11)

kxn − x∗ k ≤ s∗ ,

(29.12)

and σ0 where f 0 (t) = t 2 − (1 − b0 )t + η, g0 (t) = 1 − Lt − `0 and majorizing sequence {sn } 2 is given by f0 (sn ) s0 = 0, sn+1 = sn + . (29.13) g0 (sn ) Proof. Simply replace K, M, µ, `, b, h,σ,t ∗,t ∗∗ , f , g, {tn}, respectively by K0 , M0 , µ0 , `0 , b0 , h0 , σ0 , s∗ , s∗∗ , f 0 , g0 , {sn } in the proof of Theorem 3. 1 in [20,p,241] and notice that the iterates {xn } belong in the set Ω1 , which is at least as small as Ω0 used in [20].  Remark 42. (a) If conditions (29.5) and (29.6) hold on Ω0 instead of Ω1 , then Theorem 53 reduces to Theorem 3.1 in [20,p.241]. Otherwise, i.e., if Ω1 is a strict subset of Ω0 , then K0 M0 µ0







K,

(29.14)

M,

(29.15)

µ,

(29.16)

`0 ≤ `, 1 1 h = ση ≤ (1 − b)2 =⇒ h0 ≤ (1 − b0 )2 2 2

(29.17) (29.18)

but not necessarily vice versa unless, if equality holds in all estimates (29.14)– (29.17), sn ≤ tn ,

s



(29.19)



(29.20)

s∗∗ ≤ t ∗∗

(29.21)

≤ t

and where K, M, µ, `, σ, h are the parameters used in [20] instead of K0 , M0 , µ0 , `0 , σ0 , h0 , f (tn ) σ t0 = 0,tn+1 = tn − , f (t) = t 2 − (1 − b)t + η, g(t) = 1 − Lt − `0 , t ∗ = g(t ) 2 n p p 2 1 − b − (1 − b) − 2h 1 − b + (1 − b)2 − 2h and t ∗∗ = . σ σ

242

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

(b) It is worth noticing that Theorem 3.1 in [20] generalized the Yamamoto theorem (see [23]), if A] (xn ) = A(xn )−1 . (c) The rest of the results in [20] can be extended along the same lines. In particular, if we let A(x) = F 0 (x)], then we obtain for `0 = µ0 = M0 = 0, the convergence result for Newton’s method xn+1 = xn − F 0 (xn )] F(xn ) as a special case of Theorem 53. The sufficient convergence criterion (29.8) reduces to 1 (29.22) h¯ 0 = σ0 η ≤ , σ0 = max{K0 , L} 2 which is at least as weak as the Kantorovich hypothesis [17] used in [20]: 1 h¯ = ση ≤ , σ = K = L 2

(29.23)

(see also the numerical examples). (d) The preceding results can be improved even further, if we define instead of set Ω1 , 1 − `0 Ω∗1 := Ω0 ∩U(x1 , − kx1 − x0 k) for x1 = x0 − A] (x0 )F(x0 ) provided that L Lη ≤ 1 − `0 .

(29.24)

The set Ω∗1 ( as Ω1 ) is still obtained using the initial data. Denote corresponding constants K01 , M01 , µ10 , `10 , b10 and h10 . The new constants are again at least as small as constants K0 , M0 , µ0 , `0 , b0 and h0 , respectively, since Ω∗1 ⊆ Ω1 . In particular, we have that 1 1 h0 ≤ =⇒ h10 ≤ . (29.25) 2 2 Notice also that the second inequality in (29.25) implies (29.24). Moreover, the ball U(x0 , s∗ ) is replaced by U(x1 , s¯∗ − η) in Theorem 53 where s¯∗ , s¯∗∗ are obtained as s∗ , s∗∗ , respectively by exchanging the constants. (e) The improvements over earlier studies [1]-[24] given here are also obtained under the same computational cost, since in practice the computation of constants K, L, `, M, µ, requires the computation of the new constants as special cases. (f) With the exception of the uniqueness part, the results of Theorem 53 can be extended to hold for the equation F(x) + Q(x) = 0 (29.26) using the corresponding iteration xn+1 = xn − A] (xn )(F(xn ) + Q(xn )),

(29.27)

where Q : Ω −→ B2 is a continuous operator. Suppose that there exist µ1 ∈ [0, 1) (or µ2 ) such that for each x, y ∈ Ω1 (or Ω∗1 ) kA] (Q(x) − Q(y))k ≤ µ1 kx − yk.

(29.28)

Kantorovich-Type Extensions for Newton Method

243

Define µ∗0 = µ0 + µ1 (or µ∗0 = µ10 + µ2 ). Then, the conclusions of Theorem 53 for equation (29.26) with the exception of the uniqueness part hold for iteration {xn } generated by (29.27) provided that (29.28) holds and µ∗0 replaces µ0 in Theorem 53. Hence, the preceding results can be used to solve equations like (29.26) containing a non-differentiable term. Finally, these results improve the corresponding ones using larger constants K, M, µ, `, b, h, σ,t ∗,t ∗∗ in [9, 10, 14, 17, 20, 21, 22, 23, 24].

3.

Numerical Examples

We present two numerical examples. For simplicity, we choose A(x) = F 0 (x) for each x ∈ Ω. ¯ 1−a) for a ∈ (0, 0.5) Example 42. Let B1 = B2 = R, x0 = 1. Define function F on Ω = U(1, by F(x) = x3 − a.

1 1 Then, for x0 = 1, we have η = (1 − a), L = 3 − a < K = 2(2 − a), K0 = 2(1 + ) < K. We 3 L also have that √   < K, i f a < 2 − √3 K0 = K, i f a = √ 2− 3  > K, i f 2 − 3 < a < 0.5. Kantorovich condition (29.23) is not satisfied, since

1 h¯ = Kη > for each a ∈ (0, 0.5). 2 However, our condition (29.22) 1 1 1 h¯ 0 = max{2(1 + ), L}(1 − a) ≤ 3 L 2 for each a ∈ (0.461983163,0.5). Example 43. Let B1 = B2 = C [0, 1] be the space of continuous functions defined in [0, 1] equipped with the max-norm. Let Ω = {x ∈ C [0, 1]; kxk ≤ R}, such that R > 0 and F defined on Ω and given by F(x)(s) = x(s) − ϕ(s) − λ

Z 1

G(s,t)x(t)3 dt,

0

x ∈ C[0, 1], s ∈ [0, 1],

where ϕ ∈ C [0, 1] is a given function, λ is a real constant and the kernel G is the Green function  (1 − s)t, t ≤ s, G(s,t) = s(1 − t), s ≤ t.

In this case, for each x ∈ Ω, F 0 (x) is a linear operator defined on Ω by the following expression: [F 0 (x)(v)](s) = v(s) − 3λ

Z 1 0

G(s,t)x(t)2v(t) dt,

v ∈ C[0, 1], s ∈ [0, 1].

244

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

If we choose x0 (s) = ϕ(s) = 1, it follows that kI − F 0 (x0 )k ≤ 3|λ|/8. Thus, if |λ| < 8/3, F 0 (x0 )−1 is defined and 8 . kF 0 (x0 )−1 k ≤ 8 − 3|λ| Moreover,

kF(x0 )k ≤

|λ| , 8

so η = kF 0 (x0 )−1 F(x0 )k ≤ On the other hand, for x, y ∈ Ω we have kF 0 (x) − F 0 (y)k ≤ kx − yk

|λ| . 8 − 3|λ|

1 + 3|λ|(kx + yk) 1 + 6R|λ| ≤ kx − yk . 8 8

and

1 + 3|λ|(kxk + 1) 1 + 3(1 + R)|λ| ≤ kx − 1k . 8 8 Choosing λ = 1.175 and R = 2, we have η = 0.26257 . . ., K = 2.76875...,K0 = 1.8875 . . ., L = 1.47314 . . ., M0 = `0 = µ0 = 0. Using these values, we obtain that condition (29.23) is not satisfied, since kF 0 (x) − F 0 (1)k ≤ kx − 1k

1.02688 . . . 1 h¯ = > , 2 2 but condition (29.22) is satisfied, since 0.986217 . . . 1 < . h¯ 0 = 2 2 Hence, we can ensure the convergence of Newton’s method by Theorem 53.

4.

Conclusion

We present Kantorovich-type extensions of Newton-like methods using outer inverses for singular operator equations. In particular, we show how to expand the convergence domain of the method considered in earlier studies by using the center Lipschitz condition and more precise information about where the iterates are located leading to smaller Lipschitz constants. Numerical examples further illustrate the theoretical results.

References [1] Amat S., Busquier S., Negra M., Adaptive approximation of nonlinear operators. Numer. Funct. Anal. Optim. 25 (2004) 397–405. [2] Argyros I. K., Hilout S., Weaker conditions for the convergence of Newton’s method. J. Complexity 28 (2012) 364–387.

Kantorovich-Type Extensions for Newton Method

245

[3] Argyros I. K., A unifying local-semi-local convergence analysis and applications for two-point Newton-like methods in Banach space. J. Math. Anal. Appl. 298 (2004) 374–397. [4] Argyros I. K., On the Newton-Kantorovich hypothesis for solving equations. J. Comput. Appl. Math. 169 (2004) 315–332. [5] Argyros I. K., Computational theory of iterative methods. Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Co. New York, U.S.A, 2007. [6] Argyros I. K., Cho Y. J., Hilout S., Numerical methods for equations and its applications, CRC Press/Taylor and Francis Publ., New York, 2012. [7] Argyros I. K., Hilout S., Extending the Newton-Kantorovich hypothesis for solving equations. J. Comput. Appl. Math. 234 (2010) 2993–3006. [8] Argyros I. K., Hilout S., convergence of Newton’s method under weak majorant condition. J. Comput. Appl. Math. 236 (2012) 1892–1902. [9] Chen X. and Yamamoto T., Convergence domains of certain iterative methods for solving nonlinear equations, Numer. Funct. Anal. Optimization, 10, (1989), 37–48. [10] Dennis. Jr J. E., On Newton-like methods, Numer. Math. 11, (1968), 324–330. [11] Deuflbard P. and Heindl G., Affine invariant convergence theorem for Newton’s method and extensions to related methods, SIAM J. Numer. Anal., 16,(1979), 1–10. [12] Ezqu´erro J. A., Guti´errez J. M., Hern´andez M. A., Romero N., Rubio M. J., The Newton method: from Newton to Kantorovich. (Spanish) Gac. R. Soc. Mat. Esp. 13 (2010) 53–76. [13] Ezqu´erro J. A., Hern´andez M. A., An improvement of the region of accessibility of Chebyshev’s method from Newton’s method. Math. Comp. 78 (2009) 1613–1627. [14] Ezqu´erro J. A., Hern´andez M. A., Romero N., Newton-type methods of high order and domains of semilocal and global convergence. Appl. Math. Comput. 214 (2009) 142–154. [15] H¨aubler W. M., A Kantorovich-type convergence analysis for the Gauss-Newton method, Numer. Math. 48,(1986), 119–125. [16] Hern´andez M. A., A modification of the classical Kantorovich conditions for Newton’s method. J. Comp. Appl. Math. 137 (2001) 201–205. [17] Kantorovich L. V., Akilov G. P., Functional Analysis, Pergamon Press, Oxford, 1982. [18] Magr´en˜ an A. A., A new tool to study real dynamics: The convergence plane, Appl. Math. Comput. 248(2014), 215–225.

246

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[19] Magr´en˜ an A. A., Improved convergence analysis for Newton-like methods, Numerical Algorithms (2015), 1–23. [20] Nashed M. Z. and Chen X., Convergence of Newton-like methods for singular operator equations using outer inverses, Numer. Math. 66,(1993), 235–257. [21] Ortega J. M. and Rheinboldt W. C., Iterative solution of nonlinear equations in several variables, Academic Press, New York, 1970. [22] Proinov P. D., New general convergence theory for iterative processes and its applications to Newton-Kantorovich type theorems. J. Complexity 26 (2010) 3–42. [23] Yamamoto T., Uniqueness of the solution in a Kantorovich-type theorem of H¨aubler for the Gauss-Newton method, Japan, J. Appl. Math. 6, (1989), 77–81. [24] Zabrejko P. P., Nguen D. F., The majorant method in the theory of NewtonKantorovich approximations and the Pt´ak error estimates. Numer. Funct. Anal. Optim. 9 (1987) 671–684.

Chapter 30

Improved Convergence for the King-Werner Method 1.

Introduction

In [7], Argyros and Ren studied the problem of approximating a locally unique solution x of equation F(x) = 0, (30.1) ?

where F is Fr´echet-differentiable operator defined on a convex subset of a Banach space X with values in a Banach space Y . In this chapter, we extend the applicability of the method considered in [7] using the idea of restricted convergence domains. Precisely in [7], Argyros and Ren considered KingWerner-type method with repeated initial points, namely: Given x0 ∈ D, let y0

= x0

x1

= x0 − F 0 (

yn xn+1

x0 + y0 −1 ) F(x0 ), 2 x + y n−1 n−1 −1 ) F(xn ), = xn − F 0 ( 2 x + y n n −1 = xn − F 0 ( ) F(xn ) 2

(30.2)

for each n = 1, 2, ... and X = Y = R. Notice that the initial predictor step is just a Newton step based on the estimated derivative. The re-use of the derivative means that the evaluations of the yn values in (30.2) essentially come for free, which then enables the more appropriate value of the derivative√ to be used in the corrector step in (30.2). Method (30.2) was also shown to be of order 1 + 2. defined by: Given x0 , y0 ∈ D, let xn−1 + yn−1 −1 xn = xn−1 − F 0 ( ) F(xn−1 ) f or each n = 1, 2, ... 2 (30.3) x + y n−1 n−1 −1 yn = xn − F 0 ( ) F(xn ) f or each n = 1, 2, .... 2 The convergence analysis is based only on hypotheses up to the first Fr´echet derivative of operator F. Therefore, the applicability of the method is extended. Other advantages of

248

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

our approach are: in the local case a usable radius of convergence as well as error bounds on the distances kxn − x? k are obtained and the convergence domain in the semi-local case can Notice that the efficiency index of method (30.3) is given in [10] to be √ be larger. 1 1 1 ( 2 + 1) 2 ≈ 1.5538 which is larger than 2 2 ≈ 1.4142 of Newton’s method and 3 3 ≈ 1.4422 of the cubic convergence methods such as Halley’s method [6], Chebyshev’s method [2], and Potra-Ptak’s method [11] The analysis in [7] was based on the following Lemmas on majorizing sequences. Lemma 14. ([7, Lemma 2.1]) Let L0 > 0, L > 0, s ≥ 0, η > 0 be given parameters. Denote by α the only positive root of polynomial p defined by p(t) =

L0 3 L0 2 t + t + Lt − L. 2 2

(30.4)

Suppose that 0
0, L > 0, s ≥ 0, η > 0 such that F 0 (x0 )−1 ∈ L(Y, X)

(30.12)

Improved Convergence for the King-Wermer Method

251

kF 0 (x0 )−1 F(x0 )k ≤ η

(30.13)

kx0 − y0 k ≤ s

(30.14)

kF 0 (x0 )−1 (F 0 (x) − F 0 (x0 ))k ≤ L0 kx − x0 k for each x ∈ D kF 0 (x0 )−1 (F 0 (x) − F 0 (y))k ≤ Lkx − yk for each x, y ∈ D0 := D ∩U(x0 ,

(30.15) 1 ) L0

U(x0 ,t ? ) ⊆ D

(30.16) (30.17)

and hypotheses of Lemma 2.1 hold, where t ? is given in Lemma 2.1. Then, sequence {xn } generated by King-Werner-type (30.3) is well defined, remains in U(x0 ,t ? ) and converges to a unique solution x? ∈ U(x0 ,t ? ) of equation F(x) = 0. Moreover, the following estimates hold for each n = 0, 1, 2, ... kxn − x? k ≤ t ? − tn , (30.18)

where, {tn } is given in Lemma 2.1. Furthermore, if there exists R > t ? such that U(x0 , R) ⊆ D

(30.19)

L0 (t ? + R) ≤ 2,

(30.20)

and then, the point x? is the only solution of equation F(x) = 0 in U(x0 , R). Proof. Simply notice that the iterates remain in D0 which is a more precise location than D used in [7], since D0 ⊆ D. Then, the proof is analogous to the corresponding one in [7]. Remark 44. In [7], Argyros and Ren used instead of (30.16) the condition kF 0 (x0 )−1 (F 0 (x) − F 0 (y))k ≤ L1 kx − yk for each x ∈ D. 1 ) ⊆ D. Therefore the new convergence criteria are weaker L0 and the error bounds tighter than in [7] under the same computational cost, since the computation of L1 requires the computation of L or L0 as a special case.

But L ≤ L1 , since D ∩ U(x0 ,

Hence, we arrive at the following semi-local convergence result for King-Werner-type method (30.2). Theorem 55. Let F : D ⊆ X → Y be a Fr´echet-differentiable operator. Suppose that there exist x0 , y0 ∈ D, L0 > 0, L > 0, η > 0 such that F 0 (x0 )−1 ∈ L(Y, X)

kF 0 (x0 )−1 F(x0 )k ≤ η kF 0 (x0 )−1 (F 0 (x) − F 0 (x0 ))k ≤ L0 kx − x0 k for each x ∈ D kF 0 (x0 )−1 (F 0 (x) − F 0 (y))k ≤ Lkx − yk for each x ∈ D0

252

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al. U(x0 , r? ) ⊆ D

and hypotheses of Lemma 1.3 hold with s = 0, where r? is given in Lemma 1.3. Then, sequence {xn } generated by King-Werner-type method (30.2) with y0 = x0 is well defined, remains in U(x0 , r? ) and converges to a unique solution x? ∈ U(x0 , r? ) of equation F(x) = 0. Moreover, the following estimates hold for each n = 0, 1, 2, ... kxn − x? k ≤ r? − rn , where, {rn } is given in Lemma 1.3. Furthermore, if there exists R > r? such that U(x0 , R) ⊆ D and L0 (r? + R) ≤ 2,

then, the point x? is the only solution of equation F(x) = 0 in U(x0 , R). Remark 45. (a) In the literature (with the exception of our chapters) (30.16) is only used for the computation of the upper bounds of the inverses of the operators involved. (b) The limit point t ? (or r? ) can replaced by t ?? (or r?? ) (which are given in closed form) in the hypotheses of Theorem 2.1 (or Theorem 2.3). Next, we present the local convergence analysis of King-Werner-type method (30.3). Theorem 56. Let F : D ⊆ X → Y be a Fr´echet-differentiable operator. Suppose that there exist x? ∈ D, l0 > 0, l > 0 such that F(x? ) = 0, F 0 (x? )−1 ∈ L(Y, X)

(30.21)

kF(x? )−1 (F 0 (x) − F 0 (x? ))k ≤ l0 kx − x? k for each x ∈ D

(30.22)

kF(x? )−1 (F 0 (x) − F 0 (y))k ≤ lkx − yk for each x, y ∈ D1 := D ∩U(x? ,

1 ) l0

(30.23)

and U(x? , ρ) ⊆ D,

(30.24)

2 . 3l + 2l0

(30.25)

where ρ=

Then, sequence {xn } generated by King-Werner-type method (1.4) is well defined, remains in U(x? , ρ) and converges to x? , provided that x0 , y0 ∈ U(x? , ρ). Remark 46. It is worth noticing that the radius of convergence for Newton’s method due to Traub [14] or Rheinbolt [13] is given by ξ=

2 , 3l1

(30.26)

where l1 is the Lipschitz constant in (30.23) for x, y ∈ D. We have that l0 ≤ l1 and l ≤ l1 .

Improved Convergence for the King-Wermer Method

253

The corresponding radius due to us in [2,5,6] is given by ξ1 =

2 . 2l0 + l

(30.27)

Comparing (30.25), (30.26) and (30.27), we see that ρ < ξ < ξ1 .

(30.28)

Notice however that King-Werner-type method (30.3) is faster than Newton’s method. Finally notice that ρ ρ 1 ξ1 l0 → 1, → and → 3 as → 0. (30.29) ξ ξ1 3 ξ l

3.

Numerical Examples

We present some numerical examples in this section. ¯ 1), x∗ = (0, 0, 0)T . Define function F on D for Example 44. Let X = Y = R3 , D = U(0, w = (x, y, z)T by e−1 2 F(w) = (ex − 1, y + y, z)T . 2 Then, the Fr´echet-derivative is given by  x  e 0 0 F 0 (v) =  0 (e − 1)y + 1 0  . 0 0 1 1

We, have that a = 1, L2,1 = e, l0 = e − 1 < l = e l0 , l1 = L1 = e and Γ = e. Then, using Theorem 2.5 and Remark 2.6 we obtain the following radii ξ = 0.2452529607,

ξ1 = 0.3826919122 and

Then, the system in [15,16], since A =

ρ = 0.2271364235.

e2 e2 and B = becomes 2 24

e2 2 e2 v + w=1 24 2 e2 vw = w. 2 Hence, we have ρ0 = .1350812188, ρ1 = .2691499885, b0 = .270670566 and b = .1350812188. Notice that under our approach and under weaker hypotheses e2 v2 +

b < ρ < ξ < ξ1 . Next we present an example when X = Y = C[0, 1].

254

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Example 45. Let X = Y = C[0, 1], the space of continuous functions defined on [0, 1] equipped with the max norm and D = U(0, 1). Define operator F on D by F(x)(t) = x(t) −

Z 1

tθx3 (θ)dθ.

0

Then, we have F 0 (x)(w)(t) = w(t) − 3

Z 1

tθx2 (θ)w(θ)dθ

f or each w ∈ D.

0

3 Then, we have for x? (t) = 0 (t ∈ [0, 1]) that l = L1 = 3 and l0 = . Then, using (30.25) we 2 obtain that 1 ρ= . 6 Example 46. Let also X = Y = C[0, 1] equipped with the max norm and D = U(0, r) for some r > 1. Define F on D by F(x)(s) = x(s) − y(s) − µ

Z 1

G(s,t)x3(t)dt,

0

x ∈ C[0, 1], s ∈ [0, 1].

y ∈ C[0, 1] is given, µ is a real parameter, and the Kernel G is the Green’s function defined by  (1 − s)t if t ≤ s G(s,t) = s(1 − t) if s ≤ t. Then, the Fr´echet derivative of F is defined by (F 0 (x)(w))(s) = w(s) − 3µ

Z 1

G(s,t)x2(t)w(t)dt,

0

w ∈ C[0, 1], s ∈ [0, 1].

8 Let us choose x0 (s) = y(s) = 1 and |µ| < . Then, we have that 3 3 kI − F 0 (x0 )k ≤ µ, F 0 (x0 )−1 ∈ L(Y, X), 8 8 |µ| 3(1 + r)|µ| kF 0 (x0 )−1 k ≤ , η= , L0 = , 8 − 3|µ| 8 − 3|µ| 8 − 3|µ| and

6r|µ| . 8 − 3|µ| 1 Let us simple choose y0 (s) = 1, r = 3 and µ = . Then, we have that 2 L=

s = 0,

η = .076923077,

L0 = .923076923,

L = 1.384615385

and L(s + η) 2(1 −

L(s+η) L0 2 (2 + 2−L0 s )η)

= 0.057441746,

α = 0.711345739,

That is, condition (30.5) is satisfied and Theorem 2.1 applies.

1 − L0 η = 0.928994083.

Improved Convergence for the King-Wermer Method

4.

255

Conclusion

We present a local as well as a semi-local convergence analysis of some efficient King√ Werner-type methods of order 1 + 2 using our new idea of restricted convergence domains. That is we find a more precise domain containing the iterates than in earlier studies leading to smaller Lipschitz constants, larger convergence radii, and tighter error bounds on the error distance involved. Numerical examples are also presented to illustrate the theoretical results.

References [1] Amat S., Busquier S., Negra M., Adaptive approximation of nonlinear operators, Numer. Funct. Anal. Optim. 25 (2004) 397–405. [2] Argyros I. K., Computational theory of iterative methods, Series: Studies in Computational Mathematics 15, Editors, Chui C. K. and Wuytack L., Elservier Publ. Co. New York, USA, 2007. [3] Argyros I. K., A semilocal convergence analysis for directional Newton methods, Math. Comput. 80 (2011) 327–343. [4] Argyros I. K., Hilout S., Estimating upper bounds on the limit points of majorizing sequences for Newton’s method, Numer. Algorithms 62 (2013) 115–132. [5] Argyros I. K., Hilout S., Computational methods in nonlinear analysis. Efficient algorithms, fixed point theory, and applications, World Scientific, 2013. [6] Argyros I. K., Ren H. M., Ball convergence theorems for Halley’s method in Banach spaces. J. Appl. Math. Comput. 38 (2012) 453–465. [7] Argyros I. K., √ Ren H. M., On the convergence of efficient King-Werner-type methods of order 1+ 2, Journal of Computational and Applied Mathematics, 285(C), (2015), 169–180. [8] Kantorovich L. V., Akilov G. P., Functional Analysis, Pergamon Press, Oxford, 1982. [9] King R. F., Tangent methods for nonlinear equations, Numer. Math. 18 (1972) 298– 304. [10] McDougall T. J., Wotherspoon S.√J., A simple modification of Newton’s method to achieve convergence of order 1 + 2, Appl. Math. Lett. 29 (2014) 20–25. [11] Potra F. A., Ptak V., Nondiscrete induction and iterative processes[J]. Research Notes in Mathematics, 103, Pitman, Boston, 5(1984) 112–119. [12] Ren H. M., Wu Q. B., Bi W. H., On the convergence of a new secant-like method for solving nonlinear equations, Appl. Math. Comput. 217 (2010) 583–589. [13] Rheinboldt W. C., An adaptive continuation process for solving systems of nonlinear equations, Polish Academy of Science, Banach Ctr. Publ. 3 (1977) 129–142.

256

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[14] Traub J. F., Iterative Methods for the Solution of Equations, Englewood Cliffs, Prentice Hull, 1984. √ [15] Werner W., Uber ein Verfahren der Ordnung 1 + 2 zur Nullstellenbestimmung, Numer. Math. 32 (1979) 333–342. √ [16] Werner W., Some supplementary results on the 1 + 2 order method for the solution of nonlinear equations, Numer. Math. 38 (1982) 383–392.

Chapter 31

Extending the Applicability of King-Werner-Type Methods 1.

Introduction

Many problems in Computational Sciences and other disciplines can be brought in a form of equation F(x) = 0, (31.1) where F is Fr´echet-differentiable operator defined on a convex subset of a Banach space

B1 with values in a Banach space B2 using mathematical modeling [2,5,14]. Therefore, the

problem of approximating a locally unique solution x? of (31.1) is an important problem in numerical functional analysis. Many methods are studied for the approximating a locally unique solution x? of (31.1). Werner in [15,16] studied a method originally proposed by King [9] defined by: Given x0 , y0 ∈ Ω, let xn + yn −1 ) F(xn ), xn+1 = xn − F 0 ( 2 (31.2) xn + yn −1 yn+1 = xn+1 − F 0 ( ) F(xn+1 ) 2

for each n = 0, 1, 2, ... and B1 = Ri , B2 = R where i is a whole number. The local convergence analysis is based on assumptions of the form: (H0 ) There exists x? ∈ Ω such that F(x? ) = 0; (H1 ) F ∈ C2,a (Ω), a ∈ (0, 1]; (H2 ) F 0 (x)−1 ∈ L(B2 , B1 ) and kF 0 (x)−1 k ≤ Γ; (H3 ) The Lipschitz condition kF 0 (x) − F 0 (y)k ≤ L1 kx − yk holds for each x, y ∈ Ω; (H4 ) The Lipschitz condition kF 00 (x) − F 00 (y)k ≤ L2,a kx − yka

258

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

holds for each x, y ∈ Ω; (H5 ) U(x? , b) ⊆ Ω for

b = min{b0 , ρ0 , ρ1 },

where b0 =

2 L1 Γ

and ρ0 , ρ1 solve the system [14,p.337] Bv1+a + Aw = 1 2Av2 + Avw = w, ΓL2,a 1 . A = ΓL1 , B = 2 4(a + 1)(a + 2) √ The convergence order was shown to be 1 + 2. However, there are cases where , e.g. (H4 ) is violated. For an example, define function f : [−1, 1] → (−∞, ∞) by f (x) = x2 lnx2 + c1 x2 + c2 x + c3 ,

f (0) = c3 ,

where c1 , c2 , c3 are given real numbers. Then, we have that lim x2 lnx2 = 0, lim x ln x2 = 0, x→0

x→0

f 0 (x) = 2x lnx2 + 2(c1 + 1)x + c2 and f 00 (x) = 2(lnx2 + 3 + c1 ). Then, function f does not satisfy (H4 ) for α = 1. McDougall et al. in [11] studied the King-Werner-type method with repeated initial points, namely: Given x0 ∈ Ω, let y0 = x0 x0 + y0 −1 ) F(x0 ), x1 = x0 − F 0 ( 2 x + y (31.3) n−1 n−1 −1 yn = xn − F 0 ( ) F(xn ), 2 xn + yn −1 xn+1 = xn − F 0 ( ) F(xn ) 2 for each n = 1, 2, ... and B1 = B2 = R. Notice that the initial predictor step is just a Newton step based on the estimated derivative. The re-use of the derivative means that the evaluations of the yn values in (31.3) essentially come for free, which then enables the more appropriate value of the derivative √ to be used in the corrector step in (31.3) . Method (31.3) was also shown to be of order 1 + 2. Argyros and Ren [7] studied the local as well as the semi-local convergence analysis of a more general method than (31.3) in a Banach space setting. Precisely, Argyros and Ren considered the King-Werner-type method defined by: Given x0 , y0 ∈ Ω, let xn−1 + yn−1 −1 xn = xn−1 − F 0 ( ) F(xn−1 ) f or each n = 1, 2, ... 2 (31.4) xn−1 + yn−1 −1 yn = xn − F 0 ( ) F(xn ) f or each n = 1, 2, .... 2 The convergence analysis is based only on hypotheses up to the first Fr´echet derivative of operator F. Therefore, the applicability of the method is extended. Other advantages of our

Extending the Applicability of King-Werner-Type Methods

259

approach are: in the local case a usable radius of convergence as well as error bounds on the distances kxn − x? k are obtained and the convergence domain in the semi-local case can be larger. Notice efficiency index of method (31.4) (i.e of method (31.3) ) is given in √ that the 1 1 [10] to be ( 2 + 1) 2 ≈ 1.5538 which is larger than 2 2 ≈ 1.4142 of Newton’s method and 1 3 3 ≈ 1.4422 of the cubic convergence methods such as Halley’s method [6], Chebyshev’s method [2], and Potra-Ptak’s method [11]. In this chapter, using our new idea of restricted convergence domains we improve the results in [7]. The chapter is organized as follows: Section 2 contains results on majorizing sequences for King-Werner-type methods (31.3) and (31.4) . The semi-local and local convergence analysis of King-Werner-type methods (31.3) and (31.4) is presented in Section 3. Finally, the numerical examples are presented in the concluding Section 4.

2.

Majorizing Sequences for King-Werner-Type Methods (31.3) and (31.4)

The following auxiliary results on majorizing sequence for King-Werner-type methods (31.3) and (31.4) can be found in [7]. Lemma 17. Let L0 > 0, L > 0, s ≥ 0, η > 0 be given parameters. Denote by α the only positive root of polynomial p defined by p(t) =

L0 3 L0 2 t + t + Lt − L. 2 2

(31.5)

Suppose that 0
0, L > 0, s ≥ 0, η > 0 such that F 0 (x0 )−1 ∈ L(B2 , B1 )

(31.13)

262

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al. kF 0 (x0 )−1 F(x0 )k ≤ η

(31.14)

kx0 − y0 k ≤ s

(31.15)

kF 0 (x0 )−1 (F 0 (x) − F 0 (x0 ))k ≤ L0 kx − x0 k for each x ∈ Ω kF 0 (x0 )−1 (F 0 (x) − F 0 (y))k ≤ Lkx − yk for each x, y ∈ Ω0 := Ω ∩U(x0 ,

(31.16) 1 ) L0

U(x0 ,t ? ) ⊆ Ω

(31.17) (31.18)

and hypotheses of Lemma 2.1 hold, where t ? is given in Lemma 2.1. Then, sequence {xn } generated by King-Werner-type (31.4) is well defined, remains in U(x0 ,t ? ) and converges to a unique solution x? ∈ U(x0 ,t ? ) of equation F(x) = 0. Moreover, the following estimates hold for each n = 0, 1, 2, ... kxn − x? k ≤ t ? − tn , (31.19)

where, {tn } is given in Lemma 2.1. Furthermore, if there exists R > t ? such that U(x0 , R) ⊆ Ω

(31.20)

L0 (t ? + R) ≤ 2,

(31.21)

and then, the point x? is the only solution of equation F(x) = 0 in U(x0 , R). Proof. Simply notice that the iterates remain in Ω0 , which is a more precise location than Ω used in [7], since Ω0 ⊆ Ω. The rest of the proof is analogous to the one in [7]. Remark 48. It follows from the proof and x0 = y0 that (31.16) is not needed in the computation of the upper bound on kx2 − x1 k and ky1 − x1 k. Then, under the hypotheses of Lemma 2.3 we obtain (using (31.16) instead of (31.17)) that 1 x1 + y1 −1 0 x0 + y0 ) F (x0 )kkF 0 (x0 )−1 [F 0 (x0 + θ(x1 − x0 )) − F 0 ( )]kkx1 − x0 kdθ 2 2 Z 10 kx1 − x0 k ≤ kF 0 (x0 )−1 [F 0 (x0 + θ(x1 − x0 )) − F 0 (x0 )]dθk L0 0 1 − 2 (kx1 − x0 k + ky1 − x0 k) L0 (q0 − r0 + r1 − r0 )(r1 − r0 ) ≤ = r2 − r1 2(1 − L20 (q1 + r1 ))

kx2 − x1 k ≤ kF 0 (

Z

and similarly ky1 − x1 k ≤ q1 − r1 , which justify the definition of q1 , r1 , r2 and consequently the definition of sequence {rn }. Hence, we arrive at the following semi-local convergence result for King-Werner-type method (31.3). Theorem 58. Let F : Ω ⊆ B1 → B2 be a Fr´echet-differentiable operator. Suppose that there exist x0 , y0 ∈ Ω, L0 > 0, L > 0, η > 0 such that F 0 (x0 )−1 ∈ L(B2 , B1 )

kF 0 (x0 )−1 F(x0 )k ≤ η

Extending the Applicability of King-Werner-Type Methods

263

kF 0 (x0 )−1 (F 0 (x) − F 0 (x0 ))k ≤ L0 kx − x0 k for each x ∈ Ω

kF 0 (x0 )−1 (F 0 (x) − F 0 (y))k ≤ Lkx − yk for each x, y ∈ Ω0 U(x0 , r? ) ⊆ Ω

and hypotheses of Lemma 2.3 hold with s = 0, where r? is given in Lemma 2.3. Then, sequence {xn } generated by King-Werner-type method (31.3) with y0 = x0 is well defined, remains in U(x0 , r? ) and converges to a unique solution x? ∈ U(x0 , r? ) of equation F(x) = 0. Moreover, the following estimates hold for each n = 0, 1, 2, ... kxn − x? k ≤ r? − rn , where, {rn } is given in Lemma 2.3. Furthermore, if there exists R > r? such that U(x0 , R) ⊆ Ω and L0 (r? + R) ≤ 2,

then, the point x? is the only solution of equation F(x) = 0 in U(x0 , R). Remark 49. (a) In [7], Argyros and Ren used instead of (31.17) the condition kF 0 (x0 )−1 (F 0 (x) − F 0 (y))k ≤ L∗ kx − yk for each x, y ∈ Ω.

(31.22)

But using (31.17) and (31.22) we get that L ≤ L∗ holds, since Ω0 ⊆ Ω. In case L < L∗ , then the new convergence analysis is better than the old one. Notice also that we have L0 ≤ L∗ . The advantages are obtained under the same computational cost as before since in practice the computation of constant L∗ requires the computation of L0 and L as special cases. In the literature (with the exception of our chapters) (31.17) is only used for the computation of the upper bounds of the inverses of the operators involved. (b) The limit point t ? (or r? ) can replaced by t ?? (or r?? ) (which are given in closed form) in the hypotheses of Theorem 3.1 (or Theorem 3.3). Next, we present the local convergence analysis of the King-Werner-type method (1.4). Theorem 59. Let F : Ω ⊆ B1 → B2 be a Fr´echet-differentiable operator. Suppose that there exist x? ∈ Ω, l0 > 0, l > 0 such that F(x? ) = 0, F 0 (x? )−1 ∈ L(B2 , B1 )

(31.23)

kF(x? )−1 (F 0 (x) − F 0 (x? ))k ≤ l0 kx − x? k for each x ∈ Ω

(31.24)

kF(x? )−1 (F 0 (x) − F 0 (y))k ≤ lkx − yk for each x, y ∈ Ω1 := Ω ∩U(x∗

1 ) `0

(31.25)

264

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

and U(x? , ρ) ⊆ Ω,

(31.26)

2 . 3l + 2l0

(31.27)

where ρ=

Then, sequence {xn } generated by King-Werner-type method (1.4) is well defined, remains in U(x? , ρ) and converges to x? , provided that x0 , y0 ∈ U(x? , ρ). Proof. As in the proof of Theorem 3.1, the iterate remains in Ω1 and Ω1 ⊆ Ω. Remark 50. It is worth noticing that the radius of convergence for Newton’s method due to Traub [13] or Rheinbolt [12] is given by ξ=

2 3l1

(31.28)

where l1 > 0 and kF 0 (x∗ )−1 (F 0 (x) − F 0 (y))k ≤ l1 kx − yk for each x, y ∈ Ω.

(31.29)

In view of (31.25) and (31.29), we have l ≤ l1 .

(31.30)

l0 ≤ l1 .

(31.31)

We also have that The corresponding radii for Newton’s method in [2,5,6] are given by ξ1 =

2 2 and ξ2 = . 2l0 + l1 2l0 + l

(31.32)

2 . 3l1 + 2l0

(31.33)

The radious in [8] given by ρ1 = Comparing, we see that ρ < ξ ≤ ξ2 , ξ ≤ ξ1 and ρ1 ≤ ρ.

(31.34)

Notice that the King-Werner-type method (31.4) is faster than Newton’s method.

4.

Numerical Examples

We present some numerical examples in this section. ¯ 1) and x? = 0. Define mapping F on Ω by Example 47. Let B1 = B2 = R, Ω = U(0, F(x) = ex − 1. Then, the Fr´echet-derivatives of F are given by F 0 (x) = ex ,

F 00 (x) = ex .

(31.35)

Extending the Applicability of King-Werner-Type Methods

265 1

Notice that F(x? ) = 0, F 0 (x? ) = F 0 (x? )−1 = 1, a = 1, L2,1 = e, l0 = e − 1 < l = e l0 , l1 = L1 = L∗ = e and Γ = 1. Then, using Theorem 3.5 and Remark 3.6 we obtain the following radii ξ = 0.32494723137268988 ξ1 = 0.382691912232385744, ρ = 0.2271364235452698332 and ρ1 = 0.17254157587297255. Then, the system in (H5 ), since A =

e2 e2 and B = becomes 2 24 e2 2 e2 v + w=1 24 2

e2 vw = w. 2 Hence, we have b0 = 0.73575888234288455 and b = 0.17254157587297255.. Notice that under our approach and under weaker hypotheses e2 v2 +

b < ρ < ξ < ξ1 . Next we present an example when B1 = B2 = C[0, 1]. Example 48. Let B1 = B2 = C[0, 1], the space of continuous functions defined on [0, 1] equipped with the max norm and Ω = U(0, 1). Define operator F on Ω by F(x)(t) = x(t) −

Z 1

tθx3 (θ)dθ.

0

Then, we have F 0 (x)(w)(t) = w(t) − 3

Z 1

tθx2 (θ)w(θ)dθ

0

f or each w ∈ Ω.

3 Then, we have for x? (t) = 0 (t ∈ [0, 1]) that l = L1 = L∗ = 3 and l0 = . Then, using (3.20) 2 we obtain that 1 ρ0 = ρ1 = ρ = . 6 Example 49. Let also B1 = B2 = C[0, 1] equipped with the max norm and Ω = U(0, r) for some r > 1. Define F on Ω by F(x)(s) = x(s) − y(s) − µ

Z 1

G(s,t)x3(t)dt,

0

x ∈ C[0, 1], s ∈ [0, 1].

y ∈ C[0, 1] is given, µ is a real parameter, and the Kernel G is the Green’s function defined by  (1 − s)t if t ≤ s G(s,t) = s(1 − t) if s ≤ t. Then, the Fr´echet derivative of F is defined by (F 0 (x)(w))(s) = w(s) − 3µ

Z 1 0

G(s,t)x2(t)w(t)dt,

w ∈ C[0, 1], s ∈ [0, 1].

266

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

8 Let us choose x0 (s) = y(s) = 1 and |µ| < . Then, we have that 3 3 kI − F 0 (x0 )k ≤ µ, F 0 (x0 )−1 ∈ L(B2 , B1 ), 8 8 |µ| 3(1 + r)|µ| , η= , L0 = , kF 0 (x0 )−1 k ≤ 8 − 3|µ| 8 − 3|µ| 8 − 3|µ| and L=

6r|µ| . 8 − 3|µ|

1 Let us simple choose y0 (s) = 1, r = 3 and µ = . Then, we have that 2 s = 0,

η = .076923077,

L0 = .923076923,

L = L∗ = 1.384615385

and L(s + η) 2(1 −

L(s+η) L0 2 (2 + 2−L0 s )η)

= 0.057441746,

α = 0.711345739,

1 − L0 η = 0.928994083.

That is, condition (2.2) is satisfied, and Theorem 3.1 applies.

5.

Conclusion

We present a local as well as a semi-local convergence analysis of some efficient King√ Werner-type methods of order 1 + 2 in order to approximate a locally unique solution of a nonlinear equation in a Banach space setting. We use our new idea of restricted convergence domains, where the iterates lie leading to smaller Lipschitz constants yielding in turn a more precise local as well as semi-local convergence analysis than in earlier studies. Our results compare favorably to earlier results using the same or stronger hypotheses. Numerical examples are also presented to illustrate the theoretical results.

References [1] Amat S., Busquier S., Negra M., Adaptive approximation of nonlinear operators, Numer. Funct. Anal. Optim. 25 (2004) 397–405. [2] Argyros I. K., Computational theory of iterative methods, Series: Studies in Computational Mathematics 15, Editors, Chui C. K. and Wuytack L., Elservier Publ. Co. New York, USA, 2007. [3] Argyros I. K., A semilocal convergence analysis for directional Newton methods, Math. Comput. 80 (2011) 327–343. [4] Argyros I. K., Hilout S., Estimating upper bounds on the limit points of majorizing sequences for Newton’s method, Numer. Algorithms 62 (2013) 115–132.

Extending the Applicability of King-Werner-Type Methods

267

[5] Argyros I. K., Hilout S., Computational methods in nonlinear analysis. Efficient algorithms, fixed point theory and applications, World Scientific, 2013. [6] Argyros I. K., Ren H. M., Ball convergence theorems for Halley’s method in Banach spaces. J. Appl. Math. Comput. 38 (2012) 453–465. [7] Argyros I.√K., Ren H., On the convergence of efficient King-Werner-type methods of order 1 + 2, J. Comput. Appl. Math., 285(2015), 169-180. [8] Kantorovich L. V., Akilov G. P., Functional Analysis, Pergamon Press, Oxford, 1982. [9] King R. F., Tangent methods for nonlinear equations, Numer. Math. 18 (1972) 298– 304. [10] McDougall T. J., Wotherspoon S.√J., A simple modification of Newton’s method to achieve convergence of order 1 + 2, Appl. Math. Lett. 29 (2014) 20–25. [11] Potra F. A., Ptak V., Nondiscrete induction and iterative processes[J]. Research Notes in Mathematics, 103, Pitman, Boston, 5(1984) 112–119. [12] Ren H. M., Wu Q. B., Bi W. H., On convergence of a new secant like method for solving nonlinear equations, Appl. Math. Comput. 217 (2010) 583–589. [13] Rheinboldt W. C., An adaptive continuation process for solving systems of nonlinear equations, Polish Academy of Science, Banach Ctr. Publ. 3 (1977) 129–142. [14] Traub J. F., Iterative Methods for the Solution of Equations, Englewood Cliffs, Prentice Hull, 1984. √ [15] Werner W., Uber ein Verfahren der Ordnung 1 + 2 zur Nullstellenbestimmung, Numer. Math. 32 (1979) 333–342. √ [16] Werner W., Some supplementary results on the 1 + 2 order method for the solution of nonlinear equations, Numer. Math. 38 (1982) 383–392.

Chapter 32

Parametric Efficient Family of Iterative Methods 1.

Introduction

Let B1 , B2 denote Banach spaces, T ⊆ B1 be nonempty convex and open set. Set L B(B1 , B2 ) = {V : B1 → B2 is a bounded linear operator}. Many problems in Mechanics, Biomechanics, Physics, Mathematical Chemistry, Economics, Radiative transfer, Biology, Ecology, Medicine, Engineering and other areas [4]-[28] are reduced to a nonlinear equation F(x) = 0, (32.1) where F : T → B2 is continuously differentiable in the Fr´echet sense. Therefore solving equation (32.1) is an extremely important and difficult problem in general. A solution ξ is very difficult to find, especially in closed or analytical form. This function forces practitioners and researchers to develop higher order and efficient methods converging to ξ by starting from a point x0 ∈ T sufficiently close to it [1]-[28]. Motivated by Traub-like and Newton’s the following methods were studied in [13]: yn = xn − F 0 (xn )−1 F(xn )

zn = yn − [wn , yn ; F]−1 F(yn )

(32.2)

xn+1 = zn − [wn , yn ; F]−1 F(zn), where [·, ·; F] : T × T → L B(B1 , B2 ) is a divided difference of order one satisfying [x, y; F] (x − y) = F(x) − F(y) for all x, y ∈ T with x 6= y and [x, y; F] = F 0 (x) for all x ∈ T provided F is differentiable, and wn = w(xn ), w : T → T is given iteration function. To be more precise the special choice of w given by wn = yn + α F(yn ) + β F(yn )2 ,

(32.3)

was used in [13], for parameters α and β not zero at the same time, f i (x) the co-ordinate functions for F and F(x)2 = ( f 12 (x), f 22 (x), · · · , f n2 (x))T . (32.4)

270

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Moreover, they used method (32.2) when B1 = B2 = Rk , and compare it favorably to other sixth order methods. However, in this chapter, we do not necessarily assume that w is given by (32.3) hold. The sixth convergence order was verified for any value of α and β using the Taylor series but hypotheses on the derivatives up to the seventh order [13]. That simply means that convergence is not guaranteed for mappings that are not up to the seventh-order differentiable. For example 1 3 Example 50. Define function F on T = [− , ) by 2 2  3 2 x lnx + x5 − x4 , F(x) = 0,

x 6= 0 x=0

(32.5)

Choose x∗ = 1. We also have that F 0 (x) = 3x2 lnx2 + 5x4 − 4x3 + 2x2 , F 00 (x) = 6xlnx2 + 20x3 + 12x2 + 10x and F 000 (x) = 6lnx2 + 60x2 − 24x + 22.

Notice that F 000 (x) is unbounded on T .

Another problem is that no error bounds on kxn − ξk or results in the uniqueness of ξ or how close to ξ we should start, and the selection of x0 is really “ a shot in the dark”. To address all these concerns about this very efficient and useful method, we only use conditions on the first derivative. Moreover, converge radius, error estimates, and uniqueness results are computed based on these conditions. Furthermore, we rely on the computational order (COC) or approximated computational convergence order (ACOC) formulae to determine the order [11, 28]. These formulae only use the first derivative too. That is how we extend the applicability of the method (32.2). Our technique can be used to study other methods too in an analogous way. It is worth noticing that if α = β = 0, method (32.2) reduces to Newton-Secant-like methods, and if β = 0 to Newton-Steffensen-like methods. The layout for the rest of the chapter includes the convergence analysis of the method (32.2) in section 2 and the numerical examples in section 3.

2.

Convergence Analysis of Method (32.2)

Let D = [0, ∞). Let also ϕ0 : D → R be a continuous, and increasing function satisfying ϕ0 (0) = 0. Assume ϕ0 (t) = 1 (32.6) has at least one positive zero. Denote by r0 the minimal such zero. Set D0 = [0, r0 ). Assume there exists function ϕ : D0 → R continuous and increasing satisfying ϕ(0) = 0. Define R1 ϕ((1 − τ)t) dτ functions g1 and g1 on the interval D0 by g1 (t) = 0 , and g1 (t) = g1 (t) − 1 − ϕ0 (t)

Parametric Efficient Family of Iterative Methods

271

1. By these definitions g1 (0) = −1 and g1 (t) → ∞ for t → r0− . The application of the intermediate value theorem on function g1 assures the existence of at least one zero in (0, r0). Denote by ρ1 the minimal such zero. Assume ϕ0 (g1 (t)t) = 1, ϕ2 (ϕ5 (t), g1 (t)t) = 1 (32.7) have at least one positive zero, where ϕ5 is as ϕ. Denote by r1 the minimal such zero and let D1 = [0, r0 ], where r0 = min{r0 , r1 }. Assume there exist functions ϕ1 : D1 → R, ϕ2 : D1 × D1 → R, and ϕ3 : D1 → R are continuous and increasing. Define functions g2 and g2 on the interval D1 by R1 R ϕ1 (ϕ5 (t) + g1 (t)t) 01 ϕ3 (τ g1 (t)t) dτ 0 ϕ((1 − τ) g1 (t)t) d τ g2 (t) = { + } g1 (t) and 1 − ϕ0 (g1 (t)t) (1 − ϕ0 (g1 (t)t))(1 − ϕ2 (ϕ5 (t), g1 (t)t)) g2 (t) = g2 (t) − 1. By these definitions g2 (0) = −1, and g2 (t) → ∞ for t → r− 0 . Denote by ρ2 the minimal zero of function g2 on (0, r0 ). Assume ϕ0 (g2 (t)t) = 1 (32.8) has at least one positive zero. Denote by r2 the minimal such zero. Set D2 = [0, r0 ), where r0 = min{r0 , r2 }. Define functions g3 and g3 on the interval D2 by (R 1 0 ϕ((1 − τ) g2 (t)t) d τ g3 (t) = 1 − ϕ0 (g2 (t)t) ) R ϕ4 (ϕ5 (t) + g2 (t)t, g1 (t)t + g2 (t)t) 01 ϕ3 (τ g2 (t)t) dτ + g2 (t) (1 − ϕ0 (g2 (t)t))(1 − ϕ2 (ϕ5 (t), g1 (t)t)) −

and g3 (t) = g3 (t) − 1. By these definitions g3 (0) = −1, and g3 (t) → ∞ for t → r0 . Denote by ρ3 the minimal zero of function g3 on (0, r0 ). Define a radius of convergence ρ by ρ = min{ρ j },

j = 1, 2, 3, · · · .

(32.9)

By the preceding we have for all t ∈ [0, ρ) 0 ≤ ϕ0 (t) < 1,

0 ≤ ϕ2 (ϕ5 (t), g1 (t)t) < 1,

0 ≤ ϕ0 (g1 (t)t) < 1,

0 ≤ ϕ0 (g2 (t)t) < 1

(32.10) (32.11) (32.12) (32.13)

and o ≤ g j (t) < 1.

(32.14)

Define S(x, µ) = {y ∈ T : ky − xk < µ} and let S(x, µ) be the closure of S(x, µ). Next, we list the conditions (A) to be used in the convergence analysis: (a1 ) F : T → B2 is continuous, differentiable, [·, ·; F] : T × T → L B(B1 , B2 ) is a divided difference of order one, F(ξ) = 0, and F 0 (ξ)−1 ∈ L B(B1 , B2 ) for some ξ ∈ T. (a2 ) ϕ0 : D → R is increasing, continuous, ϕ0 (0) = 0 and for all x ∈ T kF 0 (ξ)−1 (F 0 (x) − F 0 (ξ))k ≤ ϕ0 (kx − ξk).

272

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Define T0 = T ∩ S(ξ, r0 ), where r0 is given in (32.6). (a3 ) ϕ : D0 → R is continuous, increasing, ϕ(0) = 0 and for x, y ∈ T0 kF 0 (ξ)−1 (F 0 (y) − F 0 (x))k ≤ ϕ (ky − xk). (a4 ) ϕ1 : D1 → R, ϕ3 : D1 → R, ϕ2 : D1 × D1 → R, ϕ5 : D1 → R, are increasing, continuous, w : T → T is continuous and for all x, y ∈ T1 := T ∩ S(ξ, r0 ) kF 0 (ξ)−1 ([w, y; F] − F 0 (y))k ≤ ϕ1 (kw − yk),

kF 0 (ξ)−1 ([w, y; F] − F 0 (ξ))k ≤ ϕ2 (kw − ξk, ky − ξk), kF 0 (ξ)−1 F 0 (x)k ≤ ϕ3 (kx − ξk),

and kw − ξk ≤ ϕ5 (kx − ξk), where r0 is given in (32.7). (a5 ) ϕ4 : D2 × D2 → R is increasing, continuous and for all y, z ∈ T2 := T ∩ S(x0 , r0 ) kF 0 (ξ)−1 ([w, y; F] − F 0 (z))k ≤ ϕ4 (kw − zk, ky − zk), (a6 ) S (ξ, ρ) ⊂ T, r0 , r0 , r0 given by (32.6), (32.7), (32.8) exist and ρ is defined in (32.9). (a7 ) There exist ρ ≥ ρ such that Z 1 0

ϕ0 (τ ρ) dτ < 1.

Define T3 = T ∩ S (ξ, ρ). Theorem 60. Assume conditions (A) hold. Then, for x0 ∈ S(ξ, ρ) − {ξ}, {xn } produced by (32.2) is such that {xn } ⊆ S (ξ, ρ) , lim xn = ξ so that n→∞

kyn − ξk ≤ g1 (kxn − ξk) kxn − ξk ≤ kxn − ξk < ρ kzn − ξk ≤ g2 (kxn − ξk) kxn − ξk ≤ kxn − ξk,

(32.15) (32.16)

kxn+1 − ξk ≤ g3 (kxn − ξk) kxn − ξk ≤ kxn − ξk,

(32.17)

and where functions g j are given previously. The only solution of equation F(x) = 0 in T3 is ξ, where T3 is given in (a7 ). Proof. If v ∈ S (ξ, ρ), then (32.6), (32.9), (32.11) and (a2 ) give kF 0 (ξ)−1 (F 0 (ξ) − F 0 (v))k ≤ ϕ0 (kξ − vk) ≤ ϕ0 (r0 ) ≤ ϕ0 (ρ) < 1.

(32.18)

This estimation together with the lemma by Banach for operator that are invertible [8, 25] assure F 0 (v)−1 ∈ L B(B1 , B2 ) with kF 0 (v)−1 F 0 (ξ)k ≤

1 . 1 − ϕ0 (kv − ξk)

(32.19)

Parametric Efficient Family of Iterative Methods

273

It also follows that iterate y0 exists by (32.19), and the first sub-step of method (32.2). Using the first sub-step of method (32.2) for n = 0, (32.9), (32.11), (32.14) (for j = 1), (32.19) (for v = x0 ), and (a3 ) ky0 − ξk = kx0 − ξ − F 0 (x0 )−1 F(x0 )k ≤ kF 0 (x0 )−1 F 0 (ξ)k

k

Z 1 0

F 0 (ξ)−1 (F 0 (ξ + τ (x0 − ξ)) − F 0 (x0 )) dτ (x0 − ξ)k

R1

ϕ ((1 − τ) kx0 − ξk) dτ kx0 − ξk = g1 (kx0 − ξk) kx0 − ξk 1 − ϕ0 (kx0 − ξk) ≤ kx0 − ξk < ρ, ≤

0

(32.20)

showing y0 ∈ S(ξ, ρ) and (32.15) true for n = 0. By (a1 ) and (a4 ) we have kF 0 (ξ)−1 F(v)k = kF 0 (ξ)−1 (F(v) − F(ξ))k Z 1

=k ≤

Z

0

0 1

F 0 (ξ)−1 F 0 (ξ + τ (v − ξ)) dτ (v − ξ)k

ϕ3 (τ kv − ξk) dτ kv − ξk.

(32.21)

We get the estimates by (32.9), (32.11), and (a3 ) kF 0 (ξ)−1 ([w0 , y0 ; F] − F 0 (ξ))k ≤ ϕ2 (kw0 − ξk, ky0 − ξk) ≤ ϕ0 (ρ) < 1

(32.22)

leading to k[w0 , y0 ; F]−1 F 0 (ξ)k ≤

1 , 1 − ϕ2 (kw0 − ξk, ky0 − ξk)

(32.23)

so z0 and x1 exist. Then, by (32.19) (for v = x0 , y0 ), (32.9), (32.14) (for j = 2), (32.20), (32.21), (a3 ) and the second sub-step of method (32.2) for n = 0. kz0 − ξk = k(y0 − ξ − F 0 (y0 )−1 F(y0 ))

+ F 0 (y0 )−1 ([w0 , y0 ; F] − F 0 (y0 )) [w0 , y0 ; F]−1 F(y0 )k (R 1 0 ϕ((1 − τ) ky0 − ξk d τ ≤ 1 − ϕ0 (ky0 − ξk) ) R ϕ1 (k(w0 − ξ) + (ξ − y0 )k) 01 ϕ3 (τ ky0 − ξk) dτ ky0 − ξk + (1 − ϕ0 (ky0 − ξk))(1 − ϕ2 (kw0 − ξk, ky0 − ξk)) ≤ g2 (kx0 − ξk) kx0 − ξk ≤ kx0 − ξk < ρ,

(32.24)

showing z0 ∈ S(ξ, ρ) and (32.16) true for n = 0. In view of (32.9), (32.14), (32.19) (for

274

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

v = z0 ), (32.20) - (32.24), and the last sub-step of method (32.2), we obtain the estimations kx1 − ξk = k(z0 − ξ − F 0 (z0 )−1 F(z0 ))

+ F 0 (z0 )−1 ([w0 , y0 ; F] − F 0 (z0 )) [w0 , y0 ; F]−1 F(z0 )k (R 1 0 ϕ((1 − τ) kz0 − ξk d τ ≤ 1 − ϕ0 (kz0 − ξk) ϕ4 (k(w0 − ξ) + (ξ − z0 )k, k(y0 − ξ) + (ξ − z0 )k) 01 ϕ3 (τ kz0 − ξk) dτ + (1 − ϕ0 (kz0 − ξk))(1 − ϕ2 (kw0 − ξk, kz0 − ξk)) R

× kz0 − ξk

≤ g3 (kx0 − ξk) kx0 − ξk ≤ kx0 − ξk < ρ,

)

(32.25)

which completes the induction for estimations (32.15) - (32.17) if n = 0 and x1 ∈ S(ξ, ρ). By repeating the previous estimations for xm , ym , zm , xm+1 replacing x0 , y0 , z0 , x1 respectively the induction for items (32.15) - (32.17) is completed. Moreover, by the estimation kxm+1 − ξk ≤ γ kxm − ξk ≤ ρ,

γ = g3 (kx0 − ξk) ∈ [0, 1),

we arrive at lim xm = ξ, and xm+1 ∈ S(ξ, ρ). Let G = m→∞

Z 1

and F(ξ0 ) = 0. By (a2 ), (a7 ), we get that kF 0 (ξ)−1 (G − F 0 (ξ))k ≤

Z 1 0

0

(32.26)

F 0 (ξ + τ (ξ0 − ξ)) dτ for ξ0 ∈ T3

ϕ0 ((1 − τ) kξ0 − ξk) dτ ≤

Z 1 0

ϕ0 (τ ρ) dτ < 1,

(32.27)

so G−1 ∈ L B(B1 , B2 ), leading to ξ = ξ0 , where we also used the estimation 0 = F(ξ0 ) − F(ξ) = G (ξ0 − ξ). 2 was obtained 2L0 + L by Argyros in [4] as the convergence radius for Newton’s method under condition (32.17)-(32.19). Notice that the convergence radius for Newton’s method given independently by Rheinboldt [26] and Traub [28] is given by

Remark 51.

(a) Let ϕ0 (t) = L0 t and ϕ(t) = Lt. The radius r1 =

ρ=

2 < r1 , 3L1

where L1 is the Lipschitz constant on T, so L0 ≤ L1 and L ≤ L1 . As an example, let us consider the function f (x) = ex − 1. Then x∗ = 0. Set D = U(0, 1). Then, we have 1 that L0 = e − 1 < L = e e−1 < L = e, so ρ = 0.24252961 < r1 = 0.3827.

Moreover, the new error bounds [4,5,6,7] are: kxn+1 − x∗ k ≤

L kxn − x∗ k2 , 1 − L0 kxn − x∗ k

whereas the old ones [15,17] kxn+1 − x∗ k ≤

L1 kxn − x∗ k2 . 1 − L1 kxn − x∗ k

Parametric Efficient Family of Iterative Methods

275

Clearly, the new error bounds are more precise, if L0 < L or L < L1 . Clearly, we do not expect the radius of convergence of method (32.2) given by r to be larger than r1 (see (32.8)) . (b) The local results can be used for projection methods such as Arnoldi’s method, the generalized minimum residual method(GMREM), the generalized conjugate method(GCM) for combined Newton/finite projection methods, and in connection to the mesh independence principle in order to develop the cheapest and most efficient mesh refinement strategy [4,5,6,7,8,9,10,11,12]. (c) The results can be also be used to solve equations where the operator F 0 satisfies the autonomous differential equation [4,5,6,7,8,9,10,11,12]: F 0 (x) = p(F(x)), where p is a known continuous operator. Since F 0 (x∗ ) = p(F(x∗ )) = p(0), we can apply the results without actually knowing the solution x∗ . Let as an example F(x) = ex − 1. Then, we can choose p(x) = x + 1 and x∗ = 0. (d) It is worth noticing that method (32.2) are not changing if we use the new instead of the old conditions [13]. Moreover, for the error bounds in practice we can use the computational order of convergence (COC) ξ=

−xn+1 k ln kxkxn+2 n+1 −xn k n+1 −xn k ln kx kxn−xn−1 k

,

for each n = 1, 2, . . .

or the approximate computational order of convergence (ACOC) ∗

ξ∗ =

n+2 −x k ln kx kxn+1 −x∗ k ∗

−x k ln kxkxn+1 ∗ n −x k

,

for each n = 0, 1, 2, . . .

instead of the error bounds obtained in Theorem 60. (e) In view of (32.13) and the estimate kF 0 (x∗ )−1 F 0 (x)k = kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ )) + Ik ≤ 1 + kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ 1 + L0 kx − x∗ k condition (32.15) can be dropped and M can be replaced by M(t) = 1 + L0 t or M(t) = M = 2, since t ∈ [0,

1 ). L0

276

3.

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Numerical Examples

We use [x, y; F] =

Z 1 0

F 0 (y + τ (x − y)) dτ and w as given in (32.3) for α =

1 kF 0 (ξ)k, and 10

1 β= kF 0 (ξ)k2 . In view of the definition of the divided difference, conditions (A) and 10 the estimation (for x ∈ S(ξ, ρ)) kw(x) − ξk ≤ ky(x) − ξk + α kF 0 (ξ)−1 F(y(x))k + β kF 0 (ξ)−1 F(y(x))k2 ≤ g1 (t)t + α

Z 1 0

ϕ3 (τ g1 (t)t) dτ + β (

Z 1 0

ϕ3 (τ g1 (t)t) dτ)2 = ϕ5 (t).

Then, we can choose functions ϕi , i = 1, 2, 4 in terms of functions ϕ0 , ϕ, ϕ5 as follows ϕ (ϕ5 (t)) + ϕ(g1 (t)t) 2 ϕ0 (ϕ5 (s)) + ϕ0 (g1 (t)t) ϕ2 (s,t) = 2 ϕ1 (t) =

and ϕ4 (s,t) = ϕ2 (s,t) + ϕ0 (t) in all examples. Example 51. Let B1 = B2 = C[0, 1], the space of continuous functions defined on [0, 1] and be equipped with the max norm. Let T = U(0, 1). Define function F on T by F(ϕ)(x) = ϕ(x) − 5

Z 1

xθϕ(θ)3 dθ.

(32.28)

0

We have that 0

F (ϕ(ξ))(x) = ξ(x) − 15

Z 1 0

xθϕ(θ)2 ξ(θ)dθ, for each ξ ∈ T.

1 Then, we get that x∗ = 0, ϕ0 (t) = 7.5t, ϕ(t) = 15t, ϕ3 (t) = 15, α = β = . This way, we 10 have that ρ1 = 0.066667, ρ2 = 0.00097005,ρ3 = 0.000510334. Example 52. Returning back to the motivation example at the introduction on this chap3 9 ter, we have ϕ0 (t) = ϕ(t) = 96.662907t, ϕ3 (t) = 1.0631, α = , and β = . Then, the 10 10 parameters for method (32.2) are ρ1 = 0.00689682,ρ2 = 0.00000221412,ρ3 = 0.00000121412. Example 53. Let B1 = B2 = R3 , T = S(0, 1), x∗ = (0, 0, 0)T and define F on T by F(x) = F(x1 , x2 , x3 ) = (ex1 − 1,

e−1 2 x2 + x2 , x3 )T . 2

(32.29)

Parametric Efficient Family of Iterative Methods For the points u = (u1 , u2 , u3 )T , the Fr´echet derivative is given by  u1  e 0 0 F 0 (u) =  0 (e − 1)u2 + 1 0  . 0 0 1

1

277

1

Then, F 0 (x∗ ) = diag(1, 1, 1), we have ϕ0 (t) = (e − 1)t, ϕ(t) = e e−1 t, ϕ3 (t) = e e−1 , α = β = 1 . 10 Then, we obtain that ρ1 = 0.382692, ρ2 = 0.96949, ρ3 = 0.154419.

4.

Conclusion

A ball convergence is presented to solve Banach space-valued equations using a TraubNewton-like method. The analysis leads to the computation of a convergences radius, error estimates, and uniqueness of the solution results based on Lipschitz-like functions and the first derivative. Our approach also extends the applicability of this family of methods from the n-dimensional Euclidean to the more general Banach space case. Numerical examples complement the theoretical results.

References [1] Amat S., Argyros I. K., Busquier S., and Her´nandez-Ve´ron M. A.. On two high-order families of frozen Newton-type methods. Numerical Linear Algebra with Applications, 25(1), 2018. [2] Amat S., Bermudez ´ C., Her´nandez-Ve´ron M. A., and Martinez E., On an efficient kstep iterative method for nonlinear equations. Journal of Computational and Applied Mathematics, 302:258-271, 2016. [3] Amat S., Busquier S., Bermudez ´ C., and Plaza S., On two families of high order Newton type methods. Applied Mathematics Letters, 25(12):2209-2217, 2012. [4] Argyros I. K., Computational theory of iterative methods, volume 15. Elsevier, 2007. [5] Argyros I. K., George S., Thapa N., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-I, Nova Publishes, NY, 2018. [6] Argyros I. K., George S., Thapa N., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-II, Nova Publishes, NY, 2018. [7] Argyros I. K., Cordero A., Magre˜na´ n A. A., and Torregrosa J. R., Third-degree anomalies of Traub’s method. Journal of Computational and Applied Mathematics, 309:511-521, 2017.

278

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[8] Argyros I. K. and Hilout S., Weaker conditions for the convergence of Newtons method. Journal of Complexity, 28(3):364-387, 2012. [9] Argyros I. K. and Hilout S., Computational methods in nonlinear analysis: efficient algorithms, fixed point theory and applications. World Scientific, 2013. [10] Argyros I. K., Magre˜na´ n A. A., Orcos L., and Sicilia J. A., Local convergence of a relaxed two-step Newton like method with applications. Journal of Mathematical Chemistry, 55(7):1427-1442, 2017. [11] Argyros I. K., Magr´en˜ an A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018. [12] Argyros I. K., Magr´en˜ an A. A., Iterative methods and their dynamics with applications, CRC Press, New York, USA, 2017. [13] Chicharo F. I., Cordero A., Garrido N., Torregrosa J. R., A new efficient methods for solving nonlinear systems, J. Diff. Equat. Appl., (2019). [14] Cordero A. and Torregrosa J. R., Low-complexity root-finding iteration functions with no derivatives of any order of convergence. Journal of Computational and Applied Mathematics, 275:502-515, 2015. [15] Cordero A., Torregrosa J. R., and Vindel P., Study of the dynamics of third-order iterative methods on quadratic polynomials. International Journal of Computer Mathematics, 89(13-14):1826-1836, 2012. [16] Ezquerro J. A. and Her´nandez-Ve´ron M. A., How to improve the domain of starting points for Steffensen’s method. Studies in Applied Mathematics, 132(4):354-380, 2014. [17] Ezquerro J. A. and Her´nandez-Ve´ron M. A., Majorizing sequences for nonlinear Fredholm Hammerstein integral equations. Studies in Applied Mathematics, 2017. ´ [18] Ezquerro J. Grau A.-Sanchez M., Her´nandez-Ve´ron M. A., and Noguera M., A family of iterative methods that uses divided differences of first and second orders. Numerical algorithms, 70(3):571-589, 2015. [19] Ezquerro J. A., Her´nandez-Ve´ron M. A., and Velasco A. I., An analysis of the semilocal convergence for Secant-like methods. Applied Mathematics and Computation, 266:883-892, 2015. [20] Her´nandez-Ve´ron M. A., Mart´ınez E., and Teruel C., Semilocal convergence of a kstep iterative process and its application for solving a special kind of conservative problems. Numerical Algorithms, 76(2):309-331, 2017. [21] Kantorovich L. V. and Akilov G. P., Functional analysis, Pergamon press, 1982. [22] Magre˜na´ n A. A., Cordero A., Guti´errez J. M., and Torregrosa J. R., Real qualitative behavior of a fourth-order family of iterative methods by using the convergence plane. Mathematics and Computers in Simulation, 105:49-61, 2014.

Parametric Efficient Family of Iterative Methods

279

[23] Magre˜na´ n A. A. and Argyros I. K., Improved convergence analysis for Newton-like methods. Numerical Algorithms, 71(4):811-826, 2016. [24] Magre˜na´ n A. A. and Argyros I. K., Two-step Newton methods. Journal of Complexity, 30(4):533-553, 2014. [25] Potra F. A. and Pt´ak V., Nondiscrete induction and iterative processes, volume 103, Pitman Advanced Publishing Program, 1984. [26] Rheinboldt W. C., An adaptive continuation process for solving systems of nonlinear equations, Polish Academy of Science, Banach Ctr. Publ. 3, no. 1, 129–142, 1978. [27] Ren H. and Argyros I. K., On the convergence of King-Werner-type methods of order free of derivatives. Applied Mathematics and Computation, 256:148-159, 2015. [28] Traub J. F., Iterative methods for the solution of equations. American Mathematical Soc., 1982.

Chapter 33

Fourth Order Derivative Free Scheme with Three Parameters 1.

Introduction

Let X,Y be Banach spaces, Ω ⊂ X be a nonempty and open set, and F a continuous operator mapping Ω into Y. Numerous problems from diverse disciplines can be formulated like equation F(x) = 0,

(33.1)

using mathematical modeling [3,5,6,7,9,15,17,18,19,21]. The solution x∗ is sought in closed form. But this is achieved only in special occasions. That is why iterative schemes are developed generating sequences converging to x∗ under suitable convergence criteria [1][26]. We study the derivative free scheme developed for x0 ∈ Ω and all n = 0, 1, 2, . . . by un = xn + bF(xn ) yn = xn − [un , xn ; F]−1 F(xn )

(33.2)

zn = yn + cF (xn )

xn+1 = yn − (aI + Gn ((3 − 2a)I + (a − 2)Gn ))[un, xn ; F]−1 F(yn ),

where Gn = [un , xn ; F]−1 [zn , yn ; F], [., .; F] : Ω × Ω −→ L (X,Y ) is a divided difference of order one [18], and a, b, c are real or complex parameters. The fourth convergence order of scheme (33.2) was established in [25] when X = Y = R j using Taylor series expansions and hypotheses up to the fifth derivative of F (not appearing on scheme (33.2)). These hypotheses limit the applicability of scheme (33.2). As a motivational example, consider 1 3 function Let f : [− , ] −→ R defined by 2 2  3 t logt 2 + t 5 − t 4 i f t 6= 0 f (t) = 0 i f t = 0. 1 3 Then, it is easy to see that f 000 is not bounded on [− , ]. Hence, the results in [25] can2 2 not be used to solve equation (33.1) using scheme (33.2). Moreover, no upper bounds on

282

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

kxn − x∗ k or results on the uniqueness of x∗ were given. Motivated by all these, we develop a technique using only the divided difference of order one (that appears on (33.2)) which also gives computable upper bounds on kxn − x∗ k and uniqueness results. Hence, we extend the applicability of the scheme (33.2). Moreover, the Computational Order of Convergence (COC) and Approximate Computational Order of Convergence (ACOC) is used to determine the convergence order which does not require the usage of higher-order derivatives or divided differences. This is done in Section 2. Numerical examples appear in Section 3. The technique can be used in other methods [1]-[26].

2.

Convergence

It is more convenient for the local convergence analysis to develop some real parameters and functions. Let T = [0, ∞), and α ≥ 0, δ ≥ 0, µ ≥ 0, q ≥ 0 be given parameters. Assume there exists function ϕ0 : T × T −→ T which is continuous and nondecreasing such that equation 1 − ϕ0 (|b|δt,t) = 0 (33.3) has a least positive solution ρ0 . Let T0 = [0, ρ0 ). Assume there exits function ϕ : T0 × T0 −→ T, which is continuous and nondecreasing such that for ϕ(|b|δt,t) ¯ h1 (s) = , h1 (t) = h1 (t) − 1, 1 − ϕ0 (|b|t,t)  2 |3 − 2a|λ λ p(t) = |a| + + |a − 2| , 1 − ϕ0 (|b|δt,t) 1 − ϕ0 (|b|δt,t)   qp(t) h2 (t) = 1 + h1 (t), h¯ 2 (t) = h2 (t) − 1 1 − ϕ0 (|b|δt,t)

equations

h¯ 1 (t) = 0

(33.4)

h¯ 2 (t) = 0

(33.5)

and have least solution ρ1 and ρ2 , respectively in(0, ρ0). We shall show that ρ given by ρ = min{ρ1 , ρ2 },

(33.6)

is a radius of convergence for scheme (33.2). Using these definitions, we see that for each t ∈ [0, ρ) 0 ≤ ϕ0 (|b|δt,t) < 1 (33.7) 0 ≤ h1 (t) < 1

(33.8)

0 ≤ h2 (t) < 1.

(33.9)

and ¯ ∗ , ε) = {x ∈ X : kx − x∗ k ≤ ε}, ε > 0. Define B(x∗ , ε) = {x ∈ X : kx − x∗ k < ε}, and B(x We shall utilize conditions (A ):

Fourth Order Derivative Free Scheme with Three Parameters

283

(A1 ) There exists a simple solution x∗ ∈ Ω of equation F(x) = 0 and δ ≥ 0 such that kF(x)k ≤ δ for all x ∈ Ω. (A2 ) There exists function ϕ0 : T × T −→ T a continuous and nondecreasing such that for all u, x ∈ Ω kF 0 (x∗ )−1 ([u, x; F] − F 0 (x∗ ))k ≤ ϕ0 (ku − x∗ k, kx − x∗ k). Set Ω0 = Ω ∩ B(x∗ , ρ0 ). (A3 ) There exists function ϕ : T0 × T0 −→ T continuous and nondecreasing such that for each u, x ∈ Ω0 kF 0 (x∗ )−1 ([u, x; F] − [x, x∗ : F])k ≤ ϕ(ku − xk, kx − x∗ k), and parameters α ≥ 0, λ ≥ 0, q ≥ 0, µ ≥ 0 such that kI + b[x, x∗ ; F]k ≤ α, kF 0 (x∗ )−1 [z, y; F]k ≤ λ for all z, y ∈ Ω0 , kF 0 (x∗ )−1 F(y)k ≤ q for all y ∈ Ω0 , kI + c[y, x∗ ; F]k ≤ µ for all y ∈ Ω0 . ¯ ∗ , R) ⊆ Ω, where R = max{αρ, µh1(ρ)ρ, ρ}. (A4 ) B(x (A5 ) There exists ρ∗ ≥ ρ such that

ϕ0 (0, ρ∗) < 1.

¯ ∗ , ρ∗ ). Set Ω1 = Ω ∩ B(x Next, we develop the local convergence analysis of the scheme (33.2) using the conditions (A ) and the preceding notation. Theorem 61. Under the conditions (A ) further assume x0 ∈ B(x∗ , ρ) − {x∗ }. Then, sequence {xn } generated by scheme (33.2) is well defined in B(x∗ , ρ), remains in B(x∗ , ρ) for each n = 0, 1, 2, . . . and converges to x∗ . Moreover, x∗ is the only solution of equation F(x) = 0 in the set Ω1 given in (A5 ). Proof. We shall first show u0 ∈ Ω. Indeed, from (A1 ) and the first condition in (A3 ), we have ku0 − x∗ k ≤ kx0 − x∗ + b(F(x0 ) − F(x∗ ))k = k(I + b[x0 , x∗ ; F])(x0 − x∗ )k

≤ kI + b[x0 , x∗ ; F]kkx0 − x∗ k ≤ αkx0 − x∗ k ≤ R,

284

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

so u0 ∈ Ω (by (A4 )). Then, by (A2 ), (33.6) and (33.7), we get kF 0 (x∗ )−1 ([u0 , x0 ; F] − F 0 (x∗ ))k ≤ ϕ0 (ku0 − x∗ k, kx0 − x∗ k) ≤ ϕ0 (kbF(x0 )k, kx0 − x∗ k)

≤ ϕ0 (|b|k[x0, x∗ ; F](x0 − x∗ )k, kx0 − x∗ k)

≤ ϕ0 (|b|δkx0 − x∗ k, kx0 − x∗ k)

≤ ϕ0 (|b|δρ, ρ) < 1, so k[u0, x0 ; F]−1 F 0 (x∗ )k ≤

1 1 − ϕ0 (|b|δkx0 − x∗ k, kx0 − x∗ k)

(33.10)

by the Banach Lemma on invertible operators [18] and (33.7). Hence, y0 and x1 are defined by scheme (33.2) for n = 0. Then, in view of (33.6), (33.8), first condition of (A3 ), and (33.10), we obtain in turn ky0 − x∗ k = kx0 − x∗ − [u0 , x0 ; F]−1 F(x0 )k

= k[u0 , x0 ; F]−1 ([u0 , x0 ; F] − [x0 , x∗ ; F])(x0 − x∗ )k

≤ k[u0 , x0 ; F]−1 F 0 (x∗ )kkF 0 (x∗ )−1 ([u0, x0 ; F] − [x0 , x∗ ; F])kkx0 − x∗ k ϕ(ku0 − x0 k, kx0 − x∗ k)kx0 − x∗ k ≤ 1 − ϕ0 (|b|δkx0 − x∗ k, kx0 − x∗ k) ≤ h1 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k < ρ,

(33.11)

so y0 ∈ B(x∗ , ρ). Set A0 = aI + (3 − 2a)G0 + (a − 2)G20 . Then, we get from the above |3 − 2a|λ 1 − ϕ0 (|b|δkx0 − x∗ k, kx0 − x∗ k) 2  λ = p(kx0 − x∗ k). +|a − 2| 1 − ϕ0 (|b|δkx0 − x∗ k, kx0 − x∗ k)

kA0 k ≤ |a| +

(33.12)

Using (33.2), (33.6), (33.9), (33.11) and (33.12), we get kx1 − x∗ k = ky0 − x∗ + A0 [u0 , x0 ; F]−1 F(y0 )k

≤ [1 + kA0 kk[u0, x0 ; F]−1 F 0 (x∗ )kkF 0 (x∗ )−1 [y0 , x∗ ; F]k]ky0 − x∗ k   qp(kx0 − x∗ k) ≤ 1+ ky0 − x∗ k 1 − ϕ0 (|b|δkx0 − x∗ k, kx0 − x∗ k) ≤ h2 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k, (33.13)

so x1 ∈ B(x∗ , ρ), where we also used kz0 − x∗ k = k(I + c[y0 , x∗ ; F])(y0 − x∗ )k

≤ µky0 − x∗ k ≤ µh1 (kx0 − x∗ k)kx0 − x∗ k ≤ µh1 (ρ)ρ ≤ R,

Fourth Order Derivative Free Scheme with Three Parameters

285

and kF 0 (x∗ )−1 [z0 , y0 ; F]k ≤ λ, so z0 ∈ Ω. Simply exchange u0 , x0 , y0 , z0 , x1 by um , xm , ym , zm, xm+1 in preceding calculations to arrive at kym − x∗ k ≤ h1 (kxm − x∗ k)kxm − x∗ k and kxm+1 − x∗ k ≤ h2 (kxm − x∗ k)kxm − x∗ k, so kxm+1 − x∗ k ≤ γkxm − x∗ k < ρ,

(33.14)

where γ = h2 (kx0 − x∗ k) ∈ [0, 1). Hence, we conclude from (33.14) that lim xm = x∗ and m−→∞

xm+1 ∈ B(x∗ , ρ). Finally, concerning the uniqueness part, let y∗ ∈ Ω1 with F(y∗ ) = 0, and set T = [x∗ , y∗ ; F]. Then, by (A2 ) and (A5 ) kF 0 (x∗ )−1 (T − F 0 (x∗ ))k ≤ ϕ0 (0, kx∗ − y∗ k) ≤ ϕ0 (0, ρ∗ ) < 1,

so x∗ = y∗ follows from the invertibility of T and the identity 0 = F(x∗ ) − F(y∗ ) = T (x∗ − y∗ ).

3.

Numerical Examples

Example 54. Let X = Y = R3 , Ω = U(0, 1), x∗ = (0, 0, 0)T . Consider F on Ω by F(x) = F(u1 , u2 , u3 ) = (eu1 − 1,

e−1 2 u2 + u2 , u3 )T . 2

(33.15)

The Fr´echet derivative is given for u = (u1 , u2 , u3 )T , by  u1  e 0 0 F 0 (u) =  0 (e − 1)u2 + 1 0  . 0 0 1

We have taken

[x, y; F] =

Z 1 0

F 0 (y + θ(x − y))dθ.

Using the norm of the maximum of the rows and since F 0 (x∗ ) = diag(1, 1, 1)T , we get 1 1 1 ϕ0 (s,t) = (e − 1)(s + t), ϕ(s, y) = (es + (e − 1)t), a = 2, b = −2 = c α = µ = e e−1 , λ = 2 2 1 1 1 e−1 e−1 e ,q = e . 2 ρ1 = 0.20852, ρ2 = 0.0554957.

286

4.

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Conclusion

The local convergence analysis of a fourth-order derivative free scheme with three parameters is studied for solving Banach space-valued equations and systems of equations. Our idea extends the applicability of the scheme in cases not covered before and can be used on other schemes. Numerical examples are used to test the convergence criteria.

References [1] Amat S., Busquier S., Convergence and numerical analysis of two-step Steffensens methods, Comput. Math. Appl. 49 (2005) 13-22. [2] Amat S., Busquier S., A two-step Steffensens under modified convergence conditions, J. Math. Anal. Appl. 324 (2006) 1084-1092. [3] Argyros I. K., Computational theory of iterative methods. Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Co. New York, U.S.A, 2007. [4] Argyros I. K., Hilout S., Weaker conditions for the convergence of Newton’s method. J. Complexity 28 (2012) 364–387. [5] Argyros I. K., Magr´en˜ an A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018. [6] Argyros I. K., Magr´en˜ an A. A., Iterative methods and their dynamics with applications, CRC Press, New York, USA, 2017. [7] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020. [8] Argyros I. K., George S., On the complexity of extending the convergence region for Traub’s method, Journal of Complexity 56 (2020) 101423, https://doi.org/10.1016/j.jco.2019.101423. [9] Brent R. P., Some efficient algorithms for solving systems of nonlinear equations, SIAM J. Numer. Anal. 10 (1973) 327-344. [10] D´zunic J., Petkovic M. S., On generalized multipoint root-solvers with memory, J. Comput. Appl. Math. 236 (2012) 2909-2920. [11] Fousse L., Hanrot G., Lef´evre V., Pelissier P., Zimmermann P., MPFR: a multipleprecision binary floating-point library with correct rounding, ACMTrans. Math. Softw. 33 (2) (2007) 15 (Art. 13). [12] Grau-S´anchez M., Grau A., Noguera M., Frozen divided difference scheme for solving systems of nonlinear equations, J. Comput. Appl. Math. 235 (2011)1739-1743. [13] Grau-S´anchez M. Noguera M., A technique to choose the most efficient method between secant method and some variants, Appl. Math. Comput. 218(2012) 6415-6426.

Fourth Order Derivative Free Scheme with Three Parameters

287

[14] Grau-S´anchez M., Peris J. M., Guti´errez J. M., Accelerated iterative methods for finding solutions of a system of nonlinear equations, Appl. Math. Comput. 190 (2007) 1815-1823. [15] Kelley C. T., Solving Nonlinear Equations with Newtons Method, SIAM, Philadelphia, 2003. [16] Liu Z., Zheng Q., Zhao P., A variant of Steffensens method of fourth-order convergence and its applications, Appl. Math. Comput. 216 (2010) 1978-1983. [17] Mc Namee J.M., Numerical Methods for Roots of Polynomials, Part I, Elsevier, Amsterdam, 2007. [18] Ortega J. M., Rheinboldt W. C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970. [19] Petkovic M. S., Neta B., Petkovic L. D., D´zunic J., Multipoint Methods for Solving Nonlinear Equations, Elsevier, Boston, 2013. [20] Petkovic M. S., Iterative Methods for Simultaneous Inclusion of Polynomial Zeros, Springer-Verlag, Berlin-Heidelberg-New York, 1989. [21] Potra F. A., Pt´ak V., Nondiscrete Induction and Iterative Processes, Pitman Publishing, Boston, 1984. [22] Ren H., Wu Q., Bi W., A class of two-step Steffensen type methods with fourth-order convergence, Appl. Math. Comput. 209 (2009) 206-210. [23] Steffensen J. F., Remarks on iteration, Skand. Aktuar Tidskr. 16 (1933) 64-72. [24] Sharma J. R., Arora H., An efficient derivative free iterative method for solving systems of nonlinear equations, Appl. Anal. Discrete Math. 7 (2013) 390-403. [25] Sharma J. R., Arora H., Petkovic M. S., An efficient derivative free family of fourth order methods for solving systems of nonlinear equations, Appl. Math. Comput., 235, (2014), 383-393. [26] Sharma J. R., GuhaR.K., Gupta P., Some efficient derivative free methods with memory for solving nonlinear equations, Appl. Math. Comput. 219 (2012)699-707.

Chapter 34

Jarratt-Type Methods 1.

Introduction

In this chapter, we consider Jarratt-type methods of order six for approximating a solution p of the nonlinear equation H(x) = 0. (34.1) Here H : Ω ⊂ X → Y is a continuously differentiable nonlinear operator between the Banach spaces X and Y, and Ω stand for an open non empty convex compact set of X. The Jarratttype methods of order six we are interested in is defined as follows [12]: 2 yn = xn − H 0 (xn )−1 H(xn ) 3 23 9 zn = xn − [ I − H 0 (xn )−1 H 0 (yn )(3I − H 0 (xn )−1 H(xn ))] 8 8 ×H 0 (xn )−1 H(xn )) 1 xn+1 = zn − (5I − 3 − H 0 (xn )−1 )H(yn ))H 0 (xn )−1 H(zn )). 2

(34.2)

The convergence of the above method has been shown using Taylor expansions involving the seventh order derivative not on these methods, of H. The hypotheses involving the seventh derivatives limit the applicability of these methods. For example: Let 1 3 B1 = B2 = R, Ω = [− , ]. Define f on Ω by 2 2  3 s logs2 + s5 − s4 , s 6= 0 f (s) = 0, s = 0. Then, we get f 0 (s) = 3s2 log s2 + 5s4 − 4s3 + 2s2 ,

f 00 (s) = 6s logs2 + 20s3 − 12s2 + 10s, f 000 (s) = 6 logs2 + 60s2 − 24s + 22.

Obviously f 000 (s) is not bounded on Ω. Hence, the convergence of methods (34.2) is not guaranteed by the earlier analysis [1]-[12].

290

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

In this chapter, we obtained the same convergence order using COC or ACOC (to be precise in Remark 52) that depend only on the first derivative and then iterates. Hence, we extended the applicability of the methods. Our technique can be used to compare other methods along the same lines. The rest of the chapter is organized as follows. The convergence analysis of the method (34.2) is given in Section 2.

2.

Convergence Analysis

Let S := [0, ∞). Assume there exists a continuous and increasing function ω0 defined on the interval S with values in itself such that the equation ω0 (s) − 1 = 0,

(34.3)

has a least positive solution called r0 . Set S0 = [0, r0). Assume there exist functions ω, ω1 defined on S0 with values in S. Define functions g1 and h1 on S0 as R1 1R1 0 ω0 ((1 − τ)s)dτ + 3 0 ω1 (τs)dτ g1 (s) = . 1 − ω0 (s) and

h1 (s) = g1 (s) − 1. Assume equation h1 (s) = 0.

(34.4)

has a least solution in (0, r0) denoted by R1 . Moreover, define functions g2 and h2 on S0 as b(s) 01 ω1 (τs)dτ g2 (s) = g(s) + 1 − ω0 (s) R

and

h2 (s) = g2 (s) − 1, where g(s) = and

R1 0

ω((1 − τ)s)dτ ω0 (s) + ω0 (g1 (s)s) , a(s) = 1 − ω0 (s) 1 − ω0 (s) 9 3 15 b(s) = a(s)2 + a(s) + . 8 4 8

Equation h2 (s) = 0 has a least solution in (0, r0) called R2 . Assume equation ω0 (g2 (s)s) − 1 = 0

(34.5)

(34.6)

Jarratt-Type Methods

291

has a least solution r1 . Set r = min{r0 , r1 } and S1 = [0, r). Define functions g3 and h3 on S1 as (ω0 (g2 (s)s) + ω0 (s)) 01 ω1 (τg2 (s)s)dτ g3 (s) = [g(g2(s)s) + (1 − ω0 (s))(1 − ω0 (g2 (s)s)) R

3 a(s) 01 ω1 (g2 (s)s) + ]g2 (s) 2 1 − ω0 (s) R

and h3 (s) = g3 (s) − 1. Assume equation h3 (s) = 0

(34.7)

has a least solution in (0, r) called R3 . Define a radius of convergence R for method (34.2) as R = min{Rm }, m = 1, 2, 3.. (34.8) It follows 0 ≤ ω0 (s) < 1

(34.9)

0 ≤ ω0 (g2 (s)s) < 1

(34.10)

0 ≤ gm (s) < 1,

(34.11)

and ¯ ) denote open and closed balls in X, respectively with hold for all s ∈ [0, R). Let U(x, γ), U(x, radius γ > 0 and center x ∈ X. We shall use the notation en = kxn − pk, for all n = 0, 1, 2, . . .. The following conditions (A ) are considered in our analysis. (A1 ) F : Ω −→ Y is differentiable in the Fr´echet sense and there exists a simple solution p of equation (34.1). (A2 ) There exists a continuous and increasing function ω0 : S0 −→ S such that for all x ∈ Ω kH 0 (p)−1 (H 0(x) − H 0 (p))k ≤ ω0 (kx − pk). Set Ω0 = Ω ∩U(p, r0 ). (A3 ) There exists a continuous and increasing functions ω : S0 −→ S, ω1 : S0 −→ S such that for all x, y ∈ Ω0 kH 0 (p)−1 (H 0 (y) − H 0 (x))k ≤ ω(ky − xk), H 0 (p)−1 H 0 (x)k ≤ ω1 (kx − pk). ¯ (A4 ) U(p, R) ⊂ Ω, and r0 , r1 exist and radius R is defined by (34.8). (A5 ) There exists T ≥ R such that Z 1 0

Set Ω1 = Ω ∩ U¯ (p, T ).

ω0 (τT )dτ < 1.

292

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Theorem 62. Assume conditions (A )hold. Choose starter x0 ∈ U(p, R) − {p}. Then, sequence {xn } generated by method (34.2) is well defined, {xn } ∈ U(p, R), and lim xn = p. n−→∞ In addition, (i) the following items hold kyn − pk ≤ g1 (en )en ≤ en < R,

(34.12)

kzn − pk ≤ g2 (en )en ≤ en ,

(34.13)

kxn+1 − pk ≤ g3 (en )en ≤ en ,

(34.14)

and with functions gm given earlier and radius R defined by (34.8); and (ii) p is the only solution of equation (34.1) in the set Ω1 given in (A5 ). Proof. Let v ∈ U(p, R). In view of (34.8), (34.9), (A1 ) and (A2 ), we get kH 0 (p)−1(H 0 (v) − H 0 (p))k ≤ ω0 (kv − pk) < ω0 (R) ≤ 1, so kH 0 (v)−1H 0 (p)k ≤

1 , 1 − ω0 (kv − pk)

(34.15)

(34.16)

by a lemma of Banach on invertible operators [2]. We also have that y0 , z0 and x1 exist by method (34.2). By the first sub-step of method (34.2) for n = 0, we can write 1 y0 − p = (x0 − p − H 0 (x0 )−1 H(x0 )) + H 0 (x0 )−1 H(x0 ). 3

(34.17)

But then, using (34.8), (34.11) (for m = 1), A1 ), (A3 ), (34.16) (for v = x0 ), (34.17) and the triangle inequality, we obtain ky0 − pk ≤ kH 0(x0 )−1 H 0 (p)kk 0

−1

Z 1 0

+kH (x0 ) H(x0 )kk

H 0 (p)−1 (H 0 (p + τ(x0 − p)) − H 0 (x0 ))dτ(x0 − p)k

Z 1 0

H 0 (p + τ(x0 − p))dτ(x0 − p)k

1 1 0 ω((1 − τ)kx0 − pk)dτ + 3 0 ω1 (τkx0 − pk)dτ]kx0 − pk ≤ 1 − ω0 (kx0 − pk) ≤ g1 (kx0 − pk)kx0 − pk ≤ kx0 − pk < R,

[

R1

R

(34.18)

verifying (34.12) for n = 0 and y0 ∈ U(p, R). Then, by the second sub-step of method (34.2) for n = 0, we can write. z0 − p = x0 − p − H 0 (p)−1 H(x0 ) + M0 H 0 (p)−1H(x0 ), where M0 = −

15 9 I + Q0 (3I − Q0 ), for Q0 = H 0 (p)−1 H 0 (y0 ). But then 8 8 9 15 M0 = − Q20 + 3Q0 − I 8 8 9 15 = − (Q0 − I)2 + 3(Q0 − I) − I, 8 8

(34.19)

Jarratt-Type Methods

293

and kQ0 − Ik = kH 0 (x0 )−1 [(H 0(y0 ) − H 0 (p)) + (H 0 (p) − H 0 (x0 ))]k ω0 (e0 ) + ω0 (ky0 − pk) ≤ 1 − ω0 (e0 ) ω0 (e0 ) + ω0 (g1 (e0 )e0 ) ≤ 1 − ω0 (e0 ) = a(e0 ), 15 9 kM0 k ≤ a(e0 )2 + 3a(e0 ) + = b(e0 ) 8 8 and kz0 − pk = k(x0 − p − H 0 (x0 )−1 H(x0 ))k

+kM0 kkH 0(x0 )−1 H 0 (p)kkH 0(p)−1 H(x0 )k

b(e0 ) 01 ω1 (τe0 )dτ ≤ [g(e0 ) + ]e0 1 − ω0 (e0 ) = g2 (e0 )e0 ≤ e0 , R

(34.20)

so (34.13) hold for n = 0 and z0 ∈ U(p, R). Next, we can write by the third sub-step of method (34.2) x1 − p

= (z0 − p − H 0 (z0 )−1 H(z0 )) + (H 0 (z0 )−1 − H 0 (x0 )−1 )H(z0 ) 3 − (I − Q0 )H 0 (x0 )−1 H(z0 ) 2 = (z0 − p − H 0 (z0 )−1 H(z0 )) + H 0 (z0 )−1 [(H 0 (x0 ) − H 0 (p)) + (H 0 (p) − H 0 (z0 ))H(z0 ) 3 − (I − Q0 )H 0 (x0 )−1 H(z0 ), 2

so (ω0 (kz0 − pk) + ω0 (kx0 − pk)) 01 ω1 (τkz0 − pk)dτ kx1 − pk ≤ [g(kz0 − pk) + (1 − ω0 (kx0 − pk))(1 − ω0 (kz0 − pk)) R

3 a(e0 ) 01 ω1 (τkz0 − pk)dτ + ]kz0 − pk 2 1 − ω0 (e0 ) = g3 (e0 )e0 ≤ e0 , R

(34.21)

completing the induction for estimations (34.12) -(34.14) for n = 0. Replace x0 , y0 , z0 , x1 by xi , yi , zi, xi+1 in the above calculations for i = 1, 2, . . ., n − 1 to complete the induction for estimations (34.12)-(34.14). Then, by the estimation kxi+1 − pk ≤ qkxi − pk < R,

(34.22)

where q = g3 (e0 ) ∈ [0, 1), we get xi+1 ∈ U(p, R) and lim xi = p. Consider v∗ ∈ Ω1 satisfyi−→∞

ing equation (34.1) and let B =

Z 1 0

kH 0 (p)−1 (B − H 0 (p))k ≤

0

H (p + τ(v∗ − p))dτ. Using (A2 ) and (A5 ), we have

Z 1 0

ω0 ((1 − τ)kv∗ − pk)dτ ≤

Z 1 0

ω0 (τT )dτ < 1,

294

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

so p = v∗ , by the existence of B−1 and the estimate 0 = H(v∗ ) − H(p) = B(v∗ − p). Remark 52. We can find the convergence order by resorting to the computational order of convergence (COC) defined by     kxn − pk kxn+1 − pk / ln ξ = ln kxn − pk kxn−1 − pk or the approximate computational order of convergence     kxn+1 − xn k kxn − xn−1 k ξ1 = ln / ln . kxn − xn−1 k kxn−1 − xn−2 k This way, we obtain in practice the order of convergence without resorting to the computation of higher-order derivatives appearing in the method or the sufficient convergence criteria usually appearing in the Taylor expansions for the proofs of those results.

3.

Conclusion

In earlier studies of Jarratt-type methods convergence order, six was shown using assumptions up to the seventh derivative of the operator involved. These assumptions on derivatives, not appearing in these methods limit the applicability of these methods. We address these concerns using only the first derivative, which only appears in these methods.

References [1] Amat S., Busquier S., Guti´errez J. M., Geometrical constructions of iterative functions to solve nonlinear equations. J. Comput. Appl. Math. 157, 197-205 (2003). [2] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [3] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2019. [4] Argyros I. K., Magre˜na´ n A. A., Iterative method and their dynamics with applications, CRC Press, New York, USA, 2017. [5] Cordero A., Mart´ınez E., Torregrosa J. R., Iterative methods of order four and five for systems of nonlinear equations. J. Comput. Appl. Math. 231, 541–551 (2009). [6] Cordero A., Hueso J. L., Mart´ınez E., Torregrosa J. R., A modified Newton–Jarratt’s composition. Numer. Algor. 55, 87–99 (2010). [7] Cordero A., Hueso J. L., Mart´ınez E., Torregrosa J. R., Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 25, 2369–2374 (2012).

Jarratt-Type Methods

295

[8] Darvishi M. T., Barati A., A fourth-order method from quadrature formulae to solve systems of nonlinear equations. Appl. Math. Comput. 188, 257–261 (2007). [9] Frontini M., Sormani E., Third-order methods from quadrature formulae for solving systems of nonlinear equations. Appl. Math. Comput. 149, 771–782 (2004). [10] Grau-S´anchez M., Noguera M., Amat S., On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 237, 363–372 (2013). [11] Grau-S´anchez M., Peris J. M., Guti´errez J. M., Accelerated iterative methods for finding solutions of a system of nonlinear equations. Appl. Math. Comput. 190, 1815–1823 (2007). [12] Sharma J. R., Arrora H., Efficient Jarratt-like methods for solving systems of nonlinear equations, Calcolo, 1(2014), 193-210, DOI 10.1007/s10092-013-0097-1.

Chapter 35

Convergence Radius of an Efficient Iterative Method with Frozen Derivatives 1.

Introduction

We consider solving equation F(x) = 0,

(35.1)

where F : D ⊂ X −→ Y is continuously Fr´echet differentiable, X,Y are Banach spaces and D is a nonempty convex set. Iterative methods are used to generate a sequence converging to a solution x∗ of equation (35.1) under certain conditions [1]-[12]. Recently a surge has been noticed in the development of efficient iterative methods with frozen derivatives. The convergence order is obtained using Taylor expansions and conditions on high-order derivatives not appearing in the method. These conditions limit the applicability of the methods. 1 3 For example: Let X = Y = R, D = [− , ]. Define f on D by 2 2  3 s logs2 + s5 − s4 i f s 6= 0 f (s) = 0 i f s = 0. Then, we have x∗ = 1, and f 0 (s) = 3s2 log s2 + 5s4 − 4s3 + 2s2 , f 00 (s) = 6x logs2 + 20s3 − 12s2 + 10s, f 000 (s) = 6 logs2 + 60s2 − 24s + 22.

Obviously f 000 (s) is not bounded on D. So, the convergence of these methods is not guaranteed by the analysis in these papers. Moreover, no comparable error estimates are given on the distances involved or the uniqueness of the solution results. That is why we develop a technique so general that it can

298

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

be used on iterative methods and address these problems by using only the first derivative, which only appears in these methods. We demonstrate this technique on the (3(i + 1), (i = 1, 2, . . .) convergence order method 7 3 (−1) defined for all n = 0, 1, 2, . . ., yn = yn and hn = h(xn , yn ) = I + An (−4I + An ), An = 2 2 F 0 (xn )−1 F 0 (yn ) by yn = xn − F 0 (xn )−F(xn ) 1 (0) yn = yn − (I − An )F 0 (xn )−1 F(xn ) 2 (1) (0) (0) yn = yn − h(xn , yn )F 0 (xn )−1 F(yn ) (2)

yn

(3) yn

(i−1)

yn

(i) yn

(1)

(1)

= yn − h(xn , yn )F 0 (xn )−1 F(yn )

= .. .

(35.2)

(2) (2) yn − h(xn , yn )F 0 (xn )−1 F(yn ) (i−2)

= yn

(i−2)

− h(xn , yn )F 0 (xn )−1 F(yn

= xn+1 =

)

(i−1) (i−1) yn − h(xn , yn )F 0 (xn )−1 F(yn ).

The efficiency, convergence order, and comparisons with other methods using similar information were given in [10] when X = Y = Rk . The convergence was shown using the seventh derivative. We include error bounds on kxn − x∗ k and uniqueness results not given in [10]. Our technique is so general that it can be used to extend the usage of other methods [1]-[12]. The chapter contains local convergence analysis in Section 2 and the numerical examples in Section 3.

2.

Convergence for Method (35.2)

Set S = [0, ∞). Let w0 : S −→ S be a continuous and nondecreasing function. Suppose that equation ω0 (t) − 1 = 0 (35.3) has a least positive solution ρ0 . Set S0 = [0, ρ0). Let ω : S0 −→ S and ω1 : S0 −→ S be continuous and nondecreasing functions. Suppose that equations ϕ−1 (t) − 1 = 0 (35.4) ϕ0 (t) − 1 = 0

(35.5)

ψm (t)ϕ0 (t) − 1 = 0, m = 1, 2, . . ., i

(35.6)

and have least solutions r−1 , r0 , rm ∈ (0, ρ0), respectively, where ϕ−1 (t) =

R1 0

ω((1 − θ)t)dθ , 1 − ω0 (t)

Convergence Radius of an Efficient Iterative Method with Frozen Derivatives

299

(ω0 (t) + ω0 (ϕ−1 (t)t)) 01 ω1 (θt)dθ , ϕ0 (t) = ϕ−1 (t)t + 2(1 − ω0 (t))2 R

and

(ω0 (t) + ω0 (ϕ−1 (t)t)) 01 ω1 (θt)dθ ψ(t) = ϕ−1 (ϕ0 (t)t) + (1 − ω0 (t))2    ! 1 ω0 (t) + ω0 (ϕ−1 (t)t) 2 ω0 (t) + ω0 (ϕ−1 (t)t) + 3 +2 2 1 − ω0 (t) 1 − ω0 (t) R

×

Z 1 ω1 (θt)dθ 0

1 − ω0 (t)

.

Define r = min{r j }, j = −1, 0, 1, . . ., m.

(35.7)

It follows by the definition of r that for each t ∈ [0, r) 0 ≤ ω0 (t) < 1

(35.8)

0 ≤ ϕ j (t) < 1.

(35.9)

and ¯ α) We shall show that r is a radius of convergence for method (35.2). Let B(x, α), B(x, denote the open and closed balls respectively in X with center x ∈ X and of radius α > 0. The following set of conditions (A) shall be used in the local convergence analysis of method (35.2). (A1) F : D ⊂ X −→ Y is Fr´echet continuously differentiable and there exists x∗ ∈ D such that F(x∗ ) = 0 and F 0 (x∗ )−1 ∈ L(Y, X). (A2) There exists function ω0 : S −→ S continuous and nondecreasing such that for each x∈D kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ ω0 (kx − x∗ k). Set D0 = D ∩ B(x∗ , ρ0 ) (A3) There exists functions ω : S0 −→ S, ω1 : S0 −→ S continuous and nondecreasing such that for each x, y ∈ D0 kF 0 (x∗ )−1 (F 0 (y) − F 0 (x))k ≤ ω(ky − xk) and kF 0 (x∗ )−1 F 0 (x)k ≤ ω1 (kx − x∗ k). ¯ ∗ , r) ⊂ D, where r is defined by (35.7). (A4) B(x (A5) There exists r∗ ≥ r such that Z 1 0

¯ ∗ , r∗ ). Set D1 = D ∩ B(x

ω0 (θr∗ )dθ < 1.

300

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Next, the local convergence of method (35.2) is given using the conditions (A) and the aforementioned notation. Theorem 63. Suppose that the conditions (A) hold. Then, sequence {xn } generated by method (35.2) for any starting point x0 ∈ B(x∗ , r) − {x∗} is well defined in B(x∗ , r), remains in B(x∗ , r) and converges to x∗ so that for all n = 0, 1, 2, . . ., m = 1, 2, . . ., i, kyn − x∗ k ≤ ϕ−1 (kxn − x∗ k)kxn − x∗ k ≤ kxn − x∗ k < r, (0)

(m)

kyn − x∗ k ≤ ϕ0 (kxn − x∗ k)kxn − x∗ k ≤ kxn − x∗ k,

(35.10) (35.11)

kyn − x∗ k ≤ ψm (kxn − x∗ k)ϕ0 (kxn − x∗ k)kxn − x∗ k ≤ kxn − x∗ k,

(35.12)

kxn+1 − x∗ k ≤ ψi (kxn − x∗ k)ϕ0 (kxn − x∗ k)kxn − x∗ k ≤ kxn − x∗ k.

(35.13)

and Proof. Let v ∈ B(x∗ , r) − {x∗ }. Using (35.7), (35.8), (A1) and (A2), we get in turn kF 0 (x∗ )−1 (F 0 (v) − F 0 (x∗ ))k ≤ ω0 (kv − x∗ k) ≤ ω0 (r) < 1.

(35.14)

It follows by (35.14) and a perturbation Lemma by Banach [2,8] that F 0 (v)−1 ∈ L(Y, X) and kF 0 (v)−1 F 0 (x∗ )k ≤ (0)

1 . 1 − ω0 (kv − x∗ k)

(35.15)

(m)

It also follows that y0 , y0 , . . .y0 exist by method (35.2). By the first sub-step of method (35.2), (35.9) for j = −1, (A3) and (35.15), we have in turn ky0 − x∗ k = kx0 − x∗ − F 0 (x0 )−1 F(x0 )k = kF 0 (x0 )−1

Z 1 0

(F 0 (x∗ + θ(x0 − x∗ )) − F 0 (x0 ))dθ(x0 − x∗ )k

R1

ω((1 − θ)kx0 − x∗ k)dθkx0 − x∗ k 1 − ω0 (kx0 − x∗ k) ≤ ϕ−1 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k < r,



0

(35.16)

so y0 ∈ B(x∗ , r). Then, by the second sub-step of method (35.2), (35.9) for j = 0, (A3) and (35.15), we obtain in turn 1 (0) ky0 − x∗ k = ky0 − x∗ + F 0 (x0 )−1 F 0 (x∗ )F 0 (x∗ )−1 (F 0 (x0 ) − F 0 (y0 )) 2 ×F 0 (x0 )−1 F 0 (x∗ )F 0 (x∗ )−1 F(x0 )k

(35.17)

≤ [ϕ−1 (kx0 − x∗ k)kx0 − x∗ k

# R 1 (ω0 (kx0 − x∗ k) + ω0 (ky0 − x∗ k)) 01 ω1 (θkx0 − x∗ k)dθ + kx0 − x∗ k 2 (1 − ω0 (kx0 − x∗ k))2

≤ ϕ0 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k < r,

(35.18)

Convergence Radius of an Efficient Iterative Method with Frozen Derivatives

301

(0)

so y0 ∈ B(x∗ , r). Next, by the rest of the sub-steps of method (35.2), (35.9) for j = 1, 2, . . ., m and (35.15), we have in turn 3 7 (0) (1) (0) y0 − x∗ = y0 − x∗ − ( I + A0 (−4I + A0 ))F 0 (x0 )−1 F(y0 ) 2 2 (0) (0) 0 −1 = y0 − x∗ − F (x0 ) F(y0 ) 1 (0) − (3(A0 − I)2 − 2(A0 − I))F 0 (x0 )−1 F(y0 ) 2 (0) (0) (0) = y0 − x∗ − F 0 (y0 )−1 F(y0 (0)

(0)

+(F 0 (y0 )−1 − F 0 (x0 )−1 )F(y0 ) 1 (0) − (3(A0 − I)2 − 2(A0 − I))F 0 (x0 )−1 F(y0 ), 2

(35.19)

which by the triangle inequality leads to (1)

(0)

ky0 − x∗ k ≤ [ϕ−1 (ky0 − x∗ k) +

(0)

(ω0 (kx0 − x∗ k) + ω0 (ky0 − x∗ k))

 1 + 3 2

R1 0

(0) (1 − ω0 (ky0 − x∗ k))(1 − ω0 (kx0 − x∗ k)) !2 (0) ω0 (kx0 − x∗ k) + ω0 (ky0 − x∗ k)

1 − ω0 (kx0 − x∗ k) (0)

ω0 (kx0 − x∗ k) + ω0 (ky0 − x∗ k) 1 − ω0 (kx0 − x∗ k)

+2

(0)

ω1 (θky0 − x∗ k)dθ

!! R

≤ ϕ1 (kx0 − x∗ k)ϕ) (kx0 − x∗ k)kx0 − x∗ k

(0)

ky0 − x∗ k

(0) 1 0 ω1 (θky0 − x∗ k)dθ

1 − ω0 (kx0 − x∗ k)

≤ kx0 − x∗ k < r,

(35.20)

(01)

so y0 ∈ B(x∗ , r). Similarly, (m)

ϕ1 (kx0 − x∗ k) . . .ϕ1 (kx0 − x∗ k) ϕ0 (kx0 − x∗ k)kx0 − x∗ k m − times = ψm (kx0 − x∗ k)ϕ0 (kx0 − x∗ k)kx0 − x∗ k

ky0 − x∗ k ≤

≤ kx0 − x∗ k, (m)

so y0 ∈ B(x∗ , r), and kx1 − x∗ k ≤ ψi (kx0 − x∗ k)ϕ0 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k,

(35.21)

so x1 ∈ B(x∗ , r). Hence, estimates (35.10)-(35.13) are shown for n = 0. By replacing (1) (m) (1) (m) x0 , y0 , y0 , . . .y0 , x1 by xk , yk , yk , . . .yk , xk+1, k = 0, 1, . . .n, we show (35.10)-(35.13) hold for each n = 0, 1, 2, . . ., j = −1, 0, 1, . . .i. So, we get kxk+1 − x∗ k ≤ ckxk − x∗ k,

(35.22)

302

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

where c = ψi (kx0 − x∗ k)ϕ0 (kx0 − x∗ k) ∈ [0, 1), concluding that lim xk = x∗ , and xk+1 ∈ k−→∞

B(x∗ , r).

Finally, let x∗∗ ∈ D1 with F(x∗∗ ) = 0. Set Q =

Z 1

(A2), (A5) and (35.14), we get

kF 0 (x∗ )−1 (Q − F 0 (x∗ ))k ≤

Z 1 0

0

F 0 (x∗∗ + θ(x∗ − x∗∗ ))dθ. Using (A1),

ω0 (θkx∗ − x∗∗ )kdθ ≤

Z 1 0

ω0 (θr∗∗)dθ < 1,

so Q−1 ∈ L(Y, X). Consequently, from 0 = F(x∗∗ ) − F(x∗ ) = Q(x∗∗ − x∗ ), we obtain x∗∗ = x∗ . Remark 53. If {xn } is an iterative sequence converging to x∗ , then the COC is defined as     kxn+1 − x∗ k kxn − x∗ k ξ = ln / ln kxn − x∗ k kxn−1 − x∗ k where the ACOC is    kxn − xn−1 k kxn+1 − xn k ξ1 = ln / ln . kxn − xn−1 k kxn−1 − xn−2 k 

The calculation of these parameters does not need high-order derivatives.

3.

Numerical Examples

Example 55. Consider the kinematic system F10 (x) = ex , F20 (y) = (e − 1)y + 1, F30 (z) = 1 ¯ 1), p = with F1 (0) = F2 (0) = F3 (0) = 0. Let F = (F1 , F2 , F3 ). Let B1 = B2 = R3 , D = B(0, t t (0, 0, 0) . Define function F on D for w = (x, y, z) by F(w) = (ex − 1, Then, we get

e−1 2 y + y, z)t . 2



 ex 0 0 F 0 (v) =  0 (e − 1)y + 1 0  , 0 0 1 1

1

so ω0 (t) = (e − 1)t, ω(t) = e e−1 t, ω1 (t) = e e−1 . Then, the radii are r−1 = 0.382692, r0 = 0.234496, r1 = 0.11851. Example 56. Consider B1 = B2 = C[0, 1], D = B(0, 1) and F : D −→ B2 defined by F(φ)(x) = ϕ(x) − 5

Z 1 0

xθφ(θ)3dθ.

(35.23)

Convergence Radius of an Efficient Iterative Method with Frozen Derivatives

303

We have that 0

F (φ(ξ))(x) = ξ(x) − 15

Z 1 0

xθφ(θ)2ξ(θ)dθ, for each ξ ∈ D.

Then, we get that x∗ = 0, so ω0 (t) = 7.5t, ω(t) = 15t and ω1 (t) = 2. Then, the radii are r−1 = 0.06667, r0 = 0.0420116, r1 = 0.0182586. Example 57. By the academic example of the introduction, we have ω0 (t) = ω(t) = 96.6629073t and ω1 (t) = 2. Then, the radii are r−1 = 0.00689682, r0 = 0.003543946, r1 = 0.00150425.

4.

Conclusion

We determine a radius of convergence for an efficient iterative method with frozen derivatives to solve Banach space-defined equations. Our convergence analysis used ω− continuity conditions only on the first derivative. Earlier studies have used hypotheses up to the seventh derivative, limiting the applicability of the method. Numerical examples complete the chapter.

References [1] Amat S., Hern´andez M. A., Romero N., Semilocal convergence of a sixth order iterative method for quadratic equations, Applied Numerical Mathematics, 62 (2012), 833-841. [2] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [3] Argyros I. K., Magre˜na´ n A. A., Iterative method and their dynamics with applications, CRC Press, New York, USA, 2017. [4] Behl R., Cordero A., Motsa S. S., Torregrosa J. R.: Stable high order iterative methods for solving nonlinear models, Appl. Math. Comput., 303, 70-88, (2017). [5] Cordero A., Hueso J. L., Mart´ınez E., Torregrosa J. R., A modified Newton-Jarratt’s composition, Numer. Algor., 55, 87-99, (2010). [6] Magre˜na´ n A. A., Different anomalies in a Jarratt family of iterative root finding methods, Appl. Math. Comput. 233, (2014), 29-38. [7] Noor M. A., Wassem M., Some iterative methods for solving a system of nonlinear equations. Appl. Math. Comput. 57, 101–106 (2009)

304

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[8] Rheinboldt W. C., An adaptive continuation process for solving systems of nonlinear equations, In Mathematical models and numerical methods (Tikhonov A. N. et al. eds.) pub.3, (1977), 129-142 Banach Center, Warsaw Poland. [9] Sharma J. R., Kumar S., A class of computationally efficient Newton-like methods with frozen operator for nonlinear systems, Intern. J. Non. Sc. Numer. Simulations. [10] Traub J. F., Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs (1964). [11] Sharma J. R., Arora H., Improved Newton-like methods for solving systems of nonlinear equations, SeMA, 74, 147-163,(2017). [12] Weerakoon S., Fernando T. G. I., A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 13, 87–93 (2000).

Chapter 36

Efficient Sixth Convergence Order Methods under Generalized Continuity 1.

Introduction

In this chapter, we extend the applicability of two sixth convergence order methods under generalized continuity conditions for solving nonlinear equation F(x) = 0,

(36.1)

where F : Ω ⊂ X −→ Y is continuously Fr´echet differentiable, X,Y are Banach spaces, and Ω is a nonempty convex set. The methods under consideration in this chapter are: Lofti et all [16]: ym = xn − F 0 (xn )−1 F(xn )

zn = xn − 2(F 0 (xn ) + F 0 (yn ))−1 F(xn ) (36.2) 7 3 xn+1 = zn − ( I − 4F 0 (xn )−1 F 0 (yn ) + (F 0 (xn )−1 F 0 (yn ))2 )F 0 (xn )−1 F(zn ) 2 2 and Esmael et all [12]: ym = xn − F 0 (xn )−1 F(xn ) 1 zn = yn + (F 0 (xn )−1 + 2(F 0 (xn ) − 3F 0 (yn ))−1)F(xn ) (36.3) 3 1 xn+1 = zn + (−F 0 (xn )−1 + 4(F 0 (xn ) − 3F 0 (yn ))−1 )F(zn ). 3 The convergence order of iterative methods, in general, was obtained using Taylor expansions and conditions on seventh order derivatives not appearing in the method. These conditions limit the applicability of the methods [1]-[24].

306

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al. 1 3 For example: Let X = Y = R, Ω = [− , ]. Define f on Ω by 2 2  3 t logt 2 + t 5 − t 4 i f t 6= 0 f (t) = 0 i f t = 0.

Then, we have t∗ = 1, and f 000 (t) = 6 logt 2 + 60t 2 − 24t + 22. Obviously f 000 (t) is not bounded on Ω. So, the convergence of these methods is not guaranteed by the analysis in these papers. The convergence order of methods (36.2) and (36.3) was given using assumptions on the derivatives of order up to seven. The first derivative has only been used in our convergence hypotheses. Notice that this is the only derivative appearing on the method. We also provide a computable radius of convergence not given in [12, 16]. This way, we locate a set of initial points for the convergence of the method. The numerical examples are chosen to show how the radii theoretically predicted are computed. In particular, the last example shows that earlier results cannot be used to show the convergence of the method. Our results significantly extend the applicability of these methods and provide a new way of looking at iterative methods. The chapter contains local convergence analysis in Section 2 and the numerical examples in Section 3.

2.

Local Convergence

The convergence radii are determined by solving some scalar equations first for the method (36.2). Set D = [0, ∞). Suppose function ϕ0 : D −→ D is continuous and nondecreasing such that equation ϕ0 (t) − 1 = 0 has a least solution R0 ∈ D − {0}. Set D0 = [0, R0 ). Suppose that equation ψ1 (t) − 1 = 0,

(36.4)

(36.5)

has a least solution r1 ∈ (0, R0), where ψ1 (t) =

R1 0

ϕ((1 − θ)t)dθ 1 − ϕ0 (t)

with ϕ : D0 −→ D is some continuous and nondecreasing function. Suppose that equation p(t) − 1 = 0

1 has a least solution R p ∈ (0, R0 ), where p(t) = (ϕ0 (t) + ϕ0 (ψ1 (t)t)). Set 2 R1 = min{R0 , R p }

(36.6)

Efficient Sixth Convergence Order Methods under Generalized Continuity

307

and D1 = [0, R1). Suppose that equation ψ2 (t) − 1 = 0 has a least solution r2 ∈ (0, R1), where ϕ1 : [0, R1 ) −→ D is some continuous and nondecreasing function and ψ2 (t) =

ψ1 (t) (ϕ0 (t) + ϕ0 (ψ1 (t)t)) 01 ϕ1 (θt)dθ + (1 − ϕ0 (t))(1 − p(t)) R

Suppose that equation ϕ0 (ψ2 (t)t) − 1 = 0 has a least solution R2 ∈ (0, R1). Suppose that equation ψ3 (t) − 1 = 0

(36.7)

has a least solution r3 ∈ (0, R2), where ψ3 (t) = [ψ1 (ψ2 (t)t) (ϕ0 (t) + ϕ0 (ψ2 (t)t)) 01 ϕ1 (θψ2 (t)t)dθ + (1 − ϕ0 (t))(1 − ϕ0 (ψ2 (t)t))   1 (ϕ0 (t) + ϕ0 (ψ1 (t)t)) 2 (ϕ0 (t) + ϕ0 (ψ1 (t)t)) + 3( ) + 2( ) 2 (1 − ϕ0 (t))2 1 − ϕ0 (t) R

R1 0

ϕ1 (θψ2 (t)t)dθ ]ψ2 (t). 1 − ϕ0 (t)

We shall show r = min{rk }, k = 1, 2, 3

(36.8)

is a radius of convergence for method (36.2). Clearly, we get from (36.8) that for each t ∈ [0, r) 0 ≤ ϕ0 (t) < 1, (36.9) 0 ≤ p(t) < 1,

(36.10)

0 ≤ ϕ0 (ψ2 (t)t) < 1,

(36.11)

0 ≤ ψk (t) < 1.

(36.12)

and ¯ α) denote the open and closed balls, respectively in X with The notations M(x, α), M(x, center x ∈ X and of radius α > 0. The conditions (C) are used with the scalar functions as defined previously. Assume: (c1) F : Ω ⊂ X −→ Y is Fr´echet continuously differentiable; there exists simple x∗ ∈ Ω such that F(x∗ ) = 0.

308

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

(c2) For all x ∈ Ω

kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ ϕ0 (kx − x∗ k).

Set Ω0 = Ω ∩ M(x∗ , R0 ). (c3) For all x, y ∈ Ω0

kF 0 (x∗ )−1 (F 0 (y) − F 0 (x))k ≤ ϕ(ky − xk). kF 0 (x∗ )−1 F 0 (x)k ≤ ϕ1 (kx − x∗ k).

¯ ∗ , ρ) ⊂ Ω for some ρ > 0 to be determined. (c4) M(x (c5) There exists r∗ ≥ r such that

Z 1 0

¯ ∗ , r∗ ). ϕ0 (θr∗ )dθ < 1. Set Ω1 = Ω ∩ M(x

Theorem 64. Under the conditions (C) further choose x0 ∈ M(x∗ , r)−{x∗ }. Then, sequence {xn } generated by method (36.2) exists in M(x∗ , r), stays in M(x∗ , r) and converges to x∗ , so that kyn − x∗ k ≤ ψ1 (kxn − x∗ k)kxn − x∗ k ≤ kxn − x∗ k < r, (36.13) kzn − x∗ k ≤ ψ2 (kxn − x∗ k)kxn − x∗ k ≤ kxn − x∗ k,

(36.14)

kxn+1 − x∗ k ≤ ψ3 (kxn − x∗ kkxn − x∗ k ≤ kxn − x∗ k,

(36.15)

and where the functions ψk , k = 1, 2, 3 are given previously and radius r is given by (36.8). Moreover, x∗ the only solution of equation F(x) = 0 in the set Ω1 given in (c5). Proof. Error estimates (36.13)-(36.15) are proved by mathematical induction on m. Consider arbitrary u ∈ M(x∗ , r) − {x∗ }. In view of (36.8), (36.9), (c1) and (c2), we get in turn that kF 0 (x∗ )−1 (F 0 (u) − F 0 (x∗ ))k ≤ ϕ0 (ku − x∗ k) ≤ ϕ0 (r) < 1 (36.16)

which together with a lemma on invertible operators by Banach [8] that F 0 (u)−1 ∈ L(Y, X), with 1 . (36.17) kF 0 (u)−1 F 0 (x∗ )k ≤ 1 − ϕ0 (ku − x∗ k)

It follows that iterate y0 exists by the first sub-step of method (36.2), for n = 0. We can also write by this sub-step that y0 − x∗ = x0 − x∗ − F 0 (x0 )−1 F(x0 ) = [F 0 (x0 )−1 F 0 (x∗ )] ×[

Z 1 0

F 0 (x∗ )−1 (F 0 (x∗ + θ(x0 − x∗ )) − F 0 (x0 ))dθ(x0 − x∗ )]. (36.18)

By (36.8), (36.12) (for m = 1), (c3), (36.17) (for u = x0 ) and (36.18), we get in turn that R1

ϕ((1 − θ)kx0 − x∗ k)dθkx0 − x∗ k 1 − ϕ0 (kx0 − x∗ k) ≤ ψ1 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k < r,

ky0 − x∗ k ≤

0

(36.19)

Efficient Sixth Convergence Order Methods under Generalized Continuity

309

showing y0 ∈ M(x∗ , r) − {x∗ }, and estimate (36.13) for n = 0. Next, we prove (F 0 (x0 ) + F 0 (y0 ))−1 exists. Indeed, by (36.8), (36.10), (c2) and (36.19), we obtain in turn that k(2F 0 (x∗ ))−1 (F 0 (x0 ) + F 0 (y0 )) − 2F 0 (x∗ ))k 1 (kF 0 (x∗ )−1 (F 0 (x0 ) − F 0 (x∗ ))k ≤ 2 +kF 0 (x∗ )−1 (F 0 (y0 ) − F 0 (x∗ )))k) 1 (ϕ0 (kx0 − x∗ k) + ϕ0 (ky0 − x∗ k)) ≤ 2 ≤ p(kx0 − x∗ k) ≤ p(r) < 1, so k(F 0 (x0 ) + F 0 (y0 ))−1 F 0 (x∗ )k ≤

1 , 2(1 − p(kx0 − x∗ k))

(36.20)

(36.21)

and z0 is well defined by the second sub-step of method (36.2) for n = 0, from which we can also write. z0 − x∗ = x0 − x∗ − F 0 (x0 )−1 F(x0 )

+(F 0 (x0 )−1 − 2(F 0 (x0 ) + F 0 (y0 ))−1F(x0 )

= y0 − x∗ + F 0 (x0 )−1 ((F 0 (y0 ) − F 0 (x∗ ))

+(F 0 (x∗ ) − F 0 (x0 )))(F 0 (x0 ) + F 0 (y0 ))−1F(x0 ).

(36.22)

Using (36.8), (36.12)(for m = 2), (36.17) ( for u = x0 ), (36.19)-(36.21) and (36.24), we get in turn that kz0 − x∗ k ≤ [ψ1 (kx0 − x∗ k))

(ϕ0 (kx0 − x∗ k) + ϕ0 (ky0 − x∗ k)) 01 ϕ1 (θkx0 − x∗ k)dθ + 2(1 − ϕ0 (kx0 − x∗ k))(1 − p(kx0 − x∗ k)) ≤ ψ2 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k (36.23) R

showing z0 ∈ M(x∗ , r) and (36.14) holds for n = 0. Iterate x1 is well defined by the third sub-step of method (36.2) from which we can also write that x1 − x∗ = z0 − x∗ − F 0 (z0 )−1 F(z0 )

+(F 0 (z0 )−1 − F 0 (x0 )−1 )F(z0 ) 1 − (5I − 8F 0 (x0 )−1 F 0 (y0 )) + 3(F 0 (x0 )−1 F 0 (y0 ))2 )F 0 (x0 )−1 F(z0 ) 2 = z0 − x∗ − F 0 (z0 )−1 F(z0 )

+F 0 (z0 )−1 (F 0 (x0 ) − F 0 (z0 ))F 0 (x0 )−1 F(z0 ) 1 − (3(F 0 (x0 )−1 F 0 (y0 ) − I)2 − 2(F 0 (x0 )−1 F 0 (y0 ) − I))F 0 (x0 )−1 F(z0 ). 2 (36.24)

310

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

By (36.8), (36.12) (for m = 3), (36.17) (for u = z0 ), (36.19), (36.23) and (36.26), we obtain in turn that " R (ϕ0 (kx0 − x∗ k) + ϕ0 (kz0 − x∗ k)) 01 ϕ1 (θkz0 − x∗ k)dθ kx1 − x∗ k ≤ ψ1 (kz0 − x∗ k) + (1 − ϕ0 (kx0 − x∗ k))(1 − ϕ0 (kz0 − x∗ k)) 2 1 ϕ0 (kx0 − x∗ k) + ϕ0 (ky0 − x∗ k) + (3 2 1 − ϕ0 (kx0 − x∗ k) ϕ0 (kx0 − x∗ k) + ϕ0 (ky0 − x∗ k) )) +2( 1 − ϕ0 (kx0 − x∗ k) R1

ϕ1 (θkz0 − x∗ k)dθ ]kz0 − x∗ k 1 − ϕ0 (kx0 − x∗ k) ≤ ψ3 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k, 0

(36.25)

showing (36.15) for n = 0 and x1 ∈ M(x∗ , r). Simply switch x0 , y0 , z0 , x1 by xm , ym , zm , xm+1 in the preceding calculations to complete the induction for items (36.13)-(36.15). Then, by the estimation kxm+1 − x∗ k ≤ bkxm − x∗ k < r, (36.26) where b = ψ3 (kx0 − x∗ k) ∈ [0, 1), we conclude lim xm = x∗ and xm+1 ∈ M(x∗ , r). Furtherm−→∞

more, let Q =

Z 1 0

by (h5) that

0

F (x∗ + θ(x∗∗ − x∗ ))dθ for some x∗∗ ∈ Ω1 with F(x∗∗ ) = 0. It then follows

kF 0 (x∗ )−1 (Q − F 0 (x∗ ))k ≤

Z 1 0

ϕ0 (θkx∗ − x∗∗ )kdθ ≤

Z 1 0

ϕ0 (θ˜r)dθ < 1,

so Q−1 ∈ L(Y, X). Consequently, from 0 = F(x∗∗ ) − F(x∗ ) = Q(x∗∗ − x∗ ), we obtain x∗∗ = x∗ . Remark 54.

1. In view of (c2) and the estimate

kF 0 (x∗ )−1 F 0 (x)k = kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ )) + Ik

≤ 1 + kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ 1 + L0 kx − x∗ k

the condition in (c3) can be dropped and ϕ1 can be replaced by ϕ1 (t) = 1 + ϕ0 (t) or ϕ1 (t) = 1 + ϕ0 (R0 ), since t ∈ [0, R0 ). 2. The results obtained here can be used for operators F satisfying autonomous differential equations [2] of the form F 0 (x) = P(F(x))

Efficient Sixth Convergence Order Methods under Generalized Continuity

311

where P is a continuous operator. Then, since F 0 (x∗ ) = P(F(x∗ )) = P(0), we can apply the results without actually knowing x∗ . For example, let F(x) = ex − 1. Then, we can choose: P(x) = x + 1. 3. Let ϕ0 (t) = L0 t, and ϕ(t) = Lt. In [2, 3] we showed that rA = gence radius of Newton’s method:

2 is the conver2L0 + L

xn+1 = xn − F 0 (xn )−1 F(xn ) for each n = 0, 1, 2, · · ·

(36.27)

under the conditions (36.11) and (36.12). It follows from the definition of r in (36.8) that the convergence radius r of the method (36.2) cannot be larger than the convergence radius rA of the second order Newton’s method (36.27). As already noted in [2, 3] rA is at least as large as the convergence radius given by Rheinboldt [18] rR =

2 , 3L

(36.28)

where L1 is the Lipschitz constant on D. The same value for rR was given by Traub [19]. In particular, for L0 < L1 we have that rR < rA and

rR 1 L0 → as → 0. rA 3 L1 That is the radius of convergence rA is at most three times larger than Rheinboldt’s. 4. We can compute the computational order of convergence (COC) defined by     kxn+1 − x∗ k kxn − x∗ k ξ = ln / ln kxn − x∗ k kxn−1 − x∗ k or the approximate computational order of convergence     kxn+1 − xn k kxn − xn−1 k ξ1 = ln / ln . kxn − xn−1 k kxn−1 − xn−2 k Next, we present the local convergence analysis of the method (36.3), but we use the functions:

¯ 1 (t) = ψ1 (t), ψ (ϕ (t) + ϕ0 (ψ1 (t)t)) 01 ϕ1 (θt)dθ ¯ 2 (t) = ψ1 (t) + 0 ψ , 2(1 − ϕ0 (t))(1 − q(t)) R

¯ (t)t) + ϕ0 (t)) 01 ϕ1 (θψ ¯ 2 (t)t)dθ (ϕ (ψ ¯ 3 (t) = [ψ1 (ψ ¯ 2 (t)t) + 0 2 ψ ¯ 2 (t)t))(1 − ϕ0(t)) (1 − ϕ0 (ψ R

¯ 1 (t)t) + ϕ0 (t)) 01 ϕ1 (θψ ¯ 2 (t)t)dθ (ϕ0 (ψ ¯ 2 (t). + ]ψ (1 − ϕ0 (t))(1 − q(t)) R

312

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

1 where q(t) = (ϕ0 (t) + 3(ψ1 (t)t)), equations 2 ¯ 2 (t) − 1 = 0 ψ

(36.29)

¯ 3 (t) − 1 = 0 ψ

(36.30)

r¯ = min{r1 , r¯2 , r¯3 },

(36.31)

and radius of convergence where r¯2 , r¯3 are the least solutions of equations (36.29) and (36.30) (if they exist), respectively in (0, R0). We need the estimates under condition (C) with ρ = r¯ : zn − x∗ = xn − x∗ − F 0 (xn )−1 F(xn ) + 1 + F 0 (xn )−1 (F 0 (xn ) − 3F 0 (yn )) + 2F 0 (xn )) 3 (F 0 (x0 ) − 3F 0 (yn ))−1 F(xn ) = yn − x∗ + F 0 (xn )−1 (F 0 (xn ) − F 0 (yn )) (F 0 (xn ) − 3F 0 (yn ))−1 F(xn ),

kzn − x∗ k ≤ [ψ1 (kxn − x∗ k)

(ϕ0 (kxn − x∗ k) + ϕ0 (kyn − x∗ k)) 01 ϕ1 (θkxn − x∗ k)dθ ]kxn − x∗ k + 2(1 − ϕ0 (kxn − x∗ k))(1 − q(kxn − x∗ k)) ¯ 2 (kxn − x∗ k)kxn − x∗ k ≤ kxn − x∗ k < r¯, ≤ ψ R

k(2F 0 (x∗ ))−1 (F 0 (xn ) − 3F 0 (yn ) − 2F 0 (x∗ ))k 1 ≤ (kF 0 (x∗ )−1 (F 0 (xn ) − F 0 (x∗ ))k + 3kF 0 (x∗ )−1 (F 0 (yn ) − F 0 (x∗ ))k 2 1 ≤ (ϕ0 (kxn − x∗ k) + 3ϕ0 (kyn − x∗ k)) 2 ≤ q(kxn − x∗ k) ≤ q(¯r) < 1, so k(F 0 (xn ) − 3F 0 (yn ))−1 F 0 (x∗ )k ≤

1 , 2(1 − q(kxn − x∗ k))

xn+1 − x∗ = zn − x∗ − F 0 (zn )−1 F(zn) + (F 0 (zn )−1 − F 0 (xn )−1 )F(zn ), 1 4 = +(F 0 (xn )−1 − F 0 (xn )−1 + (F 0 (xn ) − 3F 0 (yn ))−1 F(zn ) 3 3 = zn − x∗ − F 0 (zn )−1 F(zn) +F 0 (zn )−1 (F 0 (xn ) − F 0 (zn ))F 0 (xn )−1 F(zn )

+2F 0 (xn )−1 (F 0 (xn ) − F 0 (yn ))(F 0 (xn ) − 3F 0 (yn ))−1 F(zn ),

Efficient Sixth Convergence Order Methods under Generalized Continuity

313

so ϕ0 (kzn − x∗ k) + ϕ0 (kxn − x∗ k) (1 − ϕ0 (kzn − x∗ k))(1 − ϕ0 (kxn − x∗ k)) ϕ0 (kyn − x∗ k) + ϕ0 (kxn − x∗ k) + (1 − ϕ0 (kxn − x∗ k))(1 − q(kxn − x∗ k))

kxn+1 − x∗ k ≤ [ψ1 (kzn − x∗ k) + (

Z 1 0

ϕ1 (θkzn − x∗ k)dθkzn − x∗ k

¯ 3 (kxn − x∗ k)kxn − x∗ k ≤ kxn − x∗ k. ≤ ψ Hence, we arrive at the local convergence result for the method (36.3). Theorem 65. Under the conditions (C) for ρ = r¯ further choose x0 ∈ M(x∗ , r¯) − {x∗ }. Then,the conclusion of Theorem 64 hold for method (36.3) with g¯2 , g¯3 , r¯ replacing g2 , g3 and r, respectively.

3.

Numerical Examples

Example 58. Consider the kinematic system F10 (x) = ex , F20 (y) = (e − 1)y + 1, F30 (z) = 1

¯ 1), x∗ = with F1 (0) = F2 (0) = F3 (0) = 0. Let F = (F1 , F2 , F3 ). Let X = Y = R3 , D = U(0, T T (0, 0, 0) . Define function F on D for w = (x, y, z) by F(w) = (ex − 1, Then, we get

e−1 2 y + y, z)T . 2



 ex 0 0 F 0 (v) =  0 (e − 1)y + 1 0  , 0 0 1 1

1

so ϕ0 (t) = (e − 1)t, ϕ(t) = e e−1 t, ϕ1 (t) = e e−1 . Then, the radii:

r1 = 0.382692, r2 = 0.14602, r3 = 0.107984, r¯2 = 0.193554, r¯3 = 0.14699. Example 59. Consider X = Y = C[0, 1], D = U(0, 1) and F : D −→ Y defined by F(ψ)(x) = ϕ(x) − 5

Z 1

xθψ(θ)3 dθ.

(36.32)

0

We have that F 0 (ψ(ξ))(x) = ξ(x) − 15

Z 1 0

xθψ(θ)2ξ(θ)dθ, for each ξ ∈ D.

Then, we get that x∗ = 0, so ϕ0 (t) = 7.5t, ϕ(t) = 15t and ϕ1 (t) = 2. Then, the radii: r1 = 0.066667, r2 = 0.0272888, r3 = 0.0208187, r¯2 = 0.036647, r¯3 = 0.0279804. Example 60. By the academic example of the introduction, we have ϕ0 (t) = ϕ(t) = 96.6629073t and ϕ1 (t) = 2. Then, the radii: r1 = 0.00689681, r2 = 0.00247191, r3 = 0.00180832, r¯2 = 0.00344627, r¯3 = 0.002524048.

314

4.

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Conclusion

In this chapter, we consider the convergence of two sixth convergence order methods for solving a nonlinear equation. We present the local convergence analysis not given before, which is based on the first Fr´echet derivative that only appears in the method. Numerical examples where the theoretical results are tested completely in the chapter.

References [1] Argyros I. K., A unifying local-semilocal convergence analysis and applications for two-point Newton-like methods in Banach spaces, J. Math. Anal. Appl. 298 (2004) 374-397. [2] Argyros I. K., Convergence and Applications of Newton-Type Iterations, SpringerVerlag, New York, 2008. [3] Argyros I. K., A semilocal convergence analysis for directional Newton methods, Math. Comp. 80 (2011) 327-343. [4] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [5] Argyros I. K., Magre˜na´ n A. A., Iterative method and their dynamics with applications, CRC Press, New York, USA, 2017. [6] Argyros I. K., George S., Magre˜na´ n A. A., Local convergence for multi-point- parametric Chebyshev-Halley-type method of higher convergence order. J. Comput. Appl. Math. 282, 215-224 (2015). [7] Argyros I. K., Magre˜na´ n A. A., A study on the local convergence and the dynamics of Chebyshev-Halley-type methods free from second derivative. Numer. Algorithms 71, 1-23, (2015). [8] Argyros I. K., George S., On the complexity of extending the convergence region for Traubs method, Journal of Complexity 56, 101423. [9] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020. [10] Babajee D. K. R., Dauhoo M. Z., Darvishi M. T., Karami A., Barati A., Analysis of two Chebyshev-like third order methods free from second derivatives for solving systems of nonlinear equations, J. Comput. Appl. Math. 233 (2010) 2002-2012. [11] Darvishi M. T., AQ two step high order Newton like method for solving systems of nonlinear equations, Int. J. Pure Appl. Math., 57(2009), 543-555. [12] Esmaili H., Ahmudi M., An efficient three step method to solve system of nonlinear equations, Appl. Math. Comput., 266, (2015), 1093-1101.

Efficient Sixth Convergence Order Methods under Generalized Continuity

315

[13] Grau-S´anchez, M., Grau A., Noguera M., Ostrowski type methods for solving systems of nonlinear equations, Appl. Math. Comput. 218 (2011) 2377-2385. [14] Jaiswal J. P., Semilocal convergence of an eighth-order method in Banach spaces and its computational efficiency, Numer. Algorithms 71 (2016) 933-951. [15] Jaiswal J. P., Analysis of semilocal convergence in Banach spaces under relaxed condition and computational efficiency, Numer. Anal. Appl. 10 (2017) 129-139 [16] Lofti T., Bokhtiari, P., Cordero A., Mahdinni, K., Torregrosa J. R., Some new efficient multipoint iterative methods for solving systems of equations, Int. J. Comput. Math., 92, (2014), 1921-1934. [17] Regmi, S., Argyros I. K., Undergraduate Research at Cameron University on Iterative Procedures in Banach and Other Spaces, Nova Science Publisher, NY, 2019. [18] Rheinboldt W. C., An adaptive continuation process for solving systems of nonlinear equations, In Mathematical models and numerical methods (Tikhonov A. N. et al. eds.) pub.3, (1977), 129-142 Banach Center, Warsaw Poland. [19] Traub J. F., Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs (1964). [20] Sharma J. R., Arrora H., Improved Newton-like methods for solving systems of nonlinear equations, SeMA, 74, 147-163,(2017). [21] Sharma J. R., Kumar D., A fast and efficient composite Newton-Chebyshev method for systems of nonlinear equations, J. Complexity, 49, (2018), 56-73. [22] Sharma R., Sharma J. R., Kalra N., A modified Newton-Ozban composition for solving nonlinear systems, International J. of computational methods, 17, 8, (2020), world scientific publ. Comp. [23] Wang X., Li Y., An efficient sixth order Newton type method for solving nonlinear systems, Algorithms, 10, 45, (2017), 1-9. [24] Weerakoon S., Fernando T. G. I., A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 13, 87–93 (2000).

Chapter 37

Fifth Order Methods under Generalized Conditions 1.

Introduction

In this chapter we compare the radii of convergence of two fifth convergence order methods for solving nonlinear equation F(x) = 0, (37.1) where F : Ω ⊂ B1 −→ B2 is continuously Fr´echet differentiable, B1 , B2 are Banach spaces, and Ω is a nonempty convex set. The methods under consideration in this chapter are [19]: 1 yn = xn − F 0 (xn )−1 F(xn ) 2 zn = xn − F 0 (yn )−1 F(xn ))]

xn+1 = zn − (2F 0 (yn )−1 − F 0 (xn )−1 )F(zn )

(37.2)

and [12] yn = xn − F 0 (xn )−1 F(xn )

zn = xn − 2(F 0 (yn ) + F 0 (xn ))−1F(xn ) 0

(37.3)

−1

xn+1 = zn − F (yn ) )F(zn ).

The convergence order of these methods was obtained in [19,12], respectively when

B1 = B2 = Rm using Taylor expansions and conditions up to the sixth order derivative not appearing on the method. These conditions limit the applicability of the methods [1]-[25]. 1 3 For example: Let B1 = B2 = R, Ω = [− , ]. Define f on Ω by 2 2  3 t logt 2 + t 5 − t 4 i f t 6= 0 f (t) = 0 i f t = 0. Then, we have t∗ = 1,

f 000 (t) = 6 logt 2 + 60t 2 − 24t + 22.

318

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Obviously f 000 (t) is not bounded on Ω. So, the convergence of these methods is not guaranteed by the analysis in these papers. Our convergence analysis is based on the first Fr´echet derivative that only appears in the method. We also provide a computable radius of convergence not given in [19, 12]. This way, we locate a set of initial points for the convergence of the method. The numerical examples are chosen to show how the radii theoretically predicted are computed. In particular, the last example shows that earlier results cannot be used to show the convergence of the method. Our results significantly extend the applicability of these methods and provide a new way of looking at iterative methods. The chapter contains local convergence analysis in Section 2 and the numerical examples in Section 3.

2.

Local Analysis

We consider functions and parameters needed in the local convergence of methods (37.2) and (37.3) that follows. Let A = [0, ∞). Suppose equation: (i) ϕ0 (t) − 1 = 0

(37.4)

has a least solution r0 ∈ A − {0} for some function ϕ0 : A −→ A which is continuous and nondecreasing. Let A0 = [0, r0). (ii) ψ1 (t) − 1 = 0,

(37.5)

has a least solution ρ1 ∈ (0, r0 ), for some functions ϕ : A0 −→ A and ϕ1 : A0 −→ A continuous and nondecreasing, where ψ1 (t) =

R1 0

R1

ϕ((1 − θ)t)dθ + 12 1 − ϕ0 (t)

0

ϕ1 (θt)dθ

.

(iii) ϕ0 (ψ1 (t)t) − 1 = 0

(37.6)

has a least solution r = A − {0}. Let r2 = min{r0 , r0 } and A1 = [0, r2 ). (iv) ψ2 (t) − 1 = 0

(37.7)

has a least solution ρ2 ∈ A1 − {0}, where ψ2 : A1 −→ A defined by

ψ2 (t) =

R1 0

ϕ((1 − θ)t)dθ 1 − ϕ0 (t)

(ϕ0 (t) + ϕ0 (ψ1 (t)t)) 01 ϕ1 (θt)dθ . + (1 − ϕ0 (t)))(1 − ϕ0 (ψ1 (t)t)) R

Fifth Order Methods under Generalized Conditions

319

(v) ϕ0 (ψ2 (t)t) − 1 = 0

(37.8)

has a least solution r3 ∈ A1 − {0}. Let r = min{r2 , r3 } and A2 = [0, r). (vi) ψ3 (t) − 1 = 0

(37.9)

has a least solution ρ3 ∈ A2 − {0} for function ψ3 : A2 −→ A defined by "R 1 0 ϕ((1 − θ)ψ2 (t)t)dθ ψ3 (t) = 1 − ϕ0 (ψ2 (t)t)

(ϕ0 (ψ1 (t)t) + ϕ0 (ψ2 (t)t)) 01 ϕ1 (θψ2 (t)t)dθ + (1 − ϕ0 (ψ1 (t)t))(1 − ϕ0(ψ2 (t)t)) # R (ϕ0 (t) + ϕ0 (ψ1 (t)t)) 01 ϕ1 (θψ2 (t)t)dθ + ψ2 (t). (1 − ϕ0 (t))(1 − ϕ0 (ψ1 (t)t)) R

We prove in Theorem 66 that ρ = min{ρk }, k = 1, 2, 3

(37.10)

is a convergence radius for method (37.2). In view of (37.10) it follows that for all t ∈ [0, r) 0 ≤ ϕ0 (t) < 1,

(37.11)

0 ≤ ϕ0 (ψ1 (t)t) < 1,

(37.12)

0 ≤ ϕ0 (ψ2 (t)t) < 1

(37.13)

0 ≤ ψk (t) < 1.

(37.14)

and ¯ α) stand for the open and closed balls, respectively in B1 The notations D(x, α), D(x, with center x ∈ B1 and of radius α > 0. The following conditions (C) are needed: Assume with the “ϕ ” functions as defined previously: (c1) F : Ω ⊂ B1 −→ B2 is Fr´echet continuously differentiable and x∗ ∈ Ω is such that F(x∗ ) = 0 and simple. (c2) kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ ϕ0 (kx − x∗ k) for all x ∈ Ω. Set Ω0 = Ω ∩ T (x∗ , R0 ). (c3) kF 0 (x∗ )−1 (F 0 (y) − F 0 (x))k ≤ ϕ(ky − xk)

for all x, y ∈ Ω0 .

kF 0 (x∗ )−1 F 0 (x)k ≤ ϕ1 (kx − x∗ k)

320

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

¯ ∗ , β) ⊂ Ω for some β > 0 to be determined. (c4) D(x (c5) There exists ρ∗ ≥ ρ such that

Z 1 0

¯ ∗ , ρ∗ ). ϕ0 (θρ∗ )dθ < 1. Set Ω1 = Ω ∩ D(x

The main local convergence result for method (37.2) follows under conditions (C) and the previous notation. Theorem 66. Suppose conditions (C) with β = ρ and choose x0 ∈ D(x∗ , ρ) − {x∗ }. Then, sequence {xn } generated by method (37.2) is well defined in D(x∗ , ρ), for all n = 0, 1, 2, . . ., remains in D(x∗ , ρ) and converges to x∗ so that kyn − x∗ k ≤ ψ1 (kxn − x∗ k)kxn − x∗ k ≤ kxn − x∗ k < ρ,

(37.15)

kzn − x∗ k ≤ ψ2 (kxn − x∗ k)kxn − x∗ k ≤ kxn − x∗ k,

(37.16)

kxn+1 − x∗ k ≤ ψ3 (kxn − x∗ kkxn − x∗ k ≤ kxn − x∗ k,

(37.17)

and where ρ is defined by (37.10) and the functions ψk , k = 1, 2, 3 are given previously. Moreover, x∗ the only solution of equation F(x) = 0 in the set Ω1 given in (c5). Proof. We use induction for items (37.15)-(37.17). Consider u ∈ D(x∗ , ρ) − {x∗ }. It then follows from (37.10), (37.11), (c1) and (c2) that kF 0 (x∗ )−1 (F 0 (u) − F 0 (x∗ ))k ≤ ϕ0 (ku − x∗ k) ≤ ϕ0 (ρ) < 1.

(37.18)

The celebrated lemma by Banach [3,4] on invertible operators and (37.18) give F 0 (u) is invertible and 1 kF 0 (u)−1 F 0 (x∗ )k ≤ . (37.19) 1 − ϕ0 (ku − x∗ k) Iterate y0 is well defined by the first sub-step of method (37.2) for n = 0, and we can write y0 − x∗ = x0 − x∗ − F 0 (x0 )−1 F(x0 ) 1 + F 0 (x0 )−1 F 0 (x0 ). 2

(37.20)

Using (37.12), (37.14) (for k = 1), (c3), (37.19) (for u = x0 ) and (37.20), we get in turn that R1

ϕ((1 − θ)kx0 − x∗ k)dθ + 21 01 ϕ1 (θkx0 − x∗ k)dθ ky0 − x∗ k ≤ 1 − ϕ0 (kx0 − x∗ k) = ψ1 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k < ρ, 0

R

(37.21)

showing y0 ∈ D(x∗ , ρ) − {x∗}, and (37.15) for n = 0. We also have that F 0 (y0 ) is invertible by (37.19) for u = y0 . Iterate z0 is well defined and we can also have by the second sub-step of method (37.2) z0 − x∗ = x0 − x∗ − F 0 (x0 )−1 F(x0 )

+F 0 (x0 )−1 [F 0 (y0 ) − F 0 (x0 )]F 0 (y0 )−1 F(x0 ).

Fifth Order Methods under Generalized Conditions

321

Using (37.10), (37.14)(for k = 2), (37.19) (for u = y0 ) and (37.21), we obtain in turn "R 1 0 ϕ((1 − θ)kx0 − x∗ k)dθ kz0 − x∗ k ≤ 1 − ϕ0 (kx0 − x∗ k) # R (ϕ0 (kx0 − x∗ k) + ϕ0 (ky0 − x∗ k)) 01 ϕ1 (θkx0 − x∗ k)dθ + ) kx0 − x∗ k (1 − ϕ0 (kx0 − x∗ k))(1 − ϕ0 (ky0 − x∗ k)) ≤ ψ2 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k,

(37.22)

which shows z0 ∈ D(x∗ , ρ) and (37.16) for n = 0. Iterate x1 is well defined by the method (37.2) from which we can also write x1 − x∗ = z0 − x∗ − F 0 (z0 )−1 F(z0 )

+(F 0 (z0 )−1 − 2F 0 (y0 )−1 + F 0 (x0 )−1 )F 0 (z0 )) = z0 − x∗ − F 0 (z0 )−1 F(z0 ) +F 0 (z0 )−1 (F 0 (y0 ) − F 0 (z0 ))F 0 (y0 )−1 F(z0 )

+F 0 (x0 )−1 (F 0 (y0 ) − F 0 (x0 ))F 0 (y0 )−1 F(z0 ).

(37.23)

Next, using (37.10), (37.14) (for k = 3), (37.19) (for u = x0 , y0 , z0 ), (37.22) and (37.23), we have in turn "R 1 0 ϕ((1 − θ)kz0 − x∗ k)dθ kx1 − x∗ k ≤ 1 − ϕ0 (kz0 − x∗ k) (ϕ0 (ky0 − x∗ k) + ϕ0 (kz0 − x∗ k)) 01 ϕ1 (θkz0 − x∗ k)dθ + (1 − ϕ0 (ky0 − x∗ k))(1 − ϕ0 (kz0 − x∗ k)) R

# R (ϕ0 (kx0 − x∗ k) + ϕ0 (ky0 − x∗ k)) 01 ϕ1 (θkz0 − x∗ k)dθ + kz0 − x∗ k (1 − ϕ0 (kx0 − x∗ k))(1 − ϕ0 (ky0 − x∗ k))

≤ ψ3 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k,

(37.24)

which shows x1 ∈ D(x∗ , ρ) and (37.17) for n = 0. Simply repalce x0 , y0 , z0 , x1 by x j , y j , z j , x j+1 in the previous calculations to complete the induction for items (37.15)(37.17). Then, by the estimate kx j+1 − x∗ k ≤ γkx j − x∗ k 0. The conditions (C) shall be used. (c1) F : Ω ⊂ X −→ Y is Fr´echet continuously differentiable and x∗ ∈ D is a simpile solution of equation F(x) = 0., F 0 (x∗ )−1 ∈ L(Y, X). (c2) There exists a continuous and nondecreasing function ω0 : T −→ T such that for each x∈Ω kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ ω0 (kx − x∗ k). Set Ω0 = Ω ∩ B(x∗ , R0 ).

(c3) There continuous and nondecreasing exist functions ω : T0 −→ T, ω1 : T0 −→ T such that for each x, y ∈ Ω0 kF 0 (x∗ )−1 (F 0 (x) − F 0 (y))k ≤ ω(kx − yk) and kF 0 (x∗ )−1 F 0 (x)k ≤ ω1 (kx − yk). ¯ ∗ , ρ) ⊂ Ω, where ρ > 0 is to be determined. (c4) B(x (c5) There exists r¯ ≥ ρ such that

Z 1 0

¯ ∗ , r¯). ω0 (τ¯r)dτ < 1. Set Ω1 = Ω ∩ B(x

The local convergence of solver (38.2) follows under conditions (C) with the preceding notation.

330

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Theorem 68. Under assumptions (C) further choose x0 ∈ B(x∗ , r) − {x∗ }. Then, sequence {xn } generated by solver (38.2) is well defined in B(x∗ , r), remains in B(x∗ , r) for all n = 0, 1, 2, . . ., and converges to x∗ so that kyn − x∗ k ≤ g1 (kxn − x∗ k)kxn − x∗ k ≤ kxn − x∗ k < r,

(38.12)

kxn+1 − x∗ k ≤ g2 (kxn − x∗ kkxn − x∗ k ≤ kxn − x∗ k,

(38.13)

and with the functions gi , i = 1, 2 given previously, and radius r is defined by (38.8). Moreover, x∗ is the only solution of equation F(x) = 0 in the set Ω1 given in (c5). Proof. Items (38.12) and (38.13) are proved using induction. First choose u ∈ B(x∗ , r) − {x∗ } to be arbitrary. Using (c1), (c2), (38.8) and (38.9), we have in turn kF 0 (x∗ )−1 (F 0 (u) − F 0 (x∗ ))k ≤ ω0 (ku − x∗ k) ≤ ω0 (r) < 1

(38.14)

which together with a Lemma on invertible operators due to Banach [8] imply F 0 (u)−1 ∈ L(Y, X), and 1 . (38.15) kF 0 (u)−1 F 0 (x∗ )k ≤ 1 − ω0 (ku − x∗ k) We also see that y0 is well defined by the first sub-step of method (38.2) for n = 0. We can also write 1 y0 − x∗ = x0 − x∗ − F 0 (x0 )−1 F(x0 ) + F 0 (x0 )−1 F(x0 ) 3 = [F 0 (x0 )−1 F 0 (x∗ )][

Z 1 0

F 0 (x∗ )−1 (F 0 (x∗ + τ(x0 − x∗ )) − F 0 (x0 ))dτ(x0 − x∗ )]

1 + F 0 (x0 )−1 F(x0 ). 3

(38.16)

By (38.8), (38.10) (for i = 1), (c3), (38.15) (for u = x0 ) and (38.16), we get in turn ω((1 − τ)kx0 − x∗ k)dτ + 31 01 ω1 (τkx0 − x∗ k)dτ ky0 − x k ≤ kx0 − x∗ k 1 − ω0 (kx0 − x∗ k) ≤ g1 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k < r, (38.17) ∗

R1 0

R

proving y0 ∈ B(x∗ , r), and (38.12) holds for n = 0. Next, we show the invertibility of F 0 (x0 ) + F 0 (y0 ). Indeed, by (38.8), (38.10), (c1), (c2) and (38.17), we obtain in turn that k(2F 0 (x∗ ))−1 (F 0 (x0 ) + F 0 (y0 ) − 2F 0 (x∗ ))k 1 ≤ (ω0 (kx0 − x∗ k) + ω0 (ky0 − x∗ k)) 2 ≤ p(kx0 − x∗ k) ≤ p(r) < 1, so k(F 0 (x0 ) + F 0 (y0 ))−1 F 0 (x∗ )k ≤

1 , 1 − p(kx0 − x∗ k)

(38.18)

Two Fourth Order Solvers for Nonlinear Equations

331

and x1 exists. Then, we can write by the second sub-step of solver (38.2) for n = 0, x1 − x∗ = x0 − x∗ − F 0 (x0 )−1 F(x0 )

+(F 0 (x0 )−1 − 2(F 0 (x0 ) + F 0 (y0 ))−1 )F(x0 ) 1 + ((F 0 (x0 )−1 F 0 (y0 ) − I) + 3(F 0 (x0 )−1 F 0 (y0 ) − I)2 ) 2 ×(F 0 (x0 ) + F 0 (y0 ))−1F(x0 )

(38.19)

= x0 − x∗ − F 0 (x0 )−1 F(x0 )

+F 0 (x0 )−1 (F 0 (y0 ) − F 0 (x0 ))(F 0 (x0 ) + F 0 (y0 ))−1 F(x0 ) 1 + ((F 0 (x0 )−1 F 0 (y0 ) − I) + 3(F 0 (x0 )−1 F 0 (y0 ) − I)2 2 ×(F 0 (x0 ) + F 0 (y0 ))−1F(x0 ).

(38.20)

In view of (38.8), (38.11) (for i = 2), (38.15) (for u = x0 ),(38.17), (38.18) and (38.22), we have in turn that "R 1 ∗ ∗ 0 ω((1 − τ)kx0 − x k)dτ kx1 − x k ≤ 1 − ω0 (kx0 − x∗ k) (ω0 (kx0 − x∗ k) + ω0 (ky0 − x∗ k)) 01 ω1 (τkx0 − x∗ k)dτ + 2(1 − ω0 (kx0 − x∗ k))(1 − p(kx0 − x∗ k))  1 ω0 (kx0 − x∗ k) + ω0 (ky0 − x∗ k) + 4 1 − (ω0 (kx0 − x∗ k)  ω0 (kx0 − x∗ k) + ω0 (ky0 − x∗ k) 2 +3( ) 1 − (ω0 (kx0 − x∗ k) # R1 ∗ ω (τkx − x k)dτ 1 0 × 0 kx0 − x∗ k 1 − p(kx0 − x∗ k) R

≤ g2 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k,

(38.21)

proving x1 ∈ B(x∗ , r) and (38.13). The induction for (38.12) and (38.13) terminates if x0 , y0 , x1 are replaced by x j , y j , x j+1 in the preceding calculations. It follows from kx j+1 − x∗ k ≤ bkx j − x∗ k < r,

(38.22)

where b = g2 (kx0 − x∗ k) ∈ [0, 1), that lim xn = x∗ and xn+1 ∈ B(x∗ , r). n−→∞

Considering v ∈ Ω1 with F(v) = 0, set M =

and (c5), we have in turn that

kF 0 (x∗ )−1 (M − F 0 (x∗ ))k ≤

Z 1 0

Z 1 0

F 0 (x∗ + τ(x∗∗ − x∗ ))dτ. Using (c1), (c2)

ω0 (τkx∗ − v)kdτ ≤

Z 1 0

ω0 (τ¯r)dτ < 1,

so v = x∗ follows from the invertibility of M and 0 = F(v) − F(x∗ ) = M(v − x∗ ).

332

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Remark 56.

1. In view of (c2) and the estimate

kF 0 (x∗ )−1 F 0 (x)k = kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ )) + Ik

≤ 1 + kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ 1 + L0 kx − x∗ k

condition (c3) can be dropped and ω1 can be replaced by ω1 (t) = 1 + ω0 t or ω1 (t) = 1 + ω0 (R), since t ∈ [0, R). 2. The results obtained here can be used for operators F satisfying autonomous differential equations [2] of the form F 0 (x) = P(F(x)) where P is a continuous operator. Then, since F 0 (x∗ ) = P(F(x∗ )) = P(0), we can apply the results without actually knowing x∗ . For example, let F(x) = ex − 1. Then, we can choose: P(x) = x + 1. 3. Let ω0 (t) = L0 t, and ω(t) = Lt. In [2, 3] we showed that rA = gence radius of Newton’s method:

2 is the conver2L0 + L

xn+1 = xn − F 0 (xn )−1 F(xn ) for each n = 0, 1, 2, · · ·

(38.23)

under the conditions (c1) and (c2). It follows from the definition of r in (38.10) that the convergence radius r of the method (38.2) cannot be larger than the convergence radius rA of the second order Newton’s method (38.23). As already noted in [2, 3] rA is at least as large as the convergence radius given by Rheinboldt [18] 2 , (38.24) 3L where L1 is the Lipschitz constant on D. The same value for rR was given by Traub [22]. In particular, for L0 < L1 we have that rR =

rR < rA and

rR 1 L0 → as → 0. rA 3 L1 That is the radius of convergence rA is at most three times larger than Rheinboldt’s. 4. We can compute the computational order of convergence (COC) defined by  n+1    kxn − x∗ k kx − x∗ k µ = ln / ln kxn − x∗ k kxn−1 − x∗ k or the approximate computational order of convergence  n+1    kx − xn k kxn − xn−1 k µ1 = ln / ln . kxn − xn−1 k kxn−1 − xn−2 k

Two Fourth Order Solvers for Nonlinear Equations Next, we deal with solver (38.3) along the same lines. Assume that equation g˜2 (t) − 1 = 0

333

(38.25)

has a least solution r˜2 ∈ (0, R0), where g˜2 (t) =

R1

ω((1 − τ)t)dτ 1 − ω0 (t)  3 ω0 (t) + ω0 (g1 (t)t) + 3 8 1 − ω0 (g1 (t)t) R ω0 (t) + ω0 (g1 (t)t) 01 ω1 (τt)dτ . + 1 − ω0 (t) 1 − ω0 (t) 0

We shall prove that r˜ = min{r1 , r˜2 }

(38.26)

is a radius of convergence for solver (38.3). We need the estimates xn+1 − x∗ = xn − x∗ − F 0 (xn )−1 F(xn ) 1 9 3 + (3I − F 0 (yn )−1 F 0 (xn ) − F 0 (xn )−1 F 0 (yn ))F 0 (xn )−1 F(xn ) 2 4 4 = xn − x∗ − F 0 (xn )−1 F(xn ) 3 + (3(I − F 0 (yn )−1 F 0 (xn )) + (I − F 0 (xn )−1 F 0 (yn ))F 0 (xn )−1 F(xn ), 8 so ∗

kxn+1 − x k ≤

"R

1 ∗ 0 ω((1 − τ)kxn − x k)dτ 1 − ω0 (kxn − x∗ k)

 3 ω0 (kxn − x∗ k) + ω0 (kyn − x∗ k) + 3 8 1 − ω0 (kyn − x∗ k)  ∗ ω0 (kxn − x k) + ω0 (kyn − x∗ k) + 1 − ω0 (kxn − x∗ k) # R1 ∗ 0 ω1 (τkxn − x k)dτ × kxn − x∗ k 1 − ω0 (kxn − x∗ k)

≤ g˜2 (kxn − x∗ k)kxn − x∗ k ≤ kxn − x∗ k.

(38.27)

Hence, we arrive at the local convergence result for the solver (38.3). Theorem 69. Under the assumptions (C) further choose x0 ∈ B(x∗ , r˜) − {x∗ } for ρ = r˜. Then, the conclusions of Theorem 68 hold but for solver (38.3) with g˜2 , r˜ replacing g2 and r, respectively.

334

3.

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Numerical Examples

Example 62. Consider the kinematic system F10 (x) = ex , F20 (y) = (e − 1)y + 1, F30 (z) = 1 ¯ 1), x∗ = with F1 (0) = F2 (0) = F3 (0) = 0. Let F = (F1 , F2 , F3 ). Let X = Y = R3 , D = U(0, (0, 0, 0)T . Define function F on D for w = (x, y, z)T by F(w) = (ex − 1, Then, we get

e−1 2 y + y, z)T . 2



 ex 0 0 F 0 (v) =  0 (e − 1)y + 1 0  , 0 0 1

1 1 1 so ω0 (t) = (e − 1)t, ω(t) = e e−1 t, and ω1 (t) = e e−1 , ω2 (s,t) = (e − 1)(s + t). Then, the 2 radii: r = 0.114755, r˜ = 0.114755.

Example 63. Consider X = Y = C[0, 1], D = U(0, 1) and F : D −→ Y defined by F(φ)(x) = ϕ(x) − 5

Z 1

xτφ(τ)3 dτ.

(38.28)

0

We have that F 0 (φ(ξ))(x) = ξ(x) − 15

Z 1 0

xτφ(τ)2 ξ(τ)dτ, for each ξ ∈ D.

Then, we get that x∗ = 0, so ω0 (t) = 15t, ω(t) = 30t and ω1 (t) = 30. Then, the radii: r = 0.00025693 = r˜. Example 64. Looking at the motivational example, choose x∗ = 1, We have ω0 (t) = ω(t) = 96.6629073t, and ω1 (t) = 2. Then, the radii: r = 0.00188056 = r˜.

4.

Conclusion

A ball convergence comparison is developed between three Banach space valued schemes of fourth convergence order to solve nonlinear equations under ω− continuity conditions on the derivative.

Two Fourth Order Solvers for Nonlinear Equations

335

References [1] Argyros I. K., A unifying local-semilocal convergence analysis and applications for two-point Newton-like methods in Banach spaces, J. Math. Anal. Appl. 298 (2004) 374-397. [2] Argyros I. K., Convergence and Applications of Newton-Type Iterations, SpringerVerlag, New York, 2008. [3] Argyros I. K., A semilocal convergence analysis for directional Newton methods, Math. Comp. 80 (2011) 327-343. [4] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [5] Argyros I. K., Magre˜na´ n A. A., Iterative method and their dynamics with applications, CRC Press, New York, USA, 2017. [6] Argyros I. K., George S., Magre˜na´ n A. A., Local convergence for multi-point- parametric Chebyshev-Halley-type method of higher convergence order. J. Comput. Appl. Math. 282, 215-224 (2015). [7] Argyros I. K., Magre˜na´ n A. A., A study on the local convergence and the dynamics of Chebyshev-Halley-type methods free from second derivative. Numer. Algorithms 71, 1-23, (2015). [8] Argyros I. K., George S., On the complexity of extending the convergence region for Traubs method, Journal of Complexity 56, 101423. [9] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020. [10] Babajee D. K. R., Dauhoo M. Z., Darvishi M. T., Karami A., Barati A., Analysis of two Chebyshev-like third order methods free from second derivatives for solving systems of nonlinear equations, J. Comput. Appl. Math. 233 (2010) 2002-2012. [11] Darvishi M. T., AQ two step high order Newton like method for solving systems of nonlinear equations, Int. J. Pure Appl. Math., 57(2009), 543-555. [12] Grau-S´anchez, M, Grau A., Noguera M., Ostrowski type methods for solving systems of nonlinear equations, Appl. Math. Comput. 218 (2011) 2377-2385. [13] Jaiswal J. P., Semilocal convergence of an eighth-order method in Banach spaces and its computational efficiency, Numer. Algorithms 71 (2016) 933-951. [14] Jaiswal J. P., Analysis of semilocal convergence in Banach spaces under relaxed condition and computational efficiency, Numer. Anal. Appl. 10 (2017) 129-139. [15] Jarratt P., Some fourth order multipoint iterative methods for solving equations, Math. Comput., 20, 95(1966), 434-437.

336

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[16] Ostrowski A. M., Solutions of equations and systems of equations, Academic Press, New York, 1966. [17] Regmi S., Argyros I. K., Undergraduate Research at Cameron University on Iterative Procedures in Banach and Other Spaces, Nova Science Publisher, NY, 2019. [18] Rheinboldt W. C., An adaptive continuation process for solving systems of nonlinear equations, In Mathematical models and numerical methods (Tikhonov A. N. et al. eds.) pub.3, (1977), 129-142 Banach Center, Warsaw Poland. [19] Sharma J. R., Arora H., Improved Newton-like methods for solving systems of nonlinear equations, SeMA, 74, 147-163, (2017). [20] Sharma J. R., Kumar D., A fast and efficient composite Newton-Chebyshev method for systems of nonlinear equations, J. Complexity, 49, (2018), 56-73. [21] Sharma J. R, Guha R. K., Sharma R., An efficient fourth order weighted Newton method for systems of nonlinear equations, Numer. Algor., 62,(2013), 307-323. [22] Traub J. F., Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs (1964).

Chapter 39

Kou’s Family of Schemes 1.

Introduction

Let F : Ω ⊂ X −→ Y be a differentiable operator in the Fr´echet sense with Ω being nonempty, convex, open set, and X,Y be Banach spaces. A plethora of problems is modeled using equation F(x) = 0.

(39.1)

Then, to find a solution x∗ of equation (39.1), we rely mostly on iterative schemes. This is the case since solutions in closed form can be obtained only in special cases. In this article, we develop the local convergence analysis of Kou’s family of schemes defined as yn = xn − αF 0 (xn )−1 F(xn )

3 xn+1 = xn − F 0 (xn )−1 F(xn ) + (βF 0 (yn ) + (1 − β)F 0 (xn ))−1 4 0 0 0 ×(F (yn ) − F (xn ))F (xn )−1 F(xn ),

(39.2)

where α ∈ S − {0}, β ∈ S and S = R or S = C. Scheme (39.2) was studied in [17] when 2 X = Y = R and α = . The fourth convergence order was shown based on derivatives up 3 to order five (not on scheme (39.2)) limiting the applicability of the scheme. The dynamics and stebility were given in [14]. As an academic and motivational example 1 3 Let X = Y = R, Ω = [− , ]. Define f on Ω by 2 2  3 t logt 2 + t 5 − t 4 i f t 6= 0 f (t) = 0 i f t = 0. Then, we have t∗ = 1, and f 000 (t) = 6 logt 2 + 60t 2 − 24t + 22. Obviously f 000 (t) is not bounded on Ω. So, the convergence of these methods is not guaranteed by the analysis in earlier papers [1]-[22].

338

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Other concerns are: the lack of upper error estimates on kxn − x∗ k or results in the location and uniqueness of x∗ . These concerns constitute our motivation for writing this article. In particular, we find computable convergence radius and error estimates relying only on the derivative appearing on the scheme and generalized conditions on F 0 . That is how we extend the utilization of the scheme (39.2). Notice that local convergence results on iterative schemes are significant since they reveal how difficult it is to pick starting points x0 . Our idea can be used analogously on other schemes and for the same reasons because it is so general. The local analysis is developed in Section 2, whereas the examples appear in Section 3.

2.

Local Analysis

It is easier tor the local convergence analysis of scheme (39.2) to develop real parameters and functions. Set T = [0, ∞). Suppose equation: (a) ξ0 (t) − 1 = 0 has a minimal solution r0 ∈ T − {0}, for some function ξ0 : T −→ T nondecreasing and continuous. Set T0 = [0, r0). (b) ζ1 (t) − 1 = 0 has a minimal solution ρ1 ∈ T0 − {0}, where ξ : T0 −→ T, ξ1 : T0 −→ T are nondecreasing with ζ1 : T0 −→ T defined as ζ1 (t) =

R1 0

ξ((1 − θ)t)dθ + |1 − α| 1 − ξ0 (t)

R1 0

ξ1 (θt)dθ

.

(c) p(t) − 1 = 0 has a minimal solution r p ∈ T0 − {0}, where p : T0 −→ T is defined as p(t) = |β|ξ0 (ζ1 (t)t) + |1 − β|ξ0 (t). Set r = min{r0 , r p } and T1 = [0, r). (d) ζ2 (t) − 1 = 0 has a minimal solution ρ2 ∈ T1 − {0}, where ζ2 : T1 −→ T is defined as ζ2 (t) =

R1 0

ξ((1 − θ)t)dθ 1 − ξ0 (t)

3(ξ0 (t) + ξ0 (ζ1 (t)t)) 01 ξ1 (θt)dθ + . 4(1 − ξ0 (t))(1 − p(t) R

Kou’s Family of Schemes

339

Define parameter ρ as ρ = min{ρi }, i = 1, 2.

(39.3)

It shall be shown that ρ is a convergence radius for scheme (39.2). Set T2 = [0, ρ). Notice that 0 ≤ ξ0 (t) < 1 (39.4) 0 ≤ p(t) < 1

(39.5)

0 ≤ ζi (t) < 1

(39.6)

and hold for all t ∈ T2 . ¯ ∗ , λ) we denote the closure of open ball U(x∗ , λ) with center x∗ ∈ X and of radius By U(x λ > 0. We suppose from now on that x∗ is a simple solution of an equation F(x) = 0. We shall base the analysis on conditions (C) with functions ξ0 , ξ, and ξ1 as previously given. Suppose: (c1) For each u ∈ Ω kF 0 (x∗ )−1 (F 0 (u) − F 0 (x∗ ))k ≤ ξ0 (ku − x∗ k). Set Ω0 = U(x∗ , r0 ) ∩ Ω. (c2) For each u, v ∈ Ω0 kF 0 (x∗ )−1 (F 0 (v) − F 0 (u))k ≤ ξ(kv − uk) and kF 0 (x∗ )−1 F 0 (u)k ≤ ξ1 (ku − x∗ k). ¯ ∗ , ρ) ⊂ Ω. (c3) U(x (c4) There exists ρ∗ ≥ ρ satisfying

Z 1 0

¯ ∗ , ρ∗ ) ∩ Ω. ξ0 (θρ∗ )dθ < 1. Set Ω1 = U(x

Next, the main result follows the local convergence of the scheme (39.2) by conditions (C). Theorem 70. Suppose conditions (C) hold. Then, sequence {xn } converges to x∗ provided that x0 ∈ U(x∗ , ρ) − {x∗ }. Moreover, the only solution of equation F(x) = 0 in the set Ω1 given in (c4) is x∗ . Proof. We shall show using induction on j that ky j − x∗ k ≤ ζ1 (kx j − x∗ k)kx j − x∗ k ≤ kx j − x∗ k < ρ

(39.7)

kx j+1 − x∗ k ≤ ζ2 (kx j − x∗ k)kx j − x∗ k ≤ kx j − x∗ k,

(39.8)

and with radius ρ and functions ζi are given previously.

340

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al. Let u ∈ U(x∗ , ρ) − {x∗ }. Using (39.3), (39.4) and (c1), we have kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ ξ0 (kx − x∗ k) ≤ ξ0 (ρ) < 1

(39.9)

which together with a lemma on inverses of linear operators due to Banach [4] give F 0 (x)−1 ∈ L(Y, X) with kF 0 (x)−1 F 0 (x∗ )k ≤

1 . 1 − ξ0 (kx − x∗ k)

(39.10)

If x = x0 , iterate y0 is well defined by the first sub-step of scheme (39.2), and we write y0 − x∗ = x0 − x∗ − F 0 (x0 )−1 F(x0 ) + (1 − α)F 0 (x0 )−1 F(x0 ) = [F 0 (x0 )−1 F 0 (x∗ )] ×[

Z 1 0

F 0 (x∗ )−1 (F 0 (x∗ + θ(x0 − x∗ )) − F 0 (x0 ))dθ(x0 − x∗ )]

+(1 − α)F 0 (x0 )−1 F 0 (x∗ ) ×

Z 1 0

F 0 (x∗ )−1 F 0 (x∗ + θ(x0 − x∗ ))dθ(x0 − x∗ ).

(39.11)

By (39.3), (39.6) (for i = 1), (c2), (39.10) (for x = x0 ) and (39.11), we get ξ((1 − θ)kx0 − x∗ k)dθ + |1 − α| 01 ξ1 (θkx0 − x∗ k)dθ)kx0 − x∗ k ky0 − x∗ k ≤ 1 − ξ0 (kx0 − x∗ k) ≤ ζ1 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k < ρ. (39.12) (

R1

R

0

That is (39.7) holds for j = 0 and y0 ∈ U(x∗ , ρ). Next, it is shown that A0 = βF 0 (y0 ) + (1 − β)F 0 (x0 ) is invertible. By (39.3), (39.5), (c1) and (39.12), we obtain kF 0 (x∗ )−1 (A0 − F 0 (x∗ ))k = kF 0 (x∗ )−1 (βF 0 (y0 ) + (1 − β)F 0 (x0 ) − βF 0 (x∗ ) − (1 − β)F 0 (x∗ ))k ≤ |β|kF 0 (x∗ )−1 (F 0 (y0 ) − F 0 (x∗ ))k

+|1 − β|kF 0 (x∗ )−1 (F 0 (x0 ) − F 0 (x∗ ))k

≤ |β|ξ0 (ky0 − x∗ k) + |1 − β|ξ0 (kx0 − x∗ k) ≤ p(kx0 − x∗ k ≤ p(ρ) < 1,

so 0 kA−1 0 F (x∗ )k ≤

1 . 1 − p(kx0 − x∗ k)

(39.13)

Moreover, x1 is well defined by the second sub-step of the scheme (39.2) from which we can also write 3 x1 − x∗ = x0 − x∗ − F 0 (x0 )−1 F(x0 ) + A−1 (F 0 (y0 ) − F 0 (x0 ))F 0 (x0 )−1 F(x0 ). 4 0

(39.14)

Kou’s Family of Schemes

341

In view of (39.3), (39.6) (for i = 2), (39.10) (for x = x) ), (39.12), (39.13) and (39.14), we have "R 1 0 ξ0 ((1 − θ)kx0 − x∗ k)dθ kx1 − x∗ k ≤ 1 − ξ0 (kx0 − x∗ k) # R 3(ξ0 (kx0 − x∗ k) + ξ0 (ky0 − x∗ k)) 01 ξ1 (θkx0 − x∗ k)dθ + kx0 − x∗ k 4(1 − ξ0 (kx0 − x∗ k))(1 − p(kx0 − x∗ k)) ≤ ζ2 (kx0 − x∗ k)kx0 − x∗ k ≤ kx0 − x∗ k.

(39.15)

That is (39.8) holds for j = 0 and x1 ∈ U(x∗ , ρ). Then, replace x0 , y0 , x1 by xm , ym, xm+1 in the previous calculations to complete the induction for (39.7) and (39.8). Hence, by the estimation kxm+1 − x∗ k ≤ qkxm − x∗ k < ρ, (39.16) where q = ζ2 (kx0 − x∗ k) ∈ [0, 1), we conclude lim xm = x∗ and xm+1 ∈ U(x∗ , ρ). Consider m−→∞

H= get

Z 1 0

0

F (x∗ + θ(z − x∗ ))dθ for some z ∈ Ω1 with F(z) = 0. Then, by (c1) and (c4), we kF 0 (x∗ )−1 (H − F 0 (x∗ ))k ≤

Z 1 0

ξ0 (θkz − x∗ ||)dθ ≤

Z 1 0

ξ0 (θρ∗ )dθ < 1,

so z = x∗ follows by the invertibility of H and the identity 0 = F(z) − F(x∗ ) = H(z − x∗ ).  Remark 57.

1. In view of (c1) and the estimate

kF 0 (x∗ )−1 F 0 (x)k = kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ )) + Ik

≤ 1 + kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ 1 + ϕ0 (kx − x∗ k)

the second condition in (c2) can be dropped and ϕ1 can be replaced by ϕ1 (t) = 1 + ϕ0 (t) or ϕ1 (t) = 1 + ϕ0 (r0 ), since t ∈ [0, r0 ). 2. The results obtained here can be used for operators F satisfying autonomous differential equations [2] of the form F 0 (x) = P(F(x)) where P is a continuous operator. Then, since F 0 (x∗ ) = P(F(x∗ )) = P(0), we can apply the results without actually knowing x∗ . For example, let F(x) = ex − 1. Then, we can choose: P(x) = x + 1.

342

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

3. Let ϕ0 (t) = L0 t, and ϕ(t) = Lt. In [2, 3] we showed that rA = gence radius of Newton’s method:

2 is the conver2L0 + L

xn+1 = xn − F 0 (xn )−1 F(xn ) for each n = 0, 1, 2, · · ·

(39.17)

under the conditions (h1) - (h3). It follows from the definition of α, that the convergence radius ρ of the method (39.2) cannot be larger than the convergence radius rA of the second order Newton’s method (39.17). As already noted in [2, 3] rA is at least as large as the convergence radius given by Rheinboldt [20] 2 , (39.18) 3L where L1 is the Lipschitz constant on D. The same value for rR was given by Traub [22]. In particular, for L0 < L1 we have that rR =

rR < rA and

rR 1 L0 → as → 0. rA 3 L1 That is the radius of convergence rA is at most three times larger than Rheinboldt. 4. We can compute the computational order of convergence (COC) defined by     kxn+1 − x∗ k kxn − x∗ k ξ = ln / ln kxn − x∗ k kxn−1 − x∗ k or the approximate computational order of convergence     kxn+1 − xn k kxn − xn−1 k ξ1 = ln / ln . kxn − xn−1 k kxn−1 − xn−2 k

3.

Numerical Examples

Example 65. Consider the kinematic system F10 (x) = ex , F20 (y) = (e − 1)y + 1, F30 (z) = 1 ¯ 1), x∗ = with F1 (0) = F2 (0) = F3 (0) = 0. Let F = (F1 , F2 , F3 ). Let X = Y = R3 , Ω = U(0, T T (0, 0, 0) . Define function F on Ω for w = (x, y, z) by F(w) = (ex − 1, Then, we get

e−1 2 y + y, z)T . 2



 ex 0 0 F 0 (v) =  0 (e − 1)y + 1 0  , 0 0 1 1

1

so ξ0 (t) = (e − 1)t, ξ(t) = e e−1 t, ξ1 (t) = e e−1 . Then, the radii are: ρ1 = 0.0402645, ρ2 = 0.109833.

Kou’s Family of Schemes

4.

343

Conclusion

Kou’s family of schemes for solving equations on the real line is rewritten in a Banach space setting. Then, the local convergence of this family is provided under generalized conditions on the derivative. We also identify the members of this family with the largest convergence radius.

References [1] Adomian G., Solving Frontier Problem of Physics: The Decomposition Method, Kluwer Academic Publishers, Dordrecht, 1994. [2] Amat S., Busquier S., Plaza S., Review of some iterative root-finding methods from a dynamical point of view, Sci. Ser. A: Math. Sci. 10(2004) 3-35. [3] Argyros I. K., A unifying local-semilocal convergence analysis and applications for two-point Newton-like methods in Banach spaces, J. Math. Anal. Appl. 298 (2004) 374-397. [4] Argyros I. K., Convergence and Applications of Newton-Type Iterations, SpringerVerlag, New York, 2008. [5] Argyros I. K., A semilocal convergence analysis for directional Newton methods, Math. Comp. 80 (2011) 327-343. [6] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [7] Argyros I. K., Magre˜na´ n A. A., Iterative method and their dynamics with applications, CRC Press, New York, USA, 2017. [8] Argyros I. K., George S., Magre˜na´ n A. A., Local convergence for multi-point- parametric Chebyshev-Halley-type method of higher convergence order. J. Comput. Appl. Math. 282, 215-224 (2015). [9] Argyros I. K., Magre˜na´ n A. A., A study on the local convergence and the dynamics of Chebyshev-Halley-type methods free from second derivative. Numer. Algorithms 71, 1-23, (2015). [10] Argyros I. K., George S., On the complexity of extending the convergence region for Traubs method, Journal of Complexity 56, 101423. [11] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020. [12] Artidiello S., Chicharro F., Cordero A., Torregrosa J. R., Local convergence and dynamical analysis of a new family of optimal fourth-order iterative methods, Int. J. Comput. Math. 90(10)(2013) 20492060.

344

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[13] Byrne C., A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Problems, 20(2004)103-120. [14] Cordero A., Guasp L., Torregrosa J. R., Choosing the most stable members of Kou’s family of iterative methods, J. Comput. Appl. Math., 330, (2018), 759-769. [15] Ezquerro J. A.,Gutierrez J. M., Hernandez M. A., Salanova M. A., Chebyshevlike methods and quadratic equations, Rev. Anal. Numr. Theor. Approx. 28(1999) 2335. [16] Homeier H. H. H., On Newton-type methods with cubic convergence, J. Comput. Appl. Math., 176(2005), 425-432. [17] Kou J., Li Y., Wang X., Fourth order iterative methods free from second derivative, Appl. Math. Comput. 184(2007) 880-885. [18] Magrenan A. A.,Gutierrez J. M., Real dynamics for damped Newton’s method applied to cubic polynomials, J. Comput. Appl. Math. 275(2015)527-538. [19] Petkovic M., Neta B., Petkovic L. D., Dzunic J., Multi point Methods for Solving Nonlinear Equations, Academic Press, Amsterdam, 2013. [20] Rheinboldt W. C., An adaptive continuation process for solving systems of nonlinear equations, In Mathematical models and numerical methods (Tikhonov A. N. et al. eds.) pub.3, (1977), 129-142 Banach Center, Warsaw Poland. [21] Sharma J. R., Sharma R., A new family of modified Ostrowskis method with accelerated eighth-order convergence, Numer. Algorithms 54 (2010)445458. [22] Traub J. F., Iterative Methods for the Solution of Equations, Chelsea Publishing Company, New York,1982.

Chapter 40

Multi-Step Steffensen-Line Methods 1.

Introduction

We present a finer than before semi-local convergence analysis of multi-step Steffensen-like methods involving differentiable or non-differentiable operators to solve equations defined on the finite-dimensional Euclidean space. Our idea is based on the center continuity conditions and is so general that it can be used to extend the applicability of other methods. The advantages are obtained under the same computational cost as in previous studies. Our technique can be used to extend the applicability of other methods in a similar way. We are concerned with the problem of approximating a locally unique solution x∗ of the equation G (x) = 0, (40.1) where G : D ⊆ Ri −→ Ri is a continuous operator and D is nonempty and open. Finding x∗ is one of the most challenging and useful problems in computational sciences since many problems reduce to solving equations (40.1). Most solution methods are iterative since closed-form solutions are hard to find. We study the semi-local convergence of the multi step Steffensen-like method defined in [10] by: vi+1 = vi − [vi − Tol, xi + Tol; G ]−1G (vi ), i = 0, 1, 2, . . ., n0 − 1, x0 = vn0

yn = xn − G (xn ) zn = xn + G (xn )

(40.2)

xn+1 = xn − [yn , zn ; G ], n = 0, 1, 2, . . ., where for tol > 0, Tol = (tol,tol, . . .,tol) ∈ Ri . Sufficient convergence criteria for the semilocal convergence of method (40.2) were given in [10]. But the convergence domain is small in general. That is why, we provide a new semi-local convergence analysis with advantages: (a) Weaker sufficient convergence criteria; (b) Tighter upper bounds on kxn − x∗ k, kxn+1 − xn k.

346

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

(c) An at least as precise information on the location of x∗ . These advantages are also obtained under the same computational cost. The results are presented next.

2.

Semi-Local Convergence

Our extension of the semi-local convergence for method (40.2) is built on some definitions and auxiliary results. First of all, from now on G : D ⊆ Ri −→ Ri is a Fr´echet differentiable opeartor. Suppose operator [u1 , u2 ; G ] =

Z 1 0

G 0(u2 + θ(u1 − u2 ))dθ

(40.3)

is well defined for all u1 , u2 ∈ D with u1 6= u2 . In case G is Fr´echet differentiable, we have G 0(x) = [x, x; G ]. Definition 22. Let v0 ∈ D. We say that G 0 is center Lipschitz continuous on D if there exists L0 > 0 such that kG 0(x) − G 0 (v0 )k ≤ L0 kx − v0 k (40.4) holds for all x ∈ D. Let γ > 0 and set D0 = U(v0 ,

1 ) ∩ D. L0 γ

(40.5)

Definition 23. We say that G 0 is restricted Lipschitz continuous on D0 if there exists L > 0 such that kG 0(x) − G 0 (y)k ≤ Lkx − yk (40.6) holds for all x, y ∈ D0 .

Definition 24. We say that G 0 is Lipschitz continuous on D if there exists L1 > 0 such that kG 0(x) − G 0 (y)k ≤ L1 kx − yk

(40.7)

holds for all x, y ∈ D. Remark 58. It follows from these definitions that L0 ≤ L1

(40.8)

Ł ≤ L1

(40.9)

D0 ⊆ D.

(40.10)

as well as hold, since Notice that L0 = L0 (D), L = L(L0 , D0 ), L1 = L1 (D), and D0 is used to define L. Moreover, we suppose from now on that L0 ≤ L. (40.11)

Otherwise (i.e. if L < L0 ) the results that follow also hold with L0 replacing L.

Multi-Step Steffensen-Line Methods

347

We shall use conditions (C): (c1) F 0 (v0 )−1 ∈ L(Ri , Ri) with kF 0 (v0 )−1 k ≤ γ and kF(v0 )k ≤ δ0 . (c2) Conditions (40.4) and (40.6) hold. Lemma 20. Under conditions (C) further suppose (a) U(v0 , ρ0 ) ⊆ D, ρ0 = ρ + α, ρ > 0, kTolk ≤ α and γL0 ρ0 < 1.

(40.12)

Then, operator [u1 , u2 ; G ]−1 ∈ L(Ri , Ri) and estimate k[u1 , u2 ; G ]−1k ≤

γ 1 − γL0 ρ0

(40.13)

holds. (b) vi−1 , vi ∈ D0 for i = 0, 1, 2, . . ., n0 , then the estimate kG (vi)k ≤

L (α + kvi − vi−1 k)kxi − xi−1 k 2

(40.14)

holds. (c) xi−1 , xi ∈ D0 for i ≥ 1, then the estimate kG (vi)k ≤

L (kG (xi−1)k + kxi − xi−1 k)kxi − xi−1 k 2

(40.15)

holds. Proof. (a) Using (40.3), (40.4) and (c1), we write in turn that kG 0(v0 )−1 ([u1 , u2 ; G ] − G 0(v0 ))k 0

−1

≤ kG (v0 ) kk ≤ γL0

Z 1 0

Z 1 0

(G 0(u2 + θ(u1 − u2 )) − G 0 (v0 ))dθk

kθ(u1 − v0 ) + (1 − θ)(u2 − v0 )kdθ

≤ γL0 ρ0 < 1.

(40.16)

It then follows from (40.16) and a lemma due to Banach on inverse linear operator [13] that (40.13) holds. The rest of the results listed below are also shown by simply replacing L1 with L in the corresponding results in [10]. As in [10], let Ai = [vi − Tol, vi + Tol; G ], Bi = [xi − G (xi), xi + G (xi); G ], a0 = ` 1 γ2 Lδ0 , b0 = γLTol, T = (b0 + `a0 ), ` = and define g : R −→ R by 2 1 − b0 − γLρ g(t) = −2γ3 L3 t 3 + (4 − 5b0 )γ2 L2 t 2 − (2 + a0 − 5b0 + 3b20 )γLt + 2a0 (1 − b0 ). Notice that g(t) −→ −∞ ad t −→ ∞ and g(0) = a0 (1 − b0 ) > 0. Then, let R stand for the minimal positive root of equation p(t) = 0. We use [z] to denote the integer part of z ∈ R.

348

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Theorem 71. Under conditions (C) further suppose that there exists tol > 0 such that b0 < 1,

(40.17)

T < 1,

(40.18)

ρ
, 1 − 2γ0 δ0 if α ≤

Then, the conclusions of Theorem 71 hold for method (40.2). Proposition 14. Under the conditions of Theorem 72 further suppose that ¯ 0 + µ¯ 0 (ku1 − v0 + Tolk + ku2 − v0 − Tolk) (a) k[u1, u2 ; G ] − [v0 − Tol, v0 + Tol; G ]k ≤ λ for each u1 , u2 ∈ D2 ; and (b) there exists ρ∗ such that ¯ 0 + µ¯ 0 (ρ + ρ∗ + 2α)) < 1. γ0 (λ

(40.25)

Set D3 = U[x∗ , ρ∗ ] ∩ D. Then, the only solution of equation G (x) = 0 in the domain D3 is x∗ .

350

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Proof. As in Proposition 13 but using (nc2), we get in turn that −1 kA−1 0 (M − A0 )k ≤ kA0 kkA0 − Mk ¯ 0 + µ¯ 0 (kx∗∗ − v) k + kx∗ − v0 k + 2α)) ≤ γ0 (λ

¯ 0 + µ¯ 0 (ρ∗ + ρ + 2α)) < 1, ≤ γ0 (λ

so again x∗∗ = x∗ . Remark 60. Notice that λ0 ≤ λ1 , µ0 ≤ µ1 , λ ≤ λ1textnormaland µ ≤ µ1 .

(40.26)

Hence, comments similar to the ones in Remark 59 can follow.

3.

Conclusion

The Steffensen-like method (40.2) has been extended under the same conditions as before [10] with advantages as stated in the introduction.

References [1] Alarcon, V., Amat S., Busquier S., Lopez, D. J., A Steffensens type method in Banach spaces with applications on boundary-value problems, Journal of Computational and Applied Mathematics 216 (2008) 243- 250. [2] Amat S., Ezquerro J. A., Hernandez-Veron, M. A., On a Steffensen-like method for solving nonlinear equations, Calcolo (2016) 53:171-188 DOI 10.1007/s10092-0150142-3. [3] Amat, Busquier S., S., Convergence and numerical analysis of two-step Steffensen’s methods, Comput. Math. Appl. 49 (2005) 13-22. [4] Amat, Busquier S., S., A two-step Steffensen’s under modified convergence conditions, J. Math. Anal. Appl. 324 (2006) 1084-1092. [5] Argyros I. K., Computational theory of iterative methods. Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Co. New York, U.S.A, 2007. [6] Argyros I. K., Hilout S., Weaker conditions for the convergence of Newton’s method. J. Complexity 28 (2012) 364–387. [7] Argyros I. K., Magr´en˜ an A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018. [8] Argyros I. K., Magr´en˜ an A. A., Iterative methods and their dynamics with applications, CRC Press, New York, USA, 2017.

Multi-Step Steffensen-Line Methods

351

[9] Ezquerro J. A., Hernandez, M. A., Romero N., Velasco A. I., On Steffensen’s method on Banach spaces, Journal of Computational and Applied Mathematics, 249, (2013), 9-23. [10] Hernandez Veron, M. A, Yadav S., Magrenan A. A, Martinez E., Singh, S., On the complexity of extending the accessibility for Steffensen-type methods, J. Complexity, (2021). [11] Hernandez, M. A., Rubio, M. J., A uniparametric family of iterative processes for solving nondifferentiable equations, J. Math. Anal. Appl., 275, (2002), 821-834. [12] Hilout S., Convergence analysis of a family of Steffensen-type methods for generalized equations, J. Math. Anal. Appl., 339, (2008), 753-761. [13] Kantorovich L. V., Akilov G. P., Functional Analysis, second edition, Pergamon Press, Oxford, 1982, translated from Russian by Howard L. Silcock. [14] King H. T., Traub J. F., Optimal order of one-point and multipoint iteration, Carnegie Mellon University, Research Showcase@CMU. Computer Science Department, Paper 1747, 1973. [15] Moccari M., Lotfi T., On a two-step optimal Steffensen-type method: Relaxed local and semi-local convergence analysis and dynamical stability, J. Math. Anal. Appl., 468, (2018), 240-269.

Chapter 41

Newton-Like Scheme for Solving Inclusion Problems 1.

Introduction

We are concerned with finding solutions x∗ to the inclusion problem F(x) ∈ K

(41.1)

where F : Ω ⊂ B1 −→ B2 is a Fr´echet differentiable operator between Banach spaces B1 and B2 , Ω is an open set and K 6= 0/ is a closed convex cone. The solution x∗ is taken as the limit of the Newton-like method xn+1 = xn + rn rn ∈ arg min{krk : F(xn ) + A(xn )r ∈ K},

(41.2)

where A : Ω −→ L(B1 , B2 ) approximates F 0 (.) If A(x) = F 0 (x) for all x ∈ Ω, then (41.2) reduces to Robinson’s method for solving (41.1) [20]. A Kantorovich-like result for method (41.2) was given in [21]. But the convergence region is small in general limiting the applicability of the method (41.2). We determine a subset Ω0 of Ω where the iterates of the sequence {xn } also lie. Then, the new Lipschitz-like parameters are at least as tight leading to: (i) Weaker sufficient semi-local convergence criteria, and (ii) Tighter error bounds on kxn+1 − xn k and kxn − x∗ k. Hence, the applicability of the method (41.2) is extended. It is worth noting that these benefits are obtained under the same computational effort since the new Lipschitz parameters are special cases of the old ones. Relevant work and related topics can be found in [1]-[19]. Our technique can extend other methods too in an analogous way [1]-[20].

2.

Semi-Local Convergence

We assume familiarity with the standard concepts and symbols. More details can be found in [1]-[19]. Next, the main semi-local convergence result is developed.

354

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Theorem 73. Let x0 ∈ Ω. Suppose that operator F satisfies Range Tx0 = B2 ( Robinson’s condition), where Tx0 : B1 ⇒ B2 is sub-linear operator. Moreover, suppose kTx−1 F(x0 )k = δ, 0 kTx−1 (A(v) − A(x0 ))k ≤ Lkv − x0 k + `, v ∈ Ω, 0 L > 0 and ` ≥ 0; Set Ω0 = Ω ∩U(x0 ,

1−` ), L

kTx−1 (F 0 (v) − F 0 (w))k ≤ K0 kv − wk, v, w ∈ Ω0 , 0 K0 > 0;

kTx−1 (F 0 (v) − A(v))k ≤ M0 kv − x0 k + m0 , v ∈ Ω0 , 0

M0 ≥ 0, m0 ≥ 0; ` + m0 < 1, α0 : max{1,

L + M0 ), F(x0 ) 6= 0 K0

α0 K0 δ 1 ≤ 2 (1 − ` − m0 ) 2 p 1 − `0 − m0 − (1 − `0 − m0 )2 − 2α0 K0 δ s∗ = , α0 K) h0 =

(41.3)

(41.4)

f0 (sn ) U¯ = U[x0 , s∗ ] ⊂ Ω. Furthermore, define sequence {sn } as s0 = 0, s1 = δ sn+1 = sn + , g0 (sn ) or 1 K0 ( s2 − (1 − m0 − `)sn g0 (sn ) 2 n +δ − (K0 − M0 − L)sn−1 (sn − sn−1 )),

sn+1 = sn +

where

1 f0 (t) = α0 K0 t 2 − (1 − ` − m0 )t + δ 2

and g0 (t) = 1 − ` − Lt.

¯ remains in U¯ and Then, sequence {xn } generated by method (41.2) is well defined in U, ¯ converges to a solution x∗ ∈ U of the inclusion problem (41.1), so that kxn+1 − xn k ≤ sn+1 − sn and kx∗ − xn k ≤ s∗ − tn . Proof. We use Ω0 , which is needed and tighter than Ω used in [21]. The rest is identical to Theorem 5 in [21].

Newton-Like Scheme for Solving Inclusion Problems

355

Remark 61. The corresponding parameters in [21] are such that K0 ≤ M,

(41.5)

M0 ≤ M,

(41.6)

m0 ≤ m,

(41.7)

Ω0 ≤ Ω,

(41.8)

since

and majorant sequences {tn} are as t0 = 0,t1 = δ, tn+1 = tn +

f (tn ) , or g(tn)

1 K 2 ( t − (1 − m − `)tn g(tn) 2 n +δ − (K − M − L)tn−1 (tn − tn−1 )),

tn+1 = tn +

where

1 f (t) = αKt 2 − (1 − ` − L)t + δ 2

and g(t) = 1 − ` − Lt. It then follows by a simple induction argument that 0 ≤ sn ≤ tn , 0 ≤ sn+1 − sn ≤ tn+1 − tn , s∗ ≤ t∗ and

1 1 ⇒ h0 ≤ , 2 2 justifying the benefits as claimed in the introduction. Direct study of the second {sn } can give even weaker convergence criteria [2,3,4,5,6]. Examples where (41.5)-(41.8) are strict can be found in [2,3,4,5,6]. h≤

3.

Conclusion

The convergence ball for iterative schemes is small in general limiting their applicability. That is why, a new technique is presented that allows the extension of the domain without additional conditions. This technique is based on our new idea of restricted convergence domain and the center-Lipschitz condition.

356

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

References [1] Adly S., Van Ngai H., Nguyen V. V. (2016), Newtons method for solving generalized equations: Kantorovichs and Smales approaches. J. Math. Anal. Appl. 439(1):396418. [2] Argyros I. K. (2008), Convergence and applications of Newton-type iterations. Springer, New York [3] Argyros I. K, Hilout S. (2010), Inexact Newton-type methods. 26(6):577590.

J. Complex.

[4] Argyros I. K., Convergence and applications of Newton-type iterations. Springer, New York (2008) [5] Argyros I. K., Magr´en˜ an A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018. [6] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publisher, NY, 2021. [7] Bonnans J. F. (1994), Local analysis of Newton-type methods for variational inequalities and nonlinear programming. Appl Math Optim 29(2):161186. [8] Chen X., Nashed Z., Qi L. (1997), Convergence of Newtons method for singular smooth and nonsmooth equations using adaptive outer inverses. SIAM J. Optim. 7(2):445462. [9] Cibulka R., Dontchev A., Geoffroy M. H. (2015) Inexact newton methods and DennisMor theorems for nonsmooth generalized equations. SIAM J. Control Optim. 53(2):10031019. [10] Daniel J. W. (1973) Newtons method for nonlinear inequalities. Numer. Math. 21:381387. [11] Dennis J. E. Jr, (1970) On the convergence of Newton-like methods. In: Numerical methods for nonlinear algebraic equations (Proc. Conf., Univ. Essex, Colchester, 1969). Gordon and Breach, London, pp 163181. [12] Dontchev A. L., Rockafellar R. T. (2009), Implicit functions and solution mappings. Springer Monographs in Mathematics. A view from variational analysis. Springer, Dordrecht. [13] Ferreira O. P., Goncalves M. L. N., Oliveira P. R. (2013), Convergence of the GaussNewton method for convex composite optimization under a majorant condition. SIAM J. Optim. 23(3):17571783. [14] He Y., Sun J. (2005), Error bounds for degenerate cone inclusion problems. Math. Oper. Res. 30(3):701-717,

Newton-Like Scheme for Solving Inclusion Problems

357

[15] Josephy N. (1979), Newtons method for generalized equations and the PIES energy model. University of WisconsinMadison, Madison. [16] Kantorovich L. V., Akilov G. P. (1964), Functional analysis in normed spaces. The Macmillan Co., New York. [17] Khan A. A., Sama M. (2013), A new conical regularization for some optimization and optimal control problems convergence analysis and finite element discretization. Numer. Funct. Anal. Optim. 24(8): 861895. [18] Ortega J. M., Rhienboldt W. C. (1970), Iterative solution of nonlinear equations in several variables. Academic Press, New York. [19] Pietrus A., Jean-Alexis C. (2013), Newton-secant method for functions with values in a cone. Serdica Math J. 39(34):271286 [20] Robinson S. M. (1972), Extension of Newtons method to nonlinear functions with values in a cone. Numer. Math. 19:341347. [21] Silva G. N., Santos P. S. M., Souza S. S. (2018), Extended Newton-type method for nonlinear functions with values in a cone, Comput. Appl Math. 37(4), 50825097.

Chapter 42

Extension of Newton-Secant-Like Method 1.

Introduction

A plethora of problems from computational sciences reduce to solving the variational inclusion 0 ∈ ϕ(x) + ψ(x) + G(x),

(42.1)

where X,Y are Banach spaces ϕ : D ⊆ X −→ Y is a differentiable operator, ψ : D ⊂ X −→ Y has a divided difference of order one, x∗ is a solution of problem (42.1) and G : X ⇒ Y is a set valued operator. The following Algorithm was studied in [12]. Newton-Secant-like cone (ϕ, ψ,C, x0 , x1 , θ). / stop. Step 1. If T −1 (x0 , x1 )(−ϕ(x1 ) − ψ(x1 )) = 0, Step 2. Do while λ > θ (i) Pick x as a solution of the problem minimize {kx − x1 k : ϕ(x1 ) + ψ(x1 ) + (G0 (x1 ) + [x0 , x1 ; ψ](x − x1 ) ∈ C}

(42.2)

(ii) Compute λ = kx − x1 k; x0 = x1 , x1 = x. Step 3. Return x. Here C denotes cone, T a convex process and G = −C [9]-[16]. If C = {0} (42.2) reduces to the one in [13]. Moreover, if ψ = 0 then (42.2) reduces to the one studied in [6]. The convergence analysis of (42.2) was given in [12]. But the convergence domain is small in general limiting its applicability. We show how to extend the convergence domain without new conditions. This way, we extend the applicability of the method (42.2)

360

2.

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Majorizing Sequences

Let α ≥ 0, β ≥ 0, L > 0, M > 0 and K > 0 be given parameters. Define scalar sequence {vn } by v0 = 0, v1 = α, v2 = β, L vn+1 = vn + M( (vn − vn−1 )2 + 2K(vn − vn−1 )(vn − vn−2 )). 2

(42.3)

Then the following result for majorizing sequences was given in [12] for method (42.2). Lemma 21. Suppose: (a) α ≤ M, β ≤ 2α L (b) η = M 2 ( + 2K) < 1. 2 Then, sequence {vn } generated by (42.3) is well defined in U(v1 , q0 ), stay in U(v1 , q0 ), M ∞ wj where q0 = ∑ η , {w j } is the Fibonacci sequence given by w0 = w1 = 1, w j+1 = w j + η j=1 √ qn 1+ 5 √1 ∗ ∗ w j−1 , j = 1, 2, . . .. Moreover, v = lim vn exists and 0 ≤ v − vn ≤ cη 5 , q1 = . n−→∞ 2 Remark 62. Convergence criteria (a) and (b) may not be fulfilled. That is why we revisit the convergence of sequence by defining t0 = 0,t1 = α,t2 = β, tn+1 = tn + M0 (

L0 (tn − tn−1 )2 + 2K0 (tn − tn−1 )(tn − tn−2 )), 2

(42.4)

where M0 > 0, L0 > 0 and K0 > 0 are given parameters. Next, we present a second result on majorizing sequences using our technique of recurrent scalar functions. Lemma 22. Suppose: L0 1 L0 (a)α ≤ β, M0 ( + 2K0 )(β − α) < , M0 ( (β − α) + 2K0 β) < 1 2 2 2 M0 ( L2 + 2K0 )(β − α) L0 } 0, x0 ∈ D be such that U(x0 , ρ) ⊂ D. Next, we introduce and compare different types of majorant conditions for G. Definition 26. We say that a continuously differentiable function g0 : [0, ρ) −→ R is a center majorant at x0 ∈ D for G if kTx−1 (G0 (x) − G0 (x0 ))k ≤ g00 (kx − x0 k) − g0 (0) 0

(43.6)

for all x ∈ U(x0 , ρ).

Suppose that function g00 (t) has a smallest zero ρ0 ∈ (0, ρ). Set D0 = U(x0 , ρ0 ).

Definition 27. We say that a continuously differentiable function g : [0, ρ0 ) −→ R is restricted majorant at x0 ∈ D0 for G if (G0 (y) − G0 (x))k ≤ g0 (kx − x0 k + ky − xk) − g0 (kx − x0 k) kTx−1 0

(43.7)

for all x, y ∈ D0 with kx − x0 k + ky − xk < ρ0 . Definition 28. We say that a continuously differentiable function g1 : [0, ρ) −→ R is restricted majorant at x0 ∈ D for G if kTx−1 (G0(y) − G0 (x))k ≤ g01 (kx − x0 k + ky − xk) − g1 (kx − x0 k) 0

(43.8)

for all x, y ∈ D with kx − x0 k + ky − xk < ρ. Remark 64. If follows from the last three definitions that

for all t ∈ [0, ρ0), since

g00 (t) ≤ g01 (t)

(43.9)

g0 (t) ≤ g01 (t)

(43.10)

D0 ⊂ D.

(43.11)

Notice that g1 = g1 (D, ρ), g0 = g0 (D, ρ) but g = g(D0, ρ) ). We shall assume that g0 (t) ≤ g(t) and g00 (t) ≤ g0 (t)

(43.12)

for all t ∈ [0, ρ0 ). Otherwise we shall use g¯ for g which is defined as the max{g0 , g} on the interval [0, ρ0). The semi-local convergence analysis for INLM was given in [13] using (43.8). But under our technique we shall use (43.7) instead (43.8). This leads to: weaker semi-local convergence criteria and more precise error bounds on the distances kvn+1 − vn k and kvn −x∗ k. Notice also that these advantages are obtained without additional conditions, since in practice the computation of function g1 requires the computation of g0 and g as special cases. Examples where (43.9)-(43.12) are strict can be found in [13].

Inexact Newton-Like Method for Inclusion Problems

367

On top of conditions (43.5)-(43.7) we assume (A1) g0 (0) > 0, g(0) > 0, g00 (0) = −1, g0 (0) = −1, (A2) g00 and g0 are convex and strictly increasing. (A3) g(t) = 0 for fixed t ∈ (0, ρ0). (A4) g(t) < 0 for fixed t ∈ (0, ρ0). Next, the main semi-local convergence result for INLM follows. Theorem 75. Under these conditions further suppose (−G(x0 ))k ≤ g(0). kTx−1 0

(43.13)

α Set α = sup{−g0 (t) : t ∈ [0, ρ)}. Pick β ∈ [0, ) and define parameters pβ = 2 pβ −(g(t) + 2β) 0 , qβ = sup{t ∈ [β, ρ0) : pβ + g0 (t) < 0}, and τβ = . Then, if sup 0 |g0 (β)|(t − β) 2 − pβ τ ∈ [0, τβ ] and v0 ∈ U(x0 , β), sequence {vn } generated by INLM is well defined for any particular choice of each rk ,  n 1 + τ2 −1 kTx0 (−G(vn ))k ≤ (g(0) + 2β), 2 remains in U(v0 , qβ ) for all n = 0, 1, 2, . . . and converges to a point x∗ ∈ U[x0 , qβ ] such that G(x∗ ) ∈ K. Moreover, if (A5) qβ < ρ0 − β, then  1 + τ 1 + τ D− g0 (qβ + β) kvn − vn+1 k ≤ kvn − vn−1 k 1−τ 2 |g00 (qβ + β)|  2|g0(β)| + g0(qβ + β) +τ kvn − vn−1 k |g00 (qβ + β)| Furthermore, if 0τ
β (i) Pick x solving minimize {kx − x0 k : F(x0 )+ M F(x0 )(x − x0 ) ∈ C}. (ii) Compute: α = kx − x0 k; x0 = x. 3. Return x.

(44.4)

372

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

If C = {0}, (44.4) reduces to the algorithm studied in [12]. The convergence domain is small in general for these algorithms [1]-[13]. Therefore it is important to increase this domain without imposing new conditions. This is shown in Section 3 for algorithm (44.4). The same technique can extend the applicability of other algorithms along the same lines [1]-[13]. Other benefits include tighter error estimates on kxn − x∗ k. By using this technique, we determine a smaller subset D0 of D also containing the iterates xn . But in this subset, the Lipschitz-type parameters are special cases of the original ones and at least as tight, leading to the aforementioned benefits.

2.

Preliminaries

We only include material to understand the results that follow. More details about the concepts introduced can be found in [5]-[13]. Definition 29. Consider Q : Ri −→ Ri to be a Lipschitz continuous mapping (locally). The Jacobian (limiting) ∂0 Q(x), x ∈ Ri is given as ∂0 Q(x) = {M ∈ L(Ri , Ri ) : ∃vm ∈ Dom (Q), lim vm = x, lim Q0 (vm ) = M}. m−→∞

m−→∞

Moreover, the Jacobian (Clarke) of Q at x ∈ Ri is denoted by ∂Q(x) and is the closed convex hull of ∂0 Q(x) [8]. Definition 30. Q is semi-smooth at x ∈ Ri if it is Lipschitzian locally at x and lim

V ∈∂Q(x+θp0 p0 −→p,θ−→0

{Vp0 }

exists for all p ∈ Ri . Moreover, Q is semi-smooth on D ⊂ Ri if it is semi-smooth at all x ∈ D. Definition 31. Q is λ− order semi-smooth at x if for all V ∈ ∂Q(x + p) Vp − Q0 (x; p) = O(kpk1+λ), for some λ ∈ (0, 1] holds.

3.

Convergence

The following auxiliary result is useful. Lemma 23. [6] Assume that mapping F is λ−order semi-smooth on D. Then, there exist constants ρ > 0 and K > 0 such that for each u, w ∈ D with kx − yk ≤ ρ and each 4F(w) ∈ ∂E(w), the following holds kF(u) − F(w) − 4F(w)(u − w)k ≤ Kku − wk1+λ . The convergence analysis is based on conditions (A):

Semi-Smooth Newton-Type Algorithms ...

373

(A1) There exist x0 ∈ D, b ≥ 0 such that kTu−1 (x0 )k ≤ b (A2) kF(u) − F(x0 )k ≤ Lkx − x0 k for all u ∈ D and some L > 0, or k4F(x0 )k ≤ L 1 Set D0 = U(x0 , ) ∩ D. L (A3) k4F(w)k ≤ µ1 for each w ∈ D0 and each 4F(w) ∈ ∂(w). 3 (A4) γ < , 2 H0 = µK0 γλ < 1 − and µ=

2γ 3

b . 1 − (µ1 + L)b

(44.5)

(44.6)

(A5) kx1 − x0 k ≤ γ, where x1 is given by (44.4), and K0 is the constant given in Lemma 44.5 but on D0 . Then, we can show the main convergence result for Algorithm (44.4) under the conditions (A) and the developed notation. Theorem 76. Under conditions (A) further assume there exists a point x0 ∈ D such that T (x0 ) maps Ri onto R j . There exists ρ∗ > 0 such that if U(x0 , ρ∗ ) ⊂ D, then, sequence {xn } ρ generated by Algorithm (44.4) is well defined remains in U(x0 , ) and converges to some 2 x∗ such that F(x∗ ) ∈ C. Moreover, the following estimates hold: kxm − x∗ k ≤ em , where

(44.7)

(1+λ)m −1 λ

em = γ

H0

(1+λ)m

1 − H0

.

(44.8)

Proof. Simply repeat the proof of Theorem 4.1 in [6]. But using (A2), (A3), Sk (x) = −(4F(xk ) − 4F (x0 ))x, kT −1 (x0 )kkSkk < 1 so kT −1 (xk ) ≤ ≤

kT −1 (x0 )k 1 − kT −1 (x0 )kkSk k b = µ. 1 − b(L + µ1 )

(44.9)

Estimate (44.9) is more precise that kT −1 (xk )k ≤

b 1 − 2bM1

(44.10)

374

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

used in [6], where it was assumed instead of (A2) that k4F(w)k ≤ 2bM1

(44.11)

for each 4F9w) ∈ ∂F(w) and w ∈ D. Remark 66. Notice that D0 ⊂ D,

(44.12)

L ≤ M1

(44.13)

µ 1 ≤ M1

(44.14)

K1 ≤ 2K

(44.15)

µ ≤ M,

(44.16)

so

H0 ≤ H = MKγλ < 1 − and H ≤ 1−

2γ ρ

(44.17)

2γ 2γ ⇒ H0 ≤ 1 − , ρ ρ

(44.18)

but not necessarily vice versa if K1 = K, D0 = D, and µ1 + L = M1 , where M=

b [6]. 1 − 2bM1

(44.19)

Estimates (44.12)-(44.18) justify the benefits as claimed in the introduction. Examples where (44.12)-(44.18) are strict can be found in [2,3,4,5]. Notice that these benefits are obtained under the same computational cost as in [6], since in practice the computation of the constants (in [6]) require that of ours as special cases.

4.

Conclusion

The semi-smooth Newton-type algorithm is extended with no additional hypotheses for solving variational inclusion problems.

References [1] Aubin J.-P., Frankowska H., Set Valued Analysis. Birkhuser, Boston (1990). [2] Argyros I. K, Hilout S. (2010), Inexact Newton-type methods. 26(6):577590.

J. Complex.

[3] Argyros I. K., Convergence and applications of Newton-type iterations. Springer, New York (2008). [4] Argyros I. K., Magr´en˜ an A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018.

Semi-Smooth Newton-Type Algorithms ...

375

[5] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publisher, NY, 2021. [6] Bernard S., Cabuzer C., Nurio S. P., Pietros A., Extended Semi-smooth Newton Method for Functions with Values in a Cone, Acta Appl. Math., 155, (2018), 8598. [7] Cabuzel C., Pietrus A., Local convergence of Newtons method for subanalytic variational inclusions. Positivity 12(3), (2008), 525-533. [8] Clarke F. H., Optimization and Nonsmooth Analysis. SIAM, Philadelphia (1990). [9] Dontchev A. L., Rockafellar R. T., Implicit Functions and Solution Mappings. Springer Monographs in Mathematics. Springer, New York (2009). [10] Pi´etrus A., Non differentiable perturbed Newtons method for functions with values on a cone. Investig. Oper. 35(1), (2014), 58-67. [11] Qi L., Sun J., Semismooth version of Newtons method. Math. Program. 58, (1993), 353-367. [12] Robinson S. M., Generalized equations and their solutions, part II: application to nonlinear programming. Math. Program. Stud. 19, (1982), 200-221. [13] Rockafellar R. T., Convex Analysis. Princeton Univ. Press, Princeton (1970).

Chapter 45

Extended Inexact Newton-Like Algorithm under Kantorovich Convergence Criteria 1.

Introduction

Let B1 , B2 are Banach spaces Ω ⊂ B1 is nonempty and open set, and G : Ω ⇒ B2 be a continuously differentiable operator in the Fr´echet-sense. Computing solutions x∗ ∈ Ω of nonlinear equation G(x) = 0

(45.1)

is of great importance in computational sciences since many applications reduce to solving this equation [1]-[7]. The solution x∗ is found in closed form only in special cases. That is why most solution algorithms are iterative. There is extensive literature on local and semilocal convergence of algorithms under the certain condition on the initial data (x0 , G.G0). But the convergence region is small. Hence, it is important to extend it, especially without new conditions. We shall show that this is possible for inexact Newton-like algorithm [1]-[7] given by: Algorithm. Given an initial point x0 and for m = 0 do until convergence (i) For the residual ρm and iterate xm , find dm such that G0 (xm)dm = −G(xm ) + ρm . (ii) Set xm+1 = xm + dm . (iii) Set m = m + 1 and go back to (i). The following condition was used in [6] kG0 (x0 )−1 ρm k ≤ αm kG0 (x0 )−1 G(xm )k1+λ,

(45.2)

378

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

where {αm } is a sequence of nonnegative numbers forcing convergence, and λ ∈ [0, 1]. Moreover, it is supposed that α = sup αm < 1. (45.3) m

The semi-local convergence benefits of our technique involve tighter error estimates on the distances kxm − x∗ k. Our technique can be used to extend the applicability of other algorithms along the same lines [1]-[7]

2.

Convergence

Certain Lipschitz-type conditions are needed, so we can compare the parameters involved. Let x0 ∈ Ω such that F 0 (x0 )−1 ∈ L(B2 , B1 ) and R > 0. Definition 32. Operator G0 (x0 )−1 G0 satisfies the center-Lipschitz condition on U(x0 , R) ⊆ Ω if kG0 (x0 )−1 (G0 (u) − G0 (x0 ))k ≤ K0 ku − x0 k (45.4) holds for all u ∈ U(x0 , R) and some K0 > 0. Set Ω0 = U(x0 ,

1 ) ∩ Ω. K0

Definition 33. Operator G0 (x0 )−1 G0 satisfies the restricted Lipschitz condition on Ω0 if kG0 (x0 )−1 (G0 (u) − G0(v))k ≤ Kkx − vk

(45.5)

holds for all u, v ∈ Ω0 and some K > 0. Definition 34. Operator G0 (x0 )−1 G0 satisfies the Lipschitz condition on U(x0 , R) if kG0 (x0 )−1 (G0 (u) − G0 (v))k ≤ K1 kx − vk

(45.6)

holds for all u, v ∈ Ω and some K1 > 0. Remark 67. We have K0 ≤ K1

(45.7)

K ≤ K1

(45.8)

Ω0 ⊆ Ω.

(45.9)

K0 ≤ K

(45.10)

and hold, since Moreover, suppose that holds. Otherwise the results that follow hold with K0 replacing K. Examples, where (45.7)-(45.10) are strict can be found in [1]-[5]. The convergence analysis in [6] used (45.6) to obtain the estimate kF 0 (v)−1 G0 (x0 )k ≤

1 1 − K1 kv − x0 k

(45.11)

Extended Inexact Newton-Like Algorithm ...

379

for all v ∈ U(x0 , R). But using the weaker (45.4) one obtains the more precise than (45.12) estimate given by kF 0 (v)−1 G0 (x0 )k ≤

1 1 − K0 kv − x0 k

(45.12)

for all v ∈ Ω0 . This modification in the proof of the main Theorem 3.1 in [6] and using K instead of K1 lead to our next result. Hence, the proof is omitted. Theorem 77. Under consitions (45.4) and (45.5) further suppose that  (1 − α)2     K(1 + α)(2(1 + α) − α(1 − α)2) , if λ = 0 1−λ η≤ σ0 R2∗ + 2λ2 R∗1+λ − 1   , α 1+λ }, if λ > 0,  min{ 1  2(1 + τ)(1 + α 1+α )

(45.13)

where η = kG0(x0 )−1 G(x0 )k,

1

K(1 + α 1+α ) σ0 = , (1 + ηKα(1 + α)(1 + α(ηλ − 1)) τ= µ=

(

α , 1 + α(ηλ − 1)

η(1 + α), if λ = 0 1 1+α η(1 + α )(1 + τ), if λ > 0,

R∗ is the unique positive solution of f 0 , f (t) =

σ0 2 t + 21−λ τt 1+λ − (1 + τ)t + µ. 2

Then, sequence {xn } generated by algorithm (45.2) exists, remains in U(x0 , R∗ ) and converges to x∗ solving (45.1) so that kx∗ − xm k ≤ s∗ − sm , where s∗ = lim sm , m−→∞

σ0 2 t + τt 1+λ − (1 + τ)t + µ, 2 f 0 (sm) sm+1 = sm − 0 . g (sm)

Remark 68. If Ω0 = Ω, and K0 = K = K1 , then Theorem 77 reduces to Theorem 3.1 in [6]. Otherwise, our Theorem is an improvement (see Remark 67) with benefits as already stated in the introduction. It is worth noticing that K0 and K are special cases of K1 , so no additional conditions have been used. Notice also that the computation of K1 in practice requires that of K0 and K. the specializations of Theorem 3.1 in [6] are also extended immediately. We leave the details to the motivated reader.

380

3.

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Conclusion

An inexact Newton-like algorithm is extended under Kantorovich convergence criteria with no additional hypotheses for solving nonlinear equations.

References [1] Aubin J.-P., Frankowska H., Set Valued Analysis. Birkhuser, Boston (1990). [2] Argyros I. K., Hilout S. (2010), Inexact Newton-type methods. 26(6):577590.

J. Complex.

[3] Argyros I. K., Convergence and applications of Newton-type iterations. Springer, New York (2008). [4] Argyros I. K., Magr´en˜ an A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018. [5] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publisher, NY, 2021. [6] Shen W., Li C., Kantorovich-type criterion for inexact Newton methods, Appl. Numer. Math., 59, (2009), 1599-1611. [7] Ypma, T. J., Local convergence of inexact Newton methods, SIAM. J. Numer. Anal., 21(1984), 583-590.

Chapter 46

Kantorovich-Type Results Using Newton’s Algorithms for Generalized Equations 1.

Introduction

Let B1 , B2 are Banach spaces Ω ⊂ B1 be an open set F : Ω ⇒ B2 be a continuously differentiable operator in the Fr´echet-sense and G : B1 ⇒ B2 be a set-valued operator whose graph is closed [6,9,10,14]. A plethora of applications from computational sciences reduce to finding a solution x∗ ∈ Ω of the generalized equation 0 ∈ F(x) + G(x).

(46.1)

It is desirable to find points x∗ in closed form. But this is possible only in special cases. That is why most solution schemes provided by researchers and practitioners are iterative. In particular, Newton’s algorithmNA) defined by 0 ∈ F(xn ) + F 0 (xn )(xn+1 − xn )

(46.2)

has been utilized by many authors to generate a sequence converging locally or semi-locally to x∗ [6,7,10,11,12,13,14]. But the convergence domain given before is small limiting the choice of the initial point x0 . In our chapter, we develop a technique that: (i) Extends the convergence domain so a wider choice of initial points becomes available. (ii) Tighter estimates on the distances kxn+1 − xn k, kxn − x∗ k are obtained. Hence, requiring less computation of iterates to obtain the desired error tolerance. These benefits are obtained under the same computational effort since in practice the computation of the old Lipschitz constants requires that of ours as special cases. Our technique can be used to extend the applicability of other algorithms along the same lines [1]-[14]. In particular, we extend the results in [2] which in turn extended earlier ones [10,11,12,13,14].

382

2.

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Convergence

We assume familiarity with the standard concepts introduced here and refer the reader to [7] for more details. The local convergence results for algorithm (46.2) are first presented. Let x∗ ∈ Ω be a solution of (46.1) and set a = F 0 (x∗ )x∗ − F(x∗ ) ∈ B1 . Suppose that set-valued operators ψ(.) = F 0 (x∗ )(.) + G(.) is metrically regular in a neighborhood U = UB1 (x∗ , r) ×UB2 (a, ρ) of (x∗ , a) with modulus θ such that U(x∗ , r) ⊂ Ω. Set b = min{r, θρ, ρ}. Next, we present certain Lipschitz-type conditions, so we can compare them. Definition 35. We say the F 0 is center Lipschitz continuous if kF 0 (x) − F 0 (x∗ )k ≤

K0 kx − x∗ k Θ

(46.3)

for all x ∈ Ω and some K0 > 0. Set Ω0 = U(x∗ ,

Θ ) ∩ Ω. K)

Definition 36. We say the F 0 is restricted Lipschitz continuous if kF 0 (u) − F 0 (v)k ≤

K ku − vk Θ

(46.4)

for all u, v ∈ Ω) and some K > 0. Definition 37. We say the F 0 is Lipschitz continuous if kF 0 (u) − F 0 (v)k ≤

K1 ku − vk Θ

(46.5)

for all u, v ∈ Ω and some K1 > 0. Remark 69. (a)Notice that K0 = K0 (Θ, x∗ , r, Ω), K1 = K1 (Θ, x∗ , r, Ω) but K = K(Θ, x∗ , r, Ω0). (b) In view of (46.3)-(46.5) and Ω0 ⊆ Ω. (46.6) It follows that K0 ≤ K1

(46.7)

K ≤ K1 .

(46.8)

and

Kantorovich-Type Results Using Newton’s Algorithms for Generalized Equations 383 In what follows we assume K0 ≤ K.

(46.9)

Otherwise, the results that follow hold with K0 replacing K. Examples, where (46.6)-(46.9) are strict can be found in [2]. Notice that (46.3) is used to define Ω0 used in our proofs instead of Ω (used in [2]). This way K can replace less precise K1 in all the proofs in [2]. Moreover, notice that we do not require F to be twice differentiable as in [2]. Theorem 78. (local convergence) Under conditions (46.3) and (46.4) further suppose 2Kr < 1.

(46.10)

Then, for any starting point 0 ∈ UB1 (x∗ , b) there exists a sequence {xn } generated by NA converging quadratically to x∗ , so that kx∗ − xn+1 k ≤

1 kx∗ − xn k2 . 2r

Proof. Simply replace K1 by K in the proof of Theorem 3.1 in [2]. Let Θ > 0, b > 0 and v ∈ Ω so that U(v, b) ⊂ Ω. Define γ = γ(Θ, v) = Θd(0, F(v) + G(v)). Theorem 79. (Semi-local convergence) Let x ∈ Ω, δ ∈ (0, 1] and r > 0, τ > 0 such that the following conditions hold. (C1) ψ = F 0 (x) + G is metrically regular on U = U(ψ, x, 4r, τ) with modulus Θ > Reg (ψ, x, 4r, τ). (C2) d(0, Q(x)) < τ, Q = F + G. √ 1− 1−δ (C3) 2aγ ≤ r, where a = δ. Then, the following assertions hold (a) There exists x∗ ∈ Ω solving equation (46.1) so that kx∗ − xk ≤ 2aγ ≤ r. (b) There exist a sequence {xn } generated bu NA with starter x converging to x∗ , so that √ n 4 1 − δ q2 kx∗ − xn k ≤ n γ, δ 1 − q2 √ 1− 1−δ √ for δ < 1 and q = ; 1+ 1−δ kx∗ − xn k ≤ 21−n γ for δ = 1.

384

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Proof. Simply replace K1 with K in the proof of Theorem 3.4 in [2]. Remark 70. If Ω0 = Ω, and K = K1 , our results reduce to the corresponding ones in [2]. Otherwise, they constitute an improvement with advantages as already stated in the introduction. (a) In [1,2, 8,9,10,11,12,13,14] they assumed instead of (46.10) that 2K1 r < 1

(46.11)

K0 can be arbitrary small which gives a smaller radius of convergence. Notice also that K [4,5,6,7]. (b) Notice for example that in [2] they assumed instead of (C3) (C3)’ γK1 ≤ δ. (C3)0 ⇒ (C3) but not necessarily vice versa unless in K1 = K. Moreover, we also have √ 1 − 1 − δ1 √ q ≤ q1 = . 1 + 1 − δ1 (c) Clearly our results extend the rest of the results in [2] which in turn extended earlier ones in [1,8,9,10,11,12,13,14].

3.

Conclusion

Kantorovich-type results using Newton’s Algorithms for generalized equations are extended with no additional hypotheses for solving nonlinear equations.

References [1] Adly S., Cibulka R., Ngai H. V., Newtons method for solving inclusions using setvalued approximations, SIAM J. Optim., 25 (1) (2015) 159-184. [2] Adly S., Ngai H. V., Nguyen V. V., Newtons method for solving generalized equations: Kantorovichs and Smales approaches, J. Math. Anal. Appl., 439, (2016), 396-418 [3] Aubin J.-P., Frankowska H., Set Valued Analysis. Birkhuser, Boston (1990). [4] Argyros I. K., Hilout S. (2010), Inexact Newton-type methods. J. Complexity, 26(6):577-590. [5] Argyros I. K., Convergence and applications of Newton-type iterations. Springer, New York (2008) [6] Argyros I. K., Magr´en˜ an A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018. [7] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publisher, NY, 2021.

Kantorovich-Type Results Using Newton’s Algorithms for Generalized Equations 385 [8] Bonnans J. F., Local analysis of Newton-type methods for variational inequalities and nonlinear programming, Appl. Math. Optim. 29 (1994) 161-186. [9] Dontchev A. L., Rockafellar R. T., Implicit Functions and Solution Mappings: A View from Variational Analysis, Springer Monographs in Mathematics, Springer, New York, 2009. [10] Kantorovich L. V., Akilov G. P. (1964), Functional analysis in normed spaces. The Macmillan Co., New York. [11] Mordukhovich B. S., Variational Analysis and Generalized Differentiation I: Basic Theory, Grundlehren der Mathematischen Wissenschaften (A Series of Comprehensive Studies in Mathematics), vol. 330, Springer, Berlin, Heidelberg, 2006. [12] Rashid M. H., Yu S. H., Li C., Wu S. Y., Convergence analysis of the Gauss-Newtontype method for Lipschitz-like mappings, J. Optim. Theory Appl. 158 (2013) 216233. [13] Robinson S. M., Strongly regular generalized equations, Math. Oper. Res. 5 (1) (1980) 43-62. [14] Rockafellar R. T., Wets R. J.-B. , Variational Analysis, Grundlehren der Mathematischen Wissenschaften (A Series of Comprehensive Studies in Mathematics), vol. 317, Springer, Berlin, Heidelberg, 1998.

Chapter 47

Developments of Newton’s Method under H¨older Conditions 1.

Introduction

The computation of a solution x∗ of a nonlinear equation F(x) = 0

(47.1)

is importanat in computational sciences, since many applications can be written as (47.1). Here F : Ω ⊆ X −→ Y is Fr´echet-differentiable operator, X,Y are Banach spaces and Ω 6= 0/ is a convex and open set. But this can be attained only in special cases. That explains why most solution methods for (47.1) are iterative. There is a plethora of methods for solving (47.1) [1]-[14]. Among them Newton’s method (NM) defined by x0 ∈ Ω, xn+1 = xn − F 0 (xn )−1 F(xn )

(47.2)

seems to be the most popular [2,4]. But the convergence domain is small, limiting the applicability of NM. That is why we have developed a technique that determines a subset Ω0 of Ω also containing the iterates {xn }. Hence, the H¨older constants are at least as tight as the ones in Ω. This crucial modification leads to: weaker sufficient convergence criteria, the extension of the convergence domain, tighter error estimates on kx∗ − xn k, kxn+1 − xn k and a more precise information on x∗ . It is worth noticing that these advantages are obtained without additional conditions since in practice the evolution of the old H¨olderian constants require that of the new conditions as special cases.

2.

Convergence

We introduce certain H¨older conditions crucial for the semi-local convergence. Ler p ∈ (0, 1]. Suppose there exists x0 ∈ Ω such that F 0 (x0 )−1 ∈ L(Y, X).

Definition 38. Operator F 0 is center H¨olderian on Ω if there exists H0 > 0 such that kF 0 (x0 )−1 F 0 (w) − F 0 (x0 )k ≤ H0 kw − x0 k p

(47.3)

388

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

for all w ∈ Ω. Set

1

Ω0 = U(x0 ,

1 p

H0

) ∩ Ω.

(47.4)

0

Definition 39. Operator F is center H¨olderian on Ω0 if there exists H > 0 such that

where H˜ =



˜ − uk p , kF 0 (x0 )−1 F 0 (w) − F 0 (u)k ≤ Hkw

(47.5)

H, w = u − F 0 (u)−1F(u), u ∈ D0 . K, w, u ∈ Ω0 .

We present the results with H although K can be used too. But notice H ≤ K.

Definition 40. Operator F 0 is center H¨olderian on Ω if there exists H1 > 0 such that kF 0 (x0 )−1 F 0 (w) − F 0 (u)k ≤ H1 kw − uk p

(47.6)

for all w, u ∈ Ω. Remark 71. It follows from (47.4), that Ω0 ⊆ Ω.

(47.7)

Then, by (47.3)-(47.7) the following items hold H0 ≤ H1

(47.8)

H ≤ H1 .

(47.9)

H0 ≤ H.

(47.10)

and We shall assume that

Otherwise the results that follow hold with H0 replacing H. Notice that H0 = H0 H0 (x0 , Ω), H1 = H1 (x0 , Ω), H = H(x0 , Ω0 ) and can be small (arbitrarily) [2,3,4]. In H1 earlier studies [1]-[14] the estimate 1 kF 0 (z)−1F 0 (x0 )k ≤ (47.11) 1 1 − H1 kz − x0 k p for all z ∈ U(x0 ,

1

1 p

) was found using (47.6). But, if we use (47.3) to obtain the weaker

H1 and more precise estimate kF 0 (z)−1F 0 (x0 )k ≤ for all z ∈ U(x0 ,

1

1 1

(47.12)

1 − H0 kz − x0 k p

1 ). This modification in the proofs and exchanging H1 by H leads to H0p the advantages as already mentioned in the introduction. That is why we omit the proofs in our results that follow. Notice also that in practice the computation of H1 require that of H0 and H as special cases. Hence, the applicability of NM is extended without additional conditions.

Developments of Newton’s Method under H¨older Conditions

389

Let d ≥ 0 be such that kF 0 (x0 )−1 F(x0 )k ≤ d.

(47.13)

We assume that (47.3)-(47.5) hold from now on unless otherwise stated. First, we extend the results by Keller [11] for NM. Similarly, the results for the chord method can also be extended. We leave the details to the motivated reader. For brevity, we skip the extensions on the radii of convergence balls and only mention convergence criteria and error estimates. Theorem 80. Assume: 1+λ , Hrλ < 2+λ   2+λ λ Hr λ d ≤ 1− 1+λ and ¯ 0 , r) ⊂ Ω. U(x Then, lim xn = x∗ ∈ U(x0 , r0 ) and F(x∗ ) = 0. Furthermore, n−→∞

µλ 2+λ

!(1+λ) p





1

kx∗ − xn k ≤ where µ =

r 1

,

µλ

Hrλ 1 < 1. 1 − H0 rλ 1 + λ

Proof. See Theorem 2 in [11]. Theorem 81. Assume: 1 Hd < 2+λ λ

λ 1+λ

and ¯ 0 , r) ⊂ Ω. U(x Then, lim xn = x∗ ∈ U(x0 , r0 ), F(x∗ ) = 0 and n−→∞

1

kx∗ − xn k ≤ p

Hr0 where λ = p 1 − H0 r0



d r0

p

λp 1−λ

!(1+p)n

d 1

,

µp

1 < 1 and r0 is the minimal positive root of scalar equation 1+ p (2 + p)Ht 1+p − (1 + p)(t − d) = 0

provided that r ≥ r0 .

390

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Proof. See Theorem 4 in [11]. Theorem 82. Assume:



p Hd ≤ 1 − 1+ p p

R≥ and

p

,

1+ p d 2 + p − (1 + p) p ¯ 0 , r) ⊂ Ω. U(x

Then, lim xn = x∗ ∈ U(x0 , r), F(x∗ ) = 0 and n−→∞  n 1 1 1 n kx∗ − xn k ≤ [(1 + p)H p d](1+p) H p . 1+ p Proof. See Theorem 5 in [11]. Next, we extend a result given in [6] which in turn extended earlier ones [1], [7]-[14]. It is convenient to define function on the interval [0, ∞) by H 1+p t −t +d 1+ p

g(t) = gβ (t) =

βH 1+p t − t + d (β ≥ 0) 1+ p

h(t) =

t 1+p + (1 + p)t , (1 + p)1+p − 1

v(p) = max h(t), t≥0

δ(p) = min{β ≥ 1 : max h(t) ≤ β, 0 ≤ t ≤ t(β)} and scalar sequence {sn } by s0 = 0, sn = sn−1 −

gd (sn−1 ) . g0 (sn−1)

Then, we can show: Theorem 83. Assume: 1 d≤ v(p)



p 1+ p

p

and U(x0 , r¯) ⊆ Ω, where r¯ is the minimal solution of equation gv(p) = 0, gv(t) =

v(p)H 1+p t − t + d. 1+ p

Developments of Newton’s Method under H¨older Conditions

391

Proof. See Theorem 2.2 in [6]. Next, we present the extensions of the work by Rokne in [13] but for the Newton-like method (NLM) xn+1 = xn − L−1 n F(xn ),

where Ln is a linear operator approximating F 0 (xn ). Theorem 84. Assume:

kL(x) − L(x0 )k ≤ M0 kx − x0 k p for all x ∈ Ω. Set Ω0 = U(x0 ,

1

1

).

(γ2 M0 ) p

¯ − yk p kF 0 (x) − F 0 (y)k ≤ Mkx for all x, y ∈ Ω0 ,

kF 0 (x) − L(x)k ≤ γ0 + γ1 kx − x0 k p

for all x ∈ Ω0 , and some γ0 ≥ 0, γ1 ≥ 0. L(x0 )−1 ∈ L(Y, X) with kL(x0 )−1 k ≤ γ2 and kL(x0 )−1 F(x0 )k ≤ γ3 , function q defined by q(t) = t 1+p (γ2 γ0 + γ2 M0 ) + t(

¯ p γ2 Md + γ2 γ0 − 1) − γ2 M0 γ3t p + γ3 1+ p

has a smallest positive zero R > γ3 , ¯ p < 1, γ2 MR   ¯ p p γ2 Md p ρ= + γ2 γ0 + γ2 γ1 R < 1, ¯ p 1+ p 1 − γ2 MR

¯ 0 , R) ⊂ Ω. Then lim xn = x∗ and F(x∗ ) = 0. U(x n−→∞

Proof. See Theorem 1 in [13]. Many results of Newton’s method were also reported in the elegant book [9]. Next, we show how to extend one of them. The details of how to extend the result of them are left to the motivated reader. Theorem 85. Suppose: conditions (47.3), (47.5), (47.10), and (C) h0 = Hd p ∈ (0, ρ) where ρ is the only solution of equation (1 + p) p (1 − t)1+p − t p = 0, p ∈ (0, 1] 1 (1 + p)(1 − h0 ) in (0, ] and U(x0 , s) ⊂ Ω, where s = hold. Then, sequence {xn } con2 (1 + p) − (2 + p)h0 verges to a solution x∗ of equation F(x) = 0. Moreover, {xn }, x∗ ∈ U[x0 , s] and x∗ is the only d solution in Ω ∩U(x0 , 1/p ). Moreover, the following error estimates hold h0 kxn − x∗ k ≤ en ,

392

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

where en = δ

(1+p)n −1 p2

An (1+p)n p

d, with δ =

1−δ A 1 t f1 (t) = and f 2 (t) = . 1 −t 1+ p

h1 , A = 1 − h0 , h1 = h0 f1 (h0 )1+p f2 (h0 ) p , h0

Finally, we extend the results by F. Cianciaruso and E. De Pascale in [6] who in turn extended earlier ones [1,5,7,11,12,14]. Define scalar sequence {vn } for h = d p H by 1

v0 = 0, v1 = h p , (vn − vn−1 )1+p vn+1 = vn + p . (1 + p)(1 − vn )

(47.14)

Next, we extend Theorem 2.1 and Theorem 2.3 in [6], respectively. Theorem 86. Let function f : [1, ∞) −→ [0, ∞), R : [0, ∞) −→ [0, ∞) be defined by 1+ p 1 f (t) = (1 − ) 1 1 t ((1 + p) 1−p + (t(t − 1) p ) 1−p )1−p and

1

R(t) = Suppose that

(1 + p) p 1

1

((1 + p) 1−p + (t(t − 1) p ) 1−p )1−p

.

h ≤ f (M),

(47.15)

where p M is a global maximum for function f , given explicitly by M = 1 + 1 + 4(1 + p) p p1−p . Then, the following assertion hold 2 vn ≤ R(M)(1 −

1 ), Mn

(47.16)

1 vn+1 1 − Mn+1 ≤ , vn 1 − M1n

and lim vn = v∗ ∈ [0, R(M)].

(47.17)

vn ≤ vn+1 ≤ R(M) < 1

n−→∞

Simply use H for H1 in [6].  1

Theorem 87. Under condition (47.15) further suppose that r∗ = H − p v∗ ≤ ρ and U(x0 , ρ) ⊆ Ω. Then, sequence {xn } generated by NM is well defined in U(x0 , v∗ ), stays in U(x0 , v∗ ) and converges to the unique solution x∗ ∈ U[x0 , v∗ ] of equation F(x) = 0, so that kxn+1 − xn k ≤ vn+1 − vn and kx∗ − xn k ≤ v∗ − vn .

Developments of Newton’s Method under H¨older Conditions

393

Proof. Simply use H for H1 used in [6]. Remark 72. (1) If K = H1 the last two results coincide with the corresponding ones in [6]. But if K < H1 then the new results constitute an improvement with benefits already stated in the introduction. Notice that the majorizing sequence {wn } in [6] was defined for h1 = d p H1 by 1

w0 = 0, w1 = h1p , wn+1 = wn +

(wn − wn−1 )1+p p , (1 + p)(1 − wn )

(47.18)

and the convergence criterion is h1 ≤ f (M).

(47.19)

It then follows by (47.9), (47.14), (47.15), (47.18) and (47.19) that h1 ≤ f (M) ⇒ h ≤ f (M)

(47.20)

but not necessarily vice versa, unless if H = H1 , vn ≤ wn , 0 ≤ vn+1 − vn ≤ wn+1 − wn and 0 ≤ v∗ ≤ w∗ = lim wn . n−→∞

(2) In view of (47.11) and (47.12) sequence {un } defined for each n = 0, 1, 2, . . . by 1

u0 = 0, u1 = h1p , u2 = u1 + un+1 = un +

H0 (u1 − u0 )1+p p , (1 + p)(1 − H0 u1 ) H(un − un−1 )1+p p (1 + p)(1 − H0 un )

is a tighter majorizing sequence than {vn } and can replace it in Theorem 86 and Theorem 87. Concerning the uniqueness of the solution x∗ we provide a result based only on (47.3). Proposition 15. Suppose: (1) The point x∗ ∈ U(x0 , a) ⊂ Ω is a simple solution of equation F(x) = 0 for some a > 0. (2) Condition (47.3) holds. (3) There exist b ≥ a such that H0

Z 1 0

((1 − τ)a + τb) p dτ < 1.

(47.21)

Let G = U[x0 , b] ∩ Ω. Then, the point x∗ is the only solution of equation F(x) = 0 in the set G.

394

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Proof. Let z∗ ∈ G with F(z∗ ) = 0. By (47.3) and (47.21), we obtain in turn for Q =

Z 1 0

F 0 (x∗ + τ(z∗ − x∗ ))dτ

kF 0 (x0 )−1 (Q − F 0 (x0 ))k ≤ H0

Z 1

kx∗ + τ(z∗ − x∗ ) − x0 k p dτ

≤ H0

Z 1

[(1 − τ)kx∗ − x0 k + τkz∗ − x0 k] p dτ

≤ H0

Z 1

((1 − τ)a + τb) p dτ < 1,

0

0

0

showing z∗ = x∗ by the invertibility of Q and the approximation Q(x∗ − z∗ ) = F(x∗ ) − F(z∗ ) = 0. Notice that if K = H1 the results coincide with the ones of Theorem 3.4 in [9]. But, if K < H1 then they constitute an extension. Remark 73. (a) We gave the results in affine invariant form. (b)The results in this chapter can be extended more if we consider the set S = U(x1 ,

1

− H 1/p d) provided that H 1/p d < 1. Moreover, suppose S ⊂ Ω. Then, S ⊂ Ω0 , so the H¨olderian constant corresponding to S is at least as small as K, and can replace it in all previous results.

3.

Conclusion

The semi-local convergence criteria for Newton’s method are weakened without new conditions. Moreover, tighter error distances are provided as well as more precise information on the location of the solution.

References [1] Appell J., De Pascale E., Lysenko Ju. V., Zabreiko P. P., New results on NewtonKantorovich approximations with applications to nonlinear integral equations, Numer. Funct. Anal. Optim., 18(1&2), 1-18. [1] Argyros I. K, Hilout S. (2010), Inexact Newton-type methods. 26(6):577-590.

J. Complex.

[3] Argyros I. K., Convergence and applications of Newton-type iterations. Springer, New York (2008). [4] Argyros I. K., George S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publisher, NY, 2021.

Developments of Newton’s Method under H¨older Conditions

395

[5] Cianciaruso F., De Pascale E., Newton-Kantorovich approximations when the derivative is H¨olderian: old and new results, Numer. Funct. Anal. and Optimization, 24 (7&8), (2003), 713-723. [6] Cianciaruso F., De Pascale E., Estimates on majorizing sequences in the NewtonKantorovich method: A further improvement, J. Math. Anal. Applic., 322, (2006), 329-335. [7] Demidovich N. T, Zabreiko P. P., Lysenko Ju. V., Some remarks on the NewtonKantorovich method for nonlinear equations with H¨older continuous linearizations, Izv. Akad. Nauk, Beloruss, (1993), 3:22-26 (Russian). [8] De Pascale E., Zabreiko P. P., The convergence of the Newton-Kantorovich method under Vertgeim conditions: a new improvement, Z. Anal. Anwend., 17(No.2) (1998), 271-280. [9] Ezquerro J. A., Hernandez-Veron, M., Mild differentiability conditions for Newton’s method in Banach spaces, Frontiers in Mathematics, Birkhauser Cham, Switzerland, (2020). [10] Kantorovich L. V., Akilov G. P., Functional analysis in normed spaces. The Macmillan Co, New York (1964). [11] Keller H. B., Newton’s method under mild differentiability conditions, J. Computer and system sciences, 4, (1970), 15-28. [12] Lysenko J. V., Conditions for the convergence of the Newton-Kantorovich method for nonlinear equations with H¨older linearization, Dokl. Akad. Nauk. BSSR, (1994), 38:20-24 (in Russian). [13] Rokne J., Newton’s method under mild differentiability conditions with error analysis, Numer. Math., 18, (1972), 401-412. [14] Vertgeim B. A., On some methods of the approximate solution of nonlinear functional equations in Banach spaces, Uspekhi Mat. Nauk., 12, (1957), 166-169 (in Russian)(1960) Engl. Transl: Amer. Math. Soc. Transl., 16(2):378-382.

Chapter 48

Ham-Chun Fifth Convergence Order Solver 1.

Introduction

We extend the applicability of the Ham-Chun fifth order solver for solving nonlinear equations involving Banach space-valued equations. This is done by using assumptions only on the first derivative that does appear on the solvers, whereas in earlier works up to the sixth derivative are used to establish the convergence. Our technique is so general that it can be used to extend the usage of other solvers along the same lines. Let X and Y be Banach spaces, and Ω ⊂ X be an open-convex set. We are concerned with the problem of extending the applicability of a novel Ham-Chun fifth order solver [8]: yk = xk − F 0 (xk )−1 F(xk )

0 0 0 −1 xk+1 = yk − B−1 k (F (yk ) + 3F (xk )F (xk ) F(xk ),

(48.1)

where Bk = 5F 0 (yk ) − F 0 (xk ),for approximating a solution x∗ of the equation F(x) = 0.

(48.2)

Here F : Ω ⊂ X −→ Y is a nonlinear Fr´echet differentiable operator. Convergence analysis of higher-order iterative solvers can be found in [1]-[17] and the references therein. The convergence order was established in [8] using hypotheses up to the sixth derivative (not on these solvers) and in the setting of X = Y = R. No computable error bounds on kxn − x∗ k or uniqueness results were given either. We address all these problems in this chapter. The assumptions on derivatives of order up to six reduce the applicability of the solvers. 1 3 For example: Let X = Y = R, Ω = [− , ]. Define f on Ω by 2 2  3 t logt 2 + t 5 − t 4 i f t 6= 0 f (t) = 0 i f t = 0.

398

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Then, we have t∗ = 1, and f 000 (t) = 6 logt 2 + 60t 2 − 24t + 22. Obviously f 000 (t) is not bounded on Ω. So, the convergence of the above solvers is not guaranteed by the analysis in earlier papers. Moreover, our approach is so general that it can be used to extend the applicability of other solvers [1]-[17] in a similar way. The rest of the chapter contains the convergence analysis of this solver and the numerical examples.

2.

Ball Convergence

Let us develop functions and parameters to be used in the convergence of solver (48.1). Set T = [0, ∞). Assume function: (a) ξ0 (t) − 1 has a minimal zero R0 ∈ T − {0} for some function ξ0 : T −→ T which is continuous and nondecreasing. Set T0 = [0, R0 ). (b) η1 (t) − 1 has a minimal zero r1 ∈ T0 − {0} for some function ξ : T0 −→ T which is continuous and nondecreasing whereas function η1 : T0 −→ T is defined by η1 (t) =

R1 0

ξ((1 − τ)t)dτ . 1 − ξ0 (t)

(c) p(t) − 1 has a minimal zero R1 ∈ T0 − {0} where function p : T0 −→ T is defined by p(t) = 1 (5ξ0 (η1 (t)t) + ξ0 (t)). 4 Set R2 = min{R0 , R1 } and T1 = [0, R2 ). (d) ξ0 (η1 (t)t) − 1 has a minimal zero R3 ∈ T1 − {0}. Set R = min{R2 , R3 } and T2 = [0, R). (e) η2 (t) − 1

Ham-Chun Fifth Convergence Order Solver

399

has a minimal zero r2 ∈ T2 − {0} for some function ξ1 : T2 −→ T which is continuous and nondecreasing, and function η2 : T2 −→ T is defined by "R 1 0 ξ((1 − τ)η1 (t)t)dτ η2 (t) = 1 − ξ0 (η1 (t)t) +

(ξ0 (t) + ξ0 (η1 (t)t))

R1 )

ξ1 (τη1 (t)t)dτ

(1 − ξ0 (t))(1 − ξ0 (η1 (t)t))  Z (ξ0 (t) + ξ0 (η1 (t)t)) 1 + ξ1 (τη1 (t)t)dτ η1 (t). (1 − p(t))(1 − ξ0 (t)) 0 Next, we shall show that r = min{ri}, i = 1, 2

(48.3)

is a radius of convergence for solver (48.1). Set T3 = [0, r). The, definition of radius r implies that the following hold for all t ∈ T3 0 ≤ ξ0 (t) < 1

(48.4)

0 ≤ p(t) < 1

(48.5)

0 ≤ ξ0 (η1 (t)t) < 1,

(48.6)

and 0 ≤ ηi (t) < 1.



(48.7) ∗



Let S[x , γ] stand for the closure of the open ball S(x , γ) in X with center x ∈ X and radius γ > 0. We assume from now on that x∗ is a simple solution of equation F(x) = 0 and the functions “ηi “ are as previously defined. The conditions (C) shall be used in the ball convergence of the solver (48.1). Suppose: (c1) For all v ∈ Ω

kF 0 (x∗ )−1 (F 0 (v) − F 0 (x∗ ))k ≤ ξ0 (kv − x∗ k).

Set Ω0 = S[x∗ , R0 ] ∩ Ω. (c2) For all u, v ∈ Ω0

kF 0 (x∗ )−1 (F 0 (v) − F 0 (u))k ≤ ξ(kv − uk) kF 0 (x∗ )−1 F 0 (u)k ≤ ξ1 (ku − x∗ k).

and (c3) S[x∗ , r] ⊂ Ω. Next, we develop the ball convergence of the solver (48.1) based on the preceding notation and conditions (C).

400

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Theorem 88. Under conditions (C) further suppose that x0 ∈ S(x∗ , r) − {x∗ }. Then, sequence {xn } starting at x0 and generated by solver (48.1), exists in S(x∗ , r), remains in S(x∗ , r) for all n = 0, 1, 2, . . . and converges to x∗ , so that kyn − x∗ k ≤ η1 (qn )qn ≤ qn < r,

(48.8)

qn+1 ≤ η3 (qn )qn ≤ qn ,

(48.9)

and where qn = kxn − x∗ k, radius r and function ηi are given previously. Proof. Assertions (48.8) and (48.9) are shown using mathematical induction. Let u ∈ S[x∗ , ρ] − {x∗ } be arbitrary. By using (48.3), (48.4) and (c1), we get in turn that kF 0 (x∗ )−1 (F 0 (u) − F 0 (x∗ ))k ≤ ξ0 (ku − x∗ k) ≤ ξ0 (r) < 1.

(48.10)

It then follows from (48.12) and a lemma due to Banach on inverses of linear operators [10] that F 0 (u)−1 ∈ L(Y, X), and kF 0 (u)−1F 0 (x∗ )k ≤

1 . 1 − ξ0 (ku − x∗ k)

(48.11)

The iterate y0 exists by (48.1) (first sub-step for n = 0). We can also write y0 − x∗ = x0 − x∗ − F 0 (x0 )−1 F(x0 ) = F 0 (x0 )−1

Z 1 0

(F 0 (x∗ + τ(x0 − x∗ )) − F 0 (x0 ))dτ(x0 − x∗ ). (48.12)

In view of (48.3), (48.7) (for j = 1), (c2), (48.11) (for u = x0 ) and (48.12), we have in turn that ∗

R1

ξ((1 − τ)q0 )dτq0 1 − ξ0 (q0 ) ≤ η1 (q0 )q0 ≤ q0 < r,

ky0 − x k ≤

0

(48.13)

showing y0 ∈ S(x∗ , r), (48.8) for n = 0. Next, we show B−1 0 ∈ L(Y, X). Indeed, by (48.3)(48.5), (c1), (48.13) and the triangle inequality, we obtain in turn k(4F 0 (x∗ ))−1 (B0 − 4F 0 (x0 ))k 1 ≤ (5kF 0 (x∗ )−1 (F 0 (y0 ) − F 0 (x∗ ))k + kF 0 (x∗ )−1 (F 0 (x0 ) − F 0 (x∗ ))k 4 1 ≤ (5ξ0 (ky0 − x∗ k) + ξ0 (q0 )) 4 ≤ p(q0 ) ≤ p(r) < 1, so 0 kB−1 0 F (xn )k ≤

1 . 4(1 − p(q0 ))

(48.14)

Ham-Chun Fifth Convergence Order Solver

401

Moreover, iterate x1 exists by solver (48.1)(second sub-step) and we can write x1 − x∗ = y0 − x∗ − F 0 (y0 )−1 F(y0 )

+(F 0 (y0 )−1 − F 0 (x0 )−1 )F(y0 ) 0 0 0 −1 +(I − B−1 0 (F (y0 ) + 3F (x0 )))F (x0 ) F(y0 )

= y0 − x∗ − F 0 (y0 )−1 F(y0 )

+F 0 (y0 )−1 (F 0 (x0 ) − F 0 (y0 )F 0 (x0 )−1 )F(y0 )

0 0 0 −1 +B−1 0 (B0 − F (y0 ) − 3F (x0 ))F (x0 ) F(y0 )

= y0 − x∗ − F 0 (y0 )−1 F(y0 )

+F 0 (y0 )−1 (F 0 (x0 ) − F 0 (y0 )F 0 (x0 )−1 )F(y0 )

0 0 ∗ 0 ∗ 0 0 −1 +4B−1 0 ((F (y0 ) − F (x )) + (F (x ) − F (x0 ))F (x0 ) F(y0 ).

(48.15)

Furthermore, from (48.3), (48.7) (for i = 2), (48.11)(for u = x0 , y0 ), (48.13)– (48.15), and the triangle inequality, we get in turn that

q1 ≤

"R

1 ∗ 0 ξ((1 − τ)ky0 − x k)dτ 1 − ξ0 (ky0 − x∗ k)

1 (ξ0 (ky0 − x∗ k) + ξ0 (q0 )) + ξ1 (τky0 − x∗ k)dτ (1 − ξ0 (ky0 − x∗ k))(1 − ξ0 (q0 )) 0  Z ξ0 (q0 ) + ξ0 (ky0 − x∗ k) 1 ∗ + ξ1 (τky0 − x k)dτ ky0 − x∗ k (1 − p(q0 ))(1 − ξ0 (q0 )) 0 ≤ η2 (q0 )q0 ≤ q0 ,

Z

(48.16)

proving x1 ∈ S(x∗ , r) and (48.9)for n = 0. Simply replace x0 , y0 , x1 by xi , yi , xi+1 , in the preceding calculations to terminate the induction for assertions (48.8) and (48.9). It then follows by the estimation qi+1 ≤ ρqi < r, (48.17)

where ρ = η3 (q0 ) ∈ [0, 1) we deduce that xi+1 ∈ S(x∗ , ρ) and lim xi = x∗ . i−→∞

Next, we develop a uniqueness of the solution result. Proposition 16. Suppose: (i) Point x∗ ∈ Ω is a simple solution of F(x) = 0 and (ii) There exists r∗ ≥ r satisfying Z 1 0

Set Ω1 = S[x∗ , r∗ ] ∩ Ω.

ξ0 (τr∗ )dτ < 1.

(48.18)

402

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Then, the only solution of F(x) = 0 in the region Ω1 is x∗ . Proof. Set T =

Z 1 0

F 0 (x∗ + τ(q − x∗ ))dτ for some q ∈ Ω1 with F(q) = 0. Then, in view of

(c1) and (ii), we have in turn that

kF 0 (x∗ )−1 (T − F 0 (x∗ ))k ≤

Z 1

ξ0 (τkq − x∗ k)dτ



Z 1

ξ0 (τr∗ )dτ.

0

0

(48.19)

Hence T −1 ∈ L(Y, X) and from 0 = F(q) − F(x∗ ) = T (q − x∗ ), we conclude q = x∗ . Remark 74. by

a. We can compute the computational order of convergence (COC) defined 

   kxn − x∗ k kxn+1 − x∗ k COC = ln / ln kxn − x∗ k kxn−1 − x∗ k

or the approximate computational order of convergence (ACOC)     kxn+1 − xn k kxn − xn−1 k ACOC = ln / ln . kxn − xn−1 k kxn−1 − xn−2 k b. In view of (c2) and the estimate kF 0 (x∗ )−1 F 0 (x)k = kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ )) + Ik

≤ 1 + kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ ))k ≤ 1 + ξ0 kx − x∗ k

condition (c3) can be dropped and ω1 can be replaced by ξ1 (t) = 1 + ξ0 (t) or ξ1 (t) = 2, since t ∈ [0, R0 ). c. The results obtained here can be used for operators F satisfying autonomous differential equations [2] of the form F 0 (x) = P(F(x)) where P is a continuous operator. Then, since F 0 (x∗ ) = P(F(x∗ )) = P(0), we can apply the results without actually knowing x∗ . For example, let F(x) = ex − 1. Then, we can choose: P(x) = x + 1. d. Let ω0 (t) = K0 t, and ω(t) = Kt. In [2, 3] we showed that rA = vergence radius of Newton’s method:

2 is the con2K0 + K

xn+1 = xn − F 0 (xn )−1 F(xn ) for each n = 0, 1, 2, · · ·

(48.20)

Ham-Chun Fifth Convergence Order Solver

403

under the conditions (c1) and (c2). It follows from the definition of r in (48.3) that it cannot be larger than the convergence radius rA of the second order Newton’s solver(48.20). As already noted in [2, 3] rA is at least as large as the convergence radius given by Rheinboldt [15] 2 , (48.21) rR = 3K where K1 is the Lipschitz constant on D. The same value for rR was given by Traub [17]. In particular, for K0 < K1 we have that rR < rA and

rR 1 K0 → as → 0. rA 3 K1 That is the radius of convergence rA is at most three times larger than Rheinboldt’s.

3.

Numerical Examples

Example 66. Consider the kinematic system F1 (x) = ex , F2 (y) = (e − 1)y + 1, F3 (z) = 1

with F1 (0) = F2 (0) = F3 (0) = 0. Let F = (F1 , F2 , F3 ). Let X = Y = R3 , Ω = B[0, 1], x∗ = (0, 0, 0)t . Define function F on Ω for w = (x, y, z)t by F(w) = (ex − 1, Then, we get

e−1 2 y + y, z)t . 2



 ex 0 0 F 0 (v) =  0 (e − 1)y + 1 0  , 0 0 1 1

1

so ξ0 (t) = (e − 1)t, ξ(t) = e e−1 t, ξ1 (t) = e e−1 . Then, the radii are: r1 = 0.382692, and r2 = 0.213156 = r. Example 67. Consider X = Y = C[0, 1], Ω = B[0, 1] and F : Ω −→ Y defined by F(ψ)(x) = ϕ(x) − 5

Z 1

xθψ(θ)3 dθ.

(48.22)

0

We have that F 0 (ψ(ξ))(x) = ξ(x) − 15

Z 1 0

xθψ(θ)2ξ(θ)dθ, for each ξ ∈ D.

Then, we get that x∗ = 0, so ξ0 (t) = 7.5t, ξ(t) = 15t, and ξ1 (t) = 2. Then, the radii are: r1 = 0.066667, and r2 = 0.0368455 = r. Example 68. By the academic example of the introduction, we have ξ0 (t) = ξ(t) = 96.6629073t and ξ1 (t) = 2. Then, the radii are: r1 = 0.00689682, andr2 = 0.00573936 = r.

404

4.

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Conclusion

Extended local convergence analysis of the solver (48.1) under a set of conditions involving only the first derivative is given. Moreover, error estimates and uniqueness of the solution results were given, in contrast to the earlier study [8] using hypotheses up to the sixth derivative. Hence, we expand the applicability of the solver (48.1).

References [1] Argyros I. K., Computational theory of iterative methods. Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Co. New York, U.S.A, 2007. [2] Argyros I. K., Hilout S., Weaker conditions for the convergence of Newton’s method. J. Complexity 28 (2012) 364–387. [3] Argyros I. K., Magr´en˜ an A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018. [4] Argyros I. K., Magr´en˜ an A. A., Iterative methods and their dynamics with applications, CRC Press, New York, USA, 2017. [5] Behl R., Bhalla S., Martinez E., Alsulami M. A., Derivative-free King’s solver for multiple zeros of nonlinear functions, Mathematics 2021, 9, 1242.https://doi.org/10.3390/math 9111242. [6] Behl R., Bhalla S., Magrenan A. A., Moysi A., An optimal derivative free family of Chebyshev-Halley’s solver for multiple zeros. Mathematics, 2021,9, 546. [7] Cordero A., Jordan C., Codesal E., Torregrosa J. R., Highly efficient algorithms for solving nonlinear systems with arbitrary order of convergence p + 3, p ≥ 5, J. Comput. Appl. Math., 330, (2018), 748-758. [8] Ham Y., Chun C., A fifth order iterative solver for solving nonlinear equations, Appl. Math. Comput., 194, (2007), 287-290. [9] Hernandez M. A., Rubio M. J., A uniparametric family of iterative processes for solving nondifferentiable equations, J. Math. Anal. Appl., 275, (2002), 821-834. [10] Kantorovich L. V., Akilov G. P., Functional Analysis, second edition, Pergamon Press, Oxford, 1982, translated from Russian by Howard L. Silcock. [11] King H. T., Traub J. F., Optimal order of one-point and multipoint iteration, Carnegie Mellon University, Research Showcase@CMU. Computer Science Department, Paper 1747, 1973. [12] Magr´en˜ an A. A., Argyros I. K., Rainer J. J., Sicilia J. A., Ball convergence of a sixthorder Newton-like method based on means under weak conditions, J. Math Chem (2018) 56:2117-2131, https://doi.org/10.1007/ s10910-018-0856-y.

Ham-Chun Fifth Convergence Order Solver

405

[13] Magr´en˜ an A. A., Guti´errez J. M., Real dynamics for damped Newton’s solver applied to cubic polynomials, J. Comput. Appl. Math., 275, (2015), 527–538. [14] Ren H., Wu Q., Bi W., A class of two-step Steffensen type methods with fourth-order convergence. Appl. Math. Comput. 209, (2009), 206–210. [15] Rheinboldt W. C., An adaptive continuation process for solving systems of nonlinear equations, In Mathematical models and numerical methods (Tikhonov A. N. et al. eds.) pub.3, (1977), 129-142 Banach Center, Warsaw Poland. [16] Sharma J. R., Arora, H., Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comp. Appl. Math. 35, (2016), 269-284. doi:10.1007/s40314-014-0193-0. [17] Traub J. F., Iterative Methods for the Solution of Equations. Prentice-Hall, New Jersey (1964).

Chapter 49

A Novel Method Free from Derivatives of Convergence Order 1.

Introduction

We are concerned with the problem of approximating a locally unique solution x∗ of the equation F(x) = 0, (49.1) where F : Ω ⊆ X −→ X is a continuous operator and Ω is nonempty and open. Finding x∗ is one of the most challenging and useful problems in computational sciences since many problems reduce to solving equations (49.1). Most solution methods are iterative since closed-form solutions are hard to find. We study the convergence of the seventh order method defined in [19] by: yn = xn − F[vn , xn ]−1 F(xn )

vn = xn + bF(xn ), b 6= 0

zn = yn − (3I − F[vn , xn ]−1 ([F[yn , xn ] − F[yn , vn ])F[vn , xn ]−1 F(yn ) (49.2) −1 −1 xn+1 = zn − F[zn , yn ] (F[vn , zn ] + F[yn , xn ] − F[zn , xn ])F[vn , xn ] F(zn ), for n = 0, 1, 2, . . ., where F[x, y] is the divided difference of order one. Note that method (49.2) uses four-function evaluation, five divided differences, and two inversions of divided difference. Sufficient convergence criteria for the convergence of method (49.2) were given in [19]. Moreover, the benefits of using (49.20 were well explained in [19]. But the convergence analysis uses assumptions on the derivatives up to the order eight. These conditions limit the applicability of the methods. 1 3 For example: Let X = R, Ω = [− , ]. Define f on Ω by 2 2  3 t logt 2 + t 5 − t 4 i f t 6= 0 f (t) = 0 i f t = 0. Then, we have t∗ = 1,

f 000 (t) = 6 logt 2 + 60t 2 − 24t + 22.

408

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Obviously f 000 (t) is not bounded on Ω. So, the convergence of these methods is not guaranteed by the analysis in earlier papers. That is why, we provide a new convergence analysis using the first derivative and divided difference. Our analysis includes computable error distances on kxn − x∗ k as well as uniqueness results not given in [19]. Our technique to its generality can be used to extend the applicability of other methods analogously [1]-[25]. The results are presented next.

2.

Convergence

It is convenient to develop parameters and real functions for the ball convergence analysis of method (49.2). Let a ≥ 0, α ≥ 0 and β ≥ 0. Set T = [0, ∞). Assume function(s): (1) ξ0 (t) − 1 has a minimal zero s0 ∈ T − {0}, where ξ0 : T −→ T is some continuous and nondecreasing function. Set T0 = [0, s0). (2) λ1 (t) − 1 and λ2 (t) − 1 have minimal zeros r1 , r2 ∈ T0 −{0}, respectively for some functions ξ : T0 −→ T, ξ1 : T0 −→ T, ξ2 : T0 −→ T and ξ3 : T0 −→ T which are continuous and nondecreasing, and functions λ1 : T0 −→ T, λ2 : T0 −→ T are defined by λ1 (t) = and λ2 (t) =



ξ(t) 1 − ξ0 (t)

 2ξ0 (t) + ξ1 (t) + ξ2 (t) + 2 ξ3 (t) +a λ1 (t). 1 − ξ0 (t) (1 − ξ0 (t))2

(3) ξ4 (t) − 1 has a minimal zero s1 ∈ T0 − {0} for some function ξ4 : T0 −→ T which is continuous and nondecreasing. Set s = min{s0 , s1 } and T1 = [0, s). (4) λ3 (t) − 1 has a minimal zero r3 ∈ T1 − {0} for some function ξ5 : T1 −→ T, ξ6 : T1 −→ T which are continuous and nondecreasing and λ3 : T1 −→ T is defined by   ξ5 (t) ξ1 (t) + ξ6 (t) λ3 (t) = +a λ2 (t). 1 − ξ4 (t) (1 − ξ0 (t))(1 − ξ4 (t))

A Novel Method Free from Derivatives of Convergence Order The parameter r∗ given by

r∗ = min{rm }, m = 1, 2, 3

409

(49.3)

is shown to be a convergence radius for method (49.2). Set T2 = [0, r∗ ). Then, the definition of r∗ gives that 0 ≤ ξ0 (t) < 1 (49.4) 0 ≤ ξ4 (t) < 1

(49.5)

0 ≤ λm (t) < 1

(49.6)

and hold for all t ∈ T2 . The notation B[x∗ , γ] stands for the closure of the open ball B(x, γ) with center x∗ and of radius γ > 0. The hypotheses (H) are used in the ball convergence analysis of method (49.2) provided that the functions “ξ” are defined previously, and x∗ is a simple solution of equation F(x) = 0. Assume: (h1) For all x ∈ Ω kF 0 (x∗ )−1 (F[x + bF(x), x] − F 0 (x∗ ))k ≤ ξ0 (kx − x∗ k). Set Ω0 = B(x∗ , s0 ) ∩ Ω. (h2) For all x ∈ Ω0 kF 0 (x∗ )−1 (F[x + bF(x), x] − F[x, x∗ ])k ≤ ξ(kx − x∗ k), kF 0 (x∗ )−1 (F[y, x] − F 0 (x∗ ))k ≤ ξ1 (kx − x∗ k),

kF 0 (x∗ )−1 (F[y, x + bF(x)] − F 0 (x∗ ))k ≤ ξ2 (kx − x∗ k),

kF 0 (x∗ )−1 (F[x + bF(x), x] − F[y, x∗])k ≤ ξ3 (kx − x∗ k), kF 0 (x∗ )−1 (F[z, y] − F 0 (x∗ ))k ≤ ξ4 (kx − x∗ k),

kF 0 (x∗ )−1 (F[z, y] − F[z, x∗ ])k ≤ ξ5 (kx − x∗ k), kF 0 (x∗ )−1 (F[z, x] − F 0 (x∗ ))k ≤ ξ6 (kx − x∗ k), kF 0 (x∗ )−1 F[x, x∗ ]k ≤ a, kI + bF[x, x∗ ]k ≤ β.

and (h3) B[x∗ , R] ⊂ Ω, where R = max{r, βr}. Next, the ball convergence analysis is developed for method (49.2) using the hypotheses (H). Let pn = kxn − x∗ k. Theorem 89. Assume hypotheses (H) hold and choose x0 ∈ B(x∗ , r) − {x∗ } arbitrarily. Then, we conclude lim xn = x∗ . n−→∞

410

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Proof. The following items are shown using mathematical induction {xn } ⊂ B(x∗ , r), kyn − x∗ k ≤ λ1 (pn )pn ≤ pn < r kzn − x∗ k ≤ λ2 (pn )pn ≤ pn

kxn+1 − x∗ k ≤ λ3 (pn )pn ≤ pn

(49.7) (49.8) (49.9) (49.10)

and lim xn = x∗ .

n−→∞

(49.11)

Let u ∈ B(x∗ , r) − {x∗ } be arbitrary. Then, by (49.3), (49.4) and (h1) we have in turn that kF 0 (x∗ )−1 (F[u + bF(u), u] − F 0 (x∗ ))k ≤ ξ0 (ku − x∗ k) ≤ ξ0 (r) < 1,

(49.12)

so the lemma by Banach on linear invertible operators [13] and (49.12) given kF[u + bF(u), u]−1F 0 (x∗ )k ≤

1 . 1 − ξ0 (ku − x∗ k)

(49.13)

Hence, iterates y0 and z0 are well defined by the first two sub-steps of method (49.2) and (49.13) for u = x0 , respectively. Notice also that we used kv0 − x∗ k = kx0 + bF(x0 ) − x∗ k

= k(I + bF[x0 , x∗ ])(x0 − x∗ )k ≤ kI + bF[x0 , x∗ ]kkx0 − x∗ k ≤ βr ≤ R,

so v0 ∈ B[x∗ , R] ⊂ Ω. Then, by the first two sub-steps of method (49.2) we can also write y0 − x∗ = x0 − F[v0 , x0 ]−1 F(x0 )

= F[v0 , x0 ]−1 (F[v0 , x0 ] − F[x0 , x∗ ])(x0 − x∗ )

(49.14)

and z0 − x∗ = y0 − x∗ − F [v0 , y0 ]−1 F(y0 )

+F[v0 , x0 ]−1 (2F[v0 , x0 ] − F[y0 , x0 ]

+F[y0 , v0 ])F[v0 , x0 ]−1 F(y0 ),

(49.15)

respectively. In view of (49.3), (49.6) (for m = 1, 2), (h2), (49.13) (for u = x0 ), (49.14), (49.15) and the triangle inequality, we get in turn that ξ(p0 )p0 1 − ξ0 (p0 ) ≤ λ1 (p0 )p0 ≤ p0 < r,

ky0 − x∗ k ≤ and



(49.16)

 ξ3 (p0 ) 2ξ0 (p0 ) + ξ1 (p0 ) + ξ2 (p0 ) + 2 kz0 − x∗ k ≤ +a ky0 − x∗ k 1 − ξ0 (p0 ) (1 − ξ0 (p0 ))2 ≤ λ2 (p0 )p0 ≤ p0 , (49.17)

A Novel Method Free from Derivatives of Convergence Order

411

showing y0 , z0 ∈ B(x∗ , r) and items (49.8) and (49.9) for n = 0. Next, we show F[z0 , v0 ]−1 ∈ L(X, X), so that iterate x1 exists by the last sub-step of method (49.2). Indeed, by (49.3), (49.5) and (h2), we obtain kF 0 (x∗ )−1 (F[z0 , y0 ] − F 0 (x∗ ))k ≤ ξ4 (kx0 − x∗ k) ≤ ξ4 (r) < 1, so kF[z0 , y0 ]−1 F 0 (x∗ )k ≤ Moreover, we can write

1 . 1 − ξ4 (p0 )

x1 − x∗ = z0 − x∗ − F[z0 , y0 ]−1 F(z0 )

−F[z0 , y0 ]−1 (F[y0 , x0 ] − F[z0 , x0 ])F[v0 , x0 ]−1 F(z0 ).

Then, by (49.3), (49.6) (for m = 3), (h2) and (49.16)-(49.19), we obtain in turn   ξ5 (p0 ) (ξ1 (p0 ) + ξ6 (p0 ) p1 ≤ + kz0 − x∗ k 1 − ξ4 (p0 ) (1 − ξ0 (p0 ))(1 − ξ4 (p0 )) ≤ λ3 (p0 )p0 ≤ p0 ,

(49.18)

(49.19)

(49.20)

showing x1 ∈ B(x∗ , r) and (49.10) for n = 0. The induction is terminated by using xi , vi , yi , zi, xi+1 instead of x0 , v0 , y0 , z0 , x1 in the previous calculations. Then, the estimation pi+1 ≤ qpi < r,

(49.21)

where q = λ3 (p0 ) ∈ [0, 1) gives xi+1 ∈ B(x∗ , r) and lim xi = x∗ . i−→∞

Next, concerning the uniqueness of the solution, we give a result not necessarily relying on the hypotheses (H). Proposition 17. Suppose: equation F(x) = 0 has a simple solution x∗ ∈ Ω; for all x ∈ Ω kF 0 (x∗ )−1 (F[x, x∗ ] − F 0 (x∗ ))k ≤ ξ7 (kx − x∗ k)

(49.22)

and function ξ7 (t) − 1 has a minimal positive root s, ¯ where ξ7 : T −→ T is a continuous and nondecreasing function. Set Ω1 = B[x∗ , s] ˜ ∩ Ω for 0 < s˜ < s. ¯ Then, the only solution of equation F(x) = 0 in the region Ω1 is x∗ . Proof. Set M = F[x∗ , x∗ ] for some x∗ ∈ Ω1 with F(x∗ ) = 0. Then, using (49.22), we get kF 0 (x∗ )−1 (M − F 0 (x∗ )) ≤ ξ7 (kx∗ − x∗ k) ≤ ξ7 (s) ˜ < 1, so M −1 ∈ L(X, X) and x∗ = x∗ follows from M(x∗ − x∗ ) = F(x∗ ) − F(x∗ ).

412

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Remark 75. Z 1 0

(a) Let us consider choices F[x, y] =

1 0 (F (x) + F 0 (y)) or F[x, y] = 2

F 0 (x + θ(y − x))dθ or the standard definition of the divided difference when

X = Ri [1,2,3,4,12,13,19]. Moreover, suppose

kF 0 (x∗ )−1 (F 0 (x) − F 0 (x∗ )) ≤ ζ0 (kx − x∗ k) and kF 0 (x∗ )−1 (F 0 (x) − F 0 (y)) ≤ ζ(kx − x∗ k), where functions ζ0 : T −→ T, ζ : T −→ T are continuous and nondecreasing. Then, under the first or second choice above it can easily be seen that the hypotheses (H) require for kF[x, x∗ ]k ≤ α the choices as given in Example 1. (b) Hypotheses (H) can be condensed using instead the classical but strongest and less precise condition for studying methods with divided differences [19] kF 0 (x∗ )−1 (F[u1 , u2 ] − F[u3 , u4 ]) ≤ ξ8 (ku1 − u3 k, ku2 − u4 k) for all u1 , u2 , u3 , u4 ∈ Ω, where function ξ8 : T × T −→ T is continuous and nondecreasing. However this condition does not give the largest convergence conditions and all the “ξ” functions are at least as small as ξ8 (t,t).

3.

Example

Example 69. Consider the kinematic system F1 (x) = ex , F2 (y) = (e − 1)y + 1, F3 (z) = 1 ¯ 1), x∗ = with F1 (0) = F2 (0) = F3 (0) = 0. Let F = (F1 , F2 , F3 ). Let X = R3 , Ω = U(0, T T (0, 0, 0) . Define function F on Ω for w = (x, y, z) by F(w) = (ex − 1, Then, we get

e−1 2 y + y, z)T . 2



 ex 0 0 F 0 (v) =  0 (e − 1)y + 1 0  . 0 0 1

Choose $b = -1$. Then, $a = \alpha = \beta = \frac{1}{2}\big(1 + e^{\frac{1}{e-1}}\big)$,
$$\xi_0(t) = \frac{1}{2}\big(\varphi_0(\alpha t) + \varphi_0(t)\big), \quad \xi(t) = \frac{1}{2}\varphi_0(\beta t), \quad \xi_1(t) = \frac{1}{2}\big(\varphi_0(\lambda_1(t)t) + \varphi_0(t)\big),$$
$$\xi_2(t) = \frac{1}{2}\big(\varphi_0(\lambda_1(t)t) + \varphi_0(\beta t)\big), \quad \xi_3(t) = \frac{1}{2}\big(\varphi_0((\beta + \lambda_1(t))t) + \varphi_0(t)\big),$$
$$\xi_4(t) = \frac{1}{2}\big(\varphi_0(\lambda_2(t)t) + \varphi_0(\lambda_1(t)t)\big), \quad \xi_5(t) = \frac{1}{2}\varphi_0(\lambda_1(t)t), \quad \xi_6(t) = \frac{1}{2}\big(\varphi_0(\lambda_2(t)t) + \varphi_0(t)\big)$$
and
$$\xi_7(t) = \frac{1}{2}\varphi_0(t),$$
where $\varphi_0(t) = (e - 1)t$ and $\varphi(t) = e^{\frac{1}{e-1}}\,t$. Then, we get that the radii are: $r_1 = 0.307146$, $r_2 = 0.0938676$, $r_3 = 0.110306$.
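A quick numerical sanity check of this example (a Python sketch) confirms that $F(x_*) = 0$ and that $F'(x_*)$ is the identity matrix, so $F'(x_*)^{-1} = I$ as used implicitly in the "$\xi$" functions above:

```python
import numpy as np

e = np.e

def F(w):
    # F(w) = (e^x - 1, ((e - 1)/2) y^2 + y, z)^T from Example 69
    x, y, z = w
    return np.array([np.exp(x) - 1.0, 0.5 * (e - 1.0) * y**2 + y, z])

def dF(w):
    # Diagonal Jacobian F'(w) as displayed above
    x, y, z = w
    return np.diag([np.exp(x), (e - 1.0) * y + 1.0, 1.0])

x_star = np.zeros(3)
print(F(x_star))    # [0. 0. 0.]  -> x* solves F(x) = 0
print(dF(x_star))   # identity matrix -> F'(x*)^{-1} = I
```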

4. Conclusion

In this chapter, we extend the applicability of a derivative-free method of convergence order seven considered by Sharma et al. (2014) [19]. We present a finer convergence analysis than before, using assumptions only on the first divided difference. The earlier results used hypotheses on derivatives up to order eight, restricting the applicability of the method. Our technique can be used to extend the applicability of other methods in a similar way.

References

[1] Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Co., New York, USA, 2007.

[2] Argyros I. K., Hilout S., Weaker conditions for the convergence of Newton's method, J. Complexity, 28, (2012), 364-387.

[3] Argyros I. K., Magreñán A. A., A Contemporary Study of Iterative Methods, Elsevier (Academic Press), New York, 2018.

[4] Argyros I. K., Magreñán A. A., Iterative Methods and Their Dynamics with Applications, CRC Press, New York, USA, 2017.

[5] Behl R., Bhalla S., Martínez E., Alsulami M. A., Derivative-free King's scheme for multiple zeros of nonlinear functions, Mathematics, 9, (2021), 1242. https://doi.org/10.3390/math9111242.

[6] Behl R., Bhalla S., Magreñán A. A., Moysi A., An optimal derivative-free family of Chebyshev-Halley's methods for multiple zeros, Mathematics, 9, (2021), 546.


[7] Cordero A., Hueso J. L., Martínez E., Torregrosa J. R., A modified Newton-Jarratt's composition, Numer. Algor., 55, (2010), 87-99.

[8] Grau-Sánchez M., Grau A., Noguera M., Frozen divided difference scheme for solving systems of nonlinear equations, J. Comput. Appl. Math., 235, (2011), 1739-1743.

[9] Liu Z., Zheng Q., Zhao P., A variant of Steffensen's method of fourth-order convergence and its applications, Appl. Math. Comput., 216, (2010), 1978-1983.

[10] Ezquerro J. A., Hernández M. A., Romero N., Velasco A. I., On Steffensen's method on Banach spaces, J. Comput. Appl. Math., 249, (2013), 9-23.

[11] Hernández-Verón M. A., Yadav S., Magreñán A. A., Martínez E., Singh S., On the complexity of extending the accessibility for Steffensen-type methods, J. Complexity, (2021).

[12] Hernández M. A., Rubio M. J., A uniparametric family of iterative processes for solving nondifferentiable equations, J. Math. Anal. Appl., 275, (2002), 821-834.

[13] Kantorovich L. V., Akilov G. P., Functional Analysis, second edition, Pergamon Press, Oxford, 1982, translated from the Russian by Howard L. Silcock.

[14] Kung H. T., Traub J. F., Optimal order of one-point and multipoint iteration, Carnegie Mellon University, Research Showcase@CMU, Computer Science Department, Paper 1747, 1973.

[15] Magreñán A. A., Argyros I. K., Rainer J. J., Sicilia J. A., Ball convergence of a sixth-order Newton-like method based on means under weak conditions, J. Math. Chem., 56, (2018), 2117-2131. https://doi.org/10.1007/s10910-018-0856-y.

[16] Magreñán A. A., Gutiérrez J. M., Real dynamics for damped Newton's method applied to cubic polynomials, J. Comput. Appl. Math., 275, (2015), 527-538.

[17] Ren H., Wu Q., Bi W., A class of two-step Steffensen type methods with fourth-order convergence, Appl. Math. Comput., 209, (2009), 206-210.

[18] Sharma J. R., Arora H., An efficient derivative free iterative method for solving systems of nonlinear equations, Appl. Anal. Discrete Math., 7, (2013), 390-403.

[19] Sharma J. R., Arora H., A novel derivative free algorithm with seventh order convergence for solving systems of nonlinear equations, Numer. Algor., 67, (2014), 917-933.

[20] Sharma J. R., Gupta P., Efficient family of Traub-Steffensen-type methods for solving systems of nonlinear equations, Advances in Numerical Analysis, (2014), Article ID 152187, 11 pp.

[21] Sharma J. R., Arora H., Efficient derivative-free numerical methods for solving systems of nonlinear equations, Comp. Appl. Math., 35, (2016), 269-284. doi:10.1007/s40314-014-0193-0.


[22] Steffensen J. F., Remarks on iteration, Skand. Aktuarietidskr., 16, (1933), 64-72.

[23] Traub J. F., Iterative Methods for the Solution of Equations, Prentice-Hall, New Jersey, 1964.

[24] Wang X., Zhang T., A family of Steffensen type methods with seventh-order convergence, Numer. Algor., 62, (2013), 429-444.

[25] Wang X., Zhang T., Qian W., Teng M., Seventh-order derivative-free iterative method for solving nonlinear systems, Numer. Algor., 70, (2015), 545-558.

Chapter 50

Newton-Kantorovich Scheme for Solving Generalized Equations

1. Introduction

We are concerned with the problem of approximating a solution $x_*$ of the generalized equation
$$0 \in F(x) + \tilde{F}(x), \tag{50.1}$$
where $F : \Omega \subseteq T_1 \to T_2$ is differentiable (in the sense of Fréchet), $\Omega$ is open and convex, $T_1, T_2$ are Banach spaces and $\tilde{F} : T_1 \rightrightarrows T_2$ is a multifunction. Robinson [16] (see also [6,8,11,12,17]) provided a convergence analysis for the Newton-Kantorovich scheme
$$0 \in F(x_n) + L_n(x_{n+1} - x_n) + \tilde{F}(x_{n+1}), \tag{50.2}$$
which generates a sequence $\{x_n\}$ approximating $x_*$ under certain conditions. Here, the map $\big(F(x_0) + F'(x_0)(\cdot - x_0) + \tilde{F}(\cdot)\big)^{-1}$ is Aubin continuous at $0$ for $x_1$, and $L_n : T_1 \to T_2$ is an approximation to $F'(x_n)$ satisfying certain conditions (see (H3) and (H4) in Section 3). In this chapter, we provide a new semi-local convergence analysis with the benefits already mentioned in the abstract when compared to earlier works [5]-[17]. This is done in Section 3, whereas in Section 2 auxiliary results are restated to make the chapter as self-contained as possible. More details can be found in [15,17] and the references therein.
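For intuition, when $\tilde{F}$ is the normal cone $N_C$ to a closed interval $C = [a, b]$ and $L_n = F'(x_n) > 0$ is scalar, each step of (50.2) can be solved exactly by a projection, $x_{n+1} = P_C(x_n - F(x_n)/L_n)$. The following minimal sketch (Python; the particular $F$ and $C$ are illustrative choices, not taken from this chapter) iterates this step:

```python
import numpy as np

# Minimal sketch of scheme (50.2) for a scalar generalized equation
# 0 in F(x) + N_C(x), where C = [a, b] and N_C is the normal cone to C.
# With L_n = F'(x_n) > 0, the linearized inclusion
#   0 in F(x_n) + L_n (x_{n+1} - x_n) + N_C(x_{n+1})
# is solved exactly by the projection x_{n+1} = P_C(x_n - F(x_n) / L_n).
a, b = 0.0, 2.0
F = lambda x: np.exp(x) - 3.0          # hypothetical smooth F
dF = lambda x: np.exp(x)               # its derivative, used as L_n
proj = lambda x: min(max(x, a), b)     # projection onto C = [a, b]

x = 0.5                                # initial guess x_0
for n in range(20):
    x_new = proj(x - F(x) / dF(x))     # one step of (50.2)
    if abs(x_new - x) < 1e-12:
        break
    x = x_new
print(x, F(x))   # x* = ln 3 is interior to C, so here F(x*) = 0
```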

2. Background

By $\bar{U}(x, \mu)$ we denote the closure of the open ball in $X$ with center $x \in X$ and radius $\mu > 0$. The set $L(X, Y)$ stands for all bounded linear operators from $X$ into $Y$. The inverse of $\tilde{F}$ is given by $\tilde{F}^{-1}(z) = \{x \in X : z \in \tilde{F}(x)\}$, with $\tilde{F}^{-1} : Y \rightrightarrows X$. Let $A$ and $B$ be subsets of $X$. Then, we define
$$d(x, B) = \inf_{y \in B} \|x - y\| \quad \text{and} \quad e(A, B) = \sup_{x \in A} d(x, B).$$
We also need the following auxiliary standard definition and theorem [6].
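As a small illustration of these two quantities (a Python sketch with finite point sets in $\mathbb{R}^2$ standing in for the subsets $A, B$; the chapter itself works in general Banach spaces):

```python
import numpy as np

# Distance d(x, B) and excess e(A, B) for finite point sets (illustrative).
def d(x, B):
    return min(np.linalg.norm(x - y) for y in B)

def e(A, B):
    return max(d(x, B) for x in A)

A = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
B = [np.array([0.0, 0.5])]
print(d(A[0], B), e(A, B))   # 0.5 and sqrt(1.25) ~ 1.118
```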


Definition 41. We say that a multifunction $\Delta : X \rightrightarrows Y$ is regular at $v_0$ for $u_0$ if $u_0 \in \Delta(v_0)$ and there exist a neighborhood $N_1$ of $v_0$, a neighborhood $N_2$ of $u_0$ and a constant $\varepsilon > 0$ such that $\mathrm{gph}\,\Delta \cap (N_1 \times N_2)$ is closed and
$$d(x, \Delta^{-1}(z)) \le \varepsilon\, d(z, \Delta(x)) \quad \text{for each } (x, z) \in N_1 \times N_2.$$

Theorem 90. Consider $\Delta : X \rightrightarrows X$ to be a multifunction and pick $z_0 \in X$. Suppose: there exist $\varepsilon_1 > 0$ and $\varepsilon_2 \in [0, 1)$ such that $\mathrm{gph}\,\Delta \cap (\bar{U}(z_0, \varepsilon_1) \times \bar{U}(z_0, \varepsilon_1))$ is closed and the following hypotheses hold:

(1) $d(z_0, \Delta(z_0)) \le \varepsilon_1(1 - \varepsilon_2)$.

(2) $e(\Delta(u_1) \cap \bar{U}(z_0, \varepsilon_1), \Delta(u_2)) \le \varepsilon_2\, \|u_1 - u_2\|$ for each $u_1, u_2 \in \bar{U}(z_0, \varepsilon_1)$.

Then, $\Delta$ has a fixed point in $\bar{U}(z_0, \varepsilon_1)$.
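When $\Delta$ is single-valued with values staying in the ball, hypothesis (2) reduces to the classical contraction condition, and the fixed point guaranteed by Theorem 90 can be computed by simple iteration. A minimal illustrative sketch (the particular map below is hypothetical, not from the chapter):

```python
# Single-valued special case of Theorem 90: for a contraction, iterating
# z <- delta(z) from z0 converges to the guaranteed fixed point.
delta = lambda z: 0.5 * (z + 2.0 / z)   # contraction near its fixed point sqrt(2)

z = 1.0                                 # z0, with d(z0, delta(z0)) = 0.5
for _ in range(30):
    z = delta(z)
print(z)                                # fixed point z* = sqrt(2) ~ 1.41421
```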

3. Convergence Analysis

Our convergence analysis is based on a scalar majorizing sequence depending on certain functions and parameters. Let $\alpha, \beta, c, \lambda, \gamma, \delta, \eta$ be nonnegative parameters and consider a continuous and nondecreasing function $v_0 : [0, \infty) \to [0, \infty)$.

(H1) Suppose that the equation $\lambda(c + v_0(r)) - 1 = 0$ has a minimal positive solution $\bar{r}$. Consider continuous and nondecreasing functions $v : [0, \bar{r}) \to [0, \infty)$, $\omega_0 : [0, \bar{r}) \to [0, \infty)$ and $\omega : [0, \bar{r}) \to [0, \infty)$.

(H2) Suppose that the equation
$$\frac{\eta}{1 - q(r)} - r = 0$$
has a minimal solution $\rho$ such that
$$\eta \le \rho < \min\left\{\bar{r}, \frac{\alpha + \eta}{2}\right\}$$
and
$$\int_0^1 \omega_0((1 - \theta)\rho)\,d\theta\,\rho + \int_0^1 \omega((1 - \theta)\rho)\,d\theta\,\rho + v(\rho) \le \beta,$$
where
$$q(r) = \frac{\lambda\left[\int_0^1 \omega((1 - \theta)\eta)\,d\theta + v(r)\right]}{1 - \lambda(c + v_0(r))}.$$

Then, we can show the following result on majorizing sequences for the method (50.2).

Lemma 24. Under hypotheses (H1) and (H2), the scalar sequence $\{r_n\}$ given by
$$r_0 = 0, \quad r_1 = \eta,$$
$$r_{n+1} = r_n + \frac{\lambda}{1 - \lambda(c + v_0(r_n))}\left[\int_0^1 \omega((1 - \theta)(r_n - r_{n-1}))\,d\theta + v(r_{n-1})\right](r_n - r_{n-1})$$
for each $n = 1, 2, \ldots$ is well defined, nondecreasing, bounded from above by $r^{**} = \frac{\eta}{1 - q(\rho)}$ and, as such, converges to its unique least upper bound $r^*$, which satisfies
$$\eta \le r^* \le r^{**}.$$
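Before turning to the proof, the recursion is easy to experiment with numerically. The sketch below (Python) generates $\{r_n\}$ under purely illustrative Lipschitz-type choices $v_0(t) = v(t) = \omega(t) = Lt$ and hypothetical parameter values; none of these choices come from the chapter:

```python
import numpy as np

# Majorizing sequence of Lemma 24 under illustrative (hypothetical) choices:
#   v0(t) = v(t) = omega(t) = L * t, with parameters lam, c, eta, L below.
lam, c, eta, L = 1.0, 0.1, 0.05, 1.5
v0 = v = omega = lambda t: L * t

def next_r(r_prev, r_curr, m=200):
    # Midpoint-rule value of the integral of omega((1 - theta)(r_curr - r_prev))
    theta = (np.arange(m) + 0.5) / m
    h = r_curr - r_prev
    integral = np.mean(omega((1.0 - theta) * h))
    return r_curr + lam * (integral + v(r_prev)) * h / (1.0 - lam * (c + v0(r_curr)))

r_prev, r_curr = 0.0, eta   # r0 = 0, r1 = eta
for n in range(1, 25):
    r_prev, r_curr = r_curr, next_r(r_prev, r_curr)
print(r_curr)               # nondecreasing and bounded: approximates r*
```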

Proof. By the definition of the sequence $\{r_n\}$, we have
$$r_2 = r_1 + \frac{\lambda}{1 - \lambda(c + v_0(r_1))}\left(\int_0^1 \omega((1 - \theta)r_1)\,d\theta + v(r_0)\right) r_1 \ge 0$$
and
$$r_2 - r_1 = \frac{\lambda}{1 - \lambda(c + v_0(\eta))}\left(\int_0^1 \omega((1 - \theta)\eta)\,d\theta + v(0)\right)\eta \le q\eta < \eta,$$
so
$$r_2 \le r_1 + q\eta = (1 + q)\eta$$